# Introduction to Classification.
Notebook version: 2.1 (Oct 19, 2018)
Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Changes: v.1.0 - First version. Extracted from a former notebook on K-NN
v.2.0 - Adapted to Python 3.0 (backcompatible with Python 2.7)
v.2.1 - Minor corrections affecting the notation and assumptions
```
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Import some libraries that will be necessary for working with data and displaying plots
import csv # To read csv files
import random
import matplotlib.pyplot as plt
import numpy as np
from scipy import spatial
from sklearn import neighbors, datasets
```
## 1. The Classification problem
In a generic classification problem, we are given an observation vector ${\bf x}\in \mathbb{R}^N$ which is known to belong to one and only one *category* or *class*, $y$, in the set ${\mathcal Y} = \{0, 1, \ldots, M-1\}$. The goal of a classifier system is to predict the value of $y$ based on ${\bf x}$.
To design the classifier, we are given a collection of labelled observations ${\mathcal D} = \{({\bf x}^{(k)}, y^{(k)})\}_{k=0}^{K-1}$ where, for each observation ${\bf x}^{(k)}$, the value of its true category, $y^{(k)}$, is known.
### 1.1 Binary Classification
We will focus on binary classification problems, where the label set is binary, ${\mathcal Y} = \{0, 1\}$. Despite its simplicity, this is the most frequent case.
Many multi-class classification problems are usually solved by decomposing them into a collection of binary problems.
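For instance, the common *one-vs-rest* decomposition builds one binary problem per class; a minimal self-contained sketch (the labels below are made up for illustration):

```python
import numpy as np

def one_vs_rest_labels(y, num_classes):
    """Build one binary label vector per class: 1 for that class, 0 for the rest."""
    return [(y == c).astype(int) for c in range(num_classes)]

y = np.array([0, 2, 1, 2, 0])            # toy multi-class labels
binary_problems = one_vs_rest_labels(y, 3)
print(binary_problems[2])                 # "class 2 vs rest" labels -> [0 1 0 1 0]
```

A binary classifier is then trained on each of these label vectors, and predictions are combined (e.g., by taking the most confident classifier).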
### 1.2. The i.i.d. assumption.
Classification algorithms, like many other machine learning algorithms, rely on two major underlying hypotheses:
- All samples in dataset ${\mathcal D}$ have been generated by the same distribution $p_{{\bf X}, Y}({\bf x}, y)$.
- For any test data, the tuple formed by the input sample and its unknown class, $({\bf x}, y)$, is an independent outcome of the *same* distribution.
These two assumptions are essential to have some guarantee that a classifier designed using ${\mathcal D}$ will perform well on new input samples. Note that, despite assuming the existence of an underlying distribution, that distribution is unknown: otherwise, we could ignore ${\mathcal D}$ and apply classic decision theory to find the optimal predictor based on $p_{{\bf X}, Y}({\bf x}, y)$.
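When the joint distribution is known, that optimal predictor is the *maximum a posteriori* (MAP) rule,

$$\hat{y}({\bf x}) = \arg\max_{y \in {\mathcal Y}} p_{Y|{\bf X}}(y|{\bf x}),$$

which minimizes the probability of classification error.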
## 2. A simple classification problem: the Iris dataset
(Iris dataset presentation is based on this <a href=http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/> Tutorial </a> by <a href=http://machinelearningmastery.com/about/> Jason Brownlee</a>)
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository </a>. Quoted from the dataset description:
> This is perhaps the best known database to be found in the pattern recognition literature. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. [...] One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
The *class* is the species, which is one of *setosa*, *versicolor* or *virginica*. Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
```
# Taken from Jason Brownlee's notebook
with open('datasets/iris.data', 'r') as csvfile:
    lines = csv.reader(csvfile)
    for row in lines:
        print(','.join(row))
```
Next, we will split the data into a training dataset, that will be used to learn the classification model, and a test dataset that we can use to evaluate its accuracy.
We first need to convert the flower measures that were loaded as strings into numbers that we can work with. Next we need to split the data set **randomly** into train and test datasets. A ratio of 67/33 for train/test will be used.
The code fragment below defines a function `loadDataset` that loads the data in a CSV with the provided filename and splits it randomly into train and test datasets using the provided split ratio.
```
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
    xTrain = []
    cTrain = []
    xTest = []
    cTest = []
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
    for i in range(len(dataset)-1):
        for y in range(4):
            dataset[i][y] = float(dataset[i][y])
        item = dataset[i]
        if random.random() < split:
            xTrain.append(item[0:-1])
            cTrain.append(item[-1])
        else:
            xTest.append(item[0:-1])
            cTest.append(item[-1])
    return xTrain, cTrain, xTest, cTest
```
We can use this function to get a data split. Note that, because of the way samples are assigned to the train or test datasets, the number of samples in each partition will differ if you run the code several times.
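Because the assignment relies on `random.random()`, the split changes on every run; seeding Python's random generator beforehand makes it reproducible. A self-contained sketch of the idea (the seed value and draw count are arbitrary):

```python
import random

random.seed(0)                                    # fix the generator state
draws_a = [random.random() < 0.67 for _ in range(10)]

random.seed(0)                                    # same seed -> same train/test assignments
draws_b = [random.random() < 0.67 for _ in range(10)]

print(draws_a == draws_b)  # -> True
```

Calling `random.seed(...)` once before `loadDataset` would have the same stabilizing effect on the split.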
```
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('./datasets/iris.data', 0.67)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', str(nTrain_all))
print('Test:', str(nTest_all))
```
To get some intuition about this four-dimensional dataset, we can plot 2-dimensional projections taking only two variables each time.
```
i = 2 # Try 0,1,2,3
j = 3 # Try 0,1,2,3 with j!=i
# Take coordinates for each class separately
xiSe = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-setosa']
xjSe = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-setosa']
xiVe = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-versicolor']
xjVe = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-versicolor']
xiVi = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-virginica']
xjVi = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-virginica']
plt.plot(xiSe, xjSe,'bx', label='Setosa')
plt.plot(xiVe, xjVe,'r.', label='Versicolor')
plt.plot(xiVi, xjVi,'g+', label='Virginica')
plt.xlabel('$x_' + str(i) + '$')
plt.ylabel('$x_' + str(j) + '$')
plt.legend(loc='best')
plt.show()
```
In the following, we will design a classifier to separate classes "Versicolor" and "Virginica" using $x_0$ and $x_1$ only. To do so, we build a training set with samples from these categories, and a binary label $y^{(k)} = 1$ for samples in class "Virginica", and $0$ for "Versicolor" data.
```
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [0, 1]
# Take training set
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
# Separate components of x into different arrays (just for the plots)
x0c0 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.show()
```
## 3. A Baseline Classifier: Maximum A Priori.
For the selected data set, we have two classes and a dataset with the following class proportions:
```
print('Class 0 (' + c0 + '): ' + str(n_tr - sum(Y_tr)) + ' samples')
print('Class 1 (' + c1 + '): ' + str(sum(Y_tr)) + ' samples')
```
The maximum a priori classifier assigns any sample ${\bf x}$ to the most frequent class in the training set. Therefore, the class prediction $y$ for any sample ${\bf x}$ is
```
y = int(2*sum(Y_tr) > n_tr)
print('y = ' + str(y) + ' (' + (c1 if y==1 else c0) + ')')
```
The error rate for this baseline classifier is:
```
# Training and test error arrays
E_tr = (Y_tr != y)
E_tst = (Y_tst != y)
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('Pe(train):', str(pe_tr))
print('Pe(test):', str(pe_tst))
```
The error rate of the baseline classifier is a simple benchmark for classification. Since the maximum a priori decision is independent of the observation, ${\bf x}$, any classifier based on ${\bf x}$ should have a better (or, at least, not worse) performance than the baseline classifier.
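On the training set, the baseline's error rate equals one minus the empirical frequency of the majority class. A quick self-contained check with made-up labels:

```python
import numpy as np

Y = np.array([0, 1, 1, 0, 1, 1, 1, 0])   # toy binary labels (5 ones, 3 zeros)
y_map = int(2 * Y.sum() > len(Y))        # majority-class decision, as above
pe = np.mean(Y != y_map)                 # its training error rate
print(y_map, pe)                         # -> 1 0.375  (= 1 - 5/8)
```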
## 4. Parametric vs non-parametric classification.
Most classification algorithms fall into one of two categories:
1. Parametric classifiers: to classify any input sample ${\bf x}$, the classifier applies some function $f_{\bf w}({\bf x})$ which depends on some parameters ${\bf w}$. The training dataset is used to estimate ${\bf w}$. Once the parameters have been estimated, the training data is no longer needed to classify new inputs.
2. Non-parametric classifiers: the classifier decision for any input ${\bf x}$ depends on the training data in a direct manner. The training data must be preserved to classify new data.
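The contrast can be illustrated with scikit-learn on toy synthetic data (a sketch, not part of the Iris pipeline above): logistic regression compresses the training set into a small weight vector, while $k$-NN keeps the training samples around to make predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a linearly separable toy problem

# Parametric: after fitting, the model is summarized by a few parameters
param_clf = LogisticRegression().fit(X, y)
print(param_clf.coef_.shape)              # -> (1, 2): one weight per input dimension

# Non-parametric: predictions are computed directly from the stored training data
nonparam_clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(nonparam_clf.predict(X[:3]))
```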
***
1. read .las pointcloud file
2. convert the pointcloud to the reference local coordinates
3. read bounding boxes
4. enlarge bounding boxes
5. crop points within enlarged bounding boxes
6. write cropped pointcloud objects
```
"""# google colab installation
!pip install open3d
!pip install laspy
!pip install pptk
"""
"""# for google colab
from google.colab import drive
import laspy
import numpy as np
import os
import open3d as o3d
drive.mount("/content/drive")
os.chdir('/content/drive/MyDrive/DREAMS - Zhiang/Projects/3D_rock_detection/data')
print(os.listdir())
"""
import laspy
import numpy as np
import os
import open3d as o3d
os.chdir('/content/drive/MyDrive/DREAMS - Zhiang/Projects/3D_rock_detection/data')
print(os.listdir())
```
# read pointcloud .las
```
pc = laspy.read('granite_dells_wgs_utm.las')
# WGS 84 & UTM 12N
print(pc.x.scaled_array().min())
print(pc.x.scaled_array().max())
print(pc.y.scaled_array().min())
print(pc.y.scaled_array().max())
print(pc.z.scaled_array().min())
print(pc.z.scaled_array().max())
# color value has type of uint16
print(pc.red.max())
print(pc.red.min())
print(pc.green.max())
print(pc.green.min())
print(pc.blue.max())
print(pc.blue.min())
```
# read bounding box
```
bboxes = np.load('pbr_bboxes_wgs_utm.npy')
x1,y1,x2,y2 = bboxes[0]
print(x1,x2,y1,y2)
def box_filter(las, x1, y1, x2, y2, padding=0.2):
    # Enlarge the bounding box by the padding on every side
    x1 = x1 - padding
    x2 = x2 + padding
    y1 = y1 - padding
    y2 = y2 + padding
    xgood = (las.x >= x1) & (las.x < x2)
    ygood = (las.y >= y1) & (las.y < y2)
    good = xgood & ygood
    found = (las.x.scaled_array()[good], las.y.scaled_array()[good],
             las.z.scaled_array()[good], las.red[good], las.green[good], las.blue[good])
    return found

def write_las_file(f, pt):
    header = laspy.LasHeader(point_format=2, version="1.2")
    las = laspy.LasData(header)
    las.x = pt[0]
    las.y = pt[1]
    las.z = pt[2]
    las.red = pt[3]
    las.green = pt[4]
    las.blue = pt[5]
    las.write('box_pbr/' + f)

for id, bbox in enumerate(bboxes):
    x1, y1, x2, y2 = bbox
    pbr_pc = box_filter(pc, x1, y1, x2, y2)
    write_las_file('pbr{i}.las'.format(i=id), pbr_pc)
    print(id)
```
## Convert .las to .pcd
```
las_files = ['box_pbr/' + f for f in os.listdir('box_pbr/') if f.endswith('.las')]
for las_file in las_files[:5]:
    pc = laspy.read(las_file)
    x = pc.x.scaled_array()
    y = pc.y.scaled_array()
    z = pc.z.scaled_array()
    # Open3D expects colors as floats in [0, 1]; LAS stores them as uint16
    r = pc.red / 65535.
    g = pc.green / 65535.
    b = pc.blue / 65535.
    rgb = np.vstack((r, g, b)).transpose()
    xyz = np.vstack((x, y, z)).transpose()
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.colors = o3d.utility.Vector3dVector(rgb)
    # Write one .pcd per input file instead of overwriting a single output
    o3d.io.write_point_cloud(las_file.replace('.las', '.pcd'), pcd)
```
***
Per a recent request somebody posted on Twitter, I thought it'd be fun to write a quick scraper for the [biorxiv](http://biorxiv.org/), an excellent new tool for posting pre-prints of articles before they're locked down with a publisher embargo.
A big benefit of open science is the ability to use modern technologies (like web scraping) to make new use of data that would originally be unavailable to the public. One simple example of this is information and metadata about published articles. While we're not going to dive too deeply here, maybe this will serve as inspiration for somebody else interested in scraping the web.
First we'll do a few imports. We'll rely heavily on the `requests` and `BeautifulSoup` packages, which together make an excellent one-two punch for doing web scraping. We could use something like `scrapy`, but that seems a little overkill for this small project.
```
import requests
import pandas as pd
import seaborn as sns
import numpy as np
from bs4 import BeautifulSoup as bs
import matplotlib.pyplot as plt
from tqdm import tqdm
%matplotlib inline
```
From a quick look at the biorxiv we can see that its search API works in a pretty simple manner. I tried typing in a simple search query and got something like this:
`http://biorxiv.org/search/neuroscience%20numresults%3A100%20sort%3Arelevance-rank`
Here we can see that the term you search for comes just after `/search/`, and parameters for the search, like `numresults`. The keyword/value pairs are separated by a `%3A` character, which corresponds to `:` (see [this site](http://www.degraeve.com/reference/urlencoding.php) for a reference of url encoding characters), and these key/value pairs are separated by `%20`, which corresponds to a space.
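Rather than hand-encoding `%3A` and `%20`, the standard library can build that query string for us (a sketch that reproduces the URL pattern above):

```python
from urllib.parse import quote

term = "neuroscience"
params = "numresults:100 sort:relevance-rank"
url = "http://biorxiv.org/search/" + quote(term + " " + params)
print(url)
# -> http://biorxiv.org/search/neuroscience%20numresults%3A100%20sort%3Arelevance-rank
```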
So, let's do a simple scrape and see what the results look like. We'll query the biorxiv API to see what kind of structure the result will have.
```
n_results = 20
url = "http://biorxiv.org/search/neuroscience%20numresults%3A{}".format(
n_results)
resp = requests.post(url)
# I'm not going to print this because it messes up the HTML rendering
# But you get the idea...probably better to look in Chrome anyway ;)
# text = bs(resp.text)
```
If we search through the result, you may notice that search results are organized into a list (denoted by `li` for each item). Inside each item is information about the article's title (in a `div` of class `highwire-cite-title`) and author information (in a `div` of class `highwire-cite-authors`).
Let's use this information to ask three questions:
1. How has the rate of publications for a term changed over the years?
1. Who's been publishing under that term?
1. What kinds of things are people publishing?
For each, we'll simply use the phrase "neuroscience", although you could use whatever you like.
To set up this query, we'll need to use another part of the biorxiv API, the `limit_from` parameter. This lets us constrain the search to a specific month of the year. That way we can see the monthly submissions going back several years.
We'll loop through years / months, and pull out the author and title information. We'll do this with two dataframes, one for authors, one for articles.
```
# Define the URL and start/stop years
stt_year = 2012
stp_year = 2016
search_term = "neuroscience"
url_base = "http://biorxiv.org/search/{}".format(search_term)
url_params = "%20limit_from%3A{0}-{1}-01%20limit_to%3A{0}-{2}-01%20numresults%3A100%20format_result%3Astandard"
url = url_base + url_params

# Now we'll do the scraping...
all_articles = []
all_authors = []
for yr in tqdm(range(stt_year, stp_year + 1)):
    for mn in range(1, 12):
        # Populate the fields with our current query and post it
        this_url = url.format(yr, mn, mn + 1)
        resp = requests.post(this_url)
        html = bs(resp.text)

        # Collect the articles in the result in a list
        articles = html.find_all('li', attrs={'class': 'search-result'})
        for article in articles:
            # Pull the title, if it's empty then skip it
            title = article.find('span', attrs={'class': 'highwire-cite-title'})
            if title is None:
                continue
            title = title.text.strip()

            # Collect year / month / title information
            all_articles.append([yr, mn, title])

            # Now collect author information
            authors = article.find_all('span', attrs={'class': 'highwire-citation-author'})
            for author in authors:
                all_authors.append((author.text, title))

# We'll collect these into DataFrames for subsequent use
authors = pd.DataFrame(all_authors, columns=['name', 'title'])
articles = pd.DataFrame(all_articles, columns=['year', 'month', 'title'])
```
To make things easier to cross-reference, we'll add an `id` column that's unique for each title. This way we can more simply join the dataframes to do cool things:
```
# Define a dictionary of title: ID mappings
unique_ids = {title: ii for ii, title in enumerate(articles['title'].unique())}
articles['id'] = [unique_ids[title] for title in articles['title']]
authors['id'] = [unique_ids[title] for title in authors['title']]
```
Now, we can easily join these two dataframes together if we so wish:
```
pd.merge(articles, authors, on=['id', 'title']).head()
```
# Question 1: How has the published articles rate changed?
This one is pretty easy to answer. Since we have both year and month data for each article, we can plot the number of articles in each period of time. To do this, let's first turn these numbers into an actual "datetime" object. This lets us do some clever plotting magic with pandas.
```
# Add a "date" column
dates = [pd.Timestamp(year=yr, month=mn, day=1)
         for yr, mn in articles[['year', 'month']].values]
articles['date'] = dates
# Now drop the year / month columns because they're redundant
articles = articles.drop(['year', 'month'], axis=1)
```
Now, we can simply group by month, sum the number of results, and plot this over time:
```
monthly = articles.groupby('date').count()['title'].to_frame()
ax = monthly['title'].plot()
ax.set_title('Articles published per month for term\n{}'.format(search_term))
```
We can also plot the cumulative number of papers published:
```
cumulative = np.cumsum(monthly.values)
monthly['cumulative'] = cumulative
# Now plot cumulative totals
ax = monthly['cumulative'].plot()
ax.set_title('Cumulative number of papers matching term \n{}'.format(search_term))
ax.set_ylabel('Number of Papers')
```
# Question 2: Which author uses pre-prints the most?
For this one, we can use the "authors" dataframe. We'll group by author name, and count the number of publications per author:
```
# Group by author and count the number of items
author_counts = authors.groupby('name').count()['title'].to_frame('count')
# We'll take the top 30 authors
author_counts = author_counts.sort_values('count', ascending=False)
author_counts = author_counts.iloc[:30].reset_index()
```
We'll use some `pandas` magical gugu to get this one done. Who is the greatest pre-print neuroscientist of them all?
```
# So we can plot w/ pretty colors
cmap = plt.cm.viridis
colors = cmap(author_counts['count'].values / float(author_counts['count'].max()))
# Make the plot
fig, ax = plt.subplots(figsize=(10, 5))
ax = author_counts.plot.bar('name', 'count', color=colors, ax=ax)
_ = plt.setp(ax.get_xticklabels(), rotation=45, ha='right')
```
Rather than saying congratulations to #1 etc here, I'll just take this space to say that all of these researchers are awesome for helping push scientific publishing technologies into the 21st century ;)
# Question 3: What topics are covered in the titles?
For this one we'll use a super floofy answer, but maybe it'll give us something pretty. We'll use the wordcloud module, which implements `fit` and `predict` methods similar to scikit-learn. We can train it on the words in the titles, and then create a pretty word cloud using these words.
To do this, we'll use the `wordcloud` module along with `sklearn`'s stop words (which are also useful for text analysis, incidentally)
```
import wordcloud as wc
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
# We'll collect the titles and turn them into one giant string
titles = articles['title'].values
titles = ' '.join(titles)
# Then define stop words to use...we'll include some "typical" brain words
our_stop_words = list(ENGLISH_STOP_WORDS) + ['brain', 'neural']
```
Now, generating a word cloud is as easy as a call to `generate_from_text`. Then we can output it in whatever format we like.
```
# This function takes a bunch of dummy arguments and returns random colors
def color_func(word=None, font_size=None, position=None,
               orientation=None, font_path=None, random_state=None):
    rand = np.clip(np.random.rand(), .2, None)
    cols = np.array(plt.cm.rainbow(rand)[:3])
    cols = cols * 255
    return 'rgb({:.0f}, {:.0f}, {:.0f})'.format(*cols)

# Fit the cloud
cloud = wc.WordCloud(stopwords=our_stop_words,
                     color_func=color_func)
cloud.generate_from_text(titles)

# Now make a pretty picture
im = cloud.to_array()
fig, ax = plt.subplots()
ax.imshow(im, cmap=plt.cm.viridis)
ax.set_axis_off()
```
Looks like those cognitive neuroscience folks are leading the charge towards pre-print servers. Hopefully in the coming years we'll see increased adoption from the systems and cellular fields as well.
# Wrapup
Here we played with just a few questions that you can ask with some simple web scraping and the useful tools in python. There's a lot more that you could do with it, but I'll leave that up to readers to figure out for themselves :)
***
# Chebychev polynomial and spline approximation of various functions
**Randall Romero Aguilar, PhD**
This demo is based on the original Matlab demo accompanying the <a href="https://mitpress.mit.edu/books/applied-computational-economics-and-finance">Computational Economics and Finance</a> 2001 textbook by Mario Miranda and Paul Fackler.
Original (Matlab) CompEcon file: **demapp05.m**
Running this file requires the Python version of CompEcon. This can be installed with pip by running
!pip install compecon --upgrade
<i>Last updated: 2021-Oct-01</i>
<hr>
## About
Demonstrates Chebychev polynomial, cubic spline, and linear spline approximation for the following functions
\begin{align}
y &= 1 + x + 2x^2 - 3x^3 \\
y &= \exp(-x) \\
y &= \frac{1}{1+25x^2} \\
y &= \sqrt{|x|}
\end{align}
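Chebyshev approximation owes much of its accuracy to interpolating at the Chebyshev nodes of the interval rather than at uniform points. A small numpy sketch of those nodes on $[a, b]$ (the standard roots-of-$T_n$ formula, independent of CompEcon's internals):

```python
import numpy as np

def chebyshev_nodes(n, a, b):
    """Roots of the degree-n Chebyshev polynomial, mapped from [-1, 1] to [a, b], ascending."""
    k = np.arange(n)
    x = -np.cos((k + 0.5) * np.pi / n)    # roots of T_n on [-1, 1]
    return (a + b) / 2 + (b - a) / 2 * x

nodes = chebyshev_nodes(7, -1, 1)
print(np.round(nodes, 4))
```

Note how the nodes cluster near the endpoints, which is what tames the oscillations seen with uniform-grid polynomial interpolation.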
## Initial tasks
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from compecon import BasisChebyshev, BasisSpline, nodeunif
```
### Functions to be approximated
```
funcs = [lambda x: 1 + x + 2 * x ** 2 - 3 * x ** 3,
         lambda x: np.exp(-x),
         lambda x: 1 / (1 + 25 * x ** 2),
         lambda x: np.sqrt(np.abs(x))]

fst = ['$y = 1 + x + 2x^2 - 3x^3$', '$y = \exp(-x)$',
       '$y = 1/(1+25x^2)$', '$y = \sqrt{|x|}$']
```
Set degree of approximation and endpoints of approximation interval
```
n = 7 # degree of approximation
a = -1 # left endpoint
b = 1 # right endpoint
```
Construct uniform grid for error plotting
```
x = np.linspace(a, b, 2001)
def subfig(f, title):
    # Construct interpolants
    C = BasisChebyshev(n, a, b, f=f)
    S = BasisSpline(n, a, b, f=f)
    L = BasisSpline(n, a, b, k=1, f=f)

    data = pd.DataFrame({
        'actual': f(x),
        'Chebyshev': C(x),
        'Cubic Spline': S(x),
        'Linear Spline': L(x)},
        index=x)

    fig1, axs = plt.subplots(2, 2, figsize=[12, 6], sharex=True, sharey=True)
    fig1.suptitle(title)
    data.plot(ax=axs, subplots=True)

    errors = data[['Chebyshev', 'Cubic Spline']].subtract(data['actual'], axis=0)
    fig2, ax = plt.subplots(figsize=[12, 3])
    fig2.suptitle("Approximation Error")
    errors.plot(ax=ax)
```
## Polynomial
$y = 1 + x + 2x^2 - 3x^3$
```
subfig(lambda x: 1 + x + 2*x**2 - 3*x**3, '$y = 1 + x + 2x^2 - 3x^3$')
```
## Exponential
$y = \exp(-x)$
```
subfig(lambda x: np.exp(-x),'$y = \exp(-x)$')
```
## Rational
$y = 1/(1+25x^2)$
```
subfig(lambda x: 1 / ( 1 + 25 * x ** 2),'$y = 1/(1+25x^2)$')
```
## Kinky
$y = \sqrt{|x|}$
```
subfig(lambda x: np.sqrt(np.abs(x)), '$y = \sqrt{|x|}$')
```
***
```
from utils import *
import numpy as np
import sklearn.datasets
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn import metrics
import time
trainset = sklearn.datasets.load_files(container_path = 'data', encoding = 'UTF-8')
trainset.data, trainset.target = separate_dataset(trainset,1.0)
print (trainset.target_names)
print (len(trainset.data))
print (len(trainset.target))
ONEHOT = np.zeros((len(trainset.data),len(trainset.target_names)))
ONEHOT[np.arange(len(trainset.data)),trainset.target] = 1.0
train_X, test_X, train_Y, test_Y, train_onehot, test_onehot = train_test_split(trainset.data,
trainset.target,
ONEHOT, test_size = 0.2)
concat = ' '.join(trainset.data).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
GO = dictionary['GO']
PAD = dictionary['PAD']
EOS = dictionary['EOS']
UNK = dictionary['UNK']
class Model:
    def __init__(self, size_layer, num_layers, embedded_size,
                 dict_size, dimension_output, learning_rate):

        def cells(reuse=False):
            return tf.nn.rnn_cell.LSTMCell(size_layer // 2,
                                           initializer=tf.orthogonal_initializer(),
                                           reuse=reuse)

        def bahdanau(embedded):
            attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
                num_units=size_layer // 2, memory=embedded)
            return tf.contrib.seq2seq.AttentionWrapper(
                cell=cells(),
                attention_mechanism=attention_mechanism,
                attention_layer_size=size_layer // 2)

        def luong(embedded):
            attention_mechanism = tf.contrib.seq2seq.LuongAttention(
                num_units=size_layer // 2, memory=embedded)
            return tf.contrib.seq2seq.AttentionWrapper(
                cell=cells(),
                attention_mechanism=attention_mechanism,
                attention_layer_size=size_layer // 2)

        self.X = tf.placeholder(tf.int32, [None, None])
        self.Y = tf.placeholder(tf.float32, [None, dimension_output])
        encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
        encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)

        for n in range(num_layers):
            (out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
                cell_fw=bahdanau(encoder_embedded),
                cell_bw=luong(encoder_embedded),
                inputs=encoder_embedded,
                dtype=tf.float32,
                scope='bidirectional_rnn_%d' % (n))
            encoder_embedded = tf.concat((out_fw, out_bw), 2)

        W = tf.get_variable('w', shape=(size_layer, dimension_output),
                            initializer=tf.orthogonal_initializer())
        b = tf.get_variable('b', shape=(dimension_output),
                            initializer=tf.zeros_initializer())
        self.logits = tf.matmul(encoder_embedded[:, -1], W) + b
        self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
            logits=self.logits, labels=self.Y))
        self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.cost)
        correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 2
embedded_size = 128
dimension_output = len(trainset.target_names)
learning_rate = 1e-3
maxlen = 50
batch_size = 128
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,vocabulary_size+4,dimension_output,learning_rate)
sess.run(tf.global_variables_initializer())
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 5, 0, 0, 0
while True:
    lasttime = time.time()
    if CURRENT_CHECKPOINT == EARLY_STOPPING:
        print('break epoch:%d\n' % (EPOCH))
        break
    train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
    for i in range(0, (len(train_X) // batch_size) * batch_size, batch_size):
        batch_x = str_idx(train_X[i:i + batch_size], dictionary, maxlen)
        acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
                                feed_dict={model.X: batch_x,
                                           model.Y: train_onehot[i:i + batch_size]})
        train_loss += loss
        train_acc += acc
    for i in range(0, (len(test_X) // batch_size) * batch_size, batch_size):
        batch_x = str_idx(test_X[i:i + batch_size], dictionary, maxlen)
        acc, loss = sess.run([model.accuracy, model.cost],
                             feed_dict={model.X: batch_x,
                                        model.Y: test_onehot[i:i + batch_size]})
        test_loss += loss
        test_acc += acc
    train_loss /= (len(train_X) // batch_size)
    train_acc /= (len(train_X) // batch_size)
    test_loss /= (len(test_X) // batch_size)
    test_acc /= (len(test_X) // batch_size)
    if test_acc > CURRENT_ACC:
        print('epoch: %d, pass acc: %f, current acc: %f' % (EPOCH, CURRENT_ACC, test_acc))
        CURRENT_ACC = test_acc
        CURRENT_CHECKPOINT = 0
    else:
        CURRENT_CHECKPOINT += 1
    print('time taken:', time.time() - lasttime)
    print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n' % (
        EPOCH, train_loss, train_acc, test_loss, test_acc))
    EPOCH += 1
logits = sess.run(model.logits, feed_dict={model.X:str_idx(test_X,dictionary,maxlen)})
print(metrics.classification_report(test_Y, np.argmax(logits,1), target_names = trainset.target_names))
```
***
```
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split
from sklearn import metrics
import pandas as pd
from sklearn.linear_model import LinearRegression
from fbprophet import Prophet
import numpy as np
import matplotlib.pyplot as plt
## load the data and get all information for one company ##
data_set = pd.read_csv('../csv_data/reuters_news.csv', low_memory=False)
data_set = data_set.loc[data_set['ticker'] == 'AAPL']
## Data preparation ##
# delete missing prices data
data_set = data_set[data_set['prices'].notna()]
data_set = data_set[data_set['polarity'].notna()]
data_set = data_set[data_set['subjectivity'].notna()]
# delete unneeded columns: 'title' & 'description'
data_set.drop(['ticker', 'title', 'description'], axis=1, inplace=True)
# convert "Nan" values to 0 (when there is no news in a specific day,
# the polarity and subjectivity would be 0 or nutural)
data_set['polarity'] = data_set['polarity'].fillna(0)
data_set['subjectivity'] = data_set['subjectivity'].fillna(0)
# combine rows with the same date (some days have multiple news items, so combine them and take the average)
data_set = data_set.groupby(['date'],as_index=False).agg({'polarity': 'mean', 'subjectivity': 'mean', 'prices': 'mean',})
# print
data_set
########### facebook PROPHET model ###############
## Docs: https://facebook.github.io/prophet/docs/quick_start.html
prophet = data_set
prophet.head()
#train model
m = Prophet(interval_width=0.95, daily_seasonality=True)
m.add_regressor('polarity')
m.add_regressor('subjectivity')
prophet = prophet.rename(columns={'date':'ds', 'prices': 'y'})
model = m.fit(prophet)
#forcast
future = m.make_future_dataframe(periods=100,freq='D')
future['polarity'] = prophet['polarity'] # not sure if this is right
future['subjectivity'] = prophet['subjectivity'] # not sure if this is right
future = future.dropna()
forecast = m.predict(future)
# forecast.head()
# plot
plot1 = m.plot(forecast)
plt2 = m.plot_components(forecast)
# Vector Auto Regression VAR
# example: https://www.machinelearningplus.com/time-series/vector-autoregression-examples-python/
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller
from statsmodels.tools.eval_measures import rmse, aic
# Plot
fig, axes = plt.subplots(nrows=2, ncols=2, dpi=120, figsize=(10, 6))
for i, ax in enumerate(axes.flatten()):
    data = data_set[data_set.columns[i]]
    ax.plot(data, color='red', linewidth=1)
    # Decorations
    ax.set_title(data_set.columns[i])
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.spines["top"].set_alpha(0)
    ax.tick_params(labelsize=6)
plt.tight_layout()
# split the data
nobs = int(data_set.shape[0] * 0.2)
df_train, df_test = data_set[0:-nobs], data_set[-nobs:]
# Check size
print(df_train.shape)
print(df_test.shape)
# NOTE: needs to be completed
## LSTM model ##
# example: https://www.relataly.com/stock-market-prediction-using-multivariate-time-series-in-python/1815/
# another example: https://analyticsindiamag.com/how-to-do-multivariate-time-series-forecasting-using-lstm/
```
| github_jupyter |
```
Copyright 2021 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
# Logistic Regression on MNIST8M Dataset
## Background
The MNIST database of handwritten digits has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
## Source
We use an inflated version of the dataset (`mnist8m`) from the paper:
Gaëlle Loosli, Stéphane Canu and Léon Bottou: *Training Invariant Support Vector Machines using Selective Sampling*, in [Large Scale Kernel Machines](https://leon.bottou.org/papers/lskm-2007), Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston editors, 301–320, MIT Press, Cambridge, MA., 2007.
We download the pre-processed dataset from the [LIBSVM dataset repository](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/).
## Goal
The goal of this notebook is to illustrate how Snap ML can accelerate training of a logistic regression model on this dataset.
## Code
```
cd ../../
CACHE_DIR='cache-dir'
import numpy as np
import time
from datasets import Mnist8m
from sklearn.linear_model import LogisticRegression
from snapml import LogisticRegression as SnapLogisticRegression
from sklearn.metrics import accuracy_score as score
dataset = Mnist8m(cache_dir=CACHE_DIR)
X_train, X_test, y_train, y_test = dataset.get_train_test_split()
print("Number of examples: %d" % (X_train.shape[0]))
print("Number of features: %d" % (X_train.shape[1]))
print("Number of classes: %d" % (len(np.unique(y_train))))
model = LogisticRegression(fit_intercept=False, n_jobs=4, multi_class='ovr')
t0 = time.time()
model.fit(X_train, y_train)
t_fit_sklearn = time.time()-t0
score_sklearn = score(y_test, model.predict(X_test))
print("Training time (sklearn): %6.2f seconds" % (t_fit_sklearn))
print("Accuracy score (sklearn): %.4f" % (score_sklearn))
model = SnapLogisticRegression(fit_intercept=False, n_jobs=4)
t0 = time.time()
model.fit(X_train, y_train)
t_fit_snapml = time.time()-t0
score_snapml = score(y_test, model.predict(X_test))
print("Training time (snapml): %6.2f seconds" % (t_fit_snapml))
print("Accuracy score (snapml): %.4f" % (score_snapml))
speed_up = t_fit_sklearn/t_fit_snapml
score_diff = (score_snapml-score_sklearn)/score_sklearn
print("Speed-up: %.1f x" % (speed_up))
print("Relative diff. in score: %.4f" % (score_diff))
```
## Disclaimer
Performance results always depend on the hardware and software environment.
Information regarding the environment that was used to run this notebook is provided below:
```
import utils
environment = utils.get_environment()
for k,v in environment.items():
print("%15s: %s" % (k, v))
```
## Record Statistics
Finally, we record the environment and performance statistics for analysis outside of this standalone notebook.
```
import scrapbook as sb
sb.glue("result", {
'dataset': dataset.name,
'n_examples_train': X_train.shape[0],
'n_examples_test': X_test.shape[0],
'n_features': X_train.shape[1],
'n_classes': len(np.unique(y_train)),
'model': type(model).__name__,
'score': score.__name__,
't_fit_sklearn': t_fit_sklearn,
'score_sklearn': score_sklearn,
't_fit_snapml': t_fit_snapml,
'score_snapml': score_snapml,
'score_diff': score_diff,
'speed_up': speed_up,
**environment,
})
```
| github_jupyter |
# Building and Training a Neural Network Libraries Container on SageMaker
#### Contents of this notebook
- Building a training container that uses [Neural Network Libraries](https://github.com/sony/nnabla)
- Training on SageMaker with the built container via BYOC (Bring Your Own Container)
#### Details of the techniques used in this notebook
- Docker
- MNIST (running `mnist-collection/classification.py` from the [nnabla-examples](https://github.com/sony/nnabla-examples/blob/master/) repository)
## Prerequisites
This notebook uses Amazon Elastic Container Registry (ECR), so attach the following IAM policy to the IAM role that runs SageMaker beforehand:
```
AmazonEC2ContainerRegistryFullAccess
```
## Overview
SageMaker solves these problems by leveraging Docker containers to run model training and API-based inference in a scalable way. Before actually using SageMaker, we therefore first build a Docker image for training and inference. We then use that image to run training and inference through the SageMaker API.
This notebook describes the Docker image used for training and inference with nnabla. Part 1 explains the architecture of the Docker image, and Part 2 actually builds it.
## Structure of a Docker image for SageMaker
### How SageMaker uses Docker
In SageMaker, the same Docker image can be used for both training and inference. When SageMaker launches a container for each, it runs one of the following commands:
* Training: `docker run $IMAGE_ID train`
* Inference: `docker run $IMAGE_ID serve`
The Docker image must therefore provide commands named `train` and `serve`. In this example, the scripts used to build the Docker image are laid out in the `container` directory as shown below; you can see the `train` script under `container/mnist-collection` (since this example only trains and does not serve inference, no `serve` script is provided). The script happens to be written in Bash, but in practice any language may be used.
.
└── container
├── Dockerfile
├── build_and_push.sh
└── mnist-collection
├── args.py
├── classification.py
├── mnist_data.py
├── requirements.txt
└── train
* __`Dockerfile`__ describes how the Docker image is built
* __`build_and_push.sh`__ builds the container image from the Dockerfile and pushes it to ECR
* __`mnist-collection`__ is the directory holding the files placed inside the container
```
# Inspect the directory contents
!ls -lR container
# Inspect the train script
!cat container/mnist-collection/train
```
### Running the container for training
When SageMaker runs a training job, the `train` script is executed like a regular Python program. As part of the SageMaker contract, various files are placed under the `/opt/ml` directory inside the container:
/opt/ml
├── input
│ ├── config
│ │ ├── hyperparameters.json
│ │ └── resourceConfig.json
│ └── data
│ └── <channel_name>
│ └── <input data>
├── model
│ └── <model files>
└── output
└── failure
#### Input
* `/opt/ml/input/config` holds information about how to run the training. `hyperparameters.json` stores the hyperparameter names and values in JSON format. The values are always read as `string`s, so they must then be cast to the appropriate types. `resourceConfig.json` is a JSON file describing the network layout for multi-node distributed training.
* `/opt/ml/input/data/<channel_name>/` is used when the data input mode is FILE. Channels can be specified as parameters passed to the `CreateTrainingJob` call that launches the job. For each channel, the input data loaded from the S3 location (also given as a parameter) is placed here. In this training run the input data is downloaded from the web inside the code, so nothing is placed here.
* `/opt/ml/input/data/<channel_name>_<epoch_number>` is used when the data input mode is PIPE. Epochs start at 0 and increase in order; the directory name is determined by the channel and the epoch.
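Since hyperparameter values always arrive as strings, a `train` script typically casts them right after loading the JSON. A minimal sketch (the parameter names and defaults here are hypothetical, not part of this example's training script):

```python
import json
import os

# Path used by SageMaker inside the training container
param_path = '/opt/ml/input/config/hyperparameters.json'

def load_hyperparameters(path=param_path):
    # Fall back to defaults when running outside the container (assumption)
    if not os.path.exists(path):
        return {'epochs': 10, 'learning_rate': 0.001}
    with open(path) as f:
        raw = json.load(f)  # all values arrive as strings
    # Cast each value to the type the training code expects
    return {
        'epochs': int(raw.get('epochs', '10')),
        'learning_rate': float(raw.get('learning_rate', '0.001')),
    }

params = load_hyperparameters()
print(params)
```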
#### Output
* `/opt/ml/model/` is where the model produced by the algorithm is saved. The model format is up to you: it can be a single file or a directory hierarchy. SageMaker packages all data in this directory into a compressed tar archive, which is placed at the S3 location included in the `DescribeTrainingJob` API response.
* `/opt/ml/output` receives a `failure` file describing the cause when a job fails; its contents match the `FailureReason` field in the `DescribeTrainingJob` API response. Nothing is written here when the job succeeds.
### Running the container for inference
At inference time the container runs hosted as an API server, so it can accept inference requests over HTTP. Hosting an API server on SageMaker requires the following two endpoints:
* `/ping` receives `GET` health checks from the infrastructure and should return response code 200
* `/invocations` receives `POST` inference requests from clients. The request and response formats are up to you; `ContentType` and `Accept` headers set by the client are passed through to the endpoint
In the inference container, SageMaker places the model files in the same directory that was used during training:
/opt/ml
└── model
└── <model files>
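For illustration only, the two required endpoints can be sketched with Python's standard library (a real serving image would load the model from `/opt/ml/model` and use a production server; the response payload below is a placeholder):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of the two endpoints SageMaker expects from a serving container.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/ping':          # health check from the platform
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        if self.path == '/invocations':   # inference request
            length = int(self.headers.get('Content-Length', 0))
            payload = self.rfile.read(length)  # request body; format is up to you
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(b'{"prediction": null}')  # placeholder output
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):         # silence per-request logging
        pass

# To run the server: HTTPServer(('0.0.0.0', 8080), Handler).serve_forever()
```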
## Building the Docker image
### Dockerfile
To realize the mechanism described above, the Dockerfile defines the configuration of the container image. It is a lightly modified version of the [nnabla-examples Dockerfile](https://github.com/sony/nnabla-examples/blob/master/Dockerfile).
```
!cat container/Dockerfile
```
### Build and register the container image
The shell commands below build the container image with `docker build` and push it to ECR (Elastic Container Registry). They are also collected as a shell script in `container/build_and_push.sh`; running `build_and_push.sh nnabla-example-mnist` pushes the `nnabla-example-mnist` image to ECR.
The ECR repository used is the one in the same region as the SageMaker notebook instance; if it does not exist, it is created automatically.
Before running the script below, **<span style="color: red;">change the `XX` in `account_number=XX` on line 5 to the number you were given</span>**.
```
%%sh
# Name of the algorithm
# Set your account number
account_number=XX
algorithm_name=nnabla-example-mnist-$account_number
cd container
chmod +x mnist-collection/train
account=$(aws sts get-caller-identity --query Account --output text)
# Check the current configuration and set the region (default to us-west-2 if not defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# Create the repository in ECR if it does not exist yet
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
docker build -t ${algorithm_name} . --build-arg CUDA_VER=9.2 --build-arg CUDNN_VER=7 --build-arg PYTHON_VER=3.6
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
```
Once the commands above have completed successfully, open the ECR console and confirm that the `nnabla-example-mnist-XX` repository you created exists.
## Setting up the SageMaker session
```
import boto3
import re
import os
import numpy as np
import pandas as pd
from sagemaker import get_execution_role
# Return the role specified by the AWS credentials
role = get_execution_role()
# Create a SageMaker session
import sagemaker as sage
from time import gmtime, strftime
sess = sage.Session()
```
## Run model training
To train on SageMaker, create an Estimator object with the SageMaker SDK. It carries the settings below needed for training; then call its fit() method to run the job. Training takes around 5 minutes.
- container name: the ECR container image built above
- role: the IAM role that runs the job
- instance count: number of instances used for the training job
- instance type: instance type used for the training job
- output path: S3 location where the training artifacts are placed
- session: the SageMaker session created just above
- Also, before running the code below, **<span style="color: red;">change the `XX` in `nnabla-example-mnist-XX` on line 3 to the number you were given</span>**
```
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/nnabla-example-mnist-XX'.format(account, region)
classifier = sage.estimator.Estimator(
image,
role,
1,
'ml.m4.xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess)
classifier.fit()
```
Once the job has run successfully, go back to the console and open the training job page. Open the S3 path where the trained model was written and confirm that the model was actually created. This completes building and running a custom container.
| github_jupyter |
## <i>Import libraries</i>
```
import pandas as pd
import mysql.connector as mysql
```
## Connect to MySQL
```
koneksi = mysql.connect(host = "localhost",
database = "kampus",
user = "root",
password = "Rakhid@16")
```
## Fetch the `jurusan` (major) table
```
cursor = koneksi.cursor()
# what is executed here is the SQL query
cursor.execute("SELECT * FROM jurusan")
tabel_jurusan = cursor.fetchall()
tabel_jurusan
```
## Convert to a DataFrame
```
data_jurusan = pd.DataFrame(columns = ["ID", "Nama Jurusan"],
data = tabel_jurusan)
# Change the index column
data_jurusan = data_jurusan.set_index('ID')
data_jurusan
data_jurusan['Nama Jurusan'] = data_jurusan['Nama Jurusan'].str.capitalize()
data_jurusan
```
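As an aside, `pandas.read_sql` can run the query and build the indexed DataFrame in one step, instead of `fetchall` plus a manual `DataFrame` constructor. A sketch using an in-memory SQLite database as a stand-in for the MySQL connection (the table contents here are invented):

```python
import sqlite3
import pandas as pd

# In-memory stand-in for the MySQL "kampus" database (schema assumed)
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE jurusan (id_jurusan INTEGER, nama_jurusan TEXT)")
conn.executemany("INSERT INTO jurusan VALUES (?, ?)",
                 [(1, 'INFORMATIKA'), (2, 'SISTEM INFORMASI')])

# One call replaces execute + fetchall + DataFrame(...) + set_index(...)
data_jurusan = pd.read_sql("SELECT * FROM jurusan", conn, index_col='id_jurusan')
print(data_jurusan)
```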
## View the data in the `mahasiswa` (student) table
```
# what is executed here is the SQL query
cursor.execute("SELECT * FROM mahasiswa")
tabel_mahasiswa = cursor.fetchall()
tabel_mahasiswa
tabel_mahasiswa[0][2]
```
## Convert to a DataFrame
```
data_mahasiswa = pd.DataFrame(columns = ["ID MHS",
"ID Jurusan",
"Nama",
"Tanggal lahir",
"No Tel",
"alamat"],
data = tabel_mahasiswa)
# Change the index column
data_mahasiswa = data_mahasiswa.set_index('ID MHS')
data_mahasiswa
data_mahasiswa['ID Jurusan'].value_counts().plot(kind='pie')
```
## Create a new <i>TABLE</i>
```
cursor.execute("CREATE TABLE mhs_baru "+
"(id_mhs_baru INT(11) NOT NULL, "+
"nama_jurusan VARCHAR(30) NOT NULL, "
"nama_mhs VARCHAR(50) NOT NULL, " +
"tanggal_lahir VARCHAR(30) NOT NULL)")
```
## JOIN TABLE
```
cursor.execute("SELECT mahasiswa.id_mhs, jurusan.nama_jurusan, mahasiswa.nama_mhs, mahasiswa.tgl_lahir "+
"FROM mahasiswa INNER JOIN jurusan on jurusan.id_jurusan = mahasiswa.id_jurusan "+
"ORDER BY mahasiswa.id_mhs")
tabel_baru = cursor.fetchall()
tabel_baru
```
## Insert the JOIN results into the new TABLE
```
for baris in tabel_baru:
    sql_query = "INSERT INTO mhs_baru VALUES (%s, %s, %s, %s)"
    cursor.execute(sql_query, baris)
    print(baris, "has been inserted")
koneksi.commit()
```
## View the new TABLE's contents as a DataFrame
```
cursor.execute("SELECT * FROM mhs_baru")
mhs_baru = cursor.fetchall()
data_mhs_baru = pd.DataFrame(columns = ["ID", "Nama Jurusan", "Nama", "Tanggal Lahir"],
data = mhs_baru)
data_mhs_baru = data_mhs_baru.set_index("ID")
data_mhs_baru
data_mhs_baru['Nama Jurusan'].value_counts().plot(kind='barh')
```
| github_jupyter |
## Example 1 - Common Driver
Here we investigate the statistical association between summer precipitation (JJA mean) in Denmark (DK) and the Mediterranean (MED). A standard correlation test shows them to be negatively correlated (r = -0.24). However, this association is not causal but is due to both regions being affected by the position of the North Atlantic storm tracks, as described by the North Atlantic Oscillation (NAO) index.
<img src="../images/ex1.png" width="500" height="600">
### References / Notes
1. Mediterranean region as described in http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.368.3679&rep=rep1&type=pdf
## Imports
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import iris
import iris.quickplot as qplt
import statsmodels.api as sm
from scipy import signal
from scipy import stats
```
### Step 1) Load the data + Extract regions of interest
```
precip = iris.load_cube('../sample_data/precip_jja.nc', 'Monthly Mean of Precipitation Rate')
nao = iris.load_cube('../sample_data/nao_jja.nc', 'nao')
```
#### Extract regions of interest:
#### Mediterranean (MED)
```
med = precip.intersection(longitude=(10.0, 30.0), latitude=(36, 41.0))
qplt.pcolormesh(med[0])
plt.gca().coastlines()
```
#### Denmark (DK)
```
dk = precip.intersection(longitude=(2, 15), latitude=(50, 60))
qplt.pcolormesh(dk[0])
plt.gca().coastlines()
```
#### Create regional means
```
def areal_mean(cube):
    grid_areas = iris.analysis.cartography.area_weights(cube)
    cube = cube.collapsed(['longitude', 'latitude'], iris.analysis.MEAN, weights=grid_areas)
    return cube
# Areal mean
med = areal_mean(med)
dk = areal_mean(dk)
```
### Step 2) Plotting + Data Processing
```
fig = plt.figure(figsize=(8, 8))
plt.subplot(311)
qplt.plot(nao)
plt.title('NAO')
plt.subplot(312)
qplt.plot(med)
plt.title('Med precip')
plt.subplot(313)
qplt.plot(dk)
plt.title('Denmark precip')
plt.tight_layout()
```
#### Standardize the data (zero mean, unit variance)
```
NAO = (nao - np.mean(nao.data))/np.std(nao.data)
MED = (med - np.mean(med.data))/np.std(med.data)
DK = (dk - np.mean(dk.data))/np.std(dk.data)
```
#### Detrend
```
NAO = signal.detrend(NAO.data)
MED = signal.detrend(MED.data)
DK = signal.detrend(DK.data)
```
### Step 3) Data analysis
```
#==========================================================
# Calculate the Pearson Correlation of MED and DK
#==========================================================
X = DK[:]
Y = MED[:]
r_dk_med, p_dk_med = stats.pearsonr(X, Y)
print(" The correlation of DK and MED is ", round(r_dk_med,2))
print(" p-value is ", round(p_dk_med, 2))
#==================================================================================
# Condition out the effect of NAO
# here this is done by computing the partial correlation of DK and MED conditional on NAO
# alternatively, one could also just regress DK on MED and NAO
#==================================================================================
# 1) regress MED on NAO
X = NAO[:]
Y = MED[:]
model = sm.OLS(Y,X)
results = model.fit()
res_med = results.resid
# 2) regress DK on NAO
X = NAO[:]
Y = DK[:]
model = sm.OLS(Y,X)
results = model.fit()
res_dk = results.resid
# 3) correlate the residuals (= partial correlation)
par_corr, p = stats.pearsonr(res_dk, res_med)
print(" The partial correlation of DK and MED (cond on NAO) is ", round(par_corr, 2))
#=====================================================
# Determine the causal effect from NAO --> MED
#=====================================================
Y = MED[:]
X = NAO[:]
model = sm.OLS(Y,X)
results = model.fit()
ce_nao_med = results.params[0]
print("The causal effect of NAO on MED is ", round(ce_nao_med,2))
#=====================================================
# Determine the causal effect from NAO --> DK
#=====================================================
Y = DK[:]
X = NAO[:]
model = sm.OLS(Y,X)
results = model.fit()
ce_nao_dk = results.params[0]
print("The effect of NAO on DK is ", round(ce_nao_dk,2))
#=====================================================
# Path tracing rule:
#=====================================================
exp_corr_dk_med = ce_nao_med * ce_nao_dk
print("The expected correlation of MED and DK is ", round(exp_corr_dk_med,2))
print("The actual correlation of MED and DK is ", round(r_dk_med, 2))
```
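The residual-based partial correlation computed above can be cross-checked against the closed-form expression $r_{XY \cdot Z} = (r_{XY} - r_{XZ}\, r_{YZ}) / \sqrt{(1 - r_{XZ}^2)(1 - r_{YZ}^2)}$. A quick sketch on synthetic data (the coefficients and sample size are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
z = rng.normal(size=500)             # common driver (the role NAO plays above)
x = 0.8 * z + rng.normal(size=500)   # first driven series
y = -0.6 * z + rng.normal(size=500)  # second driven series

# Residual method (as in the cell above): remove z from both, correlate what is left
bx = np.polyfit(z, x, 1)
by = np.polyfit(z, y, 1)
r_resid, _ = stats.pearsonr(x - np.polyval(bx, z), y - np.polyval(by, z))

# Closed-form partial correlation from the three pairwise correlations
r_xy = stats.pearsonr(x, y)[0]
r_xz = stats.pearsonr(x, z)[0]
r_yz = stats.pearsonr(y, z)[0]
r_formula = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(round(r_resid, 6), round(r_formula, 6))  # the two approaches agree
```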
### Conclusions
There is a spurious correlation of MED and DK due to the influence of the common driver NAO. If one controls for NAO the correlation is shown to be negligible.
| github_jupyter |
# Group Project 2018 - Image Filters
## Version: SIMD
### Authors:
- Alejandro
- Álvaro Baños Gomez
- Iñaki
- Guillermo Facundo Colunga
### Assignment
Implement a single-threaded version using multimedia extensions, and a multithreaded version, of the previous program in order to exploit the system's parallelism and concurrency and obtain a performance gain.
The second single-threaded program uses the architecture's SIMD extensions to improve performance; for this part the Singlethread-SIMD project is used. The SIMD extensions implement the repetitive operations carried out during the image processing assigned by your instructor. These extensions are also known as multimedia or vector extensions, since in this field it is common to perform very simple repetitive operations over large vectors or matrices that encode images, audio and video.
Multimedia extensions are optional and are grouped into instruction sets. Depending on the processor in the machine, some may be available and others not. For this reason, the first step of the assignment is to report the list of supported SIMD extensions.
SIMD extensions are used through so-called intrinsic functions (intrinsics). They look like C functions, but they are not called like ordinary functions: each reference to an intrinsic in the code translates directly into an assembly instruction.
> **Supported SIMD extensions:** MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX.
### Data structures for images
Images are loaded with the `CImg` library and represented using the `float` type. With this library we obtain a single vector containing all of the image's data. For an RGB image the vector layout is:
$$v = [ r_{1}, r_{2}, r_{3},...,r_{n}, g_{1}, g_{2}, g_{3},...,g_{n}, b_{1}, b_{2}, b_{3},...,b_{n}]$$
So we have a single vector containing the red components, then the green ones, and finally the blue ones. It follows that the size of $v$ is $width \cdot height \cdot numComponents$.
### Algorithm to implement
The assigned algorithm fuses two images in amplitude mode. In amplitude mode, the vectors R', G' and B' representing the components of the resulting image are computed as follows:
$${R'} = \frac{\sqrt{Ra^{2} + Rb^{2}}}{\sqrt{2}}$$
$${G'} = \frac{\sqrt{Ga^{2} + Gb^{2}}}{\sqrt{2}}$$
$${B'} = \frac{\sqrt{Ba^{2} + Bb^{2}}}{\sqrt{2}}$$
From the formulas above it follows that, for any RGB coordinate of images 1 and 2, the transformation formula is:
$$(x{_{3}}, y{_{3}}, z{_{3}}) = \left ( \frac{\sqrt{x{_{1}}^{2} + x{_{2}}^{2}}}{\sqrt{2}}, \frac{\sqrt{y{_{1}}^{2} + y{_{2}}^{2}}}{\sqrt{2}}, \frac{\sqrt{z{_{1}}^{2} + z{_{2}}^{2}}}{\sqrt{2}} \right )$$
where $x_{3}, y_{3}, z_{3}$ are the RGB coordinates of the image resulting from the amplitude-mode fusion of images 1 and 2.
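As a scalar reference before vectorizing, the amplitude-mode transformation can be prototyped element-wise with NumPy (the tiny single-channel arrays below are stand-ins for real image data):

```python
import numpy as np

def amplitude_blend(img1, img2):
    """Fuse two images in amplitude mode: sqrt(a^2 + b^2) / sqrt(2)."""
    a = img1.astype(np.float32)
    b = img2.astype(np.float32)
    return np.sqrt(a * a + b * b) / np.sqrt(np.float32(2.0))

# Two tiny 2x2 single-channel "images" as a stand-in
img1 = np.array([[1.0, 3.0], [0.0, 2.0]], dtype=np.float32)
img2 = np.array([[1.0, 4.0], [0.0, 2.0]], dtype=np.float32)
print(amplitude_blend(img1, img2))
```

Note that equal pixels map to themselves (e.g. blending 2 with 2 gives 2), which is a handy property for eyeballing the result.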
### Implemented algorithm
As we have seen, the images are represented by vectors containing their components, so we have:
$image_{1} = [ r_{11}, r_{12}, r_{13},...,r_{1n}, g_{11}, g_{12}, g_{13},...,g_{1n}, b_{11}, b_{12}, b_{13},...,b_{1n}]$
$image_{2} = [ r_{21}, r_{22}, r_{23},...,r_{2n}, g_{21}, g_{22}, g_{23},...,g_{2n}, b_{21}, b_{22}, b_{23},...,b_{2n}]$
> **Note:** The width, height and number of components of the two images must match.
$image_{3} = [width \cdot height \cdot numComponents]$
FOR EACH i FROM 0 TO $(width \cdot height \cdot numComponents)$
$$image_{3i} = \frac{\sqrt{image_{1i}^{2} + image_{2i}^{2}}}{\sqrt{2}}$$
Given the SIMD instruction sets supported by the target processor, the AVX version is used, since it is the most recent SIMD instruction set available.
With SIMD instructions we can operate on vectors of `SIMD_BANDWITH` elements in a single operation, so we traverse the vector in steps of `SIMD_BANDWITH` elements.
Therefore, to apply the formula above to `SIMD_BANDWITH` elements at once, the formula must be decomposed into elementary operations. The following code shows a decomposed version of the implemented algorithm:
```c++
for (int i = 0; i < size; i += SIMD_BANDWITH) {
    a = _mm256_loadu_ps(&pcompImage1[i]); // load img1
    b = _mm256_loadu_ps(&pcompImage2[i]); // load img2
    a2 = _mm256_mul_ps(a, a); // img1^2
    b2 = _mm256_mul_ps(b, b); // img2^2
    ab2 = _mm256_add_ps(a2, b2); // img1^2 + img2^2
    raizab2 = _mm256_sqrt_ps(ab2); // sqrt( img1^2 + img2^2 )
    res8 = _mm256_div_ps(raizab2, vsqrt2); // sqrt( img1^2 + img2^2 ) / sqrt( 2.0 )
    _mm256_storeu_ps(&pdstImage[i], res8); // img3 = sqrt( img1^2 + img2^2 ) / sqrt( 2.0 )
}
```
The following code shows the algorithm as finally implemented:
```c++
for (int i = 0; i < size; i += SIMD_BANDWITH) {
_mm256_storeu_ps(
&pdstImage[i],
_mm256_div_ps(
_mm256_sqrt_ps(
_mm256_add_ps(
_mm256_mul_ps(
_mm256_loadu_ps(&pcompImage1[i]),
_mm256_loadu_ps(&pcompImage1[i])),
_mm256_mul_ps(
_mm256_loadu_ps(&pcompImage2[i]),
_mm256_loadu_ps(&pcompImage2[i])
)
)
),
vsqrt2
)
);
}
```
> **Note**: _Since a single run of the algorithm takes less than 5 seconds, it is nested inside a for loop that repeats it 40 times, so the program's total runtime exceeds 5 seconds, **but the algorithm is executed 40 times**._
> **Note:** The algorithm above is implemented purely with nested SIMD intrinsics, without intermediate variables.
### Algorithm analysis
Running the algorithm above 10 times in release mode yields the following timings:
```
import pandas as pd
data = pd.Series([5.2364, 5.2364, 5.2364, 5.2364, 5.2364, 5.2364, 5.2364, 5.2364, 5.2364, 5.2364],
index=['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'])
table = pd.DataFrame({'Duration':data})
table
import numpy as np
mean = np.mean(data)
std = np.std(data)
print("Media 40 ejecuciones: ", mean, "s.")
print("Desviación estándar: ", std, ".")
print("Media 1 ejecución: ", mean/40.0, "s."); # 40 es el número de ejecuciones.
```
#### Comparison with the single-threaded version without SIMD instructions
The speed-up of this version with respect to the single-threaded, non-SIMD version is shown below. The improvements in this version that may contribute to the speed-up are:
- A single loop traverses the whole vector.
- Reduced use of pointers.
- SIMD instructions with a 256-bit width.
```
import pandas as pd
data = pd.Series([2.38472996667, 0.13091],
                 index=['single-thread', 'simd'])
table = pd.DataFrame({'Time per execution (s)':data})
table
mean_monohilo = 2.38472996667
mean_simd = 0.13091
print("Speed-up: ", mean_monohilo/mean_simd, "x")
```
| github_jupyter |
# Extinction Efficiency Factor
Figure 6.5 from Chapter 6 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021,
Cambridge University Press.
Plot the efficiency factor Q$_{ext}$ for two values of the real index of refraction, $n_r=1.5$ (glass) and
$n_r=1.33$ (water ice).
Uses van de Hulst's method to compute Mie scattering in the astrophysically interesting limit that the
spherical scatterer is large compared to the wavelength of light ($2\pi a \gg \lambda$) and only moderately
refractive and absorptive at wavelengths of interest ($|n-1|<1$). In this limit, the solution for the pure
scattering case yields an efficiency factor
\begin{equation}
Q_{ext} = Q_{sca}\approx 2-\frac{4}{\varrho}\sin\varrho + \frac{4}{\varrho^2}(1-\cos\varrho)
\end{equation}
where
\begin{equation}
\varrho = 2\left(\frac{2\pi a}{\lambda}\right)|n_r-1|
\end{equation}
in the short-wavelength limit this function approaches the limiting value of $Q_{ext}=2.0$.
```
%matplotlib inline
import os
import sys
import math
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter
import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)
```
## Standard Plot Format
Setup the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style.
```
figName = 'Fig6_5'
# graphic aspect ratio = width/height
aspect = 4.0/3.0 # 4:3
# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches
# output format and resolution
figFmt = 'png'
dpi = 600
# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 10
labelFontSize = 6
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect
# Plot filename
plotFile = f'{figName}.{figFmt}'
# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})
# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})
# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'
```
## Scattering Efficiency Factor
Do the computation for two real indices of refraction:
* $n_r=1.5$ - typical of glass (fused silica) at visible wavelengths
* $n_r=1.33$ - typical of pure water ice at visible wavelengths
Parameterize using $x=2\pi a/\lambda$.
```
# Range of x
xMin = 0.01
xMax = 14.0
x = np.linspace(xMin,xMax,501)
yMin = 0.0
yMax = 4.0
# glass (fused silica)
nr = 1.5
rho = 2.0*x*(nr-1.0)
Qext1 = 2.0 - (4.0/rho)*np.sin(rho)+(4.0/(rho*rho))*(1-np.cos(rho))
# pure water ice
nr = 1.33
rho = 2.0*x*(nr-1.0)
Qext2 = 2.0 - (4.0/rho)*np.sin(rho)+(4.0/(rho*rho))*(1-np.cos(rho))
```
## Make the plot
Plot $n_r$=1.5 (glass) as a solid line, $n_r$=1.33 (ice) as a dotted line, with $Q_{ext}$=2.0 for reference
(see text).
```
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
plt.xlim(xMin,xMax)
ax.xaxis.set_major_locator(MultipleLocator(2))
ax.xaxis.set_minor_locator(MultipleLocator(1))
plt.xlabel(r'$x=2\pi a/\lambda$',fontsize=axisFontSize)
plt.ylim(yMin,yMax)
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(0.5))
plt.ylabel(r'Q$_{\rm ext}$',fontsize=axisFontSize)
# glass (nr=1.5)
plt.plot(x,Qext1,'-',color='black',lw=1.0,zorder=10)
plt.text(4.0,3.2,r'$n_{r}$=1.5',fontsize=axisFontSize,ha='right',color='black')
# ice (nr=1.33)
plt.plot(x,Qext2,':',color='black',lw=1.0,zorder=10)
plt.text(7.5,3.0,r'$n_{r}$=1.33',fontsize=axisFontSize,ha='left',color='black')
# large x asymptote at Qext=2
plt.hlines(2.0,xMin,xMax,ls='--',color='black',lw=0.5,zorder=8)
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
| github_jupyter |
```
import pandas as pd
import pickle
import os
import numpy as np
import sys
from sklearn.metrics import roc_auc_score, make_scorer,brier_score_loss,log_loss,average_precision_score
import shutil
import os
def convert_hba1c_mmol_mol_2_percentage(row):
    # convert HbA1c from IFCC units (mmol/mol) to NGSP percent; non-numeric input maps to None
    try:
        row = 0.0915 * row + 2.15
    except Exception:
        row = None
    return row
```
- An A1C level below 5.7% is considered normal.
- An A1C level between 5.7% and 6.4% is considered prediabetes.
- An A1C level of 6.5% or higher on two separate tests indicates type 2 diabetes.
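As a quick sanity check, the IFCC-to-NGSP conversion used below ($0.0915 \times$ mmol/mol $+ 2.15$) maps the conventional 48 mmol/mol diabetes cut-off to roughly 6.5%:

```python
def mmol_mol_to_percent(ifcc):
    """Convert HbA1c from IFCC units (mmol/mol) to NGSP percent."""
    return 0.0915 * ifcc + 2.15

# 48 mmol/mol is the conventional diabetes threshold (~6.5 %)
for value in (31, 39, 48):
    print(value, '->', round(mmol_mol_to_percent(value), 2))
```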
```
def get_strat_cohort_indices():
    Test_file_path = "/net/mraid08/export/jafar/UKBioBank/Data/ukb29741_a1c_below_65_updates_scoreboard_test.csv"
    a1c_col = "30750-0.0"
    a1c_df = pd.read_csv(Test_file_path, usecols=["eid", a1c_col], index_col="eid")
    # apply the conversion to the HbA1c column (not to the whole frame)
    a1c_df["hba1c"] = a1c_df[a1c_col].apply(convert_hba1c_mmol_mol_2_percentage)
    a1c_df_pre = a1c_df[a1c_df["hba1c"] >= 5.7]
    a1c_df_pre_index = a1c_df_pre.index
    a1c_df_healthy = a1c_df[a1c_df["hba1c"] < 5.7]
    # build the mask on the already-filtered frame so the index stays aligned
    a1c_df_healthy = a1c_df_healthy[a1c_df_healthy["hba1c"] >= 4]
    a1c_df_healthy_index = a1c_df_healthy.index
    return a1c_df_healthy_index, a1c_df_pre_index
def stratify_results(folder_name):
    a1c_df_healthy_index, a1c_df_pre_index = get_strat_cohort_indices()
    base_path = "/home/edlitzy/UKBB_Tree_Runs/For_article/Revision_runs/results_folder/"
    test_path = os.path.join(base_path, folder_name, "y_LR_test.csv")
    pred_path = os.path.join(base_path, folder_name, "final_scores.csv")
    test_df = pd.read_csv(test_path, index_col="eid")
    pred_df = pd.read_csv(pred_path, index_col="eid")
    tot_df = test_df.join(pred_df)
    res_pre_df = tot_df.loc[a1c_df_pre_index, :]
    res_healthy_df = tot_df.loc[a1c_df_healthy_index, :]
    return res_pre_df, res_healthy_df
def calc_ci(df, folder_name):
    roc_list = []
    aps_list = []
    # bootstrap: resample the cohort with replacement 1000 times
    for ind in range(1000):
        tmp_df = df.sample(n=df.shape[0], replace=True, random_state=ind)
        roc_list.append(roc_auc_score(y_true=tmp_df.iloc[:, 0].values, y_score=tmp_df.iloc[:, 1]))
        aps_list.append(average_precision_score(y_true=tmp_df.iloc[:, 0].values, y_score=tmp_df.iloc[:, 1]))
    res_df = pd.DataFrame(
        index=[folder_name], columns=["auROC min", "auROC mean", "auROC max", "APS min", "APS mean", "APS max"])
    res_df["auROC min"] = "{:.2f}".format(np.quantile(roc_list, 0.025))
    res_df["auROC max"] = "{:.2f}".format(np.quantile(roc_list, 0.975))
    res_df["auROC mean"] = "{:.2f}".format(np.mean(roc_list))
    res_df["APS min"] = "{:.2f}".format(np.quantile(aps_list, 0.025))
    res_df["APS max"] = "{:.2f}".format(np.quantile(aps_list, 0.975))
    res_df["APS mean"] = "{:.2f}".format(np.mean(aps_list))
    print(res_df)
    return res_df
res_list=[]
folder_name="LR_No_reticulocytes_scoreboard"
four_bt_res_pre_df,four_bt_res_healthy_df=stratify_results(folder_name)
res_list.append(calc_ci(df=four_bt_res_pre_df,folder_name="Pre diab "+folder_name))
res_list.append(calc_ci(df=four_bt_res_healthy_df,folder_name="Healthy " +folder_name))
four_bt_res_pre_df.shape
four_bt_res_pre_df["2443-3.0"].sum()/four_bt_res_pre_df.shape[0]
four_bt_res_healthy_df.shape
four_bt_res_healthy_df["2443-3.0"].sum()/four_bt_res_healthy_df.shape[0]
folder_name="LR_Anthro_scoreboard"
anhtro_res_pre_df,anthro_res_healthy_df=stratify_results(folder_name)
res_list.append(calc_ci(df=anhtro_res_pre_df,folder_name="Pre diab "+folder_name))
res_list.append(calc_ci(df=anthro_res_healthy_df,folder_name="Healthy "+folder_name))
tot_res=pd.concat(res_list)
tot_res
def build_summary_table(df):
sum_df=pd.DataFrame(index=df.index,columns=["auROC [95% CI]","APS [95% CI]"])
sum_df["auROC [95% CI]"]=df["auROC mean"]+" ["+df["auROC min"]+"-"+df["auROC max"]+"]"
sum_df["APS [95% CI]"]=df["APS mean"]+" ["+df["APS min"]+"-"+df["APS max"]+"]"
return sum_df
final_df=build_summary_table(tot_res)
final_df
final_df.to_csv("/home/edlitzy/UKBB_Tree_Runs/For_article/Revision_runs/Tables/stratified_original_results.csv")
```
```
## https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html - original tutorial
```
## Packages
```
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
```
## Task
- The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright.
- As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, the environment terminates if the pole falls over too far.
```
env = gym.make('CartPole-v0').unwrapped
# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
## Replay memory
- We’ll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated.
It has been shown that this greatly stabilizes and improves the DQN training procedure.
```
Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
```
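Re-declaring the same class so the snippet is self-contained, the memory can be exercised on its own to see the ring-buffer behavior of `push` once capacity is exceeded:

```python
import random
from collections import namedtuple

Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))

class ReplayMemory(object):
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        # Overwrites the oldest transition once capacity is reached (ring buffer)
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

memory = ReplayMemory(capacity=3)
for step in range(5):      # push 5 transitions into a buffer of size 3
    memory.push(step, 0, step + 1, 1.0)

print(len(memory))         # 3: the two oldest transitions were overwritten
batch = memory.sample(2)
```

Because sampling is uniform over the buffer, consecutive batch elements need not come from consecutive time steps, which is exactly the decorrelation property described above.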
## Q-network
- Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing Q(s,left) and Q(s,right) (where s is the input to the network).
In effect, the network is trying to predict the quality of taking each action given the current input.
```
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
self.head = nn.Linear(6336, 2)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
```
## Utilities
```
resize = T.Compose([T.ToPILImage(),
T.Resize(100, interpolation=Image.CUBIC),
T.ToTensor()])
# This is based on the code from gym.
screen_width = 600
def get_cart_location():
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
def get_screen():
screen = env.render(mode='rgb_array').transpose((2, 0, 1)) # transpose into torch order (CHW)
# Strip off the top and bottom of the screen
screen = screen[:, 160:320]
view_width = 320
cart_location = get_cart_location()
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
# Strip off the edges, so that we have a square image centered on a cart
screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
# (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Resize, and add a batch dimension (BCHW)
return resize(screen).unsqueeze(0).to(device)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
plt.title('Example extracted screen')
plt.show()
```
## TRAINING
```
BATCH_SIZE = 256
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
policy_net = DQN().to(device)
target_net = DQN().to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.Adam(policy_net.parameters())
memory = ReplayMemory(20000)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
with torch.no_grad():
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(2)]], device=device, dtype=torch.long)
episode_durations = []
def plot_durations():
plt.figure(2)
plt.clf()
durations_t = torch.tensor(episode_durations, dtype=torch.float)
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
# Take 100 episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
plt.pause(0.001) # pause a bit so that plots are updated
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
```
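As a quick check of the exploration schedule used by `select_action`, the threshold decays exponentially from `EPS_START` toward `EPS_END`; a standalone sketch with the same constants:

```python
import math

EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200

def eps_threshold(steps_done):
    # Same schedule as in select_action: exponential decay toward EPS_END
    return EPS_END + (EPS_START - EPS_END) * math.exp(-1.0 * steps_done / EPS_DECAY)

values = [eps_threshold(s) for s in (0, 200, 1000)]
print(values)  # roughly [0.9, 0.36, 0.056]
```

So early in training almost every action is random, and after a few thousand steps the agent acts greedily about 95% of the time.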
## Training loop
```
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.uint8)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken
state_action_values = policy_net(state_batch).gather(1, action_batch)
# Compute V(s_{t+1}) for all next states.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
```
- Below, you can find the main training loop. At the beginning we reset the environment and initialize the state Tensor.
Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once.
When the episode ends (our model fails), we restart the loop.
```
num_episodes = 1000
for i_episode in range(num_episodes):
# Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
# Select and perform an action
action = select_action(state)
_, reward, done, _ = env.step(action.item())
reward = torch.tensor([reward], device=device)
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
# Store the transition in memory
memory.push(state, action, next_state, reward)
# Move to the next state
state = next_state
# Perform one step of the optimization (on the target network)
optimize_model()
if done:
episode_durations.append(t + 1)
plot_durations()
break
# Update the target network
if i_episode % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()
```
# Probability and Statistics
# Axioms of Probability
These are the minimal rules that define the logic and the deductive system underlying the study of probability.
+ **Non-negativity:** every event has a probability greater than or equal to 0; in particular, the impossible event has probability 0.
+ **Certainty:** the sure event has probability 1. This does not depend on the outcome itself. The probability that an event does not occur equals 1 minus the probability that it occurs.
+ **Additivity:** the probability of the union of two mutually exclusive events is the sum of their individual probabilities.
The properties of probability are derived from these axioms.
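As a quick numerical illustration of the three axioms with a fair die (a sketch, using exact fractions to avoid rounding):

```python
from fractions import Fraction

# Probability of each face of a fair die
p = {face: Fraction(1, 6) for face in range(1, 7)}

# Non-negativity: every event has probability >= 0
assert all(prob >= 0 for prob in p.values())

# Certainty: the sure event (some face comes up) has probability 1
total = sum(p.values())
print(total)  # 1

# Additivity for mutually exclusive events: P({1} or {2}) = P({1}) + P({2})
p_1_or_2 = p[1] + p[2]
print(p_1_or_2)  # 1/3

# A derived property: P(not A) = 1 - P(A)
p_not_1 = total - p[1]
print(p_not_1)  # 5/6
```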
## 1. Probability Distributions
### 1.1 Discrete Uniform Distribution
Example: a six-sided die. The probability of each face when rolling the die is $1/6$. Plotting the probability of each possible outcome yields a chart like the following:
```
import numpy as np
import matplotlib.pyplot as plt
valores = np.arange(1,7) # Possible die values
probas = np.zeros(6) + 1/6 # Uniform probability
plt.bar(valores, probas) # Bar plot
plt.title('Uniform probability distribution: rolling a die')
# plt.savefig('distribucion_dado.png', dpi = 400) # Saves the figure if needed
plt.show()
```
In this case, the probability distribution is said to be *discrete uniform*: it assigns the same probability to each of the six values the die can show. If the die were loaded, the distribution would no longer be uniform.
**Additional remarks**:
1. The outcome of rolling a die is an example of a *random variable*.
2. In the case of the die, the random variable takes *discrete* and *bounded* (limited) values: 1, 2, 3, 4, 5, and 6.
3. There are random variables whose possible values are continuous and unbounded. Below is the most famous distribution with these characteristics.
### 1.2 Normal (Gaussian) Distribution
It is a continuous distribution. Many variables associated with natural phenomena follow a normal distribution; a classic example is people's height. Its shape is given by the following formula:
$$f(x|\mu, \sigma^2)=\frac{1}{\sqrt{2 \pi \sigma^2}}e^{\frac{-(x - \mu)^2}{2\sigma^2}}$$
Parameters:
+ Mean $\mu$
+ Standard deviation $\sigma$.
These values are *theoretical*, i.e., they belong to the probability distribution itself.
**The Normal distribution in NumPy**
Below we draw samples from two normal distributions using `np.random.normal()`. Both have the same mean but different standard deviations. <font color='blue'>**Check**</font> the function's documentation to understand exactly what it does.
```
mu = 2.0 # Center of the data (mean). The distribution forms around mu on both sides.
sigma_1 = 5.0 # Standard deviation. Sample 1 will have a "wider" probability distribution.
sigma_2 = 2.0
muestras_1 = np.random.normal(loc = mu, scale = sigma_1, size = 400) # Draws a normal sample with the given mu, sigma and sample size.
muestras_2 = np.random.normal(loc = mu, scale = sigma_2, size = 400)
print(muestras_1, muestras_2)
```
Histogram of the samples.
*Recall that a histogram takes a given number of intervals (`bins = 20`) and counts how many samples fall into each interval.*
```
plt.hist(muestras_1, bins = 20, alpha = 0.5, label = 'Histogram Sample 1') # Histogram for a given sample
plt.hist(muestras_2, bins = 20, alpha = 0.5, label = 'Histogram Sample 2') # The histogram shows the observed data
plt.legend()
plt.grid()
plt.show()
```
<font color='orange'>**Exercise:**</font> Generate the samples again and plot their histograms. Check whether anything changes. **Look up** what a *seed* (`seed`) is in NumPy and use one. Also change the number of samples via the `size` argument.
**A:** since the generator is random, the samples change every time they are drawn, so the histogram plot changes as well.
**Seed:** fixes NumPy's random number generator so that the random draws become reproducible.
```
mu = 2.0 # Center of the data (mean). The distribution forms around mu on both sides.
sigma_3 = 5.0 # Standard deviation. Sample 3 will have a "wider" probability distribution.
sigma_4 = 2.0
np.random.seed(1)
muestras_3 = np.random.normal(loc = mu, scale = sigma_3, size = 500) # Draws a normal sample with the given mu, sigma and sample size.
muestras_4 = np.random.normal(loc = mu, scale = sigma_4, size = 500)
#print(muestras_3, muestras_4)
plt.hist(muestras_3, bins = 20, alpha = 0.5, label = 'Histogram Sample 3') # Histogram for a given sample
plt.hist(muestras_4, bins = 20, alpha = 0.5, label = 'Histogram Sample 4') # The histogram shows the observed data
plt.legend()
plt.grid()
plt.show()
#The plot below shows:
# On the x axis, the observed value
# On the y axis, the number of observations of that value
```
### 1.3 Relationship between Probability and Statistics
**Mean and standard deviation in a Normal distribution**
In a normal distribution, the average of the samples *tends* to the mean $\mu$ of the distribution, and the sample standard deviation *tends* to the distribution's standard deviation $\sigma$. Note, then, that there are computed values (sample mean, sample standard deviation) and theoretical values ($\mu$ and $\sigma$). Confusing them is a common mistake.
Let's look at an example. Again, we draw samples from a normal distribution:
```
mu = 8.5 # mean
sigma = 3.0 # std dev
muestras = np.random.normal(loc = mu, scale = sigma, size = 100)
```
We compute their mean and standard deviation and compare them with $\mu$ and $\sigma$.
```
print('Theoretical mean:', mu, '. Computed mean:', muestras.mean())
print('Theoretical standard deviation:', sigma, '. Computed standard deviation:', muestras.std())
```
Let's compare the histogram of the samples with the theoretical distribution, which we plot using the `SciPy` library:
```
from scipy.stats import norm
plt.hist(muestras, bins=20, density=True, alpha=0.6, color='g') # Plot the histogram (observed data)
# In the line above, the bins argument sets the number of intervals (the width of the green bars follows from it).
# The following code plots the theoretical probability distribution:
xmin, xmax = plt.xlim() # Get the plot limits of the x axis
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, sigma) # Evaluates the normal density with the given parameters.
plt.plot(x, p, 'k', linewidth=2, label = 'Theoretical Distribution')
title = "Samples drawn from a normal distribution with mu = %.2f, sigma = %.2f" % (mu, sigma)
plt.title(title)
plt.legend()
plt.show()
#The plot below shows:
# On the x axis, the observed value
# On the y axis, the probability density
from scipy.stats import kurtosis
from scipy.stats import skew
print(kurtosis(p)) # Computes the excess kurtosis - platykurtic
print(skew(p)) # Skewness coefficient - positive skew
```
**Note** that the scale on the *y* axis differs from that of the earlier histograms. This is because a histogram can plot, instead of the number of samples falling in each interval, the **proportion** of samples in each interval.
**To think about and try:**
1. Why don't $\mu$ and $\sigma$ coincide with the computed values? What can we do to make them increasingly similar? And what happens in that case to the histogram and the theoretical distribution?
**A:** To make them closer, use a larger sample size. When this happens, the histogram and the probability distribution have lighter tails (they concentrate more around the mean). The kurtosis increases.
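The effect of sample size can be checked directly. A sketch with a fixed seed (the exact numbers depend on the seed, but the drift toward $\mu$ and $\sigma$ does not):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 5.0

for n in (100, 10_000, 1_000_000):
    muestras = rng.normal(loc=mu, scale=sigma, size=n)
    # The sample mean and std approach mu and sigma as n grows
    print(n, round(muestras.mean(), 3), round(muestras.std(), 3))
```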
## <font color='orange'>Exercises</font>
<font color='orange'>**Exercise 1:**</font> Simulating the results of rolling two dice.
```
muestras_dado1 = np.random.choice([1,2,3,4,5,6], size = 30000)
print(muestras_dado1)
muestras_dado2 = np.random.choice([1,2,3,4,5,6], size = 30000)
print(muestras_dado2)
suma = muestras_dado1 + muestras_dado2
print(suma)
plt.hist(suma, bins = np.arange(1.5,13.5,1), density=True, rwidth = 0.5,)
plt.show()
```
<font color='orange'>**Exercise 2:**</font> Starting from the previous simulation, obtain the probability distribution of the random variable *maximum value obtained when rolling two dice.* For example, if we roll 2 and 5, the result is 5.
```
muestra3 = np.random.choice([1,2,3,4,5,6], size = 10000)
print(muestra3)
muestra4 = np.random.choice([1,2,3,4,5,6], size = 10000)
print(muestra4)
'''We combine the samples into a two-row array, where
each column corresponds to one roll'''
muestras = np.array([muestra3,muestra4])
print(muestras.shape)
'''We take the maximum of each roll. Remember to take the maximum
along the appropriate axis'''
maximos = np.max(muestras, axis = 0)
print(maximos)
plt.hist(maximos, bins = np.arange(0.5,7.5,1), density=True, rwidth = 0.8)
plt.show()
```
## 2. Correlation
The goal of this section is for you to become familiar with the concepts of **Covariance** and **Correlation**, and to notice how simulating data is sometimes useful for learning or approaching certain techniques.
We have two random variables $X$ and $Y$, with $n$ samples of each, $x_1,x_2,..., x_n$ and $y_1,y_2,..., y_n$. Their means are $\bar{x}$ and $\bar{y}$, respectively. We define the covariance as
$$Cov(X,Y) = \sum_{i=1}^{n} \frac{(x_i - \bar{x})(y_i - \bar{y})}{n}$$
You will sometimes see $n - 1$ or $n - 2$ in the denominator instead of $n$. According to Wikipedia, "covariance is a value indicating the degree of joint variation of two random variables with respect to their means. It is the basic quantity for determining whether there is a dependence between the two variables, and it is also needed to estimate other basic parameters, such as the linear correlation coefficient or the regression line."
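A tiny hand check of the formula on three points. Note that `np.cov` divides by $n-1$ by default; passing `bias=True` gives the $1/n$ version defined above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# Covariance by the definition above (dividing by n):
# deviations are [-1, 0, 1] and [-2, 0, 2], so the sum of products is 4
cov_manual = np.sum((x - x.mean()) * (y - y.mean())) / x.size
print(cov_manual)  # 4/3 = 1.333...

# np.cov returns the 2x2 covariance matrix; entry [0, 1] is Cov(X, Y)
print(np.cov(x, y, bias=True)[0, 1])   # same value, 1/n normalization
print(np.cov(x, y)[0, 1])              # 2.0, the 1/(n-1) version
```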
We start by generating random samples of two random variables that are unrelated to each other.
```
import matplotlib.pyplot as plt
import numpy as np
n = 1000
sigma_1 = 2
sigma_2 = 20
x = np.random.normal(size = n, scale = sigma_1)
y = np.random.normal(size = n, scale = sigma_2)
# Plot
plt.scatter(x, y) # Scatter plot
plt.grid()
plt.xlim([-60,60])
plt.ylim([-60,60])
plt.show()
```
Is there any relationship between them? By relationship we mean "joint variation". And by "joint variation" we can imagine that if one variable increases, the other does too; and if one decreases, so does the other. Covariance tries to quantify that relationship.
```
cov = np.sum((x - x.mean())*(y - y.mean()))/x.size
print(cov)
```
Covariance, however, has a small problem: it depends on the scale of our data. To get rid of the scale, we can define the Correlation, which is simply the covariance divided by the product of the standard deviations of the two random variables.
$$Corr(X,Y) = \frac{Cov(X,Y)}{\sigma_X \sigma_Y}$$
```
corr = cov/(x.std()*y.std())
print(corr)
```
And with that we remove the scale. A value close to zero tells us there is no (linear?) relationship between the variables.
**Try** different scales (modifying `sigma_1` and `sigma_2`) and you will see that `cov` takes values over a very wide range, while `corr` stays close to zero.
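The same experiment can be run in a loop, rescaling one variable and recomputing both quantities (a sketch with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = rng.normal(size=1000)

for scale in (1, 10, 1000):
    ys = y * scale
    cov = np.sum((x - x.mean()) * (ys - ys.mean())) / x.size
    corr = cov / (x.std() * ys.std())
    # cov grows with the scale; corr does not change at all
    print(scale, round(cov, 3), round(corr, 5))
```

Multiplying by a positive constant scales the covariance and the standard deviation by the same factor, so the factor cancels exactly in `corr`.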
### 2.1 Relationship between variables (case 1: linear relationship)
Let's look at another example: we know there is a linear relationship between $X$ and $Y$, i.e., we can approximate $Y = aX+b$, where $a$ and $b$ are the slope and the intercept.
```
n = 100
x = np.linspace(-1,1,n) + 0.25*np.random.normal(size = n)
y = 4.5*x + 0.25*np.random.normal(size = n)
# Plot
plt.scatter(x, y) # Scatter plot
plt.grid()
plt.show()
```
The covariance gives
```
cov = np.sum((x - x.mean())*(y - y.mean()))/x.size
print(cov)
corr = cov/(x.std()*y.std())
corr
```
The value is close to one, indicating an increasing linear relationship between the two variables.
**Try** changing the slope of the linear function (the number multiplying `x` in `y = ...`) and see what happens. What if the slope is negative?
#### <font color='green'>Conclusions</font>
1. Covariance is a measure of the joint variation of two variables. But it has a problem: it depends on the scale.
2. To "get rid of" the scale, we define the correlation, which is simply the covariance divided by the product of the standard deviations of the variables. **To think about:** why is the standard deviation associated with the scale of a variable?
3. The correlation is a value between -1 and 1. It takes a value close to one when there is an increasing linear relationship between the variables, zero when there is no relationship, and -1 when there is a decreasing linear relationship.
4. This correlation has a specific name: the **Pearson correlation**.
### 2.2 Covariance and Correlation with NumPy
**This section is optional**
NumPy has built-in functions that compute the covariance and correlation between two variables. The only difference is that, instead of returning a single value, they return four values, corresponding to the covariance/correlation of $X$ with $X$, $X$ with $Y$, $Y$ with $X$, and $Y$ with $Y$.
```
np.cov([x,y])
np.corrcoef([x,y])
```
### 2.3 Non-Linear Relationship between Variables
What happens when the relationship between the variables is not linear?
```
n = 1000
x = np.linspace(-5,5,n) + 0.25*np.random.normal(size = n)
y = x**2 + 0.25*np.random.normal(size = n)
# Plot
plt.scatter(x, y) # Scatter plot
plt.grid()
plt.show()
```
The covariance gives
```
cov = np.sum((x - x.mean())*(y - y.mean()))/x.size
print(cov)
corr = cov/(x.std()*y.std())
corr
```
**The above is a linear correlation. Since the data have a parabolic relationship, the correlation coefficient fails to capture it.**
Note that the correlation takes a value around zero, indicating that there is no linear correlation between the two variables. But this does NOT mean there is no *relationship* between them; it only tells us that it is not linear. That is why plotting is so important.
**Try** changing the mathematical relationship between `x` and `y` and see what happens.
To deal with non-linear relationships between variables, other types of correlation exist. The one we saw is called the **Pearson correlation**, which is the most famous. But there are others, such as Spearman and Kendall, which are very useful when there is a non-linear relationship between variables.
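For a relation that is monotonic but non-linear, the Spearman correlation stays at 1 while Pearson does not. A sketch using `scipy.stats` (here $y = x^3$ is strictly increasing, so the ranks of `x` and `y` coincide exactly):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(-5, 5, 1000)
y = x ** 3          # monotonic, but far from linear

print(round(pearsonr(x, y)[0], 3))   # noticeably below 1
print(round(spearmanr(x, y)[0], 3))  # 1.0 -- ranks coincide exactly
```

Spearman is simply the Pearson correlation computed on the ranks of the data, which is why a strictly monotonic transformation does not affect it.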
### 3. Correlation in Pandas
We again use the Iris Dataset from the previous session.
```
import pandas as pd
data = pd.read_csv('DS_Bitácora_04_Iris.csv')
data.drop(columns = 'Id', inplace = True) # This line drops the Id column right away
data.head()
```
To obtain the correlations between the different variables, we simply do:
```
data.corr()
```
## Complementary material
We recommend visiting and practicing with the Instituto Humai [Data Analysis course](https://github.com/institutohumai/cursos-python/tree/master/AnalisisDeDatos).
```
import os
import sys
import math
import numpy as np
import pandas as pd
import librosa
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from sklearn import linear_model
from collections import Counter
# load other modules --> repo root path
sys.path.insert(0, "../")
from utils import text
from utils import audio
from utils.logging import Logger
from params.params import Params as hp
from dataset.dataset import TextToSpeechDataset, TextToSpeechDatasetCollection, TextToSpeechCollate
```
### Load dataset and prepare data
```
hp.sample_rate = 22050
hp.languages = ["german", "dutch", "french", "greek", "japanese", "russian", "chinese", "finnish", "german", "hungarian", "spanish"]
common = ' '
greece = 'άέήίαβγδεζηθικλμνξοπρςíστυφχψωόύώ'
russian = 'абвгдежзийклмнопрстуфхцчшщъыьэюяё'
asciis = 'abcdefghijklmnopqrstuvwxyz'
chinese = 'ōǎǐǒàáǔèéìíūòóùúüāēěī'
finnish = 'éöä'
german = 'ßàäéöü'
hungarian = 'őáéíóöűúü'
french = 'àâçèéêíôùû'
spanish = 'áèéíñóöúü'
hp.characters = ''.join(set(common + greece + russian + asciis + chinese + finnish + german + hungarian + french + spanish))
hp.predict_linear = True
hp.num_fft = 1102
metafile = "all_reduced.txt"
dataset_root = "../data/css10"
data = TextToSpeechDataset(os.path.join(dataset_root, metafile), dataset_root)
durations = []
lengths = []
num_words = []
lengths_phon = []
languages = []
freq_chars = {l: Counter() for l in hp.languages}
freq_phon = {l: Counter() for l in hp.languages}
Logger.progress(0, prefix='Computing stats:')
for i, item in enumerate(data.items):
languages.append(hp.languages[item['language']])
audio_path = item['audio']
full_audio_path = os.path.join(dataset_root, audio_path)
waveform = audio.load(full_audio_path)
durations.append(audio.duration(waveform))
utterance = text.to_text(item['text'], use_phonemes=False)
clear_utterance = text.remove_punctuation(utterance)
clear_words = clear_utterance.split()
lengths.append(len(utterance))
num_words.append(len(clear_words))
clear_utterance = clear_utterance.replace(' ', '')
freq_chars[hp.languages[item['language']]].update(clear_utterance)
utterance_pho = text.to_text(item['phonemes'], use_phonemes=True)
lengths_phon.append(len(utterance_pho))
utterance_pho = utterance_pho.replace(' ', '')
utterance_pho = text.remove_punctuation(utterance_pho)
freq_phon[hp.languages[item['language']]].update(utterance_pho)
Logger.progress((i + 1) / len(data.items), prefix='Computing stats:')
```
## Item from data
```
item = data.items[0]
audio_path = item['audio']
full_audio_path = os.path.join(dataset_root, audio_path)
waveform = audio.load(full_audio_path)
print(item['text'])
print(text.to_text(item['text'], False))
print(text.to_text(item['phonemes'], True))
print(audio.duration(waveform))
melspec = audio.mel_spectrogram(waveform)
spec = audio.spectrogram(waveform)
```
# Data analysis
```
sns.set(rc={'figure.figsize':(16,4)})
sns.set_style("white")
df = pd.DataFrame({'Words' :pd.Series(num_words, dtype='int'),
'Length' :pd.Series(lengths, dtype='int'),
'Duration' :pd.Series(durations, dtype='float'),
'LengthPhon' :pd.Series(lengths_phon, dtype='int'),
'Language' :pd.Series(languages, dtype='category')},
columns=['Words', 'Length', 'Duration', 'LengthPhon', 'Language'])
print(len(df))
df = df[df['Length'] < 190]
print(len(df))
df = df[df['Duration'] < 10.1]
print(len(df))
df = df[df['Duration'] > 0.5]
print(len(df))
df = df[df['Length'] > 2]
print(len(df))
total = pd.DataFrame()
for name, group in df.groupby('Language'):
#group_mean = df.groupby("Length").mean()
#group_mean = group_mean.loc[df['Length']].reset_index()["Duration"]
lr = linear_model.LinearRegression().fit(group['Length'].values.reshape(-1,1), group['Duration'].values.reshape(-1,1))
group_mean = lr.predict(np.array(group['Length']).reshape(-1,1)).squeeze(-1)
group_std = group.groupby("Length").std()
group_std = group_std.loc[group['Length']]["Duration"]
group_std.index = group.index
m = group[(abs(group['Duration'] - group_mean) < np.log10(group['Length'])+1)] # & (abs(group['Duration'] - group_mean) - 3 * group_std < 0)
total = pd.concat([m, total])
df = total
print(len(df))
# out_file = "idxes_clean.txt"
# with open(os.path.join(dataset_root, out_file), mode='w') as f:
# for i in sorted(df.index):
# print(f'{i}'.zfill(6), file=f)
# join -t '|' idxes_clean.txt a.txt > b.txt
```
### Duration distribution
```
for name, group in df.groupby('Language'):
print(f'{name}:\t{sum(group["Duration"])/3600}')
print(f'Total:\t{sum(df["Duration"])/3600}')
for name, group in df.groupby('Language'):
print(f'Min duration: {min(group["Duration"])}')
print(f'Max duration: {max(group["Duration"])}')
ax = sns.distplot(group['Duration'], hist=True, rug=False, fit=stats.norm, color="c", kde_kws={"color": "b", "lw": 3}, fit_kws={"color": "r", "lw": 3})
ax.set(xlabel='Duration (s)', title=name);
plt.show()
```
### Length distribution
```
for name, group in df.groupby('Language'):
print(f'Min length: {min(group["Length"])}')
print(f'Max length: {max(group["Length"])}')
ax = sns.distplot(group['Length'], kde=True, rug=False, fit=stats.norm, color="c", kde_kws={"color": "b", "lw": 3}, fit_kws={"color": "r", "lw": 3})
ax.set(xlabel='Length', title=name);
plt.show()
```
### Word count distribution
```
ax = sns.distplot(df['Words'], kde=True, rug=False, fit=stats.norm, color="c", kde_kws={"color": "b", "lw": 3}, fit_kws={"color": "r", "lw": 3})
ax.set(xlabel='Word count');
```
### Phonemized length distribution
```
ax = sns.distplot(df['LengthPhon'], kde=True, rug=False, fit=stats.norm, color="c", kde_kws={"color": "b", "lw": 3}, fit_kws={"color": "r", "lw": 3})
ax.set(xlabel='Phonemized length');
```
### Duration vs Length
```
for name, group in df.groupby('Language'):
ax = sns.jointplot(group['Length'], group['Duration'], kind="hex", space=0, color="b")
ax.fig.set_figwidth(7)
ax.ax_joint.set(xlabel='Length', ylabel='Duration', title=name);
sns.set_style("whitegrid")
ax = sns.relplot(x="Length", y="Duration", kind="line", ci="sd", linewidth=3, data=df)
ax.fig.set_figwidth(15)
ax.fig.set_figheight(4)
ax.set(yticks=np.arange(round(min(df['Duration'])), max(df['Duration']) + 1,2))
plt.ylim(min(df['Duration']) - 1, max(df['Duration']) + 1);
sns.set_style("white")
```
### Duration vs Phonemized length
```
ax = sns.jointplot(df['LengthPhon'], df['Duration'], kind="hex", space=0, color="b")
ax.fig.set_figwidth(7)
ax.ax_joint.set(xlabel='Word count', ylabel='Duration');
sns.set_style("whitegrid")
ax = sns.relplot(x="LengthPhon", y="Duration", kind="line", ci="sd", linewidth=3, data=df)
ax.fig.set_figwidth(15)
ax.fig.set_figheight(4)
ax.set(yticks=np.arange(round(min(df['Duration'])), max(df['Duration']) + 1, 2))
plt.ylim(min(durations) - 1, max(df['Duration']) + 1);
sns.set_style("white")
```
### Phonemes distribution
```
symbols_phon = hp.phonemes.replace(' ', '')
symbols_phon
for k, v in freq_phon.items():
sk = sorted(v.keys())
g = sns.barplot(x=list(sk), y=[v[x] for x in sk]).set_title(k)
plt.show()
total = Counter()
for k, v in freq_phon.items():
total.update(v)
sns.barplot(x=list(symbols_phon), y=[total[x] for x in symbols_phon]).set_title("Total")
plt.show()
''.join(list(total.keys()))
```
# Composite Executors
Executors that execute more than one flow.
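The general idea, independent of `affe`'s actual interface, can be sketched as follows. This is a hypothetical illustration of the composite pattern, not the real `CompositeExecutor` API:

```python
class SimpleCompositeExecutor:
    """Hypothetical illustration: run several flow-like objects in sequence.

    Anything with a .run() method qualifies; affe's real CompositeExecutor
    may differ in interface and features (parallelism, timeouts, logging).
    """

    def __init__(self, flows):
        self.flows = list(flows)

    def run(self):
        # Collect each flow's result; one list entry per flow
        return [flow.run() for flow in self.flows]


class EchoFlow:
    """Minimal stand-in for a Flow: running it just returns its message."""

    def __init__(self, message):
        self.message = message

    def run(self):
        return self.message


executor = SimpleCompositeExecutor([EchoFlow("a"), EchoFlow("b")])
print(executor.run())  # ['a', 'b']
```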
# Preliminaries
```
# Black Codeformatter
%load_ext lab_black
```
## Imports
```
import numpy as np
import pandas as pd
import os
from affe.execs import CompositeExecutor
```
# Implementation
This is where functions and classes are implemented.
## Generate demo Flows
```
import time
import affe
from affe.io import (
get_root_directory,
get_flow_directory,
insert_subdirectory,
abspath,
check_existence_of_directory,
dump_object,
)
from affe.flow import Flow
root_dir = get_root_directory()
exp_dir = get_flow_directory(keyword="flow")
root_dir, exp_dir
fs = insert_subdirectory(root_dir, parent="out", child=exp_dir,)
fs
def get_dummy_fs():
root_dir = get_root_directory()
flow_dir = get_flow_directory(keyword="flow")
dummy_fs = insert_subdirectory(root_dir, parent="out", child=flow_dir,)
return dummy_fs
def get_dummy_config(message="hi", content=dict(a=3, b=4)):
dummy_fs = get_dummy_fs()
return dict(io=dict(fs=dummy_fs), message=message, content=content)
def dummy_imports():
import time
import affe
from affe.io import (
get_root_directory,
get_flow_directory,
insert_subdirectory,
abspath,
check_existence_of_directory,
dump_object,
)
print("Imports successful")
return
def dummy_flow(config):
print("Hello world")
fs = config.get("io").get("fs")
content = config.get("content")
message = config.get("message")
results_directory_key = "out.flow.results"
check_existence_of_directory(fs, results_directory_key)
fn_results = abspath(fs, results_directory_key, filename="{}.json".format(message))
results = content
dump_object(results, fn_results)
# Some extra actions
sleep_a_few_s = 2
time.sleep(sleep_a_few_s)
print("{} secs passed".format(sleep_a_few_s))
print(message)
return True
def get_dummy_flow(message="hi", content=dict(a=1, b=2), timeout_s=20):
# config
dummy_config = get_dummy_config(message=message, content=content)
dummy_fs = dummy_config.get("io").get("fs")
# flow-object
logs_directory_key = "out.flow.logs"
check_existence_of_directory(dummy_fs, logs_directory_key)
log_filepath = abspath(dummy_fs, logs_directory_key, "logfile")
f = Flow(
config=dummy_config,
imports=dummy_imports,
flow=dummy_flow,
timeout_s=timeout_s,
log_filepath=log_filepath,
)
return f
flows = [get_dummy_flow(message="hi" * (i + 1)) for i in range(3)]
for f in flows:
f.run()
```
# Sandbox
This is where functions and classes are tested.
## Flow
Just making some easy flows to test my stuff.
# TF object detection API in Azure Machine Learning
This notebook demonstrates how to train an object detection model using the TensorFlow Object Detection API in the Azure Machine Learning service.
```
%load_ext autoreload
%autoreload 2
import wget
import os
from azureml.core import Workspace, Experiment, VERSION
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.environment import Environment
from azureml.train.estimator import Estimator
from azureml.core.compute_target import ComputeTargetException
from azureml.widgets import RunDetails
SUBSCRIPTION_ID = ""
RESOURCE_GROUP = ""
WORKSPACE_NAME = ""
EXP_NAME = 'running-pets'
CLUSTER_NAME = "gpu-cluster"
OBJ_DETECT_URL = "http://storage.googleapis.com/download.tensorflow.org/models/object_detection"
COCO_TAR_BALL = "faster_rcnn_resnet101_coco_11_06_2017.tar.gz"
TRAIN_DIR = "train"
DATA_DIR = "data"
DATASET_URL = "https://amlgitsamples.blob.core.windows.net/objectdetection"
DATASET_TAR_BALL = "petdataset.tar.gz"
os.makedirs(DATA_DIR, exist_ok=True)
print("SDK version:", VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
ws = Workspace(subscription_id = SUBSCRIPTION_ID,
               resource_group = RESOURCE_GROUP,
workspace_name = WORKSPACE_NAME
)
exp = Experiment(workspace=ws, name=EXP_NAME)
```
### Download COCO-pretrained Model for Transfer Learning
```
wget.download(os.path.join(OBJ_DETECT_URL,COCO_TAR_BALL),
out=os.getcwd()
)
!tar -xvf $COCO_TAR_BALL -C data --strip 1
```
### Upload dataset and model to blob storage
The Oxford-IIIT Pet dataset is available in raw form [here](https://www.robots.ox.ac.uk/~vgg/data/pets/). For convenience, it is provided with this example in TFRecord format.
```
wget.download(os.path.join(DATASET_URL,DATASET_TAR_BALL),
out=os.getcwd()
)
!tar -xvf $DATASET_TAR_BALL -C data --strip 1
datastore = ws.get_default_datastore()
ds_reference = datastore.upload('data',
target_path='pet_dataset',
overwrite=True,
show_progress=True)
```
### Create or attach Azure ML compute to be used for training
```
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
found = False
cts = ws.compute_targets
if CLUSTER_NAME in cts and cts[CLUSTER_NAME].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[CLUSTER_NAME]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_NC6",max_nodes = 4)
    # Create the cluster.
compute_target = ComputeTarget.create(ws, CLUSTER_NAME, provisioning_config)
print('Checking cluster status...')
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
```
The docker file contains the instructions to install the TF Models. It's provided with the example, if you'd like to build your own.
For more information on how to perform training with custom images on Azure Machine Learning, see
https://github.com/Azure/AzureML-Containers/
```
conda_env = Environment("running_pet_env")
conda_env.python.user_managed_dependencies = True
conda_env.python.interpreter_path = '/opt/miniconda/bin/python'
conda_env.docker.base_image = 'datashinobi/tf_models:0.1'
conda_env.docker.enabled = True
conda_env.docker.gpu_support = True
```
## Create estimator
We use Estimator base class here instead of [Tensorflow Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) since we install Tensorflow in the custom base image.
```
script_params = {
'--dataset-path':ds_reference.as_mount(),
'--epochs':100
}
est = Estimator(source_directory=TRAIN_DIR,
script_params=script_params,
compute_target=compute_target,
entry_script='train.py',
environment_definition=conda_env
)
```
## Submit Experiment
```
run = exp.submit(est)
RunDetails(run).show()
```
## Monitoring with Tensorboard
**This step requires installing TensorFlow locally.** For more information on TensorBoard integration in Azure Machine Learning, see
https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-monitor-tensorboard
```
import tensorflow
from azureml.tensorboard import Tensorboard
tb = Tensorboard([run])
tb.start()
```
# StreamingPhish
------
**Author**: Wes Connell <a href="https://twitter.com/wesleyraptor">@wesleyraptor</a>
This notebook is a subset of the streamingphish command-line tool and is focused exclusively on describing the process of going from raw data to a trained predictive model. If you've never trained a predictive model before, hopefully you find this notebook to be useful.
We'll walk through each step in the machine learning lifecycle for developing a predictive model that examines a fully-qualified domain name (i.e. 'help.github.com') and predicts it as either phishing or not phishing. I've loosely defined these steps as follows:
1. Load training data.
2. Define features to extract from raw data.
3. Compute the features.
4. Train the classifier.
5. Explore classifier performance metrics.
6. Test the classifier against new data.
## 1. Load Training Data
The data necessary for training (~5K phishing domains, and ~5K non-phishing domains) has already been curated. Calling this function loads the domain names from disk and returns them in a list. The function also returns the labels for the data, which is mandatory for supervised learning (0 = not phishing, 1 = phishing).
Note that the ~10K total training samples are a subset of what the streamingphish CLI tool uses. This notebook is for demonstration purposes and I wanted to keep the feature extraction time to a few seconds (vs a minute or longer).
```
import os
import random
def load_training_data():
"""
Load the phishing domains and benign domains from disk into python lists
NOTE: I'm using a smaller set of samples than from the CLI tool so the feature extraction is quicker.
@return training_data: dictionary where keys are domain names and values
are labels (0 = benign, 1 = phishing).
"""
training_data = {}
benign_path = "/opt/streamingphish/training_data/benign/"
for root, dirs, files in os.walk(benign_path):
files = [f for f in files if not f[0] == "."]
for f in files:
with open(os.path.join(root, f)) as infile:
for item in infile.readlines():
# Safeguard to prevent adding duplicate data to training set.
if item.strip('\n') not in training_data:
training_data[item.strip('\n')] = 0
phishing_path = "/opt/streamingphish/training_data/malicious/"
for root, dirs, files in os.walk(phishing_path):
files = [f for f in files if not f[0] == "."]
for f in files:
with open(os.path.join(root, f)) as infile:
for item in infile.readlines():
# Safeguard to prevent adding duplicate data to training set.
if item not in training_data:
training_data[item.strip('\n')] = 1
print("[+] Completed.")
print("\t - Not phishing domains: {}".format(sum(x == 0 for x in training_data.values())))
print("\t - Phishing domains: {}".format(sum(x == 1 for x in training_data.values())))
return training_data
training_data = load_training_data()
```
## 2. Define and Compute Features From Data
Next step is to identify characteristics/features in the data that we think will be effective in distinguishing between the two classes (phishing and not phishing). As humans, we tend to prefer practically significant features (i.e. is 'paypal' in the subdomain?), but it's important to also consider statistically significant features that may not be obvious (i.e. measuring the standard deviation for number of subdomains across the entire population).
The features identified in this research are spatial features, meaning each domain name is evaluated independently. The benefit is that the feature extraction is pretty simple (no need to focus on time intervals). Other implementations of machine learning for enterprise information security tend to be far more complex (multiple data sources, temporal features, sophisticated algorithms, etc).
Features can either be categorical or continuous - this prototype uses a mix of both. Generally speaking, continuous features can be measured (number of dashes in a FQDN), whereas categorical features are more of a boolean expression (is the top-level domain 'co.uk'? is the top-level domain 'bid'? is the top-level domain 'com'?). The features from this prototype are as follows:
- [Categorical] Top-level domain (TLD).
- [Categorical] Targeted phishing brand presence in subdomain.
- [Categorical] Targeted phishing brand presence in domain.
- [Categorical] 1:1 keyword match of common phishing words.
- [Continuous] Domain entropy (randomness).
- [Categorical] Levenshtein distance of 1 to common phishing words (word similarity).
- [Continuous] Number of periods.
- [Continuous] Number of dashes.
We're merely defining the features we want to extract in the code snippet below (we'll actually invoke this method a few steps down the road).
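Before the full implementation below, here is a minimal, self-contained sketch of how the continuous features listed above can be computed for a single FQDN. This is my own illustration, not part of the streamingphish code; the helper name and the crude leading-label domain split are assumptions (the real class uses `tldextract`):

```python
import math
from collections import Counter

def sketch_features(fqdn):
    """Illustration only: three of the features described above."""
    domain = fqdn.split(".")[0]  # crude stand-in for tldextract's domain part
    counts, n = Counter(domain), float(len(domain))
    # Shannon entropy of the character distribution in the domain
    entropy = -sum(c / n * math.log(c / n, 2) for c in counts.values())
    return {
        "entropy": entropy,
        "num_dashes": fqdn.count("-"),
        "num_periods": fqdn.count("."),
    }

sketch_features("paypal-verify.secure-login.tk")
```

A randomized string scores higher entropy than a dictionary word, which is why entropy is useful for flagging algorithmically generated names.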
```
import os
import math
import re
from collections import Counter, OrderedDict
from Levenshtein import distance
import tldextract
import pandas as pd
import numpy as np
FEATURE_PATHS = {
'targeted_brands_dir': '/opt/streamingphish/training_data/targeted_brands/',
'keywords_dir': '/opt/streamingphish/training_data/keywords/',
'fqdn_keywords_dir': '/opt/streamingphish/training_data/fqdn_keywords/',
'similarity_words_dir': '/opt/streamingphish/training_data/similarity_words/',
'tld_dir': '/opt/streamingphish/training_data/tlds/'
}
class PhishFeatures:
"""
Library of functions that extract features from FQDNs. Each of those functions returns
a dictionary with feature names and their corresponding values, i.e.:
{
'num_dashes': 0,
'paypal_kw_present': 1,
'alexa_25k_domain': 0,
'entropy': 0
}
"""
def __init__(self):
"""
Loads keywords, phishing words, and targeted brands used by other functions in this class.
Args:
data_config (dictionary): Contains paths to files on disk needed for training.
"""
self._brands = self._load_from_directory(FEATURE_PATHS['targeted_brands_dir'])
self._keywords = self._load_from_directory(FEATURE_PATHS['keywords_dir'])
self._fqdn_keywords = self._load_from_directory(FEATURE_PATHS['fqdn_keywords_dir'])
self._similarity_words = self._load_from_directory(FEATURE_PATHS['similarity_words_dir'])
self._tlds = self._load_from_directory(FEATURE_PATHS['tld_dir'])
@staticmethod
def _remove_common_hosts(fqdn):
"""
Takes a FQDN, removes common hosts prepended to it in the subdomain, and returns it.
Args:
fqdn (string): FQDN from certstream.
Returns:
fqdn (string): FQDN with common benign hosts removed (these hosts have no bearing
on malicious/benign determination).
"""
try:
first_host = fqdn.split(".")[0]
except IndexError:
# In the event the FQDN doesn't have any periods?
# This would only happen in manual mode.
return fqdn
if first_host == "*":
fqdn = fqdn[2:]
elif first_host == "www":
fqdn = fqdn[4:]
elif first_host == "mail":
fqdn = fqdn[5:]
elif first_host == "cpanel":
fqdn = fqdn[7:]
elif first_host == "webmail":
fqdn = fqdn[8:]
elif first_host == "webdisk":
fqdn = fqdn[8:]
elif first_host == "autodiscover":
fqdn = fqdn[13:]
return fqdn
@staticmethod
def _fqdn_parts(fqdn):
"""
Break apart domain parts and return a dictionary representing the individual attributes
like subdomain, domain, and tld.
Args:
fqdn (string): FQDN being analyzed.
Returns:
result (dictionary): Each part of the fqdn, i.e. subdomain, domain, domain + tld
"""
parts = tldextract.extract(fqdn)
result = {}
result['subdomain'] = parts.subdomain
result['domain'] = parts.domain
result['tld'] = parts.suffix
return result
@staticmethod
def _load_from_directory(path):
"""
Read all text files from a directory on disk, creates list, and returns.
Args:
path (string): Path to directory on disk, i.e. '/opt/streamingphish/keywords/'
Returns:
values (list): Values from all text files in the supplied directory.
"""
values = []
# Load brand names from all the text files in the provided folder.
for root, _, files in os.walk(path):
files = [f for f in files if not f[0] == "."]
for f in files:
with open(os.path.join(root, f)) as infile:
for item in infile.readlines():
values.append(item.strip('\n'))
return values
def compute_features(self, fqdns, values_only=True):
"""
Calls all the methods in this class that begin with '_fe_'. Not sure how pythonic
this is, but I wanted dynamic functions so those can be written without having
to manually define them here. Shooting for how python's unittest module works,
there's a chance this is a python crime.
Args:
fqdns (list): fqdns to compute features for.
values_only (boolean, optional): Instead computes a np array w/ values only
and returns that instead of a list of dictionaries (reduces perf overhead).
Returns:
result (dict): 'values' will always be returned - list of feature values of
each FQDN being analyzed. Optional key included is 'names', which is the
feature vector and will be returned if values_only=True.
"""
result = {}
# Raw features are a list of dictionaries, where keys = feature names and
# values = feature values.
features = []
for fqdn in fqdns:
sample = self._fqdn_parts(fqdn=fqdn)
sample['fqdn'] = self._remove_common_hosts(fqdn=fqdn)
sample['fqdn_words'] = re.split(r'\W+', fqdn)
analysis = OrderedDict()
for item in dir(self):
if item.startswith('_fe_'):
method = getattr(self, item)
result = method(sample)
analysis = {**analysis, **result}
# Must sort dictionary by key before adding.
analysis = OrderedDict(sorted(analysis.items()))
features.append(analysis)
# Split out keys and values from list of dictionaries. Keys = feature names, and
# values = feature values.
result = {}
result['values'] = []
for item in features:
result['values'].append(np.fromiter(item.values(), dtype=float))
if not values_only:
# Take the dictionary keys from the first item - this is the feature vector.
result['names'] = list(features[0].keys())
return result
def _fe_extract_tld(self, sample):
"""
Check whether the TLD is in a list of ~30 TLDs indicative of phishing / not phishing.
Originally this was a categorical feature expanded via get_dummies / one-hot encoding,
but that added too many unnecessary features to the feature vector and carried a
significant performance cost.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
for item in self._tlds:
result["tld_{}".format(item)] = 1 if item == sample['tld'] else 0
return result
def _fe_brand_presence(self, sample):
"""
Checks for brands targeted by phishing in subdomain (likely phishing) and in domain
+ TLD (not phishing).
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
for item in self._brands:
result["{}_brand_subdomain".format(item)] = 1 if item in sample['subdomain'] else 0
result["{}_brand_domain".format(item)] = 1 if item in sample['domain'] else 0
return result
def _fe_keyword_match(self, sample):
"""
Look for presence of keywords anywhere in the FQDN i.e. 'account' would match on
'dswaccounting.tk'.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
for item in self._keywords:
result[item + "_kw"] = 1 if item in sample['fqdn'] else 0
return result
def _fe_keyword_match_fqdn_words(self, sample):
"""
Compare FQDN words (previous regex on special characters) against a list of common
phishing keywords, look for exact match on those words. Probably more decisive
in identifying phishing domains.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
for item in self._fqdn_keywords:
result[item + "_kw_fqdn_words"] = 1 if item in sample['fqdn_words'] else 0
return result
@staticmethod
def _fe_compute_domain_entropy(sample):
"""
Takes domain name from FQDN and computes entropy (randomness, repeated characters, etc).
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
# Compute entropy of domain.
result = OrderedDict()
p, lns = Counter(sample['domain']), float(len(sample['domain']))
entropy = -sum(count / lns * math.log(count / lns, 2) for count in list(p.values()))
result['entropy'] = entropy
return result
def _fe_check_phishing_similarity_words(self, sample):
"""
Takes a list of words from the FQDN (split by special characters) and checks them
for similarity against words commonly disguised as phishing words. This method only
searches for a distance of 1.
i.e. 'pavpal' = 1 for 'paypal', 'verifycation' = 1 for 'verification',
'app1eid' = 1 for 'appleid'.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
for key in self._similarity_words:
result[key + "_lev_1"] = 0
for word in sample['fqdn_words']:
if distance(word, key) == 1:
result[key + "_lev_1"] = 1
return result
@staticmethod
def _fe_number_of_dashes(sample):
"""
Compute the number of dashes - several could be a sign of URL padding, etc.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
result['num_dashes'] = 0 if "xn--" in sample['fqdn'] else sample['fqdn'].count("-")
return result
@staticmethod
def _fe_number_of_periods(sample):
"""
Compute number of periods - several subdomains could be indicative of a phishing domain.
Args:
sample (dictionary): Info about the sample being analyzed i.e. subdomain, tld, fqdn
Returns:
result (dictionary): Keys are feature names, values are feature scores.
"""
result = OrderedDict()
result['num_periods'] = sample['fqdn'].count(".")
return result
```
## 3. Compute the Features
Let's create an instance of the `PhishFeatures` class and invoke the `compute_features()` method. This method returns a list of numbers representing each domain in our training set. The position of each number is very important because it aligns to a single feature from the feature vector, for example:
|Sample | TLD |
|----------------|-------|
|espn.com | com |
|torproject.org | org |
|notphishing.tk | tk |

After one-hot encoding, the single TLD column expands into one indicator column per observed TLD:

|Sample          |TLD_com   |TLD_org     |TLD_tk     |
|----------------|----------|------------|-----------|
|espn.com |1.0 |0.0 |0.0 |
|torproject.org |0.0 |1.0 |0.0 |
|notphishing.tk |0.0 |0.0 |1.0 |
We also save the feature vector to a variable named `feature_vector` because we'll use it shortly to visually depict the names of the features that have the highest coefficients (i.e. significantly impact the prediction score returned by the classifier).
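As an aside (not part of the original notebook), the one-hot expansion shown in the tables above can be reproduced with pandas on a toy frame; note that `PhishFeatures` builds its indicator columns manually rather than with `get_dummies`:

```python
import pandas as pd

# Toy frame mirroring the first table above
samples = pd.DataFrame({
    "sample": ["espn.com", "torproject.org", "notphishing.tk"],
    "TLD": ["com", "org", "tk"],
})

# Expand the categorical TLD column into TLD_com / TLD_org / TLD_tk indicators
encoded = pd.get_dummies(samples, columns=["TLD"], prefix="TLD")
print(encoded)
```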
```
# Compute features.
print("[*] Computing features...")
f = PhishFeatures()
training_features = f.compute_features(training_data.keys(), values_only=False)
feature_vector = training_features['names']
print("[+] Features computed for the {} samples in the training set.".format(len(training_features['values'])))
```
## 4. Train the Classifier
So far we've transformed the raw training data (['espn.com', 'api.twitter.com', 'apppleid-suspended.activity.apple.com.490548678792.tk']) into features that describe the data and also created the feature vector. Now we'll run through the remaining routines to train a classifier:
1. Assign the labels (0 = benign, 1 = phishing) from the training samples to an array. We got the labels when we read in the training data from text files in step 1.
2. Split the data into a training set and a test set (helps evaluate model performance like overfitting, accuracy, etc).
3. Train a classifier using the Logistic Regression algorithm.
**NOTE**: If this were anything beyond a simple prototype, I would evaluate multiple algorithms, multiple parameters for said algorithms, features for down-selection, and multiple folds for cross validation. Feel free to explore these concepts on your own - they are currently out of scope.
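For readers who want to explore those concepts on their own, a minimal sketch of evaluating multiple parameters with cross validation might look like the following. This is illustrative only; the synthetic data and the parameter grid are my assumptions, not the notebook's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the feature matrix and labels built above
X_demo, y_demo = make_classification(n_samples=200, random_state=5)

# Grid search over the regularization strength C with 5-fold cross validation
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X_demo, y_demo)
print(grid.best_params_, grid.best_score_)
```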
```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
# Assign the labels (0s and 1s) to a numpy array.
labels = np.fromiter(training_data.values(), dtype=float)
print("[+] Assigned the labels to a numpy array.")
# Split the data into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(training_features['values'], labels, random_state=5)
print("[+] Split the data into a training set and test set.")
# Insert silver bullet / black magic / david blaine / unicorn one-liner here :)
classifier = LogisticRegression(C=10).fit(X_train, y_train)
print("[+] Completed training the classifier: {}".format(classifier))
```
## 5. Explore Classifier Performance Metrics
This section could be a book by itself. To keep things brief, we'll touch on just a few metrics we can use to evaluate performance:
<h4>Accuracy Against Training and Test Sets:</h4> Since we have labeled data, we can run the features from each sample from our training set through the classifier and see if it predicts the right label. That's what the scores here represent. We can identify things like overfitting and underfitting (which can be attributed to any number of things, as we're in control of several independent variables like the algorithm, parameters, feature vector, training data, etc).
```
# See how well it performs against training and test sets.
print("Accuracy on training set: {:.3f}".format(classifier.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(classifier.score(X_test, y_test)))
```
<h4>Logistic Regression Coefficient Weights:</h4> This is the 'secret sauce' of supervised machine learning. Based on the training data, their respective labels, and the feature vector we generated, the algorithm determined the most optimal weights for each feature. The chart below depicts the features that were deemed to be most significant by the algorithm. The presence or absence of these features in domains that we evaluate **significantly** impacts the score returned by the trained classifier.
```
import mglearn
%matplotlib inline
import matplotlib.pyplot as plt
print("Number of features: {}".format(len(feature_vector)))
# Visualize the most important coefficients from the LogisticRegression model.
coef = classifier.coef_
mglearn.tools.visualize_coefficients(coef, feature_vector, n_top_features=10)
```
<h4>Precision / Recall:</h4> Precision shows how often the classifier is right when it cries wolf. Recall shows how many fish (no pun intended) the classifier caught out of all the fish in the pond. By default, the classifier assumes a malicious threshold of 0.5 on a scale of 0 to 1. This chart (and the subsequent TPR vs FPR chart) shows how these metrics change when increasing or decreasing the malicious threshold.<br>
```
from sklearn.metrics import precision_recall_curve
precision, recall, thresholds = precision_recall_curve(y_test, classifier.predict_proba(X_test)[:, 1])
close_zero = np.argmin(np.abs(thresholds - 0.5))
plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, label="threshold 0.5", fillstyle="none",
c='k', mew=2)
plt.plot(precision, recall, label="precision recall curve")
plt.xlabel("Precision")
plt.ylabel("Recall")
plt.legend(loc="best")
print("Precision: {:.3f}\nRecall: {:.3f}\nThreshold: {:.3f}".format(precision[close_zero], recall[close_zero], thresholds[close_zero]))
```
<h4>True Positive Rate (TPR) / False Positive Rate (FPR):</h4> Basically a summary of misclassifications from the classifier against the test set.<br>
```
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_test, classifier.predict_proba(X_test)[:, 1])
plt.plot(fpr, tpr, label="ROC Curve")
plt.xlabel("FPR")
plt.ylabel("TPR (recall)")
close_zero = np.argmin(np.abs(thresholds - 0.5))
plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10, label="threshold 0.5", fillstyle="none", c="k",
mew=2)
plt.legend(loc=4)
print("TPR: {:.3f}\nFPR: {:.3f}\nThreshold: {:.3f}".format(tpr[close_zero], fpr[close_zero],
thresholds[close_zero]))
```
<h4>Classification Report:</h4> Shows many of the same metrics plus the f1-score (a combination of precision and recall) as well as support (which reveals possible class imbalance in the training set).<br>
```
from sklearn.metrics import classification_report
predictions = classifier.predict_proba(X_test)[:, 1] > 0.5
print(classification_report(y_test, predictions, target_names=['Not Phishing', 'Phishing']))
```
<h4>Confusion Matrix:</h4> Also depicts misclassifications against the test dataset.
```
# Confusion matrix.
from sklearn.metrics import confusion_matrix
confusion = confusion_matrix(y_test, predictions)
print("Confusion matrix:\n{}".format(confusion))
# A prettier way to see the same data.
scores_image = mglearn.tools.heatmap(
confusion_matrix(y_test, predictions), xlabel="Predicted Label", ylabel="True Label",
xticklabels=["Not Phishing", "Phishing"],
yticklabels=["Not Phishing", "Phishing"],
cmap=plt.cm.gray_r, fmt="%d")
plt.title("Confusion Matrix")
plt.gca().invert_yaxis()
```
## 6. Test Classifier Against New Data
The metrics look great. The code snippet below shows how you can transform a list of any FQDNs you'd like, extract features, reindex the features against the feature vector from training, and make a prediction.
```
phish = PhishFeatures() # We need the compute_features() method to evaluate new data.
LABEL_MAP = {0: "Not Phishing", 1: "Phishing"}
example_domains = [
"paypal.com",
"apple.com",
"patternex.com",
"support-apple.xyz",
"paypall.com",
"pavpal-verify.com"
]
# Compute features, and also note we need to provide the feature vector from when we
# trained the model earlier in this notebook.
features = phish.compute_features(example_domains)
prediction = classifier.predict_proba(features['values'])[:, 1] > 0.5
prediction_scores = classifier.predict_proba(features['values'])[:, 1]
for domain, classification, score in zip(example_domains, prediction, prediction_scores):
print("[{}]\t{}\t{:.3f}".format(LABEL_MAP[classification], domain, score))
```
<div style="float: center; margin: 10px 10px 10px 10px"><img src="../images/fishwordgraph.png"></div>
<a href="https://colab.research.google.com/github/facebookresearch/habitat-sim/blob/master/examples/tutorials/colabs/ECCV_2020_Interactivity.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Habitat-sim Interactivity
This use-case driven tutorial covers Habitat-sim interactivity, including:
- Adding new objects to a scene
- Kinematic object manipulation
- Physics simulation API
- Sampling valid object locations
- Generating a NavMesh including STATIC objects
- Agent embodiment and continuous control
```
# @title Installation { display-mode: "form" }
# @markdown (double click to show code).
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/master/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
!wget -c http://dl.fbaipublicfiles.com/habitat/mp3d_example.zip && unzip -o mp3d_example.zip -d /content/habitat-sim/data/scene_datasets/mp3d/
# @title Path Setup and Imports { display-mode: "form" }
# @markdown (double click to show code).
%cd /content/habitat-sim
## [setup]
import math
import os
import random
import sys
import git
import magnum as mn
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
from PIL import Image
import habitat_sim
from habitat_sim.utils import common as ut
from habitat_sim.utils import viz_utils as vut
try:
import ipywidgets as widgets
from IPython.display import display as ipydisplay
# For using jupyter/ipywidget IO components
HAS_WIDGETS = True
except ImportError:
HAS_WIDGETS = False
if "google.colab" in sys.modules:
os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg"
repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
%cd $dir_path
data_path = os.path.join(dir_path, "data")
output_directory = "examples/tutorials/interactivity_output/" # @param {type:"string"}
output_path = os.path.join(dir_path, output_directory)
if not os.path.exists(output_path):
os.mkdir(output_path)
# define some globals the first time we run.
if "sim" not in globals():
global sim
sim = None
global obj_attr_mgr
obj_attr_mgr = None
global prim_attr_mgr
prim_attr_mgr = None
global stage_attr_mgr
stage_attr_mgr = None
# @title Define Configuration Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown This cell defines a number of utility functions used throughout the tutorial to make simulator reconstruction easy:
# @markdown - make_cfg
# @markdown - make_default_settings
# @markdown - make_simulator_from_settings
def make_cfg(settings):
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.gpu_device_id = 0
sim_cfg.scene_id = settings["scene"]
sim_cfg.enable_physics = settings["enable_physics"]
# Note: all sensors must have the same resolution
sensor_specs = []
if settings["color_sensor_1st_person"]:
color_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
color_sensor_1st_person_spec.uuid = "color_sensor_1st_person"
color_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
color_sensor_1st_person_spec.position = [0.0, settings["sensor_height"], 0.0]
color_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
color_sensor_1st_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_1st_person_spec)
if settings["depth_sensor_1st_person"]:
depth_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
depth_sensor_1st_person_spec.uuid = "depth_sensor_1st_person"
depth_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.DEPTH
depth_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
depth_sensor_1st_person_spec.position = [0.0, settings["sensor_height"], 0.0]
depth_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
depth_sensor_1st_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(depth_sensor_1st_person_spec)
if settings["semantic_sensor_1st_person"]:
semantic_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
semantic_sensor_1st_person_spec.uuid = "semantic_sensor_1st_person"
semantic_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.SEMANTIC
semantic_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
semantic_sensor_1st_person_spec.position = [
0.0,
settings["sensor_height"],
0.0,
]
semantic_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
semantic_sensor_1st_person_spec.sensor_subtype = (
habitat_sim.SensorSubType.PINHOLE
)
sensor_specs.append(semantic_sensor_1st_person_spec)
if settings["color_sensor_3rd_person"]:
color_sensor_3rd_person_spec = habitat_sim.CameraSensorSpec()
color_sensor_3rd_person_spec.uuid = "color_sensor_3rd_person"
color_sensor_3rd_person_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_3rd_person_spec.resolution = [
settings["height"],
settings["width"],
]
color_sensor_3rd_person_spec.position = [
0.0,
settings["sensor_height"] + 0.2,
0.2,
]
color_sensor_3rd_person_spec.orientation = [-math.pi / 4, 0, 0]
color_sensor_3rd_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_3rd_person_spec)
# Here you can specify the amount of displacement in a forward action and the turn angle
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = sensor_specs
return habitat_sim.Configuration(sim_cfg, [agent_cfg])
def make_default_settings():
settings = {
"width": 720, # Spatial resolution of the observations
"height": 544,
"scene": "./data/scene_datasets/mp3d/17DRP5sb8fy/17DRP5sb8fy.glb", # Scene path
"default_agent": 0,
"sensor_height": 1.5, # Height of sensors in meters
"sensor_pitch": -math.pi / 8.0, # sensor pitch (x rotation in rads)
"color_sensor_1st_person": True, # RGB sensor
"color_sensor_3rd_person": False, # RGB sensor 3rd person
"depth_sensor_1st_person": False, # Depth sensor
"semantic_sensor_1st_person": False, # Semantic sensor
"seed": 1,
"enable_physics": True, # enable dynamics simulation
}
return settings
def make_simulator_from_settings(sim_settings):
cfg = make_cfg(sim_settings)
# clean-up the current simulator instance if it exists
global sim
global obj_attr_mgr
global prim_attr_mgr
global stage_attr_mgr
if sim is not None:
sim.close()
# initialize the simulator
sim = habitat_sim.Simulator(cfg)
# Managers of various Attributes templates
obj_attr_mgr = sim.get_object_template_manager()
obj_attr_mgr.load_configs(str(os.path.join(data_path, "objects")))
prim_attr_mgr = sim.get_asset_template_manager()
stage_attr_mgr = sim.get_stage_template_manager()
# @title Define Simulation Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown - remove_all_objects
# @markdown - simulate
# @markdown - sample_object_state
def remove_all_objects(sim):
for obj_id in sim.get_existing_object_ids():
sim.remove_object(obj_id)
def simulate(sim, dt=1.0, get_frames=True):
# simulate dt seconds at 60Hz to the nearest fixed timestep
print("Simulating " + str(dt) + " world seconds.")
observations = []
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + dt:
sim.step_physics(1.0 / 60.0)
if get_frames:
observations.append(sim.get_sensor_observations())
return observations
# Set an object transform relative to the agent state
def set_object_state_from_agent(
sim,
ob_id,
offset=np.array([0, 2.0, -1.5]),
orientation=mn.Quaternion(((0, 0, 0), 1)),
):
agent_transform = sim.agents[0].scene_node.transformation_matrix()
ob_translation = agent_transform.transform_point(offset)
sim.set_translation(ob_translation, ob_id)
sim.set_rotation(orientation, ob_id)
# sample a random valid state for the object from the scene bounding box or navmesh
def sample_object_state(
sim, object_id, from_navmesh=True, maintain_object_up=True, max_tries=100, bb=None
):
# check that the object is not STATIC
if sim.get_object_motion_type(object_id) is habitat_sim.physics.MotionType.STATIC:
print("sample_object_state : Object is STATIC, aborting.")
return False
if from_navmesh:
if not sim.pathfinder.is_loaded:
print("sample_object_state : No pathfinder, aborting.")
return False
elif not bb:
print(
"sample_object_state : from_navmesh not specified and no bounding box provided, aborting."
)
return False
tries = 0
valid_placement = False
# Note: following assumes sim was not reconfigured without close
scene_collision_margin = stage_attr_mgr.get_template_by_ID(0).margin
while not valid_placement and tries < max_tries:
tries += 1
# initialize sample location to random point in scene bounding box
sample_location = np.array([0, 0, 0])
if from_navmesh:
# query random navigable point
sample_location = sim.pathfinder.get_random_navigable_point()
else:
sample_location = np.random.uniform(bb.min, bb.max)
# set the test state
sim.set_translation(sample_location, object_id)
if maintain_object_up:
# random rotation only on the Y axis
y_rotation = mn.Quaternion.rotation(
mn.Rad(random.random() * 2 * math.pi), mn.Vector3(0, 1.0, 0)
)
sim.set_rotation(y_rotation * sim.get_rotation(object_id), object_id)
else:
# unconstrained random rotation
sim.set_rotation(ut.random_quaternion(), object_id)
# raise object such that lowest bounding box corner is above the navmesh sample point.
if from_navmesh:
obj_node = sim.get_object_scene_node(object_id)
xform_bb = habitat_sim.geo.get_transformed_bb(
obj_node.cumulative_bb, obj_node.transformation
)
# also account for collision margin of the scene
y_translation = mn.Vector3(
0, xform_bb.size_y() / 2.0 + scene_collision_margin, 0
)
sim.set_translation(
y_translation + sim.get_translation(object_id), object_id
)
# test for penetration with the environment
if not sim.contact_test(object_id):
valid_placement = True
if not valid_placement:
return False
return True
# @title Define Visualization Utility Function { display-mode: "form" }
# @markdown (double click to show code)
# @markdown - display_sample
# Change to do something like this maybe: https://stackoverflow.com/a/41432704
def display_sample(
rgb_obs, semantic_obs=np.array([]), depth_obs=np.array([]), key_points=None
):
from habitat_sim.utils.common import d3_40_colors_rgb
rgb_img = Image.fromarray(rgb_obs, mode="RGBA")
arr = [rgb_img]
titles = ["rgb"]
if semantic_obs.size != 0:
semantic_img = Image.new("P", (semantic_obs.shape[1], semantic_obs.shape[0]))
semantic_img.putpalette(d3_40_colors_rgb.flatten())
semantic_img.putdata((semantic_obs.flatten() % 40).astype(np.uint8))
semantic_img = semantic_img.convert("RGBA")
arr.append(semantic_img)
titles.append("semantic")
if depth_obs.size != 0:
depth_img = Image.fromarray((depth_obs / 10 * 255).astype(np.uint8), mode="L")
arr.append(depth_img)
titles.append("depth")
plt.figure(figsize=(12, 8))
for i, data in enumerate(arr):
ax = plt.subplot(1, 3, i + 1)
ax.axis("off")
ax.set_title(titles[i])
# plot points on images
if key_points is not None:
for point in key_points:
plt.plot(point[0], point[1], marker="o", markersize=10, alpha=0.8)
plt.imshow(data)
plt.show(block=False)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--no-display", dest="display", action="store_false")
parser.add_argument("--no-make-video", dest="make_video", action="store_false")
parser.set_defaults(display=True, make_video=True)
args, _ = parser.parse_known_args()
show_video = args.display
display = args.display
make_video = args.make_video
else:
show_video = False
make_video = False
display = False
# @title Define Colab GUI Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# Event handler for dropdowns displaying file-based object handles
def on_file_obj_ddl_change(ddl_values):
global sel_file_obj_handle
sel_file_obj_handle = ddl_values["new"]
return sel_file_obj_handle
# Event handler for dropdowns displaying prim-based object handles
def on_prim_obj_ddl_change(ddl_values):
global sel_prim_obj_handle
sel_prim_obj_handle = ddl_values["new"]
return sel_prim_obj_handle
# Event handler for dropdowns displaying asset handles
def on_prim_ddl_change(ddl_values):
global sel_asset_handle
sel_asset_handle = ddl_values["new"]
return sel_asset_handle
# Build a dropdown list holding obj_handles and set its event handler
def set_handle_ddl_widget(obj_handles, handle_types, sel_handle, on_change):
sel_handle = obj_handles[0]
descStr = handle_types + " Template Handles:"
style = {"description_width": "300px"}
obj_ddl = widgets.Dropdown(
options=obj_handles,
value=sel_handle,
description=descStr,
style=style,
disabled=False,
layout={"width": "max-content"},
)
obj_ddl.observe(on_change, names="value")
return obj_ddl, sel_handle
def set_button_launcher(desc):
button = widgets.Button(
description=desc,
layout={"width": "max-content"},
)
return button
def make_sim_and_vid_button(prefix, dt=1.0):
if not HAS_WIDGETS:
return
def on_sim_click(b):
observations = simulate(sim, dt=dt)
vut.make_video(
observations, "color_sensor_1st_person", "color", output_path + prefix
)
sim_and_vid_btn = set_button_launcher("Simulate and Make Video")
sim_and_vid_btn.on_click(on_sim_click)
ipydisplay(sim_and_vid_btn)
def make_clear_all_objects_button():
if not HAS_WIDGETS:
return
def on_clear_click(b):
remove_all_objects(sim)
clear_objs_button = set_button_launcher("Clear all objects")
clear_objs_button.on_click(on_clear_click)
ipydisplay(clear_objs_button)
# Builds widget-based UI components
def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
# Holds the user's desired file-based object template handle
global sel_file_obj_handle
sel_file_obj_handle = ""
# Holds the user's desired primitive-based object template handle
global sel_prim_obj_handle
sel_prim_obj_handle = ""
# Holds the user's desired primitive asset template handle
global sel_asset_handle
sel_asset_handle = ""
# Construct DDLs and assign event handlers
# All file-based object template handles
file_obj_handles = obj_attr_mgr.get_file_template_handles()
prim_obj_handles = obj_attr_mgr.get_synth_template_handles()
prim_asset_handles = prim_attr_mgr.get_template_handles()
if not HAS_WIDGETS:
sel_file_obj_handle = file_obj_handles[0]
sel_prim_obj_handle = prim_obj_handles[0]
sel_asset_handle = prim_asset_handles[0]
return
file_obj_ddl, sel_file_obj_handle = set_handle_ddl_widget(
file_obj_handles,
"File-based Object",
sel_file_obj_handle,
on_file_obj_ddl_change,
)
# All primitive asset-based object template handles
prim_obj_ddl, sel_prim_obj_handle = set_handle_ddl_widget(
prim_obj_handles,
"Primitive-based Object",
sel_prim_obj_handle,
on_prim_obj_ddl_change,
)
# All primitive asset handles template handles
prim_asset_ddl, sel_asset_handle = set_handle_ddl_widget(
prim_asset_handles, "Primitive Asset", sel_asset_handle, on_prim_ddl_change
)
# Display DDLs
ipydisplay(file_obj_ddl)
ipydisplay(prim_obj_ddl)
ipydisplay(prim_asset_ddl)
# @title Initialize Simulator and Load Scene { display-mode: "form" }
# convenience functions defined in the Utility cells above manage global variables
sim_settings = make_default_settings()
# set globals: sim, obj_attr_mgr, prim_attr_mgr, stage_attr_mgr
make_simulator_from_settings(sim_settings)
```
# Interactivity in Habitat-sim
This tutorial covers how to configure and use the Habitat-sim object manipulation API to set up and run physical interaction simulations.
## Outline:
This section is divided into four use-case driven sub-sections:
1. Introduction to Interactivity
2. Physical Reasoning
3. Generating Scene Clutter on the NavMesh
4. Continuous Embodied Navigation
For more tutorial examples and details see the [Interactive Rigid Objects tutorial](https://aihabitat.org/docs/habitat-sim/rigid-object-tutorial.html) also available for Colab [here](https://github.com/facebookresearch/habitat-sim/blob/master/examples/tutorials/colabs/rigid_object_tutorial.ipynb).
## Introduction to Interactivity
#### Easily add an object and simulate!
```
# @title Select a Simulation Object Template: { display-mode: "form" }
# @markdown Use the dropdown menu below to select an object template for use in the following examples.
# @markdown File-based object templates are loaded from and named after an asset file (e.g. banana.glb), while Primitive-based object templates are generated programmatically (e.g. uv_sphere) with handles (name/key for reference) uniquely generated from a specific parameterization.
# @markdown See the Advanced Features tutorial for more details about asset configuration.
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# @title Add either a File-based or Primitive Asset-based object to the scene at a user-specified location.{ display-mode: "form" }
# @markdown Running this will add a physically-modelled object of the selected type to the scene at the location specified by the user, simulate forward for a few seconds, and save a movie of the results.
# @markdown Choose either the primitive or file-based template recently selected in the dropdown:
obj_template_handle = sel_file_obj_handle
asset_template_handle = sel_asset_handle
object_type = "File-based" # @param ["File-based","Primitive-based"]
if "File" in object_type:
# Handle File-based object handle
obj_template_handle = sel_file_obj_handle
elif "Primitive" in object_type:
# Handle Primitive-based object handle
obj_template_handle = sel_prim_obj_handle
else:
# Unknown - defaults to file-based
pass
# @markdown Configure the initial object location (local offset from the agent body node):
# default : offset=np.array([0,2.0,-1.5]), orientation=np.quaternion(1,0,0,0)
offset_x = 0.5 # @param {type:"slider", min:-2, max:2, step:0.1}
offset_y = 1.4 # @param {type:"slider", min:0, max:3.0, step:0.1}
offset_z = -1.5 # @param {type:"slider", min:-3, max:0, step:0.1}
offset = np.array([offset_x, offset_y, offset_z])
# @markdown Configure the initial object orientation via local Euler angle (degrees):
orientation_x = 0 # @param {type:"slider", min:-180, max:180, step:1}
orientation_y = 0 # @param {type:"slider", min:-180, max:180, step:1}
orientation_z = 0 # @param {type:"slider", min:-180, max:180, step:1}
# compose the rotations
rotation_x = mn.Quaternion.rotation(mn.Deg(orientation_x), mn.Vector3(1.0, 0, 0))
rotation_y = mn.Quaternion.rotation(mn.Deg(orientation_y), mn.Vector3(0, 1.0, 0))
rotation_z = mn.Quaternion.rotation(mn.Deg(orientation_z), mn.Vector3(0, 0, 1.0))
orientation = rotation_z * rotation_y * rotation_x
# Add object instantiated by desired template using template handle
obj_id_1 = sim.add_object_by_handle(obj_template_handle)
# @markdown Note: agent local coordinate system is Y up and -Z forward.
# Move object to be in front of the agent
set_object_state_from_agent(sim, obj_id_1, offset=offset, orientation=orientation)
# display a still frame of the scene after the object is added if RGB sensor is enabled
observations = sim.get_sensor_observations()
if display and sim_settings["color_sensor_1st_person"]:
display_sample(observations["color_sensor_1st_person"])
example_type = "adding objects test"
make_sim_and_vid_button(example_type)
make_clear_all_objects_button()
```
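The cell above composes three per-axis rotations into one orientation quaternion with `mn.Quaternion`. As a sanity check of the same math without habitat-sim, here is a minimal plain-numpy sketch (the helper names `quat_about` and `quat_mul` are illustrative, not part of any library):

```python
import numpy as np

def quat_about(axis, deg):
    # Unit quaternion (w, x, y, z) for a rotation of `deg` degrees about `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = np.deg2rad(deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_mul(a, b):
    # Hamilton product: applies rotation b first, then rotation a.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

# Compose Z * Y * X, mirroring the orientation composition in the cell above.
rx = quat_about([1, 0, 0], 30)
ry = quat_about([0, 1, 0], 45)
rz = quat_about([0, 0, 1], 60)
orientation = quat_mul(rz, quat_mul(ry, rx))
```

Note that quaternion multiplication is not commutative, which is why the composition order (Z, then Y, then X) matters.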
## Physical Reasoning
This section demonstrates simple setups for physical reasoning tasks in Habitat-sim with a fixed camera position collecting data:
- Scripted vs. Dynamic Motion
- Object Permanence
- Physical plausibility classification
- Trajectory Prediction
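All of the examples below follow the same data-collection pattern as the `simulate` utility defined earlier: step physics at a fixed 60 Hz rate until the requested number of world seconds has elapsed, recording one observation per step. A minimal framework-free sketch of that loop (`step_physics` here is an illustrative stand-in for `sim.step_physics`, and a real loop would append sensor frames rather than timestamps):

```python
def collect_observations(dt=2.0, hz=60.0, step_physics=lambda step: None):
    # Advance physics in fixed 1/hz steps until dt world seconds elapse.
    observations = []
    world_time = 0.0
    steps = round(dt * hz)  # count steps up front to avoid float-accumulation drift
    for _ in range(steps):
        step_physics(1.0 / hz)
        world_time += 1.0 / hz
        observations.append(world_time)  # stand-in for sim.get_sensor_observations()
    return observations

frames = collect_observations(dt=2.0)
```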
```
# @title Select object templates from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# @title Scripted vs. Dynamic Motion { display-mode: "form" }
# @markdown A quick script to generate video data for AI classification of dynamically dropping vs. kinematically moving objects.
remove_all_objects(sim)
# @markdown Set the scene as dynamic or kinematic:
scenario_is_kinematic = True # @param {type:"boolean"}
# add the selected object
obj_id_1 = sim.add_object_by_handle(sel_file_obj_handle)
# place the object
set_object_state_from_agent(
sim, obj_id_1, offset=np.array([0, 2.0, -1.0]), orientation=ut.random_quaternion()
)
if scenario_is_kinematic:
# use the velocity control struct to setup a constant rate kinematic motion
sim.set_object_motion_type(habitat_sim.physics.MotionType.KINEMATIC, obj_id_1)
vel_control = sim.get_object_velocity_control(obj_id_1)
vel_control.controlling_lin_vel = True
vel_control.linear_velocity = np.array([0, -1.0, 0])
# simulate and collect observations
example_type = "kinematic vs dynamic"
observations = simulate(sim, dt=2.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
remove_all_objects(sim)
# @title Object Permanence { display-mode: "form" }
# @markdown This example script demonstrates a possible object permanence task.
# @markdown Two objects are dropped behind an occluder. One is removed while occluded.
remove_all_objects(sim)
# @markdown 1. Add the two dynamic objects.
# add the selected objects
obj_id_1 = sim.add_object_by_handle(sel_file_obj_handle)
obj_id_2 = sim.add_object_by_handle(sel_file_obj_handle)
# place the objects
set_object_state_from_agent(
sim, obj_id_1, offset=np.array([0.5, 2.0, -1.0]), orientation=ut.random_quaternion()
)
set_object_state_from_agent(
sim,
obj_id_2,
offset=np.array([-0.5, 2.0, -1.0]),
orientation=ut.random_quaternion(),
)
# @markdown 2. Configure and add an occluder from a scaled cube primitive.
# Get a default cube primitive template
obj_attr_mgr = sim.get_object_template_manager()
cube_handle = obj_attr_mgr.get_template_handles("cube")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
# Modify the template's configured scale.
cube_template_cpy.scale = np.array([0.32, 0.075, 0.01])
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "occluder_cube")
# Instance and place the occluder object from the template.
occluder_id = sim.add_object_by_handle("occluder_cube")
set_object_state_from_agent(sim, occluder_id, offset=np.array([0.0, 1.4, -0.4]))
sim.set_object_motion_type(habitat_sim.physics.MotionType.KINEMATIC, occluder_id)
# fmt: off
# @markdown 3. Simulate at 60Hz, removing one object when its center of mass drops below that of the occluder.
# fmt: on
# Simulate and remove object when it passes the midpoint of the occluder
dt = 2.0
print("Simulating " + str(dt) + " world seconds.")
observations = []
# simulate at 60Hz to the nearest fixed timestep
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + dt:
sim.step_physics(1.0 / 60.0)
# remove the object once it passes the occluder center
if (
obj_id_2 in sim.get_existing_object_ids()
and sim.get_translation(obj_id_2)[1] <= sim.get_translation(occluder_id)[1]
):
sim.remove_object(obj_id_2)
observations.append(sim.get_sensor_observations())
example_type = "object permanence"
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
remove_all_objects(sim)
# @title Physical Plausibility Classification { display-mode: "form" }
# @markdown This example demonstrates a physical plausibility experiment. A sphere
# @markdown is dropped onto the back of a couch to roll onto the floor. Optionally,
# @markdown an invisible plane is introduced for the sphere to roll onto producing
# @markdown non-physical motion.
introduce_surface = True # @param{type:"boolean"}
remove_all_objects(sim)
# add a rolling object
obj_attr_mgr = sim.get_object_template_manager()
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
obj_id_1 = sim.add_object_by_handle(sphere_handle)
set_object_state_from_agent(sim, obj_id_1, offset=np.array([1.0, 1.6, -1.95]))
if introduce_surface:
# optionally add invisible surface
cube_handle = obj_attr_mgr.get_template_handles("cube")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
# Modify the template.
cube_template_cpy.scale = np.array([1.0, 0.04, 1.0])
surface_is_visible = False # @param{type:"boolean"}
cube_template_cpy.is_visible = surface_is_visible
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "invisible_surface")
# Instance and place the surface object from the template.
surface_id = sim.add_object_by_handle("invisible_surface")
set_object_state_from_agent(sim, surface_id, offset=np.array([0.4, 0.88, -1.6]))
sim.set_object_motion_type(habitat_sim.physics.MotionType.STATIC, surface_id)
example_type = "physical plausibility"
observations = simulate(sim, dt=3.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
remove_all_objects(sim)
# @title Trajectory Prediction { display-mode: "form" }
# @markdown This example demonstrates setup of a trajectory prediction task.
# @markdown Boxes are placed in a target zone and a sphere is given an initial
# @markdown velocity with the goal of knocking the boxes off the counter.
# @markdown ---
# @markdown Configure Parameters:
obj_attr_mgr = sim.get_object_template_manager()
remove_all_objects(sim)
seed = 2 # @param{type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
# setup agent state manually to face the bar
agent_state = sim.agents[0].state
agent_state.position = np.array([-1.97496, 0.072447, -2.0894])
agent_state.rotation = ut.quat_from_coeffs([0, -1, 0, 0])
sim.agents[0].set_state(agent_state)
# load the target objects
cheezit_handle = obj_attr_mgr.get_template_handles("cheezit")[0]
# create range from center and half-extent
target_zone = mn.Range3D.from_center(
mn.Vector3(-2.07496, 1.07245, -0.2894), mn.Vector3(0.5, 0.05, 0.1)
)
num_targets = 9 # @param{type:"integer"}
for _target in range(num_targets):
obj_id = sim.add_object_by_handle(cheezit_handle)
# rotate boxes off of their sides
rotate = mn.Quaternion.rotation(mn.Rad(-mn.math.pi_half), mn.Vector3(1.0, 0, 0))
sim.set_rotation(rotate, obj_id)
# sample state from the target zone
if not sample_object_state(sim, obj_id, False, True, 100, target_zone):
sim.remove_object(obj_id)
show_target_zone = False # @param{type:"boolean"}
if show_target_zone:
# Get and modify the wire cube template from the range
cube_handle = obj_attr_mgr.get_template_handles("cubeWireframe")[0]
cube_template_cpy = obj_attr_mgr.get_template_by_handle(cube_handle)
cube_template_cpy.scale = target_zone.size()
cube_template_cpy.is_collidable = False
# Register the modified template under a new name.
obj_attr_mgr.register_template(cube_template_cpy, "target_zone")
# instance and place the object from the template
target_zone_id = sim.add_object_by_handle("target_zone")
sim.set_translation(target_zone.center(), target_zone_id)
sim.set_object_motion_type(habitat_sim.physics.MotionType.STATIC, target_zone_id)
# print("target_zone_center = " + str(sim.get_translation(target_zone_id)))
# @markdown ---
# @markdown ###Ball properties:
# load the ball
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
sphere_template_cpy = obj_attr_mgr.get_template_by_handle(sphere_handle)
# @markdown Mass:
ball_mass = 5.01 # @param {type:"slider", min:0.01, max:50.0, step:0.01}
sphere_template_cpy.mass = ball_mass
obj_attr_mgr.register_template(sphere_template_cpy, "ball")
ball_id = sim.add_object_by_handle("ball")
set_object_state_from_agent(sim, ball_id, offset=np.array([0, 1.4, 0]))
# @markdown Initial linear velocity (m/sec):
lin_vel_x = 0 # @param {type:"slider", min:-10, max:10, step:0.1}
lin_vel_y = 1 # @param {type:"slider", min:-10, max:10, step:0.1}
lin_vel_z = 5 # @param {type:"slider", min:0, max:10, step:0.1}
initial_linear_velocity = mn.Vector3(lin_vel_x, lin_vel_y, lin_vel_z)
sim.set_linear_velocity(initial_linear_velocity, ball_id)
# @markdown Initial angular velocity (rad/sec):
ang_vel_x = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
ang_vel_y = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
ang_vel_z = 0 # @param {type:"slider", min:-100, max:100, step:0.1}
initial_angular_velocity = mn.Vector3(ang_vel_x, ang_vel_y, ang_vel_z)
sim.set_angular_velocity(initial_angular_velocity, ball_id)
example_type = "trajectory prediction"
observations = simulate(sim, dt=3.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
remove_all_objects(sim)
```
## Generating Scene Clutter on the NavMesh
The NavMesh can be used to place objects on surfaces in the scene. Once objects are placed they can be set to MotionType::STATIC, indicating that they are not movable (kinematics and dynamics are disabled for STATIC objects). The NavMesh can then be recomputed, including STATIC object meshes in the voxelization.
This example demonstrates using the NavMesh to generate a cluttered scene for navigation. In this script we will:
- Place objects on the NavMesh
- Set them to MotionType::STATIC
- Recompute the NavMesh including STATIC objects
- Visualize the results
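The placement step relies on the rejection-sampling loop in `sample_object_state`: draw a candidate pose, test it for collisions, and retry until a valid placement is found or a try budget is exhausted. A minimal sketch of that pattern, where `sample_point` and `in_collision` are illustrative stand-ins for `pathfinder.get_random_navigable_point()` and `sim.contact_test()`:

```python
import random

def place_object(sample_point, in_collision, max_tries=100):
    # Rejection sampling: keep drawing candidates until one is collision-free.
    for _ in range(max_tries):
        candidate = sample_point()
        if not in_collision(candidate):
            return candidate  # valid placement found
    return None  # every candidate collided; caller should remove the object

random.seed(0)
spot = place_object(
    sample_point=lambda: (random.uniform(0, 10), random.uniform(0, 10)),
    in_collision=lambda p: p[0] < 5.0,  # toy test: left half of the region is "occupied"
)
```

The clutter script below applies the same idea, removing any object whose placement fails after `max_tries` attempts.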
```
# @title Initialize Simulator and Load Scene { display-mode: "form" }
# @markdown (load the apartment_1 scene for clutter generation in an open space)
sim_settings = make_default_settings()
sim_settings["scene"] = "./data/scene_datasets/habitat-test-scenes/apartment_1.glb"
sim_settings["sensor_pitch"] = 0
make_simulator_from_settings(sim_settings)
# @title Select clutter object from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# @title Clutter Generation Script
# @markdown Configure some example parameters:
seed = 2 # @param {type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
# position the agent
sim.agents[0].scene_node.translation = mn.Vector3(0.5, -1.60025, 6.15)
print(sim.agents[0].scene_node.rotation)
agent_orientation_y = -23 # @param{type:"integer"}
sim.agents[0].scene_node.rotation = mn.Quaternion.rotation(
mn.Deg(agent_orientation_y), mn.Vector3(0, 1.0, 0)
)
num_objects = 10 # @param {type:"slider", min:0, max:20, step:1}
object_scale = 5 # @param {type:"slider", min:1.0, max:10.0, step:0.1}
# scale up the selected object
sel_obj_template_cpy = obj_attr_mgr.get_template_by_handle(sel_file_obj_handle)
sel_obj_template_cpy.scale = mn.Vector3(object_scale)
obj_attr_mgr.register_template(sel_obj_template_cpy, "scaled_sel_obj")
# add the selected object
sim.navmesh_visualization = True
remove_all_objects(sim)
fails = 0
for _obj in range(num_objects):
obj_id_1 = sim.add_object_by_handle("scaled_sel_obj")
# place the object
placement_success = sample_object_state(
sim, obj_id_1, from_navmesh=True, maintain_object_up=True, max_tries=100
)
if not placement_success:
fails += 1
sim.remove_object(obj_id_1)
else:
# set the objects to STATIC so they can be added to the NavMesh
sim.set_object_motion_type(habitat_sim.physics.MotionType.STATIC, obj_id_1)
print("Placement fails = " + str(fails) + "/" + str(num_objects))
# recompute the NavMesh with STATIC objects
navmesh_settings = habitat_sim.NavMeshSettings()
navmesh_settings.set_defaults()
navmesh_success = sim.recompute_navmesh(
sim.pathfinder, navmesh_settings, include_static_objects=True
)
# simulate and collect observations
example_type = "clutter generation"
observations = simulate(sim, dt=2.0)
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + example_type,
open_vid=show_video,
)
remove_all_objects(sim)
sim.navmesh_visualization = False
```
## Embodied Continuous Navigation
The following example demonstrates setup and execution of an embodied navigation and interaction scenario. An object and an agent embodied by a rigid locobot mesh are placed randomly on the NavMesh. A path is computed for the agent to reach the object, which is executed by a continuous path-following controller. The object is then kinematically gripped by the agent and a second path is computed for the agent to reach a goal location, also executed by a continuous controller. The gripped object is then released and thrown in front of the agent.
Note: for a more detailed explanation of the NavMesh see Habitat-sim Basics tutorial.
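The path-following controller below (`ContinuousPathFollower`) works by normalizing cumulative segment lengths of the path polyline to a progress value in [0, 1], then mapping progress back to a point by linear interpolation within the containing segment. A plain-numpy sketch of that mapping (function names here are illustrative, not the class's API):

```python
import numpy as np

def path_progress(points):
    # Normalized cumulative arc length of a polyline: 0 at the start, 1 at the end.
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate(([0.0], np.cumsum(seg)))
    return points, cum / cum[-1]

def pos_at(points, progress_table, t):
    # Map progress t in [0, 1] back to a point on the polyline.
    t = np.clip(t, 0.0, 1.0)
    ix = max(np.searchsorted(progress_table, t), 1)
    span = progress_table[ix] - progress_table[ix - 1]
    alpha = (t - progress_table[ix - 1]) / span
    return points[ix - 1] + alpha * (points[ix] - points[ix - 1])

# An L-shaped two-segment path of total length 2.
pts, prog = path_progress([[0, 0], [1, 0], [1, 1]])
```

For example, `pos_at(pts, prog, 0.25)` lands halfway along the first segment, at `[0.5, 0]`.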
```
# @title Select target object from the GUI: { display-mode: "form" }
build_widget_ui(obj_attr_mgr, prim_attr_mgr)
# @title Continuous Path Follower Example { display-mode: "form" }
# @markdown A python Class to provide waypoints along a path given agent states
class ContinuousPathFollower(object):
def __init__(self, sim, path, agent_scene_node, waypoint_threshold):
self._sim = sim
self._points = path.points[:]
assert len(self._points) > 0
self._length = path.geodesic_distance
self._node = agent_scene_node
self._threshold = waypoint_threshold
self._step_size = 0.01
self.progress = 0 # geodesic distance -> [0,1]
self.waypoint = path.points[0]
# setup progress waypoints
_point_progress = [0]
_segment_tangents = []
_length = self._length
for ix, point in enumerate(self._points):
if ix > 0:
segment = point - self._points[ix - 1]
segment_length = np.linalg.norm(segment)
segment_tangent = segment / segment_length
_point_progress.append(
segment_length / _length + _point_progress[ix - 1]
)
# t-1 -> t
_segment_tangents.append(segment_tangent)
self._point_progress = _point_progress
self._segment_tangents = _segment_tangents
# final tangent is duplicated
self._segment_tangents.append(self._segment_tangents[-1])
print("self._length = " + str(self._length))
print("num points = " + str(len(self._points)))
print("self._point_progress = " + str(self._point_progress))
print("self._segment_tangents = " + str(self._segment_tangents))
def pos_at(self, progress):
if progress <= 0:
return self._points[0]
elif progress >= 1.0:
return self._points[-1]
path_ix = 0
for ix, prog in enumerate(self._point_progress):
if prog > progress:
path_ix = ix
break
segment_distance = self._length * (progress - self._point_progress[path_ix - 1])
return (
self._points[path_ix - 1]
+ self._segment_tangents[path_ix - 1] * segment_distance
)
def update_waypoint(self):
if self.progress < 1.0:
wp_disp = self.waypoint - self._node.absolute_translation
wp_dist = np.linalg.norm(wp_disp)
node_pos = self._node.absolute_translation
step_size = self._step_size
threshold = self._threshold
while wp_dist < threshold:
self.progress += step_size
self.waypoint = self.pos_at(self.progress)
if self.progress >= 1.0:
break
wp_disp = self.waypoint - node_pos
wp_dist = np.linalg.norm(wp_disp)
def setup_path_visualization(sim, path_follower, vis_samples=100):
vis_ids = []
sphere_handle = obj_attr_mgr.get_template_handles("uvSphereSolid")[0]
sphere_template_cpy = obj_attr_mgr.get_template_by_handle(sphere_handle)
sphere_template_cpy.scale *= 0.2
template_id = obj_attr_mgr.register_template(sphere_template_cpy, "mini-sphere")
print("template_id = " + str(template_id))
if template_id < 0:
return None
vis_ids.append(sim.add_object_by_handle(sphere_handle))
for point in path_follower._points:
cp_id = sim.add_object_by_handle(sphere_handle)
if cp_id < 0:
print(cp_id)
return None
sim.set_translation(point, cp_id)
vis_ids.append(cp_id)
for i in range(vis_samples):
cp_id = sim.add_object_by_handle("mini-sphere")
if cp_id < 0:
print(cp_id)
return None
sim.set_translation(path_follower.pos_at(float(i / vis_samples)), cp_id)
vis_ids.append(cp_id)
for obj_id in vis_ids:
if obj_id < 0:
print(obj_id)
return None
for obj_id in vis_ids:
sim.set_object_motion_type(habitat_sim.physics.MotionType.KINEMATIC, obj_id)
return vis_ids
def track_waypoint(waypoint, rs, vc, dt=1.0 / 60.0):
angular_error_threshold = 0.5
max_linear_speed = 1.0
max_turn_speed = 1.0
glob_forward = rs.rotation.transform_vector(mn.Vector3(0, 0, -1.0)).normalized()
glob_right = rs.rotation.transform_vector(mn.Vector3(-1.0, 0, 0)).normalized()
to_waypoint = mn.Vector3(waypoint) - rs.translation
u_to_waypoint = to_waypoint.normalized()
angle_error = float(mn.math.angle(glob_forward, u_to_waypoint))
new_velocity = 0
if angle_error < angular_error_threshold:
# speed up to max
new_velocity = (vc.linear_velocity[2] - max_linear_speed) / 2.0
else:
# slow down to 0
new_velocity = (vc.linear_velocity[2]) / 2.0
vc.linear_velocity = mn.Vector3(0, 0, new_velocity)
# angular part
rot_dir = 1.0
if mn.math.dot(glob_right, u_to_waypoint) < 0:
rot_dir = -1.0
angular_correction = 0.0
if angle_error > (max_turn_speed * 10.0 * dt):
angular_correction = max_turn_speed
else:
angular_correction = angle_error / 2.0
vc.angular_velocity = mn.Vector3(
0, np.clip(rot_dir * angular_correction, -max_turn_speed, max_turn_speed), 0
)
# grip/release and sync gripped object state kinematically
class ObjectGripper(object):
def __init__(
self,
sim,
agent_scene_node,
end_effector_offset,
):
self._sim = sim
self._node = agent_scene_node
self._offset = end_effector_offset
self._gripped_obj_id = -1
self._gripped_obj_buffer = 0  # bounding box y-dimension offset for the gripped object
def sync_states(self):
if self._gripped_obj_id != -1:
agent_t = self._node.absolute_transformation_matrix()
agent_t.translation += self._offset + mn.Vector3(
0, self._gripped_obj_buffer, 0.0
)
sim.set_transformation(agent_t, self._gripped_obj_id)
def grip(self, obj_id):
if self._gripped_obj_id != -1:
print("Oops, can't carry more than one item.")
return
self._gripped_obj_id = obj_id
sim.set_object_motion_type(habitat_sim.physics.MotionType.KINEMATIC, obj_id)
object_node = sim.get_object_scene_node(obj_id)
self._gripped_obj_buffer = object_node.cumulative_bb.size_y() / 2.0
self.sync_states()
def release(self):
if self._gripped_obj_id == -1:
print("Oops, can't release nothing.")
return
sim.set_object_motion_type(
habitat_sim.physics.MotionType.DYNAMIC, self._gripped_obj_id
)
sim.set_linear_velocity(
self._node.absolute_transformation_matrix().transform_vector(
mn.Vector3(0, 0, -1.0)
)
+ mn.Vector3(0, 2.0, 0),
self._gripped_obj_id,
)
self._gripped_obj_id = -1
# @title Embodied Continuous Navigation Example { display-mode: "form" }
# @markdown This example cell runs the object retrieval task.
# @markdown First the Simulator is re-initialized with:
# @markdown - a 3rd person camera view
# @markdown - modified 1st person sensor placement
sim_settings = make_default_settings()
# fmt: off
sim_settings["scene"] = "./data/scene_datasets/mp3d/17DRP5sb8fy/17DRP5sb8fy.glb" # @param{type:"string"}
# fmt: on
sim_settings["sensor_pitch"] = 0
sim_settings["sensor_height"] = 0.6
sim_settings["color_sensor_3rd_person"] = True
sim_settings["depth_sensor_1st_person"] = True
sim_settings["semantic_sensor_1st_person"] = True
make_simulator_from_settings(sim_settings)
default_nav_mesh_settings = habitat_sim.NavMeshSettings()
default_nav_mesh_settings.set_defaults()
inflated_nav_mesh_settings = habitat_sim.NavMeshSettings()
inflated_nav_mesh_settings.set_defaults()
inflated_nav_mesh_settings.agent_radius = 0.2
inflated_nav_mesh_settings.agent_height = 1.5
recompute_successful = sim.recompute_navmesh(sim.pathfinder, inflated_nav_mesh_settings)
if not recompute_successful:
print("Failed to recompute navmesh!")
# @markdown ---
# @markdown ### Set other example parameters:
seed = 24 # @param {type:"integer"}
random.seed(seed)
sim.seed(seed)
np.random.seed(seed)
sim.config.sim_cfg.allow_sliding = True # @param {type:"boolean"}
print(sel_file_obj_handle)
# load a selected target object and place it on the NavMesh
obj_id_1 = sim.add_object_by_handle(sel_file_obj_handle)
# load the locobot_merged asset
locobot_template_handle = obj_attr_mgr.get_file_template_handles("locobot")[0]
# add robot object to the scene with the agent/camera SceneNode attached
locobot_id = sim.add_object_by_handle(locobot_template_handle, sim.agents[0].scene_node)
# set the agent's body to kinematic since we will be updating position manually
sim.set_object_motion_type(habitat_sim.physics.MotionType.KINEMATIC, locobot_id)
# create and configure a new VelocityControl structure
# Note: this is NOT the object's VelocityControl, so it will not be consumed automatically in sim.step_physics
vel_control = habitat_sim.physics.VelocityControl()
vel_control.controlling_lin_vel = True
vel_control.lin_vel_is_local = True
vel_control.controlling_ang_vel = True
vel_control.ang_vel_is_local = True
# reset observations and robot state
sim.set_translation(sim.pathfinder.get_random_navigable_point(), locobot_id)
observations = []
# get shortest path to the object from the agent position
found_path = False
path1 = habitat_sim.ShortestPath()
path2 = habitat_sim.ShortestPath()
while not found_path:
if not sample_object_state(
sim, obj_id_1, from_navmesh=True, maintain_object_up=True, max_tries=1000
):
print("Couldn't find an initial object placement. Aborting.")
break
path1.requested_start = sim.get_translation(locobot_id)
path1.requested_end = sim.get_translation(obj_id_1)
path2.requested_start = path1.requested_end
path2.requested_end = sim.pathfinder.get_random_navigable_point()
found_path = sim.pathfinder.find_path(path1) and sim.pathfinder.find_path(path2)
if not found_path:
print("Could not find path to object, aborting!")
vis_ids = []
recompute_successful = sim.recompute_navmesh(sim.pathfinder, default_nav_mesh_settings)
if not recompute_successful:
print("Failed to recompute navmesh 2!")
gripper = ObjectGripper(
sim, sim.get_object_scene_node(locobot_id), np.array([0.0, 0.6, 0.0])
)
continuous_path_follower = ContinuousPathFollower(
sim, path1, sim.get_object_scene_node(locobot_id), waypoint_threshold=0.4
)
show_waypoint_indicators = False # @param {type:"boolean"}
time_step = 1.0 / 30.0
for i in range(2):
if i == 1:
gripper.grip(obj_id_1)
continuous_path_follower = ContinuousPathFollower(
sim, path2, sim.get_object_scene_node(locobot_id), waypoint_threshold=0.4
)
if show_waypoint_indicators:
for obj_id in vis_ids:
sim.remove_object(obj_id)
vis_ids = setup_path_visualization(sim, continuous_path_follower)
# manually control the object's kinematic state via velocity integration
start_time = sim.get_world_time()
max_time = 30.0
while (
continuous_path_follower.progress < 1.0
and sim.get_world_time() - start_time < max_time
):
continuous_path_follower.update_waypoint()
if show_waypoint_indicators:
sim.set_translation(continuous_path_follower.waypoint, vis_ids[0])
if locobot_id < 0:
print("locobot_id " + str(locobot_id))
break
previous_rigid_state = sim.get_rigid_state(locobot_id)
# set velocities based on relative waypoint position/direction
track_waypoint(
continuous_path_follower.waypoint,
previous_rigid_state,
vel_control,
dt=time_step,
)
# manually integrate the rigid state
target_rigid_state = vel_control.integrate_transform(
time_step, previous_rigid_state
)
# snap rigid state to navmesh and set state to object/agent
end_pos = sim.step_filter(
previous_rigid_state.translation, target_rigid_state.translation
)
sim.set_translation(end_pos, locobot_id)
sim.set_rotation(target_rigid_state.rotation, locobot_id)
# Check if a collision occurred
dist_moved_before_filter = (
target_rigid_state.translation - previous_rigid_state.translation
).dot()
dist_moved_after_filter = (end_pos - previous_rigid_state.translation).dot()
# NB: There are some cases where ||filter_end - end_pos|| > 0 when a
# collision _didn't_ happen. One such case is going up stairs. Instead,
# we check to see if the amount moved after the application of the filter
# is _less_ than the amount moved before the application of the filter
EPS = 1e-5
collided = (dist_moved_after_filter + EPS) < dist_moved_before_filter
gripper.sync_states()
# run any dynamics simulation
sim.step_physics(time_step)
# render observation
observations.append(sim.get_sensor_observations())
# release
gripper.release()
start_time = sim.get_world_time()
while sim.get_world_time() - start_time < 2.0:
sim.step_physics(time_step)
observations.append(sim.get_sensor_observations())
# video rendering with embedded 1st person view
video_prefix = "fetch"
if make_video:
overlay_dims = (int(sim_settings["width"] / 5), int(sim_settings["height"] / 5))
print("overlay_dims = " + str(overlay_dims))
overlay_settings = [
{
"obs": "color_sensor_1st_person",
"type": "color",
"dims": overlay_dims,
"pos": (10, 10),
"border": 2,
},
{
"obs": "depth_sensor_1st_person",
"type": "depth",
"dims": overlay_dims,
"pos": (10, 30 + overlay_dims[1]),
"border": 2,
},
{
"obs": "semantic_sensor_1st_person",
"type": "semantic",
"dims": overlay_dims,
"pos": (10, 50 + overlay_dims[1] * 2),
"border": 2,
},
]
print("overlay_settings = " + str(overlay_settings))
vut.make_video(
observations=observations,
primary_obs="color_sensor_3rd_person",
primary_obs_type="color",
video_file=output_path + video_prefix,
fps=int(1.0 / time_step),
open_vid=show_video,
overlay_settings=overlay_settings,
depth_clip=10.0,
)
# remove locobot while leaving the agent node for later use
sim.remove_object(locobot_id, delete_object_node=False)
remove_all_objects(sim)
```
### Step 4 - Generate Results
This script has gone through a lot of development and now exists in two versions, with the 'operational' one being the automated version, also labelled as Step 4 in this folder.
This is really where most of the complexity lies in the Yemen analysis, as we have been asked to do cuts and analyses for the WHO and UNICEF teams on the ground in Yemen that no other team has (as of yet!) requested from the GOST team.
This is a good script for learning what happens in the even more complicated Automated version.
Import the usual suspects
```
import pandas as pd
import os, sys
sys.path.append(r'C:\Users\charl\Documents\GitHub\GOST_PublicGoods\GOSTNets\GOSTNets')
sys.path.append(r'C:\Users\charl\Documents\GitHub\GOST')
import GOSTnet as gn
import importlib
import geopandas as gpd
import rasterio as rt
from rasterio import features
from shapely.wkt import loads
import numpy as np
import networkx as nx
from shapely.geometry import box, Point, Polygon
```
### Settings
Here, we build our scenario for this run through of Generate Results (which must be run multiple times for multiple scenarios).
It is this bit which is effectively automated in the 'automated version' of the script. Here, we set the 'scenario variables':
- whether or not to model access to a motor vehicle for driving along roads;
- whether to incorporate a version of the graph with adjustments made for conflict (i.e. road closures, conflicts);
- the year of the analysis;
- whether we are looking at access to hospitals, Primary Health Care facilities (PHCs), or both;
- the range of services which we are analyzing access to (a subset of the facilities).
```
walking = 0 # set to 1 for walking, 0 for driving.
conflict = 1 # set to 1 to prevent people from crossing warfronts, and to incorporate road closures per UN logistics cluster data
facility_type = 'ALL' # Options: 'HOS' or 'PHC' or 'ALL'
year = 2018 # default = 2018; can be 2016 if 2016 origin layer prepared
service_index = 8 # Set to 0 for all services / access to hospitals. A choice from the next list.
services = ['ALL',
'Antenatal',
'BEmONC',
'CEmONC',
'Under_5',
'Emergency_Surgery',
'Immunizations',
'Malnutrition',
'Int_Outreach']
zonal_stats = 1 # set to 1 to produce summary zonal stats layer
```
### Import All-Destination OD
The OD will have on one axis all of the uniquely snapped-to nodes for the origin points, and on the other, all of the uniquely snapped-to nodes amongst the destinations.
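As a toy sketch of that layout (the node IDs here are invented purely for illustration): rows are indexed by origin nearest-node ID, columns by destination nearest-node ID, and each value is an on-network travel time.

```python
import pandas as pd

# hypothetical node IDs, purely to illustrate the OD layout:
# rows = origin nearest-node IDs, columns = destination nearest-node IDs,
# values = on-network travel times (seconds)
OD = pd.DataFrame(
    {'9001': [120.0, 480.0], '9002': [300.0, 60.0]},
    index=pd.Index(['101', '102'], name='O_ID'),
)
t = OD.loc['102', '9002']  # time from origin node 102 to destination node 9002
```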
```
basepth = r'C:\Users\charl\Documents\GOST\Yemen'
# path to the graphtool folder for the files that generated the OD matrix
pth = os.path.join(basepth, 'graphtool')
# path for utility files, e.g. admin boundaries, warfront file
util_path = os.path.join(basepth, 'util_files')
# path to the SRTM tiles used for matching on node elevation
srtm_pth = os.path.join(basepth, 'SRTM')
```
In this block, we translate some of the settings elected above into file name suffixes to help us tell different outputs apart.
```
# in the event walking is on, we want the walk graph, not the normal driving graph.
if walking == 1:
type_tag = 'walking'
net_name = r'walk_graph.pickle'
else:
type_tag = 'driving'
# this network has both conflict and non-conflict drive times on its edges.
net_name = r'G_salty_time_conflict_adj.pickle'
# if we are looking at conflict, add the ConflictAdj suffix to output files
if conflict == 1:
conflict_tag = 'ConflictAdj'
else:
conflict_tag = 'NoConflict'
# our path to our OD matrix should be the graphtool folder (most of the time!). Adjust if necessary
OD_pth = pth
OD_name = r'output_Jan24th_%s.csv' % type_tag
# same is also true of our network path - adjust if necessary
net_pth = pth
# Define our CRS for the analysis - WGS84 and projected.
WGS = {'init':'epsg:4326'}
measure_crs = {'init':'epsg:32638'}
# Here we build the whole filename based on our choice of settings, and print those beneath
subset = r'%s_24th_HERAMS_%s_%s_%s_%s' % (type_tag, facility_type, services[service_index], conflict_tag, year)
print("Output files will have name: ", subset)
print("network: ",net_name)
print("OD Matrix: ",OD_name)
print("Conflict setting: ",conflict_tag)
# set the speed at which people are assumed to move over open ground with no paths
# e.g. when walking from destination node to actual destination. KEY ASSUMPTION!!
offroad_speed = 4
```
### Read in OD Matrix
Here, we read in our OD matrix, and do a bit of housekeeping on it.
```
OD = pd.read_csv(os.path.join(OD_pth, OD_name))
OD = OD.rename(columns = {'Unnamed: 0':'O_ID'})
OD = OD.set_index('O_ID')
OD = OD.replace([np.inf, -np.inf], np.nan)
OD_original = OD.copy()
```
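One detail worth flagging: replacing infinities with NaN matters later, because pandas' `min(axis=1)` skips NaN by default, so unreachable origin-destination pairs silently drop out of the minimum-time calculation. A minimal illustration:

```python
import numpy as np
import pandas as pd

od = pd.DataFrame({'d1': [120.0, np.inf], 'd2': [np.inf, np.inf]})
od = od.replace([np.inf, -np.inf], np.nan)
# row 0 still has a finite minimum; row 1 is entirely unreachable -> NaN
row_min = od.min(axis=1)
```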
### Subset to Accepted Nodes
In this block, we perform similar filtering operations on the HeRAMS file, including:
- removing facilities which aren't of the selected type
- removing non-operational facilities
- If selected, dropping facilities which don't offer the required services
```
# initial read in
acceptable_df = pd.read_csv(os.path.join(OD_pth, 'HeRAMS 2018 April_snapped.csv'))
# Adjust for facility type - drop facilities as necessary. 1 = hospital, 2/3 = PHC
if facility_type == 'HOS':
acceptable_df = acceptable_df.loc[acceptable_df['Health Facility Type Coded'].isin(['1',1])]
elif facility_type == 'PHC':
acceptable_df = acceptable_df.loc[acceptable_df['Health Facility Type Coded'].isin([2,'2',3,'3'])]
elif facility_type == 'ALL':
pass
else:
raise ValueError('unacceptable facility_type entry!')
# Adjust for functionality in a given year. Functioning facilities given by a 1 or a 2.
acceptable_df = acceptable_df.loc[acceptable_df['Functioning %s' % year].isin(['1','2',1,2])]
# Adjust for availability of service.
# first we build a dictionary of all the columns we care about with a standardized KEY (note: not value!)
# we use the keys themselves to then subset the Pandas DF
SERVICE_DICT = {'Antenatal_2018':'ANC 2018',
'Antenatal_2016':'Antenatal Care (P422) 2016',
'BEmONC_2018':'Basic emergency obstetric care 2018',
'BEmONC_2016':'Basic Emergency Obsteteric Care (P424) 2016',
'CEmONC_2018':'Comprehensive emergency obstetric care 2018',
'CEmONC_2016':'Comprehensive Emergency Obstetric Care (S424) 2016',
'Under_5_2018':'Under 5 clinics 2018',
'Under_5_2016':'Under-5 clinic services (P23) 2016',
'Emergency_Surgery_2018':'Emergency and elective surgery 2018',
'Emergency_Surgery_2016':'Emergency and Elective Surgery (S14) 2016',
'Immunizations_2018':'EPI 2018',
'Immunizations_2016':'EPI (P21a) 2016',
'Malnutrition_2018':'Malnutrition services 2018',
'Malnutrition_2016':'Malnutrition services (P25) 2016',
'Int_Outreach_2018':'Integrated outreach (IMCI+EPI+ANC+Nutrition_Services) 2018',
'Int_Outreach_2016':'Integrated Outreach (P22) 2016'}
if service_index == 0:
pass
else:
# take this line slowly...
acceptable_df = acceptable_df.loc[acceptable_df[SERVICE_DICT['%s_%s' % (services[service_index],year)]].isin(['1',1])]
# print out the length of the acceptable_df - which is, at this point, the list of valid destinations for this analysis.
len(acceptable_df)
```
### OD-Matrix slicing for valid destinations
In this section, we pick out the snapped-to nodes for the remaining valid destinations, and then slice the larger OD matrix accordingly.
```
# load the geometry column, make it a GeoDataFrame
acceptable_df['geometry'] = acceptable_df['geometry'].apply(loads)
acceptable_gdf = gpd.GeoDataFrame(acceptable_df, geometry = 'geometry', crs = {'init':'epsg:4326'})
# convert types of nearest node (currently stored as int) into string, find unique set.
accepted_facilities = list(set(list(acceptable_df.NN)))
accepted_facilities_str = [str(i) for i in accepted_facilities]
# keep ONLY the columns in the OD-Matrix relating to nodes snapped to by these destinations.
# Massive reduction in size of the OD-matrix in most cases
OD = OD_original[accepted_facilities_str]
# Send to file the destination .csv - used for generating outputs as the facility locations in this case
acceptable_df.to_csv(os.path.join(basepth,'output_layers','Round 3','%s.csv' % subset))
# print out dimensions to make sure we are on track.
print(OD_original.shape)
print(OD.shape)
```
### Define function to add elevation to a point GeoDataFrame
This function takes the work in Step 3.a, and functionalizes it - allowing us to add an elevation field to a point GeoDataFrame, assuming we have a path to all the SRTM tiles and we have denoted x and y Lat / Long columns in WGS84. See Step 3.a for a detailed walkthrough of this function's methodology.
```
def add_elevation(df, x, y, srtm_pth):
# walk all tiles, find path
tiles = []
for root, folder, files in os.walk(os.path.join(srtm_pth,'high_res')):
for f in files:
if f[-3:] == 'hgt':
tiles.append(f[:-4])
# load dictionary of tiles
arrs = {}
for t in tiles:
arrs[t] = rt.open(srtm_pth+r'\high_res\{}.hgt\{}.hgt'.format(t, t), 'r')
# assign a code
uniques = []
df['code'] = 'placeholder'
def tile_code(z):
E = str(z[x])[:2]
N = str(z[y])[:2]
return 'N{}E0{}'.format(N, E)
df['code'] = df.apply(lambda z: tile_code(z), axis = 1)
unique_codes = list(set(df['code'].unique()))
z = {}
# Match on High Precision Elevation
property_name = 'elevation'
for code in unique_codes:
df2 = df.copy()
df2 = df2.loc[df2['code'] == code]
dataset = arrs[code]
b = dataset.bounds
datasetBoundary = box(b[0], b[1], b[2], b[3])
selKeys = []
selPts = []
for index, row in df2.iterrows():
if Point(row[x], row[y]).intersects(datasetBoundary):
selPts.append((row[x],row[y]))
selKeys.append(index)
raster_values = list(dataset.sample(selPts))
raster_values = [x[0] for x in raster_values]
# generate new dictionary of {node ID: raster values}
z.update(zip(selKeys, raster_values))
elev_df = pd.DataFrame.from_dict(z, orient='index')
elev_df.columns = ['elevation']
# match on low-precision elevation
missing = elev_df.copy()
missing = missing.loc[missing.elevation < 0]
if len(missing) > 0:
missing_df = df.copy()
missing_df = missing_df.loc[missing.index]
low_res_tifpath = os.path.join(srtm_pth, 'clipped', 'clipped_e20N40.tif')
dataset = rt.open(low_res_tifpath, 'r')
b = dataset.bounds
datasetBoundary = box(b[0], b[1], b[2], b[3])
selKeys = []
selPts = []
for index, row in missing_df.iterrows():
if Point(row[x], row[y]).intersects(datasetBoundary):
selPts.append((row[x],row[y]))
selKeys.append(index)
raster_values = list(dataset.sample(selPts))
raster_values = [x[0] for x in raster_values]
z.update(zip(selKeys, raster_values))
elev_df = pd.DataFrame.from_dict(z, orient='index')
elev_df.columns = ['elevation']
df['point_elev'] = elev_df['elevation']
df = df.drop('code', axis = 1)
return df
```
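The tile naming inside `add_elevation` follows the SRTM convention of labelling 1-degree tiles by their south-west corner. A standalone sketch of that helper (note the string slicing only handles two-digit, positive coordinates, which holds everywhere in Yemen):

```python
def tile_code(lon, lat):
    # SRTM tiles are named by their SW corner, e.g. N15E044 covers
    # latitudes 15-16 N and longitudes 44-45 E.
    # NB: the slicing assumes 2-digit positive coordinates (true for Yemen).
    return 'N{}E0{}'.format(str(lat)[:2], str(lon)[:2])

code = tile_code(44.2, 15.5)  # 'N15E044'
```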
### Define function to convert distances to walk times
Once again, Tobler's hiking function reappears from Step 3.a to help us generate elevation-adjusted walk times for our network. This version is modified to return a walk time for the distances between:
- an origin / destination coordinate pair (the raw lat/long in the WHO's HeRAMS file or an origin centroid in WorldPop), and
- the nearest node on the network.
This explains why the 'dist' argument is set by default to 'NN_dist' - or the distance to the nearest node for each point in the frame.
```
def generate_walktimes(df, start = 'point_elev', end = 'node_elev', dist = 'NN_dist', max_walkspeed = 6, min_speed = 0.1):
def speed(incline_ratio, max_speed):
walkspeed = max_speed * np.exp(-3.5 * abs(incline_ratio + 0.05))
return walkspeed
speeds = {}
times = {}
for index, data in df.iterrows():
if data[dist] > 0:
delta_elevation = data[end] - data[start]
incline_ratio = delta_elevation / data[dist]
speed_kmph = speed(incline_ratio = incline_ratio, max_speed = max_walkspeed)
speed_kmph = max(speed_kmph, min_speed)
speeds[index] = (speed_kmph)
times[index] = (data[dist] / 1000 * 3600 / speed_kmph)
speed_df = pd.DataFrame.from_dict(speeds, orient = 'index')
time_df = pd.DataFrame.from_dict(times, orient = 'index')
df['walkspeed'] = speed_df[0]
df['walk_time'] = time_df[0]
return df
```
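To make the function's behaviour concrete, here is Tobler's curve evaluated directly (the formula peaks at a gentle downhill of -5%, and speed decays exponentially with slope magnitude either side of that):

```python
import numpy as np

def tobler_speed(incline_ratio, max_speed=6.0):
    # walking speed (km/h), peaking at incline_ratio = -0.05
    return max_speed * np.exp(-3.5 * abs(incline_ratio + 0.05))

flat = tobler_speed(0.0)     # ~5.04 km/h on level ground
steep = tobler_speed(0.25)   # ~2.10 km/h on a 25% climb
peak = tobler_speed(-0.05)   # 6.0 km/h at the optimum gentle downhill
# time in seconds to cover a 500 m leg on the flat:
t = 500 / 1000 * 3600 / flat
```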
### Add elevation for destination nodes
What it says on the tin - adding an elevation column for the dest_df, the destinations GeoDataFrame. Note we add the elevation for the _points themselves_, NOT their nearest nodes - which we will do soon.
We sneakily also set the index to 'Nearest Node' (which will always here be referred to as 'NN').
```
dest_df = acceptable_df[['NN','NN_dist','Latitude','Longitude']]
dest_df = add_elevation(dest_df, 'Longitude','Latitude', srtm_pth).set_index('NN')
```
### Add elevation from graph nodes
It will come in useful to also have a dataframe of our nodes with their elevation matched on. We generate this here using tried and tested methods.
```
G = nx.read_gpickle(os.path.join(OD_pth, net_name))
G_node_df = gn.node_gdf_from_graph(G)
G_node_df = add_elevation(G_node_df, 'x', 'y', srtm_pth)
match_node_elevs = G_node_df[['node_ID','point_elev']].set_index('node_ID')
match_node_elevs.loc[match_node_elevs.point_elev < 0] = 0
```
### Match on node elevations for dest_df nearest nodes; calculate travel times to nearest node
Here, having earlier set the dest_df index to nearest node, and having generated a reference frame above for all node elevations, we match on the elevation of each destination's _nearest node_. This will allow us to calculate walk times to the network with accuracy for each destination.
```
# match on NN elevation (not actual destination elevation!)
dest_df['node_elev'] = match_node_elevs['point_elev']
# with all our constituent fields generated, now we can generate our network-to-actual-destination walktimes.
dest_df = generate_walktimes(dest_df, start = 'node_elev', end = 'point_elev', dist = 'NN_dist', max_walkspeed = offroad_speed)
# we sort, for fun, mainly.
dest_df = dest_df.sort_values(by = 'walk_time', ascending = False)
```
### Add Walk Time to all travel times in OD matrix
We have now generated one component of three of a given travel time between an origin and a destination.
Due to the organization of the OD matrix (it contains only the unique snapped-to origin nodes, NOT all 560,000 origin points), it is easier to adjust every journey time by adding on the required walk time from network to destination at the end.
We do that here.
```
# subset the destination DataFrame into one that is just equal to the walk-time from network. index = NN.
dest_df = dest_df[['walk_time']]
# convert to type that will work with the OD matrix
dest_df.index = dest_df.index.map(str)
# flip the OD-matrix! Now, origins are the columns, not the rows.
d_f = OD.transpose()
# for each origin node column,
for i in d_f.columns:
# match on this column to the destination DataFrame (bear with me, I know this is funky)
dest_df[i] = d_f[i]
# now, we add the walk time to each and every value, laterally:
for i in dest_df.columns:
if i == 'walk_time':
pass
# executed here
else:
dest_df[i] = dest_df[i] + dest_df['walk_time']
# before dropping the walk_time - it has been added everywhere, afterall.
dest_df = dest_df.drop('walk_time', axis = 1)
# then we flip our OD-matrix back the right way up, with rows once again equal to origin nodes, and columns = destination nodes.
dest_df = dest_df.transpose()
```
This is the hardest bit to follow in this script. It has to be done this way with our limited computing resources (and is actually fairly efficient). Nonetheless, it isn't pretty.
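For the record, the double transpose plus column loop is arithmetically the same as a single column-aligned broadcast; a toy demonstration (node IDs invented):

```python
import pandas as pd

OD = pd.DataFrame({'d1': [100.0, 200.0], 'd2': [300.0, 400.0]},
                  index=['o1', 'o2'])
walk_time = pd.Series({'d1': 10.0, 'd2': 20.0})

# add each destination's network-to-facility walk time to its whole column
adjusted = OD.add(walk_time, axis='columns')
```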
### Import Shapefile Describing Regions of Control
In this block, we import one of two shapefiles. If conflict is OFF, we have the NoConflict.shp - which is a single shape, with no warfronts in it. Otherwise, we pick up the merged_dists.shp, generated in the Prep A workbook.
If you intend to do a scenario involving conflict, go and run that first and come back when merged_dists.shp exists.
```
# choose relevant file
if conflict == 1:
conflict_file = r'merged_dists.shp'
elif conflict == 0:
conflict_file = r'NoConflict.shp'
# read in relevant file
merged_dists = gpd.read_file(os.path.join(util_path, conflict_file))
# project if necessary
if merged_dists.crs != {'init':'epsg:4326'}:
merged_dists = merged_dists.to_crs({'init':'epsg:4326'})
# only keep Polygons (nothing else should be in there anyway)
merged_dists = merged_dists.loc[merged_dists.geometry.type == 'Polygon']
```
### Factor in lines of Control - Import Areas of Control Shapefile
This function helps us intersect GeoDataFrames and the warfronts file.
It returns a dictionary where:
- the index is the polygon index in a GeoDataFrame containing polygons,
- the value is a list of objects which fall within that polygon
It does this very quickly by leveraging spatial indices to speed up the process. Further, we cut up polygons which are too big into smaller blocks and iterate through those to further turbocharge the intersection.
```
# Intersect points with merged districts shapefile, identify relationship
def AggressiveSpatialIntersect(points, polygons):
# points must be a GeoDataFrame containing points
# polygons must be a GeoDataFrame containing polygons
import osmnx as ox
# make a spatial index of the points to be intersected
spatial_index = points.sindex
# set up the dictionary to be returned
container = {}
cut_geoms = []
# iterate through each polygon
for index, row in polygons.iterrows():
# pick out the shapely object
polygon = row.geometry
# the polygon is big,
if polygon.area > 0.5:
# cut the geometry into quadrats of width 0.5 degrees
geometry_cut = ox.quadrat_cut_geometry(polygon, quadrat_width=0.5)
# add this to a list of cut geometries
cut_geoms.append(geometry_cut)
# notify that we are taking some scissors to this polygon in particular
print('cutting geometry %s into %s pieces' % (index, len(geometry_cut)))
index_list = []
# now go through these geometry pieces, and perform the spatial intersect
for P in geometry_cut:
possible_matches_index = list(spatial_index.intersection(P.bounds))
possible_matches = points.iloc[possible_matches_index]
precise_matches = possible_matches[possible_matches.intersects(P)]
if len(precise_matches) > 0:
index_list.append(precise_matches.index)
flat_list = [item for sublist in index_list for item in sublist]
container[index] = list(set(flat_list))
# if it is a small polygon, just go for the intersection anyway (via spatial index)
else:
possible_matches_index = list(spatial_index.intersection(polygon.bounds))
possible_matches = points.iloc[possible_matches_index]
precise_matches = possible_matches[possible_matches.intersects(polygon)]
if len(precise_matches) > 0:
container[index] = list(precise_matches.index)
# return the dictionary of which points lie in which polygons.
return container
```
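The dictionary format the function returns can be shown with a stripped-down, pure-Python stand-in that does bounding-box containment only (no shapely, no quadrat cutting - just the shape of the output):

```python
def points_in_boxes(points, boxes):
    # points: list of (x, y); boxes: list of (minx, miny, maxx, maxy)
    # returns {box index: [indices of points falling inside that box]},
    # mirroring the {polygon: [contained objects]} format described above
    container = {}
    for bi, (minx, miny, maxx, maxy) in enumerate(boxes):
        hits = [i for i, (x, y) in enumerate(points)
                if minx <= x <= maxx and miny <= y <= maxy]
        if hits:
            container[bi] = hits
    return container

result = points_in_boxes(
    [(0.1, 0.1), (0.2, 0.3), (2.5, 2.5)],
    [(0, 0, 1, 1), (2, 2, 3, 3)],
)
# result == {0: [0, 1], 1: [2]}
```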
Here, we deploy this function to identify which graph nodes lie in which areas of control. Possible_snap_nodes now is set to the resultant dictionary, of format: {polygon : [contained nodes list]}
```
graph_node_gdf = gn.node_gdf_from_graph(G)
gdf = graph_node_gdf.copy()
gdf = gdf.set_index('node_ID')
possible_snap_nodes = AggressiveSpatialIntersect(graph_node_gdf, merged_dists)
print('**bag of possible node snapping locations has been successfully generated**')
```
### Load Origins Grid
This is the first time we pick up and work with the large origin GeoDataFrame. We load the correct one in for the year we are working with, and perform some housekeeping.
```
# Match on network time from origin node (time travelling along network + walking to destination)
if year == 2018:
year_raster = 2018
elif year == 2016:
year_raster = 2015
grid_name = r'origins_1km_%s_snapped.csv' % year_raster
grid = pd.read_csv(os.path.join(OD_pth, grid_name))
grid = grid.rename({'Unnamed: 0':'PointID'}, axis = 1)
grid['geometry'] = grid['geometry'].apply(loads)
grid_gdf = gpd.GeoDataFrame(grid, crs = WGS, geometry = 'geometry')
grid_gdf = grid_gdf.set_index('PointID')
```
### Adjust Nearest Node snapping for War
This gets kinda interesting. The first thing we do is generate the reference dictionary for the origin layer, recording which origin points lie within which polygon of homogeneous control. (Remember: if conflict is turned off, then all points lie within one single homogeneous polygon - Yemen's territorial boundaries (minus Socotra...I digress).)
Thereafter, we remake the origin file with the polygon reference for each point in the final line, where we concat 'bundle' - the slices of the original file that lie inside each polygon.
```
origin_container = AggressiveSpatialIntersect(grid_gdf, merged_dists)
print('bag of possible origins locations has been successfully generated')
bundle = []
for key in origin_container.keys():
origins = origin_container[key]
possible_nodes = graph_node_gdf.loc[possible_snap_nodes[key]]
origin_subset = grid_gdf.loc[origins]
origin_subset_snapped = gn.pandana_snap_points(origin_subset,
possible_nodes,
source_crs = 'epsg:4326',
target_crs = 'epsg:32638',
add_dist_to_node_col = True)
bundle.append(origin_subset_snapped)
grid_gdf_adjusted = pd.concat(bundle)
```
We set the grid_gdf back to this conflict adjusted file - so we carry forward the changes we just made.
(for those working through the script, might be worth comparing grid_gdf_adjusted and grid_gdf to see the change this block makes)
```
grid_gdf = grid_gdf_adjusted
```
### Adjust acceptable destinations for each node for the war
We repeat the process in the last cell, but this time for graph nodes snapped-to as destinations, and those snapped-to as origins.
```
#### for Destination nodes (dest_df.columns)
# take the nodes GDF
gdf = graph_node_gdf.copy()
# housekeep for ID datatype
gdf['node_ID'] = gdf['node_ID'].astype('str')
# take only the nodes that also appear in the dest_df columns (i.e. the OD matrix). Cols = Dests
gdf = gdf.loc[gdf.node_ID.isin(list(dest_df.columns))]
# set the index as the node ID
gdf = gdf.set_index('node_ID')
# generate our reference dictionary for snapped-to destination nodes
dest_container = AggressiveSpatialIntersect(gdf, merged_dists)
# repeat process for Origin nodes (dest_df.index)
gdf = graph_node_gdf.copy()
# remember, the index of dest_df (the OD matrix) is the nodes snapped-to by origin points
gdf = gdf.loc[gdf.node_ID.isin(list(dest_df.index))]
gdf = gdf.set_index('node_ID')
# generate our reference dictionary
origin_snap_container = AggressiveSpatialIntersect(gdf, merged_dists)
```
### Working out the Min-time for each polygon
This is a fairly crucial block. Here, we iterate through the polygons, take only the valid origins and destinations in that polygon, and work out the minimum time, for each selected origin, to a valid destination. This is how we prevent people crossing borders - we effectively run N small-universe analyses, where the rest of the country outside the polygon doesn't exist! Remember this is on-network travel only.
```
bundle = []
# for each polygon
for key in origin_snap_container.keys():
# select the origins which exist in this polygon
origins = origin_snap_container[key]
# select the destinations which exist in this polygon
destinations = dest_container[key]
# subset the OD-matrix to include just these origins and destinations
Q = dest_df[destinations].loc[origins]
# make a new column, min-time - for each origin, its closest destination
Q['min_time'] = Q.min(axis = 1)
# subset to just this column
Q2 = Q[['min_time']]
# append to bundle
bundle.append(Q2)
# concatenate results
Q3 = pd.concat(bundle)
# add to the OD matrix
dest_df['min_time'] = Q3['min_time']
```
### Return to Normal Process
```
grid_gdf = grid_gdf.rename(columns = {'NN':'O_ID'})
```
### Merge on min Time
Here, we add the min-time on the network for each origin node to each origin point. This is a large 1-to-many join for each snapped-to origin node (c. 36k unique origin nodes on the network being joined over to c. 560k origin points, many of which have duplicate closest nodes, as you would expect).
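The join itself relies on pandas index alignment: with the points indexed by their snapped-to node ID, assigning a node-level Series broadcasts each node's value to every point that shares it. A minimal sketch with made-up IDs:

```python
import pandas as pd

# Node-level minimum times (one row per unique network node).
node_min_time = pd.Series({'n1': 120.0, 'n2': 300.0}, name='min_time')
# Origin points; several points share the same nearest node.
points = pd.DataFrame({'PointID': ['p1', 'p2', 'p3'],
                       'O_ID':    ['n1', 'n1', 'n2']}).set_index('O_ID')
points['on_network_time'] = node_min_time  # 1-to-many join via the index
points = points.reset_index().set_index('PointID')
print(points['on_network_time'].tolist())  # [120.0, 120.0, 300.0]
```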
```
grid_gdf = grid_gdf.reset_index()
grid_gdf = grid_gdf.set_index(grid_gdf['O_ID'])
grid_gdf['on_network_time'] = dest_df['min_time']
grid_gdf = grid_gdf.set_index('PointID')
```
### Add origin node distance to network - walking time
We now have 2/3 of the puzzle done for each journey - the walk from the network to the destination (baked in to all OD matrix values now) and the on-network minimum time to a valid destination (in the same polygon as the origin point!). Now, we go ahead and add the walk time from the specific origin point to the network.
```
grid = grid_gdf
grid = add_elevation(grid, 'Longitude','Latitude', srtm_pth)
grid = grid.reset_index()
grid = grid.set_index('O_ID')
grid['node_elev'] = match_node_elevs['point_elev']
grid = grid.set_index('PointID')
grid = generate_walktimes(grid, start = 'point_elev', end = 'node_elev', dist = 'NN_dist', max_walkspeed = offroad_speed)
grid = grid.rename({'node_elev':'nr_node_on_net_elev',
'walkspeed':'walkspeed_to_net',
'walk_time':'walk_time_to_net',
'NN_dist':'NN_dist_to_net'}, axis = 1)
# thus, the process is complete - the total time is the network time, plus the walk time to the network.
grid['total_time_net'] = grid['on_network_time'] + grid['walk_time_to_net']
```
### Calculate Direct Walking Time (not using road network), vs. network Time
In some cases, it won't make sense to use the network at all - walking a long way to the network, only to drive for two minutes and then walk back off it again, is clearly inefficient. To account for this, we also calculate what happens if you walk in a straight line towards your closest destination - ignoring roads. If this time is less than the network time, we use it instead.
To do this, we take our dataframe of acceptable destinations, and snap the origin points _Directly_ to the destinations - then work out the corresponding elevations, and thus travel times.
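The decision rule applied at the end of the next cell is just a row-wise minimum over the two candidate journey times; sketched on made-up values:

```python
import pandas as pd

# For each origin, take the faster of walking directly and using the network.
times = pd.DataFrame({'walk_time_direct': [600.0, 5000.0],
                      'total_time_net':   [900.0, 1200.0]})
times['PLOT_TIME_SECS'] = times[['walk_time_direct', 'total_time_net']].min(axis=1)
print(times['PLOT_TIME_SECS'].tolist())  # [600.0, 1200.0]
```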
```
bundle = []
W = graph_node_gdf.copy()
W['node_ID'] = W['node_ID'].astype(str)
W = W.set_index('node_ID')
# Generate dictionary of locations by homogeneously controlled polygon
locations_gdf = gpd.GeoDataFrame(acceptable_df, geometry = 'geometry', crs = {'init':'epsg:4326'})
locations_container = AggressiveSpatialIntersect(locations_gdf, merged_dists)
# for each polygon, snap the origins in that polygon to the acceptable destinations
for key in origin_container.keys():
# select subset of all origin points - only the ones in this polygon
origins = origin_container[key]
origin_subset = grid.copy()
origin_subset = origin_subset.loc[origins]
# select subset of all locations - only the ones in this polygon
locations = locations_gdf.loc[locations_container[key]]
# control for no points of service in this polygon. Early end! No pandana snap.
if len(locations) < 1:
origin_subset['NN'] = None
origin_subset['NN_dist'] = None
bundle.append(origin_subset)
else:
# Here, we find the closest hospital to each origin point, as the crow flies, using pandana snap
origin_subset_snapped = gn.pandana_snap_points(origin_subset,
locations,
source_crs = 'epsg:4326',
target_crs = 'epsg:32638',
add_dist_to_node_col = True)
bundle.append(origin_subset_snapped)
grid_gdf_adjusted = pd.concat(bundle)
grid = grid_gdf_adjusted
# Y is now a copy of the grid to save space
Y = grid.copy()
objs = []
if len(Y.loc[Y['NN'].isnull() == True]) > 0:
Y2 = Y.loc[Y['NN'].isnull() == True]
Y2['walkspeed_direct'] = 0
Y2['walk_time_direct'] = 9999999
Y2['NN_dist_direct'] = 9999999
objs.append(Y2)
# add location elevations
location_elevs = add_elevation(locations_gdf, 'Longitude','Latitude', srtm_pth)
# match on to the grid the elevation of the nearest location to that point
Y1 = Y.loc[Y['NN'].isnull() == False]
Y1['NN'] = Y1['NN'].astype(int)
Y1 = Y1.set_index('NN')
Y1['dest_NN_elev'] = location_elevs['point_elev']
Y1 = Y1.reset_index()
# generate walktimes for each grid point directly to the nearest point of service
Y1 = generate_walktimes(Y1, start = 'point_elev', end = 'dest_NN_elev', dist = 'NN_dist', max_walkspeed = offroad_speed).reset_index()
# housekeeping - rename columns for easier identification
Y1 = Y1.rename({'walkspeed':'walkspeed_direct',
'walk_time':'walk_time_direct',
'NN_dist':'NN_dist_direct'}, axis = 1)
objs.append(Y1)
grid = pd.concat(objs)
# take the MINIMUM travel time of a.) directly walking there, and b.) using the network
grid['PLOT_TIME_SECS'] = grid[['walk_time_direct','total_time_net']].min(axis = 1)
# convert to minutes
grid['PLOT_TIME_MINS'] = grid['PLOT_TIME_SECS'] / 60
```
### Generate Output Raster
At this point, all of the maths is done. We want to load our results onto a raster for visualization. We start by copying the input raster - the Worldpop grid - which made all of this possible. Then, we adjust the metadata to allow for another band of data to be added. This band will represent the travel time to the closest facility for that grid cell. Hence, the resultant raster will be dual-band - the first layer, the population, and the second layer, the accessibility of the people living in that gridcell.
```
# This is the raster we start from
rst_fn = os.path.join(pth,'pop18_resampled.tif')
# set output filename + filepath
out_fn = os.path.join(basepth,'output_layers','Round 3','%s.tif' % subset)
# Copy the metadata of the WorldPop raster
rst = rt.open(rst_fn, 'r')
meta = rst.meta.copy()
D_type = rt.float64
# By upping count to 2, we allow for a second data band
meta.update(compress='lzw', dtype = D_type, count = 2)
# open both the template file and the new file
with rt.open(out_fn, 'w', **meta) as out:
with rt.open(rst_fn, 'r') as pop:
# this is where we create a generator of geom, value pairs to use in rasterizing
shapes = ((geom,value) for geom, value in zip(grid.geometry, grid.PLOT_TIME_MINS))
# we copy our population layer
population = pop.read(1).astype(D_type)
cpy = population.copy()
# to generate the travel times, we rasterize the geometry: value pairs in the Grid GeoDataFrame
travel_times = features.rasterize(shapes=shapes, fill=0, out=cpy, transform=out.transform)
# We then write these bands out, one after another
out.write_band(1, population)
out.write_band(2, travel_times)
# That's it - we are done for this scenario
print('**process complete**')
```
### Generate Zonal Stats
This function is taken from Ben Stewart's GOSTRocks library. The only difference is the setting of 'all touched' to False to prevent over-counting of the population.
We will use this function to summarize the number of people, in any given polygon, that have access / lack access to a functioning, valid facility within X minutes.
```
def zonalStats(inShp, inRaster, bandNum=1, mask_A = None, reProj = False, minVal = '', maxVal = '', verbose=False , rastType='N', unqVals=[]):
import sys, os, inspect, logging, json
import rasterio, affine
import pandas as pd
import geopandas as gpd
import numpy as np
from collections import Counter
from shapely.geometry import box
from affine import Affine
from rasterio import features
from rasterio.mask import mask
from rasterio.features import rasterize
from rasterio.warp import reproject, Resampling
from osgeo import gdal
''' Run zonal statistics against an input shapefile
INPUT VARIABLES
inShp [string or geopandas object] - path to input shapefile
inRaster [string or rasterio object] - path to input raster
OPTIONAL
bandNum [integer] - band in raster to analyze
reProj [boolean] - whether to reproject data to match, if not, raise an error
minVal [number] - if defined, will only calculation statistics on values above this number
verbose [boolean] - whether to be loud with responses
rastType [string N or C] - N is numeric and C is categorical. Categorical returns counts of numbers
unqVals [array of numbers] - used in categorical zonal statistics, tabulates all these numbers, will report 0 counts
mask_A [numpy boolean mask] - mask the desired band using an identical shape boolean mask. Useful for doing conditional zonal stats
RETURNS
array of arrays, one for each feature in inShp
'''
if isinstance(inShp, str):
inVector = gpd.read_file(inShp)
else:
inVector = inShp
if isinstance(inRaster, str):
curRaster = rasterio.open(inRaster, 'r+')
else:
curRaster = inRaster
# If mask is not none, apply mask
if mask_A is not None:
curRaster.write_mask(np.invert(mask_A))
outputData=[]
if inVector.crs != curRaster.crs:
if reProj:
inVector = inVector.to_crs(curRaster.crs)
else:
raise ValueError("Input CRS do not match")
fCount = 0
tCount = len(inVector['geometry'])
#generate bounding box geometry for raster bbox
b = curRaster.bounds
rBox = box(b[0], b[1], b[2], b[3])
for geometry in inVector['geometry']:
#This test is used in case the geometry extends beyond the edge of the raster
# I think it is computationally heavy, but I don't know of an easier way to do it
if not rBox.contains(geometry):
geometry = geometry.intersection(rBox)
try:
fCount = fCount + 1
if fCount % 1000 == 0 and verbose:
print("Processing %s of %s" % (fCount, tCount))
# get pixel coordinates of the geometry's bounding box
ul = curRaster.index(*geometry.bounds[0:2])
lr = curRaster.index(*geometry.bounds[2:4])
'''
TODO: There is a problem with the indexing - if the shape falls outside the boundaries, it errors
I want to change it to just grab what it can find, but my brain is wrecked and I cannot figure it out
print(geometry.bounds)
print(curRaster.shape)
print(lr)
print(ul)
lr = (max(lr[0], 0), min(lr[1], curRaster.shape[1]))
ul = (min(ul[0], curRaster.shape[0]), min(ul[1]))
'''
# read the subset of the data into a numpy array
window = ((float(lr[0]), float(ul[0]+1)), (float(ul[1]), float(lr[1]+1)))
if mask_A is not None:
data = curRaster.read(bandNum, window=window, masked = True)
else:
data = curRaster.read(bandNum, window=window, masked = False)
# create an affine transform for the subset data
t = curRaster.transform
shifted_affine = Affine(t.a, t.b, t.c+ul[1]*t.a, t.d, t.e, t.f+lr[0]*t.e)
# rasterize the geometry
mask = rasterize(
[(geometry, 0)],
out_shape=data.shape,
transform=shifted_affine,
fill=1,
all_touched=False,
dtype=np.uint8)
# create a masked numpy array
masked_data = np.ma.array(data=data, mask=mask.astype(bool))
if rastType == 'N':
if minVal != '' or maxVal != '':
if minVal != '':
masked_data = np.ma.masked_where(masked_data < minVal, masked_data)
if maxVal != '':
masked_data = np.ma.masked_where(masked_data > maxVal, masked_data)
if masked_data.count() > 0:
results = [masked_data.sum(), masked_data.min(), masked_data.max(), masked_data.mean()]
else :
results = [-1, -1, -1, -1]
else:
results = [masked_data.sum(), masked_data.min(), masked_data.max(), masked_data.mean()]
if rastType == 'C':
if len(unqVals) > 0:
xx = dict(Counter(data.flatten()))
results = [xx.get(i, 0) for i in unqVals]
else:
results = np.unique(masked_data, return_counts=True)
outputData.append(results)
except Exception as e:
print(e)
outputData.append([-1, -1, -1, -1])
return outputData
```
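The masking logic at the heart of `zonalStats` can be illustrated without any raster I/O: mask the population band wherever the travel-time band exceeds a threshold, then sum what remains (values here are made up):

```python
import numpy as np

pop = np.array([[10., 20.], [30., 40.]])   # population per cell
tt  = np.array([[15., 45.], [25., 90.]])   # travel time per cell, minutes
# Mask cells whose travel time exceeds 30 minutes...
mask_obj = np.ma.masked_where(tt > 30, tt).mask
# ...then sum the population over the unmasked (accessible) cells only.
accessible_pop = np.ma.array(pop, mask=mask_obj).sum()
print(accessible_pop)  # 10.0 + 30.0 = 40.0
```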
# Additional Analyses / Visualization Aids
The main process is now finished. However, in order to generate the many output images and special analyses for the Yemen project, standalone sub-processes have been devised to generate outputs like:
- District and national level zonal statistics
- 2016 v 2018 change in access detection
- Mash-ups with the vulnerability matrix data
We detail four of these examples below. Others can be found at the foot of the automated version of the file (these were used later).
### Running Zonal Stats
Here, we make use of our zonal statistics function to generate summary values for an administrative boundary layer.
The script is currently set up to accept either a national (adm-0) level shapefile or a district (adm-2) level shapefile - though, with minor modification, it could be used to generate zonal stats for any given polygon layer.
```
# sometimes, we may want to run the script from top to bottom without doing this bit. Thus, we have a control variable,
# zonal_stats, which if set to 0 will disable this chunk. Ensure zonal_stats != 0 to continue.
if zonal_stats == 0:
pass
else:
# These are the two shapefiles we have routinely used. We generate outputs for both.
for resolution in ['national','district']:
# We pick up our raster from the write location (Don't change the output names!)
out_fn = os.path.join(r'C:\Users\charl\Documents\GOST\Yemen\output_layers','Round 3',
'%s.tif' % subset)
# Set the util path - where we can load the admin boundary polygon from
utils = r'C:\Users\charl\Documents\GOST\Yemen\util_files'
# Here we load the national-level shapefile - the Yemen bound
yemen_shp_name = os.path.join(utils, r'Yemen_bound.shp')
yemen_shp = gpd.read_file(yemen_shp_name)
# Reproject to WGS84 if necessary
if yemen_shp.crs != {'init': 'epsg:4326'}:
yemen_shp = yemen_shp.to_crs({'init': 'epsg:4326'})
# Pick up the district shapefile, and reproject if necessary
district_shp_name = os.path.join(r'C:\Users\charl\Documents\GOST\Yemen\VulnerabilityMatrix', r'VM.shp')
district_shp = gpd.read_file(district_shp_name)
if district_shp.crs != {'init': 'epsg:4326'}:
district_shp = district_shp.to_crs({'init': 'epsg:4326'})
# Here we read in the output raster we just created
inraster = out_fn
ras = rt.open(inraster, mode = 'r+')
# pop is the population band
pop = ras.read(1)
# tt_matrix is the travel time band
tt_matrix = ras.read(2)
# set the target shape
if resolution == 'national':
target_shp = yemen_shp
elif resolution == 'district':
target_shp = district_shp
# Add on the total population of the district to each district shape
mask_pop = np.ma.masked_where(pop > (200000), pop).mask
base_pop = zonalStats(target_shp, # target shp
inraster, # our output raster
bandNum = 1, # pick band 1 - the population band
mask_A = mask_pop, # use mask_pop, which removes values greater than 200,000 (i.e. errors)
reProj = False, # do not reproject
minVal = 0,
maxVal = np.inf,
verbose = True,
rastType='N') # as opposed to categorical
# the zonalStats function returns 4 outputs - sum, min, max and mean. We only want sum
cols = ['total_pop','min','max','mean']
temp_df = pd.DataFrame(base_pop, columns = cols)
# we match the sum back on to our original target_shp file
target_shp['total_pop'] = temp_df['total_pop']
target_shp.loc[target_shp['total_pop'] == -1, 'total_pop'] = 0
# Having added the base population, we now calculate the population within a range
# of time thresholds from the destination set.
# we do this for four time thresholds. Note the change in the mask argument:
for time_thresh in [30,60,120, 240]:
# this is our new, special, mask - we mask all values ABOVE the threshold
mask_obj = np.ma.masked_where(tt_matrix > (time_thresh), tt_matrix).mask
# we pass this to our same zonal stats function. We are only counting population values below the threshold now
raw = zonalStats(target_shp,
inraster,
bandNum = 1,
mask_A = mask_obj,
reProj = False,
minVal = 0,
maxVal = np.inf,
verbose = True,
rastType='N')
# create temp_df of the results
cols = ['pop_%s' % time_thresh,'min','max','mean']
temp_df = pd.DataFrame(raw, columns = cols)
# Add in this new population count to the file
target_shp['pop_%s' % time_thresh] = temp_df['pop_%s' % time_thresh]
target_shp.loc[target_shp['pop_%s' % time_thresh] == -1, 'pop_%s' % time_thresh] = 0
# note the percentage of the total population that has access within this threshold
target_shp['frac_%s' % time_thresh] = target_shp['pop_%s' % time_thresh] / target_shp['total_pop']
target_shp['frac_%s' % time_thresh] = target_shp['frac_%s' % time_thresh].replace([np.inf, -np.inf], 0)
target_shp['frac_%s' % time_thresh] = target_shp['frac_%s' % time_thresh].fillna(0)
# Save to file. The only difference is that we save a summary .csv instead of a .shp for national - as we never want to
# visualize the national-level results (it would be one single block of color)
if resolution == 'national':
print('saving national')
outter = target_shp[['total_pop','pop_30','frac_30','pop_60','frac_60','pop_120','frac_120','pop_240','frac_240']]
outter.to_csv(os.path.join(basepth, 'output_layers','Round 3','%s_zonal_%s.csv'% (subset, resolution)))
else:
print('saving district')
target_shp['abs_pop_iso'] = target_shp['total_pop'] - target_shp['pop_30']
target_shp.to_file(os.path.join(basepth, 'output_layers','Round 3','%s_zonal_%s.shp' % (subset, resolution)), driver = 'ESRI Shapefile')
```
### Generate Change Raster
This analysis assumes that the user has already run a scenario for 2016, and a scenario for 2018 - with all other settings held entirely constant.
We do some basic raster maths on the travel time band of two output rasters to generate a delta / change raster.
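The raster maths is plain array arithmetic; on made-up values, a positive delta means travel time fell (access improved), and the third band weights that change by population:

```python
import numpy as np

arr_pre  = np.array([[30., 60.], [120., 10.]])   # 2016 travel times, minutes
arr_post = np.array([[20., 90.], [120., 40.]])   # 2018 travel times, minutes
population = np.array([[100., 50.], [10., 200.]])
delta = arr_pre - arr_post                        # > 0: faster in 2018
person_minutes = delta * population               # band 3: people x minutes changed
print(delta.tolist())           # [[10.0, -30.0], [0.0, -30.0]]
print(person_minutes.tolist())  # [[1000.0, -1500.0], [0.0, -6000.0]]
```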
```
# set to 'off' to continue
test_mode = 'on'
if test_mode == 'on':
pass
else:
# name the output file
subset = r'PHCs_2018_conflict_delta'
# this is the first raster
pre_raster = os.path.join(basepth, 'output_layers','Round 3','driving_24th_HERAMS_PHC_NoConflict.tif')
# this is the second raster
post_raster = os.path.join(basepth, 'output_layers','Round 3','driving_24th_HERAMS_PHCs_ConflictAdj.tif')
# define the output / delta / change raster filename and location
out_fn = os.path.join(basepth,'output_layers','Round 3','%s.tif' % subset)
# open the first raster, read in band 2
pre = rasterio.open(pre_raster, 'r')
arr_pre = pre.read(2)
# repeat for second raster
post = rasterio.open(post_raster, 'r')
arr_post = post.read(2)
# subtract post from pre
delta = arr_pre - arr_post
# Update metadata from template raster
rst_fn = os.path.join(pth,'pop18_resampled.tif')
rst = rasterio.open(rst_fn, 'r')
meta = rst.meta.copy()
D_type = rasterio.float64
meta.update(compress='lzw', dtype = D_type, count = 3)
# write out change raster
with rasterio.open(out_fn, 'w', **meta) as out:
with rasterio.open(rst_fn, 'r') as pop:
# keep the original population layer
population = pop.read(1).astype(D_type)
# the only surprise here is band 3 - the number of people x the minutes of change.
# Useful for zonal stats to identify the areas which changed the most for the better or worse.
out.write_band(1, population)
out.write_band(2, delta)
out.write_band(3, delta * population)
```
### Relationship Graphs: Vulnerability Component Scores, Accessibility
Here, we generate some line plots to try to identify any simple / obvious relationships between Vulnerability Matrix factor scores and accessibility.
```
# adjust test_mode variable to != on to continue
if test_mode == 'on':
pass
else:
import seaborn as sns
import matplotlib.pyplot as plt
# Here, we need to import a prepared target_shp file. This has the VM details in it + access stats from the zonal stats step.
# this is important - it must have both VM data AND access data from zonal stats in the same file.
# the path below is the path to the raw VM file - which should first be run through the zonal stats step before executing this block.
target_shp = gpd.read_file(os.path.join(r'C:\Users\charl\Documents\GOST\Yemen\VulnerabilityMatrix', r'VM.shp'))
# rename columns for easier access
target_shp2 = target_shp.rename({
'Overall Vu':'Overall Vulnerability Level',
'Health Sys':'Health System Capacity Score',
'Hazards':'Hazard Score',
'Impact on':'Impact on Exposed Population Score',
'Food Secur':'Food Security Score',
'Morbidity':'Morbidity Score',
'Nutrition':'Nutrition Score',
'WASH':'WASH Score',
'Social Det':'Social Determinants and Health Outcomes Score',
'total_pop':'Total Population',
}, axis = 1)
# pick out the factors / scores we want to generate graphs for
factors = ['Overall Vulnerability Level','Health System Capacity Score',
'Hazard Score','Impact on Exposed Population Score','Food Security Score',
'WASH Score','Social Determinants and Health Outcomes Score']
# define our time thresholds of interest
fracs = {'frac_30':'30 minutes',
'frac_60': '1 hour',
'frac_120': '2 hours',
'frac_240': '4 hours'}
# now, for each factor
for groupa in factors:
# we group by each factor in turn. These factors take values between 1 and 7, so we get up to 7 groups
subg = target_shp2[[groupa,'Total Population','pop_30','pop_60','pop_120','pop_240']].groupby(groupa).sum()
# work out the fraction which has access in each district
for i in [30, 60, 120, 240]:
subg['frac_%s' % i] = subg['pop_%s' % i] / subg['Total Population']
# We plot out the average fraction that has access for each value present in the VM factor
plotter = subg[['frac_30','frac_60','frac_120','frac_240']]
plt.clf()
plt.figure(figsize=(8,4))
# title says it all
title = 'Fraction of Population with access to a HeRAMS hospital for each value \nof the WHO %s' % (groupa)
# set color palette
pal = sns.cubehelix_palette(4, start=2.7, rot=.1, light = .7, dark=.1)
ax = sns.lineplot(data = plotter,
palette = pal,
dashes = False).set_title(title)
# add a legend and save the plot down
plt.legend(loc='center right', bbox_to_anchor=(1.2, 0.5), ncol=1)
plt.savefig(os.path.join(r'C:\Users\charl\Documents\GOST\Yemen\output_layers\VM_graphs','pop_chart_%s' % groupa), bbox_inches='tight')
```
### Scatter plots: Vulnerability Matrix
The previous analysis sees us generate plots which are summaries for each VM score for a given factor. Although interesting, it is also useful to see the distribution of districts within each VM factor band - so we can see if there is tight grouping, or not. We create a scatter plot to do that here. The code is extremely similar to the previous box.
```
if test_mode == 'on':
pass
else:
import seaborn as sns
import matplotlib.pyplot as plt
# again, make sure you have a correctly set up 'target_shp' file
target_shp2 = target_shp.rename({
'Overall Vu':'Overall Vulnerability Level',
'Health Sys':'Health System Capacity Score',
'Hazards':'Hazard Score',
'Impact on':'Impact on Exposed Population Score',
'Food Secur':'Food Security Score',
'Morbidity':'Morbidity Score',
'Nutrition':'Nutrition Score',
'WASH':'WASH Score',
'Social Det':'Social Determinants and Health Outcomes Score',
'total_pop':'Total Population',
}, axis = 1)
p = 7
factors = ['Overall Vulnerability Level','Health System Capacity Score',
'Hazard Score','Impact on Exposed Population Score','Food Security Score',
'WASH Score','Social Determinants and Health Outcomes Score']
fracs = {'frac_30':'30 minutes',
'frac_60': '1 hour',
'frac_120': '2 hours',
'frac_240': '4 hours'}
# we generate scatter plots for each accessibility threshold
for frac in ['frac_30','frac_60','frac_120','frac_240']:
# and for each Vulnerability matrix factor
for groupa in factors:
plt.clf()
plt.figure(figsize=(8,4))
title = 'Fraction of district population living within %s of nearest HeRAMS hospital, \nbreakdown by WHO %s' % (fracs[frac], groupa)
# this time, we are not running a groupby function - so we plot all values
d = target_shp2[[groupa,frac]]
# set color palette
pal = sns.cubehelix_palette(7, rot=-.5, light = .8, dark=.2)
# plot on the axes
ax = sns.swarmplot(y = frac, x = groupa, data=d, palette=pal).set_title(title)
# generate some boxplots to go on top to describe the data
ax = sns.boxplot(y = frac, x = groupa, data=d, whis=np.inf, color = "1", linewidth = 0.7, dodge = True)
# save down
plt.savefig(os.path.join(r'C:\Users\charl\Documents\GOST\Yemen\output_layers\VM_graphs','Boxplot_%s_%s.png'% (groupa, frac)), bbox_inches='tight')
```
```
import re
import os
import sys
sys.path.insert(0, '../')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm  # used by the hist2d plots below
import seaborn as sns  # used by the distplot calls below
import scipy.stats as stats
import astropy.units as u
import pandas as pd
from src.utils import *
from src.GMM import *
%load_ext autoreload
%autoreload 2
drct = "../scripts/get_globular_clusters/result.txt"
summary_table = pd.read_csv(drct, sep='\t')
summary_table = summary_table.iloc[1:,:]
summary_table.rename(columns={'# Name ':'Name'}, inplace=True)
summary_table['Name'] = summary_table['Name'].str.strip()
summary_table.set_index('Name', inplace=True)
summary_table.head()
summary_table.loc['NGC_104_47Tuc', 'rscale']
```
### Read and initial plot of data
```
drct = "../scripts/get_globular_clusters/output"
gc = pd.read_csv(os.path.join(drct, "NGC_104_47Tuc.csv"), skiprows=54, sep=" ")
gc.head()
gc["r"] = np.hypot(gc.x, gc.y)
gc["pm"] = np.hypot(gc.pmx, gc.pmy)
gc.describe()
```
Spatial Distribution
```
plt.figure(figsize=(7,7))
H, xb, yb, _ = plt.hist2d(gc.x, gc.y, bins=200, range=[[-0.66,0.66],[-0.66,0.66]], norm=LogNorm(), cmap="gnuplot2")
sns.distplot(gc.r)
plt.xlabel("r")
```
Pruning by scale radius and upper limit of proper motion
```
gcs = gc[(gc.r<18.3/60) & (gc.pm<15)]
sns.distplot(gcs.r)
plt.xlabel("r")
```
Distribution of proper motion
```
plt.figure(figsize=(6,6))
H, xb, yb, _ = plt.hist2d(gcs.pmx, gcs.pmy, bins=100, range=[[-10,15],[-15,10]], norm=LogNorm(), cmap="gnuplot2")
sns.distplot(gcs.pm[gcs.pm<15])
plt.xlabel("PM")
H, xb, yb, _ = plt.hist2d(gc.r, gc.pm, bins=50, range=[[0,0.6],[0,10]], cmap="gnuplot2")
# R scale
plt.axvline(18.3/60, color="gold")
plt.xlabel("r")
plt.ylabel("PM")
```
### Binning Start From Here
Bin the distribution of pm in radial bins.
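The project's `profile_binning` helper is not shown here; a generic sketch of the same idea (assign stars to radial bins, summarize the proper motion per bin) on synthetic data would be:

```python
import numpy as np

# Synthetic stand-in data; the real call uses gc.r and gc.pm.
rng = np.random.default_rng(0)
r  = rng.uniform(0.1, 0.3, 1000)    # radii
pm = rng.normal(6.0, 1.0, 1000)     # proper motions
bins = np.logspace(np.log10(0.1), np.log10(0.3), 9)
idx = np.digitize(r, bins) - 1      # radial bin index of each star
r_rbin = 0.5 * (bins[:-1] + bins[1:])                 # bin centres
z_rbin = np.array([np.median(pm[idx == i])            # per-bin median PM
                   for i in range(len(bins) - 1)])
print(r_rbin.shape, z_rbin.shape)   # (8,) (8,)
```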
```
bins = np.logspace(np.log10(0.1), np.log10(0.3), 9)
sns.set_palette("RdBu", len(bins))
r_rbin, z_rbin = profile_binning(gc.r, gc.pm, bins=bins, z_clip=[2,15],
return_bin=False, plot=True)
```
Acquire all the stars from a particular bin:
```
z_bins = profile_binning(gcs.r, gcs.pm, bins=bins, z_clip=[2,15],
return_bin=True, plot=False)
z_bins["5"]
```
PM radial profile
```
plt.figure(figsize=(7,5))
plt.plot(r_rbin, z_rbin, "k-o")
plt.xlabel("r")
plt.ylabel("PM")
```
### GMM start from here
Perform the GMM decomposition. The best number of Gaussians is chosen as the one with the lowest BIC. You can also fix the number of components via the `n_comp` parameter of `GMM()`.
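The `GMM()` wrapper comes from `src.GMM`; for reference, the same BIC-based selection can be sketched with scikit-learn's `GaussianMixture` on a synthetic two-population sample (all values illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 1-D proper-motion sample: a dominant cluster plus a field population.
sample = np.concatenate([rng.normal(5.0, 0.5, 500),
                         rng.normal(9.0, 0.8, 120)]).reshape(-1, 1)
bics = []
for n in range(1, 5):
    gm = GaussianMixture(n_components=n, random_state=0).fit(sample)
    bics.append(gm.bic(sample))
best_n = int(np.argmin(bics)) + 1  # lowest BIC wins
print(best_n)
```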
```
gmm_bins = []
for i in range(len(bins)-1):
gcb = gcs[(bins[i]<gcs.r)&(gcs.r<bins[i+1])]
gmm = GMM(gcb.pm, max_n_comp=6, verbose=False)
gmm_bins.append(gmm)
```
Probability that a star belongs to the main population, evaluated on a subsample:
```
prob_main_pop = predict_main_pop(z_bins["5"]["pm"], gmm_bins[5])
prob_main_pop
```
Visualize GMM discrimination in each bin:
```
plot_predict_main_pop(gcs, bins, gmm_bins, p_thre=0.8)
```
Visualize GMM results combining bins:
```
for i, b in enumerate(bins[:-1]):
r = z_bins[str(i)]["r"]
pm = z_bins[str(i)]["pm"]
gmm = gmm_bins[i]
prob_main_pop = predict_main_pop(pm, gmm)
is_main_pop = prob_main_pop > 0.8
plt.scatter(r[is_main_pop], pm[is_main_pop], color="navy", alpha=0.1, s=2)
plt.scatter(r[~is_main_pop], pm[~is_main_pop], color="gold", alpha=0.3, s=5)
plt.axvline(bins[i], color="k", alpha=0.2)
plt.ylabel("Proper Motion (mas)")
plt.xlabel("Radius (deg)")
plt.ylim(0, 15)
plt.show()
```
# Surface Density (optional - not needed for the main analysis)
TODO: a much better estimate of the surface density than just a histogram
```
surface_density = []
for i, b in enumerate(bins[:-1]):
r = z_bins[str(i)]["r"]
pm = z_bins[str(i)]["pm"]
gmm = gmm_bins[i]
prob_main_pop = predict_main_pop(pm, gmm)
is_main_pop = prob_main_pop > 0.8
area = np.pi * (bins[i+1]**2 - b**2)
# count only the main-population stars in this annulus
surface_density.append(is_main_pop.sum() / area)
plt.plot((bins[:-1]+bins[1:])/2, surface_density)
import astropy
from astropy.modeling.functional_models import KingProjectedAnalytic1D
from astropy.modeling import models, fitting
x = (bins[:-1]+bins[1:])/2
y = surface_density
KP_init = KingProjectedAnalytic1D(amplitude = 1, r_core = 1., r_tide = 1)
fit_KP = fitting.LevMarLSQFitter()
KP = fit_KP(KP_init, x, y, maxiter=1000)
plt.figure(figsize=(8,5))
plt.loglog(x, y, 'ko')
plt.plot(x, KP(x), label='King profile')
KP.amplitude, KP.r_core, KP.r_tide
```
# Velocity Dispersion
Assume Plummer sphere model.
Still need to normalize.
Do better fit:
- https://symfit.readthedocs.io/en/stable/intro.html
- https://lmfit.github.io/lmfit-py/fitting.html
- and an MCMC (can use symfit)
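For reference, the isotropic Plummer sphere has a closed-form radial velocity dispersion, sigma_r^2(r) = G*M / (6*sqrt(r^2 + b^2)), so the Jeans integrals below can be sanity-checked against it (arbitrary units, G*M = b = 1):

```python
import numpy as np

r = np.linspace(0.0, 10.0, 6)
sigma_r = np.sqrt(1.0 / (6.0 * np.sqrt(r**2 + 1.0)))  # G*M = 1, b = 1
print(np.round(sigma_r, 3))  # dispersion falls monotonically with radius
```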
```
from galpy.df import jeans
from galpy import potential
x = (bins[:-1]+bins[1:])/2 / (float(summary_table.loc['NGC_104_47Tuc', 'rscale']) / 60)  # bin centres in units of the scale radius (rscale in arcmin, bins in deg)
%%time
for amp, scale in zip(np.linspace(10, 100, num=20), np.linspace(0.1, 5, num=20)):
pot = potential.PlummerPotential(amp=amp, b=scale)
model = np.array([jeans.sigmar(pot, r) for r in x])
# Grid Parameters for 47 Tuc
gc_amp = np.logspace(np.log10(9e5), np.log10(1.1e6), num=50) * u.solMass # The GC mass varies pretty wildly: 10^4 - few*10^6
gc_scale = np.linspace(0.5, 1.5, num=100)
bh_amp = np.linspace(0, 1e5, num=100) * u.solMass
# radius
radii = np.linspace(0, 10, num=1000) # units of radius / scale_radius
pot = potential.PlummerPotential(amp=gc_amp, b=gc_scale)
potential.PlummerPotential??
rnorm = ((x * u.deg).to_value(u.rad) / (18.3 *u.arcmin).to_value(u.rad))
from astropy import units as u
amp = 10**6 * u.solMass
scale = 1
pot = potential.PlummerPotential(amp=amp, b=scale)
model1 = np.array([jeans.sigmar(pot, r).to_value(u.km/u.s) for r in rnorm]) * u.km / u.s
plt.plot(x, model1, label='no BH')
bh_amp=10**3 * u.solMass
amp = 10**6 * u.solMass - bh_amp
pot = potential.PlummerPotential(amp=amp, b=scale) + potential.KeplerPotential(amp=bh_amp)
model2 = np.array([jeans.sigmar(pot, r).to_value(u.km/u.s) for r in rnorm]) * u.km / u.s
plt.plot(x, model2, label='1e3')
bh_amp=10**4 * u.solMass
amp = 10**6 * u.solMass - bh_amp
pot = potential.PlummerPotential(amp=amp, b=scale) + potential.KeplerPotential(amp=bh_amp)
model3 = np.array([jeans.sigmar(pot, r).to_value(u.km/u.s) for r in rnorm]) * u.km / u.s
plt.plot(x, model3, label='1e4')
plt.legend()
import astropy.units as u
from lmfit import Parameters, fit_report, minimize
import galpy
from galpy.df import jeans
from galpy import potential
fit_params = Parameters()
fit_params.add('amp', value=100000, min=0)
fit_params.add('scale', value=1, min=0)
def residual(pars, x, data=None):
"""Model and subtract data."""
vals = pars.valuesdict()
amp = vals['amp']
scale = vals['scale']
pot = potential.PlummerPotential(amp=amp, b=scale)
model = np.array([jeans.sigmar(pot, r) for r in x])
if data is None:
return model
return model - data
# getting data
vel_disp = []
for i, b in enumerate(bins[:-1]):
r = z_bins[str(i)]["r"]
pm = z_bins[str(i)]["pm"]
gmm = gmm_bins[i]
prob_main_pop = predict_main_pop(pm, gmm)
is_main_pop = prob_main_pop > 0.8
vs = ((4 * u.kpc) * pm[is_main_pop] * u.mas/u.yr).to_value(u.km*u.rad/u.s)
disp = np.var(vs)
vel_disp.append(disp)
x = (bins[:-1]+bins[1:])/2
data = vel_disp
plt.plot(x, vel_disp)
mi = minimize(residual, fit_params, args=(x,), kws={'data': data})
print(fit_report(mi))
plt.plot(x, data, 'b')
plt.plot(x, data + residual(mi.params, x, data=data), 'r', label='best fit')
plt.legend(loc='best')
plt.show()
```
With a BH
```
import astropy.units as u
from lmfit import Parameters, fit_report, minimize
import galpy
from galpy.df import jeans
from galpy import potential
fit_params = Parameters()
fit_params.add('amp', value=100000, min=0)
fit_params.add('scale', value=1, min=0)
fit_params.add('bh_amp', value=10000, min=0)
def residual(pars, x, data=None):
"""Model and subtract data."""
vals = pars.valuesdict()
amp = vals['amp']
scale = vals['scale']
bh_amp = vals['bh_amp']
pot = potential.PlummerPotential(amp=amp, b=scale) + potential.KeplerPotential(amp=bh_amp)
model = np.array([jeans.sigmar(pot, r) for r in x])
if data is None:
return model
return model - data
# getting data
vel_disp = []
for i, b in enumerate(bins[:-1]):
r = z_bins[str(i)]["r"]
pm = z_bins[str(i)]["pm"]
gmm = gmm_bins[i]
prob_main_pop = predict_main_pop(pm, gmm)
is_main_pop = prob_main_pop > 0.8
vs = ((4 * u.kpc) * pm[is_main_pop] * u.mas/u.yr).to_value(u.km*u.rad/u.s)
disp = np.var(vs)
vel_disp.append(disp)
x = (bins[:-1]+bins[1:])/2
data = vel_disp
mi = minimize(residual, fit_params, args=(x,), kws={'data': data})
plt.plot(x, data, 'b')
plt.plot(x, data + residual(mi.params, x, data=data), 'r', label='best fit')
plt.legend(loc='best')
plt.show()
print(fit_report(mi))
# raise Exception('Run MCMC?')
# import lmfit as lf
# mi.params.add('__lnsigma', value=np.log(0.1), min=np.log(0.001), max=np.log(2))
# res = lf.minimize(residual, method='emcee', args=(x,), kws={'data': data}, nan_policy='omit', burn=300, steps=1000, thin=20,
# params=mi.params, is_weighted=False, progress=True)
# plt.plot(res.acceptance_fraction)
# plt.xlabel('walker')
# plt.ylabel('acceptance fraction')
# plt.show()
# import corner
# emcee_plot = corner.corner(res.flatchain, labels=res.var_names,
# truths=list(res.params.valuesdict().values()))
```
```
# Define a list of dictionaries, each representing one data sample.
measurements = [{'city': 'Dubai', 'temperature': 33.}, {'city': 'London', 'temperature': 12.}, {'city': 'San Fransisco', 'temperature': 18.}]
# Import DictVectorizer from sklearn.feature_extraction.
from sklearn.feature_extraction import DictVectorizer
# Initialize the DictVectorizer feature extractor.
vec = DictVectorizer()
# Print the transformed feature matrix.
print(vec.fit_transform(measurements).toarray())
# Print the meaning of each feature dimension.
print(vec.get_feature_names())
# Import the 20-newsgroups fetcher from sklearn.datasets.
from sklearn.datasets import fetch_20newsgroups
# Download the news samples on the fly; subset='all' stores all ~20k documents in `news`.
news = fetch_20newsgroups(subset='all')
# Import train_test_split for splitting the dataset
# (sklearn.model_selection replaces the removed sklearn.cross_validation module).
from sklearn.model_selection import train_test_split
# Split news.data: 25% of the documents for testing, 75% for training.
X_train, X_test, y_train, y_test = train_test_split(news.data, news.target, test_size=0.25, random_state=33)
# Import CountVectorizer from sklearn.feature_extraction.text.
from sklearn.feature_extraction.text import CountVectorizer
# Initialize CountVectorizer with the default configuration
# (which does not remove English stop words) and assign it to count_vec.
count_vec = CountVectorizer()
# Convert the raw training and test text into feature vectors using term counts only.
X_count_train = count_vec.fit_transform(X_train)
X_count_test = count_vec.transform(X_test)
# Import the naive Bayes classifier from sklearn.naive_bayes.
from sklearn.naive_bayes import MultinomialNB
# Initialize the classifier with the default configuration.
mnb_count = MultinomialNB()
# Fit the naive Bayes classifier on the CountVectorizer features (stop words not removed).
mnb_count.fit(X_count_train, y_train)
# Print the model's accuracy.
print('The accuracy of classifying 20newsgroups using Naive Bayes (CountVectorizer without filtering stopwords):', mnb_count.score(X_count_test, y_test))
# Store the predicted labels in y_count_predict.
y_count_predict = mnb_count.predict(X_count_test)
# Import classification_report from sklearn.metrics.
from sklearn.metrics import classification_report
# Print more detailed classification metrics.
print(classification_report(y_test, y_count_predict, target_names=news.target_names))
# Import TfidfVectorizer from sklearn.feature_extraction.text.
from sklearn.feature_extraction.text import TfidfVectorizer
# Initialize TfidfVectorizer with the default configuration
# (which does not remove English stop words) and assign it to tfidf_vec.
tfidf_vec = TfidfVectorizer()
# Convert the raw training and test text into feature vectors using tf-idf weights.
X_tfidf_train = tfidf_vec.fit_transform(X_train)
X_tfidf_test = tfidf_vec.transform(X_test)
# Evaluate the new feature representation with a default naive Bayes classifier
# on the same training and test data.
mnb_tfidf = MultinomialNB()
mnb_tfidf.fit(X_tfidf_train, y_train)
print('The accuracy of classifying 20newsgroups with Naive Bayes (TfidfVectorizer without filtering stopwords):', mnb_tfidf.score(X_tfidf_test, y_test))
y_tfidf_predict = mnb_tfidf.predict(X_tfidf_test)
print(classification_report(y_test, y_tfidf_predict, target_names=news.target_names))
# Reusing the tools imported above, initialize CountVectorizer and TfidfVectorizer
# with stop-word filtering enabled.
count_filter_vec, tfidf_filter_vec = CountVectorizer(analyzer='word', stop_words='english'), TfidfVectorizer(analyzer='word', stop_words='english')
# Vectorize the training and test text with the stop-word-filtering CountVectorizer.
X_count_filter_train = count_filter_vec.fit_transform(X_train)
X_count_filter_test = count_filter_vec.transform(X_test)
# Vectorize the training and test text with the stop-word-filtering TfidfVectorizer.
X_tfidf_filter_train = tfidf_filter_vec.fit_transform(X_train)
X_tfidf_filter_test = tfidf_filter_vec.transform(X_test)
# Initialize a default naive Bayes classifier and evaluate it on the filtered count features.
mnb_count_filter = MultinomialNB()
mnb_count_filter.fit(X_count_filter_train, y_train)
print('The accuracy of classifying 20newsgroups using Naive Bayes (CountVectorizer by filtering stopwords):', mnb_count_filter.score(X_count_filter_test, y_test))
y_count_filter_predict = mnb_count_filter.predict(X_count_filter_test)
# Initialize another default naive Bayes classifier and evaluate it on the filtered tf-idf features.
mnb_tfidf_filter = MultinomialNB()
mnb_tfidf_filter.fit(X_tfidf_filter_train, y_train)
print('The accuracy of classifying 20newsgroups with Naive Bayes (TfidfVectorizer by filtering stopwords):', mnb_tfidf_filter.score(X_tfidf_filter_test, y_test))
y_tfidf_filter_predict = mnb_tfidf_filter.predict(X_tfidf_filter_test)
# Detailed performance report for both models.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_count_filter_predict, target_names=news.target_names))
print(classification_report(y_test, y_tfidf_filter_predict, target_names=news.target_names))
```
# **Preprocessing**
The purpose of this notebook is to execute preprocessing by combining posts and comments. The raw data are in the `JSON` format and we need to transform them into data frames for the further analysis.
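The raw dumps store one record per line as a Python-style dict literal, which is why the loader below uses `ast.literal_eval` rather than `json.loads` (strict JSON requires double-quoted strings). A minimal sketch with an illustrative record (the field values here are made up):

```python
from ast import literal_eval

# One raw line as it might appear in the dump (field values are illustrative).
line = "{'id': 'abc123', 'author': 'someone', 'selftext': 'hello world'}"
record = literal_eval(line)  # safely parses the Python dict literal
print(record['id'], record['selftext'])
```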
```
import pandas as pd
import numpy as np
import json
from ast import literal_eval
import multiprocess as mp
```
### Load submission and comment data from the `txt` files.
```
subs = []
comments = []
# load submissions data
with open('depressed_submission.txt') as file:
for line in file:
if literal_eval(line)['selftext'] != '[removed]' and literal_eval(line)['selftext'] != '[deleted]' and len(literal_eval(line)['selftext']) != 0:
subs.append(literal_eval(line))
# load comment data
with open('depressed_comment.txt') as file:
for line in file:
if literal_eval(line)['body'] != '[removed]' and literal_eval(line)['body'] != '[deleted]' and len(literal_eval(line)['body']) != 0:
comments.append(literal_eval(line))
# formulate to dataframe
posts_df = pd.DataFrame(subs)[['id', 'title', 'author', 'subreddit', 'selftext']]
display(posts_df.head())
comments_df = pd.DataFrame(comments)[['id', 'author', 'link_id', 'parent_id', 'body']]
display(comments_df.head())
# rename columns' name for both dataframes
posts_df.rename(columns = {"id": "link_id", "author": "post_author", "selftext": "post_content"}, inplace = True)
comments_df.rename(columns = {"id": "comment_id", "author": "comment_author", "body": "comment_content"}, inplace = True)
# remove unnecessary characters in ids
comments_df.link_id = comments_df.link_id.apply(lambda x: x.split('_')[1])
comments_df.parent_id = comments_df.parent_id.apply(lambda x: x.split('_')[1])
comments_df.head()
# merge post df and comment df
df = posts_df.merge(comments_df, on = 'link_id')
# rearrange by link_id (:= post) and parent_id (:= thread)
df.sort_values(by = ['link_id', 'parent_id'], inplace = True)
df.head(10)
# remove deleted post contents or comment contents
df = df[(df.post_content != '[deleted]') & (df.post_content != '[removed]') & (df.comment_content != '[deleted]') & (df.comment_content != '[removed]')]
df = df[(df.post_content.str.len() != 0) & (df.comment_content.str.len() != 0)]
df.dropna(subset = ['post_content', 'comment_content'], inplace = True)
# remove deleted author names for both post and comment
df = df[(df.post_author != '[deleted]') & (df.post_author != '[removed]') & (df.comment_author != '[deleted]') & (df.comment_author != '[removed]')]
df = df[(df.post_author.str.len() != 0) & (df.comment_author.str.len() != 0)]
df.dropna(subset = ['post_author', 'comment_author'], inplace = True)
df = df.reset_index().drop('index', axis = 1)
df
# save dataframe
df.to_csv('depressed_df_convs.csv', index = False)
```
### 2. For large files, restart the notebook and load the saved dataframe directly to produce dyadic and multiparty conversations.
```
# open the processed dataframe
dtypes = {'link_id': str,
          'title': str,
          'post_author': str,
          'subreddit': str,
          'post_content': str,
          'comment_id': str,
          'comment_author': str,
          'parent_id': str,
          'comment_content': str}
df_chunks = pd.read_csv('Anxietyhelp_df_convs.csv', dtype = dtypes, chunksize = 100000, engine = 'python')
def get_dyadic_convs(df):
"""
From the original dataframe, build a sub-dataframe containing dyadic conversations
between post authors and first comment authors, including only the first comment thread.
Arg:
        df: The raw dataframe obtained by scraping a given subreddit.
Return:
df_dyadic_convs: The dataframe containing only the dyadic conversations between
post author and comment author from the first thread.
"""
df_dyadic_convs = pd.DataFrame()
# consider each post
for link_id in df['link_id'].unique():
df_link_id = df[df['link_id'] == link_id].reset_index().drop('index', axis = 1)
# consider only the first conversation thread between post author and comment author
if len(df_link_id) < 1:
continue
post_author = df_link_id.loc[0, 'post_author']
first_comment_author = df_link_id.loc[0, 'comment_author']
if len(df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index) > 1:
first_thread_index = df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index[0]
second_thread_index = df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index[1]
first_thread = df_link_id.loc[first_thread_index:second_thread_index, :]
elif len(df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index) == 1:
first_thread_index = df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index[0]
first_thread = df_link_id.loc[first_thread_index:, :]
else:
continue
df_dyadic_convs = df_dyadic_convs.append(first_thread.loc[first_thread_index, :])
if len(first_thread) > 1:
for i in range(first_thread_index+1, first_thread_index+len(first_thread)):
if first_thread.loc[i, 'comment_author'] == post_author:
df_dyadic_convs = df_dyadic_convs.append(first_thread.loc[i, :])
elif first_thread.loc[i, 'comment_author'] == first_comment_author:
df_dyadic_convs = df_dyadic_convs.append(first_thread.loc[i, :])
return df_dyadic_convs
def get_multi_convs(df):
"""
From the original dataframe, build a sub-dataframe containing multiparty conversations
between post authors and the comment authors in the longest thread.
Arg:
        df: The raw dataframe obtained by scraping a given subreddit.
Return:
df_multi_convs: The dataframe containing only the multi-party conversations from the longest threads.
"""
df_multi_convs = pd.DataFrame()
# consider each post
for link_id in df['link_id'].unique():
df_link_id = df[df['link_id'] == link_id].reset_index().drop('index', axis = 1)
# consider only the first conversation thread between post author and comment author
if len(df_link_id) < 1:
continue
# include those rows only in the thread with longest conversations
if len(df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index) > 1:
post_indices = list(df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index)
post_convs_list = []
# store each conversation into a list
for i in range(len(post_indices)):
if i == len(post_indices)-1:
current_post_ind = post_indices[i]
post_convs_list.append(df_link_id.loc[current_post_ind:, :])
break
current_post_ind = post_indices[i]
next_post_ind = post_indices[i+1]
post_convs_list.append(df_link_id.loc[current_post_ind:next_post_ind-1, :])
# pick the longest conversation
len_convs_list = list(map(len, post_convs_list))
max_convs_ind = len_convs_list.index(max(len_convs_list))
long_thread = post_convs_list[max_convs_ind].reset_index().drop('index', axis = 1)
elif len(df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index) == 1:
long_thread_index = df_link_id[df_link_id['link_id'] == df_link_id['parent_id']].index[0]
long_thread = df_link_id.loc[long_thread_index:, :]
else:
continue
if len(long_thread) > 1:
df_multi_convs = df_multi_convs.append(long_thread)
return df_multi_convs
pool = mp.Pool(16)
chunk_list_dyadic = []
for chunk in df_chunks:
# preprocess each chunk
filtered_chunk = chunk.drop_duplicates().dropna().reset_index().drop('index', axis = 1)
# get dyadic conversations
chunk_dyadic_convs = (pool.apply_async(get_dyadic_convs, [filtered_chunk])).get()
# append the result to the list
chunk_list_dyadic.append(chunk_dyadic_convs)
# construct dyadic conversations based on link_id, post_author, and comment_author
df_dyadic_convs = pd.concat(chunk_list_dyadic).reset_index().drop('index', axis = 1)
df_dyadic_convs
# construct multiparty conversations based on link_id, post_author, and comment_author
pool = mp.Pool(16)
chunk_list_multi = []
for chunk in df_chunks:
# preprocess each chunk
filtered_chunk = chunk.drop_duplicates().dropna().reset_index().drop('index', axis = 1)
    # get multiparty conversations
chunk_multi_convs = (pool.apply_async(get_multi_convs, [filtered_chunk])).get()
# append the result to the list
chunk_list_multi.append(chunk_multi_convs)
df_multi_convs = pd.concat(chunk_list_multi).reset_index().drop('index', axis = 1)
df_multi_convs
# save dataframe
df_dyadic_convs.to_csv('depression_dyadic_convs.csv')
df_multi_convs.to_csv('Anxietyhelp_multi_convs.csv')
```
## Multi-label prediction with Planet Amazon dataset
```
!curl https://course.fast.ai/setup/colab | bash
from fastai.vision import *
```
## Getting the data
The planet dataset isn't available on the [fastai dataset page](https://course.fast.ai/datasets) due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the [Kaggle API](https://github.com/Kaggle/kaggle-api) as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on.
First, install the Kaggle API by uncommenting the following line and executing it, or by running it in your terminal. Depending on your platform, you may need to modify this slightly: add `source activate fastai` or similar, or prefix `pip` with a path (have a look at how `conda install` is called for your platform in the appropriate *Returning to work* section of https://course.fast.ai/). Depending on your environment, you may also need to append `--user` to the command.
```
pip install kaggle --upgrade
```
Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.
Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands.
```
! mkdir -p ~/.kaggle/
! mv kaggle.json ~/.kaggle/
# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle
```
You're all set to download the data from [planet competition](https://www.kaggle.com/c/planet-understanding-the-amazon-from-space). You **first need to go to its main page and accept its rules**, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a `403 forbidden` error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on *Rules* tab, and then scroll to the bottom to find the *accept* button).
```
path = Config.data_path()/'planet'
path.mkdir(parents=True, exist_ok=True)
path
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
! unzip -q -n {path}/train_v2.csv.zip -d {path}
```
To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run `sudo apt install p7zip-full` in your terminal).
```
! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
```
And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
```
! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}
```
## Multiclassification
Contrary to the pets dataset studied in the last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here), we see that each 'image_name' is associated with several tags separated by spaces.
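To get a feel for the label format, here is a hedged sketch (with made-up rows; the real tag vocabulary comes from `train_v2.csv`) of how space-separated tags expand into a multi-label indicator matrix:

```python
import pandas as pd

# Toy stand-in for a few rows of train_v2.csv.
df = pd.DataFrame({
    'image_name': ['train_0', 'train_1', 'train_2'],
    'tags': ['haze primary', 'agriculture clear primary water', 'clear primary'],
})

# Split each tag string on spaces and one-hot encode the resulting tags.
one_hot = df['tags'].str.get_dummies(sep=' ')
print(one_hot)
```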
```
df = pd.read_csv(path/'train_v2.csv')
df.head()
```
To put this in a `DataBunch` while using the [data block API](https://docs.fast.ai/data_block.html), we then need to use `ImageList` (and not `ImageDataBunch`). This will make sure the model created has the proper loss function to handle the multiple labels.
```
path
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
```
We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\\'.
```
np.random.seed(42)
src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
.split_by_rand_pct(0.2)
.label_from_df(label_delim=' '))
data = (src.transform(tfms, size=128)
.databunch().normalize(imagenet_stats))
ImageList.from_csv??
ImageList.from_df??
path
il = ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
il
il[0]
ils = il.split_by_rand_pct(0.2)
ils
ill = ils.label_from_df(label_delim=' ')
ill
ll_t = ill.transform(tfms=get_transforms(), size=128)
ll_t
bunch = ll_t.databunch()
bunch
bunch = bunch.normalize(imagenet_stats)
bunch
LabelLists.transform??
```
`show_batch` still works, and shows us the different labels separated by `;`.
```
data.show_batch(rows=3, figsize=(12,9))
```
To create a `Learner` we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different: we use `accuracy_thresh` instead of `accuracy`. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, but here each activation can be 0. or 1. `accuracy_thresh` selects the ones that are above a certain threshold (0.5 by default) and compares them to the ground truth.
As for Fbeta, it's the metric that was used by Kaggle on this competition. See [here](https://en.wikipedia.org/wiki/F1_score) for more details.
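As a rough sketch of the thresholding idea (assuming the predictions are already sigmoid activations in [0, 1]; fastai's `accuracy_thresh` can apply the sigmoid itself):

```python
import torch

def thresholded_accuracy(preds, targets, thresh=0.2):
    # Mark a class as predicted when its activation exceeds the threshold,
    # then compare every (sample, class) entry to the ground truth.
    return ((preds > thresh) == targets.bool()).float().mean().item()

preds = torch.tensor([[0.9, 0.1, 0.3],
                      [0.05, 0.7, 0.6]])
targets = torch.tensor([[1., 0., 1.],
                        [0., 1., 0.]])
print(thresholded_accuracy(preds, targets, thresh=0.2))  # 5 of 6 entries match
```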
```
arch = models.resnet50
fbeta??
accuracy_thresh??
data.classes
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, arch, metrics=[acc_02, f_score])
```
We use the LR Finder to pick a good learning rate.
```
learn.lr_find()
learn.recorder.plot()
```
Then we can fit the head of our network.
```
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
```
...And fine-tune the whole model:
```
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')
data = (src.transform(tfms, size=256)
.databunch().normalize(imagenet_stats))
learn.data = data
data.train_ds[0][0].shape
learn.freeze()
learn.lr_find()
learn.recorder.plot()
lr=1e-2/2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-256-rn50')
learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.recorder.plot_losses()
learn.save('stage-2-256-rn50')
```
You won't really know how you're going until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of `0.930`.
```
learn.export()
```
## fin
(This section will be covered in part 2 - please don't ask about it just yet! :) )
```
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg-additional.tar.7z | tar xf - -C {path}
test = ImageList.from_folder(path/'test-jpg').add(ImageList.from_folder(path/'test-jpg-additional'))
len(test)
learn = load_learner(path, test=test)
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
thresh = 0.2
labelled_preds = [' '.join([learn.data.classes[i] for i,p in enumerate(pred) if p > thresh]) for pred in preds]
labelled_preds[:5]
fnames = [f.name[:-4] for f in learn.data.test_ds.items]
df = pd.DataFrame({'image_name':fnames, 'tags':labelled_preds}, columns=['image_name', 'tags'])
df.to_csv(path/'submission.csv', index=False)
! kaggle competitions submit planet-understanding-the-amazon-from-space -f {path/'submission.csv'} -m "My submission"
```
Private Leaderboard score: 0.9296 (around 80th)
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/student/W2D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 2: Training loop of CNNs
**Week 2, Day 1: Convnets And Recurrent Neural Networks**
**By Neuromatch Academy**
__Content creators:__ Dawn McKnight, Richard Gerum, Cassidy Pirlot, Rohan Saha, Liam Peet-Pare, Saeed Najafi, Alona Fyshe
__Content reviewers:__ Saeed Salehi, Lily Cheng, Yu-Fang Yang, Polina Turishcheva, Bettina Hein
__Content editors:__ Nina Kudryashova, Anmol Gupta, Spiros Chavlis
__Production editors:__ Alex Tran-Van-Minh, Spiros Chavlis
*Based on material from:* Konrad Kording, Hmrishav Bandyopadhyay, Rahul Shekhar, Tejas Srivastava
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
At the end of this tutorial, we will be able to:
- Understand pooling
- Code a simple Convolutional Neural Network (CNN) in PyTorch
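As a quick preview of pooling: max pooling slides a window over the input and keeps only the largest value in each window, shrinking the spatial dimensions. A minimal sketch with `torch.nn.functional.max_pool2d`:

```python
import torch
import torch.nn.functional as F

# One 4x4 single-channel "image" (batch and channel dims added for conv-style shape).
x = torch.tensor([[[[1., 2., 3., 0.],
                    [4., 5., 6., 1.],
                    [0., 1., 2., 3.],
                    [7., 8., 9., 4.]]]])

out = F.max_pool2d(x, kernel_size=2)  # each non-overlapping 2x2 block -> its max
print(out.squeeze())                  # 2x2 result: [[5., 6.], [8., 9.]]
```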
```
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/mkgqs/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```
# @title Install dependencies
!pip install livelossplot --quiet
# Imports
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
from IPython.display import display
from tqdm.notebook import tqdm, trange
# @title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
plt.rcParams["mpl_toolkits.legacy_colorbar"] = False
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="matplotlib")
# @title Helper functions
# just returns accuracy on test data
def test(model, device, data_loader):
model.eval()
correct = 0
total = 0
for data in data_loader:
inputs, labels = data
inputs = inputs.to(device).float()
labels = labels.to(device).long()
outputs = model(inputs)
_, predicted = torch.max(outputs, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = 100 * correct / total
return f"{acc}%"
# @title Plotting functions
# code to plot loss and accuracy
def plot_loss_accuracy(train_loss, train_acc, validation_loss, validation_acc):
epochs = len(train_loss)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(list(range(epochs)), train_loss, label='Training Loss')
ax1.plot(list(range(epochs)), validation_loss, label='Validation Loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.set_title('Epoch vs Loss')
ax1.legend()
ax2.plot(list(range(epochs)), train_acc, label='Training Accuracy')
ax2.plot(list(range(epochs)), validation_acc, label='Validation Accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.set_title('Epoch vs Accuracy')
ax2.legend()
fig.set_size_inches(15.5, 5.5)
#plt.show()
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
```
---
# Section 1: Training a CNN
In the last section we coded up a CNN, but trained it with some predefined functions. In this section, we will walk through an example training loop for a convolutional net: we will train a CNN using convolution and maxpool layers and then observe what the training and validation curves look like. In Section 6, we will add regularization and data augmentation to see what effect they have on the curves and why it is important to incorporate them while training our network.
<br>
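Before diving in, here is a minimal, hedged sketch of the loop structure this section builds toward; it uses a toy linear model and synthetic tensors rather than the CNN and Fashion-MNIST loaders defined later:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)                      # toy stand-in for the CNN
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(64, 4)                       # synthetic inputs
y = (X[:, 0] > 0).long()                     # synthetic labels the model can learn

losses = []
for epoch in range(5):                       # one full-batch step per "epoch"
    opt.zero_grad()                          # clear gradients from the last step
    loss = loss_fn(model(X), y)              # forward pass + loss
    loss.backward()                          # backpropagate
    opt.step()                               # update the parameters
    losses.append(loss.item())
print([round(l, 4) for l in losses])         # the loss trends downward
```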
```
# @title Video 1: Writing your own training loop
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Ko4y1Q7UG", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"L0XG-QKv5_w", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Section 1.1: Understand the Dataset
The dataset we are going to use for this task is called Fashion-MNIST. It consists of a training set of 60,000 examples and a test set of 10,000 examples. We further divide the test set into a validation set and a test set (8,000 and 2,000 examples, respectively). Each example is a 28×28 grayscale image, associated with a label from 10 classes. The labels of the dataset are:
0 T-shirt/top <br>
1 Trouser <br>
2 Pullover <br>
3 Dress <br>
4 Coat <br>
5 Sandal <br>
6 Shirt <br>
7 Sneaker <br>
8 Bag <br>
9 Ankle boot <br>
**NOTE:** we will reduce the dataset to just the two categories T-shirt/top and Shirt to cut the training time from about 10 min to 2 min. We later provide pretrained results to give you an idea of how the results would look on the whole dataset.
```
# @markdown Getting Fashion-Mnist Data
def get_fashion_mnist_dataset(seed):
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
train_data = datasets.FashionMNIST(root='./data', download=True,
train=True, transform=transform)
test_data = datasets.FashionMNIST(root='./data', download=True,
train=False, transform=transform)
set_seed(seed)
validation_data, test_data = torch.utils.data.random_split(test_data,
[int(0.8*len(test_data)),
int(0.2*len(test_data))])
return train_data, validation_data, test_data
num_classes = 10
train_data, validation_data, test_data = get_fashion_mnist_dataset(seed=SEED)
# @markdown Reducing Fashion-Mnist Data (to two categories)
# @markdown *NOTE: if you want to train on the whole dataset, just run the cell above
# @markdown and do not execute this cell.*
# need to split into train, validation, test
def reduce_classes(data):
# only want T-Shirts (0) and Shirts (6) labels
train_idx = (data.targets == 0) | (data.targets == 6)
data.targets = data.targets[train_idx]
data.data = data.data[train_idx]
    # relabel Shirts (6) as 1 so the targets are binary 0/1
data.targets[data.targets == 6] = 1
return data
def get_fashion_mnist_dataset(seed):
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
train_data = datasets.FashionMNIST(root='./data', download=True,
train=True, transform=transform)
train_data = reduce_classes(train_data)
test_data = datasets.FashionMNIST(root='./data', download=True,
train=False, transform=transform)
test_data = reduce_classes(test_data)
set_seed(seed)
validation_data, test_data = torch.utils.data.random_split(test_data,
[int(0.8*len(test_data)),
int(0.2*len(test_data))])
return train_data, validation_data, test_data
num_classes = 2
train_data, validation_data, test_data = get_fashion_mnist_dataset(seed=SEED)
```
Here's some code to visualize the dataset.
```
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4)
ax1.imshow(train_data[0][0].reshape(28, 28), cmap=plt.get_cmap('gray'))
ax2.imshow(train_data[1][0].reshape(28, 28), cmap=plt.get_cmap('gray'))
ax3.imshow(train_data[2][0].reshape(28, 28), cmap=plt.get_cmap('gray'))
ax4.imshow(train_data[3][0].reshape(28, 28), cmap=plt.get_cmap('gray'))
fig.set_size_inches(18.5, 10.5)
plt.show()
```
Take a minute with your pod and talk about which classes you think would be most confusable. How hard will it be to differentiate t-shirt/tops from shirts?
```
# @title Video 2: The Training Loop
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1av411n7VJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"ZgYYgktqaP8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Section 1.2: Backpropagation Reminder
_Feel free to skip if you've got a good handle on Backpropagation_
We know that we multiply the input data/tensors with weight matrices to obtain some output. Initially, we don't know what the actual weight matrices are, so we initialize them with some random values. These random weight matrices, when applied as a transformation on the input, give us some output. At first the outputs/predictions will match the true labels only by chance.
To improve performance, we need to change the weight matrices so that the predicted outputs are similar to the true outputs (labels). We first calculate how far away the predicted outputs are to the true outputs using a loss function. Based on the loss function, we change the values of our weight matrices using the gradients of the error with respect to the weight matrices.
Since we are using PyTorch throughout the course, we will use the built-in functions to update the weights. We call the `backward()` method on our 'loss' variable to calculate the gradients/derivatives with respect to all the weight matrices and biases. And then we call the `step()` method on the optimizer variable to apply the gradient updates to our weight matrices.
Here's an animation of backpropagation works.
<figure>
<center><img src=https://machinelearningknowledge.ai/wp-content/uploads/2019/10/Backpropagation.gif>
<figcaption> Backpropagation </figcaption>
</center>
</figure>
[This](https://machinelearningknowledge.ai/animated-explanation-of-feed-forward-neural-network-architecture/) article has more animations.
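The `backward()`/`step()` pattern described above can be sketched on a toy one-parameter model (the numbers here are made up for illustration):

```python
import torch

# One weight, one data point; plain SGD with a small learning rate.
w = torch.tensor([2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

x, target = torch.tensor([3.0]), torch.tensor([5.0])
pred = w * x                         # forward pass: 2 * 3 = 6
loss = ((pred - target) ** 2).sum()  # squared error: (6 - 5)^2 = 1

optimizer.zero_grad()  # clear gradients from any previous step
loss.backward()        # d(loss)/dw = 2 * (w*x - target) * x = 6
optimizer.step()       # w <- w - lr * grad = 2 - 0.1 * 6 = 1.4
print(w.item())        # 1.4
```

In a real training loop, `model(x)` replaces the hand-written `pred = w * x` and a loss criterion replaces the hand-written squared error, but the `zero_grad()`/`backward()`/`step()` sequence is identical.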
Let's look at a sample training loop. First, we create the network and load a dataset; then we walk through the loop itself.
```
# Create a sample network
class emnist_net(nn.Module):
  def __init__(self):
    super().__init__()
    # First define the layers.
    self.conv1 = nn.Conv2d(1, 32, kernel_size=5, padding=2)
    self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
    self.fc1 = nn.Linear(7*7*64, 256)
    self.fc2 = nn.Linear(256, 26)

  def forward(self, x):
    # Conv layer 1.
    x = self.conv1(x)
    x = F.relu(x)
    x = F.max_pool2d(x, kernel_size=2)

    # Conv layer 2.
    x = self.conv2(x)
    x = F.relu(x)
    x = F.max_pool2d(x, kernel_size=2)

    # Fully connected layer 1. You have to first flatten the output
    # from the previous convolution layer.
    x = x.view(-1, 7*7*64)
    x = self.fc1(x)
    x = F.relu(x)

    # Fully connected layer 2.
    x = self.fc2(x)
    # x = F.softmax(x)
    return x
# @markdown ### Load a sample dataset
# Load the data
mnist_train = datasets.EMNIST(root="./datasets", train=True,
                              transform=transforms.ToTensor(),
                              download=True, split='letters')
mnist_test = datasets.EMNIST(root="./datasets", train=False,
                             transform=transforms.ToTensor(),
                             download=True, split='letters')

# labels should start from 0
mnist_train.targets -= 1
mnist_test.targets -= 1

# create data loaders
g_seed = torch.Generator()
g_seed.manual_seed(SEED)
train_loader = torch.utils.data.DataLoader(mnist_train, batch_size=100,
                                           shuffle=False,
                                           worker_init_fn=seed_worker,
                                           generator=g_seed)
test_loader = torch.utils.data.DataLoader(mnist_test, batch_size=100,
                                          shuffle=False,
                                          worker_init_fn=seed_worker,
                                          generator=g_seed)
# Training
# Instantiate model
# Puts the model on the GPU (select runtime type as GPU
# from the 'Runtime->Change Runtime type' option).
model = emnist_net().to(DEVICE)

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # <---- change here

# Iterate through train set minibatches
for epoch in trange(3):  # <---- change here
  for images, labels in tqdm(train_loader):
    # Zero out the gradients
    optimizer.zero_grad()
    # Forward pass
    x = images
    # Move the data to GPU for faster execution.
    x, labs = x.to(DEVICE), labels.to(DEVICE)
    y = model(x)
    # Calculate loss.
    loss = criterion(y, labs)
    # Backpropagation and gradient update.
    loss.backward()   # Calculate gradients.
    optimizer.step()  # Apply gradient update.

## Testing
correct = 0
total = len(mnist_test)
with torch.no_grad():
  # Iterate through test set minibatches
  for images, labels in tqdm(test_loader):
    # Forward pass
    x = images
    # Move the data to GPU for faster execution.
    x, labs = x.to(DEVICE), labels.to(DEVICE)
    y = model(x)
    predictions = torch.argmax(y, dim=1)
    correct += torch.sum((predictions == labs).float())

print('Test accuracy: {}'.format(correct/total))
```
You already coded the structure of a CNN. Now you are going to implement the training loop for a CNN.
- Choose the correct criterion
- Code up the training part (calculating the loss, computing the gradients, stepping the optimizer)
- Keep track of the running loss, i.e., for each epoch we want to know the average loss over its batches. We have already done the same for accuracy for you.
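The running-loss bookkeeping is just an average of per-batch `loss.item()` values over the epoch; a minimal sketch with made-up numbers:

```python
# Hypothetical per-batch losses collected during one epoch
batch_losses = [0.9, 0.7, 0.6, 0.4]

running_loss = 0.0
for loss_value in batch_losses:
  running_loss += loss_value  # accumulate loss.item() for each batch

epoch_loss = running_loss / len(batch_losses)  # average over the batches
print(epoch_loss)  # 0.65
```

In the training loop, `len(train_loader)` plays the role of `len(batch_losses)`.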
## Section 1.3: Fashion-MNIST dataset
Now let us train on the actual Fashion-MNIST dataset.
```
# @markdown #### Getting the DataLoaders (Run Me)
def get_data_loaders(train_dataset, validation_dataset, test_dataset, seed,
                     batch_size=64):
  g_seed = torch.Generator()
  g_seed.manual_seed(seed)
  train_loader = DataLoader(train_dataset,
                            batch_size=batch_size,
                            shuffle=True,
                            num_workers=0,
                            worker_init_fn=seed_worker,
                            generator=g_seed)
  validation_loader = DataLoader(validation_dataset,
                                 batch_size=batch_size,
                                 shuffle=True,
                                 num_workers=0,
                                 worker_init_fn=seed_worker,
                                 generator=g_seed)
  test_loader = DataLoader(test_dataset,
                           batch_size=batch_size,
                           shuffle=True,
                           num_workers=0,
                           worker_init_fn=seed_worker,
                           generator=g_seed)
  return train_loader, validation_loader, test_loader

train_loader, validation_loader, test_loader = get_data_loaders(train_data,
                                                                validation_data,
                                                                test_data, SEED)
# This cell contains the code for the CNN we will be using in this section.
class FMNIST_Net1(nn.Module):
  def __init__(self):
    super(FMNIST_Net1, self).__init__()
    self.conv1 = nn.Conv2d(1, 32, 3, 1)
    self.conv2 = nn.Conv2d(32, 64, 3, 1)
    self.fc1 = nn.Linear(9216, 128)
    self.fc2 = nn.Linear(128, num_classes)

  def forward(self, x):
    x = self.conv1(x)
    x = F.relu(x)
    x = self.conv2(x)
    x = F.relu(x)
    x = F.max_pool2d(x, 2)
    x = torch.flatten(x, 1)
    x = self.fc1(x)
    x = F.relu(x)
    x = self.fc2(x)
    return x
```
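A quick sanity check of where the `9216` in `fc1` comes from (assuming 28×28 grayscale inputs, as in Fashion-MNIST): each unpadded 3×3 convolution shrinks every side by two pixels (28 → 26 → 24), and the 2×2 max pooling halves the spatial size, leaving 64 × 12 × 12 = 9216 features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.zeros(1, 1, 28, 28)   # one 28x28 grayscale image
x = nn.Conv2d(1, 32, 3, 1)(x)   # -> (1, 32, 26, 26)
x = nn.Conv2d(32, 64, 3, 1)(x)  # -> (1, 64, 24, 24)
x = F.max_pool2d(x, 2)          # -> (1, 64, 12, 12)
x = torch.flatten(x, 1)         # -> (1, 9216)
print(x.shape)
```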
## Coding Exercise 1: Code the training loop
Now try coding the training loop.
You should first have a `criterion` defined (you can use `CrossEntropyLoss` here, which you learned about last week) so that you can calculate the loss. Next, put everything together: start the training process by obtaining the model output, calculating the loss, and finally updating the weights.
*Don't forget to zero out the gradients.*
NOTE: The comments in the `train` function provide many hints that will help you fill in the missing code. This will give you a solid understanding of the different steps involved in the training loop.
```
def train(model, device, train_loader, validation_loader, epochs):
  criterion = nn.CrossEntropyLoss()
  optimizer = torch.optim.SGD(model.parameters(),
                              lr=0.01, momentum=0.9)
  train_loss, validation_loss = [], []
  train_acc, validation_acc = [], []

  with tqdm(range(epochs), unit='epoch') as tepochs:
    tepochs.set_description('Training')
    for epoch in tepochs:
      model.train()
      # keeps track of the running loss
      running_loss = 0.
      correct, total = 0, 0

      for data, target in train_loader:
        data, target = data.to(device), target.to(device)
        ####################################################################
        # Fill in missing code below (...),
        # then remove or comment the line below to test your function
        raise NotImplementedError("Update the steps of the train loop")
        ####################################################################
        # COMPLETE CODE FOR TRAINING LOOP by following these steps
        # 1. Get the model output (call the model with the data from this batch)
        output = ...
        # 2. Zero the gradients out (i.e. reset the gradient that the optimizer
        #    has collected so far with optimizer.zero_grad())
        ...
        # 3. Get the Loss (call the loss criterion with the model's output
        #    and the target values)
        loss = ...
        # 4. Calculate the gradients (do the pass backwards from the loss
        #    with loss.backward())
        ...
        # 5. Update the weights (using the training step of the optimizer,
        #    optimizer.step())
        ...
        ####################################################################
        # Fill in missing code below (...),
        # then remove or comment the line below to test your function
        raise NotImplementedError("Update the set_postfix function")
        ####################################################################
        # Set loss to whatever you end up naming your variable when
        # calling criterion, for example loss = criterion(output, target),
        # then use loss.item() in the set_postfix function
        tepochs.set_postfix(loss=...)
        running_loss += ...  # add the loss for this batch
        # get accuracy
        _, predicted = torch.max(output, 1)
        total += target.size(0)
        correct += (predicted == target).sum().item()

      ####################################################################
      # Fill in missing code below (...),
      # then remove or comment the line below to test your function
      raise NotImplementedError("Append the train_loss")
      ####################################################################
      # Append the loss for this epoch (running loss divided by the
      # number of batches, e.g. len(train_loader))
      train_loss.append(...)
      train_acc.append(correct/total)

      # evaluate on validation data
      model.eval()
      running_loss = 0.
      correct, total = 0, 0
      for data, target in validation_loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        tepochs.set_postfix(loss=loss.item())
        running_loss += loss.item()
        # get accuracy
        _, predicted = torch.max(output, 1)
        total += target.size(0)
        correct += (predicted == target).sum().item()
      validation_loss.append(running_loss/len(validation_loader))
      validation_acc.append(correct/total)

  return train_loss, train_acc, validation_loss, validation_acc
set_seed(SEED)
## Uncomment to test your training loop
# net = FMNIST_Net1().to(DEVICE)
# train_loss, train_acc, validation_loss, validation_acc = train(net, DEVICE, train_loader, validation_loader, 20)
# plot_loss_accuracy(train_loss, train_acc, validation_loss, validation_acc)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_8b552183.py)
*Example output:*
<img alt='Solution hint' align='left' width=2195.0 height=755.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/static/W2D1_Tutorial2_Solution_8b552183_3.png>
Run the next cell to get the accuracy on the data!
```
test(net, DEVICE, test_loader)
```
## Think! 1: Overfitting
Do you think this network is overfitting?
If yes, what can you do to combat this?
**Hint**: overfitting occurs when the training accuracy greatly exceeds the validation accuracy
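One way to make the hint concrete is to compare the final train and validation accuracies returned by `train`; the helper below and its 5% threshold are illustrative assumptions, not a standard rule:

```python
def is_overfitting(train_acc, validation_acc, gap=0.05):
  """Flag overfitting when the final train/validation accuracy gap exceeds `gap`."""
  return (train_acc[-1] - validation_acc[-1]) > gap

# Made-up accuracy histories: training keeps climbing, validation plateaus
train_acc = [0.85, 0.92, 0.97, 0.99]
validation_acc = [0.84, 0.88, 0.89, 0.89]
print(is_overfitting(train_acc, validation_acc))  # True
```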
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_a367e834.py)
---
# Section 2: Overfitting - symptoms and cures
So you spent some time last week learning about regularization techniques. Below is a copy of the CNN model we used previously. Now we want you to add some dropout regularization, and check if that helps reduce overfitting. If you're up for a challenge, you can try methods other than dropout as well.
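Before wiring dropout into the model, it may help to see what `nn.Dropout` does on its own. In this sketch, training mode zeroes each activation with probability `p` and rescales the survivors by 1/(1-p), while eval mode leaves the input untouched:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)  # p = probability of zeroing an activation

x = torch.ones(8)
drop.train()              # training mode
print(drop(x))            # random mix of 0.0 and 2.0 (= 1 / (1 - 0.5))

drop.eval()               # evaluation mode: dropout is the identity
print(drop(x))            # all ones
```

This train/eval distinction is why the `train` function calls `model.train()` before the training pass and `model.eval()` before validation.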
## Coding Exercise 2.1: Adding Regularization
Add various regularization methods, feel free to add any and play around!
```
class FMNIST_Net2(nn.Module):
  def __init__(self):
    super(FMNIST_Net2, self).__init__()
    ####################################################################
    # Fill in missing code below (...),
    # then remove or comment the line below to test your function
    raise NotImplementedError("Add regularization layers")
    ####################################################################
    self.conv1 = nn.Conv2d(1, 32, 3, 1)
    self.conv2 = nn.Conv2d(32, 64, 3, 1)
    self.dropout1 = ...
    self.dropout2 = ...
    self.fc1 = nn.Linear(9216, 128)
    self.fc2 = nn.Linear(128, num_classes)

  def forward(self, x):
    ####################################################################
    # Now add the layers in your forward pass in appropriate order
    # then remove or comment the line below to test your function
    raise NotImplementedError("Add regularization in the forward pass")
    ####################################################################
    x = self.conv1(x)
    x = F.relu(x)
    x = self.conv2(x)
    x = F.relu(x)
    x = F.max_pool2d(x, 2)
    x = ...
    x = torch.flatten(x, 1)
    x = self.fc1(x)
    x = F.relu(x)
    x = ...
    x = self.fc2(x)
    return x
set_seed(SEED)
## Uncomment below to check your code
# net2 = FMNIST_Net2().to(DEVICE)
# train_loss, train_acc, validation_loss, validation_acc = train(net2, DEVICE, train_loader, validation_loader, 20)
# plot_loss_accuracy(train_loss, train_acc, validation_loss, validation_acc)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_36a8481f.py)
*Example output:*
<img alt='Solution hint' align='left' width=2195.0 height=755.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/static/W2D1_Tutorial2_Solution_36a8481f_3.png>
Run the next cell to get the accuracy on the data!
```
test(net2, DEVICE, test_loader)
```
## Think! 2.1: Regularization
1. Is the training accuracy slightly reduced from before adding regularization? What accuracy were you able to reduce it to?
2. Why does the validation accuracy start higher than training accuracy?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_6e9ea2ef.py)
## Interactive Demo 2: Dropout exploration
If you want to try out more dropout parameter combinations but do not have the time to run them, we have precalculated some combinations; use the sliders below to explore them.
```
# @markdown *Run this cell to enable the widget*
from ipywidgets import interactive, widgets, interactive_output
data = [[0, 0, [0.3495898238046372, 0.2901147632522786, 0.2504794800931469, 0.23571575765914105, 0.21297093365896255, 0.19087818914905508, 0.186408187797729, 0.19487689035211472, 0.16774938120803934, 0.1548648244958926, 0.1390149021382503, 0.10919439224922593, 0.10054351237820501, 0.09900783193594914, 0.08370604479507088, 0.07831853718318521, 0.06859792241866285, 0.06152600247383197, 0.046342475851873885, 0.055123823092992796], [0.83475, 0.8659166666666667, 0.8874166666666666, 0.8913333333333333, 0.8998333333333334, 0.9140833333333334, 0.9178333333333333, 0.9138333333333334, 0.9251666666666667, 0.92975, 0.939, 0.9525833333333333, 0.9548333333333333, 0.9585833333333333, 0.9655833333333333, 0.9661666666666666, 0.9704166666666667, 0.9743333333333334, 0.9808333333333333, 0.9775], [0.334623601436615, 0.2977438402175903, 0.2655304968357086, 0.25506321132183074, 0.2588835284113884, 0.2336345863342285, 0.3029863876104355, 0.240766831189394, 0.2719801160693169, 0.25231350839138034, 0.2500132185220718, 0.26699506521224975, 0.2934862145781517, 0.361227530837059, 0.33196919202804565, 0.36985905408859254, 0.4042587959766388, 0.3716402840614319, 0.3707024946808815, 0.4652537405490875], [0.866875, 0.851875, 0.8775, 0.889375, 0.881875, 0.900625, 0.85, 0.898125, 0.885625, 0.876875, 0.899375, 0.90625, 0.89875, 0.87, 0.898125, 0.884375, 0.874375, 0.89375, 0.903125, 0.890625]], [0, 0.25, [0.35404509995528993, 0.30616586227366266, 0.2872369573946963, 0.27564131199045383, 0.25969504263806853, 0.24728168408445855, 0.23505379509260046, 0.21552803914280647, 0.209761732277718, 0.19977611067526518, 0.19632092922767427, 0.18672360206379535, 0.16564940239124476, 0.1654047035671612, 0.1684555298985636, 0.1627526102349796, 0.13878319327263755, 0.12881529055773577, 0.12628930977525862, 0.11346105090837846], [0.8324166666666667, 0.8604166666666667, 0.8680833333333333, 0.8728333333333333, 0.8829166666666667, 0.88625, 0.89425, 0.90125, 0.9015833333333333, 0.90925, 0.9114166666666667, 0.917, 
0.9268333333333333, 0.92475, 0.921, 0.9255833333333333, 0.9385, 0.9428333333333333, 0.9424166666666667, 0.9484166666666667], [0.3533937376737595, 0.29569859683513644, 0.27531551957130435, 0.2576177391409874, 0.26947550356388095, 0.25361743807792664, 0.2527468180656433, 0.24179009914398195, 0.28664454460144045, 0.23347773611545564, 0.24672816634178163, 0.27822364538908007, 0.2380720081925392, 0.24426509588956832, 0.2443918392062187, 0.24207917481660843, 0.2519641682505608, 0.3075403380393982, 0.2798181238770485, 0.26709021866321564], [0.826875, 0.87, 0.870625, 0.8875, 0.883125, 0.88625, 0.891875, 0.891875, 0.890625, 0.903125, 0.89375, 0.885625, 0.903125, 0.888125, 0.899375, 0.898125, 0.905, 0.905625, 0.898125, 0.901875]], [0, 0.5, [0.39775496332886373, 0.33771887778284704, 0.321900939132939, 0.3079229625774191, 0.304149763301966, 0.28249239723416086, 0.2861261191044716, 0.27356165798103554, 0.2654648520686525, 0.2697350280557541, 0.25354846321204877, 0.24612889034633942, 0.23482802549892284, 0.2389904112416379, 0.23742155821875055, 0.232423192127905, 0.22337309338469455, 0.2141852991932884, 0.20677659985549907, 0.19355326712607068], [0.8155, 0.83625, 0.8481666666666666, 0.8530833333333333, 0.8571666666666666, 0.86775, 0.8623333333333333, 0.8711666666666666, 0.8748333333333334, 0.8685833333333334, 0.8785, 0.8804166666666666, 0.8835833333333334, 0.8840833333333333, 0.88875, 0.8919166666666667, 0.8946666666666667, 0.8960833333333333, 0.906, 0.9063333333333333], [0.3430288594961166, 0.4062050700187683, 0.29745822548866274, 0.27728439271450045, 0.28092808067798614, 0.2577864158153534, 0.2651400637626648, 0.25632822573184966, 0.3082498562335968, 0.2812121778726578, 0.26345942318439486, 0.2577408078312874, 0.25757989794015884, 0.26434457510709763, 0.24917411386966706, 0.27261342853307724, 0.2445397639274597, 0.26001051396131514, 0.24147838801145555, 0.2471102523803711], [0.82875, 0.795625, 0.87, 0.87375, 0.865625, 0.8825, 0.8825, 0.87625, 0.848125, 0.87875, 0.8675, 
0.889375, 0.8925, 0.866875, 0.87375, 0.87125, 0.895625, 0.90375, 0.90125, 0.88625]], [0, 0.75, [0.4454924576777093, 0.43416607585993217, 0.42200265769311723, 0.40520024616667566, 0.41137005166804536, 0.404100904280835, 0.40118067664034823, 0.40139733080534223, 0.3797615355158106, 0.3596332479030528, 0.3600061919460905, 0.3554147962242999, 0.34480382890460337, 0.3329520877054397, 0.33164913056695716, 0.31860941466181836, 0.30702565340919696, 0.30605297186907304, 0.2953788426486736, 0.2877389984403519], [0.7788333333333334, 0.7825, 0.7854166666666667, 0.7916666666666666, 0.7885, 0.7833333333333333, 0.7923333333333333, 0.79525, 0.805, 0.81475, 0.8161666666666667, 0.8188333333333333, 0.817, 0.8266666666666667, 0.82225, 0.8360833333333333, 0.8456666666666667, 0.8430833333333333, 0.8491666666666666, 0.8486666666666667], [0.3507828885316849, 0.3337512403726578, 0.34320746660232543, 0.3476085543632507, 0.3326113569736481, 0.33033264458179473, 0.32014619171619413, 0.3182142299413681, 0.30076164126396177, 0.3263852882385254, 0.27597591280937195, 0.29062016785144806, 0.2765174686908722, 0.269492534995079, 0.2679423809051514, 0.2691828978061676, 0.2726386785507202, 0.2541181230545044, 0.2580208206176758, 0.26315389811992645], [0.839375, 0.843125, 0.823125, 0.821875, 0.81875, 0.819375, 0.8225, 0.826875, 0.835625, 0.865, 0.868125, 0.855625, 0.868125, 0.884375, 0.883125, 0.875, 0.87375, 0.883125, 0.8975, 0.885]], [0.25, 0, [0.34561181647029326, 0.2834314257699124, 0.2583787844298368, 0.23892096465730922, 0.23207981773513428, 0.20245029634617745, 0.183908417583146, 0.17489413774393975, 0.17696723581707857, 0.15615438255778652, 0.14469048382833283, 0.12424647461305907, 0.11314761043189371, 0.11249036608422373, 0.10725672634199579, 0.09081190969160896, 0.0942245383271353, 0.08525650047677312, 0.06622548752583246, 0.06039895973307021], [0.8356666666666667, 0.8675833333333334, 0.88175, 0.8933333333333333, 0.8975833333333333, 0.91175, 0.91825, 0.9249166666666667, 0.9238333333333333, 
0.9305, 0.938, 0.9465833333333333, 0.9525833333333333, 0.9539166666666666, 0.9555, 0.9615, 0.9606666666666667, 0.96275, 0.9725, 0.9764166666666667], [0.31630186855792997, 0.2702121251821518, 0.2915778249502182, 0.26050266206264494, 0.27837209939956664, 0.24276352763175965, 0.3567117482423782, 0.2752074319124222, 0.2423130339384079, 0.2565067422389984, 0.28710135877132414, 0.266545415520668, 0.31818037331104276, 0.28757534325122835, 0.2777567034959793, 0.2998969575762749, 0.3292293107509613, 0.30775387287139894, 0.32681577146053314, 0.44882203072309496], [0.85375, 0.879375, 0.875625, 0.89, 0.86125, 0.884375, 0.851875, 0.8875, 0.89625, 0.875625, 0.8675, 0.895, 0.888125, 0.89125, 0.889375, 0.880625, 0.87875, 0.8875, 0.894375, 0.891875]], [0.25, 0.25, [0.35970850011452715, 0.31336131549261986, 0.2881505932421126, 0.2732012960267194, 0.26232245425753137, 0.2490472443639598, 0.24866499093935845, 0.22930880945096624, 0.21745950407645803, 0.20700296882460725, 0.197304340356842, 0.20665066804182022, 0.19864868348900308, 0.184807124210799, 0.1684703354703936, 0.17377675851767369, 0.16638460063791655, 0.15944768343754906, 0.14876513817208878, 0.1388207479835825], [0.83375, 0.85175, 0.86725, 0.8719166666666667, 0.8761666666666666, 0.8865833333333333, 0.88275, 0.8956666666666667, 0.8995833333333333, 0.9034166666666666, 0.90825, 0.9043333333333333, 0.9093333333333333, 0.9145, 0.9196666666666666, 0.9196666666666666, 0.9216666666666666, 0.9273333333333333, 0.9299166666666666, 0.93675], [0.3166788029670715, 0.28422485530376435, 0.38055971562862395, 0.2586472672224045, 0.2588653892278671, 0.27983254253864287, 0.25693483114242555, 0.26412731170654297, 0.2733065390586853, 0.24399636536836625, 0.24481021404266357, 0.2689305514097214, 0.2527604129910469, 0.24829535871744157, 0.2654112687706947, 0.23074268400669098, 0.24625462979078294, 0.26423920392990113, 0.25540480852127073, 0.25536185175180437], [0.856875, 0.86625, 0.815, 0.8825, 0.88125, 0.875625, 0.89, 0.8775, 0.870625, 0.895, 
0.8975, 0.87375, 0.88625, 0.89125, 0.903125, 0.9, 0.893125, 0.89, 0.8925, 0.899375]], [0.25, 0.5, [0.3975753842040579, 0.34884724409339274, 0.3296900932142075, 0.3150389680361494, 0.31285368667003954, 0.30415422033439293, 0.29553352716438314, 0.289314468094009, 0.2806722329969102, 0.2724469883486311, 0.26634286379719035, 0.2645016222241077, 0.2619251853766594, 0.2551752221473354, 0.26411766035759704, 0.24515971153023394, 0.2390686312412962, 0.23573122312255362, 0.221005061562074, 0.22358600648635246], [0.8106666666666666, 0.8286666666666667, 0.844, 0.8513333333333334, 0.84975, 0.8570833333333333, 0.8624166666666667, 0.8626666666666667, 0.866, 0.8706666666666667, 0.8738333333333334, 0.8748333333333334, 0.8778333333333334, 0.8798333333333334, 0.87375, 0.8865, 0.8898333333333334, 0.8885833333333333, 0.8991666666666667, 0.8968333333333334], [0.3597823417186737, 0.31115993797779085, 0.29929635107517244, 0.2986589139699936, 0.2938830828666687, 0.28118040919303894, 0.2711684626340866, 0.2844697123765945, 0.26613601863384245, 0.2783134698867798, 0.2540236383676529, 0.25821100890636445, 0.2618845862150192, 0.2554920208454132, 0.26543013513088226, 0.24074569433927537, 0.26475649774074556, 0.25578504264354707, 0.2648500043153763, 0.25700133621692656], [0.825, 0.8375, 0.85875, 0.855625, 0.861875, 0.868125, 0.875, 0.85375, 0.886875, 0.86375, 0.88375, 0.885625, 0.875625, 0.87375, 0.8875, 0.895, 0.874375, 0.89125, 0.88625, 0.895625]], [0.25, 0.75, [0.4584837538447786, 0.4506375778545725, 0.4378386567089152, 0.4066803843734112, 0.3897064097542712, 0.3855383962868376, 0.39160584618753574, 0.3731403942120836, 0.37915910170116324, 0.36966170814443144, 0.35735995298687445, 0.35630573094525236, 0.346426092167484, 0.34040802899510303, 0.32829743726773464, 0.3284692421872565, 0.3186114077713895, 0.32295761503120685, 0.3201326223764014, 0.30581602454185486], [0.7803333333333333, 0.7709166666666667, 0.7723333333333333, 0.7850833333333334, 0.7885, 0.7903333333333333, 0.7986666666666666, 
0.805, 0.8011666666666667, 0.8068333333333333, 0.8095833333333333, 0.8226666666666667, 0.8285, 0.83125, 0.8369166666666666, 0.8395, 0.8441666666666666, 0.8393333333333334, 0.8490833333333333, 0.8546666666666667], [0.43526833415031435, 0.3598956459760666, 0.3492005372047424, 0.33501910269260404, 0.31689528703689573, 0.3113307124376297, 0.32388085544109346, 0.3084335786104202, 0.3013568025827408, 0.28992725372314454, 0.28726822674274444, 0.26945948660373686, 0.276592333316803, 0.27462401330471037, 0.27574350595474245, 0.2710308712720871, 0.2702724140882492, 0.27323003828525544, 0.25551479041576386, 0.26488787233829497], [0.808125, 0.81625, 0.805, 0.8325, 0.846875, 0.835625, 0.850625, 0.838125, 0.836875, 0.861875, 0.85375, 0.866875, 0.858125, 0.8825, 0.879375, 0.874375, 0.874375, 0.886875, 0.883125, 0.86875]], [0.5, 0, [0.3579516930783049, 0.29596046564426826, 0.2779693031247626, 0.2563994538356015, 0.24771526356802342, 0.2324555875693864, 0.2139121579362991, 0.20474095547452886, 0.19138856208387842, 0.18883306279461434, 0.1763652620757831, 0.1698919345248253, 0.16033914366221808, 0.1557997044651432, 0.1432509447467771, 0.13817814606776896, 0.12609625801919622, 0.11830132696381275, 0.11182412960903441, 0.112559904720872], [0.8314166666666667, 0.8611666666666666, 0.8736666666666667, 0.8800833333333333, 0.885, 0.8944166666666666, 0.9036666666666666, 0.9090833333333334, 0.9193333333333333, 0.9161666666666667, 0.92225, 0.9255, 0.93075, 0.93225, 0.939, 0.9414166666666667, 0.94375, 0.9485833333333333, 0.9535833333333333, 0.9524166666666667], [0.30677567660808563, 0.32954772651195524, 0.25747098088264464, 0.2736126834154129, 0.2561805549263954, 0.23671718776226044, 0.24553639352321624, 0.2338863667845726, 0.24586652517318724, 0.23423030972480774, 0.26579618513584136, 0.2781539523601532, 0.27084136098623274, 0.23948652744293214, 0.26023868829011915, 0.2419952344894409, 0.2511997854709625, 0.23935708701610564, 0.2701922015845776, 0.27307246536016466], [0.870625, 0.855625, 
0.886875, 0.875625, 0.878125, 0.8925, 0.885, 0.890625, 0.876875, 0.896875, 0.881875, 0.8875, 0.89, 0.898125, 0.896875, 0.89, 0.89875, 0.904375, 0.906875, 0.894375]], [0.5, 0.25, [0.3712943946903056, 0.3198322071594761, 0.29978102302931725, 0.295274139798068, 0.2861913934032968, 0.27165328782606635, 0.25972246442069397, 0.2543164194819141, 0.24795781916126292, 0.24630710007028378, 0.23296909834793272, 0.23382153587931015, 0.2239028559799524, 0.21443849290780564, 0.2149274461367663, 0.20642021417300752, 0.19801520536396097, 0.1978839404009124, 0.19118623847657062, 0.18144798041024107], [0.8235833333333333, 0.8538333333333333, 0.8604166666666667, 0.86075, 0.8664166666666666, 0.8754166666666666, 0.8799166666666667, 0.8815833333333334, 0.88725, 0.8848333333333334, 0.8936666666666667, 0.8935, 0.895, 0.8995, 0.89625, 0.9068333333333334, 0.9098333333333334, 0.9120833333333334, 0.91375, 0.9175833333333333], [0.3184810388088226, 0.2948088157176971, 0.29438531696796416, 0.27669853866100313, 0.2634278678894043, 0.25847582578659056, 0.2500907778739929, 0.2538330048322678, 0.25127841770648957, 0.2519759064912796, 0.2455715072154999, 0.2437664610147476, 0.259639236330986, 0.24515749186277389, 0.2553828465938568, 0.2324645048379898, 0.24492083072662355, 0.24482838332653045, 0.23327024638652802, 0.2520161652565002], [0.855, 0.865, 0.8525, 0.856875, 0.876875, 0.88125, 0.8825, 0.8875, 0.8925, 0.8925, 0.88875, 0.889375, 0.87375, 0.895, 0.889375, 0.90625, 0.883125, 0.895, 0.899375, 0.901875]], [0.5, 0.5, [0.40442772225496615, 0.36662670541951, 0.355034276367502, 0.3396551510755052, 0.3378269396563794, 0.32084332002287214, 0.31314464951766297, 0.2982726935693558, 0.2885229691387491, 0.2888992782285873, 0.2893476904706752, 0.281817957996688, 0.2771622718490185, 0.2693793097550565, 0.2617615883416952, 0.2657115764995205, 0.25631817549150043, 0.24793559907281654, 0.2538738044652533, 0.23912971732305718], [0.8093333333333333, 0.82825, 0.8341666666666666, 0.84525, 0.84525, 0.8515, 
0.8583333333333333, 0.8626666666666667, 0.8688333333333333, 0.8685, 0.8689166666666667, 0.8693333333333333, 0.8711666666666666, 0.8766666666666667, 0.88275, 0.88175, 0.8839166666666667, 0.8866666666666667, 0.8839166666666667, 0.8929166666666667], [0.38392188608646394, 0.3653419762849808, 0.3050421380996704, 0.30614266455173494, 0.2937217426300049, 0.30008585572242735, 0.2794034606218338, 0.27541795969009397, 0.31378355383872986, 0.2670704126358032, 0.26745485186576845, 0.2471194839477539, 0.26509816259145735, 0.25458798944950106, 0.2481587851047516, 0.25591064751148224, 0.2596563971042633, 0.2569611769914627, 0.2435744071006775, 0.2507249677181244], [0.820625, 0.846875, 0.856875, 0.868125, 0.860625, 0.87125, 0.86625, 0.87375, 0.865625, 0.87875, 0.878125, 0.889375, 0.87875, 0.886875, 0.89125, 0.89, 0.87375, 0.884375, 0.88875, 0.89375]], [0.5, 0.75, [0.46106574311852455, 0.4519433615372536, 0.4446939624687459, 0.4284856241751224, 0.4527993325857406, 0.4220876024758562, 0.40969764266876463, 0.39233948219012704, 0.42498463344700793, 0.3869199570506177, 0.38021832910623954, 0.3855376149270129, 0.3721433773319772, 0.3662295250340979, 0.3629763710530514, 0.358500304691335, 0.3490118366131123, 0.34879197790584665, 0.33399240054348683, 0.3347948451149971], [0.7866666666666666, 0.7865, 0.784, 0.79375, 0.7755833333333333, 0.79125, 0.7973333333333333, 0.8085833333333333, 0.7913333333333333, 0.8125833333333333, 0.81675, 0.812, 0.8173333333333334, 0.8235833333333333, 0.831, 0.8306666666666667, 0.8353333333333334, 0.8320833333333333, 0.84375, 0.8410833333333333], [0.35159709095954894, 0.3579048192501068, 0.3501501774787903, 0.33594816565513613, 0.3741619431972504, 0.34183687329292295, 0.3353554099798203, 0.32617265462875367, 0.3640907108783722, 0.33187183618545535, 0.32401839792728426, 0.30536725163459777, 0.31303414940834046, 0.2893040508031845, 0.3063929396867752, 0.2909839802980423, 0.2858921372890472, 0.2850045281648636, 0.28049838364124297, 0.2873564797639847], [0.816875, 
0.793125, 0.810625, 0.821875, 0.8175, 0.82, 0.816875, 0.814375, 0.828125, 0.83875, 0.818125, 0.843125, 0.834375, 0.85875, 0.874375, 0.85375, 0.870625, 0.85375, 0.883125, 0.848125]], [0.75, 0, [0.37716902824158366, 0.3260373148195287, 0.3128290904012132, 0.2998493126732238, 0.29384377892030045, 0.2759418967873492, 0.26431119905665834, 0.2577077782455277, 0.25772295725789474, 0.24954422610871335, 0.24065862928933285, 0.23703582263848882, 0.23237684028262787, 0.2200249534575863, 0.22110319957929722, 0.21804759631607126, 0.21419822757548473, 0.19927451733816812, 0.19864692467641323, 0.18966749441274938], [0.8215833333333333, 0.848, 0.8526666666666667, 0.8585, 0.8639166666666667, 0.8716666666666667, 0.8783333333333333, 0.8849166666666667, 0.88325, 0.88325, 0.8918333333333334, 0.8913333333333333, 0.896, 0.9010833333333333, 0.8996666666666666, 0.9016666666666666, 0.902, 0.9120833333333334, 0.9105833333333333, 0.9160833333333334], [0.3255926352739334, 0.3397491586208343, 0.3148202610015869, 0.30447013437747955, 0.27427292466163633, 0.2607581865787506, 0.2583494257926941, 0.24150457441806794, 0.24839721441268922, 0.24157819360494615, 0.24594406485557557, 0.2547012311220169, 0.24132476687431337, 0.2433958488702774, 0.2358475297689438, 0.24675665378570558, 0.23343635857105255, 0.22841362684965133, 0.2247604575753212, 0.24281086921691894], [0.85125, 0.85125, 0.853125, 0.851875, 0.876875, 0.87875, 0.883125, 0.888125, 0.89, 0.888125, 0.88375, 0.86625, 0.88375, 0.888125, 0.898125, 0.88875, 0.896875, 0.894375, 0.899375, 0.88625]], [0.75, 0.25, [0.3795942336796446, 0.33614943612446174, 0.3235826115024851, 0.3267444484728448, 0.30353531146303137, 0.29750882636042353, 0.2964640334248543, 0.28714796314214136, 0.2744278162717819, 0.27310871372514584, 0.2624819800257683, 0.2579742945889209, 0.25963644726954876, 0.25635017161356644, 0.2501001837960583, 0.24249463702769988, 0.23696896695393196, 0.23254455582417072, 0.22419108628751117, 0.22851746232110134], [0.8204166666666667, 0.839, 
0.847, 0.8506666666666667, 0.8571666666666666, 0.8635, 0.8639166666666667, 0.8711666666666666, 0.8711666666666666, 0.87475, 0.87875, 0.87925, 0.8805833333333334, 0.8845, 0.88675, 0.8908333333333334, 0.8926666666666667, 0.89525, 0.8985, 0.8955833333333333], [0.3383863967657089, 0.31120560944080355, 0.32110977828502657, 0.3080899566411972, 0.2866462391614914, 0.27701647162437437, 0.29040718913078306, 0.2702513742446899, 0.2590403389930725, 0.26199558019638064, 0.26484714448451996, 0.2940529054403305, 0.2654808533191681, 0.25154681205749513, 0.26637687146663663, 0.24435366928577423, 0.24174826145172118, 0.2444209086894989, 0.247626873254776, 0.24192263156175614], [0.843125, 0.8575, 0.86, 0.86375, 0.87, 0.875625, 0.865, 0.88, 0.879375, 0.885, 0.888125, 0.85625, 0.87625, 0.88375, 0.879375, 0.888125, 0.8875, 0.886875, 0.8825, 0.8925]], [0.75, 0.5, [0.41032169133107715, 0.37122817583223605, 0.35897897873470125, 0.3438001747064768, 0.33858899811797954, 0.3389760729797343, 0.32536247420184156, 0.3152934226425404, 0.30936657058748795, 0.3078679118226183, 0.30974164977669716, 0.30031369174731537, 0.29489042173991814, 0.28921707251921613, 0.28369594476324445, 0.2849519875772456, 0.27076949349584734, 0.26930386248104116, 0.26349931491657774, 0.26431971300948176], [0.8086666666666666, 0.82875, 0.8284166666666667, 0.8381666666666666, 0.837, 0.8389166666666666, 0.8490833333333333, 0.8488333333333333, 0.8533333333333334, 0.8551666666666666, 0.8509166666666667, 0.8615, 0.8628333333333333, 0.86225, 0.8715, 0.86775, 0.8748333333333334, 0.8719166666666667, 0.8814166666666666, 0.8835], [0.3464747530221939, 0.3193131250143051, 0.3464068531990051, 0.3129056388139725, 0.3131117367744446, 0.30689118325710296, 0.2929005026817322, 0.3131696957349777, 0.302835636138916, 0.27934255003929137, 0.300513002872467, 0.26962003886699676, 0.2676294481754303, 0.26430738389492037, 0.2525753951072693, 0.2508367341756821, 0.25303518533706665, 0.24774718701839446, 0.24518848478794097, 0.26084545016288757], 
[0.8225, 0.85375, 0.849375, 0.853125, 0.85875, 0.848125, 0.856875, 0.8575, 0.87, 0.869375, 0.863125, 0.886875, 0.8725, 0.878125, 0.894375, 0.888125, 0.8875, 0.89125, 0.88875, 0.86875]], [0.75, 0.75, [0.4765880586619073, 0.4503744399928032, 0.4249279998401378, 0.42333967214886176, 0.4236916420941657, 0.4269233151002133, 0.4192506206479478, 0.41413671872083174, 0.41084911515738104, 0.389948022413127, 0.39566395788433706, 0.3741930383951106, 0.3794517093040842, 0.3692300356131919, 0.3640432547223061, 0.3608953575504587, 0.3419572095129084, 0.34907091543712515, 0.33601277535583113, 0.3408893179544743], [0.77625, 0.7823333333333333, 0.7916666666666666, 0.80075, 0.7973333333333333, 0.7810833333333334, 0.7928333333333333, 0.7930833333333334, 0.7951666666666667, 0.8015833333333333, 0.8000833333333334, 0.8126666666666666, 0.811, 0.81775, 0.8236666666666667, 0.8215, 0.8305833333333333, 0.8251666666666667, 0.8299166666666666, 0.836], [0.3674533206224442, 0.36733597874641416, 0.35894496202468873, 0.3514183223247528, 0.35345671892166136, 0.36494161546230314, 0.35217500329017637, 0.3447349113225937, 0.34697150766849516, 0.36931039452552794, 0.3350031852722168, 0.3416145300865173, 0.32389605045318604, 0.3109715062379837, 0.3322615468502045, 0.327584428191185, 0.31910278856754304, 0.311815539598465, 0.2950947880744934, 0.2948034608364105], [0.808125, 0.789375, 0.826875, 0.821875, 0.81375, 0.804375, 0.80625, 0.83, 0.820625, 0.848125, 0.816875, 0.8125, 0.83, 0.84625, 0.824375, 0.828125, 0.825625, 0.840625, 0.8475, 0.844375]]]
data = [[0, 0, [0.400307985173582, 0.2597426520640662, 0.20706942731312025, 0.17091670006251475, 0.13984850759524653, 0.11444453444522518, 0.0929887340481538, 0.07584588486117436, 0.06030314570384176, 0.04997897459031356, 0.037156337104278056, 0.02793900864590992, 0.02030197833807442, 0.01789472087045391, 0.0175876492686666, 0.019220354652448274, 0.013543135874294319, 0.006956856955481477, 0.0024507183060002227, 0.00206579088377317], [0.8547833333333333, 0.9049, 0.9241666666666667, 0.9360166666666667, 0.94695, 0.9585833333333333, 0.9658666666666667, 0.9723166666666667, 0.9780333333333333, 0.9820166666666666, 0.9868, 0.9906666666666667, 0.9936833333333334, 0.9941333333333333, 0.99405, 0.9932833333333333, 0.9960666666666667, 0.9979666666666667, 0.9996666666666667, 0.9995666666666667], [0.36797549843788147, 0.2586278670430183, 0.24208260095119477, 0.24353929474949837, 0.24164094921946525, 0.2638056704550982, 0.2579395814836025, 0.27675500786304474, 0.2851512663513422, 0.30380481338500975, 0.3235128371268511, 0.3284085538983345, 0.3443841063082218, 0.41086878085136413, 0.457796107493341, 0.4356938077956438, 0.4109785168170929, 0.4433729724138975, 0.4688420155197382, 0.4773445381522179], [0.87, 0.908375, 0.91475, 0.915125, 0.91525, 0.91725, 0.924875, 0.91975, 0.922375, 0.92025, 0.920375, 0.924875, 0.9235, 0.918125, 0.91525, 0.918875, 0.923625, 0.9235, 0.92625, 0.925]], [0, 0.25, [0.4710115425463424, 0.3166707545550647, 0.25890692547440275, 0.22350736999753187, 0.19296910860009794, 0.17304379170113154, 0.15315235079105285, 0.13728606270383925, 0.12178339355929034, 0.10961619754736898, 0.10074329449495337, 0.08793247367408294, 0.07651288138686625, 0.06934997136779089, 0.06243234033510685, 0.056774082654433795, 0.05116950291028218, 0.04961718403588313, 0.04289388027836952, 0.040430180404756245], [0.8289666666666666, 0.8851833333333333, 0.9045166666666666, 0.9167666666666666, 0.9294166666666667, 0.93545, 0.94275, 0.9486666666666667, 0.95365, 0.95855, 0.9618833333333333, 
0.9667, 0.9717666666666667, 0.9745833333333334, 0.9765833333333334, 0.9793, 0.9809833333333333, 0.9820333333333333, 0.9839166666666667, 0.9849166666666667], [0.3629846270084381, 0.31240448981523516, 0.24729759228229523, 0.2697310926616192, 0.24718070650100707, 0.23403583562374114, 0.2295891786813736, 0.22117181441187858, 0.2475375788807869, 0.23771390727162361, 0.2562992911040783, 0.25533875498175623, 0.27057862806320193, 0.2820998176634312, 0.29471745146811007, 0.2795617451965809, 0.3008101430237293, 0.28815430629253386, 0.31814645100384953, 0.3106237706840038], [0.874125, 0.88875, 0.908875, 0.9045, 0.9145, 0.918125, 0.919375, 0.9245, 0.91975, 0.926, 0.923625, 0.925875, 0.92475, 0.926375, 0.925125, 0.92525, 0.924625, 0.930875, 0.924875, 0.926625]], [0, 0.5, [0.6091368444629316, 0.40709905083309106, 0.33330900164873106, 0.29541655938063605, 0.26824146830864043, 0.24633059249535552, 0.22803501166832219, 0.21262132842689435, 0.20038021789160745, 0.18430457027680647, 0.1744787511763288, 0.165271017740149, 0.15522625095554507, 0.1432937567076608, 0.13617747858651222, 0.12876031456241158, 0.12141566201230325, 0.11405601029369686, 0.11116664642408522, 0.10308189516060992], [0.7803833333333333, 0.8559166666666667, 0.8823, 0.89505, 0.9027333333333334, 0.9099166666666667, 0.9162333333333333, 0.9224833333333333, 0.9243166666666667, 0.9321, 0.9345833333333333, 0.9375333333333333, 0.9418833333333333, 0.9456666666666667, 0.9482333333333334, 0.9513666666666667, 0.9527333333333333, 0.9559, 0.9576166666666667, 0.9611], [0.36491659212112426, 0.29200539910793305, 0.2840233483910561, 0.2591339669823646, 0.24114771646261215, 0.2436459481716156, 0.2374294084906578, 0.24284198743104934, 0.22679156363010405, 0.2229055170416832, 0.21932773572206496, 0.23045065227150918, 0.23631879675388337, 0.22048399156332016, 0.2563135535418987, 0.2494968646839261, 0.24099056956171988, 0.23974315640330315, 0.24684958010911942, 0.25887142738699914], [0.8665, 0.8925, 0.897, 0.907375, 0.914125, 0.9125, 
0.913875, 0.911875, 0.921125, 0.922625, 0.923375, 0.924125, 0.922625, 0.926, 0.915625, 0.926125, 0.932625, 0.927875, 0.93, 0.92525]], [0, 0.75, [1.187068938827718, 0.9080034740316842, 0.6863665148329887, 0.5706229420867301, 0.5069490017921432, 0.46316734996876485, 0.42913920047885573, 0.4107565824855874, 0.3908677859061054, 0.37283689377785745, 0.3606657798388111, 0.353545261082301, 0.34009441143986, 0.3239413740506559, 0.3193119444620253, 0.31045137204404577, 0.3003838519091164, 0.29092520530194615, 0.28635713599447504, 0.2760026559138349], [0.5551333333333334, 0.6467, 0.7338666666666667, 0.7841333333333333, 0.8128, 0.82845, 0.8430833333333333, 0.8501666666666666, 0.8580833333333333, 0.8646166666666667, 0.8667666666666667, 0.8709833333333333, 0.8766166666666667, 0.8816666666666667, 0.8812, 0.88465, 0.8898833333333334, 0.8934666666666666, 0.8940833333333333, 0.8977666666666667], [0.6463955206871033, 0.5193838343620301, 0.4155286856889725, 0.3316091845035553, 0.3148408111333847, 0.29354524302482604, 0.2875490103960037, 0.26903486740589144, 0.27737221759557723, 0.262776792883873, 0.25498255288600924, 0.2390553195178509, 0.24918611392378806, 0.23830307483673097, 0.23538302001357078, 0.24996423116326333, 0.2464654156267643, 0.24081429636478424, 0.23204647853970528, 0.23771219885349273], [0.763875, 0.81925, 0.8685, 0.8885, 0.8895, 0.895625, 0.902, 0.904125, 0.906125, 0.908, 0.909375, 0.9145, 0.916125, 0.9175, 0.91875, 0.91425, 0.915375, 0.918875, 0.91975, 0.91825]], [0.25, 0, [0.4140813298491654, 0.27481235485118843, 0.22397600941614174, 0.1890777693286951, 0.16538111197112848, 0.1448796250478132, 0.12440053254032313, 0.10817898457734855, 0.09634132136696025, 0.08548538653410352, 0.07339220296349257, 0.06470446296305314, 0.060030178171393875, 0.053294485403614034, 0.04429284706704323, 0.04014099264770115, 0.03974721442450951, 0.03304463665041803, 0.02955428938137994, 0.026940144761875052], [0.8496666666666667, 0.8982666666666667, 0.9162166666666667, 0.9292166666666667, 
0.93805, 0.9457666666666666, 0.9534333333333334, 0.9596, 0.9645833333333333, 0.9679, 0.9726166666666667, 0.9761666666666666, 0.9775, 0.9800166666666666, 0.9842, 0.9855333333333334, 0.9857, 0.98805, 0.9895666666666667, 0.9905833333333334], [0.3327465409040451, 0.27738857254385946, 0.23834018683433533, 0.24359044748544692, 0.23630736249685289, 0.26239568686485293, 0.23089197066426276, 0.23183160039782524, 0.2287161501646042, 0.23795067170262338, 0.2680365410447121, 0.28079107534885406, 0.2745736412107945, 0.27641161236166956, 0.2967236565724015, 0.29836027943715454, 0.28526886811852453, 0.3188628684282303, 0.3159900237545371, 0.33990017675608397], [0.876875, 0.899875, 0.918125, 0.9105, 0.918125, 0.91, 0.92075, 0.922625, 0.924, 0.921, 0.920875, 0.921, 0.9285, 0.927625, 0.9265, 0.927375, 0.925875, 0.927, 0.92575, 0.925875]], [0.25, 0.25, [0.48859380523978013, 0.3269256727337075, 0.275135099903734, 0.24039912359244914, 0.21368402032566858, 0.19328243048317523, 0.17890911489359732, 0.16624130663682402, 0.15215728174088827, 0.1416037013468299, 0.13273427299440288, 0.12227611260405227, 0.11463099068699917, 0.10616964906720179, 0.09988978996809357, 0.09424899211093815, 0.08670466838887077, 0.0835973875783781, 0.0778748192367698, 0.07327510508696741], [0.82055, 0.8806666666666667, 0.9004333333333333, 0.9117333333333333, 0.9206333333333333, 0.92785, 0.9333, 0.9384166666666667, 0.9430333333333333, 0.9471833333333334, 0.95055, 0.9540166666666666, 0.9568833333333333, 0.9601666666666666, 0.9620333333333333, 0.9652, 0.9676833333333333, 0.9682666666666667, 0.9706, 0.9724333333333334], [0.34025013536214826, 0.29788709819316866, 0.2680273652672768, 0.2463292105793953, 0.23471139985322953, 0.22580294385552407, 0.21676637730002404, 0.20925517010688782, 0.23552959233522416, 0.21975916308164598, 0.23494828915596008, 0.21611644634604454, 0.22251244640350343, 0.22066593673825263, 0.2214409472346306, 0.22849382662773132, 0.24493269926309585, 0.2397777333110571, 0.23578458192944526, 
0.2563280282020569], [0.870875, 0.8875, 0.900375, 0.906625, 0.9145, 0.921125, 0.92125, 0.92425, 0.916, 0.923125, 0.920375, 0.92675, 0.92575, 0.924875, 0.925, 0.924875, 0.922875, 0.931125, 0.932375, 0.929]], [0.25, 0.5, [0.6104797730917362, 0.42115319246994154, 0.3527538229359874, 0.3136731511446586, 0.2857721160565104, 0.26646374052426197, 0.24732486170523965, 0.23057452346613286, 0.21953405395769743, 0.20952929538100767, 0.19584925043811677, 0.18926965880162044, 0.18003955145856973, 0.17379174885878176, 0.16635702809354644, 0.15807223409366633, 0.1509416516620054, 0.1477138751140758, 0.14028569269798266, 0.13906246528172417], [0.7786833333333333, 0.8482166666666666, 0.8730833333333333, 0.888, 0.8978, 0.9033666666666667, 0.9089166666666667, 0.9147666666666666, 0.91955, 0.9221833333333334, 0.92715, 0.9309666666666667, 0.9334, 0.93495, 0.9376833333333333, 0.9402666666666667, 0.94405, 0.9439166666666666, 0.9466833333333333, 0.9464833333333333], [0.3859497320652008, 0.3124091213941574, 0.28177140313386917, 0.2564259949326515, 0.24969424712657928, 0.23137387067079543, 0.22758139592409135, 0.22978509336709976, 0.2293499847650528, 0.22430640310049058, 0.21563700905442237, 0.21529569518566133, 0.22171301135420798, 0.2105387990772724, 0.21190602815151213, 0.21494245541095733, 0.21312989933788776, 0.20670134457945824, 0.2146600303351879, 0.21474341893941165], [0.86, 0.888, 0.89625, 0.907, 0.908, 0.915, 0.917875, 0.92, 0.921125, 0.917625, 0.924, 0.921875, 0.925875, 0.92575, 0.928125, 0.92775, 0.928625, 0.93075, 0.92975, 0.930375]], [0.25, 0.75, [1.1724896589194789, 0.8803599189911315, 0.692622532690766, 0.5974764075837156, 0.5319996399920124, 0.49373906012028773, 0.4741932853007876, 0.45601858158927483, 0.43706520244892216, 0.4238534729236733, 0.41077356216813454, 0.38932509837882606, 0.3771154705856019, 0.3687882057305719, 0.34927689276937485, 0.3379922736602933, 0.33547254843212393, 0.3263144160448107, 0.31800466419251233, 0.3133781185822446], [0.5631833333333334, 
0.6579333333333334, 0.7342166666666666, 0.7765833333333333, 0.8036333333333333, 0.8197166666666666, 0.82755, 0.8320166666666666, 0.8397833333333333, 0.8432666666666667, 0.8519333333333333, 0.85835, 0.86285, 0.8641, 0.87105, 0.8756666666666667, 0.8775166666666666, 0.87965, 0.88255, 0.8832333333333333], [0.5745115535259246, 0.4740168128013611, 0.4092038922309876, 0.345498643040657, 0.32894178831577303, 0.2999964846372604, 0.28456189918518066, 0.28186965006589887, 0.26958267349004744, 0.26703972268104553, 0.2667745503783226, 0.2553461962342262, 0.25764305877685545, 0.2528705199956894, 0.24987997275590895, 0.24210182267427444, 0.2366510547697544, 0.24053962442278862, 0.22825994032621383, 0.2270425768494606], [0.776875, 0.822625, 0.848875, 0.87825, 0.88925, 0.899875, 0.9015, 0.904375, 0.9035, 0.906, 0.906875, 0.91125, 0.907, 0.908625, 0.91175, 0.917125, 0.91675, 0.916125, 0.919875, 0.917625]], [0.5, 0, [0.43062501005145276, 0.29807482149078646, 0.2541527441585623, 0.21918726423338278, 0.1950343672964555, 0.17517360023010387, 0.16213757058244144, 0.14869415854364, 0.13477844860392815, 0.12352272007129848, 0.11392300839184412, 0.10589898744228679, 0.09751250602896692, 0.089864786467088, 0.08516462990539526, 0.07973235945548934, 0.07441158362824137, 0.07053931183896578, 0.06258528833356954, 0.06177985634201014], [0.8429, 0.88905, 0.9052166666666667, 0.9182166666666667, 0.92755, 0.9337666666666666, 0.93835, 0.944, 0.9489333333333333, 0.95365, 0.9565333333333333, 0.9599166666666666, 0.9637833333333333, 0.9659666666666666, 0.9685666666666667, 0.9705, 0.9713666666666667, 0.9738, 0.9770166666666666, 0.9769833333333333], [0.32814766228199005, 0.29447353577613833, 0.25052148789167406, 0.22761481428146363, 0.23280890756845474, 0.23155913531780242, 0.21984874603152274, 0.2166314404308796, 0.2202563073039055, 0.22508277136087418, 0.2237191815972328, 0.2246915928721428, 0.22815296687185765, 0.2254556802213192, 0.2337513281852007, 0.2381753808259964, 0.24798179551959038, 
0.24766947883367538, 0.24877363580465317, 0.2518915164768696], [0.879625, 0.89025, 0.907875, 0.916625, 0.91625, 0.91825, 0.920875, 0.923625, 0.922625, 0.923, 0.92575, 0.927125, 0.928625, 0.92625, 0.925375, 0.925625, 0.926375, 0.92475, 0.9255, 0.92675]], [0.5, 0.25, [0.5022556754285847, 0.3545388207554436, 0.2965180559564374, 0.2689443711818917, 0.24340009927622544, 0.22504497168144819, 0.21177587015574167, 0.19926073912507308, 0.18498492261557692, 0.1792394390810273, 0.16716771742809555, 0.16088557891500022, 0.15540826101420022, 0.1471743908549931, 0.14383414784458273, 0.1351151093741311, 0.1312572255915305, 0.12904865093140014, 0.12332957751079918, 0.11934908895072208], [0.8186333333333333, 0.8711666666666666, 0.8905666666666666, 0.9020666666666667, 0.9106333333333333, 0.9169333333333334, 0.9227, 0.9258166666666666, 0.9317, 0.9329666666666667, 0.9384833333333333, 0.9394333333333333, 0.94185, 0.9447666666666666, 0.9449833333333333, 0.9489, 0.9506, 0.9520333333333333, 0.95295, 0.9556833333333333], [0.37072600054740906, 0.2894986196160316, 0.2896255247592926, 0.2553737629055977, 0.2347450014948845, 0.23144772934913635, 0.22532679361104965, 0.2152210614681244, 0.21610748746991157, 0.22872606116533278, 0.22058768355846406, 0.20230921444296837, 0.2118315652012825, 0.20028054055571556, 0.20844366964697839, 0.20884322375059128, 0.21231223946809769, 0.19875787001848222, 0.2072589308321476, 0.22480831852555275], [0.862, 0.894, 0.892375, 0.906375, 0.912625, 0.91375, 0.916875, 0.918875, 0.92125, 0.9185, 0.920375, 0.92825, 0.9255, 0.92925, 0.926875, 0.9285, 0.926375, 0.93075, 0.931125, 0.922875]], [0.5, 0.5, [0.6208003907124879, 0.4341448332582201, 0.3655890760454796, 0.3245583019102179, 0.3000562671722888, 0.2840681741280215, 0.2686156402947679, 0.25843519997844566, 0.24892204790227196, 0.23988707410469493, 0.22968693327770304, 0.22323107979953416, 0.21376596502403714, 0.21353628940340172, 0.208721635311143, 0.20283085862393063, 0.19862186088204892, 0.1939613972542319, 
0.18833921627917968, 0.18451892669552933], [0.7769666666666667, 0.8453333333333334, 0.86965, 0.88425, 0.8911, 0.8957666666666667, 0.90125, 0.9056666666666666, 0.9083833333333333, 0.9122666666666667, 0.91455, 0.9176833333333333, 0.92035, 0.9217, 0.9232333333333334, 0.9238333333333333, 0.9270333333333334, 0.9283, 0.93035, 0.9312333333333334], [0.390482270359993, 0.3140819278359413, 0.286346542596817, 0.26530489122867584, 0.25648517191410064, 0.25534764647483826, 0.24066219604015351, 0.22813884472846985, 0.22091108289361, 0.22591463786363603, 0.22548504903912545, 0.21807716876268388, 0.23463654381036758, 0.21917386519908905, 0.2077158398628235, 0.2112607652246952, 0.205703763961792, 0.21748955991864205, 0.20092388433218003, 0.20742826372385026], [0.859125, 0.884375, 0.89225, 0.9035, 0.9045, 0.904875, 0.907875, 0.915375, 0.914875, 0.915375, 0.916375, 0.92075, 0.91575, 0.91825, 0.92375, 0.924, 0.924875, 0.917125, 0.926875, 0.920875]], [0.5, 0.75, [1.1608194957918196, 0.8736483463918222, 0.7270457689632485, 0.6118623841482439, 0.5539627463769302, 0.5169604117872872, 0.4843029365547176, 0.4664089765979537, 0.449539397952399, 0.4308713404481599, 0.4170197155842903, 0.4104185118508746, 0.3983522486299086, 0.3890672579232945, 0.38423672571047535, 0.38125834129512437, 0.36963055836461756, 0.36898326972273116, 0.3608236700328174, 0.35822524538617145], [0.56785, 0.6591833333333333, 0.71765, 0.7660333333333333, 0.7931666666666667, 0.8079666666666667, 0.8198833333333333, 0.8275166666666667, 0.8349833333333333, 0.8422, 0.8473666666666667, 0.8486833333333333, 0.85425, 0.85675, 0.8578666666666667, 0.8603333333333333, 0.8643333333333333, 0.8637833333333333, 0.8684333333333333, 0.8680166666666667], [0.5984484012126923, 0.5152713191509247, 0.42289899206161496, 0.3746640253067017, 0.3369040569067001, 0.32359291434288023, 0.2978636801838875, 0.2998174095153809, 0.2883352539539337, 0.2839300352931023, 0.2775397801399231, 0.2616970262527466, 0.259125192284584, 0.25470315623283385, 
0.2535187450051308, 0.2600560383200645, 0.25031394577026367, 0.2547155976295471, 0.23950587111711502, 0.24401323813199996], [0.750875, 0.78025, 0.86225, 0.869875, 0.884875, 0.891625, 0.898875, 0.89275, 0.901875, 0.9005, 0.899875, 0.908375, 0.91125, 0.910375, 0.910375, 0.907, 0.9135, 0.910375, 0.914125, 0.911625]], [0.75, 0, [0.5018121279410716, 0.3649225841834347, 0.31199926770985253, 0.2825479824850554, 0.25993211727057186, 0.2431308363737074, 0.22870161555913973, 0.22126636312587428, 0.2113911879540824, 0.20279224649834227, 0.19300907663603836, 0.18686007729360163, 0.1815741605866057, 0.1759802805684777, 0.17041425832084564, 0.16513840764014323, 0.15892388751861383, 0.1548161118118557, 0.1498002242614656, 0.14744469122107284], [0.8158, 0.8648, 0.8846833333333334, 0.8954666666666666, 0.9035333333333333, 0.9097666666666666, 0.9142666666666667, 0.91615, 0.9219166666666667, 0.9239333333333334, 0.9268166666666666, 0.9287666666666666, 0.9304833333333333, 0.9327333333333333, 0.9365, 0.9368666666666666, 0.9395333333333333, 0.9418833333333333, 0.9445, 0.9450166666666666], [0.35916801404953, 0.30038927191495896, 0.2824265750646591, 0.28094157111644746, 0.2402345055937767, 0.24779821130633353, 0.2263277245759964, 0.22270147562026976, 0.22010754531621932, 0.20850908517837524, 0.21723379525542258, 0.20454896742105483, 0.2065480750799179, 0.20593296563625335, 0.21030707907676696, 0.2015896993279457, 0.19770563289523124, 0.19552358242869378, 0.197759574085474, 0.19900305101275445], [0.867125, 0.890875, 0.896875, 0.896, 0.912125, 0.90875, 0.9185, 0.916875, 0.920375, 0.925125, 0.919375, 0.92675, 0.927125, 0.924625, 0.924125, 0.9275, 0.928, 0.928875, 0.93325, 0.930125]], [0.75, 0.25, [0.564780301424359, 0.41836969141385705, 0.3581543931924204, 0.3251280398018706, 0.30215959723538427, 0.28700008430778345, 0.27507679125488693, 0.26540731782439164, 0.25373875692105496, 0.24964979071734048, 0.24098571216357922, 0.23604591902512223, 0.2270722362135392, 0.2229606584985373, 
0.22031292727570545, 0.21439386613126885, 0.21020108821200156, 0.2042837777872012, 0.20376247368149283, 0.20021205727082453], [0.7927, 0.8474166666666667, 0.8672166666666666, 0.8811833333333333, 0.8883, 0.8952833333333333, 0.89795, 0.9011333333333333, 0.9055833333333333, 0.9071166666666667, 0.9100333333333334, 0.911, 0.91515, 0.9162166666666667, 0.91775, 0.9197833333333333, 0.9218666666666666, 0.9239, 0.9236833333333333, 0.92455], [0.39558523416519165, 0.3187315353155136, 0.30105597496032716, 0.2717038299441338, 0.25286867189407347, 0.24664685553312302, 0.24286985045671464, 0.23643679201602935, 0.23006864881515504, 0.2277349520921707, 0.22591854375600814, 0.2165311907827854, 0.21385486593842506, 0.21402871897816658, 0.2096972267627716, 0.21242560443282127, 0.2098898750245571, 0.2062524998188019, 0.19932547932863234, 0.20170186588168143], [0.850625, 0.88125, 0.8845, 0.897125, 0.9065, 0.9085, 0.907625, 0.91275, 0.917125, 0.9135, 0.91825, 0.922625, 0.91925, 0.921125, 0.923625, 0.92225, 0.923375, 0.922875, 0.925625, 0.92775]], [0.75, 0.5, [0.6916971901205303, 0.4947840944567977, 0.41710148827988963, 0.38678343986460906, 0.36429949198513906, 0.34339441834831796, 0.33055868282564665, 0.3199633415272114, 0.31550557391920575, 0.3022628513289921, 0.2959158662110885, 0.2941135993993867, 0.28555906579089063, 0.27903660322462065, 0.2769482293601102, 0.27154609372716215, 0.26548120195963487, 0.26188135733291795, 0.2588035051009929, 0.2574938320115939], [0.7497333333333334, 0.8236833333333333, 0.8482333333333333, 0.8618666666666667, 0.8703666666666666, 0.8772166666666666, 0.8803333333333333, 0.8829166666666667, 0.88525, 0.88945, 0.89275, 0.8937166666666667, 0.8969, 0.8977666666666667, 0.9, 0.90175, 0.9041666666666667, 0.9035, 0.9049, 0.9046166666666666], [0.41916924858093263, 0.3380992366075516, 0.31549062132835387, 0.2921286026239395, 0.2786481494307518, 0.28516836106777194, 0.25556409001350405, 0.2538892236948013, 0.24726227968931197, 0.24262803781032563, 0.24080126863718032, 
0.24242325466871262, 0.23416680485010147, 0.22847312396764755, 0.22423979061841964, 0.2311997367441654, 0.22794704174995423, 0.21943940049409866, 0.21820387506484987, 0.21150743806362152], [0.8435, 0.87725, 0.88425, 0.890375, 0.898125, 0.89275, 0.905625, 0.906125, 0.911, 0.910625, 0.911, 0.909875, 0.914875, 0.915375, 0.917875, 0.915, 0.91475, 0.919625, 0.923875, 0.92425]], [0.75, 0.75, [1.162218615571573, 0.8284856370453642, 0.7309887468624217, 0.6590983641744931, 0.6089096262510906, 0.5663433943285363, 0.5383681068733048, 0.5242803116787725, 0.49926126579930785, 0.48940120944018556, 0.4789252862779062, 0.46633604049746163, 0.4596060775458686, 0.4464966354847971, 0.4418302221593064, 0.43759817490254893, 0.42892070028827645, 0.4226101264516428, 0.418694807601763, 0.4110745745840103], [0.58005, 0.6824666666666667, 0.7223333333333334, 0.7464333333333333, 0.7711333333333333, 0.7891833333333333, 0.8012333333333334, 0.80635, 0.8172666666666667, 0.82225, 0.8271833333333334, 0.831, 0.8335833333333333, 0.8371833333333333, 0.8412166666666666, 0.84265, 0.8458833333333333, 0.8471166666666666, 0.8497666666666667, 0.8522833333333333], [0.5945872340202332, 0.518519122838974, 0.4681703653335571, 0.42978407418727876, 0.40349935555458066, 0.37377681517601014, 0.35234942865371705, 0.3359788683652878, 0.3217720929384232, 0.3279728285074234, 0.3114012089371681, 0.3060767319202423, 0.2949701727628708, 0.2981588536500931, 0.2855641575455666, 0.28112928783893587, 0.28212732630968096, 0.27846804082393645, 0.27372796374559405, 0.27415593349933626], [0.78525, 0.8215, 0.820125, 0.844375, 0.86375, 0.875125, 0.876625, 0.882, 0.887875, 0.884625, 0.890375, 0.892125, 0.897125, 0.894125, 0.902625, 0.89975, 0.89975, 0.90125, 0.902, 0.90075]]]
Dropout1 = 0.25 #@param {type:"slider", min:0, max:0.75, step:0.25}
Dropout2 = 0.75 #@param {type:"slider", min:0, max:0.75, step:0.25}
def plot(Dropout1, Dropout2):
    # Each entry of `data` holds [dropout1, dropout2, train_loss, train_acc,
    # validation_loss, validation_acc], with per-epoch curves for one
    # (Dropout1, Dropout2) grid point.
    d1, d2, train_loss, train_acc, validation_loss, validation_acc = data[int(Dropout1*4)*4+int(Dropout2*4)]
    print(d1, d2)
    plot_loss_accuracy(train_loss, train_acc, validation_loss, validation_acc)
    plt.gcf().axes[0].set_ylim(0, 1.2)
    plt.gcf().axes[1].set_ylim(0.5, 1)
    # Render the figure to an in-memory PNG and embed it as a base64 data URI
    import io, base64
    my_stringIObytes = io.BytesIO()
    plt.savefig(my_stringIObytes, format='png', dpi=90)
    my_stringIObytes.seek(0)
    my_base64_jpgData = base64.b64encode(my_stringIObytes.read())
    plt.close()
    p.value = """<img src="data:image/png;base64,""" + str(my_base64_jpgData)[2:-1] + """" alt="Graph">"""
d1 = widgets.FloatSlider(min=0, max=0.75, value=0.25, step=0.25, description="Dropout 1", style={'description_width': 'initial', 'width': '800px'}, )
d2 = widgets.FloatSlider(min=0, max=0.75, value=0.25, step=0.25, description="Dropout 2", style={'description_width': 'initial', 'width': '800px'}, )
p = widgets.HTML(value="")  # placeholder; filled in by plot()
w = interactive_output(plot, {"Dropout1":d1, "Dropout2": d2})
#w.layout.height = '450px'
display(widgets.VBox([d1, d2, p, w]))
```
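The cell above indexes a flat list of 16 precomputed runs by the two slider values. As a sketch of that arithmetic (the `grid_index` helper is our name, not part of the notebook):

```python
def grid_index(dropout1, dropout2, step=0.25, n=4):
    """Map two slider values in {0, 0.25, 0.5, 0.75} to the flat index
    of a row-major n x n grid -- the same arithmetic as
    int(Dropout1*4)*4 + int(Dropout2*4) in the cell above."""
    return int(dropout1 / step) * n + int(dropout2 / step)

print(grid_index(0.25, 0.75))  # row 1, column 3 -> index 7
```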
## Coding Exercise 2.2: How much does augmentation help?
Last week you also learned how data augmentation can regularize a network. Let's add data augmentation to our model via transforms and see if that helps our model to better generalize! In the following cell, add the transforms you want in the list `augmentation_transforms`. We will then run the same network you created in the above exercise (with regularization) and then plot the loss and accuracies.
Here's the link to the list of transforms available in PyTorch: https://pytorch.org/vision/stable/transforms.html
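Note that ordering matters when composing: most torchvision augmentations operate on PIL images, so they belong *before* `ToTensor()` and `Normalize`, which is why the exercise prepends the augmentation list to the preprocessing list. A minimal pure-Python sketch of how `Compose` chains callables (the stand-in transforms here just record their call order; no torch required):

```python
class Compose:
    """Minimal stand-in for torchvision.transforms.Compose:
    apply a list of callables left to right."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

calls = []

def make_step(name):
    # Stand-in transform: records its name, then passes the sample through.
    def step(x):
        calls.append(name)
        return x
    return step

augmentation = [make_step('random_flip')]                       # image-space augmentation
preprocessing = [make_step('to_tensor'), make_step('normalize')]  # tensor conversion + scaling

pipeline = Compose(augmentation + preprocessing)  # augmentation first, as in the exercise
pipeline('sample')
print(calls)  # ['random_flip', 'to_tensor', 'normalize']
```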
```
def transforms_custom(seed):
    # basic preprocessing
    preprocessing_transforms = [transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))]
    # add the augmentation transforms to the preprocessing
    train_transform = transforms.Compose(get_augmentation_transforms() +
                                         preprocessing_transforms)
    # load the FashionMNIST dataset with the transforms
    train_data = datasets.FashionMNIST(root='./data', download=True, train=True,
                                       transform=train_transform)
    # reduce to our two classes to speed up training
    train_data = reduce_classes(train_data)
    # get the data loader instances for the dataset
    train_loader, validation_loader, test_loader = get_data_loaders(train_data,
                                                                    validation_data,
                                                                    test_data,
                                                                    seed)
    return train_loader, validation_loader, test_loader
def get_augmentation_transforms():
    ####################################################################
    # Fill in missing code below (...),
    # then remove or comment the line below to test your function
    raise NotImplementedError("Add Transforms")
    ####################################################################
    augmentation_transforms = [..., ...]
    return augmentation_transforms
set_seed(SEED)
net3 = FMNIST_Net2().to(DEVICE) # get the network
## Uncomment below to test your function
# train_loader, validation_loader, test_loader = transforms_custom(SEED)
# train_loss, train_acc, validation_loss, validation_acc = train(net3, DEVICE, train_loader, validation_loader, 20)
# plot_loss_accuracy(train_loss, train_acc, validation_loss, validation_acc)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_8d4116f3.py)
*Example output:*
<img alt='Solution hint' align='left' width=2195.0 height=755.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/static/W2D1_Tutorial2_Solution_8d4116f3_3.png>
Run the next cell to get the accuracy on the data!
```
test(net3, DEVICE, test_loader)
```
## Think! 3.1: Data Augmentation
Did the training accuracy decrease further compared to dropout alone? Is the model still overfitting?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W2D1_ConvnetsAndRecurrentNeuralNetworks/solutions/W2D1_Tutorial2_Solution_ae125a93.py)
Great! In this section you trained what may have been your very first CNN. You added regularization and data augmentation in order to get a model that generalizes well. All the pieces are beginning to fit together!
Next we will talk about RNNs, which share parameters over time.
```
#collapse
# Kelloggs
> Notebook that creates the Kerry datasets.
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/chart-preview.png
#collapse
#IMPORT LIBRARIES FROM PYTHON
import pandas as pd
#IMPORT DATA SETS FROM TRUSTED SOURCES
#US CENSUS DATA FROM -
censusData = pd.read_csv('../_data/cen.csv')
#COUNTY MASK USAGE FROM -
maskData = pd.read_csv('../_data/maskbycounty.csv')
#RT DATA FROM -
D = pd.read_csv('../_data/rttestcounty.csv')
countyData = pd.read_csv('../_data/isee.csv')
rtData = pd.read_csv('https://d14wlfuexuxgcm.cloudfront.net/covid/rt.csv')
#CLEAN UP DATA AND STANDARDIZE THE COLUMNS
ACol = ['uid', 'date', 'resolution', 'date_lag', 'Rt_plot', 'Rt_upr',
        'Rt_lwr', 'Rt_loess_fit', 'Rt_loess_lwr', 'Rt_loess_upr',
        'positiveIncrease', 'positive', 'positive_7day', 'positive_percapita',
        'positiveIncr_percapita', 'deathIncrease', 'death', 'death_percapita',
        'deathIncr_percapita']
As = countyData[ACol]
pCol = ['fips', 'POPESTIMATE2019']
censusDataCleaned = censusData[pCol]
```
<H3>IMPORTING CLIENT DATA</H3>
```
#KERRY LOCATIONS
#PEPSI LOCATIONS
#KELLOGGS LOCATIONS
clientLocations=pd.read_csv('../_data/kellData3.csv')
fCol=['uid', 'fips', 'state', 'county']
clientDataCleaned=clientLocations[fCol]
#collapse
mergeClientCensus = pd.merge(censusDataCleaned, clientDataCleaned.astype(str), on="fips")
mergeData = mergeClientCensus.sort_values('state')
md = pd.merge(mergeData, D.astype(str), on='fips')
mn = md.sort_values('state')
UT = pd.merge(mergeData, As.astype(str), on='uid')
today = '08/01/2020'
UTT = UT[UT['date'] == today]
AwA = pd.merge(As.astype(str), mergeData, on='uid')
AwCol = ['state', 'county', 'date', 'resolution', 'Rt_plot', 'Rt_upr', 'Rt_lwr',
         'Rt_loess_fit', 'Rt_loess_lwr', 'Rt_loess_upr', 'positiveIncrease',
         'positive', 'positive_7day', 'positive_percapita',
         'positiveIncr_percapita', 'deathIncrease', 'death', 'death_percapita',
         'deathIncr_percapita', 'POPESTIMATE2019', 'fips']
AwCleaned = AwA[AwCol]
ATT = pd.merge(AwCleaned, maskData.astype(str), on='fips')
pd.options.display.max_rows = 2800
ATCol = ['state', 'county', 'date', 'Rt_plot', 'positiveIncrease',
         'positive', 'positive_7day', 'positive_percapita',
         'positiveIncr_percapita', 'deathIncrease', 'death', 'death_percapita',
         'deathIncr_percapita', 'POPESTIMATE2019', 'NEVER', 'RARELY',
         'SOMETIMES', 'FREQUENTLY', 'ALWAYS']
ATR = ATT[ATCol]
ATY = ATR[ATR['date'] == '7/18/2020']
ATY
```
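A note on the ubiquitous `.astype(str)` above: `pd.merge` needs the join keys on both sides to have compatible dtypes, and a FIPS code parsed as `int64` from one CSV but as a string from another will either raise or match nothing. A toy illustration (the frames here are made up, not the actual CSVs):

```python
import pandas as pd

census = pd.DataFrame({'fips': [1001, 1003], 'POPESTIMATE2019': [55869, 223234]})
client = pd.DataFrame({'fips': ['1001', '1003'], 'county': ['Autauga', 'Baldwin']})

# Cast the key column to a common dtype before merging, mirroring the
# .astype(str) pattern in the notebook above.
merged = pd.merge(census.astype({'fips': str}), client, on='fips')
print(merged.shape)  # (2, 3): both rows match once the keys share a dtype
```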
# Performance plots for Gaia FGK benchmark stars
## Author(s): Sven Buder (SB, WG4)
### History:
180926 SB Created
200313 SB Switched the analysis to the final DR3 values. Separated the initial FREE runs
```
# Preamble for notebook
# Compatibility with Python 3
from __future__ import (absolute_import, division, print_function)
try:
%matplotlib inline
%config InlineBackend.figure_format='retina'
except:
pass
# Basic packages
import numpy as np
np.seterr(divide='ignore', invalid='ignore')
import os
import sys
import glob
import pickle
import pandas
# Packages to work with FITS and (IDL) SME.out files
import astropy.io.fits as pyfits
import astropy.table as table
from scipy.io.idl import readsav
# Matplotlib and associated packages for plotting
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib.transforms import Bbox,TransformedBbox
from matplotlib.image import BboxImage
from matplotlib.legend_handler import HandlerBase
from matplotlib._png import read_png  # private API; removed in matplotlib >= 3.3
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib.colors import ListedColormap
import matplotlib.colors as colors
params = {
'font.family' : 'sans',
'font.size' : 17,
'axes.labelsize' : 20,
'ytick.labelsize' : 16,
'xtick.labelsize' : 16,
'legend.fontsize' : 20,
'text.usetex' : True,
'text.latex.preamble': [r'\usepackage{upgreek}', r'\usepackage{amsmath}'],
}
plt.rcParams.update(params)
_parula_data = [[0.2081, 0.1663, 0.5292],
[0.2116238095, 0.1897809524, 0.5776761905],
[0.212252381, 0.2137714286, 0.6269714286],
[0.2081, 0.2386, 0.6770857143],
[0.1959047619, 0.2644571429, 0.7279],
[0.1707285714, 0.2919380952, 0.779247619],
[0.1252714286, 0.3242428571, 0.8302714286],
[0.0591333333, 0.3598333333, 0.8683333333],
[0.0116952381, 0.3875095238, 0.8819571429],
[0.0059571429, 0.4086142857, 0.8828428571],
[0.0165142857, 0.4266, 0.8786333333],
[0.032852381, 0.4430428571, 0.8719571429],
[0.0498142857, 0.4585714286, 0.8640571429],
[0.0629333333, 0.4736904762, 0.8554380952],
[0.0722666667, 0.4886666667, 0.8467],
[0.0779428571, 0.5039857143, 0.8383714286],
[0.079347619, 0.5200238095, 0.8311809524],
[0.0749428571, 0.5375428571, 0.8262714286],
[0.0640571429, 0.5569857143, 0.8239571429],
[0.0487714286, 0.5772238095, 0.8228285714],
[0.0343428571, 0.5965809524, 0.819852381],
[0.0265, 0.6137, 0.8135],
[0.0238904762, 0.6286619048, 0.8037619048],
[0.0230904762, 0.6417857143, 0.7912666667],
[0.0227714286, 0.6534857143, 0.7767571429],
[0.0266619048, 0.6641952381, 0.7607190476],
[0.0383714286, 0.6742714286, 0.743552381],
[0.0589714286, 0.6837571429, 0.7253857143],
[0.0843, 0.6928333333, 0.7061666667],
[0.1132952381, 0.7015, 0.6858571429],
[0.1452714286, 0.7097571429, 0.6646285714],
[0.1801333333, 0.7176571429, 0.6424333333],
[0.2178285714, 0.7250428571, 0.6192619048],
[0.2586428571, 0.7317142857, 0.5954285714],
[0.3021714286, 0.7376047619, 0.5711857143],
[0.3481666667, 0.7424333333, 0.5472666667],
[0.3952571429, 0.7459, 0.5244428571],
[0.4420095238, 0.7480809524, 0.5033142857],
[0.4871238095, 0.7490619048, 0.4839761905],
[0.5300285714, 0.7491142857, 0.4661142857],
[0.5708571429, 0.7485190476, 0.4493904762],
[0.609852381, 0.7473142857, 0.4336857143],
[0.6473, 0.7456, 0.4188],
[0.6834190476, 0.7434761905, 0.4044333333],
[0.7184095238, 0.7411333333, 0.3904761905],
[0.7524857143, 0.7384, 0.3768142857],
[0.7858428571, 0.7355666667, 0.3632714286],
[0.8185047619, 0.7327333333, 0.3497904762],
[0.8506571429, 0.7299, 0.3360285714],
[0.8824333333, 0.7274333333, 0.3217],
[0.9139333333, 0.7257857143, 0.3062761905],
[0.9449571429, 0.7261142857, 0.2886428571],
[0.9738952381, 0.7313952381, 0.266647619],
[0.9937714286, 0.7454571429, 0.240347619],
[0.9990428571, 0.7653142857, 0.2164142857],
[0.9955333333, 0.7860571429, 0.196652381],
[0.988, 0.8066, 0.1793666667],
[0.9788571429, 0.8271428571, 0.1633142857],
[0.9697, 0.8481380952, 0.147452381],
[0.9625857143, 0.8705142857, 0.1309],
[0.9588714286, 0.8949, 0.1132428571],
[0.9598238095, 0.9218333333, 0.0948380952],
[0.9661, 0.9514428571, 0.0755333333],
[0.9763, 0.9831, 0.0538]]
parula = ListedColormap(_parula_data, name='parula')
parula_zero = _parula_data[0]
parula_0 = ListedColormap(_parula_data, name='parula_0')
parula_0.set_bad((1,1,1))
parula_r = ListedColormap(_parula_data[::-1], name='parula_r')
willi_blau = [0.0722666667, 0.4886666667, 0.8467]
# Some other useful dictionaries
kwargs_scatter_black = dict(
alpha=0.05,
s = 1,
rasterized = True)
kwargs_scatter = dict(
cmap = parula,
s = 15,
rasterized = True)
kwargs_hist = dict(
c = 'C0',
lw=5
)
tex_dict = dict(
teff =r'$T_\mathrm{eff}$',
logg =r'$\log g$',
fe_h =r'$\mathrm{[Fe/H]}$',
bp_rp =r'$\mathrm{BP} - \mathrm{RP}$',
M_G =r'$\mathrm{M_G}$'
)
# Gaia FGK values from Jofre's 2.1 catalog
gbs = pyfits.getdata('data/GALAH_GBS2.1.fits',1)
gbs_raw = pyfits.getdata('data/GBS2.1.fits',1)
# Use the final GALAH main catalog values rather than the dedicated GBS analysis
#galah = pyfits.getdata('data/GALAH_gbs_lbol.fits',1)
galah = pyfits.getdata('../../../catalogs/GALAH_DR3_main_200604_extended_caution_v2.fits',1)
galah['fe_h_atmo'] -= 0.10
print('NB: We reversed the shift of the atmospheric [Fe/H]')
gbs_galah_match = []
galah_gbs_match = []
for each_sobject_id in range(len(galah['sobject_id'])):
if galah['sobject_id'][each_sobject_id] not in [140709001901194,150204002101256]:
try:
side_a = np.where(
galah['sobject_id'][each_sobject_id] == gbs['sobject_id']
)[0][0]
side_b = each_sobject_id
gbs_galah_match.append(side_a)
galah_gbs_match.append(side_b)
except:
pass
gbs_galah_match = np.array(gbs_galah_match)
galah_gbs_match = np.array(galah_gbs_match)
comparison = dict()
comparison['GBS'] = gbs['StarID1'][gbs_galah_match]
comparison['sobject_id'] = gbs['sobject_id'][gbs_galah_match]
comparison['teff_gbs'] = np.array([float(gbs['Teff'][gbs_galah_match][x]) for x in range(len(gbs['Teff'][gbs_galah_match]))])
comparison['teff_gbs'][comparison['teff_gbs'] < 0] = np.NaN
comparison['logg_gbs'] = gbs['logg'][gbs_galah_match]
comparison['fe_h_gbs'] = gbs['__Fe_H_'][gbs_galah_match]
comparison['e_teff_gbs'] = np.array([float(gbs['e_Teff'][gbs_galah_match][x]) for x in range(len(gbs['e_Teff'][gbs_galah_match]))])
comparison['e_logg_gbs'] = gbs['e_logg'][gbs_galah_match]
comparison['e_fe_h_gbs'] = gbs['e__Fe_H_'][gbs_galah_match]
comparison['teff'] = galah['TEFF'][galah_gbs_match]
comparison['e_teff'] = galah['E_TEFF'][galah_gbs_match]
comparison['logg'] = galah['LOGG'][galah_gbs_match]
comparison['e_logg'] = np.array([0.1 for x in range(len(gbs['E_LOGG'][gbs_galah_match]))])
comparison['m_h'] = galah['fe_h_atmo'][galah_gbs_match]
comparison['e_m_h'] = galah['e_fe_h_atmo'][galah_gbs_match]
#use the final GALAH uncertainty rather than the initial GBS analysis
comparison['fe_h'] = galah['fe_h'][galah_gbs_match]
comparison['e_fe_h'] = galah['e_fe_h'][galah_gbs_match]
#comparison['fe_h'] = galah['A_ABUND'][galah_gbs_match,1] - 7.3830223
#comparison['e_fe_h'] = galah['E_ABUND'][galah_gbs_match,1]
def weighted_avg_and_std(values, weights):
"""
Return the weighted average and standard deviation.
values, weights -- Numpy ndarrays with the same shape.
"""
average = np.average(values, weights=weights)
variance = np.average((values-average)**2, weights=weights) # Fast and numerically precise
return (average, np.sqrt(variance))
lbol_bias = {}
for each_param in ['teff','logg','fe_h']:
good = (
np.isfinite(comparison[each_param]) & np.isfinite(comparison[each_param+'_gbs'])
)
lbol_bias[each_param] = weighted_avg_and_std(
comparison[each_param][good] - comparison[each_param+'_gbs'][good],
1./(comparison['e_'+each_param][good]**2+comparison['e_'+each_param+'_gbs'][good]**2))
good = (
np.isfinite(comparison[each_param]) & np.isfinite(comparison[each_param+'_gbs'])
)
lbol_bias['m_h'] = weighted_avg_and_std(
comparison['m_h'][good] - comparison['fe_h_gbs'][good],
1./(comparison['e_m_h'][good]**2+comparison['e_fe_h_gbs'][good]**2))
lbol_bias
lbol_bias['teff'] = (lbol_bias['teff'][0], 67)
lbol_bias['fe_h'] = (lbol_bias['fe_h'][0], 0.034)
lbol_bias['m_h'] = (lbol_bias['m_h'][0], 0.059)
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (6,7))
s = ax1.errorbar(
np.nan+comparison['fe_h_gbs'],
comparison['m_h'] - comparison['fe_h_gbs'],
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_m_h']**2),
fmt = 'o',
c ='k',
#label=r'$\mathrm{[Fe/H]}^\text{atmo}$'
)
s1 = ax1.errorbar(
comparison['teff_gbs'],
comparison['teff'] - comparison['teff_gbs'],
xerr = comparison['e_teff_gbs'],
yerr = np.sqrt(comparison['e_teff_gbs']**2 + comparison['e_teff']**2),
fmt = 'o',
label=r'$\mathrm{[Fe/H]}$'
)
s2 = ax2.errorbar(
comparison['logg_gbs'],
comparison['logg'] - comparison['logg_gbs'],
xerr = comparison['e_logg_gbs'],
yerr = np.sqrt(comparison['e_logg_gbs']**2 + comparison['e_logg']**2),
fmt = 'o'
)
s3 = ax3.errorbar(
comparison['fe_h_gbs'],
comparison['m_h'] - comparison['fe_h_gbs'],
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_m_h']**2),
fmt = 'o',
c ='k',
label=r'$\mathrm{[Fe/H]}^\text{atmo}$'
)
s3 = ax3.errorbar(
comparison['fe_h_gbs'],
comparison['fe_h'] - comparison['fe_h_gbs'],
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_fe_h']**2),
fmt = 'o',
#label=r'$\mathrm{[Fe/H]}$'
)
ax1.set_ylim(-300,300)
ax2.set_ylim(-0.51,0.51)
ax3.set_ylim(-0.4,0.4)
ax1.axhline(0,lw=0.5,c='k')
ax2.axhline(0,lw=0.5,c='k')
ax3.axhline(0,lw=0.5,c='k')
ax1.set_xlabel('Teff (GBS)')
ax2.set_xlabel('logg (GBS)')
ax3.set_xlabel('[Fe/H] (GBS)')
ax1.set_ylabel(r'$\Delta$Teff')
ax2.set_ylabel(r'$\Delta$logg')
ax3.set_ylabel(r'$\Delta$[Fe/H]')
ax1.text(0.01,0.05,r'Bias = $'+str('%.0f' % lbol_bias['teff'][0])+' \pm '+str('%.0f' % lbol_bias['teff'][1])+'$',transform=ax1.transAxes,color='C0',fontsize=15)
ax2.text(0.01,0.05,r'Bias = $'+str('%.3f' % lbol_bias['logg'][0])+' \pm '+str('%.3f' % lbol_bias['logg'][1])+'$',transform=ax2.transAxes,color='C0',fontsize=15)
ax3.text(0.01,0.05,r'Bias = $'+str('%.3f' % lbol_bias['fe_h'][0])+' \pm '+str('%.3f' % lbol_bias['fe_h'][1])+'$',transform=ax3.transAxes,color='C0',fontsize=15)
ax3.text(0.01,0.2,r'Bias = $'+str('%.3f' % lbol_bias['m_h'][0])+' \pm '+str('%.3f' % lbol_bias['m_h'][1])+'$',transform=ax3.transAxes,color='k',fontsize=15)
ax1.legend(loc='upper right',
#bbox_to_anchor=(0.5, 1.75),
ncol=4,
fontsize=12,fancybox=True,handletextpad=-0.2,columnspacing=0.25)
ax3.legend(loc='upper center',
#bbox_to_anchor=(0.5, 1.75),
ncol=4,
fontsize=12,fancybox=True,handletextpad=-0.2)
ax1.text(-0.1,1.33,r'$\Delta$ = GALAH DR3 - GBS',transform=ax1.transAxes,fontsize=18)
ax1.text(0.75,1.45,r'$\mathrm{[Fe/H]^\text{atmo}}$ traced by',transform=ax1.transAxes,fontsize=18,ha='center')
ax1.text(0.75,1.2,r'H/Sc/Ti/Fe lines',transform=ax1.transAxes,fontsize=18,ha='center')
props = dict(boxstyle='round', facecolor='w', alpha=0.75)
ax1.text(0.02, 0.925, 'a)', transform=ax1.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
ax2.text(0.02, 0.925, 'b)', transform=ax2.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
ax3.text(0.02, 0.925, 'c)', transform=ax3.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
plt.tight_layout()
plt.savefig('figures/gbs_performance_lbol.png',bbox_inches='tight',dpi=300)
plt.savefig('../../../dr3_release_paper/figures/gbs_performance_lbol.png',bbox_inches='tight',dpi=300)
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (10,7.5))
s = ax1.errorbar(
np.nan+comparison['fe_h_gbs'],
(comparison['m_h'] - comparison['fe_h_gbs'])/np.abs(comparison['fe_h_gbs']),
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_m_h']**2)/np.abs(comparison['fe_h_gbs']),
fmt = 'o',
c ='k',
label=r'$\mathrm{[Fe/H]}^\text{atmo}$'
)
s1 = ax1.errorbar(
comparison['teff_gbs'],
(comparison['teff'] - comparison['teff_gbs'])/np.abs(comparison['teff_gbs']),
xerr = comparison['e_teff_gbs'],
yerr = np.sqrt(comparison['e_teff_gbs']**2 + comparison['e_teff']**2)/np.abs(comparison['teff_gbs']),
fmt = 'o',
label=r'$\mathrm{[Fe/H]}$'
)
s2 = ax2.errorbar(
comparison['logg_gbs'],
(comparison['logg'] - comparison['logg_gbs'])/np.abs(comparison['logg_gbs']),
xerr = comparison['e_logg_gbs'],
yerr = np.sqrt(comparison['e_logg_gbs']**2 + comparison['e_logg']**2)/np.abs(comparison['logg_gbs']),
fmt = 'o'
)
s3 = ax3.errorbar(
comparison['fe_h_gbs'],
(comparison['m_h'] - comparison['fe_h_gbs'])/np.abs(comparison['fe_h_gbs']),
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_m_h']**2)/np.abs(comparison['fe_h_gbs']),
fmt = 'o',
c ='k'
)
s3 = ax3.errorbar(
comparison['fe_h_gbs'],
(comparison['fe_h'] - comparison['fe_h_gbs'])/np.abs(comparison['fe_h_gbs']),
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_fe_h']**2)/np.abs(comparison['fe_h_gbs']),
fmt = 'o'
)
ax1.set_ylim(-0.05,0.05)
ax2.set_ylim(-0.50,0.50)
ax3.set_ylim(-0.25,0.25)
ax1.axhline(0,lw=0.5,c='k')
ax2.axhline(0,lw=0.5,c='k')
ax3.axhline(0,lw=0.5,c='k')
ax1.set_xlabel('Teff (GBS)')
ax2.set_xlabel('logg (GBS)')
ax3.set_xlabel('[Fe/H] (GBS)')
ax1.set_ylabel(r'$\Delta$Teff')
ax2.set_ylabel(r'$\Delta$logg')
ax3.set_ylabel(r'$\Delta$[Fe/H]')
ax1.text(0.01,0.05,r'Bias = $'+str('%.0f' % lbol_bias['teff'][0])+' \pm '+str('%.0f' % lbol_bias['teff'][1])+'$',transform=ax1.transAxes,color='C0',fontsize=15)
ax2.text(0.01,0.05,r'Bias = $'+str('%.3f' % lbol_bias['logg'][0])+' \pm '+str('%.3f' % lbol_bias['logg'][1])+'$',transform=ax2.transAxes,color='C0',fontsize=15)
ax3.text(0.01,0.05,r'Bias = $'+str('%.3f' % lbol_bias['fe_h'][0])+' \pm '+str('%.3f' % lbol_bias['fe_h'][1])+'$',transform=ax3.transAxes,color='C0',fontsize=15)
ax3.text(0.01,0.2,r'Bias = $'+str('%.3f' % lbol_bias['m_h'][0])+' \pm '+str('%.3f' % lbol_bias['m_h'][1])+'$',transform=ax3.transAxes,color='k',fontsize=15)
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.75),ncol=4, fancybox=True,handletextpad=-0.2,columnspacing=0.25)
ax1.text(-0.05,1.37,r'$\Delta$ = GALAH DR3 - GBS',transform=ax1.transAxes,fontsize=20)
ax1.text(0.85,1.45,r'$\mathrm{[Fe/H]^\text{atmo}}$ traced by',transform=ax1.transAxes,fontsize=20,ha='center')
ax1.text(0.85,1.2,r'H/Sc/Ti/Fe lines',transform=ax1.transAxes,fontsize=20,ha='center')
plt.tight_layout()
plt.savefig('figures/gbs_performance_lbol_percent.png',bbox_inches='tight',dpi=300)
# Outlier in logg: Bad wavelength solution
selected = ((comparison['logg'] - comparison['logg_gbs']) < -0.2)
print(comparison['sobject_id'][selected], comparison['GBS'][selected])
print(comparison['teff'][selected], comparison['logg'][selected], comparison['fe_h'][selected])
print(comparison['teff_gbs'][selected], comparison['logg_gbs'][selected], comparison['fe_h_gbs'][selected])
```
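The inverse-variance weighting used for the bias estimates above can be sanity-checked in isolation. Below is a minimal sketch with made-up residuals and uncertainties (the numbers are illustrative only, not from the catalogs):

```python
import numpy as np

def weighted_avg_and_std(values, weights):
    """Weighted average and standard deviation (same helper as above)."""
    average = np.average(values, weights=weights)
    variance = np.average((values - average) ** 2, weights=weights)
    return average, np.sqrt(variance)

# Made-up residuals (pipeline - reference) and their combined 1-sigma uncertainties
resid = np.array([0.10, -0.05, 0.02, 0.07])
err = np.array([0.05, 0.05, 0.10, 0.05])

# Inverse-variance weights, as in the lbol_bias computation above
bias, scatter = weighted_avg_and_std(resid, 1.0 / err**2)
```

The third point, with twice the uncertainty, carries only a quarter of the weight of the others, so it barely moves the bias estimate.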
# FREE SETUP
```
# Read in dedicated FREE setup runs
galah_free = pyfits.getdata('data/GALAH_gbs.fits',1)
gbs_galah_match_free = []
galah_gbs_match_free = []
for each_sobject_id in range(len(galah_free['sobject_id'])):
if galah_free['sobject_id'][each_sobject_id] not in [140709001901194,150204002101256]:
try:
side_a = np.where(
galah_free['sobject_id'][each_sobject_id] == gbs['sobject_id']
)[0][0]
side_b = each_sobject_id
gbs_galah_match_free.append(side_a)
galah_gbs_match_free.append(side_b)
except:
pass
gbs_galah_match_free = np.array(gbs_galah_match_free)
galah_gbs_match_free = np.array(galah_gbs_match_free)
#leave out FREE run
comparison_free = dict()
comparison_free['GBS'] = gbs['StarID1'][gbs_galah_match_free]
comparison_free['sobject_id'] = gbs['sobject_id'][gbs_galah_match_free]
comparison_free['teff_gbs'] = np.array([float(gbs['Teff'][gbs_galah_match_free][x]) for x in range(len(gbs['Teff'][gbs_galah_match_free]))])
comparison_free['teff_gbs'][comparison_free['teff_gbs'] < 0] = np.NaN
comparison_free['logg_gbs'] = gbs['logg'][gbs_galah_match_free]
comparison_free['fe_h_gbs'] = gbs['__Fe_H_'][gbs_galah_match_free]
comparison_free['e_teff_gbs'] = np.array([float(gbs['e_Teff'][gbs_galah_match_free][x]) for x in range(len(gbs['e_Teff'][gbs_galah_match_free]))])
comparison_free['e_logg_gbs'] = gbs['e_logg'][gbs_galah_match_free]
comparison_free['e_fe_h_gbs'] = gbs['e__Fe_H_'][gbs_galah_match_free]
comparison_free['teff_free'] = galah_free['TEFF'][galah_gbs_match_free]
comparison_free['e_teff_free'] = galah_free['E_TEFF'][galah_gbs_match_free]
comparison_free['logg_free'] = galah_free['LOGG'][galah_gbs_match_free]
comparison_free['e_logg_free'] = np.array([0.1 for x in range(len(galah_gbs_match_free))])
comparison_free['fe_h_free'] = galah_free['FEH'][galah_gbs_match_free]
comparison_free['e_fe_h_free'] = galah_free['E_FEH'][galah_gbs_match_free]
teff_bias = np.nanmean(comparison_free['teff_free'] - comparison_free['teff_gbs'])
teff_std = np.nanstd(comparison_free['teff_free'] - comparison_free['teff_gbs'])
logg_bias = np.nanmean(comparison_free['logg_free'] - comparison_free['logg_gbs'])
logg_std = np.nanstd(comparison_free['logg_free'] - comparison_free['logg_gbs'])
feh_bias = np.nanmean(comparison_free['fe_h_free'] - comparison_free['fe_h_gbs'])
feh_std = np.nanstd(comparison_free['fe_h_free'] - comparison_free['fe_h_gbs'])
free_bias = {}
for each_param in ['teff','logg','fe_h']:
good = (
np.isfinite(comparison_free[each_param+'_free']) & np.isfinite(comparison_free[each_param+'_gbs'])
)
free_bias[each_param] = weighted_avg_and_std(
comparison_free[each_param+'_free'][good] - comparison_free[each_param+'_gbs'][good],
1./(comparison_free['e_'+each_param+'_free'][good]**2+comparison_free['e_'+each_param+'_gbs'][good]**2))
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (10,7.5))
s1 = ax1.errorbar(
comparison_free['teff_gbs'],
comparison_free['teff_free'] - comparison_free['teff_gbs'],
xerr = comparison_free['e_teff_gbs'],
yerr = np.sqrt(comparison_free['e_teff_gbs']**2 + comparison_free['e_teff_free']**2),
fmt = 'o',c='k',
label='free'
)
s2 = ax2.errorbar(
comparison_free['logg_gbs'],
comparison_free['logg_free'] - comparison_free['logg_gbs'],
xerr = comparison_free['e_logg_gbs'],
yerr = np.sqrt(comparison_free['e_logg_gbs']**2 + comparison_free['e_logg_free']**2),
fmt = 'o',c='k'
)
s3 = ax3.errorbar(
comparison_free['fe_h_gbs'],
comparison_free['fe_h_free'] - comparison_free['fe_h_gbs'],
xerr = comparison_free['e_fe_h_gbs'],
yerr = np.sqrt(comparison_free['e_fe_h_gbs']**2 + comparison_free['e_fe_h_free']**2),
fmt = 'o',c='k'
)
ax1.text(0.01,0.225,r'Bias = $'+str('%.0f' % free_bias['teff'][0])+' \pm '+str('%.0f' % free_bias['teff'][1])+'$',transform=ax1.transAxes,color='k',fontsize=15)
ax2.text(0.01,0.225,r'Bias = $'+str('%.2f' % free_bias['logg'][0])+' \pm '+str('%.2f' % free_bias['logg'][1])+'$',transform=ax2.transAxes,color='k',fontsize=15)
ax3.text(0.01,0.225,r'Bias = $'+str('%.2f' % free_bias['fe_h'][0])+' \pm '+str('%.2f' % free_bias['fe_h'][1])+'$',transform=ax3.transAxes,color='k',fontsize=15)
ax1.set_ylim(-300,300)
ax2.set_ylim(-1.1,1.1)
ax3.set_ylim(-0.4,0.4)
ax1.axhline(0,lw=0.5,c='k')
ax2.axhline(0,lw=0.5,c='k')
ax3.axhline(0,lw=0.5,c='k')
ax1.set_xlabel('Teff (GBS)')
ax2.set_xlabel('logg (GBS)')
ax3.set_xlabel('[Fe/H] (GBS)')
ax1.set_ylabel(r'$\Delta$Teff')
ax2.set_ylabel(r'$\Delta$logg')
ax3.set_ylabel(r'$\Delta$[Fe/H]')
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.75),ncol=4, fancybox=True,handletextpad=-0.2,columnspacing=0.25)
plt.tight_layout()
plt.savefig('figures/gbs_performance_free.png',bbox_inches='tight',dpi=300)
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (10,7.5))
s1 = ax1.errorbar(
comparison_free['teff_gbs'],
comparison_free['teff_free'] - comparison_free['teff_gbs'],
xerr = comparison_free['e_teff_gbs'],
yerr = np.sqrt(comparison_free['e_teff_gbs']**2 + comparison_free['e_teff_free']**2),
fmt = 'o',c='k',
label='free'
)
s1 = ax1.errorbar(
comparison['teff_gbs'],
comparison['teff'] - comparison['teff_gbs'],
xerr = comparison['e_teff_gbs'],
yerr = np.sqrt(comparison['e_teff_gbs']**2 + comparison['e_teff']**2),
fmt = 'o',
label=r'lbol'
)
s2 = ax2.errorbar(
comparison_free['logg_gbs'],
comparison_free['logg_free'] - comparison_free['logg_gbs'],
xerr = comparison_free['e_logg_gbs'],
yerr = np.sqrt(comparison_free['e_logg_gbs']**2 + comparison_free['e_logg_free']**2),
fmt = 'o',c='k'
)
s2 = ax2.errorbar(
comparison['logg_gbs'],
comparison['logg'] - comparison['logg_gbs'],
xerr = comparison['e_logg_gbs'],
yerr = np.sqrt(comparison['e_logg_gbs']**2 + comparison['e_logg']**2),
fmt = 'o'
)
s3 = ax3.errorbar(
comparison_free['fe_h_gbs'],
comparison_free['fe_h_free'] - comparison_free['fe_h_gbs'],
xerr = comparison_free['e_fe_h_gbs'],
yerr = np.sqrt(comparison_free['e_fe_h_gbs']**2 + comparison_free['e_fe_h_free']**2),
fmt = 'o',c='k'
)
s3 = ax3.errorbar(
comparison['fe_h_gbs'],
comparison['fe_h'] - comparison['fe_h_gbs'],
xerr = comparison['e_fe_h_gbs'],
yerr = np.sqrt(comparison['e_fe_h_gbs']**2 + comparison['e_fe_h']**2),
fmt = 'o'
)
ax1.text(0.01,0.05,r'Bias = $'+str('%.0f' % lbol_bias['teff'][0])+' \pm '+str('%.0f' % lbol_bias['teff'][1])+'$',transform=ax1.transAxes,color='C0',fontsize=15)
ax2.text(0.01,0.05,r'Bias = $'+str('%.2f' % lbol_bias['logg'][0])+' \pm '+str('%.2f' % lbol_bias['logg'][1])+'$',transform=ax2.transAxes,color='C0',fontsize=15)
ax3.text(0.01,0.05,r'Bias = $'+str('%.2f' % lbol_bias['fe_h'][0])+' \pm '+str('%.2f' % lbol_bias['fe_h'][1])+'$',transform=ax3.transAxes,color='C0',fontsize=15)
ax1.text(0.01,0.225,r'Bias = $'+str('%.0f' % free_bias['teff'][0])+' \pm '+str('%.0f' % free_bias['teff'][1])+'$',transform=ax1.transAxes,color='k',fontsize=15)
ax2.text(0.01,0.225,r'Bias = $'+str('%.2f' % free_bias['logg'][0])+' \pm '+str('%.2f' % free_bias['logg'][1])+'$',transform=ax2.transAxes,color='k',fontsize=15)
ax3.text(0.01,0.225,r'Bias = $'+str('%.2f' % free_bias['fe_h'][0])+' \pm '+str('%.2f' % free_bias['fe_h'][1])+'$',transform=ax3.transAxes,color='k',fontsize=15)
ax1.set_ylim(-300,300)
ax2.set_ylim(-1.1,1.1)
ax3.set_ylim(-0.4,0.4)
ax1.axhline(0,lw=0.5,c='k')
ax2.axhline(0,lw=0.5,c='k')
ax3.axhline(0,lw=0.5,c='k')
ax1.set_xlabel('Teff (GBS)')
ax2.set_xlabel('logg (GBS)')
ax3.set_xlabel('[Fe/H] (GBS)')
ax1.set_ylabel(r'$\Delta$Teff')
ax2.set_ylabel(r'$\Delta$logg')
ax3.set_ylabel(r'$\Delta$[Fe/H]')
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.75),ncol=4, fancybox=True,handletextpad=-0.2,columnspacing=0.25)
plt.tight_layout()
plt.savefig('figures/gbs_performance_free_lbol.png',bbox_inches='tight',dpi=300)
```
| github_jupyter |
## Basic Relative Permeability Example in 2D
This example is about finding the relative permeability of two phases in a medium. We use invasion percolation to invade air (non-wetting) into a water-filled (wetting) network. Here we use a 2D network so that we can visualize the results easily.
```
import warnings
import pandas as pd
import scipy as sp
import numpy as np
import openpnm as op
from openpnm import models
import matplotlib.pyplot as plt
np.set_printoptions(precision=4)
np.random.seed(10)
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
```
### Initialize Required Objects
We'll use several pre-defined classes here to simplify the example, allowing us to focus on the actual problem of computing relative permeability. Note that each phase is assigned to a separate physics.
```
shape=[100, 100, 1]
pn = op.network.Cubic(shape)
geo = op.geometry.StickAndBall(network=pn, pores=pn.Ps, throats=pn.Ts)
air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
phys_air = op.physics.Standard(network=pn, phase=air, geometry=geo)
phys_water = op.physics.Standard(network=pn, phase=water, geometry=geo)
```
### Using InvasionPercolation to Simulate Air Invasion
The InvasionPercolation (IP) algorithm will be used to simulate the air invasion. We'll inject only from one face (the pores on the 'left' side). All IP settings are left at their defaults; for instance, we don't use the trapping option, which is an extra step in IP.
```
ip = op.algorithms.InvasionPercolation(network=pn)
ip.setup(phase=air)
in_pores=pn.pores("left")
ip.set_inlets(pores=in_pores)
ip.run()
```
After running IP we would usually plot intrusion curves (Pc-S curves), but that is outside the scope of this example. Instead, we want to visualize the distribution of phases (the occupancy of pores and throats) at a specified saturation point. Pore and throat occupancies are defined in IP as boolean arrays with the same lengths as the numbers of pores and throats in the network. For each saturation point (in the range [0, 1]) during the invasion, the pore occupancy indicates whether each pore is occupied by air or water, by assigning a boolean value to the corresponding element of the occupancy array. Note that at the beginning (Snwp=0) all pores are occupied by water, so all elements of the air occupancy are False.
For a desired saturation point (Snwp=0.25 below) the pore and throat occupancies can be accessed with the ``results`` method of IP.
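The mapping from an invasion sequence to a boolean occupancy array at a target saturation can be sketched in plain NumPy (a hypothetical 6-pore network with equal pore volumes; this is an illustration, not OpenPNM's actual internals):

```python
import numpy as np

inv_seq = np.array([3, 0, 2, 5, 1, 4])   # order in which each pore was invaded
volume = np.ones(6)                      # assume equal pore volumes
Snwp = 0.5                               # target non-wetting-phase saturation

order = np.argsort(inv_seq)              # pores sorted by invasion time
cum_sat = np.cumsum(volume[order]) / volume.sum()
n_filled = np.searchsorted(cum_sat, Snwp, side='right')

occupancy = np.zeros(6, dtype=bool)      # Snwp=0: everything water-filled (False)
occupancy[order[:n_filled]] = True       # first-invaded pores are air-filled
```

At Snwp=0 the array stays all False, exactly as described above.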
```
occupancies=ip.results(Snwp=0.25)
print(occupancies)
```
We can then assign the pore and throat occupancies to the invading phase. Note that in this context occupancy and saturation are two different things: the occupancy is the spatial distribution of fluid at the specific saturation point (Snwp=0.25 for now). It means that 0.25 of the void volume is filled with air, and the details of which pores and throats are filled are given by the occupancy arrays.
```
air.update(ip.results(Snwp=0.25))
```
Let's plot the occupancies to visualize it.
```
fig = plt.figure(figsize=(6, 6))
fig = op.topotools.plot_coordinates(network=pn, fig=fig)
fig = op.topotools.plot_coordinates(network=pn, fig=fig,
pores=air['pore.occupancy'],
color='yellow')
```
### StokesFlow Algorithm for effective permeabilities
Now that the invasion pattern for this domain has been established using IP, we can calculate the effective and relative permeability of each phase at each saturation point using Stokes flow. Note that the relative permeability of air at a specific saturation is the ratio of the effective permeability of air (while water is also present and flowing in the medium) to the absolute permeability of the medium.
The absolute permeability of a medium with cross-sectional area $A$ and length $L$ can be calculated as follows:
$Q_{S=1}=\frac{K_{abs}A}{\mu L}\Delta P$, where $Q$ is the flow rate and $\Delta P$ the pressure difference. $K_{abs}$ is the absolute permeability of the medium, assuming it is fully saturated by a single phase with viscosity $\mu$ (here water). To find the relative permeability of water, for example, we need its effective permeability, which is found in a similar way using Stokes flow; the only difference is that the medium is saturated with more than one phase (here both water and air). The equation for water flow once air has invaded is $Q_{S}=\frac{K_{eff} A}{\mu L}\Delta P$. The relative permeability of water is then $K_{rel}=\frac{K_{eff}}{K_{abs}}$, and combining the previous equations gives $K_{rel}=\frac{Q_{S}}{Q_{S=1}}$. Therefore, in this problem we don't need to find the $K$ values directly; we just need to calculate the rates. (Note: since $\mu_{air} \ne \mu_{water}$ we cannot compare rates across phases directly, but here we calculate the absolute rate for each phase separately, so the viscosity ratio is not required.)
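As a quick check on the algebra, Darcy's law and the rate ratio can be written out directly (all numbers below are made up purely for illustration):

```python
def darcy_rate(K, A, mu, L, dP):
    # Darcy's law: Q = K * A / (mu * L) * dP
    return K * A / (mu * L) * dP

A, mu, L, dP = 1e-6, 1e-3, 1e-3, 100.0   # made-up geometry and conditions
K_abs, K_eff = 1e-12, 2.5e-13            # made-up permeabilities

Q_abs = darcy_rate(K_abs, A, mu, L, dP)
Q_eff = darcy_rate(K_eff, A, mu, L, dP)

# Geometry, viscosity and pressure drop cancel in the ratio,
# so K_rel = Q_eff / Q_abs and K itself never needs to be computed.
K_rel = Q_eff / Q_abs
```

This is exactly why the notebook only ever computes flow rates.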
Let's first find the rate of water flow for when the medium is fully saturated by water.
```
st = op.algorithms.StokesFlow(network=pn)
st.setup(phase=water)
st.set_value_BC(pores=pn.pores('back'), values=1)
st.set_value_BC(pores=pn.pores('front'), values=0)
```
We can solve the flow problem on the network without altering the throat conductances, which gives the maximum flow through the domain for single-phase flow. This value is also used to find the absolute permeability.
```
st.run()
Q_abs_water = st.rate(pores=pn.pores('back'))
print(Q_abs_water)
```
Next we will illustrate how to alter the hydraulic conductances of the water phase to account for the presence of air-filled pores and throats. Start by passing 'pore.occupancy' and 'throat.occupancy' to the air object at a specified saturation (0.2 in this case), then reach into the ``phys_water`` object and set the conductance of the air-filled throats to 1000x lower than the least conductive water-filled throat. A similar procedure is provided by the ``conduit_conductance`` pore-scale model in ``Physics.models.multiphase``, which we could use instead; in that case we would call the ``regenerate_models`` method instead of changing the hydraulic conductance manually. This is important because the pore/throat occupancies, and hence the conductances, are different at each saturation point.
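The conductance clamp itself is a one-line NumPy operation; here is a standalone sketch with made-up conductances and occupancies:

```python
import numpy as np

g_water = np.array([1.0e-13, 4.0e-13, 2.5e-13, 8.0e-13])  # throat conductances
air_filled = np.array([False, True, False, True])          # from 'throat.occupancy'

val = g_water.min() / 1000          # 1000x below the least conductive throat
g_water[air_filled] = val           # water barely conducts through air-filled throats
```

Water-filled throats keep their original conductance; only the invaded ones are suppressed.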
```
air.update(ip.results(Snwp=0.20))
val = np.amin(phys_water['throat.hydraulic_conductance'])/1000
phys_water['throat.hydraulic_conductance'][air['throat.occupancy']] = val
```
We then re-run the flow problem, which will now utilize the altered hydraulic conductance values.
```
st.run()
Q_eff = st.rate(pores=pn.pores('back'))
print(Q_eff)
```
Relative permeability of water at saturation point Snwp=0.2 (or Swp=0.8) is:
```
K_rel_w=Q_eff/Q_abs_water
print(K_rel_w)
```
### Calculate Relative Permeability Curve
The above illustration showed how to get the effective permeability at one saturation. We now put this logic into a for loop to obtain water flow rates through the partially air-invaded network at a variety of saturations.
Calculations for the water phase: note that we first regenerate the models for phys_water to make sure the conductance values are reset to the fully water-saturated state. We then overwrite those values for the throats occupied by air at each saturation point.
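Stripped of the OpenPNM objects, the loop's logic can be sketched with a toy parallel-throat network (hypothetical conductances; the rate is taken as the sum of conductances, which holds for identical parallel conduits under a unit pressure drop):

```python
import numpy as np

g0 = np.array([1.0, 2.0, 3.0, 4.0])   # fully water-filled throat conductances
clamp = g0.min() / 1000               # value assigned to invaded throats
Q_abs = g0.sum()                      # single-phase (fully wet) rate

kr_water = []
for n_invaded in range(len(g0) + 1):  # invade throats one by one, in index order
    g = g0.copy()                     # "regenerate_models": reset to the wet state
    g[:n_invaded] = clamp             # overwrite air-filled throat conductances
    kr_water.append(g.sum() / Q_abs)  # relative permeability = rate ratio
```

The curve starts at 1 (fully wet) and decreases monotonically as more throats are invaded, mirroring what the OpenPNM loop produces.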
```
#NBVAL_IGNORE_OUTPUT
phys_water.regenerate_models() # Regenerate phys_water to reset any changes made above
Q_water = [] # Initialize a list to hold data
#stokes flow for water
for s in np.arange(0, 1, 0.1): # Loop through saturations
# 1: Update air object with occupancy at given saturation
air.update(ip.results(Snwp=s))
# 2: Overwrite water's hydraulic conductance in air-filled locations
phys_water['throat.hydraulic_conductance'][air['throat.occupancy']] = val
# 3: Re-run flow problem
st.run()
# 4: Compute flow through inlet phase and append to data
Q_water.append(st.rate(pores=pn.pores('back')))
phys_water.regenerate_models()
```
Calculations for the air phase:
```
phys_water.regenerate_models()
phys_air.regenerate_models()
Q_air=[]
#stokes flow for air
st_a = op.algorithms.StokesFlow(network=pn)
st_a.setup(phase=air)
st_a.set_value_BC(pores=pn.pores('back'), values=1)
st_a.set_value_BC(pores=pn.pores('front'), values=0)
st_a.run()
Q_abs_air=st_a.rate(pores=pn.pores('back'))
for s in np.arange(0, 1, 0.1): # Loop through saturations
# 1: Update air object with occupancy at given saturation
air.update(ip.results(Snwp=s))
# 2: Overwrite air's hydraulic conductance in water-filled locations
phys_air['throat.hydraulic_conductance'][~air['throat.occupancy']] = val
# 3: Re-run flow problem
st_a.run()
# 4: Compute flow through inlet phase and append to data
Q_air.append(st_a.rate(pores=pn.pores('back')))
phys_air.regenerate_models()
data = {'Snwp':np.arange(0, 1, 0.1),'Kr_water':np.hstack(Q_water/Q_abs_water),'Kr_air':np.hstack((Q_air/Q_abs_air))}
DF=pd.DataFrame(data)
DF
```
Let's plot the curves. Note that we used relatively few saturation points; to get a smoother plot with more data points, decrease the saturation increment (e.g. np.arange(0, 1, 0.01) in the loop).
```
# NBVAL_IGNORE_OUTPUT
f = plt.figure()
ax = f.add_subplot(111)
ax.plot(DF['Snwp'],DF['Kr_water'],label='Kr_water')
ax.plot(DF['Snwp'],DF['Kr_air'],label='Kr_air')
ax.set_xlabel('Snw')
ax.set_ylabel('Kr')
ax.set_title('Relative Permeability Curves')
ax.legend()
```
| github_jupyter |
```
# creating R environment in Google Colab
%load_ext rpy2.ipython
%%R
# installing necessary libraries
install.packages('tidyverse')
library(tidyverse)
install.packages('caret')
library(caret)
install.packages('corrplot')
library(corrplot)
install.packages('xgboost')
library(xgboost)
%%R
# reading the file
data = read.csv('wine.csv')
```
### Basic Data Summary
```
%%R
# head of dataset
head(data)
%%R
# structure of dataset
str(data)
%%R
# converting alcohol column to double
data = transform(data, alcohol = as.numeric(alcohol))
%%R
# summary of data
summary(data)
%%R
# filling missing values in column alcohol with mean
data$alcohol[is.na(data$alcohol)] = mean(data$alcohol, na.rm = TRUE)
```
### Data Visualization
#### Boxplots for different feature variables across different categories of Quality
```
%%R
# boxplot fixed_acidity vs quality
boxplot(fixed_acidity~quality, data=data)
%%R
# boxplot volatile_acidity vs quality
boxplot(volatile_acidity~quality, data=data)
%%R
# boxplot citric_acid vs quality
boxplot(citric_acid~quality, data=data)
%%R
# boxplot residual_sugar vs quality
boxplot(residual_sugar~quality, data=data)
%%R
# boxplot chlorides vs quality
boxplot(chlorides~quality, data=data)
%%R
# boxplot free_sulfur_dioxide vs quality
boxplot(free_sulfur_dioxide~quality, data=data)
%%R
# boxplot total_sulfur_dioxide vs quality
boxplot(total_sulfur_dioxide~quality, data=data)
%%R
# boxplot pH vs quality
boxplot(pH~quality, data=data)
%%R
# boxplot sulphates vs quality
boxplot(sulphates~quality, data=data)
%%R
# boxplot alcohol vs quality
boxplot(alcohol~quality, data=data)
```
#### Scatterplots for different feature variables.
```
%%R
# scatterplot volatile_acidity vs fixed_acidity
plot(x=data$volatile_acidity, y=data$fixed_acidity, xlab='Volatile Acidity', ylab='Fixed Acidity')
%%R
# scatterplot citric_acid vs residual_sugar
plot(x=data$citric_acid, y=data$residual_sugar, xlab='Citric Acid', ylab='Residual Sugar')
%%R
# scatterplot free_sulfur_dioxide vs fixed_total_sulfur_dioxide
plot(x=data$free_sulfur_dioxide, y=data$total_sulfur_dioxide, xlab='Free Sulfur Dioxide', ylab='Total Sulfur Dioxide')
%%R
# scatterplot chlorides vs sulphates
plot(x=data$chlorides, y=data$sulphates, xlab='Chlorides', ylab='Sulphates')
%%R
# scatterplot alcohol vs fixed pH
plot(x=data$alcohol, y=data$pH, xlab='Alcohol', ylab='pH')
```
#### Correlation across different features
```
%%R
# correlation heatmap
corrplot(cor(data))
```
### Data Cleaning and Processing
#### Mapping values of quality
```
%%R
# turning categorical into numerical column
quality_map_function = function(q){
  if ((q == 3) || (q == 4)){
    return (0)
  } else if ((q == 5) || (q == 6) || (q == 7)){
    return (1)
  } else if ((q == 8) || (q == 9)){
    return (2)
  }
}
quality_map = unlist(lapply(data$quality, quality_map_function))
data = cbind(data, quality_map)
data = data[,-12]
```
### Data Split and Scale
```
%%R
# splitting the dataset
train_index = sample(1:nrow(data), nrow(data)*0.75)
%%R
# splitting the dataset into dependent and independent sets
data_variables = as.matrix(data[,-12])
data_label = data[,12]
# note: features only -- the label column must not be included in the feature matrix
data_matrix = xgb.DMatrix(data = data_variables, label = data_label)
%%R
# splitting dataset into training and testing sets
train_data = data_variables[train_index,]
train_data = scale(train_data)
train_label = data_label[train_index]
train_matrix = xgb.DMatrix(data = train_data, label = train_label)
test_data = data_variables[-train_index,]
test_data = scale(test_data)
test_label = data_label[-train_index]
test_matrix = xgb.DMatrix(data = test_data, label = test_label)
```
### Model Building
```
%%R
# building model with 5-fold cross validation
xgb_params = list("objective" = "multi:softprob",
"eval_metric" = "mlogloss",
"num_class" = 3)
cv_model = xgb.cv(params = xgb_params,
data = train_matrix,
verbose = FALSE,
nfold = 5,
nrounds = 50,
prediction = TRUE)
%%R
# predictions from model on training set
prediction <- data.frame(cv_model$pred) %>%
mutate(max_prob = max.col(., ties.method = "last"),
label = train_label + 1)
head(prediction)
%%R
# confusion matrix of train set
confusionMatrix(factor(prediction$max_prob),
factor(prediction$label),
mode = "everything")
```
### Model Testing
```
%%R
# training the model (reusing the cross-validation settings)
nround = 50          # number of boosting rounds, as in xgb.cv above
numberOfClasses = 3  # matches "num_class" in xgb_params
bst_model = xgb.train(params = xgb_params,
                      data = train_matrix,
                      nrounds = nround)
# Predict hold-out test set
test_pred = predict(bst_model, newdata = test_matrix)
test_prediction <- matrix(test_pred, nrow = numberOfClasses,
ncol=length(test_pred)/numberOfClasses) %>%
t() %>%
data.frame() %>%
mutate(label = test_label + 1,
max_prob = max.col(., "last"))
# confusion matrix of test set
confusionMatrix(factor(test_prediction$max_prob),
factor(test_prediction$label),
mode = "everything")
```
## Final Accuracy: 93.73%
# Project 2: Breakout Strategy
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.
The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` modules contain utility functions and graph functions. The `project_tests` module contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
```
## Market Data
The data source we use for most of the projects is the [Wiki End of Day data](https://www.quandl.com/databases/WIKIP) hosted at [Quandl](https://www.quandl.com). This contains data for many stocks, but we'll just be looking at stocks in the S&P 500. We also made things a little easier to run by narrowing our range of time to limit the size of the data.
### Set API Key
Set the `quandl_api_key` variable to your Quandl api key. You can find your Quandl api key [here](https://www.quandl.com/account/api).
```
# TODO: Add your Quandl API Key
quandl_api_key = ''
```
### Download Data
```
import os
snp500_file_path = 'data/tickers_SnP500.txt'
wiki_file_path = 'data/WIKI_PRICES.csv'
start_date, end_date = '2013-07-01', '2017-06-30'
use_columns = ['date', 'ticker', 'adj_close', 'adj_high', 'adj_low']
if not os.path.exists(wiki_file_path):
with open(snp500_file_path) as f:
tickers = f.read().split()
helper.download_quandl_dataset(quandl_api_key, 'WIKI', 'PRICES', wiki_file_path, use_columns, tickers, start_date, end_date)
else:
print('Data already downloaded')
```
### Load Data
While using real data will give you hands-on experience, it doesn't cover all the topics we try to condense into one project. We'll solve this by creating new stocks. We've created a scenario where companies mining [Terbium](https://en.wikipedia.org/wiki/Terbium) are making huge profits. All the companies in this sector of the market are made up. They represent a sector with large growth that will be used for demonstration later in this project.
```
df_original = pd.read_csv(wiki_file_path, parse_dates=['date'], index_col=False)
# Add TB sector to the market
df = df_original
df = pd.concat([df] + project_helper.generate_tb_sector(df[df['ticker'] == 'AAPL']['date']), ignore_index=True)
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
high = df.reset_index().pivot(index='date', columns='ticker', values='adj_high')
low = df.reset_index().pivot(index='date', columns='ticker', values='adj_low')
print('Loaded Data')
```
### View Data
To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
```
close
```
### Stock Example
Let's see what a single stock looks like from the closing prices. For this example and future display examples in this project, we'll use Apple's stock (AAPL). If we tried to graph all the stocks, it would be too much information.
```
apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], '{} Stock'.format(apple_ticker))
```
## The Alpha Research Process
In this project you will code and evaluate a "breakout" signal. It is important to understand where these steps fit in the alpha research workflow. The signal-to-noise ratio in trading signals is very low and, as such, it is very easy to fall into the trap of _overfitting_ to noise. It is therefore inadvisable to jump right into signal coding. To help mitigate overfitting, it is best to start with a general observation and hypothesis; i.e., you should be able to answer the following question _before_ you touch any data:
> What feature of markets or investor behaviour would lead to a persistent anomaly that my signal will try to use?
Ideally the assumptions behind the hypothesis will be testable _before_ you actually code and evaluate the signal itself. The workflow therefore is as follows:

In this project, we assume that the first three steps are done ("observe & research", "form hypothesis", "validate hypothesis"). The hypothesis you'll be using for this project is the following:
- In the absence of news or significant investor trading interest, stocks oscillate in a range.
- Traders seek to capitalize on this range-bound behaviour periodically by selling/shorting at the top of the range and buying/covering at the bottom of the range. This behaviour reinforces the existence of the range.
- When stocks break out of the range, due to, e.g., a significant news release or from market pressure from a large investor:
- the liquidity traders who have been providing liquidity at the bounds of the range seek to cover their positions to mitigate losses, thus magnifying the move out of the range, _and_
- the move out of the range attracts other investor interest; these investors, due to the behavioural bias of _herding_ (e.g., [Herd Behavior](https://www.investopedia.com/university/behavioral_finance/behavioral8.asp)) build positions which favor continuation of the trend.
Using this hypothesis, let's start coding.
## Compute the Highs and Lows in a Window
You'll use the price highs and lows as an indicator for the breakout strategy. In this section, implement `get_high_lows_lookback` to get the maximum high price and minimum low price over a window of days. The variable `lookback_days` contains the number of days to look in the past. Make sure this doesn't include the current day.
```
def get_high_lows_lookback(high, low, lookback_days):
"""
Get the highs and lows in a lookback window.
Parameters
----------
high : DataFrame
High price for each ticker and date
low : DataFrame
Low price for each ticker and date
lookback_days : int
The number of days to look back
Returns
-------
lookback_high : DataFrame
Lookback high price for each ticker and date
lookback_low : DataFrame
Lookback low price for each ticker and date
"""
#TODO: Implement function
return None, None
project_tests.test_get_high_lows_lookback(get_high_lows_lookback)
```
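Before moving on, it may help to see what a working version could look like. The sketch below is one possible (ungraded) approach, not the official solution: it shifts the prices by one day so the current day is excluded, then takes a rolling max/min over the lookback window.

```python
import pandas as pd

def get_high_lows_lookback_sketch(high, low, lookback_days):
    # Shift by one day so the current day is excluded from its own window,
    # then take the rolling max/min over the lookback window
    lookback_high = high.shift(1).rolling(window=lookback_days).max()
    lookback_low = low.shift(1).rolling(window=lookback_days).min()
    return lookback_high, lookback_low
```

The first `lookback_days` rows are NaN by construction, since there isn't enough history yet.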
### View Data
Let's use your implementation of `get_high_lows_lookback` to get the highs and lows for the past 50 days and compare them to their respective stocks. Just like last time, we'll use Apple's stock as the example to look at.
```
lookback_days = 50
lookback_high, lookback_low = get_high_lows_lookback(high, low, lookback_days)
project_helper.plot_high_low(
close[apple_ticker],
lookback_high[apple_ticker],
lookback_low[apple_ticker],
'High and Low of {} Stock'.format(apple_ticker))
```
## Compute Long and Short Signals
Using the generated indicator of highs and lows, create long and short signals using a breakout strategy. Implement `get_long_short` to generate the following signals:
| Signal | Condition |
|----|------|
| -1 | Low > Close Price |
| 1 | High < Close Price |
| 0 | Otherwise |
In this chart, **Close Price** is the `close` parameter. **Low** and **High** are the values generated from `get_high_lows_lookback`, the `lookback_high` and `lookback_low` parameters.
```
def get_long_short(close, lookback_high, lookback_low):
"""
Generate the signals long, short, and do nothing.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookback_high : DataFrame
Lookback high price for each ticker and date
lookback_low : DataFrame
Lookback low price for each ticker and date
Returns
-------
long_short : DataFrame
The long, short, and do nothing signals for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_get_long_short(get_long_short)
```
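One possible (ungraded) sketch of this function follows directly from the signal table: comparing the close against the lookback bounds gives two boolean frames, and their difference yields the combined {-1, 0, 1} signal.

```python
import pandas as pd

def get_long_short_sketch(close, lookback_high, lookback_low):
    # +1 where the close breaks above the lookback high (long),
    # -1 where it breaks below the lookback low (short), 0 otherwise
    long_signal = (close > lookback_high).astype(int)
    short_signal = (close < lookback_low).astype(int)
    return long_signal - short_signal
```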
### View Data
Let's compare the signals you generated against the close prices. This chart will show a lot of signals. Too many in fact. We'll talk about filtering the redundant signals in the next problem.
```
signal = get_long_short(close, lookback_high, lookback_low)
project_helper.plot_signal(
close[apple_ticker],
signal[apple_ticker],
'Long and Short of {} Stock'.format(apple_ticker))
```
## Filter Signals
That was a lot of repeated signals! If we're already shorting a stock, having an additional signal to short a stock isn't helpful for this strategy. This also applies to additional long signals when the last signal was long.
Implement `filter_signals` to filter out repeated long or short signals within the `lookahead_days`. If the previous signal was the same, change the signal to `0` (do nothing signal). For example, say you have a single stock time series that is
`[1, 0, 1, 0, 1, 0, -1, -1]`
Running `filter_signals` with a lookahead of 3 days should turn those signals into
`[1, 0, 0, 0, 1, 0, -1, 0]`
To help you implement the function, we have provided you with the `clear_signals` function. This will remove all signals within a window after the last signal. For example, say you're using a window size of 3 with `clear_signals`. It would turn the Series of long signals
`[0, 1, 0, 0, 1, 1, 0, 1, 0]`
into
`[0, 1, 0, 0, 0, 1, 0, 0, 0]`
Note: it only takes a Series of the same type of signals, where `1` is the signal and `0` is no signal. It can't take a mix of long and short signals. Using this function, implement `filter_signals`.
```
def clear_signals(signals, window_size):
    """
    Clear out signals in a Series of just long or short signals.
    Remove the number of signals down to 1 within the window size time period.
    Parameters
    ----------
    signals : Pandas Series
        The long, short, or do nothing signals
    window_size : int
        The number of days to have a single signal
    Returns
    -------
    signals : Pandas Series
        Signals with the signals removed from the window size
    """
    # Start with buffer of window size
    # This handles the edge case of calculating past_signal in the beginning
    clean_signals = [0]*window_size
    for signal_i, current_signal in enumerate(signals):
        # Check if there was a signal in the past window_size of days
        has_past_signal = bool(sum(clean_signals[signal_i:signal_i+window_size]))
        # Use the current signal if there's no past signal, else 0/False
        clean_signals.append(not has_past_signal and current_signal)
    # Remove buffer
    clean_signals = clean_signals[window_size:]
    # Return the signals as a Series of Ints
    return pd.Series(np.array(clean_signals).astype(int), signals.index)

def filter_signals(signal, lookahead_days):
    """
    Filter out signals in a DataFrame.
    Parameters
    ----------
    signal : DataFrame
        The long, short, and do nothing signals for each ticker and date
    lookahead_days : int
        The number of days to look ahead
    Returns
    -------
    filtered_signal : DataFrame
        The filtered long, short, and do nothing signals for each ticker and date
    """
    #TODO: Implement function
    return None

project_tests.test_filter_signals(filter_signals)
```
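One possible (ungraded) way to build `filter_signals` on top of the helper is to split the signal into long-only and short-only series, clear repeats in each separately, and recombine. The helper is re-declared locally here (compactly, with the same behavior) so the snippet runs standalone.

```python
import numpy as np
import pandas as pd

def clear_signals_sketch(signals, window_size):
    # Same idea as the provided clear_signals helper: keep the first signal
    # in each window, zero out repeats that follow within window_size days
    clean = [0] * window_size  # buffer for the start of the series
    for i, current in enumerate(signals):
        has_past = bool(sum(clean[i:i + window_size]))
        clean.append(0 if has_past else current)
    return pd.Series(np.array(clean[window_size:]).astype(int), signals.index)

def filter_signals_sketch(signal, lookahead_days):
    # Clear repeated longs and repeated shorts independently,
    # then recombine into a single {-1, 0, 1} signal
    longs = (signal == 1).astype(int).apply(clear_signals_sketch, window_size=lookahead_days)
    shorts = (signal == -1).astype(int).apply(clear_signals_sketch, window_size=lookahead_days)
    return longs - shorts
```

Running it on the example from the text, `[1, 0, 1, 0, 1, 0, -1, -1]` with a 3-day lookahead becomes `[1, 0, 0, 0, 1, 0, -1, 0]`.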
### View Data
Let's view the same chart as before, but with the redundant signals removed.
```
signal_5 = filter_signals(signal, 5)
signal_10 = filter_signals(signal, 10)
signal_20 = filter_signals(signal, 20)
for signal_data, signal_days in [(signal_5, 5), (signal_10, 10), (signal_20, 20)]:
project_helper.plot_signal(
close[apple_ticker],
signal_data[apple_ticker],
'Long and Short of {} Stock with {} day signal window'.format(apple_ticker, signal_days))
```
## Lookahead Close Prices
With the trading signal done, we can start working on evaluating how many days to short or long the stocks. In this problem, implement `get_lookahead_prices` to get the close price a number of days ahead in time. You can get the number of days from the variable `lookahead_days`. We'll use the lookahead prices to calculate future returns in another problem.
```
def get_lookahead_prices(close, lookahead_days):
"""
Get the lookahead prices for `lookahead_days` number of days.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookahead_days : int
The number of days to look ahead
Returns
-------
lookahead_prices : DataFrame
The lookahead prices for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_get_lookahead_prices(get_lookahead_prices)
```
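A possible (ungraded) sketch is a one-liner: a negative `shift` pulls future rows backward, so each date holds the close price `lookahead_days` trading days later.

```python
import pandas as pd

def get_lookahead_prices_sketch(close, lookahead_days):
    # shift(-n) aligns each row with the close n rows ahead;
    # the last n rows become NaN since there is no future data for them
    return close.shift(-lookahead_days)
```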
### View Data
Using the `get_lookahead_prices` function, let's generate lookahead closing prices for 5, 10, and 20 days.
Let's also chart a subsection of a few months of the Apple stock instead of years. This will allow you to view the differences between the 5, 10, and 20 day lookaheads. Otherwise, they will mesh together when looking at a chart that is zoomed out.
```
lookahead_5 = get_lookahead_prices(close, 5)
lookahead_10 = get_lookahead_prices(close, 10)
lookahead_20 = get_lookahead_prices(close, 20)
project_helper.plot_lookahead_prices(
close[apple_ticker].iloc[150:250],
[
(lookahead_5[apple_ticker].iloc[150:250], 5),
(lookahead_10[apple_ticker].iloc[150:250], 10),
(lookahead_20[apple_ticker].iloc[150:250], 20)],
'5, 10, and 20 day Lookahead Prices for Slice of {} Stock'.format(apple_ticker))
```
## Lookahead Price Returns
Implement `get_return_lookahead` to generate the log price return between the closing price and the lookahead price.
```
def get_return_lookahead(close, lookahead_prices):
"""
Calculate the log returns from the lookahead days to the signal day.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookahead_prices : DataFrame
The lookahead prices for each ticker and date
Returns
-------
lookahead_returns : DataFrame
The lookahead log returns for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_get_return_lookahead(get_return_lookahead)
```
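Since the lookahead prices are already aligned with the close prices, one possible (ungraded) sketch is just the log-return formula applied element-wise.

```python
import numpy as np
import pandas as pd

def get_return_lookahead_sketch(close, lookahead_prices):
    # Log return between today's close and the aligned lookahead close:
    # log(p_future / p_today) = log(p_future) - log(p_today)
    return np.log(lookahead_prices) - np.log(close)
```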
### View Data
Using the same lookahead prices and same subsection of the Apple stock from the previous problem, we'll view the lookahead returns.
In order to view price returns on the same chart as the stock, a second y-axis will be added. When viewing this chart, the axis for the price of the stock will be on the left side, like previous charts. The axis for price returns will be located on the right side.
```
price_return_5 = get_return_lookahead(close, lookahead_5)
price_return_10 = get_return_lookahead(close, lookahead_10)
price_return_20 = get_return_lookahead(close, lookahead_20)
project_helper.plot_price_returns(
close[apple_ticker].iloc[150:250],
[
(price_return_5[apple_ticker].iloc[150:250], 5),
(price_return_10[apple_ticker].iloc[150:250], 10),
(price_return_20[apple_ticker].iloc[150:250], 20)],
'5, 10, and 20 day Lookahead Returns for Slice of {} Stock'.format(apple_ticker))
```
## Compute the Signal Return
Using the price returns generate the signal returns.
```
def get_signal_return(signal, lookahead_returns):
"""
Compute the signal returns.
Parameters
----------
signal : DataFrame
The long, short, and do nothing signals for each ticker and date
lookahead_returns : DataFrame
The lookahead log returns for each ticker and date
Returns
-------
signal_return : DataFrame
Signal returns for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_get_signal_return(get_signal_return)
```
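Because the signals are encoded as {-1, 0, 1}, one possible (ungraded) sketch reduces to an element-wise product: a long earns the lookahead return, a short earns its negative, and a zero signal earns nothing.

```python
import pandas as pd

def get_signal_return_sketch(signal, lookahead_returns):
    # Element-wise product: +1 keeps the return, -1 flips its sign, 0 zeroes it
    return signal * lookahead_returns
```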
### View Data
Let's continue using the previous lookahead prices to view the signal returns. Just like before, the axis for the signal returns is on the right side of the chart.
```
title_string = '{} day Lookahead Signal Returns for {} Stock'
signal_return_5 = get_signal_return(signal_5, price_return_5)
signal_return_10 = get_signal_return(signal_10, price_return_10)
signal_return_20 = get_signal_return(signal_20, price_return_20)
project_helper.plot_signal_returns(
close[apple_ticker],
[
(signal_return_5[apple_ticker], signal_5[apple_ticker], 5),
(signal_return_10[apple_ticker], signal_10[apple_ticker], 10),
(signal_return_20[apple_ticker], signal_20[apple_ticker], 20)],
[title_string.format(5, apple_ticker), title_string.format(10, apple_ticker), title_string.format(20, apple_ticker)])
```
## Test for Significance
### Histogram
Let's plot a histogram of the signal return values.
```
project_helper.plot_signal_histograms(
[signal_return_5, signal_return_10, signal_return_20],
'Signal Return',
('5 Days', '10 Days', '20 Days'))
```
### Question: What do the histograms tell you about the signal returns?
*#TODO: Put Answer In this Cell*
### P-Value
Let's calculate the P-Value from the signal return.
```
pval_5 = project_helper.get_signal_return_pval(signal_return_5)
print('5 Day P-value: {}'.format(pval_5))
pval_10 = project_helper.get_signal_return_pval(signal_return_10)
print('10 Day P-value: {}'.format(pval_10))
pval_20 = project_helper.get_signal_return_pval(signal_return_20)
print('20 Day P-value: {}'.format(pval_20))
```
### Question: What do the p-values tell you about the null hypothesis?
*#TODO: Put Answer In this Cell*
## Outliers
You might have noticed the outliers in the 10 and 20 day histograms. To better visualize the outliers, let's compare the 5, 10, and 20 day signal returns to normal distributions with the same mean and standard deviation as each signal return distribution.
```
project_helper.plot_signal_to_normal_histograms(
[signal_return_5, signal_return_10, signal_return_20],
'Signal Return',
('5 Days', '10 Days', '20 Days'))
```
## Kolmogorov-Smirnov Test
While you can see the outliers in the histogram, we need to find the stocks that are causing these outlying returns. We'll use the Kolmogorov-Smirnov Test, or KS-Test. This test will be applied to each ticker's signal returns where a long or short signal exists.
```
# Filter out returns that don't have a long or short signal.
long_short_signal_returns_5 = signal_return_5[signal_5 != 0].stack()
long_short_signal_returns_10 = signal_return_10[signal_10 != 0].stack()
long_short_signal_returns_20 = signal_return_20[signal_20 != 0].stack()
# Get just ticker and signal return
long_short_signal_returns_5 = long_short_signal_returns_5\
.reset_index().rename(columns={0:'signal_return'})[['ticker', 'signal_return']]
long_short_signal_returns_10 = long_short_signal_returns_10\
.reset_index().rename(columns={0:'signal_return'})[['ticker', 'signal_return']]
long_short_signal_returns_20 = long_short_signal_returns_20\
.reset_index().rename(columns={0:'signal_return'})[['ticker', 'signal_return']]
# View some of the data
long_short_signal_returns_5.head(10)
```
This gives you the data to use in the KS-Test.
Now it's time to implement the function `calculate_kstest` to run a Kolmogorov-Smirnov test (KS test) between a normal distribution and each stock's signal returns. Use [`scipy.stats.kstest`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.kstest.html#scipy-stats-kstest) to perform the KS test.
For this function, we don't recommend you try to find a vectorized solution. Instead, you should use the [`groupby`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.groupby.html) function. In conjunction, you can either use the [`apply`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.core.groupby.GroupBy.apply.html) function or iterate over the groupby object.
```
from scipy.stats import kstest
def calculate_kstest(long_short_signal_returns):
"""
Calculate the KS-Test against the signal returns with a long or short signal.
Parameters
----------
long_short_signal_returns : DataFrame
The signal returns which have a signal.
This DataFrame contains two columns, "ticker" and "signal_return"
Returns
-------
ks_values : Pandas Series
KS static for all the tickers
p_values : Pandas Series
P value for all the tickers
"""
#TODO: Implement function
return None, None
project_tests.test_calculate_kstest(calculate_kstest)
```
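One possible (ungraded) sketch follows the `groupby` hint. Note one assumption made here that the problem statement leaves open: the reference normal distribution is parameterized by the mean and standard deviation of the *pooled* sample, rather than per ticker.

```python
import pandas as pd
from scipy.stats import kstest

def calculate_kstest_sketch(long_short_signal_returns):
    # Parameterize the reference normal from the pooled sample
    # (an assumption -- a per-ticker normalization is also plausible)
    mean = long_short_signal_returns['signal_return'].mean()
    std = long_short_signal_returns['signal_return'].std(ddof=0)
    ks_values, p_values = {}, {}
    # Run the KS test on each ticker's returns against that normal
    for ticker, group in long_short_signal_returns.groupby('ticker'):
        ks, p = kstest(group['signal_return'], 'norm', args=(mean, std))
        ks_values[ticker] = ks
        p_values[ticker] = p
    return pd.Series(ks_values), pd.Series(p_values)
```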
### View Data
Using the signal returns we created above, let's calculate the ks and p values.
```
ks_values_5, p_values_5 = calculate_kstest(long_short_signal_returns_5)
ks_values_10, p_values_10 = calculate_kstest(long_short_signal_returns_10)
ks_values_20, p_values_20 = calculate_kstest(long_short_signal_returns_20)
print('ks_values_5')
print(ks_values_5.head(10))
print('p_values_5')
print(p_values_5.head(10))
```
## Find Outliers
With the ks and p values calculated, let's find which symbols are the outliers. Implement the `find_outliers` function to find the following outliers:
- Symbols whose p-value is less than `pvalue_threshold` (i.e., that reject the null hypothesis of normally distributed returns).
- Symbols with a KS value above `ks_threshold`.
```
def find_outliers(ks_values, p_values, ks_threshold, pvalue_threshold=0.05):
"""
Find outlying symbols using KS values and P-values
Parameters
----------
ks_values : Pandas Series
KS static for all the tickers
p_values : Pandas Series
P value for all the tickers
ks_threshold : float
The threshold for the KS statistic
pvalue_threshold : float
The threshold for the p-value
Returns
-------
outliers : set of str
Symbols that are outliers
"""
#TODO: Implement function
return None
project_tests.test_find_outliers(find_outliers)
```
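One possible (ungraded) sketch reads the two bullet conditions as a union; an intersection reading (both conditions at once) is also defensible given the wording, so treat this choice as an assumption.

```python
import pandas as pd

def find_outliers_sketch(ks_values, p_values, ks_threshold, pvalue_threshold=0.05):
    # Union reading: a symbol is an outlier if its p-value rejects
    # normality OR its KS statistic exceeds the threshold
    mask = (p_values < pvalue_threshold) | (ks_values > ks_threshold)
    return set(ks_values[mask].index)
```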
### View Data
Using the `find_outliers` function you implemented, let's see what we found.
```
ks_threshold = 0.8
outliers_5 = find_outliers(ks_values_5, p_values_5, ks_threshold)
outliers_10 = find_outliers(ks_values_10, p_values_10, ks_threshold)
outliers_20 = find_outliers(ks_values_20, p_values_20, ks_threshold)
outlier_tickers = outliers_5.union(outliers_10).union(outliers_20)
print('{} Outliers Found:\n{}'.format(len(outlier_tickers), ', '.join(list(outlier_tickers))))
```
### Show Significance without Outliers
Let's compare the 5, 10, and 20 day signal returns without outliers to normal distributions. Also, let's see how the P-Value has changed with the outliers removed.
```
good_tickers = list(set(close.columns) - outlier_tickers)
project_helper.plot_signal_to_normal_histograms(
[signal_return_5[good_tickers], signal_return_10[good_tickers], signal_return_20[good_tickers]],
'Signal Return Without Outliers',
('5 Days', '10 Days', '20 Days'))
outliers_removed_pval_5 = project_helper.get_signal_return_pval(signal_return_5[good_tickers])
outliers_removed_pval_10 = project_helper.get_signal_return_pval(signal_return_10[good_tickers])
outliers_removed_pval_20 = project_helper.get_signal_return_pval(signal_return_20[good_tickers])
print('5 Day P-value (with outliers): {}'.format(pval_5))
print('5 Day P-value (without outliers): {}'.format(outliers_removed_pval_5))
print('')
print('10 Day P-value (with outliers): {}'.format(pval_10))
print('10 Day P-value (without outliers): {}'.format(outliers_removed_pval_10))
print('')
print('20 Day P-value (with outliers): {}'.format(pval_20))
print('20 Day P-value (without outliers): {}'.format(outliers_removed_pval_20))
```
That's more like it! The returns are closer to a normal distribution. You have finished the research phase of a Breakout Strategy. You can now submit your project.
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_BayesianDecisions/student/W3D1_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 1: Bayes with a binary hidden state
**Week 3, Day 1: Bayesian Decisions**
**By Neuromatch Academy**
__Content creators:__ [insert your name here]
__Content reviewers:__
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# Tutorial Objectives
This is the first in a series of two core tutorials on Bayesian statistics. In these tutorials, we will explore the fundamental concepts of the Bayesian approach from two perspectives. This tutorial will work through an example of Bayesian inference and decision making using a binary hidden state. The second main tutorial extends these concepts to a continuous hidden state. In the following days, each of these basic ideas will be extended: first through time, as we consider what happens when we infer a hidden state using multiple observations and when the hidden state changes across time. On the third day, we will introduce how to use inference and decisions to select actions for optimal control. For this tutorial, you will be introduced to our binary state fishing problem!
This notebook will introduce the fundamental building blocks for Bayesian statistics:
1. How do we use probability distributions to represent hidden states?
2. How does marginalization work and how can we use it?
3. How do we combine new information with our prior knowledge?
4. How do we combine the possible loss (or gain) for making a decision with our probabilistic knowledge?
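As a tiny preview of points 2 and 3, here is a hand-rolled Bayes update for a binary hidden state. All numbers are made up purely for illustration.

```python
import numpy as np

# Made-up numbers for illustration only
prior = np.array([0.3, 0.7])        # p(s = left), p(s = right)
likelihood = np.array([0.8, 0.4])   # p(m = "fish" | s) for each state

# Bayes' rule: the posterior is proportional to likelihood * prior,
# normalized so the posterior sums to 1
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()
print(posterior)  # roughly [0.46, 0.54]
```

Observing "fish" shifts belief toward "left", since that state makes the observation twice as likely.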
```
# @title Video 1: Introduction to Bayesian Statistics
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="JiEIn9QsrFg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Setup
Please execute the cells below to initialize the notebook environment.
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches
from matplotlib import transforms
from matplotlib import gridspec
from scipy.optimize import fsolve
from collections import namedtuple
#@title Figure Settings
import ipywidgets as widgets # interactive display
from ipywidgets import GridspecLayout
from IPython.display import clear_output
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
import warnings
warnings.filterwarnings("ignore")
# @title Plotting Functions
def plot_joint_probs(P):
assert np.all(P >= 0), "probabilities should be >= 0"
# normalize if not
P = P / np.sum(P)
marginal_y = np.sum(P,axis=1)
marginal_x = np.sum(P,axis=0)
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.005
# start with a square Figure
fig = plt.figure(figsize=(5, 5))
joint_prob = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]
rect_x_cmap = plt.cm.Blues
rect_y_cmap = plt.cm.Reds
# Show joint probs and marginals
ax = fig.add_axes(joint_prob)
ax_x = fig.add_axes(rect_histx, sharex=ax)
ax_y = fig.add_axes(rect_histy, sharey=ax)
# Show joint probs and marginals
ax.matshow(P,vmin=0., vmax=1., cmap='Greys')
ax_x.bar(0, marginal_x[0], facecolor=rect_x_cmap(marginal_x[0]))
ax_x.bar(1, marginal_x[1], facecolor=rect_x_cmap(marginal_x[1]))
ax_y.barh(0, marginal_y[0], facecolor=rect_y_cmap(marginal_y[0]))
ax_y.barh(1, marginal_y[1], facecolor=rect_y_cmap(marginal_y[1]))
# set limits
ax_x.set_ylim([0,1])
ax_y.set_xlim([0,1])
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{P[i,j]:.2f}"
ax.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = marginal_x[i]
c = f"{v:.2f}"
ax_x.text(i, v +0.1, c, va='center', ha='center', color='black')
v = marginal_y[i]
c = f"{v:.2f}"
ax_y.text(v+0.2, i, c, va='center', ha='center', color='black')
# set up labels
ax.xaxis.tick_bottom()
ax.yaxis.tick_left()
ax.set_xticks([0,1])
ax.set_yticks([0,1])
ax.set_xticklabels(['Silver','Gold'])
ax.set_yticklabels(['Small', 'Large'])
ax.set_xlabel('color')
ax.set_ylabel('size')
ax_x.axis('off')
ax_y.axis('off')
return fig
# test
# P = np.random.rand(2,2)
# P = np.asarray([[0.9, 0.8], [0.4, 0.1]])
# P = P / np.sum(P)
# fig = plot_joint_probs(P)
# plt.show(fig)
# plt.close(fig)
# fig = plot_prior_likelihood(0.5, 0.3)
# plt.show(fig)
# plt.close(fig)
def plot_prior_likelihood_posterior(prior, likelihood, posterior):
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey = ax_prior)
rect_colormap = plt.cm.Blues
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0, 0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1, 0]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='Greens')
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m (right) | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Posterior p(s | m)')
ax_posterior.xaxis.set_ticks_position('bottom')
ax_posterior.spines['left'].set_visible(False)
ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{posterior[i,j]:.2f}"
ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i, 0]
c = f"{v:.2f}"
        ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
    return fig
def plot_prior_likelihood(ps, p_a_s1, p_a_s0, measurement):
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
assert 0.0 <= ps <= 1.0
prior = np.asarray([ps, 1 - ps])
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.22
left_space = left + small_width + padding
small_padding = 0.05
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + width + small_padding, bottom , small_width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_prior)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
# ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
# yticks = [0, 1], yticklabels = ['left', 'right'],
# ylabel = 'state (s)', xlabel = 'measurement (m)',
# title = 'Posterior p(s | m)')
# ax_posterior.xaxis.set_ticks_position('bottom')
# ax_posterior.spines['left'].set_visible(False)
# ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{posterior[i,j]:.2f}"
# ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
return fig
# fig = plot_prior_likelihood(0.5, 0.3, 0.1, True)
# plt.show(fig)
# plt.close(fig)
from matplotlib import colors
def plot_utility(ps):
prior = np.asarray([ps, 1 - ps])
utility = np.array([[2, -3], [-2, 1]])
expected = prior @ utility
# definitions for the axes
left, width = 0.05, 0.16
bottom, height = 0.05, 0.9
padding = 0.04
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(17, 3))
rect_prior = [left, bottom, small_width, height]
rect_utility = [left + added_space , bottom , width, height]
rect_expected = [left + 2* added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_utility = fig.add_axes(rect_utility, sharey=ax_prior)
ax_expected = fig.add_axes(rect_expected)
rect_colormap = plt.cm.Blues
# Data of plots
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1]))
ax_utility.matshow(utility, cmap='cool')
norm = colors.Normalize(vmin=-3, vmax=3)
ax_expected.bar(0, expected[0], facecolor = rect_colormap(norm(expected[0])))
ax_expected.bar(1, expected[1], facecolor = rect_colormap(norm(expected[1])))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Probability of state")
ax_prior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected utility plot details
ax_expected.set(title = 'Expected utility', ylim = [-3, 3],
xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
yticks = [])
ax_expected.xaxis.set_ticks_position('bottom')
ax_expected.spines['left'].set_visible(False)
ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, 2.5, c, va='center', ha='center', color='black')
return fig
def plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,measurement):
assert 0.0 <= ps <= 1.0
assert 0.0 <= p_a_s1 <= 1.0
assert 0.0 <= p_a_s0 <= 1.0
prior = np.asarray([ps, 1 - ps])
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
utility = np.array([[2.0, -3.0], [-2.0, 1.0]])
# expected = np.zeros_like(utility)
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# expected[:, 0] = utility[:, 0] * posterior
# expected[:, 1] = utility[:, 1] * posterior
expected = posterior @ utility
# definitions for the axes
left, width = 0.05, 0.15
bottom, height = 0.05, 0.9
padding = 0.05
small_width = 0.1
large_padding = 0.07
left_space = left + small_width + large_padding
fig = plt.figure(figsize=(17, 4))
rect_prior = [left, bottom+0.05, small_width, height-0.1]
rect_likelihood = [left_space, bottom , width, height]
rect_posterior = [left_space + padding + width - 0.02, bottom+0.05 , small_width, height-0.1]
rect_utility = [left_space + padding + width + padding + small_width, bottom , width, height]
rect_expected = [left_space + padding + width + padding + small_width + padding + width, bottom+0.05 , width, height-0.1]
ax_likelihood = fig.add_axes(rect_likelihood)
ax_prior = fig.add_axes(rect_prior, sharey=ax_likelihood)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_likelihood)
ax_utility = fig.add_axes(rect_utility, sharey=ax_posterior)
ax_expected = fig.add_axes(rect_expected)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
expected_colormap = plt.cm.Wistia
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
    ax_utility.matshow(utility, cmap='cool')  # utility spans negative to positive values, so let matshow autoscale
# ax_expected.matshow(expected, vmin=0., vmax=1., cmap='Wistia')
ax_expected.bar(0, expected[0], facecolor = expected_colormap(expected[0]))
ax_expected.bar(1, expected[1], facecolor = expected_colormap(expected[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected Utility plot details
ax_expected.set(ylim = [-2, 2], xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)', title = 'Expected utility', yticks=[])
# ax_expected.axis('off')
ax_expected.spines['left'].set_visible(False)
# ax_expected.set(xticks = [0, 1], xticklabels = ['left', 'right'],
# xlabel = 'action (a)',
# title = 'Expected utility')
# ax_expected.xaxis.set_ticks_position('bottom')
# ax_expected.spines['left'].set_visible(False)
# ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{expected[i,j]:.2f}"
# ax_expected.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, v, c, va='center', ha='center', color='black')
return fig
# @title Helper Functions
def compute_marginal(px, py, cor):
# calculate 2x2 joint probabilities given marginals p(x=1), p(y=1) and correlation
p11 = px*py + cor*np.sqrt(px*py*(1-px)*(1-py))
p01 = px - p11
p10 = py - p11
p00 = 1.0 - p11 - p01 - p10
return np.asarray([[p00, p01], [p10, p11]])
# test
# print(compute_marginal(0.4, 0.6, -0.8))
def compute_cor_range(px, py):
    # Calculate the allowed range of correlation values given marginals p(x=1) and p(y=1)
    from scipy.optimize import fsolve  # fsolve is not imported at the top of this notebook
def p11(corr):
return px*py + corr*np.sqrt(px*py*(1-px)*(1-py))
def p01(corr):
return px - p11(corr)
def p10(corr):
return py - p11(corr)
def p00(corr):
return 1.0 - p11(corr) - p01(corr) - p10(corr)
Cmax = min(fsolve(p01, 0.0), fsolve(p10, 0.0))
Cmin = max(fsolve(p11, 0.0), fsolve(p00, 0.0))
return Cmin, Cmax
```
---
# Section 1: Gone Fishin'
```
# @title Video 2: Gone Fishin'
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="McALsTzb494", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
You were just introduced to the **binary hidden state problem** we are going to explore. You need to decide which side to fish on. We know fish like to school together. On different days the school of fish is either on the left or right side, but we don’t know what the case is today. We will represent our knowledge probabilistically, asking how to make a decision (where to decide the fish are or where to fish) and what to expect in terms of gains or losses. In the next two sections we will consider just the probability of where the fish might be and what you gain or lose by choosing where to fish.
Remember, you can either think of yourself as a scientist conducting an experiment or as a brain trying to make a decision. The Bayesian approach is the same!
---
# Section 2: Deciding where to fish
```
# @title Video 3: Utility
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="xvIVZrqF_5s", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
You know the probability that the school of fish is on the left side of the dock today, $P(s = left)$. You also know the probability that it is on the right, $P(s = right)$, because these two probabilities must add up to 1. You need to decide where to fish. It may seem obvious - you could just fish on the side where the probability of the fish being there is higher! Unfortunately, decisions and actions are always a little more complicated. Deciding where to fish may be influenced by more than just the probability of the school of fish being there, as we saw with the potential issues of submarines and sunburn.
We quantify these factors numerically using **utility**, which describes the consequences of your actions: how much value you gain (or if negative, lose) given the state of the world ($s$) and the action you take ($a$). In our example, our utility can be summarized as:
| Utility: U(s,a) | a = left | a = right |
| ----------------- |----------|----------|
| s = left | 2 | -3 |
| s = right | -2 | 1 |
To use utility to choose an action, we calculate the **expected utility** of that action by weighing these utilities with the probability of that state occurring. This allows us to choose actions by taking probabilities of events into account: we don't care if the outcome of an action-state pair is a loss if the probability of that state is very low. We can formalize this as:
$$\text{Expected utility of action a} = \sum_{s}U(s,a)P(s) $$
In other words, the expected utility of an action a is the sum over possible states of the utility of that action and state times the probability of that state.
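As a concrete check, the expected utilities for the utility table above and the demo's initial prior of $p(s = left) = 0.9$ reduce to a single matrix product (a minimal sketch mirroring what the demo computes; the variable names are just illustrative):

```python
import numpy as np

# Utility table U(s, a): rows are states (left, right), columns are actions (left, right)
utility = np.array([[2.0, -3.0],
                    [-2.0, 1.0]])

# Prior over states: p(s = left) = 0.9, p(s = right) = 0.1
prior = np.array([0.9, 0.1])

# Expected utility of each action: sum over states of U(s, a) * p(s)
expected_utility = prior @ utility

print(expected_utility)  # [ 1.6 -2.6] -> fishing on the left has the higher expected utility
```

The first entry, $2(0.9) + (-2)(0.1) = 1.6$, is the worked example discussed below.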
## Interactive Demo 2: Exploring the decision
Let's start to get a sense of how all this works.
Take a look at the interactive demo below. You can change the probability that the school of fish is on the left side ($p(s = left)$) using the slider. You will see the utility matrix and the corresponding expected utility of each action.
First, make sure you understand how the expected utility of each action is being computed from the probabilities and the utility values. In the initial state: the probability of the fish being on the left is 0.9 and on the right is 0.1. The expected utility of the action of fishing on the left is then $U(s = left,a = left)p(s = left) + U(s = right,a = left)p(s = right) = 2(0.9) + -2(0.1) = 1.6$.
For each of these scenarios, think and discuss first. Then use the demo to try out each and see if your action would have been correct (that is, if the expected value of that action is the highest).
1. You just arrived at the dock for the first time and have no sense of where the fish might be. So you guess that the probability of the school being on the left side is 0.5 (so the probability on the right side is also 0.5). Which side would you choose to fish on given our utility values?
2. You think that the probability of the school being on the left side is very low (0.1) and correspondingly high on the right side (0.9). Which side would you choose to fish on given our utility values?
3. What would you choose if the probability of the school being on the left side is slightly lower than on the right side (0.4 vs 0.6)?
```
# @markdown Execute this cell to use the widget
ps_widget = widgets.FloatSlider(0.9, description='p(s = left)', min=0.0, max=1.0, step=0.01)
@widgets.interact(
ps = ps_widget,
)
def make_utility_plot(ps):
fig = plot_utility(ps)
plt.show(fig)
plt.close(fig)
return None
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_459cbf35.py)
In this section, you have seen that both the utility of various state and action pairs and our knowledge of the probability of each state affects your decision. Importantly, we want our knowledge of the probability of each state to be as accurate as possible!
So how do we know these probabilities? We may have prior knowledge from years of fishing at the same dock. Over those years, we may have learned that the fish are more likely to be on the left side for example. We want to make sure this knowledge is as accurate as possible though. To do this, we want to collect more data, or take some more measurements! For the next few sections, we will focus on making our knowledge of the probability as accurate as possible, before coming back to using utility to make decisions.
---
# Section 3: Likelihood of the fish being on either side
```
# @title Video 4: Likelihood
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="l4m0JzMWGio", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
First, we'll think about what it means to take a measurement (also often called an observation or just data) and what it tells you about what the hidden state may be. Specifically, we'll be looking at the **likelihood**, which is the probability of your measurement ($m$) given the hidden state ($s$): $P(m | s)$. Remember that in this case, the hidden state is which side of the dock the school of fish is on.
We will watch someone fish (for let's say 10 minutes) and our measurement is whether they catch a fish or not. We know something about what catching a fish means for the likelihood of the fish being on one side or the other.
## Think! 3: Guessing the location of the fish
Let's say we go to a different dock from the one in the video. Here, there are different probabilities of catching fish given the state of the world. In this case, if they fish on the side of the dock where the fish are, they have a 70% chance of catching a fish. Otherwise, they catch a fish with only 20% probability.
The fisherperson is fishing on the left side.
1) Figure out each of the following:
- probability of catching a fish given that the school of fish is on the left side, $P(m = catch\text{ } fish | s = left )$
- probability of not catching a fish given that the school of fish is on the left side, $P(m = no \text{ } fish | s = left)$
- probability of catching a fish given that the school of fish is on the right side, $P(m = catch \text{ } fish | s = right)$
- probability of not catching a fish given that the school of fish is on the right side, $P(m = no \text{ } fish | s = right)$
2) If the fisherperson catches a fish, which side would you guess the school is on? Why?
3) If the fisherperson does not catch a fish, which side would you guess the school is on? Why?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_2284aaf4.py)
In the prior exercise, you guessed where the school of fish was based on the measurement you took (watching someone fish). You did this by choosing the state (side of the school) that maximized the probability of the measurement. In other words, you estimated the state by maximizing the likelihood $P(m|s)$. This is called maximum likelihood estimation (MLE) and you've encountered it before during this course, in W1D3!
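That maximum likelihood estimate can be sketched directly from the 70%/20% catch rates in Think! 3 (the array layout below is an assumption of this sketch, not code from the tutorial):

```python
import numpy as np

# Likelihood p(m | s): rows are states (left, right), columns are measurements (catch, no catch)
# The fisherperson fishes on the left, so "same side" means s = left
likelihood = np.array([[0.7, 0.3],
                       [0.2, 0.8]])

states = ['left', 'right']

# MLE: for each measurement, pick the state that maximizes p(m | s)
mle_if_catch = states[np.argmax(likelihood[:, 0])]
mle_if_no_catch = states[np.argmax(likelihood[:, 1])]

print(mle_if_catch, mle_if_no_catch)  # left right
```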
What if you had been going to this river for years and you knew that the fish were almost always on the left side? This would probably affect how you make your estimate - you would rely less on the single new measurement and more on your prior knowledge. This is the idea behind Bayesian inference, as we will see later in this tutorial!
---
# Section 4: Correlation and marginalization
```
# @title Video 5: Correlation and marginalization
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="vsDjtWi-BVo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
In this section, we are going to take a step back for a bit and think more generally about the amount of information shared between two random variables. We want to know how much information you gain when you observe one variable (take a measurement) if you know something about another. We will see that the fundamental concept is the same if we think about two attributes, for example the size and color of the fish, or the prior information and the likelihood.
## Math Exercise 4: Computing marginal likelihoods
To understand the information between two variables, let's first consider the size and color of the fish.
| P(X, Y) | Y = silver | Y = gold |
| ----------------- |----------|----------|
| X = small | 0.4 | 0.2 |
| X = large | 0.1 | 0.3 |
The table above shows us the **joint probabilities**: the probability of both specific attributes occurring together. For example, the probability of a fish being small and silver ($P(X = small, Y = silver)$) is 0.4.
We want to know the probability of a fish being small regardless of color. Since the fish are either silver or gold, this would be the probability of a fish being small and silver plus the probability of a fish being small and gold. This is an example of marginalizing, or summing out, the variable we are not interested in across the rows or columns. In math speak: $P(X = small) = \sum_y{P(X = small, Y)}$. This gives us a **marginal probability**, a probability of a variable outcome (in this case size), regardless of the other variables (in this case color).
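That row sum can be written directly from the joint table above (the axis conventions below follow the table: rows are sizes, columns are colors):

```python
import numpy as np

# Joint probabilities P(X, Y): rows are sizes (small, large), columns are colors (silver, gold)
P = np.array([[0.4, 0.2],
              [0.1, 0.3]])

# Marginalize out color: P(X = small) = sum over Y of P(X = small, Y)
p_small = P[0, :].sum()

# The full marginal over size is the row sums (summing across columns)
p_size = P.sum(axis=1)  # [P(small), P(large)]

print(p_small)  # 0.6
```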
Please complete the following math problems to further practice thinking through probabilities:
1. Calculate the probability of a fish being silver.
2. Calculate the probability of a fish being small, large, silver, or gold.
3. Calculate the probability of a fish being small OR gold. (Hint: $P(A\ \textrm{or}\ B) = P(A) + P(B) - P(A\ \textrm{and}\ B)$)
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_65e69cd1.py)
## Think! 4: Covarying probability distributions
The relationship between the marginal probabilities and the joint probabilities is determined by the correlation between the two random variables - a normalized measure of how much the variables covary. We can also think of this as gaining some information about one of the variables when we observe a measurement from the other. We will think about this more formally in Tutorial 2.
Here, we want to think about how the correlation between size and color of these fish changes how much information we gain about one attribute based on the other. See Bonus Section 1 for the formula for correlation.
Use the widget below and answer the following questions:
1. When the correlation is zero, $\rho = 0$, what does the distribution of size tell you about color?
2. Set $\rho$ to something small. As you change the probability of golden fish, what happens to the ratio of size probabilities? Set $\rho$ larger (can be negative). Can you explain the pattern of changes in the probabilities of size as you change the probability of golden fish?
3. Set the probability of golden fish and of large fish to around 65%. As the correlation goes towards 1, how often will you see silver large fish?
4. What is increasing the (absolute) correlation telling you about how likely you are to see one of the properties if you see a fish with the other?
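To build intuition before playing with the widget, the joint table for given marginals and correlation can be computed directly. This re-implements the notebook's `compute_marginal` helper so the snippet stands alone:

```python
import numpy as np

def joint_from_marginals(px, py, cor):
    """2x2 joint probabilities given marginals p(x=1), p(y=1) and correlation cor."""
    p11 = px * py + cor * np.sqrt(px * py * (1 - px) * (1 - py))
    p01 = px - p11
    p10 = py - p11
    p00 = 1.0 - p11 - p01 - p10
    return np.array([[p00, p01], [p10, p11]])

# Zero correlation: the joint factorizes, so observing size tells you nothing about color
P_indep = joint_from_marginals(0.5, 0.5, 0.0)
print(P_indep)  # all entries 0.25

# Strong positive correlation concentrates probability mass on the diagonal
P_corr = joint_from_marginals(0.5, 0.5, 0.9)
print(P_corr)  # diagonal entries 0.475, off-diagonal entries 0.025
```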
```
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
from ipywidgets import GridspecLayout  # not imported at the top of this notebook
gs = GridspecLayout(2,2)
cor_widget = widgets.FloatSlider(0.0, description='ρ', min=-1, max=1, step=0.01)
px_widget = widgets.FloatSlider(0.5, description='p(color=golden)', min=0.01, max=0.99, step=0.01, style=style)
py_widget = widgets.FloatSlider(0.5, description='p(size=large)', min=0.01, max=0.99, step=0.01, style=style)
gs[0,0] = cor_widget
gs[0,1] = px_widget
gs[1,0] = py_widget
@widgets.interact(
px=px_widget,
py=py_widget,
cor=cor_widget,
)
def make_corr_plot(px, py, cor):
Cmin, Cmax = compute_cor_range(px, py) #allow correlation values
cor_widget.min, cor_widget.max = Cmin+0.01, Cmax-0.01
if cor_widget.value > Cmax:
cor_widget.value = Cmax
if cor_widget.value < Cmin:
cor_widget.value = Cmin
cor = cor_widget.value
P = compute_marginal(px,py,cor)
# print(P)
fig = plot_joint_probs(P)
plt.show(fig)
plt.close(fig)
return None
# gs[1,1] = make_corr_plot()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_9727d8f3.py)
We have just seen how two random variables can be more or less independent. The more correlated, the less independent, and the more shared information. We also learned that we can marginalize to determine the marginal likelihood of a hidden state or to find the marginal probability distribution of two random variables. We are going to now complete our journey towards being fully Bayesian!
---
# Section 5: Bayes' Rule and the Posterior
Marginalization is going to be used to combine our prior knowledge, which we call the **prior**, and our new information from a measurement, the **likelihood**. In this case, the information we gain about the hidden state we are interested in (where the fish are) is based on the relationship between the probabilities of the measurement and our prior.
We can now calculate the full posterior distribution for the hidden state ($s$) using Bayes' Rule. As we've seen, the posterior is proportional to the prior times the likelihood. This means that the posterior probability of the hidden state ($s$) given a measurement ($m$) is proportional to the likelihood of the measurement given the state times the prior probability of that state:
$$ P(s | m) \propto P(m | s) P(s) $$
We say proportional to instead of equal because we need to normalize to produce a full probability distribution:
$$ P(s | m) = \frac{P(m | s) P(s)}{P(m)} $$
Normalizing by this $P(m)$ means that our posterior is a complete probability distribution that sums or integrates to 1 appropriately. We now can use this new, complete probability distribution for any future inference or decisions we like! In fact, as we will see tomorrow, we can use it as a new prior! Finally, we often call this probability distribution our beliefs over the hidden states, to emphasize that it is our subjective knowledge about the hidden state.
For many complicated cases, like those we might be using to model behavioral or brain inferences, the normalization term can be intractable or extremely complex to calculate. We can be careful to choose probability distributions where we can analytically calculate the posterior probability or where numerical approximation is reliable. Better yet, we sometimes don't need to bother with this normalization! The normalization term, $P(m)$, is the probability of the measurement. This does not depend on state, so it is essentially a constant we can often ignore. We can compare the unnormalized posterior distribution values for different states because how they relate to each other is unchanged when divided by the same constant. We will see how to do this to compare evidence for different hypotheses tomorrow. (It's also used to compare the likelihood of models fit using maximum likelihood estimation, as you did in W1D5.)
In this relatively simple example, we can compute the marginal probability $P(m)$ easily by using:
$$P(m) = \sum_s P(m | s) P(s)$$
We can then normalize so that we deal with the full posterior distribution.
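Putting the pieces together numerically, here is a sketch using the earlier dock's 70%/20% catch rates with a flat prior (the flat prior is an assumption of this sketch, chosen so it does not give away the exercises below):

```python
import numpy as np

# Likelihood p(m | s): rows are states (left, right), columns are measurements (catch, no catch)
likelihood = np.array([[0.7, 0.3],
                       [0.2, 0.8]])
prior = np.array([0.5, 0.5])  # flat prior over states

# Unnormalized posterior for m = catch: p(m | s) * p(s)
unnormalized = likelihood[:, 0] * prior

# Normalization constant: p(m) = sum over s of p(m | s) p(s)
p_m = unnormalized.sum()

posterior = unnormalized / p_m
print(posterior)  # p(s | m = catch), sums to 1
```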
## Math Exercise 5: Calculating a posterior probability
Our prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish when fishing on the same side as the school was 50%; otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is:
| Likelihood: p(m \| s) | m = catch fish | m = no fish |
| ----------------- |----------|----------|
| s = left | 0.5 | 0.5 |
| s = right | 0.1 | 0.9 |
Calculate the posterior probability (on paper) that:
1. The school is on the left if the fisherperson catches a fish: $p(s = left | m = catch fish)$ (hint: normalize by computing $p(m = catch fish)$)
2. The school is on the right if the fisherperson does not catch a fish: $p(s = right | m = no fish)$
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_46a5b352.py)
## Coding Exercise 5: Computing Posteriors
Let's implement our above math to be able to compute posteriors for different priors and likelihoods.
As before, our prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish when fishing on the same side as the school was 50%; otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is:
| Likelihood: p(m \| s) | m = catch fish | m = no fish |
| ----------------- |----------|----------|
| s = left | 0.5 | 0.5 |
| s = right | 0.1 | 0.9 |
We want our full posterior to take the same 2 by 2 form. Make sure the outputs match your math answers!
```
def compute_posterior(likelihood, prior):
""" Use Bayes' Rule to compute posterior from likelihood and prior
Args:
likelihood (ndarray): i x j array with likelihood probabilities where i is
number of state options, j is number of measurement options
prior (ndarray): i x 1 array with prior probability of each state
Returns:
ndarray: i x j array with posterior probabilities where i is
number of state options, j is number of measurement options
"""
#################################################
## TODO for students ##
# Fill out function and remove
raise NotImplementedError("Student exercise: implement compute_posterior")
#################################################
# Compute unnormalized posterior (likelihood times prior)
posterior = ... # first row is s = left, second row is s = right
# Compute p(m)
p_m = np.sum(posterior, axis = 0)
# Normalize posterior (divide elements by p_m)
posterior /= ...
return posterior
# Make prior
prior = np.array([0.3, 0.7]).reshape((2, 1)) # first row is s = left, second row is s = right
# Make likelihood
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]]) # first row is s = left, second row is s = right
# Compute posterior
posterior = compute_posterior(likelihood, prior)
# Visualize
with plt.xkcd():
plot_prior_likelihood_posterior(prior, likelihood, posterior)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_042b00b7.py)
*Example output:*
<img alt='Solution hint' align='left' width=669 height=314 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D1_BayesianDecisions/static/W3D1_Tutorial1_Solution_042b00b7_0.png>
## Interactive Demo 5: What affects the posterior?
Now that we understand the implementation of Bayes' Rule, let's vary the parameters of the prior and likelihood to see how changing them affects the posterior.
In the demo below, you can change the prior by playing with the slider for $p( s = left)$. You can also change the likelihood by changing the probability of catching a fish given that the school is on the left and the probability of catching a fish given that the school is on the right. The fisherperson you are observing is fishing on the left.
1. Keeping the likelihood constant, when does the prior have the strongest influence over the posterior? Meaning, when does the posterior look most like the prior no matter whether a fish was caught or not?
2. Keeping the likelihood constant, when does the prior exert the weakest influence? Meaning, when does the posterior look least like the prior and depend most on whether a fish was caught or not?
3. Set the prior probability of the state = left to 0.6 and play with the likelihood. When does the likelihood exert the most influence over the posterior?
```
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s = left)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_plot(ps,p_a_s1,p_a_s0,m_right):
fig = plot_prior_likelihood(ps,p_a_s1,p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_a004d68d.py)
# Section 6: Making Bayesian fishing decisions
We will explore how to consider the expected utility of an action based on our belief (the posterior distribution) about where we think the fish are. Now we have all the components of a Bayesian decision: our prior information, the likelihood given a measurement, the posterior distribution (belief) and our utility (the gains and losses). This allows us to consider the relationship between the true value of the hidden state, $s$, and what we *expect* to get if we take action, $a$, based on our belief!
Let's use the following widget to think about the relationship between these probability distributions and utility function.
## Think! 6: What is more important, the probabilities or the utilities?
We are now going to put everything we've learned together to gain some intuition for how the elements that go into a Bayesian decision come together. Remember, the common assumption in neuroscience, psychology, economics, ecology, etc. is that we (humans and animals) are trying to maximize our expected utility.
1. Can you find a situation where the expected utility is the same for both actions?
2. What is more important for determining the expected utility: the prior or a new measurement (the likelihood)?
3. Why is this a normative model?
4. Can you think of ways in which this model would need to be extended to describe human or animal behavior?
```
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_utility_plot(ps, p_a_s1, p_a_s0,m_right):
fig = plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D1_BayesianDecisions/solutions/W3D1_Tutorial1_Solution_3a230382.py)
---
# Summary
In this tutorial, you learned about combining prior information with new measurements to update your knowledge using Bayes' Rule, in the context of a fishing problem.
Specifically, we covered:
* That the likelihood is the probability of the measurement given some hidden state
* That how the prior and likelihood interact to create the posterior, the probability of the hidden state given a measurement, depends on how they covary
* That utility is the gain from each action and state pair, and the expected utility of an action is the sum of that action's utilities over all states, weighted by the probability of each state. You can then choose the action with the highest expected utility.
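The expected-utility computation in that last bullet can be sketched in a few lines. The utility values here are illustrative only, not the ones from the video:

```python
import numpy as np

# Rows = true state (left, right); columns = action (fish left, fish right).
# These utility values are made up for illustration.
utility = np.array([[ 2.0, -1.0],
                    [-1.0,  2.0]])
belief = np.array([0.6, 0.4])        # posterior p(s | m) over the two states

expected_utility = belief @ utility  # one expected utility per action
best_action = int(np.argmax(expected_utility))
print(expected_utility, best_action)
```

With this belief, fishing on the left has expected utility 0.8 versus 0.2 for the right, so the left action is chosen.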
---
# Bonus
## Bonus Section 1: Correlation Formula
To understand the way we calculate the correlation, we need to review the definition of covariance and correlation.
Covariance:
$$
cov(X,Y) = \sigma_{XY} = E[(X - \mu_{x})(Y - \mu_{y})] = E[XY] - \mu_{x}\mu_{y}
$$
Correlation:
$$
\rho_{XY} = \frac{cov(X,Y)}{\sqrt{V(X)V(Y)}} = \frac{\sigma_{XY}}{\sigma_{X}\sigma_{Y}}
$$
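As a sanity check of these definitions, the sample covariance and correlation can be computed directly and compared against `np.corrcoef`; the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)  # y is partially correlated with x

# Sample versions of the formulas above:
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov_xy / (x.std() * y.std())

# The normalization factors cancel in the ratio, so this matches np.corrcoef:
print(np.isclose(rho, np.corrcoef(x, y)[0, 1]))
```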
# Compare trained NPEs accuracy as a function of $N_{\rm train}$
```
import numpy as np
from scipy import stats
from sedflow import obs as Obs
from sedflow import train as Train
from IPython.display import IFrame
# --- plotting ---
import corner as DFM
import matplotlib as mpl
import matplotlib.pyplot as plt
#mpl.use('PDF')
#mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
import torch
import torch.nn as nn
import torch.nn.functional as F
from sbi import utils as Ut
from sbi import inference as Inference
```
# Load test data
```
# x = theta_sps
# y = [u, g, r, i, z, sigma_u, sigma_g, sigma_r, sigma_i, sigma_z, z]
x_test, y_test = Train.load_data('test', version=1, sample='flow', params='thetas_unt')
x_test[:,6] = np.log10(x_test[:,6])
x_test[:,7] = np.log10(x_test[:,7])
```
# calculate KS test p-value for trained `SEDflow` models
```
prior_low = [7, 0., 0., 0., 0., 1e-2, np.log10(4.5e-5), np.log10(4.5e-5), 0, 0., -2.]
prior_high = [12.5, 1., 1., 1., 1., 13.27, np.log10(1.5e-2), np.log10(1.5e-2), 3., 3., 1.]
lower_bounds = torch.tensor(prior_low)
upper_bounds = torch.tensor(prior_high)
prior = Ut.BoxUniform(low=lower_bounds, high=upper_bounds, device='cpu')
def pps(anpe_samples, ntest=100, nmcmc=10000):
    ''' Given NPE posterior samples, calculate the percentile score (pp) and
    rank of the true parameters for ntest test galaxies.
'''
pp_thetas, rank_thetas = [], []
for igal in np.arange(ntest):
_mcmc_anpe = anpe_samples[igal]
pp_theta, rank_theta = [], []
for itheta in range(_mcmc_anpe.shape[1]):
pp_theta.append(stats.percentileofscore(_mcmc_anpe[:,itheta], x_test[igal,itheta])/100.)
rank_theta.append(np.sum(np.array(_mcmc_anpe[:,itheta]) < x_test[igal,itheta]))
pp_thetas.append(pp_theta)
rank_thetas.append(rank_theta)
pp_thetas = np.array(pp_thetas)
rank_thetas = np.array(rank_thetas)
return pp_thetas, rank_thetas
# architectures
archs = ['500x10.0', '500x10.1', '500x10.2', '500x10.3', '500x10.4']
nhidden = [500 for arch in archs]
nblocks = [10 for arch in archs]
ks_pvalues, ks_tot_pvalues = [], []
for i in range(len(archs)):
anpe_samples = np.load('/scratch/network/chhahn/sedflow/anpe_thetaunt_magsigz.toy.%s.samples.npy' % archs[i])
_pp, _rank = pps(anpe_samples, ntest=1000, nmcmc=10000)
ks_p = []
for ii in range(_pp.shape[1]):
_, _ks_p = stats.kstest(_pp[ii], 'uniform')
ks_p.append(_ks_p)
ks_pvalues.append(np.array(ks_p))
_, _ks_tot_p = stats.kstest(_pp.flatten(), 'uniform')
ks_tot_pvalues.append(_ks_tot_p)
print(archs[i], _ks_tot_p)
ks_all_p_nt, ks_tot_p_nt = [], []
for ntrain in [500000, 200000, 100000, 50000]:
anpe_samples = np.load('/scratch/network/chhahn/sedflow/anpe_thetaunt_magsigz.toy.ntrain%i.%s.samples.npy' % (ntrain, archs[i]))
_pp_nt, _rank_nt = pps(anpe_samples, ntest=1000, nmcmc=10000)
ks_p_nt = []
for ii in range(_pp_nt.shape[1]):
_, _ks_p = stats.kstest(_pp_nt[ii], 'uniform')
ks_p_nt.append(_ks_p)
ks_all_p_nt.append(ks_p_nt)
_, _ks_tot_p_nt = stats.kstest(_pp_nt.flatten(), 'uniform')
print('ntrain%i.%s' % (ntrain, archs[i]), _ks_tot_p_nt)
ks_tot_p_nt.append(_ks_tot_p_nt)
ks_all_p_nt = np.array(ks_all_p_nt)
ks_tot_p_nt = np.array(ks_tot_p_nt)
print(ks_tot_p_nt)
theta_lbls = [r'$\log M_*$', r"$\beta'_1$", r"$\beta'_2$", r"$\beta'_3$", r'$f_{\rm burst}$', r'$t_{\rm burst}$', r'$\log \gamma_1$', r'$\log \gamma_2$', r'$\tau_1$', r'$\tau_2$', r'$n_{\rm dust}$']
fig = plt.figure(figsize=(18,8))
sub = fig.add_subplot(121)
for itheta in range(_pp.shape[1]):
sub.plot([1e6, 500000, 200000, 100000, 50000], np.array([ks_p[itheta]] + list(ks_all_p_nt[:,itheta]))/ks_p[itheta], label=theta_lbls[itheta])
sub.legend(loc='upper left', fontsize=20)
sub.set_xlim(1.1e6, 5e4)
sub.set_yscale('log')
sub = fig.add_subplot(122)
print(_ks_tot_p)
print(ks_tot_p_nt)
print(np.array([_ks_tot_p] + list(ks_tot_p_nt))/_ks_tot_p)
sub.plot([1e6, 500000, 200000, 100000, 50000], np.array([_ks_tot_p] + list(ks_tot_p_nt))/_ks_tot_p)
sub.set_xlim(1.1e6, 5e4)
sub.set_yscale('log')
plt.show()
```
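The calibration check above tests percentile scores against a uniform distribution. That idea can be illustrated in isolation; here is a minimal sketch with synthetic, perfectly calibrated posterior samples (all data below are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ntest, nsamples = 200, 1000

# Truths and posterior samples drawn from the same distribution,
# i.e. a perfectly calibrated posterior:
truths = rng.normal(size=ntest)
samples = rng.normal(size=(ntest, nsamples))

# Percentile of each truth within its own posterior samples:
pp = np.array([stats.percentileofscore(samples[i], truths[i]) / 100.
               for i in range(ntest)])

# For a calibrated posterior, pp is uniform on [0, 1]; test with a KS test:
ks_stat, ks_p = stats.kstest(pp, 'uniform')
print(ks_stat, ks_p)
```

A large KS p-value means the percentile scores are consistent with uniformity, which is exactly what the `ks_tot_pvalues` comparison above uses to rank the trained models.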
```
import databehandling
#databehandling?
"""
InfoSec, Python Programming beginner (version 3.9+). Data processing.
[*] This code might not be the most user-friendly, but it is quite efficient and scalable.
[*] The code is developed for internal use only, not external.
[*] Any function only works with class objects whose attributes match its expectations.
[*] It is very tailor-made and not that open.
Date Creation: 2021-06-05
Author: Christopher Ek (chrome91)
Estimated Work Time: 2 hours (Took a while to figure out classes in Python coming from C#)
"""
# Data processing.
import os
from datetime import datetime
from collections import defaultdict
# A simple set of known file extensions, kept for scalability. This is where you enter new extensions.
file_exts = {'.hast', '.katt'}
grouped_files = defaultdict(int)
cat_list = []
horse_list = []
# List created for efficiency.
combined_list = []
# These two classes are created as object storage.
# We use this method to easily access variables insides the objects.
class Horse:
"""
Horse class.
"""
def __init__(self, name, weight, height, date):
self.name = name
self.weight = weight
self.height = height
self.date = date
class Cat:
"""
Cat class.
"""
def __init__(self, name, weight, height, date):
self.name = name
self.weight = weight
self.height = height
self.date = date
def weight_height_score(lists):
"""
Score table.
    :param lists: list of Horse or Cat objects.
    :return: (avg_weight_score, avg_height_score, least_height, most_height) as a tuple.
"""
    # I am aware that two of these vars are not being used.
    # This is only for future scalability.
    total_weight_score = 0
    avg_weight_score = 0
    total_height_score = 0
    avg_height_score = 0
    entries = 0
    # Lower and upper height bounds
    least_height = None
    most_height = 0
for i in lists:
# print(i.weight)
total_weight_score += int(i.weight)
total_height_score += int(i.height)
        entries += 1
if int(i.height) > most_height:
most_height = int(i.height)
# Implementation to avoid upper & low bounds.
if least_height is None:
least_height = int(i.height)
elif int(i.height) < least_height:
least_height = int(i.height)
# print(least_height)
# This just creates the average score of both animals or any other provided objects list.
    avg_weight_score = total_weight_score / entries
    avg_height_score = total_height_score / entries
    # Returns all the information; a named tuple could be used here for readability.
return avg_weight_score, avg_height_score, least_height, most_height
def date_sort(lists):
"""
Date sorting (Should have coded my own implementation, like quick sort but .sort is fine and fast.)
    :param lists: list of Horse or Cat objects (only with dates formatted like 2021-07-24).
    :return: sorted list of dates.
"""
date_sorted = []
# Creating a new list with the date only.
for i in lists:
date_sorted.append(i.date)
# Sorting the dates.
date_sorted.sort(key=lambda date: datetime.strptime(date, '%Y-%m-%d'))
# Returning list of sorted dates, not any objects.
return date_sorted
def name_matching(name):
"""
    Name matching.
    :param name: animal name (string)
    :return: None (prints the matching animal's weight)
"""
for i in cat_list:
if name == i.name:
print(i.weight)
for i in horse_list:
if name == i.name:
print(i.weight)
def main():
"""
Main functionality.
:param: empty
:return: None
"""
# Listing all the directories.
for f in os.listdir("."):
        # Could use further development. Didn't really have time to focus on the file-opening part.
if f.endswith(".katt"):
#print(12)
with open('{}'.format(f)) as katt_file:
                # Let's read the data and process it.
data = katt_file.readlines()
# Essence of OOP is Encapsulation, Polymorphism, Inheritance, Abstraction
# The part of Abstraction is not me. I do develop more closed in/specific programs.
cat_list.append(Cat(data[0],
data[1],
data[2],
data[3]))
combined_list.append(Cat(data[0],
data[1],
data[2],
data[3]))
        # Now we take care of the .hast files.
if f.endswith(".hast"):
with open("{}".format(f)) as horse_file:
# Data
data = horse_file.readlines()
horse_list.append(Horse(data[0],
data[1],
data[2],
data[3]))
combined_list.append(Horse(data[0],
data[1],
data[2],
data[3]))
try:
# This just printing stuff, super boring and ugly.
cat_avg_weight_score, cat_avg_height_score, cat_least_height, cat_most_height = weight_height_score(cat_list)
horse_avg_weight_score, horse_avg_height_score, horse_least_height, horse_most_height = weight_height_score(horse_list)
print("\n[*]Average Horse height {}\n[*]Average Horse weight {}\n[*]Average Cat height: {}\n[*]Average Cat weight: {}".format(horse_avg_height_score, horse_avg_weight_score, cat_avg_height_score, cat_avg_weight_score))
print("\n[*]Least Horse height {}\n[*]Most Horse height: {}\n[*]Least Cat height {}\n[*]Most Cat height: {}".format(horse_least_height, horse_most_height, cat_least_height, cat_most_height))
oldest_cat_date = date_sort(cat_list)
oldest_horse_date = date_sort(horse_list)
print("\n[*]Oldest Cat Date: {}".format(oldest_cat_date))
print("[*]Oldest Horse Date: {}".format(oldest_horse_date))
user_input = input("Please Write Your Name: ")
for i in combined_list:
if i.name == user_input:
print(i.weight)
except Exception as e:
        print("There is something wrong:", e)
if __name__ == "__main__":
main()
```
```
for f in os.listdir("."):
    # Could use further development. Didn't really have time to focus on the file-opening part.
if f.endswith(".katt"):
print(12)
with open('{}'.format(f)) as katt_file:
            # Let's read the data and process it.
data = katt_file.readlines()
# Essence of OOP is Encapsulation, Polymorphism, Inheritance, Abstraction
# The part of Abstraction is not me. I do develop more closed in/specific programs.
cat_list.append(Cat(data[0],
data[1],
data[2],
data[3]))
combined_list.append(Cat(data[0],
data[1],
data[2],
data[3]))
else:
print("cat file not uploading.")
    # Now we take care of the .hast files.
if f.endswith(".hast"):
with open("{}".format(f)) as horse_file:
# Data
data = horse_file.readlines()
horse_list.append(Horse(data[0],
data[1],
data[2],
data[3]))
combined_list.append(Horse(data[0],
data[1],
data[2],
data[3]))
else:
print("horse file not uploading.")
"""
import os
from collections import defaultdict
EXTENSIONS = {'.json', '.txt'}
directory = '.'
grouped_files = defaultdict(int)
for f in os.listdir(directory):
name, ext = os.path.splitext(os.path.join(directory, f))
if ext in EXTENSIONS:
grouped_files[name] += 1
for name in grouped_files:
if grouped_files[name] == len(EXTENSIONS):
with open('{}.txt'.format(name)) as txt_file, \
open('{}.json'.format(name)) as json_file:
# process files
print(txt_file, json_file)
a = None
b = 125
if int(a) < b:
print("Hello")
"""
```
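The commented-out sketch above pairs files that share a base name across two extensions. A cleaned-up, runnable version of that pattern (the directory argument and function name are illustrative) could look like:

```python
import os
from collections import defaultdict

EXTENSIONS = {'.json', '.txt'}

def paired_files(directory='.'):
    """Return base names that have one file for every extension in EXTENSIONS."""
    grouped = defaultdict(set)
    for f in os.listdir(directory):
        name, ext = os.path.splitext(f)
        if ext in EXTENSIONS:
            grouped[name].add(ext)
    # Keep only names present with all extensions:
    return sorted(name for name, exts in grouped.items() if exts == EXTENSIONS)
```

Grouping by base name first avoids opening a `.txt` file whose `.json` counterpart is missing, which is the failure the original loop was trying to guard against.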
# Memory Information
# GPU Information
```
!ls -lha kaggle.json
!pip install -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
import kaggle
import os
dataset_dir = '/content/home/'
def download_drive(dataset_dir):
"""
Downloads dataset from Kaggle and loads it in dataset_dir.
"""
if not os.path.exists(dataset_dir):
kaggle.api.authenticate()
kaggle.api.dataset_download_files(dataset="mbonyani/segmentdatactc", path=dataset_dir, unzip=True)
print('Download completed.')
else:
print('dataset already exists.')
return True
download_drive(dataset_dir)
!pip install tensorflow_addons==0.11.2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pnd
from tqdm import tqdm
import sys
import math
import os
import zipfile
import six
import warnings
import random
import gc
import six
from math import ceil
from sklearn.model_selection import KFold, StratifiedKFold,train_test_split
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from keras.metrics import top_k_categorical_accuracy
import cv2
import imgaug as ia
from imgaug import augmenters as iaa
from moviepy.editor import VideoFileClip
from PIL import Image
from tensorflow.keras.utils import to_categorical
from scipy.io import loadmat
import keras.backend as K
from keras.engine.topology import get_source_inputs
from keras.layers import Activation
from keras.layers import AveragePooling3D
from keras.layers import BatchNormalization
from keras.layers import Conv3D
from keras.layers import Conv3DTranspose
from keras.layers import Dense
from keras.layers import Dropout,Flatten
from keras.layers import GlobalAveragePooling3D
from keras.layers import GlobalMaxPooling3D
from keras.layers import Input
from keras.layers import MaxPooling3D
from keras.layers import Reshape
from keras.layers import UpSampling3D
from keras.layers import Concatenate
from tensorflow.keras.models import Model,load_model
from keras.regularizers import l2
!pip install git+https://www.github.com/keras-team/keras-contrib.git
from keras_contrib.layers import SubPixelUpscaling
from tensorflow.keras.callbacks import LearningRateScheduler
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.optimizers import SGD,Adam
import tensorflow
import tensorflow.keras.backend as K
from tensorflow.keras import backend as keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.losses import *
import gc
X_train = np.load('/content/home/train_img.npy').reshape((19637, 128, 128,1))
y_train = np.load('/content/home/train_seg.npy').reshape((19637, 128, 128,1))
gc.collect()
X_val = np.load('/content/home/val_img.npy').reshape((7521, 128, 128,1))
y_val = np.load('/content/home/val_seg.npy').reshape((7521, 128, 128,1))
# X_test = np.load('/content/home/data_road_asl.npy')
# y_test = np.load('/content/home/data_face_asl.npy')
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
def closs(y_true, y_pred):
def dice_loss(y_true, y_pred):
numerator = 2 * tensorflow.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tensorflow.reduce_sum(y_true + y_pred, axis=(1,2,3))
return K.reshape(1 - numerator / denominator, (-1, 1, 1))
return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
def iou_core(y_true, y_pred, smooth=1):
intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
union = K.sum(y_true,-1) + K.sum(y_pred,-1) - intersection
iou = (intersection + smooth) / ( union + smooth)
return iou
import numpy as np
import os
import skimage.io as io
import skimage.transform as trans
import numpy as np
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras
def unet(pretrained_weights = None,input_size = (128,128,1)):
inputs = Input(input_size)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
drop5 = Dropout(0.5)(conv5)
up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)
up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)
up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)
up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)
model = Model(inputs = inputs, outputs = conv10)
model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])
#model.summary()
if(pretrained_weights):
model.load_weights(pretrained_weights)
return model
model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5', monitor='loss',verbose=1, save_best_only=True)
model.fit(X_train,y_train, epochs=30,batch_size=16, validation_data=(X_val, y_val),callbacks=[model_checkpoint])
```
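The custom loss `closs` above combines binary cross-entropy with a Dice term. The Dice coefficient itself can be sanity-checked with plain NumPy; this helper is a sketch for intuition, not part of the notebook's training code:

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice overlap between two binary masks: 1 = perfect overlap, 0 = disjoint."""
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

a = np.zeros((4, 4)); a[:2, :2] = 1   # 4-pixel square
b = np.zeros((4, 4)); b[:2, 1:3] = 1  # shifted square, 2 pixels of overlap
print(dice_coefficient(a, a))         # identical masks -> ~1.0
print(dice_coefficient(a, b))         # 2*2 / (4+4) -> ~0.5
```

The Dice loss in `closs` is simply `1 - dice`, which penalizes poor overlap even when the background dominates the image, complementing pixel-wise cross-entropy.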
# Problem Set 6
```
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from scipy import signal
import matplotlib.style as style
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['text.usetex'] = True
```
### Here I assume that I guessed exactly the number of filter coefficients
### Considering several values of $\lambda$
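The cell below implements the standard RLS recursion with forgetting factor $\lambda$; for reference, the updates it performs at each step $k$ are:

$$ \boldsymbol{\psi}(k) = \mathbf{S}_D(k-1)\,\mathbf{x}^*(k) $$

$$ \mathbf{S}_D(k) = \frac{1}{\lambda}\left[\mathbf{S}_D(k-1) - \frac{\boldsymbol{\psi}(k)\,\boldsymbol{\psi}^H(k)}{\lambda + \boldsymbol{\psi}^H(k)\,\mathbf{x}(k)}\right] $$

$$ e(k) = d(k) - \mathbf{x}^T(k)\,\mathbf{w}(k-1) $$

$$ \mathbf{w}(k) = \mathbf{w}(k-1) + e(k)\,\mathbf{S}_D(k)\,\mathbf{x}(k) $$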
```
# System that I want to identify
h = np.array([[-2], [0.25]])
# Number of iterations
n_iteracoes = 800
# Number of Monte Carlo runs
n_mc = 50
Lambdas = [0.99, 0.98, 0.97, 0.96, 0.95, 0.94, 0.93, 0.8, 0.7, 0.6, 0.4, 0.3]
coef_numa_rodada = []
coeficientes = []
mse = []
delta = 0.01
for Lambda in Lambdas:
for mc in range(n_mc):
        # Initializations
erros = np.array([])
w_atual = np.array([[0], [0]])
x = np.array([[0], [0]])
        # Zero-mean noise
ruido = np.random.random(n_iteracoes)
ruido = ruido - ruido.mean()
S_D = delta * np.eye(2)
for ii in range(n_iteracoes):
desejado = np.dot(x.T, h) + 0.3*ruido[ii]
saida = np.dot(x.T, w_atual)
erros = np.append(erros, [desejado - saida])
psi = S_D.dot(x.conj())
S_D = (1./Lambda) * (S_D - np.outer(psi, psi.conj())/(Lambda + psi.T.conj().dot(x)))
            # RLS coefficient update
w_prox = w_atual + erros[ii]*S_D.dot(x)
            # Update the filter coefficients
w_atual = w_prox
coef_numa_rodada += [w_prox]
            # Update x (shift the existing samples and append a new one at the end)
x = np.roll(x, shift=-1)
x = np.append(arr=x[:-1], values=ruido[ii])
x = np.reshape(x, (len(x), 1))
if mc == 0:
erros_mc = erros
else:
erros_mc = np.vstack((erros_mc, erros))
coeficientes += [w_prox]
mse += [np.mean(erros_mc*erros_mc, axis=0)]
eps = 10**-16
x_plt = []
for cada in mse:
cada = cada[10:]
x_plt += [10 * np.log10(cada + eps)]
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['lines.linewidth'] = 1
plt.rcParams['text.usetex'] = True
fig, ax1 = plt.subplots()
ax1.plot(range(len(mse[0][10:])), x_plt[0], label=r'$\lambda = 0.99$')
ax1.plot(range(len(mse[1][10:])), x_plt[1], 'r', label=r'$\lambda = 0.98$')
# ax1.plot(range(len(mse[2][10:])), x_plt[2], 'g', label=r'$\lambda = 0.97$')
# ax1.plot(range(len(mse[2][10:])), x_plt[3], 'b', label=r'$\lambda = 0.96$')
# ax1.plot(range(len(mse[2][10:])), x_plt[4], label=r'$\lambda = 0.95$')
# ax1.plot(range(len(mse[2][10:])), x_plt[4], label=r'$\lambda = 0.94$')
# ax1.plot(range(len(mse[2][10:])), x_plt[5], label=r'$\lambda = 0.93$')
ax1.plot(range(len(mse[2][10:])), x_plt[6], label=r'$\lambda = 0.8$')
# ax1.plot(range(len(mse[2][10:])), x_plt[7], label=r'$\lambda = 0.7$')
# ax1.plot(range(len(mse[2][10:])), x_plt[8], label=r'$\lambda = 0.6$')
ax1.plot(range(len(mse[2][10:])), x_plt[9], label=r'$\lambda = 0.4$')
# ax1.plot(range(len(mse[2][10:])), x_plt[10], label=r'$\lambda = 0.3$')
ax1.legend(loc='upper right')
ax1.set_ylabel(r'MSE~(dB)')
ax1.set_xlabel(r'Iteração')
ax1.grid(True)
plt.savefig('mse_rls.pdf', dpi=300, transparent=True, optimize=True, bbox_inches='tight')
plt.show()
```
### Considering fewer values of $\lambda$ to look closely at the MSE
```
# System that I want to identify
h = np.array([[-2], [0.25]])
# Number of iterations
n_iteracoes = 800
# Number of Monte Carlo runs
n_mc = 100
Lambdas = [0.98]
mse = []
delta = 0.01
for Lambda in Lambdas:
coef_num_mc = np.array([])
for mc in range(n_mc):
        # Initializations
erros = np.array([])
coef_numa_rodada = np.array([])
w_atual = np.array([[0], [0]])
x = np.array([[0], [0]])
        # Zero-mean noise
ruido = np.random.random(n_iteracoes)
ruido = ruido - ruido.mean()
S_D = delta * np.eye(2)
for ii in range(n_iteracoes):
desejado = np.dot(x.T, h) + 0.1*ruido[ii]
saida = np.dot(x.T, w_atual)
erros = np.append(erros, [desejado - saida])
psi = S_D.dot(x.conj())
S_D = (1./Lambda) * (S_D - np.outer(psi, psi.conj())/(Lambda + psi.T.conj().dot(x)))
            # RLS coefficient update
w_prox = w_atual + erros[ii]*S_D.dot(x)
            # Update the filter coefficients
w_atual = w_prox
if ii == 0:
coef_numa_rodada = w_atual
else:
coef_numa_rodada = np.hstack((coef_numa_rodada, w_atual))
            # Update x (shift the existing samples and append a new one at the end)
x = np.roll(x, shift=-1)
x = np.append(arr=x[:-1], values=ruido[ii])
x = np.reshape(x, (len(x), 1))
if mc == 0:
coef_num_mc = coef_numa_rodada
erros_mc = erros
else:
coef_num_mc = np.dstack((coef_num_mc, coef_numa_rodada))
erros_mc = np.vstack((erros_mc, erros))
mse += [np.mean(erros_mc*erros_mc, axis=0)]
```
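For reference, the update implemented in the cell above is the exponentially weighted RLS recursion with forgetting factor $\lambda$, where $\mathbf{S}_D$ acts as the running estimate of the inverse input correlation matrix (initialised here to $\delta\,\mathbf{I}$):

$$e(i) = d(i) - \mathbf{x}^T(i)\,\mathbf{w}(i), \qquad \boldsymbol{\psi}(i) = \mathbf{S}_D(i-1)\,\mathbf{x}(i),$$

$$\mathbf{S}_D(i) = \frac{1}{\lambda}\left[\mathbf{S}_D(i-1) - \frac{\boldsymbol{\psi}(i)\,\boldsymbol{\psi}^H(i)}{\lambda + \boldsymbol{\psi}^H(i)\,\mathbf{x}(i)}\right], \qquad \mathbf{w}(i+1) = \mathbf{w}(i) + e(i)\,\mathbf{S}_D(i)\,\mathbf{x}(i).$$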
Mean of the coefficients over the Monte Carlo runs
```
media_lambda_99 = np.mean(coef_num_mc, axis=2)
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['lines.linewidth'] = 4
plt.rcParams['figure.figsize'] = 12, 10
plt.rcParams['text.usetex'] = True
fig, ax1 = plt.subplots()
ax1.plot(range(n_iteracoes), media_lambda_99[0,:], label=r'$\mathbf{w}[0]$ $\lambda$ = 0.98')
ax1.plot(range(n_iteracoes), media_lambda_99[1,:], label=r'$\mathbf{w}[1]$ $\lambda$ = 0.98')
ax1.plot(range(n_iteracoes), media_lambda_30[0,:], label=r'$\mathbf{w}[0]$ $\lambda$ = 0.4')
ax1.plot(range(n_iteracoes), media_lambda_30[1,:], label=r'$\mathbf{w}[1]$ $\lambda$ = 0.4')
ax1.axhline(y=h[0], xmin=0, xmax=1, color='k')
ax1.axhline(y=h[1], xmin=0, xmax=1, color='k')
plt.legend()
ax1.set_ylabel(r'Coeficientes')
ax1.set_xlabel(r'Iteração')
ax1.grid(True)
plt.savefig('coeficientes.pdf', dpi=300, transparent=True, optimize=True, bbox_inches='tight')
plt.show()
```
Here I want to show the convergence of the coefficients for different values of $\lambda$
```
# System to be identified
h = np.array([[-2], [0.25]])
# Number of iterations
n_iteracoes = 800
# Number of Monte Carlo runs
n_mc = 100
Lambdas = [0.4]
mse = []
delta = 0.01
for Lambda in Lambdas:
coef_num_mc = np.array([])
for mc in range(n_mc):
        # Initializations
erros = np.array([])
coef_numa_rodada = np.array([])
w_atual = np.array([[0], [0]])
x = np.array([[0], [0]])
        # Zero-mean noise
ruido = np.random.random(n_iteracoes)
ruido = ruido - ruido.mean()
S_D = delta * np.eye(2)
for ii in range(n_iteracoes):
desejado = np.dot(x.T, h) + 0.1*ruido[ii]
saida = np.dot(x.T, w_atual)
erros = np.append(erros, [desejado - saida])
psi = S_D.dot(x.conj())
S_D = (1./Lambda) * (S_D - np.outer(psi, psi.conj())/(Lambda + psi.T.conj().dot(x)))
            # RLS coefficient update
            w_prox = w_atual + erros[ii]*S_D.dot(x)
            # Update the filter coefficients
w_atual = w_prox
if ii == 0:
coef_numa_rodada = w_atual
else:
coef_numa_rodada = np.hstack((coef_numa_rodada, w_atual))
            # Update x (shift the existing values and append a new one at the end)
x = np.roll(x, shift=-1)
x = np.append(arr=x[:-1], values=ruido[ii])
x = np.reshape(x, (len(x), 1))
if mc == 0:
coef_num_mc = coef_numa_rodada
erros_mc = erros
else:
coef_num_mc = np.dstack((coef_num_mc, coef_numa_rodada))
erros_mc = np.vstack((erros_mc, erros))
mse += [np.mean(erros_mc*erros_mc, axis=0)]
media_lambda_30 = np.mean(coef_num_mc, axis=2)
```
### Different noise levels
```
# System to be identified
h = np.array([[-2], [0.25]])
# Number of iterations
n_iteracoes = 800
# Number of Monte Carlo runs
n_mc = 50
# Lambdas = [0.99, 0.98, 0.97, 0.96, 0.95, 0.94, 0.93, 0.8, 0.7, 0.6, 0.4, 0.3]
Lambda = 0.98
sigmas_ruido = [0.1, 0.2, 0.3, 0.5, 1, 1.1, 2, 5]
coef_numa_rodada = []
coeficientes = []  # stores the final coefficients of each Monte Carlo run
mse = []
delta = 0.01
for sigma_ruido in sigmas_ruido:
for mc in range(n_mc):
        # Initializations
erros = np.array([])
w_atual = np.array([[0], [0]])
x = np.array([[0], [0]])
        # Zero-mean noise
ruido = np.random.random(n_iteracoes)
ruido = ruido - ruido.mean()
S_D = delta * np.eye(2)
for ii in range(n_iteracoes):
desejado = np.dot(x.T, h) + sigma_ruido*ruido[ii]
saida = np.dot(x.T, w_atual)
erros = np.append(erros, [desejado - saida])
psi = S_D.dot(x.conj())
S_D = (1./Lambda) * (S_D - np.outer(psi, psi.conj())/(Lambda + psi.T.conj().dot(x)))
            # RLS coefficient update
            w_prox = w_atual + erros[ii]*S_D.dot(x)
            # Update the filter coefficients
w_atual = w_prox
coef_numa_rodada += [w_prox]
            # Update x (shift the existing values and append a new one at the end)
x = np.roll(x, shift=-1)
x = np.append(arr=x[:-1], values=ruido[ii])
x = np.reshape(x, (len(x), 1))
if mc == 0:
erros_mc = erros
else:
erros_mc = np.vstack((erros_mc, erros))
coeficientes += [w_prox]
mse += [np.mean(erros_mc*erros_mc, axis=0)]
eps = 10**-16
x_plt = []
for cada in mse:
cada = cada[10:]
x_plt += [10 * np.log10(cada + eps)]
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['figure.figsize'] = 14, 12
plt.rcParams['lines.linewidth'] = 1
plt.rcParams['text.usetex'] = True
fig, ax1 = plt.subplots()
ax1.plot(range(len(mse[0][10:])), x_plt[0], label=r'$a = 0.1$')
ax1.plot(range(len(mse[1][10:])), x_plt[1], 'r', label=r'$a = 0.2$')
ax1.plot(range(len(mse[2][10:])), x_plt[2], 'g', label=r'$a = 0.3$')
ax1.plot(range(len(mse[2][10:])), x_plt[3], 'b', label=r'$a = 0.5$')
ax1.plot(range(len(mse[2][10:])), x_plt[4], label=r'$a = 1$')
ax1.plot(range(len(mse[2][10:])), x_plt[5], label=r'$a = 1.1$')
ax1.plot(range(len(mse[2][10:])), x_plt[6], label=r'$a = 2$')
ax1.plot(range(len(mse[2][10:])), x_plt[7], label=r'$a = 5$')
ax1.legend(loc='upper right')
ax1.set_ylabel(r'MSE~(dB)')
ax1.set_xlabel(r'Iteração')
ax1.grid(True)
plt.savefig('ruidos.pdf', dpi=300, transparent=True, optimize=True, bbox_inches='tight')
plt.show()
```

# YES BANK DATATHON
## Machine Learning Challenge Round 3 - Classification
## EDA
```
import numpy as np
import pandas as pd
train=pd.read_csv('Yes_Bank_Train.csv')
test=pd.read_csv('Yes_Bank_Test_int.csv')
train.info()
sub=pd.read_csv('sample_clusters.csv')
```
No Null
```
test.describe(include='all').T
train.describe(include='all').T
```
**Outliers**
```
train.resident_since.value_counts()
test.resident_since.value_counts()
train[train.duration_month>60]
train[(train.age>68) | (train.duration_month>60)]
print(test.shape)
print(train.shape)
train[train.credit_amount==1500]
dftrain=train[(train.age<=68) & (train.duration_month<=60)]
dftrain.shape
```
### Cluster Formation
```
def tocluster(val):
    # map the credit amount to a cluster using the 4000 and 1500 thresholds
    if val >= 4000:
        return 1
    elif val >= 1500:
        return 2
    else:
        return 3
dftrain['cluster_number']=dftrain.credit_amount.apply(tocluster)
dftrain.head()
dftrain=dftrain.drop(['credit_amount'],axis=1)
dftrain=pd.get_dummies(dftrain,drop_first=True)
dftrain.shape
dftrain.cluster_number.value_counts()
```
**Train Test Split**
```
X,y=dftrain.drop('cluster_number',axis=1),dftrain.cluster_number
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1994,shuffle=True)
y_train.value_counts()
y_test.value_counts()
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
rf=RandomForestRegressor(n_estimators=400,max_features=20,max_depth=120,min_samples_split=3)
rf.fit(X_train,y_train)
y_pred=rf.predict(X_test)
# from sklearn.tree import DecisionTreeRegressor,DecisionTreeClassifier
# dc=DecisionTreeClassifier()
# dc.fit(X_train,y_train)
# y_pred=dc.predict(X_test)
from sklearn.neural_network import MLPClassifier, MLPRegressor
mlp=MLPClassifier(hidden_layer_sizes=(50,50,550,), activation="relu", max_iter=500, random_state=8,solver='adam')
mlp.fit(X_train,y_train)
# p=mlp.predict(X_test)
y_pred=mlp.predict(X_test)
y_pred
y_pred=np.round(y_pred).astype(int)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test,y_pred))
```
### Test Data
```
test.head()
dftest=pd.get_dummies(test,drop_first=True)
dftest.shape
rf.fit(X,y)
pred=rf.predict(dftest)
pred
pred=np.round(pred).astype(int)
pred
sub=pd.DataFrame({'serial number':test['serial number'],'cluster_number':pred})
sub.head()
sub.cluster_number.value_counts()
sub.to_csv('classDiff.csv',index=False)
```
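One caveat with the approach above: `pd.get_dummies` is applied separately to the train and test frames, so a category present in only one of them produces mismatched dummy columns, and the model would receive features in a different layout. A common fix (shown here on a toy frame, not the competition data) is to re-align the test columns to the training columns:

```python
import pandas as pd

train_raw = pd.DataFrame({'purpose': ['car', 'radio/TV', 'education']})
test_raw = pd.DataFrame({'purpose': ['car', 'furniture']})  # unseen category

X_cols = pd.get_dummies(train_raw, drop_first=True).columns
dummies_test = pd.get_dummies(test_raw, drop_first=True)

# Align: drop dummies unseen at training time, add missing ones as 0,
# and keep the training column order.
dummies_test = dummies_test.reindex(columns=X_cols, fill_value=0)
print(list(dummies_test.columns) == list(X_cols))  # True
```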
# High-Order Example
[](https://mybinder.org/v2/gh/teseoch/fem-intro/master?filepath=fem-intro-high-order.ipynb)

Run it with binder!
```
import numpy as np
import scipy.sparse as spr
from scipy.sparse.linalg import spsolve
import plotly.graph_objects as go
```
As in the linear case, the domain is $\Omega = [0, 1]$, which we discretize with $n_{el}$ segments (or elements) $s_i$.
Now we also create the high-order nodes.
Note that we append them at the end.
```
#domain
omega = np.array([0, 1])
#number of bases and elements
n_el = 10
#Regular nodes, as before
s = np.linspace(omega[0], omega[1], num=n_el+1)
# s = np.cumsum(np.random.rand(n_elements+1))
s = (s-s[0])/(s[-1]-s[0])
# we now pick the order
order = 3
nodes = s
#more bases
n_bases = n_el + 1 + n_el*(order-1)
#create the nodes for the plots
for e in range(n_el):
    # for every segment, sample order+1 equally spaced points
    tmp = np.linspace(s[e], s[e+1], num=order+1)
    # exclude the first and last since they already exist,
    # which leaves order-1 new high-order nodes
    tmp = tmp[1:-1]
    # and append them at the end of nodes
    nodes = np.append(nodes, tmp)
```
Plot, in orange the high-order nodes, in blue the linear nodes.
```
go.Figure(data=[
go.Scatter(x=s, y=np.zeros(s.shape), mode='lines+markers'),
    go.Scatter(x=nodes[n_el+1:], y=np.zeros(nodes[n_el+1:].shape), mode='markers')
])
```
# Local bases
As in the linear case, we define the **reference element** $\hat s= [0, 1]$, a segment of unit length.
On each element we now have `order+1` (e.g., 2 for linear, 3 for quadratic) **non-zero** local bases.
We define their "piece" on $\hat s$.
It is important to respect the order of the nodes: the first 2 bases are always for the endpoints, and the others are ordered left to right.
Definition of linear bases, same as before
```
def hat_phi_1_0(x):
return 1-x
def hat_phi_1_1(x):
return x
```
Definition of quadratic bases
```
def hat_phi_2_0(x):
return 2*(x-0.5)*(x-1)
def hat_phi_2_1(x):
return 2*(x-0)*(x-0.5)
def hat_phi_2_2(x):
return -4*(x-0.5)**2+1
```
Definition of cubic bases
```
def hat_phi_3_0(x):
return -9/2*(x-1/3)*(x-2/3)*(x-1)
def hat_phi_3_1(x):
return 9/2*(x-0)*(x-1/3)*(x-2/3)
def hat_phi_3_2(x):
return 27/2*(x-0)*(x-2/3)*(x-1)
def hat_phi_3_3(x):
return -27/2*(x-0)*(x-1/3)*(x-1)
```
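As a quick sanity check (a self-contained sketch, not part of the original notebook flow), each nodal basis above should equal 1 at its own node and 0 at the others, with the node ordering stated above (endpoints first, then interior nodes left to right):

```python
# Kronecker-delta property of the cubic nodal bases defined above
def hat_phi_3_0(x): return -9/2*(x-1/3)*(x-2/3)*(x-1)
def hat_phi_3_1(x): return 9/2*(x-0)*(x-1/3)*(x-2/3)
def hat_phi_3_2(x): return 27/2*(x-0)*(x-2/3)*(x-1)
def hat_phi_3_3(x): return -27/2*(x-0)*(x-1/3)*(x-1)

bases = [hat_phi_3_0, hat_phi_3_1, hat_phi_3_2, hat_phi_3_3]
nodes3 = [0.0, 1.0, 1/3, 2/3]  # endpoints first, then interior nodes

err = max(abs(bases[i](nodes3[j]) - (1.0 if i == j else 0.0))
          for i in range(4) for j in range(4))
print(err < 1e-12)  # True
```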
Utility function to return the list of functions
```
def hat_phis(order):
if order == 1:
return [hat_phi_1_0, hat_phi_1_1]
elif order == 2:
return [hat_phi_2_0, hat_phi_2_1, hat_phi_2_2]
elif order == 3:
return [hat_phi_3_0, hat_phi_3_1, hat_phi_3_2, hat_phi_3_3]
```
We can now plot the `order+1` local bases, same code as before.
Note that the first two bases correspond to the end-points, and the others are ordered.
```
x = np.linspace(0, 1)
data = []
tmp = hat_phis(order)
for o in range(order+1):
    data.append(go.Scatter(x=x, y=tmp[o](x), mode='lines', name=r'$\hat\phi_{}$'.format(o)))
go.Figure(data=data)
```
We use `sympy` to compute the gradients of the local bases.
```
import sympy as sp
xsym = sp.Symbol('x')
def grad_hat_phis(order):
#For linear we need to get the correct size
if order == 1:
return [lambda x : -np.ones(x.shape), lambda x : np.ones(x.shape)]
res = []
tmp = hat_phis(order)
for fun in tmp:
res.append(sp.lambdify(xsym, fun(xsym).diff(xsym)))
return res
```
Plotting gradients
```
x = np.linspace(0, 1)
data = []
tmp = grad_hat_phis(order)
for o in range(order+1):
    data.append(go.Scatter(x=x, y=tmp[o](x), mode='lines', name=r"$\hat\phi'_{}$".format(o)))
go.Figure(data=data)
```
# Basis construction
This code is exactly as before.
The only difficulty is the local to global mapping:
- the first 2 nodes are always the same
$$g_e^0 = e \qquad\mathrm{and}\qquad g_e^1=e+1$$
- the others are
$$g_e^i = n_{el} + e\,(\mathrm{order}-1) + i - 1, \qquad i = 2, \ldots, \mathrm{order}.$$
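A minimal standalone check, with small illustrative values of $n_{el}$ and the order, that this mapping covers all $n_{bases}$ global indices (interior linear nodes are shared between neighbouring elements, as expected):

```python
# Local-to-global mapping check for illustrative sizes (n_el=3, order=3)
n_el, order = 3, 3
n_bases = n_el + 1 + n_el * (order - 1)

loc_2_glob = []
for e in range(n_el):
    high_order = list(range(n_el + 1 + e * (order - 1),
                            n_el + e * (order - 1) + order))
    loc_2_glob.append([e, e + 1] + high_order)

all_indices = [i for el in loc_2_glob for i in el]
print(set(all_indices) == set(range(n_bases)))  # True: every global index is used
```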
```
elements = []
for e in range(n_el):
el = {}
el["n_bases"] = order+1
#2 bases
el["phi"] = hat_phis(order)
el["grad_phi"] = grad_hat_phis(order)
#local to global mapping
high_order_nodes = list(range(n_el + 1 + e*(order-1), n_el + e*(order-1) + order))
el["loc_2_glob"] = [e, e+1] + high_order_nodes
#geometric mapping
el["gmapping"] = lambda x, e=e : s[e] + x*(s[e+1]-s[e])
    el["grad_gmapping"] = lambda x, e=e : (s[e+1]-s[e])
elements.append(el)
```
We define a function to interpolate the vector $\vec{u}$ using the local to global, geometric mapping, and local bases to interpolate the data, as before.
```
def interpolate(u):
uinterp = np.array([])
x = np.array([])
xhat = np.linspace(0, 1)
for e in range(n_el):
el = elements[e]
uloc = np.zeros(xhat.shape)
for i in range(el["n_bases"]):
glob_node = el["loc_2_glob"][i]
loc_base = el["phi"][i]
uloc += u[glob_node] * loc_base(xhat)
uinterp = np.append(uinterp, uloc)
x = np.append(x, el["gmapping"](xhat))
return x, uinterp
```
We can generate a random vector $\vec{u}$ and use the previous function. This will interpolate all nodes.
```
u = np.random.rand(n_bases)
x, uinterp = interpolate(u)
go.Figure(data=[
go.Scatter(x=x, y=uinterp, mode='lines'),
go.Scatter(x=nodes, y=u, mode='markers'),
])
```
# Assembly
We are now ready to assemble the global stiffness matrix, which is exactly as before.
```
import quadpy
scheme = quadpy.line_segment.gauss_patterson(5)
rows = []
cols = []
vals = []
for e in range(n_el):
el = elements[e]
for i in range(el["n_bases"]):
for j in range(el["n_bases"]):
val = scheme.integrate(
lambda x:
el["grad_phi"][i](x) * el["grad_phi"][j](x) / el["grad_gmapping"](x),
[0.0, 1.0])
rows.append(el["loc_2_glob"][i])
cols.append(el["loc_2_glob"][j])
vals.append(val)
rows = np.array(rows)
cols = np.array(cols)
vals = np.array(vals)
L = spr.coo_matrix((vals, (rows, cols)))
L = spr.csr_matrix(L)
```
We set the rows `0` and `n_el` to identity for the boundary conditions.
```
for bc in [0, n_el]:
_, nnz = L[bc,:].nonzero()
for j in nnz:
if j != bc:
L[bc, j] = 0.0
L[bc, bc] = 1.0
```
We set the right-hand side to zero, and set the two boundary conditions to 1 and 4.
```
f = np.zeros((n_bases, 1))
f[0] = 1
f[n_el] = 4
```
We now solve $L\vec{u}=f$ for $\vec{u}$.
```
u = spsolve(L, f)
```
We now plot the solution. We expect a line, independently of `order`!
```
x, uinterp = interpolate(u)
go.Figure(data=[
go.Scatter(x=x, y=uinterp, mode='lines', name="solution"),
go.Scatter(x=nodes, y=u, mode='markers', name="$u$"),
])
```
# Mass Matrix
This is exactly as before!
```
rows = []
cols = []
vals = []
for e in range(n_el):
el = elements[e]
for i in range(el["n_bases"]):
for j in range(el["n_bases"]):
val = scheme.integrate(
lambda x:
el["phi"][i](x) * el["phi"][j](x) * el["grad_gmapping"](x),
[0.0, 1.0])
rows.append(el["loc_2_glob"][i])
cols.append(el["loc_2_glob"][j])
vals.append(val)
rows = np.array(rows)
cols = np.array(cols)
vals = np.array(vals)
M = spr.coo_matrix((vals, (rows, cols)))
M = spr.csr_matrix(M)
```
Now we set $\vec{f}=4$ and zero boundary conditions.
```
f = 4*np.ones((n_bases, 1))
f = M*f
f[0] = 0
f[n_el] = 0
```
We now solve $L\vec{u}=M\vec{f}$ for $\vec{u}$
```
u = spsolve(L, f)
x, uinterp = interpolate(u)
go.Figure(data=[
go.Scatter(x=x, y=uinterp, mode='lines', name="solution"),
go.Scatter(x=nodes, y=u, mode='markers', name="$u$"),
])
```
```
import collections
import math
import torch
from torch import nn
from d2l import torch as d2l
class Seq2SeqEncoder(d2l.Encoder):
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers, dropout=0, **kwargs):
super(Seq2SeqEncoder, self).__init__(**kwargs)
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.GRU(embed_size, num_hiddens, num_layers, dropout=dropout)
    def forward(self, X, *args):
        # X: (batch_size, num_steps) -> embeddings of shape (batch_size, num_steps, embed_size)
        X = self.embedding(X)
        # The GRU expects time-major input: (num_steps, batch_size, embed_size)
        X = X.permute(1, 0, 2)
        output, state = self.rnn(X)
        return output, state
encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=2)
encoder.eval()
X = torch.zeros((4, 7), dtype=torch.long)
output, state = encoder(X)
output.shape
state.shape
class Seq2SeqDecoder(d2l.Decoder):
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers, dropout=0, **kwargs):
super(Seq2SeqDecoder, self).__init__(**kwargs)
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.GRU(embed_size + num_hiddens, num_hiddens, num_layers, dropout=dropout)
self.dense = nn.Linear(num_hiddens, vocab_size)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def forward(self, X, state):
X = self.embedding(X).permute(1, 0, 2)
context = state[-1].repeat(X.shape[0], 1, 1)
X_and_context = torch.cat((X, context), 2)
output, state = self.rnn(X_and_context, state)
output = self.dense(output).permute(1, 0, 2)
return output, state
decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=2)
decoder.eval()
state = decoder.init_state(encoder(X))
output, state = decoder(X, state)
output.shape, state.shape
def sequence_mask(X, valid_len, value=0):
    # Set to `value` every entry beyond each sequence's valid length
    maxlen = X.size(1)
    mask = torch.arange((maxlen), dtype=torch.float32, device=X.device)[None, :] < valid_len[:, None]
    X[~mask] = value
    return X
X = torch.tensor([[1, 2, 3], [4, 5, 6]])
sequence_mask(X, torch.tensor([1, 2]))
X = torch.ones(2, 3, 4)
sequence_mask(X, torch.tensor([1, 2]), value=-1)
class MaskedSoftmaxCELoss(nn.CrossEntropyLoss):
# `pred` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `label` shape: (`batch_size`, `num_steps`)
# `valid_len` shape: (`batch_size`,)
def forward(self, pred, label, valid_len):
weights = torch.ones_like(label)
weights = sequence_mask(weights, valid_len)
self.reduction = 'none'
unweighted_loss = super(MaskedSoftmaxCELoss, self).forward(pred.permute(0, 2, 1), label)
weighted_loss = (unweighted_loss * weights).mean(dim=1)
return weighted_loss
loss = MaskedSoftmaxCELoss()
loss(torch.ones(3, 4, 10), torch.ones((3, 4), dtype=torch.long), torch.tensor([4, 2, 0]))
def xavier_init_weights(m):
if type(m) == nn.Linear:
nn.init.xavier_uniform_(m.weight)
if type(m) == nn.GRU:
for param in m._flat_weights_names:
if "weight" in param:
nn.init.xavier_uniform_(m._parameters[param])
def train_seq2seq(net, data_iter, lr, num_epochs, target_vocab, device):
net.apply(xavier_init_weights)
net.to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
loss = MaskedSoftmaxCELoss()
net.train()
animator = d2l.Animator(xlabel='epoch', ylabel='loss', xlim=[10, num_epochs])
for epoch in range(num_epochs):
timer = d2l.Timer()
metric = d2l.Accumulator(2)
for batch in data_iter:
optimizer.zero_grad()
X, X_valid_len, Y, Y_valid_len = [x.to(device) for x in batch]
bos = torch.tensor([target_vocab['<bos>']] * Y.shape[0], device=device).reshape(-1, 1)
dec_input = torch.cat([bos, Y[:, :-1]], 1)
Y_hat, _ = net(X, dec_input, X_valid_len)
l = loss(Y_hat, Y, Y_valid_len)
l.sum().backward()
d2l.grad_clipping(net, 1)
num_tokens = Y_valid_len.sum()
optimizer.step()
with torch.no_grad():
metric.add(l.sum(), num_tokens)
if (epoch + 1) % 10 == 0:
animator.add(epoch + 1, (metric[0] / metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} 'f'tokens/sec on {str(device)}')
embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 300, d2l.try_gpu()
train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = Seq2SeqEncoder(len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
def predict_seq2seq(net, src_sentence, src_vocab, tgt_vocab, num_steps, device, save_attention_weights=False):
net.eval()
src_tokens = src_vocab[src_sentence.lower().split(' ')] + [src_vocab['<eos>']]
enc_valid_len = torch.tensor([len(src_tokens)], device=device)
src_tokens = d2l.truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
enc_X = torch.unsqueeze(torch.tensor(src_tokens, dtype=torch.long, device=device), dim=0)
enc_outputs = net.encoder(enc_X, enc_valid_len)
dec_state = net.decoder.init_state(enc_outputs, enc_valid_len)
dec_X = torch.unsqueeze(torch.tensor([tgt_vocab['<bos>']], dtype=torch.long, device=device), dim=0)
output_seq, attention_weight_seq = [], []
for _ in range(num_steps):
Y, dec_state = net.decoder(dec_X, dec_state)
dec_X = Y.argmax(dim=2)
pred = dec_X.squeeze(dim=0).type(torch.int32).item()
if save_attention_weights:
attention_weight_seq.append(net.decoder.attention_weights)
if pred == tgt_vocab['<eos>']:
break
output_seq.append(pred)
return ' '.join(tgt_vocab.to_tokens(output_seq)), attention_weight_seq
def bleu(pred_seq, label_seq, k):
pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
len_pred, len_label = len(pred_tokens), len(label_tokens)
score = math.exp(min(0, 1 - len_label / len_pred))
for n in range(1, k + 1):
num_matches, label_subs = 0, collections.defaultdict(int)
for i in range(len_label - n + 1):
label_subs[''.join(label_tokens[i:i + n])] += 1
for i in range(len_pred - n + 1):
if label_subs[''.join(pred_tokens[i:i + n])] > 0:
num_matches += 1
label_subs[''.join(pred_tokens[i:i + n])] -= 1
score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
return score
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, attention_weight_seq = predict_seq2seq(net, eng, src_vocab, tgt_vocab, num_steps, device)
print(f'{eng} => {translation}, bleu {bleu(translation, fra, k=2):.3f}')
```
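As a sanity check, here is a small standalone example of the BLEU score (the function is repeated verbatim so the cell runs on its own): a prediction identical to the label scores 1, and a truncated prediction is penalised by the brevity factor.

```python
import collections
import math

def bleu(pred_seq, label_seq, k):
    # Brevity penalty times clipped n-gram precisions, as defined above
    pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
    len_pred, len_label = len(pred_tokens), len(label_tokens)
    score = math.exp(min(0, 1 - len_label / len_pred))
    for n in range(1, k + 1):
        num_matches, label_subs = 0, collections.defaultdict(int)
        for i in range(len_label - n + 1):
            label_subs[''.join(label_tokens[i:i + n])] += 1
        for i in range(len_pred - n + 1):
            if label_subs[''.join(pred_tokens[i:i + n])] > 0:
                num_matches += 1
                label_subs[''.join(pred_tokens[i:i + n])] -= 1
        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
    return score

print(bleu('il est calme .', 'il est calme .', k=2))  # 1.0
print(bleu('il est', 'il est calme .', k=2) < 1.0)    # True (brevity penalty)
```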
# **Amazon Lookout for Equipment** - SDK Tutorial
#### Temporary cell to be executed until module is published on PyPI:
```
!pip install --quiet --use-feature=in-tree-build ..
```
## Initialization
---
### Imports
```
import boto3
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import numpy as np
import os
import pandas as pd
import sagemaker
import sys
from lookoutequipment import plot, dataset, model, evaluation, scheduler
```
### Parameters
<span style="color: white; background-color: OrangeRed; padding: 0px 15px 0px 15px; border-radius: 20px;">**Note:** Update the value of the **bucket** and **prefix** variables below **before** running the following cell</span>
Make sure the IAM role used to run your notebook has access to the chosen bucket.
```
bucket = '<<YOUR-BUCKET>>'
prefix = '<<YOUR_PREFIX>>/' # Keep the trailing slash at the end
plt.style.use('Solarize_Light2')
plt.rcParams['lines.linewidth'] = 0.5
```
### Dataset preparation
```
data = dataset.load_dataset(dataset_name='expander', target_dir='expander-data')
dataset.upload_dataset('expander-data', bucket, prefix)
```
## Role definition
---
Before you can run this notebook (for instance, from a SageMaker environment), you will need:
* To allow SageMaker to run Lookout for Equipment API calls
* To allow Amazon Lookout for Equipment to access your training data (located in the bucket and prefix defined in the previous cell)
### Authorizing SageMaker to make Lookout for Equipment calls
You need to ensure that this notebook instance has an IAM role which allows it to call the Amazon Lookout for Equipment APIs:
1. In your IAM console, look for the SageMaker execution role endorsed by your notebook instance (a role with a name like `AmazonSageMaker-ExecutionRole-yyyymmddTHHMMSS`)
2. On the `Permissions` tab, click on `Attach policies`
3. In the Filter policies search field, look for `AmazonLookoutEquipmentFullAccess`, tick the checkbox next to it and click on `Attach policy`
4. Browse to the `Trust relationship` tab for this role, click on the `Edit trust relationship` button and fill in the following policy. You may already have a trust relationship in place for this role, in this case, just add the **"lookoutequipment.amazonaws.com"** in the service list:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"sagemaker.amazonaws.com",
// ... Other services
"lookoutequipment.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
```
5. Click on `Update the Trust Policy`: your SageMaker notebook instance can now call the Lookout for Equipment APIs
### Give access to your S3 data to Lookout for Equipment
When Lookout for Equipment runs, it will need to access your S3 data on several occasions:
* When ingesting the training data
* At training time when accessing the label data
* At inference time to run the input data and output the results
To enable this access, you need to create a role that Lookout for Equipment can assume by following these steps:
1. Log in again to your [**IAM console**](https://console.aws.amazon.com/iamv2/home)
2. On the left menu bar click on `Roles` and then on the `Create role` button located at the top right
3. On the create role screen, select `AWS Service` as the type of trusted entity
4. In the following section (`Choose a use case`), locate `SageMaker` and click on the service name. Not all AWS services appear in these ready-to-configure use cases, which is why we use SageMaker as the baseline for our new role. In the next steps, we will adjust the role to configure it specifically for Amazon Lookout for Equipment.
5. Click on the `Next` button until you reach the last step (`Review`): give a name and a description to your role (for instance `LookoutEquipmentS3AccessRole`)
6. Click on `Create role`: your role is created and you are brought back to the list of existing roles
7. In the search bar, search for the role you just created and choose it from the returned result to see a summary of your role
8. At the top of your screen, you will see a role ARN field: **copy this ARN and paste it in the following cell, replacing the `<<YOUR_ROLE_ARN>>` string below**
9. Click on the cross at the far right of the `AmazonSageMakerFullAccess` managed policy to remove this permission for this role as we don't need it.
10. Click on `Add inline policy` and then on the `JSON` tab. Then fill in the policy with the following document (update the name of the bucket with the one you created earlier):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<<YOUR-BUCKET>>/*",
"arn:aws:s3:::<<YOUR-BUCKET>>"
]
}
]
}
```
11. Give a name to your policy (for instance: `LookoutEquipmentS3AccessPolicy`) and click on `Create policy`.
12. On the `Trust relationships` tab, choose `Edit trust relationship`.
13. Under policy document, replace the whole policy by the following document and click on the `Update Trust Policy` button on the bottom right:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "lookoutequipment.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
```
And you're done! When Amazon Lookout for Equipment tries to read the datasets you just uploaded to S3, it will request permission from IAM using the role we just created:
1. The **trust policy** allows Lookout for Equipment to assume this role.
2. The **inline policy** specifies that Lookout for Equipment is authorized to list and access the objects in the S3 bucket you created earlier.
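If you prefer to script this setup instead of using the console, the same trust policy can be built programmatically and passed to `boto3` (a sketch: the role name is a placeholder, and the `create_role` call is left commented out because it requires IAM write permissions):

```python
import json

# Trust policy letting Lookout for Equipment assume the role (same as above)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "",
        "Effect": "Allow",
        "Principal": {"Service": "lookoutequipment.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

document = json.dumps(trust_policy, indent=4)
print("lookoutequipment.amazonaws.com" in document)  # True

# import boto3
# iam = boto3.client('iam')
# iam.create_role(RoleName='LookoutEquipmentS3AccessRole',  # placeholder name
#                 AssumeRolePolicyDocument=document)
```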
<span style="color: white; background-color: OrangeRed; padding: 0px 15px 0px 15px; border-radius: 20px;">Don't forget to update the **role_arn** variable below with the ARN of the role you just created **before** running the following cell</span>
```
role_arn = '<<YOUR_ROLE_ARN>>'
```
## Lookout for Equipment end-to-end walkthrough
---
### Dataset creation and data ingestion
```
lookout_dataset = dataset.LookoutEquipmentDataset(
dataset_name='my_dataset',
access_role_arn=role_arn,
component_root_dir=f's3://{bucket}/{prefix}training-data'
)
lookout_dataset.create()
response = lookout_dataset.ingest_data(bucket, prefix + 'training-data/', wait=True)
```
### Building an anomaly detection model
#### Model training
```
lookout_model = model.LookoutEquipmentModel(model_name='my_model',
dataset_name='my_dataset')
lookout_model.set_time_periods(data['evaluation_start'],
data['evaluation_end'],
data['training_start'],
data['training_end'])
lookout_model.set_label_data(bucket=bucket,
prefix=prefix + 'label-data/',
access_role_arn=role_arn)
lookout_model.set_target_sampling_rate(sampling_rate='PT5M')
response = lookout_model.train()
lookout_model.poll_model_training(sleep_time=300)
```
#### Trained model evaluation overview
```
LookoutDiagnostics = evaluation.LookoutEquipmentAnalysis(model_name='my_model', tags_df=data['data'])
predicted_ranges = LookoutDiagnostics.get_predictions()
labels_fname = os.path.join('expander-data', 'labels.csv')
labeled_range = LookoutDiagnostics.get_labels(labels_fname)
TSViz = plot.TimeSeriesVisualization(timeseries_df=data['data'], data_format='tabular')
TSViz.add_signal(['signal-028'])
TSViz.add_labels(labeled_range)
TSViz.add_predictions([predicted_ranges])
TSViz.add_train_test_split(data['evaluation_start'])
TSViz.add_rolling_average(60*24)
TSViz.legend_format = {'loc': 'upper left', 'framealpha': 0.4, 'ncol': 3}
fig, axis = TSViz.plot()
```
### Scheduling inferences
#### Preparing inferencing data
```
dataset.prepare_inference_data(
root_dir='expander-data',
sample_data_dict=data,
bucket=bucket,
prefix=prefix,
start_date='2015-11-21 04:00:00',
num_sequences=12
)
```
#### Configuring and starting a scheduler
```
lookout_scheduler = scheduler.LookoutEquipmentScheduler(
scheduler_name='my_scheduler',
model_name='my_model'
)
scheduler_params = {
'input_bucket': bucket,
'input_prefix': prefix + 'inference-data/input/',
'output_bucket': bucket,
'output_prefix': prefix + 'inference-data/output/',
'role_arn': role_arn,
'upload_frequency': 'PT5M',
'delay_offset': None,
'timezone_offset': '+00:00',
'component_delimiter': '_',
'timestamp_format': 'yyyyMMddHHmmss'
}
lookout_scheduler.set_parameters(**scheduler_params)
response = lookout_scheduler.create()
```
#### Post-processing the inference results
```
results_df = lookout_scheduler.get_predictions()
results_df.head()
event_details = pd.DataFrame(results_df.iloc[0, 1:]).reset_index()
fig, ax = plot.plot_event_barh(event_details, fig_width=12)
```
# *Density Matrices and Path Integrals*
`Doruk Efe Gökmen -- 14/08/2018 -- Ankara`
## Stationary states of the quantum harmonic oscillator
The 1-dimensional (1D) quantum mechanical harmonic oscillator with characteristic frequency $\omega$ is described by the same potential energy as its classical counterpart acting on a mass $m$: $V(x)=\frac{1}{2}m\omega^2x^2$. The physical structure of the allowed states subjected to this potential is governed by the time independent Schrödinger equation (TISE) $\mathcal{H}\psi=\left(-\frac{\hbar^2}{2m}\frac{\text{d}^2}{\text{d}x^2}+\frac{1}{2}m\omega^2x^2\right)\psi=E\psi$, where $E$ is an energy eigenvalue. Note that here we have taken $\hbar=1$, $m=1$, $\omega=1$ for simplicity. The stationary states $\psi_n(x)$ (Hermite polynomials) and the corresponding energy eigenvalues $E_n$ are calculated by the following program.
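In the units used here ($\hbar = m = \omega = 1$), the code below relies on two standard facts about this system: the equally spaced spectrum and the recursion relation satisfied by the normalised eigenfunctions,

$$E_n = n + \tfrac{1}{2}, \qquad n = 0, 1, 2, \ldots,$$

$$\psi_0(x) = \frac{e^{-x^2/2}}{\pi^{1/4}}, \qquad \psi_1(x) = \sqrt{2}\,x\,\psi_0(x), \qquad \psi_n(x) = \sqrt{\frac{2}{n}}\,x\,\psi_{n-1}(x) - \sqrt{\frac{n-1}{n}}\,\psi_{n-2}(x).$$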
```
%pylab inline
import math, pylab
n_states = 20 #number of stationary states to be plotted
grid_x = [i * 0.1 for i in range(-50, 51)] #define the x-grid
psi = {} #intialise the list of stationary states
for x in grid_x:
psi[x] = [math.exp(-x ** 2 / 2.0) / math.pi ** 0.25] # ground state
psi[x].append(math.sqrt(2.0) * x * psi[x][0]) # first excited state
# other excited states (through Hermite polynomial recursion relations):
for n in range(2, n_states):
psi[x].append(math.sqrt(2.0 / n) * x * psi[x][n - 1] -
math.sqrt((n - 1.0) / n) * psi[x][n - 2])
# graphics output
for n in range(n_states):
shifted_psi = [psi[x][n] + n for x in grid_x] # vertical shift
pylab.plot(grid_x, shifted_psi)
pylab.title('Harmonic oscillator wavefunctions')
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$\psi_n(x)$ (shifted)', fontsize=16)
pylab.xlim(-5.0, 5.0)
pylab.savefig('plot-harmonic_wavefunction.png')
pylab.show()
```
The following section checks whether the above results are correct (normalisation, orthonormality and the TISE). The TISE condition is verified by a discrete approximation of the second derivative.
```
import math
def orthonormality_check(n, m):
integral_n_m = sum(psi[n][i] * psi[m][i] for i in range(nx)) * dx
return integral_n_m
nx = 1000
L = 10.0
dx = L / (nx - 1)
x = [- L / 2.0 + i * dx for i in range(nx)]
n_states = 4
psi = [[math.exp(-x[i] ** 2 / 2.0) / math.pi ** 0.25 for i in range(nx)]]
psi.append([math.sqrt(2.0) * x[i] * psi[0][i] for i in range(nx)])
for n in range(2, n_states):
psi.append([math.sqrt(2.0 / n) * x[i] * psi[n - 1][i] - \
math.sqrt((n - 1.0) / n) * psi[n - 2][i] for i in range(nx)])
# check normalisation / orthonormality of the constructed states:
for n in range(n_states):
    for m in range(n, n_states):
        print('<psi_%i|psi_%i> =' % (n, m), orthonormality_check(n, m))
n = n_states - 1
print('checking energy level', n)
#discrete approximation for the second derivative
H_psi = [0.0] + [(- 0.5 * (psi[n][i + 1] - 2.0 * psi[n][i] + psi[n][i - 1]) /
dx ** 2 + 0.5 * x[i] ** 2 * psi[n][i]) for i in range(1, nx - 1)]
for i in range(1, nx - 1):
    print(n, x[i], H_psi[i] / psi[n][i])
import math, pylab
nx = 300 # nx is even, to avoid division by zero
L = 10.0
dx = L / (nx - 1)
x = [- L / 2.0 + i * dx for i in range(nx)]
# construct wavefunctions:
n_states = 4
psi = [[math.exp(-x[i] ** 2 / 2.0) / math.pi ** 0.25 for i in range(nx)]] # ground state
psi.append([math.sqrt(2.0) * x[i] * psi[0][i] for i in range(nx)]) # first excited state
for n in range(2, n_states):
psi.append([math.sqrt(2.0 / n) * x[i] * psi[n - 1][i] - \
math.sqrt((n - 1.0) / n) * psi[n - 2][i] for i in range(nx)])
# local energy check:
H_psi_over_psi = []
for n in range(n_states):
H_psi = [(- 0.5 * (psi[n][i + 1] - 2.0 * psi[n][i] + psi[n][i - 1])
/ dx ** 2 + 0.5 * x[i] ** 2 * psi[n][i]) for i in range(1, nx - 1)]
H_psi_over_psi.append([H_psi[i] / psi[n][i+1] for i in range(nx - 2)])
# graphics output:
for n in range(n_states):
pylab.plot(x[1:-1], [n + 0.5 for i in x[1:-1]], 'k--', lw=1.5)
pylab.plot(x[1:-1], H_psi_over_psi[n], '-', lw=1.5)
pylab.xlabel('$x$', fontsize=18)
pylab.ylabel('$H \psi_%i(x)/\psi_%i(x)$' % (n, n), fontsize=18)
pylab.xlim(x[0], x[-1])
pylab.ylim(n, n + 1)
pylab.title('Schroedinger equation check (local energy)')
#pylab.savefig('plot-check_schroedinger_energy-%i.png' % n)
pylab.show()
```
## Quantum statistical mechanics - Density matrices
In a thermal ensemble, the probability of being in the $n$th energy eigenstate is given by the Boltzmann factor $\pi(n)\propto e^{-\beta E_n}$, where $\beta=\frac{1}{k_BT}$. Hence, e.g., the probability $\pi(x,n)$ of being in state $n$ at position $x$ is proportional to $e^{-\beta E_n}|\psi_n(x)|^2$.
We can consider the diagonal density matrix $\rho(x,x,\beta)=\sum_n e^{-\beta E_n}\psi_n(x)\psi_n^*(x)$, which is proportional to the probability $\pi(x)$ of being at position $x$. This is a special case of the more general density matrix $\rho(x,x',\beta)=\sum_n e^{-\beta E_n}\psi_n(x)\psi_n^*(x')$, which is the central object of quantum statistical mechanics. The partition function is given by $Z(\beta)=\text{Tr}\,\rho_u=\int_{-\infty}^\infty \rho_u(x,x,\beta)\,\text{d}x$, where $\rho_u=e^{-\beta \mathcal{H}}$ is the unnormalised density matrix. It follows that the normalised density matrix is $\rho(\beta)=\frac{e^{-\beta\mathcal{H}}}{\text{Tr}(e^{-\beta\mathcal{H}})}$.
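In our units the harmonic-oscillator eigenvalues are $E_n=n+\frac{1}{2}$, so the trace can be carried out in the energy basis and the Boltzmann sum has the closed form $Z(\beta)=\sum_{n=0}^\infty e^{-\beta(n+1/2)}=\frac{1}{2\sinh(\beta/2)}$. A minimal sketch comparing the truncated sum with the closed form (the function names are ours, not used elsewhere in this notebook):

```python
import math

def z_harmonic_sum(beta, n_max=1000):
    # truncated Boltzmann sum over the energy levels E_n = n + 1/2
    return sum(math.exp(-beta * (n + 0.5)) for n in range(n_max))

def z_harmonic_exact(beta):
    # geometric series: Z = e^{-beta/2} / (1 - e^{-beta}) = 1 / (2 sinh(beta/2))
    return 1.0 / (2.0 * math.sinh(beta / 2.0))

for beta in [0.5, 1.0, 5.0]:
    print(beta, z_harmonic_sum(beta), z_harmonic_exact(beta))
```

At these temperatures the truncation error of the sum is far below machine precision after a few hundred levels.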
Properties of the density matrix:
* *The convolution property*: $\int \rho(x,x',\beta_1) \rho(x',x'',\beta_2) \text{d}x' = \int \text{d}x' \sum_{n,m} \psi_n(x)e^{-\beta_1 E_n} \psi_n^*(x')\psi_m(x')e^{-\beta_2 E_m}\psi_m^*(x'')$ $ = \sum_{n,m} \psi_n(x)e^{-\beta_1 E_n} \int \text{d}x' \psi_n^*(x')\psi_m(x')e^{-\beta_2 E_m}\psi_m^*(x'') = \sum_n \psi_n(x)e^{-(\beta_1+\beta_2)E_n}\psi_n^*(x'')=\rho(x,x'',\beta_1+\beta_2)$ $\implies \boxed{ \int \rho(x,x',\beta) \rho(x',x'',\beta) \text{d}x' = \rho(x,x'',2\beta)}$ (note that in the discrete case, this is just matrix squaring). **So, if we have the density matrix at temperature $T=\frac{1}{k_B\beta}$, this equation allows us to compute the density matrix at temperature $T/2$**.
* *The free density matrix* for a system of infinite size is $\rho^\text{free}(x,x',\beta)=\frac{1}{\sqrt{2\pi\beta}}\exp{\left[-\frac{(x-x')^2}{2\beta}\right]}$. Notice that in the high temperature limit ($\beta\rightarrow 0$) the density matrix becomes classical: $\rho^\text{free}\rightarrow \delta(x-x')$. The quantum system exhibits its peculiar properties more visibly at low temperatures.
* *High temperature limit and the Trotter decomposition*. In general any Hamiltonian can be written as $\mathcal{H}=\mathcal{H}^\text{free}+V(x)$. At high temperatures ($\beta\rightarrow 0$) we can approximate the density matrix as $\rho(x,x',\beta)\approx e^{-\beta V(x)/2}\rho^\text{free}(x,x',\beta)\,e^{-\beta V(x')/2}$ (Trotter expansion). Hence an explicit expression for the density matrix is available without solving the Schrödinger (or, more precisely, the Bloch) equation, for any potential.
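The convolution property can be checked numerically for the free density matrix, where everything is known in closed form: discretising the $\int \text{d}x'$ integral on a fine grid should reproduce $\rho^\text{free}(x,x'',2\beta)$. A minimal sketch (grid extent and spacing are ad-hoc choices):

```python
import math

def rho_free(x, xp, beta):
    # free density matrix (hbar = m = 1)
    return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
            math.sqrt(2.0 * math.pi * beta))

beta, dx = 0.5, 0.01
grid = [-10.0 + i * dx for i in range(2001)]      # x' grid covering [-10, 10]
x, xpp = 0.3, -0.7
# discretised convolution: int dx' rho(x, x', beta) rho(x', x'', beta)
conv = sum(rho_free(x, xp, beta) * rho_free(xp, xpp, beta) for xp in grid) * dx
print(conv, rho_free(x, xpp, 2.0 * beta))
```

The two printed numbers agree to high accuracy, since the integrand is smooth and decays rapidly at the grid boundaries.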
The following program obtains the density matrix of the harmonic oscillator at high temperature by the Trotter decomposition.
```
%pylab inline
import math, pylab
# density matrix for a free particle (exact)
def funct_rho_free(x, xp, beta):
return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
math.sqrt(2.0 * math.pi * beta))
beta = 0.1
nx = 300
L = 10.0
x = [-L / 2.0 + i * L / float(nx - 1) for i in range(nx)]
rho_free, rho_harm = [], []
for i in range(nx):
rho_free.append([funct_rho_free(x[i], x[j], beta) for j in range(nx)])
rho_harm.append([rho_free[i][j] * math.exp(- beta * x[i] ** 2 / 4.0 -
beta * x[j] ** 2 / 4.0) for j in range(nx)])
# graphics output (free particle)
pylab.imshow(rho_free, extent=[0.0, L, 0.0, L], origin='lower')
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$x\'$', fontsize=16)
pylab.colorbar()
pylab.title('$\\beta$=%s (free)' % beta)
pylab.savefig('plot-trotter-free.png')
pylab.show()
# graphics output (harmonic potential)
pylab.imshow(rho_harm, extent=[0.0, L, 0.0, L], origin='lower')
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$x\'$', fontsize=16)
pylab.colorbar()
pylab.title('$\\beta$=%s (harmonic)' % beta)
pylab.savefig('plot-trotter-harmonic.png')
```
So, at high temperature, the density matrix is given by a simple correction to the free density matrix as seen above. Taking $\rho^\text{free}$ as a starting point, by the convolution property we can obtain the density matrix at low temperatures too, hence leading to a convenient numerical scheme through matrix squaring. The following section contains an implementation of this.
```
import math, numpy, pylab
#matrix squaring and convolution to calculate the density matrix at any temperature.
# Free off-diagonal density matrix
def rho_free(x, xp, beta):
return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
math.sqrt(2.0 * math.pi * beta))
# Harmonic density matrix in the Trotter approximation (returns the full matrix)
def rho_harmonic_trotter(grid, beta):
return numpy.array([[rho_free(x, xp, beta) * \
numpy.exp(-0.5 * beta * 0.5 * (x ** 2 + xp ** 2)) \
for x in grid] for xp in grid])
#construct the position grid
x_max = 5.0 #maximum position value on the grid
nx = 100 #number of grid elements
dx = 2.0 * x_max / (nx - 1) #the grid spacing
x = [i * dx for i in range(-(nx - 1) // 2, nx // 2 + 1)] #the position grid (// for integer division)
beta_tmp = 2.0 ** (-8) # initial value of beta (power of 2)
beta = 2.0 ** 0 # actual value of beta (power of 2)
rho = rho_harmonic_trotter(x, beta_tmp) # density matrix at initial beta
#reduce the temperature in log_2 steps by the convolution property (matrix squaring)
#and get the updated density matrix rho
while beta_tmp < beta:
rho = numpy.dot(rho, rho) #matrix squaring is implemented by the dot product in numpy
rho *= dx #multiply by the position differential since we are in the position representation
    beta_tmp *= 2.0 #reduce the temperature by a factor of 2
# graphics output
pylab.imshow(rho, extent=[-x_max, x_max, -x_max, x_max], origin='lower')
pylab.colorbar()
pylab.title('$\\beta = 2^{%i}$' % math.log(beta, 2))
pylab.xlabel('$x$', fontsize=18)
pylab.ylabel('$x\'$', fontsize=18)
pylab.savefig('plot-harmonic-rho.png')
```
### $\rho^\text{free}$ with periodic boundary conditions
The free density matrix with periodic boundary conditions (a periodic box of size $L$) can be obtained via the *Poisson summation formula*: $\rho^\text{per}(x,x',\beta)=\frac{1}{L}\sum^\infty_{n=-\infty}e^{i\frac{2\pi n (x-x')}{L}}e^{-\beta\frac{2\pi^2 n^2}{L^2}}=\sum^\infty_{w=-\infty}\rho^\text{free}(x,x'+wL,\beta)$, where $w$ is the *winding number* (the number of times a path winds around the box of size $L$). The diagonal stripe in the plot below is a manifestation of the fact that the system is translation invariant, i.e. $\rho^\text{per}(x,x',\beta)$ is a function of $x-x'$ only.
```
import math, cmath, pylab
ntot = 21 # odd number
beta = 1.0 #inverse temperature
nx = 100 #number of grid elements
L = 10.0 #length of the system
x = [i * L / float(nx - 1) for i in range(nx)] #position grid
rho_complex = []
for i in range(nx):
rho_complex.append([sum(
math.exp(- 2.0 * beta * (math.pi * n / L) ** 2) *
cmath.exp(1j * 2.0 * n * math.pi * (x[i] - x[j]) / L) / L
        for n in range(-(ntot - 1) // 2, (ntot + 1) // 2))
for j in range(nx)]) #append the i'th line to the density matrix
#(j loop is for constructing the line)
rho_real = [[rho_complex[i][j].real for i in range(nx)] for j in range(nx)]
# graphics output
pylab.imshow(rho_real, extent=[0.0, L, 0.0, L], origin='lower')
pylab.colorbar()
pylab.title('$\\beta$=%s (complex exp)' % beta)
pylab.xlabel('$x$', fontsize=16)
pylab.ylabel('$x\'$', fontsize=16)
pylab.savefig('plot-periodic-complex.png')
```
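As a cross-check of the Poisson summation identity, the same matrix element can be computed from the winding-number sum $\sum_w \rho^\text{free}(x,x'+wL,\beta)$; the imaginary parts of the Fourier form cancel between $n$ and $-n$, so a cosine sum suffices. A minimal sketch (the truncations `w_max` and `n_max` are ad-hoc choices):

```python
import math

def rho_free(x, xp, beta):
    # free density matrix on the infinite line
    return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
            math.sqrt(2.0 * math.pi * beta))

def rho_per_winding(x, xp, beta, L, w_max=5):
    # sum over images of x' shifted by multiples of L (winding numbers w)
    return sum(rho_free(x, xp + w * L, beta) for w in range(-w_max, w_max + 1))

def rho_per_fourier(x, xp, beta, L, n_max=50):
    # Fourier-mode form; the sine parts cancel between n and -n
    return sum(math.exp(-2.0 * beta * (math.pi * n / L) ** 2) *
               math.cos(2.0 * math.pi * n * (x - xp) / L)
               for n in range(-n_max, n_max + 1)) / L

L, beta = 10.0, 1.0
print(rho_per_winding(1.0, 4.0, beta, L), rho_per_fourier(1.0, 4.0, beta, L))
```

Both truncated sums converge so fast at these parameters that the two results agree to essentially machine precision.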
## Path integrals - Quantum Monte Carlo
### Path integral representation of the kernel
The kernel $K$ is the matrix element of the unitary time evolution operator $U(t_f-t_i)=e^{-\frac{i}{\hbar}(t_f-t_i)\mathcal{H}}$ in the position representation: $K(x_i,x_f;t_f-t_i)=\langle x_f \left| U(t_f-t_i) \right| x_i \rangle$. We can write $K(x_i,x_f;t_f-t_i)=\langle x_f \left| U^N((t_f-t_i)/N) \right| x_i \rangle$, that is, divide the time interval $[t_i,t_f]$ into $N$ equal intervals $[t_k,t_{k+1}]$ of length $\epsilon$, where $\epsilon=t_{k+1}-t_k=(t_f-t_i)/N$.
Then we can insert $N-1$ resolutions of identity ($\int_{-\infty}^\infty \text{d} x_k \left|x_k\rangle\langle x_k\right|$) to obtain
$K(x_i,x_f;t_f-t_i)= \left[\Pi_{k=1}^{N-1}\int_{-\infty}^\infty \text{d}x_k \right] \left[\Pi_{k=0}^{N-1} K(x_k,x_{k+1};\epsilon = (t_f-t_i)/N)\right]$,
where $x_f=x_N$ and $x_i=x_0$. In the continuous limit, we would have
$K(x_i,x_f;t_f-t_i)= \lim_{N\rightarrow\infty} \left[\Pi_{k=1}^{N-1}\int_{-\infty}^\infty \text{d}x_k \right] \left[\Pi_{k=0}^{N-1} K(x_k,x_{k+1};\epsilon = (t_f-t_i)/N)\right]$. (A)
Let us now consider the limit $\epsilon\rightarrow 0$ ($N\rightarrow \infty$) to obtain the short-time kernel $K(x_k,x_{k+1};\epsilon)$, thereby switching from the discrete to the continuous description. For small $\epsilon$, the Trotter formula implies that, to a very good approximation,
$K(x_k,x_{k+1};\epsilon) \simeq \langle x_{k+1} \left| e^{-\frac{i}{\hbar}\epsilon T} e^{-\frac{i}{\hbar}\epsilon V} \right| x_k\rangle$,
which becomes exact as $\epsilon\rightarrow 0$. If we insert the resolution of identity $\int \text{d}p_k \left| p_k \rangle\langle p_k \right|$, we get
$K(x_k,x_{k+1};\epsilon) = \int_{-\infty}^\infty \text{d}p_k \langle x_{k+1} \left| e^{-\frac{i}{\hbar}\epsilon T} \left| p_k \rangle\langle p_k \right| e^{-\frac{i}{\hbar}\epsilon V} \right| x_k\rangle = \int_{-\infty}^\infty \text{d}p_k \langle x_{k+1} \left| p_k \rangle\langle p_k \right| x_k\rangle e^{-\frac{i}{\hbar}\epsilon \left(\frac{p_k^2}{2m} + V(x_k)\right)}$
$\implies K(x_k,x_{k+1};\epsilon) = \frac{1}{2\pi \hbar}\int_{-\infty}^\infty \text{d}p_k\, e^{\frac{i}{\hbar}\epsilon \left[p_k\frac{x_{k+1}-x_k}{\epsilon}-\mathcal{H}(p_k,x_k) \right]}$. (B)
Hence, inserting (B) into (A) we get
$K(x_i,x_f;t_f-t_i) = \lim_{N\rightarrow \infty}\left[\Pi_{k=1}^{N-1}\int_{-\infty}^\infty \text{d}x_k \right] \left \{ \Pi_{k=0}^{N-1} \int_{-\infty}^\infty \frac{\text{d}p_k}{2\pi\hbar}\, e^{\frac{i}{\hbar}\epsilon \left[p_k\frac{x_{k+1}-x_k}{\epsilon}-\mathcal{H}(p_k,x_k) \right]} \right\}$. (C)
We can simplify the exponent of the integrand in the limiting case $N\rightarrow \infty$,
$\lim_{N\rightarrow \infty} \epsilon \sum_{k=0}^{N-1}\left[p_k\frac{x_{k+1}-x_k}{\epsilon}-\mathcal{H}(p_k,x_k) \right] =\int_{t_i}^{t_f}\text{d}t\left[p(t)\dot{x}(t)-\mathcal{H}[p(t),x(t)]\right]$
$=\int_{t_i}^{t_f}\text{d}t\, \mathcal{L}[x(t),\dot{x}(t)] = \mathcal{S}[x(t);t_f,t_i]$, (D)
where $\mathcal{L}[x(t),\dot{x}(t)] = \frac{m}{2}\dot{x}(t)^2-V[x(t)]$ is the Lagrangian (obtained after carrying out the Gaussian integrals over the momenta $p_k$) and $\mathcal{S}[x(t);t_f,t_i]$ is the action between times $t_i$ and $t_f$.
Furthermore we can introduce the following notation for the integrals over *paths*:
$\lim_{N\rightarrow \infty}\left(\Pi_{k=1}^{N-1} \int_{-\infty}^\infty \text{d}x_k\right)=\int_{x(t_i)=x_i}^{x(t_f)=x_f}\mathcal{D}[x(t)]$, (E.1)
$\lim_{N\rightarrow \infty}\left(\Pi_{k=1}^{N-1}\int_{-\infty}^\infty\frac{\text{d}p_k}{2\pi\hbar}\right) =\int \mathcal{D}\left[\frac{p(t)}{2\pi\hbar}\right]$. (E.2)
Using (D) and (E) in (C), we get the path integral representation of the kernel
$K(x_i,x_f;t_f-t_i)= \int_{x(t_i)=x_i}^{x(t_f)=x_f}\mathcal{D}[x(t)] \int \mathcal{D}\left[\frac{p(t)}{2\pi\hbar}\right] e^{i/\hbar \mathcal{S}[x(t)]}$
$\implies \boxed{K(x_i,x_f;t_f-t_i)= \mathcal{N} \int_{x(t_i)=x_i}^{x(t_f)=x_f}\mathcal{D}[x(t)] e^{i/\hbar \mathcal{S}[x(t)]}}$, (F)
where $\mathcal{N}$ is the normalisation factor.
Here we see that each path carries a phase proportional to its action. Equation (F) says that we sum over all paths, which interfere with one another; the true quantum mechanical amplitude is determined by the constructive and destructive interference between them. For example, when the action is very large compared to $\hbar$, even nearby paths that differ only slightly acquire very different phases, which causes destructive interference between them. Only in the close vicinity of the classical path $\bar x(t)$, where the action is stationary and hence the phase varies little between neighbouring paths, do the contributions interfere constructively. This singles out the deterministic classical path $\bar x(t)$, and it is why the classical approximation is valid when the action is very large compared to $\hbar$. Hence we see how the classical laws of motion arise from quantum mechanics.
### Path integral representation of the partition function
**Heuristic derivation of the discrete case:** Recalling the convolution property of the density matrix, we can apply it repeatedly:
$\rho(x_0,x_1,\beta) = \int \rho(x_0,x_2,\beta/2) \rho(x_2,x_1,\beta/2) \text{d}x_2 = \int \int \int \rho(x_0,x_3,\beta/4) \rho(x_3, x_2,\beta/4) \rho(x_2,x_4,\beta/4) \rho(x_4,x_1 ,\beta/4) \text{d}x_2 \text{d}x_3 \text{d}x_4 = \cdots $
In other words: $\rho(x_0,x_N,\beta) = \int\int \cdots \int \text{d}x_1 \text{d}x_2 \cdots \text{d}x_{N-1}\,\rho(x_0,x_1,\beta/N)\,\rho(x_1,x_2,\beta/N)\cdots\rho(x_{N-1},x_N,\beta/N)$. The set of variables $\{x_k\}$ in this integral is called a *path*. We can imagine the particle to be at position $x_k$ at the slice $k\beta/N$ of an imaginary time variable $\tau$ that goes from $0$ to $\beta$ in steps of $\Delta\tau=\beta/N$. Density matrices and partition functions can thus be expressed as multiple integrals over path variables, which are none other than the path integrals introduced in the previous subsection.
Given the unnormalised density matrix $\rho_u$, the discrete partition function $Z_d(\beta)$ can be written as a path integral over all ***closed*** paths (because of the trace), i.e. paths with the same beginning and end points ($x_0=x_N$), over a “time” interval $-i\hbar\beta$:
$Z_d(\beta)= \text{Tr}(e^{-\beta \mathcal{H}}) = \text{Tr}\,\rho_u =\int \text{d}x_0\, \rho_u (x_0,x_N=x_0,\beta) = \int \int \cdots \int \text{d}x_0\, \text{d}x_1 \cdots \text{d}x_{N-1}\,\rho_u(x_0,x_1,\beta/N)\,\rho_u(x_1,x_2,\beta/N)\cdots\rho_u(x_{N-1},x_0,\beta/N)$.
The integrand is the probabilistic weight $\Phi\left[\{x_i\}\right]$ of the discrete path consisting of points $\{x_i\}$. The continuous case can be obtained by taking the limit $N\rightarrow \infty$. By defining
$\Phi[x(\tau)] = \lim_{N\rightarrow \infty} \rho_u(x_0,x_1,\beta/N)\cdots \rho_u(x_{N-1},x_N,\beta/N)$, (G)
(note that this is the probability weight of a particular continuous path), and by using (E.1), we can express the continuous partition function $Z(\beta)$ as
$Z(\beta) = \int_{x(\hbar\beta)=x(0)}\mathcal{D}[x(\tau)]\, \Phi[x(\tau)]$. (H)
But what is $\Phi[x(\tau)]$?
**Derivation of the continuous case:** Again we start from $Z(\beta)= \text{Tr}(e^{-\beta \mathcal{H}})$. The main point of the argument that follows is the operational resemblance between the unitary time evolution operator $U(t)=e^{-(i/\hbar) t\mathcal{H}}$ and the unnormalised density matrix $e^{-\beta \mathcal{H}}$: the former defines the kernel, $K(x,x';t)=\langle x \left| e^{-(i/\hbar) t\mathcal{H}} \right| x' \rangle$; the latter defines the density matrix, $\rho(x,x';\beta)=\langle x \left| e^{-\beta \mathcal{H}} \right| x' \rangle$. If we regard $\beta$ as the analytic continuation of the real time interval $t=t_f-t_i$ to imaginary values, $t\rightarrow -i\tau\rightarrow -i\hbar\beta$, we get the cousin of the partition function that lives in imaginary spacetime (i.e. Euclidean rather than Minkowskian):
$Z\left[\beta\rightarrow \frac{i}{\hbar}(t_f-t_i)\right]=\text{Tr}\left[U(t_f-t_i)\right]=\int_{-\infty}^\infty \text{d}x \langle x \left| U(t_f-t_i) \right| x \rangle$
$=\int_{-\infty}^\infty \text{d}x K(x,x;t_f-t_i)$
$=\int_{-\infty}^\infty \text{d}x \mathcal{N} \int_{x(t_i)=x}^{x(t_f)=x}\mathcal{D}[x(t)] e^{i/\hbar \int_{t_i}^{t_f}\text{d}t \mathcal{L}[x(t),\dot{x}(t)]} $ (using (F))
$=\mathcal{N} \int_{x(t_f)=x(t_i)}\mathcal{D}[x(t)] e^{i/\hbar \int_{t_i}^{t_f}\text{d}t \mathcal{L}[x(t),\dot{x}(t)]} = \mathcal{N} \int_{x(t_f)=x(t_i)}\mathcal{D}[x(t)] e^{i/\hbar \int_{t_i}^{t_f}\text{d}t \left[\frac{m}{2}\dot{x}(t)^2-V[x(t)]\right]}$,
which means that one is integrating not over all paths but over all *closed* paths (loops) through $x$. We are now ready to get the path integral representation of the real partition function by making the transformation $t\rightarrow -i\tau$, so that $t_i\rightarrow 0$ and $t_f\rightarrow -i\hbar \beta$ (note also that $\dot{x}(t)=\frac{\partial x(t)}{\partial t}\rightarrow i \frac{\partial x(\tau)} {\partial \tau} = i x'(\tau) \implies \dot{x}(t)^2 \rightarrow -x'(\tau)^2$):
$\implies Z(\beta)=\mathcal{N} \int_{x(\hbar \beta)=x(0)}\mathcal{D}[x(\tau)] e^{-\frac{1}{\hbar} \int_{0}^{\beta \hbar}\text{d}\tau\left( \frac{m}{2}x'(\tau)^2+V[x(\tau)]\right)}$
$\implies \boxed{ Z(\beta)=\mathcal{N} \int_{x(\hbar \beta)=x(0)}\mathcal{D}[x(\tau)]\, e^{-\frac{1}{\hbar}\mathcal{S}_E[x(\tau)]} }$, where $\mathcal{S}_E[x(\tau)]=\int_{0}^{\beta \hbar}\text{d}\tau \left(\frac{m}{2}x'(\tau)^2+V[x(\tau)]\right)$ is the Euclidean action. (I)
Notice that by comparing (H) and (I) we get an expression for the probabilistic weight $\Phi[x(\tau)]$ of a particular path $x(\tau)$, that is
$\Phi[x(\tau)] = \lim_{N\rightarrow \infty} \rho_u(x_0,x_1;\beta/N)\cdots \rho_u(x_{N-1},x_N;\beta/N) = \mathcal{N}\exp{\left[-\frac{1}{\hbar} \int_{0}^{\beta \hbar}\text{d}\tau \left(\frac{m}{2}x'(\tau)^2+V[x(\tau)]\right)\right]}$ (J), which is very intuitive given the definition of the unnormalised density matrix $\rho_u$. This is an intriguing result: we obtain the complete statistical description of a quantum mechanical system without complex numbers ever appearing.
For this reason, using (J) it is easy to see why some paths contribute very little to the path integral: those are the paths whose exponent is very large due to their high energy, so the integrand is negligibly small. *Furthermore, it is unnecessary to consider whether or not nearby paths cancel each other's contributions: in the present case they do not interfere (no complex phases are involved), i.e. all contributions add together, some being large and others small.*
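To make the discrete weight concrete, here is a minimal sketch (the helper names are ours) that evaluates $\Phi[\{x_i\}]$ for a closed path under the Trotter approximation with the harmonic potential $V(x)=x^2/2$. Since each factor is symmetric in its two position arguments, the weight is unchanged when the path is traversed backwards:

```python
import math

def rho_free(x, xp, dtau):
    # free density matrix between neighbouring imaginary-time slices
    return (math.exp(-(x - xp) ** 2 / (2.0 * dtau)) /
            math.sqrt(2.0 * math.pi * dtau))

def path_weight(path, beta):
    # Trotter weight Phi[{x_i}] of a closed path (x_N = x_0), with V(x) = x^2 / 2
    N = len(path)
    dtau = beta / N
    weight = 1.0
    for k in range(N):
        x, xp = path[k], path[(k + 1) % N]
        weight *= (rho_free(x, xp, dtau) *
                   math.exp(-0.5 * dtau * 0.5 * (x ** 2 + xp ** 2)))
    return weight

path = [0.0, 0.3, 0.5, 0.2, -0.1, -0.4, -0.2, 0.1]
print(path_weight(path, 4.0), path_weight(path[::-1], 4.0))
```

This per-path weight is exactly the quantity whose ratios drive the Metropolis algorithm below.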
#### Path integral Monte Carlo
In the so-called *naïve path integral (Markov-chain) Monte Carlo* algorithm, we move from one path configuration $\{x_i\}$ to another $\{x'_i\}$ by choosing a single position $x_k$ and applying a small displacement $\Delta x$, which can be positive or negative. We compute the weight before the move ($\Phi[\{x_i\}]$) and after it ($\Phi[\{x'_i\}]$) and accept the move with the Metropolis acceptance probability $\min\left(1,\Phi[\{x'_i\}]/\Phi[\{x_i\}]\right)$: the move is accepted with certainty if the new weight is greater than the old one, and the smaller the new weight, the lower the acceptance rate. Defining $\epsilon \equiv \beta/N$, we can approximate $\Phi[\{x_i\}]$ by making a Trotter decomposition *only around the point $x_k$*:
$\Phi\left[\{x_i\}\right]\approx \cdots \rho^\text{free}(x_{k-1},x_k;\epsilon) e^{-\frac{1}{2}\epsilon V(x_k)} e^{-\frac{1}{2}\epsilon V(x_k)} \rho^\text{free}(x_{k},x_{k+1};\epsilon)\cdots$.
Therefore, the acceptance ratio $\frac{\Phi\left[\{x'_i\}\right]}{\Phi\left[\{x_i\}\right]}$ can be approximated as
$\frac{\Phi\left[\{x'_i\}\right]}{\Phi\left[\{x_i\}\right]}\approx\frac{\rho^\text{free}(x_{k-1},x'_k;\epsilon) e^{-\epsilon V(x'_k)}\rho^\text{free}(x'_k,x_{k+1};\epsilon)}{\rho^\text{free}(x_{k-1},x_k;\epsilon) e^{-\epsilon V(x_k)} \rho^\text{free}(x_k,x_{k+1};\epsilon)}$.
This is implemented in the following program.
```
%pylab qt
import math, random, pylab, os
# Exact quantum position distribution:
def p_quant(x, beta):
p_q = sqrt(tanh(beta / 2.0) / pi) * exp(- x**2.0 * tanh(beta / 2.0))
return p_q
def rho_free(x, y, beta): # free off-diagonal density matrix
return math.exp(-(x - y) ** 2 / (2.0 * beta))
output_dir = 'snapshots_naive_harmonic_path'
if not os.path.exists(output_dir): os.makedirs(output_dir)
fig = pylab.figure(figsize=(6, 10))
def show_path(x, k, x_old, Accepted, hist_data, step, fig):
pylab.clf()
path = x + [x[0]] #Final position is the same as the initial position.
#Note that this notation appends the first element of x as a new element to x
y_axis = range(len(x) + 1) #construct the imaginary time axis
ax = fig.add_subplot(2, 1, 1)
#Plot the paths
if Accepted:
old_path = x[:] #save the updated path as the old path
old_path[k] = x_old #revert the update to get the actual old path
old_path = old_path + [old_path[0]] #final position is the initial position
ax.plot(old_path, y_axis, 'ko--', label='old path')
if not Accepted and step !=0:
old_path = x[:]
old_path[k] = x_old
old_path = old_path + [old_path[0]]
ax.plot(old_path, y_axis, 'ro-', label='rejection', linewidth=3)
ax.plot(path, y_axis, 'bo-', label='new path') #plot the new path
ax.legend()
ax.set_xlim(-2.5, 2.5)
ax.set_ylabel('$\\tau$', fontsize=14)
ax.set_title('Naive path integral Monte Carlo, step %i' % step)
ax.grid()
#Plot the histogram
ax = fig.add_subplot(2, 1, 2)
x = [a / 10.0 for a in range(-100, 100)]
y = [p_quant(a, beta) for a in x]
ax.plot(x, y, c='gray', linewidth=1.0, label='Exact quantum distribution')
ax.hist(hist_data, 10, histtype='step', normed = 'True', label='Path integral Monte Carlo') #histogram of the sample
ax.set_title('Position distribution at $T=%.2f$' % T)
ax.set_xlim(-2.5, 2.5) #restrict the range over which the histogram is shown
ax.set_xlabel('$x$', fontsize = 14)
ax.set_ylabel('$\pi(x)=e^{-\\beta E_n}|\psi_n(x)|^2$', fontsize = 14)
ax.legend(fontsize = 6)
ax.grid()
pylab.pause(0.2)
pylab.savefig(output_dir + '/snapshot_%05i.png' % step)
beta = 4.0 # inverse temperature
T = 1 / beta
N = 8 # number of (imaginary time) slices
dtau = beta / N
delta = 1.0 # maximum displacement on one slice
n_steps = 4 # number of Monte Carlo steps
hist_data = []
x = [random.uniform(-1.0, 1.0) for k in range(N)] # initial path (a position for each time)
show_path(x, 0, 0.0, False, hist_data, 0, fig) #show the initial path
for step in range(n_steps):
#print 'step',step
k = random.randint(0, N - 1) # randomly choose slice
knext, kprev = (k + 1) % N, (k - 1) % N # next/previous slices
x_old = x[k]
x_new = x[k] + random.uniform(-delta, delta) # new position at slice k
#calculate the weight before and after the move
old_weight = (rho_free(x[knext], x_old, dtau) *
rho_free(x_old, x[kprev], dtau) *
math.exp(-0.5 * dtau * x_old ** 2))
new_weight = (rho_free(x[knext], x_new, dtau) *
rho_free(x_new, x[kprev], dtau) *
math.exp(-0.5 * dtau * x_new ** 2))
if random.uniform(0.0, 1.0) < new_weight / old_weight: #accept with metropolis acceptance rate
x[k] = x_new
Accepted = True
else:
Accepted = False
show_path(x, k, x_old, Accepted, hist_data, step + 1, fig)
hist_data.append(x[k])
```

Note that the above program is very slow: it takes a very long time to explore the available configuration space, since each Monte Carlo move displaces only a single slice of the path.
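To quantify the convergence, one can run the same Markov chain without the graphics and compare the sampled variance with the exact value $\langle x^2\rangle = \frac{1}{2\tanh(\beta/2)}$ of the quantum position distribution. A stripped-down sketch (the seed and step count are arbitrary choices); even after $2\times10^5$ single-slice moves the estimate typically still carries a statistical error of a few percent:

```python
import math, random

def rho_free(x, y, beta):
    # free density matrix up to a constant prefactor (it cancels in the ratio)
    return math.exp(-(x - y) ** 2 / (2.0 * beta))

random.seed(42)
beta = 4.0
N = 8                        # number of imaginary-time slices
dtau = beta / N
delta = 1.0                  # maximum displacement of one slice
n_steps = 200000
x = [0.0] * N                # initial path
samples = []
for step in range(n_steps):
    k = random.randint(0, N - 1)
    knext, kprev = (k + 1) % N, (k - 1) % N
    x_new = x[k] + random.uniform(-delta, delta)
    old_w = (rho_free(x[knext], x[k], dtau) * rho_free(x[k], x[kprev], dtau) *
             math.exp(-0.5 * dtau * x[k] ** 2))
    new_w = (rho_free(x[knext], x_new, dtau) * rho_free(x_new, x[kprev], dtau) *
             math.exp(-0.5 * dtau * x_new ** 2))
    if random.uniform(0.0, 1.0) < new_w / old_w:  # Metropolis acceptance
        x[k] = x_new
    samples.append(x[k])
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print('sampled variance:', var, ' exact:', 1.0 / (2.0 * math.tanh(beta / 2.0)))
```

(The finite number of slices $N=8$ also introduces a small systematic Trotter bias on top of the statistical error.)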
## Unitary time evolution
Taking advantage of Fourier transforms between the position and momentum representations, the Trotter decomposition can also be used to efficiently simulate the unitary time evolution of a wavefunction, as demonstrated by the following algorithm.
```
%pylab qt
import numpy, pylab, os
#Define the direct and inverse Fourier transformations:
def fourier_x_to_p(phi_x, dx):
phi_p = [(phi_x * numpy.exp(-1j * p * grid_x)).sum() * dx for p in grid_p]
return numpy.array(phi_p)
def fourier_p_to_x(phi_p, dp):
phi_x = [(phi_p * numpy.exp(1j * x * grid_p)).sum() * dp for x in grid_x]
return numpy.array(phi_x) / (2.0 * numpy.pi)
#The time evolution algorithm (using the Trotter decomposition)
def time_step_evolution(psi0, potential, grid_x, grid_p, dx, dp, delta_t):
psi0 = numpy.exp(-1j * potential * delta_t / 2.0) * psi0 #potential part of U (multiplicative)
psi0 = fourier_x_to_p(psi0, dx) #pass to the momentum space to apply the kinetic energy part
psi0 = numpy.exp(-1j * grid_p ** 2 * delta_t / 2.0) * psi0 #kinetic part of U (multiplicative)
psi0 = fourier_p_to_x(psi0, dp) #return to the position space
psi0 = numpy.exp(-1j * potential * delta_t / 2.0) * psi0 #potential part of U (multiplicative)
return psi0
#Potential function (barrier potential to demonstrate tunneling):
def funct_potential(x):
if x < -8.0: return (x + 8.0) ** 2 #barrier on the left hand side
elif x <= -1.0: return 0.0 #0 potential in between the left wall and the bump barrier
    elif x < 1.0: return numpy.exp(-1.0 / (1.0 - x ** 2)) / numpy.exp(-1.0) #smooth bump barrier
else: return 0.0 #0 potential elsewhere
#movie output of the time evolution
output_dir = 'snapshots_time_evolution'
if not os.path.exists(output_dir): os.makedirs(output_dir)
def show(x, psi, pot, time, timestep):
pylab.clf()
pylab.plot(x, psi, 'g', linewidth = 2.0, label = '$|\psi(x)|^2$') #plot wf in green colour
pylab.xlim(-10, 15)
pylab.ylim(-0.1, 1.15)
pylab.plot(x, pot, 'k', linewidth = 2.0, label = '$V(x)$') #plot potential in black colour
pylab.xlabel('$x$', fontsize = 20)
pylab.title('time = %s' % time)
pylab.legend(loc=1)
pylab.savefig(output_dir + '/snapshot_%05i.png' % timestep)
    timestep += 1 #update the current time step
pylab.pause(0.1)
pylab.show()
steps = 800 #total number of position (momentum) steps
x_min = -12.0 #minimum position (momentum)
x_max = 40.0 #maximum position (momentum)
grid_x = numpy.linspace(x_min, x_max, steps) #position grid
grid_p = numpy.linspace(x_min, x_max, steps) #momentum grid
dx = grid_x[1] - grid_x[0] #position step
dp = grid_p[1] - grid_p[0] #momentum step
delta_t = 0.05 #time step width
t_max = 16.0 #maximum time
potential = [funct_potential(x) for x in grid_x] #save the potential on the position grid
potential = numpy.array(potential)
# initial state:
x0 = -8.0 #centre location
sigma = .5 #width of the gaussian
psi = numpy.exp(-(grid_x - x0) ** 2 / (2.0 * sigma ** 2) ) #initial state is a gaussian
psi /= numpy.sqrt( sigma * numpy.sqrt( numpy.pi ) ) #normalisation
# time evolution
time = 0.0 #initialise the time
timestep = 0 #initialise the current timestep
while time < t_max:
if timestep % 1 == 0:
show(grid_x, numpy.absolute(psi) ** 2.0, potential, time, timestep) #plot the wavefunction
#print time
time += delta_t #update the current time
timestep += 1 #update the current timestep
psi = time_step_evolution(psi, potential, grid_x, grid_p, dx, dp, delta_t) #update the wf
```

## Harmonic and anharmonic oscillators
### Harmonic oscillator
#### Markov-chain sampling by Metropolis acceptance using exact stationary states (Hermite polynomials)
The position distribution at $T=0$ is $|\psi_0(x)|^2$. We can easily develop a Monte Carlo scheme for this system because the stationary states of the harmonic oscillator are known in closed form (Hermite functions). In the following sections, we obtain this distribution at zero and at finite temperature using Markov-chain Monte Carlo algorithms with the Metropolis acceptance rule.
```
import random, math, pylab
from math import *
def psi_0_sq(x):
psi = exp(- x ** 2.0 / 2.0) / pi ** (1.0 / 4.0)
return abs(psi) ** 2.0
xx = 0.0
delta = 0.1
hist_data = []
for k in range(1000000):
x_new = xx + random.uniform(-delta, delta)
if random.uniform(0.0, 1.0) < psi_0_sq(x_new) / psi_0_sq(xx):
xx = x_new
hist_data.append(xx)
#print x
pylab.hist(hist_data, 500, normed = 'True', label='Markov-chain sampling') #histogram of the sample
x = [a / 10.0 for a in range(-30, 30)]
y = [psi_0_sq(a) for a in x]
pylab.plot(x, y, c='red', linewidth=2.0, label='Exact quantum')
pylab.title('Position distribution at $T=0$', fontsize = 13)
pylab.xlabel('$x$', fontsize = 15)
pylab.ylabel('$\pi(x)=|\psi_0(x)|^2$', fontsize = 15)
pylab.legend()
pylab.savefig('plot_T0_prob.png')
pylab.show()
```
The probability distribution at a finite temperature is $\pi(x)\propto\sum_n e^{-\beta E_n}|\psi_n(x)|^2$, where $\beta=1/T$.
```
import random, math, pylab
from math import *
# Energy eigenstates of the harmonic oscillator
def psi_n_sq(x, n):
if n == -1:
return 0.0
else:
psi = [math.exp(-x ** 2 / 2.0) / math.pi ** 0.25]
psi.append(math.sqrt(2.0) * x * psi[0]) #save the wf's in a vector "psi"
for k in range(2, n + 1):
psi.append(math.sqrt(2.0 / k) * x * psi[k - 1] -
math.sqrt((k - 1.0) / k) * psi[k - 2]) #Hermite polynomial recursion relations
return psi[n] ** 2
# Energy eigenvalues of the harmonic oscillator
def E(n):
E = n + 1.0 / 2.0
return E
# Markov-chain Monte Carlo algorithm:
def markov_prob(beta, n_trials):
# Energy move:
xx = 0.0
delta = 0.1
n = 0
hist_data_n = []
hist_data_x = []
    for l in range(n_trials):
if xx == 0.0:
xx += 0.00001 #avoid division by 0
m = n + random.choice([1,-1]) #take a random energy step
if m >= 0 and random.uniform(0.0, 1.0) \
< psi_n_sq(xx, m) / psi_n_sq(xx, n) * exp(-beta * (E(m) - E(n))):
n = m
hist_data_n.append(n)
# Position move:
x_new = xx + random.uniform(-delta, delta) #take a random position step
if random.uniform(0.0, 1.0) < psi_n_sq(x_new, n) / psi_n_sq(xx, n):
xx = x_new
hist_data_x.append(xx)
return hist_data_x, hist_data_n
#Exact quantum position distribution
def p_quant(x, beta):
p_q = sqrt(tanh(beta / 2.0) / pi) * exp(- x**2.0 * tanh(beta / 2.0))
return p_q
#Exact classical position distribution
def p_class(x, beta):
p_c = sqrt(beta / (2.0 * pi)) * exp(- beta * x**2.0 / 2.0)
return p_c
#Run the algorithm for different values of temperature:
n_trials = 1000000
for beta in [0.2, 1.0, 5.0]:
    T = 1 / beta
hist_data_x, hist_data_n = markov_prob(beta, n_trials)
pylab.hist(hist_data_x, 500, normed = 'True', label='Markov-chain sampling') #position histogram of the sample
x = [a / 10.0 for a in range(-100, 100)]
y1 = [p_quant(a, beta) for a in x]
y2 = [p_class(a, beta) for a in x]
pylab.plot(x, y1, c='red', linewidth=4.0, label='exact quantum')
pylab.plot(x, y2, c='green', linewidth=2.0, label='exact classical')
pylab.title('Position distribution at $T=$%.2f' % T, fontsize = 13)
pylab.xlabel('$x$', fontsize = 15)
pylab.ylabel('$\pi(x)=e^{-\\beta E_n}|\psi_n(x)|^2$', fontsize = 15)
pylab.xlim([-7,7])
pylab.legend()
pylab.savefig('plot_T_%.2f_prob.png' % T)
pylab.show()
pylab.hist(hist_data_n, 100, normed = 'True') #energy histogram of the sample
pylab.title('Energy distribution at $T=$%.2f' % T, fontsize = 13)
pylab.xlabel('$n$', fontsize = 15)
pylab.ylabel('$\pi(n)$', fontsize = 15)
pylab.grid()
pylab.savefig('plot_T_%.2f_energy.png' % T)
pylab.show()
```
One can see that at high temperature, e.g. $T=5$, the quantum and classical position distributions are almost the same. Hence the classical harmonic oscillator is a very good approximation to the quantum harmonic oscillator at high temperature. The quantum behaviour becomes more prominent at low temperature (eventually only the ground state is occupied for sufficiently low thermal energy), especially at and below $T=0.2$, as the above figures show.
Here we also obtained a histogram of the energy-level ($n$) distribution. The populations decay exponentially with $n$, following the Boltzmann weights $\pi(n)\propto e^{-\beta E_n}$: a geometric distribution in $n$, not a Poisson distribution.
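For the harmonic oscillator the level populations can be written down exactly: with $E_n=n+\frac{1}{2}$, $\pi(n)=\frac{e^{-\beta(n+1/2)}}{Z}=(1-e^{-\beta})\,e^{-\beta n}$, a geometric (Boltzmann) distribution with constant ratio $e^{-\beta}$ between successive levels. A quick normalisation-and-ratio check:

```python
import math

def pi_n(n, beta):
    # normalised level population for E_n = n + 1/2:
    # pi(n) = e^{-beta (n + 1/2)} / Z = (1 - e^{-beta}) e^{-beta n}
    return (1.0 - math.exp(-beta)) * math.exp(-beta * n)

beta = 1.0
probs = [pi_n(n, beta) for n in range(200)]
print(sum(probs))                              # normalisation (close to 1)
print(probs[1] / probs[0], math.exp(-beta))    # constant ratio e^{-beta}
```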
#### Trotter decomposition (convolution) and path integral monte carlo simulation
On the other hand, we can still obtain the position distributions even if we do not a priori have the analytic stationary states at our disposal. That is, we can approximate the density matrix at high temperatures by the Trotter decomposition and then take advantage of the convolution property to obtain the density matrix at successively reduced temperatures. This is implemented in the following algorithm.
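The convolution property invoked here reads $\rho(x, x', 2\beta) = \int dx''\, \rho(x, x'', \beta)\, \rho(x'', x', \beta)$, which on a position grid becomes a matrix product times $dx$. For the free density matrix this can be verified against the closed form directly; a small sketch (the grid parameters are chosen just for the check):

```python
import math
import numpy as np

def rho_free(x, xp, beta):
    # free-particle off-diagonal density matrix (hbar = m = 1)
    return math.exp(-(x - xp) ** 2 / (2.0 * beta)) / math.sqrt(2.0 * math.pi * beta)

# discretise the convolution as a matrix product: rho(2*beta) ~= rho(beta) @ rho(beta) * dx
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
beta = 0.5
rho = np.array([[rho_free(a, b, beta) for b in x] for a in x])
rho_sq = np.dot(rho, rho) * dx                                    # matrix squaring
rho_exact = np.array([[rho_free(a, b, 2 * beta) for b in x] for a in x])

# compare away from the grid edges, where truncating the integral has no effect
interior = np.abs(x) <= 5.0
err = np.max(np.abs(rho_sq[np.ix_(interior, interior)] - rho_exact[np.ix_(interior, interior)]))
print('max interior deviation from exact rho(2*beta):', err)
```

One matrix squaring thus doubles $\beta$ (halves the temperature), which is why repeating it quickly takes the high-temperature Trotter matrix down to any target temperature.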
```
%pylab inline
import math, numpy, pylab
from numpy import *
# Free off-diagonal density matrix:
def rho_free(x, xp, beta):
return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
math.sqrt(2.0 * math.pi * beta))
# Harmonic density matrix in the Trotter approximation (returns the full matrix):
def rho_harmonic_trotter(grid, beta):
return numpy.array([[rho_free(x, xp, beta) * \
numpy.exp(-0.5 * beta * 0.5 * (x ** 2 + xp ** 2)) \
for x in grid] for xp in grid])
# Exact quantum position distribution:
def p_quant(x, beta):
p_q = sqrt(tanh(beta / 2.0) / pi) * exp(- x**2.0 * tanh(beta / 2.0))
return p_q
# Construct the position grid:
x_max = 5 #maximum position value
nx = 100 #number of elements on the x grid
dx = 2.0 * x_max / (nx - 1) #position differential
x = [i * dx for i in range(-(nx - 1) // 2, nx // 2 + 1)] #position grid (nx + 1 points; integer division keeps this valid in Python 3)
beta_tmp = 2.0 ** (-5) # initial (low) value of beta (power of 2) (high temperature)
beta = 2.0 ** 2 # actual value of beta (power of 2)
rho = rho_harmonic_trotter(x, beta_tmp) # density matrix at initial (low) beta (Trotter decomp.)
# Reduce the temperature by the convolution property (matrix squaring):
while beta_tmp < beta:
rho = numpy.dot(rho, rho) #matrix squaring (convolution)
rho *= dx #also multiply by the differential since we are in position representation
beta_tmp *= 2.0 #reduce the temperature by a factor of 2
#print 'beta: %s -> %s' % (beta_tmp / 2.0, beta_tmp)
# Output position distribution pi(x) at the final beta onto a file:
Z = sum(rho[j, j] for j in range(nx + 1)) * dx #partition function (to normalise)
pi_of_x = [rho[j, j] / Z for j in range(nx + 1)] #the diagonal element of the density matrix
f = open('data_harm_matrixsquaring_beta' + str(beta) + '.dat', 'w')
for j in range(nx + 1):
f.write(str(x[j]) + ' ' + str(rho[j, j] / Z) + '\n')
f.close()
# Plot the obtained final position distribution:
T = 1 / beta
x = linspace(-x_max, x_max, nx+1)
y1 = [p_quant(a, beta) for a in x]
pylab.plot(x, pi_of_x, c='red', linewidth=4.0, label='matrix squaring')
pylab.plot(x, y1, c='green', linewidth=2.0, label='exact quantum')
pylab.title('Position distribution at $T=$%.2f' % T, fontsize = 13)
pylab.xlabel('$x$', fontsize = 15)
pylab.xlim([-2,2])
pylab.ylabel('$\pi(x)=e^{-\\beta E_n}|\psi_n(x)|^2$', fontsize = 15)
pylab.legend()
pylab.grid()
pylab.savefig('plot_T_%.2f_prob_matrix_squaring.png' % T)
pylab.show()
```
The path-integral Monte Carlo method is implemented in the following program.
```
%pylab inline
import math, random, pylab
def rho_free(x, y, beta): # free off-diagonal density matrix
return math.exp(-(x - y) ** 2 / (2.0 * beta))
def read_file(filename):
list_x = []
list_y = []
with open(filename) as f:
for line in f:
x, y = line.split()
list_x.append(float(x))
list_y.append(float(y))
return list_x, list_y
beta = 4.0
T = 1 / beta
N = 10 # number of slices
dtau = beta / N
delta = 1.0 # maximum displacement on one slice
n_steps = 1000000 # number of Monte Carlo steps
x = [0.0] * N # initial path
hist_data = []
for step in range(n_steps):
k = random.randint(0, N - 1) # random slice
knext, kprev = (k + 1) % N, (k - 1) % N # next/previous slices
x_new = x[k] + random.uniform(-delta, delta) # new position at slice k
old_weight = (rho_free(x[knext], x[k], dtau) *
rho_free(x[k], x[kprev], dtau) *
math.exp(-0.5 * dtau * x[k] ** 2))
new_weight = (rho_free(x[knext], x_new, dtau) *
rho_free(x_new, x[kprev], dtau) *
math.exp(-0.5 * dtau * x_new ** 2))
if random.uniform(0.0, 1.0) < new_weight / old_weight:
x[k] = x_new
if step % 10 == 0:
hist_data.append(x[0])
# Figure output:
list_x, list_y = read_file('data_harm_matrixsquaring_beta' + str(beta) + '.dat')
pylab.plot(list_x, list_y, c='red', linewidth=4.0, label='matrix squaring')
pylab.hist(hist_data, 100, density=True, label='path integral Monte Carlo') #histogram of the sample
pylab.title('Position distribution at $T=%.2f$' % T, fontsize = 13)
pylab.xlim(-2.0, 2.0) #restrict the range over which the histogram is shown
pylab.xlabel('$x$', fontsize = 15)
pylab.ylabel('$\pi(x)=e^{-\\beta E_n}|\psi_n(x)|^2$', fontsize = 15)
pylab.legend()
pylab.savefig('plot_T_%.2f_prob_path_int.png' % T)
pylab.show()
```
### Anharmonic oscillator
Our anharmonic oscillator is described by the potential $V_a(x)=\frac{x^2}{2}+\gamma_{cubic}x^3 + \gamma_{quartic}x^4$, where the coefficients $\gamma_{cubic}, \gamma_{quartic}$ are small. We consider the case $-\gamma_{cubic}=\gamma_{quartic}>0$.
#### Trotter decomposition
When the cubic and quartic parameters are small, the anharmonic potential is close to the harmonic one. In this case there exists a perturbative expression for the energy levels $E_n(\gamma_{cubic}, \gamma_{quartic})$ of the anharmonic oscillator. This expression (too long to derive here; see e.g. Landau & Lifshitz, "Quantum Mechanics" (vol. 3), exercise 3 of chapter 38) allows us to compute the partition function $\sum_n \exp(-\beta E_n)$ for small $\gamma_{cubic}$ and $\gamma_{quartic}$ (this is the meaning of the word "perturbative"), but it breaks down at larger values of the parameters.
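As a sanity check on the perturbative formula used in the code below, setting $\gamma_{cubic} = \gamma_{quartic} = 0$ must recover the harmonic levels $E_n = n + \tfrac{1}{2}$ and the harmonic partition function $e^{-\beta/2}/(1 - e^{-\beta})$:

```python
import math

def energy_pert(n, cubic, quartic):
    # perturbative anharmonic levels (same expression as in the notebook code below)
    return (n + 0.5
            - 15.0 / 4.0 * cubic ** 2 * (n ** 2 + n + 11.0 / 30.0)
            + 3.0 / 2.0 * quartic * (n ** 2 + n + 1.0 / 2.0))

def z_pert(cubic, quartic, beta, n_max):
    return sum(math.exp(-beta * energy_pert(n, cubic, quartic)) for n in range(n_max + 1))

beta = 2.0
z_harm = math.exp(-beta / 2.0) / (1.0 - math.exp(-beta))  # closed-form harmonic partition function
print('perturbative Z at gamma = 0: %.6f, harmonic Z: %.6f' % (z_pert(0.0, 0.0, beta, 50), z_harm))
```

At $\gamma = 0$ the two agree to machine precision (the geometric series is truncated at `n_max`, but the tail is exponentially small).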
```
import math, numpy, pylab
from numpy import *
# Define the anharmonic (quartic) potential
def V_anharmonic(x, gamma, kappa):
V = x**2 / 2 + gamma * x**3 + kappa * x**4
return V
# Free off-diagonal density matrix:
def rho_free(x, xp, beta):
return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) /
math.sqrt(2.0 * math.pi * beta))
# Harmonic density matrix in the Trotter approximation (returns the full matrix):
def rho_anharmonic_trotter(grid, beta):
return numpy.array([[rho_free(x, xp, beta) * \
numpy.exp(-0.5 * beta * (V_anharmonic(x, -g, g) + V_anharmonic(xp, -g, g))) \
for x in grid] for xp in grid])
# Exact harmonic oscillator quantum position distribution:
def p_quant(x, beta):
p_q = sqrt(tanh(beta / 2.0) / pi) * exp(- x**2.0 * tanh(beta / 2.0))
return p_q
# Perturbative energy levels
def Energy_pert(n, cubic, quartic):
return n + 0.5 - 15.0 / 4.0 * cubic **2 * (n ** 2 + n + 11.0 / 30.0) \
+ 3.0 / 2.0 * quartic * (n ** 2 + n + 1.0 / 2.0)
# Partition function obtained using perturbative energies
def Z_pert(cubic, quartic, beta, n_max):
Z = sum(math.exp(-beta * Energy_pert(n, cubic, quartic)) for n in range(n_max + 1))
return Z
# Construct the position grid:
x_max = 5 #maximum position value
nx = 100 #number of elements on the x grid
dx = 2.0 * x_max / (nx - 1) #position differential
x = [i * dx for i in range(-(nx - 1) // 2, nx // 2 + 1)] #position grid (nx + 1 points; integer division keeps this valid in Python 3)
beta = 2.0 ** 1 # target value of beta (power of 2)
#g = 1.0 #-cubic and quartic coefficient
for g in [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5]:
    beta_tmp = 2.0 ** (-5) # initial (low) beta, i.e. high temperature; must be reset for every g
    Z_p = Z_pert(-g, g, beta, 15)
    rho = rho_anharmonic_trotter(x, beta_tmp) # density matrix at initial (low) beta (Trotter decomp.)
# Reduce the temperature by the convolution property (matrix squaring):
while beta_tmp < beta:
rho = numpy.dot(rho, rho) #matrix squaring (convolution)
rho *= dx #also multiply by the differential since we are in position representation
beta_tmp *= 2.0 #reduce the temperature by a factor of 2
#print 'beta: %s -> %s' % (beta_tmp / 2.0, beta_tmp)
# Output position distribution pi(x) at the final beta onto a file:
Z = sum(rho[j, j] for j in range(nx + 1)) * dx #partition function
pi_of_x = [rho[j, j] / Z for j in range(nx + 1)] #the diagonal element of the density matrix
f = open('data_anharm_matrixsquaring_beta' + str(beta) + '.dat', 'w')
for j in range(nx + 1):
f.write(str(x[j]) + ' ' + str(rho[j, j] / Z) + '\n')
f.close()
# Plot the obtained final position distribution:
    T = 1 / beta
    x_plot = linspace(-x_max, x_max, nx + 1) # separate plotting grid, so x is not clobbered for the next g
    y2 = [V_anharmonic(a, -g, g) for a in x_plot]
    y1 = [p_quant(a, beta) for a in x_plot]
    pylab.plot(x_plot, y2, c='gray', linewidth=2.0, label='Anharmonic potential')
    pylab.plot(x_plot, y1, c='green', linewidth=2.0, label='Harmonic exact quantum')
    pylab.plot(x_plot, pi_of_x, c='red', linewidth=4.0, label='Anharmonic matrix squaring')
pylab.ylim(0,1)
pylab.xlim(-2,2)
pylab.title('Anharmonic oscillator position distribution at $T=$%.2f' % T, fontsize = 13)
pylab.xlabel('$x$', fontsize = 15)
pylab.ylabel('$\pi(x)$', fontsize = 15)
pylab.legend()
pylab.grid()
pylab.savefig('plot_T_%.2f_anharm_g_%.1f_prob_matrix_squaring.png' % (T,g))
pylab.show()
    print('g =', g, '| perturbative partition function:', Z_p, '| matrix-squaring partition function:', Z)
```
#### Path integral Monte Carlo
```
%pylab inline
import math, random, pylab
# Define the anharmonic (quartic) potential
def V_anharmonic(x, gamma, kappa):
V = x**2 / 2 + gamma * x**3 + kappa * x**4
return V
def rho_free(x, y, beta): # free off-diagonal density matrix
return math.exp(-(x - y) ** 2 / (2.0 * beta))
def read_file(filename):
list_x = []
list_y = []
with open(filename) as f:
for line in f:
x, y = line.split()
list_x.append(float(x))
list_y.append(float(y))
return list_x, list_y
beta = 4.0
g = 1.0 #-cubic and quartic coefficients
T = 1 / beta
N = 16 # number of imaginary times slices
dtau = beta / N
delta = 1.0 # maximum displacement on one slice
n_steps = 1000000 # number of Monte Carlo steps
x = [0.0] * N # initial path
hist_data = []
for step in range(n_steps):
k = random.randint(0, N - 1) # random slice
knext, kprev = (k + 1) % N, (k - 1) % N # next/previous slices
x_new = x[k] + random.uniform(-delta, delta) # new position at slice k
old_weight = (rho_free(x[knext], x[k], dtau) *
rho_free(x[k], x[kprev], dtau) *
math.exp(-dtau * V_anharmonic(x[k], -g, g)))
new_weight = (rho_free(x[knext], x_new, dtau) *
rho_free(x_new, x[kprev], dtau) *
math.exp(-dtau * V_anharmonic(x_new ,-g, g)))
if random.uniform(0.0, 1.0) < new_weight / old_weight:
x[k] = x_new
if step % 10 == 0:
hist_data.append(x[0])
# Figure output:
list_x, list_y = read_file('data_anharm_matrixsquaring_beta' + str(beta) + '.dat')
v = [V_anharmonic(a, -g, g) for a in list_x]
pylab.plot(list_x, v, c='gray', linewidth=2.0, label='Anharmonic potential')
pylab.plot(list_x, list_y, c='red', linewidth=4.0, label='matrix squaring')
pylab.hist(hist_data, 100, density=True, label='path integral Monte Carlo') #histogram of the sample
pylab.ylim(0,1)
pylab.xlim(-2,2)
pylab.title('Position distribution at $T=%.2f$, $\gamma_{cubic}=%.2f$, $\gamma_{quartic}=%.2f$' % (T,-g,g), fontsize = 13)
pylab.xlim(-2.0, 2.0) #restrict the range over which the histogram is shown
pylab.xlabel('$x$', fontsize = 15)
pylab.ylabel('$\pi(x)$', fontsize = 15)
pylab.legend()
pylab.savefig('plot_T_%.2f_anharm_g_%.1f_prob_path_int.png' % (T,g))
pylab.show()
```
<center>
<img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%">
<h1> INF285 - Computación Científica </h1>
<h2> Finding 2 Chebyshev points graphically </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.04</h2>
</center>
## Table of Contents
* [Finding 2 Chebyshev points](#cheb)
* [Python Modules and Functions](#py)
* [Acknowledgements](#acknowledgements)
```
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
%matplotlib inline
```
<div id='cheb' />
## Finding 2 Chebyshev points
We compute them so we can compare them later.
```
n=2
i=1
theta1=(2*i-1)*np.pi/(2*n)
i=2
theta2=(2*i-1)*np.pi/(2*n)
c1=np.cos(theta1)
c2=np.cos(theta2)
```
Recall that the Chebyshev points are points that minimize the following expression:
$$
\displaystyle{\omega(x_1,x_2,\dots,x_n)=\max_{x} |(x-x_1)\,(x-x_2)\,\cdots\,(x-x_n)|}.
$$
This comes from the Interpolation Error Formula (I hope you remember it, otherwise see the textbook or the classnotes!).
In this notebook, we will find the $\min$ for 2 points,
this means:
$$
\begin{align*}
[x_1,x_2]&= \displaystyle{\mathop{\mathrm{argmin}}_{\widehat{x}_1,\widehat{x}_2\in [-1,1]}} \,\omega(\widehat{x}_1,\widehat{x}_2)\\
&=\displaystyle{\mathop{\mathrm{argmin}}_{\widehat{x}_1,\widehat{x}_2\in [-1,1]}}\,
\max_{x\in [-1,1]} |(x-\widehat{x}_1)\,(x-\widehat{x}_2)|.
\end{align*}
$$
For doing this, we first need to build $\omega(\widehat{x}_1,\widehat{x}_2)$,
```
N=50
x=np.linspace(-1,1,N)
w = lambda x1,x2: np.max(np.abs((x-x1)*(x-x2)))
wv=np.vectorize(w)
```
For instance, if you want to evaluate at $\widehat{x}_1=0.2$ and $\widehat{x}_2=0.5$ you get,
```
w(0.2,0.5)
```
But a better value would be,
```
w(0.7,-0.7)
```
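In fact, among monic degree-$n$ polynomials on $[-1,1]$, the Chebyshev points achieve the smallest possible value $\omega = 2^{1-n}$, i.e. $1/2$ for $n=2$. A quick numerical confirmation (the grid resolution is just a choice for this check):

```python
import numpy as np

n = 2
# Chebyshev points for n = 2 (same formula as above)
c = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))

xx = np.linspace(-1, 1, 100001)
w_cheb = np.max(np.abs((xx - c[0]) * (xx - c[1])))
print('omega at Chebyshev points: %.6f (theory: %.6f)' % (w_cheb, 2.0 ** (1 - n)))

# any other choice, e.g. the points +-0.5, does worse
w_other = np.max(np.abs((xx - 0.5) * (xx + 0.5)))
print('omega at +-0.5: %.6f' % w_other)
```

The value $0.5$ obtained at the Chebyshev points should match the bottom of the contour plots below.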
Thus, the idea is to find the minimum.
Now we need to evaluate $\omega(x_1,x_2)$ over the domain $\Omega=[-1,1]^2$.
```
[X,Y]=np.meshgrid(x,x)
W=wv(X,Y)
```
With this data, we can now plot the function $\omega(x_1,x_2)$ on $\Omega$.
The minimum value is shown by the color at the bottom of the colorbar.
By visual inspection, we see that there are two minima,
located at the bottom right and top left — the two orderings of the same pair of points, since $\omega(x_1,x_2)=\omega(x_2,x_1)$.
```
plt.figure(figsize=(8,8))
#plt.contourf(X, Y, W,100, cmap=cm.hsv, antialiased=False)
plt.contourf(X, Y, W,100, cmap=cm.nipy_spectral, antialiased=False)
plt.xlabel(r'$\widehat{x}_1$')
plt.ylabel(r'$\widehat{x}_2$')
plt.colorbar()
plt.show()
```
Finally, we have included the min in the plot and we see the agreement between the min of $\omega(x_1,x_2)$ and the Chebyshev points found.
```
plt.figure(figsize=(8,8))
plt.contourf(X, Y, W,100, cmap=cm.nipy_spectral, antialiased=False)
plt.plot(c1,c2,'w.',markersize=16)
plt.plot(c2,c1,'w.',markersize=16)
plt.colorbar()
plt.xlabel(r'$\widehat{x}_1$')
plt.ylabel(r'$\widehat{x}_2$')
plt.show()
```
<div id='py' />
## Python Modules and Functions
An interesting module:
https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.polynomials.chebyshev.html
<div id='acknowledgements' />
# Acknowledgements
* _Material created by professor Claudio Torres_ (`ctorres@inf.utfsm.cl`). DI UTFSM. _May 2018._
* _Update June 2020 - v1.02 - C.Torres_ : Fixing formatting issues.
* _Update May 2021 - v1.03 - C.Torres_ : Adding \widehat to the corresponding x's.
* _Update June 2021 - v1.04 - C.Torres_ : Format and explanation.
## Polynomial Regression ##
What is polynomial regression?
> This is simply regression at a higher order.
Why?
> Sometimes a line just does not cut it, and you need a curve
Basic terms, which should be pretty self-explanatory:
* Linear - $y = \beta_0 + \beta_1 x$
* Quadratic - $y = \beta_0 + \beta_1 x + \beta_2 x^2$
* Cubic - $y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3$
* Throwing in interaction terms (see Aside - Exploring Interactions) - $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2$
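Under the hood, all of these are still ordinary least squares: the higher-order terms are simply extra columns in the design matrix. A small numpy sketch with synthetic data (the true coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + 0.05 * rng.randn(50)  # known quadratic + noise

# quadratic regression = linear regression on the design matrix [1, x, x^2]
X = np.column_stack([np.ones_like(x), x, x ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print('estimated coefficients:', np.round(coef, 2))
```

The recovered coefficients land close to the true $(\beta_0, \beta_1, \beta_2) = (1, 2, -0.5)$, which is exactly what `PolynomialFeatures` plus `LinearRegression` automate below.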
```
# The usual
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.datasets import load_boston
from sklearn import linear_model
%matplotlib inline
resale = pd.read_csv('resale.csv')
ppi = pd.read_csv('ppi.csv')
# I took the first 80 data points only so that I can demonstrate different ways of fitting
ppi = ppi[:80]
merged = pd.merge(ppi,resale,on='quarter')
merged = merged.drop(['level_1'], axis=1)
merged.columns = ['Quarter', 'Resale Index', 'PPI']
plt.plot(merged['Resale Index'], merged['PPI'], 'ro')
observations = len(merged)
y = merged['Resale Index']
X = merged['PPI']
```
Simple comparisons of linear and higher order regressions
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
# Recall we set interaction_only to True in Basics - Exploring Interactions. Here we want all terms in so we turn it off
linear_regression = linear_model.LinearRegression(normalize=False, fit_intercept=True)
create_cubic = PolynomialFeatures(degree=3, interaction_only=False, include_bias=False)
create_quadratic = PolynomialFeatures(degree=2, interaction_only=False, include_bias=False)
# Putting these in a pipline as before
linear_predictor = make_pipeline(linear_regression)
quadratic_predictor = make_pipeline(create_quadratic, linear_regression)
cubic_predictor = make_pipeline(create_cubic, linear_regression)
```
### Linear ###
```
predictor = 'PPI'
x = merged['PPI'].values.reshape((observations,1))
xt = np.arange(0,merged[predictor].max(),0.1).reshape((int(merged[predictor].max()/0.1)+1,1))
x_range = [merged[predictor].min(),merged[predictor].max()]
y_range = [merged['Resale Index'].min(),merged['Resale Index'].max()]
scatter = merged.plot(kind='scatter', x=predictor, y='Resale Index', xlim=x_range, ylim=y_range)
regr_line = scatter.plot(xt, linear_predictor.fit(x,y).predict(xt), '-', color='red', linewidth=2)
```
The line above obviously does not look good, and we can visually infer some curve from the datapoints so let's move on to a quadratic form
```
scatter = merged.plot(kind='scatter', x=predictor, y='Resale Index', xlim=x_range, ylim=y_range)
regr_line = scatter.plot(xt, quadratic_predictor.fit(x,y).predict(xt), '-', color='red', linewidth=2)
```
Maybe a cubic form is even better?
```
scatter = merged.plot(kind='scatter', x=predictor, y='Resale Index', xlim=x_range, ylim=y_range)
regr_line = scatter.plot(xt, cubic_predictor.fit(x,y).predict(xt), '-', color='red', linewidth=2)
```
And if we want to go even crazier?
```
create_ten = PolynomialFeatures(degree=10, interaction_only=False, include_bias=False)
ten_predictor = make_pipeline(create_ten, linear_regression)
scatter = merged.plot(kind='scatter', x=predictor, y='Resale Index', xlim=x_range, ylim=y_range)
regr_line = scatter.plot(xt, ten_predictor.fit(x,y).predict(xt), '-', color='red', linewidth=2)
```
Is a better fit always better?
> Definitely not!
Why not? Doesn't it mean I can make perfect predictions?
> When you reach a stage where your regression function fits your sample data perfectly, it usually means one thing (unless your sample data *is* the population data): your model is overfitted, and it is not likely to perform well once you present it with new data.
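This can be made concrete with a toy experiment: fit a low-degree and a high-degree polynomial to the same noisy sample and compare errors on the training points against fresh points drawn from the same underlying curve (the data here are synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(1)
x_train = np.linspace(0, 6, 8)
y_train = np.sin(x_train) + 0.1 * rng.randn(8)   # noisy sample of a smooth curve
x_test = np.linspace(0.4, 5.6, 8)                 # fresh points from the same curve
y_test = np.sin(x_test) + 0.1 * rng.randn(8)

errs = {}
for degree in (2, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # degree 7 interpolates all 8 training points
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    errs[degree] = (train_mse, test_mse)
    print('degree %d: train MSE %.5f, test MSE %.5f' % (degree, train_mse, test_mse))
```

The degree-7 fit drives the training error to essentially zero, yet its error on the held-out points stays well above it — the signature of overfitting.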
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task.
# Introduction to BERT Question Answer Task
The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer.
<p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p>
<p align="center">
<em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em>
</p>
As for the model of the question answer task, the inputs are the preprocessed passage and question pair, and the outputs are the start logits and end logits for each token in the passage.
The input size can be set and adjusted according to the lengths of the passage and question.
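To make the start and end logits concrete: at inference time, the predicted span is the pair of positions $i \le j$ that maximizes `start_logits[i] + end_logits[j]`. A minimal decoding sketch with invented logit values (this illustrates the idea only, not the library's actual post-processing):

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=30):
    # pick (i, j) with i <= j < i + max_len maximising start_logits[i] + end_logits[j]
    best, best_score = (0, 0), -np.inf
    for i in range(len(start_logits)):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

tokens = ['the', 'model', 'was', 'released', 'in', '2018', '.']
start_logits = np.array([0.1, 0.2, 0.1, 0.3, 2.5, 1.0, 0.1])
end_logits   = np.array([0.1, 0.1, 0.2, 0.1, 0.3, 2.8, 0.2])
i, j = best_span(start_logits, end_logits)
print('answer:', ' '.join(tokens[i:j + 1]))  # -> in 2018
```

The `i <= j` constraint is what makes the task "extractive": the answer is always a contiguous span of the passage.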
## End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.
```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')
# Gets the training data and validation data.
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)
# Gets the evaluation result.
metric = model.evaluate(validation_data)
# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```
The following sections explain the code in more detail.
## Prerequisites
To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install -q tflite-model-maker-nightly
```
Import the required packages.
```
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.question_answer import DataLoader
```
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
## Choose a model_spec that represents a model for question answer
Each `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.
[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).
[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.
In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it can converge faster on the question answer task.
```
spec = model_spec.get('mobilebert_qa_squad')
```
## Load Input Data Specific to an On-device ML App and Preprocess the Data
The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqa#miscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:
* Skipping the samples that couldn't find any answer in the context document;
* Getting the original answer in the context without uppercase or lowercase.
Download the archived version of the already converted dataset.
```
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
```
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
```
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
```
## Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default epochs and batch size are set by the `default_training_epochs` and `default_batch_size` variables in the `model_spec` object.
```
model = question_answer.create(train_data, model_spec=spec)
```
Have a look at the detailed model structure.
```
model.summary()
```
## Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
```
model.evaluate(validation_data)
```
## Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
```
config = QuantizationConfig.for_dynamic()
config.experimental_new_quantizer = True
```
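The idea behind dynamic range quantization can be sketched in plain numpy (an illustration of the concept only, not TensorFlow Lite's actual implementation): weights are stored as int8 with a per-tensor scale and dequantized on the fly, cutting storage by 4x.

```python
import numpy as np

def quantize_dynamic(w):
    # symmetric per-tensor int8 quantization: scale maps max|w| to 127
    scale = np.max(np.abs(w)) / 127.0
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_int8, scale

def dequantize(w_int8, scale):
    return w_int8.astype(np.float32) * scale

rng = np.random.RandomState(0)
w = rng.randn(256).astype(np.float32)       # stand-in for a weight tensor
w_q, scale = quantize_dynamic(w)
print('max abs error:', np.max(np.abs(w - dequantize(w_q, scale))))  # bounded by scale / 2
print('storage: %d bytes -> %d bytes' % (w.nbytes, w_q.nbytes))
```

The rounding error is bounded by half the scale, which is why the accuracy loss is usually minimal.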
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
```
model.export(export_dir='.', quantization_config=config)
```
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following:
* `ExportFormat.TFLITE`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`
By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
```
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
```
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
```
model.evaluate_tflite('model.tflite', validation_data)
```
## Advanced Usage
The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported, with two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc.
### Adjust the model
You can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQASpec` class.
Adjustable parameters for model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether pre-trained layer is trainable.
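The `doc_stride` parameter is easiest to see on a toy example: a passage longer than the sequence length is split into overlapping windows, each shifted by `doc_stride` tokens. A simplified sketch of the idea (the real preprocessing also accounts for the question and special tokens):

```python
def chunk_passage(tokens, seq_len, doc_stride):
    # split a token list into overlapping windows of length seq_len, stepping by doc_stride
    chunks = []
    start = 0
    while True:
        chunks.append(tokens[start:start + seq_len])
        if start + seq_len >= len(tokens):
            break
        start += doc_stride
    return chunks

tokens = list(range(10))          # stand-in for 10 passage tokens
chunks = chunk_passage(tokens, seq_len=6, doc_stride=3)
print(chunks)  # [[0, 1, 2, 3, 4, 5], [3, 4, 5, 6, 7, 8], [6, 7, 8, 9]]
```

The overlap ensures that an answer span falling near a window boundary is still fully contained in at least one window.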
Adjustable parameters for training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using tpu.
For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
```
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
```
The remaining steps are the same. Note that you must rerun both the `dataloader` and `create` parts as different model specs may have different preprocessing steps.
### Tune training hyperparameters
You can also tune the training hyperparameters like `epochs` and `batch_size` to impact the model performance. For instance,
* `epochs`: more epochs could achieve better performance, but may lead to overfitting.
* `batch_size`: number of samples to use in one training step.
For example, you can train with more epochs and with a bigger batch size like:
```python
model = question_answer.create(train_data, model_spec=spec, epochs=5, batch_size=64)
```
### Change the Model Architecture
You can change the base model your data trains on by changing the `model_spec`. For example, to change to the BERT-Base model, run:
```python
spec = model_spec.get('bert_qa')
```
The remaining steps are the same.
```
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import random
import collections
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
sns.set_context('paper', font_scale=2, rc={'lines.linewidth': 2})
sns.set_style(style='whitegrid')
colors = ["windows blue", "amber", "greyish", "faded green", "dusty purple"]
sns.set_palette(sns.xkcd_palette(colors))
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%reload_ext autoreload
%autoreload 2
import sys
sys.path.insert(0, '..')
```
## Loading dataset
```
import pandas as pd
import glob
video_df = pd.read_csv('../data/youtube_faces_with_keypoints_large.csv')
video_df.head()
# create a dictionary that maps videoIDs to full file paths
npz_files_full_path = glob.glob('../data/youtube_faces_*/*.npz')
video_ids = [x.split('/')[-1].split('.')[0] for x in npz_files_full_path]
full_paths = {}
for video_id, full_path in zip(video_ids, npz_files_full_path):
full_paths[video_id] = full_path
# remove from the large csv file all videos that weren't uploaded yet
video_df = video_df.loc[video_df.loc[:,'videoID'].isin(full_paths.keys()),:].reset_index(drop=True)
print('Number of Videos uploaded so far is %d' %(video_df.shape[0]))
print('Number of Unique Individuals so far is %d' %(len(video_df['personName'].unique())))
```
## Show Overview of Dataset Content (that has been uploaded so far)
```
# overview of the contents of the dataset
grouped_by_person = video_df.groupby("personName")
num_videos_per_person = grouped_by_person.count()['videoID']
grouped_by_person.count().sort_values('videoID', axis=0, ascending=False)
plt.close('all')
plt.figure(figsize=(25,20))
plt.subplot(2,2,1)
plt.hist(x=num_videos_per_person,bins=0.5+np.arange(num_videos_per_person.min()-1,num_videos_per_person.max()+1))
plt.title('Number of Videos per Person',fontsize=30);
plt.xlabel('Number of Videos',fontsize=25); plt.ylabel('Number of People',fontsize=25)
plt.subplot(2,2,2)
plt.hist(x=video_df['videoDuration'],bins=20);
plt.title('Distribution of Video Duration',fontsize=30);
plt.xlabel('duration, frames',fontsize=25); plt.ylabel('Number of Videos',fontsize=25)
plt.xlim(video_df['videoDuration'].min()-2,video_df['videoDuration'].max()+2)
plt.subplot(2,2,3)
plt.scatter(x=video_df['imageWidth'], y=video_df['imageHeight'])
plt.title('Distribution of Image Sizes',fontsize=30)
plt.xlabel('Image Width, pixels',fontsize=25); plt.ylabel('Image Height, pixels',fontsize=25)
plt.xlim(0,video_df['imageWidth'].max() +15)
plt.ylim(0,video_df['imageHeight'].max()+15)
plt.subplot(2,2,4)
average_face_size_wo_nans = np.array(video_df['averageFaceSize'])
average_face_size_wo_nans = average_face_size_wo_nans[np.logical_not(np.isnan(average_face_size_wo_nans))]
plt.hist(average_face_size_wo_nans, bins=20);
plt.title('Distribution of Average Face Sizes',fontsize=30);
plt.xlabel('Average Face Size, pixels',fontsize=25); plt.ylabel('Number of Videos',fontsize=25);
```
## Define some shape normalization utility functions
```
#%% define shape normalization utility functions
def normalize_shapes(shapes_im_coords):
    """shapes_normalized, scale_factors, mean_coords = normalize_shapes(shapes_im_coords)"""
    (numPoints, num_dims, _) = shapes_im_coords.shape
# calc mean coords and subtract from shapes
mean_coords = shapes_im_coords.mean(axis=0)
    shapes_centered = shapes_im_coords - np.tile(mean_coords,[numPoints,1,1])
# calc scale factors and divide shapes
scale_factors = np.sqrt((shapes_centered**2).sum(axis=1)).mean(axis=0)
    shapes_normalized = shapes_centered / np.tile(scale_factors, [numPoints,num_dims,1])
return shapes_normalized, scale_factors, mean_coords
def transform_shape_back_to_image_coords(shapes_normalized, scale_factors, mean_coords):
"""shapes_im_coords_rec = transform_shape_back_to_image_coords(shapes_normalized, scale_factors, mean_coords)"""
(numPoints, num_dims, _) = shapes_normalized.shape
# move back to the correct scale
shapes_centered = shapes_normalized * np.tile(scale_factors, [numPoints,num_dims,1])
# move back to the correct location
shapes_im_coords = shapes_centered + np.tile(mean_coords,[numPoints,1,1])
return shapes_im_coords
```
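The two helpers above are exact inverses of one another. Below is a compact, self-contained round-trip check of the same logic — a sketch that uses NumPy broadcasting instead of `np.tile`, on hypothetical random data shaped like the notebook's `(num_points, num_dims, num_frames)` landmark arrays:

```python
import numpy as np

def normalize(shapes):
    # per-frame centroid and RMS scale, matching normalize_shapes above
    mean = shapes.mean(axis=0, keepdims=True)
    centered = shapes - mean
    scale = np.sqrt((centered ** 2).sum(axis=1)).mean(axis=0)
    return centered / scale, scale, mean

def denormalize(normed, scale, mean):
    # inverse transform, matching transform_shape_back_to_image_coords
    return normed * scale + mean

demo = np.random.RandomState(0).rand(68, 2, 5)  # 68 keypoints, 2D, 5 frames
normed, scale, mean = normalize(demo)
recovered = denormalize(normed, scale, mean)
assert np.allclose(demo, recovered)
```

Normalizing and then transforming back should recover the original image coordinates up to float precision, which is what the final assertion checks.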
## Show Images from YouTube Faces with 2D Keypoints Overlaid
```
# show several frames from each video and overlay 2D keypoints
np.random.seed(42)
num_videos = 4
frames_to_show_from_video = np.array([0.1,0.5,0.9])
num_frames_per_video = len(frames_to_show_from_video)
# define which points need to be connected with a line
jaw_points = [ 0,17]
right_eyebrow_points = [17,22]
left_eyebrow_points = [22,27]
nose_ridge_points = [27,31]
nose_base_points = [31,36]
right_eye_points = [36,42]
left_eye_points = [42,48]
outer_mouth_points = [48,60]
inner_mouth_points = [60,68]
list_of_all_connected_points = [jaw_points,right_eyebrow_points,left_eyebrow_points,
nose_ridge_points,nose_base_points,
right_eye_points,left_eye_points,outer_mouth_points,inner_mouth_points]
# select a random subset of 'num_videos' from the available videos
rand_videos_id = video_df.loc[np.random.choice(video_df.index,size=num_videos,replace=False),'videoID']
fig, ax_array = plt.subplots(nrows=num_videos,ncols=num_frames_per_video,figsize=(14,18))
for i, video_id in enumerate(rand_videos_id):
# load video
video_file = np.load(full_paths[video_id])
color_images = video_file['colorImages']
bounding_box = video_file['boundingBox']
landmarks2D = video_file['landmarks2D']
landmarks3D = video_file['landmarks3D']
# select frames and show their content
selected_frames = (frames_to_show_from_video*(color_images.shape[3]-1)).astype(int)
for j, frame_idx in enumerate(selected_frames):
ax_array[i][j].imshow(color_images[:,:,:,frame_idx])
ax_array[i][j].scatter(x=landmarks2D[:,0,frame_idx],y=landmarks2D[:,1,frame_idx],s=9,c='r')
for con_pts in list_of_all_connected_points:
x_pts = landmarks2D[con_pts[0]:con_pts[-1],0,frame_idx]
y_pts = landmarks2D[con_pts[0]:con_pts[-1],1,frame_idx]
ax_array[i][j].plot(x_pts,y_pts,c='w',lw=1)
ax_array[i][j].set_title('"%s" (t=%d)' %(video_id,frame_idx), fontsize=12)
ax_array[i][j].set_axis_off()
```
| github_jupyter |
```
# Write a dynamic programming based program to find minimum number insertions (addition) needed to make a palindrome
# Examples:
# ab: Number of insertions required is 1 i.e. bab aa: Number of insertions required is 0 i.e. aa
# abcd: Number of insertions required is 3 i.e. dcb + abcd
# Algorithm:
# The table should be filled in diagonal fashion. For the string abcde, indices 0..4,
# the following should be order in which the table is filled:
# Gap = 1: (0, 1) (1, 2) (2, 3) (3, 4) --> l, h values
# Gap = 2: (0, 2) (1, 3) (2, 4)
# Gap = 3: (0, 3) (1, 4)
# Gap = 4: (0, 4)
import numpy as np
def min_insertions_for_palindrome(str1): # look for similar letters at the edges and calculate the steps needed to make it similar
n = len(str1)
# Create a table of size n*n to store minimum number of insertions
# needed to convert str1[l..h] to a palindrome. Fill the table diagonally
table = np.zeros([n,n], dtype = int)
for gap in range(1, n):
l = 0
for h in range(gap, n):
# if the beginning and ending letters are the same, move one index towards inner part of the string
# Ex iovoi then the next step is to check ovo
if str1[l] == str1[h]:
table[l][h] = table[l + 1][h - 1]
# if the letters at both edges are not the same, look for either moving from one side or the other
# find the minimum and add one more step
else:
table[l][h] = (min(table[l][h - 1], table[l + 1][h]) + 1)
l += 1
print(table)
# Return minimum number of insertions for str1[0..n-1]
    return table[0][n - 1]
# Driver Code
test_str = "vyoili"
print(min_insertions_for_palindrome(test_str))
# Minimum insertions to form a palindrome can be found easily by using Longest Common Subsequence Algorithm
# Find the LCS of the string and its reverse. Items that are not in LCS are the ones missing to form a palindrome
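# A minimal sketch of that relationship (assuming the standard O(n^2)
# LCS-length DP): the minimum number of insertions for a string s equals
# len(s) minus the length of the LCS of s and its reverse.
def min_insertions_via_lcs(s):
    r = s[::-1]
    n = len(s)
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if s[i - 1] == r[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return n - dp[n][n]
print(min_insertions_via_lcs("ab"), min_insertions_via_lcs("abcd"))  # 1 3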
test_str = "ilivyo"
rev = "".join(list(reversed(test_str)))
print(rev)
def lcsubs(string):
reverse_string = "".join(list(reversed(string)))
n = len(string)
lcsubs_list = []
dp = np.zeros([n+1, n+1], dtype = int)
for i in range(1, n+1):
for j in range(1 , n+1):
if string[i-1] == reverse_string[j-1]:
dp[i][j] = dp[i-1][j-1] + 1
else:
dp[i][j] = max(dp[i-1][j], dp[i][j-1])
i = n
j = n
while i > 0 and j > 0:
if string[i-1] == reverse_string[j-1]:
lcsubs_list.insert(0, string[i-1])
i -= 1
j -= 1
elif dp[i-1][j] > dp[i][j-1]:
i -= 1
else:
j -= 1
a_dict = {} # append the items needed to make the array palindrome
for i in range(len(string)):
if string[i] not in lcsubs_list:
a_dict[string[i]] = 'Needed'
print("these items are needed to add the string to make a palindrome")
return a_dict.keys()
lcsubs(test_str)
# Python program to maximize the profit by doing at most k transactions given stock prices for n days
def max_profit(prices, k):
n = len(prices)
if k > n: return
# Bottom-up DP approach
profit = np.zeros([n, k+1])
# Profit is zero for the first
# day and for zero transactions
for j in range(1, k+1): # for each transaction
for i in range(1, n): # for each day's stock value
max_so_far = 0
for l in range(i): # for all past transactions
max_so_far = max(max_so_far, prices[i] - prices[l] + profit[l][j-1])
profit[i][j] = max(profit[i - 1][j], max_so_far)
print(profit) # to view the table
return profit[n - 1][k]
# Driver code
k = 3
prices = [10, 22, 5, 75, 65, 80]
n = len(prices)
print("Maximum profit is:", max_profit(prices, k))
# if k = 2, trader earns 87 as sum of 12 and 75 Buy at price 10, sell at 22, buy at 5 and sell at 80,
# Alternative solution to maximize the profit by doing at most k transactions given stock prices for n days
import numpy as np
def maximize_profit_with_k_trsc(price_list, k):
n = len(price_list)
if n <= 1 or k > len(price_list): return 0
memo_table = np.zeros([k+1, n+1] , dtype = int) # number of transactions in rows
for i in range(1, k+1): # for each buy and sell transaction, i
max_profit = -np.inf
for j in range(1, n+1): # for each day, j, starting from the first day
prev_day_prev_transact = memo_table[i-1][j-1] # max profit until prev day with prev transact capacity
price_today = price_list[j-1]
prev_day_price = price_list[j-2]
max_transaction_until_today = memo_table[i][j-1] # max profit reached until prev day with curr transact.
max_profit = max(max_profit, prev_day_prev_transact - prev_day_price)
memo_table[i][j] = max(max_transaction_until_today, max_profit + price_today)
print(memo_table)
return memo_table[k][n]
prices = [10, 22, 5, 75, 65, 80]
maximize_profit_with_k_trsc(prices, 3)
# Return the list of items of the shortest supersequence of X and Y strings
import numpy as np
def shortest_supersequence(X, Y):
m = len(X)
n = len(Y)
memo_list = np.zeros([m+1, n+1], dtype = int)
    # base cases: with an empty X the supersequence is Y, and vice versa
    for j in range(n + 1):
        memo_list[0][j] = j
    for i in range(m + 1):
        memo_list[i][0] = i
    for i in range(1, m + 1):
        for j in range(1, n + 1):
if (X[i - 1] == Y[j - 1]):
memo_list[i][j] = 1 + memo_list[i - 1][j - 1]
else:
memo_list[i][j] = 1 + min(memo_list[i - 1][j], memo_list[i][j - 1])
print(memo_list) # to view
print(memo_list[m][n]) # answer to the length of the supersequence
i = m
j = n
result = []
while i > 0 and j > 0:
if X[i - 1] == Y[j - 1]:
result.insert(0, X[i-1])
i -= 1
j -= 1
elif memo_list[i - 1][j] > memo_list[i][j - 1]:
result.insert(0, Y[j-1])
j -= 1
else:
result.insert(0, X[i-1])
i -= 1
# if j == 0
while i > 0:
result.insert(0, X[i-1])
i -= 1
#if i == 0
while j > 0:
result.insert(0, Y[j-1])
j -= 1
return result
test_x = "VKLAB"
test_y = "ZKADB"
shortest_supersequence(test_x, test_y)
# Reminder of the min coin change problem to help the min cost table and max chain problems below
change_list = [1,3,5,10]
target = 21
def min_coin_change(coin_list,change):
# create an array for the worst number of coins for change
dp = np.arange(change + 1, dtype = int)
# optimize the array for the min number of coins
for i in range(1, len(dp)):
for c in [coin for coin in coin_list if coin <= i]:
dp[i] = min(dp[i], dp[i-c] + 1)
# print(dp)
return dp[change]
print(min_coin_change(change_list, target))
# min cost table for addition or multiplication with unusual operational costs:
# if from zero to the destination(N), the cost of addition(P) of 1 unit is 5, and multiplication cost by two(Q) is 1;
# find the min cost to reach to destination from 0
# Input: N = 9, P = 5, Q = 1 Min cost is: 13 : 0 --> 1 --> 2 --> 4 --> 8 --> 9 is: 5 + 1 + 1 + 1 + 5 = 13.
import numpy as np
def min_cost_to_reach_N(N, P, Q):
    # prepare the memory array with the worst case scenario (addition of P for each unit) to reach any number
memo = np.zeros(N + 1, dtype = int)
for i in range(1, len(memo)):
memo[i] = i*P
# loop over the memo array to optimize the values in the array
for i in range(1, len(memo)):
# the index of memo can be odd or even
        if i % 2 == 0:
            memo[i] = min(memo[i-1] + P, memo[i//2] + Q)
        else:
            # odd: double to (i-1) then add one unit
            memo[i] = min(memo[i-1] + P, memo[i//2] + Q + P)
print(np.arange(len(memo)), memo)
return memo[N]
min_cost_to_reach_N(9, 5,1) #[ 0 5 6 11 7 12 12 17 8 13]
# Order shuffled pairs such as: (a, b),(c, d),(e, f)... if c > b and e > f.. Find the longest chain from a given set
# Input: {{5, 24}, {41, 50}, {25, 68}, {27, 39}, {30, 90}}, then the longest chain is: {{5, 24}, {27, 39}, {41, 50}}
def max_chain_length_of_tuples(arr):
arr = sorted(arr, key = lambda x: x[0]) # sort the array of tuples according to the first value
n = len(arr)
max_length = 0
# Initialize MCL(max chain length) values for all indices
memo = [1 for i in range(n)]
# Compute optimized chain length values in bottom up manner
for i in range(1, n):
for j in range(0, i):
if (arr[i][0] > arr[j][1] and memo[i] <= memo[j]):
memo[i] = memo[j] + 1
print(memo)
return max(memo)
# Driver program to test above function
arr = [(5, 24), (41, 50), (25, 68), (27, 39), (70,90)]
max_chain_length_of_tuples(arr)
# Longest Increasing Subsequence Algorithm can be applied to find a solution to this problem.
# Because the items are shuffled, it wouldn't be useful to print the order
# def LIS(numbers): # with two for loops, simple Longest Increasing Subsequence
# n = len(numbers)
# ranking = [1 for i in range(n)] # ranking is set to 1, if there is no sequence, it will be just one number
# for i in range(n): # all the numbers
# for j in range(i+1, len(numbers)):
# # check if the remaining items have lower rank but higher value and re-adjust the ranking
# if numbers[i] < numbers[j] and ranking[i] >= ranking[j]:
# ranking[j] = ranking[i] + 1
# return max(ranking) , ranking
# LIS([4, 0, 1, 8, 2, 10, 2])
def max_chain_length_of_tuples2(arr):
arr = sorted(arr, key = lambda x: x[0]) # sort the array of tuples according to the first value
n = len(arr)
max_length = 0
# Initialize MCL(max chain length) values for all indices
memo = [1 for i in range(n)]
# Compute optimized chain length values in bottom up manner
for i in range(n):
for j in range(i, n):
if (arr[j][0] > arr[i][1] and memo[i] >= memo[j]):
memo[j] = memo[i] + 1
print(memo)
return max(memo)
# Driver program to test above function
arr = [(5, 24), (41, 50), (25, 68), (27, 39), (70,90)]
max_chain_length_of_tuples2(arr)
# Reminder of the Longest Increasing Subsequence problem before the Longest Common Increasing Subsequence problem
# ATTN: There might always be several longest increasing subsequences with same length
# ex: [4, 5, 0, 1, 8, 2, 10] answer: 4,5,8,10 and 0,1,2,10
test_array = [4, 0, 1, 8, 2, 10]
def longest_increasing_subsequence(a):
# create a ranking array and a parent array
ranking = np.ones(len(a), dtype = int) # dtype is very important!
# loop over the array
for i in range(len(a)):
# loop over all the remaining items ahead of the current item
for j in range(i+1, len(a)):
# check if the remaining items have lower rank but higher value
# if this is the case, increase the rank of the further items and assign them as parent
if a[i] < a[j] and ranking[i] >= ranking[j]:
ranking[j] = ranking[i] + 1
print(ranking)
max_rank_at = int(max(ranking))
order_list = []
for each_rank in reversed(range(len(ranking))):
if ranking[each_rank] == max_rank_at:
order_list.insert(0, a[each_rank])
max_rank_at -= 1
return order_list
longest_increasing_subsequence(test_array)
# Longest Common Increasing Subsequence of two arrays
arr1 = [4, 0, 8, 2, 10] # LCS of arr1 and arr2 is [4, 8, 2, 10]; the LIS of that LCS (the LCIS) is [4, 8, 10]
arr2 = [3, 4, 8, 2, 10]
# find the longest common subsequence
def lcsubseq(X, Y):
if len(X) == 0 or len(Y) == 0: return None
n = len(X)
m = len(Y)
memo = np.zeros([n+1, m+1], dtype = int)
for i in range(1,n+1):
for j in range(1,m+1):
if X[i-1] == Y[j-1]:
memo[i][j] = memo[i-1][j-1] + 1
else:
memo[i][j] = max(memo[i][j-1], memo[i-1][j])
# percolate up to find the items that have matched
i = n
j = m
lcsubseq_list = []
while i > 0 and j > 0:
if X[i-1] == Y[j-1]:
lcsubseq_list.insert(0, X[i-1])
i -= 1
j -= 1
elif memo[i][j-1] > memo[i-1][j]:
j -= 1
else:
i -= 1
return lcsubseq_list
test_array = lcsubseq(arr1, arr2)
# Extract the longest increasing subsequence from the solution of the LCS of two arrays
def lisubs(array):
n = len(array)
ranking = np.ones(n, dtype = int)
for i in range(n):
for j in range(i, n):
if array[j] > array[i] and ranking[i] >= ranking[j]:
ranking[j] = ranking[i] + 1
print(ranking)
lis = []
max_val = np.max(ranking)
for i in reversed(range(n)):
if ranking[i] == max_val:
lis.insert(0, array[i])
max_val -= 1
return lis
lisubs(test_array)
# Find Maximum Sum Increasing Subsequence
# Solution is also similar to Longest Increasing Subsequence with a twist
# Ex: If input is {1, 101, 2, 3, 100, 4, 5}, output is: 106 (1 + 2 + 3 + 100)
import numpy as np
def max_sum_increasing_subseq(arr):
n = len(arr)
msis = np.copy(arr) # copy the values of original array as starting point
# Compute maximum sum values in bottom up manner
for i in range(n):
for j in range(i):
if (arr[i] > arr[j] and msis[i] < msis[j] + arr[i]):
msis[i] = msis[j] + arr[i]
print(msis)
# Pick maximum of all msis values
return np.max(msis)
test_array = [1, 101, 2, 3, 100, 4, 5]
max_sum_increasing_subseq(test_array)
# Minimum number of jumps to reach the end of the array
# Given an array of integers where each element represents the max number of steps that can be made forward from that
# element. Return the minimum number of jumps to reach the end of the array (starting from the first element).
# If an element is 0, then cannot move through that element.
# Example: Input: arr[] = {1, 3, 5, 8, 9, 2, 6, 7, 6, 8, 9} then Output: 3 (1-> 3 -> 8 ->9)
# First element is 1, so can only go to 3. Second element is 3, so can make at most 3 steps eg to 5 or 8 or 9.
def min_jumps_to_reach_the_end_of_array(arr):
n = len(arr)
if n == 0 or arr[0] == 0: return
jumps = [np.inf] * n
jumps[0] = 0
for i in range(1, n):
for j in range(i):
current_distance = i-j
potential_steps = arr[j]
if (current_distance <= potential_steps): # if potential steps are larger, it will be faster to reach to end
jumps[i] = min(jumps[i], jumps[j] + 1)
print(jumps)
return jumps[n-1]
# Driver Program to test above function
arr = [1, 3, 2, 5, 6, 4, 2, 6, 7, 6, 8, 9, 11]
min_jumps_to_reach_the_end_of_array(arr)
# Dynamic programming solution to find all possible steps to cover a distance problem
# Reach a target by using a list of all integers that are all smaller than or equal to the target
steps = [1,2,3]
target = 5
def count_ways_to_reach_target(target, step_list):
res = [0 for x in range(target+1)] # Creates list res with all elements 0
res[0] = 1
for i in range(1, target+1):
for j in [c for c in step_list if c <= target]:
res[i] += res[i-j]
print(res)
return res[target]
print(count_ways_to_reach_target(target,steps))
# %timeit count_ways_to_reach_target(target, steps)
# number of ways to cover a distance if it was made with recursion
def reach_t(list_of_numbers, target, result = None, sub_solution = None):
if result == None: result = []
if sub_solution == None: sub_solution = []
if sum(sub_solution) > target: return
if sum(sub_solution) == target: result.append(sub_solution)
# loop over all the items in the list
for i in range(len(list_of_numbers)):
reach_t(list_of_numbers, target, result, sub_solution + [list_of_numbers[i]])
return result
solution = reach_t([1,2,3], 5)
print(solution, len(solution))
%timeit reach_t([1,2,3], 5)
# print the min number of steps (number of items to add up) to reach the target:
print(min([len(solution[i]) for i in range(len(solution))])) # takes minimum 2 items to sum up to reach target
# Count the number of ways to traverse a Matrix with Time Complexity : O(m * n)
# start from: matrix[0][0], end at: mat[m-1][n-1]
def number_of_paths(m, n):
dp = np.ones([m, n], dtype = int)
for i in range(1, m):
for j in range(1, n):
dp[i][j] = (dp[i - 1][j] + dp[i][j - 1])
print(dp)
return dp[-1][-1]
n = 5
m = 5
print(number_of_paths(n, m))
# Optimal Strategy for a Game for 2 players picking coins from an array of coins, trying to get maximum value
# Consider a row of n coins, where n is even and two players alternating turns. In each turn, a player selects either
# the first or last coin from the row, removes it from the row permanently. Find the max value a player can get.
# Returns optimal value possible that
# a player can collect from an array
# of coins of size n.
def strategy_to_earn_most_money_in_a_game(arr, n):
table = np.zeros([n,n] ,dtype = int)
# the table is filled in diagonal fashion
for gap in range(n):
for j in range(gap, n):
i = j - gap
x = 0
if((i + 2) <= j):
x = table[i + 2][j]
y = 0
if((i + 1) <= (j - 1)):
y = table[i + 1][j - 1]
z = 0
if(i <= (j - 2)):
z = table[i][j - 2]
table[i][j] = max(arr[i] + min(x, y), arr[j] + min(y, z))
return table[0][n - 1]
# Driver Code
arr1 = [ 20, 9, 2, 2, 2, 10]
n = len(arr1)
print(strategy_to_earn_most_money_in_a_game(arr1, n))
# Alternatively it would be better to use this easy solution:
# player 1 is smart and making the best choices and player 2 is a total loser:
import copy
array = [ 20, 9, 2, 2, 2, 10]
player1 = []
player2 = []
cp_array = copy.deepcopy(array)
while len(cp_array) >= 2:
max_item = max(cp_array[0], cp_array[-1])
min_item = min(cp_array[0], cp_array[-1])
player1.append(max_item)
player2.append(min_item)
cp_array.remove(max_item)
cp_array.remove(min_item)
print(max(sum(player1), sum(player2)))
# find out if any combination of elements in an array sums up to target with time complexity O(n*target)
# the algorithm is similar to the knapsack algorithm used to build the dp table
import numpy as np
test_array = [5,4,1,2,7,8,3]
def sum_subset(array, target):
n = len(array)
dp = np.zeros([n+1, target+1], dtype = int)
    dp[:, 0] = 1 # it is crucial to start with True for reaching sum 0 (the empty subset)
    for i in range(1, n+1):
        for t in range(1, target+1):
            curr_val = array[i-1]
            if curr_val <= t:
                dp[i][t] = dp[i-1][t] or dp[i-1][t-curr_val]
            else:
                dp[i][t] = dp[i-1][t]
if dp[n][target] == 1:
print('target is reachable')
print(dp)
set_of_items = []
# Start from last element and percolate up
i = n
currSum = target
while (i > 0 and currSum > 0):
if (dp[i - 1][currSum]): # if previous items already add up, then exclude current item and move to the next
i -= 1
elif (dp[i - 1][currSum - array[i - 1]]):# if the previous item is necessary to reach the target, include it
set_of_items.append(array[i-1])
currSum -= array[i-1]
i -= 1
return set_of_items
sum_subset(test_array, 23)
# solution is [8, 7, 2, 1, 5]
# Recursive solution of the above problem would be very costly but it will give all combinations to the solution:
# NOT recommended for large size problems!
def find_subset_to_reach_target(array, target, result = None, subset = None):
if result == None: result = []
if subset == None: subset = []
if sum(subset) == target: result.append(subset)
if sum(subset) > target: return
for i in range(len(array)):
remaining = array[:i+1]
if array[i] not in subset: # make sure each element is included only once
find_subset_to_reach_target(remaining, target, result, subset + [array[i]])
return result
# return [n for i,n in enumerate(result) if n not in result[:i]] # avoid repetitions if any in the original array
find_subset_to_reach_target([5,4,1,2,7,8,3], 23)
# Dynamic programming solution for equal sum sub-sets partition problem with time complexity: O(n*k)
# Same algorithm with finding if any combination of sum of some elements in a list is equal to the target.
arr1 = [5,4,1,2,5,2,3]
import numpy as np
def print_two_equal_sum_subsets(arr):
n = len(arr)
    if sum(arr) % 2 == 1: return False # an odd total sum can't be partitioned into two equal-sum subsets
k = sum(arr) // 2 # the sum of both subsets will be k
# dp[i][j] = 1 if there is a subset of elements in first i elements of array that has sum equal to j.
memo_table = np.zeros((n + 1, k + 1))
for i in range(n + 1): # TRUE if the array has 0 elements
memo_table[i][0] = 1
# check if the any combination of elements until index i, is equal to curr_sum
for i in range(1, n + 1): # traverse each element
for currSum in range(1, k + 1): # for each sum subset value
if arr[i-1] <= currSum: # if current item can fit to the sum
memo_table[i][currSum] = memo_table[i-1][currSum] or memo_table[i-1][currSum - arr[i-1]]
else:
memo_table[i][currSum] = memo_table[i-1][currSum]
# If the answer needed was a TRUE or FALSE, we could just return the dp[n][k] value
print(memo_table) # view the memo table
# If partition is not possible, return False
if memo_table[n][k] == 0: return False
# partition part requires creating two sets and separating the items list
set1, set2 = [], []
# Start from last element and percolate up
i = n
currSum = k
while (i > 0 and currSum >= 0):
if (memo_table[i - 1][currSum]):
set1.append(arr[i-1])
print('item: {} goes to set 1, see indices {} and {} is 1'.format(arr[i-1], i-1, currSum))
elif (memo_table[i - 1][currSum - arr[i - 1]]):
print('item: {} goes to set 2, see indices {} and {} is 0'.format(arr[i-1], i-1, currSum))
set2.append(arr[i-1])
currSum -= arr[i-1]
i -= 1
return set1, set2
# arr1 = [5,4,1,2]
print_two_equal_sum_subsets(arr1)
# Rod cutting profit optimization
# Very similiar to finding minimum number of coins to reach a target change algorithm
import numpy as np
price_list = [1, 5, 8, 9, 10, 17, 17, 20]
def rod_cutting_max_profit(array):
n = len(array)
dp = np.copy(array)
dp = np.insert(dp, 0, 0)
for i in range(len(dp)):
for j in range(i):
dp[i] = max(dp[i], dp[i-j] + dp[j])
print(dp) # to view the dp
return dp[n]
rod_cutting_max_profit(price_list)
# Alternative Solution:
# def rod_cutting_price_optimization(pricing_list):
# n = len(pricing_list)
# known_results = np.copy(pricing_list) # np.copy is a deep copy by default
# for i in range(n):
# for j in range(i): # since each cut is only 1 inches, check the length
# known_results[i] = max(known_results[i-j-1] + known_results[j], known_results[i])
# print(known_results) # view the array build up
# return known_results[n-1] # view the whole list for all optinal solutions
# rod_cutting_price_optimization(price_list)
# Word Break Problem with Hashing:
test1 = "bunniesareawesome"
test2 = "bunniesadfadfareawesome"
list_of_substr = ["bunn", "a" , "awesome", "ies", "re"]
# create a function that takes the string and the substring list, outputs if substrings are all found in the string
def word_break(string, words):
n = len(string)
# create a memo table
memo = {}
for i in range(n): # traverse the string,
for j in range(i): # traverse each letter until i
if string[j:i+1] in words: # include all indices
memo[string[j:i+1]] = 'True'
print(memo.keys(), words)
for i in words:
if i not in memo.keys():
return False
return True
print(word_break(test1, list_of_substr))
print(word_break(test2, list_of_substr))
# Alternative Solution:
# string = "bunniesadfadfareawesome"
# words = ["bunn", "a" , "awesome", "ies", "re"]
# # create a function that takes the string and the substring list, outputs if substrings are all found in the string
# def words_in_a_string(string, word_list):
# # create a set
# checked_words = set()
# word_list = set(word_list)
# n = len(string)
# for i in range(1, n+1): # traverse the string, index i is starting from 1
# for j in range(i): # traverse each letter until i
# if string[j:i] in word_list and string[j:i] not in checked_words:
# checked_words.add(string[j:i])
# return checked_words == word_list
# print(words_in_a_string(string, words))
```
| github_jupyter |
# Cross-validation
### Goal of this presentation
Identify ways to keep a predictive model from losing the generality it needs for as-yet-unseen predictions, in case the validation scheme itself introduces a bias. The dataset and the variable dictionary can be downloaded from https://www.kaggle.com/hellbuoy/car-price-prediction
## Building a model from scratch
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('CarPrice_Assignment.csv', index_col=0)
df.head()
df.info()
df.describe()
sns.boxplot(df['price'])
sns.distplot(df['price'])
num_col =[]
for col in df.columns:
if df[col].dtypes != np.object:
num_col.append(col)
sns.pairplot(df[num_col])
num_col
X = df[num_col].drop('price',axis=1)
y = df['price']
X.head()
y.head()
from sklearn import svm
regr = svm.SVR()
regr.fit(X,y)
```
## Time to deploy! :D
<img src="img/datasiens.jpg" width=300 height=300 />
### We ALWAYS need to split into train and test
### 1. Hold Out
The most common way to think about validating the model is the Train/Test split
<img src="img/hold_out.png" width=500 height=500 />
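A minimal sketch of a hold-out split (a hypothetical helper doing a plain front/back cut, not any library routine):

```python
def hold_out_split(data, train_frac=0.7):
    # the first train_frac of the rows train the model, the rest evaluate it
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

train, test = hold_out_split(list(range(10)))
print(len(train), len(test))  # 7 3
```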
### 2. Random sampling
A more robust way to remove the bias of splitting train/test in the order the data happen to be presented is to make the split at random. A range of 70-80% for Train and 20-30% for Test is typically used
<img src="img/random_sample.png" width=500 height=500 />
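A sketch of a random split over row indices (a hypothetical helper, not `train_test_split` itself; the `seed` argument stands in for `random_state`):

```python
import random

def random_split_indices(n, test_frac=0.3, seed=0):
    # shuffling first removes any bias from the order the rows arrived in
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    cut = int(n * (1 - test_frac))
    return idx[:cut], idx[cut:]

train_idx, test_idx = random_split_indices(10)
```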
### 3. K-folds
An average error is computed over the k splits performed, in order to estimate more generally how well the model is performing. This check is computationally very cheap, and a model that performs well regardless of how the data are split can be considered a robust model.
<img src="img/kfold.png" width=500 height=500 />
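A sketch of how the k index splits can be produced by hand (a hypothetical helper; `cross_val_score` below does this internally):

```python
def kfold_indices(n_samples, k):
    # spread the samples over k contiguous folds as evenly as possible;
    # each fold serves once as validation while the rest train
    folds = []
    start = 0
    for fold in range(k):
        size = n_samples // k + (1 if fold < n_samples % k else 0)
        val_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((train_idx, val_idx))
        start += size
    return folds

folds = kfold_indices(10, 5)
```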
### 4. Bootstrapping
<img src="img/bootstrap.png" width=500 height=500 />
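Bootstrapping trains on a sample drawn *with replacement* and can validate on the rows that were left out (the 'out-of-bag' rows). A minimal sketch (a hypothetical helper):

```python
import random

def bootstrap_sample(n, seed=0):
    # draw n row indices with replacement; rows never drawn form the
    # out-of-bag validation set (roughly 1/e of the rows on average)
    rng = random.Random(seed)
    in_bag = [rng.randrange(n) for _ in range(n)]
    out_of_bag = [i for i in range(n) if i not in set(in_bag)]
    return in_bag, out_of_bag

in_bag, oob = bootstrap_sample(100)
```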
## Practical application: Random Split & K-Folds
```
from sklearn.ensemble import RandomForestRegressor
tree = RandomForestRegressor()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
from sklearn.model_selection import cross_val_score
np.average(cross_val_score(tree, X, y, cv=5))
np.std(cross_val_score(tree, X, y, cv=5))
```
### NOW WHAT? ? ? ? ?
<img src="img/ohno.jpg" width=500 height=500 />
```
cat_col =[]
for col in df.columns:
if df[col].dtypes == np.object:
cat_col.append(col)
for col in df[cat_col].columns:
sns.boxplot(x= df[col], y=df['price'])
plt.show()
```
## References
1. Introduction to Statistical Learning, Chap. 5
2. https://www.youtube.com/watch?v=fSytzGwwBVw
3. https://www.kaggle.com/hellbuoy/car-price-prediction
| github_jupyter |
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import os
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['axes.labelsize'] = 16
plt.rcParams['xtick.labelsize'] = 16
plt.rcParams['ytick.labelsize'] = 16
from pathlib import Path
def mkdir_secure(path):
if not os.path.exists(path):
os.makedirs(path)
dpi = 300
pids = ['BK7610',
'BU4707',
'CC6740',
'DC6359',
'DK3500',
'HV0618',
'JB3156',
'JR8022',
'MC7070',
'MJ8002',
'PC6771',
'SA0297',
'SF3079']
pid = pids[3]
no_gaps_path = os.path.join('..','data','interim','no_gaps')
fig_path = os.path.join('..','reports','figures')
```
# 1. Accelerometer
```
acc_data_path = os.path.join('..', 'data', 'interim', 'acc_data')
df_acc = pd.read_csv(os.path.join(acc_data_path, f'{pid}.csv'), dtype={'x': np.float32, 'y': np.float32, 'z': np.float32})
acc_fig_path = os.path.join('..','figs','acc')
mkdir_secure(acc_fig_path)
df_acc['timestamp'] = pd.to_datetime(df_acc['timestamp'],unit='ms')
ax = df_acc.plot.scatter(x='timestamp',y='x',grid=True)
ax.set_xlim(df_acc.timestamp[0],df_acc.timestamp.iloc[-1])
plt.xticks(rotation=25)
plt.savefig(os.path.join(fig_path,f'{pid}_acc_x.png'),dpi=dpi)
# ax = df_acc.plot.scatter(x='timestamp',y='y',c= 'darkmagenta',grid=True)
# ax.set_xlim(df_acc.timestamp[0],df_acc.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_acc_y.png'),dpi=dpi)
# ax = df_acc.plot.scatter(x='timestamp',y='z',c= 'orangered',grid=True)
# ax.set_xlim(df_acc.timestamp[0],df_acc.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_acc_z.png'),dpi=dpi)
plt.show()
# df_acc.plot(x='timestamp',y=['x','y','z'],grid=True)
# plt.savefig(os.path.join(fig_path,f'{pid}_acc_xyz.png'),dpi=dpi)
# plt.show()
```
# 2.1 TAC Clean
```
tac_data_path = os.path.join('..', 'data', 'interim', 'tac_data')
df_tac = pd.read_csv(os.path.join(tac_data_path, f'{pid}.csv'),parse_dates=['timestamp'])
tac_fig_path = os.path.join('..','figs','tac')
mkdir_secure(tac_fig_path)
df_tac['timestamp'] = pd.to_datetime(df_tac.timestamp,unit='ms')
clean_1st = df_tac['tac_clean'].first_valid_index()
clean_last = df_tac['tac_clean'].last_valid_index()
# x_clean = df_tac[df_tac['tac_clean'].notnull()]
# t_clean = x_clean.tac_clean
# c=t_clean,colormap='viridis'
ax = df_tac.plot.scatter(x='timestamp',y='tac_clean', s = 49,grid=True)
ax.set_xlim(df_tac['timestamp'][clean_1st], df_tac['timestamp'][clean_last])
plt.xticks(rotation=25)
# ax.set_xlabel('Time')
# ax.set_ylabel('TAC Level')
plt.savefig(os.path.join(fig_path,f'{pid}_tac_clean.png'),dpi=dpi)
```
# 2.2 TAC Raw
```
# raw_1st = df_tac['tac_raw'].first_valid_index()
# raw_last = df_tac['tac_raw'].last_valid_index()
# x_raw = df_tac[df_tac['tac_raw'].notnull()]
# t_raw = x_raw.tac_raw
#, c=t_raw, colormap='viridis'
# ax = x_raw.plot.scatter(x='timestamp',y='tac_raw', c='darkmagenta', s = 49, grid=True)
# ax.set_xlim(df_tac['timestamp'][raw_1st], df_tac['timestamp'][raw_last])
# plt.savefig(os.path.join(fig_path,f'{pid}_tac_raw.png'),dpi=dpi)
```
# 3.1 Merged And Resampled (ACC)
```
full_data_cut_path = os.path.join('..', 'data', 'interim', 'full_data_resampled')
df_merge = pd.read_feather(os.path.join(full_data_cut_path,f'{pid}.feather'))
dfm1 = df_merge.copy()
# dfm1['timestamp'] = dfm1['timestamp'].dt.hour
merged_fig_path = os.path.join('..','figs','merged')
mkdir_secure(merged_fig_path)
ax = dfm1.plot.scatter(x='timestamp',y='x',grid=True)
ax.set_xlim(dfm1.timestamp[0],dfm1.timestamp.iloc[-1])
plt.xticks(rotation=25)
plt.savefig(os.path.join(fig_path,f'{pid}_merged_x.png'),dpi=dpi)
# ax = dfm1.plot.scatter(x='timestamp',y='y',c= 'darkmagenta',grid=True)
# ax.set_xlim(dfm1.timestamp[0],dfm1.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_merged_y.png'),dpi=dpi)
# ax = dfm1.plot.scatter(x='timestamp',y='z',c= 'orangered',grid=True)
# ax.set_xlim(dfm1.timestamp[0],dfm1.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_merged_z.png'),dpi=dpi)
plt.show()
# dfm1.plot(x='timestamp',y=['x','y','z'],grid=True)
# plt.savefig(os.path.join(fig_path,f'{pid}_merged_xyz.png'),dpi=dpi)
# plt.show()
```
# 3.2 Merged And Resampled (TAC Clean)
```
x_1st = dfm1['tac_clean'].first_valid_index()
x_last = dfm1['tac_clean'].last_valid_index()
# x_nn = dfm1[dfm1['tac_clean'].notnull()]
ax = dfm1.plot.scatter(x='timestamp',y='tac_clean',grid=True)
ax.set_xlim(dfm1.timestamp[x_1st],dfm1.timestamp[x_last])
plt.xticks(rotation=25)
plt.savefig(os.path.join(fig_path,f'{pid}_merged_clean.png'),dpi=dpi)
```
# 3.2 Merged And Resampled (TAC Raw)
```
# x_1st = dfm1['tac_raw'].first_valid_index()
# x_last = dfm1['tac_raw'].last_valid_index()
# x_nn = dfm1[dfm1['tac_raw'].notnull()]
# ax = dfm1.plot.scatter(x='timestamp',y='tac_raw',c='darkmagenta',grid=True)
# ax.set_xlim(dfm1.timestamp[x_1st],dfm1.timestamp[x_last])
# plt.savefig(os.path.join(fig_path,f'{pid}_merged_raw.png'),dpi=dpi)
```
# 4.1 Interpolated Resampled, Gaps Removed (ACC)
```
part = 5
no_gaps_path_resampled = os.path.join('..','data','interim','no_gaps_resampled')
df_final = pd.read_feather(os.path.join(no_gaps_path_resampled, f'{pid}_{part}.feather'))
full_fig_path = os.path.join('..','figs','full')
mkdir_secure(full_fig_path)
ax = df_final.plot.scatter(x='timestamp',y='x',grid=True)
ax.set_xlim(df_final.timestamp[0],df_final.timestamp.iloc[-1])
plt.xticks(rotation=25)
plt.savefig(os.path.join(fig_path,f'{pid}_full_x.png'),dpi=dpi)
# ax = df_final.plot.scatter(x='timestamp',y='y',grid=True,c= 'darkmagenta')
# ax.set_xlim(df_final.timestamp[0],df_final.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_full_y.png'),dpi=dpi)
# ax = df_final.plot.scatter(x='timestamp',y='z',grid=True,c= 'orangered')
# ax.set_xlim(df_final.timestamp[0],df_final.timestamp.iloc[-1])
# plt.savefig(os.path.join(fig_path,f'{pid}_full_z.png'),dpi=dpi)
plt.show()
# ax = df_final.plot(x='timestamp',y=['x','y','z'],grid=True)
# plt.savefig(os.path.join(fig_path,f'{pid}_full_xyz.png'),dpi=dpi)
```
# 4.2 Interpolated Resampled, Gaps Removed (TAC Clean)
```
cl_1st = df_final['tac_clean'].first_valid_index()
cl_last = df_final['tac_clean'].last_valid_index()
# ,c=df_final['tac_clean'],colormap='viridis'
ax = df_final.plot.scatter(x='timestamp',y='tac_clean',grid=True)
ax.set_xlim(df_final.timestamp[cl_1st],df_final.timestamp[cl_last])
plt.xticks(rotation=25)
plt.savefig(os.path.join(fig_path,f'{pid}_full_clean.png'),dpi=dpi)
plt.show()
```
# 4.3 Interpolated Resampled, Gaps Removed (TAC Raw)
```
# rw_1st = df_final['tac_raw'].first_valid_index()
# rw_last = df_final['tac_raw'].last_valid_index()
# ax = df_final.plot.scatter(x='timestamp',y='tac_raw',c='purple',grid=True)
# ax.set_xlim(df_final.timestamp[rw_1st],df_final.timestamp[rw_last])
# plt.savefig(os.path.join(fig_path,f'{pid}_full_raw.png'),dpi=dpi)
# plt.show()
```
```
import pandas as pd
import numpy as np
import random as rnd
from sklearn.model_selection import KFold, cross_val_score
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
train_=pd.read_csv('../train_allcols.csv')
validate_=pd.read_csv('../validate_allcols.csv')
#test=pd.read_csv('../testwDSM.csv')
train_.shape, validate_.shape, #test.shape
train = train_.query('DSMCRIT < 14')
validate = validate_.query('DSMCRIT < 14')
#print train['DSMCRIT'].value_counts()
print(train.shape)
#alcohol
#print train['DSMCRIT'].value_counts() / train['DSMCRIT'].count()
#print train['SUB1'].value_counts() / train['SUB1'].count()
#train.query('SUB1 == 4')['DSMCRIT'].value_counts() / train.query('SUB1 == 4')['DSMCRIT'].count()
#train.describe()
train = train.sample(10000)
validate = validate.sample(3000)
train.shape, #validate.shape, #validate.head(2)
#train = train.query('SUB1 <= 10').query('SUB2 <= 10')
#validate = validate.query('SUB1 <= 10').query('SUB2 <= 10')
drop_list = ['DSMCRIT', #'NUMSUBS'
]
drop_list_select = ['RACE', 'PREG', 'ARRESTS', 'PSYPROB', 'DETNLF', 'ETHNIC', 'MARSTAT', 'GENDER', 'EDUC'
,'LIVARAG', 'EMPLOY', 'SUB3']
retain_list = ['RACE','PCPFLG','PRIMINC','LIVARAG','BENZFLG','HLTHINS','GENDER','ROUTE3','PRIMPAY',
'MARSTAT','PSYPROB','ROUTE2','EMPLOY','SUB2','FRSTUSE3','FREQ3','FRSTUSE2','OTHERFLG',
'EDUC','FREQ2','FREQ1','YEAR',
'PSOURCE','DETCRIM','DIVISION','REGION','NOPRIOR','NUMSUBS','ALCDRUG',
'METHUSE','FRSTUSE1','AGE','COKEFLG','OPSYNFLG','IDU','SERVSETA','ROUTE1','MARFLG',
'MTHAMFLG','HERFLG',
'ALCFLG','SUB1']
X_train = train[retain_list]
#X_train = train.drop(drop_list + drop_list_select, axis=1)
Y_train = train["DSMCRIT"]
X_validate = validate[retain_list]
Y_validate = validate["DSMCRIT"]
#X_test = test.drop(drop_list, axis=1)
X_train.shape, #X_validate.shape, #X_test.shape
print(X_train.columns.tolist())
from sklearn.feature_selection import SelectKBest, SelectPercentile
from sklearn.feature_selection import f_classif,chi2
#Selector_f = SelectPercentile(f_classif, percentile=25)
Selector_f = SelectKBest(f_classif, k=10)
Selector_f.fit(X_train,Y_train)
zipped = zip(X_train.columns.tolist(),Selector_f.scores_)
ans = sorted(zipped, key=lambda x: x[1])
for n, s in ans:
    print('F-score: %3.2f\tfor feature %s' % (s, n))
#X_train= SelectKBest(f_classif, k=10).fit_transform(X_train, Y_train)
#one hot
from sklearn import preprocessing
# 1. INSTANTIATE
enc = preprocessing.OneHotEncoder()
# 2. FIT
enc.fit(X_train)
# 3. Transform
onehotlabels = enc.transform(X_train).toarray()
X_train = onehotlabels
onehotlabels = enc.transform(X_validate).toarray()
X_validate = onehotlabels
X_train.shape, #X_validate.shape
#kfold
kf = 3
# Logistic Regression
logreg = LogisticRegression(n_jobs=-1)
logreg.fit(X_train, Y_train)
#Y_pred = logreg.predict(X_test)
l_acc_log = cross_val_score(logreg, X_train, Y_train, cv=kf)
acc_log = round(np.mean(l_acc_log), 3)
l_acc_log = ['%.3f' % elem for elem in l_acc_log]
print(l_acc_log)
print(acc_log)
# Random Forest (slow)
random_forest = RandomForestClassifier(n_estimators=200, max_depth=20, n_jobs=-1)
random_forest.fit(X_train, Y_train)
#Y_pred = random_forest.predict(X_test)
l_acc_random_forest = cross_val_score(random_forest, X_train, Y_train, cv=kf)
acc_random_forest = round(np.mean(l_acc_random_forest), 3)
l_acc_random_forest = ['%.3f' % elem for elem in l_acc_random_forest]
print(l_acc_random_forest)
print(acc_random_forest)
# Linear SVC
linear_svc = LinearSVC(C=1.0)
linear_svc.fit(X_train, Y_train)
#Y_pred = linear_svc.predict(X_test)
l_acc_linear_svc = cross_val_score(linear_svc, X_train, Y_train, cv=kf)
acc_linear_svc = round(np.mean(l_acc_linear_svc), 3)
l_acc_linear_svc = ['%.3f' % elem for elem in l_acc_linear_svc]
print(l_acc_linear_svc)
print(acc_linear_svc)
print('predict-sub2-woflags-newsplit-sample20000')
models = pd.DataFrame({
'Model': ['Logistic Regression',
'Random Forest','Linear SVC'],
'Cross Validation': [l_acc_log,
l_acc_random_forest, l_acc_linear_svc],
'Cross Validation Mean': [acc_log,
acc_random_forest, acc_linear_svc]
})
print(models.sort_values(by='Cross Validation Mean', ascending=False))
import matplotlib.pyplot as plt
import seaborn as sns
Y_pred = random_forest.predict(X_validate)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_validate, Y_pred, labels=[3,4,5,6,7,8,9,10])
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = cm #confusion_matrix(y_test, Y_pred)
#class_names = ["ANXIETY","DEPRESS","SCHIZOPHRENIA","BIPOLAR","ATTENTION DEFICIT"]
class_names = [3,4,5,6,7,8,9,10]
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, class_names,
title='Confusion Matrix, without normalization')
#plt.savefig('cnf matrix', dpi=150)
#plt.show()
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized Confusion Matrix')
#plt.figure(figsize=(16,8))
#plt.savefig('cnf matrix norm', dpi=150)
plt.show()
print(X_validate.shape, Y_pred.shape, Y_validate.shape)
print(round(random_forest.score(X_validate, Y_validate) * 100, 2))
```
| github_jupyter |
```
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
```
# Absolute camera orientation given set of relative camera pairs
This tutorial showcases the `cameras`, `transforms` and `so3` API.
The problem we deal with is defined as follows:
Given an optical system of $N$ cameras with extrinsics $\{g_1, ..., g_N | g_i \in SE(3)\}$, and a set of relative camera positions $\{g_{ij} | g_{ij}\in SE(3)\}$ that map between coordinate frames of randomly selected pairs of cameras $(i, j)$, we search for the absolute extrinsic parameters $\{g_1, ..., g_N\}$ that are consistent with the relative camera motions.
More formally:
$$
g_1, ..., g_N =
{\arg \min}_{g_1, ..., g_N} \sum_{g_{ij}} d(g_{ij}, g_i^{-1} g_j),
$$,
where $d(g_i, g_j)$ is a suitable metric that compares the extrinsics of cameras $g_i$ and $g_j$.
Visually, the problem can be described as follows. The picture below depicts the situation at the beginning of our optimization. The ground truth cameras are plotted in purple while the randomly initialized estimated cameras are plotted in orange:

Our optimization seeks to align the estimated (orange) cameras with the ground truth (purple) cameras, by minimizing the discrepancies between pairs of relative cameras. Thus, the solution to the problem should look as follows:

In practice, the camera extrinsics $g_{ij}$ and $g_i$ are represented using objects from the `SfMPerspectiveCameras` class initialized with the corresponding rotation and translation matrices `R_absolute` and `T_absolute` that define the extrinsic parameters $g = (R, T); R \in SO(3); T \in \mathbb{R}^3$. In order to ensure that `R_absolute` is a valid rotation matrix, we represent it using an exponential map (implemented with `so3_exp_map`) of the axis-angle representation of the rotation `log_R_absolute`.
Note that the solution to this problem could only be recovered up to an unknown global rigid transformation $g_{glob} \in SE(3)$. Thus, for simplicity, we assume knowledge of the absolute extrinsics of the first camera $g_0$. We set $g_0$ as a trivial camera $g_0 = (I, \vec{0})$.
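For intuition about the exponential map mentioned above, here is a standalone NumPy sketch of Rodrigues' formula, which maps an axis-angle vector to a rotation matrix; in the tutorial itself this role is played by `so3_exp_map`, operating on batches of tensors:

```python
import numpy as np

def so3_exp(log_R):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(log_R)
    if theta < 1e-12:
        return np.eye(3)
    k = log_R / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis
R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R, 3))
```

The output of this map is always a valid rotation matrix (orthogonal, determinant 1), which is exactly why `log_R_absolute` can be optimized freely.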
## 0. Install and Import Modules
Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:
```
import os
import sys
import torch
need_pytorch3d = False
try:
    import pytorch3d
except ModuleNotFoundError:
    need_pytorch3d = True
if need_pytorch3d:
    if torch.__version__.startswith("1.9") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        version_str = "".join([
            f"py3{sys.version_info.minor}_cu",
            torch.version.cuda.replace(".", ""),
            f"_pyt{torch.__version__[0:5:2]}"
        ])
        !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
    else:
        # We try to install PyTorch3D from source.
        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
        !tar xzf 1.10.0.tar.gz
        os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
# imports
import torch
from pytorch3d.transforms.so3 import (
so3_exp_map,
so3_relative_angle,
)
from pytorch3d.renderer.cameras import (
SfMPerspectiveCameras,
)
# add path for demo utils
import sys
import os
sys.path.append(os.path.abspath(''))
# set for reproducibility
torch.manual_seed(42)
if torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")
    print("WARNING: CPU only, this will be slow!")
```
If using **Google Colab**, fetch the utils file for plotting the camera scene, and the ground truth camera positions:
```
!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/camera_visualization.py
from camera_visualization import plot_camera_scene
!mkdir data
!wget -P data https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/data/camera_graph.pth
```
OR if running **locally** uncomment and run the following cell:
```
# from utils import plot_camera_scene
```
## 1. Set up Cameras and load ground truth positions
```
# load the SE3 graph of relative/absolute camera positions
camera_graph_file = './data/camera_graph.pth'
(R_absolute_gt, T_absolute_gt), \
(R_relative, T_relative), \
relative_edges = \
torch.load(camera_graph_file)
# create the relative cameras
cameras_relative = SfMPerspectiveCameras(
R = R_relative.to(device),
T = T_relative.to(device),
device = device,
)
# create the absolute ground truth cameras
cameras_absolute_gt = SfMPerspectiveCameras(
R = R_absolute_gt.to(device),
T = T_absolute_gt.to(device),
device = device,
)
# the number of absolute camera positions
N = R_absolute_gt.shape[0]
```
## 2. Define optimization functions
### Relative cameras and camera distance
We now define two functions crucial for the optimization.
**`calc_camera_distance`** compares a pair of cameras. This function is important as it defines the loss that we are minimizing. The method utilizes the `so3_relative_angle` function from the SO3 API.
**`get_relative_camera`** computes the parameters of a relative camera that maps between a pair of absolute cameras. Here we utilize the `compose` and `inverse` class methods from the PyTorch3D Transforms API.
```
def calc_camera_distance(cam_1, cam_2):
    """
    Calculates the divergence of a batch of pairs of cameras cam_1, cam_2.
    The distance is composed of the cosine of the relative angle between
    the rotation components of the camera extrinsics and the l2 distance
    between the translation vectors.
    """
    # rotation distance
    R_distance = (1.-so3_relative_angle(cam_1.R, cam_2.R, cos_angle=True)).mean()
    # translation distance
    T_distance = ((cam_1.T - cam_2.T)**2).sum(1).mean()
    # the final distance is the sum
    return R_distance + T_distance

def get_relative_camera(cams, edges):
    """
    For each pair of indices (i,j) in "edges" generate a camera
    that maps from the coordinates of the camera cams[i] to
    the coordinates of the camera cams[j]
    """
    # first generate the world-to-view Transform3d objects of each
    # camera pair (i, j) according to the edges argument
    trans_i, trans_j = [
        SfMPerspectiveCameras(
            R = cams.R[edges[:, i]],
            T = cams.T[edges[:, i]],
            device = device,
        ).get_world_to_view_transform()
        for i in (0, 1)
    ]
    # compose the relative transformation as g_i^{-1} g_j
    trans_rel = trans_i.inverse().compose(trans_j)
    # generate a camera from the relative transform
    matrix_rel = trans_rel.get_matrix()
    cams_relative = SfMPerspectiveCameras(
        R = matrix_rel[:, :3, :3],
        T = matrix_rel[:, 3, :3],
        device = device,
    )
    return cams_relative
```
## 3. Optimization
Finally, we start the optimization of the absolute cameras.
We use SGD with momentum and optimize over `log_R_absolute` and `T_absolute`.
As mentioned earlier, `log_R_absolute` is the axis angle representation of the rotation part of our absolute cameras. We can obtain the 3x3 rotation matrix `R_absolute` that corresponds to `log_R_absolute` with:
`R_absolute = so3_exp_map(log_R_absolute)`
```
# initialize the absolute log-rotations/translations with random entries
log_R_absolute_init = torch.randn(N, 3, dtype=torch.float32, device=device)
T_absolute_init = torch.randn(N, 3, dtype=torch.float32, device=device)
# furthermore, we know that the first camera is a trivial one
# (see the description above)
log_R_absolute_init[0, :] = 0.
T_absolute_init[0, :] = 0.
# instantiate a copy of the initialization of log_R / T
log_R_absolute = log_R_absolute_init.clone().detach()
log_R_absolute.requires_grad = True
T_absolute = T_absolute_init.clone().detach()
T_absolute.requires_grad = True
# the mask that specifies which cameras are going to be optimized
# (since we know the first camera is already correct,
# we only optimize over the 2nd-to-last cameras)
camera_mask = torch.ones(N, 1, dtype=torch.float32, device=device)
camera_mask[0] = 0.
# init the optimizer
optimizer = torch.optim.SGD([log_R_absolute, T_absolute], lr=.1, momentum=0.9)
# run the optimization
n_iter = 2000 # fix the number of iterations
for it in range(n_iter):
    # re-init the optimizer gradients
    optimizer.zero_grad()
    # compute the absolute camera rotations as
    # an exponential map of the logarithms (=axis-angles)
    # of the absolute rotations
    R_absolute = so3_exp_map(log_R_absolute * camera_mask)
    # get the current absolute cameras
    cameras_absolute = SfMPerspectiveCameras(
        R = R_absolute,
        T = T_absolute * camera_mask,
        device = device,
    )
    # compute the relative cameras as a composition of the absolute cameras
    cameras_relative_composed = \
        get_relative_camera(cameras_absolute, relative_edges)
    # compare the composed cameras with the ground truth relative cameras
    # camera_distance corresponds to $d$ from the description
    camera_distance = \
        calc_camera_distance(cameras_relative_composed, cameras_relative)
    # our loss function is the camera_distance
    camera_distance.backward()
    # apply the gradients
    optimizer.step()
    # plot and print status message
    if it % 200 == 0 or it == n_iter - 1:
        status = 'iteration=%3d; camera_distance=%1.3e' % (it, camera_distance)
        plot_camera_scene(cameras_absolute, cameras_absolute_gt, status)
print('Optimization finished.')
```
## 4. Conclusion
In this tutorial we learnt how to initialize a batch of SfM Cameras, set up loss functions for bundle adjustment, and run an optimization loop.
# Model Template - STEP Data
This notebook outlines the data prep process for inputting the STEP data into a neural network. This code is essentially replicated in our full walkthrough notebooks for an end-to-end deep learning model.
### Import libraries and data
```
import pandas as pd
import numpy as np
import more_itertools
import random
df = pd.read_csv('/Users/N1/Data-2020/10_code/csv/no_aw_no_28.csv') #read in the csv file
df
```
### Label Encode
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Subject_ID'] = le.fit_transform(df['Subject_ID'])
le1 = LabelEncoder()
df['Activity'] = le1.fit_transform(df['Activity'])
activity_name_mapping = dict(zip(le1.classes_, le1.transform(le1.classes_)))
print(activity_name_mapping)
```
### Sliding Window
```
from window_slider import Slider
def make_windows(df, bucket_size, overlap_count):
    window_list = []
    final = pd.DataFrame()
    activity_list = list(df['Activity'].unique())  # list of the four activities
    sub_id_list = list(df['Subject_ID'].unique())  # list of the subject ids
    round_list = list(df['Round'].unique())
    df_list = []
    for i in sub_id_list:
        df_subject = df[df['Subject_ID'] == i]  # isolate a single subject id
        for j in activity_list:
            df_subject_activity = df_subject[df_subject['Activity'] == j]  # isolate by activity
            for k in round_list:
                df_subject_activity_round = df_subject_activity[df_subject_activity['Round'] == k]
                final_df = pd.DataFrame()
                if df_subject_activity_round.empty:
                    pass
                else:
                    # array of arrays: each row holds every reading for one sensor in this slice
                    df_flat = df_subject_activity_round[['ACC1', 'ACC2', 'ACC3', 'TEMP', 'EDA', 'BVP', 'HR', 'Magnitude', 'Subject_ID']].T.values
                    slider = Slider(bucket_size, overlap_count)
                    slider.fit(df_flat)
                    while True:
                        window_data = slider.slide()
                        if slider.reached_end_of_list(): break
                        window_list.append(list(window_data))
                    final_df = final.append(window_list)
                    final_df.columns = [['ACC1', 'ACC2', 'ACC3', 'TEMP', 'EDA', 'BVP', 'HR', 'Magnitude', 'SID']]
                    final_df.insert(9, "Subject_ID", [i]*len(final_df), True)
                    final_df.insert(10, "Activity", [j]*len(final_df), True)
                    final_df.insert(11, "Round", [k]*len(final_df), True)
                    df_list.append(final_df)
                    window_list = []
    final = pd.DataFrame(columns=df_list[0].columns)
    for l in df_list:
        final = final.append(l)
    final.columns = final.columns.map(''.join)
    return final
```
#### Please edit the cell below to change windows
```
window_size = 80
window_overlap = 40
windowed = make_windows(df, window_size, window_overlap)
windowed.head(1)
windowed.shape
```
### Split Train Test
```
ID_list = list(windowed['Subject_ID'].unique())
random.shuffle(ID_list)
train = pd.DataFrame()
test = pd.DataFrame()
#change size of train/test split
train = windowed[windowed['Subject_ID'].isin(ID_list[:45])]
test = windowed[windowed['Subject_ID'].isin(ID_list[45:])]
print(train.shape, test.shape)
X_train = train[['TEMP', 'EDA', 'HR', 'BVP', 'Magnitude', 'ACC1', 'ACC2', 'ACC3', 'SID']]
X_train = X_train.apply(pd.Series.explode).reset_index().drop(['index'], axis = 1)
X_test = test[['TEMP', 'EDA', 'HR', 'BVP', 'Magnitude', 'ACC1', 'ACC2', 'ACC3', 'SID']]
X_test = X_test.apply(pd.Series.explode).reset_index().drop(['index'], axis = 1)
print(X_train.shape, X_test.shape, X_train.shape[0] + X_test.shape[0])
y_train = train['Activity'].values
y_test = test['Activity'].values
print(y_train.shape, y_test.shape, y_train.shape[0] + y_test.shape[0])
```
### One-Hot Encoding Subject_ID
```
X_train['train'] =1
X_test['train'] = 0
combined = pd.concat([X_train, X_test])
combined = pd.concat([combined, pd.get_dummies(combined['SID'])], axis =1)
X_train = combined[combined['train'] == 1]
X_test = combined[combined['train'] == 0]
X_train.drop(["train", "SID"], axis = 1, inplace = True)
X_test.drop(["train", "SID"], axis = 1, inplace = True)
print(X_train.shape, X_test.shape, X_train.shape[0] + X_test.shape[0])
X_train.head(1)
```
### Normalize data
```
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
X_train_SID = X_train.iloc[:, 8:]
X_test_SID = X_test.iloc[:, 8:]
X_train_cont = pd.DataFrame(ss.fit_transform(X_train.iloc[:,0:8]))
X_test_cont = pd.DataFrame(ss.transform(X_test.iloc[:,0:8]))
X_train = pd.concat([X_train_cont, X_train_SID], axis = 1)
X_test = pd.concat([X_test_cont, X_test_SID], axis = 1)
X_train
```
### Reshaping windows as arrays
```
# Convert to transposed arrays
X_test = X_test.T.values
X_train = X_train.T.values
X_test = X_test.astype('float64')
X_train = X_train.astype('float64')
# Reshape to -1, window_size, # features
X_train = X_train.reshape((-1, window_size, X_train.shape[0]))
X_test = X_test.reshape((-1, window_size, X_test.shape[0]))
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```
#### If using TensorFlow, run this cell to convert y_train and y_test to One-Hot
```
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
```
<a href="https://colab.research.google.com/github/kentokura/ox_2x2_retrograde_analysis/blob/main/retrograde_analysis/retrograde_analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## ox_input
| | PREVIOUS_STATES | STATE | NEXT_STATES | RESULT |
|------|-------------------|---------|--------------------------|----------|
| ____ | [''] | ____ | ['o___'] | nan |
| o___ | ['____'] | o___ | ['o__x', 'o_x_', 'ox__'] | nan |
| o__x | ['o___'] | o__x | ['oo_x', 'o_ox'] | nan |
| o_x_ | ['o___'] | o_x_ | ['o_xo', 'oox_'] | nan |
| ox__ | ['o___'] | ox__ | ['oxo_', 'ox_o'] | nan |
| oo_x | ['o__x'] | oo_x | [] | o_win |
| o_ox | ['o__x'] | o_ox | [] | o_win |
| o_xo | ['o_x_'] | o_xo | [] | o_win |
| oox_ | ['o_x_'] | oox_ | [] | o_win |
| oxo_ | ['ox__'] | oxo_ | [] | o_win |
| ox_o | ['ox__'] | ox_o | [] | o_win |
## Preprocessing
Add a column with the number of edges.
## Processing
```
let Q be a queue
initialize the whole dp table to -1
for each node v do
    if deg[v] == 0 then
        dp[v] = 0
        push v onto Q
while Q is not empty do
    pop a node v from Q
    for each edge e ending at v, with start node nv do
        if nv has already been visited then
            continue
        delete edge e from the graph (in practice, just decrement deg[nv])
        if dp[v] = 0 then
            dp[nv] = 1
            push nv onto Q
        else if dp[v] = 1 then
            if deg[nv] = 0 then (once edge deletions bring the out-degree down to 0, the node is confirmed as losing)
                dp[nv] = 0
                push nv onto Q
```
```
use unsolvedDf as the queue
initialize inputDf["RESULT"] to -1
for each inputDf node do
    if node."NEXT_STATES" == [] then
        dp[i] = 0
        push v onto Q
```
The idea is to shave the leaves off the graph, one layer at a time.
- Preprocessing<br>
RESULT already contains "*_win" or "draw"<br> -> Move the nodes matching the states listed in each PREVIOUS_STATES from inputDf to unsolvedDf, and move the node itself to solvedDf.
- Main processing
BFS
Mark the remaining unsolved nodes as draws.
- Postprocessing
## ox_output
| | PREVIOUS_STATES | STATE | NEXT_STATES | RESULT |
|------|-------------------|---------|--------------------------|----------|
| ____ | [''] | ____ | ['o___'] | o_win |
| o___ | ['____'] | o___ | ['o__x', 'o_x_', 'ox__'] | o_win |
| o__x | ['o___'] | o__x | ['oo_x', 'o_ox'] | o_win |
| o_x_ | ['o___'] | o_x_ | ['o_xo', 'oox_'] | o_win |
| ox__ | ['o___'] | ox__ | ['oxo_', 'ox_o'] | o_win |
| oo_x | ['o__x'] | oo_x | [] | o_win |
| o_ox | ['o__x'] | o_ox | [] | o_win |
| o_xo | ['o_x_'] | o_xo | [] | o_win |
| oox_ | ['o_x_'] | oox_ | [] | o_win |
| oxo_ | ['ox__'] | oxo_ | [] | o_win |
| ox_o | ['ox__'] | ox_o | [] | o_win |
```
no node's outcome is known yet
# Preprocessing
for each node do
    if node.edge_count == 0 then
        node.outcome = determine_outcome(node)
        push node onto the queue
while the queue is not empty do
    pop a node from the queue
    for each previous_node of node do
        if previous_node has already been visited then
            continue
        previous_node.edge_count -= 1
        if node.outcome = "loss" then  # if this node is a loss for the player to move, the previous player wins by making that move
            previous_node.outcome = "win"
            push previous_node onto the queue
        else if node.outcome = "win" then
            if previous_node.edge_count = 0 then  # once edge deletions bring the out-degree down to 0, the node is confirmed as losing
                previous_node.outcome = "loss"
                push previous_node onto the queue
```
# Input
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# Clone the GitHub repository into Drive
!git clone https://github.com/kentokura/ox_2x2_retrograde_analysis.git
# Modules for reading the csv
import pandas as pd
from pandas import DataFrame
import numpy as np
from tabulate import tabulate  # module for pretty-printing pandas DataFrames
# Load ox_input
inputDf = pd.read_csv(
    "/content/ox_2x2_retrograde_analysis/retrograde_analysis/ox_input.csv",
    index_col=0,  # use the first column as the index
    encoding="cp932"  # handles Windows-specific characters
)
print(tabulate(inputDf, inputDf.columns,tablefmt='github', showindex=True))
```
# Processing
```
# ast module: parses a stringified list back into a list, "[hoge, huga]" -> [hoge, huga]
import ast
workDf = inputDf
# Add the player to move
workDf["TURN_PLAYER"] = "unsolved"
workDf.loc[workDf["STATE"].str.count("_") % 2 == 0, "TURN_PLAYER"] = "o"
workDf.loc[workDf["STATE"].str.count("_") % 2 == 1, "TURN_PLAYER"] = "x"
# Add a column with the number of NEXT_STATES edges
workDf["NEXT_STATES_NUM"] = workDf["NEXT_STATES"]
for index in workDf.index:
    num = len(ast.literal_eval(workDf.loc[index].NEXT_STATES_NUM))
    workDf.at[index, "NEXT_STATES_NUM"] = num
print(workDf)
```
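The retrograde BFS described above can be sketched on a plain Python dict built from the ox_input table. Outcomes are expressed for the player to move (so a "loss" at a state where x moves means o has won); the unsolvedDf/solvedDf bookkeeping is omitted from this sketch.

```python
from collections import deque

# Successor lists copied from the ox_input table
next_states = {
    '____': ['o___'],
    'o___': ['o__x', 'o_x_', 'ox__'],
    'o__x': ['oo_x', 'o_ox'],
    'o_x_': ['o_xo', 'oox_'],
    'ox__': ['oxo_', 'ox_o'],
    'oo_x': [], 'o_ox': [], 'o_xo': [],
    'oox_': [], 'oxo_': [], 'ox_o': [],
}
# Invert the edges to get PREVIOUS_STATES
prev_states = {s: [] for s in next_states}
for s, nxts in next_states.items():
    for n in nxts:
        prev_states[n].append(s)

out_degree = {s: len(n) for s, n in next_states.items()}
outcome = {}  # state -> "win"/"loss" for the player to move
queue = deque()

# Preprocessing: every terminal state in ox_input is an o win,
# i.e. a loss for the player to move (x)
for s, nxts in next_states.items():
    if not nxts:
        outcome[s] = 'loss'
        queue.append(s)

# Retrograde BFS, exactly as in the pseudocode
while queue:
    s = queue.popleft()
    for p in prev_states[s]:
        if p in outcome:
            continue
        out_degree[p] -= 1
        if outcome[s] == 'loss':
            outcome[p] = 'win'   # moving into a losing state wins
            queue.append(p)
        elif outcome[s] == 'win' and out_degree[p] == 0:
            outcome[p] = 'loss'  # every move leads to a winning state
            queue.append(p)

print(outcome['____'])  # 'win': o, who moves first, wins — matching ox_output
```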
# Output
```
# Write solvedDf out as ox_output
solvedDf.to_csv('/content/ox_2x2_retrograde_analysis/retrograde_analysis/ox_output.csv')
# Check ox_output
solvedDf = pd.read_csv(
    "/content/ox_2x2_retrograde_analysis/retrograde_analysis/ox_output.csv",
    index_col=0,  # use the first column as the index
    encoding="cp932"  # handles Windows-specific characters
)
print(solvedDf)
```
# Tutorial 8 : Comparison of various DMD algorithms
In this tutorial, we perform a thorough comparison of various DMD algorithms available in PyDMD, namely:
- the original DMD algorithm proposed by [Schmid (*J. Fluid Mech.*, 2010)](https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/dynamic-mode-decomposition-of-numerical-and-experimental-data/AA4C763B525515AD4521A6CC5E10DBD4), see `DMD`
- the optimal closed-form solution given by [Héas & Herzet (*arXiv*, 2016)](https://arxiv.org/abs/1610.02962), see `OptDMD`.
For that purpose, different test cases are considered in order to assess the accuracy, the computational efficiency and the generalization capabilities of each method. The system we'll consider throughout this notebook is that of a chain of slightly damped 1D harmonic oscillators with nearest-neighbours coupling. Defining the state-vector as
$$
\mathbf{x} = \begin{bmatrix} \mathbf{q} & \mathbf{p} \end{bmatrix}^T,
$$
where $\mathbf{q}$ is the position and $\mathbf{p}$ is the momentum, our linear system can be written as
$$
\displaystyle \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathbf{q} \\ \mathbf{p} \end{bmatrix} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{K} & -\mathbf{G} \end{bmatrix} \begin{bmatrix} \mathbf{q} \\ \mathbf{p} \end{bmatrix},
$$
with $\mathbf{K}$ and $\mathbf{G}$ being the stiffness and friction matrices, respectively. It must be emphasized that, because we consider $N=50$ identical oscillators, this system does not exhibit low-rank dynamics. It will nonetheless enable us to further highlight the benefits of using the optimal solution proposed by [Héas & Herzet (arXiv, 2016)](https://arxiv.org/abs/1610.02962) as opposed to the other algorithms previously available in PyDMD.
Three different test cases will be considered:
- fitting a DMD model using a single long time-series,
- fitting a DMD model using a short burst,
- fitting a DMD model using an ensemble of short bursts.
For each case, the reconstruction error on the training dataset used to fit the model will be reported along with the error on an ensemble of testing datasets, so as to assess the generalization capabilities of these various DMD models. The following two cells build the discrete linear time invariant (LTI) state space model for our system. It is this particular system, hereafter denoted `dsys`, that will be used throughout this notebook to generate both the training and testing datasets. For more details about SciPy's implementation of LTI systems, interested readers are referred to the [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html#discrete-time-linear-systems) module.
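Before building the full chain of oscillators, the `dlti`/`dlsim` workflow can be sanity-checked on a scalar system (the 0.5 decay rate below is purely illustrative):

```python
import numpy as np
from scipy.signal import dlti, dlsim

# Scalar system x[k+1] = 0.5 x[k], observed directly (C = 1, no feedthrough).
toy = dlti([[0.5]], [[1.0]], [[1.0]], [[0.0]], dt=1.0)

# Simulate 5 steps with zero forcing, starting from x0 = 1.
t, y, x = dlsim(toy, np.zeros((5, 1)), x0=[1.0])
print(y.ravel())  # geometric decay: 1, 0.5, 0.25, 0.125, 0.0625
```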
```
# Import standard functions from numpy.
import numpy as np
from numpy.random import normal
# Import matplotlib and set related parameters.
import matplotlib.pyplot as plt
fig_width = 12
# Import SciPy utility functions for linear dynamical systems.
from scipy.signal import lti
from scipy.signal import dlti, dlsim
# Import standard linear algebra functions from SciPy.
from scipy.linalg import norm
# Import various DMD algorithms available in PyDMD.
from pydmd import DMD, OptDMD
def harmonic_oscillators(N=10, omega=0.1, alpha=0.2, gamma=0.05, dt=1.0):
"""
This function builds the discrete-time model of a chain of N coupled
weakly damped harmonic oscillators. All oscillators are identical to
one another and have a nearest-neighbour coupling.
Parameters
----------
N : integer
The number of oscillators forming the chain (default 10).
omega : float
The natural frequency of the base oscillator (default 0.1).
alpha : float
The nearest-neighbour coupling strength (default 0.2).
gamma : float
The damping parameter (default 0.05).
dt : float
The sampling period for the continuous-to-discrete time conversion (default 1.0).
Returns
-------
dsys : scipy.signal.dlti
The corresponding discrete-time state-space model.
"""
# Miscellaneous imports.
from scipy.sparse import diags, identity, bmat, block_diag
# Build the stiffness matrix.
K_ii = np.ones((N,)) * (omega**2/2.0 + alpha) # Self-coupling.
K_ij = np.ones((N-1,)) * (-alpha/2.0) # Nearest-neighbor coupling.
K = diags([K_ij, K_ii, K_ij], offsets=[-1, 0, 1]) # Assembles the stiffness matrix.
# Build the friction matrix.
G = gamma * identity(N)
# Build the dynamic matrix.
A = bmat([[None, identity(N)], [-K, -G]])
# Build the control matrix.
B = bmat([[0*identity(N)], [identity(N)]])
# Build the observation matrix.
C = identity(2*N)
# Build the feedthrough matrix.
D = bmat([[0*identity(N)], [0*identity(N)]])
# SciPy continuous-time LTI object.
sys = lti(A.toarray(), B.toarray(), C.toarray(), D.toarray())
# Return the discrete-time equivalent.
return sys.to_discrete(dt)
# Get the discrete-time LTI model.
N = 50 # Number of oscillators (each has 2 degrees of freedom so the total size of the system is 2N).
dsys = harmonic_oscillators(N=N) # Build the model.
```
## Case 1 : Using a single long time-series to fit a DMD model
```
# Training initial condition.
x0_train = normal( loc=0.0, scale=1.0, size=(dsys.A.shape[1]) )
# Run simulation to generate dataset.
t, _, x_train = dlsim(dsys, np.zeros((2000, dsys.inputs)), x0=x0_train)
def plot_training_dataset(t, x_train):
"""
This is a simple utility function to plot the time-series forming our training dataset.
Parameters
----------
t : array-like, shape (n_samples,)
The time instants.
x_train : array-like, shape (n_samples, n_dof)
The time-series of our system.
"""
# Setup the figure.
fig, axes = plt.subplots( 1, 2, sharex=True, figsize=(fig_width, fig_width/6) )
# Plot the oscillators' positions.
axes[0].plot( t, x_train[:, :dsys.inputs], alpha=0.5)
# Add decorators.
axes[0].set_ylabel(r"$q_i[k]$")
# Plot the oscillators' velocities.
axes[1].plot( t, x_train[:, dsys.inputs:], alpha=0.5 )
# Add decorators.
axes[1].set( xlim=(t.min(), t.max()), xlabel=r"$k$", ylabel=r"$p_i[k]$" )
return
plot_training_dataset(t, x_train)
```
Let us now fit our models, namely vanilla DMD (Schmid, *J. Fluid Mech.*, 2010) and the closed-form solution DMD (Héas & Herzet, *arXiv*, 2016).
```
def rank_sensitvity(dsys, x_train, n_test=100):
"""
This function uses the generated training dataset to fit DMD and OptDMD models of increasing rank.
It also computes the test error on an ensemble of testing datasets to better estimate the
generalization capabilities of the fitted models.
Parameters
----------
dsys : scipy.signal.dlti
The discrete LTI system considered.
x_train : array-like, shape (n_features, n_samples)
The training dataset.
NOTE : It is transposed compared to the output of dsys.
n_test : int
The number of testing datasets to be generated.
Returns
-------
dmd_train_error : array-like, shape (n_ranks,)
The reconstruction error of the DMD model on the training data.
dmd_test_error : array-like, shape (n_ranks, n_test)
The reconstruction error of the DMD model on the various testing datasets.
optdmd_train_error : array-like, shape (n_ranks,)
The reconstruction error of the OptDMD model on the training data.
optdmd_test_error : array-like, shape (n_ranks, n_test)
The reconstruction error of the OptDMD model on the various testing datasets.
"""
dmd_train_error, optdmd_train_error = list(), list()
dmd_test_error, optdmd_test_error = list(), list()
# Split the training data into input/output snapshots.
y_train, X_train = x_train[:, 1:], x_train[:, :-1]
for rank in range(1, dsys.A.shape[0]+1):
# Fit the DMD model (Schmid's algorithm)
dmd = DMD(svd_rank=rank).fit(x_train)
# Fit the DMD model (optimal closed-form solution)
optdmd = OptDMD(svd_rank=rank, factorization="svd").fit(x_train)
# One-step ahead prediction using both DMD models.
y_predict_dmd = dmd.predict(X_train)
y_predict_opt = optdmd.predict(X_train)
# Compute the one-step ahead prediction error.
dmd_train_error.append( norm(y_predict_dmd-y_train)/norm(y_train) )
optdmd_train_error.append( norm(y_predict_opt-y_train)/norm(y_train) )
# Evaluate the error on test data.
dmd_error, optdmd_error = list(), list()
for _ in range(n_test):
# Test initial condition.
x0_test = normal( loc=0.0, scale=1.0, size=(dsys.A.shape[1]) )
# Run simulation to generate dataset.
t, _, x_test = dlsim(dsys, np.zeros((250, dsys.inputs)), x0=x0_test)
# Split the training data into input/output snapshots.
y_test, X_test = x_test.T[:, 1:], x_test.T[:, :-1]
# One-step ahead prediction using both DMD models.
y_predict_dmd = dmd.predict(X_test)
y_predict_opt = optdmd.predict(X_test)
# Compute the one-step ahead prediction error.
dmd_error.append( norm(y_predict_dmd-y_test)/norm(y_test) )
optdmd_error.append( norm(y_predict_opt-y_test)/norm(y_test) )
# Store the error for rank i DMD.
dmd_test_error.append( np.asarray(dmd_error) )
optdmd_test_error.append( np.asarray(optdmd_error) )
# Complete rank-sensitivity.
dmd_test_error = np.asarray(dmd_test_error)
optdmd_test_error = np.asarray(optdmd_test_error)
dmd_train_error = np.asarray(dmd_train_error)
optdmd_train_error = np.asarray(optdmd_train_error)
return dmd_train_error, dmd_test_error, optdmd_train_error, optdmd_test_error
def plot_rank_sensitivity(dmd_train_error, dmd_test_error, optdmd_train_error, optdmd_test_error):
"""
Simple utility function to plot the results from the rank sensitivity analysis.
Parameters
----------
dmd_train_error : array-like, shape (n_ranks,)
The reconstruction error of the DMD model on the training data.
dmd_test_error : array-like, shape (n_ranks, n_test)
The reconstruction error of the DMD model on the various testing datasets.
optdmd_train_error : array-like, shape (n_ranks,)
The reconstruction error of the OptDMD model on the training data.
optdmd_test_error : array-like, shape (n_ranks, n_test)
The reconstruction error of the OptDMD model on the various testing datasets.
"""
# Generate figure.
fig, axes = plt.subplots( 1, 2, figsize=(fig_width, fig_width/4), sharex=True, sharey=True )
# Misc.
rank = np.arange(1, dmd_test_error.shape[0]+1)
#####
##### TRAINING ERROR
#####
# Plot the vanilla DMD error.
axes[0].plot( rank, dmd_train_error )
# Plot the OptDMD error.
axes[0].plot( rank, optdmd_train_error, ls="--" )
# Add decorators.
axes[0].set(
xlabel=r"Rank of the DMD model", ylabel=r"Normalized error", title=r"Training dataset"
)
axes[0].grid(True)
#####
##### TESTING ERROR
#####
# Plot the vanilla DMD error.
axes[1].plot( rank, np.mean(dmd_test_error, axis=1), label=r"Regular DMD" )
axes[1].fill_between(
rank,
np.mean(dmd_test_error, axis=1) + np.std(dmd_test_error, axis=1),
np.mean(dmd_test_error, axis=1) - np.std(dmd_test_error, axis=1),
alpha=0.25,
)
# Plot the OptDMD error.
axes[1].plot( rank, np.mean(optdmd_test_error, axis=1), ls="--", label=r"Optimal DMD" )
axes[1].fill_between(
rank,
np.mean(optdmd_test_error, axis=1) + np.std(optdmd_test_error, axis=1),
np.mean(optdmd_test_error, axis=1) - np.std(optdmd_test_error, axis=1),
alpha=0.25,
)
# Add decorators.
axes[1].set(
xlim = (0, rank.max()), xlabel=r"Rank of the DMD model",
ylim=(0, 1),
title=r"Testing dataset"
)
axes[1].grid(True)
axes[1].legend(loc=0)
return
# Run the rank-sensitivity analysis.
output = rank_sensitvity(dsys, x_train.T)
# Keep for later use.
long_time_series_optdmd_train, long_time_series_optdmd_test = output[2], output[3]
# Plot the results.
plot_rank_sensitivity(*output)
```
## Case 2 : Using a short burst to fit a DMD model
```
# Training initial condition.
x0_train = normal( loc=0.0, scale=1.0, size=(dsys.A.shape[1]) )
# Run simulation to generate dataset.
t, _, x_train = dlsim(dsys, np.zeros((100, dsys.inputs)), x0=x0_train)
# Plot the corresponding training data.
plot_training_dataset(t, x_train)
# Run the rank-sensitivity analysis.
output = rank_sensitvity(dsys, x_train.T)
# Keep for later use.
short_time_series_optdmd_train, short_time_series_optdmd_test = output[2], output[3]
# Plot the results.
plot_rank_sensitivity(*output)
```
## Case 3 : Fitting a DMD model using an ensemble of trajectories
```
def generate_ensemble_time_series(dsys, n_traj, len_traj):
"""
Utility function to generate a training dataset formed by an ensemble of time-series.
Parameters
-----------
dsys : scipy.signal.dlti
The discrete LTI system considered.
n_traj : int
The number of trajectories forming our ensemble.
len_traj : int
The length of each time-series.
Returns
-------
X : array-like, shape (n_features, n_samples)
The input to the system.
Y : array-like, shape (n_features, n_samples)
The output of the system.
"""
for i in range(n_traj):
# Training initial condition.
x0_train = normal(loc=0.0,scale=1.0,size=(dsys.A.shape[1]))
# Run simulation to generate dataset.
t, _, x = dlsim(dsys, np.zeros((len_traj, dsys.inputs)), x0=x0_train)
# Store the data.
if i == 0:
X, Y = x.T[:, :-1], x.T[:, 1:]
else:
X, Y = np.c_[X, x.T[:, :-1]], np.c_[Y, x.T[:, 1:]]
return X, Y
def rank_sensitvity_bis(dsys, X, Y, n_test=100):
"""
Same as before but for the ensemble training. Note that no DMD model is fitted, only OptDMD.
"""
optdmd_train_error, optdmd_test_error = list(), list()
# Fit a DMD model for each possible rank.
for rank in range(1, dsys.A.shape[0]+1):
# Fit the DMD model (optimal closed-form solution)
optdmd = OptDMD(svd_rank=rank, factorization="svd").fit(X, Y)
# One-step ahead prediction using both DMD models.
y_predict_opt = optdmd.predict(X)
# Compute the one-step ahead prediction error.
optdmd_train_error.append( norm(y_predict_opt-Y)/norm(Y) )
# Evaluate the error on test data.
optdmd_error = list()
for _ in range(n_test):
# Test initial condition.
x0_test = normal(loc=0.0,scale=1.0,size=(dsys.A.shape[1]))
# Run simulation to generate dataset.
t, _, x_test = dlsim(dsys, np.zeros((250, dsys.inputs)), x0=x0_test)
# Split the training data into input/output snapshots.
y_test, X_test = x_test.T[:, 1:], x_test.T[:, :-1]
# One-step ahead prediction using both DMD models.
y_predict_opt = optdmd.predict(X_test)
# Compute the one-step ahead prediction error.
optdmd_error.append( norm(y_predict_opt-y_test)/norm(y_test) )
# Store the error for rank i DMD.
optdmd_test_error.append( np.asarray(optdmd_error) )
# Complete rank-sensitivity.
optdmd_test_error = np.asarray(optdmd_test_error)
optdmd_train_error = np.asarray(optdmd_train_error)
return optdmd_train_error, optdmd_test_error
def plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
):
"""
Same as before for this second rank sensitivity analysis.
"""
# Generate figure.
fig, axes = plt.subplots( 1, 2, figsize=(fig_width, fig_width/4), sharey=True, sharex=True )
# Misc.
rank = np.arange(1, optdmd_train_error.shape[0]+1)
#####
##### TRAINING ERROR
#####
# Training error using a short time-series to fit the model.
axes[0].plot( rank, short_time_series_optdmd_train )
# Training error using a long time-series to fit the model.
axes[0].plot( rank, long_time_series_optdmd_train )
# Training error using an ensemble of short time-series to fit the model.
axes[0].plot( rank, optdmd_train_error )
# Add decorators.
axes[0].set(
xlim=(0, rank.max()), ylim=(0, 1),
xlabel=r"Rank of the DMD model", ylabel=r"Normalized error",
title=r"Training dataset"
)
axes[0].grid(True)
#####
##### TESTING ERROR
#####
# Testing error for the model fitted with a short time-series.
axes[1].plot( rank, np.mean(short_time_series_optdmd_test, axis=1), label=r"Short time-series" )
axes[1].fill_between(
rank,
np.mean(short_time_series_optdmd_test, axis=1) + np.std(short_time_series_optdmd_test, axis=1),
np.mean(short_time_series_optdmd_test, axis=1) - np.std(short_time_series_optdmd_test, axis=1),
alpha=0.25,
)
# Testing error for the model fitted with a long time-series.
axes[1].plot( rank, np.mean(long_time_series_optdmd_test, axis=1), label=r"Long time-series" )
axes[1].fill_between(
rank,
np.mean(long_time_series_optdmd_test, axis=1) + np.std(long_time_series_optdmd_test, axis=1),
np.mean(long_time_series_optdmd_test, axis=1) - np.std(long_time_series_optdmd_test, axis=1),
alpha=0.25,
)
# Testing error for the model fitted using an ensemble of trajectories.
axes[1].plot( rank, np.mean(optdmd_test_error, axis=1), label=r"Ensemble" )
axes[1].fill_between(
rank,
np.mean(optdmd_test_error, axis=1) + np.std(optdmd_test_error, axis=1),
np.mean(optdmd_test_error, axis=1) - np.std(optdmd_test_error, axis=1),
alpha=0.25,
)
# Add decorators.
axes[1].set(
xlim=(0, rank.max()), ylim=(0, 1),
xlabel=r"Rank of the DMD model", title=r"Testing dataset"
)
axes[1].grid(True)
axes[1].legend(loc=0)
return
```
### Case 3.1 : Small ensemble
```
# Number of trajectories and length.
n_traj, len_traj = 10, 10
X, Y = generate_ensemble_time_series(dsys, n_traj, len_traj)
# Run the rank-sensitivity analysis.
optdmd_train_error, optdmd_test_error = rank_sensitvity_bis(dsys, X, Y)
plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
)
```
### Case 3.2 : Large ensemble
```
# Number of trajectories and length.
n_traj, len_traj = 200, 10
X, Y = generate_ensemble_time_series(dsys, n_traj, len_traj)
# Run the rank-sensitivity analysis.
optdmd_train_error, optdmd_test_error = rank_sensitvity_bis(dsys, X, Y)
plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
)
```
## Conclusion
These various tests show that:
- Models fitted with the new `OptDMD` consistently achieve smaller errors than the regular `DMD`, on both the training and testing datasets, regardless of how much data is available.
- When a single time-series is used, both `DMD` and `OptDMD` models tend to overfit the training data. Indeed, a zero reconstruction error can be achieved by a low-rank model on the training data, yet the same model generalizes poorly to new testing data.
- Using an ensemble of bursts (i.e. short time-series) to fit the model appears extremely effective at preventing this overfitting: the model's error on the testing dataset is then of the same order as its error on the training one. Such an ensemble of trajectories is moreover data-efficient: for a given reconstruction error, `OptDMD` models fitted on an ensemble of trajectories actually require less data than models fitted on a single long time-series.
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
!apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Download a file based on its file ID.
#
# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz
file_id = '1AyULTkscFRSlZIJ143yXe7qg8fSNPBp-'
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('util.py')
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Download a file based on its file ID.
#
# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz
file_id = '1IugJe_XgxWtVkim24TlGz36YiUwGX13D'
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('alexnet.py')
import sys
import os.path
import tensorflow as tf
import util as tu
import numpy as np
import alexnet as an
def classify(image,top_k,k_patches,ckpt_path,imagenet_path):
"""Procedure to classify the image given through the command line
Args:
image: path to the image to classify
top_k : integer representing the number of predictions with highest probability
to retrieve
k_patches: number of crops taken from an image and to input to the model
ckpt_path: path to model's tensorflow checkpoint
imagenet_path: path to ILSVRC12 ImageNet folder containing train images,
validation images,annotations and metadata file
"""
wnids, words = tu.load_imagenet_meta(os.path.join(imagenet_path, 'data/meta.mat'))
#taking a few crops from an image
image_patches = tu.read_k_patches(image,k_patches)
x = tf.placeholder(tf.float32,[None,224,224,3])
_,pred = an.classifier(x,dropout = 1.0)
# Average the predictions over the crops
avg_prediction = tf.div(tf.reduce_sum(pred,0),k_patches)
scores,indexes = tf.nn.top_k(avg_prediction, k =top_k)
saver = tf.train.Saver()
with tf.Session(config = tf.ConfigProto()) as sess:
saver.restore(sess,os.path.join(ckpt_path,'alexnet-cnn.ckpt'))
s,i = sess.run([scores,indexes],feed_dict = {x:image_patches})
s,i = np.squeeze(s), np.squeeze(i)
print('AlexNet saw:')
for idx in range(top_k):
print('{} - score: {}'.format(words[i[idx]],s[idx]))
if __name__ == '__main__':
TOP_K = 5
K_CROPS = 5
IMAGENET_PATH = ''  # check path
CKPT_PATH = 'ckpt-alexnet'
image_path = sys.argv[1]
classify(image_path,TOP_K,K_CROPS,CKPT_PATH,IMAGENET_PATH)
```
# Python in Data Science Workshops
***
# Block 1 - Introduction
## Python (1 of 2)
***
# https://github.com/MichalKorzycki/PythonDataScience

***
# Python
The Python language:
- is a dynamic, strongly typed scripting language
- powers sites such as YouTube, Dropbox, Netflix and Instagram
- comes in two "competing" versions - 2.7 (obsolete) and the so-called py3k (3.7, 3.8, 3.9)
- in this lecture we use 3.7, but later versions are fine too
- can run as scripts (standalone programs)
- or as a notebook (what you are looking at right now)
- is a serious programming language




(Image sources: Wikipedia)
```
print ("Hello World")
import sys
print (sys.version)
```

Żródło: https://stackoverflow.blog/2017/09/06/incredible-growth-python/

Źródło: https://www.tiobe.com/tiobe-index/
***
# Lecture Program
## The Python language - introduction (2 sessions)
- ### Basic syntax elements
- ### A working environment for data work – Anaconda, Jupyter
- ### Data structures
- ### Control-flow statements
## Data Wrangling (4 sessions)
- ### Tidy Data – what it is
- ### Data wrangling, munging, tidying - basic operations
- ### The Pandas library
- ### Reading data
- ### Selecting columns and "slicing" data
- ### Data cleaning
- ### Aggregation, grouping
## Data visualization (2 sessions)
- ### Simple plots
- ### Plot configuration, tips and tricks
## External data sources (2 sessions)
- ### The notion of an API and how to use one. JSON
- ### Fetching data yourself
- ### Consuming APIs
- ### Scraping, downloading data from the web
- ### The Scrapy, Beautiful Soup, lxml libraries
- ### Libraries
## Machine Learning (4 sessions)
- ### Regression in ML
- ### Classification in ML
- ### Feature Engineering
- ### Performance metrics, model optimization
- ### Training classifiers
- ### Choosing the best model, cross-validation
- ### The specifics of text data
- ### Basic metrics for text data
---
# Jupyter Notebooks
... because **Jupyter** is *simple* but _**powerful**_
$$\sum_{i=1}^\infty \frac{1}{2^i} = 1$$
And this: $P(A \mid B) = \frac{P(B \mid A)P(A)}{P(B)}$ is _**Bayes'**_ formula
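The same Bayes' rule can of course be evaluated numerically (the probabilities below are illustrative):

```python
# P(A|B) = P(B|A) * P(A) / P(B), with made-up numbers
p_a = 0.01          # prior P(A)
p_b_given_a = 0.9   # likelihood P(B|A)
p_b = 0.05          # evidence P(B)

p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # → 0.18
```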
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(1,100, 100).reshape(-1, 10))
df.columns=[chr(i) for i in range(ord('A'),ord('J')+1)]
df["max"] = df.apply(max,axis=1)
df
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use("dark_background")
x = np.linspace(0, 10, 100)
fig = plt.figure(figsize=(20,4))
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '.');
```
# https://www.anaconda.com/download/
---
# Language syntax
## Variables and their types
```
s = 'Ala ma kota'
s
```
## Variable types
### Basic
- integers (int)
- floating-point numbers (float)
- character strings (str)
- Booleans (True and False)
### Compound
- lists - (list)
- tuples - (tuple)
- dictionaries - (dict)
- ...
```
7+2
7/2
```
### Integer division
```
7//2
```
### Modulo operation (remainder of division)
```
7%2
```
## Python is a dynamic language
```
s = 3
s
s = "Ala ma "
s = s + "kota"
s
n = 3
s = "Ala ma " + n + "koty"
s
```
## Python is STRONGLY typed
But you can always use an explicit conversion
```
s = 'Ala ma ' + str(n) + ' koty '
s
s * 4
"10"
int("10")
```
## String formatting
```
'%d jest liczbą' % 3
'Liczba Pi to jest mniej więcej %.2f' % 3.141526
'%d jest liczbą, i %d też jest liczbą' % (3,7)
str(3) + ' jest liczbą, i ' + str(7) + ' też jest liczbą'
n = 3
m = 11117
f'{n} jest liczbą, i {m} też jest liczbą'
```
## The Boolean type
```
n == 3
n != 3
not n == 3
n == 3 or n != 3
n == 3 and n != 3
if n == 3:
print( "Trzy" )
if n == 4:
print ("Cztery")
else:
print("To nie cztery")
if n == 4:
print ("Cztery")
elif n == 3:
print ("Trzy")
else:
print("Ani trzy ani cztery")
```
## Lists
```
a = [3,5,6,7]
a
a[1]
a[0]
a[0:2]
a[-1]
a[1:-1]
```
```
a[-4]
a[:-1]
a[1:-1]+a[0:-1]
len(a)
for i in a:
print (i)
print(".")
for i in range(len(a)):
print(i)
for i in range(len(a)):
print(a[i])
a.append(8)
a
a + a
a * 3
s
s[0]
" - ".join( ["Ala", "ma", "kota"] )
"".join( ["Ala", "ma", "kota"] )
s2 = '.|.'
s2.join(["Ala", "ma", "kota"] )
```
## Tuples (tuple)
```
t = (1, 2, 3, 4)
t
```
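Tuples behave like lists for reading (indexing, slicing, `len`) but are immutable — a short sketch with illustrative values:

```python
t = (1, 2, 3, 4)
first = t[0]        # indexing works like lists → 1
middle = t[1:3]     # slicing returns a tuple → (2, 3)
a, b, c, d = t      # unpacking into separate variables

try:
    t[0] = 99       # in-place modification is not allowed
except TypeError:
    print("tuples are immutable")
```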
## Functions
```
def dodaj_2(x):
wynik = x + 2
return wynik
dodaj_2(5)
def is_odd(x):
print ("*" * x)
return (x % 2) == 1
is_odd(25)
is_odd(8)
def slownie(n):
jednosci = { 0: "zero", 1: "jeden", 2: "dwa", 3: "trzy", 4: "cztery", 5: "pięć", 6: "sześć", 7: "siedem", 8: "osiem", 9: "dziewięć"}
return jednosci[n]
slownie(6)
def dodaj_2_slownie(n):
wynik = dodaj_2(n)
return slownie(wynik)
dodaj_2_slownie(4)
```
## Dictionaries (dict)
```
m = { 'a': 1, 'b': 2 }
m.keys()
m.values()
m['a']
m['c']
m.get('c', 0)
m = dict( [("a", 1), ("b", 2)] )
m
l = [ "a", "a", "a" ]
l
list(zip( range(len(l)),l ))
li = list(range(len(l)))
li
l
list(zip(li, l))
m = dict(zip( range(len(l)), l))
m
for k in m.keys():
print (k, m[k])
for k in m:
print( k, m[k])
{ (1,2): "a", (1,2,3): "b"}
{ [1,2]: "a", [1,2,3]: "b"}
list(range(12))
l = ["a"] * 7
l
li = list(range(7))
li
list(zip(li,l))
len(l)
```
---
## Exercise 1 - easy
## Exercise 2 - medium
## Exercise 3 - hard
# Literature (free online versions)
## Introductory books
- [Python 101](http://python101.pythonlibrary.org/) – the first 10 chapters are enough to start working with Python effectively
- [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) – a very practical approach to Python. Highly recommended.
- [Python Docs](https://docs.python.org/3/) – the Python documentation
## Books for readers with some experience
- [Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/) – nearly everything a data scientist may need for their work
- [Dive into Python](https://diveintopython3.problemsolving.io/) – an excellent Python book
- [Zanurkuj w Pythonie](https://pl.wikibooks.org/wiki/Zanurkuj_w_Pythonie) – the Polish edition
- [Think Bayes](https://greenteapress.com/wp/think-bayes/) – an introduction to Bayesian statistics
- [Think Stats](https://greenteapress.com/wp/think-stats-2e/) – an introduction to statistics with Python
- [Natural Language Processing in Python](https://www.nltk.org/book/) – an introduction to natural language processing in Python
## Advanced topics
- [The Elements of Statistical Learning](https://web.stanford.edu/~hastie/Papers/ESLII.pdf) – probably the most comprehensive book on Machine Learning
- [Foundations of Statistical NLP](https://nlp.stanford.edu/fsnlp/) – a book on statistical natural language processing
- [Introduction to Information Retrieval](https://nlp.stanford.edu/IR-book/) – the formal foundations of information retrieval
## Recommended reading
- [Peter Norvig - The Unreasonable Effectiveness of Data](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf)
- [Andrew Ng – Machine Learning Yearning](https://www.deeplearning.ai/machine-learning-yearning/)
- [Tidy Data](https://vita.had.co.nz/papers/tidy-data.pdf) - the classic paper on how to shape data into the best form for analysis
```
%autosave 60
%load_ext autoreload
%autoreload 2
%matplotlib inline
import json
import os
import pickle
from collections import Counter, OrderedDict, defaultdict
from copy import deepcopy
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union, cast
import cv2
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import PIL.Image as pil_img
import seaborn as sns
import sklearn as skl
from IPython.display import Image, display
from matplotlib.patches import Rectangle
from matplotlib_inline.backend_inline import set_matplotlib_formats
from tqdm.contrib import tenumerate, tmap, tzip
from tqdm.contrib.bells import tqdm, trange
from geoscreens.consts import (
EXTRACTED_FRAMES_PATH,
FRAMES_METADATA_PATH,
LATEST_DETECTION_MODEL_NAME,
VIDEO_PATH,
)
from geoscreens.data import get_all_geoguessr_split_metadata
from geoscreens.data.metadata import GOOGLE_SHEET_IDS, FramesList
from geoscreens.utils import batchify, load_json, save_json, timeit_context
pd.set_option("display.max_colwidth", None)
pd.set_option("display.max_columns", 15)
pd.set_option("display.max_rows", 50)
# Suitable default display for floats
pd.options.display.float_format = "{:,.2f}".format
plt.rcParams["figure.figsize"] = (12, 10)
# This one is optional -- change graphs to SVG only use if you don't have a
# lot of points/lines in your graphs. Can also just use ['retina'] if you
# don't want SVG.
%config InlineBackend.figure_formats = ["retina"]
set_matplotlib_formats("pdf", "png")
```
* [x] Load in_game frames
* [x] load detections for all videos
* [x] filter to in_game frames
* [ ] crop images
* [ ] ocr cropped images
* [ ] save results
## Functions
## Load in_game Frames
```
df_ingame = pickle.load(open("/shared/gbiamby/geo/segment/in_game_frames_000.pkl", "rb"))
df_url_frames = df_ingame[df_ingame.labels.apply(lambda l: "url" in l)].copy(deep=True)
print(df_ingame.shape, df_url_frames.shape)
df_ingame.video_id.nunique(), df_url_frames.video_id.nunique()
df_url_frames.head(1).T
import operator
import easyocr
def last_index(lst, value):
return len(lst) - operator.indexOf(reversed(lst), value) - 1
reader = easyocr.Reader(["en"])
urls = defaultdict(list)
for i, (idx, row) in tenumerate(df_url_frames.iterrows(), total=len(df_url_frames)):
# if i >= 100:
# break
# print(row)
video_id = row.video_id
url_idx = last_index(row.labels, "url")
# Crop:
img = pil_img.open(row.file_path)
# display(img)
url_area = row.bboxes[url_idx]
url_area = (url_area["xmin"], url_area["ymin"], url_area["xmax"], url_area["ymax"])
img_cropped = img.crop(url_area)
# display(img_cropped)
result = reader.recognize(np.array(img_cropped))
urls[video_id].append({**row.to_dict(), "ocr": result})
# print(result)
```
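The `last_index` helper used in the loop above is worth a standalone sanity check — it finds the position of the *last* occurrence of a value (the `labels` list here is illustrative):

```python
import operator

def last_index(lst, value):
    # reversed(lst) yields items back-to-front, so the first match found by
    # operator.indexOf in that view is the last match in the original list.
    return len(lst) - operator.indexOf(reversed(lst), value) - 1

labels = ["url", "map", "url", "guess"]
print(last_index(labels, "url"))    # → 2 (the second "url")
print(last_index(labels, "guess"))  # → 3
```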
### Show the cropped URL bar
```
display(img_cropped)
```
### OCR on the cropped URL bar
```
result = reader.recognize(np.array(img_cropped))
result
df_url_frames["row_num"] = df_url_frames.reset_index().index
df_url_frames["gpu_id"] = df_url_frames.row_num.apply(lambda x: x % 3)
results = pickle.load(open("/shared/gbiamby/geo/data/urls/url_ocr_raw.pkl", "rb"))
results
```
---
## Load Raw OCR Results, Clean up the URLs and Group Them by video_id + game_num
```
ocr = pickle.load(open("/shared/gbiamby/geo/data/urls/url_ocr_raw.pkl", "rb"))
ocr["--0Kbpo9DtE"][0]
# How many ocr outputs have more than one result?
# rawr = []
# for i, (video_id, frames) in tenumerate(ocr.items()):
#     for f in frames:
#         if len(f["ocr"]) > 1:
#             rawr.append({"video_id": video_id, "ocr": f["ocr"]})
# print(len(rawr))
ocr_clean = []
for i, (video_id, frames) in tenumerate(ocr.items()):
    for f in frames:
        for ocr_result in f["ocr"]:
            ocr_clean.append(
                {
                    "video_id": video_id,
                    "ocr": ocr_result[1],
                    "file_path": f["file_path"],
                    "round_num": f["round_num"],
                }
            )
len(ocr_clean)
df_ocr = pd.DataFrame(ocr_clean)
# NOTE: the .str.replace calls below that omit regex= rely on the old pandas
# default of regex=True; on pandas >= 2.0 pass regex=True explicitly.
df_ocr.ocr = (
    df_ocr.ocr.astype("string")
.str.replace("\s\s+", " ")
.str.replace("^[0-9]*\s*[l|]*", "", regex=True)
.str.replace("| |", "||", regex=False)
.str.replace("https||", "", regex=False)
.str.replace("https|", "", regex=False)
.str.replace("https[|l]{0,2}", "")
.str.replace("Secure |", "", regex=False)
.str.replace("Secure", "")
.str.replace("||", "", regex=False)
.str.replace("Il", "", regex=False)
.str.replace("ssrcon", "ssr.com", regex=False)
.str.replace("con/", "com/", regex=False)
.str.replace("cor/", "com/", regex=False)
.str.replace("[.\s]*c[cao0][mnr]\s*/", ".com/")
.str.replace("[.\s]*c[cao0][mnr]\s*", ".com")
.str.replace(".*?eoguessr", "geoguessr")
.str.replace("g.*?oguessr", "geoguessr")
.str.replace("ge.*?guessr", "geoguessr")
.str.replace("geo.*?uessr", "geoguessr")
.str.replace("geog.*?essr", "geoguessr")
.str.replace("geogues.*?r", "geoguessr")
.str.replace("geoguess.*?", "geoguessr")
.str.replace("geoguessrr", "geoguessr")
# Two
.str.replace("..oguessr", "geoguessr")
.str.replace(".e.guessr", "geoguessr")
.str.replace(".eo.uessr", "geoguessr")
.str.replace(".eoguess.", "geoguessr")
.str.replace("g.*?oguess.*?", "geoguessr")
.str.replace("g.*?o.?uess.*?", "geoguessr")
# two
.str.replace("g.*?.*?guessr", "geoguessr")
.str.replace("g.{0,3}?uessr", "geoguessr")
.str.replace("g.*?o.uessr", "geoguessr")
.str.replace("g.*?oguess.", "geoguessr")
.str.replace(".+?eo.+?uess[a-zA-Z]{1}", "geoguessr")
.str.replace("ld\s*/?\s*play", ".com/play")
.str.replace("^eog", "geog")
.str.replace("geoguessr[^.]+?co", "geoguessr.co")
# .str.replace("geoguess co[mnr]", "geoguessr.com")
# .str.replace("geoguessco[mnr]", "geoguessr.com")
# .str.replace("geoguessrco[mnr]", "geoguessr.com")
.str.replace("geoguessr.*com", "geoguessr.com")
.str.replace("geoguessr\s*o[mnr]", "geoguessr.com")
# Strip text before "geoguess"
.str.replace("^.+(?=geoguess)", "")
#
.str.replace("geoguessr.*\.com", "geoguessr.com")
#
.str.replace("geoguessr\.com.*challenge", "geoguessr.com/challenge")
.str.replace("geoguessr\.com/challenge[^/]{1}", "geoguessr.com/challenge/")
.str.replace("geoguessr\.com.*play", "geoguessr.com/play")
.str.replace("geoguessr\.com/play[^/]{1}", "geoguessr.com/play/")
.str.replace("(?<=.)?comuk[/]?", "com/uk/")
.str.replace("/.{1,2}lay", "/play")
.str.strip()
)
exclude = set(["did you enjoy", "channel v"])
for e in exclude:
    df_ocr = df_ocr[~(df_ocr.ocr.str.lower().str.contains(e))].copy(deep=True)
# Frequency of each cleaned URL string (transform keeps index alignment,
# unlike a merge, whose fresh RangeIndex would misalign after the filter above)
df_ocr["url_count"] = df_ocr.groupby("ocr")["ocr"].transform("count")
df_ocr[~df_ocr.ocr.str.contains("geoguessr.com")].sort_values("url_count")
with pd.option_context("display.max_rows", None, "display.max_columns", None):
    display(
        pd.DataFrame(df_ocr[~df_ocr.ocr.str.contains("geoguessr.com")].ocr.value_counts()).head()
    )
exclude = set(["geoguessr.com/play"])
df_clean = df_ocr[
(df_ocr.ocr.str.contains("geoguessr.com"))
& ~(df_ocr.ocr.str.contains("retro"))
& ~(df_ocr.ocr.isin(exclude))
].copy(deep=True)
df_clean["slug"] = (
df_clean.ocr.str.replace("geoguessr.com", "")
.str.replace("challenge", "")
.str.replace("play", "")
.str.replace("/", "")
)
df_clean["game_num"] = df_clean.round_num.apply(lambda rn: rn // 5)
df_clean["slug_len"] = df_clean.slug.apply(lambda s: len(s))
df_clean = df_clean[(df_clean.slug_len > 10) & (df_clean.slug_len < 80)].copy(deep=True)
df_clean.sort_values("slug_len")
df_clean2 = (
pd.DataFrame(
df_clean.groupby(["video_id", "game_num", "ocr"]).agg(
url_count=("ocr", "count"),
file_path=("file_path", "max"),
)
)
.reset_index()
.sort_values(["video_id", "game_num", "url_count"], ascending=[True, True, False])
)
df_clean2["ocr_rank"] = (
df_clean2.groupby(["video_id", "game_num"])["url_count"]
.transform(lambda x: x.rank(method="first", ascending=False))
.astype("int")
)
df_clean2 = df_clean2[["video_id", "game_num", "ocr", "ocr_rank", "url_count", "file_path"]]
print("Total video_ids: ", df_clean2.video_id.nunique())
print("Total games: ", len(df_clean2.groupby(["video_id", "game_num"]).count()))
df_clean2
print(
"Number of video_id's with URL detections from OCR, that are not in the google sheet: ",
len(set(df_clean2.video_id.values.tolist()) - (set(GOOGLE_SHEET_IDS))),
)
if True:
    # pickle.dump(df_clean2, open("/shared/gbiamby/geo/data/urls/url_ocrs_cleaned.pkl", "wb"))
    df_clean2.to_pickle("/shared/gbiamby/geo/data/urls/url_ocrs_cleaned-protocol_5.pkl", protocol=5)
    df_clean2.to_pickle("/shared/gbiamby/geo/data/urls/url_ocrs_cleaned-protocol_4.pkl", protocol=4)
    df_clean2.to_csv("/shared/gbiamby/geo/data/urls/url_ocrs_cleaned.csv", index=False, header=True)
# with pd.option_context("display.max_rows", None, "display.max_columns", None):
# # display(pd.DataFrame(df_clean[(df_clean.slug_len < 8)].ocr.value_counts()))
# display(pd.DataFrame(df_clean[(df_clean.slug_len > 0)].ocr.value_counts()))
df_clean.slug_len.plot.hist(bins=50)
```
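The core of the pipeline above is: strip the URL scaffolding down to a challenge slug, bucket rounds into games (5 rounds per game), and count how often each OCR string appears per video/game. A minimal, self-contained sketch of that flow on made-up OCR strings:

```python
import pandas as pd

# Hypothetical OCR outputs standing in for the real easyocr results.
df_toy = pd.DataFrame({
    "video_id": ["v1", "v1", "v1", "v2"],
    "round_num": [0, 1, 6, 2],
    "ocr": [
        "geoguessr.com/challenge/AbCdEf123456",
        "geoguessr.com/challenge/AbCdEf123456",
        "geoguessr.com/play/XyZ9876543210",
        "geoguessr.com/challenge/QwErTy555555",
    ],
})
# Same slug extraction as above: drop domain, route words, and slashes.
df_toy["slug"] = (df_toy.ocr.str.replace("geoguessr.com", "", regex=False)
                           .str.replace("challenge", "", regex=False)
                           .str.replace("play", "", regex=False)
                           .str.replace("/", "", regex=False))
df_toy["game_num"] = df_toy.round_num // 5  # 5 rounds per game
counts_toy = (df_toy.groupby(["video_id", "game_num", "ocr"])
                    .size().rename("url_count").reset_index())
print(counts_toy)
```

The repeated `v1` challenge URL gets `url_count == 2`, which is the signal the ranking step above uses to pick the most trustworthy OCR reading per game.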
---
<img align="left" src = https://www.linea.gov.br/wp-content/themes/LIneA/imagens/logo-header.png width=180 style="padding: 20px"> <br>
## Basic course on computational tools for astronomy
Contact: Julia Gschwend ([julia@linea.gov.br](mailto:julia@linea.gov.br)) <br>
Github: https://github.com/linea-it/minicurso-jupyter <br>
Site: https://minicurso-ed2.linea.gov.br/ <br>
Last verified: 26/08/2021<br>
<a class="anchor" id="top"></a>
# Lesson 3 - Data access via the LIneA Science Server, Pandas DataFrame
### Contents
1. [Basic SQL](#sql)
2. [Downloading the data](#download)
3. [Data manipulation with NumPy](#numpy)
4. [Data manipulation with Pandas](#pandas)
<a class="anchor" id="sql"></a>
# 1. Basic SQL
[Lesson 3 slides](https://docs.google.com/presentation/d/1lK8XNvj1MG_oC39iNfEA16PiU10mzhgUkO0irmxgTmE/preview?slide=id.ge8847134d3_0_821)
<a class="anchor" id="download"></a>
# 2. Downloading the data
To illustrate reading data from a Jupyter Notebook, we will create a file with data downloaded from the **User Query** tool of the [LIneA Science Server](https://desportal2.cosmology.illinois.edu) platform. Before running the query that will produce the file, let's check the size (number of rows) of the table it will generate using the following query:
```sql
SELECT COUNT(*)
FROM DES_ADMIN.DR2_MAIN
WHERE ra > 35 and ra < 36
AND dec > -10 and dec < -9
AND mag_auto_g between 15 and 23
AND CLASS_STAR_G < 0.5
AND FLAGS_I <=4
```
Copy the query above and paste it into the `SQL Sentence` field. Then press the `Preview` button.
The expected result is 8303 objects.
Replace the text with the query below to create the table, selecting the columns we will use in the demonstration.
```sql
SELECT coadd_object_id, ra ,dec, mag_auto_g, mag_auto_r, mag_auto_i, magerr_auto_g, magerr_auto_r, magerr_auto_i, flags_i
FROM DES_ADMIN.DR2_MAIN
WHERE ra > 35 and ra < 36
AND dec > -10 and dec < -9
AND mag_auto_g between 15 and 23
AND CLASS_STAR_G < 0.5
AND FLAGS_I <=4
```
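For reference, once the catalog has been downloaded, the same `WHERE` clause can be reproduced in pandas. A minimal sketch with made-up rows mimicking the DR2_MAIN columns used in the query (note that `between` is inclusive, close enough here to the strict inequalities on `ra` and `dec`):

```python
import pandas as pd

# Hypothetical rows with the columns referenced in the SQL query.
df = pd.DataFrame({
    "ra": [35.2, 36.5, 35.8],
    "dec": [-9.4, -9.2, -9.9],
    "mag_auto_g": [18.0, 17.0, 25.0],
    "class_star_g": [0.1, 0.4, 0.2],
    "flags_i": [0, 2, 1],
})
sel = df[df.ra.between(35, 36)
         & df.dec.between(-10, -9)
         & df.mag_auto_g.between(15, 23)
         & (df.class_star_g < 0.5)
         & (df.flags_i <= 4)]
print(len(sel))  # 1  (only the first row passes every cut)
```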
Click the `Play` (Execute Query) button in the top-left corner, choose a name for your table, and press `Start`.
You will receive a notification email when your table is ready, and it will appear in the `My Tables` menu.
Click the down-arrow button next to the table name to open the menu and click `Download`. You will receive an email with a link to download the catalog as a `.zip` file.
The data accessed through the Science Server is hosted on NCSA computers. We will download this data subset in _comma-separated values_ (CSV) format and then upload the resulting file to JupyterHub. We suggest naming the file `galaxias.csv` and keeping it inside the `dados` folder.
In the cells below we will demonstrate how to read/write files and manipulate data using two different libraries. The **Markdown** cells contain instructions, and the blank cells will be filled in during the demonstration. Shall we?
<a class="anchor" id="numpy"></a>
## 3. Data manipulation with NumPy
NumPy documentation: https://numpy.org/doc/stable/index.html
To get started, import the library.
```
import numpy as np
print('NumPy version: ', np.__version__)
```
Read the file and assign its entire contents to a variable using the `loadtxt` function ([doc](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)):
Check the object's type.
Check the data type of the elements in the array.
Check the array's dimensions (rows, columns).
Use the `unpack` argument of `loadtxt` to get the columns separately. When enabled, the `unpack` argument returns the transposed table.
If you already know the column order, you can assign each column to a variable. Tip: take the names from the file header.
You can do this directly when reading the file. NOTE: reusing the same variable names overwrites the previous ones.
Build a filter (or mask) for bright galaxies (mag i < 20).
Check the array size **without** the filter.
Check the array size **with** the filter.
The "mask" (or filter) can also be applied to any other array with the same dimensions. Try it with another column.
The mask is useful for creating a subset of the data. Create a subset with 3 columns and the rows that satisfy the filter condition.
Save this subset to a file using the `savetxt` function.
Take a look at the saved file (double-clicking the file opens the table viewer).
Anything wrong with the number of rows?
And now? Anything wrong with the number of columns?
How does the formatting look? How about making it more readable and setting the number of spaces?
How about a header to name the columns?
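The steps above can be sketched end to end on a tiny in-memory stand-in for `dados/galaxias.csv` (column names follow the query; the values are made up):

```python
import io
import numpy as np

# Tiny stand-in for dados/galaxias.csv (3 of the 10 columns from the query).
csv = io.StringIO(
    "ra,dec,mag_auto_i\n"
    "35.1,-9.5,19.2\n"
    "35.4,-9.1,21.7\n"
    "35.9,-9.8,18.4\n"
)
ra, dec, mag_auto_i = np.loadtxt(csv, delimiter=",", skiprows=1, unpack=True)

bright = mag_auto_i < 20  # boolean mask for bright galaxies
subset = np.column_stack([ra[bright], dec[bright], mag_auto_i[bright]])

# Save with fixed-width formatting and a header naming the columns.
buf = io.StringIO()
np.savetxt(buf, subset, delimiter=",", fmt="%10.4f", header="ra,dec,mag_auto_i")
print(subset.shape)  # (2, 3)
```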
<a class="anchor" id="pandas"></a>
## 4. Data manipulation with Pandas
Pandas documentation: https://pandas.pydata.org/docs/
```
import pandas as pd
print('Pandas version: ', pd.__version__)
```
### Pandas Series
https://pandas.pydata.org/docs/reference/api/pandas.Series.html
### Pandas DataFrame
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html
```python
class pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=None)
```
_"Two-dimensional, size-mutable, potentially heterogeneous tabular data. <p>
Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure."_
Example of creating a DataFrame from a dictionary:
```
dic = {"nomes": ["Huguinho", "Zezinho", "Luizinho"],
"cores": ["vermelho", "verde", "azul"],
"características": ["nerd", "aventureiro", "criativo"]}
sobrinhos = pd.DataFrame(dic)
sobrinhos
```
A DataFrame is also the result of reading a table with Pandas' `read_csv` function.
Let's explore some useful functions and attributes of a _DataFrame_ object, starting with its "spec sheet".
Print the first rows.
Set the number of rows displayed.
Print the last rows.
Rename the `coadd_object_id` column.
Use the coadd_object_id IDs as the DataFrame index.
Print the _DataFrame_ description (a summary with basic statistics).
Build a filter to select objects with perfect photometry (flags_i = 0).
Check the size of the filtered _DataFrame_.
Compare it with the original.
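A minimal sketch of the filtering step, using a toy stand-in for the catalog (hypothetical values):

```python
import pandas as pd

# Toy stand-in for the galaxy catalog (made-up values).
cat = pd.DataFrame({
    "coadd_object_id": [101, 102, 103, 104],
    "mag_auto_i": [19.2, 21.7, 18.4, 20.1],
    "flags_i": [0, 2, 0, 3],
}).set_index("coadd_object_id")

perfect = cat[cat.flags_i == 0]       # objects with perfect photometry
print(len(perfect), "of", len(cat))   # 2 of 4
print(cat.flags_i.value_counts())
```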
### Handling qualitative data: quality flags
Count the values in each category (each flag).
The _Series_ class has some simple built-in plots, for example, the bar chart (histogram).
Pie chart:
We will see much more about plots in lesson 4. See you soon!
---
# Introduction
## Motivation
This notebook follows up `model_options.ipynb`.
The key difference is that we filter using the category distance metric (see `bin/wp-get-links` for details), rather than relying solely on the regression to pick relevant articles. Thus, we want to decide what an appropriate category distance threshold is.
Our hope is that by adding this filter, we can now finalize the regression algorithm selection and configuration.
## Summary
(You may want to read this section last, as it refers to the full analysis below.)
### Q1. Which algorithm?
In `model_options`, we boiled down the selection to three choices:
1. Lasso, normalized, positive, auto 𝛼 (LA_NPA)
2. Elastic net, normalized, positive, auto 𝛼, auto 𝜌 (EN_NPAA)
3. Elastic net, normalized, positive, auto 𝛼, manual 𝜌 = ½ (EN_NPAM)
The results below suggest three conclusions:
1. LA_NPA vs. EN_NPAA:
* EN_NPAA has (probably insignificantly) better RMSE.
* Forecasts look almost identical.
* EN_NPAA chooses a more reasonable-feeling number of articles.
* EN_NPAA is more principled (lasso vs. elastic net).
2. EN_NPAM vs. EN_NPAA:
* EN_NPAA has better RMSE.
* Forecasts look almost identical, except EN_NPAM has some spikes in the 2014–2015 season, which probably accounts for the RMSE difference.
* EN_NPAA chooses fewer articles, though EN_NPAM does not feel excessive.
* EN_NPAA is more principled (manual 𝜌 vs. auto).
On balance, **EN_NPAA seems the best choice**, based on principles and article quantity rather than results, which are nearly the same across the board.
### Q2. What distance threshold?
Observations for EN_NPAA at distance threshold 1, 2, 3:
* d = 2 is where RMSE reaches its minimum, and it stays more or less the same all the way through d = 8.
* d = 2 and 3 have nearly identical-looking predictions.
* d = 2 and 3 have very similar articles and coefficient ranking. Of the 10 and 13 articles respectively, 9 are shared and in almost the same order.
These suggest that the actual models for d = 2..8 are very similar. Further:
* d = 2 or 3 does not have the spikes in 3rd season that d = 1 does. This suggests that the larger number of articles gives a more robust model.
* d = 2 or 3 matches the overall shape of the outbreak better than d = 1, though the latter gets the peak intensity more correct in the 1st season.
Finally, examining the article counts in `COUNTS` and `COUNTS_CUM`, d = 2 would give very small input sets in some of the sparser cases, while d = 3 seems safer (e.g., "es+Infecciones por clamidias" and "he+שעלת"). On the other hand, Arabic seems to have a shallower category structure, and d = 3 would capture most articles.
On balance, **d = 3 seems the better choice**. It performs as well as d = 2, without catching irrelevant articles, and d = 2 seems too few articles in several cases. The Arabic situation is a bit of an unknown, as none of us speak Arabic, but erring on the side of too many articles seems less risky than clearly too few.
### Q3. What value of 𝜌?
In both this notebook and `model_options`, every auto-selected 𝜌 has been 0.9, i.e., mostly lasso. Thus, we will **fix 𝜌 = 0.9** for performance reasons.
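Concretely, 𝜌 = 0.9 weights the penalty 90% toward lasso. Per the scikit-learn documentation, the elastic net penalizes $\alpha\left(\rho\|\beta\|_1 + \frac{1-\rho}{2}\|\beta\|_2^2\right)$; a standalone check of that formula (the coefficient values are made up):

```python
import numpy as np

def en_penalty(beta, alpha, rho):
    # elastic net penalty: alpha * (rho*||b||_1 + (1-rho)/2 * ||b||_2^2)
    beta = np.asarray(beta, dtype=float)
    return alpha * (rho * np.abs(beta).sum() + (1 - rho) / 2 * (beta ** 2).sum())

beta = [1.0, -2.0]
print(en_penalty(beta, alpha=1.0, rho=0.9))  # 0.9*3 + 0.05*5 = 2.95
print(en_penalty(beta, alpha=1.0, rho=1.0))  # pure lasso: 3.0
```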
### Conclusion
We select **EN_NPAM with 𝜌 = 0.9**.
# Preamble
## Imports
```
%matplotlib inline
import collections
import gzip
import pickle
import os
import urllib.parse
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sklearn as sk
import sklearn.linear_model
DATA_PATH = os.environ['WEIRD_AL_YANKOVIC']
plt.rcParams['figure.figsize'] = (12, 4)
```
## Load, preprocess, and clean data
Load and preprocess the truth spreadsheet.
```
truth = pd.read_excel(DATA_PATH + '/truth.xlsx', index_col=0)
TRUTH_FLU = truth.loc[:,'us+influenza'] # pull Series
TRUTH_FLU.index = TRUTH_FLU.index.to_period('W-SAT')
TRUTH_FLU.head()
```
Load the Wikipedia link data. We convert percent-encoded URLs to Unicode strings for convenience of display.
```
def unquote(url):
    (lang, url) = url.split('+', 1)
    url = urllib.parse.unquote(url)
    url = url.replace('_', ' ')
    return (lang + '+' + url)
raw_graph = pickle.load(gzip.open(DATA_PATH + '/articles/wiki-graph.pkl.gz'))
GRAPH = dict()
for root in raw_graph.keys():
    unroot = unquote(root)
    GRAPH[unroot] = { unquote(a): d for (a, d) in raw_graph[root].items() }
```
Load all the time series. Most of the 4,299 identified articles were in the data set.
Note that in contrast to `model_options`, we do not remove any time series by the fraction that they are zero. The results seem good anyway. This filter also may not apply well to the main experiment, because the training periods are often not long enough to make it meaningful.
```
TS_ALL = pd.read_csv(DATA_PATH + '/tsv/forecasting_W-SAT.norm.tsv',
sep='\t', index_col=0, parse_dates=True)
TS_ALL.index = TS_ALL.index.to_period('W-SAT')
TS_ALL.rename(columns=lambda x: unquote(x[:-5]), inplace=True)
len(TS_ALL.columns)
(TS_ALL, TRUTH_FLU) = TS_ALL.align(TRUTH_FLU, axis=0, join='inner')
TRUTH_FLU.plot()
TS_ALL.iloc[:,:5].plot()
```
## Summarize distance from root
Number of articles by distance from each root.
```
COUNTS = pd.DataFrame(columns=range(1,9), index=sorted(GRAPH.keys()))
COUNTS.fillna(0, inplace=True)
for (root, leaves) in GRAPH.items():
    for (leaf, dist) in leaves.items():
        COUNTS[dist][root] += 1
COUNTS
```
Number of articles of at most the given distance.
```
COUNTS_CUM = COUNTS.cumsum(axis=1)
COUNTS_CUM
```
# Parameter sweep
Return the set of articles with maximum category distance from a given root.
```
def articles_dist(root, dist):
    return { a for (a, d) in GRAPH[root].items() if d <= dist }
```
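To see the thresholding behavior in isolation, here is the same filter on a toy category graph (article names and distances are made up; the real `GRAPH` is loaded from the pickle above):

```python
# Toy category graph: article -> category distance from the root (made up).
toy_graph = {"en+Influenza": {"en+Flu season": 1, "en+Fever": 2, "en+Virus": 3}}

def articles_dist_toy(root, dist, graph=toy_graph):
    return {a for (a, d) in graph[root].items() if d <= dist}

print(articles_dist_toy("en+Influenza", 2))  # {'en+Flu season', 'en+Fever'} (set order may vary)
```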
Return time series for articles with a maximum category distance from the given root.
```
def select_by_distance(root, d):
    keep_cols = articles_dist(root, d)
    return TS_ALL.filter(items=keep_cols, axis=1)
select_by_distance('en+Influenza', 1).head()
```
Fit function. The core is the same as the `model_options` one, with non-constant training series set and a richer summary.
```
def fit(root, train_week_ct, d, alg, plot=True):
    ts_all = select_by_distance(root, d)
    ts_train = ts_all.iloc[:train_week_ct,:]
    truth_train = TRUTH_FLU.iloc[:train_week_ct]
    m = alg.fit(ts_train, truth_train)
    m.input_ct = len(ts_all.columns)
    pred = m.predict(ts_all)
    pred_s = pd.Series(pred, index=TRUTH_FLU.index)
    m.r = TRUTH_FLU.corr(pred_s)
    m.rmse = np.sqrt(((TRUTH_FLU - pred_s)**2).mean())
    m.nonzero = np.count_nonzero(m.coef_)
    if (not hasattr(m, 'l1_ratio_')):
        m.l1_ratio_ = -1
    # this is just a line to show how long the training period is
    train_period = TRUTH_FLU.iloc[:train_week_ct].copy(True)
    train_period[:] = 0
    if (plot):
        pd.DataFrame({'truth': TRUTH_FLU,
                      'prediction': pred,
                      'training pd': train_period}).plot(ylim=(-1,9))
    sumry = pd.DataFrame({'coefs': m.coef_,
                          'coefs_abs': np.abs(m.coef_)},
                         index=ts_all.columns)
    sumry.sort_values(by='coefs_abs', ascending=False, inplace=True)
    sumry = sumry.loc[:, 'coefs']
    for a in ('intercept_', 'alpha_', 'l1_ratio_', 'nonzero', 'rmse', 'r', 'input_ct'):
        try:
            sumry = pd.Series([getattr(m, a)], index=[a]).append(sumry)
        except AttributeError:
            pass
    return (m, pred, sumry)
```
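Models are compared by RMSE, the square root of the mean squared error between truth and prediction. A standalone check of the formula on toy numbers:

```python
import math

truth = [1.0, 2.0, 3.0, 4.0]
pred  = [1.5, 2.0, 2.0, 4.0]
mse  = sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth)
rmse = math.sqrt(mse)
print(round(rmse, 4))  # sqrt((0.25 + 0 + 1 + 0) / 4) ≈ 0.559
```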
Which 𝛼 and 𝜌 to explore? Same as `model_options`.
```
ALPHAS = np.logspace(-15, 2, 25)
RHOS = np.linspace(0.1, 0.9, 9)
```
Try all distance filters and summarize the result in a table.
```
def fit_summary(root, label, train_week_ct, alg, **kwargs):
    result = pd.DataFrame(columns=[[label] * 4,
                                   ['input_ct', 'rmse', 'rho', 'nonzero']],
                          index=range(1, 9))
    preds = dict()
    for d in range(1, 9):
        (m, preds[d], sumry) = fit(root, train_week_ct, d, alg(**kwargs), plot=False)
        result.loc[d,:] = (m.input_ct, m.rmse, m.l1_ratio_, m.nonzero)
    return (result, preds)
```
## Lasso, normalized, positive, auto 𝛼
```
la_npa = fit_summary('en+Influenza', 'la_npa', 104, sk.linear_model.LassoCV,
normalize=True, positive=True, alphas=ALPHAS,
max_iter=1e5, selection='random', n_jobs=-1)
la_npa[0]
(m, _, s) = fit('en+Influenza', 104, 1,
sk.linear_model.LassoCV(normalize=True, positive=True, alphas=ALPHAS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 2,
sk.linear_model.LassoCV(normalize=True, positive=True, alphas=ALPHAS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 3,
sk.linear_model.LassoCV(normalize=True, positive=True, alphas=ALPHAS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
```
## Elastic net, normalized, positive, auto 𝛼, auto 𝜌
```
en_npaa = fit_summary('en+Influenza', 'en_npaa', 104, sk.linear_model.ElasticNetCV,
normalize=True, positive=True, alphas=ALPHAS, l1_ratio=RHOS,
max_iter=1e5, selection='random', n_jobs=-1)
en_npaa[0]
(m, _, s) = fit('en+Influenza', 104, 1,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=RHOS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 2,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=RHOS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 3,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=RHOS,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
```
## Elastic net, normalized, positive, auto 𝛼, manual 𝜌 = ½
```
en_npam = fit_summary('en+Influenza', 'en_npam', 104, sk.linear_model.ElasticNetCV,
normalize=True, positive=True, alphas=ALPHAS, l1_ratio=0.5,
max_iter=1e5, selection='random', n_jobs=-1)
en_npam[0]
(m, _, s) = fit('en+Influenza', 104, 1,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=0.5,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 2,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=0.5,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
(m, _, s) = fit('en+Influenza', 104, 3,
sk.linear_model.ElasticNetCV(normalize=True, positive=True,
alphas=ALPHAS, l1_ratio=0.5,
max_iter=1e5, selection='random', n_jobs=-1))
s.head(27)
```
## Summary
All the result tables next to one another.
```
pd.concat([la_npa[0], en_npaa[0], en_npam[0]], axis=1)
```
Plot the predictions by distance filter next to one another.
```
def plot(data, ds):
    for d in ds:
        D = collections.OrderedDict([('truth', TRUTH_FLU)])
        D[d] = data[1][d]
        pd.DataFrame(D).plot(figsize=(12,3))
plot(la_npa, range(1, 4))
plot(en_npaa, range(1, 4))
plot(en_npam, range(1, 4))
```
---
```
import numpy as np
import matplotlib.pyplot as plt
from IPython.core.display import HTML
import pydae.svg_tools as svgt
#%config InlineBackend.figure_format = 'svg'
import pydae.grid_tools as gt
import scipy.optimize as sopt
from scipy.optimize import NonlinearConstraint
import time
%matplotlib widget
from pydae import ssa
from oc_3bus_uvsg import oc_3bus_uvsg_class
grid_uvsg = oc_3bus_uvsg_class()
H = 10.0; # desired virtual inertia
K_p = 0.01; # active power proportional gain
#H = T_p/K_p/2
T_p = K_p*2*H; # active power integral time constant
params_uvsg = {"S_n_B1":1e6,"S_n_B3":100e3,"K_p_agc":0.0,"K_i_agc":1,
"R_v_B1":0.0,"R_v_B3":0.0,"R_s_B3":0.01,
"X_v_B1":-0.0001,"X_v_B3":-0.1,
'p_g_B3': 0.0,'q_s_ref_B3': 0.0,"K_p_B3":K_p,"T_p_B3":T_p,
"K_delta_B1":0.01,
"P_B2":-50e3,"Q_B2":0e3,'K_e_B3':-0.01,
}
grid_uvsg.initialize([params_uvsg],'xy_0.json',compile=True)
grid_uvsg.report_y()
grid_uvsg = oc_3bus_uvsg_class()
gt.change_line(grid_uvsg,'B1','B2',X_km=0.167,R_km=0.287,km=0.1)
gt.change_line(grid_uvsg,'B2','B3',X_km=0.167,R_km=0.287,km=0.3)
grid_uvsg.initialize([params_uvsg],'xy_0.json',compile=True)
t_0 = time.time()
grid_uvsg.run([{'t_end': 1.0,'Dt':0.01,'decimation':1}])
grid_uvsg.run([{'t_end': 5.0,'alpha_B1':-0.01}])
grid_uvsg.run([{'t_end':10.0,'alpha_B1':-0.00}])
#grid.set_value('P_B2', -1e3)
#grid.run([{'t_end':3.0}])
print(time.time() - t_0)
grid_uvsg.post();
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4, 4))
axes[0].plot(grid_uvsg.T,grid_uvsg.get_values('omega_coi'), label='$\omega_{coi}$')
axes[1].plot(grid_uvsg.T,grid_uvsg.get_values('p_s_B3'), label='$p_{B3}$')
axes[1].plot(grid_uvsg.T,grid_uvsg.get_values('q_s_B3'), label='$q_{B3}$')
for ax in axes:
    ax.grid()
    ax.legend()
    ax.set_xlabel('Time (s)')
axes[1].legend(loc='best')
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4, 5))
axes[0].plot(grid_uvsg.T,grid_uvsg.get_values('omega_coi'), label='$\omega_{coi}$')
axes[0].plot(grid_uvsg.T,grid_uvsg.get_values('omega_B3'), label='$\omega_{B3}$')
axes[1].plot(grid_uvsg.T,grid_uvsg.get_values('e_B3'), label='$e$')
for ax in axes:
    ax.grid()
    ax.legend()
    ax.set_xlabel('Time (s)')
axes[1].legend(loc='best')
# oc_3bus_vsg_pi_class and params_vsg_pi are defined earlier in the notebook (not shown)
grid_pi = oc_3bus_vsg_pi_class()
gt.change_line(grid_pi,'B1','B2',X_km=0.167,R_km=0.287,km=0.1)
gt.change_line(grid_pi,'B2','B3',X_km=0.167,R_km=0.287,km=0.3)
grid_pi.initialize([params_vsg_pi],'xy_0.json',compile=True)
grid_pi.run([{'t_end': 1.0,'Dt':0.01,'decimation':1}])
grid_pi.run([{'t_end': 5.0,'P_B2':-100e3}])
grid_pi.run([{'t_end': 8.0,'v_ref_B1':1.04636674,'v_ref_B3':1.05}])
grid_pi.run([{'t_end':30.0,'omega_ref_B1':1.011,'omega_ref_B3':1.011}])
#grid.set_value('P_B2', -1e3)
#grid.run([{'t_end':3.0}])
grid_pi.post();
```
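The inertia constants at the top of the cell above follow a single relation: the integral time constant is tied to the virtual inertia and the proportional gain. A standalone check of that arithmetic:

```python
# Virtual-inertia tuning used above: T_p = 2*H*K_p (equivalently H = T_p/(2*K_p)).
H = 10.0     # desired virtual inertia (s)
K_p = 0.01   # active-power proportional gain
T_p = 2 * H * K_p
print(T_p)               # 0.2
print(T_p / (2 * K_p))   # recovers H = 10.0
```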
## Reporter
```
U_b = 400.0
S_b = 100e3
I_b = S_b/(np.sqrt(3)*U_b)
def report(grid):
    P_B0,P_B1,P_B2 = grid.get_value('p_g_B1')*100,grid.get_value('P_B1')/1e3,grid.get_value('P_B2')/1e3
    Q_B0,Q_B1,Q_B2 = grid.get_value('q_g_B0_1')*100,grid.get_value('Q_B1')/1e3,grid.get_value('Q_B2')/1e3
    U_B0,U_B1,U_B2,U_B3 = grid.get_value('V_B0')*400,grid.get_value('V_B1')*400,grid.get_value('V_B2')*400,grid.get_value('V_B3')*400
    I_03_m = np.abs(grid.get_value('i_d_ref_B0') + 1j*grid.get_value('i_q_ref_B0'))*I_b
    S_B1_m = np.abs(grid.get_value('P_B1') + 1j*grid.get_value('Q_B1'))/100e3
    I_B1_m = S_B1_m/grid.get_value('V_B1')*I_b
    S_B2_m = np.abs(grid.get_value('P_B2') + 1j*grid.get_value('Q_B2'))/100e3
    I_B2_m = S_B2_m/grid.get_value('V_B2')*I_b
    P_loss = P_B0 + P_B1 + P_B2 - 400
    print(f' P_1 P_2 P_3 Q_1 Q_2 Q_3 P_loss I_1 I_2 U_1 U_2 U_3') # & $P_0\,(kW)$ & $P_1\,(kW)$ & $P_2\,(kW)$ & $P_{loss}\,(kW)$ & $i_{0,3}\,(A)$ & $v_{max}\,(V)$ & $v_{min}\,(V)$ \\ \hline
    print(f'{P_B0:7.2f} {P_B1:7.2f} {P_B2:7.2f} {Q_B0:7.2f} {Q_B1:7.2f} {Q_B2:7.2f} {P_loss:7.2f} {I_B1_m:7.1f} {I_B2_m:7.1f} {U_B1:7.1f} {U_B2:7.1f} {U_B3:7.1f}')
```
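The per-unit base current `I_b` used by `report` follows from the three-phase power relation $S = \sqrt{3}\,U\,I$; a standalone check of that computation:

```python
import math

U_b = 400.0           # line-to-line base voltage (V)
S_b = 100e3           # base power (VA)
I_b = S_b / (math.sqrt(3) * U_b)   # three-phase base current
print(round(I_b, 1))  # ≈ 144.3 A
```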
## Optimization problem
```
grid_pi_opt = oc_3bus_vsg_pi_class()
gt.change_line(grid_pi,'B1','B2',X_km=0.167,R_km=0.287,km=0.1)
gt.change_line(grid_pi,'B2','B3',X_km=0.167,R_km=0.287,km=0.3)
grid_pi_opt.initialize([params_vsg_pi],'xy_0.json',compile=True)
grid_pi_opt.initialization_tol = 1e-10
P_B2 = -100e3
grid_pi_opt.set_value('P_B2',P_B2)
def obj_eval(u):
    grid_pi_opt.load_0('xy_0.json')
    v_ref_B1 = u[0]
    v_ref_B3 = u[1]
    grid_pi_opt.set_value('v_ref_B1',v_ref_B1)
    grid_pi_opt.set_value('v_ref_B3',v_ref_B3)
    params_vsg_pi['v_ref_B1'] = v_ref_B1
    params_vsg_pi['v_ref_B3'] = v_ref_B3
    params_vsg_pi['P_B2'] = P_B2
    gt.change_line(grid_pi_opt,'B1','B2',X_km=0.167,R_km=0.287,km=0.1)
    gt.change_line(grid_pi_opt,'B2','B3',X_km=0.167,R_km=0.287,km=0.3)
    grid_pi_opt.initialize([params_vsg_pi],'xy_0.json',compile=True)
    P_B1 = grid_pi_opt.get_value('p_g_B1')*100e3
    P_B3 = grid_pi_opt.get_value('p_g_B3')*100e3
    P_loss = P_B1 + P_B3 + grid_pi_opt.get_value('P_B2')
    return P_loss
x0 = np.array([ 1,1 ])
#SLSQP
bounds = [(0.95,1.05),(0.95,1.05)]
res = sopt.minimize(obj_eval, x0, method='Powell',bounds=bounds,
options={})
res
#grid_pi_opt.save_params('opt.json')
res.x
grid_pi_opt.initialize([params_vsg_pi],'xy_0.json',compile=True)
P_loss = grid_pi_opt.get_value('p_g_B1')*100e3 + grid_pi_opt.get_value('p_g_B3')*100e3 + grid_pi_opt.get_value('P_B2')
P_loss
grid_pi_opt.run([{'t_end': 1.0,'P_B2':-50e3, 'v_ref_B1':res.x[0],'v_ref_B3':res.x[1]}])
grid_pi_opt.run([{'t_end': 5.0,'P_B2':-100e3}])
grid_pi_opt.run([{'t_end': 8.0,'v_ref_B1':1.04636674,'v_ref_B3':1.05}])
grid_pi_opt.run([{'t_end':30.0,'omega_ref_B1':1.011,'omega_ref_B3':1.011}])
#grid.set_value('P_B2', -1e3)
#grid.run([{'t_end':3.0}])
grid_pi_opt.post();
grid_pi.report_u()
#grid_pi.initialize([params_vsg_pi],'xy_0.json',compile=True)
grid_pi.run([{'t_end': 1.0,'P_B2':-50e3, 'v_ref_B1':res.x[0],'v_ref_B3':res.x[1]}])
grid_pi.run([{'t_end': 5.0,'P_B2':-100e3}])
grid_pi.run([{'t_end': 8.0,'v_ref_B1':1.04636674,'v_ref_B3':1.05}])
grid_pi.run([{'t_end':30.0,'omega_ref_B1':1.011,'omega_ref_B3':1.011}])
#grid.set_value('P_B2', -1e3)
#grid.run([{'t_end':3.0}])
grid_pi.post();
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(4, 5))
axes[0].plot(grid_pi.T,grid_pi.get_values('omega_B1'), label='$\omega_{B1}$')
axes[0].plot(grid_pi.T,grid_pi.get_values('omega_B3'), label='$\omega_{B3}$')
axes[0].plot(grid_pi.T,grid_pi.get_values('omega_coi'), label='$\omega_{coi}$')
axes[1].plot(grid_pi.T,grid_pi.get_values('V_B2'), label='$V_{{B2}}$')
P_loss = grid_pi_opt.get_values('p_g_B1')*100e3 + grid_pi_opt.get_values('p_g_B3')*100e3 + grid_pi_opt.get_values('P_B2')
axes[2].plot(grid_pi_opt.T,P_loss, label='$P_{{loss}}$ (W)')
for ax in axes:
    ax.grid()
    ax.legend()
    ax.set_xlabel('Time (s)')
axes[1].legend(loc='best')
grid = arn_4bus_class()  # arn_4bus_class is imported earlier in the notebook (not shown)
grid.initialization_tol = 1e-10
gt.change_line(grid,'B0','B3',X_km=0.167,R_km=0.287,km=0.2)
gt.change_line(grid,'B2','B3',X_km=0.167,R_km=0.287,km=0.2)
gt.change_line(grid,'B1','B3',X_km=0.167,R_km=0.287,km=0.3)
grid.set_value('P_B3',-400e3)
u = np.array([ 50. , 253.50910792 , 3.4676086 , 17.1092196 ])*1e3
grid.set_value('P_B1',u[0])
grid.set_value('P_B2',u[1])
grid.set_value('Q_B1',u[2])
grid.set_value('Q_B2',u[3])
grid.set_value('K_delta_B0',1)
grid.set_value('R_v_B0',1e-8)
grid.set_value('X_v_B0',1e-8)
grid.load_0('xy_0.json')
grid.ss()
print(grid.get_value('V_B0'))
def obj_eval(u):
    # NOTE: rewritten to mirror the constraint functions below; the optimizers
    # pass u = (P_B1, P_B2, Q_B1, Q_B2), matching x0 and bounds.
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    P_B0 = grid.get_value('p_g_B1')*100e3  # slack generation
    P_loss = P_B0 + u[0] + u[1] - 400e3    # total injections minus the 400 kW load
    return P_loss
def contraint_I_B0_B3(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    I_B0_B3 = np.abs(grid.get_value('i_d_ref_B0') + 1j*grid.get_value('i_q_ref_B0'))*I_b
    return I_B0_B3
def contraint_I_B1(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    S_B1_m = np.abs(grid.get_value('P_B1') + 1j*grid.get_value('Q_B1'))/100e3
    I_B1_m = S_B1_m/(np.sqrt(3)*grid.get_value('V_B1')*400)
    return I_B1_m
def contraint_I_B2(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    S_B2_m = np.abs(grid.get_value('P_B2') + 1j*grid.get_value('Q_B2'))
    I_B2_m = S_B2_m/(np.sqrt(3)*grid.get_value('V_B2')*400)
    return I_B2_m
def contraint_V_B3(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    U_B3 = grid.get_value('V_B3')*400
    return U_B3
def contraint_V_B2(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    U_B2 = grid.get_value('V_B2')*400
    return U_B2
def contraint_V_B1(u):
    grid.set_value('P_B1',u[0])
    grid.set_value('P_B2',u[1])
    grid.set_value('Q_B1',u[2])
    grid.set_value('Q_B2',u[3])
    grid.load_0('xy_0.json')
    grid.ss()
    U_B1 = grid.get_value('V_B1')*400
    return U_B1
c1_nlcs = (
NonlinearConstraint(contraint_I_B1, -80, 80),
NonlinearConstraint(contraint_I_B2, -476, 476),
# NonlinearConstraint(contraint_V_B1, 400*0.8, 400*1.2),
# NonlinearConstraint(contraint_V_B2, 400*0.8, 400*1.2),
# NonlinearConstraint(contraint_V_B3, 400*0.8, 400*1.2),
# NonlinearConstraint(contraint_I_B0_B3, -400, 400),
)
c2_nlcs = (
NonlinearConstraint(contraint_I_B1, -80, 80),
NonlinearConstraint(contraint_I_B2, -476, 476),
NonlinearConstraint(contraint_V_B1, 400*0.95, 400*1.05),
NonlinearConstraint(contraint_V_B2, 400*0.95, 400*1.05),
NonlinearConstraint(contraint_V_B3, 400*0.95, 400*1.05),
#NonlinearConstraint(contraint_I_B0_B3, -400, 400),
)
c3_nlcs = (
NonlinearConstraint(contraint_I_B1, -80, 80),
NonlinearConstraint(contraint_I_B2, -476, 476),
NonlinearConstraint(contraint_V_B1, 400*0.95, 400*1.05),
NonlinearConstraint(contraint_V_B2, 400*0.95, 400*1.05),
#NonlinearConstraint(contraint_V_B3, 400*0.95, 400*1.05),
NonlinearConstraint(contraint_I_B0_B3, -180, 180),
)
bounds=[(0e3,50e3),(0,400e3),(-50e3,50e3),(-300e3,300e3)]
```
### Minimize
```
x0 = np.array([50.0, 253.50910792, 3.4676086, 17.1092196])*1e3
res = sopt.minimize(obj_eval, x0, method='Powell',
options={})
report(grid)
```
### Case 1
```
c1_solution = sopt.differential_evolution(obj_eval, bounds=bounds, constraints=c1_nlcs, tol=1e-10)
report(grid)
```
### Case 2
```
c2_solution = sopt.differential_evolution(obj_eval, bounds=bounds, constraints=c2_nlcs, tol=1e-8)
report(grid)
```
### Case 3
```
c3_solution = sopt.differential_evolution(obj_eval, bounds=bounds, constraints=c3_nlcs, tol=1e-8)
report(grid)
```
# Linear Elasticity in 2D for 3 Phases
## Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a three-phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first-order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure, and the results are compared with those from finite element simulations. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a larger problem.
PyMKS uses the finite element tool [SfePy](http://sfepy.org) to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
### Elastostatics Equations and Boundary Conditions
The governing equations for elastostatics and the boundary conditions used in this example are the same as those provided in the [Linear Elastic in 2D](elasticity_2D.html) example.
Note that an inappropriate boundary condition is used in this example, because the current version of SfePy cannot implement a periodic-plus-displacement boundary condition. This leads to some issues near the edges of the domain and introduces errors into the resizing of the coefficients. We are working to fix this issue; note that the problem is not with the MKS regression itself, but with the calibration data used. The finite element package ABAQUS includes the displaced periodic boundary condition and can be used to calibrate the MKS regression correctly.
## Modeling with MKS
### Calibration Data and Delta Microstructures
The first-order MKS influence coefficients are all that is needed to compute the strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met, we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1].
Because we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [2].
Here we use the `make_delta_microstructures` function from `pymks.datasets` to create the delta microstructures needed to calibrate the first-order influence coefficients for a three-phase microstructure. The `make_delta_microstructures` function uses SfePy to generate the data.
```
#PYTEST_VALIDATE_IGNORE_OUTPUT
import pymks
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from pymks.tools import draw_microstructures
from pymks.datasets import make_delta_microstructures
n = 21
n_phases = 3
X_delta = make_delta_microstructures(n_phases=n_phases, size=(n, n))
```
Let's take a look at a few of the delta microstructures by importing `draw_microstructures` from `pymks.tools`.
```
draw_microstructures(X_delta[::2])
```
Using delta microstructures to calibrate the first-order influence coefficients is essentially the same as using a unit [impulse response](http://en.wikipedia.org/wiki/Impulse_response) to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases, with the center cell having a different phase from the remainder of the domain. The number of delta microstructures needed to calibrate the first-order coefficients is $N(N-1)$, where $N$ is the number of phases; therefore, in this example we need 6 delta microstructures.
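The impulse-response analogy can be made concrete with a small one-dimensional sketch (purely illustrative; the kernel `h` below is invented for the demo):

```python
import numpy as np

# For a linear, shift-invariant system y = h * x, probing with a unit
# impulse returns the kernel itself -- the same idea as probing the
# first-order MKS model with delta microstructures.
h = np.array([0.2, 0.5, 0.2])        # the "unknown" system kernel
x = np.zeros(7)
x[3] = 1.0                           # unit impulse (the "delta")
y = np.convolve(x, h, mode="same")   # system response to the impulse
assert np.allclose(y[2:5], h)        # the response reveals the kernel
```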
### Generating Calibration Data
The `make_elastic_FE_strain_delta` function from `pymks.datasets` provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the `ElasticFESimulation` class to compute the strain fields.
In this example, let's look at a three-phase microstructure with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. Let's also set the macroscopic imposed strain equal to 0.02. All of the parameters used in the simulation must be passed into the `make_elastic_FE_strain_delta` function. The number of Poisson's ratio and elastic modulus values indicates the number of phases. Note that `make_elastic_FE_strain_delta` does not take a number-of-samples argument, as the number of samples needed to calibrate the MKS model is fixed by the number of phases.
```
from pymks.datasets import make_elastic_FE_strain_delta
from pymks.tools import draw_microstructure_strain
elastic_modulus = (80, 100, 120)
poissons_ratio = (0.3, 0.3, 0.3)
macro_strain = 0.02
size = (n, n)
X_delta, strains_delta = make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio,
size=size, macro_strain=macro_strain)
```
Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
```
draw_microstructure_strain(X_delta[0], strains_delta[0])
```
Because `slice(None)` (the default slice operator in Python, equivalent to `array[:]`) was passed to the `make_elastic_FE_strain_delta` function as the argument for `strain_index`, the function returns all the strain fields. Let's also take a look at the $\varepsilon_{yy}$ and $\varepsilon_{xy}$ strain fields.
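As an aside, `slice(None)` really is the programmatic form of a bare `:` subscript:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
# slice(None) is what Python builds for a bare ':' in square brackets
assert np.array_equal(a[slice(None)], a[:])
# it also works per axis, e.g. all rows of the first column
assert np.array_equal(a[slice(None), 0], a[:, 0])
```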
### Calibrating First-Order Influence Coefficients
Now that we have the delta microstructures and their strain fields, we will calibrate the influence coefficients by creating an instance of the `MKSLocalizationModel` class. Because we are going to calibrate the influence coefficients with delta microstructures, we can create an instance of `PrimitiveBasis` with `n_states` equal to 3 and use it to create an instance of `MKSLocalizationModel`. The delta microstructures and their strain fields will then be passed to the `fit` method.
```
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
p_basis = PrimitiveBasis(n_states=3, domain=[0, 2])
model = MKSLocalizationModel(basis=p_basis)
```
Now, pass the delta microstructures and their strain fields into the `fit` method to calibrate the first-order influence coefficients.
```
model.fit(X_delta, strains_delta)
```
That's it: the influence coefficients have been calibrated. Let's take a look at them.
```
from pymks.tools import draw_coeff
draw_coeff(model.coef_)
```
The influence coefficients for $l=0$ and $l = 1$ have a Gaussian-like shape, while the influence coefficients for $l=2$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but they are equally important. They are equivalent to the constant term in a multiple linear regression with [categorical variables](http://en.wikipedia.org/wiki/Dummy_variable_%28statistics%29).
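That analogy can be sketched numerically (an illustration, not PyMKS code): the one-hot dummy columns of an $N$-category variable sum to one for every sample, so together they already span the constant (intercept) column of a regression design matrix, which is why a constant-valued set of coefficients still carries information.

```python
import numpy as np

labels = np.array([0, 2, 1, 2, 0])                         # a 3-category variable
dummies = (labels[:, None] == np.arange(3)).astype(float)  # one-hot encoding
# every row of the dummy matrix sums to 1, i.e. the columns are linearly
# dependent with the constant column of a design matrix
assert np.allclose(dummies.sum(axis=1), 1.0)
```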
### Predicting the Strain Field for a Random Microstructure
Let's now use our instance of the `MKSLocalizationModel` class with calibrated influence coefficients to compute the strain field for a random three-phase microstructure and compare it with the results from a finite element simulation.
The `make_elastic_FE_strain_random` function from `pymks.datasets` is an easy way to generate a random microstructure and its strain field results from finite element analysis.
```
from pymks.datasets import make_elastic_FE_strain_random
np.random.seed(101)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0], strain[0])
```
**Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.**
Now, to get the strain field from the `MKSLocalizationModel`, just pass the same microstructure to the `predict` method.
```
strain_pred = model.predict(X)
```
Finally let's compare the results from finite element simulation and the MKS model.
```
from pymks.tools import draw_strains_compare
draw_strains_compare(strain[0], strain_pred[0])
```
Let's plot the difference between the two strain fields.
```
from pymks.tools import draw_differences
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
```
The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
## Resizing the Coefficients to Use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], though the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new, larger random microstructure and its strain field.
```
m = 3 * n
size = (m, m)
print(size)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0], strain[0])
```
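Before resizing, it may help to see what spectral interpolation does in one dimension. The sketch below (a simplified illustration, not PyMKS's actual implementation) resamples a periodic kernel by zero-padding its DFT:

```python
import numpy as np

def resize_kernel(h, m):
    """Resample a periodic 1-D kernel from len(h) to m > len(h) points
    by zero-padding its DFT (spectral interpolation)."""
    n = len(h)
    H = np.fft.fft(h)
    half = n // 2
    H_m = np.zeros(m, dtype=complex)
    H_m[:half + 1] = H[:half + 1]             # non-negative frequencies
    H_m[m - (n - half - 1):] = H[half + 1:]   # negative frequencies
    return np.real(np.fft.ifft(H_m)) * m / n  # rescale to preserve the mean

h = np.exp(-np.linspace(-3, 3, 21) ** 2)      # a smooth 21-point kernel
h_big = resize_kernel(h, 63)                  # interpolated to 63 points
assert h_big.shape == (63,)
assert np.isclose(h_big.mean(), h.mean())     # the DC content is unchanged
```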
The influence coefficients that were calibrated on $n$ by $n$ delta microstructures need to be resized to match the shape of the new, larger $m$ by $m$ microstructure that we want to compute the strain field for. This can be done by passing the shape of the new microstructure into the `resize_coeff` method.
```
model.resize_coeff(X[0].shape)
```
Let's now take a look at the resized influence coefficients.
```
draw_coeff(model.coef_)
```
Because the coefficients have been resized, they will no longer work on the original $n$ by $n$ microstructures they were calibrated with, but they can now be used on $m$ by $m$ microstructures. As before, simply pass the microstructure as the argument of the `predict` method to get the strain field.
```
strain_pred = model.predict(X)
draw_strains_compare(strain[0], strain_pred[0])
```
Again, let's plot the difference between the two strain fields.
```
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
```
As you can see, the strain field computed with the resized influence coefficients is not as accurate as before the resizing. This decrease in accuracy is expected when using spectral interpolation [4].
## References
[1] Binci M., Fullwood D., Kalidindi S.R., A new spectral framework for establishing localization relationships for elastic behavior of composites and their calibration to finite-element models. Acta Materialia, 2008. 56 (10) p. 2272-2282 [doi:10.1016/j.actamat.2008.01.017](http://dx.doi.org/10.1016/j.actamat.2008.01.017).
[2] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 2716-2725 [doi:10.1016/j.actamat.2010.01.007](http://dx.doi.org/10.1016/j.actamat.2010.01.007).
[3] Marko, K., Kalidindi S.R., Fullwood D., Computationally efficient database and spectral interpolation for fully plastic Taylor-type crystal plasticity calculations of face-centered cubic polycrystals. International Journal of Plasticity 24 (2008) 1264–1276 [doi:10.1016/j.ijplas.2007.12.002](http://dx.doi.org/10.1016/j.ijplas.2007.12.002).
[4] Marko, K. Al-Harbi H. F. , Kalidindi S.R., Crystal plasticity simulations using discrete Fourier transforms. Acta Materialia 57 (2009) 1777–1784 [doi:10.1016/j.actamat.2008.12.017](http://dx.doi.org/10.1016/j.actamat.2008.12.017).
# Short-Circuit Calculation according to IEC 60909
pandapower supports short-circuit calculations with the method of equivalent voltage source at the fault location according to IEC 60909. The pandapower short-circuit calculation supports the following elements:
- sgen (as motor or as full converter generator)
- gen (as synchronous generator)
- ext_grid
- line
- trafo
- trafo3w
- impedance
with the correction factors as defined in IEC 60909. Loads and shunts are neglected as per standard. The pandapower switch model is fully integrated into the short-circuit calculation.
The following short-circuit currents can be calculated:
- ikss (Initial symmetrical short-circuit current)
- ip (short-circuit current peak)
- ith (equivalent thermal short-circuit current)
either as
- symmetrical three-phase or
- asymmetrical two-phase
short circuit current. Calculations are available for meshed as well as for radial networks. ip and ith are only implemented for short circuits far from synchronous generators.
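For the far-from-generator case, the standard ties the peak current ip to the initial current ikss through a factor kappa that depends only on the R/X ratio at the fault location. A minimal sketch of that relation (not pandapower's internal code):

```python
import math

def peak_current(ikss_ka, r_over_x):
    """ip = kappa * sqrt(2) * ikss with kappa = 1.02 + 0.98*exp(-3 R/X),
    the IEC 60909 expression for faults far from generators."""
    kappa = 1.02 + 0.98 * math.exp(-3.0 * r_over_x)
    return kappa * math.sqrt(2.0) * ikss_ka

# a purely inductive path (R/X -> 0) gives the worst case, kappa = 2.0
assert abs(peak_current(1.0, 0.0) - 2.0 * math.sqrt(2.0)) < 1e-9
# more resistance damps the DC offset, so ip shrinks
assert peak_current(1.0, 0.5) < peak_current(1.0, 0.1)
```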
The results for all elements and different short-circuit currents are tested against commercial software to ensure that correction factors are correctly applied.
### Example Network
Here is a little example on how to use the short-circuit calculation. First, we create a simple open ring network with 4 buses that are connected by one transformer and three lines, with one open sectioning point. The network is fed by an external grid connection at bus 1:
<img src="shortcircuit/example_network_sc.png">
```
import pandapower as pp
import pandapower.shortcircuit as sc
def ring_network():
net = pp.create_empty_network()
b1 = pp.create_bus(net, 220)
b2 = pp.create_bus(net, 110)
b3 = pp.create_bus(net, 110)
b4 = pp.create_bus(net, 110)
pp.create_ext_grid(net, b1, s_sc_max_mva=100., s_sc_min_mva=80., rx_min=0.20, rx_max=0.35)
pp.create_transformer(net, b1, b2, "100 MVA 220/110 kV")
pp.create_line(net, b2, b3, std_type="N2XS(FL)2Y 1x120 RM/35 64/110 kV" , length_km=15.)
l2 = pp.create_line(net, b3, b4, std_type="N2XS(FL)2Y 1x120 RM/35 64/110 kV" , length_km=12.)
pp.create_line(net, b4, b2, std_type="N2XS(FL)2Y 1x120 RM/35 64/110 kV" , length_km=10.)
pp.create_switch(net, b4, l2, closed=False, et="l")
return net
```
## Symmetric Short-Circuit Calculation
### Maximum Short Circuit Currents
Now, we load the network and calculate the maximum short-circuit currents with the calc_sc function:
```
net = ring_network()
sc.calc_sc(net, case="max", ip=True, ith=True)
net.res_bus_sc
```
where ikss is the initial short-circuit current, ip is the peak short-circuit current and ith is the thermal equivalent current.
For branches, the results are defined as the maximum current flow through the branch that occurs for a fault at any bus in the network. The results are available separately for lines:
```
net.res_line_sc
```
and transformers:
```
net.res_trafo_sc
```
### Minimum Short Circuit Currents
Minimum short-circuit currents can be calculated in the same way. However, as per the standard, we first need to specify the end temperature of the lines after a fault:
```
net = ring_network()
net.line["endtemp_degree"] = 80
sc.calc_sc(net, case="min", ith=True, ip=True)
net.res_bus_sc
```
The branch results are now the minimum current flow through each branch:
```
net.res_line_sc
net.res_trafo_sc
```
### Asynchronous Motors
Asynchronous motors can be specified by creating a static generator of type "motor". For the short-circuit impedance, an R/X ratio "rx" as well as the ratio "k" between nominal current and short-circuit current have to be specified:
```
net = ring_network()
pp.create_sgen(net, 2, p_kw=0, sn_kva=500, k=1.2, rx=7., type="motor")
net
```
If we run the short-circuit calculation again, we can see that the currents have increased due to the contribution of the motor to the short-circuit currents.
```
sc.calc_sc(net, case="max", ith=True, ip=True)
net.res_bus_sc
```
### Synchronous Generators
Synchronous generators can also be considered in the short-circuit calculation with the gen element. According to the standard, the rated power factor "cos_phi", rated voltage "vn_kv", rated apparent power "sn_kva", and the subtransient resistance "rdss" and reactance "xdss" are necessary to calculate the short-circuit impedance:
```
net = ring_network()
pp.create_gen(net, 2, p_kw=0, vm_pu=1.0, cos_phi=0.8, vn_kv=22, sn_kva=5e3, xdss=0.2, rdss=0.005)
net
```
and run the short-circuit calculation again:
```
sc.calc_sc(net, case="max", ith=True, ip=True)
net.res_bus_sc
```
Once again, the short-circuit current increases due to the contribution of the generator. As can be seen in the warning, the values for peak and thermal equivalent short-circuit current will only be accurate for faults far from generators.
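To get a feeling for where that contribution comes from, the generator parameters combine into a subtransient short-circuit impedance roughly as follows (a hedged sketch of the IEC 60909 treatment, assuming the generator's rated voltage equals the bus nominal voltage; not pandapower's internal code):

```python
import math

def generator_sc_impedance(vn_kv, sn_mva, rdss, xdss, cos_phi, c_max=1.1):
    """Corrected generator impedance in ohms (simplified IEC 60909 sketch)."""
    z_base = vn_kv ** 2 / sn_mva                               # ohms
    k_g = c_max / (1.0 + xdss * math.sin(math.acos(cos_phi)))  # correction factor
    return k_g * complex(rdss, xdss) * z_base

z = generator_sc_impedance(vn_kv=22.0, sn_mva=5.0, rdss=0.005, xdss=0.2, cos_phi=0.8)
assert z.imag > z.real > 0   # predominantly reactive, as expected for a machine
```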
## Meshed Networks
The correction factors for aperiodic and thermal currents differ between meshed and radial networks. pandapower includes a meshing detection that automatically detects the meshing for each short-circuit location. Alternatively, the topology can be set to "radial" or "meshed" to circumvent the check and save calculation time.
We load the radial network and close the open sectioning point to get a closed ring network:
```
net = ring_network()
net.switch.closed = True
sc.calc_sc(net, topology="auto", ip=True, ith=True)
net.res_bus_sc
```
the network is automatically detected to be meshed and the corresponding correction factors are applied. This can be validated by setting the topology to radial and comparing the results:
```
sc.calc_sc(net, topology="radial", ip=True, ith=True)
net.res_bus_sc
```
If we look at the line results, we can see that the line currents are significantly smaller than the bus currents:
```
sc.calc_sc(net, topology="auto", ip=True, ith=True)
net.res_line_sc
```
this is because the short-circuit current is split up on both paths of the ring, which is correctly considered by pandapower.
## Fault Impedance
It is also possible to specify a fault impedance in the short-circuit calculation:
```
net = ring_network()
sc.calc_sc(net, topology="radial", ip=True, ith=True, r_fault_ohm=1., x_fault_ohm=2.)
```
which of course decreases the short-circuit currents:
```
net.res_bus_sc
```
## Asymmetrical Two-Phase Short-Circuit Calculation
All calculations above can be carried out for a two-phase short-circuit current in the same way by specifying "2ph" in the fault parameter:
```
net = ring_network()
sc.calc_sc(net, fault="2ph", ip=True, ith=True)
net.res_bus_sc
```
Two-phase short circuits are often used for minimum short-circuit calculations:
```
net = ring_network()
net.line["endtemp_degree"] = 150
sc.calc_sc(net, fault="2ph", case="min", ip=True, ith=True)
net.res_bus_sc
```
# Matplotlib example (https://matplotlib.org/gallery/index.html)
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.show()
# to save
# plt.savefig('test_nb.png')
```
# Pandas examples (https://pandas.pydata.org/pandas-docs/stable/visualization.html)
```
import pandas as pd
import numpy as np
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))
df.cumsum().plot()
# new figure
plt.figure()
df.diff().hist(color='k', alpha=0.5, bins=50)
```
# Seaborn examples (https://seaborn.pydata.org/examples/index.html)
```
# Joint distributions
import seaborn as sns
sns.set(style="ticks")
rs = np.random.RandomState(11)
x = rs.gamma(2, size=1000)
y = -.5 * x + rs.normal(size=1000)
sns.jointplot(x, y, kind="hex", color="#4CB391")
# Multiple linear regression
sns.set()
# Load the iris dataset
iris = sns.load_dataset("iris")
# Plot sepal width as a function of sepal length across species
g = sns.lmplot(x="sepal_length", y="sepal_width", hue="species",
truncate=True, height=5, data=iris)
# Use more informative axis labels than are provided by default
g.set_axis_labels("Sepal length (mm)", "Sepal width (mm)")
```
# Cartopy examples (https://scitools.org.uk/cartopy/docs/latest/gallery/index.html)
```
import cartopy.crs as ccrs
from cartopy.examples.arrows import sample_data
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-90, 75, 10, 85], crs=ccrs.PlateCarree())
ax.coastlines()
x, y, u, v, vector_crs = sample_data(shape=(80, 100))
magnitude = (u ** 2 + v ** 2) ** 0.5
ax.streamplot(x, y, u, v, transform=vector_crs,
linewidth=2, density=2, color=magnitude)
plt.show()
import matplotlib.patches as mpatches
import shapely.geometry as sgeom
import cartopy.io.shapereader as shpreader
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection=ccrs.LambertConformal())
ax.set_extent([-125, -66.5, 20, 50], ccrs.Geodetic())
shapename = 'admin_1_states_provinces_lakes_shp'
states_shp = shpreader.natural_earth(resolution='110m',
category='cultural', name=shapename)
# Hurricane Katrina lons and lats
lons = [-75.1, -75.7, -76.2, -76.5, -76.9, -77.7, -78.4, -79.0,
-79.6, -80.1, -80.3, -81.3, -82.0, -82.6, -83.3, -84.0,
-84.7, -85.3, -85.9, -86.7, -87.7, -88.6, -89.2, -89.6,
-89.6, -89.6, -89.6, -89.6, -89.1, -88.6, -88.0, -87.0,
-85.3, -82.9]
lats = [23.1, 23.4, 23.8, 24.5, 25.4, 26.0, 26.1, 26.2, 26.2, 26.0,
25.9, 25.4, 25.1, 24.9, 24.6, 24.4, 24.4, 24.5, 24.8, 25.2,
25.7, 26.3, 27.2, 28.2, 29.3, 29.5, 30.2, 31.1, 32.6, 34.1,
35.6, 37.0, 38.6, 40.1]
# to get the effect of having just the states without a map "background" turn off the outline and background patches
ax.background_patch.set_visible(False)
ax.outline_patch.set_visible(False)
ax.set_title('US States which intersect the track of '
'Hurricane Katrina (2005)')
# turn the lons and lats into a shapely LineString
track = sgeom.LineString(zip(lons, lats))
# buffer the linestring by two degrees (note: this is a non-physical
# distance)
track_buffer = track.buffer(2)
for state in shpreader.Reader(states_shp).geometries():
# pick a default color for the land with a black outline,
# this will change if the storm intersects with our track
facecolor = [0.9375, 0.9375, 0.859375]
edgecolor = 'black'
if state.intersects(track):
facecolor = 'red'
elif state.intersects(track_buffer):
facecolor = '#FF7E00'
ax.add_geometries([state], ccrs.PlateCarree(),
facecolor=facecolor, edgecolor=edgecolor)
ax.add_geometries([track_buffer], ccrs.PlateCarree(),
facecolor='#C8A2C8', alpha=0.5)
ax.add_geometries([track], ccrs.PlateCarree(),
facecolor='none', edgecolor='k')
# make two proxy artists to add to a legend
direct_hit = mpatches.Rectangle((0, 0), 1, 1, facecolor="red")
within_2_deg = mpatches.Rectangle((0, 0), 1, 1, facecolor="#FF7E00")
labels = ['State directly intersects\nwith track',
'State is within \n2 degrees of track']
ax.legend([direct_hit, within_2_deg], labels,
loc='lower left', bbox_to_anchor=(0.025, -0.1), fancybox=True)
plt.show()
```
# Xarray examples (http://xarray.pydata.org/en/stable/plotting.html)
```
import xarray as xr
airtemps = xr.tutorial.load_dataset('air_temperature')
airtemps
# Convert to celsius
air = airtemps.air - 273.15
# copy attributes to get nice figure labels and change Kelvin to Celsius
air.attrs = airtemps.air.attrs
air.attrs['units'] = 'deg C'
air.sel(lat=50, lon=225).plot()
fig, axes = plt.subplots(ncols=2)
air.sel(lat=50, lon=225).plot(ax=axes[0])
air.sel(lat=50, lon=225).plot.hist(ax=axes[1])
plt.tight_layout()
plt.show()
air.sel(time='2013-09-03T00:00:00').plot()
# Faceting
# Plot every 250th point
air.isel(time=slice(0, 365 * 4, 250)).plot(x='lon', y='lat', col='time', col_wrap=3)
# Overlay data on cartopy map
ax = plt.axes(projection=ccrs.Orthographic(-80, 35))
air.isel(time=0).plot.contourf(ax=ax, transform=ccrs.PlateCarree());
ax.set_global(); ax.coastlines();
```
# Independence Tests Power over Increasing Dimension
```
import sys, os
import multiprocessing as mp
from joblib import Parallel, delayed
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from power import power
from hyppo.independence import CCA, MGC, RV, Dcorr, Hsic, HHG
from hyppo.tools import *
sys.path.append(os.path.realpath('..'))
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics.pairwise import euclidean_distances
from abc import ABC, abstractmethod
# from hyppo.random_forest.base import RandomForestTest
from hyppo.independence._utils import sim_matrix
from hyppo.tools import perm_test
FOREST_TYPES = {
"classifier" : RandomForestClassifier,
"regressor" : RandomForestRegressor
}
def euclidean(x):
return euclidean_distances(x)
class RandomForestTest(ABC):
r"""
A base class for a random-forest-based independence test.
"""
def __init__(self):
# set statistic and p-value
self.stat = None
self.pvalue = None
super().__init__()
@abstractmethod
def _statistic(self, x, y):
r"""
Calculates the random-forest test statistic.
Parameters
----------
x, y : ndarray
Input data matrices.
"""
class KMERF(RandomForestTest):
r"""
Class for calculating the random forest based Dcorr test statistic and p-value.
"""
def __init__(self, forest="regressor", n_estimators=500, **kwargs):
self.first_time = True
if forest in FOREST_TYPES.keys():
self.clf = FOREST_TYPES[forest](n_estimators=n_estimators, **kwargs)
else:
raise ValueError("forest must be one of the following: {}".format(list(FOREST_TYPES.keys())))
RandomForestTest.__init__(self)
def _statistic(self, x, y):
r"""
Helper function that calculates the random forest based Dcorr test statistic.
"""
if self.first_time:
y = y.reshape(-1)
self.clf.fit(x, y)
self.first_time = False
distx = np.sqrt(1 - sim_matrix(self.clf, x))
y = y.reshape(-1, 1)
disty = euclidean(y)
stat = Dcorr(compute_distance=None)._statistic(distx, disty)
self.stat = stat
return stat
POWER_REPS = 5
SIMULATIONS = {
"linear": "Linear",
"exponential": "Exponential",
"cubic": "Cubic",
"joint_normal": "Joint Normal",
"step": "Step",
"quadratic": "Quadratic",
"w_shaped": "W-Shaped",
"spiral": "Spiral",
"uncorrelated_bernoulli": "Bernoulli",
"logarithmic": "Logarithmic",
"fourth_root": "Fourth Root",
"sin_four_pi": "Sine 4\u03C0",
"sin_sixteen_pi": "Sine 16\u03C0",
"square": "Square",
"two_parabolas": "Two Parabolas",
"circle": "Circle",
"ellipse": "Ellipse",
"diamond": "Diamond",
"multiplicative_noise": "Multiplicative",
"multimodal_independence": "Independence"
}
TESTS = [
# KMERF,
MGC,
Dcorr,
Hsic,
HHG,
CCA,
RV,
]
def find_dim(sim):
if sim not in SIMULATIONS.keys():
raise ValueError("Invalid simulation")
if sim in ["joint_normal", "sin_four_pi", "sin_sixteen_pi", "multiplicative_noise"]:
dim = 10
elif sim in ["multimodal_independence", "uncorrelated_bernoulli", "logarithmic"]:
dim = 100
elif sim in ["linear", "exponential", "cubic"]:
dim = 1000
elif sim in ["square", "diamond"]:
dim = 40
else:
dim = 20
return dim
def find_dim_range(dim):
if dim < 20:
lim = 10
else:
lim = 20
dim_range = list(range(int(dim/lim), dim+1, int(dim/lim)))
if int(dim/lim) != 1:
dim_range.insert(0, 1)
return dim_range
def estimate_power(sim, test):
dim_range = find_dim_range(find_dim(sim))
est_power = np.array([np.mean([power(test, sim, p=dim) for _ in range(POWER_REPS)])
for dim in dim_range])
np.savetxt('../rf/vs_dimension/{}_{}.csv'.format(sim, test.__name__),
est_power, delimiter=',')
return est_power
# outputs = Parallel(n_jobs=-1, verbose=100)(
# [delayed(estimate_power)(sim, test) for sim in SIMULATIONS.keys() for test in TESTS]
# )
sns.set(color_codes=True, style='white', context='talk', font_scale=1.5)
PALETTE = sns.color_palette("Set1")
sns.set_palette(PALETTE[1:5] + PALETTE[6:], n_colors=9)
def plot_power():
fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(25, 20))
plt.suptitle("Multivariate Independence Testing", y=0.93, va='baseline')
for i, row in enumerate(ax):
for j, col in enumerate(row):
count = 5*i + j
sim = list(SIMULATIONS.keys())[count]
for test in TESTS:
test_name = test.__name__
power = np.genfromtxt('../kmerf/vs_dimension/{}_{}.csv'.format(sim, test_name), delimiter=',')
hsic_power = np.genfromtxt('../kmerf/vs_dimension/{}_Hsic.csv'.format(sim), delimiter=',')
dim_range = find_dim_range(find_dim(sim))
kwargs = {
"label": test.__name__,
"lw": 2,
}
if test_name in ["MGC", "KMERF"]:
kwargs["color"] = "#e41a1c"
kwargs["lw"] = 4
if test_name == "KMERF":
kwargs["linestyle"] = "dashed"
if count in [0, 1, 2, 19]:
col.plot(dim_range, power - hsic_power, **kwargs)
elif count in [8, 9]:
col.plot(dim_range[1:], (power - hsic_power)[1:], **kwargs)
else:
col.plot(dim_range[2:], (power - hsic_power)[2:], **kwargs)
col.set_xticks([3, dim_range[-1]])
col.set_ylim(-1.05, 1.05)
col.set_yticks([])
if j == 0:
col.set_yticks([-1, 0, 1])
col.set_title(SIMULATIONS[sim])
fig.text(0.5, 0.07, 'Dimensions', ha='center')
fig.text(0.07, 0.5, 'Statistical Power Relative to Hsic', va='center', rotation='vertical')
leg = plt.legend(bbox_to_anchor=(0.5, 0.07), bbox_transform=plt.gcf().transFigure, ncol=len(TESTS),
loc='upper center')
leg.get_frame().set_linewidth(0.0)
for legobj in leg.legendHandles:
legobj.set_linewidth(5.0)
plt.subplots_adjust(hspace=.50)
plt.savefig('../kmerf/figs/indep_power_dimension.pdf', transparent=True, bbox_inches='tight')
plot_power()
```
# ETL Project by Johneson Giang
## SCOPE:
### - Extracted, transformed, and loaded YouTube's Top Trending Videos from December 2017 through May 2018, keeping only the videos categorized as music, and created an "Artist" column to enable joining with Spotify's Top 100 Songs of 2018. Both dataframes were loaded into MySQL.
## PURPOSE:
### - I chose this project because I'm an avid listener and a huge music fan and concert-goer, and wanted to work with data that I was familiar with.
### Data Sources - Kaggle
- https://www.kaggle.com/datasnaek/youtube-new/downloads/youtube-new.zip/114
- https://www.kaggle.com/nadintamer/top-spotify-tracks-of-2018
## Step 1) Import Dependencies
```
#!pip install PyMySQL
# Import Dependencies:
import os
import csv
import json
import numpy as np
import pandas as pd
from datetime import datetime
import simplejson
import sys
import string
from sqlalchemy import create_engine
import pymysql
pymysql.install_as_MySQLdb()
```
## Step 2) "Extract" the data
```
# YouTube Data - Raw CSV
csv_file_yt = "youtube_USvideos.csv"
yt_rawdata_df = pd.read_csv(csv_file_yt, encoding='utf-8')
#yt_rawdata_df
# Load the JSON Category file and print to see the categories
#YouTube Data - Raw JSON - Categories
yt_json_file = "youtube_US_category_id.json"
yt_rawjson_df = pd.read_json(yt_json_file)
list(yt_rawjson_df)
for i in yt_rawjson_df.iterrows():
print(i[1].items)
# Spotify 2018 - Top 100 Songs - Raw CSV
csv_file_spotify2018 = "spotify_top2018.csv"
spotify2018_rawdata_df = pd.read_csv(csv_file_spotify2018, encoding='utf-8')
spotify2018_rawdata_df.head()
```
## Step 3) "Transform" the data (clean, manipulate, and etc.)
```
# Figure out if there's any missing information
yt_rawdata_df.count()
# Explore the data and types
yt_rawdata_df.info()
# Clean the column names - note to self: use underscores next time instead of spaces
yt_cleandata_df = yt_rawdata_df.rename(columns={"video_id":"Video ID", "trending_date":"Trending Date",
"title":"Title", "channel_title":"Channel Title",
"category_id":"Category Titles", "publish_time":"Publish Time",
"tags":"Tags", "views":"Views",
"likes":"Likes", "dislikes":"Dislikes",
"comment_count":"Comment Count", "thumbnail_link":"Thumbnail Link",
"comments_disabled":"Comments Disabled", "ratings_disabled":"Ratings Disabled",
"video_error_or_removed":"Video Error Or Removed", "description":"Description"
})
yt_cleandata_df
# Drop Cells with Missing Information
yt_cleandata_df = yt_cleandata_df.dropna(how="any")
#yt_cleandata_df.info()
# Drop duplicates and sort by Trending Date by chaining (assign the result back; these methods return new DataFrames)
yt_cleandata_df = yt_cleandata_df.drop_duplicates(['Video ID', 'Trending Date', 'Title', 'Channel Title', 'Category Titles', 'Publish Time']).sort_values(by=['Trending Date'], ascending=False)
# Drop unwanted columns
to_drop =['Publish Time', 'Tags', 'Thumbnail Link', 'Comments Disabled', 'Ratings Disabled', 'Video Error Or Removed', 'Description']
yt_cleandata_df.drop(to_drop, inplace=True, axis=1)
# Replace the "." in "Trending Date" to "-"
yt_cleandata_df['Trending Date'] = [x.replace(".","-") for x in yt_cleandata_df['Trending Date']]
yt_cleandata_df.head()
# Convert categories to string type to set up conversion of category titles
yt_cleandata_df['Category Titles'] = yt_cleandata_df['Category Titles'].apply(str)
yt_cleandata_df.info()
# Learn about the category titles to convert them
yt_cleandata_df['Category Titles'].value_counts()
# Convert category IDs to actual category titles, manually transcribed from the YouTube Categories JSON file.
# A dict-based map avoids the substring pitfalls of chained str.replace
# (e.g. replacing "1" first would corrupt "17", "15", and "19").
category_map = {
    "1": "Film & Animation", "2": "Autos & Vehicles", "10": "Music",
    "15": "Pets & Animals", "17": "Sports", "19": "Travel & Events",
    "20": "Gaming", "22": "People & Blogs", "23": "Comedy",
    "24": "Entertainment", "25": "News & Politics", "26": "How To & Style",
    "27": "Education", "28": "Science & Technology", "29": "Nonprofits & Activism",
    "43": "Shows"
}
yt_cleandata_df['Category Titles'] = yt_cleandata_df['Category Titles'].map(category_map).fillna(yt_cleandata_df['Category Titles'])
yt_cleandata_df
# Sort the values by Trending Date
yt_cleandata_df = yt_cleandata_df.sort_values(by= ["Trending Date"], ascending=False)
yt_cleandata_df.head()
# Filter out for Items with the Music Category
yt_musicdata_df = yt_cleandata_df.loc[yt_cleandata_df['Category Titles'] == 'Music']
yt_musicdata_df.head()
# Print out All Channel Titles to see what needs to be cleaned
for x in yt_musicdata_df['Channel Title'].unique():
    print(x)
# Insert a new column for Artist. This will be used to join the Spotify 2018 Top 100 Artists
yt_musicdata_df.insert(3, "Artist", "")  # insert modifies the DataFrame in place and returns None
#yt_musicdata_df
# Clean Channel Title Column to set up the MAIN LOOP
yt_musicdata_df['Channel Title'] = [x.replace("VEVO","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("vevo","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("Vevo","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("Official","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("official","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("OFFICIAL","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("You Tube Channel","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("Music","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace("music","") for x in yt_musicdata_df['Channel Title']]
yt_musicdata_df['Channel Title'] = [x.replace(" - Topic","") for x in yt_musicdata_df['Channel Title']]
# Series.replace returns a new Series, so assign the result back
# (the original one-by-one calls discarded their results)
name_fixes = {
    "BackstreetBoys": "Backstreet Boys", "CalumScott": "Calum Scott",
    "TaylorSwift": "Taylor Swift", "NickiMinajAt": "Nicki Minaj",
    "FifthHarmony": "FifthHarmony", "davematthewsband": "Dave Matthews Band",
    "EnriqueIglesias": "Enrique Iglesias", "ChildishGambino": "Childish Gambino",
    "SamSmithWorld": "Sam Smith", "MeghanTrainor": "Meghan Trainor",
    "johnmayer": "John Mayer", "weezer": "Weezer",
    "AzealiaBanks": "Azealia Banks", "Maroon5": "Maroon 5",
    "Zayn": "ZAYN", "ArianaGrande": "Ariana Grande",
    "CAguilera": "Christina Aguilera", "LadyGaga": "Lady Gaga",
    "ToniBraxton": "Toni Braxton", "JasonAldean": "Jason Aldean",
    "PTXofficial": "PTX", "KeithUrban": "Keith Urban",
    "KaceyMusgraves": "Kacey Musgraves", "ChrisStapleton": "Chris Stapleton",
    "ThirtySecondsToMars": "Thirty Seconds To Mars"
}
yt_musicdata_df['Channel Title'] = yt_musicdata_df['Channel Title'].replace(name_fixes)
# Test loop to convert Spotify artist names to lower case
for x in spotify2018_rawdata_df['artists'].unique():
    normalized = x.lower().replace(" ", "")  # avoid shadowing the built-in str
    print(x, " | ", normalized)
# MAIN LOOP: loop through both the YouTube and Spotify data sets, normalize the names
# (lower case, spaces removed), and fill the new "Artist" column in the YouTube df
# with the Spotify artist value when the artist name is found
for index, x in yt_musicdata_df.iterrows():
    stryt = x['Channel Title'].lower().replace(" ", "")
    for y in spotify2018_rawdata_df['artists'].unique():
        strsp = y.lower().replace(" ", "")
        if strsp in stryt:
            yt_musicdata_df.at[index, 'Artist'] = y  # .at avoids chained-assignment warnings
yt_musicdata_df
# Created a new filtered df for the music taking out all blank "Artist".
# All items in this DF returned were on Spotify's Top 100 Songs in 2018
yt_filtered_musicdata_df = yt_musicdata_df.loc[yt_musicdata_df['Artist'] != '']
yt_filtered_musicdata_df = yt_filtered_musicdata_df.reset_index(drop=True)
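# A sketch (hypothetical helper, not in the original notebook) of a single-pass
# alternative to the nested matching loop above: precompute the normalized Spotify
# names once, then map each channel title through them with .apply.
def match_artist(channel_title, normalized_artists):
    """Return the first artist whose normalized name appears in the channel title, else ""."""
    key = channel_title.lower().replace(" ", "")
    for norm, original in normalized_artists.items():
        if norm in key:
            return original
    return ""
# Hypothetical usage:
# normalized = {a.lower().replace(" ", ""): a for a in spotify2018_rawdata_df['artists'].unique()}
# yt_musicdata_df['Artist'] = yt_musicdata_df['Channel Title'].apply(lambda c: match_artist(c, normalized))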
# Set up Spotify DataFrame
spotify_2018_id = spotify2018_rawdata_df['id']
spotify_2018_name = spotify2018_rawdata_df['name']
spotify_2018_artists = spotify2018_rawdata_df['artists']
#type(spotify_2018_id)
spotify2018_filtered_df = pd.DataFrame({"Artist": spotify_2018_artists,
"Song Name": spotify_2018_name,
"Spotify Unique ID": spotify_2018_id
})
spotify2018_filtered_df.head()
```
## Step 4) Load both dataframes into MySQL database
```
# COPY THE YouTube DATA FRAME
youtube_music_artist_df = yt_filtered_musicdata_df.copy()
youtube_music_artist_df
# COPY THE Spotify 2018 Top 100 Filtered DATA FRAME
# create a new variable and copy the finished spotify dataframe
spotify_2018_filtered_df = spotify2018_filtered_df.copy()
spotify_2018_filtered_df.head()
# Ready the "rds_connection_string" = "<insert user name>:<insert password>@127.0.0.1/customer_db"
rds_connection_string = "root:******************@127.0.0.1/youtube_spotify_2018_db"
engine = create_engine(f'mysql://{rds_connection_string}')
# engine.set_character_set('utf8')
# engine.execute('SET NAMES utf8;')
# dbc.execute('SET CHARACTER SET utf8;')
# dbc.execute('SET character_set_connection=utf8;')
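# A sketch of pulling the credentials from environment variables instead of
# hard-coding them (DB_USER / DB_PASSWORD are hypothetical variable names,
# not part of the original notebook):
import os
db_user = os.environ.get("DB_USER", "root")
db_password = os.environ.get("DB_PASSWORD", "")
alt_connection_string = f"{db_user}:{db_password}@127.0.0.1/youtube_spotify_2018_db"
# engine = create_engine(f'mysql://{alt_connection_string}')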
youtube_music_artist_df['Channel Title'].unique()
youtube_music_artist_df
# Clean Special Characters to prevent latin-1 encoding errors. Went back up to the pd.read_csv
# and added "encoding="utf-8"
youtube_music_artist_df['Title'] = [x.replace("é","e") for x in youtube_music_artist_df['Title']]
youtube_music_artist_df['Title'] = [x.replace("ú","u") for x in youtube_music_artist_df['Title']]
youtube_music_artist_df['Title'] = [x.replace("®","") for x in youtube_music_artist_df['Title']]
# Use pandas to load csv converted DataFrame into database - YouTube
youtube_music_artist_df.to_sql(name='youtube_music_2018', con=engine, if_exists='append', index=False)
# Use pandas to load csv converted DataFrame into database - Spotify 2018
spotify_2018_filtered_df.to_sql(name='spotify2018_top100', con=engine, if_exists='append', index=False)
# CHECK FOR TABLES
engine.table_names()
# Confirm data has been added by querying the you tube table
pd.read_sql_query('select * from youtube_music_2018', con=engine).head()
# Confirm data has been added by querying the spotify table
pd.read_sql_query('select * from spotify2018_top100', con=engine).head()
```
## Finished!
```
import math
import numpy as np
import sys
import pandas as pd
def add(x, y, filter=False):
    # when filter is set, treat NaNs in either input as 0 before adding
    if filter:
        x[np.isnan(x)] = 0
        y[np.isnan(y)] = 0
    return x+y
def ceil(x):
    # element-wise ceiling; the original int(x)+1 over-counts exact integers
    # and truncates toward zero for negative values
    return np.ceil(x)
def divide(x,y):
return x/y
def exp(x):
x=np.exp(x)
return x
def floor(x):
    # element-wise floor; int() truncates toward zero, which is wrong for negatives
    return np.floor(x)
def frac(x):
return x.astype(float)-floor(x)
def inverse(x):
return 1/x
def log(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
array[j]=np.log(array[j])
return(matrix)
def log_diff(x):
matrix = np.transpose(x).astype(float).copy()
new_list = list()
for i in range(np.shape(x)[0]):
array = matrix[i]
new_array = list()
new_array.append(0.)
for j in range(1,len(array)):
new_array.append(array[j]-array[j-1])
new_list.append(new_array)
return(np.array(np.transpose(new_list)))
def nan_out(x, lower=0, upper=0):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j] < lower:
array[j]=np.nan
elif array[j] > upper:
array[j]=np.nan
return(matrix)
def power(x,y):
return x**y
def purify(x):
    pos_inf = math.inf
    neg_inf = -math.inf
    matrix = x.astype(float).copy()
    for i in range(np.shape(x)[0]):
        array = matrix[i]
        for j in range(len(array)):
            if array[j] == pos_inf or array[j] == neg_inf:  # original missed -inf
                array[j] = np.nan
    return(matrix)
def reverse(x):
return(-x)
def sign(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if np.isnan(array[j])==True:
array[j]=np.nan
elif array[j] > 0:
array[j]=1
elif array[j] < 0:
array[j]=-1
else:
array[j]=0
return(matrix)
def signed_power_(x, y):
    matrix_x = x.astype(float).copy()
    matrix_y = y.astype(float).copy()  # was x.astype(...): the exponents must come from y
    for i in range(np.shape(x)[0]):
        array_x = matrix_x[i]
        array_y = matrix_y[i]
        for j in range(len(array_x)):
            if array_x[j] >= 0:
                array_x[j] = array_x[j]**array_y[j]
            else:
                # sign-preserving power: -(|x|**y) rather than -(x**y), which is nan for fractional y
                array_x[j] = -((-array_x[j])**array_y[j])
    return(matrix_x)
def slog1p(x):
return(sign(x) * log(1 + abs(x)))
def sqrt(x):
return x**(1/2)
def subtract(x, y, filters=False):
if filters == False:
return(x-y)
if filters == True:
matrix_x = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix_x[i]
for j in range(len(array)):
if np.isnan(array[j])==True:
array[j]=0
matrix_y = y.astype(float).copy()
for i in range(np.shape(y)[0]):
array = matrix_y[i]
for j in range(len(array)):
if np.isnan(array[j])==True:
array[j]=0
return(matrix_x-matrix_y)
def to_nan(x, value=0, reverse=False):
if reverse == False:
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j]==value:
array[j]=np.nan
if reverse == True:
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if np.isnan(array[j])==True:
array[j]=value
return(matrix)
def negate(x):
    # note: despite the name, this returns a 0/1 indicator of nonzero entries, not -x
    matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j] == 0:
array[j]=0
else:
array[j]=1
return(matrix)
def is_not_nan(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if np.isnan(array[j]) == False:
array[j]=1
else:
array[j]=0
return(matrix)
def is_nan(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if np.isnan(array[j]) == True:
array[j]=1
else:
array[j]=0
return(matrix)
def is_finite(x):
    matrix = x.astype(float).copy()
    for i in range(np.shape(x)[0]):
        array = matrix[i]
        for j in range(len(array)):
            # 1 for finite entries (the original had the 0/1 outputs inverted)
            if np.isinf(array[j]) or np.isnan(array[j]):
                array[j] = 0
            else:
                array[j] = 1
    return(matrix)
def is_not_finite(x):
    matrix = x.astype(float).copy()
    for i in range(np.shape(x)[0]):
        array = matrix[i]
        for j in range(len(array)):
            # 1 for inf or nan entries (the original had the 0/1 outputs inverted)
            if np.isinf(array[j]) or np.isnan(array[j]):
                array[j] = 1
            else:
                array[j] = 0
    return(matrix)
def arc_cos(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if 1>=array[j]>=-1:
array[j]=math.acos(array[j])
else:
array[j]=np.nan
return(matrix)
def arc_sin(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if 1>=array[j]>=-1:
array[j]=math.asin(array[j])
else:
array[j]=np.nan
return(matrix)
def arc_tan(x):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
array[j]=math.atan(array[j])
return(matrix)
def left_tail(x, maximum = 0):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j] > maximum:
array[j]=np.nan
return(matrix)
def right_tail(x, minimum = 0):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j] < minimum:
array[j]=np.nan
return(matrix)
def sigmoid(x):
return(1 / (1 + np.exp(-x)))
def tail(x, lower = 0,upper = 0, newval=0 ):
matrix = x.astype(float).copy()
for i in range(np.shape(x)[0]):
array = matrix[i]
for j in range(len(array)):
if array[j] < lower:
array[j] = newval
elif array[j] > upper:
array[j] = newval
return(matrix)
def tanh(x):
return(np.tanh(x))
def normalize(x, useStd=False, limit=0.0):
matrix=x.astype(float).copy()
if len(np.shape(x)) == 1:
if useStd == False:
if limit == 0.0:
return x-x.mean()
else:
a=x-x.mean()
b=((a)*(limit*2))/(a.max()-a.min())
return b+(limit-b.max())
else:
if limit == 0.0:
return (x-x.mean())/x.std()
else:
a=(x-x.mean())/x.std()
b=((a)*(limit*2))/(a.max()-a.min())
return b+(limit-b.max())
else:
if useStd == False:
if limit == 0.0:
for i in range(np.shape(x)[0]):
matrix[i] = x[i]-x[i].mean()
return matrix
else:
for i in range(np.shape(x)[0]):
a=x[i]-x[i].mean()
b=((a)*(limit*2))/(a.max()-a.min())
matrix[i] = b+(limit-b.max())
return matrix
else:
if limit == 0.0:
for i in range(np.shape(x)[0]):
matrix[i] = (x[i]-x[i].mean())/x[i].std()
return matrix
else:
for i in range(np.shape(x)[0]):
a=(x[i]-x[i].mean())/x[i].std()
b=((a)*(limit*2))/(a.max()-a.min())
matrix[i] = b+(limit-b.max())
return matrix
def rank(x,rate=2):
matrix = x.astype(float).copy()
if rate == 2 :
for i in range(np.shape(x)[0]):
array = matrix[i]
seq = sorted(array)
matrix[i] = [seq.index(v) for v in array]
matrix = matrix/(np.shape(np.transpose(x))[1]-1)
elif rate == 0 :
for i in range(np.shape(np.transpose(x))[0]):
array = matrix[i]
matrix[i] = [(v-min(array))/(max(array)-min(array)) for v in array]
return(np.transpose(matrix))
def scale_down(x, constant=0):
matrix=x.astype(float).copy()
if len(np.shape(x)) == 1:
return (x-x.min())/(x.max()-x.min()) - constant
else:
for i in range(np.shape(x)[0]):
matrix[i] = (x[i]-x[i].min())/(x[i].max()-x[i].min()) - constant
return matrix
def winsorize(x, std=4):
matrix=x.astype(float).copy()
if len(np.shape(x)) == 1:
a = (x > x.mean() + std*x.std()) | (x < x.mean() - std*x.std())
return np.delete(x, np.where(a))
else:
for i in range(np.shape(x)[0]):
a = (x[i] > x[i].mean() + std*x[i].std()) | (x[i] < x[i].mean() - std*x[i].std())
matrix[i] = np.delete(x[i], np.where(a))
return matrix
# note: rows can end up with mismatched lengths after deletion, so the matrix assignment can fail
def zscore(x):
matrix=x.astype(float).copy()
if len(np.shape(x)) == 1:
return (x-x.mean())/x.std()
else:
for i in range(np.shape(x)[0]):
matrix[i] = (x[i]-x[i].mean())/x[i].std()
return matrix
def ts_delay(x, d):
if len(np.shape(x)) == 1:
x=pd.Series(x)
return x.shift(periods=d)
else:
x=pd.DataFrame(x)
return x.shift(periods=d, axis=1)
# input format : pandas DataFrame and Series
# note the data format !!!!!!!
def ts_delta(x, d):
return x - ts_delay(x, d)
# input format : pandas DataFrame
def ts_ir(x, d):
return ts_mean(x, d)/ts_stddev(x, d)
def ts_max(x, d):
if len(np.shape(x)) == 1:
x=pd.Series(x)
return x.max()
else:
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.max(axis = 1)
return a
# note the data format !!!!!!!
def ts_max_diff(x, d):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = x.T[i] - ts_max(x, d)[:,-1]
return matrix.T
def ts_mean(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:, -d:])
a[:,-1]=x.mean(axis = 1)
return a
def ts_median(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.median(axis = 1)
return a
def ts_min(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.min(axis = 1)
return a
def ts_min_diff(x, d):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = x.T[i] - ts_min(x, d)[:,-1]
return matrix.T
def ts_min_max_cps(x, d, f=2):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = (ts_min(x, d)[:,-1] - ts_max(x, d)[:,-1]) - f * x.T[i]
return matrix.T
def ts_min_max_diff(x, d, f=0.5):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = x.T[i] - f*(ts_min(x, d)[:,-1] + ts_max(x, d)[:,-1])
return matrix.T
def ts_product(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.prod(axis = 1)
return a
def ts_returns(x, d, mode=1):
if mode == 1:
return (x.T - ts_delay(x, d))/ts_delay(x, d)
elif mode == 2:
return (x.T - ts_delay(x, d))/((x + ts_delay(x, d))/2)
def ts_scale(x, d, constant = 0):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = ((x.T[i] - ts_min(x, d)[:,-1])/(ts_max(x, d)[:,-1]-ts_min(x, d)[:,-1])) + constant
return matrix.T
def ts_stddev(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.std(axis = 1)
return a
def ts_sum(x, d):
a=np.zeros(shape=(np.shape(x)))
x=pd.DataFrame(x[:,-d:])
a[:,-1]=x.sum(axis = 1)
return a
def ts_zscore(x, d):
matrix=np.zeros(shape=np.shape(x))
for i in range(len(matrix)):
matrix[i] = (x.T[i] - ts_mean(x, d)[:,-1])/ts_stddev(x, d)[:,-1]
return matrix.T
```
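Many of the element-wise helpers above reduce to vectorized numpy one-liners, which avoid the explicit index loops entirely and leave the inputs untouched; a sketch under the same semantics (the `_v` names are ours):

```python
import numpy as np

def sign_v(x):
    # np.sign already returns -1/0/1 element-wise and propagates nan
    return np.sign(x)

def is_nan_v(x):
    # 0/1 indicator of nan entries
    return np.isnan(x).astype(float)

def tail_v(x, lower=0, upper=0, newval=0):
    # replace entries outside [lower, upper] with newval
    return np.where((x < lower) | (x > upper), newval, x)

def signed_power_v(x, y):
    # sign-preserving power: sign(x) * |x|**y
    return np.sign(x) * np.abs(x) ** y
```

These match the loop versions on finite inputs but run at C speed.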
--------------------------------------------------
```
def if_else(input1, input2, input3):
    # the original dropped the returns, so the function always returned None
    if input1:
        return input2
    else:
        return input3
def group_scale(x, group):
matrix=x.astype(float).copy()
if len(np.shape(x)) == 1:
return (x-group.min())/(group.max()-group.min())
else:
for i in range(np.shape(x)[0]):
matrix[i] = (x[i]-group.min())/(group.max()-group.min())
return matrix
# note: assumes group is 1-D; higher-dimensional groups are not handled
```
<a href="https://colab.research.google.com/github/jejjohnson/gp_model_zoo/blob/master/code/numpyro/numpyro_gpr_laplace.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Numpyro Jax PlayGround
My starting notebook where I install all of the necessary libraries and load some easy 1D/2D Regression data to play around with.
```
#@title Install Packages
%%capture
!pip install jax jaxlib flax chex optax objax
!pip install "git+https://github.com/deepmind/dm-haiku"
!pip install "git+https://github.com/pyro-ppl/numpyro.git#egg=numpyro"
!pip uninstall tensorflow -y -q
!pip install -Uq tfp-nightly[jax] > /dev/null
#@title Load Packages
# TYPE HINTS
from typing import Tuple, Optional, Dict, Callable, Union
# JAX SETTINGS
import jax
import jax.numpy as np
import jax.random as random
# JAX UTILITY LIBRARIES
import chex
# NUMPYRO SETTINGS
import numpyro
import numpyro.distributions as dist
from numpyro.infer.autoguide import AutoDiagonalNormal
from numpyro.infer import SVI, Trace_ELBO
# NUMPY SETTINGS
import numpy as onp
onp.set_printoptions(precision=3, suppress=True)
# MATPLOTLIB Settings
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# SEABORN SETTINGS
import seaborn as sns
sns.set_context(context='talk',font_scale=0.7)
# PANDAS SETTINGS
import pandas as pd
pd.set_option("display.max_rows", 120)
pd.set_option("display.max_columns", 120)
# LOGGING SETTINGS
import sys
import logging
logging.basicConfig(
level=logging.INFO,
stream=sys.stdout,
format='%(asctime)s:%(levelname)s:%(message)s'
)
logger = logging.getLogger()
#logger.setLevel(logging.INFO)
%load_ext autoreload
%autoreload 2
#@title Data
def get_data(
n_train: int = 30,
input_noise: float = 0.15,
output_noise: float = 0.15,
n_test: int = 400,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, None]:
onp.random.seed(0)
X = np.linspace(-1, 1, n_train)
Y = X + 0.2 * np.power(X, 3.0) + 0.5 * np.power(0.5 + X, 2.0) * np.sin(4.0 * X)
Y += output_noise * onp.random.randn(n_train)
Y -= np.mean(Y)
Y /= np.std(Y)
X += input_noise * onp.random.randn(n_train)
assert X.shape == (n_train,)
assert Y.shape == (n_train,)
X_test = np.linspace(-1.2, 1.2, n_test)
return X[:, None], Y[:, None], X_test[:, None]
n_train = 60
input_noise = 0.0
output_noise = 0.1
n_test = 100
X, Y, Xtest = get_data(
n_train=n_train,
input_noise=0.0, output_noise=output_noise,
n_test=n_test
)
fig, ax = plt.subplots()
ax.scatter(X, Y)
plt.show()
```
## Gaussian Process Model
```
import objax
# squared euclidean distance
def sqeuclidean_distance(x: np.array, y: np.array) -> float:
return np.sum((x - y) ** 2)
# distance matrix
def distmat(func: Callable, x: np.ndarray, y: np.ndarray) -> np.ndarray:
"""distance matrix"""
return jax.vmap(lambda x1: jax.vmap(lambda y1: func(x1, y1))(y))(x)
# 1D covariance matrix
def rbf_kernel(X, Y, variance, length_scale):
# distance formula
deltaXsq = distmat(sqeuclidean_distance, X / length_scale, Y / length_scale)
# rbf function
K = variance * np.exp(-0.5 * deltaXsq)
return K
class ExactGP(objax.Module):
def __init__(self):
self.var = numpyro.param("kernel_var", init_value=1.0, constraints=dist.constraints.positive)
self.scale = numpyro.param("kernel_length", init_value=0.1, constraints=dist.constraints.positive)
self.sigma = numpyro.param("sigma", init_value=0.01, constraints=dist.constraints.positive)
def model2(self, X, y=None):
# η = numpyro.sample("variance", dist.HalfCauchy(scale=5.))
# ℓ = numpyro.sample("length_scale", dist.Gamma(2., 1.))
# σ = numpyro.sample("noise", dist.HalfCauchy(scale=5.))
# Compute kernel
K = rbf_kernel(X, X, self.var, self.scale)
K += np.eye(X.shape[0]) * np.power(self.sigma, 2)
# Sample y according to the standard gaussian process formula
return numpyro.sample("y", dist.MultivariateNormal(
loc=np.zeros(X.shape[0]),
covariance_matrix=K), obs=y
)
def model(self, X, y=None):
# η = numpyro.param("kernel_var", init_value=1.0, constraints=dist.constraints.positive)
# ℓ = numpyro.param("kernel_length", init_value=0.1, constraints=dist.constraints.positive)
# σ = numpyro.param("sigma", init_value=0.01, constraints=dist.constraints.positive)
η = numpyro.sample("variance", dist.HalfCauchy(scale=5.))
ℓ = numpyro.sample("length_scale", dist.Gamma(2., 1.))
σ = numpyro.sample("noise", dist.HalfCauchy(scale=5.))
# Compute kernel
K = rbf_kernel(X, X, η, ℓ)
K += np.eye(X.shape[0]) * np.power(σ, 2)
# Sample y according to the standard gaussian process formula
return numpyro.sample("y", dist.MultivariateNormal(
loc=np.zeros(X.shape[0]),
covariance_matrix=K), obs=y
)
def guide(self, X, y):
pass
# GP model.
def GP(X, y):
# Set informative log-normal priors on kernel hyperparameters.
# η = pm.HalfCauchy("η", beta=5)
η = numpyro.sample("variance", dist.HalfCauchy(scale=5.))
ℓ = numpyro.sample("length_scale", dist.Gamma(2., 1.))
σ = numpyro.sample("noise", dist.HalfCauchy(scale=5.))
# η = numpyro.param("kernel_var", init_value=1.0, constraints=dist.constraints.positive)
# ℓ = numpyro.param("kernel_length", init_value=0.1, constraints=dist.constraints.positive)
# σ = numpyro.param("sigma", init_value=0.01, constraints=dist.constraints.positive)
# Compute kernel
K = rbf_kernel(X, X, η, ℓ)
K += np.eye(X.shape[0]) * np.power(σ, 2)
# Sample y according to the standard gaussian process formula
numpyro.sample("y", dist.MultivariateNormal(
loc=np.zeros(X.shape[0]),
covariance_matrix=K), obs=y
)
def empty_guide(X, y):
pass
def cholesky_factorization(K: np.ndarray, Y: np.ndarray) -> Tuple[np.ndarray, bool]:
"""Cholesky Factorization"""
# cho factor the cholesky
L = jax.scipy.linalg.cho_factor(K, lower=True)
# weights
weights = jax.scipy.linalg.cho_solve(L, Y)
return L, weights
# Predictive Mean and Variance
def predict(X, Y, X_test, variance, length_scale, noise):
K = rbf_kernel(X, X, variance, length_scale)
L, alpha = cholesky_factorization(K + noise * np.eye(K.shape[0]), Y)
# Calculate the Mean
K_x = rbf_kernel(X_test, X, variance, length_scale)
mu_y = np.dot(K_x, alpha)
# Calculate the variance
v = jax.scipy.linalg.cho_solve(L, K_x.T)
# Calculate kernel matrix for inputs
K_xx = rbf_kernel(X_test, X_test, variance, length_scale)
cov_y = K_xx - np.dot(K_x, v)
return mu_y, cov_y
# Summarize function posterior.
def posterior(rng_key, X, Y, X_test, variance, length_scale, noise):
m, cov = predict(X, Y, X_test, variance, length_scale, noise)
return random.multivariate_normal(rng_key, mean=m, cov=cov)
def summarize_posterior(preds, ci=96):
ci_lower = (100 - ci) / 2
ci_upper = (100 + ci) / 2
preds_mean = preds.mean(0)
preds_lower = np.percentile(preds, ci_lower, axis=0)
preds_upper = np.percentile(preds, ci_upper, axis=0)
return preds_mean, preds_lower, preds_upper
K = rbf_kernel(X, X, 1.0, 1.0)
# check shape
chex.assert_shape(K, (n_train, n_train))
```
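As a quick sanity check (added here, not in the original notebook): a valid RBF Gram matrix is symmetric, has a unit diagonal when `variance=1`, and is positive semi-definite. A plain-numpy mirror of the kernel above makes this easy to verify:

```python
import numpy as np

def rbf_kernel_np(X, Y, variance, length_scale):
    # numpy re-implementation of the jax rbf_kernel defined above
    d2 = ((X[:, None, :] / length_scale - Y[None, :, :] / length_scale) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2)

Xc = np.linspace(-1, 1, 5)[:, None]
K = rbf_kernel_np(Xc, Xc, variance=1.0, length_scale=0.5)
assert np.allclose(K, K.T)                  # symmetry
assert np.allclose(np.diag(K), 1.0)         # unit diagonal for variance=1
assert np.linalg.eigvalsh(K).min() > -1e-9  # positive semi-definite
```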
## Laplace Approximation
```
from numpyro.infer.autoguide import AutoLaplaceApproximation
from numpyro.infer import TraceMeanField_ELBO
# reproducibility
rng_key = random.PRNGKey(0)
# Setup
guide = numpyro.infer.autoguide.AutoLaplaceApproximation(GP)
optimizer = numpyro.optim.Adam(step_size=0.01)
# optimizer = numpyro.optim.Minimize()
# optimizer = optax.adamw(learning_rate=0.1)
svi = SVI(GP, guide, optimizer, loss=Trace_ELBO())
svi_result = svi.run(random.PRNGKey(1), 1_000, X, Y.squeeze())
fig, ax = plt.subplots()
ax.plot(svi_result.losses);
ax.set(
title = "Loss",
xlabel="Iterations",
ylabel="Negative ELBO"
);
plt.show()
params = svi_result.params
print(params)
quantiles = guide.median(params)
print(quantiles)
y_pred, y_cov = predict(
X, Y.squeeze(), Xtest,
variance=quantiles['variance'],
length_scale=quantiles['length_scale'],
noise=quantiles['noise']
)
y_var = np.diagonal(y_cov)
y_std = np.sqrt(y_var)
fig, ax = plt.subplots(ncols=1, figsize=(6, 4))
ax.scatter(X, Y, label='Training Data', color='red')
ax.plot(Xtest, y_pred, label='Predictive Mean', color='black', linewidth=3)
ax.fill_between(
Xtest.squeeze(),
y_pred - y_std,
y_pred + y_std,
label='Confidence Interval',
alpha=0.3,
color='darkorange'
)
ax.legend()
seed = 123
n_samples = 1_000
advi_samples = guide.sample_posterior(random.PRNGKey(seed), params, (n_samples,))
# Plot posteriors for the parameters
fig, ax = plt.subplots(ncols=3, figsize=(12, 3))
sns.histplot(ax=ax[0], x=advi_samples['length_scale'], kde=True, bins=50, stat='density')
sns.histplot(ax=ax[1], x=advi_samples['variance'], kde=True, bins=50, stat='density')
sns.histplot(ax=ax[2], x=advi_samples['noise'], kde=True, bins=50, stat='density')
ax[0].set(title='Kernel Length Scale')
ax[1].set(title='Kernel Variance', ylabel="")
ax[2].set(title='Likelihood Noise', ylabel="")
plt.show()
```
#### Predictions
```
predictions, _ = jax.vmap(predict, in_axes=(None, None, None, 0, 0, 0), out_axes=(0, 0))(
X, Y.squeeze(), Xtest,
advi_samples['variance'],
advi_samples['length_scale'],
advi_samples['noise']
)
plt.plot(predictions.T, color='gray', alpha=0.01);
y_pred, y_lb, y_ub = summarize_posterior(predictions)
fig, ax = plt.subplots(ncols=1, figsize=(6, 4))
ax.scatter(X, Y, label='Training Data', color='red')
ax.plot(Xtest, y_pred, label='Predictive Mean', color='black', linewidth=3)
ax.fill_between(
Xtest.squeeze(),
y_lb,
y_ub,
label='Confidence Interval',
alpha=0.3,
color='darkorange'
)
ax.legend()
plt.show()
```
# Averaging Example
Example system of
$$
\begin{gather*}
\ddot{x} + \epsilon \left( x^2 + \dot{x}^2 - 4 \right) \dot{x} + x = 0.
\end{gather*}
$$
For this problem, $h(x,\dot{x}) = x^2 + \dot{x}^2 - 4$ where $\epsilon \ll 1$. If we assume the solution for $x$ to be
$$
\begin{gather*}
x(t) = a\cos(t + \phi) = a \cos\theta
\end{gather*}
$$
we have
$$
\begin{align*}
h(x,\dot{x})\,\dot{x} &= \left(a^2\cos^2\theta + a^2\sin^2\theta - 4\right)\left(-a\sin\theta\right)\\
&= -a^3\cos^2\theta\sin\theta - a^3\sin^3\theta + 4a\sin\theta.
\end{align*}
$$
From the averaging equations we know that
\begin{align*}
\dot{a} &= \dfrac{\epsilon}{2\pi}\int_0^{2\pi}{\left( -a^3\cos^2\theta\sin\theta - a^3\sin^3\theta + 4a\sin\theta \right)\sin\theta}{d\theta}\\
&= \dfrac{\epsilon}{2\pi}\int_{0}^{2\pi}{\left( -a^3\cos^2\theta\sin^2\theta - a^3\sin^4\theta + 4a\sin^2\theta \right)}{d\theta}
\end{align*}
since
\begin{gather*}
\int_{0}^{2\pi}{\cos^2\theta\sin^2\theta}{d\theta} = \dfrac{\pi}{4}\\
\int_{0}^{2\pi}{\sin^2\theta}{d\theta} = \pi\\
\int_{0}^{2\pi}{\sin^4\theta}{d\theta} = \dfrac{3\pi}{4}
\end{gather*}
we have
\begin{align*}
\dot{a} = 2\epsilon a - \dfrac{\epsilon}{2}a^3 + O(\epsilon^2).
\end{align*}
To solve this analytically, let $b = a^{-2}$, then
\begin{gather*}
\dot{b} = -2a^{-3}\dot{a} \phantom{-} \longrightarrow \phantom{-} \dot{a} = -\dfrac{1}{2}a^3\dot{b}.
\end{gather*}
If we substitute this into the averaged amplitude equation we get
\begin{gather*}
-\dfrac{1}{2}a^3\dot{b} - 2\epsilon a = -\dfrac{\epsilon}{2}a^3\\
\therefore \dot{b} + 4\epsilon b = \epsilon.
\end{gather*}
This nonhomogeneous linear ODE can be solved by
$$
\begin{align*}
b(t) &= e^{\int{-4\epsilon}\,{dt}}\left[ \int{\epsilon e^{\int{4\epsilon}\,{dt}}}\,{dt} + C \right]\\
&= \dfrac{1}{4} + Ce^{-4\epsilon t}.
\end{align*}
$$
If we apply the initial condition of $a(0) = a_0$ we get
\begin{gather*}
b(t) = \dfrac{1}{4} + \left( \dfrac{1}{a_0^2} - \dfrac{1}{4} \right)e^{-4\epsilon t}.
\end{gather*}
And therefore,
\begin{gather*}
a(t) = \sqrt{\dfrac{1}{\dfrac{1}{4} + \left( \dfrac{1}{a_0^2} - \dfrac{1}{4} \right)e^{-4\epsilon t}}} + O(\epsilon^2).
\end{gather*}
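A quick evaluation of this closed-form amplitude (a sketch with illustrative values of $a_0$ and $\epsilon$) confirms that it recovers $a(0)=a_0$ and approaches the limit-cycle amplitude $a=2$ as $t \to \infty$:

```python
import numpy as np

# Closed-form amplitude a(t) from the averaged equations; a0 and eps below
# are illustrative choices, not values fixed by the derivation.
def amplitude(t, a0, eps):
    return np.sqrt(1.0 / (0.25 + (1.0 / a0**2 - 0.25) * np.exp(-4.0 * eps * t)))

print(amplitude(0.0, 0.5, 0.001))  # 0.5 -> recovers a(0) = a0
print(amplitude(1e7, 0.5, 0.001))  # 2.0 -> approaches the limit-cycle radius
```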
Additionally,
\begin{align*}
\dot{\phi} &= \dfrac{\epsilon}{2\pi}\int_{0}^{2\pi}{\left(-a^3\cos^2\theta\sin\theta - a^3\sin^3\theta + 4a\sin\theta \right)\cos\theta}{d\theta}\\
&= \dfrac{\epsilon}{2\pi}\int_{0}^{2\pi}{\left(-a^3\cos^3\theta\sin\theta - a^3\cos\theta\sin^3\theta + 4a\cos\theta\sin\theta \right)}{d\theta}\\
&= 0
\end{align*}
and thus,
\begin{gather*}
\phi(t) = \phi_0 + O(\epsilon^2).
\end{gather*}
Finally, we have the following approximated expression
\begin{gather*}
x(t) = \sqrt{\dfrac{1}{\dfrac{1}{4} + \left( \dfrac{1}{a_0^2} - \dfrac{1}{4} \right)e^{-4\epsilon t}}}\cos\left( t + \phi_0 \right) + O(\epsilon^2).
\end{gather*}
If we assume the initial condition $\dot{x}(0) = 0$, then
\begin{gather*}
0 = -a_0\sin\phi_0\\
\therefore \phi_0 = 0.
\end{gather*}
Hence,
\begin{gather*}
{x(t) = \sqrt{\dfrac{1}{\dfrac{1}{4} + \left( \dfrac{1}{a_0^2} - \dfrac{1}{4} \right)e^{-4\epsilon t}}}\cos\left( t \right) + O(\epsilon^2).}
\end{gather*}
Since $\omega = 1$ for this approximation, the period $T$ of the limit cycle is
\begin{gather*}
{T = \dfrac{2\pi}{\omega} = 2\pi.}
\end{gather*}
```
# We plot the phase plane of this system to check the limit cycle
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
from typing import List
# System
def nlsys(t, x, epsilon):
return [x[1], -epsilon*(x[0]**2 + x[1]**2 - 4)*x[1] - x[0]]
def solve_diffeq(func, t, tspan, ic, parameters={}, algorithm='DOP853', stepsize=np.inf):
return solve_ivp(fun=func, t_span=tspan, t_eval=t, y0=ic, method=algorithm,
args=tuple(parameters.values()), atol=1e-8, rtol=1e-5, max_step=stepsize)
def phasePlane(x1, x2, func, params):
X1, X2 = np.meshgrid(x1, x2) # create grid
u, v = np.zeros(X1.shape), np.zeros(X2.shape)
NI, NJ = X1.shape
for i in range(NI):
for j in range(NJ):
x = X1[i, j]
y = X2[i, j]
dx = func(0, (x, y), *params.values()) # compute values on grid
u[i, j] = dx[0]
v[i, j] = dx[1]
M = np.hypot(u, v)
u /= M
v /= M
return X1, X2, u, v, M
def DEplot(sys: object, tspan: tuple, x0: List[List[float]],
x: np.ndarray, y: np.ndarray, params: dict):
if len(tspan) != 3:
raise Exception('tspan should be tuple of size 3: (min, max, number of points).')
# Set up the figure the way we want it to look
plt.figure(figsize=(12, 9))
X1, X2, dx1, dx2, M = phasePlane(
x, y, sys, params
)
# Quiver plot
plt.quiver(X1, X2, dx1, dx2, M, scale=None, pivot='mid')
plt.grid()
if tspan[0] < 0:
t1 = np.linspace(0, tspan[0], tspan[2])
t2 = np.linspace(0, tspan[1], tspan[2])
if min(tspan) < 0:
t_span1 = (np.max(t1), np.min(t1))
else:
t_span1 = (np.min(t1), np.max(t1))
t_span2 = (np.min(t2), np.max(t2))
for x0i in x0:
sol1 = solve_diffeq(sys, t1, t_span1, x0i, params)
plt.plot(sol1.y[0, :], sol1.y[1, :], '-r')
sol2 = solve_diffeq(sys, t2, t_span2, x0i, params)
plt.plot(sol2.y[0, :], sol2.y[1, :], '-r')
else:
t = np.linspace(tspan[0], tspan[1], tspan[2])
t_span = (np.min(t), np.max(t))
for x0i in x0:
sol = solve_diffeq(sys, t, t_span, x0i, params)
plt.plot(sol.y[0, :], sol.y[1, :], '-r')
plt.xlim([np.min(x), np.max(x)])
plt.ylim([np.min(y), np.max(y)])
plt.show()
x10 = np.arange(0, 10, 1)
x20 = np.arange(0, 10, 1)
x0 = np.stack((x10, x20), axis=-1)
p = {'epsilon': 0.001}
x1 = np.linspace(-5, 5, 20)
x2 = np.linspace(-5, 5, 20)
DEplot(nlsys, (-8, 8, 1000), x0, x1, x2, p)
# Compare the approximation to the actual solution
# let a0 = 2, with x(0) = 2 and x'(0) = 0 (starting on the limit cycle)
tmin = 0
tmax = 30
tspan = np.linspace(tmin, tmax, 1000)
# ODE solver solution
sol = solve_diffeq(nlsys, tspan, (tmin, tmax), [2, 0], p)
# Approximation
def nlsys_averaging(t, a, e):
return np.sqrt(1 / (0.25 + (1/a**2 - 0.25) * np.exp(-4*e*t))) * np.cos(t)
approx = nlsys_averaging(tspan, 2, 0.001)
plt.figure(figsize=(12, 9))
plt.plot(tspan, sol.y[0, :])
plt.plot(tspan, approx)
plt.grid(True)
plt.xlabel('$t$')
plt.ylabel('$x$')
plt.show()
```
| github_jupyter |
```
# Standard imports
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
# Needed for advanced plotting
import matplotlib.patches as patches
# For plotting inline
%matplotlib inline
plt.ion()
# Import suftware (use development copy)
import sys
sys.path.append('../../suftware')
import suftware as sw
# Set default plotting parameters
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.serif'] = ['Times New Roman']
mpl.rc('text', usetex=True)
fontsize=9
mpl.rcParams['font.size'] = fontsize
mpl.rcParams['hatch.linewidth'] = 1
mpl.rcParams['hatch.color'] = 'black'
# Load raw information extracted from figure
raw_df = pd.read_excel('data/CMS2012.xlsx').fillna(0)
# Clean up dataframe
df = raw_df[['bins','data']].copy()
df['bg'] = raw_df['green']+raw_df['blue-green']
df['bg+H'] = df['bg'] + raw_df['red-blue']
df.head()
# How many of each type of event (bg or H) are expected?
expected_N_tot = df['bg+H'].sum()
expected_N_bg = df['bg'].sum()
expected_N_H = expected_N_tot-expected_N_bg
print('Expected N_tot: %.1f +/- %.1f'%(expected_N_tot,np.sqrt(expected_N_tot)))
print('Expected N_bg: %.1f +/- %.1f'%(expected_N_bg,np.sqrt(expected_N_bg)))
print('Expected N_H: %.1f +/- %.1f'%(expected_N_H,np.sqrt(expected_N_H)))
print('N_H / N_tot: %.1f%%'%(100*expected_N_H/expected_N_tot))
# Generate data so SUFTware can histogram it again. Shouldn't have to do this.
data = []
for n, row in df.iterrows():
data.extend([row['bins']]*int(row['data']))
grid = df['bins'].values
print('Dataset: %s'%repr(data))
# Do density estimation
num_samples = 1000
density = sw.DensityEstimator(data=data,grid=grid,num_posterior_samples=num_samples, seed=0)
# Extract information to plot
bins = df['bins'].values
hist = df['data'].values
bg = df['bg'].values
bgH = df['bg+H'].values
N = df['data'].sum()
h = bins[1]-bins[0]
# Create xs to interpolate density on
xmin, xmax = density.bounding_box
xs = np.linspace(xmin,xmax,1000)
# Compute estimated density and fluctuations
count_density = N*h*density.evaluate(xs)
count_fluctuations = N*h*density.evaluate_samples(xs)
xlim = density.bounding_box
ylim = [0,12]
def plot_higgs(ax, zorder=0):
"""
Function to plot Higgs data similar to in CMS Collaboration, 2012, Fig. 3
"""
# Plot Higgs prediction
ax.step(bins,bgH,
            label=r'Higgs ($m_\mathrm{H}$=125 GeV)',
where='mid',
color='red',
linewidth=1,
zorder=zorder)
# Plot background
fill_blue=[.60,.81,1.00]
ax.bar(bins,bg,
label='background',
width=h,
color='lightskyblue',
zorder=zorder)
ax.step(bins,bg,color='k',linewidth=1,where='mid',zorder=zorder)
# Plot data
ax.plot(bins,hist,
markersize=3,
marker='o',
color='k',
linewidth=0,
label='data ($N$=%d)'%(N),
zorder=zorder+1)
# Style plot
ax.set_xlim(xlim)
ax.set_ylabel('events / 3 GeV')
ax.set_ylim(ylim)
yticks = np.arange(ylim[1]+1)
yticklabels = [('%d'%y if y%2==0 else '') for y in yticks]
ax.set_yticks(yticks)
ax.set_yticklabels(yticklabels)
ax.legend()
#ax.minorticks_on()
# Label x axis
    ax.set_xlabel(r'$m_{4 \ell}$ (GeV)')
# Make Fig. 3 for manuscript
fig, axs = plt.subplots(2,1,figsize=[3.4,4.5])
### Panel (A)
ax = axs[0]
plot_higgs(ax, zorder=-10)
# Prepare legend
handles,labels = ax.get_legend_handles_labels()
indices=[1,2,0]
labels = [labels[i] for i in indices]
handles = [handles[i] for i in indices]
ax.legend(handles,labels, fontsize=9)
### Panel (B)
ax = axs[1]
plot_higgs(ax, zorder=-10)
# Create a Rectangle patch
rect = patches.Rectangle((xlim[0],ylim[0]),
xlim[1]-xlim[0],
ylim[1]-ylim[0],
facecolor='white',
edgecolor='white',
alpha=.7,
zorder=-1)
# Add the patch to the Axes
ax.add_patch(rect)
# Set number of posterior samples to show
num_samples_to_show=100
# Plot DEFT fluctuations
ax.plot(xs,100+count_fluctuations[:,0],
linewidth=.5,
alpha=1,
color='olivedrab',
        label=r'DEFT: $Q \sim p(Q|\mathrm{data})$',
zorder=-1) # Dummy plot just for legend entry
ax.plot(xs,count_fluctuations[:,:num_samples_to_show],
linewidth=.5,
alpha=.2,
color='olivedrab',
zorder=-1)
# Plot DEFT best fit
ax.plot(xs,count_density,
label='DEFT: $Q^*$',
linewidth=2,
color='black',
alpha=1,
zorder=0)
# Prepare legend
handles,labels = ax.get_legend_handles_labels()
indices=[3,2]
labels = [labels[i] for i in indices]
handles = [handles[i] for i in indices]
ax.legend(handles,labels, fontsize=9)
# Label panels
fig.text(x=.01, y=.98, s='(a)', fontsize=12, horizontalalignment='left', verticalalignment='top')
fig.text(x=.01, y=.50, s='(b)', fontsize=12, horizontalalignment='left', verticalalignment='top')
# Do tight layout
plt.tight_layout()
fig.savefig('figures/fig_3.pdf')
# Where is the maximum?
indices = (xs > 110) & (xs < 140)
tmp_xs = xs[indices]
tmp_density = density.evaluate(tmp_xs)
i = tmp_density.argmax()
print('Best estimate peaks at %.1f GeV'%tmp_xs[i])
tmp_samples = density.evaluate_samples(tmp_xs)
maxima = []
for j in range(num_samples):
i = tmp_samples[:,j].argmax()
maxima.append(tmp_xs[i])
maxima = np.array(maxima)
print('Samples peak at %.1f GeV +/- %0.1f GeV'%(maxima.mean(),maxima.std()))
# What fraction of posterior samples have exactly one interior maximum within a given interval?
def get_interior_maxima(ys, xs, xmin, xmax):
"""Counts number of interior maxima in a specified interval"""
indices = (xs >= xmin) & (xs <= xmax)
y = ys[indices]
x = xs[indices]
K = len(y)
optima_locations = []
for i in range(1,K-1):
if y[i]>y[i-1] and y[i]>y[i+1]:
optima_locations.append(x[i])
return optima_locations
# Get number of posterior samples
num_samples = count_fluctuations.shape[1]
print('Num posterior samples: %d'%num_samples)
# Count interior maxima within a specified interval
xmin = 110
xmax = 140
print('In interval: [%.1f GeV, %.1f GeV]'%(xmin,xmax))
maxima_list = []
for n in range(num_samples):
ys = count_fluctuations[:,n]
interior_maxima = get_interior_maxima(ys,xs=xs,xmin=xmin,xmax=xmax)
maxima_list.append(interior_maxima)
# Describe the number of interior maxima
num_no_maxima = sum([len(v)==0 for v in maxima_list])
print('Pct samples with no interior maxima: %.1f%%'%(100*num_no_maxima/num_samples))
num_single_maxima = sum([len(v)==1 for v in maxima_list])
print('Pct samples with one interior maximum: %.1f%%'%(100*num_single_maxima/num_samples))
num_multiple_maxima = sum([len(v)>=2 for v in maxima_list])
print('Pct samples with multiple interior maxima: %.1f%%'%(100*num_multiple_maxima/num_samples))
# Compute the mean and std energy of lone maxima
lone_maxima = np.array([v for v in maxima_list if len(v)==1]).ravel()
print('Lone maxima: %.1f GeV +/- %0.1f GeV'%(lone_maxima.mean(), lone_maxima.std()))
```
| github_jupyter |
```
%pylab inline
import sys
sys.path.append("/Users/hantke/flash_mnt/home/tekeberg/Source/pah/")
import numpy
import matplotlib
import matplotlib.pyplot
from camp.pah.beamtimedaqaccess import BeamtimeDaqAccess
# MOUNT YOUR DATA
# ssh -f mhantke@bastion -L 2222:max-cfel002:22 -N
# sshfs -p 2222 mhantke@localhost:/ /Users/hantke/flash_mnt/
#root_directory_of_h5_files = "/Users/hantke/flash_mnt/data/beamline/current/raw/hdf/block-02"
root_directory_of_h5_files = "/Users/hantke/flash_mnt/asap3/flash/gpfs/bl1/2017/data/11001733/raw/hdf/block-02"
daq= BeamtimeDaqAccess.create(root_directory_of_h5_files)
# Define DAQ channel names
#tunnelEnergyChannelName= "/Photon Diagnostic/GMD/Average energy/energy tunnel (raw)"
bda_energy_channel_name = "/Photon Diagnostic/GMD/Pulse resolved energy/energy BDA"
# All TOF values of a run
tofChannelName= "/Experiment/BL1/ADQ412 GHz ADC/CH00/TD"
runNumber = [16115, 16116, 16117, 16118, 16120, 16121, 16122, 16123, 16124, 16125,
16126, 16127, 16128, 16129, 16130, 16131, 16132, 16133, 16134, 16135,
16136, 16137, 16138, 16139, 16140, 16142, 16143, 16144, 16145]
scan_distance = -500. + 100.*numpy.arange(len(runNumber))
#gmd_gate = [(80, 85)] * len(runNumber)
gmd_gate = [(87.5, 92.5)] * len(runNumber)
#position_gate = [(2.26, 2.31), (2.32, 2.39), (2.73, 2.84)]
position_gate = [(2.04, 2.06), (2.08, 2.10), (2.12, 2.14),
(2.16, 2.19), (2.199, 2.25), (2.26, 2.31), (2.32, 2.379), (2.393, 2.455),
(2.482, 2.545), (2.597, 2.667), (2.73, 2.84)]
all_tof = []
all_idInterval = []
for rn in runNumber:
print("read run {0}".format(rn))
tofSpectra0, idInterval0 = daq.allValuesOfRun(tofChannelName, rn)
all_tof.append(tofSpectra0 * 0.8 / 2048.) # convert to V
all_idInterval.append(idInterval0)
all_gmd = []
for id_interval in all_idInterval:
try:
bdaEnergy= daq.valuesOfInterval(bda_energy_channel_name, id_interval)
gmd_values = bdaEnergy[:, 0]
all_gmd.append(gmd_values)
    except Exception:
all_tof = all_tof[:len(all_gmd)]
scan_distance = scan_distance[:len(all_gmd)]
        print("Stopping after {0} gmd reads at run {1}".format(len(all_gmd), runNumber[len(all_gmd)]-1))
break
average_tof = []
tof_x = numpy.arange(20000) * 10. / 20000.
integral_plots = [[] for _ in range(len(position_gate))]
for index in range(len(all_tof)):
average_tof.append(all_tof[index][(all_gmd[index] > gmd_gate[index][0]) * (all_gmd[index] < gmd_gate[index][1]), :].mean(axis=0))
for window_index, g in enumerate(position_gate[:len(all_tof)]):
integral_plots[window_index].append(average_tof[-1][(tof_x > g[0]) * (tof_x < g[1])].sum())
fig = matplotlib.pyplot.figure(1)
fig.clear()
ax = fig.add_subplot(111)
for i, p in enumerate(integral_plots):
ax.plot(scan_distance[:-1], p[:-1], label="{0} - {1}".format(position_gate[i][0], position_gate[i][1]))
for i, p in enumerate(integral_plots):
ax.plot([scan_distance[19]], [p[-1]], "o", color=ax.lines[i].get_color())
#ax.legend()
#fig.canvas.draw()
fig2 = matplotlib.pyplot.figure(2)
fig2.clear()
ax2 = fig2.add_subplot(111)
for i, this_average_tof in enumerate(average_tof):
ax2.plot(tof_x, this_average_tof + 0.01*i, color="black")
ax2.set_xlim((2., 6.))
ylim = ax2.get_ylim()
for i, g in enumerate(position_gate):
ax2.add_patch(matplotlib.patches.Rectangle((g[0], ylim[0]), g[1]-g[0], ylim[1] - ylim[0], color="lightgray"))
ax2.set_ylim(ylim)
#fig2.canvas.draw()
```
| github_jupyter |
# Build a Custom Training Container and Debug Training Jobs with Amazon SageMaker Debugger
Amazon SageMaker Debugger enables you to debug your model through its built-in rules and tools (`smdebug` hook and core features) to store and retrieve output tensors in Amazon Simple Storage Service (S3).
To run your customized machine learning/deep learning (ML/DL) models, use Amazon Elastic Container Registry (ECR) to build and push your customized training container.
You can use SageMaker Debugger with training jobs running on Amazon EC2 instances and take advantage of its built-in functionality.
You can bring your own model customized with state-of-the-art ML/DL frameworks, such as TensorFlow, PyTorch, MXNet, and XGBoost.
You can also use your Docker base image or AWS Deep Learning Container base images to build a custom training container.
To run and debug your training script using SageMaker Debugger, you need to register the Debugger hook to the script.
Using the `smdebug` trial feature, you can retrieve the output tensors and visualize them for analysis.
By monitoring the output tensors, the Debugger rules detect training issues and invoke an `IssuesFound` rule job status.
The rule job status also returns the step or epoch at which the training job started having the issues.
You can send this invoked status to Amazon CloudWatch and AWS Lambda to stop the training job when a Debugger rule triggers the `IssuesFound` status.
The workflow is as follows:
- [Step 1: Prepare prerequisites](#step1)
- [Step 2: Prepare a Dockerfile and register the Debugger hook to your training script](#step2)
- [Step 3: Create a Docker image, build the Docker training container, and push to Amazon ECR](#step3)
- [Step 4: Use Amazon SageMaker to set the Debugger hook and rule configuration](#step4)
- [Step 5: Define a SageMaker Estimator object with Debugger and initiate a training job](#step5)
- [Step 6: Retrieve output tensors using the smdebug trials class](#step6)
- [Step 7: Analyze the training job using the smdebug trial methods and rule job status](#step7)
**Important:** You can run this notebook only on SageMaker Notebook instances. You cannot run this in SageMaker Studio. Studio does not support Docker container build.
## Step 1: Prepare prerequisites<a class="anchor" id="step1"></a>
### Install the SageMaker Python SDK and the smdebug library
This notebook pins the SageMaker Python SDK to v1.72.0 and installs the `smdebug` client library. If you want to use a different version, change the version number in the installation command. For example, `pip install sagemaker==x.xx.0`.
```
import sys
!{sys.executable} -m pip install "sagemaker==1.72.0" smdebug
```
### [Optional Step] Restart the kernel to apply the update
**Note:** If you are using **Jupyter Notebook**, the previous cell automatically installs and updates the libraries. If you are using **JupyterLab**, you have to manually choose the "Restart Kernel" under the **Kernel** tab in the top menu bar.
Check the SageMaker Python SDK version by running the following cell.
```
import sagemaker
sagemaker.__version__
```
## Step 2: Prepare a Dockerfile and register the Debugger hook to your training script<a class="anchor" id="step2"></a>
You need to put your **Dockerfile** and training script (**tf_keras_resnet_byoc.py** in this case) in the **docker** folder. Specify the location of the training script in the **Dockerfile** script in the line for `COPY` and `ENV`.
### Prepare a Dockerfile
The following cell prints the **Dockerfile** in the **docker** folder. You must install `sagemaker-training` and `smdebug` libraries to fully access the SageMaker Debugger features.
```
! pygmentize docker/Dockerfile
```
### Prepare a training script
The following cell prints an example training script **tf_keras_resnet_byoc.py** in the **docker** folder. To register the Debugger hook, you need to use the Debugger client library `smdebug`.
In the `main` function, a Keras hook is registered after the line where the `model` object is defined and before the line where the `model.compile()` function is called.
In the `train` function, you pass the Keras hook and set it as a Keras callback for the `model.fit()` function. The `hook.save_scalar()` method is used to save scalar parameters for mini batch settings, such as epoch, batch size, and the number of steps per epoch in training and validation modes.
```
! pygmentize docker/tf_keras_resnet_byoc.py
```
## Step 3: Create a Docker image, build the Docker training container, and push to Amazon ECR<a class="anchor" id="step3"></a>
### Create a Docker image
AWS Boto3 Python SDK provides tools to automatically locate your region and account information to create a Docker image uri.
```
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-debugger-mnist-byoc-tf2'
tag = ':latest'
region = boto3.session.Session().region_name
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
byoc_image_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
```
Print the image URI address.
```
byoc_image_uri
```
### [Optional Step] Login to access the Deep Learning Containers image repository
If you use one of the AWS Deep Learning Container base images, uncomment the following cell and execute to login to the image repository.
```
# ! aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
```
### Build the Docker container and push it to Amazon ECR
The following code cell builds a Docker container based on the Dockerfile, creates an Amazon ECR repository, and pushes the container to the ECR repository.
```
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $byoc_image_uri
!docker push $byoc_image_uri
```
**Note:** If this returns a permission error, see the [Get Started with Custom Training Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/build-container-to-train-script-get-started.html#byoc-training-step5) in the Amazon SageMaker developer guide. Follow the note in Step 5 to register the **AmazonEC2ContainerRegistryFullAccess** policy to your IAM role.
## Step 4: Use Amazon SageMaker to set the Debugger hook and rule configuration<a class="anchor" id="step4"></a>
### Define Debugger hook configuration
Now you have the custom training container with the Debugger hooks registered to your training script. In this section, you import the SageMaker Debugger API operations, `DebuggerHookConfig` and `CollectionConfig`, to define the hook configuration. You can choose Debugger pre-configured tensor collections, adjust `save_interval` parameters, or configure custom collections.
In the following notebook cell, the `hook_config` object is configured with the pre-configured tensor collections, `losses`. This will save the tensor outputs to the default S3 bucket. At the end of this notebook, we will retrieve the `loss` values to plot the overfitting problem that the example training job will be experiencing.
```
import sagemaker
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig
sagemaker_session = sagemaker.Session()
train_save_interval=100
eval_save_interval=10
hook_config = DebuggerHookConfig(
collection_configs=[
CollectionConfig(name="losses",
parameters={
"train.save_interval": str(train_save_interval),
"eval.save_interval": str(eval_save_interval)}
)
]
)
```
### Select Debugger built-in rules
The following cell shows how to directly use the Debugger built-in rules. The maximum number of rules you can run in parallel is 20.
```
from sagemaker.debugger import Rule, rule_configs
rules = [
Rule.sagemaker(rule_configs.vanishing_gradient()),
Rule.sagemaker(rule_configs.overfit()),
Rule.sagemaker(rule_configs.overtraining()),
Rule.sagemaker(rule_configs.saturated_activation()),
Rule.sagemaker(rule_configs.weight_update_ratio())
]
```
## Step 5. Define a SageMaker Estimator object with Debugger and initiate a training job<a class="anchor" id="step5"></a>
Construct a SageMaker Estimator using the image URI of the custom training container you created in **Step 3**.
**Note:** This example uses the SageMaker Python SDK v1. If you want to use the SageMaker Python SDK v2, you need to change the parameter names. You can find the SageMaker Estimator parameters at [Get Started with Custom Training Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/build-container-to-train-script-get-started.html#byoc-training-step5) in the AWS SageMaker Developer Guide or at [the SageMaker Estimator API](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) in one of the older versions of the SageMaker Python SDK documentation.
```
from sagemaker.estimator import Estimator
from sagemaker import get_execution_role
role = get_execution_role()
estimator = Estimator(
image_name=byoc_image_uri,
role=role,
train_instance_count=1,
train_instance_type="ml.p3.16xlarge",
# Debugger-specific parameters
rules = rules,
debugger_hook_config=hook_config
)
```
### Initiate the training job in the background
With the `wait=False` option, the `estimator.fit()` function will run the training job in the background. You can proceed to the next cells. If you want to see logs in real time, go to the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/home), choose **Log Groups** in the left navigation pane, and choose **/aws/sagemaker/TrainingJobs** for training job logs and **/aws/sagemaker/ProcessingJobs** for Debugger rule job logs.
```
estimator.fit(wait=False)
```
### Print the training job name
The following cell outputs the training job running in the background.
```
job_name = estimator.latest_training_job.name
print('Training job name: {}'.format(job_name))
client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
```
### Output the current job status
The following cell tracks the status of the training job until the `SecondaryStatus` changes to `Training`. While training, Debugger collects output tensors from the training job and monitors the training job with the rules.
```
import time
if description['TrainingJobStatus'] != 'Completed':
while description['SecondaryStatus'] not in {'Training', 'Completed'}:
description = client.describe_training_job(TrainingJobName=job_name)
primary_status = description['TrainingJobStatus']
secondary_status = description['SecondaryStatus']
print('Current job status: [PrimaryStatus: {}, SecondaryStatus: {}]'.format(primary_status, secondary_status))
time.sleep(15)
```
## Step 6: Retrieve output tensors using the smdebug trials class<a class="anchor" id="step6"></a>
### Call the latest Debugger artifact and create a smdebug trial
The following smdebug `trial` object calls the output tensors once they become available in the default S3 bucket. You can use the `estimator.latest_job_debugger_artifacts_path()` method to automatically detect the default S3 bucket that is currently being used while the training job is running.
Once the tensors are available in the default S3 bucket, you can plot the loss curve in the next sections.
```
from smdebug.trials import create_trial
trial = create_trial(estimator.latest_job_debugger_artifacts_path())
```
**Note:** If you want to revisit tensor data from a previous training job that has already finished, you can retrieve them by specifying the exact S3 bucket location. The S3 bucket path is configured in a similar way to the following sample: `trial="s3://sagemaker-us-east-1-111122223333/sagemaker-debugger-mnist-byoc-tf2-2020-08-27-05-49-34-037/debug-output"`.
### Print the hyperparameter configuration saved as scalar values
```
trial.tensor_names(regex="scalar")
```
### Print the size of the `steps` list to check the training progress
```
from smdebug.core.modes import ModeKeys
len(trial.tensor('loss').steps(mode=ModeKeys.TRAIN))
len(trial.tensor('loss').steps(mode=ModeKeys.EVAL))
```
## Step 7: Analyze the training job using the smdebug `trial` methods and the Debugger rule job status<a class="anchor" id="step7"></a>
### Plot training and validation loss curves in real time
The following cell retrieves the `loss` tensor from training and evaluation mode and plots the loss curves.
In this notebook example, the dataset was `cifar10`, which is divided into 50,000 32x32 color training images and 10,000 test images, labeled over 10 categories. (See the [TensorFlow Keras Datasets cifar10 load data documentation](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/cifar10/load_data) for more details.) In the Debugger configuration step (Step 4), the save interval was set to 100 for training mode and 10 for evaluation mode. Since the batch size is set to 100, there are 500 training steps and 100 validation steps in each epoch.
The following cell includes scripts to call those mini batch parameters saved by `smdebug`, computes the average loss in each epoch, and renders the loss curve in a single plot.
As the training job proceeds, you will be able to observe that the validation loss curve starts deviating from the training loss curve, which is a clear indication of overfitting problem.
```
import matplotlib.pyplot as plt
import numpy as np
# Retrieve the loss tensors collected in training mode
y = []
for step in trial.tensor('loss').steps(mode=ModeKeys.TRAIN):
y.append(trial.tensor('loss').value(step,mode=ModeKeys.TRAIN)[0])
y=np.asarray(y)
# Retrieve the loss tensors collected in evaluation mode
y_val=[]
for step in trial.tensor('loss').steps(mode=ModeKeys.EVAL):
y_val.append(trial.tensor('loss').value(step,mode=ModeKeys.EVAL)[0])
y_val=np.asarray(y_val)
train_save_points=int(trial.tensor('scalar/train_steps_per_epoch').value(0)[0]/train_save_interval)
val_save_points=int(trial.tensor('scalar/valid_steps_per_epoch').value(0)[0]/eval_save_interval)
y_mean=[]
x_epoch=[]
for e in range(int(trial.tensor('scalar/epoch').value(0)[0])):
ei=e*train_save_points
ef=(e+1)*train_save_points-1
y_mean.append(np.mean(y[ei:ef]))
x_epoch.append(e)
y_val_mean=[]
for e in range(int(trial.tensor('scalar/epoch').value(0)[0])):
ei=e*val_save_points
ef=(e+1)*val_save_points-1
y_val_mean.append(np.mean(y_val[ei:ef]))
plt.plot(x_epoch, y_mean, label='Training Loss')
plt.plot(x_epoch, y_val_mean, label='Validation Loss')
plt.legend(bbox_to_anchor=(1.04,1), loc='upper left')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
```
### Check the rule job summary
The following cell returns the Debugger rule job summary. In this example notebook, we used the five built-in rules: `VanishingGradient`, `Overfit`, `Overtraining`, `SaturationActivation`, and `WeightUpdateRatio`. For more information about what each rule evaluates during an ongoing training job, see the [List of Debugger built-in rules](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-built-in-rules.html) documentation in the Amazon SageMaker developer guide. Define the following `rule_status` object to retrieve Debugger rule job summaries.
```
rule_status=estimator.latest_training_job.rule_job_summary()
```
In the following cells, you can print the Debugger rule job summaries and the latest logs. The outputs are in the following format:
```
{'RuleConfigurationName': 'Overfit',
'RuleEvaluationJobArn': 'arn:aws:sagemaker:us-east-1:111122223333:processing-job/sagemaker-debugger-mnist-b-overfit-e841d0bf',
'RuleEvaluationStatus': 'IssuesFound',
'StatusDetails': 'RuleEvaluationConditionMet: Evaluation of the rule Overfit at step 7200 resulted in the condition being met\n',
'LastModifiedTime': datetime.datetime(2020, 8, 27, 18, 17, 4, 789000, tzinfo=tzlocal())}
```
The `Overfit` rule job summary above is an actual output example of the training job in this notebook. It changes `RuleEvaluationStatus` to the `IssuesFound` status when the job reaches global step 7200 (in the 6th epoch). The `Overfit` rule algorithm determines whether the training job has an overfitting issue based on its criteria. By default, the rule flags overfitting when the validation loss deviates from the training loss by at least 10 percent.
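The criterion described above can be illustrated with a toy check; this is a sketch of the idea only, not the rule's actual implementation (the function name and default threshold are assumptions for illustration):

```python
# Toy illustration of an overfit check: flag when validation loss deviates
# from training loss by more than a relative threshold (10% as described above).
def is_overfitting(train_loss, val_loss, ratio_threshold=0.1):
    return (val_loss - train_loss) / train_loss > ratio_threshold

print(is_overfitting(0.50, 0.52))  # False: only 4% deviation
print(is_overfitting(0.50, 0.60))  # True: 20% deviation
```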
Another issue that the training job has is the `WeightUpdateRatio` issue at the global step 500 in the first epoch, as shown in the following log.
```
{'RuleConfigurationName': 'WeightUpdateRatio',
'RuleEvaluationJobArn': 'arn:aws:sagemaker:us-east-1:111122223333:processing-job/sagemaker-debugger-mnist-b-weightupdateratio-e9c353fe',
'RuleEvaluationStatus': 'IssuesFound',
'StatusDetails': 'RuleEvaluationConditionMet: Evaluation of the rule WeightUpdateRatio at step 500 resulted in the condition being met\n',
'LastModifiedTime': datetime.datetime(2020, 8, 27, 18, 17, 4, 789000, tzinfo=tzlocal())}
```
This rule monitors the weight update ratio between two consecutive global steps and determines whether it is too small (less than 0.00000001) or too large (greater than 10). In other words, this rule can identify whether the weight parameters are updated abnormally during the forward and backward passes in each step, preventing the model from starting to converge and improve.
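The idea behind this rule can be sketched as follows; this toy example is an assumption-laden illustration (the function and arrays are hypothetical), not the rule's actual code:

```python
import numpy as np

# Compare the size of a weight update between two consecutive steps against
# the thresholds quoted above (1e-8 for "too small", 10 for "too large").
def update_ratio(w_prev, w_curr):
    return np.linalg.norm(w_curr - w_prev) / np.linalg.norm(w_prev)

w0 = np.ones(10)
small = update_ratio(w0, w0 + 1e-10)   # essentially no update
large = update_ratio(w0, 100.0 * w0)   # weights blown up by 100x
print(small < 1e-8, large > 10.0)      # True True
```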
Taken together, the two issues make it clear that the model is not well set up to improve from the early stages of training.
Run the following cells to track the rule job summaries.
**`VanishingGradient` rule job summary**
```
rule_status[0]
```
**`Overfit` rule job summary**
```
rule_status[1]
```
**`Overtraining` rule job summary**
```
rule_status[2]
```
**`SaturationActivation` rule job summary**
```
rule_status[3]
```
**`WeightUpdateRatio` rule job summary**
```
rule_status[4]
```
## Notebook Summary and Other Applications
This notebook presented how you can gain insight into training jobs by using SageMaker Debugger with any of your models running in a customized training container. The AWS cloud infrastructure, the SageMaker ecosystem, and the SageMaker Debugger tools make the debugging process more convenient and transparent. The Debugger rule's `RuleEvaluationStatus` invocation system can be further extended to Amazon CloudWatch Events and an AWS Lambda function to take automatic actions, such as stopping training jobs once issues are detected. A sample notebook to set up the combination of Debugger, CloudWatch, and Lambda is provided at [Amazon SageMaker Debugger - Reacting to CloudWatch Events from Rules](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-debugger/tensorflow_action_on_rule/tf-mnist-stop-training-job.ipynb).
## 2.3 Combination Matrix over undirected network
### 2.3.1 Weight
We now associate each edge with a positive weight. This weight is used to scale information flowing over the associated edge.
For a given topology, we define $w_{ij}$, the weight to scale information flowing from agent $j$ to agent $i$, as follows:
\begin{align}\label{wij}
w_{ij}
\begin{cases}
> 0 & \mbox{if $(j,i) \in \mathcal{E}$, or $i=j$;} \\
= 0 & \mbox{otherwise.}
\end{cases}
\end{align}
### 2.3.2 Combination matrix and a fundamental assumption
We further define the combination matrix $W = [w_{ij}]_{i,j=1}^{n} \in \mathbb{R}^{n\times n}$ to stack all weights into a matrix. Such matrix $W$ will characterize the sparsity and connectivity of the underlying network topology. Throughout this section, we assume the combination matrix $W$ satisfies the following important assumption.
> **Assumption 1 (Doubly stochastic)** We assume $W$ is a doubly stochastic matrix, i.e.,
$W \mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T W = \mathbf{1}^T$.
The above assumption essentially states that both the row sums and the column sums of matrix $W$ equal $1$, i.e., $\sum_{j=1}^n w_{ij} = 1$ and $\sum_{i=1}^n w_{ij} = 1$. It implies that each agent takes a weighted local average over its neighborhood, and it is fundamental to guaranteeing that average consensus converges to the global average $\bar{x}$ asymptotically.
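As a quick numerical sanity check (plain NumPy, not BlueFog code), the sketch below builds a doubly stochastic $W$ for a 5-agent ring and shows that repeated local averaging $x_{k+1} = W x_k$ drives every agent's value to the global average $\bar{x}$; the uniform $1/3$ ring weights used here are just one valid choice.

```python
import numpy as np

# Doubly stochastic combination matrix for a 5-agent ring:
# each agent averages itself and its two ring neighbors with weight 1/3.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

# Row sums and column sums are all 1, so Assumption 1 holds.
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)

x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])  # local values held by the agents
x_bar = x.mean()                          # global average = 4.0
for _ in range(200):                      # repeated local averaging
    x = W @ x                             # each agent only hears its neighbors
print(x)                                  # every entry approaches x_bar = 4.0
```

Because $W \mathbf{1} = \mathbf{1}$ preserves the average at every iteration, the iterates can only shrink toward $\bar{x}$, never drift away from it.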
### 2.3.3 Combination matrix over the undirected network
In an undirected network, the edges $(i,j)$ and $(j,i)$ appear in a pairwise manner. Undirected networks are very common in nature. In a [random geometric graph](https://networkx.org/documentation/stable/auto_examples/drawing/plot_random_geometric_graph.html), two agents $i$ and $j$ within a certain distance of each other are regarded as neighbors; clearly, both edges $(i,j)$ and $(j,i)$ exist in this scenario. Given an undirected network topology, there are many rules that can generate a combination matrix satisfying Assumption 1. The best-known is the Metropolis-Hastings rule \[Refs\]:
> **Metropolis-Hastings rule.** Given an undirected and connected topology $\mathcal{G}$, we select $w_{ij}$ as
>
>\begin{align}
\hspace{-3mm} w_{ij}=
\begin{cases}
\begin{array}{ll}\displaystyle
\hspace{-2mm}\frac{1}{1 + \max\{d_i, d_j \}},& \mbox{if $j \in \mathcal{N}(i)$}, \\
\hspace{-2mm}\displaystyle 1 - \sum_{j\in \mathcal{N}(i)}w_{ij}, & \mbox{if $i = j$},\\
\hspace{-2mm}0,& \mbox{if $j \notin \mathcal{N}(i)$ and $j\neq i$}.
\end{array}
\end{cases}
\end{align}
>
> where $d_i = |\mathcal{N}(i)|$ (the number of neighbors of agent $i$). It is easy to verify that such a $W$ is always doubly stochastic.
Other popular approaches can be found in Table 14.1 of reference \[Refs\].
#### 2.3.3.1 Example I: Commonly-used topology and associated combination matrix
In BlueFog, we support various commonly used undirected topologies such as the ring, star, 2D mesh, fully connected graph, and hierarchical graph. One can easily organize a cluster into any of these topologies with the functions in ```bluefog.common.topology_util```; see the details in the [user manual](https://bluefog-lib.github.io/bluefog/topo_api.html?highlight=topology#module-bluefog.common.topology_util). In addition, BlueFog also provides the associated combination matrix for these topologies. These matrices are guaranteed to be symmetric and doubly stochastic.
**Note:** Readers not familiar with running BlueFog in an IPython notebook environment are encouraged to read Sec. \[HelloWorld section\] first.
- **A: Test whether BlueFog works normally (the same steps as illustrated in Sec. 2.1)**
In the following code, you should be able to see the ids of your engines. We use 8 CPUs to conduct the following experiment.
```
import numpy as np
import bluefog.torch as bf
import torch
import networkx as nx  # nx will be used for creating and plotting network topologies
from bluefog.common import topology_util
import matplotlib.pyplot as plt
%matplotlib inline
import ipyparallel as ipp

np.set_printoptions(precision=3, suppress=True, linewidth=200)
rc = ipp.Client(profile="bluefog")
rc.ids
```
```
%%px
import numpy as np
import bluefog.torch as bf
import torch
from bluefog.common import topology_util
import networkx as nx
bf.init()
```
- **B: Undirected Ring topology and associated combination matrix**
We now construct a ring topology. Note that to plot the topology, we have to pull the topology information from an agent to the Jupyter engine; that is why we use ```dview.pull```. In the following code, ```bf.size()``` returns the size of the network.
```
# Generate topology.
# Plot figure
%px G = topology_util.RingGraph(bf.size())
dview = rc[:]
G_0 = dview.pull("G", block=True, targets=0)
nx.draw_circular(G_0)
```
When the topology is generated through BlueFog utilities (such as ```RingGraph()```, ```ExponentialTwoGraph()```, ```MeshGrid2DGraph()``` and others in the [user manual](https://bluefog-lib.github.io/bluefog/topo_api.html?highlight=topology#module-bluefog.common.topology_util)), the associated combination matrix is provided automatically.
Now we examine the self weight and neighbor weights of each agent in the ring topology. To this end, we can use ```GetRecvWeights()``` to get this information. In the following code, ```bf.rank()``` returns the label of the agent. Note that all agents run the following code in parallel (as indicated by the magic command ```%%px```).
```
%%px
self_weight, neighbor_weights = topology_util.GetRecvWeights(G, bf.rank())
```
Now we examine the self weight and neighbor weights of agent $0$.
```
%%px
if bf.rank() == 0:
    print("self weights: {}\n".format(self_weight))
    print("neighbor weights:")
    for k, v in neighbor_weights.items():
        print("neighbor id:{}, weight:{}".format(k, v))
```
We can even construct the combination matrix $W$ and examine its properties. To this end, we pull the weights of each agent into the Jupyter engine and then assemble the combination matrix. The method to pull information from an agent is
```dview.pull(information_to_pull, targets=agent_idx)```
It should be noted that ```agent_idx``` is not the rank of each agent; it is essentially the order in which the engine collects information from the agents. We need to establish a mapping between rank and agent_idx.
```
network_size = dview.pull("bf.size()", block=True, targets=0)
agentID_to_rank = {}
for idx in range(network_size):
    agentID_to_rank[idx] = dview.pull("bf.rank()", block=True, targets=idx)
for k, v in agentID_to_rank.items():
    print("id:{}, rank:{}".format(k, v))
```
Now we construct the combination matrix $W$.
```
W = np.zeros((network_size, network_size))
for idx in range(network_size):
    self_weight = dview.pull("self_weight", block=True, targets=idx)
    neighbor_weights = dview.pull("neighbor_weights", block=True, targets=idx)
    W[agentID_to_rank[idx], agentID_to_rank[idx]] = self_weight
    for k, v in neighbor_weights.items():
        W[agentID_to_rank[idx], k] = v
print("The matrix W is:")
print(W)
# check the row sums and column sums
print("\nRow sum of W is:", np.sum(W, axis=1))
print("Col sum of W is:", np.sum(W, axis=0))
if np.allclose(np.sum(W, axis=1), 1) and np.allclose(np.sum(W, axis=0), 1):
    print("The above W is doubly stochastic.")
```
- **C: Undirected Star topology and associated combination matrix**
We can follow the code above to draw the star topology and construct its associated combination matrix.
```
# Generate topology.
# Plot figure
%px G = topology_util.StarGraph(bf.size())
G_0 = dview.pull("G", block=True, targets=0)
nx.draw_circular(G_0)
```
```
%%px
self_weight, neighbor_weights = topology_util.GetRecvWeights(G, bf.rank())
```
```
network_size = dview.pull("bf.size()", block=True, targets=0)
W = np.zeros((network_size, network_size))
for idx in range(network_size):
    self_weight = dview.pull("self_weight", block=True, targets=idx)
    neighbor_weights = dview.pull("neighbor_weights", block=True, targets=idx)
    W[agentID_to_rank[idx], agentID_to_rank[idx]] = self_weight
    for k, v in neighbor_weights.items():
        W[agentID_to_rank[idx], k] = v
print("The matrix W is:")
print(W)
# check the row sums and column sums
print("\nRow sum of W is:", np.sum(W, axis=1))
print("Col sum of W is:", np.sum(W, axis=0))
if np.allclose(np.sum(W, axis=1), 1) and np.allclose(np.sum(W, axis=0), 1):
    print("The above W is doubly stochastic.")
```
- **D: Undirected 2D-Mesh topology and associated combination matrix**
```
# Generate topology.
# Plot figure
%px G = topology_util.MeshGrid2DGraph(bf.size())
G_0 = dview.pull("G", block=True, targets=0)
nx.draw_spring(G_0)
```
```
%%px
self_weight, neighbor_weights = topology_util.GetRecvWeights(G, bf.rank())
```
```
network_size = dview.pull("bf.size()", block=True, targets=0)
W = np.zeros((network_size, network_size))
for idx in range(network_size):
    self_weight = dview.pull("self_weight", block=True, targets=idx)
    neighbor_weights = dview.pull("neighbor_weights", block=True, targets=idx)
    W[agentID_to_rank[idx], agentID_to_rank[idx]] = self_weight
    for k, v in neighbor_weights.items():
        W[agentID_to_rank[idx], k] = v
print("The matrix W is:")
print(W)
# check the row sums and column sums
print("\nRow sum of W is:", np.sum(W, axis=1))
print("Col sum of W is:", np.sum(W, axis=0))
if np.allclose(np.sum(W, axis=1), 1) and np.allclose(np.sum(W, axis=0), 1):
    print("The above W is doubly stochastic.")
```
- **E: Undirected fully-connected topology and associated combination matrix**
```
# Generate topology.
# Plot figure
%px G = topology_util.FullyConnectedGraph(bf.size())
G_0 = dview.pull("G", block=True, targets=0)
nx.draw_circular(G_0)
```
```
%%px
self_weight, neighbor_weights = topology_util.GetRecvWeights(G, bf.rank())
```
```
network_size = dview.pull("bf.size()", block=True, targets=0)
W = np.zeros((network_size, network_size))
for idx in range(network_size):
    self_weight = dview.pull("self_weight", block=True, targets=idx)
    neighbor_weights = dview.pull("neighbor_weights", block=True, targets=idx)
    W[agentID_to_rank[idx], agentID_to_rank[idx]] = self_weight
    for k, v in neighbor_weights.items():
        W[agentID_to_rank[idx], k] = v
print("The matrix W is:")
print(W)
# check the row sums and column sums
print("\nRow sum of W is:", np.sum(W, axis=1))
print("Col sum of W is:", np.sum(W, axis=0))
if np.allclose(np.sum(W, axis=1), 1) and np.allclose(np.sum(W, axis=0), 1):
    print("The above W is doubly stochastic.")
```
Readers are encouraged to test other topologies and examine their associated combination matrices. Check the topology-related utility functions in the [user manual](https://bluefog-lib.github.io/bluefog/topo_api.html?highlight=topology#module-bluefog.common.topology_util).
#### 2.3.3.2 Example II: Set up your own topology
There also exist scenarios in which you want to organize the agents into a custom topology of your own. In this example, we show how to produce the combination matrix via the Metropolis-Hastings rule and generate a network topology that can be imported into the BlueFog utilities.
Before we generate the combination matrix, you have to prepare an [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) of your topology. Since the topology is undirected, the adjacency matrix should be symmetric.
```
def gen_comb_matrix_via_MH(A):
    """Generate a combination matrix via the Metropolis-Hastings rule.
    Args:
        A: numpy 2D array with dims (n,n) representing the adjacency matrix.
    Returns:
        A combination matrix W: numpy 2D array with dims (n,n).
    """
    # the adjacency matrix must be symmetric
    assert np.linalg.norm(A - A.T) < 1e-6
    # make sure the diagonal elements of A are 0
    n, _ = A.shape
    for i in range(n):
        A[i, i] = 0
    # compute the degree of each agent
    d = np.sum(A, axis=1)
    # identify the neighbors of each agent
    neighbors = {}
    for i in range(n):
        neighbors[i] = set()
        for j in range(n):
            if A[i, j] == 1:
                neighbors[i].add(j)
    # generate W via the M-H rule
    W = np.zeros((n, n))
    for i in range(n):
        for j in neighbors[i]:
            W[i, j] = 1 / (1 + np.maximum(d[i], d[j]))
    W_row_sum = np.sum(W, axis=1)
    for i in range(n):
        W[i, i] = 1 - W_row_sum[i]
    return W
```
The random geometric graph is one of the undirected graphs that naturally appear in many applications. It is constructed by randomly placing $n$ nodes in some metric space (according to a specified probability distribution) and connecting two nodes by a link if and only if their distance is within a given range $r$.
The random geometric graph is not provided in BlueFog. In the following code, we show how to generate the combination matrix of a random geometric graph via the Metropolis-Hastings rule, and how to import the topology and the combination matrix into BlueFog to facilitate the downstream average-consensus and decentralized optimization algorithms.
To test the correctness of the above function, we define a new network topology: we randomly generate the 2D coordinates of $n$ agents within a $1 \times 1$ square, and two nodes within distance $r$ of each other are regarded as neighbors.
The following function returns the adjacency matrix of the random geometric graph.
```
def gen_random_geometric_topology(num_agents, r):
    """Generate a random geometric topology.
    Args:
        num_agents: the number of agents in the network.
        r: two agents within distance 'r' are regarded as neighbors.
    """
    # Generate n random 2D coordinates within a 1*1 square
    agents = {}
    for i in range(num_agents):
        agents[i] = np.random.rand(2, 1)
    A = np.zeros((num_agents, num_agents))
    for i in range(num_agents):
        for j in range(i + 1, num_agents):
            dist = np.linalg.norm(agents[i] - agents[j])
            if dist < r:
                A[i, j] = 1
                A[j, i] = 1
    return A
```
Now we use the above utility function to generate a random geometric network. One can adjust the parameter ```r``` to control the density of the network topology.
```
np.random.seed(seed=2021)
num_nodes = len(rc.ids)
A = gen_random_geometric_topology(num_agents=num_nodes, r=0.5)
print("The adjacency matrix of the generated network is:")
print(A)
print("\n")
print("The associated combination matrix is:")
W = gen_comb_matrix_via_MH(A)
print(W)
# test whether it is symmetric and doubly stochastic
print("\n")
if np.linalg.norm(W - W.T) < 1e-12:
    print("W is symmetric.")
if np.allclose(np.sum(W, axis=0), 1) and np.allclose(np.sum(W, axis=1), 1):
    print("W is doubly stochastic.")
# generate the topology from W
G = nx.from_numpy_array(W, create_using=nx.DiGraph)
# draw the topology
nx.draw_spring(G)
```
Given the $W$ generated from the M-H rule, we next organize the agents into the above topology with ```bf.set_topology(G)```. We then examine whether each agent's associated combination matrix is consistent with the generated combination matrix $W$.
```
dview.push({"W": W}, block=True)
```
```
%%px
G = nx.from_numpy_array(W, create_using=nx.DiGraph)
bf.set_topology(G)
topology = bf.load_topology()
self_weight, neighbor_weights = topology_util.GetRecvWeights(topology, bf.rank())
```
```
network_size = dview.pull("bf.size()", block=True, targets=0)
W = np.zeros((network_size, network_size))
for idx in range(network_size):
    self_weight = dview.pull("self_weight", block=True, targets=idx)
    neighbor_weights = dview.pull("neighbor_weights", block=True, targets=idx)
    W[agentID_to_rank[idx], agentID_to_rank[idx]] = self_weight
    for k, v in neighbor_weights.items():
        W[agentID_to_rank[idx], k] = v
print("The matrix W is:")
print(W)
# check the row sums and column sums
print("\nRow sum of W is:", np.sum(W, axis=1))
print("Col sum of W is:", np.sum(W, axis=0))
if np.allclose(np.sum(W, axis=1), 1) and np.allclose(np.sum(W, axis=0), 1):
    print("The above W is doubly stochastic.")
```
It can be observed that each agent's associated combination matrix is consistent with the generated combination matrix.
# Lecture 11 - Gaussian Process Regression
## Objectives
+ to do regression using a GP
+ to find the hyperparameters of the GP by maximizing the (marginal) likelihood
+ to use GP regression for uncertainty propagation
## Readings
+ Please read [this](http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/pdfs/pdf2903.pdf) OR watch [this video lecture](http://videolectures.net/mlss03_rasmussen_gp/?q=MLSS).
+ [Section 5.4 in GP for ML textbook](http://www.gaussianprocess.org/gpml/chapters/RW5.pdf).
+ See slides for theory.
## Example
The purpose of this example is to demonstrate Gaussian process regression. To motivate the need let us introduce a toy uncertainty quantification example:
> We have developed an "amazing code" that models an extremely important physical phenomenon. The code works with a single input parameter $x$ and responds with a single value $y=f(x)$. A physicist, who is an expert in the field, tells us that $x$ must be somewhere between 0 and 1. Therefore, we treat it as uncertain and we assign to it a uniform probability density:
$$
p(x) = \mathcal{U}(x|0,1).
$$
Our engineers tell us that it is vitally important to learn about the average behavior of $y$. Furthermore, they believe that a value of $y$ greater than $1.2$ signifies a catastrophic failure. Therefore, we wish to compute:
1. the variance of $y$:
$$
v_y = \mathbb{V}[f(x)] = \int\left(f(x) - \mathbb{E}[f(x)]\right)^2p(x)dx,
$$
2. and the probability of failure:
$$
p_{\mbox{fail}} = P[y > 1.2] = \int\mathcal{X}_{[1.2,+\infty)}(f(x))p(x)dx,
$$
where $\mathcal{X}_A$ is the characteristic function of the set A, i.e., $\mathcal{X}_A(x) = 1$ if $x\in A$ and $\mathcal{X}_A(x) = 0$ otherwise.
Unfortunately, our boss is not very happy with our performance. He is going to shut down the project unless we have an answer in ten days. However, a single simulation takes a day... We can only do 10 simulations! What do we do?
Here is the "amazing code"...
```
import numpy as np
# Here is an amazing code:
solver = lambda x: -np.cos(np.pi * x) + np.sin(4. * np.pi * x)
# It accepts just one input parameter that varies between 0 and 1.
```
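Before building the surrogate, and only because this toy `solver` is cheap to evaluate (the real simulation it stands in for takes a day per run), a brute-force Monte Carlo baseline for $v_y$ and $p_{\mbox{fail}}$ can be sketched as follows; the sample size is an arbitrary choice for this illustration.

```python
import numpy as np

# The same toy code as above.
solver = lambda x: -np.cos(np.pi * x) + np.sin(4. * np.pi * x)

rng = np.random.default_rng(0)
x = rng.random(1_000_000)        # draws from p(x) = U(x|0, 1)
y = solver(x)

v_y = y.var()                    # Monte Carlo estimate of V[f(x)]
p_fail = (y > 1.2).mean()        # Monte Carlo estimate of P[y > 1.2]
print(v_y, p_fail)
```

These brute-force values serve as a ground-truth reference when judging the surrogate-based estimates built from only 10 simulations.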
### Part 1 - Learning About GP Regression
This demonstrates how to do Gaussian process regression.
```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn
import pickle
import GPy
# Ensure reproducibility
np.random.seed(1345678)
# Select the number of simulations you want to perform:
num_sim = 10
# Generate the input data (needs to be column matrix)
X = np.random.rand(num_sim, 1)
# Evaluate our amazing code at these points:
Y = solver(X)
# Pick a covariance function
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
# Construct the GP regression model
m = GPy.models.GPRegression(X, Y, k)
# That's it. Print some details about the model:
print(m)
# Now we would like to make some predictions
# Namely, we wish to predict at this dense set of points:
X_p = np.linspace(0, 1., 100)[:, None]
# We can make predictions as follows
Y_p, V_p = m.predict(X_p) # Y_p = mean prediction, V_p = predictive variance
# Here is the standard deviation:
S_p = np.sqrt(V_p)
# Lower predictive bound
Y_l = Y_p - 2. * S_p
# Upper predictive bound
Y_u = Y_p + 2. * S_p
# Plot the results
fig, ax = plt.subplots()
ax.plot(X_p, Y_p, label='Predictive mean')
ax.fill_between(X_p.flatten(), Y_l.flatten(), Y_u.flatten(), alpha=0.25, label='Predictive error bars')
ax.plot(X, Y, 'kx', markeredgewidth=2, label='Observed data')
# Write the model to a file
print('> writing model to file: surrogate.pcl')
#with open('surrogate.pcl', 'wb') as fd:
# pickle.dump(m, fd)
```
#### Questions
1. The fit looks pretty bad. Why do you think that is? Are our prior assumptions about the parameters of the GP compatible with reality?
2. Ok. We know that our code is deterministic but the GP thinks that there is noise there. Let’s fix this. Go to line 40 and type:
```
m.likelihood.variance = 0
```
This tells the GP that the observations have no noise. Rerun the code. Is the fit better?
3. The previous question was not supposed to work. Why do you think it failed? It can be fixed by making the variance something small, e.g., 1e-6 instead of exactly zero. Rerun the code. Is the fit any better now?
4. We are not quite there. The length scale we are using is 1. Perhaps our function is not that smooth. Try to pick a more reasonable value for the length scale and rerun the code. What do you think is a good value?
5. Repeat 3 for the variance parameter of the SE covariance function.
6. That’s too painful and not very scientific. The proper way to find the parameters is to maximize the likelihood. Undo the modifications you made so far and type ```m.optimize()``` after the model definition.
This maximizes the marginal likelihood of your model using the BFGS algorithm, honoring any constraints. Rerun the example. What parameters does the algorithm find? Do they make sense? How do the results look?
7. Based on the results you obtained in 5, we decide to ask our boss for one more day. We believe that doing one more simulation will greatly reduce the error in our predictions. At which input point do you think we should run this simulation? You can augment the input data by typing:
```
X = np.vstack([X, [[0.7]]])
```
where, of course, you should replace “0.7” with the point you think is best. This just appends a new input point to the existing X. Rerun the example. What fit do you get now?
8. If you got here quickly, try repeating 5-6 with a less smooth covariance function, e.g., the Matern32. What do you observe? Is the prediction uncertainty larger or smaller?
# Traditional Databases
Previous article: Databases ~ Introduction <https://www.cnblogs.com/dotnetcrazy/p/9690466.html>
I originally planned to jump straight into NoSQL (the opening article was about NoSQL), but considering that some readers may never have touched the `MySQL` family, the 2019 database series will start from `MySQL` (with `MSSQL` interspersed). This article serves as a trial run; if it is well received I will keep going. (It leans toward theory and operations.)
## 1.1. MariaDB and MySQL
Official documentation: `https://mariadb.com/kb/zh-cn/mariadb`
Current mainstream versions: `MySQL 5.7.x` or **`MariaDB 5.5.60`** (recommended)
One more remark: after `MySQL` was acquired by `Oracle`, the father of MySQL felt that relying on `Oracle` to maintain `MySQL` was unreliable, so he left and created `MariaDB` (backed by many of `Oracle`'s competitors). `MariaDB` is currently the fastest-growing MySQL fork (PS: `MySQL` is now dual-licensed, and most companies use versions `<=5.7`).
On migration: moving from `MySQL 5.x` to `MariaDB 5.x` is basically seamless; the **latest stable** MariaDB release line discussed here is `MariaDB 5.5`.
> PS: **MariaDB has two branches, and the 10.x branch is not compatible with MySQL.**
The compatibility between `MariaDB` and `MySQL` can be checked at:
> <https://mariadb.com/kb/zh-cn/mariadb-vs-mysql-compatibility/>
PS: Alibaba's `MySQL fork` is also quite popular in China: `https://github.com/alibaba/AliSQL`
Leaving everything else aside, just looking at how actively each is developed explains why `MariaDB` is mainstream.

### Usage summary (recommendation)
If you are considering `MariaDB 10.x`, consider `MySQL 8.x` instead (more complete community support).
If you are considering `MySQL 5.x`, consider `MariaDB 5.5.x` instead (high performance and compatible).
## 1.2. Deploying MariaDB
If you are not sure how to configure the network, see my earlier article: <https://www.cnblogs.com/dunitian/p/6658578.html>
### 1. Environment setup and initialization
Installation is simple; take `CentOS` as an example:

```shell
systemctl start mariadb.service # start MariaDB
systemctl enable mariadb.service # start MariaDB on boot
systemctl stop mariadb.service # stop MariaDB
systemctl restart mariadb.service # restart MariaDB
```

PS: On Windows, pay attention to this step during installation:

A brief note on the executables: sometimes when we run `ps aux | grep mysql`, the running process is not the `mysqld` under `/usr/bin/` but `mysqld_safe`. What is `mysqld_safe`? ==> **a thread-safe instance**
The programs that make up `MariaDB`: `ls /usr/bin | grep mysql`
1. Client:
    - **`mysql`** command-line client
    - **`mysqldump`** database backup tool
    - **`mysqladmin`** remote administration tool
    - **`mysqlbinlog`** binary-log management tool
    - ...
2. Server:
    - **`mysqld_safe`** thread-safe instance
    - `mysqld_multi` multiple instances
    - `mysqld`
    - **`mysql_secure_installation`** secure-initialization tool (remember to start the database first)
    - ...
**A `mysql` account consists of two parts: `username`@`host`. MySQL client connection options:**
- `-uUSERNAME`: `--user`, defaults to `root`
- `-hHOST`: `--host`, defaults to `localhost`
    - `host` restricts which hosts the user may connect from
    - Wildcards are supported:
        - `%` matches any number of arbitrary characters: 172.16.0.0/16 ==> 172.16.%.%
        - `_` matches any single character
- `-pPASSWORD`: `--password`, defaults to `empty`
    - After installation, run `mysql_secure_installation` to set a password and initialize
- Others:
    - `-P`: `--port`, specify the port
    - `-D`: `--database`, specify the database
    - `-C`: `--compress`, compress data in transit when connecting
    - `-S`: `--socket`, specify the socket file
    - MySQL only: -e "SQL statement", execute a SQL statement directly
        - mysql -e "show databases" (run directly from a script)
Many people set the password like this after installation: (**not recommended**)

**The proper way: `mysql_secure_installation`**

To allow root to log in remotely: `Disallow root login remotely? [Y/n] n`

Logging in after secure initialization:


### 2. Configuration files
Take `MariaDB 5.5.60` as an example:
1. Linux: configuration-file lookup order (continue down the list if not found)
    - `/etc/my.cnf` --> **`/etc/mysql/conf.d/*.cnf`** --> `~/.my.cnf`
2. Windows: `MariaDB install dir/data/my.ini`
PS: most configurations set these three options:
```shell
[mysqld]
# Per-table tablespaces: each table gets a .frm table-description file plus an .ibd file
innodb_file_per_table=on
# Skip DNS resolution for connections (saves time)
skip_name_resolve=on
# Configure sql_mode
sql_mode='strict_trans_tables'
# Specify the database file path
# datadir=/mysql/data
# socket=/mysql/data/mysql.sock # must match datadir
```
`MariaDB` ships sample files for other configurations:
```shell
[dnt@localhost ~] ls /usr/share/mysql/ | grep .cnf
my-huge.cnf # reference for huge-memory machines
my-innodb-heavy-4G.cnf # reference for 4G-memory machines
my-large.cnf # large-memory configuration
my-medium.cnf # medium-memory configuration
my-small.cnf # small-memory configuration
```
PS: `thread_concurrency`=`number of CPUs * 2` works best. **Remember to restart the database after changing the configuration.**
### 3. Remote access
1. We disabled remote login for `root` during secure initialization, so let's create another user

2. Grant the user privileges

3. Open the specified port in the firewall

4. Test from a remote client

The code:
```shell
# log in as root
mysql -uroot -p
# add a user
insert into mysql.user(user,host,password) values("username","%",password("password"));
# reload privileges
flush privileges;
# grant privileges
grant all privileges on db_name.* to username@"%" identified by "password";
# reload privileges
flush privileges;
# show the firewall status
systemctl status firewalld
# add a port; --permanent makes it persist (without it, lost on reboot)
firewall-cmd --zone=public --add-port=3306/tcp --permanent
# reload the firewall
firewall-cmd --reload
# query
firewall-cmd --zone=public --query-port=3306/tcp
# remove
firewall-cmd --zone=public --remove-port=3306/tcp --permanent
```
**Remote connections for SQLServer**: <https://www.cnblogs.com/dunitian/p/5474501.html>
## The 58.com MySQL Rules
To close this section, here is a set of `MySQL` rules from `58.com`: (**applicable to typical internet workloads with high concurrency and large data volumes**)
### 1. Basic rules
1. Tables must use the `InnoDB` storage engine
2. Tables default to the `utf8` character set, and use `utf8mb4` when necessary
    - `utf8` is universal with no mojibake risk; CJK characters take 3 bytes, English 1 byte
    - `utf8mb4` is a `superset of utf8`; use it when 4-byte characters must be stored (e.g. emoji)
3. **Stored procedures, views, triggers, and Events are forbidden**
    - They are hard to debug, troubleshoot, and migrate, and scale poorly
    - They hurt database performance; for internet workloads, anything the web tier or service tier can do should not be pushed into the database tier
4. Storing large files (e.g. photos) in the database is forbidden
    - Store large files in an object storage system and keep only the path in the database
5. Load testing against the production database is forbidden
    - Test, development, and production database environments must be isolated
### 2. Naming rules
1. **Database, table, and column names must be lowercase, with words separated by underscores**
    - abc, Abc, and ABC side by side are a trap you dig for yourself
2. Database, table, and column names must be self-explanatory and no longer than 32 characters
    - Who on earth knows what databases named tmp or wushan are for?
3. Database backups must use the bak prefix and a date suffix
    - Replica databases must use the -s suffix
    - Standby databases must use the -ss suffix
### 3. Table design rules
1. The number of tables per instance must be kept under `2000`
2. The number of partitions of a partitioned table must be kept under `1024`
3. **Every table must have a primary key; an `unsigned` integer primary key is recommended**
    - Hidden trap: deleting from a table without a primary key under row-based replication can hang the replica
4. Foreign keys are forbidden; if integrity must be guaranteed, implement it in the application
    - Foreign keys couple tables together and hurt the performance of `update/delete` and similar SQL
    - They can cause deadlocks and easily become a database bottleneck under high concurrency
5. Split large and rarely accessed columns into separate tables to separate hot and cold data
    - The basis for vertical partitioning: keep short, frequently accessed columns in the main table
    - With heavy traffic and large data volumes, data access should go through a `service` layer, and the `service` layer should not fetch main-table and extension-table columns via `join`
    - See the article by Shen Jian: <a href="https://mp.weixin.qq.com/s/ezD0CWHAr0RteC9yrwqyZA" target="_blank">"How to Implement Vertical Database Partitioning"</a>
### 4. Column design rules
1. Choose `tinyint`/`int`/`bigint` per the business; they occupy `1`/`4`/`8` bytes respectively
2. Choose `char`/`varchar` per the business (PS: there is no `nvarchar` as in MSSQL)
    - For fixed-length or near-fixed-length fields, `char` is appropriate: **it reduces fragmentation and queries are faster**
    - For fields with widely varying lengths, or rarely updated ones, `varchar` is appropriate: it **saves space**
3. Choose `datetime`/`timestamp` per the business
    - `datetime` occupies 5 bytes; `timestamp` occupies 4 bytes
    - Use `year` for years, `date` for dates, and `datetime` for date-times
4. **Columns must be declared `NOT NULL` with a default value**
    - NULL needs extra storage space
    - Indexes, index statistics, and value comparisons on NULL columns are all more complex and harder for MySQL to optimize
    - NULL only works with IS NULL or IS NOT NULL; with =/!=/in/not in there are big traps
5. **Store `IPv4` addresses as `int unsigned`**, not `char(15)`
6. **Store phone numbers as `varchar(20)`**, not as integers
    - Phone numbers are never used in arithmetic
    - `varchar` supports fuzzy matching (e.g. like '138%')
    - Country codes may introduce characters such as `+`, `-`, `(`, `)`, e.g. `+86`
7. Use `tinyint` instead of `enum`
    - Adding a new value to an `enum` requires a `DDL` operation
### 5. Index rules (common)
1. Name unique indexes `uniq_column` (or `uq_table_column`)
2. Name non-unique indexes `idx_column` (or `ix_table_column`)
3. **Keep the number of indexes on a table at 5 or fewer**
    - For high-concurrency internet workloads, too many indexes hurt write performance
    - For exceptionally complex query needs, choose a better-suited store such as `ES`
    - `When generating the execution plan, too many indexes slow planning down and may keep MySQL from picking the optimal index`
4. **Composite indexes should not exceed 5 columns**
    - If 5 columns cannot drastically narrow the row range, the design is probably wrong
5. **Avoid indexing frequently updated columns**
6. **Avoid `join` queries where possible; if you must `join`, the joined columns must have the same type and be indexed**
    - Mismatched `join` column types easily lead to full table scans
7. Understand the leftmost-prefix principle of composite indexes to avoid redundant indexes
    - Creating `(a,b,c)` is equivalent to also creating `(a)`, `(a,b)`, and `(a,b,c)`
### 6. SQL rules (common)
1. **`select *` is forbidden; fetch only the columns you need**
    - Naming columns lets covering indexes work
    - `select *` increases `cpu/io/memory/bandwidth` consumption
    - `Naming columns ensures the application is unaffected when the table structure changes`
2. **`insert` must name its columns; `insert into T values()` is forbidden**
    - Naming columns on insert ensures the application is unaffected when the table structure changes
3. **Implicit type conversion defeats indexes and causes full table scans** (very important)
4. Functions or expressions on `where` condition columns are forbidden
    - They prevent index hits and cause full table scans
5. Negated conditions and fuzzy queries with a leading `%` are forbidden
    - They prevent index hits and cause full table scans
6. `join`s on large tables and `subqueries` are forbidden
7. **`or` on the same column must be rewritten as `in`, and `in` must contain fewer than 50 values**
8. The application must catch SQL exceptions (to help locate production issues)
Follow-up question: why does `select uid from user where phone=13811223344` fail to hit the phone index?
Further reading:
```
Differences between MyISAM and InnoDB, and how to choose
https://www.cnblogs.com/y-rong/p/5309392.html
https://www.cnblogs.com/y-rong/p/8110596.html
MySQL gap locks and why they occur
https://www.cnblogs.com/wt645631686/p/8324671.html
grant (authorize) and revoke (withdraw privileges)
https://www.cnblogs.com/kevingrace/p/5719536.html
Restarting and changing the password of the bundled MariaDB on CentOS 7
https://blog.csdn.net/shachao888/article/details/50341857
Adding users, deleting users, and granting privileges in MySQL
https://www.cnblogs.com/wanghetao/p/3806888.html
A deep look at Sharding-JDBC: the lightest-weight database middleware
https://my.oschina.net/editorial-story/blog/888650
```
Previous article: <a href="https://www.cnblogs.com/dotnetcrazy/p/9887708.html" target="_blank">Databases ~ SQL Environment Setup</a>
### Extension: granting a user privileges on a new database
PS: first create the database as root, then grant privileges with `grant all privileges on db_name.* to username@"%" identified by "password";` and reload with `flush privileges;`

View privileges: `show grants for dnt;`

Result:

## 1.3. Deploying MySQL
A reader asked why I don't also cover deployment on `Ubuntu Server`. Well... most company servers run `CentOS`, while `Ubuntu Server` is more common for personal cloud servers (**recommended for early-stage startups**). The two systems pursue different goals: one prioritizes stability (deployment is more work), the other prioritizes reasonably fresh software that is still stable (it updates quickly).
Long story short, let's get to it:
### 1. Ubuntu's most common package problem
The first thing to mention about Ubuntu is **handling a broken `apt`** (switching mirrors is out of scope: `/etc/apt/sources.list`):
```shell
# Usually deleting these lock files and then reconfiguring fixes it
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
# One-liner (mind the spaces, or you end up with rm -rf / and a career change)
# sudo rm /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock /var/lib/dpkg/lock-frontend
# reconfigure
sudo dpkg --configure -a
```
### 2. Installation notes (Ubuntu's hallmark is ease of use)
On `Ubuntu` I recommend plain `MySQL` (the `5.x` lines are nearly identical in use; the installation mirrors the `MariaDB on CentOS` steps above, with **`sudo`** prepended to every command).
1. Installation is simple: `sudo apt install mysql-server -y`

2. Allow remote connections: `comment out bind-address=127.0.0.1` (`/etc/mysql/mysql.conf.d/mysqld.cnf`)

PS: common settings (`/etc/mysql/mysql.conf.d/mysqld.cnf`)

3. Why this path is used: `sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf`

4. Every configuration change requires a restart: `sudo systemctl restart mysql`

5. First-time initialization differs slightly from MariaDB: `sudo mysql_secure_installation` (answer `y` to everything else)
You need to choose the complexity of the root password (1 is usually fine; however complex the password, it is moot once the system itself is compromised):

PS: further reading: <a href="https://www.cnblogs.com/super-zhangkun/p/9435974.html" target="_blank">MySQL 5.7 on Ubuntu 16 does not prompt for a password during install; changing the default password afterwards</a> and <a href="https://blog.csdn.net/hello_world_qwp/article/details/79551789" target="_blank">[Not recommended] Changing the MySQL password policy</a>
6. Now you can log in with your password: `sudo mysql -uroot -p` (PS: plain `sudo mysql` also logs you in directly)
I will not repeat every step from the previous section; grant and create in one go: `grant all privileges on db_name.* to "username"@"%" identified by "strong_password";`

7. Remember to refresh the grant tables with `flush privileges;`

PS: data files usually live in `/var/lib/mysql`
#### Further reading:
Notes on recovering a forgotten root password in MySQL 5.7.26: <https://www.cnblogs.com/dotnetcrazy/p/11027732.html>
```
Handling a forgotten password in MySQL 8
https://www.cnblogs.com/wangjiming/p/10363357.html
Changing the datadir data directory in MySQL 5.6
https://www.cnblogs.com/ding2016/p/7644675.html
```
#### Extension: installing MySQL 8 on CentOS 7
<a href="https://www.cnblogs.com/dotnetcrazy/p/10871352.html">Notes on installing MySQL 8.0 on CentOS 7</a>: <https://mp.weixin.qq.com/s/Su3Ivuy5IMeAwYBXaka0ag>
---
## 1.4. Fundamentals (MySQL and SQLServer)
Sample scripts: <https://github.com/lotapp/BaseCode/tree/master/database/SQL>
**PS: run a SQL script in MySQL with: `mysql < script.sql`**
The code below favors SQL that works in both `MySQL` and `SQLServer` (`MSSQL`). I have not used `SQLServer` for a few years (I covered MSSQL some years back), so I will pass over it quickly (corrections welcome).
**PS: from here on I will simply say `MySQL` for `MariaDB` too (I will note differences where they exist; it is, after all, a MySQL fork, and the two are very similar).**
### 1. Concepts
#### 1.1. Traditional concepts
The traditional notions:
1. A **relation** in a relational database: a table (rows and columns)
2. **Normal forms**:
    - 1st normal form: columns are atomic
    - 2nd normal form: every table needs a primary key
    - 3rd normal form: no table should have columns that depend on **non-primary-key** columns of other tables
3. **DDL**: Data Definition Language
    - `create, drop, alter`
4. **DML**: Data Manipulation Language
    - **`insert, delete, update, select`**
5. **DCL**: Data Control Language
    - `grant` (authorize), `revoke` (withdraw)
**PS: `CRUD` (the basic atomic operations on data): Create, Update, Retrieve (read), and Delete**
#### 1.2. Common components
Common components of a relational database:
1. **Database**: database
2. **Table**: table
    - Row: row
    - Column: column
3. **Index**: index
4. **View**: view
    - PS: not recommended if you have `database migration` requirements
    - PS: MySQL's view support is not particularly complete; avoid it where possible
5. Stored procedure: procedure
6. Stored function: function
7. Trigger: trigger
8. Event scheduler: event, scheduler
9. User: user
10. Privilege: privilege
PS: common MySQL file types:
1. Data: data files, index files
2. Logs: error log, query log, slow-query log, binary log, (redo log, undo log, relay log)
### 2. Standard MySQL storage engines
#### 2.1. MySQL
First, `MySQL`'s standard storage engines (**`table types`**):
1. **`MyISAM`**: only supports `table-level locks`; no `transactions`
2. **`InnoDB`**: supports `transactions`, `gap locks`, `row locks`, and more
#### 2.2. MariaDB
First, **the pluggable storage-engine (`table type`) system was improved and extended**. PS: in practice this means more storage engines are supported (including custom ones).
`MariaDB` upgraded the standard storage engines:
1. `MyISAM` ==> `Aria`: supports crash recovery
2. `InnoDB` ==> **`XtraDB`**: optimized storage performance
It also made many extensions and developed new features (plus many testing tools), such as adding some `NoSQL` capabilities (`SQLServer` has NoSQL extensions too).
### 3. Creating and Dropping (Databases | Tables)
#### Field Types (with differences)
Official docs:
- `https://mariadb.com/kb/en/library/data-types`
- `https://dev.mysql.com/doc/refman/5.7/en/data-types.html`
Using `MariaDB` as the example, here are the common types (a few details differ from `MySQL`):
1. Character types:
    1. Fixed-length:
        - **`char()`**: case-insensitive string, `max: 255 characters`
        - binary(): case-sensitive binary string
    2. Variable-length:
        - **`varchar()`**: case-insensitive string
            - max: 65535 (2^16 - 1) bytes (`at most 21843 characters under utf8`)
            - think of it as `SQLServer`'s `nvarchar`
        - varbinary(): case-sensitive binary string
    3. Object storage:
        - **`text`**: case-insensitive long string
            - maximum length 65,535 (2^16 - 1) characters
            - the effective maximum shrinks if the value contains multi-byte characters
        - blob: case-sensitive long binary string
    4. Built-in types (not recommended):
        - enum: single-choice string type, suited to a form's `single-select values`
        - set: multi-choice string type, suited to a form's `multi-select values`
        - **PS: `MySQL`-family only; `SQLServer` doesn't have them**
2. Numeric types:
    1. Exact numeric:
        - Integers: int
            1. _bool: boolean type_ (MySQL has no real bool)
                - **PS: in `SQLServer` it's `bit`**
                - **equivalent to `MySQL`'s `tinyint(1)`**
            2. **`tinyint`**: tiny integer (1 byte, 8 bits)
                - `[-2^7, 2^7)` (`-128 ~ 127`)
                - unsigned: `[0, 2^8)` (`0 ~ 255`)
            3. smallint (2 bytes, 16 bits): small integer
                - unsigned: `0 ~ 65535`
            4. mediumint (3 bytes, 24 bits): medium integer
                - `PS: SQLServer has no such type`
            5. **`int`** (4 bytes, 32 bits)
                - `[-2^31, 2^31)`, `[-2147483648, 2147483648)`
                - unsigned: `[0, 2^32)`, `[0, 4294967296)`
            6. **`bigint`** (8 bytes, 64 bits)
                - `[-2^63, 2^63)`
                - unsigned: `[0, 2^64)`
    2. Floating-point:
        - float: single-precision float (4 bytes)
        - **`double`**: double-precision float (8 bytes)
            - **PS: `SQLServer`'s `float` corresponds to `MySQL`'s `double`**
        - **`decimal`**: exact decimal type (more precise than double)
            - commonly used for money: `decimal(total digits, decimal places)`
            - eg: `decimal(2,1)` => `x.x`
3. **Date and time types** (same as `MySQL`):
    1. date: date (3 bytes)
    2. time: time (3 bytes)
    3. year: year
        - eg: `year(2)`: `00~99 (1 byte)`
        - eg: `year(4)`: `0000~9999 (1 byte)`
        - **PS: `SQLServer has no such type`**
    4. **`datetime`**: date plus time (8 bytes)
    5. **`timestamp`**: timestamp (4 bytes) [Unix-epoch based; range limited to 2038]
4. Modifiers:
    - apply to all types:
        - nullability: `null` | `not null`
        - default value: `default xxx_value`
        - primary key: `primary key`
        - unique key: `unique key`
    - apply to numeric types:
        - **unsigned: `unsigned`** (MySQL-family only)
        - auto-increment: **`auto_increment`** (normally integers only; MSSQL uses `identity`)
            - fetch the ID: `last_insert_id()`
    - PS: **multi-column settings**:
        1. primary key: `primary key(xx,...)`
        2. unique key: `unique key(xx,...)`
        3. index: `index index_name (xx,...)`
PS: newer versions also accept SQLServer's `nvarchar` syntax (`after execution the type becomes varchar`) [not recommended]
Further reading:
```
MySQL: the differences between char, varchar and text
https://dev.mysql.com/doc/refman/5.7/en/char.html
https://blog.csdn.net/brycegao321/article/details/78038272
```
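The signed and unsigned ranges listed above all follow mechanically from the byte width (tinyint = 1 byte, int = 4, bigint = 8). A quick Python check of the half-open ranges used in the list:

```python
def int_range(n_bytes: int, unsigned: bool = False) -> tuple:
    """Half-open value range [lo, hi) of an integer type of the given byte width."""
    bits = n_bytes * 8
    if unsigned:
        return (0, 2 ** bits)
    return (-2 ** (bits - 1), 2 ** (bits - 1))

print(int_range(1))        # tinyint:          (-128, 128)
print(int_range(1, True))  # unsigned tinyint: (0, 256)
print(int_range(4))        # int:              (-2147483648, 2147483648)
```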
#### 3.1. MySQL
Overview:
1. Create a database:
- `create database [if not exists] db_name;`
2. Drop a database:
- `drop database [if exists] db_name;`
3. Create a table:
- `create table [if not exists] tb_name(col1 type modifiers, col2 type modifiers);`
4. Drop a table:
- `drop table [if exists] db_name.tb_name;`
5. Alter a table:
    1. Fields
    - add a field: add
        - `alter table tb_name add col_name type modifiers [first | after col_name];`
        - **PS: SQLServer has no `[first | after col_name]`**
    - modify a field: alter, change, modify
        - rename a field: `alter table tb_name change old_col new_col type modifiers`
        - change a field's type: `alter table tb_name modify col_name type modifiers`
        - add a default: `alter table tb_name alter col_name set default df_value`
    - drop a field: drop
        - `alter table tb_name drop col_name`
    2. Indexes
    - add an index: add (common form: **`create index index_name on tb_name(col,...);`**)
        - `alter table tb_name add index [ix_name] (col,...);`
        - add a unique key: `alter table tb_name add unique [uq_name] (col1, col2...);`
        - **PS: if you don't name the index, it defaults to the first column name**
    - drop an index: drop (common form: **`drop index index_name on tb_name`**)
        - `alter table tb_name drop index index_name;`
        - drop a unique key: `alter table tb_name drop index uq_name;`
        - **PS: a unique key's index name is its first column name**
    - **PS: indexes usually go on columns that frequently appear in query conditions**
    3. Table options
    - see this article: `https://www.cnblogs.com/huangxm/p/5736807.html`
6. **`SQL Mode`**: defines how MySQL responds to constraint violations:
- session-level change:
    - mysql> `set [session] sql_mode='xx_mode'`
    - mysql> `set @@session.sql_mode='xx_mode'`
    - **PS: only affects the current session**
- global change: requires privileges; does not take effect immediately, only for sessions created afterwards (they inherit the global value)
    - mysql> `set global sql_mode='xx_mode'`
    - mysql> `set @@global.sql_mode='xx_mode'`
    - **PS: lost after a MySQL restart**
- config-file change: permanent:
    - eg: `vi /etc/my.cnf`, add `sql_mode='xx'` under `[mysqld]`, then restart the database
    - PS: starting with MySQL 8, `set persist` persists a global-variable change into a config file
        - **persisted into the `/var/lib/mysql/mysqld-auto.cnf` config file**
        - eg: `set persist log_timestamps='SYSTEM';` (requires root privileges)
- **common modes** (Alibaba Cloud servers default to `strict_trans_tables`):
    - **`traditional`**: traditional mode; disallows inserting invalid values
    - **`strict_trans_tables`**: strict constraints on all transactional tables
    - `strict_all_tables`: strict constraints on all tables
- check the current setting: **`select @@sql_mode`**
- details in my earlier article: <https://www.cnblogs.com/dotnetcrazy/p/10374091.html>
##### 3.1.1. Creating and Dropping a Database
```sql
-- drop the database if it exists
drop database if exists dotnetcrazy;
-- create the database
create database if not exists dotnetcrazy;
```
##### 3.1.2. Creating and Dropping a Table
```sql
-- drop the table if it exists
drop table if exists dotnetcrazy.users;
-- mysql> help create table (older versions don't support functions as default values)
-- create a table: create table users(col type modifiers, ...)
create table if not exists dotnetcrazy.users
(
id int unsigned auto_increment, -- primary key, auto-increment [fetch the ID with last_insert_id()]
username varchar(20) not null,
password char(40) not null, -- sha1: 40 chars
email varchar(50) not null,
ucode char(36) not null,-- default uuid(), -- uuid
createtime datetime not null,-- default now(),
updatetime datetime not null,-- default now(),
datastatus tinyint not null default 0, -- default 0
primary key (id), -- a primary key may span multiple columns
unique uq_users_email (email),
index ix_users_createtime_updatetime (createtime, updatetime) -- index; unnamed defaults to the column name
)
-- table options
-- engine = 'innodb', -- engine
-- character set utf8, -- character set
-- collate utf8_general_ci, -- collation
;
```
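The `password char(40)` column above is sized for a hex SHA-1 digest (40 characters); the sample value used throughout these scripts is the SHA-1 of `123456`. A minimal Python sketch of producing such a digest:

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Return the 40-character hex SHA-1 digest, matching the char(40) password column."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

digest = sha1_hex("123456")
print(digest)       # 7c4a8d09ca3762af61e59520943dc26494f8941b
print(len(digest))  # 40 -> fits char(40)
```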
##### 3.1.3. Altering a Table
```sql
-- alter a table: mysql> help alter table
-- 3.1. add a column: alter table tb_name add col type modifiers [first | after col]
alter table dotnetcrazy.users
add uid bigint not null unique first; -- MSSQL has no [first | after col]
-- add a phone-number column after email
-- phone numbers aren't used in arithmetic, and varchar allows fuzzy search (eg: like '138%')
-- country codes can introduce +, -, ( ) characters, eg: +86
alter table dotnetcrazy.users
add tel varchar(20) not null after email;
-- 3.2. drop a column: alter table tb_name drop col_name
alter table dotnetcrazy.users
drop uid;
-- 3.3. add an index: alter table tb_name add index [ix_name] (col,...)
alter table dotnetcrazy.users
add index ix_users_ucode (ucode); -- unnamed defaults to the column name
-- add index (ucode, tel); -- unnamed defaults to the first column name
-- add a unique key: alter table tb_name add unique [uq_name] (col1, col2...)
alter table dotnetcrazy.users
add unique uq_users_tel_ucode (tel, ucode);
-- add unique (tel, ucode);-- unnamed defaults to the first column name
-- 3.4. drop an index: alter table tb_name drop index ix_name
alter table dotnetcrazy.users
drop index ix_users_ucode;
-- drop an index (unique key): alter table tb_name drop index uq_name
alter table dotnetcrazy.users
drop index uq_users_tel_ucode;
-- drop index tel; -- a unique key's index name is its first column name
-- 3.5. modify fields
-- 1. rename a field: `alter table tb_name change old_col new_col type modifiers`
-- you must re-specify the column's type and modifiers here
alter table dotnetcrazy.users
change ucode usercode char(36); -- default uuid();
-- 2. change a field's type
alter table dotnetcrazy.users
modify username varchar(25) not null;
-- 3. add a default: `alter table tb_name alter col_name set default df_value`
alter table dotnetcrazy.users
alter password set default '7c4a8d09ca3762af61e59520943dc26494f8941b';
```
#### 3.2. SQLServer
Example server: `SQLServer 2014`
##### 3.2.1. Creating and Dropping a Database
```sql
use master
-- drop it if it exists
if exists(select *
from sysdatabases
where Name = N'dotnetcrazy')
begin
drop database dotnetcrazy
end
-- create the database (short form: create database dotnetcrazy)
create database dotnetcrazy
on primary -- data file, primary filegroup
(
name ='dotnetcrazy_Data', -- logical name
size =10 mb, -- initial size
filegrowth =10%, -- file growth
maxsize =1024 mb, -- maximum size
filename =N'D:\Works\SQL\dotnetcrazy_data.mdf'-- path (including file extension)
)
log on -- log
(
name ='dotnetcrazy_Log',
size =5 mb,
filegrowth =5%,
filename =N'D:\Works\SQL\dotnetcrazy_log.ldf'
);
-- switch database
use dotnetcrazy;
```
##### 3.2.2. Creating and Dropping a Table
```sql
-- drop the table if it exists
if exists(select *
from sysobjects
where name = N'users')
begin
drop table users
end
-- dotnetcrazy.dbo.users
create table users
(
id int identity, -- primary key, auto-increment
username nvarchar(20) not null,
email varchar(50) not null,
password char(40) not null, -- sha1
ucode char(36) not null default newid(), -- guid
createtime datetime not null default getdate(),
updatetime datetime not null default getdate(),
datastatus tinyint not null default 0, -- default 0
primary key (id), -- a primary key may span multiple columns
unique (email),
index ix_users_createtime_updatetime (createtime, updatetime) -- index
);
```
##### 3.2.3. Altering a Table
```sql
-- 3.1. add a column: alter table tb_name add col type modifiers
-- add a phone-number column after email
alter table users
add tel varchar(20) not null;
-- 3.1.1. add a column with a unique key
-- add the column first
alter table users
add uid bigint not null
-- then add the constraint: alter table tb_name add constraint uq_name
alter table users
add constraint uq_users_uid unique (uid); -- custom name
-- 3.1.2. definition and constraint in one step (system-generated name)
-- alter table users
-- add uid bigint not null unique; -- default name
-- 3.2. a column with a unique key
-- 3.2.1. drop the constraint: alter table tb_name drop constraint uq_name
if exists(select *
from sysobjects
where name = 'uq_users_uid')
alter table users
drop constraint uq_users_uid;
-- 3.2.2. drop the column: alter table tb_name drop column col_name
alter table users
drop column uid;
-- 3.3. modify fields
-- 3.3.1. rename a column: exec sp_rename 'table.old_col','new_col';
exec sp_rename 'users.ucode', 'usercode';
-- 3.3.2. change a column's type
alter table users
alter column username varchar(25) not null;
-- 3.3.3. add a default (compare MySQL: `alter table tb_name alter col set default df_value`)
alter table users
add default '7c4a8d09ca3762af61e59520943dc26494f8941b' for password;
```
Review:
1. <a href="https://www.cnblogs.com/dunitian/p/5276431.html" target="_blank">01. SQLServer performance tuning: powerful filegroups (storage across disks)</a>
2. <a href="https://www.cnblogs.com/dunitian/p/6078512.html" target="_blank">02. SQLServer performance tuning: horizontal sharding</a>
3. <a href="https://www.cnblogs.com/dunitian/p/6041745.html" target="_blank">03. SQLServer performance tuning: storage optimization series</a>
Further reading:
```
Breaking SQLServer's memory limit:
https://www.cnblogs.com/zkweb/p/6137423.html
Official demo:
https://www.microsoft.com/en-us/sql-server/developer-get-started/python/ubuntu
Official docs:
https://docs.microsoft.com/zh-cn/sql/linux/sql-server-linux-overview?view=sql-server-2017
PS: SQL Server's default port is TCP 1433
```
#### 3.3. Differences
A quick list of the differences seen above (additions welcome):
1. **MySQL's auto-increment is `auto_increment`; MSSQL's is `identity`**
2. **MySQL can declare `unsigned`; MSSQL has no unsigned integer types, so you need a check constraint or similar to enforce it**
3. **in `alter table`, MSSQL has no `[first | after col]`, and the syntax differs quite a bit overall**
### 4. CRUD
#### 4.1. MySQL
**How a select statement executes**:
1. `from <table>`
2. `[inner|left|right] join <table> on <condition>`
3. `where <condition>`
- **filters the rows produced by from/join**
4. `group by <fields>`
- **`groups` the result by the given fields so `aggregates` can be computed per group**
5. `having <condition>`
- **filters the result of the grouping/aggregation (`group by`)**
6. `order by <fields> [asc|desc]`
- sorts the result by the given fields (ascending, `asc`, by default)
7. `select <fields>`
8. `limit [offset,]count`
- how many rows to return | pagination
##### Insert, Update, Delete
```sql
-- 4.1. insert: help insert
-- auto-increment primary keys and columns with defaults can be omitted
insert into dotnetcrazy.users(username, password, email, tel, usercode, createtime, updatetime, datastatus)
values ('dnt', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'dnt@qq.com', '18738002038', uuid(), now(), now(), 1);
-- batch insert
insert into dotnetcrazy.users(username, password, email, tel, usercode, createtime, updatetime, datastatus)
values('xxx', '7c4a8d09ca3762af61e59520943dc26494f8942b', 'xxx@qq.com', '13738002038', uuid(), now(), now(), 0),('mmd', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'mmd@qq.com', '13718002038', uuid(), now(), now(), 1),('小张', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'zhang@qq.com', '13728002038', uuid(), now(), now(), 1);
-- 4.2. update: help update
update dotnetcrazy.users
set datastatus=99,
updatetime = now()
where username = 'mmd'; -- always include a where clause! in practice, write the where clause before the update
-- 4.3. delete
-- delete rows (auto-increment not reset): help delete;
delete
from dotnetcrazy.users
where datastatus = 0;
-- delete all rows (auto-increment reset): help truncate;
truncate table dotnetcrazy.users;
```
##### Queries
```sql
-- see the appendix for the test data
-- 4.4. queries: help select
-- source urls (deduplicated)
select distinct url
from file_records;
-- source urls (via grouping)
select url
from file_records
group by url;
-- count occurrences of each url (grouping + aggregation)
-- group by is almost always used together with aggregate functions
select url, count(*) as count
from file_records
group by url;
-- urls that appear more than 3 times
select url, count(*) as count
from file_records
group by url
having count > 3; -- filters on the group by result
-- count occurrences of each url along with the matching ids
select group_concat(id) as ids, url
from file_records
group by url;
```
-- inner join query: inner join tb_name on <join condition>
select file_records.id,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
inet_ntoa(file_records.ip) as ip,
file_records.url
from users
inner join file_records on file_records.user_id = users.id -- join condition
where users.datastatus = 1
and file_records.datastatus = 1
order by file_records.file_name desc; -- sort by file name, descending
-- MySQL has no `select top n` syntax; use limit instead, eg: top 5
select *
from file_records
limit 5; -- limit 0,5
-- pagination
-- page:1,count=5 ==> 0,5 ==> (1-1)*5,5
-- page:2,count=5 ==> 5,5 ==> (2-1)*5,5
-- page:3,count=5 ==> 10,5 ==> (3-1)*5,5
-- in general: limit (page-1)*count,count
select file_records.id,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
inet_ntoa(file_records.ip) as ip,
file_records.url
from file_records
inner join users on file_records.user_id = users.id
limit 0,5;
-- limit does not accept expressions (using one raises an error)
select file_records.id,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
inet_ntoa(file_records.ip) as ip,
file_records.url
from file_records
inner join users on file_records.user_id = users.id
limit 5,5;
-- limit (2-1)*5,5; -- invalid: limit takes no expressions
-- limit must come last
select file_records.id,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
inet_ntoa(file_records.ip) as ip,
file_records.url
from file_records
inner join users on file_records.user_id = users.id
order by username desc, file_name desc
limit 10,5; -- order by sorts first, then the third page of 5 rows is taken
-- find the users who have never uploaded a file
-- right join: the right-hand table (users) is the base of the join
select file_records.id as fid,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
inet_ntoa(file_records.ip) as ip,
file_records.url
from file_records
right join users on file_records.user_id = users.id
where users.datastatus = 1
and file_records.id is null
order by username desc, file_name desc;
-- self-join example:
-- two-level linkage; p: province, c: city, a: area
-- the frontend usually shows the provinces; once the user picks one, the second- and third-level data can be fetched
select c.name, a.name
from city_infos as c
inner join city_infos as a on a.pcode = c.code
where c.pcode = '320000'; -- pcode is indexed
-- query by province name
select p.name, c.name, a.name
from city_infos as c
inner join city_infos as p on c.pcode = p.code
inner join city_infos as a on a.pcode = c.code
where p.name = '江苏省';
```
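The `limit (page-1)*count,count` rule above is easy to wrap in a small helper; a minimal sketch (function name is mine):

```python
def limit_clause(page: int, per_page: int) -> str:
    """Build a MySQL LIMIT clause from a 1-based page number: offset = (page - 1) * per_page."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be >= 1")
    offset = (page - 1) * per_page
    return f"limit {offset},{per_page}"

print(limit_clause(1, 5))  # limit 0,5
print(limit_clause(3, 5))  # limit 10,5
```

Computing the offset in application code also sidesteps the problem noted above: `limit` itself accepts no expressions.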
##### Views
```sql
-- a quick look at views:
-- create a view
create view view_userinfo as
select id, username, password, email, tel, datastatus
from dotnetcrazy.users;
-- query the view
select id, username, password, email, tel, datastatus
from dotnetcrazy.view_userinfo;
-- drop the view
drop view if exists view_userinfo;
```
**Appendix**:
Useful functions:
```sql
-- convert an ip to an int
select inet_aton('43.226.128.3'); -- inet6_aton()
-- convert an int back to an ip
select inet_ntoa('736264195'); -- inet6_ntoa() for ipv6
-- concatenate several values into one string
select concat(user_id, ',', file_name, ',', ip, ',', url) as concat_str
from file_records;
-- concatenate several values with a single shared separator
select concat_ws(',', user_id, file_name, ip, url) as concat_str
from file_records;
-- in a group by query, every selected field must either appear in the group by list (as a grouping key) or sit inside an aggregate function
-- group_concat(): joins the values within each group produced by group by into one string
select group_concat(file_name) as file_name, url, count(*)
from file_records
group by url;
-- having filters the group by result; where filters the base table
select group_concat(file_name) as file_name, group_concat(url) as url, count(*) as count
from file_records
group by url
having count >= 3;
-- round to a given number of decimal places
select round(3.12345, 4);
-- to avoid losing precision, decimals are often stored as scaled integers, eg: 3.1415 ==> integer: 31415, factor: 10000
```
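MySQL's `inet_aton`/`inet_ntoa` simply pack the four IPv4 octets into a 32-bit integer; a Python sketch of the same conversion, using the sample address from above:

```python
import socket
import struct

def ip_to_int(ip: str) -> int:
    """IPv4 dotted-quad -> unsigned int, like MySQL's inet_aton()."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def int_to_ip(n: int) -> str:
    """Unsigned int -> IPv4 dotted-quad, like MySQL's inet_ntoa()."""
    return socket.inet_ntoa(struct.pack("!I", n))

print(ip_to_int("43.226.128.3"))  # 736264195
print(int_to_ip(736264195))       # 43.226.128.3
```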
**Test data**:
`city_data.sql`: <https://github.com/lotapp/BaseCode/blob/master/database/SQL/city2017.sql>
```sql
-- id, file name, file MD5, meta (media type), current user, request IP, source url, request time, data status
drop table if exists file_records;
create table if not exists file_records
(
id int unsigned auto_increment primary key,
file_name varchar(100) not null,
md5 char(32) not null,
meta_type tinyint unsigned not null default 1,
user_id int unsigned not null,
ip int unsigned not null,
url varchar(200) not null default '/',
createtime datetime not null, -- default now(),
datastatus tinyint not null default 0
);
-- insert this 2~3 times (to make the demos below interesting)
insert into file_records(file_name, md5, meta_type, user_id, ip, url, createtime, datastatus)
values ('2.zip', '3aa2db9c1c058f25ba577518b018ed5b', 2, 1, inet_aton('43.226.128.3'), 'http://baidu.com', now(), 1),
('3.rar', '6f401841afd127018dad402d17542b2c', 3, 3, inet_aton('43.224.12.3'), 'http://qq.com', now(), 1),
('7.jpg', 'fe5df232cafa4c4e0f1a0294418e5660', 4, 5, inet_aton('58.83.17.3'), 'http://360.cn', now(), 1),
('9.png', '7afbb1602613ec52b265d7a54ad27330', 5, 4, inet_aton('103.3.152.3'), 'http://cnblogs.com', now(), 1),
('1.gif', 'b5e9b4f86ce43ca65bd79c894c4a924c', 6, 3, inet_aton('114.28.0.3'), 'http://qq.com', now(), 1),
('大马.jsp', 'abbed9dcc76a02f08539b4d852bd26ba', 9, 4, inet_aton('220.181.108.178'), 'http://baidu.com', now(),
99);
```
#### 4.2. SQLServer
How a select statement executes:
1. `from <table>`
2. `<join type> join <table> on <condition>`
3. `where <condition>`
- **filters the rows produced by from/join**
4. `group by <fields>`
- **`groups` the result by the given fields so `aggregates` can be computed per group**
5. `having <condition>`
- **filters the result of the grouping/aggregation (`group by`)**
6. `select distinct <fields>`
7. `order by <fields> [asc|desc]`
- sorts the result by the given fields (ascending, `asc`, by default)
8. `top <n>`
- the analogue of `limit`
##### Insert, Update, Delete
```sql
-- 4.1. insert: help insert
-- auto-increment primary keys and columns with defaults can be omitted
insert into dotnetcrazy.dbo.users(username, password, email, tel, usercode, createtime, updatetime, datastatus)
values ('dnt', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'dnt@qq.com', '18738002038', newid(), getdate(), getdate(),
1);
-- batch insert; SQLServer caps a single values list at roughly 1000 rows
insert into dotnetcrazy.dbo.users(username, password, email, tel, usercode, createtime, updatetime, datastatus)
values ('xxx', '7c4a8d09ca3762af61e59520943dc26494f8942b', 'xxx@qq.com', '13738002038', newid(), getdate(), getdate(), 0),
('mmd', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'mmd@qq.com', '13738002038', newid(), getdate(), getdate(), 1),
('小明', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'xiaoming@qq.com', '13718002038', newid(), getdate(), getdate(), 1),
('小张', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'zhang@qq.com', '13728002038', newid(), getdate(), getdate(), 1),
('小潘', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'pan@qq.com', '13748002038', newid(), getdate(), getdate(), 1),
('小周', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'zhou@qq.com', '13758002038', newid(), getdate(), getdate(), 1),
('小罗', '7c4a8d09ca3762af61e59520943dc26494f8941b', 'luo@qq.com', '13768002038', newid(), getdate(), getdate(), 1);
-- 4.2. update: help update
update dotnetcrazy.dbo.users
set datastatus=99,
updatetime = getdate()
where username = 'mmd'; -- always include a where clause! in practice, write the where clause before the update
-- 4.3. delete
-- delete rows (auto-increment not reset): help delete;
delete
from dotnetcrazy.dbo.users
where datastatus = 0;
-- delete all rows (auto-increment reset): help truncate;
truncate table dotnetcrazy.dbo.users;
```
##### Queries
```sql
-- source urls (deduplicated)
select distinct url
from file_records;
-- source urls (via grouping)
select url
from file_records
group by url;
-- count occurrences of each url (grouping + aggregation)
-- group by is almost always used together with aggregate functions
select url, count(*) as count
from file_records
group by url;
-- urls that appear more than 3 times
select url, count(*) as count
from file_records
group by url
having count(*) > 3; -- filters the group by result; ★the bare alias `count` won't work here★
-- count occurrences of each url along with the matching ids
-- SQLServer 2017 adds string_agg for this
select ids =(select stuff((select ',' + cast(id as varchar(20)) from file_records as f
where f.url = file_records.url for xml path ('')), 1, 1, '')),url from file_records
group by url;
-- inner join query: inner join tb_name on <join condition>
select file_records.id,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
file_records.ip,
file_records.url
from users
inner join file_records on file_records.user_id = users.id -- join condition
where users.datastatus = 1
and file_records.datastatus = 1
order by file_records.file_name desc; -- sort by file name, descending
-- show the first 5 rows
select top 5 * from file_records;
-- pagination: page 3, 5 rows per page
select *
from (select row_number() over (order by username desc, file_name desc) as id,
file_records.id as fid,
users.id as uid,
users.username,
users.email,
file_records.file_name,
file_records.md5,
file_records.ip,
file_records.url
from file_records
inner join users on file_records.user_id = users.id) as temp
where id > (3 - 1) * 5 and id <= 3 * 5;
-- a quick look at views:
-- drop it if it exists
if exists(select *
from sysobjects
where name = N'view_userinfo')
begin
drop view view_userinfo
end
-- create the view
create view view_userinfo as
select id, username, password, email, tel, datastatus
from users;
-- query the view
select id, username, password, email, tel, datastatus
from view_userinfo;
```
##### Appendix
Useful functions:
```sql
select getdate() as datatime, newid() as uuid;
-- similar effect to concat
select cast(id as varchar(20)) + ','
from file_records for xml path ('');
-- strip the leading separator
-- STUFF(<character_expression>, <start>, <length>, <character_expression>)
-- inserts one string into another: it deletes <length> characters of the first string starting at <start>, then inserts the second string at that position
select stuff((select ',' + cast(id as varchar(20))
from file_records for xml path ('')), 1, 1, '');
```
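The `stuff(... for xml path(''))` idiom above (like MySQL's `group_concat`) boils down to joining the values of each group with a separator. A sketch of the same aggregation in plain Python (the rows are illustrative stand-ins for `file_records`):

```python
from collections import defaultdict

# (id, url) rows standing in for file_records (illustrative data)
rows = [(1, "http://baidu.com"), (2, "http://qq.com"),
        (3, "http://qq.com"), (4, "http://baidu.com")]

groups = defaultdict(list)
for id_, url in rows:
    groups[url].append(str(id_))

# one comma-joined id list per url, like group_concat(id) ... group by url
result = {url: ",".join(ids) for url, ids in groups.items()}
print(result["http://qq.com"])  # 2,3
```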
Test data:
```sql
-- drop the table if it exists
if exists(select *
from sysobjects
where name = N'file_records')
begin
drop table file_records
end
-- SQLServer's int has no unsigned, so bigint is recommended here
create table file_records
(
id bigint identity (1,1) primary key,
file_name varchar(100) not null,
md5 char(32) not null,
meta_type tinyint not null default 1,
user_id int not null,
ip bigint not null, -- convert in application code
url varchar(200) not null default '/',
createtime datetime not null default getdate(),
datastatus tinyint not null default 0
);
-- insert this 3 times (for the demos below)
insert into file_records(file_name, md5, meta_type, user_id, ip, url, createtime, datastatus)
values ('2.zip', '3aa2db9c1c058f25ba577518b018ed5b', 2, 1, 736264195, 'http://baidu.com', getdate(), 1),
('3.rar', '6f401841afd127018dad402d17542b2c', 3, 3, 736103427, 'http://qq.com', getdate(), 1),
('7.jpg', 'fe5df232cafa4c4e0f1a0294418e5660', 4, 5, 978522371, 'http://360.cn', getdate(), 1),
('9.png', '7afbb1602613ec52b265d7a54ad27330', 5, 4, 1728288771, 'http://cnblogs.com', getdate(), 1),
('1.gif', 'b5e9b4f86ce43ca65bd79c894c4a924c', 6, 3, 1914437635, 'http://qq.com', getdate(), 1),
('大马.jsp', 'abbed9dcc76a02f08539b4d852bd26ba', 9, 4, 3702877362, 'http://baidu.com', getdate(), 99);
```
### 5. MySQL Command Extensions:
1. **Command help**: `MySQL>` **`help <command>`**
- PS: version query: `select version();`
2. Character sets: **`show character set;`**
- **utf8**: 1~3 bytes per Unicode character (the common case)
- **utf8mb4**: 1~4 bytes per Unicode character (`emoji` or `rare CJK characters`)
3. Collations: `show collation;`
- eg: `show collation where Collation like "%utf8%";`
4. Engines: `show engines;`
- `InnoDB is the default storage engine`
5. **List all databases: `show databases;`**
6. **Switch database: `use db_name;`**
7. **List all tables: `show tables;`**
8. **Show table status: `show table status;`**
- eg: `show table status like 'users';`
9. **Show a table's structure: `desc tb_name;`**
10. **Show the SQL used to create a table: `show create table tb_name;`**
11. **Show a table's indexes: `show indexes from tb_name`**
12. Show the MySQL data directory: `show variables like '%datadir%';`
13. Current session's connection id: `select connection_id();`
**PS: `\G prints vertically`: `show table status like 'users'\G`**
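The utf8 vs utf8mb4 distinction above is easy to see by encoding characters in Python, whose `utf-8` codec is the full 1~4-byte encoding (what MySQL calls `utf8mb4`):

```python
# byte lengths under full UTF-8 (MySQL's utf8mb4)
print(len("a".encode("utf-8")))   # 1 byte: ASCII
print(len("中".encode("utf-8")))  # 3 bytes: common CJK -> fits MySQL's 3-byte utf8
print(len("😀".encode("utf-8")))  # 4 bytes: emoji -> needs utf8mb4 in MySQL
```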
A few closing thoughts:
1. Before using `Linux` I considered `C# the most elegant, most cost-effective and simplest language`; afterwards I found `Python is the simplest language`, so `C# keeps only "most elegant and most cost-effective"`
- I'm about to try Golang, so the final verdict is pending
2. When I first touched MySQL I found SQLServer really convenient; the deeper I dig into MySQL, the more I'd say, in all fairness:
- **for developers, `MySQL` really is more convenient than `SQLServer`**
- **for ops people, `SQLServer` is far more convenient**
- PS: for small and mid-size companies with no dedicated ops staff, I'd still recommend `SQLServer`; if you have ops people, or some `Linux` ops experience on the team, go with `MySQL`
One sentence for you: **`if your thinking stays locked in a single dimension, your judgment drifts; dare to experiment and break out~`**
For time reasons the SQL examples from here on won't be shown side by side; it'll be all `MySQL` (I'll try to add the `SQLServer` versions where I can)
**Next up (planned): query optimization**
Further reading:
```
MySQL online IDE: phpMyAdmin
https://www.phpmyadmin.net/downloads/
Most popular MySQL tool: Navicat Premium
https://www.cnblogs.com/dotnetcrazy/p/9711198.html
Best MySQL tool: dbForge Studio for MySQL
https://www.devart.com/dbforge/mysql/studio/download.html
[Cross-platform] SQLServer tool: SqlOps
https://www.cnblogs.com/dunitian/p/8045081.html
https://github.com/Microsoft/azuredatastudio/releases
[Cross-platform] supports both: JetBrains DataGrip [recommended]
https://www.cnblogs.com/dotnetcrazy/p/9711763.html
MariaDB data types
https://www.w3cschool.cn/mariadb/mariadb_data_types.html
MySQL data types
https://www.w3cschool.cn/mysql/mysql-data-types.html
(MariaDB) MySQL data types and storage mechanisms in detail
https://www.cnblogs.com/f-ck-need-u/archive/2017/10/25/7729251.html
Mapping between SQL Server and MySQL data types
https://blog.csdn.net/lilong329329/article/details/78899477
The difference between ALTER TABLE and CREATE INDEX
https://blog.csdn.net/qq_34578253/article/details/72236808
1. create index requires an index name; with alter table, if you omit the name MySQL generates one (defaulting to the first column name)
2. create index builds only one index per statement; alter table can build several in one statement, eg:
- `ALTER TABLE HeadOfState ADD PRIMARY KEY (ID), ADD INDEX (LastName,FirstName);`
3. only alter table can create a primary key
```
## 1.5. The Art of Querying
Previous installment: <https://www.cnblogs.com/dotnetcrazy/p/10399838.html>
Script for this section: <https://github.com/lotapp/BaseCode/blob/master/database/SQL/02.索引、查询优化.sql>
This article is a little long, but a careful read will pay off. PS: I've listed everything I could think of; I'll append new items as they come up. Additions and corrections welcome~
### 1.5.1. Indexes
The big picture: **reduce redundant indexes; avoid duplicate (useless) indexes**
#### 1. Concepts
A broad classification:
1. Clustered vs non-clustered: is the data stored together with the index? (together = clustered)
2. Primary index vs secondary index
3. Dense vs sparse:
- is every data item indexed? (yes = dense)
4. `B+ Tree` indexes, `hash` indexes (key-value; `only the Memory engine supports them`), `R Tree` indexes (spatial; `supported by MyISAM`), `Fulltext` indexes (full-text)
5. Single-column vs composite indexes
PS: indexes usually go on fields used as query conditions (indexes are implemented at the storage-engine level)
**Common classification:**
1. By syntax:
    1. **ordinary index**: one column, one index
    2. **unique index**: produced by declaring unique (nullable)
    - one way to think of it: unique + not null = primary key
    3. **composite index**: several columns, one index
2. By physical storage (InnoDB and MyISAM engines):
    1. **clustered index**: usually the primary key
    - data and index are stored together
    - InnoDB file suffixes: frm, ibd (data + index)
    2. **non-clustered index**: any index that isn't clustered
    - data and index are stored separately
    - MyISAM file suffixes: frm, myd (data), myi (index)
    3. PS: both are B-tree indexes; frm (table structure) is engine-independent
#### 2. Syntax Basics
1. Show indexes: `show index from tb_name;`
- show index from worktemp.userinfo\G;
- show index from worktemp.userinfo;
2. Create an index:
- `create [unique] index index_name on tb_name(col,...)`
- `alter table tb_name add [unique] index [index_name] (col,...)`
3. Drop an index:
- `drop index index_name on tb_name`
- `alter table tb_name drop index index_name`
### 1.5.2. Execution Plans
#### 1. Recap
A quick recap of the previous section:
**Syntax order of a handwritten SQL statement:**
```sql
select distinct
<select_list>
from <tb_name>
<join_type> join <right_table> on <join_condition>
where
<where_condition>
group by
<group_by_list>
having
<having_condition>
order by
<order_by_list>
limit <limit_number>
```
**SQL execution order:**
1. `from <tb_name>`
2. `on <join_condition>`
3. `<join_type> join <right_table>`
4. `where <where_condition>`
5. `group by <group_by_list>`
6. `having <having_condition>`
7. `select [distinct] <select_list>`
8. `order by <order_by_list>`
9. `limit <limit_number>`
#### 2. Basics
Syntax: explain + SQL statement
Execution plan: **the `explain` keyword simulates how the optimizer would execute a SQL query; it is typically used to `analyze performance bottlenecks in a query or a table design`**
An execution plan is generally used to:
1. see the order in which tables are read
2. see the type of each data-read operation
3. see which indexes could be used
4. see which indexes are actually used
5. see the references between tables
6. see how many rows of each table the optimizer reads
##### Key Columns
The columns to focus on:
1. id: the number of each select statement within the current query
- mainly relevant for subqueries and union queries
2. `select_type`: query type
- simple query: simple (an ordinary query)
- complex queries: (details in appendix 1)
    - `subquery`: a subquery in where (a simple subquery)
    - `derived`: a subquery in from
    - `union`: the select statements after the first one in a union
    - `union result`: the anonymous temporary table
3. `type`: access type (how MySQL reads rows from the table)
    1. all: full table scan
    2. index: full scan in index order (**a covering index is far more efficient**)
    3. range: index scan over a given range
    4. ref: all rows matching a single value
    5. eq_ref: like ref, but compared against a single value and returning at most one row
    6. const: lookup via a unique index, returning a single row (**best performance**)
    - eg: primary key, unique key
    7. **PS: 1~6 ==> the higher the number in this list, the better the performance**, (details in appendix 2)
4. `possible_keys`: indexes the query might use
5. `key`: the index actually used
6. `key_len`: number of bytes of the index used (details in appendix 3)
- lets you judge how much of the index is used
- eg: with a composite index, whether every indexed field is actually used
7. `ref`: which columns or constants are compared against the index in `key`
- i.e. which columns/constants are used to look up values in the index
8. `rows`: estimated number of rows to scan
9. `Extra`: extra information (roughly best to worst)
    1. **using index**: a covering index is used
    2. `using where`: the server filters again after the storage engine returns rows
    3. using temporary: a temporary table is used to sort the result
    4. using filesort: an external sort is used
    - index order can't be used, so MySQL falls back to its own sort algorithm
    - typical cause (**this almost always needs optimizing**):
        - the index column in where differs from the one in `order by|group by` (only one index can be used)
        - eg: `explain select * from users where id<10 order by email;` (only id is used)
#### Appendix
##### 1. select_type
**`select_type`: query type**
```sql
-- `subquery`: a subquery in where (a simple subquery)
explain
select name, age
from students
where age > (select avg(age) from students);
-- `union`: the select statements after the first one in a union
-- `union result`: the anonymous temporary table
explain
select name, age, work
from students
where name = '小张'
union
select name, age, work
from students
where name = '小明';
-- `derived`: a subquery in from
explain
select *
from (select name, age, work from students where name = '小张'
union
select name, age, work from students where name = '小明') as tmp;
```
Sample `explain` output: *(screenshot omitted)*

##### 2. type
**`type`: access type (how MySQL reads rows from the table)**
```sql
-- all: full table scan (very slow)
explain
select *
from students
where name like '%小%';
-- index: full scan in index order (slow)
explain
select name, age, work
from students
where name like '%小%'; -- effectively an improved version of the full scan above
-- range: index scan over a given range
explain
select name, age, work
from students
where id > 5;
-- ref: all rows matching a single value
explain
select name, age, work
from students
where name = '小明';
-- eq_ref: like ref, but compared against a single value and returning at most one row
explain
select *
from userinfo
inner join (select id from userinfo limit 10000000,10) as tmp
on userinfo.id = tmp.id; -- about 1s
-- const: lookup via a unique index, returning a single row (best performance)
explain
select name, age, work
from students
where id = 3; -- usually a primary or unique key
```
Sample `explain` output: *(screenshots omitted)*


##### 3. key-len
1. Nullability:
- not null needs no extra byte
- null needs 1 extra byte as a marker
- PS: indexed columns are best kept not null; null costs extra storage and makes statistics more complex
2. Index length for character types (char, varchar):
- character encoding (PS: different encodings take different amounts of storage):
    - `latin1`|`ISO8859` uses 1 byte, `gbk` uses 2 bytes, **`utf8` uses 3 bytes** per character
- variable-length fields (varchar) need 2 extra bytes to record the actual length
    - (in the row format the length prefix itself is 1 byte, or 2 bytes if the defined length exceeds 255; key_len always counts 2)
- fixed-length fields (char) need no extra bytes
3. Index length for numeric and date types:
- generally the type's own length, plus 1 if nullable
    - the null marker takes 1 byte
- PS: a datetime field counts 5 in MySQL 5.6 and 8 in 5.5
4. Composite indexes have the leftmost-prefix property. If the whole composite index is used, key_len is the sum of the indexed fields' lengths
- PS: this lets you check whether a composite index is fully used
5. An example:
- eg: `char(20) index, nullable`
    - `key-len = 20*3 (utf8) + 1 (nullable) = 61`
- eg: `varchar(20) index, nullable`
    - `key-len = 20*3 (utf8) + 2 (variable length) + 1 (null marker) = 63`
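The rules above can be captured in a small helper (a sketch following the article's utf8 numbers; the function name is mine). The last line reproduces the `key_len=140` value that the composite-index examples below report for `ix_students_name_age_work` (varchar(25) + tinyint + varchar(20), all not null):

```python
def key_len(char_count: int, bytes_per_char: int = 3,
            variable: bool = False, nullable: bool = False) -> int:
    """Estimate explain's key_len for one char/varchar index column.
    utf8 counts 3 bytes per character; varchar adds 2 length bytes;
    a nullable column adds 1 marker byte."""
    n = char_count * bytes_per_char
    if variable:
        n += 2  # varchar length bytes
    if nullable:
        n += 1  # null marker
    return n

print(key_len(20, nullable=True))                 # char(20) null    -> 61
print(key_len(20, variable=True, nullable=True))  # varchar(20) null -> 63
# name varchar(25) + age tinyint (1 byte) + work varchar(20), all not null:
print(key_len(25, variable=True) + 1 + key_len(20, variable=True))  # 140
```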
##### Table-Creation Script
```sql
create table if not exists `students`
(
id int unsigned auto_increment primary key,
name varchar(25) not null default '' comment 'name',
age tinyint unsigned not null default 0 comment 'age',
work varchar(20) not null default '普通学生' comment 'role',
create_time datetime not null comment 'enrollment time',
datastatus tinyint not null default 0 comment 'data status'
) charset utf8 comment 'students table';
-- select current_timestamp(), now(), unix_timestamp();
insert into students(name, age, work, create_time, datastatus)
values ('111', 22, 'test', now(), 99),
('小张', 23, '英语课代表', now(), 1),
('小李', 25, '数学课代表', now(), 1),
('小明', 21, '普通学生', now(), 1),
('小潘', 27, '物理课代表', now(), 1),
('张小华', 22, '生物课代表', now(), 1),
('张小周', 22, '体育课代表', now(), 1),
('小罗', 22, '美术课代表', now(), 1);
-- create a composite index
create index ix_students_name_age_work on students (name, age, work);
```
Enough digressions; now for the main topic:
---
### 1.5.3. Table-Design Optimizations
1. Separate fixed-length and variable-length data (depends on the business)
- eg: move varchar, text, blob and other variable-length fields into their own table, linked back to the main table
2. Separate frequently-used and rarely-used fields
- analyze the business and pull the rarely-used fields out
3. Add a few redundant fields where 1-to-many relations need aggregate statistics
- useful for cross-table/cross-database queries after sharding (mind data consistency)
- eg: add a count field to the category table to track products added per day
    - after a product is added and categorized, update the count (reset it the next day)
4. Field types generally follow this priority (prefer the higher-priority type):
- `numeric > date > char > varchar > text, blob`
- PS: the general principle is "just enough", and avoid null where possible (it hurts indexing and wastes space)
- eg: varchar(10) and varchar(300) need different amounts of memory in a join
5. **Pseudo-hash technique**: say a product url is a varchar column
- add a second column holding hash(url), and put the index on that column
- **`crc32`** is recommended (stored as bigint): the index is much smaller and full table scans are avoided
- eg: `select crc32('http://www.baidu.com/shop/1.html');`
- PS: use crc64 if your DBA has it configured; if not, add an extra condition (`the fix for CRC32 collisions`)
    - for the few colliding rows, only a handful of extra rows get scanned; no full table scan occurs
    - eg: `select xxx from urls where crc_url=563216577 and url='url地址'`
**PS: the technique worth remembering here: `crc32`**
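The pseudo-hash trick maps a long url onto a 32-bit integer; Python's `zlib.crc32` computes the same CRC-32 checksum family as MySQL's `crc32()`. A sketch, using the sample url from the text:

```python
import zlib

def crc_url(url: str) -> int:
    """32-bit CRC of a url, suitable for a separate indexed integer column."""
    return zlib.crc32(url.encode("utf-8"))

h = crc_url("http://www.baidu.com/shop/1.html")
print(h)                 # query as: where crc_url = <h> and url = '...'
print(0 <= h < 2 ** 32)  # always fits 32 bits, hence the small index
```

Keeping the original `and url = '...'` condition in the query is what makes collisions harmless: a collision only means a few extra rows to check.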
### 1.5.4. Composite Indexes in Depth
Composite indexes are the most common kind in real projects, so let's start there:
#### 1. Use as Many Index Columns as Possible; Prefer Covering Indexes
```sql
-- if a query uses all three indexed columns, it is without question the fastest case
-- Extra: using where
explain
select id, name, age, work, create_time
from students
where name = '小张'
and age = 23
and work = '英语课代表';
-- PS: ★prefer covering indexes★ (they fix almost everything)
-- covering index: all the required data can be found in the index alone
-- Extra: using where; using index
explain
select name, age, work
from students
where name = '小张'
and age = 23
and work = '英语课代表';
-- PS: a common pattern is one composite index over the frequently selected columns, usually no more than 5
```
Sample `explain` output: *(screenshot omitted)*

#### 2. The Leftmost-Prefix Principle
Think of a train: the locomotive can run on its own, but the carriages go nowhere without it
```sql
-- start from the leftmost column, skip none in the middle, and go through to the end
explain
select id, name, age, work, create_time
from students
where name = '小张'
and age = 23
and work = '英语课代表';
-- age in the middle is skipped, so only the name column's index part is used (work's part is not)
explain
select id, name, age, work, create_time
from students
where name = '小张'
and work = '英语课代表';
```
Sample `explain` output: *(screenshot omitted)*

Two more cases:
```sql
-- PS: if the first column is skipped, no part of the index can be used: straight to a full table scan
explain
select id, name, age, work, create_time
from students
where age = 23
and work = '英语课代表';
-- PS: the conditions don't have to be written in index order
explain
select id, name, age, work, create_time
from students
where age = 23
and work = '英语课代表'
and name = '小张';
```
Sample `explain` output: *(screenshot omitted)*

#### 3. Put Range Conditions Last (index parts after a range condition are lost)
```sql
-- with name, age and work all in effect, key_len=140
explain
select id, name, age, work, create_time, datastatus
from students
where name = '小张'
and age = 23
and work = '英语课代表';
-- now key_len=78 ==> the work column's index part is lost (PS: the age part still works; only the columns after it are lost)
explain
select id, name, age, work, create_time, datastatus
from students
where name = '小张'
and age > 22
and work = '英语课代表';
```
Sample `explain` output: *(screenshot omitted)*

Additional notes:
```sql
-- a covering index speeds the query back up
explain
select name, age, work
from students
where name = '小张'
and age > 22
and work = '英语课代表';
-- PS: including the primary-key column behaves the same (still covered)
explain
select id, name, age, work
from students
where name = '小张'
and age > 22
and work = '英语课代表';
-- PS: reordering the conditions can't fix the index loss after a range (the order never mattered in the first place)
explain
select id, name, age, work, create_time, datastatus
from students
where name = '小张'
and work = '英语课代表'
and age > 22;
```
Sample `explain` output: *(screenshot omitted)*

#### 4. Don't Operate on Index Columns
Doing so easily leads to a full table scan; a covering index is a simple mitigation
##### 1. Use `!=`, `is not null`, `is null`, `not in`, `in`, `like` with care
**The `!=`, `is not null`, `is null` cases**
```sql
-- 1. the not-equal case
-- index lost (key, key_len ==> null)
explain
select id, name, age, work, create_time, datastatus
from students
where name != '小明'; -- <> is the same as !=
-- projects need this all the time, so what then? ==> use a covering index
-- key=ix_students_name_age_work, key_len=140
explain
select name, age, work
from students
where name != '小明'; -- <> is the same as !=
-- 2. the is null / is not null case
-- index lost (key, key_len ==> null)
explain
select id, name, age, work, create_time, datastatus
from students
where name is not null;
-- fix: covering index; key=ix_students_name_age_work, key_len=140
explain
select name, age, work
from students
where name is not null;
```
图示:

**Cases for `not in` and `in`**
```sql
-- 3. the not in / in case
-- index fails (key, key_len ==> null)
explain
select id, name, age, work, create_time, datastatus
from students
where name in ('小明', '小潘', '小李');
explain
select id, name, age, work, create_time, datastatus
from students
where name not in ('小明', '小潘', '小李');
-- fix: covering index, key=ix_students_name_age_work, key_len=140
explain
select name, age, work
from students
where name in ('小明', '小潘', '小李');
explain
select name, age, work
from students
where name not in ('小明', '小潘', '小李');
```
Diagram:
![16.optimization15.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization15.png)
**The `like` case**: prefer the `xxx%` form for text matching; combining it with a covering index is even better
```sql
-- 4. the like case
-- index still used: key=ix_students_name_age_work, key_len=77 (use like this way whenever possible)
explain
select id, name, age, work, create_time, datastatus
from students
where name like '张%';
-- index fails
explain
select id, name, age, work, create_time, datastatus
from students
where name like '%张';
-- index fails
explain
select id, name, age, work, create_time, datastatus
from students
where name like '%张%';
-- fix: covering index, key=ix_students_name_age_work, key_len=140 (still best avoided)
explain
select name, age, work
from students
where name like '%张%';
```
![16.optimization16.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization16.png)
##### 2.Computation, functions and type conversion (implicit or explicit) [avoid them]
```sql
-- 4.2. computation, functions, type conversion (implicit or explicit) [avoid them]
-- the index fails outright and a full table scan happens
-- a covering index can rescue the query, but better to avoid the following altogether:
-- 1. computation
explain
select id, name, age, work, create_time, datastatus
from students
where age = (10 + 13);
-- 2. implicit type conversion (111 ==> '111')
explain
select id, name, age, work, create_time, datastatus
from students
where name = 111;
-- PS: leaving the quotes off a string column kills the index immediately
-- a covering index works around it, but don't do this (strictly speaking, it is simply a bug)
-- 3. functions
explain
select id, name, age, work, create_time, datastatus
from students
where right(name, 1) = '明';
```
Diagram:
![16.optimization17.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization17.png)
---
Reading alone is dull, so here is a simple business example:
> eg: users usually browse products by major category => sub-category => brand, and sometimes stop at the sub-category and browse by hand. The composite indexes could then be: `index(category_id, price)` and `index(category_id, brand_id, price)` (in practice the choice is usually confirmed against the query log)
PS: some of these rules are widely circulated folklore and some are work experience; at least I have hit every one of these pitfalls myself, so they are reasonably safe (different businesses optimize from different angles)
### 1.5.5.Optimizing how the SQL is written
#### 5.1.Rewrite or as union
```sql
-- 5.1. rewrite or as union
-- newer versions already optimize statements with a single or
explain
select id, name, age, work, create_time, datastatus
from students
where name = '小明'
or name = '小张'
or name = '小潘';
-- PS: equivalent to the or statement above
explain
select id, name, age, work, create_time, datastatus
from students
where name in ('小明', '小张', '小潘');
-- efficient
explain
select id, name, age, work, create_time, datastatus
from students
where name = '小明'
union all
select id, name, age, work, create_time, datastatus
from students
where name = '小张'
union all
select id, name, age, work, create_time, datastatus
from students
where name = '小潘';
```
![16.optimization18.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization18.png)
**PS: union always produces a temporary table, which makes it awkward to optimize**
> In general each union branch should return as few rows as possible; a plain union must deduplicate the merged result set in memory (wasted work), so **always add all when using union** (deduplicate at the application level instead)
#### 5.2.count optimization
People usually write `count(primary key|indexed column)`, but `count(*)` is optimized internally by basically every current database (follow your team's convention)
> PS: I once got bitten here; I'll add the case when I manage to reproduce it (I recall it involved null)
One look shows why it barely matters (PS: `count(non-indexed column)` is another story)
```sql
explain
select count(id) -- commonly used
from userinfo;
explain
select count(*)
from userinfo;
-- `count(non-indexed column)` is where it starts to matter
explain
select count(password)
from userinfo;
```
![16.optimization19.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization19.png)
**The count optimization I actually want to show is this one:** (sometimes splitting the query is faster)
```sql
-- we need the total number of rows with id > 10000 (in practice the cutoff is often a timestamp)
explain
select count(*) as count
from userinfo
where id > 10000; -- 2s
-- decompose into total minus the smaller complement ==> 1s
explain
select (select count(*) from userinfo) - (select count(*) from userinfo where id <= 10000) as count;
```
Execution diagram:
![16.optimization20.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization20.png)
Analysis diagram:
![16.optimization21.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization21.png)
#### 5.3.group by and order by
**Make the `group by` and `order by` columns the same whenever possible; this avoids a filesort**
```sql
-- 5.3. keep the group by and order by columns the same to avoid a filesort
explain
select *
from students
group by name
order by work;
explain
select *
from students
group by name
order by name;
-- the same applies with a where condition
explain
select *
from students
where name like '小%'
group by age
order by work;
-- PS: usually the group by / order by columns match the indexed where columns (if they differ, only one index is used)
explain
select *
from students
where name like '小%' and age>20
group by name
order by name;
-- the indexed column in where differs from the one after `order by|group by`
-- id and email are both indexed, but only one index is used
explain
select *
from users
where id < 10
order by email;
```
Diagram:
![16.optimization22.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization22.png)
**PS: when they differ, only one index is used (explained in detail under index misconceptions)**
#### 5.4.Replace subqueries with joins
**As a rule, replace subqueries with join queries**; occasionally a subquery is more convenient (it depends on the business)
```sql
-- replace in with exists? The MySQL query optimizer already rewrites in (into exists; the bigger the users table, the slower the query)
explain
select *
from students
where name in (select username from users where id < 7);
-- ==> equivalent to:
explain
select *
from students
where exists(select username from users where username = students.name and users.id < 7);
-- the real improvement ==> replace the subquery with a join
explain
select students.*
from students
inner join users on users.username = students.name and users.id < 7;
-- equivalent form: tmp is a temporary table with no indexes; if ordering is needed, sort inside the () first
explain
select students.*
from students
inner join (select username from users where id < 7) as tmp on students.name = tmp.username;
```
Diagram: (in is already converted to exists internally, so rewriting it yourself changes nothing)
![16.optimization23.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization23.png)
#### 5.5.★limit optimization★
**`limit offset,N`**: MySQL does not skip `offset` rows and then read `N`; it reads `offset+N` rows, discards the first `offset`, and returns `N`
- PS: `the larger the offset, the lower the efficiency` (browsing a forum, the higher the page number the slower it usually loads)
##### Side note ~ profiling
To make this more concrete, let's bring in **`profiling`**
```sql
-- inspect the profiling system variables
show variables like '%profil%';
-- profiling: enables SQL statement profiling (should read ON once enabled)
-- check whether profiling is already on
select @@profiling;
-- enable profiling (for the current session)
set profiling = 1; -- 0: off, 1: on
show profiles; -- list the profiled queries
show profile for query 5; -- details for the query with the given id
```
Output:
```
MariaDB [dotnetcrazy]> show variables like '%profil%';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| have_profiling | YES |
| profiling | OFF |
| profiling_history_size | 15 |
+------------------------+-------+
3 rows in set (0.002 sec)
MariaDB [dotnetcrazy]> select @@profiling;
+-------------+
| @@profiling |
+-------------+
| 0 |
+-------------+
1 row in set (0.000 sec)
MariaDB [dotnetcrazy]> set profiling = 1;
Query OK, 0 rows affected (0.000 sec)
```
##### Back to the point
With the settings above in place, run each of the following:
```sql
select * from userinfo limit 10,10;
select * from userinfo limit 1000,10;
select * from userinfo limit 100000,10;
select * from userinfo limit 1000000,10;
select * from userinfo limit 10000000,10;
```
Output:
```
+----------+------------+------------------------------------------+
| Query_ID | Duration | Query |
+----------+------------+------------------------------------------+
| 1 | 0.00060250 | select * from userinfo limit 10,10 |
| 2 | 0.00075870 | select * from userinfo limit 1000,10 |
| 3 | 0.03121300 | select * from userinfo limit 100000,10 |
| 4 | 0.30530230 | select * from userinfo limit 1000000,10 |
| 5 | 3.03068020 | select * from userinfo limit 10000000,10 |
+----------+------------+------------------------------------------+
```
Diagram:

##### Solutions
1. Solve it at the business level, eg: no paging beyond page 100 (users mostly find data through search anyway)
   - PS: Baidu's result pages also stop at page 76
2. Use where instead of offset
   - **when ids are continuous**: eg: `limit 5,3 ==> where id > 5 limit 3;`
   - PS: projects usually soft-delete, so ids stay mostly continuous
3. `Covering index + deferred join`: use a covering index to fetch only the needed primary keys, then join back to the table for the full rows
   - scenario: eg the `primary key is a uuid` or the `ids are not continuous` (some rows physically deleted, etc.)
That sounds abstract, so a demo makes it clear:
```sql
-- full table scan
explain
select *
from userinfo
limit 10000000,10; -- 3s
-- a range filter removes most rows first
explain
select *
from userinfo
where id > 10000000
limit 10; -- 20ms
-- the inner query uses a covering index
explain
select *
from userinfo
inner join (select id from userinfo limit 10000000,10) as tmp
on userinfo.id = tmp.id; -- 2s
```
Analysis diagram:
![16.optimization25.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization25.png)
Query diagram:
![16.optimization26.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization26.png)
### Extension: index misconceptions and redundant indexes
#### 1.Index misconceptions
**Many people like to index every column that appears in where**, but the sad truth is: **`only one independent index can be used at a time`**
- **PS: in practice you usually choose a `composite index` instead**
If you don't believe it, verify it yourself:
```sql
-- id and email are both indexed, but only one index can be used (independent indexes: one at a time)
-- key_len of id = 4 (an int is 4 bytes)
-- key_len of email = 152 (50*3 (each utf8 character takes 3 bytes) + 2 (extra bytes varchar needs to store the length) ==> 152)
-- 1. unique index vs primary key: the primary key wins
explain
select * from users where id = 4 and email = 'xiaoming@qq.com';
-- 2. composite index vs primary key: the primary key wins
explain
select * from users where id=4 and createtime='2019-02-16 17:10:29';
-- 3. unique index vs composite index: the unique index wins
explain
select * from users where createtime='2019-02-16 17:10:29' and email='xiaoming@qq.com';
-- 4. composite index vs ordinary index: the composite index wins
-- create index ix_users_datastatus on users(datastatus);
-- create index ix_users_username_password on users(username,password);
explain
select * from users where datastatus=1 and username='小明';
-- drop the temporarily added indexes
-- drop index ix_users_datastatus on users;
-- drop index ix_users_username_password on users;
```
Diagram:
![16.optimization27.png](../../../%E5%9B%BE%E7%89%87/SQL/16.optimization27.png)
**PS: the tests show only one index is used at a time. `Index priority: primary > unique > composite > ordinary`**
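The `key_len` arithmetic in the comments above can be written down as a small helper. A sketch under the rules quoted in this chapter (utf8 = 3 bytes per character, varchar adds 2 length bytes, a nullable column adds 1 flag byte); the varchar(25) case is my assumption about the `name` column, chosen because it reproduces the key_len=77 seen in the earlier `like '张%'` explain:

```python
def varchar_key_len(chars, charset_bytes=3, nullable=False):
    """key_len of a varchar column: chars * bytes-per-char + 2 length bytes (+1 if nullable)."""
    return chars * charset_bytes + 2 + (1 if nullable else 0)

def int_key_len(nullable=False):
    """key_len of an int column: 4 bytes (+1 if nullable)."""
    return 4 + (1 if nullable else 0)

print(varchar_key_len(50))  # 152 -> matches the email varchar(50) NOT NULL above
print(varchar_key_len(25))  # 77  -> would match a hypothetical name varchar(25) NOT NULL
print(int_key_len())        # 4   -> matches the int primary key id
```

Reading `key_len` this way is how you tell which prefix of a composite index a query actually used.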
#### 2.Redundant indexes
A tag-table example:
```sql
create table tags
(
id int unsigned auto_increment primary key,
aid int unsigned not null,
tag varchar(25) not null,
datastatus tinyint not null default 0
);
insert into tags(aid,tag,datastatus) values (1,'Linux',1),(1,'MySQL',1),(1,'SQL',1),(2,'Linux',1),(2,'Python',1);
select id, aid, tag, datastatus from tags;
```
Output:
```
+----+-----+--------+------------+
| id | aid | tag    | datastatus |
+----+-----+--------+------------+
|  1 |   1 | Linux  |          1 |
|  2 |   1 | MySQL  |          1 |
|  3 |   1 | SQL    |          1 |
|  4 |   2 | Linux  |          1 |
|  5 |   2 | Python |          1 |
+----+-----+--------+------------+
```
**In practice you may `look up articles by tag` and also `look up an article's tags by its id`**
> Projects usually build redundant indexes for this: index(article_id, tag) and index(tag, article_id), so both queries above hit a covering index directly
```sql
create index ix_tags_aid_tag on tags(aid,tag);
create index ix_tags_tag_aid on tags(tag,aid);
select tag from tags where aid=1;
select aid from tags where tag='Linux';
```
#### 3.Defragmentation
Just a quick note here; the next chapter should cover more operations topics.
Tables fragment after long use; repair them periodically (data is unaffected): **`optimize table users;`**
> Repairing data and index fragmentation reorganizes the data files, which takes a while on large tables, so schedule it weekly|monthly|yearly as the situation demands
PS: combine it with `crontab` (scheduled tasks):
- command: `crontab -e`, entry format: `* * * * * command [ > /dev/null 2>&1 ]`
- **meaning of the 5 `*` fields**: `minute`, `hour`, `day of month`, `month`, `day of week`
- redirection notes:
  - `>> /xx/logfile`: append output to the log file (error messages not included)
  - `>> /xx/logfile 2>&1`: output including error messages
  - `> /dev/null 2>&1`: send output and errors to the black hole
- a few examples:
  - `2 1 * * * xxx` ==> run xxx at 01:02 every day
  - `59 21 * * * xxx` ==> run xxx at 21:59 every day
  - `0 */1 * * * xxx` ==> run xxx once every hour
  - PS: step intervals use the `*/n` syntax
**Next up (planned): SQL operations**
Further reading:
```
【Recommended】A step-by-step analysis of why B+ trees suit index structures
https://blog.csdn.net/weixin_30531261/article/details/79312676
Making good use of MySQL's FROM_UNIXTIME() and UNIX_TIMESTAMP() functions
https://www.cnblogs.com/haorenergou/p/7927591.html
【Recommended】MySQL crc32 & crc64 functions to speed up string lookups
https://www.jianshu.com/p/af6cc7b72dac
MySQL profiling
https://www.cnblogs.com/lizhanwu/p/4191765.html
```
## 1.6.SQL operations
My operations knowledge is adequate rather than expert, so treat this part as a starting point; additions and corrections are welcome
PS: about the sources of the `CentOS tuning` section: this is not my strong suit, so it mainly draws on DBA articles online, double-checked with operations friends; read it critically, I can only vouch for roughly 90% accuracy
### 1.6.1.Concepts
#### 1.The RAID family
RAID: redundant array of disks
> A technique that combines several small disks into one larger logical disk and provides data redundancy to protect data integrity
`RAID0`: striping (pro: cheap; typical use: data backup)
> Needs >= 2 disks; no redundancy or repair capability, just several small disks presented as one large one
RAID1: mirroring (pros: data safety, fast reads)
> Mirrors one disk's data onto another, maximizing the system's reliability and repairability
**`RAID5`**: distributed-parity disk array (pro: good price/performance; con: losing two disks loses the whole volume; typical use: replica databases)
> Data is spread across several disks; if any one disk fails, its data can be rebuilt from the parity blocks
RAID10: striped mirrors (pros: good read and write performance, simpler and faster rebuilds than RAID5; con: expensive)
> Build RAID1 pairs first, then stripe the pairs together with RAID0

| RAID level | Traits | Redundancy | Disks | Read | Write |
| ----------- | -------------------- | ---- | ---- | --- | ------------ |
| **`RAID0`** | cheap, fast reads and writes, unsafe | none | N | fast | fast |
| `RAID1` | expensive, fast reads, safest | yes | 2N | fast | slow |
| **`RAID5`** | good value, fast reads, safe | yes | N+1 | fast | limited by slowest disk |
| `RAID10` | expensive, fast, safe | yes | 2N | fast | fast |
#### 2.SAN and NAS
SAN: a dedicated storage system that connects one or more storage devices to servers over a dedicated high-speed network
> Connected to servers over fibre; the devices are accessed through a block interface, so a server can treat them as local disks
**NAS**: a data-centric storage device attached to the network; it fully decouples storage from the servers and manages data centrally, freeing bandwidth, improving performance, lowering total cost of ownership and protecting the investment; far cheaper than server-based storage, yet far more efficient
> Connected over the ordinary network and accessed through file protocols (NFS, **SMB**)
**PS: network storage is generally used for development environments or database backups**
#### 3.QPS and TPS
**`QPS`** (Queries Per Second): **requests processed per second** (mostly queries, but DML and DDL count too)
> eg: at `10ms` per sql, 100 sql run per second, so `QPS<=100` (at `100ms` per sql, QPS<=10)
`TPS` (Transactions Per Second): the number of transactions the system can complete per second (`transactions|messages per second`)
> A transaction here is one request from a client and the server's response; the client starts timing when it sends the request and stops when the response arrives, which gives the elapsed time and the count of completed transactions
**PS: QPS is the one people watch most**
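The QPS bound in the example above is simple arithmetic; a one-liner makes the relationship explicit (the `concurrency` parameter is my addition for the multi-worker case, not something the text states):

```python
def max_qps(ms_per_query, concurrency=1):
    """Upper bound on queries per second given a fixed per-query latency."""
    return concurrency * 1000.0 / ms_per_query

print(max_qps(10))   # 100.0 -> '10ms per sql => QPS <= 100'
print(max_qps(100))  # 10.0  -> '100ms per sql => QPS <= 10'
```

Real servers process queries concurrently, so measured QPS can exceed the single-connection bound; the formula is the per-connection ceiling.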
### 1.6.2.Common problems
1.**Very high CPU|memory usage**: risks crashing once CPU|memory is exhausted
> PS: if the workload is CPU-bound: you need better CPUs; if you need more concurrency: you need more CPUs (web projects)
MySQL can merge multiple writes to the same data into a single write
2.**High concurrency**: easily fills up the database connection limit
> PS: MySQL's `max_connections` defaults to 100 here (tune it to the hardware)
3.**Disk IO**: performance falls off a cliff (`once hot data no longer fits in memory`)
> fixes: defragment regularly, `RAID to strengthen spinning disks`, `SSD`, `Fusion-io` (PCIe), `NAS or SAN network storage`
PS: SSDs suit workloads with `heavy random IO` or a `single-threaded IO bottleneck`
4.**NIC traffic** (network): clients may suddenly be unable to connect
> fixes:
> 1. reduce the number of replica servers
> 2. tiered caching (prevent mass cache expiry at the same moment)
> 3. avoid `select *` (fewer useless bytes on the wire)
> 4. separate the business network from the server network
5.**Big tables**, defined as: over ten million rows in one table, or a data file over 10G
> problems: big tables are more prone to slow queries; DDL is also slow and easily causes further trouble
> fix: sharding (split into several smaller tables)
> PS: before sharding you can archive the table's historical data (separating hot and cold data) [the key decision: choosing the archive cutoff point]
**More on the impact of DDL:**
- `building an index is slow` and causes long replication lag
- `schema changes lock the table for a long time`
  - causing long replication lag
  - blocking normal data operations
**Problems sharding tends to introduce:**
1. choosing the shard primary key
   - ids are no longer globally unique; something like the `snowflake algorithm` solves this
2. cross-database and cross-table joins
3. transactions (hence distributed transactions were born)
PS: a case it barely affects: **log tables** (lots of `insert` and `select`, very little delete or update)
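The snowflake algorithm mentioned for shard keys can be sketched in a few lines. A minimal illustration of the usual layout (41-bit millisecond timestamp | 10-bit worker id | 12-bit sequence), not any particular library's implementation; the epoch constant is Twitter's well-known one, and any fixed past millisecond works:

```python
import time

class Snowflake:
    """Minimal snowflake-style ID sketch: 41-bit ms timestamp | 10-bit worker | 12-bit sequence."""
    EPOCH = 1288834974657  # a fixed past millisecond

    def __init__(self, worker_id):
        assert 0 <= worker_id < 1024, "worker id must fit in 10 bits"
        self.worker_id = worker_id
        self.last_ms = -1
        self.seq = 0

    def next_id(self):
        ms = int(time.time() * 1000)
        if ms == self.last_ms:
            self.seq = (self.seq + 1) & 0xFFF  # 4096 ids per ms per worker
            if self.seq == 0:                  # sequence exhausted: wait for the next ms
                while ms <= self.last_ms:
                    ms = int(time.time() * 1000)
        else:
            self.seq = 0
        self.last_ms = ms
        return ((ms - self.EPOCH) << 22) | (self.worker_id << 12) | self.seq

gen = Snowflake(worker_id=1)
a, b = gen.next_id(), gen.next_id()
print(a < b)  # True: ids are unique per worker and roughly time-ordered
```

Because the timestamp occupies the high bits, the ids also sort by creation time, which keeps innodb's primary-key-ordered storage happy across shards.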
6.**Big transactions**, defined as: transactions that run for a long time and touch a lot of data
> problems:
> 1. lock too much data, causing heavy blocking and lock timeouts
> 2. rollback takes a very long time (locking everything again for a while)
> 3. long execution easily causes replication lag
> fixes:
> 1. never process huge amounts of data in one go (batch it)
> 2. remove unnecessary select statements from the transaction (overlong transactions usually come from too many queries inside them)
> - PS: the selects can run outside the transaction; keep the transaction focused on writes
**The 4 isolation levels defined by the SQL standard:**
1. _read uncommitted_
2. **read committed**
   - non-repeatable reads
3. **repeatable read**
   - `innodb's default isolation level`
4. _serializable_
**PS: as isolation goes from low to high, `concurrency goes from high to low`**
**PS: check the transaction isolation level with `show variables like '%iso%';`, set it for the session with `set session tx_isolation='read-committed'`**
### Extension: CentOS tuning (for MySQL servers)
#### 1.Kernel settings (`/etc/sysctl.conf`)
**Check the defaults with `sysctl -a`**
**tcp settings:**
```shell
# upper bound on the listen backlog of completed (three-way-handshake) connections
net.core.somaxconn = 65535 # default 128
# packets allowed onto the queue when the NIC receives faster than the kernel drains
net.core.netdev_max_backlog = 65535 # default 1000
# maximum half-open connections in the SYN queue (packets beyond this are dropped)
net.ipv4.tcp_max_syn_backlog = 65535 # default 128 (not suitable for web servers)
```
PS: these are only reference values; lower them to suit your environment (65535 is the usual port-number ceiling)
> Note: on a web server, `net.ipv4.tcp_max_syn_backlog` **should not be too large** (it eases SYN-flood attacks), and `net.ipv4.tcp_tw_recycle` and `net.ipv4.tcp_tw_reuse` **are not recommended**
**Parameters that speed up tcp connection recycling:**
```shell
# TCP FIN wait time; speeds up connection recycling
net.ipv4.tcp_fin_timeout = 10 # default 60
# recycle TCP connections whose close was initiated but never completed
net.ipv4.tcp_tw_recycle = 1 # default 0 (not suitable for web servers)
# allow sockets pending close to be reused for new tcp connections
net.ipv4.tcp_tw_reuse = 1 # default 0 (not suitable for web servers)
```
PS: more on `net.ipv4.tcp_tw_reuse`: only the side that actively calls close enters the time_wait state after receiving the peer's ACK
> reference: `https://blog.csdn.net/weixin_41966991/article/details/81264095`
**Maximum and default socket buffer sizes:**
```shell
net.core.wmem_default = 87380 # default 212992
net.core.wmem_max = 16777216 # default 212992
net.core.rmem_default = 87380 # default 212992
net.core.rmem_max = 16777216 # default 212992
```
PS: every `socket` gets an `rmem_default`-sized receive buffer (or whatever `setsockopt` specifies, capped at `rmem_max`)
**Reducing the resources held by dead connections:**
```shell
# reclaim the resources held by dead tcp connections faster
# keepalive idle time (seconds)
net.ipv4.tcp_keepalive_time = 120 # default 7200
# retry interval when a tcp probe gets no response (seconds)
net.ipv4.tcp_keepalive_intvl = 30 # default 75
# number of probes before giving up
net.ipv4.tcp_keepalive_probes = 3 # default 9
```
**Memory parameters:**
```shell
# maximum size of a single shared memory segment
kernel.shmmax = 4294967295 # at most physical memory - 1 byte
# avoid the swap partition unless virtual memory is completely full (for performance)
# (free -m ==> Swap)
vm.swappiness = 0 # default 30
```
**PS: `kernel.shmmax` is set large enough that the entire innodb buffer pool fits in one segment**
> eg: `4G = 4*1024 M = 4*1024*1024 KB = 4*1024*1024*1024 bytes = 4294967296; minus 1 = 4294967295`
> PS: `unsigned int` => `[0, 2^32)` => `[0, 4294967296)` => `[0, 4294967295]`; the very same value, no coincidence
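The arithmetic above checks out in two lines; the same value falls out of both derivations:

```python
GiB = 1024 ** 3           # 4*1024*1024*1024 bytes per the chain above
shmmax = 4 * GiB - 1      # 4G minus 1 byte
print(shmmax)             # 4294967295
print(shmmax == 2**32 - 1)  # True: also the maximum of an unsigned 32-bit int
```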
#### 2.Resource limits (`/etc/security/limits.conf`)
**Open-file limits** (append to the end of the config file)
```shell
# [*|user] [soft|hard] [item] [value]
* soft nofile 65535
* hard nofile 65535
```
Check the defaults with **`ulimit -a`**
```shell
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3548
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024 << look here
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3548
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
```
PS: generally enough, but a large database may outgrow it (many tables, many schemas, heavy configurations)
> `*: applies to all users; soft: the limit currently enforced; hard: the ceiling that can be set; nofile: the limited resource is the maximum number of open files; 65535: the value` [**takes effect after a reboot**]
#### 3.Disk scheduler (`/sys/block/devname/queue/scheduler`)
The default is already `deadline` nowadays, so there is nothing to tune [it supports databases very well]
PS: check with `cat /sys/block/sda/queue/scheduler` (`[the bracketed entry is the active one]`)
```shell
noop [deadline] cfq
```
If it isn't, **set it with `echo deadline > /sys/block/sda/queue/scheduler`**
> `cfq`: inserts some unnecessary requests into the queue, lengthening response times; mostly used on desktop systems
> `noop`: implements a FIFO queue that organizes IO requests like an elevator, merging each new request after the nearest existing one to keep requests on the same media (tends to starve reads in favor of writes); mostly used on `flash devices`, `RAM` and `embedded systems`
> `deadline`: guarantees requests are served within a (tunable) deadline; the read deadline defaults to shorter than the write deadline (preventing reads from being starved by writes)
#### 4.File systems
Win: `NTFS`; Linux: EXT3|4, `XFS`
Linux installs now mostly choose `XFS`; with `EXT3` or `EXT4` you still need to tune `/etc/fstab` (**be careful**)
```shell
/dev/sda1/ext4 noatime,nodiratime,data=writeback 1 1
```
PS: `noatime` stops recording file access times and `nodiratime` stops recording directory access times (saving some write operations)
> the journaling modes: `data=[writeback|ordered|journal]`
> 1. writeback: only metadata goes to the journal, and metadata and data writes are not synchronized (fastest). PS: Innodb keeps its own transaction log, so this is the best choice for it
> 2. ordered: journals only metadata but adds a consistency guarantee by writing the data before its metadata (slower than writeback but safer)
> 3. journal: provides full atomic journaling; data is written to the journal before its final location (slowest; unnecessary for Innodb)
Further reading:
```
The relationship between TPS, concurrent users and throughput
https://www.cnblogs.com/zhengah/p/4532156.html
System tuning parameters for the linux server hosting MySQL
https://blog.csdn.net/qq_40999403/article/details/80666102
Network tuning: the net.ipv4.tcp_tw_recycle parameter
https://blog.csdn.net/chengm8/article/details/51668992
Linux socket buffers: core rmem_default rmem_max
https://blog.csdn.net/penzchan/article/details/41682411
The Linux free command and the swap mechanism explained
http://www.cnblogs.com/xiaojianblogs/p/6254535.html
What to do when disk IO is too high
https://www.cnblogs.com/wjoyxt/p/4808024.html
How the file system affects performance
https://blog.csdn.net/qq_30353203/article/details/78197870
```
### 1.6.3.MySQL configuration parameters
**Advice: start with schema design and SQL optimization, then configuration and storage-engine choice, and only then hardware upgrades**
> design cases to avoid: `too many columns`, `too many joins` (keep it under 10), ill-chosen `partitioned tables`, and foreign keys
Partitioned table: within one server it is still logically one table, but physically stored as several (<a href="https://www.cnblogs.com/dunitian/p/6078512.html" target="_blank">similar to SQLServer's horizontal partitioning</a>)
> PS: sharding, by contrast, splits the table both physically and logically
The environment chapter already covered the most basic settings:
```shell
[mysqld]
# file-per-table tablespaces: each table gets a .frm definition file and an .ibd data file
innodb_file_per_table=on
# skip DNS resolution for connections (saves time)
skip_name_resolve=on
# configure sql_mode
sql_mode='strict_trans_tables'
```
The `SQL_Mode` article then covered setting `global` vs `session` parameters: <a href="https://www.cnblogs.com/dotnetcrazy/p/10374091.html" target="_blank">Notes on changing MySQL's SQL_Mode</a>
- global: `set global <name>=<value>;`
  - affects only new sessions; lost after a restart
- session: `set [session] <name>=<value>`
  - affects only the current session; other sessions are untouched
Now a few more of the **high-impact** parameters (**developers only need a passing familiarity; this is the DBA's job**)
#### 1.Safety-related settings
- `expire_logs_days`: purge binlogs automatically
  - PS: keep at least 7 days (adjust to the business)
- **`max_allowed_packet`: the maximum packet size MySQL accepts**
  - PS: the default is too small; with replication, configure master and slaves identically (prevents dropped packets)
- **`skip_name_resolve`: disable DNS lookups** (covered before; mainly for speed)
  - **PS: once enabled, user grants can only use an `ip`, an `ip range`, or a `hostname present in the local hosts file`**
  - grants using `*` are unaffected
- `sysdate_is_now`: **make sysdate() return deterministic dates**
  - PS: with the binlog's `statement` format, sysdate() evaluates differently on master and slave, so the data eventually diverges
  - many similar pitfalls exist, eg fetching the last inserted id (`last_insert_id()`)
  - aside: MySQL now also has the `Mixed` binlog format
- `read_only`: ordinary users can only read data; only root can write
  - **PS: recommended on replicas, so they accept writes only via replication from the master and are otherwise read-only**
  - don't grant super privileges on the replica, or this parameter is effectively useless
- `skip_slave_start`: **disable automatic replication restart on the `Slave`**
  - MySQL restarts replication automatically after a reboot; this disables that
  - PS: after an unclean crash the replicated data may itself be unsafe (starting by hand is wiser)
- **`sql_mode`: set MySQL's SQL mode** (covered last time; the default checking is lenient; a few more values:)
  - **`strict_trans_tables`: strict constraints on all transactional tables**
    - **the most common**; applies to transactional storage engines, no effect on the others
    - PS: an insert that violates the rules aborts the current operation
  - **`no_engine_substitution`: error out when create table names an unavailable storage engine**
  - **`only_full_group_by`**: validate `group by` statements
    - every selected column not wrapped in an aggregate function must be listed in the group by
    - eg: `select count(url),name from file_records group by url;`
      - name is selected without an aggregate, so name must also appear in the group by
  - **`ansi_quotes`**: forbid double quotes around string literals
    - PS: avoids errors when migrating between databases
  - **PS: avoid changing it in production; tightening it easily breaks the business (loosening it is harmless)**
**PS: `SQL_Mode` is usually stricter in test environments (`strict_trans_tables,only_full_group_by,no_engine_substitution,ansi_quotes`) and looser in production (`strict_trans_tables`)**
**A note on the difference between `sysdate()` and `now()`:** (one example makes it clear)
> PS: within a single statement, every call to `now()` returns the **statement's execution start time**, while `sysdate()` returns **the moment the function itself is called**
```sql
MariaDB [(none)]> select sysdate(),sleep(2),sysdate();
+---------------------+----------+---------------------+
| sysdate() | sleep(2) | sysdate() |
+---------------------+----------+---------------------+
| 2019-03-28 09:09:29 | 0 | 2019-03-28 09:09:31 |
+---------------------+----------+---------------------+
1 row in set (2.001 sec)
MariaDB [(none)]> select now(),sleep(2),now();
+---------------------+----------+---------------------+
| now() | sleep(2) | now() |
+---------------------+----------+---------------------+
| 2019-03-28 09:09:33 | 0 | 2019-03-28 09:09:33 |
+---------------------+----------+---------------------+
1 row in set (2.000 sec)
```
#### 2.Memory-related
- **`sort_buffer_size`: the per-session sort buffer size**
  - PS: every connection allocates this much, eg 1M with 100 connections ==> 100M
- **`join_buffer_size`: the per-session join buffer size**
  - PS: each joined table gets its own buffer, eg 1M with a 10-table join ==> 10M
- **`binlog_cache_size`: the per-session buffer for uncommitted transactions**
- **`read_rnd_buffer_size`: the buffer for reads in index order**
- **`read_buffer_size`: the buffer for MyISAM full table scans** (usually a multiple of 4k)
  - PS: may also be used when working with temporary tables
**More on `read_buffer_size`:**
> With Innodb everywhere these days, most MyISAM settings can be ignored, but this one is still worth configuring
A quick digression on **temporary tables**:
1. system-created temporary tables:
   - up to 16M: the system uses a `Memory` table
   - beyond the limit: a `MyISAM` table
2. temporary tables you create yourself (any storage engine):
   - `create temporary table tb_name(col type modifiers, ...)`
**PS: now you see why `read_buffer_size` matters (system temporary tables may fall back to `MyISAM`)**
#### 3.IO-related parameters
**Mainly `Innodb`'s `IO` settings**
Transaction log (total size: `Innodb_log_file_size * Innodb_log_files_in_group`):
- transaction log file size: `Innodb_log_file_size`
- number of transaction log files: `Innodb_log_files_in_group`
Log buffer size: `Innodb_log_buffer_size`
> Log records are written to the buffer first and flushed to disk later (32M~128M is usually plenty)
Side note, the in-memory `redo log` buffer (in bytes):
- `show variables like 'innodb_log_buffer_size';`
  - PS: in bytes; the data is flushed to disk roughly every second
- `show variables like 'innodb_log_files_in_group';`
  - PS: produces that many `ib_logfile` files (default 2)
---
**Log flush frequency: `Innodb_flush_log_at_trx_commit`**
- 0: once per second, write the log buffer out and flush it to disk (up to 1s of transactions can be lost)
- 1: at every transaction commit, write the log out and flush it to disk (**default**)
- 2: at every transaction commit, write the log to the OS cache; flush it to disk once per second (**recommended**)
**Flush method: `Innodb_flush_method=O_DIRECT`**
> bypasses the OS cache (avoiding double buffering by the OS and Innodb)
**Tablespace layout: `Innodb_file_per_table=1`**
> builds a separate tablespace per innodb table (effectively the universal setting by now)
Doublewrite buffer: `Innodb_doublewrite=1` (protects against torn pages)
- on by default; disable it only if writes are the bottleneck or some data loss is acceptable (turning it off trades safety for write performance)
- check whether it is on: `show variables like '%double%';`
**innodb buffer pool size: `innodb_buffer_pool_size`**
> with an innodb-only workload, a common rule of thumb is **`75%` of RAM**
> check: `show global variables like 'innodb_buffer_pool_size';`
> **PS: it caches both data and indexes (and directly determines innodb performance)** Further reading: <https://www.cnblogs.com/wanbin/p/9530833.html>
**buffer pool instances: `innodb_buffer_pool_instances`**
> PS: mainly to reduce lock contention and increase concurrency. `each instance = total size / instance count`
> as a rule, no instance should be under 1G, and there should be at most 8
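The sizing rules above compose into a tiny planner. A sketch of exactly the rules stated here (75% of RAM, at most 8 instances, each at least 1G); the function name and signature are mine, not a MySQL tool:

```python
def buffer_pool_plan(total_mem_gb):
    """Suggest (pool size, instance count, size per instance) from total RAM in GB,
    per the rule of thumb: pool ~= 75% of RAM, <= 8 instances, >= 1 GB each."""
    pool_gb = total_mem_gb * 0.75
    instances = max(1, min(8, int(pool_gb)))  # >=1GB per instance caps the count
    return pool_gb, instances, pool_gb / instances

print(buffer_pool_plan(16))  # (12.0, 8, 1.5): a 16GB box -> 12GB pool in 8 instances
print(buffer_pool_plan(4))   # (3.0, 3, 1.0):  a 4GB box  -> 3GB pool in 3 instances
```

On very small machines the `max(1, ...)` floor keeps one instance even if it ends up under 1G, since zero instances is not an option.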
#### 4.Other server parameters
- **`sync_binlog`: controls how MySQL flushes the binlog to disk**
  - default 0: MySQL never flushes proactively and leaves it to the operating system
  - PS: for data safety, set it to 1 on the master (throughput will drop somewhat)
  - as always: usually leave it alone; it depends on the business
- in-memory temporary table size: `tmp_table_size` and `max_heap_table_size`
  - PS: keep the two parameters identical
- **`max_connections`: the maximum connection count**
  - default 100; tune it to the environment, **too large risks running out of memory**
- **`Sleep` timeouts**: usually set to the same value (connection parameters distinguish interactive connections)
  - `interactive_timeout`: the timeout for interactive connections
  - `wait_timeout`: the timeout for non-interactive connections
#### Tool extension: `pt-config-diff`
usage reference: `pt-config-diff u=root,p=pass,h=localhost /etc/my.cnf`
eg: compare a config file against the running server's configuration
```
pt-config-diff /etc/my.cnf h=localhost --user=root --password=pass
3 config differences
Variable /etc/my.cnf mariadb2
========================= =========== ========
max_connect_errors 2 100
rpl_semi_sync_master_e... 1 OFF
server_id 101 102
```
Further reading: <https://www.cndba.cn/leo1990/article/2789>
---
### Extension: common storage engines
Common storage engines:
1. MyISAM: no transactions, table-level locks
   - indexes cached in memory, data kept on disk
   - file suffixes: `frm, MYD, MYI`
2. **`Innodb`**: a transactional storage engine with row-level locks and the transaction ACID properties
   - caches both indexes and data in memory
   - file suffixes: `frm, ibd`
3. `Memory`: the table definition lives on disk, the contents live in memory
   - Hash and B-Tree indexes
   - PS: data is easily lost (gone after a restart; the table definition survives)
4. `CSV`: generally used as an exchange/intermediate table
   - stored as plain text files, **unsuitable for large tables**
   - frm (definition), CSV (contents), CSM (metadata, eg table state and row count)
   - PS: no index support (engine=csv), and no column may be Null
   - details in my earlier article: <a href="https://www.cnblogs.com/dotnetcrazy/p/10481483.html" target="_blank">Notes: requirements arising from collaborative office work</a>
5. Archive: data archival (compressed)
   - files: `.frm` (table definition), `.arz` (data)
   - only `insert` and `select` are supported
   - an index is allowed only on the auto-increment ID column
   - `good fit: log-type tables` (saves space)
6. _Federated_: links to remote tables (performance is poor; disabled by default)
   - stores no data locally (everything lives on the remote server)
   - locally keeps only the table definition and the remote connection info
   - PS: similar to SQLServer's linked servers
**My verdict: unless you have a 100% solid reason, choose `innodb` for everything; mixing engines is especially discouraged**
#### The Memory storage engine
The Memory storage engine:
1. supports both `Hash` and `BTree` indexes
   - Hash index: equality lookups (the default)
   - Btree index: range lookups
     - `create index ix_name using btree on tb_name(col, ...)`
   - PS: the right choice differs by scenario, and the performance gap is large
2. every column type behaves as **fixed length**, and big types like `Text` and `Blob` are unsupported
   - eg: `varchar(100)` == is equivalent to ==> `char(100)`
3. the engine uses table-level locks
   - PS: so performance is not necessarily better than innodb
4. the size is capped by `max_heap_table_size` (default 16M)
   - PS: raise the parameter to store more (existing tables are unaffected until rebuilt)
5. typical scenarios (`data may be lost, so it must be reproducible`)
   - **caching periodically aggregated results**
   - lookup or mapping tables (eg a postcode-to-region table)
   - **intermediate tables** produced during data analysis
**PS: redis has mostly taken over this role; consider Memory only for small projects without redis (eg a company site, a blog...)**
---
Article extensions:
```
An introduction to and comparison of OLAP and OLTP
https://www.cnblogs.com/hhandbibi/p/7118740.html
now() vs sysdate()
http://blog.itpub.net/22664653/viewspace-752576/
https://stackoverflow.com/questions/24137752/difference-between-now-sysdate-current-date-in-mysql
The differences between the three binlog formats (row, statement, mixed)
https://blog.csdn.net/keda8997110/article/details/50895171/
MySQL redo log internals
https://www.cnblogs.com/cuisi/p/6525077.html
A detailed analysis of the MySQL transaction logs (redo log and undo log)
https://www.cnblogs.com/f-ck-need-u/archive/2018/05/08/9010872.html
Performance differences of innodb_flush_method and File I/O
https://blog.csdn.net/melody_mr/article/details/48626685
InnoDB key features: the doublewrite buffer
https://www.cnblogs.com/geaozhang/p/7241744.html
```
---
### More on storage engines
#### 1.Quick recap
The previous section ended with storage engines; a quick recap:
| Engine | Transactions | Notes |
| ------------- | ------------ | ------------------------------------ |
| `MyISAM` | no | the default engine before MySQL 5.6 |
| `CSV` | no | stores data in CSV format (usually an exchange table) |
| **`Archive`** | no | insert and select only (generally used for logging) |
| `Memory` | no | data lives only in memory (easily lost) |
| **`innodb`** | **yes** (row locks) | the near-universal choice today |
| `NDB` | **yes** (row locks) | used only by MySQL Cluster (in-memory, with data persisted once) |
Additional notes:
1. the `Archive` engine **compresses** its data with `zlib` and **only supports an index on the auto-increment ID**
2. the `NDB` engine stores data on disk (hot data in memory) and supports T-tree indexes and clustering
   - scenario: data that must be fully synchronized (more on this later)
#### 2.Common scenario
A scenario to consider: **what do you do when an `innodb` table's schema cannot be changed online?**
First, the cases where `Innodb does not support online schema changes` (mainly a performance concern):
1. creating the first `fulltext index` or adding a `spatial index` (unsupported before `MySQL5.6`)
   - **fulltext index**: `create fulltext index name on table(col, ...);`
   - spatial index: `alter table geom add spatial index(g);`
2. dropping the primary key or adding an auto-increment column
   - PS: innodb stores rows in primary-key order (so these force a full re-sort)
   - drop the primary key: `alter table tb_name drop primary key`
   - add an auto-increment column: `alter table tb_name add column id int auto_increment primary key`
3. changing a column's type or the table's character set
   - change a column type: `alter table tb_name modify col type modifiers`
   - change the charset: `alter table tb_name character set=utf8mb4`
**PS: DDL cannot run concurrently** (table-level lock), **and long-running DDL causes master/slave divergence**
> DDL cannot be resource-limited; on tables with much data it easily consumes large amounts of storage IO and space (and fails when space runs out)
#### 3.Solution
**Install: `yum install percona-toolkit` or `apt-get install percona-toolkit`**
> PS: offline packages: `https://www.percona.com/downloads/percona-toolkit/LATEST/`
Command: `pt-online-schema-change options D=database,t=table,u=user,p=password`
> **How it works: create a new table with the changed definition, copy the old table's rows over, then drop the old table and rename the new one into place**
**View the help: `pt-online-schema-change --help | more`**
> official docs: <https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html>
**PS: `--alter` and `--charset` are the most used options** (`--execute` means actually run it)
**Common form: `pt-online-schema-change --alter "DDL" --execute D=database,t=table,u=user,p=password`**
> eg adding a new column: `pt-online-schema-change --alter "add col type" --execute D=database,t=table,u=user,p=password`
**Review**:
- add a column: add
  - `alter table tb_name add col type modifiers [first | after col];`
  - **PS: SQLServer has no `[first | after col]`**
- modify a column: alter, change, modify
  - rename a column: `alter table tb_name change old_col new_col type modifiers`
  - change a column type: `alter table tb_name modify col type modifiers`
  - add a default: `alter table tb_name alter col set default df_value`
- drop a column: drop
  - `alter table tb_name drop col`
#### 4.InnoDB special topics
Up-front concepts: **exclusive lock (aka write lock), shared lock (aka read lock)**
##### 4.1.How does innoDB implement transactions?
> the 4 transaction properties: A (atomicity), C (consistency), I (isolation), D (durability)
innodb's transaction logs are chiefly the `redo log` and the `undo log`
| Property | innodb mechanism |
| --------- | --------------------------------------------------- |
| Atomicity (A) | undo log: records the state of the data **before** modification |
| Consistency (C) | redo log: records the state of the data **after** modification |
| Isolation (I) | locks (`lock`): resource isolation (**shared** + **exclusive** locks) |
| Durability (D) | redo log + undo log |
A transfer example I drew:
![17.sql-transaction.png](../../../%E5%9B%BE%E7%89%87/SQL/17.sql-transaction.png)
##### 4.2.Do innodb `reads` block `writes`?
Normally: a `query` takes a **shared lock** (read lock) on the resource | a `modification` takes an **exclusive lock** (write lock)
| Compatible? | `write lock` | `read lock` |
| -------- | ------ | -------- |
| `write lock` | no | no |
| `read lock` | no | **yes** |
PS: shared locks can coexist with shared locks (concurrent reads). **In theory, then, reads and writes should block each other**
Yet `innodb` seems to break this rule; a case:
1.Start a transaction but do not commit
![17.sql-innodb1.png](../../../%E5%9B%BE%E7%89%87/SQL/17.sql-innodb1.png)
2.Query from another connection
![17.sql-innodb2.png](../../../%E5%9B%BE%E7%89%87/SQL/17.sql-innodb2.png)
PS: in theory nothing can be read while the exclusive lock is uncommitted, **but `innodb` optimizes this: it reads the record from the `undo log` (the pre-modification data)** to increase concurrency
3.Query again after committing; now the updated data is visible
![17.sql-innodb3.png](../../../%E5%9B%BE%E7%89%87/SQL/17.sql-innodb3.png)
**PS: this is innodb's `MVCC` (multi-version concurrency control)**
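The three-step demonstration above can be mimicked in a toy model. This is a deliberately simplified illustration of the observed behavior, not innodb's actual implementation (real MVCC uses per-row version chains and transaction read views):

```python
class MiniMVCC:
    """Toy model: a writer's uncommitted change is invisible to readers,
    who see the old (undo-log) version until the writer commits."""
    def __init__(self):
        self.committed = {}    # key -> current committed value
        self.uncommitted = {}  # txn_id -> {key: pending value} (the "undo" split)

    def begin_write(self, txn, key, value):
        # The writer holds the exclusive lock, but the old value stays visible
        self.uncommitted.setdefault(txn, {})[key] = value

    def read(self, key):
        # Plain reads never block: they return the committed version
        return self.committed.get(key)

    def commit(self, txn):
        self.committed.update(self.uncommitted.pop(txn, {}))

db = MiniMVCC()
db.committed["balance"] = 100
db.begin_write("T1", "balance", 80)  # step 1: update without committing
print(db.read("balance"))            # step 2: another session still sees 100
db.commit("T1")
print(db.read("balance"))            # step 3: after commit, 80 is visible
```

The key point the model captures: the reader is served from the pre-modification version instead of waiting on the writer's exclusive lock.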
---
Knowledge extensions:
```
【Recommended】How MySQL InnoDB implements multi-version concurrency control (MVCC)
https://www.cnblogs.com/aspirant/p/6920987.html
https://blog.csdn.net/u013007900/article/details/78641913
https://www.cnblogs.com/dongqingswt/p/3460440.html
https://www.jianshu.com/p/a3d49f7507ff
https://www.jianshu.com/p/a03e15e82121
https://www.jianshu.com/p/5a9c1e487ddd
A deeper understanding of mysql full-text indexes
https://www.cnblogs.com/dreamworlds/p/5462018.html
【Recommended】Full-text indexes in MySQL (the InnoDB storage engine)
https://www.jianshu.com/p/645402711dac
innodb's storage structure
https://www.cnblogs.com/janehoo/p/6202240.html
Spatial indexes made simple: why we need spatial indexes
https://www.cnblogs.com/mafeng/p/7909426.html
Common spatial index methods
https://blog.csdn.net/Amesteur/article/details/80392679
【Recommended】pt-online-schema-change explained
https://www.cnblogs.com/xiaoyanger/p/6043986.html
pt-online-schema-change usage, limits and comparison
https://www.cnblogs.com/erisen/p/5971416.html
pt-online-schema-change usage caveats
https://www.jianshu.com/p/84af8b8f040b
A detailed analysis of the MySQL transaction logs (redo log and undo log)
https://www.cnblogs.com/f-ck-need-u/archive/2018/05/08/9010872.html
```
---
### 1.6.4.MySQL permissions
#### 1.Account permissions
Permissions came up briefly in the SQL environment chapter (<a href="#3.远程访问">jump back</a>); now for the everyday permission knowledge:
> <https://www.cnblogs.com/dotnetcrazy/p/9887708.html>
##### 1.2.Creating accounts
Account format: `user_name@access_control_list`
1. user name: generally up to 16 bytes
   - taking `UTF-8` as the example: **1 ASCII character = 1 byte, 1 Chinese character = 3 bytes**
2. access control list:
   - **`%`: reachable from any ip** (the usual shortcut; for sensitive data prefer the next form)
   - **`192.168.1.%`**: any ip in the `192.168.1` subnet may connect
     - this does not include `localhost` (the database host itself cannot connect)
   - `localhost`: local access on the database server only
**1.Create: `create user user_name@ip identified by 'password';`**
> PS: `\h create user` shows the help
![17.sql-user.png](../../../%E5%9B%BE%E7%89%87/SQL/17.sql-user.png)
```shell
mysql> \h create user;
# the new password_option choices
password_option: {
PASSWORD EXPIRE # expire the password now; the user must set a new one at next login
PASSWORD EXPIRE DEFAULT # follow the global expiry policy default_password_lifetime
PASSWORD EXPIRE NEVER # the password never expires
PASSWORD EXPIRE INTERVAL N DAY # the password expires after N days
PASSWORD HISTORY DEFAULT # follow the global password-history policy password_history
PASSWORD HISTORY N # forbid reusing the latest N passwords
PASSWORD REUSE INTERVAL DEFAULT # time-based reuse control; default means the global password_reuse_interval
PASSWORD REUSE INTERVAL N DAY # time-based reuse control; forbid passwords used within the last N days
}
```
**2.Check the current user: `select user();`**
PS: in `MariaDB`, **list the users in the database** with `select user,password,host from mysql.user;`
> MySQL: `select user,authentication_string,host from mysql.user;`
**3.Change a password: `alter user user() identified by 'password';`**
**4.Alternative approach: I usually just insert the row directly** (in MySQL the column is `authentication_string`)
> eg: `insert into mysql.user(user,host,password) values("user","%",password("password"));`
> PS: change a password: ``update mysql.user set `password`=password('newpass') where user='user';``
Knowledge extension: <a href="https://blog.csdn.net/vurtne_ye/article/details/26514499/">ERROR 1045 (28000): Access denied for user 'mysql'@'localhost'</a>
##### 1.3.Common privileges
| Category | Statement | Description |
| -------- | -------------- | -------------- |
| `admin` | `create user` | create new users |
| - | `grant option` | grant privileges to users |
| - | `super` | server administration |
| `DDL` | **`create`** | create databases and tables |
| - | **`alter`** | alter table structure |
| - | **`index`** | create and drop indexes |
| - | `drop` | drop databases and tables |
| `DML` | **`select`** | query table data |
| - | **`insert`** | insert table data |
| - | **`update`** | update table data |
| - | `execute` | run stored procedures |
| - | `delete` | delete table data |
Note on `super`: covers system statements such as setting global variables; usually a DBA privilege
PS: MariaDB lists the privileges the database supports with **`show privileges;`**
> <https://mariadb.com/kb/en/library/show-privileges/>
##### 1.4.Granting
Everyone understands this part: grant the minimum privileges
Grant syntax: **`grant privilege_list on db.table to user@ip`**
> PS: during development people often take the shortcut: **`grant all [privileges] on db.* to user@'%';`**
A more proper setup looks like:
- production: `grant select,insert,update on db.* to user@ip`
- development: `grant select,insert,update,index,alter,create on db.* to user@ip_range`
**PS: show a user's privileges with `show grants for user;`, refresh privileges with `flush privileges;`**
> older versions could create the user within the grant (by appending `identified by 'password'`); newer versions seem to have separated the two
##### 1.5.Revoking
Syntax: **`revoke privilege_list on db.table from user@ip`**
> eg: `revoke create,alter,delete on django.* from dnt@'%'` (note it ends with `from`, not `to`)
#### 2.数据库账号安全
这个了解即可,我也是刚从DBA朋友那边了解到的知识(`MySQL8.0`),基本上用不到的,简单罗列下规范:
1. 只给最小的权限(线上权限基本上都是给最低的(防黑客))
2. 密码强度限制(MySQL高版本默认有限制,主要针对MariaDB)
3. 密码有期限(谨慎使用,不推荐线上用户设置有效期)
4. 历史密码不可用(不能重复使用旧密码)
- PS:现在用BAT的产品来修改密码基本上都是不让使用上次的密码
**设置前三次使用过的密码不能再使用:`create user@'%'identified by '密码' password history 3;`**
PS:**设置用户密码过期:`alter user 用户名@ip password expire;`**
#### 3.迁移问题
经典问题:**如何从一个实例迁移数据库账号到另一个实例?**
- eg:老集群 > 新集群
官方文档:<https://www.percona.com/doc/percona-toolkit/LATEST/pt-show-grants.html>
##### 3.1.版本相同
数据库备份下,然后在新环境中恢复
然后**导出用户创建和授权语句**:eg:**`pt-show-grants -u=root,-p=密码,-h=服务器地址 -P=3306`**
> 扩展文章:<a href="https://www.cnblogs.com/shengdimaya/p/7093030.html">pt-show-grants的使用</a>(eg:`pt-show-grants --host=192.168.36.123 --port=3306 --user=root --password=密码`)
生成的脚本大致是这样的:(**把脚本放新服务器中执行即可**)
```sql
CREATE USER IF NOT EXISTS 'mysql.sys'@'localhost';
ALTER USER 'mysql.sys'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT LOCK;
GRANT SELECT ON `sys`.`sys_config` TO 'mysql.sys'@'localhost';
GRANT TRIGGER ON `sys`.* TO 'mysql.sys'@'localhost';
GRANT USAGE ON *.* TO 'mysql.sys'@'localhost';
-- Grants for 'root'@'%'
CREATE USER IF NOT EXISTS 'root'@'%';
ALTER USER 'root'@'%' IDENTIFIED WITH 'mysql_native_password' AS '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
```
##### 3.2. Different versions
The method above still works, but you need `mysql_upgrade` to upgrade the system tables first (applies when going from a lower version to a higher one); even so, **generating a SQL script** is the recommended route
> Further reading: <a href="https://www.cnblogs.com/zengkefu/p/5678054.html" target="_blank">notes on MySQL upgrades and what mysql_upgrade is for</a>
### 1.6.5. MySQL logging
Test environments for this article: `MySQL5.7.26`, `MariaDB5.5.60`, `MySQL8.0.16`
> PS: check the version with `select version();`
#### 1. Common MySQL logs
These are server-layer logs (storage engines keep their own logs)
| Log type | Description |
| ------------------------------ | ------------------------------------------------------ |
| `error_log` (error log) | Records problems during MySQL startup, operation, or shutdown |
| `general_log` (general log) | Records every request sent to MySQL (costly) |
| **`slow_query_log`** (slow-query log) | Records queries matching given criteria (eg: longer than 10s, no index used) |
| **`binary_log`** (binary log) | Records every effective data modification (not enabled by default in older versions) |
| `relay_log` (relay log) | Used in replication; temporarily stores the binary log synced from the master (incremental replication) |
Further reading: <https://blog.csdn.net/zhang123456456/article/details/72811875>
**Watch a file in real time: `tail -f /var/log/mysqld.log`**
> tail -f follows file growth (starting from the last 10 lines by default)
#### 2. error_log (error log)
Generally records MySQL `runtime errors` and `unauthorized access attempts`
> - **Old versions: `log_error` + `log_warnings`**
> - **Common: `log_error` + `log_error_verbosity`**
> - New versions: `log_error` + `log_error_verbosity` + `log_error_services`
Check the MySQL config: `show variables like '%log_error%';`
Or via plain SQL:
```sql
-- Ubuntu default: /var/log/mysql/error.log
-- CentOS default: /var/log/mysqld.log | /var/log/mariadb/mariadb.log
select @@log_error; -- keep it separate from the data directory if possible
-- 0: don't log warnings; 1: write warnings to the error log; 2: write every kind of warning (eg: network failures and reconnects)
select @@log_warnings; -- removed in MySQL 8 (MySQL 5.7 defaults to 2, MariaDB 5.5.60 to 1)
-- error level (1: Error; 2: Error, Warning; 3: Error, Warning, Info)
select @@log_error_verbosity; -- MySQL 8 defaults to 2, MySQL 5.7 to 3
-- PS: since MySQL 5.7.2, the `log_error_verbosity` system variable is preferred
-- defaults to `log_filter_internal; log_sink_internal`
select @@log_error_services; -- new in MySQL 8.0
```
PS: plenty of mature MySQL solutions exist on the market (almost all built on 5.6/5.7)
> Which is why this series leads with `MySQL5.7` and `MariaDB5.5.60` (often it's not that you refuse the newest DB, it's that your architecture's dependencies can't follow)
Further reading: <https://www.cnblogs.com/kerrycode/p/8973285.html>
PS: SQLServer's ErrorLog works much the same way

##### New in MySQL 8.0: log_error_services
**Log service components**:
| Component | Description |
| ---------------------- | ----------------------------------------------- |
| `log_sink_internal` | Default log sink (driven by `log_error`) |
| **`log_filter_internal`** | Default log filter (driven by `log_error_verbosity`) |
| **`log_sink_json`** | Writes the error log to a `json` file |
| `log_sink_syseventlog` | Writes the error log to the system log |
PS: `log_filter_internal` filters messages (**entries below the configured level are dropped**)
**The log format is roughly**: `UTC timestamp  process id  [level]  [error code]  [origin (Server or Client)]  message`
> eg: `2019-05-19T09:54:11.590474Z 8 [Warning] [MY-010055] [Server] IP address '192.168.36.144' could not be resolved: Name or service not known`
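Those fields map one-to-one onto a regular expression. A minimal parsing sketch in Python (the regex and helper are my own, simply following the field order described above, not an official MySQL parser):

```python
import re

# Split a MySQL 8 error-log line into the fields described above:
# timestamp, thread id, [level], [error code], [subsystem], message.
LOG_LINE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<tid>\d+)\s+"
    r"\[(?P<level>[^\]]+)\]\s+\[(?P<code>[^\]]+)\]\s+\[(?P<subsystem>[^\]]+)\]\s+"
    r"(?P<msg>.*)$"
)

def parse_error_line(line):
    """Return the named fields of one error-log line as a dict."""
    m = LOG_LINE.match(line)
    if m is None:
        raise ValueError("not a recognized error-log line")
    return m.groupdict()

fields = parse_error_line(
    "2019-05-19T09:54:11.590474Z 8 [Warning] [MY-010055] [Server] "
    "IP address '192.168.36.144' could not be resolved: Name or service not known"
)
```

Running this on the example line above yields `Warning` / `MY-010055` / `Server` in the `level`, `code`, and `subsystem` fields.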
**`log_sink_json` is the most commonly used one**:
> Official docs: <https://dev.mysql.com/doc/refman/8.0/en/error-log-json.html>
PS: install the json component before first use: `install component 'file://component_log_sink_json';`
> **A common setting: `set persist log_error_services='log_filter_internal;log_sink_json';`**
##### A small note on timestamps
The timestamps above default to UTC, which is offset from local time; localize them via `log_timestamps`:
```sql
-- query it
select @@log_timestamps; -- new in MySQL 5.7
-- since 8.0, SET PERSIST persists a global-variable change into a config file
set persist log_timestamps='SYSTEM'; -- requires root privileges
```
**PS: the config file generated by `set persist` lives at `/var/lib/mysql/mysqld-auto.cnf`**

#### 3. general_log (general log)
This used to be enabled during development and debugging, then disabled after going live (it also stays on for a while in a system's early V1 days)
> **These days you can use [go-sniffer](https://www.cnblogs.com/dotnetcrazy/p/10443522.html) to capture the SQL clients execute**
```sql
-- is the general log on? (0 off, 1 on)
-- usually off (performance)
select @@general_log; -- default 0
-- Ubuntu default: /var/lib/mysql/ubuntuserver.log
-- CentOS default: /var/lib/mysql/localhost.log
select @@general_log_file; -- path of the general log
-- log destination (FILE | TABLE | NONE)
select @@log_output; -- defaults to file storage
```
A quick look at the general log's table structure in the database:

**Enabling it temporarily**:
```shell
# enable
set global general_log = 1;
# set [global | persist] general_log_file = 'log path';
set global log_output = 'TABLE';
```
#### 4. `slow_query_log` (slow-query log)
This is the `most used` log: it records the queries matching given criteria, **which are usually the SQL that needs optimizing**
> PS: turn it on for a while when you hit a performance bottleneck or want to tune SQL (small projects can simply leave it on)
Check the defaults first: **`show variables like '%slow%';`, `show variables like 'long%';`**

**SQL queries**:
```sql
-- enabled?
select @@slow_query_log; -- off by default
-- CentOS: /var/lib/mysql/localhost-slow.log
-- Ubuntu: /var/lib/mysql/ubuntuserver-slow.log
select @@slow_query_log_file;
-- criterion: how many seconds counts as slow (1s is a common setting)
select @@long_query_time; -- defaults to 10s (decimals allowed: 0.003)
-- PS: 0 would log every statement (not recommended)
-- criterion: also log queries that use no index
select @@log_queries_not_using_indexes; -- default 0 (off)
-- log the administrative statements optimize table, analyze table, and alter table
select @@log_slow_admin_statements; -- default 0 (off)
-- log slow queries produced by the slave
select @@log_slow_slave_statements;
```
**Common settings**:
> PS: high-concurrency internet projects usually tolerate SQL execution times **below `300~500ms`** (`long_query_time=0.05`)
```shell
# common settings (requires MySQL root privileges)
set global slow_query_log = 1; # enable the slow-query log
set global long_query_time = 1; # log SQL taking longer than 1s
set global log_slow_admin_statements = 1; # log administrative statements
set global log_queries_not_using_indexes = 1; # log SQL that uses no index
# set [global | persist] slow_query_log_file = 'path'; # set the log path
```
**A new `long_query_time` only takes effect for new connections (no DB restart needed)**
> PS: the current session keeps the old value; later sessions get the new one (if you don't want to reconnect, also set the session-level `long_query_time`)
Further reading: (`chown mysql:mysql /work/log/xxx.log`)
- <https://shihlei.iteye.com/blog/2311752>
- <https://www.cnblogs.com/1021lynn/p/5328495.html>
#### Extension: slow-query tools
First, a quick read of a slow-log entry:
```shell
# Time: 2019-05-22T21:16:28.759491+08:00
# User@Host: root[root] @ localhost [] Id: 11
# Query_time: 0.000818 Lock_time: 0.000449 Rows_sent: 5 Rows_examined: 5
SET timestamp=1558530988;
select * from mysql.user order by host; # the SQL statement
```
1. `Time`: when the query **started** (`start_time`)
2. `User@Host: root[root] @ localhost [] Id: 11`: **host info** of whoever ran the SQL
3. `Query_time`: how long the **`query`** took
4. `Lock_time`: time spent **waiting for locks**
5. `Rows_sent`: number of **rows sent** back
6. `Rows_examined`: number of **rows examined**
7. `SET timestamp=1558530988;`: the SQL's **execution timestamp**
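The numbered fields above are easy to pull out programmatically when you want statistics of your own. A hedged Python sketch (the regex and helper are mine, matching the sample entry shown, not a general slow-log parser):

```python
import re

# Extract the numeric fields from the "# Query_time ..." line of a
# slow-log entry (field names exactly as in the sample entry above).
STATS = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)"
)

def parse_stats(line):
    """Return the stats of one slow-log header line, or None if it isn't one."""
    m = STATS.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "query_time": float(d["query_time"]),
        "lock_time": float(d["lock_time"]),
        "rows_sent": int(d["rows_sent"]),
        "rows_examined": int(d["rows_examined"]),
    }

stats = parse_stats("# Query_time: 0.000818 Lock_time: 0.000449 Rows_sent: 5 Rows_examined: 5")
```

Feed it the third line of the sample entry and you get the query time as a float and the row counts as ints, ready for aggregation.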
Now for the tools; two recommendations:
1. The bundled slow-log analyzer: `mysqldumpslow`
2. `pt-query-digest` from the MySQL toolbox (`percona-toolkit`)
##### mysqldumpslow (bare-bones)
**The 10 slowest statements: `mysqldumpslow -s t -t 10 /var/lib/mysql/localhost-slow.log`**
```shell
-s  sort order
    t: query time
    c: hit count
    l: lock time
    r: rows returned
    al: average lock time
    ar: average rows returned
    at: average query time
-t  how many entries to return (think "top n")
-g  a regex filter, case-insensitive
```
PS: mysqldumpslow's report does not show the complete concrete SQL:
1. **Pagination SQL differs per page, and so does its performance: the later the page, the likelier a slow query, yet mysqldumpslow folds all pagination SQL into a single statement**
2. eg: `select * from tb_table where uid=20 group by createtime limit 10000, 1000;` ==> `select * from tb_table where uid=N group by createtime limit N, N;`
   - however uid and limit vary, mysqldumpslow treats the statements as identical
##### pt-query-digest (recommended)
Official docs: <https://www.percona.com/doc/percona-toolkit/3.0/pt-query-digest.html>
> **Analyze the slow log: `pt-query-digest /var/lib/mysql/localhost-slow.log`**
1. Capture MySQL protocol traffic with tcpdump, then report the slowest queries:
   - `tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt`
   - `pt-query-digest --type tcpdump mysql.tcp.txt`
2. Report the slowest queries from a remote process list:
   - `pt-query-digest --processlist h=ip`
Installation reference: <https://github.com/lotapp/awesome-tools/blob/master/README.md#4%E8%BF%90%E7%BB%B4>
> PS: the common percona-toolkit tools are also briefly covered there, with links to their docs
##### Other
PS: there is also **`mysqlsla`**, which I haven't used, so here's a reference article for anyone interested
> <https://www.cnblogs.com/fengchi/p/6187099.html>
---
#### 5. binary_log (binary log)
The previous sections covered the general and slow-query logs; now the binary log:
The binary log is probably the most used of all: it **`records modifications to the database`**, and it's what **`replication`** relies on (eg: incremental backup)
> PS: since modifications are recorded, the derived use case is `incremental backup and restore` (point-in-time backup and restore)
PS: MySQL logs split into these two independent layers:
1. `Server-layer` logs (independent of the storage engine)
   - general log, slow-query log, binary log
2. `Storage-engine-layer` logs
   - eg: innodb's redo log and undo log
Q: so which modifications get recorded?
> A: every modification event against the MySQL databases (data-change events and table-structure changes), and **only events that executed successfully** (failures aren't recorded)
This may sound abstract; anyone who knows SQLServer will get it from one picture:

##### 5.1. Binary-log formats
| Value | Description |
| ----------- | --------------------------------------------------------------------------- |
| `STATEMENT` | Statement-based: records the SQL statements executed when data is modified |
| **`ROW`** | Row-based: records the rows modified by each change (one entry per modified row) |
| **`MIXED`** | A mix of row- and statement-based: the system decides per statement which format to use |
Check with: **`show variables like 'binlog_format';`**
1. **binlog_format=`statement`**: statement-based (the old default)
   1. Pros: small log volume, saving disk and network IO (though for a single-row change ROW is even smaller)
   2. Cons: context must be recorded so the statement produces the same result on the slave as on the master
      - **non-deterministic functions such as `uuid()` or `user()` can still make master and slave diverge**
   3. **Viewing**: `mysqlbinlog /var/lib/mysql/binlog.0000xx | more` (no extra flags needed)
2. **binlog_format=`row`**: row-based (the default since 5.7)
   1. Pros: avoids the master/slave inconsistencies of statement replication (safer replication)
      - PS: with no backup at hand, a row-format binlog can be analyzed to recover data in reverse
   2. Cons: larger log volume (written sequentially)
      - **a newer parameter now mitigates this**: `binlog_row_image=[full|minimal|noblob]`
   3. **Viewing**: `mysqlbinlog -vv /var/lib/mysql/binlog.0000xx | more`
3. **binlog_format=`mixed`**: the mixed row/statement format (`recommended`)
   - PS: log size depends on the SQL executed (more non-deterministic functions and more rows mean more data)
PS: **DDL (create, drop, alter) is always logged statement-based**
> If it logged row by row, changing one status column on a table with hundreds of millions of rows would be madness
**More on `binlog_row_image=[FULL|MINIMAL|NOBLOB]`**:
> PS: **check with: `show variables like 'binlog_row_image'`**
1. Default `full`: complete
   - records the entire content of every modified row
2. `noblob`: full, plus an optimization for large text columns
   - **text/blob columns are omitted unless they were modified**
3. `minimal`: minimal records, **only the modified columns**
   - PS: **one big caveat: the log is smaller, but after a mistaken operation recovery is very hard** (the original content is unknown)
##### Recommendation
**Generally use `binlog_format=mixed`** or **`binlog_format=row`** + **`binlog_row_image=minimal`**
> PS: for particularly high safety requirements, use `binlog_format=row` + `binlog_row_image=full` (mistakes stay recoverable)
This resembles SQLServer's recovery models; here's a chart for comparison:

##### 5.2. Binary-log configuration
The three common formats are covered above, but older versions don't enable the binary log by default. What then?
> PS: on MariaDB check the sample configs: `ls /usr/share/mysql/ |grep .cnf` (CentOS)
Verify it:
Before MySQL 8: `cat /etc/mysql/mysql.conf.d/mysqld.cnf` (UbuntuServer)

MySQL 8: `cat /etc/my.cnf |grep log` (CentOS)

---
Q: some may wonder why **`show variables like 'log_bin';`** doesn't quite match the config file
> PS: config-file parameters can generally be queried with `show variables like 'xx'`

A: because since 5.7 it was split into two variables: **`log_bin` and `log_bin_basename`**:
> PS: **`log_bin=xxx` in the config corresponds to both `log_bin` and `log_bin_basename`**
```
mysql> show variables like 'log_bin%';
+---------------------------------+-----------------------------+
| Variable_name | Value |
+---------------------------------+-----------------------------+
| log_bin | ON |
| log_bin_basename | /var/lib/mysql/binlog |
| log_bin_index | /var/lib/mysql/binlog.index |
| log_bin_trust_function_creators | OFF |
| log_bin_use_v1_row_events | OFF |
+---------------------------------+-----------------------------+
5 rows in set (0.00 sec)
```
##### Enabling it
**Enabling binlog on MariaDB** (CentOS):

**MySQL 5.7 demo** (UbuntuServer):

Changes in the config file: (**`show variables like 'binlog_format';` shows the current binlog format**)
```shell
# server identity
server-id=1 # optional for a standalone MariaDB
# enable binlog and set its path
# without a path it defaults to the data directory
log_bin=binlog # files will carry the "binlog" prefix
# use the ROW|MIXED binlog format
# binlog_format=MIXED # 5.7 defaults to ROW
```
First the file-prefix idea (`log_bin=binlog`), which one picture makes clear:

PS: **if log_bin is just a name, the default location is the data-file directory**
> Configs usually set it, eg: `datadir=/var/lib/mysql`; **`show variables like 'datadir';`** also reveals it
Not the same concept as SQLServer filegroups, but similar in spirit ==> `logs can be multiple and adjusted dynamically`

##### 5.3. Recording SQL in ROW mode
Q: ROW keeps master and slave safe, but troubleshooting usually needs the SQL, and the statement format isn't suitable. What then?
A: a newer parameter solves it: **`binlog_rows_query_log_events`**; once enabled, the SQL is recorded too
Check with: **`show variables like 'binlog_row%';`**
```shell
mysql> show variables like 'binlog_row%';
+------------------------------+-------+
| Variable_name | Value |
+------------------------------+-------+
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
+------------------------------+-------+
2 rows in set (0.01 sec)
```
##### binlog demo
**List the binlogs: `show binary logs;`**
**Rotate to a fresh binlog: `flush logs;`** (binary logging continues in the new file from now on)

The binlog is empty right now: (`-vv` renders the binary log as readable text)
> **`mysqlbinlog --no-defaults -vv --base64-output=DECODE-ROWS /var/lib/mysql/binlog.000006`**

Now simulate a few SQL operations and look at the binlog again:

Inspect the binlog: (production usually runs FULL mode, mainly to guard against developers running modification SQL without a where clause and similar mistakes)
> This is FULL mode: the whole row is recorded (the actual change is the part in the green box)

To get the SQL into the binlog, enable `binlog_rows_query_log_events`:
> PS: with the mixed format, operations like this do record the SQL in the binlog
Temporarily enable `binlog_rows_query_log_events` (set it in the config file if you need it permanently)
> PS: MySQL 8 can persist the global change into the config file with `set persist`

The result:

##### 5.4. Purging binary logs
1. Automatically
   - set a retention in the config file: `expire_logs_days = 30`
2. Manually
   - delete logs before a given file number: `purge binary logs to 'binlog.000006';`
   - **delete logs before a given time: `purge binary logs before '2019-06-15 14:14:00';`**
It's already 23:23, so a quick demo:
Run in the MySQL command line:

The file list:

##### 5.5. Binary logs and replication
More on this once the ops series wraps up; an advanced (architecture) series follows. For now, just a note on `how the binlog format affects replication`:
1. Statement-based replication (SBR)
   - the binlog uses the `statement` format (the default before 5.7)
2. Row-based replication (RBR)
   - the binlog uses a row-based format
3. Mixed mode
   - switches between the two as appropriate
A follow-up read: <https://www.cnblogs.com/gujianzhe/p/9371682.html>
---
#### 6. relay_log (relay log)
Temporarily stores the binlog incrementally copied from the master to the slave
```shell
relay_log = file_prefix # defaults to the hostname, which changes with the host; better set your own
relay_log_purge = 1 # defaults to on (1): automatic purge
```
**Planned next: backup and restore, monitoring**
---
### 4.6.6. SQLServer monitoring
Sample scripts: <https://github.com/lotapp/BaseCode/tree/master/database/SQL/SQLServer>
> PS: I wrote these SQLServer scripts by hand back in the day; treat them as references (I'm on MySQL now and will tidy them up sometime)
My earlier SQLServer monitoring series broke off when I changed environments, having only demoed the basics; with `MySQL` monitoring coming up, here's the catch-up first:
> **[SQLServer performance tuning: database-level log monitoring](https://www.cnblogs.com/dunitian/p/6022967.html)**: <https://www.cnblogs.com/dunitian/p/6022967.html>
Before monitoring, you might read up on [database mail](https://mp.weixin.qq.com/s/WWdDSNj_19RVCZJMGxhMWA): <https://www.cnblogs.com/dunitian/p/6022826.html>
> Application: **set up a scheduled job that mails slow-SQL or error information as an early warning**
The benefits are countless, eg: when a customer hits a database-level error you can replay the scene instantly (PS: waiting for the customer to report it costs you business)
Traditionally errors are caught with the application's `try`+`catch`, but errors in database scheduled jobs and the like never reach the application, hence the need for database-level monitoring
> PS: during development, monitor with `SQLServer Profiler`
The essence: since SQLServer 2012 the XEVENT machinery has matured, eg: the common extended event **`error_reported` can mail the administrator whenever an error occurs**
> PS: extended events are performant and fairly lightweight
PS: **SQLServer monitoring in three broad steps: `send mail`, `event monitoring`, `scheduled execution`**
#### 4.6.6.1 Sending mail
Covered before; here is the SQL-only version again:
##### 1. Configure the sender account
One-time setup; **afterwards you send mail simply by naming the profile**:
```sql
-- enable database mail
exec sp_configure 'show advanced options',1
reconfigure with override
go
exec sp_configure 'database mail xps',1
reconfigure with override
go
-- create the mail account
exec msdb.dbo.sysmail_add_account_sp
@account_name ='dunitian', -- account name
@email_address ='xxx@163.com', -- sender address
@display_name ='SQLServer2014_192.168.36.250', -- sender display name
@MAILSERVER_NAME = 'smtp.163.com', -- mail server address
@PORT =25, -- mail server port
@USERNAME = 'xxx@163.com', -- username
@PASSWORD = 'mail password or auth code' -- password (auth code)
GO
-- the mail profile
exec msdb.dbo.sysmail_add_profile_sp
@profile_name = 'SQLServer_DotNetCrazy', -- profile name
@description = 'database mail profile' -- profile description
go
-- associate the account with the profile
exec msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'SQLServer_DotNetCrazy', -- profile name
@account_name = 'dunitian', -- account name
@sequence_number = 1 -- the account's order within the profile (default 1)
go
```
##### 2. Sending an alert mail
Again SQL only; for the GUI route see my earlier articles:
```sql
-- send a test mail
exec msdb.dbo.sp_send_dbmail
@profile_name = 'SQLServer_DotNetCrazy', -- profile name
@recipients = 'xxx@qq.com', -- recipient
@body_format = 'HTML', -- body format
@subject = 'mail subject', -- subject
@body = 'mail body<br/><h2>This is Test</h2>...' -- body
```
The result:

##### 3. Mail-related queries
Mostly useful for troubleshooting failures:
```sql
-- queries
select * from msdb.dbo.sysmail_allitems -- all mail messages
select * from msdb.dbo.sysmail_mailitems -- mail messages (more columns)
select * from msdb.dbo.sysmail_sentitems -- sent messages
select * from msdb.dbo.sysmail_faileditems -- failed messages
select * from msdb.dbo.sysmail_unsentitems -- unsent messages
select * from msdb.dbo.sysmail_event_log -- the mail event log
```
---
#### 4.6.6.2. Implementing the monitoring
With mail working, on to the monitoring itself
##### 1. The GUI walkthrough
**The GUI isn't recommended, but it helps you understand extended-event monitoring**
1. Start the new-session wizard (once familiar, create the session directly)


2. Pick the extended events to capture

3. The global fields captured here match the SQL on the left (screenshotting everything is tedious, so I'm cutting corners; generating that core SQL is covered below)

4. Set sensible limits for your server's capacity (IO, memory, CPU)

5. Generate the core SQL (the whole point of the GUI exercise; you can extend the monitor from this SQL later)

6. The core code:

7. Start the session and a basic extended-event monitor is running

8. SQLServer provides a viewer

9. The logs live under `xxx\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log`

---
##### 2. The SQL way
The above was child's play; its purpose was to show where the core SQL comes from and why it's written that way
Now for a customized monitor:
First screenshots of the key pieces, then my packaged stored procedure as an appendix
1. The extended-event core code

2. Dump the in-memory data into a temp table

3. Move the temp-table data into a table of your own
> a small exercise: why stage the data in a temp table first? (hint: efficiency)

4. Send the alert mail

5. What's new at the database level:

6. A test

7. The result (prettify it as you like)

##### SQL appendix
```sql
-- switch to the database you want to monitor
USE [dotnetcrazy]
GO
-- collect information about logic errors on the server
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
-- the custom error-message table
IF OBJECT_ID('log_error_message') IS NULL
BEGIN
CREATE TABLE [dbo].[log_error_message]
(
[login_message_id] [uniqueidentifier] NULL CONSTRAINT [DF__PerfLogic__Login__7ACA4E21] DEFAULT (newid()),
[start_time] [datetime] NULL,
[database_name] [nvarchar] (128) COLLATE Chinese_PRC_CI_AS NULL,
[message] [nvarchar] (max) COLLATE Chinese_PRC_CI_AS NULL,
[sql_text] [nvarchar] (max) COLLATE Chinese_PRC_CI_AS NULL,
[alltext] [nvarchar] (max) COLLATE Chinese_PRC_CI_AS NULL,
-- [worker_address] [nvarchar] (1000) COLLATE Chinese_PRC_CI_AS NULL,
[username] [nvarchar] (1000) COLLATE Chinese_PRC_CI_AS NULL,
[client_hostname] [nvarchar] (1000) COLLATE Chinese_PRC_CI_AS NULL,
[client_app_name] [nvarchar] (1000) COLLATE Chinese_PRC_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
END
GO
-- create the stored procedure
CREATE PROCEDURE [dbo].[event_error_monitor]
AS
IF NOT EXISTS( SELECT 1 FROM sys.dm_xe_sessions dxs(NOLOCK) WHERE name = 'event_error_monitor') -- create the event session if it doesn't exist yet
-- create the extended event and buffer its data in memory
BEGIN
CREATE EVENT SESSION event_error_monitor ON SERVER
ADD EVENT sqlserver.error_reported -- the error_reported extended event
(
ACTION -- fields to capture
(
sqlserver.session_id, -- session id
sqlserver.plan_handle, -- plan handle, usable to retrieve the graphical plan
sqlserver.tsql_stack, -- T-SQL stack
package0.callstack, -- current call stack
sqlserver.sql_text, -- the SQL query that hit the error
sqlserver.username, -- username
sqlserver.client_app_name, -- client application name
sqlserver.client_hostname, -- client hostname
-- sqlos.worker_address, -- worker address of the current task
sqlserver.database_name -- current database name
)
WHERE severity >= 11 AND Severity <=16 -- user-level errors only
)
ADD TARGET package0.ring_buffer -- buffer temporarily in memory
WITH (max_dispatch_latency=1 SECONDS)
-- start the monitoring session
ALTER EVENT SESSION event_error_monitor on server state = START
END
ELSE
-- the session already exists: move its data into the table
BEGIN
-- dump the errors collected in memory into a temp table (easier to process)
SELECT
DATEADD(hh,
DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP),
n.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
n.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)') AS [database_name],
n.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS [sql_text],
n.value('(event/data[@name="message"]/value)[1]', 'nvarchar(max)') AS [message],
n.value('(event/action[@name="username"]/value)[1]', 'nvarchar(max)') AS [username],
n.value('(event/action[@name="client_hostname"]/value)[1]', 'nvarchar(max)') AS [client_hostname],
n.value('(event/action[@name="client_app_name"]/value)[1]', 'nvarchar(max)') AS [client_app_name],
n.value('(event/action[@name="tsql_stack"]/value/frames/frame/@handle)[1]', 'varchar(max)') AS [tsql_stack],
n.value('(event/action[@name="tsql_stack"]/value/frames/frame/@offsetStart)[1]', 'int') AS [statement_start_offset],
n.value('(event/action[@name="tsql_stack"]/value/frames/frame/@offsetEnd)[1]', 'int') AS [statement_end_offset]
into #error_monitor -- the temp table
FROM
( SELECT td.query('.') as n
FROM
(
SELECT CAST(target_data AS XML) as target_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
ON t.event_session_address = s.address
WHERE s.name = 'event_error_monitor'
--AND t.target_name = 'ring_buffer'
) AS sub
CROSS APPLY target_data.nodes('RingBufferTarget/event') AS q(td)
) as TAB
-- move the data into our own table (rows that already carry the SQL text go straight in)
INSERT INTO log_error_message(start_time,database_name,message,sql_text,alltext,username,client_hostname,client_app_name)
SELECT TIMESTAMP,database_name,[message],sql_text,'',username,client_hostname,client_app_name
FROM #error_monitor as a
WHERE a.sql_text != '' --AND client_app_name !='Microsoft SQL Server Management Studio - 查询'
AND a.MESSAGE NOT LIKE '找不到会话句柄%' AND a.MESSAGE NOT LIKE '%SqlQueryNotification%' -- exclude service broker noise
AND a.MESSAGE NOT LIKE '远程服务已删除%'
-- insert the application rows (statements without SQL text: look the SQL up via the handle)
INSERT INTO log_error_message(start_time,database_name,message,sql_text,alltext,username,client_hostname,client_app_name)
SELECT TIMESTAMP,database_name,[message],
SUBSTRING(qt.text,a.statement_start_offset/2+1,
(case when a.statement_end_offset = -1
then DATALENGTH(qt.text)
else a.statement_end_offset end -a.statement_start_offset)/2 + 1) sql_text,qt.text alltext,
username,client_hostname,client_app_name
FROM #error_monitor as a
CROSS APPLY sys.dm_exec_sql_text(CONVERT(VARBINARY(max),a.tsql_stack,1)) qt -- look up the concrete SQL via the handle
WHERE a.sql_text IS NULL AND tsql_stack != '' --AND client_app_name = '.Net SqlClient Data Provider'
DROP TABLE #error_monitor -- drop the temp table
-- restart the session to clear the buffer
ALTER EVENT SESSION event_error_monitor ON SERVER STATE = STOP
ALTER EVENT SESSION event_error_monitor on server state = START
END
-- the prettified alert mail
DECLARE @body_html VARCHAR(max)
set @body_html = '<table style="width:100%" cellspacing="0"><tr><td colspan="6" align="center" style="font-weight:bold;color:red">数据库错误监控</td></tr>'
set @body_html = @body_html + '<tr style="text-align: left;"><th>运行时间</th><th>数据库</th><th>发生错误的SQL语句</th><th>消息</th><th>用户名</th><th>应用</th><th>应用程序名</th></tr>'
-- formatting (blank-fill empty fields)
select @body_html = @body_html + '<tr><td>'
+ case (isnull(start_time, '')) when '' then '&nbsp;' else convert(varchar(20), start_time, 120) end + '</td><td>'
+ case (isnull(database_name, '')) when '' then '&nbsp;' else database_name end + '</td><td>'
+ case (isnull(sql_text, '')) when '' then '&nbsp;' else sql_text end + '</td><td>'
+ case (isnull(message, '')) when '' then '&nbsp;' else message end + '</td><td>'
+ case (isnull(username, '')) when '' then '&nbsp;' else username end + '</td><td>'
+ case (isnull(client_hostname, '')) when '' then '&nbsp;' else client_hostname end + '</td><td>'
+ case (isnull(client_app_name, '')) when '' then '&nbsp;' else client_app_name end + '</td></tr>'
from (
select start_time, database_name,sql_text, message, username, client_hostname, client_app_name
from [dbo].[log_error_message]
where start_time >= dateadd(hh,-2,getdate()) -- now minus the scheduled-job interval (2h)
and client_app_name != 'Microsoft SQL Server Management Studio - 查询' -- and client_hostname in('')
) as temp_message
set @body_html= @body_html+'</table>'
-- send the warning mail
exec msdb.dbo.sp_send_dbmail
@profile_name = 'SQLServer_DotNetCrazy', -- profile name
@recipients = 'xxxxx@qq.com', -- recipient
@body_format = 'HTML', -- body format
@subject = '数据库监控通知', -- subject
@body = @body_html -- body
go
```
Planned next: scheduled jobs, the full monitor
> PS: the long-overdue bazi article probably comes first, then the rest of SQLServer, then MySQL; after MySQL monitoring comes backup and restore, then the architecture series (the MyCat series is postponed until after the Redis and crawler series)
---
#### 4.6.6.3 Scheduled jobs
Scheduled jobs are usually handled at the application layer; here's the DB-level version in brief:
```sql
-- a scheduled job only needs to execute the stored procedure to keep the monitoring running
select 1/0; -- eg: simulate an error
exec event_error_monitor;
```
---
#### Other monitoring ideas
SQLServer supports plugins; combined with scheduled jobs you can **push the data to an application and collect slow-SQL or error information with something like ES**; the possibilities are endless
> PS: search "calling .NET assemblies from SQL Server" for plenty of material (executing a program directly via `xp_cmdshell` is another option)
A reference article: <https://www.cnblogs.com/knowledgesea/p/4617877.html>
---
### 4.6.6. MySQL monitoring metrics
Now for `MySQL` monitoring: the MySQL Community Edition ships no monitoring tool; MariaDB has `Monyog` (paid)
> At scale almost everyone runs enterprise-grade monitoring; at small scale, a script that periodically checks the metrics
Companies used to run mostly **`Zabbix`** and **`Nagios`**
> PS: small and mid-size companies now favor Xiaomi's **`Open-Falcon`** (a rising star)
Tooling can wait; today it's the MySQL monitoring **metrics** you need to know
> PS: **the tools monitor exactly these metrics**; you could perfectly well write a lightweight monitor of your own around them
Database monitoring generally covers these areas:
1. DB availability
   - the process exists and serves traffic (can execute SQL)
2. DB performance
   - QPS, TPS, concurrent threads (below the connection limit), cache hit ratio
3. DB anomalies
   - innodb blocking and deadlocks, slow queries
4. Replication
   - link status, replication lag, data consistency (checked periodically)
5. The server itself
   - CPU, memory, swap (which steps in when memory runs out), network IO, disk space (data and log directories)
#### 4.6.6.1. DB availability monitoring
It boils down to confirming the DB can serve traffic (three steps: 1 connectable, 2 readable and writable, 3 connection count)
##### 1. Is it connectable?
The bundled tool checks whether the database accepts connections (`telnet ip port` works too):
> Run from another server: `mysqladmin -u user -p -h server_ip ping` (prints `mysqld is alive`)


Install it on Windows if it's missing:

**Best of all, have a program try to connect: if it can, fine; if not, it's over** (an IDE works the same way; a program's advantage is automation)
> PS: have a program connect **periodically** and run a trivial query such as `select @@version;`; any returned result means it's alive
##### 2. Is it readable and writable?
The read/write checks should stay simple (complex ones waste server resources)
1. Read check: run one simple query
   - eg: `select @@version;`
2. Write check: create a monitoring table and update a row in it
   - eg: write the latest check time
Extra: with replication enabled, also check the `read_only` setting:
> PS: **verify** that the master's **`read_only` is off** (slaves are usually read-only; if a failover happened and the flag wasn't flipped, writes fail)
##### 3. Monitoring the connection count
Two situations matter:
1. Connections **persistently close to the maximum**: tune the config or upgrade the hardware
2. Connections **spiking in a short window**: raise a warning, to catch CC-style attacks
   - eg: alert when **`current connections / max connections > 80%`** so the DBA can investigate the cause
**Maximum connections**: `show variables like 'max_connections';`
**Current connections**: `show global status like 'Threads_connected';`

---
#### 4.6.6.2. DB performance monitoring (important)
**Record** the DB status values **sampled during monitoring** (you'll want them later for trend analysis)
##### 1. TPS and QPS
**`QPS`** (Queries Per Second): **requests handled per second** (mostly queries, but DML and DDL count too)
> PS: the formula: `QPS = (Queries2 - Queries1) / interval`
**`TPS`** (Transactions Per Second): **transactions handled per second** (insert, update, delete)
> PS: the formula: **`TPS = (TC2 - TC1) / interval`** (**`TC = Com_insert + Com_delete + Com_update`**)
PS: **you can think of `TPS` as a subset of `QPS`**; a rough sketch:

##### 2. Computing them
The idea: **sample the status variables twice, then apply the formulas**
> PS: QPS roughly gauges how busy reads are, TPS how busy writes are
**`QPS = (Queries2 - Queries1) / interval`**
> PS: Queries = **`show global status like 'Queries';`**

The query: **`show global status where variable_name in ('Queries','uptime');`**

PS: `Queries` is a convenience MySQL provides; underneath it is still a sum over counters, `Sum(Com_xx)`
The counters: **`show global status like 'Com%';`**
> eg: Com_select, Com_insert, Com_delete, Com_update, Com_create_index, Com_alter_table...
---
**`TPS = (TC2 - TC1) / interval`**
> PS: **`TC = Com_insert + Com_delete + Com_update`**
The query: **`show global status where variable_name in ('com_insert','com_delete','com_update','uptime');`**

Plug the numbers into the formula and you have TPS

##### 3. Concurrent requests
As everyone knows, **the higher the concurrency, the weaker the system performs** (read it together with **CPU**, memory, and network IO)
The database's **current concurrency**: `show global status like 'Threads_running';`
> PS: **current connections**: `show global status like 'Threads_connected';`

In a **production project**, concurrency normally stays **far below** the number of database threads at the same moment (if it doesn't, heavy blocking has probably appeared)
> PS: **current concurrency** (`Threads_running`) << **current connections** (`Threads_connected`)
Distinguish the two: **concurrency is the number of sessions executing simultaneously**; the **connection count is the total number of sessions** (sleeping ones included)
##### 4. Cache hit ratio (innodb)
This is the innodb `buffer-pool hit ratio` (normally `>=95%`)
> The formula: `(innodb_buffer_pool_read_requests - innodb_buffer_pool_reads) / innodb_buffer_pool_read_requests`

A quick explanation:
1. **`innodb_buffer_pool_read_requests`: total read requests** (reads served from the buffer pool + reads from physical disk)
   - PS: this counter already includes the `reads from disk` (the difference between the two is the number of cache hits)
2. **`innodb_buffer_pool_reads`: reads from physical disk**
PS: query them: `show global status like 'innodb_buffer_pool_read%';`
---
#### 4.6.6.3. DB anomaly monitoring
##### 1. Detecting innodb blocking
Before MySQL 5.7 this took two tables: `information_schema.innodb_lock_waits` and `information_schema.innodb_trx`
> PS: from MySQL 5.7 one table suffices: **`sys.innodb_lock_waits`** (essentially a view wrapping the two above)
A case: **simulate two sessions modifying the same row at the same time**


Query blocking that has lasted more than 30s:
> PS: you can capture the blocked statement, but not the statement that caused the blocking (it has already finished)

```sql
-- blocking that has lasted longer than 30s
select waiting_pid as blocked_pid,
       waiting_query as blocked_sql,
       blocking_pid as running_pid,
       blocking_query as running_sql,
       wait_age as blocked_time,
       sql_kill_blocking_query as info
from sys.innodb_lock_waits
where (unix_timestamp() - unix_timestamp(wait_started)) > 30;
```
PS: the blocking happens because **both want an exclusive lock on the same resource**
> Think of it this way: both want the exclusive lock; the first grabs it, the second blocks
---
Extension: on **MariaDB 5.5.60, or MySQL below 5.7**, use this SQL instead:

```sql
select b.trx_mysql_thread_id as blocked_pid,
b.trx_query as blocked_sql,
c.trx_mysql_thread_id as running_pid,
c.trx_query as running_sql,
(unix_timestamp() - unix_timestamp(c.trx_started)) as blocked_time
from information_schema.innodb_lock_waits a
join information_schema.innodb_trx b on a.requesting_trx_id = b.trx_id
join information_schema.innodb_trx c on a.blocking_trx_id = c.trx_id
where (unix_timestamp() - unix_timestamp(c.trx_started)) > 30;
```
A quick summary:
Get the current session's connection id: **`select connection_id();`**
> PS: change the innodb lock-wait timeout: `set global innodb_lock_wait_timeout=60;` (MySQL 8 defaults to 50s)
In the MySQL CLI, `kill thread_id` kills the blocking thread
**You can catch the blocking thread's id, but not precisely which SQL caused the blocking; only statements still executing can be captured**
> PS: MyISAM tables can't be checked this way (almost everything is innodb now; convert any legacy tables)
---
##### 2. Detecting deadlocks
Whether current transactions have deadlocked (MySQL automatically rolls back the transaction holding fewer resources)
> PS: if it rolled back the one holding more, half a day of computation would be wasted (time and resources)
The lightweight fix: record deadlock information into error.log
> PS: MySQL resolves deadlocks automatically, so just log them, then reproduce the scenario yourself to root-cause the deadlock
`set global innodb_print_all_deadlocks=on;` (MySQL 8 can persist it into the config with `persist`)
> PS: view the most recent deadlock: **`show engine innodb status`**
**The enterprise-grade option**:
`pt-deadlock-logger u=user,p=password,h=ip --create-dest-table --dest u=user,p=password,h=ip,D=database,t=deadlock_table`
> PS: the `--dest u=...,p=...,h=...,D=database,t=table` DSN says which database and table store the deadlock records

PS: **one gotcha: the user pt connects with must have root privileges; otherwise it creates the table but records nothing. Best create a dedicated dba account for monitoring**

Deadlock demo:

Deadlock capture:

A step-by-step table, so newcomers can follow:
> With the table, the gif demo is easy to follow (the earlier Python series on deadlocks has plenty of diagrams too)
|Session 1|Session 2|
|---|---|
|begin;|x|
|`update workdb.users set email='dnt@188.com' where id=1;`|x|
|x|begin;|
|x|`update workdb.users set email='dnt@taobao.com' where id=2;`|
|`update workdb.users set email='dnt@188.com' where id=2;`|x|
|x|`update workdb.users set email='dnt@taobao.com' where id=1;`|
---
##### 3. Slow-query monitoring
Two approaches:
1. Analyze the slow-query log periodically (covered in the previous section)
2. Watch the `information_schema.processlist` table in real time
   - eg: `select * from information_schema.processlist where time>60 and command<>'sleep';`

PS: the `event_scheduler` entry is a system background thread; ignore it
#### 4.6.6.4. Replication monitoring
##### 1. Replication status
Is the replication link healthy?
**`show slave status;` and check `Slave_IO_Running` and `Slave_SQL_Running`**
> PS: both should be yes; if either is no, check `Last_Errno` and `Last_Error` for details
##### 2. Replication lag
Master-slave lag:
The lightweight option: `show slave status;` and check `Seconds_Behind_Master` (the gap between the master binlog's timestamp and the timestamp of the binlog the slave has replayed)
> PS: many open-source tools watch this value, but it isn't very accurate, eg: under network delay, or when the master runs a long transaction whose binlog hasn't reached the slave yet
**The enterprise-grade option**:
The principle first: **create a table on the master, periodically insert a row into it on the master, read that row back from the slave, and measure how long the sync took**
Master: **`pt-heartbeat --user=user --password=password -h master_ip --create-table --database dbname --update --daemonize --interval=1`**
Slave: **`pt-heartbeat --user=user --password=password -h slave_ip --database dbname --monitor --daemonize --log /tmp/salve_tmp.log`**
References:
```shell
Database architecture: database monitoring
https://blog.csdn.net/xiaochen1999/article/details/80947183
Common SQL for listing running transactions and lock waits in MySQL
https://www.cnblogs.com/xiaoleiel/p/8316527.html
Measuring replication lag with pt-heartbeat
https://www.cnblogs.com/xiaoboluo768/p/5147425.html
```
##### Appendix: what innodb_lock_waits really is
Verify it yourself: `show create view sys.innodb_lock_waits;`
```
# show create view sys.innodb_lock_waits;
# -- MySQL5.7.27
# CREATE ALGORITHM = TEMPTABLE DEFINER =`mysql.sys`@`localhost` SQL SECURITY INVOKER VIEW `sys`.`innodb_lock_waits` AS
# select `r`.`trx_wait_started` AS `wait_started`,
# timediff(now(), `r`.`trx_wait_started`) AS `wait_age`,
# timestampdiff(SECOND, `r`.`trx_wait_started`, now()) AS `wait_age_secs`,
# `rl`.`lock_table` AS `locked_table`,
# `rl`.`lock_index` AS `locked_index`,
# `rl`.`lock_type` AS `locked_type`,
# `r`.`trx_id` AS `waiting_trx_id`,
# `r`.`trx_started` AS `waiting_trx_started`,
# timediff(now(), `r`.`trx_started`) AS `waiting_trx_age`,
# `r`.`trx_rows_locked` AS `waiting_trx_rows_locked`,
# `r`.`trx_rows_modified` AS `waiting_trx_rows_modified`,
# `r`.`trx_mysql_thread_id` AS `waiting_pid`,
# `sys`.`format_statement`(`r`.`trx_query`) AS `waiting_query`,
# `rl`.`lock_id` AS `waiting_lock_id`,
# `rl`.`lock_mode` AS `waiting_lock_mode`,
# `b`.`trx_id` AS `blocking_trx_id`,
# `b`.`trx_mysql_thread_id` AS `blocking_pid`,
# `sys`.`format_statement`(`b`.`trx_query`) AS `blocking_query`,
# `bl`.`lock_id` AS `blocking_lock_id`,
# `bl`.`lock_mode` AS `blocking_lock_mode`,
# `b`.`trx_started` AS `blocking_trx_started`,
# timediff(now(), `b`.`trx_started`) AS `blocking_trx_age`,
# `b`.`trx_rows_locked` AS `blocking_trx_rows_locked`,
# `b`.`trx_rows_modified` AS `blocking_trx_rows_modified`,
# concat('KILL QUERY ', `b`.`trx_mysql_thread_id`) AS `sql_kill_blocking_query`,
# concat('KILL ', `b`.`trx_mysql_thread_id`) AS `sql_kill_blocking_connection`
# from ((((`information_schema`.`innodb_lock_waits` `w` join `information_schema`.`innodb_trx` `b` on ((`b`.`trx_id` = `w`.`blocking_trx_id`))) join `information_schema`.`innodb_trx` `r` on ((`r`.`trx_id` = `w`.`requesting_trx_id`))) join `information_schema`.`innodb_locks` `bl` on ((`bl`.`lock_id` = `w`.`blocking_lock_id`)))
# join `information_schema`.`innodb_locks` `rl` on ((`rl`.`lock_id` = `w`.`requested_lock_id`)))
# order by `r`.`trx_wait_started`;
# -- MySQL8.0.16(information_schema
# CREATE ALGORITHM = TEMPTABLE DEFINER =`mysql.sys`@`localhost` SQL SECURITY INVOKER VIEW `sys`.`innodb_lock_waits`
# (`wait_started`, `wait_age`, `wait_age_secs`, `locked_table`, `locked_table_schema`, `locked_table_name`,
# `locked_table_partition`, `locked_table_subpartition`, `locked_index`, `locked_type`, `waiting_trx_id`,
# `waiting_trx_started`, `waiting_trx_age`, `waiting_trx_rows_locked`, `waiting_trx_rows_modified`,
# `waiting_pid`, `waiting_query`, `waiting_lock_id`, `waiting_lock_mode`, `blocking_trx_id`, `blocking_pid`,
# `blocking_query`, `blocking_lock_id`, `blocking_lock_mode`, `blocking_trx_started`, `blocking_trx_age`,
# `blocking_trx_rows_locked`, `blocking_trx_rows_modified`, `sql_kill_blocking_query`,
# `sql_kill_blocking_connection`) AS
# select `r`.`trx_wait_started` AS `wait_started`,
# timediff(now(), `r`.`trx_wait_started`) AS `wait_age`,
# timestampdiff(SECOND, `r`.`trx_wait_started`, now()) AS `wait_age_secs`,
# concat(`sys`.`quote_identifier`(`rl`.`OBJECT_SCHEMA`), '.',
# `sys`.`quote_identifier`(`rl`.`OBJECT_NAME`)) AS `locked_table`,
# `rl`.`OBJECT_SCHEMA` AS `locked_table_schema`,
# `rl`.`OBJECT_NAME` AS `locked_table_name`,
# `rl`.`PARTITION_NAME` AS `locked_table_partition`,
# `rl`.`SUBPARTITION_NAME` AS `locked_table_subpartition`,
# `rl`.`INDEX_NAME` AS `locked_index`,
# `rl`.`LOCK_TYPE` AS `locked_type`,
# `r`.`trx_id` AS `waiting_trx_id`,
# `r`.`trx_started` AS `waiting_trx_started`,
# timediff(now(), `r`.`trx_started`) AS `waiting_trx_age`,
# `r`.`trx_rows_locked` AS `waiting_trx_rows_locked`,
# `r`.`trx_rows_modified` AS `waiting_trx_rows_modified`,
# `r`.`trx_mysql_thread_id` AS `waiting_pid`,
# `sys`.`format_statement`(`r`.`trx_query`) AS `waiting_query`,
# `rl`.`ENGINE_LOCK_ID` AS `waiting_lock_id`,
# `rl`.`LOCK_MODE` AS `waiting_lock_mode`,
# `b`.`trx_id` AS `blocking_trx_id`,
# `b`.`trx_mysql_thread_id` AS `blocking_pid`,
# `sys`.`format_statement`(`b`.`trx_query`) AS `blocking_query`,
# `bl`.`ENGINE_LOCK_ID` AS `blocking_lock_id`,
# `bl`.`LOCK_MODE` AS `blocking_lock_mode`,
# `b`.`trx_started` AS `blocking_trx_started`,
# timediff(now(), `b`.`trx_started`) AS `blocking_trx_age`,
# `b`.`trx_rows_locked` AS `blocking_trx_rows_locked`,
# `b`.`trx_rows_modified` AS `blocking_trx_rows_modified`,
# concat('KILL QUERY ', `b`.`trx_mysql_thread_id`) AS `sql_kill_blocking_query`,
# concat('KILL ', `b`.`trx_mysql_thread_id`) AS `sql_kill_blocking_connection`
# from ((((`performance_schema`.`data_lock_waits` `w` join `information_schema`.`INNODB_TRX` `b` on ((
# convert(`b`.`trx_id` using utf8mb4) =
# cast(`w`.`BLOCKING_ENGINE_TRANSACTION_ID` as char charset utf8mb4)))) join `information_schema`.`INNODB_TRX` `r` on ((
# convert(`r`.`trx_id` using utf8mb4) =
# cast(`w`.`REQUESTING_ENGINE_TRANSACTION_ID` as char charset utf8mb4)))) join `performance_schema`.`data_locks` `bl` on ((`bl`.`ENGINE_LOCK_ID` = `w`.`BLOCKING_ENGINE_LOCK_ID`)))
# join `performance_schema`.`data_locks` `rl` on ((`rl`.`ENGINE_LOCK_ID` = `w`.`REQUESTING_ENGINE_LOCK_ID`)))
# order by `r`.`trx_wait_started`;
```
---
### 1.6.7. MySQL Backup and Recovery
SQL Server backups were covered back in 2017 (diagrams + SQL + demos); interested readers can review that post:
> <https://www.cnblogs.com/dunitian/p/6260481.html>
When people talk about database backups, three kinds usually come to mind: **full backup**, **incremental backup**, and **differential backup**; these are the most common backup approaches in SQL Server.
With MySQL, however, the most commonly used approach is the **physical backup** (copying the physical files), followed by the **logical backup** (generating SQL files).
> PS: logical backups are highly portable but slow to restore; physical backups are less portable (the DB environment/version must match) but restore quickly.
Note: for tables using an engine such as Memory (whose data is not stored on disk), a physical backup can only capture the table structure.
> PS: whether the backup is logical or physical, the binlog is usually needed as well ==> so **back up the binlog** too.
#### 1.6.7.1. Full, Differential, and Incremental Backups
A picture explains differential backups: **each differential backup is taken relative to the last full backup** ==> the backup file keeps growing until the next full backup.
> PS: if a full backup runs on day 1, the differential backup on day 2 captures only the data added on day 2, while the one on day 3 captures everything added on days 2 and 3.

An **incremental** backup **captures only each day's new data** (PS: in SQL Server, full plus incremental backups are what we use most).
> PS: the downside is obvious: restoring requires the latest full backup first, then every incremental backup in turn, so recovery takes considerably longer.

#### 1.6.7.2. Common Tools
| Name | Characteristics |
| ---------- | ----------------------------------------------- |
| mysqldump | ships with MySQL; supports full and conditional backups (single-threaded) |
| mysqlpump | multi-threaded logical backup tool; an enhanced mysqldump |
| xtrabackup | online physical backup tool for InnoDB; supports multi-threading and incremental backups |
The backup account need not be root; **`select, reload, lock tables, replication client, show view, process`** privileges are enough (with `--tab` the `file` privilege is also required).
> PS: `create user 'dbbak'@'%' identified by 'password';` `grant select,reload,lock tables,replication client,show view,process on *.* to 'dbbak'@'%';`
##### 1. mysqldump
mysqldump locks tables while backing them up, so concurrent writes in production will block (the bigger the table, the longer the block).
> PS: it works by **reading the data out** and writing it into a backup SQL file; this process pollutes the InnoDB buffer pool and may evict hot data, so query performance inevitably drops.
First, a look at the help output (most common form: **`mysqldump [OPTIONS] database [tables]`**)

```shell
[OPTIONS]
# Add these if the database uses them
-E, --events           # back up scheduled events
-R, --routines         # back up stored procedures and functions
--triggers             # back up triggers (rarely needed)
-F, --flush-logs       # rotate to a fresh binary log when the backup starts
--master-data=1|2      # write the binlog coordinates (file name, offset) into the SQL (1 emits CHANGE MASTER, 2 comments it out)
--max-allowed-packet=x # maximum packet size (<= the target server's max-allowed-packet)
--single-transaction   # for transactional storage engines (guarantees a consistent snapshot)
-l, --lock-tables      # lock all tables in the given database (read-only during the backup)
-x, --lock-all-tables  # lock the tables of every database
-w, --where=name       # conditional backup (single-table exports only)
--hex-blob             # dump binary columns as hex (avoids text-encoding corruption)
--tab=path             # write two files per table, one for structure and one for data (usually only when mysqldump runs on the same host as mysqld)
```
A further note on `--single-transaction`: it is generally used when every storage engine is InnoDB; the backup opens a transaction to guarantee data consistency.
> PS: if other non-transactional engines are present, the only option is the inefficient `--lock-tables` (one more reason to use InnoDB everywhere).
###### Backup
A quick demo of **backing up a database**: `mysqldump -uroot -p --databases safe_db > safe_db.sql`

PS: the full form: **`mysqldump -uroot -p --master-data=2 --single-transaction --events --routines --triggers db_name > db_name.sql`**

---
Sometimes we also back up only the **tables** that need analysis, then dig into them in a fresh environment (e.g. access IPs, upload records, etc.)
> eg: `mysqldump -uroot -p safe_db users > safe_db_users_tb.sql`

---
In master-replica setups you will also use **`master-data`**; a quick demo shows what it actually is (one look at the screenshot makes it clear)
> eg: `mysqldump -uroot -p --database safe_db --master-data=2 > safe_db2.sql`

Now for the reveal: the only difference is whether the **binlog file name and offset** line is commented out

Now a pitfall I have hit: if you use `--master-data` without `--single-transaction`, the server falls back to `--lock-all-tables` by default
> PS: this is the root reason the big companies all recommend creating every table with InnoDB ==> otherwise production is affected and later maintenance becomes a pain
---
One more scenario: rows are soft-deleted, and sometimes the deleted rows outnumber the live ones; when the database needs shrinking, a **conditional backup** can be used
> eg: `mysqldump -uroot -p safe_db file_records --where "datastatus=1" > safe_db_file_records_tb.sql`
First, the table structure

Export-and-restore comparison (the table in the new database contains only the rows matching the condition)
> PS: `mysql -uroot -p safe_db_bak < safe_db_file_records_tb.sql` (single-table exports only)

PS: whether the old table is dropped depends on the generated SQL ==> `DROP TABLE IF EXISTS file_records;`
---
Finally, one more common scenario: **backing up structure and data separately** (`--tab=path`)
> eg: `mysqldump -uroot -p --master-data=2 --single-transaction --events --routines --triggers --tab=/tmp safe_db`
With `--tab`, the MySQL backup account must have the `file` privilege, and the mysql user must have write access to the specified directory
> PS: no permission ==> `chown mysql:mysql <dir>` or use `/tmp`

For security reasons, files that MySQL writes to tmp on CentOS actually land inside a private temporary folder; if you delete that folder, writes simply fail (restarting MySQL makes it work again)
> PS: I covered this when discussing the DB system tables (`select into outfile` and `secure_file_priv`); see: <https://mp.weixin.qq.com/s/rQnMlllRSlyseEmSOCTVGw>
Words are abstract; one diagram makes it instantly clear
> PS: the root cause is that `select into outfile` can create security problems; CentOS works around them, while MySQL still believes it writes directly into /tmp (more on this in the Restore section)

---
###### Restore
Restoring is fairly simple, but it is single-threaded, so a database with a lot of content may take a while (which is also why physical backups are the usual choice for production incident recovery). Briefly: `mysql -uroot -p safe_db_bak < safe_db2.sql`
> PS: running `source /xxx/safe_db2.sql` inside the mysql command line does the same thing
If the new DB has not been created yet, this errors out
> Quick creation: `mysql -uroot -p -e "create database safe_db_bak charset=utf8"`


A few errors you may run into:
First: `mysqldump: Error: Binlogging on server not active` ==> the binlog is not enabled; if unsure, see my earlier binlog article
> Blog: <https://www.cnblogs.com/dotnetcrazy/p/11029323.html>, WeChat: <https://mp.weixin.qq.com/s/58auoCY0SU7qNjdTa5dc_Q>
Second: `@@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty`, which comes up in cluster setups; briefly, the fixes are:
> 1. run `reset master` in the mysql command line; 2. pass `--set-gtid-purged=off` when running mysqldump (do not export the GTID info)
Lastly, importing the structure-and-data-split backup mentioned above: `source /tmp/xxx.sql;` and `load data infile '/tmp/xxx.txt' into table tb_name;`
Restore the table structure: `source /tmp/file_records.sql;`

Restore the table data: `load data infile '/tmp/file_records.txt' into table file_records;`

If any exception comes up (permissions or otherwise), the article linked above covers these points very thoroughly
> PS: <https://mp.weixin.qq.com/s/rQnMlllRSlyseEmSOCTVGw>
##### 2. mysqlpump
`mysqlpump` ships with MySQL 5.7 and later, needs no separate install, and its **syntax is basically the same as mysqldump's** (think of it as a multi-threaded **mysqldump**)
> PS: it adds zlib and lz4 compression (previously you had to compress the output yourself)
Its shortcomings:
1. Although it backs up in parallel, performance on large tables is poor
2. Before 5.7.11, parallel backup and consistent backup could not be combined (the parallelism and consistency options could not be used together)
3. It likewise pollutes the InnoDB Buffer Pool
### MySQL Advanced
**A high-availability caveat: MariaDB's GTID replication is not compatible with MySQL's, and its multi-master replication also differs from MySQL's**
> GTID means global transaction ID; a GTID is actually composed of a `UUID` plus a `TID`. The UUID is the `unique identifier of a MySQL instance`, and the TID is the `number of transactions already committed` on that instance, increasing monotonically as transactions commit. GTIDs therefore guarantee consistent transaction execution on every MySQL instance (the same transaction is never executed twice, and missing transactions are filled in). eg: `4e659069-3cd8-11e5-9a49-001c4270714e:1-77`
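Given the `UUID:first-last` shape of the example above, a GTID set entry can be pulled apart with a few lines (a sketch only; real GTID sets may contain multiple comma-separated intervals, which this ignores):

```python
# Sketch: split a GTID of the form "<server_uuid>:<first>-<last>" into its
# parts (single-interval case only).
def parse_gtid(gtid):
    uuid, _, rng = gtid.partition(":")
    first, _, last = rng.partition("-")
    return uuid, int(first), int(last or first)

uuid, first, last = parse_gtid("4e659069-3cd8-11e5-9a49-001c4270714e:1-77")
print(uuid, first, last)  # the UUID identifies the instance; 1-77 are committed transaction ids
```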
Log auditing: monitoring user activity
---
Things to consider before upgrading a database:
eg: older MySQL versions replicated a modified JSON value in full; MySQL 8.0 replicates only the modified portion of the JSON data
---
InnoDB topics:
for update: requests a write lock (an exclusive lock; others can no longer lock the rows for reading or writing)
lock in share mode: read lock (shared lock)
- Redo Log: implements transaction durability
  - an in-memory buffer plus the redo log files
  - PS: written sequentially
- Undo Log: rollback of uncommitted transactions
  - random reads and writes
What locks are for:
- implementing transaction isolation
- managing concurrent access to shared resources
Lock granularity:
- table-level locks (low concurrency)
  - exclusive lock: `lock table <table_name> write;`
  - unlock: `unlock tables;`
- row-level locks (high concurrency)
InnoDB lock types:
- shared locks (read locks)
- exclusive locks (write locks)
## 1.7. Data Access
### 1. PyMySQL
Official docs: <https://github.com/PyMySQL/PyMySQL>
### 2. aiomysql
Official docs: <https://github.com/aio-libs/aiomysql>
### By analogy: accessing MySQL from Node.js
Official docs: <https://github.com/mysqljs/mysql>
### 3. ORM
Fundamentals refresher: <https://www.cnblogs.com/dotnetcrazy/p/9333792.html#3.3.元类系列>
#### 3.1. Metaclass Basics
Knowing the most basic point is enough: **the metaclass (`type`) creates class objects (`class`), and class objects create instance objects**
> PS: if no `metaclass` is specified when creating a class, `type` is used; if one is specified, the class is created by the specified `function | class`
Here is a verification example:
```
# Using a function as the metaclass
def meta_fun(class_name, parent_class_tuple, class_attr_dict):
    print(class_name, parent_class_tuple, class_attr_dict, sep="\n")
    # This is what happens by default anyway; it is spelled out here only for
    # demonstration. When metaclass is specified, the custom function or class is used
    return type(class_name, parent_class_tuple, class_attr_dict)

class People(object):
    location = "Earth"

class China(object):
    skin = "yellow"

class Student(People, China, metaclass=meta_fun):
    def __init__(self, name, age, school):
        self.name = name
        self.age = age
        self.school = school

    def show(self):
        print(self.name, self.age, self.school)

def main():
    xiaoming = Student("Xiaoming", 25, "Chinese Academy of Sciences")
    xiaoming.show()

if __name__ == "__main__":
    main()

# Using a class as the metaclass
class MetaClass(type):
    # Runs before __init__ (__new__ creates and returns the object; the singleton pattern relies on this)
    def __new__(cls, class_name, parent_class_tuple, class_attr_dict):
        print(class_name, parent_class_tuple, class_attr_dict, sep="\n")
        return type(class_name, parent_class_tuple, class_attr_dict)
        # Equivalently: type.__new__(cls, x, x, x)
        # return super().__new__(cls, class_name, parent_class_tuple, class_attr_dict)

class People(object):
    location = "Earth"

class China(object):
    skin = "yellow"

class Student(People, China, metaclass=MetaClass):
    def __init__(self, name, age, school):
        self.name = name
        self.age = age
        self.school = school

    def show(self):
        print(self.name, self.age, self.school)

def main():
    xiaoming = Student("Xiaoming", 25, "Chinese Academy of Sciences")
    xiaoming.show()

if __name__ == "__main__":
    main()
```
A quick note: `__new__` plus `__init__` together correspond to a constructor in C#
> **PS: `__new__` and `__init__` receive the same method arguments**
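That PS note can be checked with a tiny class of my own (illustrative, not from the example above): `__new__` runs first, and both methods see the same constructor arguments.

```python
# Sketch: record calls to __new__ and __init__ and confirm they receive
# the same constructor arguments (besides cls/self).
calls = []

class Point:
    def __new__(cls, *args, **kwargs):
        calls.append(("new", args))
        return super().__new__(cls)

    def __init__(self, x, y):
        calls.append(("init", (x, y)))
        self.x, self.y = x, y

p = Point(1, 2)
print(calls)  # [('new', (1, 2)), ('init', (1, 2))]
```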
#### 3.2. Hand-Written ORM
---
### 4.SQLAlchemy
Latest repo: <https://github.com/sqlalchemy/sqlalchemy>
Handy commands:
```shell
Current timestamp: date +%Y%m%d%H%M ==> 201902211407
One week ago:      date -d '-7 day' +%Y%m%d%H%M ==> 201902141407
Three minutes ago: date -d '-3 minutes' +%Y%m%d%H%M ==> 201902211404
Compress a file:   tar -zcvf <time>.sql.tar.gz <database>.sql
Database backup:   mysqldump -uroot -p<password> -B <database> > /bak/<database>.sql
Scheduled tasks:
```
## 1.8. Advanced
One last tip: do not run database backups on the primary (they hurt disk performance), and cancel such scheduled jobs before major promotional events
Architecture topics (sharding databases and tables), monitoring, and tuning
## 1.9. Further Topics
### NoSQL Columnar Storage
```
Baidu Baike: https://baike.baidu.com/item/列式数据库
Row-oriented database
EmpId,Lastname,Firstname,Salary
1,Smith,Joe,40000; 2,Jones,Mary,50000; 3,Johnson,Cathy,44000;
Column-oriented database
EmpIds,Lastnames,Firstnames,Salarys
1,2,3; Smith,Jones,Johnson; Joe,Mary,Cathy; 40000,50000,44000;
```
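The same employee table can be sketched in both layouts in Python; the column layout keeps each attribute contiguous, which is what makes whole-column scans cheap (illustrative code, not tied to any particular column store):

```python
# Row store: one tuple per record
rows = [(1, "Smith", "Joe", 40000),
        (2, "Jones", "Mary", 50000),
        (3, "Johnson", "Cathy", 44000)]

# Column store: one list per attribute
columns = {name: [r[i] for r in rows]
           for i, name in enumerate(["EmpId", "Lastname", "Firstname", "Salary"])}

print(columns["Salary"])       # [40000, 50000, 44000]
print(sum(columns["Salary"]))  # an aggregate touches only this one list
```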
# SEIR-Campus Examples
This file illustrates some of the examples from the corresponding paper on the SEIR-Campus package. Many of the functions here call classes inside the SEIR-Campus package. We encourage you to look inside the package and explore!
The data file that comes with the examples, publicdata.data, is based on a course network published by Weeden and Cornwell at https://osf.io/6kuet/. Course names, as well as some student demographic information and varsity athlete ids, have been assigned randomly and are not accurate representations of student demographics and varsity athletics at Cornell University.
```
from datetime import datetime, timedelta
from PySeirCampus import *
```
### Load the data for the simulations.
```
holiday_list = [(2020, 10, 14)]
holidays = set(datetime(*h) for h in holiday_list)
semester = Semester('publicdata.data', holidays)
```
### Run a first simulation!
```
parameters = Parameters(reps = 10)
run_repetitions(semester, parameters)
```
### Example where no one self-reports or shows symptoms.
```
parameters = Parameters(reps = 10)
parameters.infection_duration = BasicInfectionDuration(1 / 3.5, 1 / 4.5)
run_repetitions(semester, parameters)
```
### Example of infection testing
```
parameters = Parameters(reps = 10)
parameters.intervention_policy = IpWeekdayTesting(semester)
run_repetitions(semester, parameters)
```
### Example with Contact Tracing and Quarantines
```
parameters = Parameters(reps = 10)
parameters.contact_tracing = BasicContactTracing(14)
run_repetitions(semester, parameters)
```
### Example with hybrid classes
```
semester_alt_hybrid = make_alternate_hybrid(semester)
parameters = Parameters(reps = 10)
run_repetitions(semester_alt_hybrid, parameters)
```
### Building social groups.
For proper randomization, these groups should be recreated for each simulation repetition using the preprocess feature in the parameters. However, this can add significant time to the computations.
First, consider generating random clusters of students each day.
```
settings = ClusterSettings(
start_date = min(semester.meeting_dates), end_date = max(semester.meeting_dates),
weekday_group_count = 280, weekday_group_size = 10, weekday_group_time = 120,
weekend_group_count = 210, weekend_group_size = 20, weekend_group_time = 180)
def groups_random(semester):
clusters = make_randomized_clusters(semester.students, settings)
return make_from_clusters(semester, clusters)
parameters = Parameters(reps = 10)
parameters.preprocess = groups_random
run_repetitions(semester, parameters)
```
Next, consider when some students form pairs.
```
def pairing(semester):
clusters, _ = make_social_groups_pairs(semester, 0.25, interaction_time = 1200,
weighted=False)
return make_from_clusters(semester, clusters)
parameters = Parameters(reps = 10)
parameters.preprocess = pairing
run_repetitions(semester, parameters)
```
Next, we consider social interactions among varsity athletes. In the first case, we assume that new social clusters are formed within teams each day.
```
def groups_varsity(semester):
clusters, processed = make_social_groups_varsity(semester, 6, 240, 6, 240)
return make_from_clusters(semester, clusters)
parameters = Parameters(reps = 10)
parameters.preprocess = groups_varsity
run_repetitions(semester, parameters)
```
Finally, we consider varsity teams again but assume that the athletes keep socialization within the same cluster of people each day.
```
def groups_static_varsity(semester):
clusters, processed = make_social_groups_varsity_static(semester, 6, 240)
return make_from_clusters(semester, clusters)
parameters = Parameters(reps = 10)
parameters.preprocess = groups_static_varsity
run_repetitions(semester, parameters)
```
## Other ways to explore
Here are some other ideas for how to play around with the simulations!
Try changing the infection rate. Here we retry the first simulation, but with a 10% lower infection rate.
```
def test_infection_sensitivity(semester, delta):
parameters = Parameters(reps = 10)
parameters.rate *= (1 + delta)
run_repetitions(semester, parameters)
test_infection_sensitivity(semester, -0.10)
```
Suppose that the percentage of students that are asymptomatic is now 50% instead of 75%.
```
def test_asymptomatic(semester, ratio):
parameters = Parameters(reps = 10)
parameters.infection_duration = VariedResponse(1 / 3.5, 1 / 4.5, 1 / 2, ratio)
run_repetitions(semester, parameters)
test_asymptomatic(semester, 0.5)
```
Eliminate external sources of exposure.
```
def test_outsideclass(semester, increase_factor):
parameters = Parameters(reps = 10)
parameters.daily_spontaneous_prob *= increase_factor
run_repetitions(semester, parameters)
test_outsideclass(semester, 0)
```
Change the number of initial infections from 10 to 0 (reasonable if arrival testing is conducted).
```
def test_initialconditions(semester, initial_change):
parameters = Parameters(reps = 10)
parameters.initial_exposure *= initial_change
run_repetitions(semester, parameters)
test_initialconditions(semester, 0)
```
Test students once per week, on Sunday.
```
def test_test_onceperweek(semester, weekday = 0):
parameters = Parameters(reps = 10)
parameters.intervention_policy = IpWeeklyTesting(semester, weekday = weekday)
run_repetitions(semester, parameters)
test_test_onceperweek(semester, weekday = 6)
```
Test students once per week, on Monday.
```
test_test_onceperweek(semester, weekday = 0)
```
```
from pdc_project.settings import *
from pdc_project.encoder import *
from pdc_project.helper import *
from pdc_project.transmitter import *
from pdc_project.modulation import *
import numpy as np
import pylab as pl
import scipy.signal.signaltools as sigtool
import scipy.signal as signal
from numpy.random import sample
from pylab import rcParams
rcParams['figure.figsize'] = 20, 8
```
# Helper
```
A_n = 12000 #noise peak amplitude
N_prntbits = 150 #number of bits to print in plots
def plot_data(y, t, m):
#view the data in time and frequency domain
#calculate the frequency domain for viewing purposes
N_FFT = float(len(y))
f = np.arange(0,Fs/2,Fs/N_FFT)
w = np.hanning(len(y))
y_f = np.fft.fft(np.multiply(y,w))
y_f = 10*np.log10(np.abs(y_f[0:int(N_FFT/2)]/N_FFT))
pl.subplot(2,2,1)
pl.plot(t[0:int(Fs*N_prntbits/Fbit)],m[0:int(Fs*N_prntbits/Fbit)])
pl.xlabel('Time (s)')
pl.ylabel('Frequency (Hz)')
pl.title('Original VCO output versus time')
pl.grid(True)
pl.subplot(2,2,2)
pl.plot(t[0:int(Fs*N_prntbits/Fbit)],m[0:int(Fs*N_prntbits/Fbit)])
pl.xlabel('Time (s)')
pl.ylabel('Amplitude (V)')
pl.title('Amplitude of carrier versus time')
pl.grid(True)
pl.subplot(2,2,3)
pl.plot(t[0:int(Fs*N_prntbits/Fbit)],y[0:int(Fs*N_prntbits/Fbit)])
pl.xlabel('Time (s)')
pl.ylabel('Amplitude (V)')
pl.title('Amplitude of carrier versus time')
pl.grid(True)
pl.subplot(2,2,4)
pl.plot(f[0:int((Fc+Fdev*2)*N_FFT/Fs)],y_f[0:int((Fc+Fdev*2)*N_FFT/Fs)])
pl.xlabel('Frequency (Hz)')
pl.ylabel('Amplitude (dB)')
pl.title('Spectrum')
pl.grid(True)
pl.tight_layout()
pl.show()
```
# Transmitter
First, we encode the data and create the waveform.
```
message = "Hello"
data = encode_data(message)
sig, t, sig_freq = prepare_signal(data, debug=True)
plot_data(sig, t, sig_freq)
```
# Channel
We add some random noise to simulate the channel and add some delay.
```
noise = (np.random.randn(len(sig))+1)*A_n
snr = 10*np.log10(np.mean(np.square(sig)) / np.mean(np.square(noise)))
print("SNR = %fdB" % snr)
delay = np.random.randint(1000, len(noise)//2)  # random delay up to half the signal length
print(delay)
y = np.concatenate([noise[:delay],np.add(sig,noise)])
plot_data(y[delay:], t, sig_freq)
```
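The SNR printed above is the ratio of mean squared amplitudes expressed in decibels; a self-contained sanity check of that formula on toy vectors (no NumPy needed):

```python
import math

# Sketch of the SNR formula used above: 10*log10(signal power / noise power).
# With signal amplitude 3 and noise amplitude 1, the power ratio is 9,
# so the SNR should be 10*log10(9), about 9.54 dB.
def snr_db(sig, noise):
    power = lambda xs: sum(x * x for x in xs) / len(xs)
    return 10 * math.log10(power(sig) / power(noise))

print(round(snr_db([3, -3, 3, -3], [1, -1, 1, -1]), 2))  # 9.54
```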
# Receiver
```
unprocessed_chunks = []
unprocessed_bits = []
RECEIVED_START = False
last_chunk = np.empty(shape=(0))
for data in chunks(list(y), CHUNK):
if not RECEIVED_START:
fft_sample = fft(data)
if content_at_freq(fft_sample, START_FREQ, Fs, Fbit) > START_FREQ_THRESHOLD:
RECEIVED_START = True
unprocessed_chunks.append(last_chunk)
print("Transmission!")
if RECEIVED_START:
unprocessed_chunks.append(data)
last_chunk = data
########
start_signal = get_start_signal()
print(len(start_signal))
sync_signal = get_sync_signal()
SYNCED = False
duration = int(Fs/Fbit)
chunks_buffer = np.empty(shape=(0))
for data in unprocessed_chunks:
chunks_buffer = np.concatenate([chunks_buffer, data])
if not SYNCED and len(chunks_buffer) > len(start_signal) + len(sync_signal) + RESEARCH_WINDOW :
index = synchronise_signal(chunks_buffer)
if index+len(sync_signal) < len(chunks_buffer):
chunks_buffer = chunks_buffer[index + len(sync_signal):]
SYNCED = True
if SYNCED and len(chunks_buffer) > duration:
rest = len(chunks_buffer) % duration
for chunk in chunks(chunks_buffer[:-rest], duration) :
bits = fsk_demodulation(chunk, 2, Fs, Fc, Fdev)
unprocessed_bits = unprocessed_bits + bits
chunks_buffer = chunks_buffer[-rest:]
bits_buffer = []
string = ""
for bit in unprocessed_bits:
bits_buffer.append(bit)
if len(bits_buffer) == 8*FORWARD_ENCODING_LENGTH:
bits = [bit if bit != -1 else 0 for bit in bits_buffer]
message = decode_data(bits)
string += message
bits_buffer = []
string
```
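The receiver loop above iterates over fixed-size chunks of the sample stream via a `chunks` helper imported from the project; a minimal stand-in might look like this (the real helper's handling of the final partial chunk may differ):

```python
# Sketch: yield consecutive fixed-size slices of a sequence; the last slice
# may be shorter when the length is not a multiple of the chunk size.
def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

print(list(chunks(list(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```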
-----------
STAT 453: Deep Learning (Spring 2021)
Instructor: Sebastian Raschka (sraschka@wisc.edu)
Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/
GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21
---
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# LeNet-5 on MNIST
## Imports
```
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
# From local helper files
from helper_evaluation import set_all_seeds, set_deterministic, compute_confusion_matrix
from helper_train import train_model
from helper_plotting import plot_training_loss, plot_accuracy, show_examples, plot_confusion_matrix
from helper_dataset import get_dataloaders_mnist
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
RANDOM_SEED = 123
BATCH_SIZE = 256
NUM_EPOCHS = 15
DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
set_all_seeds(RANDOM_SEED)
#set_deterministic()
##########################
### MNIST DATASET
##########################
resize_transform = torchvision.transforms.Compose(
[torchvision.transforms.Resize((32, 32)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5,), (0.5,))])
train_loader, valid_loader, test_loader = get_dataloaders_mnist(
batch_size=BATCH_SIZE,
validation_fraction=0.1,
train_transforms=resize_transform,
test_transforms=resize_transform)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
print('Class labels of 10 examples:', labels[:10])
break
```
## Model
```
class LeNet5(torch.nn.Module):
def __init__(self, num_classes, grayscale=False):
super().__init__()
self.grayscale = grayscale
self.num_classes = num_classes
if self.grayscale:
in_channels = 1
else:
in_channels = 3
self.features = torch.nn.Sequential(
torch.nn.Conv2d(in_channels, 6, kernel_size=5),
torch.nn.Tanh(),
torch.nn.MaxPool2d(kernel_size=2),
torch.nn.Conv2d(6, 16, kernel_size=5),
torch.nn.Tanh(),
torch.nn.MaxPool2d(kernel_size=2)
)
self.classifier = torch.nn.Sequential(
torch.nn.Linear(16*5*5, 120),
torch.nn.Tanh(),
torch.nn.Linear(120, 84),
torch.nn.Tanh(),
torch.nn.Linear(84, num_classes),
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x, 1)
logits = self.classifier(x)
return logits
model = LeNet5(grayscale=True,
num_classes=10)
model = model.to(DEVICE)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
factor=0.1,
mode='max',
verbose=True)
minibatch_loss_list, train_acc_list, valid_acc_list = train_model(
model=model,
num_epochs=NUM_EPOCHS,
train_loader=train_loader,
valid_loader=valid_loader,
test_loader=test_loader,
optimizer=optimizer,
device=DEVICE,
logging_interval=100)
plot_training_loss(minibatch_loss_list=minibatch_loss_list,
num_epochs=NUM_EPOCHS,
iter_per_epoch=len(train_loader),
results_dir=None,
averaging_iterations=100)
plt.show()
plot_accuracy(train_acc_list=train_acc_list,
valid_acc_list=valid_acc_list,
results_dir=None)
plt.ylim([80, 100])
plt.show()
model.cpu()
show_examples(model=model, data_loader=test_loader)
class_dict = {0: '0',
1: '1',
2: '2',
3: '3',
4: '4',
5: '5',
6: '6',
7: '7',
8: '8',
9: '9'}
mat = compute_confusion_matrix(model=model, data_loader=test_loader, device=torch.device('cpu'))
plot_confusion_matrix(mat, class_names=class_dict.values())
plt.show()
```
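As a sanity check on the architecture above, the `16*5*5` input size of the first `Linear` layer can be traced with plain arithmetic for the 32x32 resized input (no torch required):

```python
# Sketch: feature-map side length through LeNet-5 for a 32x32 input.
def conv(size, k):    # 'valid' convolution, stride 1
    return size - k + 1

def pool(size, k=2):  # non-overlapping pooling
    return size // k

s = pool(conv(32, 5))  # conv1: 32 -> 28, pool: -> 14
s = pool(conv(s, 5))   # conv2: 14 -> 10, pool: -> 5
print(16, s, s, "->", 16 * s * s)  # 16 5 5 -> 400, matching Linear(16*5*5, 120)
```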
```
import cv2
import dlib
import numpy as np
import imutils
import random
from imutils import face_utils
import matplotlib.pyplot as plt
print(cv2.__version__)
%matplotlib inline
def features(img):
#initialize facial detector
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# gray = cv2.resize(gray,(300,300))
# img = cv2.resize(img,(300,300))
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('./shape_predictor_68_face_landmarks.dat')
rects = detector(gray,1)
points = []
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
print(len(rects))
for (i,rect) in enumerate(rects):
shape = predictor(gray,rect)
shape = face_utils.shape_to_np(shape)
(x,y,w,h) = face_utils.rect_to_bb(rect)
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
for (x,y) in shape:
cv2.circle(img,(x,y),2,(0,0,255),-1)
points.append((x,y))
# points = np.asarray(list([p.x, p.y] for p in shape.parts()), dtype=np.int)
return img,points
def rect_contains(rect, point) :
if point[0] < rect[0] :
return False
elif point[1] < rect[1] :
return False
elif point[0] > rect[2] :
return False
elif point[1] > rect[3] :
return False
return True
# Draw a point
def draw_point(img, p, color ) :
cv2.circle( img, p, 2, color, cv2.FILLED, cv2.LINE_AA, 0 )
# Draw delaunay triangles
def draw_delaunay(img, delaunay_color):
# Define colors for drawing.
delaunay_color = (255,255,255)
points_color = (0, 0, 255)
img_orig = img.copy()
img,points = features(img)
# print(len(points))
# Rectangle to be used with Subdiv2D
size = img.shape
rect = (0, 0, size[1], size[0])
# Create an instance of Subdiv2D
subdiv = cv2.Subdiv2D(rect);
# Insert points into subdiv
for p in points :
subdiv.insert(p)
triangleList = subdiv.getTriangleList();
size = img.shape
r = (0, 0, size[1], size[0])
triangle_points = []
# print(len(triangleList))
for t in triangleList :
    pt1 = (int(t[0]), int(t[1]))
    pt2 = (int(t[2]), int(t[3]))
    pt3 = (int(t[4]), int(t[5]))
    # Only draw (and keep) triangles whose vertices all lie inside the image
    if rect_contains(r, pt1) and rect_contains(r, pt2) and rect_contains(r, pt3) :
        cv2.line(img, pt1, pt2, delaunay_color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt2, pt3, delaunay_color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt3, pt1, delaunay_color, 1, cv2.LINE_AA, 0)
        triangle_points.append(t)
for p in points :
draw_point(img, p, (0,0,255))
return img,triangle_points
if __name__ == '__main__':
# Read in the image.
source = cv2.imread("../../TestSet/stark.jpg");
target = cv2.imread("../../TestSet/Scarlett.jpg");
# plt.rcParams["figure.figsize"] = (10,10)
# plt.imshow(source)
# plt.rcParams["figure.figsize"] = (10,10)
# plt.imshow(target)
# Draw delaunay triangles
source_img,source_points = draw_delaunay( source, (255, 255, 255) );
plt.figure(figsize=(20,10))
# plt.subplot(121)
plt.imshow(source_img)
target_img,target_points = draw_delaunay( target, (255, 255, 255) );
plt.figure(figsize=(20,10))
# plt.subplot(122)
plt.imshow(target_img)
```
### Telecom Customer Churn Prediction
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
```
### Reading the data
The dataset contains the following information:
1. Customers who left within the last month – the column is called Churn
2. Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
3. Customer account information – how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
4. Demographic info about customers – gender, age range, and if they have partners and dependents
```
telecom_data = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn.csv')
telecom_data.head().T
```
### Checking the missing values from dataset
```
telecom_data.dtypes
telecom_data.shape
# Converting Total Charges to a numerical data type.
telecom_data.TotalCharges = pd.to_numeric(telecom_data.TotalCharges, errors='coerce')
telecom_data.isnull().sum()
### 11 missing values were found for the TotalCharges and will be removed from our dataset
#Removing missing values
telecom_data.dropna(inplace = True)
#Remove customer IDs from the data set
df2 = telecom_data.set_index('customerID')
#Converting the predictor variable into a binary numeric variable
df2['Churn'].replace(to_replace='Yes', value=1, inplace=True)
df2['Churn'].replace(to_replace='No', value=0, inplace=True)
#Let's convert all the categorical variables into dummy variables
df_dummies = pd.get_dummies(df2)
df_dummies.head()
df2.head()
## Evaluating the correlation of "Churn" with other variables
plt.figure(figsize=(15,8))
df_dummies.corr()['Churn'].sort_values(ascending = False).plot(kind = 'bar')
```
As depicted, month-to-month contracts, lack of online security, and lack of tech support appear most positively correlated with churn, while tenure and two-year contracts are negatively correlated with churn.
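The ranking behind that bar chart is simply a sort of the correlation series; a sketch on hypothetical correlation values (these numbers are illustrative, not computed from the dataset):

```python
# Sketch: order features by their correlation with Churn, as the plot does.
corr = {"Contract_Month-to-month": 0.40,
        "OnlineSecurity_No": 0.34,
        "TechSupport_No": 0.33,
        "Contract_Two year": -0.30,
        "tenure": -0.35}

ranked = sorted(corr.items(), key=lambda kv: kv[1], reverse=True)
print([name for name, _ in ranked[:3]])  # strongest positive correlates of churn
```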
### Evaluating the Churn Rate
```
colors = ['g','r']
ax = (telecom_data['Churn'].value_counts()*100.0 /len(telecom_data)).plot(kind='bar',
stacked = True,
rot = 0,
color = colors)
ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.set_ylabel('% Customers')
ax.set_xlabel('Churn')
ax.set_title('Churn Rate')
# create a list to collect the plt.patches data
totals = []
# find the values and append to list
for i in ax.patches:
totals.append(i.get_width())
# set individual bar labels using above list
total = sum(totals)
for i in ax.patches:
# get_width pulls left or right; get_y pushes up or down
ax.text(i.get_x()+.15, i.get_height()-4.0, \
str(round((i.get_height()/total), 1))+'%',
fontsize=12,
color='white',
weight = 'bold')
```
Here we can see that roughly 73% of customers stayed with the company while about 27% churned.
### Churn by Contract Type
As shown in the correlation plot, customers on a month-to-month plan have a high potential of churning.
```
contract_churn = telecom_data.groupby(['Contract','Churn']).size().unstack()
ax = (contract_churn.T*100.0 / contract_churn.T.sum()).T.plot(kind='bar',
width = 0.3,
stacked = True,
rot = 0,
figsize = (8,6),
color = colors)
ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.legend(loc='best',prop={'size':12},title = 'Churn')
ax.set_ylabel('% Customers')
ax.set_title('Churn by Contract Type')
# Code to add the data labels on the stacked bar chart
for p in ax.patches:
width, height = p.get_width(), p.get_height()
x, y = p.get_xy()
ax.annotate('{:.0f}%'.format(height), (p.get_x()+.25*width, p.get_y()+.4*height),
color = 'white',
weight = 'bold')
```
### Churn by Monthly Charges
In this plot, we can see that customers with higher monthly charges tend to churn more.
```
ax = sns.kdeplot(telecom_data.MonthlyCharges[(telecom_data["Churn"] == 'No') ],
color="Red", shade = True)
ax = sns.kdeplot(telecom_data.MonthlyCharges[(telecom_data["Churn"] == 'Yes') ],
ax =ax, color="Blue", shade= True)
ax.legend(["Not Churn","Churn"],loc='upper right')
ax.set_ylabel('Density')
ax.set_xlabel('Monthly Charges')
ax.set_title('Distribution of monthly charges by churn')
```
## Applying Machine Learning Algorithms
### Logistic Regression
```
# We will use the data frame where we had created dummy variables
y = df_dummies['Churn'].values
X = df_dummies.drop(columns = ['Churn'])
# Scaling all the variables to a range of 0 to 1
from sklearn.preprocessing import MinMaxScaler
features = X.columns.values
scaler = MinMaxScaler(feature_range = (0,1))
scaler.fit(X)
X = pd.DataFrame(scaler.transform(X), index= df_dummies.index)
X.columns = features
X.head()
# Create Train & Test Data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
# Running logistic regression model
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
result = model.fit(X_train, y_train)
from sklearn import metrics
prediction_test = model.predict(X_test)
# Print the prediction accuracy
print (metrics.accuracy_score(y_test, prediction_test))
# To get the weights of all the variables
weights = pd.Series(model.coef_[0],
index=X.columns.values)
weights.sort_values(ascending = False)
```
From the logistic regression model we can see that having a two-year contract and DSL internet service reduces the churn rate; tenure and a two-year contract are associated with the lowest churn.
On the other hand, total charges, month-to-month contracts, and fiber-optic internet service carry the highest churn weights in the logistic regression model.
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
model_rf = RandomForestClassifier(n_estimators=1000 , oob_score = True, n_jobs = -1,
random_state =50, max_features = "auto",
max_leaf_nodes = 30)
model_rf.fit(X_train, y_train)
# Make predictions
prediction_test = model_rf.predict(X_test)
print (metrics.accuracy_score(y_test, prediction_test))
importances = model_rf.feature_importances_
weights = pd.Series(importances,
index=X.columns.values)
weights.sort_values()[-10:].plot(kind = 'barh')
```
Based on the Random Forest model, monthly contract, tenure, and total charges are considered as the most important factors for churning.
### Support Vector Machine (SVM)
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=99)
from sklearn.svm import SVC
# Use a standalone variable rather than attaching the SVC to the logistic model object
model_svm = SVC(kernel='linear')
model_svm.fit(X_train, y_train)
preds = model_svm.predict(X_test)
metrics.accuracy_score(y_test, preds)
```
The support vector machine shows better performance in terms of accuracy compared to the logistic regression and random forest models.
```
Churn_pred = preds[preds==1]
Churn_X_test = X_test[preds==1]
print("Number of customers predicted to be churner is:", Churn_X_test.shape[0], " out of ", X_test.shape[0])
Churn_X_test.head()
```
This is the list of target customers, who haven't churned but are likely to.
## Confusion matrix definition
```
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
```
## Compute confusion matrix for SVM
```
from sklearn.metrics import confusion_matrix
import itertools
cnf_matrix = confusion_matrix(y_test, preds)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
class_names = ['Not churned','churned']
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
#plt.figure()
#plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
# title='Normalized confusion matrix')
#plt.show()
```
In the first column, the model predicted 1,117 customers as not churned. Of these, 953 were correctly labelled (they did not churn) and 164 were mislabelled (they actually churned).
Similarly, in the second column, the model predicted 290 customers as churned. Of these, 201 were correctly labelled (they churned) and 89 were mislabelled (they did not churn).
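Accuracy alone hides the asymmetry between the two columns; precision and recall can be read directly off the 2x2 matrix. A quick sketch using the counts quoted above:

```python
import numpy as np

# Confusion matrix reported above (rows = true class, columns = predicted class)
cm = np.array([[953,  89],
               [164, 201]])

tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)   # of customers predicted to churn, how many really did
recall    = tp / (tp + fn)   # of customers who really churned, how many we caught
print(f"precision={precision:.3f}, recall={recall:.3f}")
```

For a retention campaign, recall on the churned class is often the metric that matters most, since a missed churner is a lost customer.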
## Applying Artificial Intelligence Methods
### Here we use the Keras library with a TensorFlow backend to build a deep neural network model.
```
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, TimeDistributed, Bidirectional
from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.regularizers import l1, l2, l1_l2
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
X_train.shape
```
### Designing the Model
```
model = Sequential()
model.add(Dense(10, input_dim=X_train.shape[1], kernel_initializer='normal', activation= 'relu'))
model.add(Dense(1, kernel_initializer='normal', activation= 'sigmoid'))
model.summary()
```
### Compiling the Model and Fit It
```
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath="weights.hdf5", verbose=0, save_best_only=True)
history = model.fit(X_train, y_train, epochs=3000, batch_size=100, validation_split=.30, verbose=0,callbacks=[checkpointer])
model.load_weights('weights.hdf5')
```
### Summarize history
```
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
### Evaluating the Model
```
prediction_test_Dense= model.predict_classes(X_test)
cm = confusion_matrix(y_test, prediction_test_Dense)
# Plot non-normalized confusion matrix
plt.figure()
class_names = ['Not churned','churned']
plot_confusion_matrix(cm, classes=class_names,
title='Confusion matrix, without normalization')
print('''In the first column, the confusion matrix predicted''', (cm[0,0]+cm[1,0]), '''customers with the not-churned label.
Out of this number,''', (cm[0,0]), '''customers were predicted with the true label and did not churn
and''', (cm[1,0]), '''customers were predicted with a false label and churned.''')
print('''Similarly, in the second column, the confusion matrix predicted''', (cm[0,1]+cm[1,1]), '''customers with the churned label.
Out of this number,''', (cm[1,1]), '''customers were predicted with the true label
and churned and''', (cm[0,1]), '''customers were predicted with a false label and did not churn.''')
```
## Analyse time to churn based on features
```
# Using the lifelines library
import lifelines
from lifelines import KaplanMeierFitter
kmf = KaplanMeierFitter()
T = telecom_data['tenure']
# Converting the target variable into a binary numeric variable
telecom_data['Churn'].replace(to_replace='Yes', value=1, inplace=True)
telecom_data['Churn'].replace(to_replace='No', value=0, inplace=True)
E = telecom_data['Churn']
kmf.fit(T, event_observed=E)
```
### Plot Survival
```
kmf.survival_function_.plot()
plt.title('Survival function of Telecom customers');
max_life = T.max()
ax = plt.subplot(111)
telecom_data['MultipleLines'].replace(to_replace='Yes', value='MultipleLines', inplace=True)
telecom_data['MultipleLines'].replace(to_replace='No', value='SingleLine', inplace=True)
feature_columns = ['InternetService', 'gender', 'Contract', 'PaymentMethod', 'MultipleLines']
for feature in feature_columns:
    feature_types = telecom_data[feature].unique()
    for i, feature_type in enumerate(feature_types):
        ix = telecom_data[feature] == feature_type
        kmf.fit(T[ix], E[ix], label=feature_type)
        kmf.plot(ax=ax, legend=True, figsize=(12, 6))
        plt.title(feature_type)
        plt.xlim(0, max_life)
        if i == 0:
            plt.ylabel(feature + ' to churn after $n$ months')
plt.tight_layout()
```
This plot shows that month-to-month contracts and electronic-check payment carry a very high potential for churn, while customers on two-year contracts or with no phone service mostly stay with the company for longer periods.
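As a sketch of what `KaplanMeierFitter` estimates under the hood, here is the product-limit formula, S(t) = Π (1 − d_i / n_i) over event times t_i ≤ t, implemented with plain numpy on toy tenure data (not the telecom dataset):

```python
import numpy as np

def kaplan_meier(durations, events):
    """Product-limit estimate: at each event time t_i, multiply the running
    survival by (1 - d_i / n_i), where d_i = events at t_i and
    n_i = subjects still at risk at t_i. Censored rows only shrink n_i."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=bool)
    times = np.unique(durations[events])
    surv, s = [], 1.0
    for t in times:
        n_at_risk = np.sum(durations >= t)
        d = np.sum((durations == t) & events)
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return times, np.asarray(surv)

# Toy tenures in months with churn indicators (1 = churned, 0 = censored)
times, surv = kaplan_meier([1, 2, 2, 3, 5, 8], [1, 1, 0, 1, 0, 1])
print(dict(zip(times, np.round(surv, 3))))
```

On the real data, `kmf.fit(T, event_observed=E)` computes exactly this curve, with `Churn` playing the role of the event indicator.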
```
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
```
# Implementing activation functions with numpy
## Sigmoid
```
x = np.arange(-10,10,0.1)
z = 1/(1 + np.exp(-x))
plt.figure(figsize=(20, 10))
plt.plot(x, z, label = 'sigmoid function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
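One caveat with the textbook formula above: `np.exp(-x)` overflows for large negative `x`. A commonly used numerically stable variant splits the input by sign so the exponent is never a large positive number:

```python
import numpy as np

def stable_sigmoid(x):
    """Sigmoid without overflow: use exp(-x) where x >= 0 and exp(x)
    where x < 0, so np.exp never sees a large positive argument."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # no overflow warning
```

For the plotting range used here (−10 to 10) the naive formula is fine; the stable version matters once inputs reach the hundreds.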
## Tanh
```
x = np.arange(-10,10,0.1)
z = (np.exp(x) - np.exp(-x))/(np.exp(x) + np.exp(-x))
plt.figure(figsize=(20, 10))
plt.plot(x, z, label = 'Tanh function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## Relu
```
x = np.arange(-10,10,0.1)
z = np.maximum(0, x)
plt.figure(figsize=(20, 10))
plt.plot(x, z, label = 'ReLU function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## Leaky ReLU
```
x = np.arange(-10,10,0.1)
z = np.maximum(0.1*x, x)
plt.figure(figsize=(20, 10))
plt.plot(x, z, label = 'LReLU function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## ELU
```
x1 = np.arange(-10,0,0.1)
x2 = np.arange(0,10,0.1)
alpha = 1.0  # standard ELU uses alpha = 1; the 1.673... constant belongs to SELU below
z1 = alpha * (np.exp(x1) - 1)
z2 = x2
plt.figure(figsize=(20, 10))
plt.plot(np.append(x1, x2), np.append(z1, z2), label = 'ELU function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## SELU
```
x1 = np.arange(-10,0,0.1)
x2 = np.arange(0,10,0.1)
alpha = 1.67326324
scale = 1.05070098
z1 = scale * alpha * (np.exp(x1) - 1)
z2 = scale * x2
plt.figure(figsize=(20, 10))
plt.plot(np.append(x1, x2), np.append(z1, z2), label = 'SELU function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## Swish
```
x = np.arange(-10,10,0.1)
z = x * (1/(1 + np.exp(-x)))
plt.figure(figsize=(20, 10))
plt.plot(x, z, label = 'Swish function', lw=5)
plt.axvline(lw=0.5, c='black')
plt.axhline(lw=0.5, c='black')
plt.box(on=None)
plt.legend()
plt.show()
```
## Chapter 11
```
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import pandas as pd
X,y = load_iris(return_X_y=True)
y = pd.get_dummies(y).values
y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation="relu", input_shape=X_train.shape[1:]),
tf.keras.layers.Dense(10, kernel_initializer="he_normal"),
tf.keras.layers.LeakyReLU(alpha=0.2),
tf.keras.layers.Dense(10, activation="selu", kernel_initializer="lecun_normal"),
tf.keras.layers.Dense(3, activation="softmax")
])
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
model.summary()
train, test = tf.keras.datasets.mnist.load_data()
X_train, y_train = train
X_test, y_test = test
model2 = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10, activation="softmax")
])
model2.summary()
[(var.name, var.trainable) for var in model2.layers[1].variables]
[(var.name, var.trainable) for var in model2.layers[2].variables]
```
<img src="https://raw.githubusercontent.com/google/jax/main/images/jax_logo_250px.png" width="300" height="300" align="center"/><br>
I hope you all enjoyed the first JAX tutorial where we discussed **DeviceArray** and some other fundamental concepts in detail. This is the fifth tutorial in this series, and today we will discuss another important concept specific to JAX. If you haven't looked at the previous tutorials, I highly suggest going through them once. Here are the links:
1. [TF_JAX_Tutorials - Part 1](https://www.kaggle.com/aakashnain/tf-jax-tutorials-part1)
2. [TF_JAX_Tutorials - Part 2](https://www.kaggle.com/aakashnain/tf-jax-tutorials-part2)
3. [TF_JAX_Tutorials - Part 3](https://www.kaggle.com/aakashnain/tf-jax-tutorials-part3)
4. [TF_JAX_Tutorials - Part 4 (JAX and DeviceArray)](https://www.kaggle.com/aakashnain/tf-jax-tutorials-part-4-jax-and-devicearray)
Without any further delay, let's jump in and talk about **pure functions** along with code examples
# Pure Functions
According to [Wikipedia](https://en.wikipedia.org/wiki/Pure_function), a function is pure if:
1. The function returns the same values when invoked with the same inputs
2. There are no side effects observed on a function call
Although the definition looks pretty simple, it can be hard to comprehend without examples and can sound vague (especially to beginners). The first point is clear, but what does a **`side effect`** mean? What counts as one? What can you do to avoid side effects?
Though I could state all the rules here and you could try to "fit" them in your head to make sure you never write anything with a side effect, I prefer working through examples so that everyone understands the "why" more easily. So, let's take a few examples and see some common mistakes that create side effects.
```
import numpy as np
import jax
import jax.numpy as jnp
from jax import grad
from jax import jit
from jax import lax
from jax import random
%config IPCompleter.use_jedi = False
```
# Case 1 : Globals
```
# A global variable
counter = 5
def add_global_value(x):
    """
    A function that relies on the global variable `counter` for
    doing some computation.
    """
    return x + counter
x = 2
# We will `JIT` the function so that it runs as a JAX transformed
# function and not like a normal python function
y = jit(add_global_value)(x)
print("Global variable value: ", counter)
print(f"First call to the function with input {x} with global variable value {counter} returned {y}")
# Someone updated the global value later in the code
counter = 10
# Call the function again
y = jit(add_global_value)(x)
print("\nGlobal variable changed value: ", counter)
print(f"Second call to the function with input {x} with global variable value {counter} returned {y}")
```
Wait...What??? What just happened?
When you `jit` a function, JAX tracing kicks in. On the first call the results are as expected, but subsequent calls return the **`cached`** results unless:
1. The type of the argument has changed or
2. The shape of the argument has changed
Let's see it in action
```
# Change the type of the argument passed to the function
# In this case we will change int to float (2 -> 2.0)
x = 2.0
y = jit(add_global_value)(x)
print(f"Third call to the function with input {x} with global variable value {counter} returned {y}")
# Change the shape of the argument
x = jnp.array([2])
# Changing global variable value again
counter = 15
# Call the function again
y = jit(add_global_value)(x)
print(f"Fourth call to the function with input {x} with global variable value {counter} returned {y}")
```
What if I don't `jit` my function in the first place? ¯\_(ツ)_/¯ <br>
Let's take an example of that as well. We are in no hurry!
```
def apply_sin_to_global():
    return jnp.sin(jnp.pi / counter)
y = apply_sin_to_global()
print("Global variable value: ", counter)
print(f"First call to the function with global variable value {counter} returned {y}")
# Change the global value again
counter = 90
y = apply_sin_to_global()
print("\nGlobal variable value: ", counter)
print(f"Second call to the function with global variable value {counter} returned {y}")
```
*`Hooraaayy! Problem solved! You can use JIT, I won't!`* If you are thinking in this direction, then it's time to remember two things:
1. We are using JAX so that we can transform our native Python code to make it run **faster**
2. We can achieve 1) if we compile (using it loosely here) the code so that it can run on **XLA**, the compiler used by JAX
Hence, avoid using `globals` in your computation because globals introduce **impurity**
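The usual fix is to pass the state in explicitly, so the dependency becomes part of the function's inputs. A plain-Python sketch (under `jit`, `counter` would then be a traced argument, so updated values are picked up correctly):

```python
# Pure rewrite of add_global_value: the counter is now an explicit argument,
# so the same inputs always produce the same output.
def add_value(x, counter):
    return x + counter

print(add_value(2, 5))    # 7
print(add_value(2, 10))   # 12 -- the state change is now visible in the inputs
```

Nothing outside the argument list can change the result, which is exactly the property tracing relies on.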
# Case 2: Iterators
We will take a very simple example to see the side effect. We will add the numbers `0` through `4`, but in two different ways:
1. Passing an actual array of numbers to a function
2. Passing an **`iterator`** object to the same function
```
# A function that takes an actual array object
# and add all the elements present in it
def add_elements(array, start, end, initial_value=0):
    def loop_fn(i, val):
        return val + array[i]
    return lax.fori_loop(start, end, loop_fn, initial_value)
# Define an array object
array = jnp.arange(5)
print("Array: ", array)
print("Adding all the array elements gives: ", add_elements(array, 0, len(array), 0))
# Redefining the same function but this time it takes an
# iterator object as an input
def add_elements(iterator, start, end, initial_value=0):
    def loop_fn(i, val):
        return val + next(iterator)
    return lax.fori_loop(start, end, loop_fn, initial_value)
# Define an iterator
iterator = iter(np.arange(5))
print("\n\nIterator: ", iterator)
print("Adding all the elements gives: ", add_elements(iterator, 0, 5, 0))
```
Why did the result turn out to be zero in the second case?<br>
This is because an `iterator` introduces an **external state** to retrieve the next value.
# Case 3: IO
Let's take one more example, a very **unusual** one that can turn your functions impure.
```
def return_as_it_is(x):
    """Returns the same element, doing nothing else. A function that isn't
    using `globals` or any `iterator`.
    """
    print("I have received the value")
    return x
# First call to the function
print(f"Value returned on first call: {jit(return_as_it_is)(2)}\n")
# Second call to the function with a different value
print(f"Value returned on second call: {jit(return_as_it_is)(4)}")
```
Did you notice that? The statement **`I have received the value`** didn't get printed on the subsequent call. <br>
At this point, most people would literally say `Well, this is insane! I am not using globals, no iterators, nothing at all and there is still a side effect? How is that even possible?`
The thing is that your function still **depends** on an external state: the **print** statement! It uses the standard output stream. What if that stream isn't available on a subsequent call for whatever reason? That would violate the first principle, returning the same thing when called with the same inputs.
In a nutshell, to keep a function pure, don't use anything that depends on an **external state**. The word **external** is important because you can use stateful objects internally and still keep the function pure. Let's take an example of this as well.
# Pure functions with stateful objects
```
# Function that uses stateful objects internally and is still pure
def pure_function_with_stateful_objects(array):
    array_dict = {}
    for i in range(len(array)):
        array_dict[i] = array[i] + 10
    return array_dict

array = jnp.arange(5)

# First call to the function
print(f"Value returned on first call: {jit(pure_function_with_stateful_objects)(array)}")

# Second call to the function with the same value
print(f"\nValue returned on second call: {jit(pure_function_with_stateful_objects)(array)}")
```
So, to keep things **pure**, remember not to use anything inside a function that depends on any **external state**, including IO. If you do, transforming the function can give unexpected results, and you will waste a lot of time debugging when the transformed function returns a cached result, which is ironic, because pure functions are supposed to be easy to debug.
# Why pure functions?
A natural question is why JAX insists on pure functions in the first place, when no other framework (TensorFlow, PyTorch, MXNet, etc.) does. <br>
You may also be thinking: using pure functions is such a headache, and I never have to deal with these nuances in TF/PyTorch.
If so, you aren't alone, but before jumping to conclusions, consider the advantages of relying on pure functions.
### 1. Easy to debug
The fact that a function is pure implies that you don't need to look beyond the scope of the pure function. All you need to focus on is the arguments, the logic inside the function, and the returned value. That's it! Same inputs => Same outputs
### 2. Easy to parallelize
Let's say you have three functions A, B, and C and there is a computation involved like this one:<br>
<div style="font-style: italic; text-align: center;">
`res = A(x) + B(y) + C(z)` <br>
</div>
Because all the functions are pure, you don't have to worry about dependencies on an external or shared state. There is no dependency between A, B, and C in terms of how they are executed: each function receives its arguments and returns a value that depends only on them. Hence you can easily offload the computation to many threads, cores, or devices. The only thing the compiler has to ensure is that the results of all three functions are available before the final sum.
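The idea can be sketched with the standard library; because A, B, and C share no state, submitting them to a pool in any order is safe:

```python
from concurrent.futures import ThreadPoolExecutor

# Three pure functions: no shared state, so execution order cannot matter
def A(x): return x * x
def B(y): return y + 10
def C(z): return z - 1

with ThreadPoolExecutor() as pool:
    fa, fb, fc = pool.submit(A, 2), pool.submit(B, 3), pool.submit(C, 4)
    res = fa.result() + fb.result() + fc.result()
print(res)  # 4 + 13 + 3 = 20
```

JAX does something analogous at a lower level when it partitions a traced computation across devices.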
### 3. Caching or Memoization
We saw in the above examples that once we compile a pure function, the function will return a cached result on the subsequent calls. We can cache the results of the transformed functions to make the whole program a lot faster
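Outside of JAX, the same property is what makes classic memoization safe; a small stdlib sketch (the call counter is deliberately impure, kept only to show how many real evaluations happen):

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def slow_square(x):
    calls["n"] += 1        # impure counter, for demonstration only
    return x * x

print(slow_square(4), slow_square(4), slow_square(5))
print("real evaluations:", calls["n"])  # 2 -- the repeated call hit the cache
```

Caching is only sound because `slow_square`'s result depends on nothing but `x`; memoizing an impure function would silently return stale values.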
### 4. Functional Composition
When functions are pure, you can `chain` them to solve complex things in a much easier way. For example, in JAX you will see these patterns very often:
<div style="font-style: italic; text-align: center;">
jit(vmap(grad(..)))
</div>
### 5. Referential transparency
An expression is called referentially transparent if it can be replaced with its corresponding value (and vice-versa) without changing the program's behavior. This can only be achieved when the function is pure. It is especially helpful when doing algebra (which is all we do in ML). For example, consider the expression<br>
<div style="font-style: italic; text-align: center;">
x = 5 <br>
y = 5 <br>
z = x + y <br>
</div>
Now you can replace `x + y` with `z` anywhere in your code, considering the value of `z` is coming from a pure function
That's it for Part-5! We will look into other building blocks in the next few chapters, and then we will dive into building neural networks in JAX!
**References:**<br>
1. https://jax.readthedocs.io/en/latest/
2. https://alvinalexander.com/scala/fp-book/benefits-of-pure-functions/
3. https://www.sitepoint.com/what-is-referential-transparency/#referentialtransparencyinmaths
# System and Python setup
## Google Colab prerequisites
Make sure to activate GPU acceleration in Google Colab under `Runtime/Change runtime type/GPU`.
This Notebook follows the [tf2-object-detection-api-tutorial](https://github.com/abdelrahman-gaber/tf2-object-detection-api-tutorial) from [Abdelrahman G. Abubakr](https://github.com/abdelrahman-gaber). It is adjusted to train neural networks for object detection (here: stairs) in floorplans. The adjusted repo is [floorplan-object-detection](https://github.com/araccaine/floorplan-object-detection) by me.
It uses models from the [TensorFlow 2 Detection Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md)
Data for training and testing was taken from [CubiCasa5k](https://github.com/CubiCasa/CubiCasa5k) dataset and annotated with [roboflow.com](https://roboflow.com/). Roboflow provides lots of export formats. I used the `.tfrecord` format for tensorflow 2.
## Update system
```
%%bash
# Update system packages
sudo apt update
sudo apt -y upgrade
# check Python Version
python3 -V
%%bash
# Installing pip
apt install -y python3-pip
%%bash
apt install -y build-essential libssl-dev libffi-dev python3-dev libgtk2.0-dev pkg-config
```
# Setup Tensorflow 2 Object Detection API
```
import os
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')
elif not pathlib.Path('models').exists():
    !git clone --depth 1 https://github.com/tensorflow/models
%%bash
apt install protobuf-compiler
%%bash
cd models/research/
# openCV
# pip install opencv-python
pip install opencv-contrib-python
# compile protoc
protoc object_detection/protos/*.proto --python_out=.
# Install Tensorflow ObjectDetection API
cp object_detection/packages/tf2/setup.py .
python -m pip install .
```
# Test the installation
```
%%bash
python models/research/object_detection/builders/model_builder_tf2_test.py
```
# Install scripts and pre-trained model
```
%%bash
rm floorplan-object-detection -rf
git clone https://github.com/araccaine/floorplan-object-detection.git
```
# Train ObjectDetectionModel with custom dataset
## get own dataset
```
%%bash
# get your custom dataset from roboflow.com
cd floorplan-object-detection/data/
curl -L "https://app.roboflow.com/ds/oHCw8RNYQw?key=mw7ZcRlCKz" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
```
## start training
Before training, check in the `.config` file:
* number of classes in `model: num_classes`
* all paths for `label_map_path` and `input_path`
```
%%bash
# rm /content/floorplan-object-detection/models/ssd_mobilenet_floorplan/c*
%%bash
cd floorplan-object-detection/train_tf2/
bash start_train.sh
```
# Freeze (save) the trained model
```
%%bash
cd floorplan-object-detection/train_tf2/
bash export_model.sh
```
# Test model
```
%%bash
cd floorplan-object-detection/
python detect_objects.py --threshold 0.4 --model_path models/ssd_mobilenet_floorplan/frozen_model/saved_model --path_to_labelmap data/train/stairs_label_map.pbtxt --images_dir data/samples/images/ --save_output
```
# Export
After training and testing, you may want to compress the output folder and download it.
## Model export
```
%%bash
cd floorplan-object-detection/models/ssd_mobilenet_floorplan/
# compress and zip
tar cvzf frozen_model.tar.gz frozen_model/
# show content
# tar tvzf frozen_model.tar.gz
```
## Output export
```
%%bash
cd floorplan-object-detection/data/samples/
# compress and zip
tar cvzf output.tar.gz output/
# show content
# tar tvzf output.tar.gz
```
```
# necessary libraries
import os
import pandas as pd
# visualizations libraries
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.image import imread
%matplotlib inline
# tensorflow libraries
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg19 import VGG19
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras import optimizers
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
# model evaluation libraries
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from mlxtend.plotting import plot_confusion_matrix
base_dir = r"D:\TbDB\TB_D"
os.listdir(base_dir)
tuberculosis_data= r"D:\TbDB\TB_D\Tuberculosis"
print("tuberculosis images :\n" ,os.listdir(tuberculosis_data)[:5])
normal_data= r"D:\TbDB\TB_D\Normal"
print("\nnormal images :\n" ,os.listdir(normal_data)[:5])
print("no. of tuberculosis images :" ,len(os.listdir(tuberculosis_data)))
print("\nno. of normal images :" ,len(os.listdir(normal_data)))
nrows= 4
ncols= 4
pic_index= 0
fig= plt.gcf()
fig.set_size_inches(ncols*4, nrows*4)
pic_index+=8
tuberculosis_img = [os.path.join(tuberculosis_data, image) for image in os.listdir(tuberculosis_data)[pic_index-8:pic_index]]
normal_img = [os.path.join(normal_data, image) for image in os.listdir(normal_data)[pic_index-8:pic_index]]
for i, image_path in enumerate(tuberculosis_img + normal_img):
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')
    img = mpimg.imread(image_path)
    plt.imshow(img)

plt.show()
image = imread(r"D:\TbDB\TB_D\Tuberculosis\Tuberculosis-197.png")
image.shape
# generating training data
print("training data :")
train_datagen= ImageDataGenerator(rescale=1/255, zoom_range=0.3, rotation_range=50, width_shift_range= 0.2, height_shift_range=0.2, shear_range=0.2,
horizontal_flip=True, fill_mode='nearest', validation_split = 0.2)
train_data = train_datagen.flow_from_directory(base_dir,
target_size= (75, 75),
class_mode= "binary",
batch_size=20,
subset= "training"
)
# generating validation data (rescale only; the validation split should not be augmented)
print("\nvalidation data :")
val_datagen = ImageDataGenerator(rescale=1/255, validation_split=0.2)
val_data = val_datagen.flow_from_directory(base_dir,
target_size= (75, 75),
class_mode= "binary",
batch_size=20,
shuffle= False,
subset= "validation"
)
train_data.class_indices
from keras.applications import xception
base_model = xception.Xception(weights='imagenet',include_top=False,input_shape=(75,75, 3),pooling='avg')
top_model = Sequential()
top_model.add(Dense(256, activation='relu', input_shape=base_model.output_shape[1:]))
top_model.add(Dropout(0.5))
top_model.add(Dense(128, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=top_model(base_model.output))
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=['accuracy'])
history= model.fit(train_data,
steps_per_epoch= train_data.samples//train_data.batch_size,
validation_data= val_data,
validation_steps= val_data.samples//val_data.batch_size,
epochs= 10,
verbose=1
)
epochs= range(1, len(history.history["accuracy"])+1)
plt.plot(epochs, history.history["accuracy"], color="purple")
plt.plot(epochs, history.history["val_accuracy"], color="pink")
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.title("Accuracy plot")
plt.legend(["train_acc", "val_acc"])
plt.show()
plt.plot(epochs, history.history["loss"], color="purple")
plt.plot(epochs, history.history["val_loss"], color="pink")
plt.xlabel("epochs")
plt.ylabel("loss")
plt.title("Loss plot")
plt.legend(["train_loss", "val_loss"])
plt.show()
import numpy as np
prediction = model.predict(val_data, steps=int(np.ceil(val_data.samples / val_data.batch_size)), verbose=2)
prediction= (prediction > 0.5)
prediction
val_labels=val_data.classes
val_labels
cm= confusion_matrix(val_data.classes, prediction)
plot_confusion_matrix(cm, figsize=(5,5))
print(accuracy_score(val_data.classes, prediction))
print(classification_report(val_data.classes, prediction))
```
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```
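The format above is simple enough to parse by hand. A minimal sketch of such a parser (the `read_tagged_corpus` helper name is illustrative — the project's `helpers.py` already provides the real `Dataset` reader):

```python
from collections import namedtuple

Sentence = namedtuple("Sentence", "words tags")

def read_tagged_corpus(text):
    """Parse the plaintext format described above: a sentence id line,
    one tab-separated word/tag pair per line, blank line between sentences."""
    sentences = {}
    for block in text.strip().split("\n\n"):
        lines = block.strip().split("\n")
        key, pairs = lines[0], [line.split("\t") for line in lines[1:]]
        words, tags = zip(*pairs) if pairs else ((), ())
        sentences[key] = Sentence(tuple(words), tuple(tags))
    return sentences
```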
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
```
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 7 # there are a total of seven observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence, named `words`, and a tuple of the tags corresponding to each word, named `tags`.
```
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
```
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
```
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
```
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
```
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
```
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
```
# use Dataset.stream() to iterate over (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
```
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts.
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences.
```
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
    For example, if sequences_A is tags and sequences_B is the corresponding
    words, and the word "time" appears 1244 times tagged as a NOUN, then you
    should return a dictionary such that pair_counts[NOUN][time] == 1244
    """
# TODO: Finish this function!
raise NotImplementedError
# Calculate C(t_i, w_i)
emission_counts = pair_counts(# TODO: YOUR CODE HERE)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
```
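As a sanity check on the expected behavior, the counting can be sketched like this (one possible implementation, not necessarily the intended solution; it assumes the two arguments are parallel 2-D collections, e.g. tag sequences first and word sequences second):

```python
from collections import defaultdict

def pair_counts(sequences_A, sequences_B):
    """counts[a][b] is how often value b from sequences_B appears aligned
    with value a from sequences_A across all parallel sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq_a, seq_b in zip(sequences_A, sequences_B):
        for a, b in zip(seq_a, seq_b):
            counts[a][b] += 1
    return counts
```

With that signature, the emission counts above would be obtained by passing the training tags first and the training words second.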
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
```
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(# TODO: YOUR CODE HERE)
mfc_table = # TODO: YOUR CODE HERE
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
```
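Given word→tag counts (i.e. `pair_counts` called with the word sequences as the first argument), the lookup table reduces to an argmax per word. A sketch, with a hypothetical helper name:

```python
def most_frequent_class_table(word_tag_counts):
    """word_tag_counts[word][tag] -> count; return {word: most frequent tag}."""
    return {word: max(tag_counts, key=tag_counts.get)
            for word, tag_counts in word_tag_counts.items()}
```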
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
```
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
```
### Example Decoding Sequences with MFC Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
```
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
    Y = [("VERB", "NOUN", "VERB"), ("VERB", "NOUN", "VERB", "ADV"), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
        # simplify_decoding may raise an exception if the model cannot decode
        # a sequence (for example, if a test sentence contains a word that is
        # out of vocabulary for the training set). Any exception counts the
        # full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions
```
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
```
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
```
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag and is parameterized by two distributions: the emission probabilities, giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities, giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$t_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.
### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$
```
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
raise NotImplementedError
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(# TODO: YOUR CODE HERE)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
```
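One possible one-line implementation uses the standard library: a `Counter` over a flattened view of the sequences.

```python
from collections import Counter
from itertools import chain

def unigram_counts(sequences):
    """Count occurrences of each value over all sequences (2-D input)."""
    return Counter(chain.from_iterable(sequences))
```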
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
```
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
# TODO: Finish this function!
raise NotImplementedError
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(# TODO: YOUR CODE HERE)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
```
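One possible implementation slides a window of size two over each sequence and counts the resulting pairs:

```python
from collections import Counter

def bigram_counts(sequences):
    """Count occurrences of each adjacent pair of values within each sequence."""
    return Counter((seq[i], seq[i + 1])
                   for seq in sequences
                   for i in range(len(seq) - 1))
```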
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the probability of a sequence starting with each tag.
```
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
# TODO: Finish this function!
raise NotImplementedError
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(# TODO: YOUR CODE HERE)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting tag."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting tag."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
```
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the probability of a sequence ending with each tag.
```
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
    For example, if 18 sequences end with DET, then you should return a
    dictionary such that your_ending_counts[DET] == 18
"""
# TODO: Finish this function!
raise NotImplementedError
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(# TODO: YOUR CODE HERE)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending tag."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending tag."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
```
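Both boundary-count helpers (`starting_counts` from the previous section and `ending_counts` above) reduce to one-liners over the first/last element of each sequence; a possible sketch:

```python
from collections import Counter

def starting_counts(sequences):
    """Count how often each value appears at the start of a sequence."""
    return Counter(seq[0] for seq in sequences if seq)

def ending_counts(sequences):
    """Count how often each value appears at the end of a sequence."""
    return Counter(seq[-1] for seq in sequences if seq)
```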
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
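The four MLE formulas above amount to dividing one count table by another. A sketch of the probability tables that would feed the pomegranate states and transitions (the helper name and argument layout are illustrative, not part of the project interface):

```python
def mle_transition_tables(tag_unigrams, tag_bigrams, tag_starts, tag_ends, n_sentences):
    """Turn frequency counts into the MLE probabilities listed above."""
    start_p = {t: c / n_sentences for t, c in tag_starts.items()}    # P(t | start)
    end_p = {t: c / tag_unigrams[t] for t, c in tag_ends.items()}    # P(end | t)
    trans_p = {(t1, t2): c / tag_unigrams[t1]                        # P(t2 | t1)
               for (t1, t2), c in tag_bigrams.items()}
    return start_p, end_p, trans_p
```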
```
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
basic_model.add_states()
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions)
basic_model.add_transition()
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
```
### Example Decoding Sequences with the HMM Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>
```
!!jupyter nbconvert *.ipynb
```
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because splitting the same amount of data over more tags means there are fewer samples for each tag, and more tags with zero occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
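As a concrete illustration of the additive-smoothing bullet above: every event receives k pseudo-counts, so unseen events get a small non-zero probability. (The helper below is a hypothetical sketch, not part of the project code.)

```python
def laplace_smoothed_prob(count, total, vocab_size, k=1.0):
    """Additive (Laplace) smoothing: P = (C + k) / (N + k * V),
    where V is the number of distinct events."""
    return (count + k) / (total + k * vocab_size)
```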
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data in the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
```
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
```
## Visualization
```
import torch
import torch.nn as nn
import numpy as np
import pytorch_lightning as pl
import sys
import os
import matplotlib.pyplot as plt
%matplotlib inline
from lifelines.utils import concordance_index
from sklearn.metrics import r2_score
from torch.utils.data import DataLoader, TensorDataset
from torchcontrib.optim import SWA
from pytorch_lightning import Trainer, seed_everything
from argparse import ArgumentParser
sys.path.append('..')
sys.path.append('../data/ml_mmrf')
sys.path.append('../data/')
sys.path.append('../ief_core/')
from ml_mmrf_v1.data import load_mmrf
from synthetic.synthetic_data import load_synthetic_data_trt, load_synthetic_data_noisy
from semi_synthetic.ss_data import *
from models.ssm.ssm import SSM
from models.rnn import GRU
from models.utils import *
healthy_mins_max = {
'cbc_abs_neut':(2., 7.5,1/3.), # abs neutrophil count (3.67, 1.), (2.83, 4.51)
'chem_albumin':(34, 50,1/8.), # chemical albumin (43.62, 2.77), (41.30, 45.94)
'chem_bun':(2.5, 7.1,1/5.), #BUN # reference range, (4.8, 1.15)
'chem_calcium':(2.2, 2.7,2.), #Calcium, (2.45, 0.125)
'chem_creatinine':(66, 112,1/36.), # creatinine, (83., 24.85), (62.22, 103.77)
'chem_glucose':(3.9, 6.9,1/5.), # glucose, (4.91, 0.40), (4.58, 5.24)
'cbc_hemoglobin':(13., 17.,1), # hemoglobin (12.90, 15.64), (8.86, 1.02)
'chem_ldh':(2.33, 4.67,1/3.), #LDH, (3.5, 0.585)
'serum_m_protein':(0.1, 1.1, 1), # M protein (<3 g/dL is MGUS, any presence of protein is pathological); am just using the data mean/std for this, (0.85, 1.89)
'urine_24hr_m_protein':(0.0, 0.1, 1), # Urine M protein
'cbc_platelet':(150, 400,1/60.), # platelet count (206.42, 334.57), (270.5, 76.63)
'chem_totprot':(6, 8,1/6.), # total protein, (7, 0.5)
'urine_24hr_total_protein':(0, 0.23, 1), #
'cbc_wbc':(3, 10,1/4.), # WBC (5.71, 8.44), (7.07, 1.63)
'serum_iga':(0.85, 4.99, 1.), # IgA, (2.92, 1.035)
'serum_igg':(6.10, 16.16,1/10.), # IgG, (11.13, 2.515)
'serum_igm':(0.35, 2.42,1), #IgM, (1.385, 0.518)
'serum_lambda':(0.57, 2.63, 1/2.), #serum lambda, (1.6, 0.515)
'serum_kappa':(.33, 1.94,1/8.), #serum kappa , (1.135, 0.403)
'serum_beta2_microglobulin':(0.7, 1.80, 1/3.), #serum_beta2_microglobulin,
'serum_c_reactive_protein':(0.0, 1., 1.) #serum_c_reactive_protein,
}
scaled_healthy_min_max = {}
for k,v in healthy_mins_max.items():
old_min, old_max, scale = v
new_min = (old_min - old_max)*scale
new_max = 0.
scaled_healthy_min_max[k] = (new_min, new_max)
```
## RNN
```
# checkpoint_path = '../ief_core/tbp_logs/checkpoints/rnn_pkpd_semi_synthetic_subsample_best_epoch=00989-val_loss=-240.20.ckpt'
checkpoint_path = '../ief_core/tbp_logs/checkpoints/rnn_semi_synthetic_subsample_best_epoch=00909-val_loss=-256.67.ckpt'
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
hparams = checkpoint['hyper_parameters']
gru = GRU(**hparams); gru.setup(1)
gru.load_state_dict(checkpoint['state_dict'])
assert 'dim_data' in gru.hparams
assert 'dim_treat' in gru.hparams
assert 'dim_base' in gru.hparams
assert gru.hparams['mtype'] == 'gru'
print(gru.hparams['dataset'])
# sample
valid_loader = gru.val_dataloader()
B, X, A, M, Y, CE = valid_loader.dataset.tensors
_, _, lens = get_masks(M)
B, X, A, M, Y, CE = B[lens>1], X[lens>1], A[lens>1], M[lens>1], Y[lens>1], CE[lens>1]
T_forward = 15; T_condition = 4
_, _, lens = get_masks(M)
B, X, A, M, Y, CE = B[lens>T_forward+T_condition], X[lens>T_forward+T_condition], A[lens>T_forward+T_condition], M[lens>T_forward+T_condition], Y[lens>T_forward+T_condition], CE[lens>T_forward+T_condition]
_, _, _, _, _, sample = gru.inspect(T_forward, T_condition, B, X, A, M, Y, CE)
print(sample.shape)
ddata = load_ss_data(10)
print(ddata['feature_names_x'])
np.random.seed(0)
idxs = np.random.choice(np.arange(X.shape[0]),size=10)
sel_sample = sample[idxs]
orig = X[idxs]
fig, axlist = plt.subplots(2,5,figsize=(42,12))
feat = 13
feat_name = ddata['feature_names_x'][feat]
vmin, vmax = scaled_healthy_min_max[feat_name]
ax = axlist.ravel()
for i,idx in enumerate(idxs):
ss_example = sample[idx]
orig_example = X[idx]
orig_example[np.where(M[idx] == 0.)] = np.nan
ax[i].plot(np.arange(ss_example.shape[0]), pt_numpy(ss_example[:,feat]), linewidth=4)
ax[i].scatter(np.arange(ss_example.shape[0]), pt_numpy(orig_example[:ss_example.shape[0],feat]), s=36)
ax[i].axhline(y=vmin, color='r', linestyle='--', alpha=0.6)
ax[i].axhline(y=vmax, color='g', linestyle='--', alpha=0.6)
ax[i].set_title(f'Patient {idx}', fontsize=25)
plt.suptitle(feat_name, fontsize=36)
plt.savefig('./plots/samples_gru_igg.pdf',bbox_inches='tight')
```
## SSM
```
# checkpoint_path = '../ief_core/tbp_logs/checkpoints/ssm_pkpd_semi_synthetic_subsample_best_epoch=09999-val_loss=-306.21.ckpt'
checkpoint_path = '../ief_core/tbp_logs/checkpoints/ssm_lin_semi_synthetic_subsample_best_epoch=09459-val_loss=-303.40.ckpt'
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
hparams = checkpoint['hyper_parameters']
ssm = SSM(**hparams); ssm.setup(1)
ssm.load_state_dict(checkpoint['state_dict'])
assert 'dim_data' in ssm.hparams
assert 'dim_treat' in ssm.hparams
assert 'dim_base' in ssm.hparams
assert ssm.hparams['ttype'] == 'lin'
valid_loader = ssm.val_dataloader()
B, X, A, M, Y, CE = valid_loader.dataset.tensors
_, _, lens = get_masks(M)
B, X, A, M, Y, CE = B[lens>1], X[lens>1], A[lens>1], M[lens>1], Y[lens>1], CE[lens>1]
T_forward = 15; T_condition = 4
_, _, lens = get_masks(M)
B, X, A, M, Y, CE = B[lens>T_forward+T_condition], X[lens>T_forward+T_condition], A[lens>T_forward+T_condition], M[lens>T_forward+T_condition], Y[lens>T_forward+T_condition], CE[lens>T_forward+T_condition]
_, _, _, _, _, sample = ssm.inspect(T_forward, T_condition, B, X, A, M, Y, CE)
print(sample.shape)
ddata = load_ss_data(10)
print(ddata['feature_names_x'])
np.random.seed(0)
idxs = np.random.choice(np.arange(X.shape[0]),size=10)
sel_sample = sample[idxs]
orig = X[idxs]
fig, axlist = plt.subplots(2,5,figsize=(42,13))
feat = 13
feat_name = ddata['feature_names_x'][feat]
vmin, vmax = scaled_healthy_min_max[feat_name]
ax = axlist.ravel()
for i,idx in enumerate(idxs):
ss_example = sample[idx]
orig_example = X[idx]
orig_example[np.where(M[idx] == 0.)] = np.nan
ax[i].plot(np.arange(ss_example.shape[0]),pt_numpy(ss_example[:,feat]), linewidth=4)
ax[i].scatter(np.arange(ss_example.shape[0]), pt_numpy(orig_example[:ss_example.shape[0],feat]), s=36)
ax[i].axhline(y=vmin, color='r', linestyle='--', alpha=0.6)
ax[i].axhline(y=vmax, color='g', linestyle='--', alpha=0.6)
ax[i].set_title(f'Patient {idx}', fontsize=25)
plt.suptitle(feat_name, fontsize=36)
plt.savefig('./plots/samples_ssmlin_igg.pdf',bbox_inches='tight')
```
# Analyzing the model comparison benchmark with ROC analysis
We want to analyze our model comparison benchmark with respect to the receiver operating characteristic and precision-recall curves.
For most models, we use thresholding on the p-values to create the curves.
Exceptions:
- scCODA: Use inclusion probability instead of p-values
- simple DM: No good way of building a ROC. Leave out!
- ANCOM: Use the W statistic (no. of significant tests for each cell type). There are not that many points here, since W is discrete from 0 to K.
Therefore, this is only partially informative.
For fairness reasons, we exclude the one-sample case from the analysis (many models are not applicable in this case).
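The thresholding procedure behind these curves can be sketched on a toy example: treating 1 − p as a score and sweeping the decision threshold yields the ROC points. The labels and p-values below are made up for illustration, not taken from the benchmark data:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy ground truth (1 = true effect) and p-values from some hypothetical test
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p_vals = np.array([0.001, 0.02, 0.2, 0.04, 0.3, 0.5, 0.7, 0.9])

# Small p-values indicate an effect, so 1 - p serves as the score
scores = 1 - p_vals
fpr, tpr, thresholds = roc_curve(y_true, scores)
roc_auc = auc(fpr, tpr)
print(roc_auc)  # one positive is out-ranked by one negative -> AUC = 14/15
```

The same pattern (score = 1 − p, then `roc_curve` / `precision_recall_curve`) is what the cells below apply to each model's results.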
```
# Setup
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import pickle as pkl
from sklearn.metrics import roc_curve, auc, precision_recall_curve, average_precision_score
```
## Data preparation
**Reading data**
```
# Read data. scCODA and simple DM results are stored in multiple files, therefore read them separately.
save_path = "../../zenodo_data_rv1/sccoda_benchmark_data/model_comparison/"
all_results = pd.read_csv(save_path + "benchmark_results")
all_effects = {}
for mod in pd.unique(all_results["model"]):
    with open(save_path + f"{mod}_effects", "rb") as f:
        eff = pd.read_csv(f)
    all_effects[mod] = eff
```
**scCODA**
First, we read in all inclusion probabilities from all cell types of all benchmark runs, and put them into a DataFrame.
We then add the ground truth value (1: expected change/0: no expected change) to each entry.
```
all_inc_probs_df = all_effects["scCODA"]
res = all_results[all_results["model"] == "scCODA"].reset_index()
count = 0
for i in range(len(res)):
    K = res.loc[i, "n_cell_types"]
    effect = eval(res.loc[i, "Increase"])
    n_samples = res.loc[i, "n_controls"]
    all_inc_probs_df.loc[count:count+K-1, "sample size"] = n_samples
    all_inc_probs_df.loc[count:count+K-1, "effect_0"] = effect[0]
    all_inc_probs_df.loc[count:count+K-1, "effect_1"] = effect[1]
    count += K
all_inc_probs_df["Ground truth"] = 0
all_inc_probs_df.loc[(all_inc_probs_df["Cell Type"] == 0) & (all_inc_probs_df["effect_0"] != 0), "Ground truth"] = 1
all_inc_probs_df.loc[(all_inc_probs_df["Cell Type"] == 1) & (all_inc_probs_df["effect_1"] != 0), "Ground truth"] = 1
```
We then calculate the ROC metrics (FPR, TPR, Precision, Recall) for all possible threshold values with the help of sklearn.
We also calculate the AUC and average precision score.
```
fprs = {}
tprs = {}
roc_aucs = {}
# exclude one-sample case
aip_2 = all_inc_probs_df.loc[all_inc_probs_df["sample size"] > 1 ,:]
fprs["scCODA_rev"], tprs["scCODA_rev"], thresh_2 = roc_curve(aip_2["Ground truth"], aip_2["Inclusion probability"])
roc_aucs["scCODA_rev"] = auc(fprs["scCODA_rev"], tprs["scCODA_rev"])
prec = {}
rec = {}
pr_aucs = {}
prec["scCODA_rev"], rec["scCODA_rev"], thresh_2_ = precision_recall_curve(aip_2["Ground truth"], aip_2["Inclusion probability"])
pr_aucs["scCODA_rev"] = average_precision_score(aip_2["Ground truth"], aip_2["Inclusion probability"])
print(roc_aucs)
```
**scDC**
For the scDC model, we first have to concatenate all results and do some renaming to bring the results in line with the other p-value-based models.
```
all_p_df = all_effects["scdc"]
res = all_results[all_results["model"] == "scdc"].reset_index()
count = 0
for i in range(len(res)):
    K = res.loc[i, "n_cell_types"]
    effect = eval(res.loc[i, "Increase"])
    n_samples = res.loc[i, "n_controls"]
    all_p_df.loc[count:count+K-1, "sample size"] = n_samples
    all_p_df.loc[count:count+K-1, "effect_0"] = effect[0]
    all_p_df.loc[count:count+K-1, "effect_1"] = effect[1]
    all_p_df.loc[count:count+K-1, "Cell Type"] = np.arange(K)
    count += K
all_p_df["Ground truth"] = 0
all_p_df.loc[(all_p_df["Cell Type"] == 0) & (all_p_df["effect_0"] != 0), "Ground truth"] = 1
all_p_df.loc[(all_p_df["Cell Type"] == 1) & (all_p_df["effect_1"] != 0), "Ground truth"] = 1
all_p_df.loc[all_p_df["p.value"] == 0, "p.value"] = 1
all_p_df["1-p"] = (1 - all_p_df["p.value"]).replace(np.nan, 0)
aip_2_scdc = all_p_df.loc[all_p_df["sample size"] > 1 ,:]
fprs["scdc"], tprs["scdc"], thresh_2 = roc_curve(aip_2_scdc["Ground truth"], aip_2_scdc["1-p"])
roc_aucs["scdc"] = auc(fprs["scdc"], tprs["scdc"])
prec["scdc"], rec["scdc"], thresh_2_ = precision_recall_curve(aip_2_scdc["Ground truth"], aip_2_scdc["1-p"])
pr_aucs["scdc"] = average_precision_score(aip_2_scdc["Ground truth"], aip_2_scdc["1-p"])
print(roc_aucs)
```
**p-values**
For the p-values, the preparation is identical: Read p-values, determine ground truth, calculate statistics
```
tests = ["ALDEx2_alr", "ANCOMBC_holm", "alr_ttest", "alr_wilcoxon", "dirichreg", "Poisson_1e-10", "ttest", "BetaBinomial"]
for test_name in tests:
    print(test_name)
    pval = all_effects[test_name]
    res = all_results[all_results["model"] == test_name].reset_index()
    count = 0
    for i in range(len(res)):
        K = res.loc[i, "n_cell_types"]
        effect = eval(res.loc[i, "Increase"])
        n_samples = res.loc[i, "n_controls"]
        pval.loc[count:count+K-1, "sample size"] = n_samples
        pval.loc[count:count+K-1, "effect_0"] = effect[0]
        pval.loc[count:count+K-1, "effect_1"] = effect[1]
        pval.loc[count:count+K-1, "Cell Type"] = np.arange(K)
        count += K
    pval["Ground truth"] = 0
    pval.loc[(pval["Cell Type"] == 0) & (pval["effect_0"] != 0), "Ground truth"] = 1
    pval.loc[(pval["Cell Type"] == 1) & (pval["effect_1"] != 0), "Ground truth"] = 1
    pval.loc[pval["p value"] == 0, "p value"] = 1
    pval["1-p"] = (1 - pval["p value"]).replace(np.nan, 0)
    pval_2 = pval.loc[pval["sample size"] > 1 ,:]
    fprs[test_name], tprs[test_name], thresh_2 = roc_curve(pval_2["Ground truth"], pval_2["1-p"])
    roc_aucs[test_name] = auc(fprs[test_name], tprs[test_name])
    prec[test_name], rec[test_name], thresh_2_ = precision_recall_curve(pval_2["Ground truth"], pval_2["1-p"])
    pr_aucs[test_name] = average_precision_score(pval_2["Ground truth"], pval_2["1-p"])
    # Histograms of ground truth for p-values
    sns.histplot(data=pval_2,
                 x="p value",
                 bins=50,
                 hue="Ground truth",
                 multiple="stack")
    plt.title(test_name)
    plt.show()
print(roc_aucs)
```
**ANCOM**
For ANCOM, we need to extract the W statistic. Then the procedure is identical
```
w_df = all_effects["ancom"]
res = all_results[all_results["model"] == "ancom"].reset_index()
# Get all p-values
w_df = w_df.rename(columns={"Unnamed: 0": "Cell Type"})
count = 0
for i in range(len(res)):
    K = res.loc[i, "n_cell_types"]
    effect = eval(res.loc[i, "Increase"])
    n_samples = res.loc[i, "n_controls"]
    w_df.loc[count:count+K-1, "sample size"] = n_samples
    w_df.loc[count:count+K-1, "effect_0"] = effect[0]
    w_df.loc[count:count+K-1, "effect_1"] = effect[1]
    w_df.loc[count:count+K-1, "Cell Type"] = np.arange(K)
    count += K
w_df["Ground truth"] = 0
w_df.loc[(w_df["Cell Type"] == 0) & (w_df["effect_0"] != 0), "Ground truth"] = 1
w_df.loc[(w_df["Cell Type"] == 1) & (w_df["effect_1"] != 0), "Ground truth"] = 1
w_df_2 = w_df.loc[w_df["sample size"] > 1 ,:]
fprs["ANCOM"], tprs["ANCOM"], thresh_2 = roc_curve(w_df_2["Ground truth"], w_df_2["W"])
roc_aucs["ANCOM"] = auc(fprs["ANCOM"], tprs["ANCOM"])
prec["ANCOM"], rec["ANCOM"], thresh_2_ = precision_recall_curve(w_df_2["Ground truth"], w_df_2["W"])
pr_aucs["ANCOM"] = average_precision_score(w_df_2["Ground truth"], w_df_2["W"])
# Histograms of ground truth for p-values
sns.histplot(data=w_df,
x="W",
bins=15,
hue="Ground truth",
multiple="stack")
plt.title("ANCOM")
plt.show()
print(roc_aucs)
```
## Analysis
### ROC plot
```
models = list(roc_aucs.keys())
print(models)
models_rel = ["scCODA_rev", "ANCOM", "ALDEx2_alr", "ANCOMBC_holm", "alr_ttest", "alr_wilcoxon", "dirichreg", "scdc", "Poisson_1e-10", "ttest", "BetaBinomial"]
linestyles = dict(zip(models_rel, [0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3]))
colors = dict(zip(models_rel, [0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3]))
dashes=[(1,0), (4, 4), (7, 2, 2, 2), (2, 2, 2, 2, 4, 2)]
palette = sns.color_palette(['#e41a1c','#377eb8','#4daf4a','#984ea3'])
# Nice label names for legend
leg_labels = ["scCODA", "ANCOM", "ALDEx2", "ANCOMBC",
"ALR + t", "ALR + Wilcoxon", "Dirichlet regression", "scDC", "Poisson regression", "t-test", "BetaBinomial"]
lab = dict(zip(models_rel, leg_labels))
plot_path = "../../sccoda_benchmark_data/paper_plots_rv1/model_comparison_v3"
sns.set(style="ticks", font_scale=1.5)
plt.figure(figsize=(10,6))
lw = 2
for m in models_rel:
    plt.plot(
        fprs[m],
        tprs[m],
        color=palette[colors[m]],
        lw=lw,
        label=f'{lab[m]}: ROC curve (area = {np.round(roc_aucs[m], 2)})',
        dashes=dashes[linestyles[m]]
    )
plt.plot([0, 1], [0, 1], color='black', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# plt.savefig(plot_path + "/model_comparison_roc.svg", format="svg", bbox_inches="tight")
# plt.savefig(plot_path + "/model_comparison_roc.png", format="png", bbox_inches="tight")
plt.show()
```
### Precision-Recall plot
```
sns.set(style="ticks", font_scale=1.5)
plt.figure(figsize=(10,6))
lw = 2
for m in models_rel:
    plt.plot(
        rec[m],
        prec[m],
        color=palette[colors[m]],
        lw=lw,
        label=f'{lab[m]}: Average precision score ({np.round(pr_aucs[m], 2)})',
        dashes=dashes[linestyles[m]]
    )
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# plt.savefig(plot_path + "/model_comparison_prc.svg", format="svg", bbox_inches="tight")
# plt.savefig(plot_path + "/model_comparison_prc.png", format="png", bbox_inches="tight")
plt.show()
```
# RadiusNeighborsClassifier with Scale & Quantile Transformer
This code template is for a classification task using a simple Radius Neighbors Classifier, with separate feature scaling using scale pipelined with a Quantile Transformer, a feature-transformation technique. The classifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder,scale,QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, used both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and the target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library cannot handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist, and convert string class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique())<=2:
        return df
    else:
        un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df=LabelEncoder().fit_transform(df)
        EncodedT=[xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
        return df

x=X.columns.to_list()
for i in x:
    X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Rescaling
<Code>scale</Code> standardizes a dataset along any axis: it standardizes features by removing the mean and scaling to unit variance.
<Code>scale</Code> performs the same feature transformation as <Code>StandardScaler</Code>, but unlike StandardScaler it lacks the Transformer API, i.e., it does not have <Code>fit_transform</Code>, <Code>transform</Code> and other related methods.
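The equivalence can be checked directly — a quick sketch on random data showing that scale and StandardScaler produce the same standardization:

```python
import numpy as np
from sklearn.preprocessing import scale, StandardScaler

rng = np.random.RandomState(0)
X = rng.rand(100, 3) * 10

# scale() is a one-shot function call ...
X_a = scale(X)
# ... while StandardScaler is a reusable transformer with the fit/transform API
X_b = StandardScaler().fit_transform(X)

print(np.allclose(X_a, X_b))      # the two results match
print(X_a.mean(axis=0))           # ~0 mean per column
print(X_a.std(axis=0))            # unit variance per column
```

The practical difference is that StandardScaler can remember the training-set mean and variance and re-apply them to new data, whereas scale cannot.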
```
X_Scaled = scale(X)
X_Scaled=pd.DataFrame(X_Scaled,columns=X.columns)
X_Scaled.head()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X_Scaled,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn perform poorly on, the minority class, although it is typically performance on the minority class that matters most.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
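Under the hood, this duplication strategy can be sketched in plain numpy. The notebook itself relies on imblearn's RandomOverSampler; the toy function below only illustrates the idea:

```python
import numpy as np

def random_oversample(X, y, random_state=123):
    """Duplicate minority-class samples (with replacement) until every class
    matches the majority-class count."""
    rng = np.random.RandomState(random_state)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    keep = [np.arange(len(y))]                      # keep all original rows
    for cls, cnt in zip(classes, counts):
        if cnt < n_max:
            idx = np.flatnonzero(y == cls)
            keep.append(rng.choice(idx, size=n_max - cnt, replace=True))
    idx_all = np.concatenate(keep)
    return X[idx_all], y[idx_all]

X = np.arange(20).reshape(10, 2)
y = np.array([0]*8 + [1]*2)                          # 8 vs 2: imbalanced
X_res, y_res = random_oversample(X, y)
print(np.bincount(y_res))                            # -> [8 8]
```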
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Feature Transformation
QuantileTransformer transforms features using quantiles information.
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied to each feature independently.
##### For more information on QuantileTransformer [ click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html)
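A quick illustrative sketch: transforming a heavily skewed feature with QuantileTransformer spreads it roughly uniformly over [0, 1] (the lognormal data and quantile count below are arbitrary choices):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.RandomState(0)
X = rng.lognormal(size=(1000, 1))   # heavily right-skewed feature

qt = QuantileTransformer(n_quantiles=100, output_distribution='uniform')
X_q = qt.fit_transform(X)

print(X_q.min(), X_q.max())         # squeezed into [0, 1]
print(np.median(X_q))               # median lands near 0.5
```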
### Model
RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
In cases where the data is not uniformly sampled, radius-based neighbors classification can be a better choice.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html) for parameters
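A minimal standalone sketch on toy one-dimensional data — the radius, points, and labels below are illustrative, not the template's settings:

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier

X = np.array([[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Every training point within radius 2.0 of a query gets a vote;
# outlier_label handles queries with no neighbours inside the radius.
clf = RadiusNeighborsClassifier(radius=2.0, outlier_label='most_frequent')
clf.fit(X, y)
print(clf.predict([[0.7], [5.2]]))  # -> [0 1]
```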
```
# Build Model here
# Change outlier_label as per specific use-case
model=make_pipeline(QuantileTransformer(),RadiusNeighborsClassifier(n_jobs=-1, outlier_label='most_frequent'))
model.fit(x_train,y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set of each sample be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are correct and how many are not.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
    - f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
```
# Load packages
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import os
import pickle
import time
import scipy as scp
import scipy.stats as scps
from scipy.optimize import differential_evolution
from scipy.optimize import minimize
from datetime import datetime
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# Load my own functions
import dnnregressor_train_eval_keras as dnnk
import make_data_wfpt as mdw
from kde_training_utilities import kde_load_data
import ddm_data_simulation as ddm_sim
import cddm_data_simulation as cds
import boundary_functions as bf
# Handle some cuda business
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="3"
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
# Load Model
model_path = '/media/data_cifs/afengler/data/kde/ddm/keras_models/dnnregressor_ddm_06_28_19_00_58_26/model_0'
ckpt_path = '/media/data_cifs/afengler/data/kde/ddm/keras_models/dnnregressor_ddm_06_28_19_00_58_26/ckpt_0_final'
model = keras.models.load_model(model_path)
model.load_weights(ckpt_path)
# model_path = "/home/tony/repos/temp_models/keras_models/dnnregressor_ddm_06_28_19_00_58_26/model_0"
# ckpt_path = "/home/tony/repos/temp_models/keras_models/dnnregressor_ddm_06_28_19_00_58_26/ckpt_0_final"
# model = keras.models.load_model(model_path)
# model.load_weights(ckpt_path)
# network_path = "/home/tony/repos/temp_models/keras_models/\
# dnnregressoranalytical_ddm_07_25_19_15_50_52/model.h5"
#model = keras.models.load_model(network_path)
model.predict(np.array([[0, 1, .5, 1, 1]]))
model.layers[1].get_weights()
# Initializations -----
n_runs = 100
n_samples = 2500
feature_file_path = '/media/data_cifs/afengler/data/kde/linear_collapse/train_test_data/test_features.pickle'
mle_out_path = '/media/data_cifs/afengler/data/kde/linear_collapse/mle_runs'
# NOTE PARAMETERS:
# LINEAR COLLAPSE: [v, a, w, node, theta]
param_bounds = [(-1, 1), (0.3, 2), (0.3, 0.7), (0.01, 0.01), (0, np.pi / 2.2)]
my_optim_columns = ['v_sim', 'a_sim', 'w_sim', 'node_sim', 'theta_sim',
'v_mle', 'a_mle', 'w_mle', 'node_mle', 'theta_mle', 'n_samples']
# Get parameter names in correct ordering:
dat = pickle.load(open(feature_file_path,
'rb'))
parameter_names = list(dat.keys())[:-2] # :-2 to get rid of 'rt' and 'choice' here
# Make columns for optimizer result table
p_sim = []
p_mle = []
for parameter_name in parameter_names:
    p_sim.append(parameter_name + '_sim')
    p_mle.append(parameter_name + '_mle')
my_optim_columns = p_sim + p_mle + ['n_samples']
# Initialize the data frame in which to store optimizer results
optim_results = pd.DataFrame(np.zeros((n_runs, len(my_optim_columns))), columns = my_optim_columns)
optim_results.iloc[:, 2 * len(parameter_names)] = n_samples
# define boundary
boundary = bf.linear_collapse
boundary_multiplicative = False
def extract_info(model):
    biases = []
    activations = []
    weights = []
    for layer in model.layers:
        if layer.name == "input_1":
            continue
        weights.append(layer.get_weights()[0])
        biases.append(layer.get_weights()[1])
        activations.append(layer.get_config()["activation"])
    return weights, biases, activations

weights, biases, activations = extract_info(model)

def relu(x):
    return np.maximum(0, x)

def linear(x):
    return x

activation_fns = {"relu": relu, "linear": linear}

def np_predict(x, weights, biases, activations):
    for i in range(len(weights)):
        x = activation_fns[activations[i]](
            np.dot(x, weights[i]) + biases[i]
        )
    return x

np_predict(np.array([[0, 1, .5, 1, 1]]), weights, biases, activations)

def log_p(params):
    param_grid = np.tile(params, (data.shape[0], 1))
    inp = np.concatenate([param_grid, data], axis = 1)
    out = np_predict(inp, weights, biases, activations)
    return -np.sum(out)

# Define the likelihood function
def log_p(params = [0, 1, 0.9], model = [], data = [], parameter_names = []):
    # Make feature array
    feature_array = np.zeros((data[0].shape[0], len(parameter_names) + 2))
    # Store parameters
    cnt = 0
    for i in range(0, len(parameter_names), 1):
        feature_array[:, i] = params[i]
        cnt += 1
    # Store rts and choices
    feature_array[:, cnt] = data[0].ravel() # rts
    feature_array[:, cnt + 1] = data[1].ravel() # choices
    # Get model predictions
    prediction = model.predict(feature_array)
    # Some post-processing of predictions
    prediction[prediction < 1e-29] = 1e-29
    return(- np.sum(np.log(prediction)))

def make_params(param_bounds = []):
    params = np.zeros(len(param_bounds))
    for i in range(len(params)):
        params[i] = np.random.uniform(low = param_bounds[i][0], high = param_bounds[i][1])
    return params
# ---------------------
v = np.random.uniform(-1, 1)
a = np.random.uniform(.5, 2)
w = np.random.uniform()
true_params = np.array([v, a, w])
rts, choices, _ = ddm_sim.ddm_simulate(v = true_params[0], a = true_params[1],
                                       w = true_params[2], n_samples = 2000)
data = np.concatenate([rts, choices], axis = 1)
boundary_params = [(-1, 1), (.5, 2), (0, 1)]
out = differential_evolution(log_p, bounds = boundary_params,
popsize = 60,
disp = True, workers = -1)
true_params
out["x"]
param_grid = np.tile(true_params, (data.shape[0], 1))
inp = np.concatenate([param_grid, data], axis=1)
prediction = np_predict(inp, weights, biases, activations)
prediction.sum()
plt.hist(prediction)
# Main loop ----------- TD: Parallelize
for i in range(0, n_runs, 1):
    # Get start time
    start_time = time.time()
    # # Sample parameters
    # v_sim = np.random.uniform(high = v_range[1], low = v_range[0])
    # a_sim = np.random.uniform(high = a_range[1], low = a_range[0])
    # w_sim = np.random.uniform(high = w_range[1], low = w_range[0])
    # #c1_sim = np.random.uniform(high = c1_range[1], low = c1_range[0])
    # #c2_sim = np.random.uniform(high = c2_range[1], low = c2_range[0])
    # node_sim = np.random.uniform(high = node_range[1], low = node_range[0])
    # shape_sim = np.random.uniform(high = shape_range[1], low = shape_range[0])
    # scale_sim = np.random.uniform(high = scale_range[1], low = scale_range[0])
    tmp_params = make_params(param_bounds = param_bounds)
    # Store in output file
    optim_results.iloc[i, :len(parameter_names)] = tmp_params
    # Print some info on run
    print('Parameters for run ' + str(i) + ': ')
    print(tmp_params)
    # Define boundary params
    boundary_params = {'node': tmp_params[3],
                       'theta': tmp_params[4]}
    # Run model simulations
    ddm_dat_tmp = ddm_sim.ddm_flexbound_simulate(v = tmp_params[0],
                                                 a = tmp_params[1],
                                                 w = tmp_params[2],
                                                 s = 1,
                                                 delta_t = 0.001,
                                                 max_t = 20,
                                                 n_samples = n_samples,
                                                 boundary_fun = boundary, # function of t (and potentially other parameters) that takes in (t, *args)
                                                 boundary_multiplicative = boundary_multiplicative, # CAREFUL: CHECK IF BOUND
                                                 boundary_params = boundary_params)
    # Print some info on run
    print('Mean rt for current run: ')
    print(np.mean(ddm_dat_tmp[0]))
    # Run optimizer
    out = differential_evolution(log_p,
                                 bounds = param_bounds,
                                 args = (model, ddm_dat_tmp, parameter_names),
                                 popsize = 30,
                                 disp = True)
    # Print some info
    print('Solution vector of current run: ')
    print(out.x)
    print('The run took: ')
    elapsed_time = time.time() - start_time
    print(time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
    # Store result in output file
    optim_results.iloc[i, len(parameter_names):(2*len(parameter_names))] = out.x
# -----------------------
# Save optimization results to file
optim_results.to_csv(mle_out_path + '/mle_results_1.csv')
# Read in results
optim_results = pd.read_csv(os.getcwd() + '/experiments/ddm_flexbound_kde_mle_fix_v_0_c1_0_w_unbiased_arange_2_3/optim_results.csv')
plt.scatter(optim_results['v_sim'], optim_results['v_mle'], c = optim_results['c2_mle'])
# Regression for v
reg = LinearRegression().fit(np.expand_dims(optim_results['v_mle'], 1), np.expand_dims(optim_results['v_sim'], 1))
reg.score(np.expand_dims(optim_results['v_mle'], 1), np.expand_dims(optim_results['v_sim'], 1))
plt.scatter(optim_results['a_sim'], optim_results['a_mle'], c = optim_results['c2_mle'])
# Regression for a
reg = LinearRegression().fit(np.expand_dims(optim_results['a_mle'], 1), np.expand_dims(optim_results['a_sim'], 1))
reg.score(np.expand_dims(optim_results['a_mle'], 1), np.expand_dims(optim_results['a_sim'], 1))
plt.scatter(optim_results['w_sim'], optim_results['w_mle'])
# Regression for w
reg = LinearRegression().fit(np.expand_dims(optim_results['w_mle'], 1), np.expand_dims(optim_results['w_sim'], 1))
reg.score(np.expand_dims(optim_results['w_mle'], 1), np.expand_dims(optim_results['w_sim'], 1))
plt.scatter(optim_results['c1_sim'], optim_results['c1_mle'])
# Regression for c1
reg = LinearRegression().fit(np.expand_dims(optim_results['c1_mle'], 1), np.expand_dims(optim_results['c1_sim'], 1))
reg.score(np.expand_dims(optim_results['c1_mle'], 1), np.expand_dims(optim_results['c1_sim'], 1))
plt.scatter(optim_results['c2_sim'], optim_results['c2_mle'], c = optim_results['a_mle'])
# Regression for w
reg = LinearRegression().fit(np.expand_dims(optim_results['c2_mle'], 1), np.expand_dims(optim_results['c2_sim'], 1))
reg.score(np.expand_dims(optim_results['c2_mle'], 1), np.expand_dims(optim_results['c2_sim'], 1))
```
<a href="https://colab.research.google.com/github/florentPoux/point-cloud-processing/blob/main/Point_cloud_data_sub_sampling_with_Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Created by Florent Poux. Licence MIT
* To reuse in your project, please cite the article accessible here:
[To Medium Article](https://towardsdatascience.com/how-to-automate-lidar-point-cloud-processing-with-python-a027454a536c)
* Have fun with this notebook, which you can very simply run (Ctrl+Enter)!
* The first time though, it will ask you for a key so that it can access your Google Drive folders if you want to work fully remotely.
* Simply accept, and then change the input path to the folder path containing your data on Google Drive.
Enjoy!
# Step 1: Setting up the environment
```
#This code snippet allows to use data directly from your Google drives files.
#If you want to use a shared folder, just add the folder to your drive
from google.colab import drive
drive.mount('/content/gdrive')
```
# Step 2: Load and prepare the data
```
#https://pythonhosted.org/laspy/
!pip install laspy
#libraries used
import numpy as np
import laspy as lp
#create paths and load data
input_path="gdrive/My Drive/10-MEDIUM/DATA/Point Cloud Sample/"
output_path="gdrive/My Drive/10-MEDIUM/DATA/Point Cloud Sample/"
#Load the file
dataname="NZ19_Wellington"
point_cloud=lp.file.File(input_path+dataname+".las", mode="r")
#store coordinates in "points", and colors in "colors" variable
points = np.vstack((point_cloud.x, point_cloud.y, point_cloud.z)).transpose()
colors = np.vstack((point_cloud.red, point_cloud.green, point_cloud.blue)).transpose()
```
# Step 3: Choose a sub-sampling strategy
## 1- Point Cloud Decimation
```
#The decimation strategy, by setting a decimation factor
factor=160
decimated_points = points[::factor]
decimated_colors = colors[::factor]
len(decimated_points)
```
## 2 - Point Cloud voxel grid
```
# Initialize the number of voxels to create to fill the space including every point
voxel_size=6
nb_vox=np.ceil((np.max(points, axis=0) - np.min(points, axis=0))/voxel_size)
#nb_vox.astype(int) #this gives you the number of voxels per axis
# Compute the non empty voxels and keep a trace of indexes that we can relate to points in order to store points later on.
# Also Sum and count the points in each voxel.
non_empty_voxel_keys, inverse, nb_pts_per_voxel= np.unique(((points - np.min(points, axis=0)) // voxel_size).astype(int), axis=0, return_inverse=True, return_counts=True)
idx_pts_vox_sorted=np.argsort(inverse)
#len(non_empty_voxel_keys) # if you need to display how many no-empty voxels you have
#Here, we loop over non_empty_voxel_keys numpy array to
# > Store voxel indices as keys in a dictionary
# > Store the related points as the value of each key
# > Compute each voxel barycenter and add it to a list
# > Compute each voxel closest point to the barycenter and add it to a list
voxel_grid={}
grid_barycenter,grid_candidate_center=[],[]
last_seen=0
for idx,vox in enumerate(non_empty_voxel_keys):
    voxel_grid[tuple(vox)]=points[idx_pts_vox_sorted[last_seen:last_seen+nb_pts_per_voxel[idx]]]
    grid_barycenter.append(np.mean(voxel_grid[tuple(vox)],axis=0))
    grid_candidate_center.append(voxel_grid[tuple(vox)][np.linalg.norm(voxel_grid[tuple(vox)]-np.mean(voxel_grid[tuple(vox)],axis=0),axis=1).argmin()])
    last_seen+=nb_pts_per_voxel[idx]
```
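The core trick above — converting each point to an integer voxel key and grouping equal keys with `np.unique(..., return_inverse=True, return_counts=True)` — can be traced on a tiny 2-D example:

```python
import numpy as np

points = np.array([[0.1, 0.2],
                   [0.4, 0.3],   # falls in the same voxel as the first point
                   [1.2, 0.1],
                   [1.4, 1.6]])
voxel_size = 1.0

# Shift to the origin and integer-divide: each row becomes a voxel key
keys = ((points - points.min(axis=0)) // voxel_size).astype(int)
uniq, inverse, counts = np.unique(keys, axis=0,
                                  return_inverse=True, return_counts=True)
print(uniq)     # the 3 occupied voxels
print(counts)   # [2 1 1] -> the first voxel holds two points
```

`inverse` maps each original point back to its row in `uniq`, which is what the notebook sorts on (`np.argsort(inverse)`) to gather each voxel's points contiguously.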
# Step 4: Vizualise and export the results
```
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.scatter(decimated_points[:,0], decimated_points[:,1], decimated_points[:,2], c = decimated_colors/65535, s=0.01)
plt.show()
%timeit np.savetxt(output_path+dataname+"_voxel-best_point_%s.xyz" % (voxel_size), grid_candidate_center, delimiter=";", fmt="%s")
```
# Step 5 - Automate with functions
```
#Define a function that takes as input an array of points, and a voxel size expressed in meters. It returns the sampled point cloud
def grid_subsampling(points, voxel_size):
    nb_vox=np.ceil((np.max(points, axis=0) - np.min(points, axis=0))/voxel_size)
    non_empty_voxel_keys, inverse, nb_pts_per_voxel= np.unique(((points - np.min(points, axis=0)) // voxel_size).astype(int), axis=0, return_inverse=True, return_counts=True)
    idx_pts_vox_sorted=np.argsort(inverse)
    voxel_grid={}
    grid_barycenter,grid_candidate_center=[],[]
    last_seen=0
    for idx,vox in enumerate(non_empty_voxel_keys):
        voxel_grid[tuple(vox)]=points[idx_pts_vox_sorted[last_seen:last_seen+nb_pts_per_voxel[idx]]]
        grid_barycenter.append(np.mean(voxel_grid[tuple(vox)],axis=0))
        grid_candidate_center.append(voxel_grid[tuple(vox)][np.linalg.norm(voxel_grid[tuple(vox)]-np.mean(voxel_grid[tuple(vox)],axis=0),axis=1).argmin()])
        last_seen+=nb_pts_per_voxel[idx]
    return grid_candidate_center
#Execute the function, and store the results in the grid_sampled_point_cloud variable
grid_sampled_point_cloud = grid_subsampling(points, 6)
#Save the variable to an ASCII file to open in a 3D Software
%timeit np.savetxt(output_path+dataname+"_sampled.xyz", grid_sampled_point_cloud, delimiter=";", fmt="%s")
```
Copyright 2018 The Dopamine Authors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Visualize Dopamine baselines with Tensorboard
This colab allows you to easily view the trained baselines with Tensorboard (even if you don't have Tensorboard on your local machine!).
Simply specify the game you would like to visualize and then run the cells in order.
_The instructions for setting up Tensorboard were obtained from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/_
```
# @title Prepare all necessary files and binaries.
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
!gsutil -q -m cp -R gs://download-dopamine-rl/compiled_tb_event_files.tar.gz /content/
!tar -xvzf /content/compiled_tb_event_files.tar.gz
# @title Select which game to visualize.
game = 'Asterix' # @param['AirRaid', 'Alien', 'Amidar', 'Assault', 'Asterix', 'Asteroids', 'Atlantis', 'BankHeist', 'BattleZone', 'BeamRider', 'Berzerk', 'Bowling', 'Boxing', 'Breakout', 'Carnival', 'Centipede', 'ChopperCommand', 'CrazyClimber', 'DemonAttack', 'DoubleDunk', 'ElevatorAction', 'Enduro', 'FishingDerby', 'Freeway', 'Frostbite', 'Gopher', 'Gravitar', 'Hero', 'IceHockey', 'Jamesbond', 'JourneyEscape', 'Kangaroo', 'Krull', 'KungFuMaster', 'MontezumaRevenge', 'MsPacman', 'NameThisGame', 'Phoenix', 'Pitfall', 'Pong', 'Pooyan', 'PrivateEye', 'Qbert', 'Riverraid', 'RoadRunner', 'Robotank', 'Seaquest', 'Skiing', 'Solaris', 'SpaceInvaders', 'StarGunner', 'Tennis', 'TimePilot', 'Tutankham', 'UpNDown', 'Venture', 'VideoPinball', 'WizardOfWor', 'YarsRevenge', 'Zaxxon']
agents = ['dqn', 'c51', 'rainbow', 'implicit_quantile']
for agent in agents:
  for run in range(1, 6):
    !mkdir -p "/content/$game/$agent/$run"
    !cp -r "/content/$agent/$game/$run" "/content/$game/$agent/$run"
LOG_DIR = '/content/{}'.format(game)
get_ipython().system_raw(
    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
    .format(LOG_DIR)
)
# @title Start the tensorboard
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```
# Time-series prediction (temperature from weather stations)
Companion to [(Time series prediction, end-to-end)](./sinewaves.ipynb), except on a real dataset.
```
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%datalab project set -p $PROJECT
```
# Data exploration and cleanup
The data are temperature data from US weather stations. This is a public dataset from NOAA.
```
from __future__ import print_function
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import google.datalab.bigquery as bq
def query_to_dataframe(year):
    query = """
    SELECT
      stationid, date,
      MIN(tmin) AS tmin,
      MAX(tmax) AS tmax,
      IF (MOD(ABS(FARM_FINGERPRINT(stationid)), 10) < 7, True, False) AS is_train
    FROM (
      SELECT
        wx.id as stationid,
        wx.date as date,
        CONCAT(wx.id, " ", CAST(wx.date AS STRING)) AS recordid,
        IF (wx.element = 'TMIN', wx.value/10, NULL) AS tmin,
        IF (wx.element = 'TMAX', wx.value/10, NULL) AS tmax
      FROM
        `bigquery-public-data.ghcn_d.ghcnd_{}` AS wx
      WHERE STARTS_WITH(id, 'USW000')
    )
    GROUP BY
      stationid, date
    """.format(year)
    df = bq.Query(query).execute().result().to_dataframe()
    return df
df = query_to_dataframe(2016)
df.head()
df.describe()
```
Unfortunately, there are missing observations on some days.
```
df.isnull().sum()
```
One way to fix this is to build a pivot table and then replace the nulls with the nearest valid neighbor (forward fill, then backward fill)
```
def cleanup_nulls(df, variablename):
    df2 = df.pivot_table(variablename, 'date', 'stationid', fill_value=np.nan)
    print('Before: {} null values'.format(df2.isnull().sum().sum()))
    df2.fillna(method='ffill', inplace=True)
    df2.fillna(method='bfill', inplace=True)
    df2.dropna(axis=1, inplace=True)
    print('After: {} null values'.format(df2.isnull().sum().sum()))
    return df2
traindf = cleanup_nulls(df[df['is_train']], 'tmin')
traindf.head()
seq = traindf.iloc[:,0]
print('{} values in the sequence'.format(len(seq)))
ax = sns.tsplot(seq)
ax.set(xlabel='day-number', ylabel='temperature');
seq.to_string(index=False).replace('\n', ',')
# Save the data to disk in such a way that each time series is on a single line
# save to sharded files, one for each year
# This takes about 15 minutes
import shutil, os
shutil.rmtree('data/temperature', ignore_errors=True)
os.makedirs('data/temperature')
def to_csv(indf, filename):
    df = cleanup_nulls(indf, 'tmin')
    print('Writing {} sequences to {}'.format(len(df.columns), filename))
    with open(filename, 'w') as ofp:
        for i in range(len(df.columns)):
            if i % 10 == 0:
                print('{}'.format(i), end='...')
            seq = df.iloc[:365, i]  # chop to 365 days to avoid leap-year problems ...
            line = seq.to_string(index=False, header=False).replace('\n', ',')
            ofp.write(line + '\n')
    print('Done')

for year in range(2000, 2017):
    print('Querying data for {} ... hang on'.format(year))
    df = query_to_dataframe(year)
    to_csv(df[df['is_train']], 'data/temperature/train-{}.csv'.format(year))
    to_csv(df[~df['is_train']], 'data/temperature/eval-{}.csv'.format(year))
%bash
head -1 data/temperature/eval-2004.csv | tr ',' ' ' | wc
head -1 data/temperature/eval-2005.csv | tr ',' ' ' | wc
wc -l data/temperature/train*.csv
wc -l data/temperature/eval*.csv
%bash
gsutil -m rm -rf gs://${BUCKET}/temperature/*
gsutil -m cp data/temperature/*.csv gs://${BUCKET}/temperature
```
Our CSV files contain sequences of 365 values each. For training, the first 364 values of each sequence are the inputs and the 365th is the ground truth.
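As a minimal sketch (separate from the trainer code), this is how one CSV line splits into model inputs and label under the layout just described:

```python
# A minimal sketch (separate from the trainer code) of how one CSV line splits
# into model inputs and label under the 365-value layout described above.
def split_sequence(line):
    values = [float(v) for v in line.strip().split(',')]
    assert len(values) == 365, 'expected a year chopped to 365 values'
    return values[:-1], values[-1]  # first 364 values -> inputs, last one -> truth

example = ','.join(str(i) for i in range(365))  # synthetic stand-in sequence
inputs, label = split_sequence(example)
print(len(inputs), label)  # 364 364.0
```

The trainer package reads these lines with TensorFlow input functions, but the split is the same.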
# Model
This is the same model as [(Time series prediction, end-to-end)](./sinewaves.ipynb)
```
%bash
#for MODEL in dnn; do
for MODEL in cnn dnn lstm lstm2 lstmN; do
OUTDIR=gs://${BUCKET}/temperature/$MODEL
JOBNAME=temperature_${MODEL}_$(date -u +%y%m%d_%H%M%S)
REGION=us-central1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/sinemodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=1.2 \
-- \
--train_data_paths="gs://${BUCKET}/temperature/train*.csv" \
--eval_data_paths="gs://${BUCKET}/temperature/eval*.csv" \
--output_dir=$OUTDIR \
--train_steps=5000 --sequence_length=365 --model=$MODEL
done
```
## Results
When I ran it, these were the RMSEs that I got for different models:
| Model | # of steps | Minutes | RMSE |
| --- | --- | --- | --- |
| dnn | 5000 | 19 min | 9.82 |
| cnn | 5000 | 22 min | 6.68 |
| lstm | 5000 | 41 min | 3.15 |
| lstm2 | 5000 | 107 min | 3.91 |
| lstmN | 5000 | 107 min | 11.5 |
As you can see, LSTMs can really shine on real-world time-series data, but the version highly tuned for the synthetic data doesn't work as well on a similar, but different, problem. Instead, we'll probably have to retune ...
## Next steps
This is likely not the best way to formulate this problem. A better method to work with this data would be to pull out arbitrary, shorter sequences (say of length 20) from the input sequences. This would be akin to image augmentation in that we would get arbitrary subsets, and would allow us to predict the sequence based on just the last 20 values instead of requiring a whole year. It would also avoid the problem that currently, we are training only for Dec. 30/31.
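As a hypothetical numpy sketch (not part of the trainer package) of that augmentation idea, one could pull arbitrary length-20 windows out of a full-year sequence:

```python
import numpy as np

# Hypothetical sketch of the augmentation idea above: pull arbitrary
# length-20 windows (19 inputs + 1 target) out of a full 365-day sequence.
def random_windows(seq, window=20, n=5, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Valid start positions must leave room for a full window.
    starts = rng.integers(0, len(seq) - window + 1, size=n)
    return np.stack([seq[s:s + window] for s in starts])

year = np.arange(365, dtype=float)  # stand-in for one station's tmin series
batch = random_windows(year)
print(batch.shape)  # (5, 20)
```

Each row of `batch` would then be split into inputs (first 19 values) and target (last value), exactly as with the full-year sequences.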
Feature engineering would also help. For example, we might also add a climatological average (average temperature at this location over the last 10 years on this date) as one of the inputs. I'll leave both these improvements as exercises for the reader :)
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Variational Inference in Stan
Variational inference is a scalable technique for approximate Bayesian inference.
Stan implements an automatic variational inference algorithm,
called Automatic Differentiation Variational Inference (ADVI)
which searches over a family of simple densities to find the best
approximate posterior density.
ADVI produces an estimate of the parameter means together with a sample
from the approximate posterior density.
ADVI approximates the variational objective function, the evidence lower bound or ELBO,
using stochastic gradient ascent.
The algorithm ascends these gradients using an adaptive stepsize sequence
that has one parameter ``eta`` which is adjusted during warmup.
The number of draws used to approximate the ELBO is denoted by ``elbo_samples``.
ADVI heuristically determines a rolling window over which it computes
the average and the median change of the ELBO.
When this change falls below a threshold, denoted by ``tol_rel_obj``,
the algorithm is considered to have converged.
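Schematically, the objective ADVI ascends is the standard ELBO (this is the textbook definition, not a description of Stan's internal implementation):

$$
\mathrm{ELBO}(\phi) = \mathbb{E}_{q_\phi(\theta)}\big[\log p(y, \theta)\big] - \mathbb{E}_{q_\phi(\theta)}\big[\log q_\phi(\theta)\big],
$$

where $q_\phi$ is the approximating density over parameters $\theta$ and the expectations are estimated with Monte Carlo draws from $q_\phi$ (the role of ``elbo_samples``).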
### Example: variational inference for model ``bernoulli.stan``
In CmdStanPy, the `CmdStanModel` class method `variational` invokes CmdStan with
`method=variational` and returns an estimate of the approximate posterior
mean of all model parameters as well as a set of draws from this approximate
posterior.
```
import os
from cmdstanpy.model import CmdStanModel
from cmdstanpy.utils import cmdstan_path
bernoulli_dir = os.path.join(cmdstan_path(), 'examples', 'bernoulli')
stan_file = os.path.join(bernoulli_dir, 'bernoulli.stan')
data_file = os.path.join(bernoulli_dir, 'bernoulli.data.json')
# instantiate, compile bernoulli model
model = CmdStanModel(stan_file=stan_file)
# run CmdStan's variational inference method, returns object `CmdStanVB`
vi = model.variational(data=data_file)
```
The class [`CmdStanVB`](https://cmdstanpy.readthedocs.io/en/latest/api.html#stanvariational) provides the following properties to access information about the parameter names, estimated means, and the sample:
+ `column_names`
+ `variational_params_dict`
+ `variational_params_np`
+ `variational_params_pd`
+ `variational_sample`
```
print(vi.column_names)
print(vi.variational_params_dict['theta'])
print(vi.variational_sample.shape)
```
These estimates are only valid if the algorithm has converged to a good
approximation. When the algorithm fails to do so, the `variational`
method will throw a `RuntimeError`.
```
model_fail = CmdStanModel(stan_file='eta_should_fail.stan')
vi_fail = model_fail.variational()
```
Unless you set `require_converged=False`:
```
vi_fail = model_fail.variational(require_converged=False)
```
This lets you inspect the output to try to diagnose the issue with the model.
```
vi_fail.variational_params_dict
```
See the [api docs](https://cmdstanpy.readthedocs.io/en/latest/api.html), section [`CmdStanModel.variational`](https://cmdstanpy.readthedocs.io/en/latest/api.html#cmdstanpy.CmdStanModel.variational) for a full description of all arguments.
## The *SQLAlchemy* library.
[*SQLAlchemy*](http://www.sqlalchemy.org/) comprises several tools for interacting with relational databases in a "pythonic" way.
It consists of:
* **SQLAlchemy Core**, which provides a generic interface, independent of the database engine, through an SQL-based expression language.
* **SQLAlchemy ORM**, an object-relational mapper (ORM) between Python objects and relational transactions.
Covering SQLAlchemy in depth could take several chapters, so for the purposes of this course we will only take a brief look at the SQLAlchemy ORM and the Jupyter extension for the SQLAlchemy expression language ([*ipython-sql*](https://github.com/catherinedevlin/ipython-sql)), which builds on the functionality of SQLAlchemy Core.
**Note:** In this example we will use the [SQLite](https://sqlite.org/index.html) database engine, which ships with Python and can create databases in a file or even in memory.
```
!pip install sqlalchemy
import sqlalchemy
```
## Connection URL for a database engine.
The first step in interacting with a database is connecting to it. To do so, you need to know its location and have the credentials (usually a username and password) required to access it.
A database connection URL uses the following syntax.
```
<dialect>+<driver>://<user>:<password>@<host>:<port>/<database>
```
For more information, see:
http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls
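As a sketch of that syntax (the helper function, server, user, and password below are hypothetical placeholders, not real accounts):

```python
# Illustrative only: the helper and credentials here are hypothetical
# placeholders, not real servers or accounts.
def make_url(dialect, driver, user, password, host, port, database):
    return f"{dialect}+{driver}://{user}:{password}@{host}:{port}/{database}"

postgres_url = make_url("postgresql", "psycopg2", "user", "secret",
                        "localhost", 5432, "alumnos")
print(postgres_url)  # postgresql+psycopg2://user:secret@localhost:5432/alumnos

# SQLite needs no server or credentials: the URL just points at a file,
# or at ':memory:' for an in-memory database.
sqlite_url = "sqlite:///data/alumnos.db"
```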
## The *sqlalchemy.engine.base.Engine* class.
The *sqlalchemy.engine.base.Engine* class instantiates objects that are this library's fundamental element for connecting to a database and, in turn, mapping the attributes of objects created from the ORM model.
To instantiate a *sqlalchemy.engine.base.Engine* object, use the *sqlalchemy.create_engine()* function with the following syntax:
```
create_engine('<url>', <arguments>)
```
**Example:**
We will instantiate an object from *sqlalchemy.Engine*, connected to a database located at [*data/alumnos.db*](data/alumnos.txt).
```
engine = sqlalchemy.create_engine('sqlite:///data/alumnos.db')
type(engine)
```
## Defining an object-relational model.
Before creating the database, it is necessary to define a model that maps an object to at least one table in the database.
The *sqlalchemy.ext.declarative.declarative_base()* function lets us create a model from subclasses of *sqlalchemy.ext.declarative.api.DeclarativeMeta*.
**Example:**
We will create the class *Base* as the general superclass.
**Note:** From this point on, the name *Base* refers to a subclass of *sqlalchemy.ext.declarative.api.DeclarativeMeta*.
```
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
type(Base)
```
### Defining table columns as class attributes.
Subclasses of *Base* correspond to tables in a database.
These subclasses have the *\_\_tablename\_\_* attribute, which holds the name of the table their attributes are mapped to.
The syntax is as follows:
```
class <ClassName>(Base):
    __tablename__ = <table name>
...
...
...
```
Each object instantiated from these subclasses corresponds to a record in the table defined by *\_\_tablename\_\_*.
#### Defining columns as attributes.
To map an attribute to a table column, use the *sqlalchemy.Column* class with the following syntax.
```
class <ClassName>(Base):
    __tablename__ = <table name>
    <attribute name> = sqlalchemy.Column(<data type>, <arguments>)
...
...
```
##### Some relevant parameters of *sqlalchemy.Column*.
* The *primary\_key* parameter, when set to *True*, indicates that the attribute will be the primary key of the table defined by Base.\_\_tablename\_\_ and will also be used as a parameter when instantiating the *Base* subclass.
* The *unique* parameter, when set to *True*, indicates that no two identical values may appear in the column.
##### SQLAlchemy data types.
Each column must be defined in the database with a specific data type.
The data types accepted by SQLAlchemy can be found at http://docs.sqlalchemy.org/en/latest/core/type_basics.html.
In this example we will use the types:
* [*Integer*](http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.Integer).
* [*String*](http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.String).
* [*Float*](http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.Float).
* [*Boolean*](http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.Boolean).
We will now create the class *Alumno*, a subclass of *Base*, bound to the table *alumnos*, whose columns/attributes are:
* *cuenta*, of type *Integer*, with the argument *primary\_key=True*.
* *nombre*, of type *String*, up to 50 characters long.
* *primer_apellido*, of type *String*, up to 50 characters long.
* *segundo_apellido*, of type *String*, up to 50 characters long.
* *carrera*, of type *String*, up to 50 characters long.
* *semestre*, of type *Integer*.
* *promedio*, of type *Float*.
* *al_corriente*, of type *Boolean*.
```
from sqlalchemy import Integer, String, Float, Boolean, Column
class Alumno(Base):
    __tablename__ = 'alumnos'
    cuenta = Column(Integer, primary_key=True)
    nombre = Column(String(50))
    primer_apellido = Column(String(50))
    segundo_apellido = Column(String(50))
    carrera = Column(String(50))
    semestre = Column(Integer)
    promedio = Column(Float)
    al_corriente = Column(Boolean)
```
### Creating the database tables with the *Base.metadata.create_all()* method.
A database may consist of multiple tables. In this case, the database will contain only the table *alumnos*, bound to the class *Alumno*.
To create the database with the defined tables, call the *Base.metadata.create_all()* method on the database managed by the object instantiated from *sqlalchemy.engine.base.Engine*.
* If the database file does not exist, it will be created.
* If tables are already defined in the database, only the new ones will be created, and the data they already contain will not be deleted.
```
Base.metadata.create_all(engine)
```
The file *alumnos.db* has now been created in the [*data*](data/) directory.
From this point on, every object instantiated from *Alumno* can be represented as a record in the table *alumnos*.
### Creating a session.
The *sqlalchemy.orm.sessionmaker()* function creates a *sqlalchemy.orm.session.sessionmaker* class whose attributes and methods allow interaction with the database. In this chapter we will use:
* The *add()* method, which adds or replaces the record bound to an object instantiated from a *Base* subclass in the corresponding record within the database.
* The *delete()* method, which deletes the record bound to the object.
* The *commit()* method, which applies the changes to the database.
```
from sqlalchemy.orm import sessionmaker
Sesion = sessionmaker(bind=engine)
type(Sesion)
sesion = Sesion()
```
### Populating the database from a file.
The file [*data/alumnos.txt*](data/alumnos.txt) contains the representation of a *list* object that in turn contains *dict* objects with the fields:
* *'Cuenta'*.
* *'Nombre'*.
* *'Primer Apellido'*.
* *'Segundo Apellido'*.
* *'Carrera'*.
* *'Semestre'*.
* *'Promedio'*.
* *'Al Corriente'*.
Note that the field identifiers are similar to the column names defined in the class *Alumno*. They only need to be converted to lowercase and have their spaces replaced with underscores.
The following cell converts each *dict* object in the file *data/alumnos.txt* into an object instantiated from *Alumno* and binds it to the database.
Each object bound to *alumno* is replaced by the next one each time *Alumno* is instantiated in an iteration. This means the replaced object is destroyed, but its attributes persist in the database.
```
# Load the data from the file.
with open('data/alumnos.txt', 'tr') as archivo:
    base = eval(archivo.read())

# Populate the alumnos table by instantiating Alumno for each dict element.
for registro in base:
    # Instantiate the alumno object from the Alumno class.
    alumno = Alumno(cuenta=registro['Cuenta'])
    del registro['Cuenta']
    for campo in registro:
        # The fields are converted into attributes.
        setattr(alumno, campo.lower().replace(' ', '_'), registro[campo])
    # Add the object to the session.
    sesion.add(alumno)

# Apply the changes to the database.
sesion.commit()
```
### Queries via the ORM.
Objects of subclasses of *sqlalchemy.orm.session.sessionmaker* can run searches over the objects instantiated from subclasses of *Base* and over the tables bound to them.
SQLAlchemy can run queries both through relational algebra and through searches over the attributes of its instantiated objects.
**Note:** The class *Alumno*, a subclass of *Base*, will be used to illustrate these operations.
For the purposes of this course, only the following methods will be covered:
* *query.first()* returns the first object found by a search.
* *query.all()* returns a *list* object with all the objects resulting from a search.
* *query.filter()* returns a *Query* object with the objects found by running a search that satisfies a logical expression over the class attributes, passed as its argument.
* *query.filter_by()* returns a *Query* object with the objects found by searching the database table for rows whose column values equal the value passed as an argument in the form ```<column>=<value>```.
```
sesion.query(Alumno).filter(Alumno.cuenta)
sesion.query(Alumno).filter(Alumno.cuenta).all()
resultado = sesion.query(Alumno).filter(Alumno.cuenta > 1231221).all()
resultado[0].nombre
```
## The *ipython-sql* Jupyter extension.
The [*ipython-sql*](https://github.com/catherinedevlin/ipython-sql) extension uses SQLAlchemy to connect to databases and execute SQL statements from a Jupyter cell by means of "magic commands".
To load the extension, run the following from a cell:
```
%load_ext sql
```
To connect to the database, use the following syntax:
```
%sql <database URL>
```
To execute one SQL statement per cell, use the following syntax:
```
%sql <statement>
```
To execute several SQL statements in a single cell, use the following syntax:
```
%%sql
<statement>
<statement>
...
<statement>
```
**Example:**
We will connect to the database created with *SQLite* and run a query.
```
!pip install ipython-sql
%load_ext sql
%sql sqlite:///data/alumnos.db
%%sql
select * from alumnos
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2019.</p>
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/checkpoint"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The phrase "Saving a TensorFlow model" typically means one of two things:
1. Checkpoints, OR
2. SavedModel.
Checkpoints capture the exact value of all parameters (`tf.Variable` objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).
This guide covers APIs for writing and reading checkpoints.
## Setup
```
import tensorflow as tf
class Net(tf.keras.Model):
  """A simple linear model."""

  def __init__(self):
    super(Net, self).__init__()
    self.l1 = tf.keras.layers.Dense(5)

  def call(self, x):
    return self.l1(x)
net = Net()
```
## Saving from `tf.keras` training APIs
See the [`tf.keras` guide on saving and
restoring](./keras/overview.ipynb#save_and_restore).
`tf.keras.Model.save_weights` saves a TensorFlow checkpoint.
```
net.save_weights('easy_checkpoint')
```
## Writing checkpoints
The persistent state of a TensorFlow model is stored in `tf.Variable` objects. These can be constructed directly, but are often created through high-level APIs like `tf.keras.layers` or `tf.keras.Model`.
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of `tf.train.Checkpoint`, `tf.keras.layers.Layer`, and `tf.keras.Model` automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model checkpoint with `Model.save_weights`.
### Manual checkpointing
#### Setup
To help demonstrate all the features of `tf.train.Checkpoint` define a toy dataset and optimization step:
```
def toy_dataset():
  inputs = tf.range(10.)[:, None]
  labels = inputs * 5. + tf.range(5.)[None, :]
  return tf.data.Dataset.from_tensor_slices(
      dict(x=inputs, y=labels)).repeat().batch(2)

def train_step(net, example, optimizer):
  """Trains `net` on `example` using `optimizer`."""
  with tf.GradientTape() as tape:
    output = net(example['x'])
    loss = tf.reduce_mean(tf.abs(output - example['y']))
  variables = net.trainable_variables
  gradients = tape.gradient(loss, variables)
  optimizer.apply_gradients(zip(gradients, variables))
  return loss
```
#### Create the checkpoint objects
To manually make a checkpoint you will need a `tf.train.Checkpoint` object, where the objects you want to checkpoint are set as attributes on it.
A `tf.train.CheckpointManager` can also be helpful for managing multiple checkpoints.
```
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
```
#### Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a `tf.train.Checkpoint` object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
```
def train_and_checkpoint(net, manager):
  ckpt.restore(manager.latest_checkpoint)
  if manager.latest_checkpoint:
    print("Restored from {}".format(manager.latest_checkpoint))
  else:
    print("Initializing from scratch.")

  for _ in range(50):
    example = next(iterator)
    loss = train_step(net, example, opt)
    ckpt.step.assign_add(1)
    if int(ckpt.step) % 10 == 0:
      save_path = manager.save()
      print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
      print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager)
```
#### Restore and continue training
After the first run you can pass a new model and manager, and pick up training exactly where you left off:
```
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
```
The `tf.train.CheckpointManager` object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
```
print(manager.checkpoints) # List the three remaining checkpoints
```
These paths, e.g. `'./tf_ckpts/ckpt-10'`, are not files on disk. Instead they are prefixes for an `index` file and one or more data files which contain the variable values. These prefixes are grouped together in a single `checkpoint` file (`'./tf_ckpts/checkpoint'`) where the `CheckpointManager` saves its state.
```
!ls ./tf_ckpts
```
<a id="loading_mechanics"/>
## Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the `"l1"` in `self.l1 = tf.keras.layers.Dense(5)`. `tf.train.Checkpoint` uses its keyword argument names, as in the `"step"` in `tf.train.Checkpoint(step=...)`.
The dependency graph from the example above looks like this:

With the optimizer in red, regular variables in blue, and optimizer slot variables in orange. The other nodes, for example representing the `tf.train.Checkpoint`, are black.
Slot variables are part of the optimizer's state, but are created for a specific variable. For example the `'m'` edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if the variable and the optimizer would both be saved, thus the dashed edges.
Calling `restore()` on a `tf.train.Checkpoint` object queues the requested restorations, restoring variable values as soon as there's a matching path from the `Checkpoint` object. For example we can load just the bias from the model we defined above by reconstructing one path to it through the network and the layer.
```
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # We get the restored value now
```
The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint we wrote above. It includes only the bias and a save counter that `tf.train.Checkpoint` uses to number checkpoints.

`restore()` returns a status object, which has optional assertions. All of the objects we've created in our new `Checkpoint` have been restored, so `status.assert_existing_objects_matched()` passes.
```
status.assert_existing_objects_matched()
```
There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. `status.assert_consumed()` only passes if the checkpoint and the program match exactly, and would throw an exception here.
### Delayed restorations
`Layer` objects in TensorFlow may delay the creation of variables to their first call, when input shapes are available. For example the shape of a `Dense` layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a `Layer` also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, `tf.train.Checkpoint` queues restores which don't yet have a matching variable.
```
delayed_restore = tf.Variable(tf.zeros([1, 5]))
print(delayed_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = delayed_restore
print(delayed_restore.numpy()) # Restored
```
### Manually inspecting checkpoints
`tf.train.list_variables` lists the checkpoint keys and shapes of variables in a checkpoint. Checkpoint keys are paths in the graph displayed above.
```
tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))
```
### List and dictionary tracking
As with direct attribute assignments like `self.l1 = tf.keras.layers.Dense(5)`, assigning lists and dictionaries to attributes will track their contents.
```
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
```
You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like attribute-based loading, these wrappers restore a variable's value as soon as it's added to the container.
```
restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
```
The same tracking is automatically applied to subclasses of `tf.keras.Model`, and may be used for example to track lists of layers.
## Saving object-based checkpoints with Estimator
See the [Estimator guide](https://www.tensorflow.org/guide/estimator).
Estimators by default save checkpoints with variable names rather than the object graph described in the previous sections. `tf.train.Checkpoint` will accept name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's `model_fn`. Saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
```
import tensorflow.compat.v1 as tf_compat

def model_fn(features, labels, mode):
    net = Net()
    opt = tf.keras.optimizers.Adam(0.1)
    ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),
                               optimizer=opt, net=net)
    with tf.GradientTape() as tape:
        output = net(features['x'])
        loss = tf.reduce_mean(tf.abs(output - features['y']))
    variables = net.trainable_variables
    gradients = tape.gradient(loss, variables)
    return tf.estimator.EstimatorSpec(
        mode,
        loss=loss,
        train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),
                          ckpt.step.assign_add(1)),
        # Tell the Estimator to save "ckpt" in an object-based format.
        scaffold=tf_compat.train.Scaffold(saver=ckpt))

tf.keras.backend.clear_session()
est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')
est.train(toy_dataset, steps=10)
```
`tf.train.Checkpoint` can then load the Estimator's checkpoints from its `model_dir`.
```
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(
    step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)
ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))
ckpt.step.numpy()  # From est.train(..., steps=10)
```
## Summary
TensorFlow objects provide an easy automatic mechanism for saving and restoring the values of variables they use.
# Volumetrics: HCIP calculation
We'll implement the volumetric equation:
$$ V = A \times T \times G \times \phi \times N\!\!:\!\!G \times S_\mathrm{O} \times \frac{1}{B_\mathrm{O}} $$
## Gross rock volume
$$ \mathrm{GRV} = A \times T $$
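As a minimal sketch (matching the `calculate_grv(thick, area)` call used later in this notebook; the choice of units is left to the caller):

```python
def calculate_grv(thick, area):
    """Gross rock volume: thickness times area.

    Units are the caller's responsibility, e.g. m * m2 -> m3.
    """
    return thick * area

calculate_grv(10, 2e6)  # 10 m over 2 km2 (2e6 m2) -> 2e7 m3
```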
## Geometric factor
Now we need to compensate for the prospect not being a flat slab of rock — using the geometric factor.
We will implement the equations implied by this diagram:
<img src="http://subsurfwiki.org/images/6/66/Geometric_correction_factor.png" width=600>
Let's turn this one into a function too. It's a little trickier:
Apply the geometric factor to the gross rock volume:
## Multiple prospects
```
thicknesses = [10, 25, 15, 5, 100]
heights = [75, 100, 20, 100, 200]
```
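These paired lists line up prospect by prospect, so one simple pattern is to `zip` them together. The thickness-to-height ratio computed here is just an illustration of iterating over the pairs, not a formula from these notes:

```python
thicknesses = [10, 25, 15, 5, 100]
heights = [75, 100, 20, 100, 200]

# Walk the prospects pairwise and compute a per-prospect quantity.
ratios = [t / h for t, h in zip(thicknesses, heights)]
print(ratios)
```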
## HC pore volume
$$ \mathrm{HCPV} = \mathrm{N\!:\!G} \times \phi \times S_\mathrm{O} $$
We need:
- net:gross — the ratio of reservoir-quality rock thickness to the total thickness of the interval.
- porosity
- $S_\mathrm{O}$ — the oil saturation, or proportion of oil to total pore fluid.
### EXERCISE
Turn this into a function by rearranging the following lines of code:
"""A function to compute the hydrocarbon pore volume."""
return hcpv
hcpv = netg * por * s_o
def calculate_hcpv(netg, por, s_o):
```
# Put your code here:
```
After you define the function and run that cell, this should work:
```
calculate_hcpv(0.5, 0.24, 0.8)
```
You should get: `0.096`.
## Formation volume factor
Oil shrinks when we produce it, especially if it has a high GOR. The FVF, or $B_\mathrm{O}$, is the ratio of a reservoir barrel to a stock-tank barrel (at 25 deg C and 1 atm). Typically the FVF is between 1 (heavy oil) and 1.7 (high GOR).
```
fvf = 1.1
```
We could define something to remember the FVF for different types of oil:
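For example, a simple dictionary; the categories and values here are only illustrative, consistent with the rough 1 to 1.7 range mentioned above:

```python
# Hypothetical FVF lookup by oil type; values are rough illustrations.
fvf_by_type = {
    'heavy': 1.0,
    'medium': 1.2,
    'light': 1.4,
    'high GOR': 1.7,
}

fvf_by_type['medium']  # 1.2
```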
### EXERCISE
For gas, $B_\mathrm{G}$ is $0.35 Z T / P$, where $Z$ is the correction factor, or gas compressibility factor. $T$ should be in kelvin and $P$ in kPa. $Z$ is usually between 0.8 and 1.2, but it can be as low as 0.3 and as high as 2.0.
Can you write a function to calculate $B_\mathrm{G}$?
```
def calculate_Bg( ): # Add the arguments.
"""Write a docstring."""
return # Don't forget to return something!
```
## Put it all together
Now we have the components of the volumetric equation:
[For more on conversion to bbl, BOE, etc.](https://en.wikipedia.org/wiki/Barrel_of_oil_equivalent)
### EXERCISE
- Can you write a function to compute the volume (i.e. the HCIP), given all the inputs?
- Try to use the functions for calculating GRV and HCPV that you have already written.
As a reminder, here's the equation:
$$ V = A \times T \times G \times \phi \times N\!\!:\!\!G \times S_\mathrm{O} \times \frac{1}{B_\mathrm{O}} $$
```
# Put your code here.
```
When you've defined the function, this should work:
```
calculate_hcip(area, thick, g, por, netg, s_o, fvf)
```
You should get `4189090909`.
## Monte Carlo simulation
We can easily draw randomly from distributions of properties:
- Normal: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html
- Uniform: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
- Lognormal: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.lognormal.html
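For example, drawing porosity-like samples from a normal distribution; the mean and standard deviation below are placeholders, and the seed is only there to make the example reproducible:

```python
import numpy as np

np.random.seed(42)  # for reproducibility of the example only
por = np.random.normal(loc=0.24, scale=0.05, size=100)  # porosity-like samples
print(por.mean())  # close to 0.24, but not exact with only 100 samples
```

A histogram of these (`plt.hist(por, bins=20)`) looks ragged at 100 samples, as noted below.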
The histogram looks a bit ragged, but this is probably because of the relatively small number of samples.
### EXERCISE
1. How does the histogram look if you take 1000 or 10,000 samples instead of 100?
1. Make distributions for some of the other properties, like thickness and FVF.
1. Maybe our functions should check that we don't get unreasonable values, like negative numbers or decimal fractions over 1.0. Try to implement this if you have time.
----
## Full MC calculation
Remember how, when we passed lists of multiple values to our functions, they didn't work, but arrays did? Our distributions are arrays, so we might be able to pass them straight to our function:
```
calculate_hcip(area, thick, g, por, netg, s_o, fvf)
hcip = calculate_hcip(area, thick, g, por, netg, s_o, fvf)
plt.figure(figsize=(20, 4))
plt.bar(np.arange(100), sorted(hcip))
plt.show()

p = 50
hcip_sorted = np.sort(hcip)  # sort once so the P50 label matches the sorted bars
cols = 100 * ['gray']
cols[p] = 'red'
plt.figure(figsize=(20, 4))
plt.bar(np.arange(100), hcip_sorted, color=cols)
plt.text(p, hcip_sorted[p]*1.3, rf'{hcip_sorted[p]:.2e} $\mathrm{{Sm}}^3$', ha='center', color='red', size=20)
plt.show()
```
### Lognormal distributions
We might prefer a lognormal distribution for some parameters, e.g. area and porosity.
This is a little trickier, and involves using `scipy`. The good news is that this gives us access to 97 other continuous distributions, as well as multivariate distributions and other useful things.
```
import scipy.stats
dist = scipy.stats.lognorm(s=0.2, scale=0.15)
```
This has instantiated a continuous distribution, from which we can now sample random variables:
```
samples = dist.rvs(size=10000)
```
These have a lognormal distribution.
```
_ = plt.hist(samples, bins=40)
```
# Reading data from a file
Let's try reading data from a CSV.
```
ls ../data
!head -5 ../data/HC_volumes_random_input.csv
with open('../data/HC_volumes_random_input.csv') as f:
for line in f:
data = line.split(',')
print(data)
import csv
with open('../data/HC_volumes_random_input.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(row)
row
row['phi']
```
## Using `pandas`
```
import pandas as pd
df = pd.read_csv('../data/HC_volumes_random_input.csv')
df.head()
df[['Name', 'Thick [m]']]
calculate_grv(df['Thick [m]'], df['Area [km2]'])
```
We could also compute a distribution for each row in the dataframe:
```
df.columns
def wrapper(row):
_, name, thick, area, g, netg, por, s_o, fvf, grv = row
area *= 1000000
return calculate_hcip(area, thick, g, por, netg, s_o, fvf)
df.apply(wrapper, axis=1)
df['HCIP'] = df.apply(wrapper, axis=1)
df.head()
```
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2018</p>
</div>
# DL20191921: Initial analysis of hemichordate 10x pilot experiment
## Modules and functions
```
%matplotlib inline
import collections
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas import CategoricalDtype
from pandas.api.types import CategoricalDtype, is_categorical_dtype
import scipy.sparse as sp_sparse
import h5py
import tqdm
import scanpy as sc
sc.settings.verbosity = 3
sc.logging.print_versions()
np.random.seed(0)
FeatureBCMatrix = collections.namedtuple('FeatureBCMatrix', ['feature_ids', 'feature_names', 'barcodes', 'matrix'])
def get_matrix_from_h5(filename):
    with h5py.File(filename, 'r') as f:
        if u'version' in f.attrs:
            version = f.attrs['version']
            if version > 2:
                raise ValueError('Matrix HDF5 file format version (%d) is a newer version that is not supported by this function.' % version)
        else:
            raise ValueError('Matrix HDF5 file is an older format version that is not supported by this function.')
        feature_ids = [x.decode('ascii', 'ignore') for x in f['matrix']['features']['id']]
        feature_names = [x.decode('ascii', 'ignore') for x in f['matrix']['features']['name']]
        barcodes = list(f['matrix']['barcodes'][:])
        matrix = sp_sparse.csc_matrix((f['matrix']['data'], f['matrix']['indices'], f['matrix']['indptr']), shape=f['matrix']['shape'])
        return FeatureBCMatrix(feature_ids, feature_names, barcodes, matrix)

def get_expression(fbm, gene_name):
    try:
        gene_index = fbm.feature_names.index(gene_name)
    except ValueError:
        raise Exception("%s was not found in list of gene names." % gene_name)
    return fbm.matrix[gene_index, :].toarray().squeeze()

def process_adata(adata):
    adata.var_names_make_unique()
    sc.pp.filter_cells(adata, min_genes=750)
    sc.pp.filter_genes(adata, min_cells=3)
    sc.pp.log1p(adata)
    adata.raw = adata
    sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
    sc.pl.highly_variable_genes(adata)
    adata = adata[:, adata.var['highly_variable']]
    sc.pp.scale(adata, max_value=10)
    sc.tl.pca(adata, svd_solver='arpack')
    sc.pl.pca(adata)
    sc.pl.pca_variance_ratio(adata, log=True)
    sc.pp.neighbors(adata, n_neighbors=15, n_pcs=20)
    sc.tl.louvain(adata, resolution=0.2)
    sc.tl.umap(adata)
    sc.pl.umap(adata, color='louvain')
    return adata
```
## Ingest and process cell x gene matrix
```
# unsorted channel
nonsorted_adata = sc.read_10x_mtx(
    '/home/ubuntu/data/hemichordate/analysis/nonsorted/',
    var_names='gene_symbols',
    cache=True)
nonsorted_adata.obs['sort'] = False

# sorted channel
sorted_adata = sc.read_10x_mtx(
    '/home/ubuntu/data/hemichordate/analysis/sorted/',
    var_names='gene_symbols',
    cache=True)
sorted_adata.obs['sort'] = True

# concat both sorted and unsorted channels
merge_adata = nonsorted_adata.concatenate(sorted_adata)
merge_adata.obs['sort'] = (merge_adata.obs['sort']
                           .astype(str)
                           .astype(CategoricalDtype(['True', 'False'])))

# generic processing
merge_adata = process_adata(merge_adata)
```
## Unsorted channel results
No detected transgene transcripts.
Weak engrailed signal.
```
sc.pl.umap(merge_adata[merge_adata.obs['sort'] == 'False'],
           color=['TRANSGENE1', 'n_genes', 'en', 'louvain'],
           ncols=2,
           cmap='jet')
```
## Sorted channel results
Weak transgene signal, mostly in Louvain cluster 1.
Low-medium expression of engrailed in Louvain cluster 1.
```
sc.pl.umap(merge_adata[merge_adata.obs['sort'] == 'True'],
           color=['TRANSGENE1', 'n_genes', 'en', 'louvain'],
           ncols=2,
           cmap='jet')
```
## Merged channels results
```
sc.pl.umap(merge_adata,
           color=['TRANSGENE1', 'n_genes', 'en', 'louvain'],
           ncols=2,
           cmap='jet')
```
## Differential analysis
```
# ingest GTF to convert gene names
gtf_df = pd.read_csv('/home/ubuntu/data/hemichordate/hemichordate.gtf', sep='\t', header=None)
gtf_df.columns = ['scaffold', 'scaffold_name', 'gene_feature', 'start', 'end', 'score', 'strand', 'unk', 'annotations']
gtf_df['product_field'] = [any([x.startswith('product ') for x in x.split('; ')]) for x in gtf_df['annotations']]
prod_yes = gtf_df[gtf_df['product_field'] == True].copy()  # copy to avoid chained-assignment warnings
prod_yes['product_desc'] = [subx[9:-1]
                            for x in prod_yes['annotations']
                            for subx in x.split('; ') if subx.startswith('product ')]
prod_yes['gene_field'] = [any([x.startswith('gene ') for x in x.split('; ')]) for x in prod_yes['annotations']]
gene_yes = prod_yes[prod_yes['gene_field'] == True].copy()
gene_yes['value'] = [subx[6:-1]
                     for x in gene_yes['annotations']
                     for subx in x.split('; ') if subx.startswith('gene ')]
gene_yes = gene_yes[~gene_yes['value'].duplicated('first')]
# Wilcoxon test
sc.tl.rank_genes_groups(merge_adata,
                        groupby='louvain',
                        method='wilcoxon',
                        n_genes=20)
de_df = pd.DataFrame(merge_adata.uns['rank_genes_groups']['names'])
de_df['idx'] = [x for x in range(len(de_df))]
de_df = pd.melt(de_df,id_vars= 'idx')
de_df = pd.merge(de_df, gene_yes.loc[:,['product_desc','value']], 'left', 'value')
# output results for cluster 1
pd.options.display.max_rows = 100
de_df[de_df.variable == '1']
recluster_adata = merge_adata[merge_adata.obs.louvain == '1']
# sc.pp.highly_variable_genes(recluster_adata, min_mean=0.5, max_mean=4, min_disp=0.1)
# sc.pl.highly_variable_genes(recluster_adata)
recluster_adata = recluster_adata[:, recluster_adata.var['highly_variable']]
sc.tl.pca(recluster_adata, svd_solver='arpack', use_highly_variable=False)
sc.pl.pca(recluster_adata)
sc.pl.pca_variance_ratio(recluster_adata, log=True)
sc.pp.neighbors(recluster_adata, n_neighbors=5, n_pcs=10)
sc.tl.louvain(recluster_adata, resolution = 0.1)
sc.tl.umap(recluster_adata)
sc.pl.umap(recluster_adata, color = ['louvain','en'])
# Wilcoxon test
sc.tl.rank_genes_groups(recluster_adata,
                        groupby='louvain',
                        method='wilcoxon',
                        n_genes=20)
de_df = pd.DataFrame(recluster_adata.uns['rank_genes_groups']['names'])
de_df['idx'] = [x for x in range(len(de_df))]
de_df = pd.melt(de_df,id_vars= 'idx')
de_df = pd.merge(de_df, gene_yes.loc[:,['product_desc','value']], 'left', 'value')
# output results for cluster 1
pd.options.display.max_rows = 50
for x in ['0','1','2']:
display(x,de_df[de_df.variable == x])
```
```
import gym
from gym import spaces
from gym.utils import seeding
import numpy as np
from os import path
from IPython.display import clear_output
import matplotlib.pyplot as plt
np_random, seed = seeding.np_random(42)
a = np.arange(4).reshape((2,2))
print(a)
[0, 1, 2, 3][:1]
print([row for row in a])
class g2048(gym.Env):
    metadata = {
        'render.modes': ['human', 'rgb_array'],
        'video.frames_per_second': 30
    }

    def __init__(self):
        self.board_size = np.array([4, 4])
        self.action_space = spaces.Discrete(4)
        self.observation_space = spaces.Discrete(np.prod(self.board_size))
        self.viewer = None  # needed by render() and close()
        self.last_u = None
        self.seed()

    def seed(self, seed=None):
        self.np_random, seed = seeding.np_random(seed if seed is not None else np.random.seed())
        return [seed]

    def step(self, move):
        assert int(move) in range(4)
        old_state = self.state.copy()
        # Rotate the board so every move can be treated as a move to the left.
        rotated_state = np.rot90(old_state, move)
        new_rotated_state = []
        for i, row in enumerate(rotated_state):
            new_row = row.copy()
            for j, value in enumerate(row):
                if value == 0:
                    continue
                for k, kvalue in enumerate(new_row[:j]):
                    if kvalue == 0:
                        # Slide the tile into the first empty cell.
                        new_row[k] = value
                        new_row[j] = 0
                        break
                    elif kvalue == value:
                        # Merge equal tiles: values are stored as exponents,
                        # so a merge increments the exponent by 1.
                        new_row[k] = new_row[k] + 1
                        new_row[j] = 0
                        break
            new_rotated_state.append(new_row)
        self.state = np.rot90(np.array(new_rotated_state), 4 - move)
        if (self.state == old_state).all():
            reward = 0
        else:
            reward, _ = self._put_piece()
        done = np.sum(self.state == 0) == 0  # no empty positions left
        return self._get_obs(), reward, done, {}

    def reset(self):
        self.state = np.zeros(self.board_size, dtype=np.int8)
        self._put_piece()
        self._put_piece()
        return self._get_obs()

    def _get_obs(self):
        return self.state

    def _get_pos(self):
        """Select a random position in the state where there is no piece."""
        zero_pos = list(np.array(np.where(self.state == 0)).T)
        return zero_pos[self.np_random.randint(len(zero_pos))]

    def _get_piece(self):
        """Select a piece value to be positioned."""
        return 1 if self.np_random.uniform() < 0.9 else 2

    def _put_piece(self):
        """Put a piece with random value in an empty position."""
        position = self._get_pos()
        value = self._get_piece()
        self.state[position[0]][position[1]] = value
        return value, position

    def render(self, mode='human'):
        # TBD -- this body is carried over from gym's PendulumEnv and does not
        # yet draw the 2048 board.
        if self.viewer is None:
            from gym.envs.classic_control import rendering
            self.viewer = rendering.Viewer(500, 500)
            self.viewer.set_bounds(-2.2, 2.2, -2.2, 2.2)
            rod = rendering.make_capsule(1, .2)
            rod.set_color(.8, .3, .3)
            self.pole_transform = rendering.Transform()
            rod.add_attr(self.pole_transform)
            self.viewer.add_geom(rod)
            axle = rendering.make_circle(.05)
            axle.set_color(0, 0, 0)
            self.viewer.add_geom(axle)
            fname = path.join(path.dirname(__file__), "assets/clockwise.png")
            self.img = rendering.Image(fname, 1., 1.)
            self.imgtrans = rendering.Transform()
            self.img.add_attr(self.imgtrans)
        self.viewer.add_onetime(self.img)
        self.pole_transform.set_rotation(self.state[0] + np.pi/2)
        if self.last_u:
            self.imgtrans.scale = (-self.last_u/2, np.abs(self.last_u)/2)
        return self.viewer.render(return_rgb_array=mode == 'rgb_array')

    def close(self):
        if self.viewer:
            self.viewer.close()
            self.viewer = None
env = g2048()

rews = []
for tr in range(1000):
    obs = env.reset()
    done = False
    total_reward = 0
    while not done:
        # print(obs)
        action = np.random.choice([0, 1, 2, 3])
        obs, reward, done, _ = env.step(action)
        # clear_output(wait=True)
        total_reward += reward
    rews.append(total_reward)

rews = np.array(rews)
print(rews.mean())
plt.hist(rews)
env.step(0)
env.step(0)
env.step(1)
env.step(3)
env.step(0)
```
# [ATM 623: Climate Modeling](../index.ipynb)
[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany
# Lecture 3: Climate sensitivity and feedback
Tuesday February 3 and Thursday February 5, 2015
### About these notes:
This document uses the interactive [`IPython notebook`](http://ipython.org/notebook.html) format (now also called [`Jupyter`](https://jupyter.org)). The notes can be accessed in several different ways:
- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware
- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)
- A complete snapshot of the notes as of May 2015 (end of spring semester) is [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).
Many of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab
## Contents
1. [The feedback concept](#section1)
2. [Climate feedback: some definitions](#section2)
3. [Calculating $\lambda$ for the zero-dimensional EBM](#section3)
4. [Climate sensitivity](#section4)
5. [Feedbacks diagnosed from complex climate models](#section5)
6. [Feedback analysis of the zero-dimensional model with variable albedo](#section6)
## Preamble
- Questions and discussion about the previous take-home assignment on
- zero-dimensional EBM
- exponential relaxation
- timestepping the model to equilibrium
- multiple equilibria with ice albedo feedback
- Reading assignment:
- Everyone needs to read through Chapters 1 and 2 of "The Climate Modelling Primer (4th ed)"
- It is now on reserve at the Science Library
- Read it ASAP, but definitely before the mid-term exam
- Discuss the use of IPython notebook
____________
<a id='section1'></a>
## 1. The feedback concept
____________
A concept borrowed from electrical engineering. You have all probably heard or used the term before, but we'll try to take a more precise approach today.
A feedback occurs when a portion of the output from the action of a system is added to the input and subsequently alters the output:
```
from IPython.display import Image
Image(filename='../images/feedback_sketch.png', width=500)
```
The result of a feedback loop can be either amplification or damping of the process, depending on the sign of the gain in the loop.
We will call amplifying feedbacks **positive** and damping feedbacks **negative**.
We can think of the “process” here as the entire climate system, which contains many examples of both positive and negative feedback.
### Two classic examples:
#### Water vapor feedback
The capacity of the atmosphere to hold water vapor (saturation specific humidity) increases exponentially with temperature. Warming is thus accompanied by moistening (more water vapor), which leads to more warming due to the enhanced water vapor greenhouse effect.
**Positive or negative feedback?**
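To see how strong that exponential dependence is, here is a quick numerical sketch using a Magnus-type fit for saturation vapor pressure. The formula and its coefficients are a standard empirical approximation, not something defined in these notes:

```python
from math import exp

def saturation_vapor_pressure(T_celsius):
    """Approximate saturation vapor pressure [Pa] (Magnus-type empirical fit)."""
    return 611.2 * exp(17.67 * T_celsius / (T_celsius + 243.5))

# Warming from 0 C to 10 C roughly doubles the amount of water vapor
# the air can hold at saturation:
ratio = saturation_vapor_pressure(10) / saturation_vapor_pressure(0)
print(round(ratio, 2))  # about 2
```

This is the relationship behind the often-quoted figure of roughly 7% more water vapor per kelvin of warming.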
#### Ice-albedo feedback
Colder temperatures lead to expansion of the areas covered by ice and snow, which tend to be more reflective than water and vegetation. This causes a reduction in the absorbed solar radiation, which leads to more cooling.
**Positive or negative feedback?**
*Make sure it’s clear that the sign of the feedback is the same whether we are talking about warming or cooling.*
_____________
<a id='section2'></a>
## 2. Climate feedback: some definitions
____________
Let’s go back to the concept of the **planetary energy budget**:
$$C \frac{d T_s}{dt} = F_{TOA} $$
where
$$ F_{TOA} = (1-\alpha) Q - \sigma T_e^4$$
is the **net downward energy flux** at the top of the atmosphere.
So, for example, when the planet is in equilibrium we have $dT_s/dt = 0$, i.e. solar energy in = longwave emissions out.
Let’s imagine we force the climate to change by adding some extra energy to the system, perhaps due to an increase in greenhouse gases, or a decrease in reflective aerosols. Call this extra energy a **radiative forcing**, denoted by $R$ in W m$^{-2}$.
The climate change will be governed by
$$C \frac{d \Delta T_s}{dt} = R + \Delta F_{TOA}$$
where $\Delta T_s$ is the change in global mean surface temperature. This budget accounts for two kinds of perturbations to the energy balance:
- due to the radiative forcing: $R$
- due to resulting changes in radiative processes (internal to the climate system): $\Delta F_{TOA}$
### The feedback factor: a linearization of the perturbation energy budget
The **key assumption** in climate feedback analysis is that *changes in radiative flux are proportional to surface temperature changes*:
$$ \Delta F_{TOA} = \lambda \Delta T_s $$
where $\lambda$ is a constant of proportionality, with units of W m$^{-2}$ K$^{-1}$.
Mathematically, we are assuming that the changes are sufficiently small that we can linearize the budget about the equilibrium state (as we did explicitly in our previous analysis of the zero-dimensional EBM).
Using a first-order Taylor Series expansion, a generic definition for $\lambda$ is thus
$$ \lambda = \frac{\partial F_{TOA}}{\partial T_s} $$
The budget for the perturbation temperature then becomes
$$ C \frac{d \Delta T_s}{dt} = R + \lambda \Delta T_s $$
We will call $\lambda$ the **climate feedback parameter**.
A key question for which we need climate models is this:
*How much warming do we expect for a given radiative forcing?*
Or more explicitly, how much warming if we double atmospheric CO$_2$ concentration (which it turns out produces a radiative forcing of roughly 4 W m$^{-2}$, as we will see later).
Given sufficient time, the system will reach its new equilibrium temperature, at which point
$$\frac{d \Delta T_s}{dt} = 0$$
And the perturbation budget is thus
$$ 0 = R + \lambda \Delta T_s $$
or
$$ \Delta T_s = - \frac{R}{\lambda}$$
where $R$ is the forcing in W m$^{-2}$ and $\lambda$ is the feedback in W m$^{-2}$ K$^{-1}$.
Notice that we have NOT invoked a specific model for the radiative emissions (yet). This is a very general concept that we can apply to ANY climate model.
We have defined things here such that **$\lambda > 0$ for a positive feedback, $\lambda < 0$ for a negative feedback**. That’s convenient!
### Decomposing the feedback into additive components
Another thing to note: we can decompose the total climate feedback into **additive components** due to different processes:
$$ \lambda = \lambda_0 + \lambda_1 + \lambda_2 + ... = \sum_{i=0}^n \lambda_i$$
This is possible because of our assumption of linear dependence on $\Delta T_s$.
We might decompose the net climate feedbacks into, for example
- longwave and shortwave processes
- cloud and non-cloud processes
These individual feedback processes may be positive or negative. This is very powerful, because we can **measure the relative importance of different feedback processes** simply by comparing their $\lambda_i$ values.
Let’s reserve the symbol $\lambda$ to mean the overall or net climate feedback, and use subscripts to denote specific feedback processes.
QUESTION: what is the sign of $\lambda$?
Could there be energy balance for a planet with a positive $\lambda$? Think about your experiences timestepping the energy budget equation.
_____________
<a id='section3'></a>
## 3. Calculating $\lambda$ for the zero-dimensional EBM
____________
Our prototype climate model is the **zero-dimensional EBM**
$$C \frac{d T_s}{dt}=(1-\alpha)Q-\sigma(\beta T_s)^4$$
where $\beta$ is a parameter measuring the proportionality between surface temperature and emission temperature. From observations we estimate
$$ \beta = 255 / 288 = 0.885$$
We now add a radiative forcing to the model:
$$C \frac{d T_s}{dt}=(1-\alpha)Q-\sigma(\beta T_s)^4 + R $$
We saw in the previous lecture that we can **linearize** the model about a reference temperature $\overline{T_s}$ using a first-order Taylor expansion to get
$$C \frac{d \Delta T_s}{d t} = R + \lambda \Delta T_s$$
with the constant of proportionality
$$\lambda = -\Big(4 \sigma \beta^4 \overline{T_s}^3 \Big)$$
which, according to the terminology we have just introduced above, is the net climate feedback for this model.
Evaluating $\lambda$ at the observed global mean temperature of 288 K and using our tuned value of $\beta$ gives
$$ \lambda = -3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $$
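We can check this number directly; the Stefan–Boltzmann constant value below is the standard one, not something special to these notes:

```python
sigma = 5.67e-8    # Stefan-Boltzmann constant [W m-2 K-4]
beta = 255 / 288   # ratio of emission to surface temperature
Ts = 288.0         # reference surface temperature [K]

lambda_0 = -4 * sigma * beta**4 * Ts**3
print(round(lambda_0, 1))  # -3.3
```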
Note that we are treating the albedo $\alpha$ as fixed in this model. We will generalize to variable albedo below.
### What does this mean?
It means that, for every W m$^{-2}$ of excess energy we put into our system, our model predicts that the surface temperature must increase by $-1/ \lambda = 0.3$ K in order to re-establish planetary energy balance.
This model only represents a **single feedback process**: the increase in longwave emission to space with surface warming.
This is called the **Planck feedback** because it is fundamentally due to the Planck blackbody radiation law (warmer temperatures = higher emission).
Here and henceforth we will denote this feedback by $\lambda_0$. To be clear, we are saying that *for this particular climate model*
$$ \lambda = \lambda_0 = -\Big(4 \sigma \beta^4 \overline{T_s}^3 \Big) $$
### Every climate model has a Planck feedback
The Planck feedback is the most basic and universal climate feedback, and is present in every climate model. It is simply an expression of the fact that a warm planet radiates more to space than a cold planet.
As we will see, our estimate of $\lambda_0 = -3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $ is essentially the same as the Planck feedback diagnosed from complex GCMs. Unlike our simple zero-dimensional model, however, most other climate models (and the real climate system) have other radiative processes, such that $\lambda \ne \lambda_0$.
________________
<a id='section4'></a>
## 4. Climate sensitivity
____________
Let’s now define another important term:
**Equilibrium Climate Sensitivity (ECS)**: the global mean surface warming necessary to *balance the planetary energy budget* after a *doubling* of atmospheric CO2.
We will denote this temperature as $\Delta T_{2\times CO_2}$
ECS is an important number. A major goal of climate modeling is to provide better estimates of ECS and its uncertainty.
Let's estimate ECS for our zero-dimensional model. We know that the warming for any given radiative forcing $R$ is
$$ \Delta T_s = - \frac{R}{\lambda}$$
To calculate $\Delta T_{2\times CO_2}$ we need to know the radiative forcing from doubling CO$_2$, which we will denote $R_{2\times CO_2}$. We will spend some time looking at this quantity later in the semester. For now, let's just take a reasonable value
$$ R_{2\times CO_2} \approx 4 ~\text{W} ~\text{m}^{-2} $$
Our estimate of ECS follows directly:
$$ \Delta T_{2\times CO_2} = - \frac{R_{2\times CO_2}}{\lambda} = - \frac{4 ~\text{W} ~\text{m}^{-2}}{-3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1}} = 1.2 ~\text{K} $$
### Is this a good estimate?
**What are the current best estimates for ECS?**
Latest IPCC report AR5 gives a likely range of **1.5 to 4.5 K**.
(there is lots of uncertainty in these numbers – we will definitely come back to this question)
So our simplest of simple climate models is apparently **underestimating climate sensitivity**.
Let’s assume that the true value is $\Delta T_{2\times CO_2} = 3 ~\text{K}$ (middle of the range).
This implies that the true net feedback is
$$ \lambda = -\frac{R_{2\times CO_2}}{\Delta T_{2\times CO_2}} = -\frac{4 ~\text{W} ~\text{m}^{-2}}{3 ~\text{K}} = -1.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $$
We can then deduce the total of the “missing” feedbacks:
$$ \lambda = \lambda_0 + \sum_{i=1}^n \lambda_i $$
$$ -1.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} = -3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} + \sum_{i=1}^n \lambda_i $$
$$ \sum_{i=1}^n \lambda_i = +2.0 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $$
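Numerically, this back-of-the-envelope residual looks like the following; all three inputs simply restate numbers from the text above:

```python
R_2xCO2 = 4.0     # radiative forcing from doubling CO2 [W m-2]
ECS_true = 3.0    # assumed "true" equilibrium sensitivity [K]
lambda_0 = -3.3   # Planck feedback [W m-2 K-1]

lambda_net = -R_2xCO2 / ECS_true    # net feedback, about -1.3
missing = lambda_net - lambda_0     # sum of missing feedbacks, about +2.0
print(lambda_net, missing)
```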
The *net effect of all the processes not included* in our simple model is a **positive feedback**, which acts to **increase the equilibrium climate sensitivity**. Our model is not going to give accurate predictions of global warming because it does not account for these positive feedbacks.
(This does not mean the feedback associated with every missing process is positive! Just that the linear sum of all the missing feedbacks is positive!)
This is consistent with our discussion above. We started our feedback discussion with two examples (water vapor and albedo feedback) which are both positive, and both absent from our model!
We've already seen (in homework exercise) a simple way to add an albedo feedback into the zero-dimensional model. We will analyze this version of the model below. But first, let's take a look at the feedbacks as diagnosed from current GCMs.
____________
<a id='section5'></a>
## 5. Feedbacks diagnosed from complex climate models
____________
### Data from the IPCC AR5
This figure is reproduced from the recent IPCC AR5 report. It shows the feedbacks diagnosed from the various models that contributed to the assessment.
(Later in the term we will discuss how the feedback diagnosis is actually done)
See below for complete citation information.
```
feedback_ar5 = 'http://www.climatechange2013.org/images/figures/WGI_AR5_Fig9-43.jpg'
Image(url=feedback_ar5, width=800)
```
**Figure 9.43** | (a) Strengths of individual feedbacks for CMIP3 and CMIP5 models (left and right columns of symbols) for Planck (P), water vapour (WV), clouds (C), albedo (A), lapse rate (LR), combination of water vapour and lapse rate (WV+LR) and sum of all feedbacks except Planck (ALL), from Soden and Held (2006) and Vial et al. (2013), following Soden et al. (2008). CMIP5 feedbacks are derived from CMIP5 simulations for abrupt fourfold increases in CO2 concentrations (4 × CO2). (b) ECS obtained using regression techniques by Andrews et al. (2012) against ECS estimated from the ratio of CO2 ERF to the sum of all feedbacks. The CO2 ERF is one-half the 4 × CO2 forcings from Andrews et al. (2012), and the total feedback (ALL + Planck) is from Vial et al. (2013).
*Figure caption reproduced from the AR5 WG1 report*
Legend:
- P: Planck feedback
- WV: Water vapor feedback
- LR: Lapse rate feedback
- WV+LR: combined water vapor plus lapse rate feedback
- C: cloud feedback
- A: surface albedo feedback
- ALL: sum of all feedbacks except Planck, i.e. ALL = WV+LR+C+A
Things to note:
- The models all agree strongly on the Planck feedback.
- The Planck feedback is about $\lambda_0 = -3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $ just like our above estimate.
- The water vapor feedback is strongly positive in every model.
- The lapse rate feedback is something we will study later. It is slightly negative.
- For reasons we will discuss later, the best way to measure the water vapor feedback is to combine it with lapse rate feedback.
- Models agree strongly on the combined water vapor plus lapse rate feedback.
- The albedo feedback is slightly positive but rather small globally.
- By far the largest spread across the models occurs in the cloud feedback.
    - Global cloud feedback ranges from slightly negative to strongly positive across the models.
- Most of the spread in the total feedback is due to the spread in the cloud feedback.
- Therefore, most of the spread in the ECS across the models is due to the spread in the cloud feedback.
- Our estimate of $+2.0 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1}$ for all the missing processes is consistent with the GCM ensemble.
### Citation
This is Figure 9.43 from Chapter 9 of the IPCC AR5 Working Group 1 report.
The report and images can be found online at
<http://www.climatechange2013.org/report/full-report/>
The full citation is:
Flato, G., J. Marotzke, B. Abiodun, P. Braconnot, S.C. Chou, W. Collins, P. Cox, F. Driouech, S. Emori, V. Eyring, C. Forest, P. Gleckler, E. Guilyardi, C. Jakob, V. Kattsov, C. Reason and M. Rummukainen, 2013: Evaluation of Climate Models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 741–866, doi:10.1017/CBO9781107415324.020
____________
<a id='section6'></a>
## 6. Feedback analysis of the zero-dimensional model with variable albedo
____________
### The model
In the recent homework you were asked to include a new process in the zero-dimensional EBM: a temperature-dependent albedo.
We used the following formulae:
$$C \frac{dT_s}{dt} =(1-\alpha)Q - \sigma (\beta T_s)^4 + R$$
$$ \alpha(T_s) = \left\{ \begin{array}{ccc}
\alpha_i & & T_s \le T_i \\
\alpha_o + (\alpha_i-\alpha_o) \frac{(T_s-T_o)^2}{(T_i-T_o)^2} & & T_i < T_s < T_o \\
\alpha_o & & T_s \ge T_o \end{array} \right\}$$
with the following parameters:
- $R$ is a radiative forcing in W m$^{-2}$
- $C = 4\times 10^8$ J m$^{-2}$ K$^{-1}$ is a heat capacity for the atmosphere-ocean column
- $\alpha$ is the global mean planetary albedo
- $\sigma = 5.67 \times 10^{-8}$ W m$^{-2}$ K$^{-4}$ is the Stefan-Boltzmann constant
- $\beta=0.885$ is our parameter for the proportionality between surface temperature and emission temperature
- $Q = 341.3$ W m$^{-2}$ is the global-mean incoming solar radiation.
- $\alpha_o = 0.289$ is the albedo of a warm, ice-free planet
- $\alpha_i = 0.7$ is the albedo of a very cold, completely ice-covered planet
- $T_o = 293$ K is the threshold temperature above which our model assumes the planet is ice-free
- $T_i = 260$ K is the threshold temperature below which our model assumes the planet is completely ice covered.
As you discovered in the homework, this model has **multiple equilibria**. For the parameters listed above, there are three equilibria. The warm (present-day) solution and the completely ice-covered solution are both stable equilibria. There is an intermediate solution that is an unstable equilibrium.
### Feedback analysis
In this model, the albedo is not fixed but depends on temperature. Therefore it will change in response to an initial warming or cooling. A feedback!
The net climate feedback in this model is now
$$ \lambda = \lambda_0 + \lambda_\alpha $$
where we are denoting the albedo contribution as $\lambda_\alpha$.
The Planck feedback is unchanged: $\lambda_0 = -3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $
To calculate $\lambda_\alpha$ we need to **linearize the albedo function**. Like any other linearization, we use a Taylor expansion and must take a first derivative:
$$ \Delta F_{TOA} = \lambda \Delta T_s = \big(\lambda_0 + \lambda_\alpha \big) \Delta T_s$$
$$ \lambda_0 = -\Big(4 \sigma \beta^4 \overline{T_s}^3 \Big)$$
$$ \lambda_\alpha = \frac{d}{d T_s} \Big( (1-\alpha)Q \Big) = - Q \frac{d \alpha}{d T_s} $$
Using the above definition for the albedo function, we get
$$ \lambda_\alpha = -Q ~\left\{ \begin{array}{ccc}
0 & & T_s \le T_i \\
2 (\alpha_i-\alpha_o) \frac{(T_s-T_o)}{(T_i-T_o)^2} & & T_i < T_s < T_o \\
0 & & T_s \ge T_o \end{array} \right\}$$
Notice that the feedback we have just calculated is **not constant** but depends on the state of the climate system (i.e. the surface temperature).
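As a sanity check on the linearization, we can compare the analytic $\lambda_\alpha$ to a finite-difference estimate of $-Q \, d\alpha/dT_s$. This standalone sketch restates the albedo function and parameter values from the text:

```python
# Finite-difference check of lambda_alpha (standalone; restates the albedo
# function and parameter values given in the text)
Q = 341.3                      # W m-2
alpha_o, alpha_i = 0.289, 0.7
To, Ti = 293.0, 260.0

def albedo(T):
    if T <= Ti:
        return alpha_i
    if T >= To:
        return alpha_o
    return alpha_o + (alpha_i - alpha_o) * (T - To)**2 / (Ti - To)**2

def lambda_alpha_analytic(T):
    # valid in the partially ice-covered regime Ti < T < To
    return -Q * 2 * (alpha_i - alpha_o) * (T - To) / (Ti - To)**2

T, dT = 275.0, 1e-3
lambda_numeric = -Q * (albedo(T + dT) - albedo(T - dT)) / (2 * dT)
print(lambda_alpha_analytic(T), lambda_numeric)  # both ~ +4.6 W m-2 K-1
```

The two estimates agree, and the positive sign confirms that a warming in the partially ice-covered regime reduces the albedo, which reinforces the warming.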
### Coding up the model in Python
This largely repeats what I asked you to do in your homework.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def albedo(T, alpha_o=0.289, alpha_i=0.7, To=293., Ti=260.):
    alb1 = alpha_o + (alpha_i - alpha_o) * (T - To)**2 / (Ti - To)**2
    alb2 = np.where(T > Ti, alb1, alpha_i)
    alb3 = np.where(T < To, alb2, alpha_o)
    return alb3

def ASR(T, Q=341.3):
    alpha = albedo(T)
    return Q * (1 - alpha)

def OLR(T, sigma=5.67E-8, beta=0.885):
    return sigma * (beta*T)**4

def Ftoa(T):
    return ASR(T) - OLR(T)

T = np.linspace(220., 300., 100)
plt.plot(T, albedo(T))
plt.xlabel('Temperature (K)')
plt.ylabel('albedo')
plt.ylim(0, 1)
plt.title('Albedo as a function of global mean temperature')
```
### Graphical solution: TOA fluxes as functions of temperature
```
plt.plot(T, OLR(T), label='OLR')
plt.plot(T, ASR(T), label='ASR')
plt.plot(T, Ftoa(T), label='Ftoa')
plt.xlabel('Surface temperature (K)')
plt.ylabel('TOA flux (W m$^{-2}$)')
plt.grid()
plt.legend(loc='upper left')
```
### Numerical solution to get the three equilibrium temperatures
```
# Use numerical root-finding to get the equilibria
from scipy.optimize import brentq
# brentq is a root-finding function
# Need to give it a function and two end-points
# It will look for a zero of the function between those end-points
Teq1 = brentq(Ftoa, 280., 300.)
Teq2 = brentq(Ftoa, 260., 280.)
Teq3 = brentq(Ftoa, 200., 260.)
print(Teq1, Teq2, Teq3)
```
### Feedback analysis in the neighborhood of each equilibrium
```
def lambda_0(T, beta=0.885, sigma=5.67E-8):
    return -4 * sigma * beta**4 * T**3

def lambda_alpha(T, Q=341.3, alpha_o=0.289, alpha_i=0.7,
                 To=293., Ti=260.):
    lam1 = 2 * (alpha_i - alpha_o) * (T - To) / (Ti - To)**2
    lam2 = np.where(T > Ti, lam1, 0.)
    lam3 = np.where(T < To, lam2, 0.)
    return -Q * lam3
```
Here we will loop through each equilibrium temperature and compute the feedback factors for those temperatures.
This code also shows an example of how to do nicely formatted numerical output with the `print` function. The format string `%.1f` means floating point number rounded to one decimal place.
```
for Teq in (Teq1, Teq2, Teq3):
    print('Equilibrium temperature: %.1f K' % Teq)
    print('  Planck feedback: %.1f W/m2/K' % lambda_0(Teq))
    print('  Albedo feedback: %.1f W/m2/K' % lambda_alpha(Teq))
    print('  Net feedback: %.1f W/m2/K' % (lambda_0(Teq) + lambda_alpha(Teq)))
```
### Results of the feedback analysis
- The Planck feedback is always negative, but gets a bit weaker in absolute value at very cold temperatures.
- The albedo feedback in this model depends strongly on the state of the system.
- At the intermediate solution $T_s = 273.9$ K, the albedo feedback is strongly positive.
- The **net feedback is positive** at this intermediate temperature.
- The **net feedback is negative** at the warm and cold temperatures.
### What does a **positive** net feedback mean?
Recall from our analytical solutions of the linearized model that the temperature will evolve according to
$$\Delta T_s(t) = \Delta T_s(0) \exp \bigg(-\frac{t}{\tau} \bigg)$$
with the timescale given by
$$ \tau = \frac{C}{-\lambda} $$
In the vicinity of $T_s = 273.9$ K we find that $\lambda > 0$ due to the very strong albedo feedback. Thus $\tau < 0$ in this case, and we are dealing with **exponential growth** of the temperature anomaly rather than exponential decay.
In other words, if the global mean temperature is close to (but not exactly) this value:
```
print(Teq2)
```
the climate system will rapidly warm up OR cool down. The temperature will NOT remain close to $T_s = 273.9$ K. This is an example of an **unstable equilibrium**.
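To get a feel for the timescales involved, here is $\tau = C/(-\lambda)$ for a stabilizing and a destabilizing net feedback, using the heat capacity from the model above. The positive $\lambda$ value is made up purely for illustration:

```python
# Relaxation timescale tau = C / (-lambda) for two illustrative net feedbacks
# (the positive lambda is hypothetical, mimicking the unstable equilibrium)
C = 4e8                    # J m-2 K-1, heat capacity from the model above
seconds_per_year = 3.15e7

for lam in (-1.3, +1.0):   # W m-2 K-1
    tau_years = C / (-lam) / seconds_per_year
    print('lambda = %+.1f  ->  tau = %+.1f years' % (lam, tau_years))
```

A negative $\lambda$ gives a positive decay timescale of roughly a decade; a positive $\lambda$ gives a negative $\tau$, i.e. exponential growth of the perturbation.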
The final state of the system after a perturbation will be one of the **stable equilibria**:
```
print(Teq1, Teq3)
```
Hopefully this is consistent with what you found numerically in the homework.
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
____________
## Version information
____________
```
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, climlab
```
____________
## Credits
The author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php), offered in Spring 2015.
____________
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
import seaborn as sns
from scipy.stats import skew
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import cross_val_score
from google.colab import drive
!pip install tikzplotlib
import tikzplotlib
```
* Station Code: unique code for each site
* Locations: name of the river and where it flows
* State: the state the river flows through
* Temp: mean temperature
* DO: mean dissolved oxygen
* PH: mean pH
* Conductivity: mean conductivity
* BOD: mean biochemical oxygen demand
* NITRATE_N_NITRITE_N: mean nitrate-N and nitrite-N
* FECAL_COLIFORM: mean fecal coliform count
* TOTAL_COLIFORM: mean total coliform count
```
path ="/content/waterquality.csv"
dados = pd.read_csv(path,sep=',', engine='python')
dados.head()
```
**Preprocessing**
We check for missing values, and normalize the data by the mean and standard deviation because the variables differ by orders of magnitude.
```
dados =dados.drop(['STATION CODE', 'LOCATIONS','STATE'], axis=1)
dados.isnull().sum()
```
Missing values are present; we choose to fill each one with the next valid record (backward fill).
```
dadostratados = dados.fillna(method='bfill')
dadostratados.isnull().sum()
variaveis = ['TEMP','DO','pH','CONDUCTIVITY','BOD','NITRATE_N_NITRITE_N','FECAL_COLIFORM','TOTAL_COLIFORM']
dadostratados.head()
```
Let $X_i$ be a predictor in the dataset, and let $\mu_i$ and $\sigma_i$ be its mean and standard deviation; we apply the following normalization:
$$
X^{*}_i=\frac{X_i-\mu_i}{\sigma_i}
$$
```
dadostratados = (dadostratados - dadostratados.mean())/dadostratados.std()
plt.figure(figsize=(10, 10))
corr = dadostratados.corr()
_ = sns.heatmap(corr, annot=True )
```
The correlation between BOD and the following predictors is low: temperature, pH and conductivity. Since they contribute little to the model, we choose not to consider them.
```
variaveis = ['DO','BOD','NITRATE_N_NITRITE_N','FECAL_COLIFORM','TOTAL_COLIFORM']
dadostratados =dadostratados.drop(['pH', 'TEMP','CONDUCTIVITY'], axis=1)
plt.figure(figsize=(10, 10))
corr = dadostratados.corr()
_ = sns.heatmap(corr, annot=True )
dadostratados
```
## Evaluation and validation metrics
A quantitative performance metric is the mean squared error (MSE), the sum of squared errors divided by the number of samples:
\begin{equation}
MSE = \frac{1}{N}\sum_{i=1}^{N}\left ( y_i-\hat{y}_i \right )^2
\end{equation}
Taking the square root of the $MSE$ defines the $RMSE$:
\begin{equation}
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left ( y_i-\hat{y}_i \right )^2}
\end{equation}
The $RMSE$ measures model performance, while $R^2$ represents the proportion of the variance of the dependent variable that is explained by the independent variables in a regression model, defined as
\begin{equation}
R^2 = 1 -\frac{RSS}{TSS}
\end{equation}
where $RSS= \sum_{i=1}^{N}(y_i-\hat{y}_i)^2$ measures the dispersion of the model predictions around the observed values, and $TSS = \sum_{i=1}^{N}\left ( y_i -\bar{y}\right )^2$ measures the total variance of the output.
## Linear regression by least squares
Linear regression is a simple approach to supervised learning: it predicts a quantitative response $Y$ from input predictor variables $X_1, X_2, \ldots, X_N$, assuming an approximately linear relationship between them:
\begin{equation}
Y \approx \beta_0+\beta_1X_1+\beta_2X_2+...+\beta_NX_N
\end{equation}
where $\beta_0, \beta_1, \ldots, \beta_N$ are unknown constants to be determined. The data are used to estimate the coefficients $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_N$, which are then used to compute the estimate $\hat{y}$. Since an irreducible error is always present, the objective is to minimize the sum of squared errors:
\begin{equation}
J\left ( \beta_0,\beta_1,..,\beta_N \right ) = \sum_{i=1}^{N}e_i^2 = \sum_{i=1}^{N}\left ( y_i-\hat{\beta}_0-\sum_{j=1}^{N}\hat{\beta}_j x_{ij} \right )^2
\end{equation}
Setting the derivative of the cost function to zero determines the model coefficients:
\begin{equation}
\frac{\partial J}{\partial \beta_n} = 0
\end{equation}
This yields a system of equations in $N+1$ unknowns (the $N$ slopes plus the intercept).
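The resulting normal equations can be written out directly with NumPy. This is only a sketch of what `LinearRegression` computes internally, run on synthetic data:

```python
import numpy as np

# Solving the least-squares normal equations (X^T X) beta = X^T y directly
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

Xb = np.column_stack([np.ones(len(X)), X])       # add an intercept column
beta_hat = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)  # solve the normal equations
print(beta_hat)  # intercept near 0, slopes near beta_true
```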
```
from sklearn import linear_model
from sklearn.metrics import mean_absolute_error
X = dadostratados.drop(['BOD'], axis=1)
Y = dadostratados['BOD']
X_train, X_test, y_train, y_test = train_test_split( X, Y, test_size=0.30, random_state=42)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(regr, X_train, y_train,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE treino',RMSE
cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=1)
RMSE = cross_val_score(regr, X_test, y_test,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE teste',RMSE
r2_score(y_train, regr.predict(X_train))
r2_score(y_test, regr.predict(X_test))
y_pred = np.array(regr.predict(X_test))
plt.scatter(y_test,y_pred,color='black')
plt.xlabel('Real')
plt.ylabel('Estimado')
tikzplotlib.save("real_estimado_regrlinear.pgf")
```
## Penalized models: Ridge regression
The coefficients produced by ordinary least-squares regression are unbiased, and among unbiased linear estimators they have the lowest variance. Since the $MSE$ combines variance and bias, it is possible to obtain models with lower MSE by allowing the parameter estimates to be biased: a small increase in bias often brings a considerable drop in variance, producing an $MSE$ smaller than that of the least-squares coefficients. One consequence of strong correlations among the predictors is that the variance of the coefficients can become very large.
A possible solution is to penalize the sum of squared errors. In this study we use Ridge regression, which adds a penalty on the sum of squared regression parameters:
\begin{equation}
RSS_{L2} = \sum_{i=1}^{N}\left ( y_i -\hat{y}_i\right )^2+\lambda\sum_{j=1}^{N}\beta^2_j
\end{equation}
This method shrinks the estimates toward 0 as the penalty $\lambda$ grows. By penalizing the model, we trade off the model's variance against its bias.
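Ridge also has a closed-form solution, $\hat{\beta} = (X^TX + \lambda I)^{-1}X^Ty$. A quick check that this formula matches scikit-learn (intercept disabled so the two formulations coincide), on synthetic data:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Ridge closed-form solution vs scikit-learn (no intercept)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.0, -1.0, 2.0]) + 0.1 * rng.normal(size=50)

lam = 10.0
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
beta_skl = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.allclose(beta_closed, beta_skl))  # True
```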
```
from sklearn.linear_model import Ridge

def ridge(x, y, alpha, preditor):
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)
    rr = Ridge(alpha=alpha)
    rr.fit(X_train, y_train)
    cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
    RMSE = cross_val_score(rr, X_train, y_train, scoring='neg_mean_squared_error', cv=cv, n_jobs=-1)
    RMSE = np.sqrt(-RMSE.mean())
    return RMSE
RMSEval = []
lambdas = []
l2 = []
alpha = np.linspace(0.01, 300, 10)
for i in range(0, 10):
    RMSEval.append(ridge(dadostratados.drop(['BOD'], axis=1),
                         dadostratados['BOD'].values.reshape(-1, 1),
                         alpha[i], variaveis[0]))
    lambdas.append(alpha[i])
print(round(min(RMSEval), 4), lambdas[RMSEval.index(min(RMSEval))])
l2.append(lambdas[RMSEval.index(min(RMSEval))])
print('ideal lambda:', l2)
plt.plot(alpha, RMSEval, color='black')
plt.xlabel('$\lambda$')
plt.ylabel('$RMSE$')
tikzplotlib.save("rmsexlambda.pgf")
x=dadostratados.drop(['BOD'], axis=1)
y= dadostratados['BOD'].values.reshape(-1, 1)
X_train,X_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=42)
rr = Ridge(alpha=l2[0])  # Ridge expects a scalar alpha, not a list
rr.fit(X_train, y_train)
y_pred = np.array(rr.predict(X_test))
plt.scatter(y_test,y_pred,color='black')
plt.xlabel('Real')
plt.ylabel('Estimado')
tikzplotlib.save("real_estimado_ridge.pgf")
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(rr, X_train, y_train,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE treino',RMSE
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(rr, X_test, y_test,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE teste',RMSE
r2_score(y_train, rr.predict(X_train))
r2_score(y_test, rr.predict(X_test))
```
# Partial Least Squares (PLS) Regression
```
from sklearn import model_selection
from sklearn.cross_decomposition import PLSRegression, PLSSVD
X = dadostratados.drop(['BOD'], axis=1).astype('float64')
y= dadostratados['BOD']
X_train, X_test , y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3, random_state=1)
from sklearn.preprocessing import scale
n = len(X_train)
kf_10 = RepeatedKFold(n_splits=5, n_repeats=5, random_state=1)
rmse = []
for i in np.arange(1, 4):
    pls = PLSRegression(n_components=i)
    score = model_selection.cross_val_score(pls, scale(X_train), y_train, cv=kf_10,
                                            scoring='neg_mean_squared_error').mean()
    rmse.append(np.sqrt(-score))
plt.plot(np.arange(1, 4), np.array(rmse), '-x',color='black')
plt.xlabel(' N° de componentes principais')
plt.ylabel('RMSE')
tikzplotlib.save("pls.pgf")
pls = PLSRegression(n_components=3)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(pls, X_train, y_train,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE treino',RMSE
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(pls, X_test, y_test,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE teste',RMSE
pls.fit(X_train, y_train)
r2_score(y_train, pls.predict(X_train))
r2_score(y_test, pls.predict(X_test))
y_pred =pls.predict(X_test)
plt.scatter(y_test,y_pred,color='black')
plt.xlabel('Real')
plt.ylabel('Estimado')
tikzplotlib.save("real_estimado_pls.pgf")
```
# Neural network
```
X = dadostratados.drop(['BOD'], axis=1)
Y = dadostratados['BOD']
X_train, X_test, y_train, y_test = train_test_split( X, Y, test_size=0.30, random_state=42)
from sklearn.neural_network import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
regr = MLPRegressor(random_state=42, max_iter=5000).fit(X_train, y_train)
mean_absolute_error(y_test, regr.predict(X_test))
r2_score(y_test, regr.predict(X_test))
y_pred =regr.predict(X_test)
plt.scatter(y_test,y_pred,color='black')
plt.xlabel('Real')
plt.ylabel('Estimado')
tikzplotlib.save("real_estimado_mlp.pgf")
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)
RMSE = cross_val_score(regr, X_train, y_train,scoring = 'neg_mean_squared_error',cv=cv, n_jobs=-1)
RMSE = np.sqrt(-RMSE.mean())
'RMSE treino',RMSE
```
| github_jupyter |
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
sys.path.insert(0, 'python')
#base='kinase_sarfari'
base="gpcr_pdsp"
regression_file="{}_regressors.csv".format(base)
classification_file="{}_classifiers.csv".format(base)
dirname = os.getcwd()
regression_file = os.path.abspath(os.path.join(dirname, regression_file))
regression_df = pd.read_csv(regression_file)
select_fields = ['ProtocolName', 'LabelName', 'GroupValues']
regression_df.GroupValues.fillna('NA', inplace=True)
#regression_df.sort_values(select_fields)
regression_df.LabelName.value_counts()
print(regression_df.shape)
regression_df[regression_df.Correlation.isnull()]
regression_df = regression_df[~regression_df.Correlation.isnull()]
max_indices=regression_df.groupby(select_fields)["Correlation"].idxmax()
best_regression_df=regression_df.loc[max_indices]
#best_regression_df.sort_values(select_fields)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
sns.distplot(best_regression_df.Size, rug=True, kde=False, norm_hist=False, ax=axes)
best_regression_df['Correlation'].describe()
best_regression_df['RMSE'].describe()
high_rms_df = best_regression_df[best_regression_df.RMSE > 10]
print(high_rms_df.ProtocolName.unique())
for _, value in high_rms_df.GroupValues.items():
    print(value)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(14, 7))
sns.distplot(best_regression_df['Correlation'], rug=True, kde=True, ax=axes[0])
sns.distplot(best_regression_df['RMSE'], rug=False, kde=True, ax=axes[1])
best_regression_df['Estimator'].value_counts()
sns.countplot(y='Estimator', data=best_regression_df)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(14, 7))
sns.boxplot(y='Estimator', x='Correlation', data=regression_df, ax=axes)
#for tick in axes.get_xticklabels():
# tick.set_rotation(80)
dirname = os.getcwd()
classification_file = os.path.abspath(os.path.join(dirname,
classification_file))
classification_df = pd.read_csv(classification_file)
classification_df.GroupValues.fillna('NA', inplace=True)
#classification_df.sort_values(select_fields)
max_indices=classification_df.groupby(select_fields)["ROC_AUC"].idxmax()
best_classification_df=classification_df.loc[max_indices]
#best_classification_df.sort_values(select_fields)
best_classification_df['ROC_AUC'].describe()
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
sns.distplot(best_classification_df['ROC_AUC'], rug=True, kde=True, ax=axes)
best_classification_df['Estimator'].value_counts()
sns.countplot(y='Estimator', data=best_classification_df)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(14, 7))
sns.boxplot(y='Estimator', x='ROC_AUC', data=classification_df, ax=axes)
```
# Completely optional
... but fun!
#### Geek-out about Pandas Expanding Rolling Windows follows (a.k.a. `"Let's measure the Earth!!"`)
Rolling windows are cool, especially because they forget the far past, and keep only the recent data "in mind" when performing operations. There are [many types of rolling window](https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions), which fall outside the scope of the Academy.
I do however want to mention the expanding rolling window, as it is crazy cool! _(Confession bear: This is not technically timeseries, but just about the rolling windows of Pandas.)_

Let's say you are [Al-Biruni, and you are trying to calculate the radius of the earth in the 11th century](https://www.quora.com/When-and-how-did-scientists-measure-the-radius-of-the-earth). (for argument's sake).
You take measurements using his rudimentary yet brilliant approach, but your instrument is not that precise.
Our objective is to be off by only 10 km (at the most!)
Let's go measure the earth!
```
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import utils
import matplotlib
np.random.seed(1000)
%matplotlib inline
our_precision = .03
first_try = utils.measure_the_earth(our_precision, verbose=True);
```
Uff... ok, let's try again
```
second_try = utils.measure_the_earth(our_precision, verbose=True);
```
Ok, maybe third time is the charm...
```
third_try = utils.measure_the_earth(our_precision, verbose=True);
```
Oh boy... well we know we can average stuff out... maybe that will help?
```
mean_measure = np.mean([first_try, second_try, third_try])
utils.measure_error(mean_measure, corect_measure=6371)
```
So... how many measurements do we need to get to our 10 km mark?
This is where expanding rolling windows come into play: when you are measuring something which you know is a constant, and yet you have a sequence of measures.
In this case, your first measure is in no way inferior to your most recent measure, all of them are equally useful.
```
measurements = pd.Series([utils.measure_the_earth(our_precision) for i in range(1000)])
```
Let's use an expanding window to see how our mean evolves with the number of experiments:
```
series_of_measurements = measurements.expanding().mean()
```
The number we are looking for is 6371 Km.
```
series_of_measurements.head()
```
And as a plot:
```
utils.plot_number_of_tries(series_of_measurements)
```
So as a summary, expanding rolling windows are super-useful when we are measuring something we know to be a constant, and we have a sequence of measures. So... cool!
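Since `utils.measure_the_earth` isn't shown here, a fully self-contained version of the same experiment looks like this. It assumes the instrument error is simply Gaussian noise around the true radius:

```python
import numpy as np
import pandas as pd

# Self-contained expanding-window demo (utils.measure_the_earth is not shown,
# so this reimplements the idea with plain Gaussian noise)
rng = np.random.default_rng(42)
true_radius = 6371.0   # km
# 1000 noisy measurements with ~3% relative error, mimicking our instrument
measurements = pd.Series(true_radius * (1 + 0.03 * rng.standard_normal(1000)))

running_mean = measurements.expanding().mean()   # mean of everything seen so far
final_error = abs(running_mean.iloc[-1] - true_radius)
print(final_error)  # error after averaging all 1000 measurements, in km
```

The expanding mean converges on the true radius because every measurement, early or late, carries the same information about the constant we are estimating.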