# Preprocessing
In this lab, we will be exploring how to preprocess tweets for sentiment analysis. We will provide a function for preprocessing tweets during this week's assignment, but it is still good to know what is going on under the hood. By the end of this lecture, you will see how to use the [NLTK](http://www.nltk.org) package to perform a preprocessing pipeline for Twitter datasets.
## Setup
You will be doing sentiment analysis on tweets in the first two weeks of this course. To help with that, we will be using the [Natural Language Toolkit (NLTK)](http://www.nltk.org/howto/twitter.html) package, an open-source Python library for natural language processing. It has modules for collecting, handling, and processing Twitter data, and you will be acquainted with them as we move along the course.
For this exercise, we will use a Twitter dataset that comes with NLTK. This dataset has been manually annotated and serves to establish baselines for models quickly. Let us import them now as well as a few other libraries we will be using.
```
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # library for visualization
import random # pseudo-random number generator
```
## About the Twitter dataset
The sample dataset from NLTK is separated into positive and negative tweets. It contains exactly 5000 positive tweets and 5000 negative tweets. This exact match is not a coincidence: the intention is to have a balanced dataset, simply because balanced datasets simplify the design of most computational methods required for sentiment analysis. Keep in mind, however, that this balance is artificial and does not reflect the real distribution of positive and negative classes in live Twitter streams.
The dataset is already downloaded in the Coursera workspace. On a local computer, however, you can download the data by doing:
```
# download the sample Twitter dataset (needed when running on a local machine)
nltk.download('twitter_samples')
```
We can load the text fields of the positive and negative tweets by using the module's `strings()` method like this:
```
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
```
Next, we'll print a report with the number of positive and negative tweets. It is also essential to know the data structure of the datasets:
```
print('Number of positive tweets: ', len(all_positive_tweets))
print('Number of negative tweets: ', len(all_negative_tweets))
print('\nThe type of all_positive_tweets is: ', type(all_positive_tweets))
print('The type of a tweet entry is: ', type(all_negative_tweets[0]))
```
We can see that the data is stored in a list and as you might expect, individual tweets are stored as strings.
You can make a more visually appealing report by using Matplotlib's [pyplot](https://matplotlib.org/tutorials/introductory/pyplot.html) library. Let us see how to create a [pie chart](https://matplotlib.org/3.2.1/gallery/pie_and_polar_charts/pie_features.html#sphx-glr-gallery-pie-and-polar-charts-pie-features-py) to show the same information as above. This simple snippet will serve you in future visualizations of this kind of data.
```
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the two classes
labels = 'Positives', 'Negatives'
# Sizes for each slice
sizes = [len(all_positive_tweets), len(all_negative_tweets)]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
```
## Looking at raw texts
Before anything else, we can print a couple of tweets from the dataset to see how they look. Understanding the data accounts for a large share of the success or failure of data science projects. We can use this time to observe aspects we'd like to consider when preprocessing our data.
Below, you will print one random positive and one random negative tweet. We have added a color mark at the beginning of the string to further distinguish the two. (Warning: This is taken from a public dataset of real tweets and a very small portion has explicit content.)
```
# print positive in green
print('\033[92m' + all_positive_tweets[random.randint(0, 4999)])
# print negative in red
print('\033[91m' + all_negative_tweets[random.randint(0, 4999)])
```
One observation you may have is the presence of [emoticons](https://en.wikipedia.org/wiki/Emoticon) and URLs in many of the tweets. This info will come in handy in the next steps.
## Preprocess raw text for Sentiment analysis
Data preprocessing is one of the critical steps in any machine learning project. It includes cleaning and formatting the data before feeding it into a machine learning algorithm. For NLP, the preprocessing pipeline consists of the following tasks:
* Tokenizing the string
* Lowercasing
* Removing stop words and punctuation
* Stemming
The videos explained each of these steps and why they are important. Let's see how to apply them to a given tweet. We will choose just one tweet and see how it is transformed by each preprocessing step.
```
# Our selected sample. Complex enough to exemplify each step
tweet = all_positive_tweets[2277]
print(tweet)
```
Let's import a few more libraries for this purpose.
```
# download the stopwords from NLTK
nltk.download('stopwords')
import re # library for regular expression operations
import string # for string operations
from nltk.corpus import stopwords # module for stop words that come with NLTK
from nltk.stem import PorterStemmer # module for stemming
from nltk.tokenize import TweetTokenizer # module for tokenizing strings
```
### Remove hyperlinks, Twitter marks and styles
Since we have a Twitter dataset, we'd like to remove some substrings commonly used on the platform, like hashtags, retweet marks, and hyperlinks. We'll use the [re](https://docs.python.org/3/library/re.html) library to perform regular expression operations on our tweet. We'll define our search pattern and use the `sub()` method to remove matches by substituting them with an empty string (i.e. `''`).
```
print('\033[92m' + tweet)
print('\033[94m')
# remove old style retweet text "RT"
tweet2 = re.sub(r'^RT[\s]+', '', tweet)
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet2)
# remove hashtags
# only removing the hash # sign from the word
tweet2 = re.sub(r'#', '', tweet2)
print(tweet2)
```
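To see the combined effect of these three substitutions in isolation, here is the same sequence applied to a short made-up tweet (the text and URL are hypothetical, for illustration only):

```python
import re

sample = "RT @user: Loving this course! #NLP #learning https://t.co/abc123"

clean = re.sub(r'^RT[\s]+', '', sample)              # drop the old-style retweet mark
clean = re.sub(r'https?:\/\/.*[\r\n]*', '', clean)   # drop the hyperlink
clean = re.sub(r'#', '', clean)                      # drop only the '#' sign, keep the word

print(clean)  # → '@user: Loving this course! NLP learning '
```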
### Tokenize the string
To tokenize a string means to split it into individual words, without blanks or tabs. In this same step, we will also convert each word in the string to lower case. The [tokenize](https://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual) module from NLTK allows us to do this easily:
```
print()
print('\033[92m' + tweet2)
print('\033[94m')
# instantiate tokenizer class
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                           reduce_len=True)
# tokenize tweets
tweet_tokens = tokenizer.tokenize(tweet2)
print()
print('Tokenized string:')
print(tweet_tokens)
```
### Remove stop words and punctuation
The next step is to remove stop words and punctuation. Stop words are words that don't add significant meaning to the text. You'll see the list provided by NLTK when you run the cells below.
```
# Import the English stop words list from NLTK
stopwords_english = stopwords.words('english')
print('Stop words\n')
print(stopwords_english)
print('\nPunctuation\n')
print(string.punctuation)
```
We can see that the stop words list above contains some words that could be important in some contexts.
These could be words like _i, not, between, because, won, against_. You might need to customize the stop words list for some applications. For our exercise, we will use the entire list.
For the punctuation, we saw earlier that certain groupings like ':)' and '...' should be retained when dealing with tweets because they are used to express emotions. In other contexts, like medical analysis, these should be removed as well.
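The reason groupings like ':)' and '...' survive the filter below is that `string.punctuation` is a single string of individual characters, and the `word not in string.punctuation` check only drops a token if it appears as a substring of that string:

```python
import string

print(string.punctuation)

# Single punctuation characters are dropped by the membership check...
assert '.' in string.punctuation
# ...but multi-character emoticon tokens like ':)' and '...' are not
# substrings of string.punctuation, so they survive the filter
assert ':)' not in string.punctuation
assert '...' not in string.punctuation
```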
Time to clean up our tokenized tweet!
```
print()
print('\033[92m')
print(tweet_tokens)
print('\033[94m')
tweets_clean = []
for word in tweet_tokens:  # Go through every word in your tokens list
    if (word not in stopwords_english and   # remove stopwords
            word not in string.punctuation):  # remove punctuation
        tweets_clean.append(word)
print('removed stop words and punctuation:')
print(tweets_clean)
```
Please note that the words **happy** and **sunny** in this list are correctly spelled.
### Stemming
Stemming is the process of converting a word to its most general form, or stem. This helps in reducing the size of our vocabulary.
Consider the words:
* **learn**
* **learn**ing
* **learn**ed
* **learn**t
All of these words are stemmed from their common root **learn**. However, in some cases, the stemming process produces words that are not correct spellings of the root word, for example **happi** and **sunni**. That's because it chooses the most common stem for related words. For example, we can look at the set of words that comprises the different forms of happy:
* **happ**y
* **happi**ness
* **happi**er
We can see that the prefix **happi** is more commonly used. We cannot choose **happ** because it is the stem of unrelated words like **happen**.
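We can verify this behavior directly (assuming NLTK is installed; the stemmer itself needs no corpus download):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# The stemmer maps the 'happy' forms to the stem 'happi', but leaves
# the unrelated word 'happen' alone
print([stemmer.stem(w) for w in ["happy", "happiness", "sunny", "happen"]])
# → ['happi', 'happi', 'sunni', 'happen']
```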
NLTK has different modules for stemming and we will be using the [PorterStemmer](https://www.nltk.org/api/nltk.stem.html#module-nltk.stem.porter) module which uses the [Porter Stemming Algorithm](https://tartarus.org/martin/PorterStemmer/). Let's see how we can use it in the cell below.
```
print()
print('\033[92m')
print(tweets_clean)
print('\033[94m')
# Instantiate stemming class
stemmer = PorterStemmer()
# Create an empty list to store the stems
tweets_stem = []
for word in tweets_clean:
    stem_word = stemmer.stem(word)  # stemming word
    tweets_stem.append(stem_word)  # append to the list
print('stemmed words:')
print(tweets_stem)
```
That's it! Now we have a set of words we can feed into the next stage of our machine learning project.
## process_tweet()
As shown above, preprocessing consists of multiple steps before you arrive at the final list of words. We will not ask you to replicate all of these steps, however. In the week's assignment, you will use the function `process_tweet(tweet)` available in _utils.py_. We encourage you to open the file; you'll see that its implementation is very similar to the steps above.
To obtain the same result as in the previous code cells, you will only need to call the function `process_tweet()`. Let's do that in the next cell.
```
from utils import process_tweet # Import the process_tweet function
# choose the same tweet
tweet = all_positive_tweets[2277]
print()
print('\033[92m')
print(tweet)
print('\033[94m')
# call the imported function
tweets_stem = process_tweet(tweet)  # Preprocess a given tweet
print('preprocessed tweet:')
print(tweets_stem) # Print the result
```
That's it for this lab! You now know what is going on when you call the preprocessing helper function in this week's assignment. Hopefully, this exercise has also given you some insights on how to tweak this for other types of text datasets.
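As a recap, here is a minimal sketch of what such a `process_tweet` helper might look like, chaining the steps from this lab. The exact implementation in `utils.py` may differ; the optional `stop_words` parameter is added here so the function can also run without downloading the NLTK stopwords corpus:

```python
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

def process_tweet(tweet, stop_words=None):
    """Clean, tokenize, filter, and stem a tweet, returning a list of stems."""
    if stop_words is None:
        stop_words = stopwords.words('english')  # requires nltk.download('stopwords')
    tweet = re.sub(r'^RT[\s]+', '', tweet)              # remove old-style retweet mark
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)  # remove hyperlinks
    tweet = re.sub(r'#', '', tweet)                     # remove the '#' sign only
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)
    stemmer = PorterStemmer()
    return [stemmer.stem(word)
            for word in tokenizer.tokenize(tweet)
            if word not in stop_words and word not in string.punctuation]

print(process_tweet("RT Check this out :) #NLP https://t.co/xyz",
                    stop_words={"this", "out"}))  # → ['check', ':)', 'nlp']
```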
## A Classifier Model: Performance Evaluation
The material for evaluating a classifier presented here applies to all classifiers; we focus on logistic regression.
Read this to understand what logistic regression is: https://www.datacamp.com/community/tutorials/understanding-logistic-regression-python
## Activity: Obtain the confusion matrix, accuracy, precision, and recall for the Pima Diabetes dataset
Steps:
1- Load the dataset: `pd.read_csv('diabetes.csv')`
2- Use these features: `feature_cols = ['Pregnancies', 'Insulin', 'BMI', 'Age']`
3- Split the data into train and test sets: `X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)`
4- Instantiate a logistic regression model
5- Obtain the statistics of `y_test`
6- Obtain the confusion matrix
https://www.ritchieng.com/machine-learning-evaluate-classification-model/
### Basic terminology
True Positives (TP): we correctly predicted that they do have diabetes: 15
True Negatives (TN): we correctly predicted that they don't have diabetes: 118
False Positives (FP): we incorrectly predicted that they do have diabetes (a "Type I error"): 12
False Negatives (FN): we incorrectly predicted that they don't have diabetes (a "Type II error"): 47
<img src="Images/confusion_matrix.png" width="500" height="500">
## What is accuracy, recall and precision?
Accuracy: overall, how often is the classifier correct? -> $accuracy = \frac {TP + TN}{TP+TN+FP+FN}$
Classification error: overall, how often is the classifier incorrect? -> $error = 1- accuracy = \frac {FP + FN}{TP + TN + FP + FN}$
Recall: when the actual value is positive, how often is the prediction correct? -> $recall = \frac {TP}{TP + FN}$
Precision: When a positive value is predicted, how often is the prediction correct? -> $precision = \frac {TP}{TP + FP}$
Specificity: When the actual value is negative, how often is the prediction correct? -> $Specificity = \frac {TN}{TN + FP}$
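Plugging the counts above (TP=15, TN=118, FP=12, FN=47) into these formulas:

```python
TP, TN, FP, FN = 15, 118, 12, 47
total = TP + TN + FP + FN          # 192 test samples

accuracy    = (TP + TN) / total    # (15 + 118) / 192
error       = (FP + FN) / total    # equivalently 1 - accuracy
recall      = TP / (TP + FN)       # 15 / 62
precision   = TP / (TP + FP)       # 15 / 27
specificity = TN / (TN + FP)       # 118 / 130

print(round(accuracy, 3), round(error, 3), round(recall, 3),
      round(precision, 3), round(specificity, 3))
# → 0.693 0.307 0.242 0.556 0.908
```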
```
import pandas as pd
from sklearn.model_selection import train_test_split
pima = pd.read_csv('diabetes.csv')
print(pima.columns)
print(pima.head())
feature_cols = ['Pregnancies', 'Insulin', 'BMI', 'Age']
X = pima[feature_cols]
# print(X)
# y is the response vector: the 'Outcome' label column
y = pima['Outcome']
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
y_test.value_counts()
```
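The remaining steps fit the model and compute the confusion matrix. Since the matrix is just four counts, it helps to see the counting logic itself; below is a tiny pure-Python version on toy labels (in practice, after fitting `LogisticRegression` on `X_train, y_train`, you would call `sklearn.metrics.confusion_matrix(y_test, y_pred)`):

```python
def confusion_counts(y_true, y_pred):
    """Return (TN, FP, FN, TP) for binary labels, matching sklearn's confusion_matrix(...).ravel() order."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tn, fp, fn, tp

# Toy example: 1 = has diabetes, 0 = does not
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
print(confusion_counts(y_true, y_pred))  # → (3, 1, 2, 2)
```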
## The difference between `.predict()` and `.predict_proba()` for a classifier
Apply these two methods to the Pima Indian Diabetes dataset.
https://www.ritchieng.com/machine-learning-evaluate-classification-model/
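Conceptually, for binary logistic regression, `.predict_proba(X)` returns the class probabilities $[1-\sigma(z), \sigma(z)]$ where $z$ is the linear score, while `.predict(X)` simply thresholds the positive-class probability at 0.5. A minimal pure-Python sketch of that relationship (the coefficient values below are made up for illustration):

```python
import math

def predict_proba(x, w, b):
    """Probability of each class under a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1 / (1 + math.exp(-z))       # sigmoid of the linear score
    return [1 - p, p]                # [P(class 0), P(class 1)]

def predict(x, w, b):
    """Hard label: threshold the positive-class probability at 0.5."""
    return 1 if predict_proba(x, w, b)[1] >= 0.5 else 0

w, b = [0.8, -0.5], 0.1              # hypothetical fitted coefficients
print(predict_proba([2.0, 1.0], w, b))  # two probabilities summing to 1
print(predict([2.0, 1.0], w, b))        # → 1
```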
```
import pandas as pd
import requests
import urllib3
from lxml import html

# Fetch a page and return an lxml HTML tree
def parser(link):
    encabezados = {
        'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36'
    }
    urllib3.disable_warnings()
    resp = requests.get(link, headers=encabezados, verify=False)
    return html.fromstring(resp.text)

# FAPESC listing pages: open, closed, and in-progress calls
estados = ['https://www.fapesc.sc.gov.br/category/chamadas-abertas/',
           'https://www.fapesc.sc.gov.br/category/chamadas-encerradas/',
           'https://www.fapesc.sc.gov.br/category/chamadas-em-andamento/']

# Collect every paginated listing URL for each category
links_pages_estados = []
for estado in estados:
    parser_estado = parser(estado)
    ult_page = parser_estado.xpath('//div[@class="pagination"]//@href')[-1].split('page/')[1].split('/')[0]
    base = estado
    for i in range(0, int(ult_page)):
        link_estado = base + 'page/' + str(i+1) + '/'
        links_pages_estados.append(link_estado)
links_pages_estados

# Scrape title, year, link, and status for every call
fapesc = pd.DataFrame()
for link in links_pages_estados:
    parser_pag_proyecto = parser(link)
    link_proyectos = parser_pag_proyecto.xpath('//div[@class="content"]//h2/a/@href')
    titulo_proyectos = parser_pag_proyecto.xpath('//div[@class="content"]//h2/a/text()')
    ano_bruto = parser_pag_proyecto.xpath('//p[@class="post-meta"]//span/text()')
    ano_proyectos = []
    for a in ano_bruto:
        ano_proyectos.append(a.split('de ')[2])
    if 'encerradas' in link:
        estado = 'Cerrada'
    if 'abertas' in link:
        estado = 'Abierta'
    if 'andamento' in link:
        estado = 'En evaluación'
    fapesc_pag = pd.DataFrame()
    fapesc_pag['Título'] = titulo_proyectos
    fapesc_pag['Año'] = ano_proyectos
    fapesc_pag['Link Proyecto'] = link_proyectos
    fapesc_pag['Estado'] = estado
    fapesc = pd.concat([fapesc, fapesc_pag])
fapesc = fapesc.reset_index(drop=True)
fapesc

# Visit each call's page to collect its description text and PDF links
descripcion_proyectos = []
pdfs_proyectos = []
for link in fapesc['Link Proyecto']:
    parser_proyecto = parser(link)
    descripcion_bruta = parser_proyecto.xpath('//div[@class="entry"]//p/text()')
    descripcion = ''
    for d in descripcion_bruta:
        descripcion = descripcion + d + ' '
    pdfs_bruta = parser_proyecto.xpath('//div[@class="entry"]//p/a/@href')
    pdfs = ''
    for p in pdfs_bruta:
        if p.endswith('.pdf'):
            pdfs = pdfs + p + ', '
    descripcion_proyectos.append(descripcion.strip(' '))
    pdfs_proyectos.append(pdfs.strip(', '))
fapesc['PDFs'] = pdfs_proyectos
fapesc['Descripción'] = descripcion_proyectos
fapesc.to_excel('brasil_fapesc.xlsx')
```
# Lab 03 - Polynomial Fitting
In the previous lab we discussed linear regression and the OLS estimator for solving the minimization of the RSS. As we
mentioned, regression problems are a very wide family of settings and algorithms which we use to try to estimate the relation between a set of explanatory variables and a **continuous** response (i.e. $\mathcal{Y}\in\mathbb{R}^p$). In the following lab we will discuss one such setting called "Polynomial Fitting".
Sometimes, the data (and the relation between the explanatory variables and response) can be described by some polynomial
of some degree. Here, we only focus on the case where it is a polynomial of a single variable. That is:
$$ p_k\left(x\right)=\sum_{i=0}^{k}\alpha_i x^i\quad\alpha_0,\ldots,\alpha_k\in\mathbb{R} $$
So our hypothesis class is of the form:
$$ \mathcal{H}^k_{poly}=\left\{p_k\,\Big|\, p_k\left(x\right)=\sum_{i=0}^{k}\alpha_i x^i\quad\alpha_0,\ldots,\alpha_k\in\mathbb{R}\right\} $$
Notice that similar to linear regression, each hypothesis in the class is defined by a coefficients vector. Below are two
examples (simulated and real) for datasets where the relation between the explanatory variable and response is polynomial.
```
import sys
sys.path.append("../")
from utils import *
response = lambda x: x**4 - 2*x**3 - .5*x**2 + 1
x = np.linspace(-1.2, 2, 40)[0::2]
y_ = response(x)
df = pd.read_csv("../datasets/Position_Salaries.csv", skiprows=2, index_col=0)
x2, y2 = df.index, df.Salary
print(df)
make_subplots(1, 2, subplot_titles=(r"$\text{Simulated Data: }y=x^4-2x^3-0.5x^2+1$", r"$\text{Positions Salary}$"))\
    .add_traces([go.Scatter(x=x, y=y_, mode="markers", marker=dict(color="black", opacity=.7), showlegend=False),
                 go.Scatter(x=x2, y=y2, mode="markers", marker=dict(color="black", opacity=.7), showlegend=False)],
                rows=[1,1], cols=[1,2])\
    .update_layout(title=r"$\text{(1) Datasets For Polynomial Fitting}$", margin=dict(t=100)).show()
```
As we have discussed in class, solving a polynomial fitting problem can be done by first manipulating the input data,
such that we represent each sample $x_i\in\mathbb{R}$ as a vector $\mathbf{x}_i=\left(x^0,x^1,\ldots,x^k\right)$. Then,
we treat the data as a design matrix $\mathbf{X}\in\mathbb{R}^{m\times (k+1)}$ of a linear regression problem.
For the simulated dataset above, which is of a polynomial of degree 4, the design matrix looks as follows:
```
from sklearn.preprocessing import PolynomialFeatures
m, k, X = 5, 4, x.reshape(-1, 1)
pd.DataFrame(PolynomialFeatures(k).fit_transform(X[:m]),
             columns=[rf"$x^{{0}}$".format(i) for i in range(0, k+1)],
             index=[rf"$x_{{0}}$".format(i) for i in range(1, m+1)])
```
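If you want to build this design matrix without scikit-learn, NumPy's `np.vander` with `increasing=True` produces the columns $x^0,\ldots,x^k$ directly; a small sketch on a toy vector:

```python
import numpy as np

x_small = np.array([1.0, 2.0, 3.0])
# k = 4, so we request k + 1 = 5 columns: x^0 through x^4
X_design = np.vander(x_small, 4 + 1, increasing=True)
print(X_design)
```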
## Fitting A Polynomial Of Different Degrees
Next, let us fit polynomials of different degrees and different noise properties to study how it influences the learned model.
We begin with the noise-less case where we fit for different values of $k$. As we increase $k$ we manage to fit a model
that describes the data in a better way, reflected by the decrease in the MSE.
*Notice that in both the `PolynomialFeatures` and `LinearRegression` functions we can add the bias/intercept parameter. As in this case it makes no difference, we will include the bias in the polynomial features transformation and fit a linear regression **without** an intercept. The bias parameter of the polynomial features (i.e. $x^0$) will in reality be the intercept of the linear regression.*
```
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
ks = [2, 3, 4, 5]
fig = make_subplots(1, 4, subplot_titles=list(ks))
for i, k in enumerate(ks):
    y_hat = make_pipeline(PolynomialFeatures(k), LinearRegression(fit_intercept=False)).fit(X, y_).predict(X)
    fig.add_traces([go.Scatter(x=x, y=y_, mode="markers", name="Real Points", marker=dict(color="black", opacity=.7), showlegend=False),
                    go.Scatter(x=x, y=y_hat, mode="markers", name="Predicted Points", marker=dict(color="blue", opacity=.7), showlegend=False)], rows=1, cols=i+1)
    fig["layout"]["annotations"][i]["text"] = rf"$k={{0}}, MSE={{1}}$".format(k, round(np.mean((y_-y_hat)**2), 2))
fig.update_layout(title=r"$\text{(2) Simulated Data - Fitting Polynomials of Different Degrees}$",
margin=dict(t=60),
yaxis_title=r"$\widehat{y}$",
height=300).show()
```
Once we find the right $k$ (which in our case is 4) we manage to fit a perfect model; as we increase $k$ beyond that, the additional coefficients will be zero.
```
coefs = {}
for k in ks:
    fit = make_pipeline(PolynomialFeatures(k), LinearRegression(fit_intercept=False)).fit(X, y_)
    coefs[rf"$k={{{k}}}$"] = [round(c, 3) for c in fit.steps[1][1].coef_]
pd.DataFrame.from_dict(coefs, orient='index', columns=[rf"$w_{{{i}}}$" for i in range(max(ks)+1)])
```
## Fitting Polynomial Of Different Degrees - With Sample Noise
Still fitting for different values of $k$, let us add some standard Gaussian noise (i.e. $\mathcal{N}\left(0,1\right)$).
This time we observe two things:
- Even for the correct $k=4$ model we are not able to achieve zero MSE.
- As we increase $4<k\rightarrow 7$ we manage to decrease the error more and more.
```
y = y_ + np.random.normal(size=len(y_))
ks = range(2, 8)
fig = make_subplots(2, 3, subplot_titles=list(ks))
for i, k in enumerate(ks):
    r, c = i//3+1, i%3+1
    y_hat = make_pipeline(PolynomialFeatures(k), LinearRegression(fit_intercept=False)).fit(X, y).predict(X)
    fig.add_traces([go.Scatter(x=x, y=y_, mode="markers", name="Real Points", marker=dict(color="black", opacity=.7), showlegend=False),
                    go.Scatter(x=x, y=y, mode="markers", name="Observed Points", marker=dict(color="red", opacity=.7), showlegend=False),
                    go.Scatter(x=x, y=y_hat, mode="markers", name="Predicted Points", marker=dict(color="blue", opacity=.7), showlegend=False)], rows=r, cols=c)
    fig["layout"]["annotations"][i]["text"] = rf"$k={{0}}, MSE={{1}}$".format(k, round(np.mean((y-y_hat)**2), 2))
fig.update_layout(title=r"$\text{(4) Simulated Data With Noise - Fitting Polynomials of Different Degrees}$", margin=dict(t=80)).show()
```
How is it that we are able to fit "better" models for $k$s larger than the true one? As we increase $k$ we enable the model
more "degrees of freedom" to try and adapt itself to the observed data. The higher $k$ the more the learner will "go after
the noise" and miss the real signal of the data. In other words, what we have just observed is what is known as **overfitting**.
Later in the course we will learn methods for detection and avoidance of overfitting.
## Fitting Polynomial Over Different Sample Noise Levels
Next, let us set $k=4$ (the true value) and study the outputted models when training over different noise levels. Though
we will only be changing the scale of the noise (i.e. the variance, $\sigma^2$), changing other properties such as its
distribution is interesting too. As we would expect, as we increase the scale of the noise our error increases. We can
observe this also in a visual manner, where the fitted polynomial (in blue) less and less resembles the actual model (in black).
```
scales = range(6)
fig = make_subplots(2, 3, subplot_titles=list(map(str, scales)))
for i, s in enumerate(scales):
    r, c = i//3+1, i%3+1
    y = y_ + np.random.normal(scale=s, size=len(y_))
    y_hat = make_pipeline(PolynomialFeatures(4), LinearRegression(fit_intercept=False)).fit(X, y).predict(X)
    fig.add_traces([go.Scatter(x=x, y=y_, mode="markers", name="Real Points", marker=dict(color="black", opacity=.7), showlegend=False),
                    go.Scatter(x=x, y=y, mode="markers", name="Observed Points", marker=dict(color="red", opacity=.7), showlegend=False),
                    go.Scatter(x=x, y=y_hat, mode="markers", name="Predicted Points", marker=dict(color="blue", opacity=.7), showlegend=False)], rows=r, cols=c)
    fig["layout"]["annotations"][i]["text"] = rf"$\sigma^2={{0}}, MSE={{1}}$".format(s, round(np.mean((y-y_hat)**2), 2))
fig.update_layout(title=r"$\text{(5) Simulated Data - Different Noise Scales}$", margin=dict(t=80)).show()
```
## The Influence Of $k$ And $\sigma^2$ On Error
Lastly, let us check how the error is influenced by both $k$ and $\sigma^2$. For each value of $k$ and $\sigma^2$ we will
add noise drawn from $\mathcal{N}\left(0,\sigma^2\right)$ and then, based on the noisy data, let the learner select a
hypothesis from $\mathcal{H}_{poly}^k$. We repeat the process for each pair $\left(k,\sigma^2\right)$ 10 times and report
the mean MSE value. Results are shown in the heatmap below:
```
from sklearn.model_selection import ParameterGrid
df = []
for setting in ParameterGrid(dict(k=range(10), s=np.linspace(0, 5, 10), repetition=range(10))):
    y = y_ + np.random.normal(scale=setting["s"], size=len(y_))
    y_hat = make_pipeline(PolynomialFeatures(setting["k"]), LinearRegression(fit_intercept=False)).fit(X, y).predict(X)
    df.append([setting["k"], setting["s"], np.mean((y-y_hat)**2)])
df = pd.DataFrame.from_records(df, columns=["k", "sigma","mse"]).groupby(["k","sigma"]).mean().reset_index()
go.Figure(go.Heatmap(x=df.k, y=df.sigma, z=df.mse, colorscale="amp"),
          layout=go.Layout(title=r"$\text{(6) Average Train } MSE \text{ As Function of } \left(k,\sigma^2\right)$",
                           xaxis_title=r"$k$ - Fitted Polynomial Degree",
                           yaxis_title=r"$\sigma^2$ - Noise Levels")).show()
```
# Time To Think...
In the above figure, we observe the following trends:
- As already seen before, for the noise-free data, once we reach the correct $k$ we achieve zero MSE.
- Across all values of $k$, as we increase $\sigma^2$ we get higher MSE values.
- For all noise levels, we manage to reduce MSE values by increasing $k$.
So, by choosing a **richer** hypothesis class (i.e. one that is larger and can express more functions - polynomials of higher
degree) we are able to choose a hypothesis that fits the **observed** data **better**, regardless of how noisy the data is.
Try and think how the above heatmap would look if instead of calculating the MSE over the training samples (i.e train error)
we would have calculated it over a **new** set of test samples drawn from the same distribution.
Use the below code to create a test set. Change the code generating figure 6 such that the reported error is a test error. Do not forget to add the noise (that depends on $\sigma^2$) to the test data. What has changed between what we observe for the train error to the test error? What happens for high/low values of $\sigma^2$? What happens for high/low values of $k$?
```
# Generate the x values of the test set
testX = np.linspace(-1.2, 2, 40)[1::2].reshape(-1, 1)

from sklearn.model_selection import ParameterGrid
df = []
for setting in ParameterGrid(dict(k=range(10), s=np.linspace(0, 5, 10), repetition=range(10))):
    y = y_ + np.random.normal(scale=setting["s"], size=len(y_))
    # Generate the noisy y values of the test set; the noise level must match the current setting
    testY = response(testX).flatten() + np.random.normal(scale=setting["s"], size=testX.shape[0])
    y_hat = make_pipeline(PolynomialFeatures(setting["k"]), LinearRegression(fit_intercept=False)).fit(X, y).predict(testX)
    df.append([setting["k"], setting["s"], np.mean((testY-y_hat)**2)])
df = pd.DataFrame.from_records(df, columns=["k", "sigma","mse"]).groupby(["k","sigma"]).mean().reset_index()
go.Figure(go.Heatmap(x=df.k, y=df.sigma, z=df.mse, colorscale="amp"),
          layout=go.Layout(title=r"$\text{(7) Average Test } MSE \text{ As Function of } \left(k,\sigma^2\right)$",
                           xaxis_title=r"$k$ - Fitted Polynomial Degree",
                           yaxis_title=r"$\sigma^2$ - Noise Levels")).show()
```
This is a whirlwind tour of most of the fundamental concepts underlying AllenNLP. There is nothing for you to do here; if you like, you can just Ctrl-Enter all the way to the bottom, but feel free to poke at the results as you go if you're curious about them.
You won't have to worry much about many of these concepts, but it's good to understand what's going on under the hood.
# Tokenization
The default tokenizer in AllenNLP is the spacy tokenizer. You can specify others if you need them. (For instance, if you're using BERT, you want to use the same tokenizer that the BERT model expects.)
```
from allennlp.data.tokenizers import WordTokenizer
text = "I don't hate notebooks, I just don't like notebooks!"
tokenizer = WordTokenizer()
tokens = tokenizer.tokenize(text)
tokens
```
# Token Indexers
A `TokenIndexer` turns tokens into indices or lists of indices. We won't be able to see how they operate until slightly later.
```
from allennlp.data.token_indexers import SingleIdTokenIndexer, TokenCharactersIndexer
token_indexer = SingleIdTokenIndexer() # maps tokens to word_ids
```
# Fields
Your training examples will be represented as `Instances`, each consisting of typed `Field`s.
```
from allennlp.data.fields import TextField, LabelField
```
A `TextField` is for storing text, and also needs one or more `TokenIndexer`s that will be used to convert the text into indices.
```
text_field = TextField(tokens, {"tokens": token_indexer})
text_field._indexed_tokens # not yet
```
A `LabelField` is for storing a discrete label.
```
label_field = LabelField("technology")
```
# Instances
Each `Instance` is just a collection of named `Field`s.
```
from allennlp.data.instance import Instance
instance = Instance({"text": text_field, "category": label_field})
```
# Vocabulary
Based on our instances we construct a `Vocabulary` which contains the various mappings token <-> index, label <-> index, and so on.
```
from allennlp.data.vocabulary import Vocabulary
vocab = Vocabulary.from_instances([instance])
```
Here you can see that our vocabulary has two mappings, a `tokens` mapping (for the tokens) and a `labels` mapping (for the labels).
```
vocab._token_to_index
text_field._indexed_tokens
label_field._label_id
```
Although we have constructed the mappings, we haven't yet used them to index the fields in our instance. We have to do that manually (although when you use the allennlp trainer all of this will be taken care of.)
```
instance.index_fields(vocab)
text_field._indexed_tokens
label_field._label_id
```
Once the `Instance` has been indexed, it then knows how to convert itself to a tensor dict.
```
instance.as_tensor_dict()
```
And it knows how long other instances would need to be padded to if we do batching. (More on this below!)
```
instance.get_padding_lengths()
```
# Batching and Padding
When you're doing NLP, you have sequences with different lengths, which means that padding and masking are very important. They're tricky to get right! Luckily, AllenNLP handles most of the details for you.
```
text1 = "I just don't like notebooks."
tokens1 = tokenizer.tokenize(text1)
text_field1 = TextField(tokens1, {"tokens": token_indexer})
label_field1 = LabelField("Joel")
instance1 = Instance({"text": text_field1, "speaker": label_field1})
text2 = "I do like notebooks."
tokens2 = tokenizer.tokenize(text2)
text_field2 = TextField(tokens2, {"tokens": token_indexer})
label_field2 = LabelField("Tim")
instance2 = Instance({"text": text_field2, "speaker": label_field2})
from allennlp.data.dataset import Batch
vocab = Vocabulary.from_instances([instance1, instance2])
batch = Batch([instance1, instance2])
batch.index_instances(vocab)
```
Notice that
1. the batching is already taken care of for you, and
2. the shorter text field is appropriately padded with 0's (the `@@PADDING@@` id)
```
batch.as_tensor_dict()
```
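Independently of AllenNLP, the padding and masking that `Batch` performs can be sketched in plain Python. This is a toy illustration, with `0` playing the role of the `@@PADDING@@` id:

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length id sequences to a common length and build a mask."""
    max_len = max(len(seq) for seq in sequences)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
    # mask is 1 for real tokens, 0 for padding positions
    mask = [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in sequences]
    return padded, mask

padded, mask = pad_batch([[2, 3, 4, 5], [2, 6]])
print(padded)  # [[2, 3, 4, 5], [2, 6, 0, 0]]
print(mask)    # [[1, 1, 1, 1], [1, 1, 0, 0]]
```

Downstream modules use the mask to ignore the padded positions, so padding never affects the model's output.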
# Using Multiple Indexers
In some circumstances you might want to use multiple token indexers. For instance, you might want to index a token using its token_id, but also as a sequence of character_ids. This is as simple as adding extra token indexers to our text fields.
```
from allennlp.data.token_indexers import TokenCharactersIndexer
token_characters_indexer = TokenCharactersIndexer(min_padding_length=3)
text_field = TextField(tokens, {"tokens": token_indexer, "token_characters": token_characters_indexer})
label_field = LabelField("technology")
instance = Instance({"text": text_field, "label": label_field})
vocab = Vocabulary.from_instances([instance])
```
You can see that we now have an additional vocabulary namespace for the character ids:
```
vocab._token_to_index
instance.index_fields(vocab)
```
And now when we call `instance.as_tensor_dict` we'll get an additional (padded) tensor with the character_ids.
```
instance.as_tensor_dict()
```
# TokenEmbedders
Once we have our text represented as ids, we use a `TokenEmbedder` to create tensor embeddings.
```
text1 = "I just don't like notebooks."
tokens1 = tokenizer.tokenize(text1)
text_field1 = TextField(tokens1, {"tokens": token_indexer})
label_field1 = LabelField("Joel")
instance1 = Instance({"text": text_field1, "speaker": label_field1})
text2 = "I do like notebooks."
tokens2 = tokenizer.tokenize(text2)
text_field2 = TextField(tokens2, {"tokens": token_indexer})
label_field2 = LabelField("Tim")
instance2 = Instance({"text": text_field2, "speaker": label_field2})
vocab = Vocabulary.from_instances([instance1, instance2])
batch = Batch([instance1, instance2])
batch.index_instances(vocab)
tensor_dict = batch.as_tensor_dict()
tensor_dict
from allennlp.modules.token_embedders import Embedding
```
Here we define an embedding layer that has a number of embeddings equal to the corresponding vocabulary size, and that consists of 5-dimensional vectors. In this case the embeddings will just be randomly initialized.
```
embedding = Embedding(num_embeddings=vocab.get_vocab_size("tokens"), embedding_dim=5)
```
Accordingly, we can apply those embeddings to the indexed tokens.
```
embedding(tensor_dict['text']['tokens'])
```
# TextFieldEmbedders
A text field may have multiple indexed representations of its tokens, in which case it needs multiple corresponding `TokenEmbedder`s. Because of this we typically wrap the token embedders in a `TextFieldEmbedder`, which runs the appropriate token embedder for each representation and then concatenates the results.
```
from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
text_field_embedder = BasicTextFieldEmbedder({"tokens": embedding})
```
Notice that we now apply it to the full tensor dict for the text field.
```
text_field_embedder(tensor_dict['text'])
```
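The concatenation that `BasicTextFieldEmbedder` performs can be pictured with plain lists. The numbers below are made up, with a hypothetical 2-dimensional word-level vector and 3-dimensional character-level vector for a single token:

```python
# one token, two indexed representations
word_vec = [0.1, 0.2]          # from a "tokens" embedder (hypothetical values)
char_vec = [0.3, 0.4, 0.5]     # from a "token_characters" embedder (hypothetical values)

# the TextFieldEmbedder-style result is their concatenation
combined = word_vec + char_vec
print(combined)        # [0.1, 0.2, 0.3, 0.4, 0.5]
print(len(combined))   # output dim = 2 + 3 = 5
```

This is why `get_output_dim()` on the wrapper reports the sum of the individual embedders' output dimensions.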
# Seq2VecEncoders
At this point we've ended up with a sequence of tensors. Frequently we'll want to collapse that sequence into a single contextualized tensor representation, which we do with a `Seq2VecEncoder`. (If we wanted to produce a full sequence of contextualized representations we'd instead use a `Seq2SeqEncoder`.)
In particular, here we'll use a `BagOfEmbeddingsEncoder`, which just sums up the vectors.
```
from allennlp.modules.seq2vec_encoders import BagOfEmbeddingsEncoder
encoder = BagOfEmbeddingsEncoder(embedding_dim=text_field_embedder.get_output_dim())
```
We can apply this to the output of our text field embedder to collapse each sequence down to a single element.
```
encoder(text_field_embedder(tensor_dict['text']))
```
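What `BagOfEmbeddingsEncoder` does can be written out by hand: sum the token vectors elementwise, skipping padded positions. A toy sketch in plain Python:

```python
def bag_of_embeddings(vectors, mask):
    """Sum token vectors elementwise, ignoring positions where mask == 0."""
    dim = len(vectors[0])
    result = [0.0] * dim
    for vec, m in zip(vectors, mask):
        if m:
            result = [r + v for r, v in zip(result, vec)]
    return result

vecs = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]  # last position is padding
print(bag_of_embeddings(vecs, mask=[1, 1, 0]))  # [4.0, 6.0]
```

The real module does the same thing on batched tensors, which is why its output dimension equals its input embedding dimension.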
# Using PyTorch directly
AllenNLP modules are just PyTorch modules, and we can mix and match them with native PyTorch features. Here we create a `torch.nn.Linear` module and apply it to the output of the `Seq2VecEncoder`.
```
import torch
linear = torch.nn.Linear(in_features=text_field_embedder.get_output_dim(), out_features=3)
linear(encoder(text_field_embedder(tensor_dict['text'])))
```
# We typically encapsulate most of these steps into an allennlp Model
```
from allennlp.models import Model
from allennlp.modules.text_field_embedders import TextFieldEmbedder
from allennlp.modules.seq2vec_encoders import Seq2VecEncoder
from typing import Dict
```
Here is a model that accepts the output of `batch.as_tensor_dict` and applies the text field embedder, the seq2vec encoder, and the linear layer.
```
class MyModel(Model):
    def __init__(self,
                 vocab: Vocabulary,
                 embedder: TextFieldEmbedder,
                 encoder: Seq2VecEncoder,
                 output_dim: int) -> None:
        super().__init__(vocab)
        self.embedder = embedder
        self.encoder = encoder
        self.linear = torch.nn.Linear(in_features=embedder.get_output_dim(), out_features=output_dim)

    def forward(self, text: Dict[str, torch.Tensor], speaker: torch.Tensor) -> Dict[str, torch.Tensor]:
        """
        Notice how the argument names correspond to the field names in our instance.
        """
        embedded = self.embedder(text)
        encoded = self.encoder(embedded)
        output = self.linear(encoded)
        return {"output": output}

model = MyModel(vocab, text_field_embedder, encoder, 3)
model(**tensor_dict)
```
| github_jupyter |
# Discretization
---
In this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces.
### 1. Import the Necessary Packages
```
import sys
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
```
### 2. Specify the Environment, and Explore the State and Action Spaces
We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.
```
# Create an environment and set random seed
env = gym.make('MountainCar-v0')
env.seed(505);
```
Run the next code cell to watch a random agent.
```
state = env.reset()
score = 0
for t in range(200):
    action = env.action_space.sample()
    env.render()
    state, reward, done, _ = env.step(action)
    score += reward
    if done:
        break
print('Final score:', score)
env.close()
```
In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.
```
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Generate some samples from the state space
print("State space samples:")
print(np.array([env.observation_space.sample() for i in range(10)]))
# Explore the action space
print("Action space:", env.action_space)
# Generate some samples from the action space
print("Action space samples:")
print(np.array([env.action_space.sample() for i in range(10)]))
```
### 3. Discretize the State Space with a Uniform Grid
We will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, of which there will be one fewer than the number of bins.
For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:
```
[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),
array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]
```
Note that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.
```
def create_uniform_grid(low, high, bins=(10, 10)):
    """Define a uniformly-spaced grid that can be used to discretize a space.

    Parameters
    ----------
    low : array_like
        Lower bounds for each dimension of the continuous space.
    high : array_like
        Upper bounds for each dimension of the continuous space.
    bins : tuple
        Number of bins along each corresponding dimension.

    Returns
    -------
    grid : list of array_like
        A list of arrays containing split points for each dimension.
    """
    # TODO: Implement this
    grid = [np.linspace(low[dim], high[dim], bins[dim] + 1)[1:-1] for dim in range(len(bins))]
    print("Uniform grid: [<low>, <high>] / <bins> => <splits>")
    for l, h, b, splits in zip(low, high, bins, grid):
        print("    [{}, {}] / {} => {}".format(l, h, b, splits))
    return grid

low = [-1.0, -5.0]
high = [1.0, 5.0]
create_uniform_grid(low, high)  # [test]
```
Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.
Assume the grid is a list of NumPy arrays containing the following split points:
```
[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),
array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]
```
Here are some potential samples and their corresponding discretized representations:
```
[-1.0 , -5.0] => [0, 0]
[-0.81, -4.1] => [0, 0]
[-0.8 , -4.0] => [1, 1]
[-0.5 , 0.0] => [2, 5]
[ 0.2 , -1.9] => [6, 3]
[ 0.8 , 4.0] => [9, 9]
[ 0.81, 4.1] => [9, 9]
[ 1.0 , 5.0] => [9, 9]
```
**Note**: There may be off-by-one differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.
```
def discretize(sample, grid):
    """Discretize a sample as per given grid.

    Parameters
    ----------
    sample : array_like
        A single sample from the (original) continuous space.
    grid : list of array_like
        A list of arrays containing split points for each dimension.

    Returns
    -------
    discretized_sample : array_like
        A sequence of integers with the same number of dimensions as sample.
    """
    # TODO: Implement this
    return [int(np.digitize(s, g)) for s, g in zip(sample, grid)]  # apply along each dimension

# Test with a simple grid and some samples
grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
samples = np.array(
    [[-1.0 , -5.0],
     [-0.81, -4.1],
     [-0.8 , -4.0],
     [-0.5 ,  0.0],
     [ 0.2 , -1.9],
     [ 0.8 ,  4.0],
     [ 0.81,  4.1],
     [ 1.0 ,  5.0]])
discretized_samples = np.array([discretize(sample, grid) for sample in samples])
print("\nSamples:", repr(samples), sep="\n")
print("\nDiscretized samples:", repr(discretized_samples), sep="\n")
```
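The same index mapping can be reproduced with the standard library's `bisect` module: `bisect_right` counts how many split points lie at or below a value, which matches `np.digitize` with its default settings on increasing bins. A sketch using the same grid as above:

```python
import bisect

grid = [[-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8],
        [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]]

def discretize_stdlib(sample, grid):
    # bisect_right counts the split points <= value, matching np.digitize here
    return [bisect.bisect_right(splits, value) for value, splits in zip(sample, grid)]

print(discretize_stdlib([-1.0, -5.0], grid))  # [0, 0]  (below all split points)
print(discretize_stdlib([-0.5, 0.0], grid))   # [2, 5]
print(discretize_stdlib([1.0, 5.0], grid))    # [9, 9]  (above all split points)
```

Values outside the bounds simply saturate at the first and last bin indices, exactly as described in the note above.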
### 4. Visualization
It might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.
```
import matplotlib.collections as mc
def visualize_samples(samples, discretized_samples, grid, low=None, high=None):
    """Visualize original and discretized samples on a given 2-dimensional grid."""
    fig, ax = plt.subplots(figsize=(10, 10))

    # Show grid
    ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))
    ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))
    ax.grid(True)

    # If bounds (low, high) are specified, use them to set axis limits
    if low is not None and high is not None:
        ax.set_xlim(low[0], high[0])
        ax.set_ylim(low[1], high[1])
    else:
        # Otherwise use first, last grid locations as low, high (for further mapping discretized samples)
        low = [splits[0] for splits in grid]
        high = [splits[-1] for splits in grid]

    # Map each discretized sample (which is really an index) to the center of the corresponding grid cell
    grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T))  # add low and high ends
    grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2  # compute center of each grid cell
    locs = np.stack([grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))]).T  # map discretized samples

    ax.plot(samples[:, 0], samples[:, 1], 'o')  # plot original samples
    ax.plot(locs[:, 0], locs[:, 1], 's')  # plot discretized samples in mapped locations
    ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange'))  # connect each original-discretized pair
    ax.legend(['original', 'discretized'])

visualize_samples(samples, discretized_samples, grid, low, high)
```
Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.
```
# Create a grid to discretize the state space
state_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))
state_grid
# Obtain some samples from the space, discretize them, and then visualize them
state_samples = np.array([env.observation_space.sample() for i in range(10)])
discretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])
visualize_samples(state_samples, discretized_state_samples, state_grid,
env.observation_space.low, env.observation_space.high)
plt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space
```
You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works!
### 5. Q-Learning
Provided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.
```
class QLearningAgent:
    """Q-Learning agent that can act on a continuous state space by discretizing it."""

    def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,
                 epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):
        """Initialize variables, create grid for discretization."""
        # Environment info
        self.env = env
        self.state_grid = state_grid
        self.state_size = tuple(len(splits) + 1 for splits in self.state_grid)  # n-dimensional state space
        self.action_size = self.env.action_space.n  # 1-dimensional discrete action space
        self.seed = np.random.seed(seed)
        print("Environment:", self.env)
        print("State space size:", self.state_size)
        print("Action space size:", self.action_size)

        # Learning parameters
        self.alpha = alpha  # learning rate
        self.gamma = gamma  # discount factor
        self.epsilon = self.initial_epsilon = epsilon  # initial exploration rate
        self.epsilon_decay_rate = epsilon_decay_rate  # how quickly should we decrease epsilon
        self.min_epsilon = min_epsilon

        # Create Q-table
        self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))
        print("Q table size:", self.q_table.shape)

    def preprocess_state(self, state):
        """Map a continuous state to its discretized representation."""
        # TODO: Implement this
        return tuple(discretize(state, self.state_grid))

    def reset_episode(self, state):
        """Reset variables for a new episode."""
        # Gradually decrease exploration rate
        self.epsilon *= self.epsilon_decay_rate
        self.epsilon = max(self.epsilon, self.min_epsilon)

        # Decide initial action
        self.last_state = self.preprocess_state(state)
        self.last_action = np.argmax(self.q_table[self.last_state])
        return self.last_action

    def reset_exploration(self, epsilon=None):
        """Reset exploration rate used when training."""
        self.epsilon = epsilon if epsilon is not None else self.initial_epsilon

    def act(self, state, reward=None, done=None, mode='train'):
        """Pick next action and update internal Q table (when mode != 'test')."""
        state = self.preprocess_state(state)
        if mode == 'test':
            # Test mode: Simply produce an action
            action = np.argmax(self.q_table[state])
        else:
            # Train mode (default): Update Q table, pick next action
            # Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
            self.q_table[self.last_state + (self.last_action,)] += self.alpha * \
                (reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])

            # Exploration vs. exploitation
            do_exploration = np.random.uniform(0, 1) < self.epsilon
            if do_exploration:
                # Pick a random action
                action = np.random.randint(0, self.action_size)
            else:
                # Pick the best action from Q table
                action = np.argmax(self.q_table[state])

        # Roll over current state, action for next step
        self.last_state = state
        self.last_action = action
        return action

q_agent = QLearningAgent(env, state_grid)
```
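The tabular update performed inside `act` can be checked in isolation. With hypothetical numbers (α = 0.5 and γ = 0.9 here are illustrative, not the agent's defaults), one step of the rule Q(s,a) ← Q(s,a) + α·(r + γ·max Q(s',·) − Q(s,a)) works out as:

```python
alpha, gamma = 0.5, 0.9          # hypothetical learning rate and discount factor
q_sa = 1.0                       # current estimate Q(s, a)
reward = -1.0                    # reward observed after taking action a in state s
q_next = [0.0, 2.0, 1.0]         # Q-values of the next state s'

td_target = reward + gamma * max(q_next)   # -1.0 + 0.9 * 2.0 = 0.8
q_sa += alpha * (td_target - q_sa)         # 1.0 + 0.5 * (0.8 - 1.0)
print(q_sa)  # 0.9
```

The estimate moves halfway (α = 0.5) from the old value toward the temporal-difference target, which is exactly what the in-place `+=` in the agent's `act` method does on the Q-table entry.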
Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.
```
def run(agent, env, num_episodes=20000, mode='train'):
    """Run agent in given reinforcement learning environment and return scores."""
    scores = []
    max_avg_score = -np.inf
    for i_episode in range(1, num_episodes + 1):
        # Initialize episode
        state = env.reset()
        action = agent.reset_episode(state)
        total_reward = 0
        done = False

        # Roll out steps until done
        while not done:
            state, reward, done, info = env.step(action)
            total_reward += reward
            action = agent.act(state, reward, done, mode)

        # Save final score
        scores.append(total_reward)

        # Print episode stats
        if mode == 'train':
            if len(scores) > 100:
                avg_score = np.mean(scores[-100:])
                if avg_score > max_avg_score:
                    max_avg_score = avg_score
            if i_episode % 100 == 0:
                print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
                sys.stdout.flush()
    return scores
scores = run(q_agent, env)
```
The best way to tell whether your agent is learning the task is to plot the scores. They should generally increase as the agent goes through more episodes.
```
# Plot scores obtained per episode
plt.plot(scores); plt.title("Scores");
```
If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.
```
def plot_scores(scores, rolling_window=100):
    """Plot scores and optional rolling mean using specified window."""
    plt.plot(scores); plt.title("Scores")
    rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
    plt.plot(rolling_mean)
    return rolling_mean

rolling_mean = plot_scores(scores)
```
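The rolling mean itself needs nothing beyond the standard library. A deque-based sketch, equivalent in spirit to `pd.Series(scores).rolling(window).mean()` (emitting `None` where pandas would emit `NaN` for incomplete windows):

```python
from collections import deque

def rolling_mean(values, window):
    """Mean of the trailing `window` values; None until a full window is available."""
    buf = deque(maxlen=window)
    means = []
    for v in values:
        buf.append(v)
        means.append(sum(buf) / window if len(buf) == window else None)
    return means

print(rolling_mean([1, 2, 3, 4, 5], window=2))  # [None, 1.5, 2.5, 3.5, 4.5]
```

The `deque(maxlen=window)` automatically discards the oldest score as each new one arrives, so the buffer always holds at most one window's worth of values.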
You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.
```
# Run in test mode and analyze scores obtained
test_scores = run(q_agent, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
```
It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.
```
def plot_q_table(q_table):
    """Visualize max Q-value for each state and corresponding action."""
    q_image = np.max(q_table, axis=2)  # max Q-value for each state
    q_actions = np.argmax(q_table, axis=2)  # best action for each state

    fig, ax = plt.subplots(figsize=(10, 10))
    cax = ax.imshow(q_image, cmap='jet')
    cbar = fig.colorbar(cax)
    for x in range(q_image.shape[0]):
        for y in range(q_image.shape[1]):
            ax.text(x, y, q_actions[x, y], color='white',
                    horizontalalignment='center', verticalalignment='center')
    ax.grid(False)
    ax.set_title("Q-table, size: {}".format(q_table.shape))
    ax.set_xlabel('position')
    ax.set_ylabel('velocity')

plot_q_table(q_agent.q_table)
```
### 6. Modify the Grid
Now it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).
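The cost of a finer grid is easy to quantify: the Q-table has `bins_x * bins_y * n_actions` entries, so doubling the bins per dimension quadruples the number of parameters to learn (MountainCar-v0 has 3 actions):

```python
def q_table_size(bins, n_actions=3):
    """Number of entries in a tabular Q-function over a discretized state space."""
    size = n_actions
    for b in bins:
        size *= b
    return size

print(q_table_size((10, 10)))  # 300 entries
print(q_table_size((20, 20)))  # 1200 entries -- 4x the parameters, so slower to fill in
```

This is the trade-off to keep in mind while experimenting below: finer grids reduce discretization error but need more episodes to populate the larger table.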
```
# TODO: Create a new agent with a different state space grid
state_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(20, 20))
q_agent_new = QLearningAgent(env, state_grid_new)
q_agent_new.scores = [] # initialize a list to store scores for this agent
# Train it over a desired number of episodes and analyze scores
# Note: This cell can be run multiple times, and scores will get accumulated
q_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores
rolling_mean_new = plot_scores(q_agent_new.scores)
# Run in test mode and analyze scores obtained
test_scores = run(q_agent_new, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
# Visualize the learned Q-table
plot_q_table(q_agent_new.q_table)
```
### 7. Watch a Smart Agent
```
state = env.reset()
score = 0
for t in range(200):
    action = q_agent_new.act(state, mode='test')
    env.render()
    state, reward, done, _ = env.step(action)
    score += reward
    if done:
        break
print('Final score:', score)
env.close()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv(r'C:\Users\revan\Desktop\ml\modular2\mlmar7\data\Advertising (1).csv')
df.head()
x=df.drop(['sales'],axis=1)
x
y=df['sales']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=101)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train,y_train)
test_pred=model.predict(X_test)
from sklearn.metrics import mean_absolute_error, mean_squared_error
sns.histplot(data=df,x='sales',bins=20)
mean_absolute_error(y_test, test_pred)
mean_squared_error(y_test, test_pred)
np.sqrt(mean_squared_error(y_test, test_pred))
test_residuals = y_test - test_pred
sns.scatterplot(x=y_test, y= test_residuals)
plt.axhline(y=0, color="red",ls="--")
sns.distplot(test_residuals, bins=25, kde=True)
plt.xlabel("residuals")
from joblib import dump,load
import os
model_dir = "modles"
os.makedirs(model_dir,exist_ok=True)
filepath= os.path.join(model_dir, 'model.joblib')
dump(model, filepath)
load_model = load(r'C:\Users\revan\Desktop\ml\modular2\mlmar7\modles\model.joblib')
load_model.coef_
example=[[151,25,15]]
load_model.predict(example)
# POLYNOMIAL REGRESSION
x1 =df.drop(['sales'], axis=1)
x1.head()
from sklearn.preprocessing import PolynomialFeatures
poly_conv = PolynomialFeatures(degree=2, include_bias=False)
poly_conv.fit(x1)
poly_features = poly_conv.transform(x1)
poly_features.shape
print(np.array(x1.iloc[0]))
print(poly_features[0])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.33, random_state=101)
model1 = LinearRegression()
model1.fit(X_train,y_train)
test_pred_pol = model1.predict(X_test)
test_pred_pol
MAE = mean_absolute_error(y_test, test_pred_pol)
MAE
RMSE = np.sqrt (mean_squared_error(y_test, test_pred_pol))
RMSE
model.coef_
train_rmse_errors = []
test_rmse_errors = []
for d in range(1, 10):
    poly_converter = PolynomialFeatures(degree=d)
    poly_features = poly_converter.fit_transform(x1)
    X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.33, random_state=101)
    model = LinearRegression()
    model.fit(X_train, y_train)
    pred_train = model.predict(X_train)
    pred_test = model.predict(X_test)
    train_rmse_errors.append(np.sqrt(mean_squared_error(y_train, pred_train)))
    test_rmse_errors.append(np.sqrt(mean_squared_error(y_test, pred_test)))
#fig= plt.Figure(figsize=(20,16))
#ax= fig.add_axes((0,0,1,1))
plt.plot(range(1,6), train_rmse_errors[:5], label = 'train_rmse')
plt.plot(range(1,6), test_rmse_errors[:5], label = 'test_rmse')
filepath_poly= os.path.join(model_dir, 'model1.joblib')
dump(model1,filepath_poly)
load_model_poly = load(r'C:\Users\revan\Desktop\ml\modular2\mlmar7\modles\model1.joblib')
load_model_poly.coef_
```
Task: Predicting Sina Weibo Interaction Behaviours
# Import
```
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_table('./data/weibo_train_data.txt', header=None, sep='\t') # training
df2 = pd.read_table('./data/weibo_predict_data.txt', header=None, sep='\t') # predicting
```
# Exploration
### Structure
```
df.columns = ['uid', 'mid', 'time', 'f', 'c', 'l', 'content']
df2.columns = ['uid', 'mid', 'time', 'content']
df.head()
```
### Statistics
```
def describe(df, stats):  # extend describe() with additional statistics
    d = df.describe()
    return d.append(df.reindex(d.columns, axis="columns").agg(stats))
describe(df, ['skew', 'mad', 'kurt']).round(3)
```
### Missing values
```
df[['f', 'c', 'l']].info()
```
### Duplications
Duplicated records:
```
len(df[df.duplicated()])
```
Duplicated users:
```
user_counts = df['uid'][df['uid'].duplicated()].value_counts()
pd.DataFrame(user_counts[:5])
plt.figure(figsize=(12,3), dpi=128)
user_counts[:200].plot()  # top 200
```
### Outliers
```
outlier = plt.figure(figsize=(12, 5), dpi=128)
for i in range(1, 4):
    outlier.add_subplot(1, 3, i)
    df[[['f', 'c', 'l'][i-1]]].boxplot()
```
### Correlation
```
import numpy as np
corr = df[['f', 'c', 'l']].corr()
high_corr = corr[np.abs(corr) > 0.5].fillna(0)
high_corr.round(3)
plt.figure(dpi=128)
ax = sns.heatmap(
high_corr,
vmin=-1, vmax=1, center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True
)
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=45,
horizontalalignment='right'
);
```
## Constants and features
### Constants
Simple constants can be effective predictions for many tweets.
#### Mean
As a general-purpose constant, use the mean.
```
f_mean = int(df.f.mean()) # only integers according to the task description
c_mean = int(df.c.mean())
l_mean = int(df.l.mean())
f_mean, c_mean, l_mean
```
#### Median
Use the median instead, since the distributions are highly skewed.
```
f_median = int(df.f.median()) # only integers according to the task description
c_median = int(df.c.median())
l_median = int(df.l.median())
f_median, c_median, l_median
```
##### TODO: Consider per-user medians, since the overall medians are all zero.
## Sorting
```
df = df.sort_values(by=['uid', 'f'], ascending=False)
df2 = df2.sort_values(by=['uid'], ascending=False)
df.head()
users = pd.DataFrame(df2['uid'].unique()).isin(df['uid'].unique())
users.columns = ['in_train']
print(len(users[users['in_train'] == False]), "users from predicting data are not in the training data")
```
## Features
- third-party applications
if "我在#" in content, then 1; else 0.
```
df['application'] = df.content.apply(lambda x: 1 if '我在#' in str(x) else 0) # str() handles non-string content values
df[df['content'].str.contains("我在#") == 1].head(2)
```
- title
if content starts with "【", then 1; else 0.
```
df['title'] = df.content.apply(lambda x: 1 if str(x)[0] == '【' else 0)
```
TODO: entertainment
- ad related
if any ad keywords in content then 1; else 0
```
ad_keywords = ['天猫', '淘宝', '购物券', '折扣', '优惠']
df['advertisement'] = df.content.apply(lambda x: 1 if any(keyword in str(x) for keyword in ad_keywords) else 0) # generator expression
df[df['content'].str.contains('天猫') == 1].head(2)
```
- hotwords
if any baidu hotwords in content then 1; else 0
```
baidu_hot_words = ['2015阅兵', '奔跑吧兄弟', '花干骨', 'duang', 'DUANG', '毕福剑', '完美世界', '清华大学', '九寨沟', '天津爆炸', '快乐大本营', \
'校花的贴身高手', '车震', '金星', '大主宰', '武汉大学', '泰山', '全面开放二孩政策', 'running man','Running Man','Running man',\
'RUNNING MAN', '盗墓笔记', '萌萌哒', '王思聪', '淘宝', '厦门大学', '颐和园', '优衣库事件', '最强大脑', '何以笙箫默', '然并卵', \
'叶良辰', '百度', '北京大学', '故宫', '毕福剑违纪', '极限挑战', '斗鱼', '有钱就是任性', '昆凌', '我欲封天', '中山大学', '华山', \
'a股保卫战','A股保卫战', '欢乐喜剧人', '琅琊榜', '不做死就不会死', '刘雯', '双色球开奖结果', '中南大学', '北戴河', '人民币贬值', \
'天天向上', '克拉恋人', '小鲜肉', '章泽天', 'qq', 'QQ', '复旦大学', '普陀山', '2015苹果发布会', '中国好声音', '旋风少年', '绿茶婊', \
'马云', '微信', '山东大学', '五台山', '另一个地球可能发现', '康熙来了', '终极教师', '壕', '宁泽涛', '花千骨', '浙江大学', '峨眉山', \
'日本8.5级地震', '奇葩说', '武媚娘传奇', '我也是醉了', '柴静', '双色球', '西南大学', '云台山']
df['hotwords'] = df.content.apply(lambda x: 1 if any(keyword in str(x) for keyword in baidu_hot_words) else 0) # generator expression
df[df['content'].str.contains('车震') == 1].head(2)
```
- keywords from high-interaction tweets
Extracted with jieba:
```
threshold = 5
df[(df['f'] > threshold) & (df['c'] > threshold) & (df['l'] > threshold)].head(2)
content = df[(df['f'] > threshold) & (df['c'] > threshold) & (df['l'] > threshold)]['content'].to_string( index = False, header = False)
content[0:100]
# import sys
# sys.path.append('../')
import jieba
import jieba.analyse
# jieba.analyse.set_stop_words('stoped.txt') # do not remove any meaningless words for 1st model
keywords = jieba.analyse.extract_tags(content, topK=500)
print(",".join(keywords))
df['keywords'] = df.content.apply(lambda x: 1 if any(keyword in str(x) for keyword in keywords) else 0) # generator expression
df[df['content'].str.contains('程序员') == 1].head(2)
```
- TF-IDF
```
def tfidf(content: str) -> float:
    """
    Find the TF-IDF value of the most valuable word in a single tweet.
    """
    keyword = jieba.analyse.extract_tags(content, topK=1, withWeight=True)
    if len(keyword) != 0:
        return keyword[0][1]
    else:
        return 0
testdf = df[1:10].copy()  # .copy() so the added column below doesn't trigger SettingWithCopyWarning
testdf.head(2)
# too slow, only testing here
testdf['tfidf'] = testdf.content.apply(lambda x: tfidf(str(x)))
testdf.head(2)
```
## Scores
```
df_test = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'f':  [333, 1, 0, 0, 1],
    'fp': [0, 0, 0, 0, 0],
    'c':  [222, 3, 0, 1, 0],
    'cp': [0, 0, 0, 0, 0],
    'l':  [250, 2, 1, 0, 0],
    'lp': [0, 0, 0, 0, 0],
})
df_test
df_test['precision'] = df_test.apply(lambda x: 1 - 0.5 * abs(x['fp'] - x['f']) / (x['f'] + 5)
                                                 - 0.25 * abs(x['cp'] - x['c']) / (x['c'] + 3)
                                                 - 0.25 * abs(x['lp'] - x['l']) / (x['l'] + 3), axis=1)
df_test
```
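The per-tweet precision above can also be packaged as a plain function, which makes it easy to sanity-check with hand-picked numbers (an all-zero prediction on an all-zero tweet scores exactly 1):

```python
def tweet_precision(f, c, l, fp, cp, lp):
    """Per-tweet precision, matching the formula applied to df_test above."""
    return (1 - 0.5 * abs(fp - f) / (f + 5)
              - 0.25 * abs(cp - c) / (c + 3)
              - 0.25 * abs(lp - l) / (l + 3))

print(tweet_precision(0, 0, 0, 0, 0, 0))   # 1.0 -- perfect score on a zero-interaction tweet
print(tweet_precision(3, 1, 2, 0, 0, 0))   # 0.65
```

The smoothing constants (+5, +3, +3) keep the denominators nonzero and make forward count errors twice as costly as comment or like errors.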
#### TODO: score function
```
def score(df, times=2):
    df['precision'] = df.apply(lambda x: 1 - 0.5 * abs(x['fp'] - x['f']) / (x['f'] + 5)
                                           - 0.25 * abs(x['cp'] - x['c']) / (x['c'] + 3)
                                           - 0.25 * abs(x['lp'] - x['l']) / (x['l'] + 3), axis=1)
    return
score(df_test)
```
## Predict
- Predict with 0,0,0, online accuracy: 22.94%
```
# dfp: a new pandas DF for predicting
dfp = df2[['uid', 'mid']]
dfp['fp,cp,lp'] = str("0,0,0")
dfp.to_csv("./output/result_000.txt", header = False, sep = "\t", index = False)
```
- Predict with 0,1,0, online accuracy: 23.26%
```
# dfp: a new pandas DF for predicting
dfp = df2[['uid', 'mid']]
dfp['fp,cp,lp'] = str("0,1,0")
dfp.to_csv("./output/result_010.txt",
header = False,
sep = "\t",
index = False)
```
- Predict with each user's historical median, filling `fillna(0)` for unseen users, online accuracy: 29.38%
```
# dfp: a new pandas DF for predicting
dfp = df2[['uid', 'mid']]
m = df.groupby(['uid'])[['f', 'c', 'l']].median()
m = m.sort_values(by='f', ascending=False)
# dfp = pd.merge(dfp, m, how='left', on='uid')
dfp = pd.merge(dfp, m, how='left', on='uid').sort_values(by='f',
ascending=False)
dfp = dfp.fillna(0)
dfp = dfp.assign(
predict=dfp.f.astype(int).astype(str) + ',' +
dfp.c.astype(int).astype(str) + ',' +
dfp.l.astype(int).astype(str)) # long but faster than apply lambda
dfp = dfp[['uid', 'mid', 'predict']]
dfp.head()
dfp.to_csv("./output/result_user_median.txt",
header=False,
sep="\t",
index=False)
# dfp: a new pandas DF for predicting
dfp = df2[['uid', 'mid']]
m = df.groupby(['uid'])[['f', 'c', 'l']].median() * 1.33  # scaled variant: inflate medians by 33%
m = m.sort_values(by='f', ascending=False)
# dfp = pd.merge(dfp, m, how='left', on='uid')
dfp = pd.merge(dfp, m, how='left', on='uid').sort_values(by='f',
ascending=False)
dfp = dfp.fillna(0)
dfp = dfp.assign(
predict=dfp.f.astype(int).astype(str) + ',' +
dfp.c.astype(int).astype(str) + ',' +
dfp.l.astype(int).astype(str)) # long but faster than apply lambda
dfp = dfp[['uid', 'mid', 'predict']]
dfp.head()
dfp.to_csv("./output/result_user_median_scaled.txt",  # distinct filename so the unscaled result above is not overwritten
header=False,
sep="\t",
index=False)
```
# Example: Reducing words to their root (*stemming*) in texts
**Author:** Unidad de Científicos de Datos (UCD)
---
This example shows the main functionality of the `stemming` module of the **ConTexto** library. This module applies *stemming* to texts. Stemming is a method for reducing all the inflected forms of words that share a root to that common "root" or "stem". For example, the words niños, niña, and niñez all share the same root: "niñ". Unlike lemmatization, where every lemma is a word that exists in the vocabulary of the corresponding language, the root words obtained through stemming do not necessarily exist as standalone words. Applying stemming can simplify texts by unifying words that share the same root, thereby avoiding a larger vocabulary than necessary.
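A deliberately naive suffix-stripping stemmer illustrates the idea on the niños/niña/niñez example; this is a toy sketch with a hand-picked suffix list, not the algorithm ConTexto uses under the hood:

```python
def toy_stem(word, suffixes=("itos", "ez", "os", "as", "o", "a")):
    """Strip the first matching suffix; real stemmers apply ordered rule sets."""
    for suffix in sorted(suffixes, key=len, reverse=True):  # try longest suffixes first
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

print([toy_stem(w) for w in ["niños", "niña", "niñez"]])  # ['niñ', 'niñ', 'niñ']
```

Note that "niñ" is not itself a Spanish word, which is precisely the difference from lemmatization described above.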
Para mayor información sobre este módulo y sus funciones, se puede consultar <a href="https://ucd-dnp.github.io/ConTexto/funciones/stemming.html" target="_blank">su documentación</a>.
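As a rough intuition, a stemmer strips inflectional suffixes from the end of a word. The toy function below only illustrates that idea; it is *not* ConTexto's actual algorithm (real stemmers such as Snowball apply ordered rule steps with region constraints):

```python
# Toy stemmer: strips a few Spanish inflectional suffixes, keeping at
# least three characters of the word. Purely illustrative.
def toy_stem(word):
    for suffix in ('eces', 'ez', 'os', 'as', 'o', 'a'):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

print([toy_stem(w) for w in ['niños', 'niña', 'niñez']])  # → ['niñ', 'niñ', 'niñ']
```

All three forms collapse to the same stem "niñ", which is exactly the unification effect described above.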
---
## 1. Importar funciones necesarias y definir textos de prueba
En este caso se importa la función `stem_texto`, que aplica *stemming* a un texto de entrada, y la clase `Stemmer`, que puede ser utilizada directamente, entre otras cosas, para agilizar el proceso de hacer *stemming* a una lista de varios textos. Adicionalmente, se definen algunos textos para desarrollar los ejemplos.
```
import time
from contexto.stemming import Stemmer, stem_texto
# sample texts
texto = 'Esta es una prueba para ver si las funciones son correctas y funcionan bien. Perritos y gatos van a la casita'
texto_limpiar = "Este texto, con signos de puntuación y mayúsculas, ¡será limpiado antes de pasar por la función!"
texto_ingles = 'This is a test writing to study if these functions are performing well.'
textos = [
"Esta es una primera entrada en el grupo de textos",
"El Pibe Valderrama empezó a destacar jugando fútbol desde chiquitin",
"De los pájaros del monte yo quisiera ser canario",
"Finalizando esta listica, se incluye una última frase un poquito más larga que las anteriores."
]
```
---
## 2. *Stemming* texts
The `stem_texto` function applies stemming to an input text. It has optional parameters to set the language of the input text (if it is "auto", the language is detected automatically). Additionally, the *limpiar* parameter performs a basic cleanup of the text before applying stemming.
```
# Detect the language of the text automatically
texto_stem = stem_texto(texto, 'auto')
print(texto_stem)
# Try another language
stem_english = stem_texto(texto_ingles, 'inglés')
print('-------')
print(stem_english)
# Clean the text before stemming
print('-------')
print(stem_texto(texto_limpiar, limpiar=True))
```
---
## 3. *Stemming* several texts with a single `Stemmer` object
To apply stemming to a collection of texts, it can be faster to define a single object of the `Stemmer` class and pass it in the *stemmer* parameter of the `stem_texto` function. Doing this can save time, since it avoids initializing a new `Stemmer` object for each text. The savings grow as the number of texts to process increases.
Below is a timing comparison of two options:
1. Apply stemming to a list of texts by calling `stem_texto` on each one, with no other consideration.
2. Define a `Stemmer` object and use it to process the same list of texts.
```
# Option 1: a stemmer is initialized for each text
tic = time.time()
for t in textos:
    print(stem_texto(t))
tiempo_1 = time.time() - tic
# Option 2: a single stemmer is reused for all the texts
print('----------')
tic = time.time()
stemmer = Stemmer(lenguaje='español')
for t in textos:
    print(stem_texto(t, stemmer=stemmer))
tiempo_2 = time.time() - tic
print('\n***************\n')
print(f'Time with option 1: {tiempo_1} seconds\n')
print(f'Time with option 2: {tiempo_2} seconds\n')
```
# 1.0 Visualizing Frequency Distributions
## 1.1 Visualizing Distributions
To find patterns in a frequency table we have to look up the frequency of each unique value or class interval and, at the same time, compare the frequencies. This process can get time-consuming for tables with many unique values or class intervals, or when the frequency values are large and hard to compare against each other.
We can solve this problem by **visualizing** the data in the tables with the help of graphs. Graphs make it much easier to scan and compare frequencies, providing us with a single picture of the entire distribution of a variable.
Because they are easy to grasp and also eye-catching, graphs are a better choice than frequency tables if we need to present our findings to a non-technical audience.
In this lesson, we'll learn about three kinds of graphs:
- Bar plots.
- Pie charts.
- Histograms.
By the end of the mission, we'll know how to generate the graphs below ourselves, and we'll know when it makes sense to use each:
<center><img width="1000" src="https://drive.google.com/uc?export=view&id=1Rxdp-_t01VXmbJayEqTs4WOn4_-SAL6t"></center>
We've already learned about bar plots and histograms in the EDA lessons. In this mission we build on that knowledge and discuss the graphs in the context of statistics by learning which kinds of variables each graph is most suitable for.
## 1.2 Bar Plots
For variables measured on a **nominal** or an **ordinal** scale it's common to use a **bar plot** to visualize their distribution. To generate a **bar plot** for the distribution of a variable we need two sets of values:
- One set containing the unique values.
- Another set containing the frequency for each unique value.
We can get this data easily from a frequency table. We can use **Series.value_counts()** to generate the table, and then use the [Series.plot.bar()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.bar.html) method on the resulting table to generate a **bar plot**. Using the same WNBA dataset we've been working with for the past missions, this is how we'd do that for the Pos (player position) variable:
```python
>> wnba['Pos'].value_counts().plot.bar()
```
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1xPuz8XKNPPmGVdpMbqLcVNniySRGGuDc"></center>
The **Series.plot.bar()** method generates a vertical bar plot with the frequencies on the y-axis, and the unique values on the x-axis. To generate a horizontal bar plot, we can use the [Series.plot.barh() method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.barh.html):
```python
>> wnba['Pos'].value_counts().plot.barh()
```
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1jQCBSxV20rDElban00Shk08R6ykT0oXk"></center>
As we'll see in the next screen, horizontal bar plots are ideal to use when the labels of the unique values are long.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- We've taken information from the **Experience** column, and created a new column named **Exp_ordinal**, which is measured on an ordinal scale. The new column has five unique labels, and each one corresponds to a number of years a player has played in WNBA:
<img width="300" src="https://drive.google.com/uc?export=view&id=1tqqE0d76Xk1baGCTWNEkYbJ9Muevfuw3">
- Create a **bar plot** to display the distribution of the **Exp_ordinal** variable:
- Generate a frequency table for the **Exp_ordinal** variable.
    - Sort the table by unique labels in an ascending order using the techniques we learned in the previous mission.
- Generate a bar plot using the **Series.plot.bar()** method.
```
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# put your code here
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 200
pd.options.display.max_columns = 50
wnba = pd.read_csv("wnba.csv")
def make_exp_ordinal(row):
if row['Experience'] == 'R':
return 'Rookie'
if (1 <= int(row['Experience']) <= 3):
return 'Little experience'
if (4 <= int(row['Experience']) <= 5):
return 'Experienced'
    if (6 <= int(row['Experience']) <= 10):
return 'Very experienced'
else:
return 'Veteran'
wnba['Exp_ordinal'] = wnba.apply(make_exp_ordinal, axis = 1)
wnba['Exp_ordinal'].value_counts().iloc[[3, 0, 2, 1, 4]].plot.bar()
```
## 1.3 Horizontal Bar Plots
One of the problems with the bar plot we built in the last exercise is that the tick labels of the x-axis are hard to read:
<img width="400" src="https://drive.google.com/uc?export=view&id=1gKTo1l94020_BBnk7ilS8WzllVFzYfFE">
To fix this we can rotate the labels, or we can switch to a horizontal bar plot. We can rotate the labels using the rot parameter of the **Series.plot.bar()** method. By default the labels are rotated 90°, and we can tilt them a bit to 45°:
```python
>> wnba['Exp_ordinal'].value_counts().iloc[[3,0,2,1,4]].plot.bar(rot = 45)
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1wD2TvUAm0fyBrVwWeE5KnTxuz2IS6RKf">
Slightly better, but we can do a better job with a horizontal bar plot. If we wanted to publish this bar plot, we'd also have to make it more informative by adding a title. This is what we'll do in the next exercise, but for now this is how we could do that for the **Pos** variable (note that we use the [Series.plot.barh()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.barh.html) method, not **Series.plot.bar()**):
```python
>> wnba['Pos'].value_counts().plot.barh(title = 'Number of players in WNBA by position')
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1cNU5qtgeawYB-9ec-pC16XtN_6Rtb1Sd">
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Create a horizontal bar plot to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable.
- Sort the table by unique labels in an ascending order.
- Use the **Series.plot.barh()** method to generate the horizontal bar plot.
- Add the following title to the plot: **Number of players in WNBA by level of experience.**
```
# put your code here
wnba['Exp_ordinal'].value_counts().iloc[[3, 0, 2, 1, 4]].plot.barh(title = 'Number of players in WNBA by level of experience.')
```
## 1.4 Pie Charts
Another kind of graph we can use to visualize the distribution of **nominal** and **ordinal** variables is a **pie chart**.
Just as the name suggests, a pie chart is structured pretty much like a regular pie: it takes the form of a circle and is divided into wedges. Each wedge in a pie chart represents a category (one of the unique labels), and the size of each wedge is given by the proportion (or percentage) of that category in the distribution.
<img width="600" src="https://drive.google.com/uc?export=view&id=1KKprkhfZaGe0CkLO0p3FzoJ6i71QrSv-">
We can generate pie charts using the [Series.plot.pie() method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.pie.html). This is how we'd do that for the **Pos** variable:
```python
>> wnba['Pos'].value_counts().plot.pie()
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1MvVv73TYBDN5VNZ2gELxp5I4O3bn6UJC">
The main advantage of pie charts over bar plots is that they provide a much better sense of the relative frequencies (proportions and percentages) in the distribution. Looking at a bar plot, we can see that some categories are more or less numerous than others, but it's really hard to tell what proportion of the distribution each category takes.
With pie charts, we can immediately get a visual sense for the proportion each category takes in a distribution. Just by eyeballing the pie chart above we can make a series of observations in terms of proportions:
- Guards ("G") take about two fifths (2/5) of the distribution.
- Forwards ("F") make up roughly a quarter (1/4) of the distribution.
- Close to one fifth (1/5) of the distribution is made of centers ("C").
- Combined positions ("G/F" and "F/C") together make up roughly one fifth (1/5) of the distribution.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate a pie chart to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable. Don't sort the table this time.
- Use the **Series.plot.pie()** method to generate the pie plot.
```
# put your code here
wnba['Exp_ordinal'].value_counts().plot.pie(title = 'Number of players in WNBA by level of experience.')
```
## 1.5 Customizing a Pie Chart
The pie chart we generated in the previous exercise is more of an ellipse than a circle, and the **Exp_ordinal** label is unaesthetic and hard to read:
<img width="400" src="https://drive.google.com/uc?export=view&id=1tVBOULsftVpM-DH75lZvNGY09Jdavsik">
To give a pie chart the right shape, we need to specify equal values for height and width in the **figsize** parameter of **Series.plot.pie()**. The **Exp_ordinal** label belongs to a hidden y-axis, which means we can use the **plt.ylabel()** function to remove it. This is how we can do this for the **Pos** variable:
```python
>> import matplotlib.pyplot as plt
>> wnba['Pos'].value_counts().plot.pie(figsize = (6,6))
>> plt.ylabel('')
```
<img width="250" src="https://drive.google.com/uc?export=view&id=1-ER6QNonL-CTu6tBMnvdL4PPER17igTt">
Ideally, we'd have proportions or percentages displayed on each wedge of the pie chart. Fortunately, this is easy to get using the **autopct** parameter. This parameter accepts Python string formatting, and we'll use the string **'%.1f%%'** to have percentages displayed with a precision of one decimal place. Let's break down this string formatting:
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1Q6E-FXJCl4qVDplM3tM2n71i21eDk7vi"></center>
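The format string itself is ordinary Python `%`-formatting, so we can check what it produces outside of matplotlib:

```python
# '%.1f%%' renders a number with one decimal place followed by a literal % sign
print('%.1f%%' % 40.0)    # → 40.0%
print('%.1f%%' % 27.972)  # → 28.0%
```

The doubled `%%` is what escapes the percent sign, which is why a single `%` would raise a formatting error here.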
This is how the process looks for the Pos variable:
```python
>> wnba['Pos'].value_counts().plot.pie(figsize = (6,6), autopct = '%.1f%%')
```
<img width="300" src="https://drive.google.com/uc?export=view&id=1KnC3jlcoXgEXKcWMzQvS2hS6kxewrfLY">
Notice that the percentages were automatically determined under the hood, which means we don't have to transform to percentages ourselves using **Series.value_counts(normalize = True) * 100.**
Other display formats are possible, and more documentation on the syntax of string formatting in Python can be found [here](https://docs.python.org/3/library/string.html#format-specification-mini-language). Documentation on **autopct** and other nice customization parameters can be found [here](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pie.html).
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate and customize a pie chart to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable. Don't sort the table this time.
- Use the **Series.plot.pie()** method to generate the **pie plot.**
- Use the **figsize** parameter to specify a **width** and a **height** of 6 inches each.
- Use the **autopct** parameter to have percentages displayed with a precision of 2 decimal places.
- Add the following title to the plot: **Percentage of players in WNBA by level of experience.**
- Remove the **Exp_ordinal** label.
```
# put your code here
wnba['Exp_ordinal'].value_counts().plot.pie(
    figsize = (6,6),
    autopct = '%.2f%%',
    title = 'Percentage of players in WNBA by level of experience.'
)
plt.ylabel('')
```
## 1.6 Histograms
Because of the special properties of variables measured on interval and ratio scales, we can describe distributions in more elaborate ways. Let's examine the **PTS** (total points) variable, which is discrete and measured on a ratio scale:
```python
>> wnba['PTS'].describe()
count 143.000000
mean 201.790210
std 153.381548
min 2.000000
25% 75.000000
50% 177.000000
75% 277.500000
max 584.000000
Name: PTS, dtype: float64
```
We can see that 75% of the values are distributed within a relatively narrow interval (between 2 and 277.5), while the remaining 25% are distributed in an interval that's slightly larger.
<img width="500" src="https://drive.google.com/uc?export=view&id=1eKJa7moOQBWYg5iswrL5KZv6ZHJXZeNJ">
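The quartile reasoning above can be reproduced on any numeric series with `Series.quantile()`; here is a tiny synthetic example (hypothetical values, not the WNBA data):

```python
import pandas as pd

# five hypothetical values; quantile() interpolates between sorted positions
s = pd.Series([2, 75, 177, 277, 584])
print(s.quantile([0.25, 0.5, 0.75]))
```

With five sorted values the quartiles fall exactly on the 2nd, 3rd and 4th values (75, 177, 277), which mirrors the 25%, 50% and 75% rows of a `describe()` output.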
To visualize the distribution of the **PTS** variable, we need to use a graph that allows us to see immediately the patterns outlined above. The most commonly used graph for this scenario is the **histogram**.
To generate a histogram for the **PTS** variable, we can use the **Series.plot.hist()** method directly on the **wnba['PTS']** column (we don't have to generate a frequency table in this case):
```python
>> wnba['PTS'].plot.hist()
```
<img width="500" src="https://drive.google.com/uc?export=view&id=1Qa6ZdGR-918onl80zHS6ckoF5rMRuYOx">
In the next screen, we'll explain the statistics happening under the hood when we run **wnba['PTS'].plot.hist()** and discuss the histogram above in more detail. Until then, let's practice generating the histogram above ourselves.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Using **Series.plot.hist()**, generate a histogram to visualize the distribution of the **PTS** variable.
```
# put your code here
wnba['PTS'].plot.hist()
```
## 1.7 The Statistics Behind Histograms
Under the hood, the **wnba['PTS'].plot.hist()** method:
- Generated a grouped frequency distribution table for the **PTS** variable with ten class intervals.
- For each class interval it plotted a bar with a height corresponding to the frequency of the interval.
Let's examine the grouped frequency distribution table of the **PTS** variable:
```python
>> wnba['PTS'].value_counts(bins = 10).sort_index()
```
Each bar in the histogram corresponds to one class interval. To show this is true, we'll generate below the same histogram as in the previous screen, but this time:
- We'll add the values of the x-ticks manually using the xticks parameter.
- The values will be the limits of each class interval.
- We use the [arange()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html?highlight=arange#numpy.arange) function from numpy to generate the values and avoid spending time with typing all the values ourselves.
- We start at 2, not at 1.417, because this is the actual minimum value of the first class interval (we discussed about this in more detail in the previous mission).
- We'll add a **grid** line using the grid parameter to demarcate clearly each bar.
- We'll rotate the tick labels of the x-axis using the rot parameter for better readability.
```python
>> from numpy import arange
>> wnba['PTS'].plot.hist(grid = True, xticks = arange(2,585,58.2), rot = 30)
```
Looking at the histogram above, we can extract the same information as from the grouped frequency table. We can see that there are 20 players in the interval (176.6, 234.8], 10 players in the interval (351.2, 409.4], etc.
More importantly, we can see the patterns we wanted to see in the last screen when we examined the output of **wnba['PTS'].describe()**.
<img width="700" src="https://drive.google.com/uc?export=view&id=1qQtYC5R9oylmOkpx_YjzpCZLv7cUo_5b">
From the output of **wnba['PTS'].describe()** we can see that most of the values (75%) are distributed within a relatively narrow interval (between 2 and 277). This tells us that:
- The values are distributed unevenly across the 2 - 584 range (2 is the minimum value in the **PTS** variable, and 584 is the maximum).
- Most values are clustered in the first (left) part of the distribution's range.
<img width="500" src="https://drive.google.com/uc?export=view&id=178mhCdacbAzjXqwfDSbVZJtGQ0ucuqzW">
We can immediately see the same two patterns on the histogram above:
- The distribution of values is uneven, with each class interval having a different frequency. If the distribution was even, all the class intervals would have the same frequency.
- Most values (roughly three quarters) are clustered in the left half of the histogram.
While it's easy and fast to make good estimates simply by looking at a histogram, it's always a good idea to add precision to our estimates using the percentile values we get from **Series.describe().**
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the **Games Played** variable using the **Series.describe()** method. Just from the output of this method, predict what the histogram of the **Games Played** variable should look like.
- Once you have a good idea of what histogram shape to expect, plot a histogram for the **Games Played** variable using **Series.plot.hist()**.
```
# put your code here
print(wnba['Games Played'].describe())
wnba['Games Played'].plot.hist()
```
## 1.8 Histograms as Modified Bar Plots
It should now be clear that a histogram is basically the visual form of a grouped frequency table. Structurally, a histogram can also be understood as a modified version of a bar plot. The main difference is that in the case of a histogram there are no gaps between bars, and each bar represents an interval, not a single value.
The main reason we remove the gaps between bars in case of a histogram is that we want to show that the class intervals we plot are adjacent to one another. With the exception of the last interval, the ending point of an interval is the starting point of the next interval, and we want that to be seen on the graph.
<img width="300" src="https://drive.google.com/uc?export=view&id=15v9hcTmgnSArcOD5jwZSlKJQTfwoVtHt">
For bar plots we add gaps because in most cases we don't know whether the unique values of ordinal variables are adjacent to one another in the same way as two class intervals are. It's safer to assume that the values are not adjacent, and add gaps.
<img width="600" src="https://drive.google.com/uc?export=view&id=1Bdl2qABVrU1nGjT1jIha-qwK7B62Caga">
For nominal variables, values can't be numerically adjacent in principle, and we add gaps to emphasize that the values are fundamentally distinct.
Below we summarize what we've learned so far:
<img width="400" src="https://drive.google.com/uc?export=view&id=19NxFZUcKvnQFnXvvAQl5xjOpnF_reJD-">
## 1.9 Binning for Histograms
You might have noticed that **Series.plot.hist()** splits a distribution by default into 10 class intervals. In the previous mission, we learned that 10 is a good number of class intervals to choose because it offers a good balance between information and comprehensibility.
<img width="400" src="https://drive.google.com/uc?export=view&id=1C8q5Zxpdkd_rILrIr4hqO7jCmoUgT10c">
With histograms, the breakdown point is generally larger than 10 because reading a picture is much easier than reading a grouped frequency table. However, once the number of class intervals goes over 30 or so, the granularity increases so much that for some intervals the frequency will be zero. This results in a discontinuous histogram from which it is hard to discern patterns.
Below, we can see how the histogram of the **PTS** variable changes as we vary the number of class intervals.
<img width="600" src="https://drive.google.com/uc?export=view&id=1yLJH1J-aXojOhzPg0FICmv6IQ9aXcpn8">
To modify the number of class intervals used for a histogram, we can use the **bins** parameter of **Series.plot.hist()**. A bin is the same thing as a class interval, and, when it comes to histograms, the term "bin" is used much more often.
Also, we'll often want to avoid letting pandas work out the intervals, and use instead intervals that we think make more sense. We can do this in two steps:
- We start with specifying the range of the entire distribution using the **range** parameter of **Series.plot.hist().**
- Then we combine that with the number of bins to get the intervals we want.
Let's say we want to get these three intervals for the distribution of the PTS variable:
- [1, 200)
- [200, 400)
- [400, 600]
If the histogram ranges from 1 to 600, and we specify that we want three bins, then the bins will automatically take the intervals above. This is because the bins must have equal interval lengths, and, at the same time, cover together the entire range between 1 and 600. To cover a range of 600 with three bins, we need each bin to cover 200 points, with the first bin starting at 1, and the last bin ending at 600.
<img width="600" src="https://drive.google.com/uc?export=view&id=1W8a-hbTW_ex0BI53go4xWvjKfMMP-WoO">
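Strictly speaking, matplotlib partitions the specified range evenly, so with a range starting at 1 the edges land at 1, roughly 200.7, roughly 400.3, and 600, not exactly at 200 and 400; the intervals listed above are rounded. The even-partition rule itself is easy to check with numpy:

```python
import numpy as np

# Bin edges are an even partition of the plotting range into bins + 1 points
exact = np.linspace(0, 600, 3 + 1)   # 0, 200, 400, 600
approx = np.linspace(1, 600, 3 + 1)  # edges for the 1-600 range used above
print(exact)
print(approx)
```

Starting the range at a round number (or picking a range whose width divides evenly by the bin count) keeps the edges exact.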
This is how we can generate a histogram with three bins and a 1 - 600 range for the **PTS** variable:
```python
>> wnba['PTS'].plot.hist(range = (1,600), bins = 3)
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1O-MfgUUOn1oEfTn1elEkpru5E49XP04c">
If we keep the same range, but change to six bins, then we'll get these six intervals: [1, 100), [100, 200), [200, 300), [300, 400), [400, 500), [500, 600].
```python
>> wnba['PTS'].plot.hist(range = (1,600), bins = 6)
```
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate a histogram for the **Games Played** variable, and customize it in the following way:
- Each bin must cover an interval of 4 games. The first bin must start at 1, the last bin must end at 32.
- Add the title "The distribution of players by games played".
- Add a label to the x-axis named "Games played".
```
# put your code here
wnba['Games Played'].plot.hist(
range = (1, 32),
bins = 8,
title = 'The distribution of players by games played'
)
plt.xlabel("Games played")
```
## 1.10 Skewed Distributions
There are a couple of histogram shapes that appear often in practice. So far, we've met two of these shapes:
<img width="600" src="https://drive.google.com/uc?export=view&id=1kosNk32RFOru1alq7taeD9u4DWMWanoi">
In the histogram on the left, we can see that:
- Most values pile up toward the endpoint of the range (32 games played).
- There are fewer and fewer values toward the opposite end (0 games played).
In the histogram on the right, we can see that:
- Most values pile up toward the starting point of the range (0 points).
- There are fewer and fewer values toward the opposite end.
Both these histograms show **skewed distributions**. In a skewed distribution:
- The values pile up toward the end or the starting point of the range, making up the body of the distribution.
- Then the values decrease in frequency toward the opposite end, forming the tail of the distribution.
<img width="400" src="https://drive.google.com/uc?export=view&id=1KR4lZFs4Z3D9GrEqpma9wRxRJbBvUzuC">
If the tail points to the left, then the distribution is said to be **left skewed**. When it points to the left, the tail points at the same time in the direction of negative numbers, and for this reason the distribution is sometimes also called **negatively skewed.**
If the tail points to the right, then the distribution is **right skewed**. The distribution is sometimes also said to be **positively skewed** because the tail points in the direction of positive numbers.
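pandas can quantify the direction of skew with `Series.skew()`, which is positive for right-skewed and negative for left-skewed data; a quick check on small synthetic samples:

```python
import pandas as pd

right = pd.Series([1, 1, 1, 2, 2, 3, 10])    # long tail toward large values
left = pd.Series([10, 10, 10, 9, 9, 8, 1])   # long tail toward small values

print(right.skew())  # positive: right (positively) skewed
print(left.skew())   # negative: left (negatively) skewed
```

The sign convention matches the terminology above: the skew statistic points in the same direction as the tail.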
<img width="600" src="https://drive.google.com/uc?export=view&id=1FIUz6XuJTcU74IHfJvJP6Il_JkUUXD3g">
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the following two variables:
- **AST** (number of assists).
- **FT%** (percentage of free throws made out of all attempts).
- Depending on the shape of the distribution, assign the string **'left skewed'** or **'right skewed'** to the following variables:
- **assists_distro** for the **AST** column.
- **ft_percent_distro** for the **FT%** column.
For instance, if you think the **AST** variable has a right skewed distribution, your answer should be **assists_distro = 'right skewed'.**
```
# put your code here
assists_distro = 'right skewed'
ft_percent_distro = 'left skewed'
```
## 1.11 Symmetrical Distributions
Besides skewed distributions, we often see histograms with a shape that is more or less symmetrical. If we draw a vertical line exactly in the middle of a symmetrical histogram, then we'll divide the histogram in two halves that are mirror images of one another.
<img width="500" src="https://drive.google.com/uc?export=view&id=1FAOJOZTgCGv6FAQMfFH7ap2HjQok2ZEs">
If the shape of the histogram is **symmetrical**, then we say that we have a **symmetrical distribution.**
A very common symmetrical distribution is one where the values pile up in the middle and gradually decrease in frequency toward both ends of the histogram. This pattern is specific to what we call a **normal distribution** (also called **Gaussian distribution**).
<img width="500" src="https://drive.google.com/uc?export=view&id=16AQYkLid4MnTqFLK8CID3wDWybbKvHgJ">
Another common symmetrical distribution is one where the values are distributed uniformly across the entire range. This pattern is specific to a **uniform distribution.**
<img width="500" src="https://drive.google.com/uc?export=view&id=1WvoDjqD-S-W9qOkqaPt9o3JZs438T9TJ">
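Synthetic samples make these shapes easy to reproduce. For a roughly symmetric distribution the mean and the median nearly coincide, which gives a quick numeric check of symmetry:

```python
import numpy as np

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=0, scale=1, size=10_000)   # bell-shaped
uniform_sample = rng.uniform(low=0, high=1, size=10_000)  # flat

# for symmetric distributions, |mean - median| should be close to zero
print(abs(np.mean(normal_sample) - np.median(normal_sample)))
print(abs(np.mean(uniform_sample) - np.median(uniform_sample)))
```

For a skewed sample, by contrast, the mean is pulled toward the tail and drifts away from the median.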
In practice, we rarely see perfectly **symmetrical distributions**. However, it's common to use perfectly symmetrical distributions as baselines for describing the distributions we see in practice. For instance, we'd describe the distribution of the **Weight** variable as resembling closely a normal distribution:
```python
>> wnba['Weight'].plot.hist()
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1eoNnxKtW_V5PrdZ5V1P83_YKWKNh61bZ">
When we say that the distribution above resembles closely a normal distribution, we mean that most values pile up somewhere close to the middle and decrease in frequency more or less gradually toward both ends of the histogram.
A similar reasoning applies to skewed distributions. We don't often see clear-cut skewed distributions, and we use the left and right skewed distributions as baselines for comparison. For instance, we'd say that the distribution of the **BMI** variable is slightly **right skewed**:
```python
>> wnba['BMI'].plot.hist()
```
There's more to say about distribution shapes, and we'll continue this discussion in the next course when we'll learn new concepts. Until then, let's practice what we've learned.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the following variables, trying to determine which one resembles the most a normal distribution:
- Age
- Height
- MIN
- Assign to the variable **normal_distribution** the name of the variable (as a string) whose distribution resembles the most a normal one.
For instance, if you think the **MIN** variable is the correct answer, then your answer should be **normal_distribution = 'MIN'.**
```
# put your code here
normal_distribution = 'Height'
```
## 1.12 Next Steps
In this mission, we learned about the graphs we can use to visualize the distributions of various kinds of variables. If a variable is measured on a nominal or ordinal scale, we can use a bar plot or a pie chart. If the variable is measured on an interval or ratio scale, then a histogram is a good choice.
Here's the summary table once again to help you recollect what we did in this mission:
<img width="400" src="https://drive.google.com/uc?export=view&id=19NxFZUcKvnQFnXvvAQl5xjOpnF_reJD-">
We're one mission away from finishing the workflow we set out to complete in the first mission. Next, we'll continue the discussion about data visualization by learning how to compare frequency distributions using graphs.
<img width="600" src="https://drive.google.com/uc?export=view&id=1U88LilHa2asEN9vnC_PQFXz9UtjmRIzh">
# Q1. If you have any, what are your choices for increasing the comparison between different figures on the same graph?
```
# Ans : Bar charts are good for comparisons, while line charts work better for trends. Scatter plots are good
# for relationships and distributions, but pie charts should be used only for simple compositions, never for
# comparisons or distributions.
```
# Q2. Can you explain the benefit of compound interest over a higher rate of interest that does not compound after reading this chapter?
```
# Ans : Compound interest makes your money grow faster because interest is calculated on the accumulated interest
# over time as well as on your original principal. Compounding can create a snowball effect, as the original
# investments plus the income earned from those investments grow together.
```
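A small numeric sketch of the point above, with made-up rates and horizon: a lower rate that compounds can overtake a higher rate that does not.

```python
# 7% simple interest grows linearly; 5% compounded annually grows exponentially.
principal = 10_000.0
years = 30

simple_7pct = principal * (1 + 0.07 * years)      # interest only on the principal
compound_5pct = principal * (1 + 0.05) ** years   # interest on principal + accumulated interest

print(simple_7pct, compound_5pct)  # 31000.0 vs roughly 43219
```

Over a 30-year horizon the compounding "snowball" more than makes up for the two-point rate difference.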
# Q3. What is a histogram, exactly? Name a numpy method for creating such a graph.
```
# Ans : A histogram is a plot that shows, for each range (bin) of x-values, either the number of
# data points that fall into that bin (a count histogram) or, when normalized, the fraction of
# points that do, which estimates the probability of the x-variable falling in that range.
# NumPy provides the built-in np.histogram() function for computing one.
```
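A minimal `np.histogram` example showing the (counts, bin_edges) return value:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0, scale=1, size=1000)

# np.histogram returns (counts, bin_edges): counts[i] is the number of
# points falling in the bin [bin_edges[i], bin_edges[i+1]).
counts, bin_edges = np.histogram(data, bins=10)
```

Note that `np.histogram` only computes the binning; to draw the graph you would pass the data to `plt.hist`, which calls it under the hood.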
# Q4. If necessary, how do you change the aspect ratios between the X and Y axes?
```
# Ans : We can create the figure with plt.figure(figsize=(10, 8)) from matplotlib.pyplot to scale the
# whole figure up or down; to control the data aspect ratio between the X and Y axes directly,
# use ax.set_aspect().
plt.figure(figsize=(10, 8))
```
# Q5. Compare and contrast the three types of array multiplication between two numpy arrays: dot product, outer product, and regular multiplication of two numpy arrays.
```
import numpy as np
a1=np.array([[1,2,3],[4,5,6],[6,7,8]])
a2=np.array([[10,20,30],[40,50,60],[60,70,80]])
# suppose above are the two arrays of shape 3 x 3
print(a1)
print()
print(a2)
#standard multiplication
a1*a2
# In standard (element-wise) multiplication, values at the same indices are multiplied together:
# C(i,j) = A1(i,j) * A2(i,j), so both arrays must have compatible shapes.
# dot product
np.dot(a1,a2)
# In the dot product, each row of the first array is multiplied element-by-element with each column
# of the second array and the products are summed: this is standard matrix multiplication.
#outer multiplication
np.outer(a1,a2)
# In the outer product, every element of the first array a1 is multiplied by every element of the
# second array a2; both arrays are flattened first, so the result has shape (a1.size, a2.size).
```
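A compact check of the three products on 2×2 arrays makes the differences concrete:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

elementwise = a * b        # index-by-index: C[i, j] = a[i, j] * b[i, j]
dot = a @ b                # matrix product: rows of a times columns of b, summed
outer = np.outer(a, b)     # both arrays flattened, then every pair multiplied

print(elementwise)   # [[ 5 12] [21 32]]
print(dot)           # [[19 22] [43 50]]
print(outer.shape)   # (4, 4)
```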
# Q6. Before you buy a home, which numpy function will you use to measure your monthly mortgage payment?
```
# Ans : np.pmt(rate, nper, pv) is the function we would use to calculate the monthly mortgage
# payment before purchasing a house. Note that np.pmt was removed from NumPy in version 1.20;
# it now lives in the separate numpy_financial package as numpy_financial.pmt.
# rate = the periodic interest rate
# nper = the number of payment periods
# pv = the total value of the mortgage loan
```
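Since `np.pmt` is no longer in NumPy itself, here is a sketch of the underlying annuity formula it computed; the 5% / 30-year / $300,000 inputs are made-up example values.

```python
def monthly_payment(annual_rate, years, principal):
    # Standard annuity formula: pmt = pv * r / (1 - (1 + r) ** -n),
    # with r the periodic (monthly) rate and n the number of payments.
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

pmt = monthly_payment(0.05, 30, 300_000)
print(round(pmt, 2))   # about 1610.46
```

`np.pmt` (and `numpy_financial.pmt`) return the negative of this value, following the cash-flow sign convention.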
# Q7. Can string data be stored in numpy arrays? If so, list at least one restriction that applies to this data.
```
# Ans : Yes, a numpy array can store strings. The restriction is that the array's string dtype is
# fixed-width: when the array is created, the dtype is set by the longest string it contains
# (e.g. '<U5' for a maximum of 5 characters). Any string assigned into the array later that is
# longer than this width is silently truncated to fit, and the extra characters are dropped.
```
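A small demonstration of the fixed-width restriction and its silent truncation on assignment:

```python
import numpy as np

arr = np.array(['apple', 'fig'])   # dtype is '<U5': the longest initial string has 5 chars
print(arr.dtype)                   # <U5

arr[1] = 'strawberry'              # 10 chars: silently truncated to the dtype width
print(arr[1])                      # straw
```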
```
import pandas as pd
import numpy as np
import nltk
words = set(nltk.corpus.words.words())
from tqdm import tqdm
import matplotlib.pyplot as plt
data = pd.read_csv("../data/6lakh_pipeline_company_companies_data_2021-07-21.csv")
data.columns
len(data)
data.info()
keep = []
for i in data.columns:
non_null_count = data[i].notnull().sum()
if(non_null_count>=0.7*len(data)):
keep.append(i)
keep
keep = []
for i in data.columns:
non_null_count = data[i].notnull().sum()
if(non_null_count>=0.7*len(data)):
keep.append(i)
len(keep)
keep2 = []
for i in data.columns:
non_null_count = data[i].notnull().sum()
if(non_null_count>=0.5*len(data)):
keep2.append(i)
len(keep2)
# (set(keep2)-set(keep))
len(keep2),len(keep)
for i in keep2:
if i not in keep:
print(i)
keep
df = data[keep]
df2=df.drop(columns=['company_crunchbase_page','company_address','created_at','cb_id','nubela_id',
'linkedin_internal_id','linkedin_search_id','company_linkedin_page','company_phone_number',
'company_profile_image_url','homepage_url'])
df2.columns
to_drop = ['updated_at', 'permalink','partition_0','company_website','rank']
df2=df2.drop(columns=to_drop)
df2
df2.info()
df2.isnull().mean() * 100
df2.company_sector.unique()
df2.company_industry.unique()
print(len(df2),"\nAfter Drop:")
df3 = df2.dropna(axis=0, subset=['company_description','company_industry','company_sector'])
df3.reset_index(inplace=True,drop=True)
print(len(df3))
data = df3[['company_description','company_industry','company_sector']] #for TEXT ONLY
data
def get_non_Eng(data):
fin_data=pd.DataFrame()
spa_data=pd.DataFrame()
for i in tqdm(data.index):
ini_len = len(data['company_description'][i])
sen = " ".join(w for w in nltk.wordpunct_tokenize(data['company_description'][i]) if w.lower() in words or not w.isalpha())
fin_len=len(sen)
if(fin_len>= 0.5*ini_len):
fin_data=fin_data.append(data.iloc[i])
else:
spa_data=spa_data.append(data.iloc[i])
fin_data.reset_index(inplace=True,drop=True)
spa_data.reset_index(inplace=True,drop=True)
return fin_data,spa_data
engdf,nonedf=get_non_Eng(data)
nonedf
nonedf.to_excel("../data/nonEng_split.xlsx")
engdf.to_excel("../data/Eng_split.xlsx", engine='xlsxwriter')
engdf.to_csv("../data/Eng_split.csv",encoding='utf8')
df = pd.read_csv("../data/6lakh_pipeline_company_companies_data_2021-07-21.csv")
df=df[['_id','company_description']]
pd.merge(df, data, on="company_description")
from spacy_langdetect import LanguageDetector
import spacy
nlp = spacy.load('en_core_web_sm') # 1
nlp.add_pipe('language_detector', last=True) #2
text_content = "My name mohit in Berlin."
doc = nlp(text_content) #3
detect_language = doc._.language #4
print(detect_language)
noneng=pd.read_excel("../data/nonEng_split.xlsx")
count=0
for i in tqdm(noneng.index):
text_content = noneng['company_description'][i]
doc = nlp(text_content) #3
detect_language = doc._.language #4
if('en'==detect_language['language']):
count+=1
# print(text_content,"\n")
def get_lang(x):
text_content = x
doc = nlp(text_content) #3
detect_language = doc._.language #4
return detect_language['language']
tqdm.pandas()
noneng['language']=noneng.progress_apply(lambda i: get_lang(i['company_description']),axis=1)
noneng.columns
noneng
noneng.loc['en'==noneng['language']]
data=pd.read_csv("../data/Eng_split.csv")
to_add = noneng.loc['en'==noneng['language']].drop('language',axis=1)
data = pd.concat([data,to_add])
data = data.drop('Unnamed: 0',axis=1)
data.columns
df = pd.read_csv("../data/6lakh_pipeline_company_companies_data_2021-07-21.csv")
df=df[['_id','company_description', 'company_industry', 'company_sector']]
data=pd.merge(df, data, on=['company_description', 'company_industry', 'company_sector'],how='inner')
data.reset_index(inplace=True,drop=True)
data.to_csv("../data/Final_data.csv")
data['company_sector'].value_counts()
fig = plt.figure(figsize=(8,6))
data.groupby('company_sector').company_description.count().plot.bar()
plt.show()
data=pd.read_csv("../data/Eng_split.csv") ##
data=data.loc[:10000]
df=data.loc[:10000]
import re
import nltk.corpus
from nltk.corpus import stopwords
def clean_text(df, text_field, new_text_field_name):
df[new_text_field_name] = df[text_field].str.lower()
df[new_text_field_name] = df[new_text_field_name].apply(lambda elem: re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)|^rt|http.+?", "", elem))
df[new_text_field_name] = df[new_text_field_name].apply(lambda elem: re.sub(r"\d+", "", elem))
return df
data_clean = clean_text(data, 'company_description', 'text_clean')
stop = stopwords.words('english')
data_clean['text_clean'] = data_clean['text_clean'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)]))
data_clean.head()
# import nltk
# nltk.download('punkt')
# from nltk.tokenize import sent_tokenize, word_tokenize
# data_clean['text_tokens'] = data_clean['text_clean'].apply(lambda x: word_tokenize(x))
# data_clean.head()
# from nltk.stem import PorterStemmer
# from nltk.tokenize import word_tokenize
# def word_stemmer(text):
# stem_text = [PorterStemmer().stem(i) for i in text]
# return stem_text
# data_clean['text_tokens_stem'] = data_clean['text_tokens'].apply(lambda x: word_stemmer(x))
# data_clean.head()
# nltk.download('wordnet')
# from nltk.stem import WordNetLemmatizer
# def word_lemmatizer(text):
# lem_text = [WordNetLemmatizer().lemmatize(i) for i in text]
# return lem_text
# data_clean['text_tokens_lemma'] = data_clean['text_tokens'].apply(lambda x: word_lemmatizer(x))
# data_clean.head()
data_clean['company_description']=data_clean['text_clean']
df= data_clean
from io import StringIO
col = ['company_sector', 'company_description']
df = df[col]
df = df[pd.notnull(df['company_description'])]
df.columns = ['company_sector', 'company_description']
df['category_id'] = df['company_sector'].factorize()[0]
category_id_df = df[['company_sector', 'category_id']].drop_duplicates().sort_values('category_id')
category_to_id = dict(category_id_df.values)
id_to_category = dict(category_id_df[['category_id', 'company_sector']].values)
df.head()
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1', ngram_range=(1, 2), stop_words='english')
features = tfidf.fit_transform(df.company_description).toarray()
labels = df.category_id
features.shape
from sklearn.feature_selection import chi2
import numpy as np
N = 2
for Product, category_id in sorted(category_to_id.items()):
features_chi2 = chi2(features, labels == category_id)
indices = np.argsort(features_chi2[0])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
print("# '{}':".format(Product))
print(" . Most correlated unigrams:\n. {}".format('\n. '.join(unigrams[-N:])))
print(" . Most correlated bigrams:\n. {}".format('\n. '.join(bigrams[-N:])))
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
X_train, X_test, y_train, y_test = train_test_split(df['company_description'], df['company_sector'], random_state = 0)
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = MultinomialNB().fit(X_train_tfidf, y_train)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
models = [
RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
LinearSVC(),
MultinomialNB(),
LogisticRegression(random_state=0),
]
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
import seaborn as sns
sns.boxplot(x='model_name', y='accuracy', data=cv_df)
sns.stripplot(x='model_name', y='accuracy', data=cv_df,
size=8, jitter=True, edgecolor="gray", linewidth=2)
plt.show()
model = LinearSVC()
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(conf_mat, annot=True, fmt='d',
xticklabels=category_id_df.company_sector.values, yticklabels=category_id_df.company_sector.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred, target_names=df['company_sector'].unique()))
```
# NTDS'18 milestone 1: network collection and properties
[Effrosyni Simou](https://lts4.epfl.ch/simou), [EPFL LTS4](https://lts4.epfl.ch)
## Students
* Team: `42`
* Students: `Alexandre Poussard, Robin Leurent, Vincent Coriou, Pierre Fouché`
* Dataset: [`Flight routes`](https://openflights.org/data.html)
## Rules
* Milestones have to be completed by teams. No collaboration between teams is allowed.
* Textual answers shall be short. Typically one to three sentences.
* Code has to be clean.
* You cannot import any other library than we imported.
* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.
* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter.
## Objective
The purpose of this milestone is to start getting acquainted to the network that you will use for this class. In the first part of the milestone you will import your data using [Pandas](http://pandas.pydata.org) and you will create the adjacency matrix using [Numpy](http://www.numpy.org). This part is project specific. In the second part you will have to compute some basic properties of your network. **For the computation of the properties you are only allowed to use the packages that have been imported in the cell below.** You are not allowed to use any graph-specific toolboxes for this milestone (such as networkx and PyGSP). Furthermore, the aim is not to blindly compute the network properties, but to also start to think about what kind of network you will be working with this semester.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Part 1 - Import your data and manipulate them.
### A. Load your data in a Pandas dataframe.
First, you should define and understand what are your nodes, what features you have and what are your labels. Please provide below a Pandas dataframe where each row corresponds to a node with its features and labels. For example, in the case of the Free Music Archive (FMA) Project, each row of the dataframe would be of the following form:
| Track | Feature 1 | Feature 2 | . . . | Feature 518| Label 1 | Label 2 |. . .|Label 16|
|:-------:|:-----------:|:---------:|:-----:|:----------:|:--------:|:--------:|:---:|:------:|
| | | | | | | | | |
It is possible that in some of the projects either the features or the labels are not available. This is OK, in that case just make sure that you create a dataframe where each of the rows corresponds to a node and its associated features or labels.
```
routesCol = ['Airline','AirlineID','SourceAirport','SourceAirportID',
'DestAirport','DestAirportID','CodeShare','Stops','Equipment']
airportsCol = ['AirportID', 'Name', 'City', 'Country', 'IATA', 'ICAO', 'Latitude', 'Longitude',
'Altitude', 'Timezone', 'DST', 'TzDatabaseTimezone', 'Type', 'Source']
routes = pd.read_csv("data/routes.dat",header = None,names = routesCol,encoding = 'utf-8', na_values='\\N')
airports= pd.read_csv("data/airports.dat",header = None, names = airportsCol, encoding = 'utf-8')
# We drop nan value for source and destination airport ID because they are not in the airports file
# and it does not make sense to keep routes that go nowhere.
routes.dropna(subset=["SourceAirportID", "DestAirportID"], inplace=True)
# Get unique airlines for a source airport.
routesAirline = routes[['SourceAirportID','Airline']].groupby('SourceAirportID').nunique().drop(['SourceAirportID'], axis=1)
# Get average number of stops of outbound flights (from Source)
routesStop = routes[['SourceAirportID', 'Stops']].groupby('SourceAirportID').mean()
# Get number of routes that leave the airport
routesSource = routes[['SourceAirportID', 'DestAirportID']].groupby('SourceAirportID').count()
# Get number of routes that arrive to the airport
routesDest = routes[['SourceAirportID', 'DestAirportID']].groupby('DestAirportID').count()
# Concatenate everything and fill nan with 0 (if no departure = 0 airline, 0 stop and ratio of 0)
features = pd.concat([routesAirline, routesStop, routesSource, routesDest], axis=1, sort=True).fillna(0)
features.index = features.index.astype('int64')
# Create Ratio
features['DestSourceRatio'] = features['DestAirportID']/features['SourceAirportID']
# Add countries (as labels)
airports_countries = airports[['AirportID', 'Country']].set_index(['AirportID'])
features = features.join(airports_countries).sort_index()
features.reset_index(level=0, inplace=True)
features = features.rename(columns={'index':'AirportID'})
features.reset_index(level=0, inplace=True)
features = features.rename(columns={'index':'node_idx'})
features.head()
```
For now our features are:
* Airline: the number of unique airlines leaving an airport.
* Stops: the average number of stops of an airport's outgoing flights.
* DestAirportID: the number of flights incoming to the airport.
* SourceAirportID: the number of flights outgoing from the airport.
* DestSourceRatio: the ratio of incoming flights over outgoing flights (can be infinite).
The only label we have for now is the country.
### B. Create the adjacency matrix of your network.
Remember that there are edges connecting the attributed nodes that you organized in the dataframe above. The connectivity of the network is captured by the adjacency matrix $W$. If $N$ is the number of nodes, the adjacency matrix is an $N \times N$ matrix where the value of $W(i,j)$ is the weight of the edge connecting node $i$ to node $j$.
There are two possible scenarios for your adjacency matrix construction, as you already learned in the tutorial by Benjamin:
1) The edges are given to you explicitly. In this case you should simply load the file containing the edge information and parse it in order to create your adjacency matrix. See how to do that in the [graph from edge list]() demo.
2) The edges are not given to you. In that case you will have to create a feature graph. In order to do that you will have to choose a distance that will quantify how similar two nodes are based on the values in their corresponding feature vectors. In the [graph from features]() demo Benjamin showed you how to build feature graphs when using Euclidean distances between feature vectors. Be curious and explore other distances as well! For instance, in the case of high-dimensional feature vectors, you might want to consider using the cosine distance. Once you compute the distances between your nodes you will have a fully connected network. Do not forget to sparsify by keeping the most important edges in your network.
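As a minimal sketch of scenario 2 (not needed for this dataset, where the edges are explicit): build a cosine-distance feature graph and sparsify it by keeping each node's k nearest neighbours. The random features below are placeholders.

```python
import numpy as np

def knn_feature_graph(features, k=3):
    # Normalize rows so that 1 - X @ X.T gives pairwise cosine distances.
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    dist = 1 - X @ X.T
    np.fill_diagonal(dist, np.inf)              # exclude self-loops
    n = len(features)
    adjacency = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:       # keep only the k closest neighbours
            adjacency[i, j] = adjacency[j, i] = 1   # symmetrize as we go
    return adjacency

adj = knn_feature_graph(np.random.default_rng(0).normal(size=(10, 5)))
```

Keeping the union of each node's k nearest neighbours yields a symmetric, sparse, unweighted graph; weighted variants would store a similarity such as `1 - dist[i, j]` instead of 1.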
Follow the appropriate steps for the construction of the adjacency matrix of your network and provide it in the Numpy array ``adjacency`` below:
```
edges = routes[['SourceAirportID', 'DestAirportID']]
edges = edges.astype('int64')
airportID2idx = features[['node_idx', 'AirportID']]
airportID2idx = airportID2idx.set_index('AirportID')
edges = edges.join(airportID2idx, on='SourceAirportID')
edges = edges.join(airportID2idx, on='DestAirportID', rsuffix='_dest')
edges = edges.drop(columns=['SourceAirportID','DestAirportID'])
n_nodes = len(features)
adjacency = np.zeros((n_nodes, n_nodes), dtype=int)
# The weights of the adjacency matrix are the sum of the outgoing flights
for idx, row in edges.iterrows():
i, j = int(row.node_idx), int(row.node_idx_dest)
adjacency[i, j] += 1
print("The number of nodes in the network is {}".format(n_nodes))
adjacency
```
## Part 2
Execute the cell below to plot the (weighted) adjacency matrix of your network.
```
plt.spy(adjacency, markersize=1)
plt.title('adjacency matrix')
```
### Question 1
What is the maximum number of links $L_{max}$ in a network with $N$ nodes (where $N$ is the number of nodes in your network)? How many links $L$ are there in your collected network? Comment on the sparsity of your network.
```
# Our graph is directed therefore :
Lmax = n_nodes * (n_nodes - 1)
print("Maximum number of links in our network : {}".format(Lmax))
links = np.count_nonzero(adjacency)
print("Number of links in our network : {}".format(links))
sparsity = links * 100 / Lmax
print("The sparsity of our network is : {:.4f}%".format(sparsity))
```
The maximum number of links in a network with $N$ nodes is $L_{maxUndirected}=\binom{N}{2}=\frac{N(N-1)}{2}$ for an undirected graph and $L_{maxDirected}=N(N-1)$ for a directed one.
We can see that our network is very sparse, which makes sense for a flight-routes dataset: small airports have very few connections compared to the big hubs.
### Question 2
Is your graph directed or undirected? If it is directed, convert it to an undirected graph by symmetrizing the adjacency matrix.
Our graph is directed. (Some airports don't have the same number of incoming and outgoing flights.)
```
# We symmetrize our network by summing our weights (the number of incoming and outgoing flights)
# since it allows us to keep the maximum amount of information.
adjacency_sym = adjacency + adjacency.T
```
### Question 3
In the cell below save the features dataframe and the **symmetrized** adjacency matrix. You can use the Pandas ``to_csv`` to save the ``features`` and Numpy's ``save`` to save the ``adjacency``. We will reuse those in the following milestones.
```
features.to_csv('features.csv')
np.save('adjacency_sym', adjacency_sym)
```
### Question 4
Are the edges of your graph weighted?
Yes, the weights of the symmetrized adjacency matrix are the total number of outgoing and incoming flights from each node.
### Question 5
What is the degree distribution of your network?
```
# The (unweighted) degree of a node is the number of nonzero entries in its row of the symmetrized adjacency matrix.
degree = [(line > 0).sum() for line in adjacency_sym]
assert len(degree) == n_nodes
```
Execute the cell below to see the histogram of the degree distribution.
```
weights = np.ones_like(degree) / float(n_nodes)
plt.hist(degree, weights=weights);
```
What is the average degree?
```
avg_deg = np.mean(degree)
print("The average degree is {:.4f}".format(avg_deg))
```
### Question 6
Comment on the degree distribution of your network.
```
test = np.unique(degree,return_counts=True)
ax = plt.gca()
ax.scatter(test[0],test[1])
ax.set_yscale("log")
ax.set_xscale("log")
```
Our degree distribution follows a power law distribution, hence our network seems to be scale free as we saw in the lecture.
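A rough way to check the power-law claim is to fit a line in log-log space. This is a crude diagnostic rather than a rigorous estimator (proper fitting uses maximum likelihood, e.g. Clauset, Shalizi & Newman); synthetic Pareto-distributed degrees stand in here for the actual `degree` list.

```python
import numpy as np

rng = np.random.default_rng(1)
degrees = np.round(rng.pareto(2.0, size=5000) + 1).astype(int)  # heavy-tailed stand-in degrees

vals, counts = np.unique(degrees, return_counts=True)
mask = counts > 1                       # drop the noisy single-occurrence tail
slope, intercept = np.polyfit(np.log(vals[mask]), np.log(counts[mask]), 1)
alpha = -slope                          # exponent of the fitted power law
print(alpha)
```

For a scale-free network the points fall roughly on a line in the log-log plot above, and `alpha` estimates the degree-distribution exponent.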
### Question 7
Write a function that takes as input the adjacency matrix of a graph and determines whether the graph is connected or not.
```
def BFS(adjacency, labels, state):
"""
return a component with an array of updated labels (the visited nodes during the BFS)
for a given adjacency matrix
:param adjacency: The adjacency matrix where to find the component
:param labels: An array of labels (0 : the node is not yet explored, 1: it is explored)
:param state: The # of the component we are looking for
:return: updated labels array and the component found
"""
queue = []
# current node is the first one with a label to 0
current_node = np.argwhere(labels == 0).flatten()[0]
labels[current_node] = state
queue.append(current_node)
current_component = []
current_component.append(current_node)
while len(queue) != 0:
# all the weight of the other nodes for a given one
all_nodes = adjacency[current_node]  # use the adjacency passed in, not the global adjacency_sym
# all the nodes reachable from the current one that are not yet labeled
neighbours = np.argwhere((all_nodes > 0) & (labels == 0)).flatten()
# add those nodes to the queue and to the component
queue += list(neighbours)
current_component += list(neighbours)
for i in neighbours:
# we update the labels array
labels[i] = state
if len(queue) > 1:
# and we update the queue and the current node
queue = queue[1:]
current_node = queue[0]
else :
queue = []
return np.array(labels), current_component
def connected_graph(adjacency):
"""Determines whether a graph is connected.
Parameters
----------
adjacency: numpy array
The (weighted) adjacency matrix of a graph.
Returns
-------
bool
True if the graph is connected, False otherwise.
"""
#Run the BFS, find a component and see if all the nodes are in it. If so, the graph is connected.
first_labels = np.zeros(n_nodes, dtype=int)
labels, component = BFS(adjacency, first_labels, 1)
return labels.sum() == n_nodes
```
Is your graph connected? Run the ``connected_graph`` function to determine your answer.
```
connected = connected_graph(adjacency)
print("Is our graph connected ? {}".format(connected))
```
### Question 8
Write a function that extracts the connected components of a graph.
```
def find_components(adjacency):
"""Find the connected components of a graph.
Parameters
----------
adjacency: numpy array
The (weighted) adjacency matrix of a graph.
Returns
-------
list of numpy arrays
A list of adjacency matrices, one per connected component.
"""
#Find the first component
components = []
first_labels = np.zeros(n_nodes, dtype=int)
labels, component = BFS(adjacency, first_labels, 1)
components.append(component)
current_state = 2
#Redo BFS while we haven't found all the components
while (labels > 0).sum() != n_nodes:
labels, component = BFS(adjacency, labels, current_state)
components.append(component)
current_state += 1
# np.array over unequal-length component lists is deprecated; return a list of index arrays instead.
return [np.array(c) for c in components]
```
How many connected components is your network composed of? What is the size of the largest connected component? Run the ``find_components`` function to determine your answer.
```
components = find_components(adjacency_sym)
print("Number of connected components : {}".format(len(components)))
size_compo = [len(compo) for compo in components]
print("Size of largest connected component : {}".format(np.max(size_compo)))
```
### Question 9
Write a function that takes as input the adjacency matrix and a node (`source`) and returns the length of the shortest path between that node and all nodes in the graph using Dijkstra's algorithm. **For the purposes of this assignment we are interested in the hop distance between nodes, not in the sum of weights.**
Hint: You might want to mask the adjacency matrix in the function ``compute_shortest_path_lengths`` in order to make sure you obtain a binary adjacency matrix.
```
#Find all the neighbours of a given node
def neighbours(node, adjacency_sym):
n = adjacency_sym[node]
neighbours = np.argwhere(n != 0).flatten()
return neighbours
def compute_shortest_path_lengths(adjacency, source):
"""Compute the shortest path length between a source node and all nodes.
Parameters
----------
adjacency: numpy array
The (weighted) adjacency matrix of a graph.
source: int
The source node. A number between 0 and n_nodes-1.
Returns
-------
list of ints
The length of the shortest path from source to all nodes. Returned list should be of length n_nodes.
"""
adjacency = (adjacency != 0).astype(int)  # binary (hop) adjacency on a copy, so the caller's matrix is untouched
shortest_path_lengths = np.ones(adjacency.shape[0]) * np.inf
shortest_path_lengths[source] = 0
visited = [source]
queue = [source]
while queue:
node = queue[0]
queue = queue[1:]
neighbors = neighbours(node, adjacency)
neighbors = np.setdiff1d(neighbors,visited).tolist()
neighbors = np.setdiff1d(neighbors,queue).tolist()
queue += neighbors
visited += neighbors
shortest_path_lengths[neighbors] = shortest_path_lengths[node] + 1
return shortest_path_lengths
```
### Question 10
The diameter of the graph is the length of the longest shortest path between any pair of nodes. Use the above developed function to compute the diameter of the graph (or the diameter of the largest connected component of the graph if the graph is not connected). If your graph (or largest connected component) is very large, computing the diameter will take very long. In that case downsample your graph so that it has 1.000 nodes. There are many ways to reduce the size of a graph. For the purposes of this milestone you can chose to randomly select 1.000 nodes.
```
max_component = components[np.argmax(size_compo)]
adjacency_max = adjacency_sym[max_component, :]
adjacency_max = adjacency_max[:, max_component]
longest = []
a = adjacency_max[:1000,:1000]
for node in range(len(a)):
short = compute_shortest_path_lengths(a, node)
longest.append(max(short[ short < np.inf ]))
print("The diameter of the graph is {}".format(max(longest)))
```
### Question 11
Write a function that takes as input the adjacency matrix, a path length, and two nodes (`source` and `target`), and returns the number of paths of the given length between them.
```
def compute_paths(adjacency, source, target, length):
"""Compute the number of paths of a given length between a source and target node.
Parameters
----------
adjacency: numpy array
The (weighted) adjacency matrix of a graph.
source: int
The source node. A number between 0 and n_nodes-1.
target: int
The target node. A number between 0 and n_nodes-1.
length: int
The path length to be considered.
Returns
-------
int
The number of paths.
"""
adjacency = (adjacency != 0).astype(int)  # binarize on a copy: count paths by hops, without mutating the caller
paths = np.linalg.matrix_power(adjacency, length)
return paths[source, target]
```
Test your function on 5 pairs of nodes, with different lengths.
```
print(compute_paths(adjacency_sym, 0, 10, 1))
print(compute_paths(adjacency_sym, 0, 10, 2))
print(compute_paths(adjacency_sym, 0, 10, 3))
print(compute_paths(adjacency_sym, 23, 67, 2))
print(compute_paths(adjacency_sym, 15, 93, 4))
```
### Question 12
How many paths of length 3 are there in your graph? Hint: calling the `compute_paths` function on every pair of node is not an efficient way to do it.
```
# We could have used np.linalg.matrix_power(a, 3), but for performance reasons we preferred to make
# the multiplication explicit.
a = adjacency_sym.copy()
a[a != 0] = 1
a = a @ a @ a
print("Number of path of length 3: {}".format(a.sum()))
```
### Question 13
Write a function that takes as input the adjacency matrix of your graph (or of the largest connected component of your graph) and a node and returns the clustering coefficient of that node.
```
def compute_clustering_coefficient(adjacency, node):
"""Compute the clustering coefficient of a node.
Parameters
----------
adjacency: numpy array
The (weighted) adjacency matrix of a graph.
node: int
The node whose clustering coefficient will be computed. A number between 0 and n_nodes-1.
Returns
-------
float
The clustering coefficient of the node. A number between 0 and 1.
"""
adjacency = (adjacency > 0).astype(int)  # binarize on a copy so the caller's matrix is not modified
neighbors = adjacency[node]
indices = np.argwhere(neighbors > 0).flatten()
if len(indices) <2:
return 0
#Take the neighbors of the neighbors, and find which ones are linked together
neiofnei = adjacency[indices]
test = neiofnei * neighbors
count = sum(test.flatten())
#Compute the clustering coefficient for the node. Since each link is counted twice, we don't multiply the formula by 2.
clustering_coefficient = count / (len(indices)*(len(indices)-1))
return clustering_coefficient
```
### Question 14
What is the average clustering coefficient of your graph (or of the largest connected component of your graph if your graph is disconnected)? Use the function ``compute_clustering_coefficient`` to determine your answer.
```
adjacency_component = adjacency_sym[components[0]]
adjacency_component = adjacency_component.T[components[0]].T
N = len(components[0])
count = 0
for i in range(N):
count+=compute_clustering_coefficient(adjacency_component,i)
avg_coeff = count/N
print("The average coefficient of our largest connected component is : {}".format(str(avg_coeff)))
```
# Missing Recording Data hack
For multiple sessions the 'recording.dat' file that contains a tetrode's high-pass channels is missing. This file is necessary for clustering. This hack is a way to either recreate these files anew, or find a way to bypass them by recalculating the high-pass data for only the missing channels. The latter strategy would be merged into the get_session_tt_wf method of the subject info class.
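A minimal sketch of the recomputation idea. All concrete numbers here are assumptions rather than values taken from this pipeline: 32 kHz sampling, 300 Hz cutoff, 4th-order Butterworth, zero-phase filtering.

```python
import numpy as np
from scipy import signal

fs = 32000                                  # assumed sampling rate (Hz)
sos = signal.butter(4, 300, btype='highpass', fs=fs, output='sos')

rng = np.random.default_rng(0)
raw = 5.0 + rng.normal(size=fs)             # 1 s of fake raw data with a DC offset
high_passed = signal.sosfiltfilt(sos, raw)  # spike-band signal: the DC offset is removed
```

The real pipeline would run this per missing channel over the session's raw traces and either write the result out as a new 'recording.dat' or feed it directly to the waveform extraction.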
```
%matplotlib inline
from pathlib import Path
import time, traceback
from importlib import reload
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from TreeMazeAnalyses2.Analyses import subject_info as si
from TreeMazeAnalyses2.Analyses import cluster_match_functions as cmf
from TreeMazeAnalyses2.Pre_Processing import pre_process_functions as pp
from TreeMazeAnalyses2.Sorting import sort_functions as sort_funcs
si = reload(si)
subjects = ['Li', 'Ne', 'Cl', 'Al', 'C']
for subject in subjects:
subject_info = si.SubjectInfo(subject)
cd = subject_info.get_cluster_dists(overwrite=True)
matching_analysis = subject_info.get_session_match_analysis()
tt, d, sessions, n_cl, n_session_cl = matching_analysis[0]
cd = subject_info.get_cluster_dists()
matching_analyses = subject_info.get_session_match_analysis()
n_wf = 1000
dim_reduc_method = 'umap'
n_samps = 32 * 4
cluster_dists = {k: {} for k in np.arange(len(matching_analyses))}
for analysis_id, analysis in enumerate(matching_analyses):
tt, d, sessions, n_clusters, n_session_units = analysis
X = np.empty((0, n_wf, n_samps), dtype=np.float16)
for session in sessions:
cluster_ids = subject_info.session_clusters[session]['cell_IDs'][tt]
session_cell_wf = subject_info.get_session_tt_wf(session, tt, cluster_ids=cluster_ids, n_wf=n_wf)
X = np.concatenate((X, session_cell_wf), axis=0)
X[np.isnan(X)] = 0
X[np.isinf(X)] = 0
# Obtain cluster label names
cluster_labels = np.arange(n_clusters).repeat(n_wf)
# Obtain cluster labels & mapping between labels [this part can be improved]
cl_names = []
for session_num, session in enumerate(sessions):
cluster_ids = subject_info.session_clusters[session]['cell_IDs'][tt]
for cl_num, cl_id in enumerate(cluster_ids):
cl_name = f"{session}-tt{tt}_d{d}_cl{cl_id}"
cl_names.append(cl_name)
# Reduce dims
X_2d = cmf.dim_reduction(X.reshape(-1, X.shape[-1]), method=dim_reduc_method)
# compute covariance and location
clusters_loc, clusters_cov = cmf.get_clusters_moments(data=X_2d, labels=cluster_labels)
# compute distance metrics
dist_mats = cmf.get_clusters_all_dists(clusters_loc, clusters_cov, data=X_2d, labels=cluster_labels)
# create data frames with labeled cluster names
dists_mats_df = {}
for metric, dist_mat in dist_mats.items():
dists_mats_df[metric] = pd.DataFrame(dist_mat, index=cl_names, columns=cl_names)
bad_wf_idx = np.where(np.isnan(X))
x = np.arange(10, dtype=float)
x[5] = np.inf
x[3] = np.nan
x
np.nanmedian(x)
subject_info
```
```
!pip install keras==2.3.1
!pip install tensorflow==2.1.0
!pip install keras_applications==1.0.8
!pip install segmentation_models
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from matplotlib import pyplot as plt
import cv2
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.utils import to_categorical ,Sequence
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, concatenate, Conv2DTranspose, BatchNormalization, Activation, Dropout,LeakyReLU
from tensorflow.keras.optimizers import Adadelta, Nadam ,Adam
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, CSVLogger, TensorBoard
import os
from glob import glob
from pathlib import Path
import shutil
from random import sample, choice
import segmentation_models as sm
from google.colab import drive
drive.mount('/content/gdrive')
!unzip -uq "/content/gdrive/My Drive/CamVid.zip" -d "/CamVid"
print(os.listdir("/CamVid/CamVid"))
dataset_path = Path("/CamVid/CamVid/")
list(dataset_path.iterdir())
def tree(directory):
    print(f'+ {directory}')
    for path in sorted(directory.rglob('*')):
        depth = len(path.relative_to(directory).parts)
        spacer = '    ' * depth
        print(f'{spacer}+ {path.name}')
#tree(dataset_path)
train_imgs = list((dataset_path / "train").glob("*.png"))
train_labels = list((dataset_path / "train_labels").glob("*.png"))
val_imgs = list((dataset_path / "val").glob("*.png"))
val_labels = list((dataset_path / "val_labels").glob("*.png"))
test_imgs = list((dataset_path / "test").glob("*.png"))
test_labels = list((dataset_path / "test_labels").glob("*.png"))
(len(train_imgs),len(train_labels)), (len(val_imgs),len(val_labels)) , (len(test_imgs),len(test_labels))
img_size = 512
assert len(train_imgs) == len(train_labels), "Number of train images and labels mismatch"
assert len(val_imgs) == len(val_labels), "Number of val images and labels mismatch"
assert len(test_imgs) == len(test_labels), "Number of test images and labels mismatch"
train_imgs, train_labels = sorted(train_imgs), sorted(train_labels)
val_imgs, val_labels = sorted(val_imgs), sorted(val_labels)
test_imgs, test_labels = sorted(test_imgs), sorted(test_labels)
for im in train_imgs:
    assert dataset_path / "train_labels" / (im.stem + "_L.png") in train_labels, f"{im} not there in label folder"
for im in val_imgs:
    assert dataset_path / "val_labels" / (im.stem + "_L.png") in val_labels, f"{im} not there in label folder"
for im in test_imgs:
    assert dataset_path / "test_labels" / (im.stem + "_L.png") in test_labels, f"{im} not there in label folder"
def make_pair(img, label, dataset):
    pairs = []
    for im in img:
        pairs.append((im, dataset / label / (im.stem + "_L.png")))
    return pairs
train_pair = make_pair(train_imgs, "train_labels", dataset_path)
val_pair = make_pair(val_imgs, "val_labels", dataset_path)
test_pair = make_pair(test_imgs, "test_labels", dataset_path)
temp = choice(train_pair)
img = img_to_array(load_img(temp[0], target_size=(img_size,img_size)))
mask = img_to_array(load_img(temp[1], target_size = (img_size,img_size)))
plt.figure(figsize=(10,10))
plt.subplot(121)
plt.imshow(img/255)
plt.subplot(122)
plt.imshow(mask/255)
class_map_df = pd.read_csv(dataset_path / "class_dict.csv")
class_map = []
for index, item in class_map_df.iterrows():
    class_map.append(np.array([item['r'], item['g'], item['b']]))
len(class_map)
def assert_map_range(mask, class_map):
    mask = mask.astype("uint8")
    for j in range(img_size):
        for k in range(img_size):
            assert mask[j][k] in class_map, tuple(mask[j][k])

def form_2D_label(mask, class_map):
    mask = mask.astype("uint8")
    label = np.zeros(mask.shape[:2], dtype=np.uint8)
    for i, rgb in enumerate(class_map):
        label[(mask == rgb).all(axis=2)] = i
    return label
lab = form_2D_label(mask,class_map)
np.unique(lab,return_counts=True)
class DataGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, pair, class_map, batch_size=16, dim=(512,512,3), shuffle=True):
        'Initialization'
        self.dim = dim
        self.pair = pair
        self.class_map = class_map
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.pair) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Find list of IDs
        list_IDs_temp = [k for k in indexes]
        # Generate data
        X, y = self.__data_generation(list_IDs_temp)
        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.pair))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        batch_imgs = list()
        batch_labels = list()
        # Generate data
        for i in list_IDs_temp:
            # Load and normalize the image
            img = load_img(self.pair[i][0], target_size=self.dim)
            img = img_to_array(img)/255.
            batch_imgs.append(img)
            # Load the RGB mask and convert it to one-hot class labels
            label = load_img(self.pair[i][1], target_size=self.dim)
            label = img_to_array(label)
            label = form_2D_label(label, self.class_map)
            label = to_categorical(label, num_classes=32)
            batch_labels.append(label)
        return np.array(batch_imgs), np.array(batch_labels)
train_generator = DataGenerator(train_pair+test_pair,class_map,batch_size=4, dim=(img_size,img_size,3) ,shuffle=True)
train_steps = train_generator.__len__()
train_steps
dX,y = train_generator.__getitem__(1)
y.shape
val_generator = DataGenerator(val_pair, class_map, batch_size=4, dim=(img_size,img_size,3) ,shuffle=True)
val_steps = val_generator.__len__()
val_steps
def upsample_conv(filters, kernel_size, strides, padding):
return Conv2DTranspose(filters, kernel_size, strides=strides, padding=padding)
input_img = Input(shape=(512, 512, 3),name='image_input')
c1 = Conv2D(16, (3, 3), activation='relu', padding='same') (input_img)
c1 = Conv2D(16, (3, 3), activation='relu', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)
c2 = Conv2D(32, (3, 3), activation='relu', padding='same') (p1)
c2 = Conv2D(32, (3, 3), activation='relu', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)
c3 = Conv2D(64, (3, 3), activation='relu', padding='same') (p2)
c3 = Conv2D(64, (3, 3), activation='relu', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)
c4 = Conv2D(128, (3, 3), activation='relu', padding='same') (p3)
c4 = Conv2D(128, (3, 3), activation='relu', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
c5 = Conv2D(256, (3, 3), activation='relu', padding='same') (p4)
c5 = Conv2D(256, (3, 3), activation='relu', padding='same') (c5)
p5 = MaxPooling2D(pool_size=(2, 2)) (c5)
c6 = Conv2D(512, (3, 3), activation='relu', padding='same') (p5)
c6 = Conv2D(512, (3, 3), activation='relu', padding='same') (c6)
p6 = MaxPooling2D(pool_size=(2, 2)) (c6)
p6 = Dropout(rate=0.5) (p6)
c7 = Conv2D(1024, (3, 3), activation='relu', padding='same') (p6)
c7 = Conv2D(1024, (3, 3), activation='relu', padding='same') (c7)
c7 = Dropout(rate=0.5) (c7)
u8 = upsample_conv(512, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c6])
c8 = Conv2D(512, (3, 3), activation='relu', padding='same') (u8)
c8 = Conv2D(512, (3, 3), activation='relu', padding='same') (c8)
c8 = Dropout(rate=0.5) (c8)
u9 = upsample_conv(256, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c5])
c9 = Conv2D(256, (3, 3), activation='relu', padding='same') (u9)
c9 = Conv2D(256, (3, 3), activation='relu', padding='same') (c9)
c9 = Dropout(rate=0.5) (c9)
u10 = upsample_conv(128, (3, 3), strides=(2, 2), padding='same') (c9)
u10 = concatenate([u10, c4])
c10 = Conv2D(128, (3, 3), activation='relu', padding='same') (u10)
c10 = Conv2D(128, (3, 3), activation='relu', padding='same') (c10)
u11 = upsample_conv(64, (2, 2), strides=(2, 2), padding='same') (c10)
u11 = concatenate([u11, c3])
c11 = Conv2D(64, (3, 3), activation='relu', padding='same') (u11)
c11 = Conv2D(64, (3, 3), activation='relu', padding='same') (c11)
u12 = upsample_conv(32, (2, 2), strides=(2, 2), padding='same') (c11)
u12 = concatenate([u12, c2])
c12 = Conv2D(32, (3, 3), activation='relu', padding='same') (u12)
c12 = Conv2D(32, (3, 3), activation='relu', padding='same') (c12)
u13 = upsample_conv(16, (2, 2), strides=(2, 2), padding='same') (c12)
u13 = concatenate([u13, c1], axis=3)
c13 = Conv2D(16, (3, 3), activation='relu', padding='same') (u13)
c13 = Conv2D(16, (3, 3), activation='relu', padding='same') (c13)
d = Conv2D(32, (1, 1), activation='softmax') (c13)
iou = sm.metrics.IOUScore(threshold=0.5)
seg_model = Model(inputs=[input_img], outputs=[d])
seg_model.summary()
seg_model.compile(optimizer='adam', loss='categorical_crossentropy' ,metrics=['accuracy',iou])
mc = ModelCheckpoint(mode='max', filepath='/content/gdrive/My Drive/Weights/Extended_Unet_Relu_Dropout.h5', monitor='val_accuracy', save_best_only=True, save_weights_only=True, verbose=1)
es = EarlyStopping(mode='max', monitor='val_accuracy', patience=10, verbose=1)
tb = TensorBoard(log_dir="logs/", histogram_freq=0, write_graph=True, write_images=False)
rl = ReduceLROnPlateau(monitor='val_accuracy',factor=0.1,patience=10,verbose=1,mode="max",min_lr=0.0001)
cv = CSVLogger("logs/log.csv" , append=True , separator=',')
results = seg_model.fit(train_generator , steps_per_epoch=train_steps ,epochs=150,
validation_data=val_generator,validation_steps=val_steps,callbacks=[mc,es,tb,rl,cv])
from matplotlib import pyplot as plt
plt.figure(figsize=(30, 5))
plt.subplot(121)
plt.plot(results.history['iou_score'])
plt.plot(results.history['val_iou_score'])
plt.title('Model iou_score')
plt.ylabel('iou_score')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
# Plot training & validation loss values
plt.subplot(122)
plt.plot(results.history['loss'])
plt.plot(results.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
img_mask = choice(val_pair)
img= img_to_array(load_img(img_mask[0] , target_size= (img_size,img_size)))
gt_img = img_to_array(load_img(img_mask[1] , target_size= (img_size,img_size)))
def make_prediction(model, img_path, shape):
    img = img_to_array(load_img(img_path, target_size=shape))/255.
    img = np.expand_dims(img, axis=0)
    labels = model.predict(img)
    labels = np.argmax(labels[0], axis=2)
    return labels
pred_label = make_prediction(seg_model, img_mask[0], (img_size,img_size,3))
pred_label.shape
def form_colormap(prediction, mapping):
    color_label = mapping[prediction]
    color_label = color_label.astype(np.uint8)
    return color_label
pred_colored = form_colormap(pred_label,np.array(class_map))
plt.figure(figsize=(15,15))
plt.subplot(131);plt.title('Original Image')
plt.imshow(img/255.)
plt.subplot(132);plt.title('True labels')
plt.imshow(gt_img/255.)
plt.subplot(133)
plt.imshow(pred_colored/255.);plt.title('predicted labels')
```
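Beyond the `IOUScore` metric used during training, it can be useful to compute per-class IoU directly from two integer label maps (such as `pred_label` above and a ground-truth map passed through `form_2D_label`). A minimal, self-contained sketch with a toy example:

```python
import numpy as np

def per_class_iou(y_true, y_pred, n_classes):
    """Per-class intersection-over-union between two integer label maps."""
    ious = []
    for c in range(n_classes):
        t = (y_true == c)
        p = (y_pred == c)
        union = np.logical_or(t, p).sum()
        if union == 0:  # class absent from both maps: undefined, mark as NaN
            ious.append(np.nan)
            continue
        inter = np.logical_and(t, p).sum()
        ious.append(inter / union)
    return np.array(ious)

# Toy 2x2 label maps with two classes.
y_true = np.array([[0, 0], [1, 1]])
y_pred = np.array([[0, 1], [1, 1]])
ious = per_class_iou(y_true, y_pred, n_classes=2)
print(ious)              # class 0: 1/2, class 1: 2/3
print(np.nanmean(ious))  # mean IoU over the classes present
```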
# Class 04: Development
### Analyzing the Coronavirus
By simply reading the database with `data1_POSITIVOS = pd.read_csv("positivos_covid.csv")`, we were momentarily skipping a preliminary step that can be automated. It can also be done "by hand", but that could be very tedious because of the cleanup of names containing strange characters.
Keep in mind also that we have used the latin1 encoding instead of utf-8, and that we have set "," as the delimiter. Whether the delimiter is a comma or a semicolon depends on how the source file was produced, and therefore on the language of the original Excel file or the language of your computer.
There are two important sources of information: MINSA and the SINADEF Open Data Portal. To obtain the data:
We always want the data in the most granular form possible; in this case, that is the district level.
```
import pandas as pd
import os
os.chdir(r'C:\Users\Diego\Desktop\Topicos de Eco Mate\Clases Parte 1°\Clase 06')
data_FALL=pd.read_csv('fallecidos_covid.csv', encoding= "latin1", delimiter=';')
# Note: databases usually fluctuate between latin1 and UTF-8. We can try each to see which works.
data_FALL.head(2)
data_POS=pd.read_csv('positivos_covid.csv', encoding='latin1', delimiter=';')
data_POS=pd.read_csv('positivos_covid.csv', encoding='UTF-8', delimiter=';')
data_POS.head(2)
data_FALL_SINA=pd.read_csv('fallecidos_sinadef.csv',encoding='latin1',delimiter=";")
data_FALL_SINA.head(2)
data_FALL_SINA=pd.read_csv('fallecidos_sinadef.csv',encoding='latin1',delimiter=";", skiprows=2)
data_FALL_SINA.head(2)
data_FALL_SINA=pd.read_csv('fallecidos_sinadef.csv',encoding='latin1',delimiter=";", skiprows=2).iloc[:,range(0,31)]
data_FALL_SINA.head(2)
# we need to know whether it has a district column
data_FALL_SINA.columns
```
### Checking for duplicates
#### Among the positives:
To be able to join the positives and deceased tables, we must check whether there are duplicates in the UUID key.
```
dups_IDPOS= data_POS.pivot_table(index=['UUID'], aggfunc='size')
(dups_IDPOS.min(),dups_IDPOS.max())
```
#### Among the deceased:
```
dups_IDFALL= data_FALL.pivot_table(index=['UUID'], aggfunc='size')
(dups_IDFALL.min(),dups_IDFALL.max())
```
### Joining tables
Having verified that there are no duplicates, we can perform a join, using the positives data as the master table and attaching all the UUIDs of the deceased, together with their attributes, to the union of these two tables. We must check that the joined table has the same size as the master table, i.e. 1533121 rows.
It is left as an exercise for the student to check whether there are inconsistencies in the tables, for example:
* Over what period are the deceased recorded?
* Over what period are the positives recorded?
* Others
```
data_POS.shape, data_FALL.shape
```
Just so we do not lose the original tables, we will create a new one with the contents of the two initial tables, but with a different name.
Additionally, in order to later verify that all the records of the deceased table (the smaller table, with only about 50 thousand rows) were matched, we will create a column filled with 1's. That way, once the join is done, we can sum the 1's in the new table. Recalling that the deceased table has 62126 rows, the sum in the new table should give us this same number.
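The "column of 1's" bookkeeping can also be delegated to pandas itself: `merge(..., indicator=True)` adds a `_merge` column recording, for every row, whether the key was found in one or both tables. A small sketch with toy data (the column names here are illustrative, not the real ones):

```python
import pandas as pd

# Toy stand-ins for data_POS and data_FALL, keyed on UUID.
positives = pd.DataFrame({"UUID": [1, 2, 3, 4], "AGE": [30, 41, 25, 60]})
deceased = pd.DataFrame({"UUID": [2, 4], "DEATH_DATE": ["2020-05-01", "2020-06-10"]})

# how='left' keeps every positive; indicator=True adds a '_merge' column
# with values 'both' or 'left_only'.
merged = pd.merge(positives, deceased, how="left", on="UUID", indicator=True)

# The number of rows matched in both tables should equal the size of the
# smaller (deceased) table, mirroring the sum-of-1's check.
n_matched = (merged["_merge"] == "both").sum()
print(n_matched, len(deceased))  # → 2 2
```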
```
data_FALL['INDICADOR_EXISTENCIA']=1
data_POS['INDICADOR_EXISTENCIA2']=1
data_FALL.head(2)
import numpy as np
import matplotlib.pyplot as plt
import scipy
x=data_FALL.groupby(['FECHA_FALLECIMIENTO'])['INDICADOR_EXISTENCIA'].sum()
x
FALLECIDO_DIA=pd.DataFrame({'DIA':x.index,'FALL_DIA':x.values})
FALLECIDO_DIA.head()
FALLECIDO_DIA.max()
FALLECIDO_DIA.FALL_DIA.max()
plt.figure(figsize=(20,8))
plt.xticks(rotation=90)
plt.plot(FALLECIDO_DIA.index, FALLECIDO_DIA.FALL_DIA,marker='o', ms=8, linestyle='None',alpha=0.5)
rolling = FALLECIDO_DIA.FALL_DIA.rolling(window=7)
rolling2= FALLECIDO_DIA.FALL_DIA.rolling(window=14)
rolling_mean= rolling.mean()
rolling_mean2= rolling2.mean()
plt.figure(figsize=(20,8))
plt.xticks(rotation=90)
FALLECIDO_DIA.FALL_DIA.plot(marker='o',ms=8,linestyle='None',alpha=0.5)
rolling_mean.plot(color='red')
rolling_mean2.plot(color='purple')
plt.show()
```
### ANALYZING THE POSITIVES CURVE
```
x1=data_POS.groupby(['FECHA_RESULTADO'])['INDICADOR_EXISTENCIA2'].sum()
POSITIVO_DIA=pd.DataFrame({'DIA':x1.index, 'POS_DIA':x1.values})
POSITIVO_DIA
rolling3 = POSITIVO_DIA.POS_DIA.rolling(window=7)
rolling4= POSITIVO_DIA.POS_DIA.rolling(window=14)
rolling_mean3= rolling3.mean()
rolling_mean4= rolling4.mean()
plt.figure(figsize=(20,8))
plt.xticks(rotation=90)
POSITIVO_DIA.POS_DIA.plot(marker='o',ms=8,linestyle='None',alpha=0.5)
rolling_mean3.plot(color='red')
rolling_mean4.plot(color='purple')
plt.show()
```
### Can both databases be merged?
```
data_FALL['INDICADOR_EXISTENCIA1']=1
data_POS['INDICADOR_EXISTENCIA2']=1
data_POS.shape, data_FALL.shape
dataCruce1= pd.merge(data_POS, data_FALL, how='left' , left_on=['UUID'], right_on = ['UUID'])
dataCruce1.head() # CHECK THE DISTRICT INCONSISTENCY
dataCruce1.INDICADOR_EXISTENCIA.sum(), dataCruce1.shape[0]
data_POS.shape, data_FALL.shape
dataCruce1.columns
dataCruce2=dataCruce1[['UUID', 'DEPARTAMENTO_x', 'PROVINCIA_x', 'DISTRITO_x',
'METODODX', 'EDAD', 'SEXO_x', 'FECHA_RESULTADO',
'INDICADOR_EXISTENCIA2', 'FECHA_CORTE_y', 'FECHA_FALLECIMIENTO',
'EDAD_DECLARADA', 'SEXO_y', 'FECHA_NAC', 'DEPARTAMENTO_y',
'PROVINCIA_y', 'DISTRITO_y', 'INDICADOR_EXISTENCIA']]
dataCruce2.head()
sum(dataCruce2['DEPARTAMENTO_x']==dataCruce2['DEPARTAMENTO_y'])
```
#### NOTE 1:
There should be approximately the same number of deceased people.
```
sum(dataCruce2['PROVINCIA_x']==dataCruce2['PROVINCIA_y'])
sum(dataCruce2['DISTRITO_x']==dataCruce2['DISTRITO_y'])
```
#### Apparently not
## ANALYZING THE SINADEF DATA
```
data_FALL_SINA.columns
data_FALL_SINA.head(2)
data_FALL_SINA.shape
data_FALL_SINA.iloc[:,range(19,29)].head(2)
data_FALL_SINA['DEBIDO A (CAUSA A)'].value_counts().nlargest(30)
import re
data_FALL_SINA['INDICADOR_EXISTENCIA3']=1
data_FALL_SINA['DEBIDO A (CAUSA A)'].str.contains("COV\w+").value_counts()
data_FALL_SINA['DEBIDO A (CAUSA A)'].str.findall("COV\w+")
data_FALL_SINA.loc[data_FALL_SINA['DEBIDO A (CAUSA A)'].str.contains("COV\w+"), 'COVID_POSITIVO']= 'COVID19'
data_FALL_SINA['COVID_POSITIVO'].value_counts()
data_FALL_SINA.columns
data_FALL_SINA2=data_FALL_SINA[['Nº', 'TIPO SEGURO', 'SEXO', 'EDAD', 'TIEMPO EDAD', 'ESTADO CIVIL',
'NIVEL DE INSTRUCCIÓN', 'COD# UBIGEO DOMICILIO', 'PAIS DOMICILIO',
'DEPARTAMENTO DOMICILIO', 'PROVINCIA DOMICILIO', 'DISTRITO DOMICILIO',
'FECHA', 'AÑO', 'MES', 'TIPO LUGAR', 'INSTITUCION', 'MUERTE VIOLENTA',
'NECROPSIA', 'DEBIDO A (CAUSA A)', 'CAUSA A (CIE-X)',
'DEBIDO A (CAUSA B)', 'CAUSA B (CIE-X)', 'DEBIDO A (CAUSA C)',
'CAUSA C (CIE-X)', 'DEBIDO A (CAUSA D)', 'CAUSA D (CIE-X)',
'DEBIDO A (CAUSA E)', 'CAUSA E (CIE-X)', 'DEBIDO A (CAUSA F)',
'CAUSA F (CIE-X)', 'INDICADOR_EXISTENCIA3', 'COVID_POSITIVO']]
data_FALL_SINA2[data_FALL_SINA2.COVID_POSITIVO.notnull()].iloc[:,range(18,25)]
data_FALL_SINA2[['FECHA','AÑO','MES']].head()
data_FALL_SINA2.shape
data_FALL_SINA2.FECHA.min(), data_FALL_SINA2.FECHA.max()
xyz=data_FALL_SINA2.groupby(['FECHA'])['INDICADOR_EXISTENCIA3'].sum()
xyz
FALL_SINA_DIA=pd.DataFrame({'DIA':xyz.index, 'FALL_SINAD_DIA':xyz.values})
FALL_SINA_DIA
rolling5 = FALL_SINA_DIA.FALL_SINAD_DIA.rolling(window=7)
rolling6= FALL_SINA_DIA.FALL_SINAD_DIA.rolling(window=14)
rolling_mean5= rolling5.mean()
rolling_mean6= rolling6.mean()
plt.figure(figsize=(20,8))
plt.xticks(rotation=90)
FALL_SINA_DIA.FALL_SINAD_DIA.plot(marker='o',ms=8,linestyle='None',alpha=0.5)
rolling_mean5.plot(color='red')
rolling_mean6.plot(color='purple')
plt.show()
yyy=data_FALL_SINA2.groupby(['AÑO','MES'])['INDICADOR_EXISTENCIA3'].sum()
yyy.index
anio_mes=pd.DataFrame(yyy.index.get_level_values(0).astype(str)+'_'+yyy.index.get_level_values(1).astype(str))
anio_mes.head()
FALL_SINA_MES=pd.DataFrame({'ANIO_MES':anio_mes[0], 'FALL_SINA_MES':yyy.values})
FALL_SINA_MES.head()
rolling7 = FALL_SINA_MES.FALL_SINA_MES.rolling(window=3)
rolling8= FALL_SINA_MES.FALL_SINA_MES.rolling(window=5)
rolling_mean7= rolling7.mean()
rolling_mean8= rolling8.mean()
plt.figure(figsize=(20,8))
plt.xticks(rotation=90)
FALL_SINA_MES.FALL_SINA_MES.plot(marker='o',ms=20,linestyle='None',alpha=0.5)
rolling_mean7.plot(color='red',linestyle='--',alpha=0.5)
rolling_mean8.plot(color='purple',linestyle='--',alpha=0.5)
plt.show()
```
#### Now let's look at the number of deceased per year, using our first function
```
def funcion(x):
    return 5*x
abc=data_FALL_SINA2.groupby(['AÑO'])['INDICADOR_EXISTENCIA3'].sum()
FALL_SINA_ANIO=pd.DataFrame({'ANIO':abc.index, 'FALL_SINA_ANIO':abc.values})
FALL_SINA_ANIO.head()
plt.rcParams["figure.figsize"] =(20,10)
FALL_SINA_ANIO.plot(x="ANIO",y=["FALL_SINA_ANIO"],kind="bar")
plt.title("Fallecidos en Perú", fontsize=24)
plt.xticks(fontsize=20,rotation=0)
plt.yticks(fontsize=20)
x = FALL_SINA_ANIO["ANIO"]
y = FALL_SINA_ANIO["FALL_SINA_ANIO"]
fx =1
fy =1
for i, v in enumerate(x):
    plt.text(i*fx, y[i]*fy, str(int(y[i])), color='black', backgroundcolor="skyblue", fontsize=20, rotation=0)
fig, ax = plt.subplots(figsize=(20,20))
# ax=FALL_SINA_ANIO.FALL_SINA_ANIO.plot(kind='bar')
# fig = plt.figure()
plt.ylim([0,230000])
plt.bar(FALL_SINA_ANIO.ANIO,FALL_SINA_ANIO.FALL_SINA_ANIO, color='blue', width=0.5, alpha=0.5)
plt.title("Fallecidos en Perú", fontsize=24)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
# function to put labels above the bars
def add_value_labels(ax, spacing=5):
    # For each bar, place a label
    for rect in ax.patches:
        # Get the label's value and position
        y_value = rect.get_height()
        x_value = rect.get_x() + rect.get_width()/2
        # Space between the label and the bar
        space = spacing
        # Vertical alignment
        va = 'bottom'
        # Format with 0 decimals; the 0 can be changed to any number, e.g. 1 for 1 decimal.
        # Note that the period marks the decimal and the comma is the thousands separator.
        label = "{:,.0f}".format(y_value)
        # Add the annotation
        ax.annotate(
            label,
            (x_value, y_value),
            xytext=(0, space),
            size=20,  # Font size
            textcoords="offset points",  # interpret xytext as an offset in points
            ha='center',
            va=va)  # vertical alignment
# calling the function we created
add_value_labels(ax)
plt.show()
ax.patches
```
### HOMEWORK:
The idea of joining the Positives and Deceased databases was to follow each patient's evolution, which is only possible if the records refer to the same person. Even when all the fields match, we observe that the records could belong to different people.
* From the results referenced in NOTE 1, we maintain that there should be approximately the same number of deceased people within a given department; however, we obtain a match below 25% of the total joined records. The reasons can be multiple, among them differences in character formatting. For example, two values fail to compare as equal because Python compares the literal text, so that A != A (recall that != means "not equal" in a Python logical condition) when the word being evaluated is, say, AZÁNGARO written with and without its accent.
* How many cases of this type do you find in the database?
* For the cases that are not of this type, what explanation would you give for the difference?
* What other validation methods could we use to know whether or not it is the same person?
* Is there a significant increase in deaths caused by INSUFICIENCIA RESPIRATORIA AGUDA (acute respiratory failure) in 2020 and 2021, compared with the other years? Show this graphically and in a frequency table (value_counts).
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('Á','A')
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('É','E')
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('Í','I')
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('Ó','O')
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('Ú','U')
* dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].str.replace('Ñ','N')
* dataCruce1['PROVINCIA_x']= dataCruce1['PROVINCIA_x'].str.replace('Á','A')
* dataCruce1['PROVINCIA_x'] = dataCruce1['PROVINCIA_x'].str.replace('É','E')
* dataCruce1['PROVINCIA_x'] = dataCruce1['PROVINCIA_x'].str.replace('Í','I')
* dataCruce1['PROVINCIA_x'] = dataCruce1['PROVINCIA_x'].str.replace('Ó','O')
* dataCruce1['PROVINCIA_x'] = dataCruce1['PROVINCIA_x'].str.replace('Ú','U')
* dataCruce1['PROVINCIA_x'] = dataCruce1['PROVINCIA_x'].str.replace('Ñ','N')
* dataCruce1['DISTRITO_x']= dataCruce1['DISTRITO_x'].str.replace('Á','A')
* dataCruce1['DISTRITO_x'] = dataCruce1['DISTRITO_x'].str.replace('É','E')
* dataCruce1['DISTRITO_x'] = dataCruce1['DISTRITO_x'].str.replace('Í','I')
* dataCruce1['DISTRITO_x'] = dataCruce1['DISTRITO_x'].str.replace('Ó','O')
* dataCruce1['DISTRITO_x'] = dataCruce1['DISTRITO_x'].str.replace('Ú','U')
* dataCruce1['DISTRITO_x'] = dataCruce1['DISTRITO_x'].str.replace('Ñ','N')
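The chain of `str.replace` calls above can be collapsed into a single accent-stripping helper using Python's standard `unicodedata` module: decompose each character (NFD) and drop the combining marks. Note that this also maps Ñ to N, matching the replacements above:

```python
import unicodedata

def strip_accents(text):
    """Remove accents (and the tilde on Ñ) via NFD decomposition."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_accents("AZÁNGARO"))  # → AZANGARO

# Applied to a column, it would replace the whole chain of str.replace calls:
# dataCruce1['DEPARTAMENTO_x'] = dataCruce1['DEPARTAMENTO_x'].map(strip_accents)
```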
# Start-to-Finish Example: Evolving Maxwell's Equations with Toroidal Dipole Field Initial Data, in Flat Spacetime and Curvilinear Coordinates
## Following the work of [Knapp, Walker & Baumgarte (2002)](https://arxiv.org/abs/gr-qc/0201051), we numerically implement the second version of Maxwell's equations - System II (BSSN-like) - in curvilinear coordinates.
## Author: Terrence Pierre Jacques and Zachariah Etienne
### Formatting improvements courtesy Brandon Clark
**Notebook Status:** <font color = green><b> Validated </b></font>
**Validation Notes:** This module has been validated to exhibit convergence to the exact solution for the electric field $E^i$ and vector potential $A^i$ at the expected order, *after a short numerical evolution of the initial data* (see [plots at bottom](#convergence)).
### NRPy+ Source Code for this module:
* [Maxwell/InitialData.py](../edit/Maxwell/InitialData.py); [\[**tutorial**\]](Tutorial-VacuumMaxwell_InitialData.ipynb): Purely toroidal dipole field initial data; sets all electromagnetic variables in the Cartesian basis
* [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear.py](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Cartesian.py); [\[**tutorial**\]](Tutorial-Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.ipynb): Generates the right-hand sides for Maxwell's equations in curvilinear coordinates
## Introduction:
Here we use NRPy+ to generate the C source code necessary to set up initial data for a purely toroidal dipole field, as defined in [Knapp, Walker & Baumgarte (2002)](https://arxiv.org/abs/gr-qc/0201051). We then evolve the RHSs of Maxwell's equations using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on an [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4 is chosen below, but multiple options exist).
The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:
1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration
* [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).
1. Set gridfunction values to initial data
 * [**NRPy+ tutorial on purely toroidal dipole field initial data**](Tutorial-VacuumMaxwell_InitialData.ipynb)
1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm:
1. At the start of each iteration in time, output the divergence constraint violation
* [**NRPy+ tutorial on Maxwell's equations constraints**](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb)
1. At each RK time substep, do the following:
1. Evaluate RHS expressions
* [**NRPy+ tutorial on Maxwell's equations right-hand sides**](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb)
1. Apply Sommerfeld boundary conditions in curvilinear coordinates
* [**NRPy+ tutorial on setting up Sommerfeld boundary conditions**](Tutorial-SommerfeldBoundaryCondition.ipynb)
1. Repeat above steps at two numerical resolutions to confirm convergence to zero.
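The RK substeps in the outline above follow the classical RK4 update. A minimal, self-contained sketch on a toy ODE (not the NRPy+-generated C code, which applies the same update to every gridfunction on the numerical grid):

```python
import math

def rk4_step(rhs, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = rhs(t, y)."""
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate dy/dt = -y from y(0) = 1 to t = 1; the exact answer is e^{-1}.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t_, y_: -y_, y, t, dt)
    t += dt
print(abs(y - math.exp(-1.0)) < 1e-8)  # → True
```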
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric
1. [Step 1.a](#cfl) Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep
1. [Step 2](#mw): Generate symbolic expressions and output C code for evolving Maxwell's equations
1. [Step 2.a](#mwid): Generate symbolic expressions for toroidal dipole field initial data
1. [Step 2.b](#mwevol): Generate symbolic expressions for evolution equations
1. [Step 2.c](#mwcon): Generate symbolic expressions for constraint equations
1. [Step 2.d](#mwcart): Generate symbolic expressions for converting $A^i$ and $E^i$ to the Cartesian basis
1. [Step 2.e](#mwccode): Output C codes for initial data and evolution equations
1. [Step 2.f](#mwccode_con): Output C code for constraint equations
1. [Step 2.g](#mwccode_cart): Output C code for converting $A^i$ and $E^i$ to Cartesian coordinates
1. [Step 2.h](#mwccode_xzloop): Output C code for printing 2D data
1. [Step 2.i](#cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
1. [Step 3](#bc_functs): Set up Sommerfeld boundary condition functions
1. [Step 4](#mainc): `Maxwell_Playground.c`: The Main C Code
1. [Step 5](#compileexec): Compile generated C codes & perform simulation of the propagating toroidal electromagnetic field
1. [Step 6](#visualize): Visualize the output!
1. [Step 6.a](#installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded
1. [Step 6.b](#genimages): Generate images for visualization animation
1. [Step 6.c](#genvideo): Generate visualization animation
1. [Step 7](#convergence): Plot the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)
1. [Step 8](#div_e): Comparison of Divergence Constraint Violation
1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```
# Step P1: Import needed NRPy+ core modules:
from outputC import outputC,lhrh,outCfunction # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("MaxwellEvolCurvi_Playground_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem, BoundaryCondition, MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhCylindrical"
# Step 2.a: Set boundary conditions
# Current choices are QuadraticExtrapolation (quadratic polynomial extrapolation) or Sommerfeld
BoundaryCondition = "Sommerfeld"
# Step 2.b: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
# * Cartesian's -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 10.0 # Needed for all coordinate systems.
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.c: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
default_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
RHS_string = "rhs_eval(&rfmstruct, &params, RK_INPUT_GFS, RK_OUTPUT_GFS);"
if BoundaryCondition == "QuadraticExtrapolation":
    # Extrapolation BCs are applied to the evolved gridfunctions themselves after the MoL update
    post_RHS_string = "apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);"
elif BoundaryCondition == "Sommerfeld":
    # Sommerfeld BCs are applied to the gridfunction RHSs directly
    RHS_string += "\n   apply_bcs_sommerfeld(&params, xx, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_INPUT_GFS, RK_OUTPUT_GFS);"
    post_RHS_string = ""
else:
    print("Invalid choice of boundary condition")
    sys.exit(1)
MoL.MoL_C_Code_Generation(RK_method, RHS_string = RHS_string, post_RHS_string = post_RHS_string,
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
# Step 5: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# par.set_parval_from_str("indexedexp::symmetry_axes","2")
```
<a id='cfl'></a>
## Step 1.a: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](#toc)\]
$$\label{cfl}$$
In order for our explicit-timestepping numerical solution to Maxwell's equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:
$$
\Delta t \le \frac{\min(ds_i)}{c},
$$
where $c$ is the wavespeed, and
$$ds_i = h_i \Delta x^i$$
is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
```
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
```
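To make the timestep formula concrete, here is an illustrative pure-Python sketch of what the generated `find_timestep()` function computes (this is not the NRPy+-generated C code). Ordinary cylindrical coordinates $(\rho,\phi,z)$ are used, with scale factors $h_\rho = 1$, $h_\phi = \rho$, $h_z = 1$; the grid bounds below are hypothetical:

```python
import numpy as np

def find_timestep_sketch(dxx, scale_factors, xx, CFL_FACTOR=0.5, wavespeed=1.0):
    """Sketch of the CFL-limited timestep:
    dt = CFL_FACTOR * min over directions & gridpoints of (h_i * Delta x^i) / c.

    dxx           : list of 3 uniform grid spacings Delta x^i
    scale_factors : list of 3 callables h_i(x0, x1, x2) returning the
                    reference-metric scale factor at each gridpoint
    xx            : list of 3 1D coordinate arrays
    """
    X0, X1, X2 = np.meshgrid(xx[0], xx[1], xx[2], indexing="ij")
    min_ds = np.inf
    for i in range(3):
        ds_i = scale_factors[i](X0, X1, X2) * dxx[i]  # proper distance in direction i
        min_ds = min(min_ds, ds_i.min())
    return CFL_FACTOR * min_ds / wavespeed

# Hypothetical cylindrical grid (rho, phi, z):
rho = np.linspace(0.1, 10.0, 32)
phi = np.linspace(0.0, 2.0*np.pi, 16)
z   = np.linspace(-10.0, 10.0, 32)
dxx = [rho[1]-rho[0], phi[1]-phi[0], z[1]-z[0]]
h   = [lambda r,p,zz: np.ones_like(r),  # h_rho = 1
       lambda r,p,zz: r,                # h_phi = rho
       lambda r,p,zz: np.ones_like(r)]  # h_z   = 1
dt  = find_timestep_sketch(dxx, h, [rho, phi, z])
```

On this grid the minimum proper distance occurs in the $\phi$ direction at the smallest radius, which is what limits $\Delta t$.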
<a id='mw'></a>
# Step 2: Generate symbolic expressions and output C code for evolving Maxwell's equations \[Back to [top](#toc)\]
$$\label{mw}$$
Here we read in the symbolic expressions from the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) and [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) modules to define the initial data, evolution equations, and constraint equations.
<a id='mwid'></a>
## Step 2.a: Generate symbolic expressions for toroidal dipole field initial data \[Back to [top](#toc)\]
$$\label{mwid}$$
Here we use the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_InitialData.ipynb), to write initial data to the grid functions for both systems.
We define the rescaled quantities $a^i$ and $e^i$ in terms of $A^i$ and $E^i$ in curvilinear coordinates within the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) module (see [this tutorial](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb) for more detail);
\begin{align}
a^i &= \frac{A^i}{\text{ReU}[i]},\\ \\
e^i &= \frac{E^i}{\text{ReU}[i]}.
\end{align}
```
import Maxwell.InitialData as mwid
# Set which system to use, which are defined in Maxwell/InitialData.py
par.set_parval_from_str("Maxwell.InitialData::System_to_use","System_II")
mwid.InitialData()
aidU = ixp.zerorank1()
eidU = ixp.zerorank1()
for i in range(DIM):
    aidU[i] = mwid.AidU[i]/rfm.ReU[i]
    eidU[i] = mwid.EidU[i]/rfm.ReU[i]
Maxwell_ID_SymbExpressions = [\
lhrh(lhs="*AU0_exact",rhs=aidU[0]),\
lhrh(lhs="*AU1_exact",rhs=aidU[1]),\
lhrh(lhs="*AU2_exact",rhs=aidU[2]),\
lhrh(lhs="*EU0_exact",rhs=eidU[0]),\
lhrh(lhs="*EU1_exact",rhs=eidU[1]),\
lhrh(lhs="*EU2_exact",rhs=eidU[2]),\
lhrh(lhs="*PSI_exact",rhs=mwid.psi_ID),\
lhrh(lhs="*GAMMA_exact",rhs=mwid.Gamma_ID)]
Maxwell_ID_CcodeKernel = fin.FD_outputC("returnstring", Maxwell_ID_SymbExpressions)
```
<a id='mwevol'></a>
## Step 2.b: Generate symbolic expressions for evolution equations \[Back to [top](#toc)\]
$$\label{mwevol}$$
Here we use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to assign the evolution equations to the grid functions.
```
import Maxwell.VacuumMaxwell_Flat_Evol_Curvilinear_rescaled as rhs
# Set which system to use, which are defined in Maxwell/InitialData.py
par.set_parval_from_str("Maxwell.InitialData::System_to_use","System_II")
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
rhs.VacuumMaxwellRHSs_rescaled()
Maxwell_RHSs_SymbExpressions = [\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU0"),rhs=rhs.arhsU[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU1"),rhs=rhs.arhsU[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU2"),rhs=rhs.arhsU[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU0"),rhs=rhs.erhsU[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU1"),rhs=rhs.erhsU[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU2"),rhs=rhs.erhsU[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","psi"),rhs=rhs.psi_rhs),\
lhrh(lhs=gri.gfaccess("rhs_gfs","Gamma"),rhs=rhs.Gamma_rhs)]
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
Maxwell_RHSs_string = fin.FD_outputC("returnstring",
Maxwell_RHSs_SymbExpressions,
params="SIMD_enable=True").replace("IDX4","IDX4S")
```
<a id='mwcon'></a>
## Step 2.c: Generate symbolic expressions for constraint equations \[Back to [top](#toc)\]
$$\label{mwcon}$$
We now use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to generate the constraint equations.
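For concreteness, the divergence constraint evaluated here is the vacuum Gauss constraint (see the linked tutorial for the precise expressions for both $C$ and $G$):
$$
C \equiv \hat{D}_i E^i = 0,
$$
with $G$ enforcing the analogous consistency between the auxiliary field $\Gamma$ and the divergence of $A^i$.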
```
C = gri.register_gridfunctions("AUX", "C")
G = gri.register_gridfunctions("AUX", "G")
Constraints_string = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs", "C"), rhs=rhs.C),
lhrh(lhs=gri.gfaccess("aux_gfs", "G"), rhs=rhs.G)],
params="outCverbose=False").replace("IDX4","IDX4S")
```
<a id='mwcart'></a>
## Step 2.d: Generate symbolic expressions for converting $A^i$ and $E^i$ to the Cartesian basis \[Back to [top](#toc)\]
$$\label{mwcart}$$
We now use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to generate the coordinate conversion, which makes our convergence tests slightly easier.
```
AUCart = ixp.register_gridfunctions_for_single_rank1("AUX", "AUCart")
EUCart = ixp.register_gridfunctions_for_single_rank1("AUX", "EUCart")
Cartesian_Vectors_string = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart0"), rhs=rhs.AU_Cart[0]),
lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart1"), rhs=rhs.AU_Cart[1]),
lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart2"), rhs=rhs.AU_Cart[2]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart0"), rhs=rhs.EU_Cart[0]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart1"), rhs=rhs.EU_Cart[1]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart2"), rhs=rhs.EU_Cart[2])],
params="outCverbose=False").replace("IDX4","IDX4S")
```
<a id='mwccode'></a>
## Step 2.e: Output C codes for initial data and evolution equations \[Back to [top](#toc)\]
$$\label{mwccode}$$
Next we write the C codes for the initial data and evolution equations to files, to be used later by our main C code.
```
# Step 11: Generate all needed C functions
Part_P1_body = Maxwell_ID_CcodeKernel
desc="Part P1: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data."
name="exact_solution_single_point"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const REAL xx0,const REAL xx1,const REAL xx2,const paramstruct *restrict params,\
REAL *EU0_exact, \
REAL *EU1_exact, \
REAL *EU2_exact, \
REAL *AU0_exact, \
REAL *AU1_exact, \
REAL *AU2_exact, \
REAL *PSI_exact,\
REAL *GAMMA_exact",
body = Part_P1_body,
loopopts = "")
desc="Part P2: Declare the function for the exact solution at all points. time==0 corresponds to the initial data."
name="exact_solution_all_points"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3], REAL *restrict in_gfs",
body ="""
REAL xx0 = xx[0][i0]; REAL xx1 = xx[1][i1]; REAL xx2 = xx[2][i2];
exact_solution_single_point(xx0,xx1,xx2,params,&in_gfs[IDX4S(EU0GF,i0,i1,i2)],
&in_gfs[IDX4S(EU1GF,i0,i1,i2)],
&in_gfs[IDX4S(EU2GF,i0,i1,i2)],
&in_gfs[IDX4S(AU0GF,i0,i1,i2)],
&in_gfs[IDX4S(AU1GF,i0,i1,i2)],
&in_gfs[IDX4S(AU2GF,i0,i1,i2)],
&in_gfs[IDX4S(PSIGF,i0,i1,i2)],
&in_gfs[IDX4S(GAMMAGF,i0,i1,i2)]);""",
loopopts = "AllPoints")
Part_P3_body = Maxwell_RHSs_string
desc="Part P3: Declare the function to evaluate the RHSs of Maxwell's equations"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="""rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs, REAL *restrict rhs_gfs""",
body =Part_P3_body,
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
```
<a id='mwccode_con'></a>
## Step 2.f: Output C code for constraint equations \[Back to [top](#toc)\]
$$\label{mwccode_con}$$
Finally output the C code for evaluating the divergence constraint, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb). In the absence of numerical error, this constraint should evaluate to zero, but due to numerical (typically truncation and roundoff) error it does not. We will therefore measure the divergence constraint violation to gauge the accuracy of our simulation, and ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected. Specifically, we take the L2 norm of the constraint violation, via
\begin{align}
\lVert C \rVert^2 &= \frac{\int C^2 d\mathcal{V}}{\int d\mathcal{V}}.
\end{align}
Numerically approximating this integral, in spherical coordinates for example, then gives us
\begin{align}
\lVert C \rVert^2 &= \frac{\sum C^2 r^2 \sin (\theta) \ dr d\theta d\phi}{\sum r^2 \sin (\theta) \ dr d\theta d\phi}, \\ \\
&= \frac{\sum C^2 r^2 \sin (\theta)}{\sum r^2 \sin (\theta) } , \\ \\
&= \frac{\sum C^2 \ \sqrt{\text{det} \ \hat{\gamma}}}{\sum \sqrt{\text{det} \ \hat{\gamma}}},
\end{align}
where $\hat{\gamma}$ is the reference metric. Thus, along with the C code to calculate the constraints, we also print out the code required to evaluate $\sqrt{\text{det} \ \hat{\gamma}}$ at any given point.
```
# Set up the C function for calculating the constraints
Part_P4_body = Constraints_string
desc="Evaluate the constraints"
name="Constraints"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = Part_P4_body,
loopopts = "InteriorPoints,Enable_rfm_precompute")
# Integrand used to calculate the L2 norm of the constraint
diagnostic_integrand_body = outputC(rfm.detgammahat,"*detg",filename='returnstring',
params="includebraces=False")
desc="Evaluate the volume element at a specific point"
name="diagnostic_integrand"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const REAL xx0,const REAL xx1,const REAL xx2,const paramstruct *restrict params, REAL *restrict detg",
body = diagnostic_integrand_body,
loopopts = "")
```
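The weighted sums above can be sketched in standalone Python; the small spherical grid below is purely illustrative, and uses $\sqrt{\det \hat{\gamma}} = r^2 \sin(\theta)$ for the volume-element weight:

```python
import numpy as np

def L2_norm_sketch(C, sqrt_detgammahat):
    """Discrete L2 norm of a constraint violation C over the grid:
    ||C|| = sqrt( sum(C^2 sqrt(det gammahat)) / sum(sqrt(det gammahat)) )."""
    num = np.sum(C**2 * sqrt_detgammahat)
    den = np.sum(sqrt_detgammahat)
    return np.sqrt(num / den)

# Hypothetical small spherical grid, where sqrt(det gammahat) = r^2 sin(theta):
r     = np.linspace(0.1, 1.0, 8)
theta = np.linspace(0.1, np.pi - 0.1, 8)
R, TH = np.meshgrid(r, theta, indexing="ij")
weight = R**2 * np.sin(TH)

# A spatially constant violation should yield ||C|| equal to that constant,
# independent of the weights:
C_uniform = np.full_like(weight, 1e-8)
norm = L2_norm_sketch(C_uniform, weight)
```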
<a id='mwccode_cart'></a>
## Step 2.g: Output C code for converting $A^i$ and $E^i$ to Cartesian coordinates \[Back to [top](#toc)\]
$$\label{mwccode_cart}$$
Here we write the C code for the coordinate transformation to Cartesian coordinates.
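The basis change itself is the usual Jacobian transformation of a contravariant vector; schematically,
$$
A^i_{\rm Cart} = \frac{\partial x^i_{\rm Cart}}{\partial x^j} A^j, \qquad
E^i_{\rm Cart} = \frac{\partial x^i_{\rm Cart}}{\partial x^j} E^j,
$$
where $x^j$ are the curvilinear coordinates; see the linked tutorial for the rescaled form actually implemented.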
```
desc="Convert EU and AU to Cartesian basis"
name="Cartesian_basis"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params, REAL *restrict xx[3],
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = "REAL xx0 = xx[0][i0]; REAL xx1 = xx[1][i1]; REAL xx2 = xx[2][i2];\n"+Cartesian_Vectors_string,
loopopts = "AllPoints, Enable_rfm_precompute")
```
<a id='mwccode_xzloop'></a>
## Step 2.h: Output C code for printing 2D data \[Back to [top](#toc)\]
$$\label{mwccode_xzloop}$$
Here we output the necessary C code to print out a 2D slice of the data in the xz-plane, using the `xz_loop` macro.
```
def xz_loop(CoordSystem):
    ret = """// xz-plane output for """ + CoordSystem + r""" coordinates:
#define LOOP_XZ_PLANE(ii, jj, kk) \
"""
    if "Spherical" in CoordSystem or "SymTP" in CoordSystem:
        ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
  if(i2 == NGHOSTS || i2 == Nxx_plus_2NGHOSTS2/2) \
    for (int i1 = 0; i1 < Nxx_plus_2NGHOSTS1; i1++) \
      for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
    elif "Cylindrical" in CoordSystem:
        ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
  for (int i1 = 0; i1 < Nxx_plus_2NGHOSTS1; i1++) \
    if(i1 == NGHOSTS || i1 == Nxx_plus_2NGHOSTS1/2) \
      for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
    elif "Cartesian" in CoordSystem:
        ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
  for (int i1 = Nxx_plus_2NGHOSTS1/2; i1 < Nxx_plus_2NGHOSTS1/2+1; i1++) \
    for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
    return ret

with open(os.path.join(Ccodesdir,"xz_loop.h"),"w") as file:
    file.write(xz_loop(CoordSystem))
```
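Once the main C code (Step 4) has run, each xz-plane snapshot can be loaded for plotting. A minimal sketch, assuming the 8-column row format (`time x z E^x E^y A^x A^y C`) written by the main code's `fprintf`; a hypothetical two-row file stands in for real simulation output here:

```python
import numpy as np

# Hypothetical stand-in for an out<Nxx0>-<n>.txt snapshot (two gridpoints):
sample = ("0.0 1.0 2.0 0.1 0.2 0.3 0.4 1e-9\n"
          "0.0 1.5 2.0 0.1 0.2 0.3 0.4 1e-9\n")
with open("out_sample.txt", "w") as f:
    f.write(sample)

# Columns: time, Cartesian x, Cartesian z, E^x, E^y, A^x, A^y, constraint C
data = np.loadtxt("out_sample.txt")
time, x, z = data[:, 0], data[:, 1], data[:, 2]
Ex, Ey, Ax, Ay, C = (data[:, i] for i in range(3, 8))
```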
<a id='cparams_rfm_and_domainsize'></a>
## Step 2.i: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams_rfm_and_domainsize}$$
Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above.
```
# Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
    file.write("""
// Set free-parameter values.
params.time = 0.0; // Initial simulation time; time==0 corresponds to the exact solution.
params.amp = 1.0;
params.lam = 1.0;
params.wavespeed = 1.0;\n""")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir, grid_centering="cell")
# Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
```
<a id='bc_functs'></a>
# Step 3: Set up Sommerfeld boundary condition functions \[Back to [top](#toc)\]
$$\label{bc_functs}$$
Next we output the C code necessary to implement the Sommerfeld boundary condition on our curvilinear grid, [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-SommerfeldBoundaryCondition.ipynb).
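Schematically, a Sommerfeld condition imposes purely outgoing radiation at the outer boundary: assuming each evolved field behaves asymptotically as $f \sim f_{\infty} + u(r - vt)/r$, its time derivative at the boundary is set to
$$
\partial_t f = -v\left(\partial_r f + \frac{f - f_{\infty}}{r}\right) + \frac{k}{r^3},
$$
where $v$ is the asymptotic wave speed, $f_{\infty}$ the asymptotic field value, and the final term absorbs subleading radial falloff (here with falloff power 3). The defaults chosen in the next cell correspond to $v=1$ and $f_{\infty}=0$; see the linked tutorial for the exact form implemented.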
```
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),
Cparamspath=os.path.join("../"),
BoundaryCondition=BoundaryCondition)
if BoundaryCondition == "Sommerfeld":
    bcs = cbcs.sommerfeld_boundary_condition_class(fd_order=4,
                                                   vars_radial_falloff_power_default=3,
                                                   vars_speed_default=1.,
                                                   vars_at_inf_default=0.)
    bcs.write_sommerfeld_file(Ccodesdir)
```
<a id='mainc'></a>
# Step 4: `Maxwell_Playground.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"Maxwell_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
    file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at command line.
REAL CFL_FACTOR = """+str(default_CFL_FACTOR)+";")
```

```
%%writefile $Ccodesdir/Maxwell_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "Maxwell_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P5: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P6: Find the CFL-constrained timestep
#include "find_timestep.h"
// Part P7: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data.
#include "exact_solution_single_point.h"
// Part P8: Declare the function for the exact solution at all points. time==0 corresponds to the initial data.
#include "exact_solution_all_points.h"
// Step P9: Declare function for evaluating constraints (diagnostic)
#include "Constraints.h"
// Step P10: Declare rhs_eval function, which evaluates the RHSs of Maxwells equations
#include "rhs_eval.h"
// Step P11: Declare function to calculate det gamma_hat, used in calculating the L2 norm
#include "diagnostic_integrand.h"
// Step P12: Declare function to transform to the Cartesian basis
#include "Cartesian_basis.h"
// Step P13: Declare macro to print out data along the xz-plane
#include "xz_loop.h"
// Step P14: Define xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output x and y components of evolution variables and the
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 5 && argc != 6) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected 4 or 5 command-line arguments: ./Maxwell_Playground Nx0 Nx1 Nx2 t_final [CFL_FACTOR],\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions, and t_final is the final simulation time.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 6) {
CFL_FACTOR = strtod(argv[5],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = strtod(argv[4],NULL);
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
int output_every_N = (int)((REAL)N_final/800.0);
if(output_every_N == 0) output_every_N = 1;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict y_0_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict diagnostic_output_gfs_0 = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Poison all evolved-gridfunction data with NaNs (0.0/0.0), to help
// catch reads of uninitialized or unset memory:
LOOP_ALL_GFS_GPS(i) {
y_n_gfs[i] = 0.0/0.0;
}
// Step 1: Set up initial data to be exact solution at time=0:
params.time = 0.0;
exact_solution_all_points(&params, xx, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final+1;n++) { // Main loop to progress forward in time.
// At each timestep, set Constraints to NaN in the grid interior
LOOP_REGION(NGHOSTS,NGHOSTS+Nxx0,
NGHOSTS,NGHOSTS+Nxx1,
NGHOSTS,NGHOSTS+Nxx2) {
const int idx = IDX3S(i0,i1,i2);
diagnostic_output_gfs[IDX4ptS(CGF,idx)] = 0.0/0.0;
}
// Step 3.a: Output 2D data file periodically, for visualization
params.time = ((REAL)n)*dt;
// Evaluate Divergence constraint violation
Constraints(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
// L2 norm of constraint violation: L2Norm_C = sqrt( sum(C^2 sqrt(detghat)) / sum(sqrt(detghat)) )
REAL L2Norm_sum_C = 0.;
REAL sum = 0.;
LOOP_REGION(NGHOSTS,NGHOSTS+Nxx0,
NGHOSTS,NGHOSTS+Nxx1,
NGHOSTS,NGHOSTS+Nxx2) {
const int idx = IDX3S(i0,i1,i2);
double C = (double)diagnostic_output_gfs[IDX4ptS(CGF,idx)];
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL detghat; diagnostic_integrand(xx0, xx1, xx2, &params, &detghat);
L2Norm_sum_C += C*C*sqrt(detghat);
sum = sum + sqrt(detghat);
}
REAL L2Norm_C = sqrt(L2Norm_sum_C/(sum));
printf("%e %.15e\n",params.time, L2Norm_C);
// Step 3.a: Output 2D data file periodically, for visualization
if(n%20 == 0) {
exact_solution_all_points(&params, xx, y_0_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_n_gfs, diagnostic_output_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_0_gfs, diagnostic_output_gfs_0);
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx[0],n);
FILE *out2D = fopen(filename, "w");
LOOP_XZ_PLANE(ii, jj, kk){
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
REAL Ex_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART0GF,idx)];
REAL Ey_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART1GF,idx)];
REAL Ax_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART0GF,idx)];
REAL Ay_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART1GF,idx)];
double C = (double)diagnostic_output_gfs[IDX4ptS(CGF,idx)];
fprintf(out2D,"%e %e %e %.15e %.15e %.15e %.15e %.15e\n", params.time,
xCart[0],xCart[2], Ex_num, Ey_num, Ax_num, Ay_num, C);
}
fclose(out2D);
}
if(n==N_final-1) {
exact_solution_all_points(&params, xx, y_0_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_n_gfs, diagnostic_output_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_0_gfs, diagnostic_output_gfs_0);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_XZ_PLANE(ii, jj, kk){
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
REAL Ex_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART0GF,idx)];
REAL Ey_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART1GF,idx)];
REAL Ax_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART0GF,idx)];
REAL Ay_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART1GF,idx)];
REAL Ex_exact = (double)diagnostic_output_gfs_0[IDX4ptS(EUCART0GF,idx)];
REAL Ey_exact = (double)diagnostic_output_gfs_0[IDX4ptS(EUCART1GF,idx)];
REAL Ax_exact = (double)diagnostic_output_gfs_0[IDX4ptS(AUCART0GF,idx)];
REAL Ay_exact = (double)diagnostic_output_gfs_0[IDX4ptS(AUCART1GF,idx)];
REAL Ex__E_rel = log10(fabs(Ex_num - Ex_exact));
REAL Ey__E_rel = log10(fabs(Ey_num - Ey_exact));
REAL Ax__E_rel = log10(fabs(Ax_num - Ax_exact));
REAL Ay__E_rel = log10(fabs(Ay_num - Ay_exact));
fprintf(out2D,"%e %e %.15e %.15e %.15e %.15e\n",xCart[0],xCart[2],
Ex__E_rel, Ey__E_rel, Ax__E_rel, Ay__E_rel);
}
fclose(out2D);
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(y_0_gfs);
free(diagnostic_output_gfs_0);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
```
<a id='compileexec'></a>
# Step 5: Compile generated C codes & perform simulation of the propagating toroidal electromagnetic field \[Back to [top](#toc)\]
$$\label{compileexec}$$
To enable cross-platform (Windows, macOS, & Linux) compilation and execution, we make use of `cmdline_helper` [(**Tutorial**)](Tutorial-cmdline_helper.ipynb).
```
import cmdline_helper as cmd
CFL_FACTOR=0.5
cmd.C_compile(os.path.join(Ccodesdir,"Maxwell_Playground.c"),
os.path.join(outdir,"Maxwell_Playground"),compile_mode="optimized")
# Change to output directory
os.chdir(outdir)
# Clean up existing output files
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.delete_existing_files("out-*resolution.txt")
# Set the time at which to end the simulation
t_final = str(0.5*domain_size) + ' '
# Run executables
if 'Spherical' in CoordSystem or 'SymTP' in CoordSystem:
cmd.Execute("Maxwell_Playground", "64 48 4 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "96 72 4 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '64'
Nxx0_med = '96'
elif 'Cylindrical' in CoordSystem:
cmd.Execute("Maxwell_Playground", "50 4 100 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "80 4 160 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '50'
Nxx0_med = '80'
else:
# Cartesian
cmd.Execute("Maxwell_Playground", "64 64 64 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "128 128 128 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '64'
Nxx0_med = '128'
# Return to root directory
os.chdir(os.path.join("../../"))
print("Finished this code cell.")
```
<a id='visualize'></a>
# Step 6: Visualize the output! \[Back to [top](#toc)\]
$$\label{visualize}$$
In this section we will generate a movie plotting the x component of the electric field on a 2D grid.
<a id='installdownload'></a>
## Step 6.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](#toc)\]
$$\label{installdownload}$$
Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`.
```
!pip install scipy > /dev/null
check_for_ffmpeg = !which ffmpeg >/dev/null && echo $?
if check_for_ffmpeg != ['0']:
print("Couldn't find ffmpeg, so I'll download it.")
# Courtesy https://johnvansickle.com/ffmpeg/
!wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz
!tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz
print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.")
!mkdir ~/.local/bin/
!cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/
print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.")
```
<a id='genimages'></a>
## Step 6.b: Generate images for visualization animation \[Back to [top](#toc)\]
$$\label{genimages}$$
Here we loop through the data files output by the executable compiled and run in [the previous step](#mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.
**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**
```
## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
globby = glob.glob(os.path.join(outdir,'out'+Nxx0_med+'-00*.txt'))
file_list = []
for x in sorted(globby):
file_list.append(x)
bound = domain_size/2.0
pl_xmin = -bound
pl_xmax = +bound
pl_zmin = -bound
pl_zmax = +bound
for filename in file_list:
fig = plt.figure()
t, x, z, Ex, Ey, Ax, Ay, C = np.loadtxt(filename).T #Transposed for easier unpacking
plotquantity = Ex
time = np.round(t[0], decimals=3)
plotdescription = "Numerical Soln."
plt.title(r"$E_x$ at $t$ = "+str(time))
plt.xlabel("x")
plt.ylabel("z")
grid_x, grid_z = np.mgrid[pl_xmin:pl_xmax:200j, pl_zmin:pl_zmax:200j]
points = np.zeros((len(x), 2))
for i in range(len(x)):
# Zach says: No idea why x and y get flipped...
points[i][0] = x[i]
points[i][1] = z[i]
grid = griddata(points, plotquantity, (grid_x, grid_z), method='nearest')
gridcub = griddata(points, plotquantity, (grid_x, grid_z), method='cubic')
im = plt.imshow(gridcub, extent=(pl_xmin,pl_xmax, pl_zmin,pl_zmax))
ax = plt.colorbar()
plt.clim(-3,3)
ax.set_label(plotdescription)
savefig(os.path.join(filename+".png"),dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+filename+"\r")
sys.stdout.flush()
```
<a id='genvideo'></a>
## Step 6.c: Generate visualization animation \[Back to [top](#toc)\]
$$\label{genvideo}$$
In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
```
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(len(file_list)):
img = mgimg.imread(file_list[i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save(os.path.join(outdir,'Maxwell_ToroidalDipole.mp4'), fps=5,dpi=150)
## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##
# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
# Embed video based on suggestion:
# https://stackoverflow.com/questions/39900173/jupyter-notebook-html-cell-magic-with-python-variable
HTML("""
<video width="480" height="360" controls>
<source src=\""""+os.path.join(outdir,"Maxwell_ToroidalDipole.mp4")+"""\" type="video/mp4">
</video>
""")
```
<a id='convergence'></a>
# Step 7: Plot the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling) \[Back to [top](#toc)\]
$$\label{convergence}$$
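The plot below rests on the expected error scaling of a 4th-order finite-difference scheme: the error at resolution $N$ behaves like $E(N) \propto N^{-4}$, so shifting the low-resolution log-error by $\log_{10}\left[(N_{\rm low}/N_{\rm med})^4\right]$ should make it overlay the medium-resolution curve. The following is a minimal sketch of that rescaling, using hypothetical error constants rather than actual simulation output:

```python
import numpy as np

# Idealized 4th-order errors: E(N) = C * N**(-4) at each of three sample points.
C = np.array([2.0, 5.0, 1.0])      # hypothetical per-point error constants
N_low, N_med = 64.0, 96.0
E_low = C * N_low**(-4)
E_med = C * N_med**(-4)

# Shift the low-resolution log-error by log10((N_low/N_med)**4); for exact
# 4th-order convergence it lands on the medium-resolution log-error curve.
shifted = np.log10(E_low) + np.log10((N_low / N_med)**4)
assert np.allclose(shifted, np.log10(E_med))
```

The same shift appears in the code below as `np.log10((float(Nxx0_low)/float(Nxx0_med))**4)`; overlap of the two curves in the plot is the visual confirmation of 4th-order convergence.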
```
x_med, z_med, Ex__E_rel_med, Ey__E_rel_med, Ax__E_rel_med, Ay__E_rel_med = np.loadtxt(os.path.join(outdir,'out'+Nxx0_med+'.txt')).T #Transposed for easier unpacking
pl_xmin = -domain_size/2.
pl_xmax = +domain_size/2.
pl_ymin = -domain_size/2.
pl_ymax = +domain_size/2.
grid_x, grid_z = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points_med = np.zeros((len(x_med), 2))
for i in range(len(x_med)):
points_med[i][0] = x_med[i]
points_med[i][1] = z_med[i]
grid_med = griddata(points_med, Ex__E_rel_med, (grid_x, grid_z), method='nearest')
grid_medcub = griddata(points_med, Ex__E_rel_med, (grid_x, grid_z), method='cubic')
plt.clf()
plt.title(r"Nxx0="+Nxx0_med+r" Num. Err.: $\log_{10}|E_x^{\rm num}-E_x^{\rm exact}|$ at $t$ = "+t_final)
plt.xlabel("x")
plt.ylabel("z")
fig_medcub = plt.imshow(grid_medcub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig_medcub)
x_low,z_low, EU0__E_rel_low, EU1__E_rel_low, AU0__E_rel_low, AU1__E_rel_low = np.loadtxt(os.path.join(outdir,'out'+Nxx0_low+'.txt')).T #Transposed for easier unpacking
points_low = np.zeros((len(x_low), 2))
for i in range(len(x_low)):
points_low[i][0] = x_low[i]
points_low[i][1] = z_low[i]
grid_low = griddata(points_low, EU0__E_rel_low, (grid_x, grid_z), method='nearest')
griddiff__low_minus__med = np.zeros((100,100))
griddiff__low_minus__med_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid_low_1darray_yeq0 = np.zeros(100)
grid_med_1darray_yeq0 = np.zeros(100)
count = 0
for i in range(100):
for j in range(100):
griddiff__low_minus__med[i][j] = grid_low[i][j] - grid_med[i][j]
griddiff__low_minus__med_1darray[count] = griddiff__low_minus__med[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid_low_1darray_yeq0[i] = grid_low[i][j] + np.log10((float(Nxx0_low)/float(Nxx0_med))**4)
grid_med_1darray_yeq0[i] = grid_med[i][j]
count = count + 1
fig, ax = plt.subplots()
plt.title(r"4th-order Convergence of $E_x$ at t = "+t_final)
plt.xlabel("x")
plt.ylabel("log10(Absolute error)")
ax.plot(gridx_1darray_yeq0, grid_med_1darray_yeq0, 'k-', label='Nr = '+Nxx0_med)
ax.plot(gridx_1darray_yeq0, grid_low_1darray_yeq0, 'k--', label='Nr = '+Nxx0_low+', mult. by ('+Nxx0_low+'/'+Nxx0_med+')^4')
# ax.set_ylim([-8.5,0.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
```
<a id='div_e'></a>
# Step 8: Compare the Divergence Constraint Violation \[Back to [top](#toc)\]
$$\label{div_e}$$
Here we calculate the violation quantity
$$
\mathcal{C} \equiv \nabla^i E_i,
$$
at each point on our grid (excluding the ghost zones) then calculate the normalized L2 norm over the entire volume via
\begin{align}
\lVert \mathcal{C} \rVert &= \left( \frac{\int \mathcal{C}^2 d\mathcal{V}}{\int d\mathcal{V}} \right)^{1/2}.
\end{align}
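In the C code above this integral is accumulated pointwise, weighting each squared violation by the local volume element $\sqrt{\det \hat{g}}$. A minimal NumPy sketch of the discrete norm (with hypothetical weights; the uniform cell widths cancel in the ratio):

```python
import numpy as np

def normalized_L2_norm(C, sqrt_detghat):
    """Discrete analogue of ||C|| = sqrt(Integral(C^2 dV) / Integral(dV)),
    with dV approximated by sqrt(det ghat) at each interior grid point."""
    num = np.sum(C**2 * sqrt_detghat)
    den = np.sum(sqrt_detghat)
    return np.sqrt(num / den)

# Sanity check: a constant violation C=2 everywhere gives ||C|| = 2,
# independent of the (positive) volume weights.
C = np.full(10, 2.0)
w = np.random.rand(10) + 0.1
print(normalized_L2_norm(C, w))  # -> 2.0
```

Because the norm is normalized by the total volume, values at different resolutions are directly comparable, which is what the plot below relies on.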
```
# Plotting the constraint violation
nrpy_div_low = np.loadtxt(os.path.join(outdir,'out-lowresolution.txt')).T
nrpy_div_med = np.loadtxt(os.path.join(outdir,'out-medresolution.txt')).T
plt.plot(nrpy_div_low[0]/domain_size, (nrpy_div_low[1]), 'k-',label='Nxx0 = '+Nxx0_low)
plt.plot(nrpy_div_med[0]/domain_size, (nrpy_div_med[1]), 'k--',label='Nxx0 = '+Nxx0_med)
plt.yscale('log')
plt.xlabel('Light Crossing Times')
plt.ylabel('||C||')
# plt.xlim(0,2.2)
# plt.ylim(1e-4, 1e-5)
plt.title('L2 Norm of the Divergence of E - '+par.parval_from_str("reference_metric::CoordSystem")+' Coordinates')
plt.legend(loc='center right')
plt.show()
```
<a id='latex_pdf_output'></a>
# Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.pdf](Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear")
```
```
import matplotlib
# matplotlib.use('Agg') # Or any other X11 back-end
import matplotlib.pyplot as plt
import numpy as np
import os
import time
import h5py
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.init as init
from torch.autograd import Variable
from torch.utils.data import Dataset
from scipy import signal
import wfdb
%matplotlib inline
class SleepDatasetValid(Dataset):
"""Physionet 2018 dataset."""
def __init__(self, records_file, root_dir, s, f, window_size, hanning_window):
"""
Args:
records_file (string): Path to the records file.
root_dir (string): Directory with all the signals.
"""
self.landmarks_frame = pd.read_csv(records_file)[s:f]
self.root_dir = root_dir
self.window_size = window_size
self.hw = hanning_window
self.num_bins = window_size//hanning_window
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
folder_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
file_name = self.landmarks_frame.iloc[idx, 0]
# print(file_name)
# print(folder_name)
# file_name='tr03-0005/'
# folder_name='../data/training/tr03-0005/'
signals = wfdb.rdrecord(os.path.join(folder_name, file_name[:-1]))
arousals = h5py.File(os.path.join(folder_name, file_name[:-1] + '-arousal.mat'), 'r')
tst_ann = wfdb.rdann(os.path.join(folder_name, file_name[:-1]), 'arousal')
positive_indexes = []
negative_indexes = []
arous_data = arousals['data']['arousals'].value.ravel()
for w in range(len(arous_data)//self.window_size):
if arous_data[w*self.window_size:(w+1)*self.window_size].max() > 0:
positive_indexes.append(w)
else:
negative_indexes.append(w)
# max_in_window = arous_data[].max()
if len(positive_indexes) < len(negative_indexes):
windexes = np.append(positive_indexes, np.random.choice(negative_indexes, len(positive_indexes)//10, replace=False))
else:
windexes = np.append(negative_indexes, np.random.choice(positive_indexes, len(negative_indexes), replace=False))
windexes = np.sort(windexes)
labels = []
total = 0
positive = 0
for i in windexes:
tmp = []
window_s = i*self.window_size
window_e = (i+1)*self.window_size
for j in range(self.num_bins):
total += 1
bin_s = j*self.hw + window_s
bin_e = (j+1)*self.hw + window_s
if arous_data[bin_s:bin_e].max() > 0:
tmp.append(1)
positive += 1
else:
tmp.append(0)
labels.append(tmp)
# print('sample percent ratio: {:.2f}'.format(positive/total))
interested = [0]
# for i in range(13):
# if signals.sig_name[i] in ['SaO2', 'ABD', 'F4-M1', 'C4-M1', 'O2-M1', 'AIRFLOW']:
# interested.append(i)
# POI = arousal_centers
sample = ((signals.p_signal[:,interested], windexes), arous_data)
return sample
class SleepDataset(Dataset):
"""Physionet 2018 dataset."""
def __init__(self, records_file, root_dir, s, f, window_size, hanning_window, validation=False):
"""
Args:
records_file (string): Path to the records file.
root_dir (string): Directory with all the signals.
"""
self.landmarks_frame = pd.read_csv(records_file)[s:f]
self.root_dir = root_dir
self.window_size = window_size
self.hw = hanning_window
self.num_bins = window_size//hanning_window
self.validation=validation
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
np.random.seed(12345)
folder_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
file_name = self.landmarks_frame.iloc[idx, 0]
# print(file_name)
# print(folder_name)
# file_name='tr03-0005/'
# folder_name='../data/training/tr03-0005/'
signals = wfdb.rdrecord(os.path.join(folder_name, file_name[:-1]))
arousals = h5py.File(os.path.join(folder_name, file_name[:-1] + '-arousal.mat'), 'r')
tst_ann = wfdb.rdann(os.path.join(folder_name, file_name[:-1]), 'arousal')
positive_indexes = []
negative_indexes = []
arous_data = arousals['data']['arousals'].value.ravel()
for w in range(len(arous_data)//self.window_size):
if arous_data[w*self.window_size:(w+1)*self.window_size].max() > 0:
positive_indexes.append(w)
else:
negative_indexes.append(w)
# max_in_window = arous_data[].max()
if self.validation:
windexes = np.append(positive_indexes, negative_indexes)
else:
if len(positive_indexes) < len(negative_indexes):
windexes = np.append(positive_indexes, np.random.choice(negative_indexes, len(positive_indexes)//10, replace=False))
else:
windexes = np.append(negative_indexes, np.random.choice(positive_indexes, len(negative_indexes), replace=False))
windexes = np.sort(windexes)
# windexes = np.array(positive_indexes)
labels = []
total = 0
positive = 0
for i in windexes:
tmp = []
window_s = i*self.window_size
window_e = (i+1)*self.window_size
for j in range(self.num_bins):
total += 1
bin_s = j*self.hw + window_s
bin_e = (j+1)*self.hw + window_s
if arous_data[bin_s:bin_e].max() > 0:
positive += 1
tmp.append(1.)
else:
tmp.append(0.)
labels.append(tmp)
interested = []
# print('# sample positive: {:.2f} #'.format(positive/total))
for i in range(13):
# if signals.sig_name[i] in ['SaO2', 'ABD', 'F4-M1', 'C4-M1', 'O2-M1', 'AIRFLOW']:
interested.append(i)
# POI = arousal_centers
# tst_sig = np.random.rand(len(signals.p_signal[:,interested]),1)
sample = ((signals.p_signal[:,interested], windexes), np.array(labels))
# sample = ((tst_sig, windexes), np.array(labels))
return sample
class Model_V3(nn.Module):
def __init__(self, window_size, han_size):
super(Model_V3, self).__init__()
num_bins = window_size//han_size
self.cnn1 = nn.Conv2d(13, num_bins, 3, padding=1)
init.xavier_uniform(self.cnn1.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn1.bias, 0.1)
self.cnn2 = nn.Conv2d(4, 8, 3, padding=1)
init.xavier_uniform(self.cnn2.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn2.bias, 0.1)
# self.cnn3 = nn.Conv2d(32, num_bins, 3, padding=1)
self.cnn3 = nn.Conv2d(8, num_bins, 3, padding=1)
init.xavier_uniform(self.cnn3.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn3.bias, 0.1)
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU()
# out_dim = ((han_size//2+1)//8)*((window_size//han_size)//8)*16
self.output = nn.AdaptiveMaxPool2d(1)
# self.fc = nn.Linear(out_dim, num_bins)
self.fc = nn.Linear(num_bins, num_bins)
self.sigmoid = nn.Sigmoid()
self.do = nn.Dropout()
def forward(self, x):
x = self.relu(self.pool(self.cnn1(x)))
# x = self.relu(self.pool(self.cnn2(x)))
# x = self.relu(self.pool(self.cnn3(x)))
x = self.output(x)
x = self.fc(x.view(-1))
x = self.sigmoid(x)
return x.view(-1)
minutes = 2
raw_window_size = minutes*60*200
hanning_window = 2048
window_size = raw_window_size + (hanning_window - (raw_window_size + hanning_window) % hanning_window) # pad window size up to a multiple of hanning_window
print('adjusted window size: {}, num bins: {}'.format(window_size, window_size//hanning_window))
output_pixels = ((window_size//hanning_window * (hanning_window//2+1))//64)*16
print('FC # params: {}'.format(output_pixels*window_size//hanning_window))
learning_rate = 1e-3
def to_spectogram(matrix):
spectograms = []
for i in range(matrix.size()[2]): # iterate over the channels of the input, not the global all_data
f, t, Sxx = signal.spectrogram(matrix[0,:,i].numpy(),
window=signal.get_window('hann',hanning_window, False),
fs=200,
scaling='density',
mode='magnitude',
noverlap=0
)
if (Sxx.min() != 0 or Sxx.max() != 0):
spectograms.append((Sxx - Sxx.mean()) / Sxx.std())
else:
spectograms.append(Sxx)
spect = torch.FloatTensor(spectograms).unsqueeze(0)
return spect.cuda() if torch.cuda.is_available() else spect # only move to GPU when CUDA is available
#TODO add torch.save(the_model.state_dict(), PATH) this to save the best models weights
train_dataset = SleepDataset('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 20, 21, window_size, hanning_window)
train_loaders = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=1,
shuffle=True)
test_dataset = SleepDataset('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 0, 1, window_size, hanning_window, validation=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=1,
shuffle=False)
model_v1 = Model_V3(window_size, hanning_window)
if torch.cuda.is_available():
print('using cuda')
model_v1.cuda()
criterion = nn.BCELoss(size_average=False)
optimizer = torch.optim.Adam(model_v1.parameters(), lr=learning_rate)
sig = nn.Sigmoid()
# i, ((data, cent), v_l) = next(enumerate(test_loader))
losses = []
v_losses = []
accuracy = []
v_accuracy = []
l = None
for epoch in range(20):
loss_t = 0.0
acc_t = 0.0
count_t = 0
start_time = time.time()
val_l = None
v_out = None
v_all = []
for c, ((all_data, windexes), labels) in enumerate(train_loaders):
for i, win in enumerate(windexes.numpy()[0]):
inp_subs = Variable(to_spectogram(all_data[:,win*window_size:(win+1)*window_size,]))
l = None
l = labels[0, i].type(torch.FloatTensor)
if torch.cuda.is_available():
l = l.cuda()
l = Variable(l)
output = model_v1(inp_subs)
# print(output)
loss = criterion(output, l)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_t += loss.data[0]
comparison = (output.cpu().data.numpy().ravel() > 0.5) == (l.cpu().data.numpy())
acc_t += comparison.sum() / (window_size//hanning_window)
count_t += 1
losses.append(loss_t/count_t)
accuracy.append(acc_t/count_t)
loss_v = 0.0
acc_v = 0.0
count_v = 0
for c, ((data, windexes), v_l) in enumerate(test_loader):
for i, win in enumerate(windexes.numpy()[0]):
inp_subs = Variable(to_spectogram(data[:,win*window_size:(win+1)*window_size,]))
l = None
l = v_l[0, i].type(torch.FloatTensor)
if torch.cuda.is_available():
l = l.cuda()
l = Variable(l)
output = model_v1(inp_subs)
loss = criterion(output, l)
loss_v += loss.data[0]
count_v += 1
comparison = (output.cpu().data.numpy().ravel() > 0.5) == (l.cpu().data.numpy())
acc_v += comparison.sum() / (window_size//hanning_window)
v_losses.append(loss_v/count_v)
v_accuracy.append(acc_v/count_v)
print('#'*45)
print('# epoch - {:>10} | time(s) -{:>10.2f} #'.format(epoch, time.time() - start_time))
print('# T loss - {:>10.2f} | V loss - {:>10.2f} #'.format(loss_t/count_t, loss_v/count_v))
print('# T acc - {:>10.2f} | V acc - {:>10.2f} #'.format(acc_t/count_t, acc_v/count_v))
print('#'*45)
v_dataset = SleepDatasetValid('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 20, 21, window_size, hanning_window)
v_loader = torch.utils.data.DataLoader(dataset=v_dataset,
batch_size=1,
shuffle=False)
start = 0
stop = 2000000
ones = np.ones(hanning_window)
# plt.plot(all_data.cpu().view(-1).numpy())
# plt.show()
for c, ((data, windexes), v_l) in enumerate(v_loader):
out_for_plot = []
# for i in range((data.size()[1]//window_size)):
for i in range((start//window_size), (stop//window_size)):
inp_subs = Variable(to_spectogram(data[:,i*window_size:(i+1)*window_size,]))
output = model_v1(inp_subs).cpu().data.numpy()
out_for_plot = np.append(out_for_plot, output)
out_for_plot = np.repeat(out_for_plot, hanning_window)
f = plt.figure(figsize=(20, 10))
plt.plot(out_for_plot)
# plt.plot((v_l.numpy()[0][:len(v_l.numpy()[0])] > 0).astype(float)*1.1, alpha=0.3)
# plt.plot((v_l.numpy()[0][:len(v_l.numpy()[0])] < 0).astype(float)*1.1, alpha=0.3)
plt.plot((v_l.numpy()[0][(start): (stop)] > 0).astype(float)*1.1, alpha=0.3)
plt.plot((v_l.numpy()[0][(start): (stop)] < 0).astype(float)*1.1, alpha=0.3)
plt.ylim((0,1.15))
plt.axhline(y=0.5, color='r', linestyle='-')
plt.show()
plt.imshow(inp_subs.cpu().squeeze(0).squeeze(0).data.numpy(), aspect='auto')
inp_subs.cpu().squeeze(0).squeeze(0)
```
**Introduction:**
This project performs K-Nearest-Neighbor regression on 100 generated data samples and obtains the closest-neighbor information for various K values. The project is divided into 7 cases.
- case1: K=1, the K neighbors contribute equally
- case2: K=3, the K neighbors contribute equally
- case3: K=50, the K neighbors contribute equally
- case4: K=1, each of the K neighbors has an influence that is inversely proportional to the distance from the point
- case5: K=3, each of the K neighbors has an influence that is inversely proportional to the distance from the point
- case6: K=50, each of the K neighbors has an influence that is inversely proportional to the distance from the point
- case7: K=N, all N points contribute, with each contribution proportional to $e^{-\frac{1}{2}d^2}$, where d is the distance from the point
Steps involved:
- Step1: Data Preparation of X
- Step2: Data Preparation of y
- Step3: KNN Regression modeling for each of the above seven cases
- Step4: Obtain predictions for the test set for each of the above seven cases
- Step5: Plots for each of the above seven cases
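The distance-weighted scheme used in cases 4, 5, and 6 is handled below via `sklearn`'s `weights='distance'` option, but the underlying rule is simple enough to sketch by hand. The helper here is illustrative only (it is not used in the steps below): the prediction is $\sum_i w_i y_i / \sum_i w_i$ with $w_i = 1/d_i$ over the K nearest neighbors, and a query that coincides with a training point simply returns that point's y.

```python
import numpy as np

def inverse_distance_knn_predict(x_train, y_train, x0, k):
    """Hand-rolled sketch of distance-weighted K-NN regression in 1D."""
    d = np.abs(x_train - x0)
    nearest = np.argsort(d)[:k]       # indices of the k closest samples
    if d[nearest[0]] == 0.0:          # query coincides with a training point
        return y_train[nearest[0]]
    w = 1.0 / d[nearest]              # inverse-distance weights
    return np.sum(w * y_train[nearest]) / np.sum(w)

x_train = np.array([1.0, 2.0, 4.0])
y_train = np.array([0.0, 1.0, 2.0])
# Query x0=3 has neighbors at distance 1 (y=1) and 1 (y=2) for k=2.
print(inverse_distance_knn_predict(x_train, y_train, 3.0, k=2))  # -> 1.5
```

Closer neighbors dominate the weighted average, which is why the distance-weighted fits below track the data more tightly near the sample points than the uniform-weight fits.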
#### Step1: Data Preparation of X
Consider a single dimension (variable) X. Obtain N = 100 iid samples $x_1, x_2, \ldots$ of X uniformly at random between 1 and 10, and then obtain the corresponding y values as the natural logarithm of x plus Gaussian noise (mean 0, standard deviation 0.1), with different points having different amounts of noise.
```
import pandas as pd
import numpy as np
np.random.seed(44)
N = 100
X = np.random.uniform(low=1,high=10,size=N)
X
```
#### Step2: Data Preparation of y
Gaussian noise (mean 0, standard deviation 0.1) with different points having different amounts of noise.
Note: Gaussian is just the normal distribution, so `numpy.random.normal` with `size=100` gives 100 noise samples.
```
np.random.seed(44)
noise_list = np.random.normal(loc=0,scale=0.1,size=100)
noise_list
y = list()
for x_i, noise_i in zip(X, noise_list):
# natural logarithm of x is just np.log where as log10 is np.log10
y_i = np.log(x_i) + noise_i
y.append(y_i)
print(y)
```
#### Step3 : KNN Regression Modeling
Now use K-NN regression to obtain $\hat{y}$ values (estimates of y) at x-values of 1, 3, 5, 7, and 9 for each of the following schemes:
- cases 1, 2, 3: the K neighbors contribute equally (separately for K = 1, 3, 50)
- cases 4, 5, 6: each of the K neighbors has an influence that is inversely proportional to the distance from the point (separately for K = 1, 3, 50)
- case 7: all the N points contribute, with each contribution proportional to $e^{-\frac{1}{2}d^2}$, where d represents distance.
While I recommend that you write all (most) of the code from scratch without using off-the-shelf packages (we learn best when we write code to implement algorithms from scratch), you may use packages, including the ones where K-NN regression is available as a ready-to-use function. E.g., you may use numpy, scipy, sklearn
(sklearn.neighbors.KNeighborsRegressor may come in handy), matplotlib, and seaborn. There will be no penalty for using packages.
```
from sklearn.neighbors import KNeighborsRegressor
# convert shapes and to numpy to give as model input
X_numpy = np.asarray(X).reshape(-1,1) # use np.delete in X if required
y_numpy = np.asarray(y).reshape(-1,1)
X_test = [1,3,5,7,9] # Should we do np.delete? change this if required
# case1:the K neighbors contribute equally
uniform_k_1_nn_reg_model_class = KNeighborsRegressor(n_neighbors=1, weights='uniform')
uniform_k_1_nn_reg_model_class.fit(X_numpy,y_numpy)
# case2:the K neighbors contribute equally
uniform_k_3_nn_reg_model_class = KNeighborsRegressor(n_neighbors=3, weights='uniform')
uniform_k_3_nn_reg_model_class.fit(X_numpy,y_numpy)
# case3:the K neighbors contribute equally
uniform_k_50_nn_reg_model_class = KNeighborsRegressor(n_neighbors=50, weights='uniform')
uniform_k_50_nn_reg_model_class.fit(X_numpy,y_numpy)
# case4: each of the K neighbors has an influence that is inversely proportional to the distance from the point
distance_k_1_nn_reg_model_class = KNeighborsRegressor(n_neighbors=1, weights='distance')
distance_k_1_nn_reg_model_class.fit(X_numpy,y_numpy)
# case5: each of the K neighbors has an influence that is inversely proportional to the distance from the point
distance_k_3_nn_reg_model_class = KNeighborsRegressor(n_neighbors=3, weights='distance')
distance_k_3_nn_reg_model_class.fit(X_numpy,y_numpy)
# case6: each of the K neighbors has an influence that is inversely proportional to the distance from the point
distance_k_50_nn_reg_model_class = KNeighborsRegressor(n_neighbors=50, weights='distance')
distance_k_50_nn_reg_model_class.fit(X_numpy,y_numpy)
# case7: all the N points contribute, with each contribution proportional to e^(-d^2/2), where d represents distance.
def custom_function(array_distances:list)->list:
"""
Returns:
(array of weights)
"""
output_weights = []
for dist in array_distances:
power_value = -1 * 0.5 * dist * dist
output_weights.append(np.exp(power_value))
# normalizing the weights to sum up to 1
return output_weights/np.sum(output_weights)
# N= 100 is defined in the data preparation
k_all_N_nn_reg_model_class = KNeighborsRegressor(n_neighbors=N, weights=custom_function)
k_all_N_nn_reg_model_class.fit(X_numpy,y_numpy)
```
#### Steps 4 and 5: Obtain predictions for the test set for each of the seven cases above and plot the closest neighbor
The next three cells define helper functions that are reused below.
```
def get_and_print_predictions(model_class_object, x_test_values):
# Print the numerical values of the (x, y_hat) pairs
y_hat_output = [model_class_object.predict([[xi]]) for xi in x_test_values]
predictions_x_y_hat = []
for x_pred, y_pred in zip(x_test_values, y_hat_output):
# (x, y^)
current_prediction = (x_pred, y_pred[0][0])
print (current_prediction)
predictions_x_y_hat.append(current_prediction)
return predictions_x_y_hat
# Also, plot the (x', y') and (x, y_hat) points for each of these seven cases, where x' is the point
# (out of the 100 sample points) closest to x and y' is the y-value of x'
# Reference: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor.kneighbors
def get_x_y_prime_for_plot(model_class_obj, preds_x_y_hat):
x_values = []
y_hat_values = []
x_prime_values = []
y_prime_values = []
for x, y_hat in preds_x_y_hat:
print("Extracting closest neighbor for x=", x, ", y_hat=", y_hat)
# n_neighbors = 1, because we need the closest neighbor
closest_neighbor_info = model_class_obj.kneighbors(X=[[x]], n_neighbors=1, return_distance=True)
closest_neighbor_distance = closest_neighbor_info[0][0][0]
closest_neighbor_index_in_X = closest_neighbor_info[1][0][0]
# the closest neighbor gives both x_prime and its corresponding y_prime
x_prime = X[closest_neighbor_index_in_X]
y_prime = y[closest_neighbor_index_in_X]
print("Closest neighbor x_prime and its corresponding y_prime: (x_prime, y_prime) =", (x_prime, y_prime))
# gather the below values for scatter plot
x_values.append(x)
y_hat_values.append(y_hat)
x_prime_values.append(x_prime)
y_prime_values.append(y_prime)
return x_values, y_hat_values, x_prime_values, y_prime_values
import matplotlib.pyplot as plt
def scatter_plot_closest_neighbor(model_class_obj, preds_x_y_hat, title: str):
x_values, y_hat_values, x_prime_values, y_prime_values = get_x_y_prime_for_plot(model_class_obj, preds_x_y_hat)
plt.figure(figsize=(15, 7), dpi=80)
plt.scatter(x_values, y_hat_values, marker="*", color=['red','green', 'black', 'orange', 'blue'])
plt.scatter(x_prime_values, y_prime_values, marker="^", color=['red','green', 'black', 'orange', 'blue'])
plt.xlabel("X values or X Prime Values")
plt.ylabel("Y hat values or Y Prime Values")
plt.title(title)
plt.show()
# case1: predictions
uniform_K_1_predictions_x_y_hat = get_and_print_predictions(uniform_k_1_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case1 plot
scatter_plot_closest_neighbor(uniform_k_1_nn_reg_model_class, uniform_K_1_predictions_x_y_hat,
title="Case - 1: Scatter plot for identifying closest neighbor for each X_predict")
# case2: predictions
uniform_K_3_predictions_x_y_hat = get_and_print_predictions(uniform_k_3_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case2 plot
scatter_plot_closest_neighbor(uniform_k_3_nn_reg_model_class, uniform_K_3_predictions_x_y_hat,
title="Case - 2: Scatter plot for identifying closest neighbor for each X_predict")
# case3: predictions
uniform_K_50_predictions_x_y_hat = get_and_print_predictions(uniform_k_50_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case3 plot
scatter_plot_closest_neighbor(uniform_k_50_nn_reg_model_class, uniform_K_50_predictions_x_y_hat,
title="Case - 3: Scatter plot for identifying closest neighbor for each X_predict")
# case4: predictions
distance_K_1_predictions_x_y_hat = get_and_print_predictions(distance_k_1_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case4 plot
scatter_plot_closest_neighbor(distance_k_1_nn_reg_model_class, distance_K_1_predictions_x_y_hat,
title="Case - 4: Scatter plot for identifying closest neighbor for each X_predict")
# case5: predictions
distance_K_3_predictions_x_y_hat = get_and_print_predictions(distance_k_3_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case5 plot
scatter_plot_closest_neighbor(distance_k_3_nn_reg_model_class, distance_K_3_predictions_x_y_hat,
title="Case - 5: Scatter plot for identifying closest neighbor for each X_predict")
# case6: predictions
distance_K_50_predictions_x_y_hat = get_and_print_predictions(distance_k_50_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case6 plot
scatter_plot_closest_neighbor(distance_k_50_nn_reg_model_class, distance_K_50_predictions_x_y_hat,
title="Case - 6: Scatter plot for identifying closest neighbor for each X_predict")
# case7: predictions
k_all_N_predictions_x_y_hat = get_and_print_predictions(k_all_N_nn_reg_model_class, X_test)
print("Done printing predictions")
# will be used in the graph
# case7 plot
scatter_plot_closest_neighbor(k_all_N_nn_reg_model_class, k_all_N_predictions_x_y_hat,
title="Case - 7: Scatter plot for identifying closest neighbor for each X_predict")
# KNeighborsRegressor does not store a weight array, so re-evaluate the custom
# weight function on some sample distances to verify normalization
np.sum(custom_function(np.array([0.5, 1.0, 2.0]))) # all weights add up to 1, verified
```
**Conclusion**:
As shown above, we performed KNN regression for the seven cases and obtained the corresponding scatter plot of the closest neighbor for each test point. We leveraged scikit-learn's `KNeighborsRegressor` model class throughout.
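The whole exercise can be condensed into a minimal, self-contained sketch on synthetic data (the sine target and all names here are illustrative assumptions, not part of the assignment data), contrasting the two weighting schemes:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, size=(100, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=100)

# the seven-case pattern in miniature: uniform vs distance weighting
for weights in ("uniform", "distance"):
    model = KNeighborsRegressor(n_neighbors=3, weights=weights)
    model.fit(X, y)
    y_hat = model.predict([[5.0]])
    print(weights, y_hat.shape)
```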
```
import pandas as pd
# ERROR with 'TOTPkg'?
cols = ['NO','SUB','YEAR','FOREST_COVER','PCP','UPSTRM_AREAHA','SNOWMELT','FLOW_INcms','FLOW_OUTcms','EVAPcms','TLOSScms','SED_INtons','SED_OUTtons','SEDCONCmgL','ORGN_INkg','ORGN_OUTkg','ORGP_INkg','ORGP_OUTkg','NO3_INkg','NO3_OUTkg','NH4_INkg','NH4_OUTkg','NO2_INkg','NO2_OUTkg','MINP_INkg','MINP_OUTkg','CHLA_INkg','CHLA_OUTkg','CBOD_INkg','CBOD_OUTkg','DISOX_INkg','DISOX_OUTkg','TOTNkg']
predictands = [cols[i] for i in [8,9,10,12,13,15,17,19,21,23,25,27,29,31,32]]
print(len(predictands))
print(predictands)
# Read the original results from the SWAT model.
df = pd.read_table('data/regression_rch.txt',
header=None,
delim_whitespace=True,
names=cols)
print(df.shape)
pd.set_option('display.max_columns', 500)
print(df.describe())
df
df_predictands = df[df.columns.intersection(predictands)]
df_areaha = df['UPSTRM_AREAHA'].copy()
df_predictands
# Multiply each column in the predictands by
df_predictands = df_predictands.multiply(df_areaha, axis="index")
# Need to include the subbasin for later sorting.
df_predictands['SUB'] = df['SUB']
predictands_basin = []
for basin in range(1,32):
# Select by basin.
x = df_predictands[df_predictands['SUB'] == basin]
# Drop the basin information.
x = x.drop(['SUB'], axis=1)
x = x.reset_index(drop=True)
# In effect, we don't care about the Year or Measurement NO.
predictands_basin.append(x)
print(len(predictands_basin))
print(predictands_basin[0].shape)
# Sum the outputs from each basin.
df_predictands_sum = predictands_basin[0]
for basin in range(1,31):
df_predictands_sum = df_predictands_sum + predictands_basin[basin]
df_predictands_sum
# We do the same per-basin split for predictors.
predictors_basin = []
for basin in range(1,32):
# Select by basin.
x = df[df.columns.intersection(['FOREST_COVER', 'PCP'] + ['SUB'])]
x = x[x['SUB'] == basin]
# Drop the basin information.
x = x.drop(['SUB'], axis=1)
x.columns = ['FOREST_COVER' + str(basin), 'PCP' + str(basin)]
# We need to reset the index for the later concat command.
x = x.reset_index(drop=True)
# In effect, we don't care about the Year or Measurement NO.
predictors_basin.append(x)
print(len(predictors_basin))
print(predictors_basin[1].shape)
df_predictors_final = pd.concat(predictors_basin, axis=1)
df_predictors_final.describe()
t = df_predictors_final.mean(axis=0)
print(dict(t))
print(df_predictors_final.shape, df_predictands_sum.shape)
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit
coeff_dict = {c: [] for c in df_predictors_final.columns}
coeff_dict
tscv = TimeSeriesSplit(n_splits=5)
rf_accuracies = {}
for s in predictands:
# Cross validation scores.
y = df_predictands_sum[s]
# It might help to shuffle the X,y for fitting a linear regression.
scores_linear = cross_val_score(LinearRegression(), df_predictors_final, y, cv=tscv.split(df_predictors_final))
print(s)
print('linear:')
print(scores_linear)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores_linear.mean(), scores_linear.std() * 2))
scores_rf = cross_val_score(RandomForestRegressor(), df_predictors_final, y, cv=tscv.split(df_predictors_final))
regr = RandomForestRegressor()
regr.fit(df_predictors_final, y)
print(regr.feature_importances_)
for i in range(0, len(regr.feature_importances_)):
cname = df_predictors_final.columns[i]
coeff_dict[cname].append(regr.feature_importances_[i])
print('random forest:')
print(scores_rf)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores_rf.mean(), scores_rf.std() * 2))
rf_accuracies[s] = [scores_rf.mean(), scores_rf.std()]
print('\n')
df_accuracies = pd.DataFrame(rf_accuracies, ['accuracy', 'standard deviation'])
df_accuracies
df_accuracies.to_csv('results/regression_accuracies.csv')
coeff_dict
df_coeffs = pd.DataFrame(coeff_dict, predictands)
df_coeffs
#df_coeffs.to_csv('results/regression_coefficients.csv')
for s in predictands:
# Take the last row as a test year, this should be 2011.
t = df_predictors_final.tail(1).copy()
# Setup RF model.
y = df_predictands_sum[s]
# Recreate the RF model outside of cross validation.
regr = RandomForestRegressor()
regr.fit(df_predictors_final, y)
# Predictions.
p1 = regr.predict(t)
# Decrease forest cover in all basins by 30%.
d = 30 # percentage points to decrease
for basin in range(1,32):
t['FOREST_COVER'+str(basin)] = t['FOREST_COVER'+str(basin)].apply(lambda i: i-d)
p2 = regr.predict(t)
# Increase precipitation in all basins by 10%.
# Recopy test year.
t = df_predictors_final.tail(1).copy()
dpcp = 0.10 # fraction to increase
for basin in range(1,32):
t['PCP'+str(basin)] = t['PCP'+str(basin)].apply(lambda i: i+i*dpcp)
p3 = regr.predict(t)
# Chop down 90% of the trees in basin 13.
# Recopy test year.
t = df_predictors_final.tail(1).copy()
dfc = 0.9
f = t['FOREST_COVER13']
t['FOREST_COVER13'] = t['FOREST_COVER13'] - dfc*f
p4 = regr.predict(t)
print('{0} with regular forest covers, predicted {0} value: {1}'.format(
s, p1[0]))
print('{0} with forest covers decreased by {1}, predicted {0} value: {2}'.format(
s, d, p2[0]))
print('{0} with overall precipitation increased by {1} percent, predicted {0} value: {2}'.format(
s, dpcp, p3[0]))
print('{0} with forest cover in basin 13 decreased by {1} percent, predicted {0} value: {2}'.format(
s, dfc, p4[0]))
print('\n')
ol = []
for s in predictands:
y = df_predictands_sum[s]
ol.append(y)
df_outputs = pd.DataFrame(ol)
df_outputs = df_outputs.transpose()
df_outputs.columns
df_predictors_final.shape
df_outputs.shape
# Exports
# df_predictors_final.to_csv("data/features.csv", index=False)
# df_outputs.to_csv("data/outputs.csv", index=False)
```
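The area-scaling and per-basin summation above can be illustrated with a tiny toy DataFrame (hypothetical values, not SWAT output). It also shows why the accumulation loop needs an assignment: a bare `df_a + df_b` statement discards its result.

```python
import pandas as pd

# toy version of the per-basin aggregation pattern used above
df = pd.DataFrame({"SUB": [1, 1, 2, 2],
                   "FLOW_OUTcms": [1.0, 2.0, 3.0, 4.0],
                   "UPSTRM_AREAHA": [10.0, 10.0, 5.0, 5.0]})
scaled = df[["FLOW_OUTcms"]].multiply(df["UPSTRM_AREAHA"], axis="index")
scaled["SUB"] = df["SUB"]
per_basin = [scaled[scaled["SUB"] == b].drop(columns="SUB").reset_index(drop=True)
             for b in (1, 2)]
total = per_basin[0].copy()
for part in per_basin[1:]:
    total = total + part  # assignment required: `total + part` alone is a no-op
print(total["FLOW_OUTcms"].tolist())  # [25.0, 40.0]
```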
# Design Patterns Project
# Network Training
This implementation aims to build the training of the network using résumé data sourced from the Lattes platform to classify demands.
```
import pandas as pd
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, f1_score
from sklearn.metrics import classification_report
from xgboost import XGBClassifier
from sklearn.utils import shuffle
pd.set_option('display.max_columns', 131)
pd.set_option('float_format', '{:f}'.format)
pd.set_option('display.max_colwidth', None)
```
## Index:
* [1. Data Loading](#first-bullet)
* [2. Feature Creation](#second-bullet)
* [3. Network Training](#4th-bullet)
* [4. Confusion Matrix](#5th-bullet)
## 1. Data Loading <a class="anchor" id="first-bullet"></a>
The information used for training will be loaded from the file dados_treino.csv, a file created from the Lattes data
```
df = pd.read_csv("dados_treino.csv",sep=';')
df.head()
```
## 2. Feature Creation <a class="anchor" id="second-bullet"></a>
```
count_words = df.Descricao.str.split(expand=True).stack().value_counts().index.tolist()
len(count_words)
colunas = count_words
df = df[['Descricao','Class']]
# Creating the columns in the dataframe.
for col in colunas:
df[col] = 0
for x in colunas:
df[x] = df['Descricao'].str.count(x)
df1 = df[df['Class'] == 0]
df2 = df[df['Class'] == 1]
df3 = df[df['Class'] == 2]
a, b, c = np.split(df1, [int(.2*len(df1)), int(.5*len(df1))])
d, e, f = np.split(df2, [int(.2*len(df2)), int(.5*len(df2))])
g, h, i = np.split(df3, [int(.2*len(df3)), int(.5*len(df3))])
frames_treino = [c,f,i]
frames_validacao = [a,d,g]
frames_teste = [b,e,h]
df = pd.concat(frames_treino)
df2 = pd.concat(frames_teste)
df3 = pd.concat(frames_validacao)
df = shuffle(df)
df2 = shuffle(df2)
df3 = shuffle(df3)
x_train = df.copy()
y_train = df['Class']
del x_train['Descricao']
del x_train['Class']
x_test = df2.copy()
y_test = df2['Class']
del x_test['Class']
del x_test['Descricao']
x_val = df3.copy()
y_val = df3['Class']
del x_val['Class']
del x_val['Descricao']
def gerawords(desc):
return desc.lower().split()
codtotal={}
wordtotal={}
for d in df['Descricao']:
for w in gerawords(d):
if w not in wordtotal:
wordtotal[w] = len(wordtotal)
for d in df['Class']:
if d not in codtotal:
codtotal[d] = len(codtotal)
df3.to_csv("dados_teste.csv",sep=',',index=False)
print(x_train.shape)
print(x_test.shape)
dfx_train = x_train
dfy_train = y_train
dfx_val = x_val
dfy_val = y_val
dfx_test = x_test
dfy_test = y_test
x_train = x_train[count_words]
x_val = x_val[count_words]
x_test = x_test[count_words]
x_train = np.array(x_train)
x_val = np.array(x_val)
x_test = np.array(x_test)
y_train = np.array(y_train)
y_val = np.array(y_val)
y_test = np.array(y_test)
X = x_train
Y = [row for row in y_train]
X_val = x_val
Y_val = [row for row in y_val]
X_test = x_test
Y_test = [row for row in y_test]
import collections
c = collections.Counter(Y_test)
c
```
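The `np.split` calls above carve each class into 20% / 30% / 50% validation / test / training portions. A toy example (with a 10-row frame) makes the index arithmetic concrete:

```python
import numpy as np
import pandas as pd

df_toy = pd.DataFrame({"x": range(10)})
# split points at 20% and 50% of the length -> pieces of 2, 3 and 5 rows
a, b, c = np.split(df_toy, [int(.2 * len(df_toy)), int(.5 * len(df_toy))])
print(len(a), len(b), len(c))  # 2 3 5
```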
## 3. Network Training <a class="anchor" id="4th-bullet"></a>
```
# XGBoost CV model
model = XGBClassifier(learning_rate =0.019,
n_estimators=100,
max_depth=5,
min_child_weight=1,
gamma=20,
subsample=0.6,
colsample_bytree=0.95,
objective= 'multi:softprob',num_class=3, nthread=4, scale_pos_weight=1, seed=27)
model.fit(X, Y, eval_metric='merror', eval_set=[(X_val, Y_val)], verbose=True,early_stopping_rounds=100)
y_pred = model.predict_proba(x_test)
# take the class with the highest predicted probability (proper multiclass decision)
predictions = np.argmax(y_pred, axis=1)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
```
## 4. Confusion Matrix <a class="anchor" id="5th-bullet"></a>
```
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
import itertools
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
cnf_matrix = confusion_matrix(predictions, y_test, labels=[0, 1, 2])
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2],
title='Confusion Matrix')
print("Accuracy: %.2f%%" % (accuracy * 100.0))
```
## Save the Model
```
import pickle
pickle.dump(model, open("modelo.bin", "wb"))
loaded_model = pickle.load(open("modelo.bin", "rb"))
from xgboost import plot_tree
plot_tree(model)
```
# 1A.data - Decorrelation of random variables
We build correlated Gaussian random variables and seek to construct decorrelated variables using matrix computations.
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
This lab applies matrix computation to vectors of [correlated](http://fr.wikipedia.org/wiki/Covariance) normal variables; see also the [singular value decomposition](https://fr.wikipedia.org/wiki/D%C3%A9composition_en_valeurs_singuli%C3%A8res).
## Building a dataset
### Q1
The first step is to build correlated normal random variables in an $N \times 3$ matrix, in [numpy](http://www.numpy.org/) format. The following program is one way to build such a set using linear combinations. Complete the lines containing ``....``.
```
import random
import numpy as np
def combinaison () :
x = random.gauss(0,1) # generates a random number
y = random.gauss(0,1) # according to a normal law
z = random.gauss(0,1) # with zero mean and variance 1
x2 = x
y2 = 3*x + y
z2 = -2*x + y + 0.2*z
return [x2, y2, z2]
# mat = [ ............. ]
# npm = np.matrix ( mat )
```
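One possible completion of the ``....`` lines, shown only as a sketch (the exercise asks you to fill them in yourself): draw $N$ rows with ``combinaison`` and stack them into a matrix.

```python
import random
import numpy as np

def combinaison():
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    z = random.gauss(0, 1)
    return [x, 3 * x + y, -2 * x + y + 0.2 * z]

N = 1000
mat = [combinaison() for _ in range(N)]  # N rows of 3 correlated normals
npm = np.matrix(mat)
print(npm.shape)  # (1000, 3)
```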
### Q2
From the matrix ``npm``, we want to build the correlation matrix.
```
npm = ... # see the previous question
t = npm.transpose ()
a = t * npm
a /= npm.shape[0]
```
What does the matrix ``a`` correspond to?
### Matrix correlations
### Q3
Build the correlation matrix from the matrix ``a``. If needed, you may use the [copy](https://docs.python.org/3/library/copy.html) module.
```
import copy
b = copy.copy (a) # replace this line with b = a
b[0,0] = 44444444
print(b) # and compare the result here
```
### Q4
Build a function that takes the matrix ``npm`` as argument and returns the correlation matrix. This function will be used later to verify that we have indeed managed to decorrelate.
```
def correlation(npm):
# ..........
return "....."
```
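One way to implement Q4, shown as a sketch (try it yourself first): center the columns, form the covariance, then divide by the outer product of the standard deviations.

```python
import numpy as np

def correlation(npm):
    m = np.asarray(npm, dtype=float)
    centered = m - m.mean(axis=0)
    cov = centered.T @ centered / m.shape[0]
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

# the diagonal of a correlation matrix is always 1
c = correlation(np.random.randn(500, 3))
print(np.allclose(np.diag(c), 1.0))  # True
```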
## A bit of mathematics
Now for some mathematics. Let $M$ denote the matrix ``npm``. $V=\frac{1}{n}M'M$ is the matrix of *covariances*, and it is necessarily symmetric. It is diagonal if and only if the normal variables are independent. Like any symmetric matrix, it is diagonalizable, so we can write:
$$\frac{1}{n}M'M = P \Lambda P'$$
where $P$ satisfies $P'P= PP' = I$. The matrix $\Lambda$ is diagonal, and one can show that all its eigenvalues are positive ($\Lambda = \frac{1}{n}P'M'MP = \frac{1}{n}(MP)'(MP)$).
We then define the square root of the matrix $\Lambda$ by:
$$\begin{array}{rcl} \Lambda &=& diag(\lambda_1,\lambda_2,\lambda_3) \\ \Lambda^{\frac{1}{2}} &=& diag\left(\sqrt{\lambda_1},\sqrt{\lambda_2},\sqrt{\lambda_3}\right)\end{array}$$
Next we define the square root of the matrix $V$:
$$V^{\frac{1}{2}} = P \Lambda^{\frac{1}{2}} P'$$
We check that $\left(V^{\frac{1}{2}}\right)^2 = P \Lambda^{\frac{1}{2}} P' P \Lambda^{\frac{1}{2}} P' = P \Lambda^{\frac{1}{2}}\Lambda^{\frac{1}{2}} P' = P \Lambda P' = V$.
## Computing the square root
### Q6
The [numpy](http://www.numpy.org/) module provides a function that returns the matrix $P$ and the vector of eigenvalues $L$:
```
L,P = np.linalg.eig(a)
```
Check that $P'P=I$. Is it rigorously equal to the identity matrix?
### Q7
What does the following instruction do: ``np.diag(L)``?
### Q8
Write a function that computes the square root of the matrix $\frac{1}{n}M'M$ (recall that $M$ is the matrix ``npm``). See also [Square root of a matrix](https://fr.wikipedia.org/wiki/Racine_carr%C3%A9e_d%27une_matrice).
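A sketch for Q8, using the eigendecomposition from Q6 (``np.linalg.eigh`` is used here since the matrix is symmetric; the function name is just a suggestion):

```python
import numpy as np

def racine_carree(V):
    # V = P diag(L) P'  =>  V^(1/2) = P diag(sqrt(L)) P'
    L, P = np.linalg.eigh(V)
    return P @ np.diag(np.sqrt(L)) @ P.T

V = np.array([[2.0, 1.0], [1.0, 2.0]])
R = racine_carree(V)
print(np.allclose(R @ R, V))  # True
```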
## Decorrelation
``np.linalg.inv(a)`` returns the inverse of the matrix ``a``.
### Q9
Each row of the matrix $M$ represents a vector of three correlated variables. The covariance matrix is $V=\frac{1}{n}M'M$. Compute (mathematically) the covariance matrix of the matrix $N=M V^{-\frac{1}{2}}$.
### Q10
Check numerically.
## Simulating correlated variables
### Q11
Based on the previous result, propose a method to simulate a vector of variables correlated according to a covariance matrix $V$, starting from a vector of independent normal variables.
### Q12
Propose a function that creates this sample:
```
def simulation (N, cov) :
# simulates a sample of correlated variables
# N : number of variables
# cov : covariance matrix
# ...
return M
```
### Q13
Check that your sample has a correlation matrix close to the one chosen to simulate it.
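A sketch for Q12 and Q13 under the method suggested by Q9-Q11 (all names here are suggestions): multiply independent standard normals by $V^{\frac{1}{2}}$, then compare the empirical covariance with the target.

```python
import numpy as np

def simulation(N, cov):
    # independent N(0,1) draws, then correlate them with cov^(1/2)
    L, P = np.linalg.eigh(cov)
    sqrt_cov = P @ np.diag(np.sqrt(L)) @ P.T
    Z = np.random.randn(N, cov.shape[0])
    return Z @ sqrt_cov

cov = np.array([[1.0, 0.8], [0.8, 1.0]])
M = simulation(100_000, cov)
print(np.round(np.cov(M.T), 2))  # close to the target covariance
```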
# Random Graphs with Arbitrary Degree Distributions and their Applications
> M. E. J. Newman, S. H. Strogatz, and D. J. Watts
- toc:true
- branch: master
- badges: false
- comments: true
- categories: [graph-theory, random-graphs]
## Key Ideas from the [Paper](https://arxiv.org/abs/cond-mat/0007235)
Certain real world networks (e.g. a community of people) could be abstracted by a Random Graph where vertex degree (no. of edges incident on the vertex) follows some distribution, e.g. flip a coin to decide if two nodes should be connected. Such abstractions could be used to study phenomenon such as passage of disease through a community. Studies of Random Graphs so far assumed vertex degree to have the Poisson distribution, but this paper studies graphs with arbitrary degree distributions.
## Random Graphs with Poisson Distributed Vertex Degree
Studies so far assumed that presence of an edge between any two nodes is determined by a coin flip with probability of $1$ being $p$. Further, presence of an edge betwen two nodes is assumed independent of presence of other edges in the network. These assumptions imply that vertex degree has a Poisson distribution (as we will see).
The authors state:
> If there are $N$ vertices in the graph, and each is connected to an average of $z$ edges, then it is trivial to show that $p=z/(N-1)$.
`Here is my short proof for this statement.`
Let $E$ be the set of edges. We can say:
$$
\mathbb{E}[|E|] = \frac{zN}{2} \\
\implies \frac{1}{2}\sum_{i=1}^Np(N-1) = \frac{zN}{2} \\
\implies p = \frac{z}{N-1} \\
\implies p \sim \frac{z}{N} \tag{for large $N$ (Eq. 1)}
$$
Given the independence assumption, the distribution of degree of a vertex, $p_k$, is thus Binomial, [which for large $N$](https://en.wikipedia.org/wiki/Poisson_limit_theorem) can be approximated by a Poisson distribution.
$$
p_k = {N\choose k}p^k(1-p)^{N-k} \sim \frac{z^ke^{-z}}{k!} \tag{Eq. 2}
$$
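A quick numerical check of the approximation in Eq. 2 (the values of $N$, $z$ and $k$ below are arbitrary): for large $N$ the Binomial and Poisson probabilities agree to several decimal places.

```python
import math

N, z, k = 10_000, 5.0, 3
p = z / (N - 1)
binom = math.comb(N, k) * p**k * (1 - p)**(N - k)      # exact Binomial p_k
poisson = z**k * math.exp(-z) / math.factorial(k)      # Poisson approximation
print(binom, poisson)  # both close to 0.1404
```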
## Introducing Generating Functions
The authors express the probability distribution of a discrete random variables (e.g. degree of a vertex) as what is called a Generating Function{% fn 2 %}. They then exploit properties of Generating Functions to derive some interesting insights, e.g. finding the distribution of sums of random variables.
A sequence of numbers can be encapsulated into a function (the Generating Function), with the property that when that function is expressed as the sum of a power series ($a_0 + a_1x + a_2x^2 + \cdots$), the coefficients of this series ($a_0,a_1,a_2,\cdots$) give us the original number sequence{% fn 2 %}. E.g. For a graph with Poisson distributed vertex degree, $p_k$ (the probability a vertex has degree $k$) is a sequence whose Generating Function can be written as:
$$
G_0(x) = \sum_{k=0}^N{N\choose k}p^k(1-p)^{N-k}x^k \\
= (1-p+px)^N \\
\sim e^{z(x-1)} \tag{for $N\to\infty$ (Eq. 3)}
$$
Several properties of the Generating Function $G_0(x) = \sum_{k=0}^Np_kx^k$ come handy.
* **Moments**: The average degree can be written as:
$$
z = \sum_kkp_k \\
= G'_0(1) \tag{First Moment (Eq. 4)}
$$
For the case of Poisson distributed vertex degree, we have $G'_0(1) = z$.
* **Powers**: The authors note (with my [annotations]):
> If the distribution of a property $k$ [such as vertex degree] of an object [vertex of a graph] is generated by a given Generating Function [$G_0(x)$], then the distribution of the total $k$ summed over $m$ independent realizations of the object [sum of degrees of $m$ randomly chosen vertices] is generated by the $m^{th}$ power of that Generating Function [$G_0(x)^m$].
So the coefficient of $x^k$ in $G_0(x)^m$ is the probability that the sum of degrees of $m$ randomly chosen vertices is $k$. With this we can start talking about distribution of sums of degrees of vertices.
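The Powers property can be verified numerically: multiplying generating functions corresponds to convolving their coefficient sequences, so for a toy degree distribution (values assumed for illustration) the distribution of the sum of $m=2$ independent degrees is just a self-convolution.

```python
import numpy as np

p = np.array([0.1, 0.4, 0.3, 0.2])   # toy p_k for k = 0..3
p_sum = np.convolve(p, p)            # coefficients of G_0(x)^2
print(p_sum.sum())                   # 1.0: still a probability distribution
print(p_sum[2])                      # P(k1 + k2 = 2) = 2*0.1*0.3 + 0.4**2 = 0.22
```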
## Degree of a Vertex arrived at by following a random Edge
Intuitively, the distribution of degree of such a vertex will be different from that of a randomly chosen vertex ($p_k$) because by following a random edge, we are more likely to end up on a vertex with a high degree than at one with a relatively lower degree. And so the probability distribution of degree of a vertex chosen as such, would be skewed towards higher degree values. Formally, let $k_i$ be the degree of vertex $V_i$, then the probability of arriving at $V_i$ will be $\frac{k_i}{\sum_{j=0}^Nk_j} \propto k_i$. And so the probability that a vertex chosen by following a random edge has degree $k$ is proportional to the probability that this vertex is chosen (which is $\propto k$) $\times$ probability this vertex has degree $k$ (which is $p_k$).
And so the authors note:
>... the vertex therefore has a probability distribution of degree proportional to $kp_k$.
And so for the coefficients ($p'_k \propto kp_k$) of the Generating Function to sum to 1, we need to normalize: $\frac{\sum_kkp_kx^k}{\sum_kkp_k} = x\frac{G'_0(x)}{G'_0(1)}$.
## Distribution of Number of Neighbors $m$ Steps Away
Let's first look at neighbors one step away. We choose a random vertex and follow its edges to its immediate neighbors, and consider the distribution of degree of one such neighbor. The authors indicate that this distribution will be the same as when we arrive at the vertex by following a random edge. This wasn't clear to me at first, because in this case we are randomly choosing a _vertex_, not an _edge_. But then it struck me that vertices with high degrees are connected to more vertices than those with a low degree, and so are more likely to be arrived at when we pick a vertex randomly and visit its neighbors. The authors note, for neighbors one step away:
> ... the vertices arrived at each have the distribution of remaining outgoing edges generated by this function, less one power of $x$, to allow for the edge that we arrived along.
$$
G_1(x) = \frac{G'_0(x)}{G'_0(1)} = \frac{1}{z}G'_0(x) \tag{Eq. 5}
$$
Now let's consider the distribution of second-neighbors of a vertex $v$, i.e. the sum of the numbers of neighbors of the immediate neighbors of $v$. (Note that this sum excludes the immediate neighbors themselves). Let's first assume it is given that $v$ has $k$ immediate neighbors. Then the probability that it has $j$ second-neighbors, $p^{second}_{j|k}$, is the coefficient of $x^j$ in the Generating Function $G_1(x)^k$ (from the "Powers" property of Generating Functions). So we can get $p^{second}_{j}$ by simply marginalizing over the number of immediate neighbors of $v$. And thus the authors note:
> ... the probability distribution of second neighbors of the original vertex can be written as:
$$
\sum_{k}p_k[G_1(x)]^k = G_0(G_1(x)) \tag{Eq. 6}
$$
## Distribution of Component Sizes
Armed with the background knowledge above, the authors discuss distributions of interesting properties of random graphs, such as the size of connected components.
The authors assert the following and I'm unable to prove it thus far (but let's take it for granted for now):
> ... the chances of a connected component containing a close loop of edges goes as $N^{-1}$ which is negligible in the limit of large $N$.
The authors denote by $H_1(x)$:
> ... the generating function for the distribution of the sizes of components which are reached by choosing a random edge and following it to one of its ends.
Now if $q_k$ be the probability that the vertex $v$ we arrive at (by following a random edge) has $k$ other incident edges. Then the probability that this component (the one containing $v$) has size $j$, is given by:
$$
p^\text{component size}_j = q_0 \cdots \tag{Eq. 7}\\
+ q_1\times\text{(coefficient of }x^{j-1}\text{ in }H_1(x) \cdots \\
+ q_2\times\text{(coefficient of }x^{j-1}\text{ in }[H_1(x)]^2 \cdots \\
+ \text{and so on}
$$
And thus the authors assert:
> $H_1(x)$ must satisfy a self-consistency condition of the form
$$
H_1(x) = xq_0 + xq_1H_1(x) + xq_2[H_1(x)]^2 + \cdots \tag{Eq. 8}
$$
We know from results given before (see Eq. 5) that $q_k$ is just the coefficient of $x^k$ in $G_1(x)$, and so the self-consistency condition can also be written as:
$$
H_1(x) = G_1(H_1(x)) \tag{Eq. 9}
$$
Following a similar line of reasoning as I've shared in Eq. 7, if $H_0(x)$ is the Generating Function for the distribution of size of the component which contains a randomly chosen vertex, then we can show:
$$
H_0(x) = G_0(H_1(x)) \tag{Eq. 10}
$$
## Mean Component Size
The authors note that in practice Eq. 9 is complicated and rarely has a closed form solution for $H_1$ (given $G_1$). That said, we can still compute average component size using Eqs. 9 & 10. Using the Moments property of Generating Functions (see Eq. 4), let $\langle s\rangle$ denote the average size of the component to which a randomly chosen vertex belongs, then the authors note:
$$
\langle s\rangle = H'_0(1) \\
= G_0(H_1(1)) + G'_0(H_1(1))H'_1(1) \\
= 1 + G'_0(1)H'_1(1) \tag{Eq. 11}
$$
The last step follows from the fact that $G_0(1) = H_1(1) = 1$ (this is required to normalize the distributions representated by $G_0$ and $H_1$ - so that the coefficients in each of those series sum to 1).
From Eq. 9, the authors note:
$$
H'_1(1) = 1 + G'_1(1)H'_1(1) \tag{Eq. 12}
$$
And from Eqs. 11 & 12, after eliminating $H'_1(1)$, we get (here $z_1$ is the average degree of a vertex and $z_2$ is the average number of second neighbors):
$$
\langle s \rangle = 1 + \frac{G'_0(1)}{1 - G'_1(1)} \\
= 1 + \frac{z_1^2}{z_1-z_2} \tag{Eq. 13}
$$
Let's look at how $\langle s\rangle$ behaves versus $z_1$ and $z_2$. We can see from the plot below that for each value of $z_1$, as the value of $z_2$ starts to exceed $z_1$, the mean component size explodes. This phenomenon is called a "Phase Transition", at which point a "Giant Component" (a component of size $\Omega(N)$ {% fn 3 %}) appears in the graph. Phase Transitions (see Chapter 4, Foundations of Data Science by Profs. Blum, Hopcroft and Kannan {% fn 3 %}) are characterized by abrupt changes in a graph (such as the appearance of cycles) once the probability $p$ of there being an edge between two random vertices crosses a certain threshold.
```
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
fig, ax = plt.subplots()
for z1 in np.linspace(0.1,1.0, 10):
z2 = np.linspace(0.1, z1,10)
s = 1 + z1**2/(z1 - z2)
_ = ax.plot(z2, s)
ax.annotate(f"z1={z1:.2f}", (z1*0.9, s[-2]*1.05))
ax.set_xlabel("z2 (Mean No. of Second Neighbors)")
ax.set_ylabel(r"$\langle s \rangle$ (Mean Component Size)")
ax.set_title(r"Plot of $\langle s \rangle$ versus $z_1$, $z_2$", fontsize=14)
fig.set_size_inches(10,6)
```
## References
{{'[View on Arxiv](https://arxiv.org/abs/cond-mat/0007235)' | fndetail: 1}}
{{'Herbert S. Wilf, generatingfunctionology. View PDF on the author’s [website](https://www.math.upenn.edu/~wilf/gfologyLinked2.pdf)' | fndetail: 2}}
{{'Chapter 4, Foundations of Data Science, Avrim Blum, John Hopcroft, and Ravindran Kannan. [PDF from authors website.](https://www.cs.cornell.edu/jeh/book.pdf)' | fndetail: 3}}
# CME Session
### Goals
1. Search and download some coronagraph images
2. Load into Maps
3. Basic CME front enhancement
4. Extract CME front positions
5. Convert positions to height
6. Fit some simple models to height-time data
```
%matplotlib notebook
import warnings
warnings.filterwarnings("ignore")
import astropy.units as u
import numpy as np
from astropy.time import Time
from astropy.coordinates import SkyCoord
from astropy.visualization import time_support, quantity_support
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm, SymLogNorm
from scipy import ndimage
from scipy.optimize import curve_fit
from sunpy.net import Fido, attrs as a
from sunpy.map import Map
from sunpy.coordinates.frames import Heliocentric, Helioprojective
```
# LASCO C2
## Data Search and Download
```
c2_query = Fido.search(a.Time('2017-09-10T15:10', '2017-09-10T18:00'),
a.Instrument.lasco, a.Detector.c2)
c2_query
c2_results = Fido.fetch(c2_query);
c2_results
```
## Load into maps and plot
```
# results = ['/Users/shane/sunpy/data/22650296.fts', '/Users/shane/sunpy/data/22650294.fts', '/Users/shane/sunpy/data/22650292.fts', '/Users/shane/sunpy/data/22650297.fts', '/Users/shane/sunpy/data/22650295.fts', '/Users/shane/sunpy/data/22650290.fts', '/Users/shane/sunpy/data/22650293.fts', '/Users/shane/sunpy/data/22650291.fts', '/Users/shane/sunpy/data/22650298.fts', '/Users/shane/sunpy/data/22650289.fts']
c2_maps = Map(c2_results, sequence=True);
c2_maps.plot();
```
Check the polarisation and filter to make sure they don't change
```
[(m.exposure_time, m.meta.get('polar'), m.meta.get('filter')) for m in c2_maps]
```
Rotate the maps to the standard orientation so the pixel axes are aligned with the WCS axes
```
c2_maps = [m.rotate() for m in c2_maps];
```
# Running and Base Difference
The corona above $\sim 2 R_{Sun}$ is dominated by the F-corona (Fraunhofer corona), which is composed of photospheric radiation Rayleigh scattered off dust particles. It forms a continuous spectrum with the Fraunhofer absorption lines superimposed, and the radiation has a very low degree of polarisation.
There are a number of approaches to remove this; the most straightforward are:
* Running Difference $I(x,y)=I_i(x,y) - I_{i-1}(x,y)$
* Base Difference $I(x,y)=I_i(x,y) - I_{B}(x,y)$
* Background Subtraction $I(x,y)=I_i(x,y) - I_{BG}(x,y)$
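To make the first two schemes concrete, here is a minimal numpy sketch on a synthetic three-frame sequence (the frame values are made up, and the exposure-time normalization used in the cells below is omitted):

```python
import numpy as np

# Three synthetic 2x2 "frames" with a steadily brightening feature
frames = [np.full((2, 2), v, dtype=float) for v in (1.0, 3.0, 6.0)]

# Running difference: each frame minus the previous frame
run_diff = [frames[i] - frames[i - 1] for i in range(1, len(frames))]

# Base difference: each frame minus a fixed base frame (here the first)
base_diff = [f - frames[0] for f in frames[1:]]
```

Running difference highlights frame-to-frame motion, while base difference accumulates all change relative to the pre-event corona.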
We can create a new map using the data and metadata from other maps:
```
c2_bdiff_maps = Map([(c2_maps[i+1].data/c2_maps[i+1].exposure_time
- c2_maps[0].data/c2_maps[0].exposure_time, c2_maps[i+1].meta)
for i in range(len(c2_maps)-1)], sequence=True)
```
In a Jupyter notebook, sunpy has very nice preview functionality:
```
c2_bdiff_maps
```
## CME front
One technique to help extract the CME front is to create a space-time plot (J-map): we define a region of interest and then sum over the region to increase the signal-to-noise ratio.
```
fig, ax = plt.subplots(subplot_kw={'projection': c2_bdiff_maps[3]})
c2_bdiff_maps[3].plot(clip_interval=[1,99]*u.percent, axes=ax)
bottom_left = SkyCoord(0*u.arcsec, -200*u.arcsec, frame=c2_bdiff_maps[3].coordinate_frame)
top_right = SkyCoord(6500*u.arcsec, 200*u.arcsec, frame=c2_bdiff_maps[3].coordinate_frame)
c2_bdiff_maps[3].draw_quadrangle(bottom_left, top_right=top_right, axes=ax)
```
We are going to extract the data in the region above and then sum over the y-direction to obtain a 1-D plot of intensity versus pixel coordinate.
```
c2_submaps = []
for m in c2_bdiff_maps:
    # define the bottom-left and top-right coordinates for each map (should really define once and then transform)
bottom_left = SkyCoord(0*u.arcsec, -200*u.arcsec, frame=m.coordinate_frame)
top_right = SkyCoord(6500*u.arcsec, 200*u.arcsec, frame=m.coordinate_frame)
c2_submaps.append(m.submap(bottom_left, top_right=top_right))
c2_submaps[0].data.shape
```
Now we can create a space-time diagram by stacking these slices one after another.
```
c2_front_pix = []
def onclick(event):
global coords
ax.plot(event.xdata, event.ydata, 'o', color='r')
c2_front_pix.append((event.ydata, event.xdata))
fig, ax = plt.subplots()
ax.imshow(np.stack([m.data.mean(axis=0)/m.data.mean(axis=0).max() for m in c2_submaps]).T,
aspect='auto', origin='lower', interpolation='none', norm=SymLogNorm(0.1,vmax=2))
cid = fig.canvas.mpl_connect('button_press_event', onclick)
if not c2_front_pix:
c2_front_pix = [(209.40188156061873, 3.0291329045449533),
(391.58261749135465, 3.9464716142223724)]
pix, index = c2_front_pix[0]
index = round(index)
fig, ax = plt.subplots(subplot_kw={'projection': c2_bdiff_maps[index]})
c2_bdiff_maps[index].plot(clip_interval=[1,99]*u.percent, axes=ax)
pp = c2_submaps[index].pixel_to_world(*[pix,34/4]*u.pix)
ax.plot_coord(pp, marker='x', ms=20, color='k');
```
Extract the times and coordinates for later
```
c2_times = [m.date for m in c2_submaps[3:5] ]
c2_coords = [c2_submaps[i].pixel_to_world(*[c2_front_pix[i][0],34/4]*u.pix) for i, m in enumerate(c2_submaps[3:5])]
c2_times, c2_coords
```
# Lasco C3
## Data search and download
```
c3_query = Fido.search(a.Time('2017-09-10T15:10', '2017-09-10T19:00'),
a.Instrument.lasco, a.Detector.c3)
c3_query
```
Download
```
c3_results = Fido.fetch(c3_query);
```
Load into maps
```
c3_maps = Map(c3_results, sequence=True);
c3_maps
```
Rotate
```
c3_maps = [m.rotate() for m in c3_maps]
```
## Create base difference maps
```
c3_bdiff_maps = Map([(c3_maps[i+1].data/c3_maps[i+1].exposure_time.to_value('s')
- c3_maps[0].data/c3_maps[0].exposure_time.to_value('s'),
c3_maps[i+1].meta)
for i in range(0, len(c3_maps)-1)], sequence=True)
c3_bdiff_maps
```
We can use a median filter to reduce some of the noise and make the front easier to identify:
```
c3_bdiff_maps = Map([(ndimage.median_filter(c3_maps[i+1].data/c3_maps[i+1].exposure_time.to_value('s')
- c3_maps[0].data/c3_maps[0].exposure_time.to_value('s'), size=5),
c3_maps[i+1].meta)
for i in range(0, len(c3_maps)-1)], sequence=True)
```
## CME front
```
fig, ax = plt.subplots(subplot_kw={'projection': c3_bdiff_maps[9]})
c3_bdiff_maps[9].plot(clip_interval=[1,99]*u.percent, axes=ax)
bottom_left = SkyCoord(0*u.arcsec, -2000*u.arcsec, frame=c3_bdiff_maps[9].coordinate_frame)
top_right = SkyCoord(29500*u.arcsec, 2000*u.arcsec, frame=c3_bdiff_maps[9].coordinate_frame)
c3_bdiff_maps[9].draw_quadrangle(bottom_left, top_right=top_right,
axes=ax)
```
Extract region of data
```
c3_submaps = []
for m in c3_bdiff_maps:
bottom_left = SkyCoord(0*u.arcsec, -2000*u.arcsec, frame=m.coordinate_frame)
top_right = SkyCoord(29500*u.arcsec, 2000*u.arcsec, frame=m.coordinate_frame)
c3_submaps.append(m.submap(bottom_left, top_right=top_right))
c3_front_pix = []
def onclick(event):
global coords
ax.plot(event.xdata, event.ydata, 'o', color='r')
c3_front_pix.append((event.ydata, event.xdata))
fig, ax = plt.subplots()
ax.imshow(np.stack([m.data.mean(axis=0)/m.data.mean(axis=0).max() for m in c3_submaps]).T,
aspect='auto', origin='lower', interpolation='none', norm=SymLogNorm(0.1,vmax=2))
cid = fig.canvas.mpl_connect('button_press_event', onclick)
if not c3_front_pix:
c3_front_pix = [(75.84577056752656, 3.007459455920803),
(124.04923377098979, 3.9872981655982223),
(173.6704458922019, 5.039717520436931),
(216.20291342466945, 5.874394939791771),
(248.81113853289455, 6.854233649469189),
(287.0903593121153, 7.797782036565963),
(328.20507792683395, 8.995362681727254),
(369.3197965415526, 9.866330423662738),
(401.92802164977775, 10.991330423662738)]
pix, index = c3_front_pix[5]
index = round(index)
fig, ax = plt.subplots(subplot_kw={'projection': c3_bdiff_maps[index]})
c3_bdiff_maps[index].plot(clip_interval=[1,99]*u.percent, axes=ax)
pp = c3_submaps[index].pixel_to_world(*[pix,37/4]*u.pix)
ax.plot_coord(pp, marker='x', ms=20, color='r');
```
Extract times and coordinates for later
```
c3_times = [m.date for m in c3_submaps[3:12] ]
c3_coords = [c3_submaps[i].pixel_to_world(*[c3_front_pix[i][0],37/4]*u.pix) for i, m in enumerate(c3_submaps[3:12])]
```
# Coordinates to Heights
```
times = Time(np.concatenate([c2_times, c3_times]))
coords = c2_coords + c3_coords
heights_pos = np.hstack([(c.observer.radius * np.tan(c.Tx)) for c in coords])
heights_pos
heights_pos_error = np.hstack([(c.observer.radius * np.tan(c.Tx + 56*u.arcsecond*5)) for c in coords])
height_err = heights_pos_error - heights_pos
heights_pos, height_err
times.shape, height_err.shape
with Helioprojective.assume_spherical_screen(center=coords[0].observer):
heights_sph = np.hstack([np.sqrt(c.transform_to('heliocentric').x**2
+ c.transform_to('heliocentric').y**2
+ c.transform_to('heliocentric').z**2) for c in coords])
heights_sph
fig, axs = plt.subplots()
axs.errorbar(times.datetime, heights_pos.to_value(u.Rsun), yerr=height_err.to_value(u.Rsun), fmt='.')
axs.plot(times.datetime, heights_sph.to(u.Rsun), '+')
```
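The `heights_pos` computation above is the plane-of-sky height $h = D\tan\theta$, where $D$ is the observer-Sun distance (`observer.radius`) and $\theta$ is the helioprojective longitude `Tx`. A unit-free sketch with plain floats (physical constants are approximate, and we assume an observer at 1 AU):

```python
import math

AU = 1.496e11                      # metres, approximate
RSUN = 6.957e8                     # metres, approximate
ARCSEC = math.pi / (180 * 3600)    # radians per arcsecond

def plane_of_sky_height(tx_arcsec, observer_distance_m=AU):
    """Plane-of-sky height D*tan(theta), returned in solar radii."""
    return observer_distance_m * math.tan(tx_arcsec * ARCSEC) / RSUN

h = plane_of_sky_height(5000)      # an angle of LASCO field-of-view scale
```

Because the CME is assumed to lie in the plane of the sky, this is a lower bound on the true height; the spherical-screen calculation above relaxes that assumption.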
The C2 data points look off; this is not uncommon, since different telescopes have different sensitivities. We'll just drop them for the moment:
```
times = Time(c3_times)
heights_pos = heights_pos[len(c2_times):]
height_err = height_err[len(c2_times):]
```
# Model Fitting
### Models
Constant velocity model
\begin{align}
a = \frac{dv}{dt} = 0 \\
h(t) = h_0 + v_0 t
\end{align}
Constant acceleration model
\begin{align}
a = a_{0} \\
v(t) = v_0 + a_0 t \\
h(t) = h_0 + v_0 t + \frac{1}{2}a_0 t^{2}
\end{align}
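Before fitting the real data below, the constant-acceleration model can be checked on synthetic, noise-free data; `numpy.polyfit` recovers $h_0, v_0, a_0$ from the quadratic coefficients (the numbers here are made up, roughly CME-scale):

```python
import numpy as np

h0, v0, a0 = 3.5e9, 5.0e5, -2.0   # metres, m/s, m/s^2 (illustrative values)
t = np.linspace(0, 10000, 50)     # seconds
h = h0 + v0 * t + 0.5 * a0 * t**2

# polyfit returns coefficients highest degree first: [0.5*a0, v0, h0]
c2, c1, c0 = np.polyfit(t, h, 2)
a0_fit, v0_fit, h0_fit = 2 * c2, c1, c0
```

`curve_fit` (used below) generalizes this to arbitrary models and returns a covariance matrix, which is what the uncertainty estimates are taken from.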
```
def const_vel(t0, h0, v0):
return h0 + v0*t0
def const_accel(t0, h0, v0, a0):
return h0 + v0*t0 + 0.5 * a0*t0**2
t0 = (times-times[0]).to(u.s)
const_vel_fit = curve_fit(const_vel, t0, heights_pos, sigma=height_err,
p0=[heights_pos[0].to_value(u.m), 350000])
h0, v0 = const_vel_fit[0]
delta_h0, delta_v0 = np.sqrt(const_vel_fit[1].diagonal())
h0 = h0*u.m
v0 = v0*u.m/u.s
delta_h0 = delta_h0*u.m
delta_v0 = delta_v0*(u.m/u.s)
print(f'h0: {h0.to(u.Rsun).round(2)} +/- {delta_h0.to(u.Rsun).round(2)}')
print(f'v0: {v0.to(u.km/u.s).round(2)} +/- {delta_v0.to(u.km/u.s).round(2)}')
const_accel_fit = curve_fit(const_accel, t0, heights_pos, p0=[heights_pos[0].to_value(u.m), 600000, -5])
h0, v0, a0 = const_accel_fit[0]
delta_h0, delta_v0, delta_a0 = np.sqrt(const_accel_fit[1].diagonal())
h0 = h0*u.m
v0 = v0*u.m/u.s
a0 = a0*u.m/u.s**2
delta_h0 = delta_h0*u.m
delta_v0 = delta_v0*(u.m/u.s)
delta_a0 = delta_a0*(u.m/u.s**2)
print(f'h0: {h0.to(u.Rsun).round(2)} +/- {delta_h0.to(u.Rsun).round(2)}')
print(f'v0: {v0.to(u.km/u.s).round(2)} +/- {delta_v0.to(u.km/u.s).round(2)}')
print(f'a0: {a0.to(u.m/u.s**2).round(2)} +/- {delta_a0.to(u.m/u.s**2).round(2)}')
```
# Check against CDAW CME list
* https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2017_09/univ2017_09.html
```
with quantity_support():
fig, axes = plt.subplots()
axes.errorbar(times.datetime, heights_pos.to(u.Rsun), fmt='.', yerr=height_err)
axes.plot(times.datetime, const_vel(t0.value, *const_vel_fit[0])*u.m, 'r-')
axes.plot(times.datetime, const_accel(t0.value, *const_accel_fit[0])*u.m, 'r-')
fig.autofmt_xdate()
with quantity_support(), time_support(format='isot'):
fig, axes = plt.subplots()
axes.plot(times, heights_pos.to(u.Rsun), 'x')
axes.plot(times, const_vel(t0.value, *const_vel_fit[0])*u.m, 'r-')
axes.plot(times, const_accel(t0.value, *const_accel_fit[0])*u.m, 'r-')
fig.autofmt_xdate()
```
Estimate the arrival time at Earth like distance for constant velocity model
```
(((1*u.AU) - const_vel_fit[0][0] * u.m) / (const_vel_fit[0][1] * u.m/u.s)).decompose().to(u.hour)
roots = np.roots([((1*u.AU) - const_accel_fit[0][0] * u.m).to_value(u.m),
const_accel_fit[0][1], 0.5*const_accel_fit[0][2]][::-1])
(roots*u.s).to(u.hour)
```
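The same constant-velocity arrival-time estimate can be checked with plain floats: $t = (1\,\mathrm{AU} - h_0)/v_0$ (constants approximate, $h_0$ and $v_0$ illustrative rather than the fitted values):

```python
AU = 1.496e11   # metres, approximate

h0 = 3.5e9      # metres (~5 solar radii), illustrative
v0 = 5.0e5      # m/s (500 km/s), illustrative

t_hours = (AU - h0) / v0 / 3600.0   # travel time to 1 AU in hours
```

A front at 500 km/s takes a few days to reach 1 AU, which is the right order of magnitude for observed CME transit times.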
# Importing Libraries
```
%pip install catboost
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
%matplotlib inline
```
# Importing dataset
```
df=pd.read_csv('insurance.csv')
```
# Inspecting data
```
df.head()
df.info()
df.describe()
df.isnull().sum()
print("The unique values of categorical variables are")
print(df['sex'].value_counts())
print()
print(df['children'].value_counts())
print()
print(df['region'].value_counts())
print()
print(df['smoker'].value_counts())
```
# Exploratory Data Analysis
```
sns.pairplot(df)
df.corr()
sns.heatmap(data=df.corr(), cmap='coolwarm')
sns.displot(df.loc[:,:"region"])
# Box plot for Categorical columns
plt.figure(figsize=(20,20))
plt.subplot(4,2,1)
sns.boxplot(x="region",y="charges",data=df)
plt.subplot(4,2,2)
sns.boxplot(x="smoker",y="charges",data=df)
plt.subplot(4,2,3)
sns.boxplot(x="sex",y="charges",data=df)
plt.subplot(4,2,4)
sns.boxplot(x="children",y="charges",data=df)
```
# Data Preprocessing
```
#reducing the outliers
df[["charges"]]= np.log10(df[["charges"]])
#checking outliers
plt.figure(figsize=(20,20))
plt.subplot(4,2,1)
sns.boxplot(x="region",y="charges",data=df)
plt.subplot(4,2,2)
sns.boxplot(x="smoker",y="charges",data=df)
plt.subplot(4,2,3)
sns.boxplot(x="sex",y="charges",data=df)
plt.subplot(4,2,4)
sns.boxplot(x="children",y="charges",data=df)
# function that maps bmi to a weight-condition category (contiguous ranges, standard cutoffs)
def weightCondition(bmi):
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25:
        return "Normal"
    elif bmi < 30:
        return "Overweight"
    else:
        return "Obese"
#adding weight condition to the dataFrame
df["weight_Condition"]=[weightCondition(val) for val in df["bmi"] ]
df.head(5)
```
Label Encoding
```
#List of categorical variables
categorical = ["sex","children","smoker","region","weight_Condition"]
#Converting data types to categorical datatypes
df[categorical] = df[categorical].apply(lambda x: x.astype("category"), axis = 0)
#Creating dummy variables on the dataFrame
df = pd.get_dummies(data = df, columns = categorical, drop_first =True)
df.head()
```
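`drop_first=True` keeps $k-1$ dummy columns for a $k$-level category; the dropped level becomes the all-zeros baseline, which avoids perfect multicollinearity in the linear model. A stdlib sketch of what that encoding produces for a hypothetical `region` column:

```python
# Manual k-1 one-hot encoding, mimicking pd.get_dummies(..., drop_first=True)
values = ["southwest", "southeast", "southwest", "northwest"]
levels = sorted(set(values))   # alphabetical category levels
kept = levels[1:]              # drop the first level -> it becomes the baseline
encoded = [[int(v == level) for level in kept] for v in values]
```

A row of all zeros therefore means "the dropped (baseline) level", not "no region".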
# Linear Regression
```
#extracting X and y for the model
X = df.drop('charges', axis =1)
y=df[["charges"]]
# split the data to train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234)
X_train.head()
```
Scaling
```
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Instantiating the model
lr= LinearRegression()
#Fitting
lr.fit(X_train,y_train)
# Predicting
predictions=lr.predict(X_test)
#original predictions values
original_predictions=(10**predictions)
print("original value of charges predictions: ", original_predictions)
# Evaluate accuracy on the test set
print('Mean squared error: %.2f'
% mean_squared_error(y_test, predictions))
print('Coefficient of determination: %.2f'
% (r2_score(y_test, predictions)*100)+'%')
```
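Because `charges` was log10-transformed during preprocessing, the model's predictions live in log space: the metrics above are computed on log-charges, and `10**` recovers the original currency scale. A quick numeric check of the round trip (the charge value is illustrative):

```python
import math

charge = 13270.42                 # an illustrative charge in original units
log_charge = math.log10(charge)   # forward transform applied in preprocessing
recovered = 10 ** log_charge      # back-transform applied to predictions
```

Note that an error of 0.1 in log space corresponds to a multiplicative factor of about 1.26 in the original units, so log-space MSE should not be read as a dollar error.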
# CatBoost Regression
```
# Instantiating the model
cb = CatBoostRegressor(random_state=42)
#Fitting
cb.fit(X_train,y_train,use_best_model=True,eval_set=(X_test,y_test),early_stopping_rounds=30)
# Predicting
cb_pred = cb.predict(X_test)
# Evaluate accuracy on the test set
print('Mean squared error: %.2f'
% mean_squared_error(y_test,cb_pred))
print('Coefficient of determination: %.2f'
% (r2_score(y_test,cb_pred)*100)+'%')
```
# GPU-accelerated LightGBM
This kernel explores a GPU-accelerated LGBM model to predict customer transactions.
## Notebook Content
1. [Re-compile LGBM with GPU support](#1)
1. [Loading the data](#2)
1. [Training the model on CPU](#3)
1. [Training the model on GPU](#4)
1. [Submission](#5)
<a id="1"></a>
## 1. Re-compile LGBM with GPU support
In Kaggle notebook setting, set the `Internet` option to `Internet connected`, and `GPU` to `GPU on`.
We first remove the existing CPU-only lightGBM library and clone the latest github repo.
```
!rm -r /opt/conda/lib/python3.6/site-packages/lightgbm
!git clone --recursive https://github.com/Microsoft/LightGBM
```
Next, the Boost development library must be installed.
```
!apt-get install -y -qq libboost-all-dev
```
The next step is to build and re-install lightGBM with GPU support.
```
%%bash
cd LightGBM
rm -r build
mkdir build
cd build
cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/lib64/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ ..
make -j$(nproc)
cd ../python-package && python3 setup.py install --precompile
```
Last, carry out some post processing tricks for OpenCL to work properly, and clean up.
```
!mkdir -p /etc/OpenCL/vendors && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
!rm -r LightGBM
```
<a id="2"></a>
## 2. Loading the data
```
import pandas as pd
import numpy as np
from sklearn.model_selection import StratifiedKFold
import lightgbm as lgb
from sklearn import metrics
import gc
pd.set_option('display.max_columns', 200)
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
#extracting a subset for quick testing
#train_df = train_df[1:1000]
```
<a id="3"></a>
## 3. Training the model on CPU
```
param = {
'num_leaves': 10,
'max_bin': 127,
'min_data_in_leaf': 11,
'learning_rate': 0.02,
'min_sum_hessian_in_leaf': 0.00245,
'bagging_fraction': 1.0,
'bagging_freq': 5,
'feature_fraction': 0.05,
'lambda_l1': 4.972,
'lambda_l2': 2.276,
'min_gain_to_split': 0.65,
'max_depth': 14,
'save_binary': True,
'seed': 1337,
'feature_fraction_seed': 1337,
'bagging_seed': 1337,
'drop_seed': 1337,
'data_random_seed': 1337,
'objective': 'binary',
'boosting_type': 'gbdt',
'verbose': 1,
'metric': 'auc',
'is_unbalance': True,
'boost_from_average': False,
}
%%time
nfold = 2
target = 'target'
predictors = train_df.columns.values.tolist()[2:]
skf = StratifiedKFold(n_splits=nfold, shuffle=True, random_state=2019)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
i = 1
for train_index, valid_index in skf.split(train_df, train_df.target.values):
print("\nfold {}".format(i))
xg_train = lgb.Dataset(train_df.iloc[train_index][predictors].values,
label=train_df.iloc[train_index][target].values,
feature_name=predictors,
free_raw_data = False
)
xg_valid = lgb.Dataset(train_df.iloc[valid_index][predictors].values,
label=train_df.iloc[valid_index][target].values,
feature_name=predictors,
free_raw_data = False
)
clf = lgb.train(param, xg_train, 5000, valid_sets = [xg_valid], verbose_eval=50, early_stopping_rounds = 50)
oof[valid_index] = clf.predict(train_df.iloc[valid_index][predictors].values, num_iteration=clf.best_iteration)
predictions += clf.predict(test_df[predictors], num_iteration=clf.best_iteration) / nfold
i = i + 1
print("\n\nCV AUC: {:<0.2f}".format(metrics.roc_auc_score(train_df.target.values, oof)))
```
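The loop above accumulates `predictions += fold_pred / nfold`, which after all folds equals the mean of the per-fold test predictions, while `oof` holds exactly one out-of-fold prediction per training row. A minimal numpy sketch of that bookkeeping (the fold predictions are made up):

```python
import numpy as np

nfold = 2
fold_test_preds = [np.array([0.2, 0.8]), np.array([0.4, 0.6])]  # made-up per-fold test predictions

predictions = np.zeros(2)
for fold_pred in fold_test_preds:
    predictions += fold_pred / nfold   # same accumulation as in the training loop
```

Averaging the fold models' test predictions is a cheap ensemble, and the OOF vector gives an unbiased CV estimate of the AUC reported above.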
<a id="4"></a>
## 4. Training the model on GPU
First, check the GPU availability.
```
!nvidia-smi
```
In order to leverage the GPU, we need to set the following parameters:
* `'device': 'gpu'`
* `'gpu_platform_id': 0`
* `'gpu_device_id': 0`
```
param = {
'num_leaves': 10,
'max_bin': 127,
'min_data_in_leaf': 11,
'learning_rate': 0.02,
'min_sum_hessian_in_leaf': 0.00245,
'bagging_fraction': 1.0,
'bagging_freq': 5,
'feature_fraction': 0.05,
'lambda_l1': 4.972,
'lambda_l2': 2.276,
'min_gain_to_split': 0.65,
'max_depth': 14,
'save_binary': True,
'seed': 1337,
'feature_fraction_seed': 1337,
'bagging_seed': 1337,
'drop_seed': 1337,
'data_random_seed': 1337,
'objective': 'binary',
'boosting_type': 'gbdt',
'verbose': 1,
'metric': 'auc',
'is_unbalance': True,
'boost_from_average': False,
'device': 'gpu',
'gpu_platform_id': 0,
'gpu_device_id': 0
}
%%time
nfold = 2
target = 'target'
predictors = train_df.columns.values.tolist()[2:]
skf = StratifiedKFold(n_splits=nfold, shuffle=True, random_state=2019)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
i = 1
for train_index, valid_index in skf.split(train_df, train_df.target.values):
print("\nfold {}".format(i))
xg_train = lgb.Dataset(train_df.iloc[train_index][predictors].values,
label=train_df.iloc[train_index][target].values,
feature_name=predictors,
free_raw_data = False
)
xg_valid = lgb.Dataset(train_df.iloc[valid_index][predictors].values,
label=train_df.iloc[valid_index][target].values,
feature_name=predictors,
free_raw_data = False
)
clf = lgb.train(param, xg_train, 5000, valid_sets = [xg_valid], verbose_eval=50, early_stopping_rounds = 50)
oof[valid_index] = clf.predict(train_df.iloc[valid_index][predictors].values, num_iteration=clf.best_iteration)
predictions += clf.predict(test_df[predictors], num_iteration=clf.best_iteration) / nfold
i = i + 1
print("\n\nCV AUC: {:<0.2f}".format(metrics.roc_auc_score(train_df.target.values, oof)))
```
<a id="5"></a>
## 5. Submission
```
sub_df = pd.DataFrame({"ID_code": test_df.ID_code.values})
sub_df["target"] = predictions
sub_df[:10]
sub_df.to_csv("lightgbm_gpu.csv", index=False)
```
```
import matplotlib
import matplotlib.pyplot as plt
from mmcg import mmcg
import numpy as np
import operator
import pyart
radar = pyart.io.read('/home/zsherman/sgpxsaprcmacsurI5.c1.20171004.203018.nc')
grid = mmcg(radar, grid_shape=(31, 101, 101),
grid_limits=((0, 15000), (-50000, 50000), (-50000, 50000)),
z_linear_interp=True, toa=15000, weighting_function='cressman')
display = pyart.graph.GridMapDisplay(grid)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 0
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
# plt.savefig('')
cat_dict = {}
print('##')
print('## Keys for each gate id are as follows:')
for pair_str in grid.fields['gate_id']['notes'].split(','):
print('## ', str(pair_str))
cat_dict.update({pair_str.split(':')[1]:int(pair_str.split(':')[0])})
sorted_cats = sorted(cat_dict.items(), key=operator.itemgetter(1))
cat_colors = {'rain': 'green',
'multi_trip': 'red',
'no_scatter': 'gray',
'snow': 'cyan',
'melting': 'yellow'}
if 'xsapr_clutter' in grid.fields.keys():
    cat_colors['clutter'] = 'black'
# derive the colour list in the order of the sorted categories
lab_colors = [cat_colors[kitty[0]] for kitty in sorted_cats]
cmap = matplotlib.colors.ListedColormap(lab_colors)
display = pyart.graph.GridMapDisplay(grid)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 0
vmin = 0
vmax = 5
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('gate_id', level=level, vmin=vmin, vmax=vmax,
cmap=cmap)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('gate_id', lon=lon, lat=lat, vmin=vmin,
vmax=vmax, cmap=cmap)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('gate_id', lon=lon, lat=lat, vmin=vmin,
vmax=vmax, cmap=cmap)
level = 0
vmin = 0
vmax = 5
lat = 36.5
lon = -97.7
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('gate_id', level=level, vmin=vmin, vmax=vmax,
cmap=cmap)
```
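The `cat_dict` loop above parses the `gate_id` field's `notes` string, which encodes `code:name` pairs separated by commas. A stdlib sketch of that parsing on a hypothetical notes string (the category names mirror the ones used for the colour map above):

```python
# Parse a "code:name,code:name,..." notes string into name -> code
notes = "0:multi_trip,1:rain,2:snow,3:no_scatter,4:melting"  # hypothetical example

cat_dict = {}
for pair_str in notes.split(','):
    code, name = pair_str.split(':')
    cat_dict[name] = int(code)
```

Sorting the resulting dict by code (as done above with `operator.itemgetter(1)`) then yields the colour order expected by the `ListedColormap`.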
# Cluster Analysis
## 1 Hierarchical Clustering
### 1.1 Q-type Cluster Analysis
Q-type cluster analysis classifies objects using quantitative methods. Each sample point is described by several variables and regarded as a point in $R^n$; distances are used to measure the similarity between sample points.
#### 1.1.1 Distances between sample points
Let $x, y$ be two sample points in $R^n$ and define $d: (R^n, R^n)\rightarrow R$. Common definitions are as follows:
* Minkowski distance:
$$
d_q(x, y)=\left[\sum_{k=1}^p|x_k-y_k|^q\right]^\frac{1}{q}
$$
* q = 1: absolute-value distance (cityblock)
$$
d_1(x, y) = \sum_{k=1}^p|x_k-y_k|
$$
* q = 2: Euclidean distance (euclidean)
$$
d_2(x, y) = \left(\sum_{k=1}^p|x_k-y_k|^2\right)^\frac{1}{2}
$$
$$
* q = $\infty$: Chebyshev distance (chebychev)
$$
d_\infty(x, y)=\max\limits_{1\leq k\leq p} |x_k-y_k|
$$
* Mahalanobis distance (mahalanobis)
$$
d(x, y)=\sqrt{(x-y)^T\Sigma^{-1}(x-y)}
$$
where $\Sigma$ is the covariance matrix of the overall sample $Z$, with $x, y\in Z$. The Mahalanobis distance is invariant under all nonsingular linear transformations and is unaffected by the units of measurement.
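When $\Sigma$ is the identity matrix the Mahalanobis distance reduces to the Euclidean distance, which makes a convenient sanity check. A minimal numpy sketch (the points are made up):

```python
import numpy as np

def mahalanobis(x, y, sigma):
    """Mahalanobis distance sqrt((x-y)^T Sigma^-1 (x-y))."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ np.linalg.inv(sigma) @ diff))

x, y = [1.0, 2.0], [4.0, 6.0]
d_identity = mahalanobis(x, y, np.eye(2))    # equals the Euclidean distance 5
d_scaled = mahalanobis(x, y, 4 * np.eye(2))  # rescaling Sigma rescales the distance
```

The rescaling example shows the unit-invariance informally: inflating a coordinate's variance by a factor shrinks its contribution to the distance by the same factor.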
#### 1.1.2 Similarity measures between classes
For two sample classes $G_1$ and $G_2$, their distance can be measured by the following methods:
* Single linkage (single)
$$
D(G_1, G_2)=\min\limits_{x_i\in G_1, y_j\in G_2}\{d(x_i, y_j)\}
$$
* Complete linkage (complete)
$$
D(G_1, G_2)=\max\limits_{x_i\in G_1, y_j\in G_2}\{d(x_i, y_j)\}
$$
* Centroid linkage (centroid)
$$
D(G_1, G_2)=d(\bar{x}, \bar{y})
$$
where $\bar{x}, \bar{y}$ are the centroids of $G_1$ and $G_2$ respectively
* Average linkage (average)
$$
D(G_1, G_2)=\frac{1}{n_1n_2}\sum_{x_i\in G_1}\sum_{x_j\in G_2}d(x_i, x_j)
$$
* Ward's method (ward)
Also known as the sum-of-squared-deviations method; it is less commonly used and its formula is complicated, so it is omitted here.
### 1.2 R-type Cluster Analysis
Q-type clustering is typically used to cluster the objects (samples), while R-type clustering is typically used to analyze the variables (indicators), in order to eliminate redundant or highly similar indicators.
#### 1.2.1 Similarity measures between variables
* Correlation coefficient
$$
r_{ij}=\frac{\sum(x_j-\bar{x_j})\cdot(x_i-\bar{x_i})}{||x_j-\bar{x_j}||_2\cdot||x_i-\bar{x_i}||_2}
$$
* Cosine of the angle between vectors.
$$
r_{ij}=\frac{\sum x_i\cdot x_j}{|x_i|\cdot|x_j|}
$$
#### 1.2.2 Variable clustering
Let $d_{ij}=1 - |r_{ij}|$ or $d_{ij}=1-r_{ij}^2$
* Single linkage (single)
$$
D(G_1, G_2)=\min\limits_{x_i\in G_1, y_j\in G_2}\{d(x_i, y_j)\}
$$
* Complete linkage (complete)
$$
D(G_1, G_2)=\max\limits_{x_i\in G_1, y_j\in G_2}\{d(x_i, y_j)\}
$$
### 1.3 Algorithm
The hierarchical clustering algorithm is very simple. Initially every sample forms its own cluster; the two most similar clusters are then merged repeatedly until all samples belong to a single cluster.
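That merge loop can be sketched in plain Python using single linkage on 1-D points (a toy sketch for intuition only; the examples below use scipy's `linkage`):

```python
def single_linkage(c1, c2):
    return min(abs(a - b) for a in c1 for b in c2)

def agglomerate(points, n_clusters):
    clusters = [[p] for p in points]          # start: every point is its own cluster
    while len(clusters) > n_clusters:
        # find the closest pair of clusters and merge them
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

clusters = agglomerate([0.0, 1.0, 10.0, 11.0], 2)
```

Recording the distance at which each merge happens is what produces the dendrograms drawn later.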
### 1.4 Worked example
Given a table in which each column is an education indicator, columns 0 through 9 are: the number of higher-education institutions per million population; then, per 100,000 population, the numbers of higher-education graduates, new enrollments, enrolled students, staff, and full-time teachers; the proportion of full-time teachers holding senior titles; the average number of enrolled students per institution; state budgetary funding for general higher education as a share of GDP; and education spending per student.
Each row is a province or municipality; rows 0 through 29 are: Beijing, Shanghai, Tianjin, Shaanxi, Liaoning, Jilin, Heilongjiang, Hubei, Jiangsu, Guangdong, Sichuan, Shandong, Gansu, Hunan, Zhejiang, Xinjiang, Fujian, Shanxi, Hebei, Anhui, Yunnan, Jiangxi, Hainan, Inner Mongolia, Tibet, Henan, Guangxi, Ningxia, Guizhou, Qinghai.
We apply Q-type and R-type clustering to analyze the development of general higher education across the regions of China.
```
# R-type and Q-type clustering, with dendrograms
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
from matplotlib import pyplot as plt
from sklearn.preprocessing import StandardScaler
edu = ['每百万人口高等院校数', "每10万人口高等院校毕业生数", "每10万人口高等院校招生数", "每10万人口高等院校在校生数", "每10万人口高等院校教职工数", "每10万人口高等院校专职教师数", "高级职称占专职教师的比例", "平均每所高等院校的在校生数", "国家财政预算内普通高教经费占国内生产总值的比例", "生均教育经费"]
loc = ["北京","上海","天津","陕西","辽宁","吉林","黑龙江","湖北","江苏","广东","四川","山东","甘肃","湖南","浙江","新疆","福建","山西","河北","安徽","云南","江西","海南","内蒙古","西藏","河南","广西","宁夏","贵州","青海"]
print(len(edu), len(loc))
# Load the data
data = np.loadtxt("./data/gj.txt", delimiter="\t")
# R-type clustering
# Standardize the data and compute the correlation matrix
edu_data = StandardScaler().fit_transform(data)  # standardize each column
r = np.corrcoef(data.T)  # r[i][j] is the correlation between columns i and j
r
```
The data show that some education indicators are quite similar to one another, so we perform a cluster analysis on the indicators.
```
# Distance matrix induced by the correlations; d[i][j] is the dissimilarity between columns i and j
d = pdist(edu_data.T, 'correlation')
z = linkage(d, 'average')  # hierarchical clustering
# Plot the dendrogram
dn = dendrogram(z)
from scipy.cluster.hierarchy import fcluster
T = fcluster(z, 6, criterion='maxclust')
for k in range(1, 7):
    cls = zip((i for i, v in enumerate(T) if v == k), (edu[i] for i, v in enumerate(T) if v == k))
    print(f"Cluster {k} contains:", list(cls))
```
R-type clustering shows that the indicators numbered 1, 2, 3, 4, 5 are highly similar; in reality they all reflect institution size, so it is natural that they fall into one class. The 10 education indicators can therefore be divided into 6 classes, represented by the indicators numbered 0, 1, 6, 7, 8, 9.
Next we use these 6 weakly correlated education indicators to perform a Q-type cluster analysis of the education situation in the different regions.
```
# Use the 6 representative education indicators
loc_data = data[:, [0, 1, 6, 7, 8, 9]]
loc_data = StandardScaler().fit_transform(loc_data)  # z-score standardization
# Plot the dendrogram
import matplotlib.pyplot as plt
y = pdist(loc_data, 'euclidean')  # Euclidean distances between objects
z = linkage(y, 'average')
plt.figure(figsize=(8, 5))
h = dendrogram(z)
# Examine different numbers of clusters
for k in range(3, 6):
    print(f"Partition into {k} clusters:")
    T = fcluster(z, k, 'maxclust')
    for i in range(1, k+1):
        print(f"Cluster {i} contains:", [loc[j] for j, v in enumerate(T) if v == i])
    if k < 5:
        print("***************************************")
```
We find that when partitioned into 5 classes, the resulting classes are: Beijing on its own; Tibet on its own; Shanghai and Tianjin together; Ningxia, Guizhou and Qinghai together; and the remaining provinces together. Beijing is the capital; Tibet follows distinctive ethnic policies; Ningxia, Guizhou and Qinghai are less developed regions with large ethnic-minority populations; the rest are the other provinces. This explains the result above.
## 2 K-means Clustering
### 2.1 Algorithm
1. Choose $k$ initial cluster centers by hand
2. Go through every sample point and assign it to class $i$ of its nearest center $C_i$
3. Recompute the center of each class, $C_i, i=1, 2, \cdots, k$
4. Go through every sample point again and assign it to the class of its nearest center
5. If any assignment changed, go back to step 3; otherwise stop
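The steps above can be sketched for 1-D points in plain Python (a toy sketch for intuition, not the scikit-learn implementation used below):

```python
def kmeans_1d(points, centers, max_iter=100):
    for _ in range(max_iter):
        # steps 2/4: assign each point to its nearest center
        labels = [min(range(len(centers)), key=lambda i: abs(p - centers[i]))
                  for p in points]
        # step 3: recompute each center as the mean of its assigned points
        new_centers = [sum(p for p, l in zip(points, labels) if l == i) /
                       max(1, sum(1 for l in labels if l == i))
                       for i in range(len(centers))]
        if new_centers == centers:   # step 5: stop once nothing changes
            break
        centers = new_centers
    return centers, labels

centers, labels = kmeans_1d([0.0, 1.0, 10.0, 11.0], [0.0, 10.0])
```

The result depends on the initial centers, which is why scikit-learn's `init="k-means++"` and `n_init` (both used below) matter in practice.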
### 2.2 Implementation
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn import metrics
import matplotlib.pyplot as plt
# Generate random data
x,y = make_blobs(n_samples=1000,n_features=2,centers=[[-1,-1],[0,0],[1,1],[2,2]],cluster_std=[0.4,0.2,0.2,0.4],random_state=10)
# Cluster analysis
k_means = KMeans(n_clusters=3, random_state=10, init="k-means++", n_init=20) # initialize the estimator
k_means.fit(x) # fit
# Plot
y_predict = k_means.predict(x)
plt.scatter(x[:,0],x[:,1],c=y_predict)
plt.show()
# Print predictions for the first 30 samples, the first 10 labels, the cluster centers, and evaluation metrics
print(k_means.predict((x[:30,:])))
print(k_means.labels_[:10])
print(k_means.cluster_centers_)
print(k_means.inertia_) # sum of squared distances from each point to its cluster centre; smaller is better
print(metrics.silhouette_score(x,y_predict)) # silhouette coefficient; larger is better
```
### Finding the best k with the elbow method
The K-means method has a core metric, the SSE (Sum of Squared Errors):
$$
\text{SSE}=\sum_{i=1}^k\sum_{p\in C_i}|p-m_i|^2
$$
> When k is smaller than the true number of clusters, increasing k sharply improves the cohesion of each cluster, so the SSE drops steeply. Once k reaches the true number of clusters, further increases in k bring rapidly diminishing gains in cohesion, so the drop in SSE slows abruptly and then levels off as k keeps growing. The SSE-versus-k curve therefore has the shape of an elbow, and the k at the elbow is the true number of clusters in the data.
Author: 小歪与大白兔, source: https://www.jianshu.com/p/335b376174d4
Hence the best value of k can be found by locating the elbow point.
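The SSE formula is easy to check by hand on a toy assignment (scikit-learn exposes this quantity as `inertia_`; the points and centers below are made up):

```python
# SSE: sum of squared distances from each point to its assigned cluster centre
points = [0.0, 1.0, 10.0, 11.0]
labels = [0, 0, 1, 1]
centers = [0.5, 10.5]

sse = sum((p - centers[l]) ** 2 for p, l in zip(points, labels))
```

Here every point sits 0.5 from its centre, so the SSE is $4 \times 0.25 = 1$.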
```
# Find the best k (the elbow point)
k_list = list(range(2, 10))
kms = [KMeans(n_clusters=k, init="k-means++") for k in k_list]
k_score = [v.fit(x).inertia_ for v in kms] # SSE for each value of k
plt.plot(k_list, k_score)
# Plot the clusterings
plt.figure(figsize=(14, 7))
for i in range(len(kms)):
    y_predict = kms[i].predict(x)
    plt.subplot(int(f'24{i+1}'))
    plt.scatter(x[:, 0], x[:, 1], c=y_predict, s=1)
plt.show()
```
Clearly, the result is best when k = 4.
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
from matplotlib import dates as mdates
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Precipitation Analysis
```
# Find the most recent date in the data set.
recent_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(recent_date)
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
precipitation_scores = session.query(func.strftime("%Y-%m-%d", Measurement.date), Measurement.prcp).\
filter(func.strftime("%Y-%m-%d", Measurement.date) >= dt.date(2016, 8, 23)).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
precipitation_df = pd.DataFrame(precipitation_scores, columns = ['date', 'precipitation'])
precipitation_df['date'] = pd.to_datetime(precipitation_df['date'], format="%Y-%m-%d")
precipitation_df.set_index('date', inplace=True)
# Sort the dataframe by date
precipitation_df = precipitation_df.sort_values(by = 'date')
# Use Pandas Plotting with Matplotlib to plot the data
fig, ax = plt.subplots(figsize = (10, 5))
precipitation_df.plot(ax = ax, x_compat = True)
plt.xlabel("Date")
plt.ylabel("Precipitation in Inches")
plt.title("Precipitation 2016-2017")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
precipitation_df
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_df.describe()
```
# Exploratory Station Analysis
```
# Design a query to calculate the total number of stations in the dataset
number_of_stations = session.query(Measurement.station).distinct().count()
print(f"There are {number_of_stations} stations.")
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
most_active_stations = session.query(Measurement.station,func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
print(f"Most Active Stations")
print('=====================')
most_active_stations
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
most_active = most_active_stations[0][0]
temperatures = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs),func.avg(Measurement.tobs)).filter(Measurement.station == most_active).all()
print(f"Most Active Station Temperatures")
print(f"Lowest Temperature: {temperatures[0][0]}")
print(f"Highest Temperature: {temperatures[0][1]}")
print(f"Average Temperature: {round(temperatures[0][2], 1)}")
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temperature_observations = session.query(Measurement.tobs).filter(Measurement.date >= last_year).filter(Measurement.station == most_active).all()
temperature_observations_df = pd.DataFrame(temperature_observations, columns=['temperature'])
temperature_observations_df.plot.hist(bins=12, title="Temperature Observation Data")
plt.tight_layout()
plt.show()
```
# Close session
```
# Close Session
session.close()
```
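The reflect-then-query pattern used above can be reproduced end to end on a throwaway in-memory database. Below is a minimal sketch: the table schema and station names are made up for illustration, but the automap reflection and the grouped "most active station" query mirror the steps in the notebook.

```python
# Self-contained sketch of automap reflection + a grouped query,
# using an in-memory SQLite DB instead of hawaii.sqlite.
from sqlalchemy import create_engine, func, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE measurement (id INTEGER PRIMARY KEY, station TEXT, prcp FLOAT)"))
    conn.execute(text(
        "INSERT INTO measurement (station, prcp) "
        "VALUES ('USC1', 0.1), ('USC1', 0.3), ('USC2', 0.2)"))

Base = automap_base()
Base.prepare(autoload_with=engine)      # reflect the tables into mapped classes
Measurement = Base.classes.measurement

session = Session(engine)
counts = (session.query(Measurement.station, func.count(Measurement.station))
          .group_by(Measurement.station)
          .order_by(func.count(Measurement.station).desc())
          .all())
print(counts)                           # most active station first
session.close()
```

Note that automap only picks up tables with a primary key, which is why the toy table declares an `id` column.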
| github_jupyter |
# Predictable t-SNE
[t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) is not a transformer that can produce outputs for inputs other than those used to fit it. The proposed solution is to train a predictor afterwards that reproduces the t-SNE embedding, so the mapping can be applied to inputs the model never saw.
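The mechanism can be sketched independently of t-SNE itself: fit a non-parametric embedding on the training data, then train a regressor that maps raw features to embedding coordinates. In the sketch below PCA stands in for t-SNE purely to keep it fast; the names are illustrative and not part of *mlinsights*.

```python
# Sketch: extend a "train-only" embedding to unseen data via a regressor.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, _ = load_digits(return_X_y=True)
X_train, X_test = train_test_split(X, random_state=0)

emb_train = PCA(n_components=2).fit_transform(X_train)   # stand-in for t-SNE output
predictor = KNeighborsRegressor().fit(X_train, emb_train)

emb_test = predictor.predict(X_test)     # embedding for data the embedding never saw
print(emb_test.shape)
```

*PredictableTSNE* packages exactly this two-step idea (embedding plus an estimator trained on its output) behind a single transformer interface.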
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
```
## t-SNE on MNIST
Let's reuse some part of the example of [Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html#sphx-glr-auto-examples-manifold-plot-lle-digits-py).
```
import numpy
from sklearn import datasets
digits = datasets.load_digits(n_class=6)
Xd = digits.data
yd = digits.target
imgs = digits.images
n_samples, n_features = Xd.shape
n_samples, n_features
```
Let's split into train and test.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, imgs_train, imgs_test = train_test_split(Xd, yd, imgs)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, init='pca', random_state=0)
X_train_tsne = tsne.fit_transform(X_train, y_train)
X_train_tsne.shape
import matplotlib.pyplot as plt
from matplotlib import offsetbox
def plot_embedding(Xp, y, imgs, title=None, figsize=(12, 4)):
x_min, x_max = numpy.min(Xp, 0), numpy.max(Xp, 0)
X = (Xp - x_min) / (x_max - x_min)
fig, ax = plt.subplots(1, 2, figsize=figsize)
for i in range(X.shape[0]):
ax[0].text(X[i, 0], X[i, 1], str(y[i]),
color=plt.cm.Set1(y[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
if hasattr(offsetbox, 'AnnotationBbox'):
# only print thumbnails with matplotlib > 1.0
shown_images = numpy.array([[1., 1.]]) # just something big
for i in range(X.shape[0]):
dist = numpy.sum((X[i] - shown_images) ** 2, 1)
if numpy.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = numpy.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(imgs[i], cmap=plt.cm.gray_r),
X[i])
ax[0].add_artist(imagebox)
ax[0].set_xticks([]), ax[0].set_yticks([])
ax[1].plot(Xp[:, 0], Xp[:, 1], '.')
if title is not None:
ax[0].set_title(title)
return ax
plot_embedding(X_train_tsne, y_train, imgs_train, "t-SNE embedding of the digits");
```
## Repeatable t-SNE
We use the class *PredictableTSNE* here, but it works for other trainable transforms too.
```
from mlinsights.mlmodel import PredictableTSNE
ptsne = PredictableTSNE()
ptsne.fit(X_train, y_train)
X_train_tsne2 = ptsne.transform(X_train)
plot_embedding(X_train_tsne2, y_train, imgs_train, "Predictable t-SNE of the digits");
```
The difference now is that it can be applied on new data.
```
X_test_tsne2 = ptsne.transform(X_test)
plot_embedding(X_test_tsne2, y_test, imgs_test, "Predictable t-SNE on new digits on test database");
```
By default, the output data is normalized so that results are comparable across runs; the *loss* is then computed between the normalized output of *t-SNE* and its approximation.
```
ptsne.loss_
```
## Repeatable t-SNE with another predictor
The predictor is a [MLPRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html).
```
ptsne.estimator_
```
Let's replace it with a [KNeighborsRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html) and a normalizer [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
```
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
ptsne_knn = PredictableTSNE(normalizer=StandardScaler(),
estimator=KNeighborsRegressor())
ptsne_knn.fit(X_train, y_train)
X_train_tsne2 = ptsne_knn.transform(X_train)
plot_embedding(X_train_tsne2, y_train, imgs_train,
"Predictable t-SNE of the digits\nStandardScaler+KNeighborsRegressor");
X_test_tsne2 = ptsne_knn.transform(X_test)
plot_embedding(X_test_tsne2, y_test, imgs_test,
"Predictable t-SNE on new digits\nStandardScaler+KNeighborsRegressor");
```
The loss is lower, so the model seems to work better; but since the loss is evaluated on the training dataset, it is only a sanity check that it is not too big.
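A toy illustration of why a training-set loss is only a sanity check: a 1-nearest-neighbour regressor reaches (near-)zero training error even on pure-noise targets, so a small training loss by itself says nothing about generalisation.

```python
# 1-NN has ~zero training error regardless of whether anything real was learned.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=(200, 2))            # pure-noise targets
knn = KNeighborsRegressor(n_neighbors=1).fit(X, y)
train_mse = np.mean((knn.predict(X) - y) ** 2)
print(train_mse)                         # each point's nearest neighbour is itself
```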
```
ptsne_knn.loss_
```
# Wiener Filter + UNet
https://github.com/vpronina/DeepWienerRestoration
```
# Import some libraries
import numpy as np
from skimage import color, data, restoration
import matplotlib.pyplot as plt
import torch
import utils
import torch.nn as nn
from networks import UNet
import math
import os
from skimage import io
import skimage
import warnings
warnings.filterwarnings('ignore')
def show_images(im1, im1_title, im2, im2_title, im3, im3_title, font):
fig, (image1, image2, image3) = plt.subplots(1, 3, figsize=(15, 50))
image1.imshow(im1, cmap='gray')
image1.set_title(im1_title, fontsize=font)
image1.set_axis_off()
image2.imshow(im2, cmap='gray')
image2.set_title(im2_title, fontsize=font)
image2.set_axis_off()
image3.imshow(im3, cmap='gray')
image3.set_title(im3_title, fontsize=font)
image3.set_axis_off()
fig.subplots_adjust(wspace=0.02, hspace=0.2,
top=0.9, bottom=0.05, left=0, right=1)
fig.show()
```
# Load the data
```
#Load the target image
image = io.imread('./image.tif')
#Load the blurred and distorted images
blurred = io.imread('./blurred.tif')
distorted = io.imread('./distorted.tif')
#Load the kernel
psf = io.imread('./PSF.tif')
show_images(image, 'Original image', blurred, 'Blurred image',\
distorted, 'Blurred and noisy image', font=18)
```
We know that the solution is described as follows:
$\hat{\mathbf{x}} = \arg\min_\mathbf{x}\underbrace{\frac{1}{2}\|\mathbf{y}-\mathbf{K} \mathbf{x}\|_{2}^{2}+\lambda r(\mathbf{x})}_{\mathbf{J}(\mathbf{x})}$,
where $\mathbf{J}$ is the objective function.
According to the gradient descent iterative scheme,
$\hat{\mathbf{x}}_{k+1}=\hat{\mathbf{x}}_{k}-\beta \nabla \mathbf{J}(\mathbf{x})$.
The solution is then described by the iterative gradient-descent equation:
$\hat{\mathbf{x}}_{k+1} = \hat{\mathbf{x}}_{k} - \beta\left[\mathbf{K}^\top(\mathbf{K}\hat{\mathbf{x}}_{k} - \mathbf{y}) + e^\alpha f^{CNN}(\hat{\mathbf{x}}_{k})\right]$, and here $\lambda = e^\alpha$ and $r(\mathbf{x}) = f^{CNN}(\hat{\mathbf{x}})$.
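As a sanity check of this scheme, the same iteration can be run in one dimension with the quadratic regularizer $r(\mathbf{x}) = \tfrac{1}{2}\|\mathbf{x}\|_2^2$ standing in for the CNN prior (its gradient is simply $\mathbf{x}$). All names below are illustrative; the point is only that the update $\hat{\mathbf{x}}_{k+1} = \hat{\mathbf{x}}_{k} - \beta[\mathbf{K}^\top(\mathbf{K}\hat{\mathbf{x}}_{k} - \mathbf{y}) + \lambda\hat{\mathbf{x}}_{k}]$ drives the data-fit term down.

```python
# 1-D toy deconvolution with the gradient-descent scheme above.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.zeros(64)
x_true[20:30] = 1.0                      # a simple box signal
ker = np.array([0.25, 0.5, 0.25])        # blurring kernel

def K(v):
    return np.convolve(v, ker, mode="same")        # K v

def K_T(v):
    return np.convolve(v, ker[::-1], mode="same")  # K^T v (kernel is symmetric)

y = K(x_true) + 0.01 * rng.standard_normal(64)     # blurred + noisy observation

beta, lam = 0.5, 1e-3                    # step size and trade-off e^alpha
x = y.copy()                             # initial guess
for _ in range(200):
    grad = K_T(K(x) - y) + lam * x       # K^T(Kx - y) + lambda * grad r(x)
    x = x - beta * grad

print(np.linalg.norm(K(x) - y), np.linalg.norm(K(y) - y))
```

In the networks below the hand-written `lam * x` term is replaced by `exp(alpha) * regularizer(x)` with a learned UNet.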
```
# Anscombe transform to transform Poissonian data into Gaussian
#https://en.wikipedia.org/wiki/Anscombe_transform
def anscombe(x):
'''
Compute the anscombe variance stabilizing transform.
the input x is noisy Poisson-distributed data
the output fx has variance approximately equal to 1.
Reference: Anscombe, F. J. (1948), "The transformation of Poisson,
binomial and negative-binomial data", Biometrika 35 (3-4): 246-254
'''
return 2.0*torch.sqrt(x + 3.0/8.0)
# Exact unbiased Anscombe transform to transform Gaussian data back into Poissonian
def exact_unbiased(z):
return (1.0 / 4.0 * z.pow(2) +
(1.0/4.0) * math.sqrt(3.0/2.0) * z.pow(-1) -
(11.0/8.0) * z.pow(-2) +
(5.0/8.0) * math.sqrt(3.0/2.0) * z.pow(-3) - (1.0 / 8.0))
class WienerUNet(torch.nn.Module):
def __init__(self):
'''
Deconvolution function for a batch of images. Although the regularization
term does not have a shape of Tikhonov regularizer, with a slight abuse of notations
the function is called WienerUNet.
The function is built upon the iterative gradient descent scheme:
x_k+1 = x_k - lamb[K^T(Kx_k - y) + exp(alpha)*reg(x_k)]
Initial parameters are:
regularizer: a neural network to parametrize the prior on each iteration x_k.
alpha: power of the trade-off coefficient.
lamb: step of the gradient descent algorithm.
'''
super(WienerUNet, self).__init__()
self.regularizer = UNet(mode='instance')
self.alpha = nn.Parameter(torch.FloatTensor([0.0]))
self.lamb = nn.Parameter(torch.FloatTensor([0.3]))
def forward(self, x, y, ker):
'''
Function that performs one iteration of the gradient descent scheme of the deconvolution algorithm.
:param x: (torch.(cuda.)Tensor) Image, restored with the previous iteration of the gradient descent scheme, B x C x H x W
:param y: (torch.(cuda.)Tensor) Input blurred and noisy image, B x C x H x W
:param ker: (torch.(cuda.)Tensor) Blurring kernel, B x C x H_k x W_k
:return: (torch.(cuda.)Tensor) Restored image, B x C x H x W
'''
#Calculate Kx_k
x_filtered = utils.imfilter2D_SpatialDomain(x, ker, padType='symmetric', mode="conv")
Kx_y = x_filtered - y
#Calculate K^T(Kx_k - y)
y_filtered = utils.imfilter_transpose2D_SpatialDomain(Kx_y, ker,
padType='symmetric', mode="conv")
#Calculate exp(alpha)*reg(x_k)
regul = torch.exp(self.alpha) * self.regularizer(x)
brackets = y_filtered + regul
out = x - self.lamb * brackets
return out
class WienerFilter_UNet(nn.Module):
'''
Module that uses UNet to predict individual gradient of a regularizer for each input image and then
applies gradient descent scheme with predicted gradient of a regularizers per-image.
'''
def __init__(self):
super(WienerFilter_UNet, self).__init__()
self.function = WienerUNet()
#Perform gradient descent iterations
def forward(self, y, ker, n_iter):
output = y.clone()
for i in range(n_iter):
output = self.function(output, y, ker)
return output
#Let's transform our numpy data into pytorch data
x = torch.Tensor(distorted[None, None])
ker = torch.Tensor(psf[None, None])
#Define the model
model = WienerFilter_UNet()
#Load the pretrained weights
state_dict = torch.load(os.path.join('./', 'WF_UNet_poisson'))
state_dict = state_dict['model_state_dict']
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)
model.eval()
#Perform Anscombe transform
x = anscombe(x)
#Calculate output
out = model(x, ker, 10)
#Perform inverse Anscombe transform
out = exact_unbiased(out)
#Some post-processing of data
out = out/image.max()
image = image/image.max()
show_images(image, 'Original image', distorted, 'Blurred image',\
out[0][0].detach().cpu().numpy().clip(0,1), 'Restored with WF-UNet', font=18)
```
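A quick numpy-only check that the Anscombe transform does what the comment in the code claims: it stabilizes the variance of Poisson samples to approximately one across a range of intensities.

```python
# Empirical standard deviation of 2*sqrt(x + 3/8) for Poisson(lam) samples.
import numpy as np

rng = np.random.default_rng(1)
stds = []
for lam in (5.0, 20.0, 80.0):
    x = rng.poisson(lam, size=200_000).astype(float)
    stds.append(float((2.0 * np.sqrt(x + 3.0 / 8.0)).std()))
print([round(s, 3) for s in stds])       # each close to 1
```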
<a href="https://colab.research.google.com/github/2lory/Linear-Algebra_ChE_2nd-Sem-2021-2022/blob/main/Assignment_3_Lacuesta_Ituriaga.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# TASK 1
Create a function named mat_desc() that thoroughly describes a matrix. It should:
* Displays the shape, size, and rank of the matrix.
* Displays whether the matrix is square or non-square.
* Displays whether the matrix is an empty matrix.
* Displays if the matrix is an identity, ones, or zeros matrix
Use 5 sample matrices whose shapes are not lower than . In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
# mat_desc function
def mat_desc(mat):
sq = False # initial value, to be replaced depending on conditions
mat = np.array(mat)
print(mat)
print('\n' + 'Shape:', mat.shape)
print('Size:', mat.size)
print('Rank:', np.linalg.matrix_rank(mat))
if(mat.shape[0] == mat.shape[1]):
sq = True
print('Square Matrix')
else:
print('Non-Square Matrix')
if(mat.shape[0] == 0 and mat.shape[1] == 0):
print('Empty Matrix')
else:
print('Matrix is not empty')
iden = np.identity(mat.shape[0])
one = np.ones(mat.shape[0], dtype=int)
if sq == True : # executed only if the matrix is square
if sq and (iden== mat).all():
print('Identity Matrix')
elif (one == mat).all() :
print('Ones matrix')
elif (mat == 0).all():
print('Zeros Matrix')
else:
print()
one_not_square = np.ones((mat.shape[0], mat.shape[1])) # basis for either ones or zeros matrices that are non-square
zeros_not_square = np.zeros((mat.shape[0], mat.shape[1]))
if sq == False :
if (one_not_square == mat).all() :
print('Ones matrix')
elif (zeros_not_square == mat).all():
print('Zeros Matrix')
else:
print()
mat_desc([
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]
])
mat_desc([
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]
])
mat_desc([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
])
mat_desc([
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]
])
mat_desc([
[2, 4, 6, 8],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]
])
```
# TASK 2
Create a function named mat_operations() that takes in two matrices as input parameters. It should:
* Determines if the matrices are viable for operation and returns your own error message if they are not viable.
* Returns the sum of the matrices.
* Returns the difference of the matrices.
* Returns the element-wise multiplication of the matrices.
* Returns the element-wise division of the matrices.
Use 5 sample matrices whose shapes are not lower than . In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_operations(mat1, mat2):
mat1 = np.array(mat1)
mat2 = np.array(mat2)
print('Matrix 1:', mat1)
print('Matrix 2:', mat2)
if(mat1.shape != mat2.shape):
print('Shape of Both Matrices are Not the Same. Sorry we cannot perform the operations.')
return
print('Sum of the Given Matrices:')
msum = mat1 + mat2
print(msum)
print('Difference of the Given Matrices:')
mdiff = mat1 - mat2
print(mdiff)
print('Element-Wise Multiplication of the Given Matrices:')
mmul = np.multiply(mat1, mat2)
print(mmul)
print('Element-Wise Division of the Given Matrices:')
mdiv = np.divide(mat1, mat2)
print(mdiv)
mat_operations([[2, 4, 6], [1, 2, 3], [3, 2, 1]],
[[0, 1, 0], [1, 1, 1], [0, 0, 0]])
mat_operations([[2, 0, 0], [0, 2, 0], [0, 0, 2]],
[[1, 2, 4], [2, 3, 4], [4, 5, 6]])
mat_operations([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[-1, -2, -3], [-4, -5, -6], [-6, -8, -9]])
mat_operations([[1, 1, 1], [3, 3, 3], [5, 5, 5]],
[[0, 0, 0], [2, 2, 2], [4, 4, 4]])
mat_operations([[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]])
```
```
import torch
import numpy as np
import torch.utils.data
from utils import dataloader
from models.casenet import casenet101 as CaseNet101
import os
import cv2
import tqdm
import argparse
import random
import shutil
import time
import warnings
import math
from PIL import ImageOps, Image
import pprint
import tempfile
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.multiprocessing as mp
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
import torch.nn.functional as F
from tqdm import tqdm
import matplotlib.pyplot as plt
#############################################
# This is code to generate our test dataset
#
# We take the list of Imagenet-R classes, sort the list, and
# take 100/200 of them by going through it with stride 2.
#
# This is how our distorted dataset is made, so these classes should match the
# classes used for trainig and the classes used for eval.
#############################################
# 200 classes used in ImageNet-R
imagenet_r_wnids = ['n01443537', 'n01484850', 'n01494475', 'n01498041', 'n01514859', 'n01518878', 'n01531178', 'n01534433', 'n01614925', 'n01616318', 'n01630670', 'n01632777', 'n01644373', 'n01677366', 'n01694178', 'n01748264', 'n01770393', 'n01774750', 'n01784675', 'n01806143', 'n01820546', 'n01833805', 'n01843383', 'n01847000', 'n01855672', 'n01860187', 'n01882714', 'n01910747', 'n01944390', 'n01983481', 'n01986214', 'n02007558', 'n02009912', 'n02051845', 'n02056570', 'n02066245', 'n02071294', 'n02077923', 'n02085620', 'n02086240', 'n02088094', 'n02088238', 'n02088364', 'n02088466', 'n02091032', 'n02091134', 'n02092339', 'n02094433', 'n02096585', 'n02097298', 'n02098286', 'n02099601', 'n02099712', 'n02102318', 'n02106030', 'n02106166', 'n02106550', 'n02106662', 'n02108089', 'n02108915', 'n02109525', 'n02110185', 'n02110341', 'n02110958', 'n02112018', 'n02112137', 'n02113023', 'n02113624', 'n02113799', 'n02114367', 'n02117135', 'n02119022', 'n02123045', 'n02128385', 'n02128757', 'n02129165', 'n02129604', 'n02130308', 'n02134084', 'n02138441', 'n02165456', 'n02190166', 'n02206856', 'n02219486', 'n02226429', 'n02233338', 'n02236044', 'n02268443', 'n02279972', 'n02317335', 'n02325366', 'n02346627', 'n02356798', 'n02363005', 'n02364673', 'n02391049', 'n02395406', 'n02398521', 'n02410509', 'n02423022', 'n02437616', 'n02445715', 'n02447366', 'n02480495', 'n02480855', 'n02481823', 'n02483362', 'n02486410', 'n02510455', 'n02526121', 'n02607072', 'n02655020', 'n02672831', 'n02701002', 'n02749479', 'n02769748', 'n02793495', 'n02797295', 'n02802426', 'n02808440', 'n02814860', 'n02823750', 'n02841315', 'n02843684', 'n02883205', 'n02906734', 'n02909870', 'n02939185', 'n02948072', 'n02950826', 'n02951358', 'n02966193', 'n02980441', 'n02992529', 'n03124170', 'n03272010', 'n03345487', 'n03372029', 'n03424325', 'n03452741', 'n03467068', 'n03481172', 'n03494278', 'n03495258', 'n03498962', 'n03594945', 'n03602883', 'n03630383', 'n03649909', 'n03676483', 'n03710193', 'n03773504', 
'n03775071', 'n03888257', 'n03930630', 'n03947888', 'n04086273', 'n04118538', 'n04133789', 'n04141076', 'n04146614', 'n04147183', 'n04192698', 'n04254680', 'n04266014', 'n04275548', 'n04310018', 'n04325704', 'n04347754', 'n04389033', 'n04409515', 'n04465501', 'n04487394', 'n04522168', 'n04536866', 'n04552348', 'n04591713', 'n07614500', 'n07693725', 'n07695742', 'n07697313', 'n07697537', 'n07714571', 'n07714990', 'n07718472', 'n07720875', 'n07734744', 'n07742313', 'n07745940', 'n07749582', 'n07753275', 'n07753592', 'n07768694', 'n07873807', 'n07880968', 'n07920052', 'n09472597', 'n09835506', 'n10565667', 'n12267677']
imagenet_r_wnids.sort()
classes_chosen = imagenet_r_wnids[::2] # Choose 100 classes for our dataset
assert len(classes_chosen) == 100
imagenet_path = "/var/tmp/namespace/hendrycks/imagenet/train"
class ImageNetSubsetDataset(datasets.ImageFolder):
"""
Dataset class to take a specified subset of some larger dataset
"""
def __init__(self, root, *args, **kwargs):
print("Using {0} classes {1}".format(len(classes_chosen), classes_chosen))
self.new_root = tempfile.mkdtemp()
for _class in classes_chosen:
orig_dir = os.path.join(root, _class)
assert os.path.isdir(orig_dir)
os.symlink(orig_dir, os.path.join(self.new_root, _class))
super().__init__(self.new_root, *args, **kwargs)
def __del__(self):
# Clean up
shutil.rmtree(self.new_root)
val_transforms = [
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
),
]
val_dset = ImageNetSubsetDataset(
imagenet_path,
transform=transforms.Compose(val_transforms)
)
val_loader = torch.utils.data.DataLoader(
dataset=val_dset,
batch_size=1,
shuffle=False
)
def show_image(img, plt):
    img = torch.as_tensor(img).squeeze(0)   # accept tensors or numpy arrays
    if img.dim() == 3:                      # C x H x W -> H x W x C
        img = img.permute((1, 2, 0))
    plt.imshow(img, interpolation='nearest')
def do_test_sbd(net_, val_data_loader_, num_to_test = 1):
print('Running Inference....')
net_.eval()
# define plots to show data
fig, ax = plt.subplots(num_to_test, 2, squeeze=False)  # keep ax 2-D even when num_to_test == 1
fig.subplots_adjust(wspace=0.025, hspace=0.02)
fig.set_size_inches(30, 30)
for i_batch, (input_img, _) in enumerate(val_data_loader_):
print(input_img)
im = input_img.cuda()
out_masks = net_(im)
prediction = torch.sigmoid(out_masks[0])
print(prediction.shape)
# Show images
edges, _ = torch.max(prediction, dim=1, keepdim=False)
show_image(edges.cpu().numpy(), ax[i_batch][0])
show_image(input_img.squeeze(0).cpu().numpy(), ax[i_batch][1])
if i_batch + 1 >= num_to_test:
break
net = CaseNet101()
net = torch.nn.DataParallel(net.cuda())
ckpt = './checkpoints/sbd/model_checkpoint.pt'
print('loading ckpt :%s' % ckpt)
net.load_state_dict(torch.load(ckpt), strict=True)
do_test_sbd(net, val_loader, num_to_test=1)
```
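The stride-2 class selection described in the comments boils down to a sort followed by a slice, which always yields half of the list. In isolation (with toy stand-ins for the real WNIDs):

```python
# Sort the WNIDs, then take every second one: len(chosen) == len(wnids) // 2.
wnids = ["n03", "n01", "n04", "n02"]     # toy stand-ins for real WNIDs
chosen = sorted(wnids)[::2]
print(chosen)
```

Sorting first makes the selection deterministic, which is why the training, distortion, and eval pipelines end up with the same 100 classes.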
```
from pyxnat import Interface
import xmltodict
import xnat_downloader.cli.run as run
from pyxnat import Inspector
import os
```
### Play 04/11/2018
```
central = Interface(server="https://central.xnat.org", user='')
project = 'xnatDownload'
subject = 'sub-001'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
testSub.get_sessions()
testSub.ses_dict
testSub.get_scans(list(testSub.ses_dict)[0])
testSub.scan_dict
testSub.download_scan('PU:anat-T1w', '/home/james/Documents/myTestBIDS')
```
### Play 04/10/2018
```
central = Interface(server="https://rpacs.iibi.uiowa.edu/xnat")
project = r'BIKE_EXTEND'
subject = r'sub-999'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
sub_objs = proj_obj.subjects()
# list the subjects by their label (e.g. sub-myname)
# instead of the RPACS_1223 ID given to them
sub_objs._id_header = 'label'
subjects = sub_objs.get()
testSub.get_sessions()
testSub.ses_dict
testSub.get_scans(list(testSub.ses_dict)[0])
testSub.download_scan('PU:anat-FLAIR', '/home/james/Documents/myTestBIDS')
scan_obj = testSub.scan_dict['PU:anat-FLAIR']
scan_res = scan_obj.resources()
scan_files = scan_res.files()
exp_obj = scan_obj.parent()
help(scan_res)
scans = testSub.ses_dict['activepre'].scans()
exp_obj.scans().download()
testSub.scan_dict['PU:anat-FLAIR'].id()
inspect = Inspector(central)
inspect.experiment_values('xnat:mrSessionData')
project = 'GE3T_DEV'
subject = r'sub-voss01'
project_1 = 'VOSS_PACRAD'
subject_1 = '102'
proj_obj = central.select.project(project).subject(subject).experiments().get('')
all_subjects = central.select.project(project).subjects()
all_subjects.get()
type(proj_obj[0])
sub_xnat_path = '/project/{project}/subjects/{subject}'.format(subject=subject,
project=project)
sub_obj = central.select(sub_xnat_path)
sub_obj.exists()
sub_dict = xmltodict.parse(sub_obj.get())
session_labels = None
sessions = "ALL"
# participant
sub_dict['xnat:Subject']['@label']
# sessions
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']
# single session label
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['@label']
# scans
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan']
# single scan
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan'][3]
# scan type
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan'][3]['@type']
run.sort_sessions(sub_dict, session_labels, sessions)
ses_labels = [None]
ses_list = [sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']]
ses_list
session_tech_labels = [ses['@label'] for ses in ses_list]
session_tech_labels
upload_date = [run.get_time(ses) for ses in ses_list]
upload_date
session_info = list(zip(ses_list, upload_date, session_tech_labels, ses_labels))
session_info
session_info.sort(key=lambda x: x[1])
session_info
ses_info = session_info[0]
ses_dict = ses_info[0]
project
central
tmp = sub_obj.resources()
tmp.get()
tmp2.get()
central.inspect.datatypes('xnat:subjectData')
central.inspect.experiment_types()
central.inspect.assessor_types()
central.inspect.scan_types()
central.inspect.structure()
central.select('//experiments').get()
constr = [('xnat:mrSessionData/PROJECT', '=', project)]
test_get = central.select('xnat:mrScanData').where(constr)
test_get.as_list()
central.inspect.datatypes()
central.inspect.datatypes('xnat:projectData')
central.inspect.datatypes('xnat:mrSessionData')
constraints = [('xnat:mrSessionData/Project','=', project)]
var = central.select('xnat:mrSessionData', ['xnat:mrSessionData/SUBJECT_ID', 'xnat:mrSessionData/AGE']).where(constraints)
var.
central.inspect.datatypes('xnat:projectData')
constraints = [('xnat:otherDicomSessionData/PROJECT','=', project)]
var2 = central.select('xnat:otherDicomSessionData', ['xnat:mrSessionData/SUBJECT_ID', 'xnat:mrSessionData/AGE']).where(constraints)
central.select('//subjects').get('ID')
proj_obj = central.select.project(project)
scan_obj = central.select.project(project).subject(subject).experiment('20180131').scans().get('')[11]
all_scans = central.select.project(project).subject(subject).experiment('20180131').scans()
ses_obj = central.select.project(project_1).subject(subject_1).experiment('20160218').scans()
first_scan = ses_obj.get('')[0]
# rsrce_obj = first_scan.resources().get('')
# fil = rsrce_obj.files().get('')[0]
parent_obj = first_scan.parent().label()
# parent_obj.label()
parent_obj
first_scan.id()
all_scans.download(dest_dir='/home/james/Documents', type='task-block_bold', extract=True)
scan_files = scan_rce.files()
dicom_rce = scan_obj.resource('DICOM')
all_files = dicom_rce.files()
file_ex = all_files.get('')[0]
file_ex.get_copy('/home/james/')
sub_obj = proj_obj.subjects()
sub_obj
sub_obj.get('ID')
subject = proj_obj.subject('sub-voss01')
subject.experiments()
ses_obj = subject.experiments()
# have to run twice to get the correct result (e.g. the date)
ses_obj.get('label')
test_ses_first = ses_obj.first()
test_scans = test_ses_first.scans()
scan = test_scans.fetchone()
scan
test_ses_first.attrs.get()
scans_obj = ses_obj.scans()
scan_tmp = scans_obj.get()[4]
# how to access scan type
scan_tmp.attrs.get('type')
rsrcs = scans_obj.resources()
tmp_first = rsrcs.first()
tmp = tmp_first.files()
scan1 = scans_obj.resource('1')
scan1.files().get()
scans_obj.resources
rsrcs.get('series_description')
scans_obj._get_array
central.inspect.datatypes('xnat:mrScanData')
```
## Test data: 05/08/2018
```
sub_obj.attrs.get('label').zfill(3)
central = Interface(server="https://central.xnat.org")
project = 'xnatDownload'
subject = '21'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
testSub.get_sessions()
ses_label = list(testSub.ses_dict)[0]
testSub.get_scans(ses_label)
testSub.scan_dict
scan = "T1rho - SL10 (NO AUTO PRESCAN)"
dest = '/home/james/Downloads'
scan_repl_dict = {
"SAG FSPGR BRAVO": "anat-T1w",
"Field Map": "fmap",
"T1rho - SL50": "anat-T1rho_acq-SL50",
"T1rho - SL10 (NO AUTO PRESCAN)": "anat-T1rho_acq-SL10",
"fMRI Resting State": "func-bold_task-rest",
"fMRI SIMON": "func-bold_task-simon",
"DTI": "dwi",
"3D ASL": "func-asl",
"PROBE-SV 35": "mrs-fid",
"PU:SAG FSPGR BRAVO": "anat-T1w_rec-pu",
"PU:fMRI Resting State": "func-bold_task-rest_rec-pu",
"PU:fMRI SIMON": "func-bold_task-simon_rec-pu",
"Cerebral Blood Flow": "func-asl_rec-cbf",
"NOT DIAGNOSTIC: PFile-PROBE-SV 35": "mrs-fid_rec-pfile"
}
bids_num_len = 3
testSub.download_scan_unformatted(scan, dest, scan_repl_dict, bids_num_len)
```
# "ArviZ customization with rcParams"
> "Use ArviZ rcParams to get sensible defaults right out of the box"
- toc: true
- author: Oriol Abril
- badges: true
- categories: [project, python, arviz]
- tags: [customization, rcparams]
- image: images/nb/rc_context.png
# About
ArviZ not only builds on top of matplotlib's `rcParams` but also adds its own rcParams instance to handle specific settings. This post will only touch briefly on matplotlib's rcParams, which are already detailed in [matplotlib's docs](https://matplotlib.org/1.4.1/users/customizing.html); it will dive into ArviZ-specific rcParams.
# Introduction
Paraphrasing the description of rcParams in matplotlib's documentation:
> ArviZ uses arvizrc configuration files to customize all kinds of properties, which we call rcParams. You can control the defaults of many properties in ArviZ: data loading mode (lazy or eager), automatically showing generated plots, the default information criteria and so on.
There are several ways of modifying the `arviz.rcParams` instance, each targeted at specific needs.
```
import arviz as az
import matplotlib.pyplot as plt
idata = az.load_arviz_data("centered_eight")
```
# Customizing ArviZ
## arvizrc file
To define default values on a per-user or per-project basis, an `arvizrc` file should be used. When imported, ArviZ searches for an `arvizrc` file in several locations, sorted below by priority:
- `$PWD/arvizrc`
- `$ARVIZ_DATA/arvizrc`
- On Linux,
- `$XDG_CONFIG_HOME/arviz/arvizrc` (if `$XDG_CONFIG_HOME`
is defined)
- or `$HOME/.config/arviz/arvizrc` (if `$XDG_CONFIG_HOME`
is not defined)
- On other platforms,
- `$HOME/.arviz/arvizrc` if `$HOME` is defined
Once one of these files is found, ArviZ stops looking and loads its configuration. If none of them is present, the values hardcoded in the ArviZ codebase are used. The file used to set the default values can be obtained with the following command:
```
import arviz as az
print(az.rcparams.get_arviz_rcfile())
```
ArviZ has loaded a file used to set defaults on a per-user basis. Unless I place a different rc file in the current directory or modify `rcParams` dynamically, this configuration will be used automatically every time ArviZ is imported.
This can be really useful to define your favourite backend or information criterion: write it once in the rc file and ArviZ automatically uses the desired values.
> Important: You should not rely on ArviZ defaults being always the same.
ArviZ strives to encourage best practices and will therefore change default values whenever a new algorithm better achieves this goal. If you rely on a specific value, you should either use an `arvizrc` template or set the defaults at the beginning of every script/notebook.
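For instance, a minimal `arvizrc` could pin down the settings used later in this post. The `key : value` layout sketched below follows the template file shipped with ArviZ; only keys that actually appear in this post are used:

```
data.load : eager
plot.max_subplots : 30
stats.hdi_prob : 0.9
plot.point_estimate : mode
```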
## Dynamic rc settings
To set default values on a per-file or per-project basis, `rcParams` can also be modified dynamically, either by overwriting a specific key:
```
az.rcParams["data.load"] = "eager"
```
Note that `rcParams` is the instance to be modified, exactly like in matplotlib. Careful with capitalization!
Another option is to define a dictionary with several new defaults and update rcParams all at once.
```
rc = {
"data.load": "lazy",
"plot.max_subplots": 30,
"stats.ic_scale": "negative_log",
"plot.matplotlib.constrained_layout": False
}
az.rcParams.update(rc)
```
## rc_context
And last but not least, to temporarily use a different set of defaults, ArviZ also has a [`rc_context`](https://arviz-devs.github.io/arviz/generated/arviz.rc_context.html#arviz.rc_context) function. Its main difference and advantage is that it is a context manager: all code executed inside the context uses the defaults defined by `rc_context`, and once we exit the context everything goes back to normal. Let's generate 3 posterior plots with the same command to show this:
```
_, axes = plt.subplots(1,3, figsize=(15,4))
az.plot_posterior(idata, var_names="mu", ax=axes[0])
with az.rc_context({"plot.point_estimate": "mode", "stats.hdi_prob": 0.7}):
az.plot_posterior(idata, var_names="mu", ax=axes[1])
az.plot_posterior(idata, var_names="mu", ax=axes[2]);
```
# ArviZ default settings
This section will describe ArviZ rcParams as of version 0.8.3 (see [GitHub](https://github.com/arviz-devs/arviz/blob/master/arvizrc.template) for an up to date version).
## Data
The rcParams in this section are related to the [data](https://arviz-devs.github.io/arviz/api.html#data) module in ArviZ, that is, they are either related to `from_xyz` converter functions or to `InferenceData` class.
**data.http_protocol** : *{https, http}*
Only the first two example datasets, `centered_eight` and `non_centered_eight`, come as part of ArviZ. All the others are downloaded from figshare the first time they are requested and stored locally so they can be reloaded quickly afterwards. We can get the names of the available datasets by not passing any argument to `az.load_arviz_data` (you can also get the description of each of them with `az.list_datasets`):
```
az.load_arviz_data().keys()
```
Thus, the first time you call `az.load_arviz_data("radon")`, ArviZ downloads the dataset using `data.http_protocol`. The default is `https`, but if needed it can be changed to `http`. Notice that there is no fallback: if downloading with `https` fails, there is no second try with `http`; an error is raised. To use `http` you have to set the rcParam explicitly.
**data.index_origin** : *{0, 1}*
ArviZ's integration with Stan and Julia, which use 1-based indexing, motivates this rcParam. It is still at an early stage and its implementation is bound to change, so it has no detailed description yet.
**data.load** : *{lazy, eager}*
Even when not using Dask, xarray's default when reading from disk is to load data lazily into memory. ArviZ's `from_netcdf` uses the same default. That is, the ArviZ functions that read data from disk, `from_netcdf` and `load_arviz_data`, do not load the data into memory unless the `data.load` rcParam is set to `eager`.
Most use cases not only do not require loading data into memory but also benefit from lazy loading. There is one clear exception, however: when too many files are lazily opened at the same time, xarray ends up crashing with extremely cryptic error messages, and these cases require setting data loading to eager mode. One example of such a situation is generating the ArviZ documentation, so we set `data.load` to `eager` in the Sphinx configuration file.
**data.metagroups** : *mapping of {str : list of str}*
> Warning: Do not overwrite `data.metagroups` as things may break, to add custom metagroups add new keys to the dictionary as shown below
One of the current projects in ArviZ is to extend the capabilities of `InferenceData`. One of its limitations was not allowing its functions and methods to be applied to several groups at the same time. Starting with ArviZ 0.8.0, [`InferenceData` methods](https://arviz-devs.github.io/arviz/generated/arviz.InferenceData.html#arviz.InferenceData) take `groups` and `filter_groups` arguments to overcome this limitation. Combined, these two arguments have the same capabilities as `var_names`+`filter_vars` in plotting functions: exact matching, pandas-like and regex matching, support for ArviZ's `~` negation prefix, plus one extra feature: metagroups. So what are metagroups? Let's see:
```
#collapse-hide
for metagroup, groups in az.rcParams["data.metagroups"].items():
print(f"{metagroup}:\n {groups}\n")
```
Imagine the data you passed to the model was rescaled. After converting to `InferenceData` you have to rescale the data back to its original values, and not only the observations: the posterior and prior predictive values too!
Applying the rescaling manually to each of the three groups is tedious at best, and creating a variable called `observed_vars` storing a list with these 3 groups is problematic (when doing prior checks there is no `posterior_predictive` group): it's a highway towards errors at every turn. Metagroups are similar to the variable approach, but they are already there and they apply the function only to the groups that are present. Let's add a new metagroup and use it to shift our data:
```
az.rcParams["data.metagroups"]["sampled"] = (
'posterior', 'posterior_predictive', 'sample_stats', 'log_likelihood', 'prior', 'prior_predictive'
)
shifted_idata = idata.map(lambda x: x-7, groups="sampled")
```
**data.save_warmup** : *bool*
If `True`, converter functions will store warmup iterations in the corresponding groups by default.
> Note: `data.save_warmup` does not affect `from_netcdf`, all groups are always loaded from file
---
## Plot
### General
**plot.backend** : *{matplotlib, bokeh}*
Default plotting backend.
**plot.max_subplots** : int
Maximum number of subplots in a single figure. Adding too many subplots to a figure can be really slow, to the point that it looks like everything has crashed without any error message. When there are more variables to plot than `max_subplots` allows, ArviZ sends a warning and plots at most `max_subplots` subplots. See for yourselves:
```
with az.rc_context({"plot.max_subplots": 3}):
az.plot_posterior(idata);
```
**plot.point_estimate** : *{mean, median, mode, None}*
Default point estimate to include in plots like `plot_posterior` or `plot_density`.
### Bokeh
**plot.bokeh.bounds_x_range**, **plot.bokeh.bounds_y_range** : *auto, None or tuple of (float, float), default auto*
**plot.bokeh.figure.dpi** : *int, default 60*
**plot.bokeh.figure.height**, **plot.bokeh.figure.width** : *int, default 500*
**plot.bokeh.layout.order** : *str, default default*
Select the subplot structure for bokeh. One of `default`, `column`, `row`, `square`, `square_trimmed` or `Ncolumn` (`Nrow`), where N is an integer number of columns (rows). Here is an example that generates a subplot grid with 2 columns and as many rows as needed to fit all the variables:
```
with az.rc_context({"plot.bokeh.layout.order": "2column"}):
az.plot_ess(idata, backend="bokeh")
```
**plot.bokeh.layout.sizing_mode** : *{fixed, stretch_width, stretch_height, stretch_both, scale_width, scale_height, scale_both}*
**plot.bokeh.layout.toolbar_location** : *{above, below, left, right, None}*
Location for toolbar on bokeh layouts. `None` will hide the toolbar.
**plot.bokeh.marker** : *str, default Cross*
Default marker for bokeh plots. See [bokeh reference on markers](https://docs.bokeh.org/en/latest/docs/reference/models/markers.html) for more details.
**plot.bokeh.output_backend** : *{webgl, canvas, svg}*
**plot.bokeh.show** : *bool, default True*
Show bokeh plot before returning in ArviZ function.
**plot.bokeh.tools** : *str, default reset,pan,box_zoom,wheel_zoom,lasso_select,undo,save,hover*
Default tools in bokeh plots. More details on [Configuring Plot Tools docs](https://docs.bokeh.org/en/latest/docs/user_guide/tools.html)
### Matplotlib
Matplotlib already has its own [`rcParams`](https://matplotlib.org/3.2.1/tutorials/introductory/customizing.html#a-sample-matplotlibrc-file), which are actually the inspiration for ArviZ rcParams. Therefore, this section is minimalistic.
**plot.matplotlib.show** : *bool, default False*
Call `plt.show` from within ArviZ plotting functions. This generally makes no difference in Jupyter-like environments, but it can be useful, for instance, in the IPython terminal when we don't want to customize the plots generated by ArviZ by changing titles or labels.
---
## Stats
**stats.hdi_prob** : *float*
Default probability of the calculated HDI intervals.
> Important: This probability is completely arbitrary. ArviZ using 0.94 instead of the more common 0.95 aims to emphasize this arbitrary choice.
**stats.information_criterion** : *{loo, waic}*
Default information criterion used by `compare` and `plot_elpd`
**stats.ic_pointwise** : *bool, default False*
Return pointwise values when calling `loo` or `waic`. Pointwise values are an intermediate result, so setting `ic_pointwise` to `True` requires no extra computation.
**stats.ic_scale** : *{log, deviance, negative_log}*
Default information criterion scale. See docs on [`loo`](https://arviz-devs.github.io/arviz/generated/arviz.loo.html#arviz.loo) or [`waic`](https://arviz-devs.github.io/arviz/generated/arviz.waic.html#arviz.waic) for more detail.
---
> Tip: Is there any extra rcParam you'd like to see in ArviZ? Check out [arviz-devs/arviz#792](https://github.com/arviz-devs/arviz/issues/792), it's more than possible you'll be able to add it yourself!
Package versions used to generate this post:
```
#hide_input
%load_ext watermark
%watermark -n -u -v -iv -w
```
---
Comments are not enabled for the blog; to inquire further about the contents of the post, ask on [ArviZ Issues](https://github.com/arviz-devs/arviz/issues).
# Chapter 1: What Is Deep Learning?
The material presented here draws heavily on the following books.
- [*Hands-On Machine Learning (2nd ed.)*](https://m.hanbit.co.kr/store/books/book_view.html?p_code=B7033438574), Aurélien Géron
- [*Deep Learning with Python (2nd ed.)*](https://www.manning.com/books/deep-learning-with-python-second-edition#toc), François Chollet
- [*Easy Deep Learning*](https://www.aladin.co.kr/shop/wproduct.aspx?ItemId=269891239), Byunghyun Ban
Sincere thanks to the authors and publishers who made their source code and lecture materials publicly available.
## 1.1 Artificial Intelligence, Machine Learning, and Deep Learning
### Relationship 1: research fields
<div align="center"><img src="https://raw.githubusercontent.com/codingalzi/handson-ml2/master/slides/images/ai-ml-relation.png" style="width:600px;"></div>
Image source: [Kyobo Book Centre (The Age of Machine Learning)](http://www.kyobobook.co.kr/readIT/readITColumnView.laf?thmId=00198&sntnId=14142)
### Relationship 2: history
<div align="center"><img src="https://raw.githubusercontent.com/codingalzi/handson-ml2/master/slides/images/ai-ml-relation2.png" style="width:600px;"></div>
Image source: [NVIDIA blog](https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/)
### Artificial intelligence
- (1950s) Started from the question: can computers think?
- (1956) John McCarthy
    - implement all human intellectual activity with computers
    - Definition: the effort to automate intellectual tasks normally performed by humans
- (Until the 1980s) __Symbolic AI__: research into specifying every possible rule by hand rather than __learning__
    - techniques that logically enumerate every possibility
    - performed well at chess and similar games
    - however, failed at more complex problems such as image classification, speech recognition, and natural language translation
- Machine learning emerged afterwards
### Machine learning
- A machine learning system:
    - does not merely execute explicit rules (instructions);
    - learns statistical structure from data and then generates rules with which it can complete the given task on its own (automatically).
    - Example: a photo-tagging system that, after training on a dataset of tagged photos, tags new photos automatically.
- Took off in earnest in the 1990s
    - driven by faster hardware and larger datasets
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-a-new-programming-paradigm.png" style="width:400px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-a-new-programming-paradigm.png)
#### Machine learning vs. statistics
- Machine learning is founded on statistics.
- However, very large datasets (big data) cannot be handled by classical statistics alone.
- For deep learning in particular, an engineering approach matters more than mathematical or statistical theory.
    - It depends more on advances in software and hardware.
### Learning data representations
- Data __representation__: data encoded in a particular way
    - Example: representing a color image with
        - RGB (red-green-blue) values, or
        - HSV (hue-saturation-value) values
- The appropriate representation depends on the task at hand
    - Selecting only the red pixels of a color photo: use the RGB representation
    - Lowering the saturation of a color image: use the HSV representation
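The two representations above can be converted back and forth with Python's standard-library `colorsys` module; here is a small sketch of the saturation example:

```python
import colorsys

# Pure red expressed in RGB (all channels in [0, 1])
r, g, b = 1.0, 0.0, 0.0

# The same color expressed in HSV
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0 -> hue 0, fully saturated, full value

# Halving the saturation is trivial in HSV; converting back gives a washed-out red
print(colorsys.hsv_to_rgb(h, s * 0.5, v))  # (1.0, 0.5, 0.5)
```

The task picks the representation: the saturation change is a single multiplication in HSV, but would require coupled changes to all three channels in RGB.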
#### Machine learning models
- Requirements
    - __Input dataset__: e.g. sound files for a speech-recognition model, photos for an image-tagging model.
    - __Expected outputs__: e.g. human-written transcripts for a speech-recognition task; human-assigned tags such as 'dog' or 'cat' for an image task.
    - __A performance measure for the algorithm__: a way to measure the distance (difference) between the predicted and the expected outputs.
      __Learning__ is the process of repeatedly adjusting the algorithm's parameters in the direction that reduces this distance.
- Role: transform the input data into a suitable representation, then find appropriate mathematical or statistical rules on the transformed dataset for solving the task.
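The "repeatedly adjust the parameters to reduce the distance" loop described above can be sketched in a few lines of plain Python (a toy one-parameter model; all names and values are illustrative):

```python
# Toy model: predict y = w * x, and learn w from (input, expected output) pairs.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # expected outputs follow y = 2 * x
w = 0.0    # the parameter to be learned
lr = 0.05  # learning rate

for step in range(200):
    # performance measure: mean squared distance between prediction and target;
    # its gradient tells us which direction reduces that distance
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adjust the parameter against the gradient

print(round(w, 3))  # converges to 2.0
```

Deep learning applies the same loop, just with millions of parameters and far richer models.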
#### Example: linear classification
1. Left: the input dataset
1. Middle: an unsuitable coordinate transformation, not helpful for the classification task.
1. Right: a suitable coordinate transformation, which makes the classification task far easier to solve.
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-learning_representations.png" style="width:700px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-learning_representations.png)
#### Why manual data transformations are hard
- In the example above, a suitable transformation can be found by hand without much trouble.
- For handwritten-digit recognition (MNIST), however, this is far from simple (see the figure below).
<div align="center"><img src="https://raw.githubusercontent.com/codingalzi/handson-ml2/master/slides/images/ch03/homl03-10.png" width="300"/></div>
Image source: [Hands-On Machine Learning (2nd ed.)](https://www.hanbit.co.kr/store/books/look.php?p_code=B9267655530)
#### Automating data transformations
- Training a machine learning model: automatically searching for transformations into more useful data representations
- Criterion for the __usefulness of a representation__: it provides simpler rules for solving the given task
- Kinds of transformations
    - coordinate changes, counting pixels, counting closed loops, linear/nonlinear maps, translations, ...
    - which transformation to use fundamentally depends on the problem at hand
#### Hypothesis space
- A machine learning algorithm fundamentally cannot discover the most suitable transformation for a problem entirely on its own.
- __Hypothesis space__: the set of functions specified by the programmer
- Machine learning algorithm: searches the hypothesis space for a suitable transformation function
- Example: the hypothesis space of the 2D coordinate-transformation problem above is the set of all possible coordinate transformations
### What does the 'deep' in deep learning mean?
- 'Deep' means learning with multiple 'layers' that perform __successive transformations of the data representation__
    - i.e., machine learning that performs __hierarchical representation learning__ is called deep learning
- Depth of a deep learning model = the height of the stack of layers
    - models with dozens or even hundreds of layers exist
    - the key point is that the representation transformation in every layer happens __automatically__!
- Shallow learning: learning that uses only one or two layers
<div align="center"><img src="https://raw.githubusercontent.com/codingalzi/handson-ml2/master/slides/images/ch14/homl14-15b.png" width="700"/></div>
<Image source: [ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html)>
#### Neural networks
- Unit: the building block of a layer; a layer consists of a few to a few dozen units.
- Neural network: a model that performs hierarchical representation learning
    - a model that learns a layered __input-to-target__ transformation
    - Example: digit image $\Rightarrow \cdots \Rightarrow$ digit
- A simple idea that produces very powerful results. __It has nothing to do with brain science!__
- Example: handwritten digit recognition
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-mnist_representations.png" style="width:550px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-mnist_representations.png)
### What deep learning has achieved so far
- Near-human-level image classification, speech recognition, handwriting recognition, and autonomous driving
- Dramatically improved machine translation and text-to-speech (TTS)
- Digital assistants such as Google Assistant and Amazon Alexa
- Improved ad targeting and web search
- The ability to answer natural-language questions
- Superhuman Go play (AlphaGo, 2016)
### Outlook
- Holding excessively high short-term expectations is risky.
    - Disappointment could cause investment in AI to shrink rapidly.
    - The first and second "AI winters" really did happen, in the 1970s and the 1990s.
- As of the early 2020s, deep learning is being applied to important problems in earnest.
    - It has not yet become ubiquitous, however.
- As with the internet in 1995, we cannot yet properly gauge the impact deep learning will have.
## 1.2 Before deep learning: a brief history of machine learning
- Most machine learning algorithms used in industry are not deep learning algorithms.
    - The training data may be too small, or a different algorithm may simply perform better.
### Probabilistic modeling
- Analysis using the Naive Bayes algorithm is the representative example.
    - a traditional technique grounded in Bayes' theorem
    - applied, even without computers, from the 1950s
    - the foundations of probability theory, including Bayes' theorem, go back to the 18th century.
- Example: logistic regression
### Early neural networks
- The basic ideas were studied from the 1950s.
- The LeNet convolutional neural network: an automatic classification system for handwritten digit images
    - Yann LeCun at Bell Labs, 1989
<div align="center"><img src="https://raw.githubusercontent.com/codingalzi/handson-ml2/master/slides/images/ch14/homl14-16.gif" width="400"/></div>
<Image source: [LeNet-5 CNN](http://yann.lecun.com/exdb/lenet/index.html)>
### Kernel methods
- 1990s: support vector machines (SVMs) + the kernel trick
    - surpassed the performance of early neural networks.
- Limitations
    - ill-suited to large datasets (very slow)
    - poor at perceptual problems such as image classification
    - weak at __feature engineering__
        - the transformation into a useful data representation must be done by hand
### Decision trees, random forests, and gradient boosting machines
- (2000s) Decision trees: classify inputs by applying flowchart-like decision criteria
- Random forests: preferred over kernel methods until around 2010.
- Gradient boosting machines: the most popular ensemble method since around 2014
    - still among the best-performing models for non-perceptual problems.
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-decision_tree.png" style="width:350px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch01-decision_tree.png)
### Deep learning takes off
- 2011: deep neural networks start being trained on GPUs
- 2012: breakthrough success in the ImageNet Challenge (an image-classification competition)
    - best accuracy in 2011: 74.3%
    - best accuracy in 2012, with a convolutional neural network (convnet): 83.6%
    - best accuracy in 2015: 96.4%
    - the ImageNet Challenge is no longer held.
- After 2015: deep learning models replace SVMs, decision trees, and other methods in many problem domains.
### What makes deep learning special
- Automated transformation of data representations, i.e. automated feature engineering
- Each layer progressively builds more complex representations of the data.
- The model handles every transformation step itself, i.e. it does the feature engineering for every layer on its own.
#### Recent trends in machine learning (1)
- Tools used by the top teams in 2019 Kaggle competitions, according to a survey
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/kaggle_top_teams_tools.png" style="width:500px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/kaggle_top_teams_tools.png)
#### Recent trends in machine learning (2)
- The most widely used tools in data science in general (Kaggle survey, 2020)
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/kaggle_ds_survey_2020.png" style="width:500px;"></div>
Image source: [www.kaggle.com/kaggle-survey-2020 (p. 20)](https://www.kaggle.com/kaggle-survey-2020)
## 1.3 What drives deep learning forward
### Key ingredients of deep learning
- Computer vision and natural language processing have advanced dramatically since 2012.
- Yet the techniques behind these breakthroughs were proposed back in the 1990s:
    - 1990: convolutional neural networks (convnets) and backpropagation
    - 1997: LSTM (Long Short-Term Memory)
- Three factors behind the rapid progress of deep learning in the 2010s:
    - hardware
    - datasets and benchmarks
    - algorithms
### Hardware
- CPUs: over 5,000 times faster than in 1990
- GPUs (graphics processing units):
    - NVIDIA and AMD have been investing astronomically in gaming graphics cards since the 2000s
    - 2007: NVIDIA introduces CUDA, a programming interface for GPUs; massive matrix computations can now be run in parallel.
    - 2011: CUDA starts being used for neural networks.
- TPUs (tensor processing units): a chip dedicated to deep learning, introduced by Google in 2016.
    - far faster and more energy-efficient than GPUs.
    - The third-generation TPU card, announced in 2020, is more than 10,000 times faster than the top supercomputer of 1990.
    - The top supercomputer of 2020 = 27,000 NVIDIA GPUs = the performance of 10 pods (1 pod = 1,024 TPU cards)
### Data
- The internet and advances in storage devices have produced an enormous accumulation of data
    - __Moore's law__: the performance of integrated circuits doubles roughly every two years.
- Flickr (images), YouTube (video), and Wikipedia (text) were essential preconditions for the breakthroughs in computer vision and natural language processing (NLP).
- The rise of benchmarking
    - the ImageNet dataset: 1.4 million images hand-annotated with 1,000 class labels
    - competitions such as the ImageNet Challenge and Kaggle competitions
### Algorithms
- Until the late 2000s there was no algorithm that could train deep networks efficiently (the backpropagation problem was unsolved)
- 2009-2010: major algorithmic improvements
    - better activation functions for neural network layers
    - better weight-initialization schemes
    - better optimizers (RMSProp, Adam, ...)
- 2014-2016: various techniques that help backpropagation
    - batch normalization, residual connections, depthwise separable convolutions, ...
- Today: deep networks with dozens of layers and tens of millions of weights (parameters) can be trained
### Investment
- Investment has grown dramatically since 2013
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/startup_investment_oecd.png" style="width:500px;"></div>
Image source: [OECD estimate of total investments in AI startups](https://www.oecd-ilibrary.org/sites/3abc27f1-en/index.html?itemId=/content/component/3abc27f1-en&mimeType=text/html)
### The democratization of deep learning
- Before: you had to be able to write difficult programs in C++, CUDA, and the like.
- Now: basic Python programming skills are enough to get started
    - using libraries such as scikit-learn, Theano, TensorFlow, and Keras
### The future of deep learning
- Revolutionary progress in deep learning is still under way.
    - The biggest recent breakthrough: natural language processing based on the transformer architecture
- No one knows what things will look like in 20 years, but the following properties, which underpin deep learning's progress, should remain relevant:
    - Simplicity: automated feature engineering makes building and training models simple.
    - Scalability: because training parallelizes on GPUs or TPUs, deep learning can exploit Moore's law to the fullest.
      Splitting the data into small batches and parallelizing the training iterations makes it possible to train on datasets of arbitrary size.
    - Versatility and reusability: training can be continued on additional data, and a well-trained model can be reused to train models for other purposes.
      This also makes it possible to apply deep learning models to very small datasets.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import scipy as sp
import seaborn as sns
np.set_printoptions(precision=2, linewidth=120)
from copy import copy
from tqdm import *
from drift_qec.Q import *
```
- Original basis `Q0`
- Recovered basis `Qc` (controlled basis)
- Effective basis `Qeff = Qt.T * Q0`
- Use the effective basis for error sampling
- Learn `Qt` progressively better
- When data comes in from the `Qeff` alignment, it must be transformed back to the standard basis before averaging with the existing channel estimate
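The last bullet can be sketched with plain NumPy (the names `Q` and `M_eff` here are illustrative assumptions, not the notebook's `Channel` API):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # stand-in for the learned basis
M_eff = np.diag([0.7, 0.2, 0.1])                  # channel estimate in the effective basis

# Rotate the estimate back to the standard basis before averaging it
# with the existing channel estimate.
M_std = Q @ M_eff @ Q.T

# The rotation is an isometry, so the error rates (eigenvalues) are preserved.
print(np.sort(np.linalg.eigvalsh(M_std)))  # -> approximately [0.1, 0.2, 0.7]
```

Averaging estimates expressed in different bases without this rotation would mix incompatible coordinates and bias the channel estimate.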
Error x Time x Cycle_ratio
```
D = 0.01
N_ERRORS = 1e6
N_TRIALS = 100
N_CYCLES = np.logspace(1, 3, 10).astype(int)  # the np.int alias is removed in recent NumPy
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
for n_cycles in N_CYCLES:
n = int(N_ERRORS / n_cycles)
channel = Channel(kx=0.7, ky=0.2, kz=0.1,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=n, d=D)
RECORDS.append({
"trial": trial,
"cycle_length": n,
"n_cycles": n_cycles,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3))
})
for cycle in range(n_cycles):
channel.update()
RECORDS.append({
"trial": trial,
"cycle_length": n,
"n_cycles": n_cycles,
"time": (cycle+1)*n,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3))
})
df = pd.DataFrame(RECORDS)
df.to_csv("{}errorsd{}.csv".format(N_ERRORS,D))
df["cycle_length"] = (N_ERRORS / df["n_cycles"]).astype(int)
df.tail(10)
PAL = sns.color_palette("hls", len(N_CYCLES))
fig, ax = plt.subplots(1, 1, figsize=(8,6))
for idx, n_cycles in enumerate(N_CYCLES):
sel = (df["n_cycles"] == n_cycles)
subdf = df.loc[sel, :]
v = subdf.groupby("time").mean()
s = subdf.groupby("time").std()
t = v.index.values
y = v["Mdist"].values
e = s["Mdist"].values
ax.loglog(t, y, label=str(subdf.iloc[0, 2]), c=PAL[idx])
ax.fill_between(t, y-e, y+e, alpha=0.1, color=PAL[idx])
plt.title("Recovery error over time for varied ratios of cycles to realignments")
plt.xlabel("Time [cycles]")
plt.ylabel("Basis recovery error")
plt.legend()
```
## Regime 1 basis alignment
```
D = 0.01
N_TRIALS = 100
MAX_N = int(1e6)
N_STEP = int(1e3)
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
channel = Channel(kx=0.7, ky=0.2, kz=0.1,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=N_STEP, d=D)
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
for time in range(0, MAX_N, N_STEP):
channel.update()
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": time,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
df = pd.DataFrame(RECORDS)
df.to_csv("regime1.csv")
df = pd.read_csv("regime1.csv")
v = df.groupby("time").mean()["Qdist"]
s = df.groupby("time").std()["Qdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.plot(t, y,)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Measure of orthonormality between $Q_{hat}$ and $Q_{val}$")
plt.xlabel("Time [n_errors]")
df = pd.read_csv("regime1.csv")
v = df.groupby("time").mean()["Mdist"]
s = df.groupby("time").std()["Mdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.loglog(t, y,)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Norm distance between $M_{hat}$ and $M_{val}$")
plt.xlabel("Time [n_errors]")
```
## Regime 2 basis alignment
```
D = 0.01
N_TRIALS = 100
MAX_N = int(1e6)
N_STEP = int(1e3)
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
channel = Channel(kx=0.985, ky=0.01, kz=0.005,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=N_STEP, d=D)
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
for time in range(0, MAX_N, N_STEP):
channel.update()
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": time,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
df = pd.DataFrame(RECORDS)
df.to_csv("regime2.csv")
df = pd.read_csv("regime2.csv")
v = df.groupby("time").mean()["Qdist"]
s = df.groupby("time").std()["Qdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.plot(t, y)
ax.plot(t, y-e, ls="--")
ax.plot(t, y+e, ls="--")
plt.ylabel("Measure of orthonormality between $Q_{hat}$ and $Q_{val}$")
plt.xlabel("Time [n_errors]")
df = pd.read_csv("regime2.csv")
v = df.groupby("time").mean()["Mdist"]
s = df.groupby("time").std()["Mdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.loglog(t, y)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Norm distance between $M_{hat}$ and $M_{val}$")
plt.xlabel("Time [n_errors]")
```
# The only thing that matters: effective error probabilities
```
df1 = pd.read_csv("regime1_1e3_1e6.csv")
df1["dpx"] = np.abs(df1["pxval"] - df1["pxhat"])
df1["dpy"] = np.abs(df1["pyval"] - df1["pyhat"])
df1["dpz"] = np.abs(df1["pzval"] - df1["pzhat"])
v1 = df1.groupby("time").mean()
s1 = df1.groupby("time").std()
df2 = pd.read_csv("regime2_1e3_1e6.csv")
df2["dpx"] = np.abs(df2["pxval"] - df2["pxhat"])
df2["dpy"] = np.abs(df2["pyval"] - df2["pyhat"])
df2["dpz"] = np.abs(df2["pzval"] - df2["pzhat"])
v2 = df2.groupby("time").mean()
s2 = df2.groupby("time").std()
fig, axs = plt.subplots(2, 3, figsize=(12, 8), sharey=True, sharex=True,
tight_layout={"h_pad": 1.0, "rect": [0.0, 0.0, 1.0, 0.95]})
for idx, stat in enumerate(["dpx", "dpy", "dpz"]):
t1 = v1[stat].index.values
y1 = v1[stat].values
e1 = s1[stat].values
x = np.log(v1.loc[1:, stat].index.values)
y = np.log(v1.loc[1:, stat].values)
reg = sp.stats.linregress(x, y)
fitted = np.exp(reg.intercept + reg.slope * x)
axs[0, idx].semilogy(t1, y1, ls="", marker=".", color=sns.color_palette()[idx], alpha=0.05)
axs[0, idx].semilogy(t1, y1+e1, ls="--", color=sns.color_palette()[idx])
axs[0, idx].semilogy(t1[1:], fitted, ls="-", color=sns.color_palette()[idx],
label="{} = {:0.2f} e^({:0.2f}*n)".format(stat, np.exp(reg.intercept), reg.slope))
axs[0, idx].set_title(stat)
axs[0, idx].legend(frameon=True)
t2 = v2[stat].index.values
y2 = v2[stat].values
e2 = s2[stat].values
x = np.log(v2.loc[1:, stat].index.values)
y = np.log(v2.loc[1:, stat].values)
reg = sp.stats.linregress(x, y)
fitted = np.exp(reg.intercept + reg.slope * x)
axs[1, idx].semilogy(t2, y2, ls="", marker=".", color=sns.color_palette()[idx], alpha=0.05)
axs[1, idx].semilogy(t2, y2+e2, ls="--", color=sns.color_palette()[idx])
axs[1, idx].semilogy(t2[1:], fitted, ls="-", color=sns.color_palette()[idx],
label="{} = {:0.2f} e^({:0.2f}*n)".format(stat, np.exp(reg.intercept), reg.slope))
axs[1, idx].set_xlabel("Number of errors")
axs[1, idx].legend(frameon=True)
fig.suptitle("Average difference in effective error probability (steps are 1e3, max is 1e6)")
axs[0, 0].set_ylabel("kx=0.7, ky=0.2, kz=0.1")
axs[1, 0].set_ylabel("kx=0.985, ky=0.01, kz=0.005")
fig.savefig("dp_1e3_1e6.pdf")
df1 = pd.read_csv("regime1_1e5_1e8.csv")
df1["dpx"] = np.abs(df1["pxval"] - df1["pxhat"])
df1["dpy"] = np.abs(df1["pyval"] - df1["pyhat"])
df1["dpz"] = np.abs(df1["pzval"] - df1["pzhat"])
v1 = df1.groupby("time").mean()
s1 = df1.groupby("time").std()
df2 = pd.read_csv("regime2_1e5_1e8.csv")
df2["dpx"] = np.abs(df2["pxval"] - df2["pxhat"])
df2["dpy"] = np.abs(df2["pyval"] - df2["pyhat"])
df2["dpz"] = np.abs(df2["pzval"] - df2["pzhat"])
v2 = df2.groupby("time").mean()
s2 = df2.groupby("time").std()
fig, axs = plt.subplots(2, 3, figsize=(12, 8), sharey=True, sharex=True,
tight_layout={"h_pad": 1.0, "rect": [0.0, 0.0, 1.0, 0.95]})
for idx, stat in enumerate(["dpx", "dpy", "dpz"]):
t1 = v1[stat].index.values
y1 = v1[stat].values
e1 = s1[stat].values
x = np.log(v1.loc[1:, stat].index.values)
y = np.log(v1.loc[1:, stat].values)
reg = sp.stats.linregress(x, y)
fitted = np.exp(reg.intercept + reg.slope * x)
axs[0, idx].semilogy(t1, y1, ls="", marker=".", color=sns.color_palette()[idx], alpha=0.05)
axs[0, idx].semilogy(t1, y1+e1, ls="--", color=sns.color_palette()[idx])
axs[0, idx].semilogy(t1[1:], fitted, ls="-", color=sns.color_palette()[idx],
label="{} = {:0.2f} e^({:0.2f}*n)".format(stat, np.exp(reg.intercept), reg.slope))
axs[0, idx].set_title(stat)
axs[0, idx].legend(frameon=True)
t2 = v2[stat].index.values
y2 = v2[stat].values
e2 = s2[stat].values
x = np.log(v2.loc[1:, stat].index.values)
y = np.log(v2.loc[1:, stat].values)
reg = sp.stats.linregress(x, y)
fitted = np.exp(reg.intercept + reg.slope * x)
axs[1, idx].semilogy(t2, y2, ls="", marker=".", color=sns.color_palette()[idx], alpha=0.05)
axs[1, idx].semilogy(t2, y2+e2, ls="--", color=sns.color_palette()[idx])
axs[1, idx].semilogy(t2[1:], fitted, ls="-", color=sns.color_palette()[idx],
label="{} = {:0.2f} e^({:0.2f}*n)".format(stat, np.exp(reg.intercept), reg.slope))
axs[1, idx].set_xlabel("Number of errors")
axs[1, idx].legend(frameon=True)
fig.suptitle("Average difference in effective error probability (steps are 1e5, max is 1e8)")
axs[0, 0].set_ylabel("kx=0.7, ky=0.2, kz=0.1")
axs[1, 0].set_ylabel("kx=0.985, ky=0.01, kz=0.005")
fig.savefig("dp_1e5_1e8.pdf")
sel = (df1["pxhat"] + df1["pyhat"] + df1["pzhat"]) != 1.0
df1.loc[sel, :]
```
## Constant p_uncorr advantages in regimes 1 and 2
```
print("Regime 1 advantage: {}".format((1.5 / (1.0 - 0.7))**2))
print("Regime 2 advantage: {}".format((1.5 / (1.0 - 0.985))**2))
```
Regime 2 lands kz within 0.002 of its value of 0.005. That's good!
## TRY UPDATING every 10 steps
This will be indicative of whether the drifting case is worth it.
```
import numpy as np  # used below by Sigmoid, Affine, and the gradient check

class MulLayer:
def __init__(self):
self.x = None
self.y = None
def forward(self, x, y):
self.x = x
self.y = y
out = x * y
return out
def backward(self, dout):
dx = dout * self.y
dy = dout * self.x
return dx, dy
apple = 100
apple_num = 2
tax = 1.1
mul_apple_layer = MulLayer()
mul_tax_layer = MulLayer()
apple_price = mul_apple_layer.forward(apple, apple_num)
price = mul_tax_layer.forward(apple_price, tax)
print(price)
dprice = 1
dapple_price, dtax = mul_tax_layer.backward(dprice)
dapple, dapple_num = mul_apple_layer.backward(dapple_price)
print(dapple, dapple_num, dtax)
class AddLayer:
def __init__(self):
pass
def forward(self, x, y):
out = x + y
return out
def backward(self, dout):
dx = dout * 1
dy = dout * 1
return dx, dy
apple = 100
apple_num = 2
orange = 150
orange_num = 3
tax = 1.1
mul_apple_layer = MulLayer()
mul_orange_layer = MulLayer()
add_apple_orange_layer = AddLayer()
mul_tax_layer = MulLayer()
apple_price = mul_apple_layer.forward(apple, apple_num)
orange_price = mul_orange_layer.forward(orange, orange_num)
all_price = add_apple_orange_layer.forward(apple_price, orange_price)
price = mul_tax_layer.forward(all_price, tax)
dprice = 1
dall_price, dtax = mul_tax_layer.backward(dprice)
dapple_price, d_orange_price = add_apple_orange_layer.backward(dall_price)
dorange, dorange_num = mul_orange_layer.backward(d_orange_price)
dapple, dapple_num = mul_apple_layer.backward(dapple_price)
print(price)
print(dapple_num, dapple, dorange, dorange_num, dtax)
class Relu:
def __init__(self):
self.mask = None
def forward(self, x):
self.mask = (x <= 0)
out = x.copy()
out[self.mask] = 0
return out
def backward(self, dout):
dout[self.mask] = 0
dx = dout
return dx
class Sigmoid:
def __init__(self):
self.out = None
def forward(self, x):
out = 1 / (1 + np.exp(-x))
self.out = out
return out
def backward(self, dout):
dx = dout * (1.0 - self.out) * self.out
return dx
class Affine:
def __init__(self, W, b):
self.W = W
self.b = b
self.x = None
self.dW = None
self.db = None
def forward(self, x):
self.x = x
out = np.dot(x, self.W) + self.b
return out
def backward(self, dout):
dx = np.dot(dout, self.W.T)
self.dW = np.dot(self.x.T, dout)
self.db = np.sum(dout, axis=0)
return dx
class SoftmaxWithLoss:
def __init__(self):
self.loss = None
self.y = None
self.t = None
def forward(self, x, t):
self.t = t
self.y = softmax(x)
self.loss = cross_entropy_error(self.y, self.t)
return self.loss
def backward(self, dout=1):
batch_size = self.t.shape[0]
dx = (self.y-self.t) / batch_size
return dx
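# NOTE: `softmax` and `cross_entropy_error` used by SoftmaxWithLoss are not
# defined in this excerpt; they come from the book's common/functions.py.
# A sketch of numerically stable versions (an assumption about that module):
def softmax(x):
    x = x - np.max(x, axis=-1, keepdims=True)  # shift for numerical stability
    return np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)

def cross_entropy_error(y, t):
    # y: predicted probabilities, t: one-hot targets, both shaped (batch, classes)
    batch_size = y.shape[0]
    return -np.sum(t * np.log(y + 1e-7)) / batch_size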
from common.layers import *
from common.gradient import numerical_gradient
from collections import OrderedDict
class TwoLayerNet:
def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01):
self.params = {}
self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)
self.params['b1'] = np.zeros(hidden_size)
self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size)
self.params['b2'] = np.zeros(output_size)
self.layers = OrderedDict()
self.layers['Affine1'] = Affine(self.params['W1'], self.params['b1'])
self.layers['Relu1'] = Relu()
self.layers['Affine2'] = Affine(self.params['W2'], self.params['b2'])
self.lastLayer = SoftmaxWithLoss()
def predict(self, x):
for layer in self.layers.values():
x = layer.forward(x)
return x
    def loss(self, x, t):
y = self.predict(x)
return self.lastLayer.forward(y, t)
def accuracy(self, x, t):
y = self.predict(x)
y = np.argmax(y, axis=1)
if t.ndim != 1 : t = np.argmax(t, axis=1)
accuracy = np.sum(y==t) / float(x.shape[0])
return accuracy
def numerical_gradient(self, x, t):
loss_W = lambda W : self.loss(x, t)
grads = {}
grads['W1'] = numerical_gradient(loss_W, self.params['W1'])
grads['b1'] = numerical_gradient(loss_W, self.params['b1'])
grads['W2'] = numerical_gradient(loss_W, self.params['W2'])
grads['b2'] = numerical_gradient(loss_W, self.params['b2'])
return grads
def gradient(self, x, t):
self.loss(x, t)
dout = 1
dout = self.lastLayer.backward(dout)
layers = list(self.layers.values())
layers.reverse()
for layer in layers:
dout = layer.backward(dout)
grads = {}
grads['W1'] = self.layers['Affine1'].dW
grads['b1'] = self.layers['Affine1'].db
grads['W2'] = self.layers['Affine2'].dW
grads['b2'] = self.layers['Affine2'].db
return grads
import sys, os
sys.path.append(os.curdir)
from dataset.mnist import load_mnist
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
x_batch = x_train[:3]
t_batch = t_train[:3]
grad_numerical = network.numerical_gradient(x_batch, t_batch)
grad_backprop = network.gradient(x_batch, t_batch)
for key in grad_numerical.keys():
diff = np.average(np.abs(grad_backprop[key] - grad_numerical[key]))
print(key + ":" + str(diff))
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
iters_num = 10000
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.1
train_loss_list = []
train_acc_list = []
test_acc_list = []
iter_per_epoch = max(train_size / batch_size, 1)
for i in range(iters_num):
batch_mask = np.random.choice(train_size, batch_size)
x_batch = x_train[batch_mask]
t_batch = t_train[batch_mask]
grad = network.gradient(x_batch, t_batch)
for key in ('W1', 'b1', 'W2', 'b2'):
network.params[key] -= learning_rate * grad[key]
loss = network.loss(x_batch, t_batch)
train_loss_list.append(loss)
if i % iter_per_epoch == 0:
train_acc = network.accuracy(x_train, t_train)
test_acc = network.accuracy(x_test, t_test)
train_acc_list.append(train_acc)
test_acc_list.append(test_acc)
print(train_acc, test_acc)
```
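For reference, the `MulLayer` and `AddLayer` classes invoked in the buy-apple/orange example at the top of this listing are defined earlier in the notebook, outside this excerpt. A minimal sketch consistent with how they are called here (forward takes two inputs, backward returns the two upstream gradients):

```python
class MulLayer:
    """Multiplication node: forward computes x*y; backward swaps the inputs."""
    def __init__(self):
        self.x = None
        self.y = None

    def forward(self, x, y):
        # cache the inputs for use in the backward pass
        self.x = x
        self.y = y
        return x * y

    def backward(self, dout):
        dx = dout * self.y  # gradient w.r.t. x
        dy = dout * self.x  # gradient w.r.t. y
        return dx, dy


class AddLayer:
    """Addition node: backward passes the upstream gradient through unchanged."""
    def forward(self, x, y):
        return x + y

    def backward(self, dout):
        return dout, dout
```

With `apple = 100` and `apple_num = 2`, `forward` gives 200 and `backward(1)` returns the gradients `(2, 100)`, matching the printed results above.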
<a href="https://colab.research.google.com/github/LxYuan0420/eat_tensorflow2_in_30_days/blob/master/notebooks/3_3_High_level_API_Demonstration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**3-3 High-level API: Demonstration**
The examples below use high-level APIs in TensorFlow to implement a linear regression model and a DNN binary classification model.
Typically, "high-level APIs" refers to the class interfaces of `tf.keras.models`.
There are three ways of modeling with Keras: sequential modeling with the `Sequential` class, arbitrary-topology modeling with the functional API, and customized modeling by subclassing the base class `Model`.
Here we demonstrate modeling with the `Sequential` class and by subclassing `Model`, respectively.
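The functional API, mentioned above but not demonstrated in this section, wires layers together by calling them on tensors. A minimal sketch of the same single-`Dense` linear model built that way:

```python
import tensorflow as tf

# Functional API: declare an Input, call layers on tensors, then wrap the
# input/output tensors in a Model. This style supports arbitrary topologies
# (multiple inputs/outputs, branches, shared layers).
inputs = tf.keras.Input(shape=(2,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```

The resulting model trains with `model.fit` exactly like the `Sequential` version below.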
```
import tensorflow as tf
# Time stamp
@tf.function
def printbar():
today_ts = tf.timestamp()%(24*60*60)
hour = tf.cast(today_ts//3600+8,tf.int32)%tf.constant(24)
    minute = tf.cast((today_ts%3600)//60,tf.int32)
    second = tf.cast(tf.floor(today_ts%60),tf.int32)

    def timeformat(m):
        if tf.strings.length(tf.strings.format("{}",m))==1:
            return(tf.strings.format("0{}",m))
        else:
            return(tf.strings.format("{}",m))

    timestring = tf.strings.join([timeformat(hour),timeformat(minute),
                timeformat(second)],separator = ":")
tf.print("=========="*8+timestring)
```
**1. Linear Regression Model**
In this example, we use the `Sequential` class to construct the model and the predefined `model.fit` method to train it (suitable for beginners).
**(a) Data Preparation**
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow.keras import models,layers,losses,metrics,optimizers
# Number of samples
n = 400
# Generating the datasets
X = tf.random.uniform([n,2],minval=-10,maxval=10)
w0 = tf.constant([[2.0],[-3.0]])
b0 = tf.constant([[3.0]])
Y = X@w0 + b0 + tf.random.normal([n,1],mean = 0.0,stddev= 2.0) # @ is matrix multiplication; adding Gaussian noise
# Data Visualization
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.figure(figsize = (12,5))
ax1 = plt.subplot(121)
ax1.scatter(X[:,0],Y[:,0], c = "b")
plt.xlabel("x1")
plt.ylabel("y",rotation = 0)
ax2 = plt.subplot(122)
ax2.scatter(X[:,1],Y[:,0], c = "g")
plt.xlabel("x2")
plt.ylabel("y",rotation = 0)
plt.show()
```
**(b) Model Definition**
```
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=(2,)))
model.summary()
```
**(c) Model Training**
```
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X, Y, batch_size=10, epochs=200)
tf.print("w=", model.layers[0].kernel)
tf.print("b=", model.layers[0].bias)
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
w,b = model.variables
plt.figure(figsize = (12,5))
ax1 = plt.subplot(121)
ax1.scatter(X[:,0],Y[:,0], c = "b",label = "samples")
ax1.plot(X[:,0],w[0]*X[:,0]+b[0],"-r",linewidth = 5.0,label = "model")
ax1.legend()
plt.xlabel("x1")
plt.ylabel("y",rotation = 0)
ax2 = plt.subplot(122)
ax2.scatter(X[:,1],Y[:,0], c = "g",label = "samples")
ax2.plot(X[:,1],w[1]*X[:,1]+b[0],"-r",linewidth = 5.0,label = "model")
ax2.legend()
plt.xlabel("x2")
plt.ylabel("y",rotation = 0)
plt.show()
```
**2. DNN Binary Classification Model**
This example demonstrates a customized model built as a child class inheriting from the base class `Model`, trained with a customized loop (suitable for experts).
(a) Data Preparation
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers,losses,metrics,optimizers
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# Number of the positive/negative samples
n_positive,n_negative = 2000,2000
# Generating the positive samples with a distribution on a smaller ring
r_p = 5.0 + tf.random.truncated_normal([n_positive,1],0.0,1.0)
theta_p = tf.random.uniform([n_positive,1],0.0,2*np.pi)
Xp = tf.concat([r_p*tf.cos(theta_p),r_p*tf.sin(theta_p)],axis = 1)
Yp = tf.ones_like(r_p)
# Generating the negative samples with a distribution on a larger ring
r_n = 8.0 + tf.random.truncated_normal([n_negative,1],0.0,1.0)
theta_n = tf.random.uniform([n_negative,1],0.0,2*np.pi)
Xn = tf.concat([r_n*tf.cos(theta_n),r_n*tf.sin(theta_n)],axis = 1)
Yn = tf.zeros_like(r_n)
# Assembling all samples
X = tf.concat([Xp,Xn],axis = 0)
Y = tf.concat([Yp,Yn],axis = 0)
# Shuffling the samples
data = tf.concat([X,Y],axis = 1)
data = tf.random.shuffle(data)
X = data[:,:2]
Y = data[:,2:]
# Visualizing the data
plt.figure(figsize = (6,6))
plt.scatter(Xp[:,0].numpy(),Xp[:,1].numpy(),c = "r")
plt.scatter(Xn[:,0].numpy(),Xn[:,1].numpy(),c = "g")
plt.legend(["positive","negative"]);
n = n_positive + n_negative  # total number of samples; the first 3/4 are used for training
ds_train = tf.data.Dataset.from_tensor_slices((X[0:n*3//4,:], Y[0:n*3//4,:])).shuffle(buffer_size=1000).batch(20).prefetch(tf.data.experimental.AUTOTUNE).cache()
ds_valid = tf.data.Dataset.from_tensor_slices((X[n*3//4:,:], Y[n*3//4:,:])).batch(20).prefetch(tf.data.experimental.AUTOTUNE).cache()
```
**(b) Model Definition**
```
class DNNModel(tf.keras.models.Model):
def __init__(self):
super(DNNModel, self).__init__()
def build(self, input_shape):
self.dense1 = tf.keras.layers.Dense(4, activation='relu', name='dense1')
self.dense2 = tf.keras.layers.Dense(8, activation='relu', name='dense2')
self.dense3 = tf.keras.layers.Dense(1, activation='sigmoid', name='dense3')
super(DNNModel, self).build(input_shape)
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 2], dtype=tf.float32)])
def call(self, x):
x = self.dense1(x)
x = self.dense2(x)
y = self.dense3(x)
return y
model = DNNModel()
model.build(input_shape=(None, 2))
model.summary()
```
**(c) Model Training**
```
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss_func = tf.keras.losses.BinaryCrossentropy()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_metric = tf.keras.metrics.BinaryAccuracy(name='train_accuracy')
valid_loss = tf.keras.metrics.Mean(name='valid_loss')
valid_metric = tf.keras.metrics.BinaryAccuracy(name='valid_accuracy')
@tf.function
def train_step(model, features, labels):
with tf.GradientTape() as tape:
predictions = model(features)
loss = loss_func(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_loss.update_state(loss)
train_metric.update_state(labels, predictions)
@tf.function
def valid_step(model, features, labels):
predictions = model(features)
    batch_loss = loss_func(labels, predictions)
valid_loss.update_state(batch_loss)
valid_metric.update_state(labels, predictions)
def train_model(model, ds_train, ds_valid, epochs):
for epoch in tf.range(1, epochs+1):
for features, labels in ds_train:
train_step(model, features, labels)
for features, labels in ds_valid:
valid_step(model, features, labels)
logs = "Epoch={}, Loss:{}, Accuracy:{}, Valid loss:{}, valid Accuracy:{}"
if epoch%100==0:
printbar()
tf.print(tf.strings.format(logs,
(epoch,train_loss.result(),train_metric.result(),valid_loss.result(),valid_metric.result())))
train_loss.reset_states()
train_metric.reset_states()
valid_loss.reset_states()
valid_metric.reset_states()
train_model(model, ds_train, ds_valid, 1000)
```
# Session #5: Automate ML workflows and focus on innovation (300)
In this session, you will learn how to use [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-sdk.html) to train a [Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) Transformer model and deploy it. The SageMaker integration with Hugging Face makes it easy to train and deploy advanced NLP models. A Lambda step in SageMaker Pipelines enables you to easily do lightweight model deployments and other serverless operations.
You will learn how to:
1. Set up the environment and permissions
2. Define a pipeline with preprocessing, training & deployment steps
3. Run the pipeline
4. Test inference
Let's get started! 🚀
---
*If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).*
**Prerequisites**:
- Make sure your notebook environment has IAM managed policy `AmazonSageMakerPipelinesIntegrations` as well as `AmazonSageMakerFullAccess`
**Blog Post**
* [Use a SageMaker Pipeline Lambda step for lightweight model deployments](https://aws.amazon.com/de/blogs/machine-learning/use-a-sagemaker-pipeline-lambda-step-for-lightweight-model-deployments/)
# Development Environment and Permissions
## Installation & Imports
We'll start by updating the SageMaker SDK, and importing some necessary packages.
```
!pip install "sagemaker>=2.48.0" --upgrade
import boto3
import os
import numpy as np
import pandas as pd
import sagemaker
import sys
import time
from sagemaker.workflow.parameters import ParameterInteger, ParameterFloat, ParameterString
from sagemaker.lambda_helper import Lambda
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import CacheConfig, ProcessingStep
from sagemaker.huggingface import HuggingFace, HuggingFaceModel
import sagemaker.huggingface
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.step_collections import CreateModelStep, RegisterModel
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo,ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep, JsonGet
from sagemaker.workflow.pipeline import Pipeline, PipelineExperimentConfig
from sagemaker.workflow.execution_variables import ExecutionVariables
```
## Permissions
_If you are going to use SageMaker in a local environment, you need access to an IAM role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
```
import sagemaker
sess = sagemaker.Session()
region = sess.boto_region_name
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sagemaker_session.default_bucket()}")
print(f"sagemaker session region: {sagemaker_session.boto_region_name}")
```
# Pipeline Overview

# Defining the Pipeline
## 0. Pipeline parameters
Before defining the pipeline, it is important to parameterize it. A SageMaker Pipeline can be parameterized directly, including instance types and counts.
Read more about Parameters in the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
```
# S3 prefix where all assets will be stored
s3_prefix = "hugging-face-pipeline-demo"
# s3 bucket used for storing assets and artifacts
bucket = sagemaker_session.default_bucket()
# aws region used
region = sagemaker_session.boto_region_name
# base name prefix for sagemaker jobs (training, processing, inference)
base_job_prefix = s3_prefix
# Cache configuration for workflow
cache_config = CacheConfig(enable_caching=False, expire_after="30d")
# package versions
transformers_version = "4.11.0"
pytorch_version = "1.9.0"
py_version = "py38"
model_id_="distilbert-base-uncased"
dataset_name_="imdb"
model_id = ParameterString(name="ModelId", default_value="distilbert-base-uncased")
dataset_name = ParameterString(name="DatasetName", default_value="imdb")
```
## 1. Processing Step
A `SKLearnProcessor` step is used to invoke a SageMaker Processing job with a custom Python script, `preprocessing.py`.
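The `preprocessing.py` script itself is not included in this excerpt. As a hypothetical stand-in, the contract such a script must honor is simple: read its arguments/inputs, then write each data split under the directory declared as the `source` of the corresponding `ProcessingOutput` (here `/opt/ml/processing/{train,test,validation}`), from which SageMaker uploads them to S3. A minimal sketch (the function name and file layout are illustrative assumptions, not the actual script):

```python
import json
import os
import random

def split_and_write(samples, output_base="/opt/ml/processing",
                    train_frac=0.8, test_frac=0.1, seed=0):
    """Hypothetical helper: shuffle samples and write train/test/validation
    JSON files under output_base/<split>/, mirroring the three
    ProcessingOutput channels declared in the ProcessingStep below."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    splits = {
        "train": shuffled[:n_train],
        "test": shuffled[n_train:n_train + n_test],
        "validation": shuffled[n_train + n_test:],
    }
    paths = {}
    for name, subset in splits.items():
        split_dir = os.path.join(output_base, name)
        os.makedirs(split_dir, exist_ok=True)
        paths[name] = os.path.join(split_dir, "data.json")
        with open(paths[name], "w") as f:
            json.dump(subset, f)
    return paths
```

The real script in the blog post additionally tokenizes the dataset with the Hugging Face libraries; the sketch only shows the input/output contract of the processing job.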
### Processing Parameter
```
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.c5.2xlarge")
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_script = ParameterString(name="ProcessingScript", default_value="./scripts/preprocessing.py")
```
### Processor
```
processing_output_destination = f"s3://{bucket}/{s3_prefix}/data"
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/preprocessing",
sagemaker_session=sagemaker_session,
role=role,
)
step_process = ProcessingStep(
name="ProcessDataForTraining",
cache_config=cache_config,
processor=sklearn_processor,
job_arguments=["--transformers_version",transformers_version,
"--pytorch_version",pytorch_version,
"--model_id",model_id_,
"--dataset_name",dataset_name_],
outputs=[
ProcessingOutput(
output_name="train",
destination=f"{processing_output_destination}/train",
source="/opt/ml/processing/train",
),
ProcessingOutput(
output_name="test",
destination=f"{processing_output_destination}/test",
source="/opt/ml/processing/test",
),
ProcessingOutput(
output_name="validation",
            destination=f"{processing_output_destination}/validation",
source="/opt/ml/processing/validation",
),
],
code=processing_script,
)
```
## 2. Model Training Step
We use SageMaker's [Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) Estimator class to create a model training step for the Hugging Face [DistilBERT](https://huggingface.co/distilbert-base-uncased) model. Transformer-based models such as the original BERT can be very large and slow to train. DistilBERT, however, is a small, fast, cheap and light Transformer model trained by distilling BERT base. It reduces the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster.
The Hugging Face estimator also takes hyperparameters as a dictionary. The training instance type and size are pipeline parameters that can be easily varied in future pipeline runs without changing any code.
### Training Parameter
```
# training step parameters
training_entry_point = ParameterString(name="TrainingEntryPoint", default_value="train.py")
training_source_dir = ParameterString(name="TrainingSourceDir", default_value="./scripts")
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.p3.2xlarge")
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
# hyperparameters, which are passed into the training job
epochs=ParameterString(name="Epochs", default_value="1")
eval_batch_size=ParameterString(name="EvalBatchSize", default_value="32")
train_batch_size=ParameterString(name="TrainBatchSize", default_value="16")
learning_rate=ParameterString(name="LearningRate", default_value="3e-5")
fp16=ParameterString(name="Fp16", default_value="True")
```
### Hugging Face Estimator
```
huggingface_estimator = HuggingFace(
entry_point=training_entry_point,
source_dir=training_source_dir,
base_job_name=base_job_prefix + "/training",
instance_type=training_instance_type,
instance_count=training_instance_count,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
hyperparameters={
'epochs':epochs,
'eval_batch_size': eval_batch_size,
'train_batch_size': train_batch_size,
'learning_rate': learning_rate,
'model_id': model_id,
'fp16': fp16
},
sagemaker_session=sagemaker_session,
)
step_train = TrainingStep(
name="TrainHuggingFaceModel",
estimator=huggingface_estimator,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri
),
"test": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri
),
},
cache_config=cache_config,
)
```
## 3. Model evaluation Step
A ProcessingStep is used to evaluate the performance of the trained model. Based on the results of the evaluation, either the model is created, registered, and deployed, or the pipeline stops.
In the training job, the model was evaluated against the test dataset, and the result of the evaluation was stored in the `model.tar.gz` file saved by the training job. The results of that evaluation are copied into a `PropertyFile` in this ProcessingStep so that it can be used in the ConditionStep.
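The `evaluate.py` script is not shown in this excerpt. As a hedged sketch, its essential job is to unpack the stored evaluation results from `model.tar.gz` and re-write them as the `evaluation.json` report that the `PropertyFile`/`JsonGet` combination reads downstream (the helper names and the in-archive file name are illustrative assumptions):

```python
import json
import os
import tarfile

def extract_eval_metrics(model_tar_path, extract_dir):
    """Unpack model.tar.gz (saved by the training job) and read back the
    stored evaluation results. Assumes the archive contains an
    'evaluation.json' file -- an illustrative convention."""
    with tarfile.open(model_tar_path) as tar:
        tar.extractall(extract_dir)
    with open(os.path.join(extract_dir, "evaluation.json")) as f:
        return json.load(f)

def write_evaluation_report(metrics, output_dir):
    """Write the report consumed via PropertyFile (path='evaluation.json')
    and JsonGet (json_path='eval_accuracy') in the ConditionStep."""
    os.makedirs(output_dir, exist_ok=True)
    report_path = os.path.join(output_dir, "evaluation.json")
    with open(report_path, "w") as f:
        json.dump(metrics, f)
    return report_path
```

In the actual processing job, `output_dir` would be `/opt/ml/processing/evaluation`, matching the `ProcessingOutput` declared below.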
### Evaluation Parameter
```
evaluation_script = ParameterString(name="EvaluationScript", default_value="./scripts/evaluate.py")
```
### Evaluator
```
script_eval = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/evaluation",
sagemaker_session=sagemaker_session,
role=role,
)
evaluation_report = PropertyFile(
name="HuggingFaceEvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
step_eval = ProcessingStep(
name="HuggingfaceEvalLoss",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
)
],
outputs=[
ProcessingOutput(
output_name="evaluation",
source="/opt/ml/processing/evaluation",
destination=f"s3://{bucket}/{s3_prefix}/evaluation_report",
),
],
code=evaluation_script,
property_files=[evaluation_report],
cache_config=cache_config,
)
```
## 4. Register the model
The trained model is registered in the Model Registry under a Model Package Group. Each time a new model is registered, it is given a new version number by default. The model is registered in the "Approved" state so that it can be deployed. Registration only happens if the output of [6. Condition for deployment](#6.-Condition-for-deployment) is true, i.e., the checked metrics are within the defined threshold.
```
model = HuggingFaceModel(
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
sagemaker_session=sagemaker_session,
)
model_package_group_name = "HuggingFaceModelPackageGroup"
step_register = RegisterModel(
name="HuggingFaceRegisterModel",
model=model,
content_types=["application/json"],
response_types=["application/json"],
inference_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
transform_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status="Approved",
)
```
## 5. Model Deployment
We create a custom step `ModelDeployment` derived from the provided `LambdaStep`. This step creates a Lambda function and invokes it to deploy our model as a SageMaker endpoint.
```
# custom Helper Step for ModelDeployment
from utils.deploy_step import ModelDeployment
# we will use the iam role from the notebook session for the created endpoint
# this role will be attached to our endpoint and need permissions, e.g. to download assets from s3
sagemaker_endpoint_role=sagemaker.get_execution_role()
step_deployment = ModelDeployment(
model_name=f"{model_id_}-{dataset_name_}",
registered_model=step_register.steps[0],
endpoint_instance_type="ml.g4dn.xlarge",
sagemaker_endpoint_role=sagemaker_endpoint_role,
autoscaling_policy=None,
)
```
## 6. Condition for deployment
For the condition to be `True` and the steps after evaluation to run, the evaluated accuracy of the Hugging Face model must be greater than or equal to our `ThresholdAccuracy` parameter.
### Condition Parameter
```
threshold_accuracy = ParameterFloat(name="ThresholdAccuracy", default_value=0.8)
```
### Condition
```
cond_gte = ConditionGreaterThanOrEqualTo(
left=JsonGet(
step=step_eval,
property_file=evaluation_report,
json_path="eval_accuracy",
),
right=threshold_accuracy,
)
step_cond = ConditionStep(
name="CheckHuggingfaceEvalAccuracy",
conditions=[cond_gte],
if_steps=[step_register, step_deployment],
else_steps=[],
)
```
# Pipeline definition and execution
SageMaker Pipelines constructs the pipeline graph from the implicit definition created by the way pipeline steps inputs and outputs are specified. There's no need to specify that a step is a "parallel" or "serial" step. Steps such as model registration after the condition step are not listed in the pipeline definition because they do not run unless the condition is true. If so, they are run in order based on their specified inputs and outputs.
Each Parameter we defined holds a default value, which can be overwritten before starting the pipeline. [Parameter Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
### Overwriting Parameters
```
# define parameter which should be overwritten
pipeline_parameters=dict(
ModelId="distilbert-base-uncased",
ThresholdAccuracy=0.7,
Epochs="3",
TrainBatchSize="32",
EvalBatchSize="64",
)
```
### Create Pipeline
```
pipeline = Pipeline(
    name="HuggingFaceDemoPipeline",
parameters=[
model_id,
dataset_name,
processing_instance_type,
processing_instance_count,
processing_script,
training_entry_point,
training_source_dir,
training_instance_type,
training_instance_count,
evaluation_script,
threshold_accuracy,
epochs,
eval_batch_size,
train_batch_size,
learning_rate,
fp16
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
```
We can examine the pipeline definition in JSON format. You can also inspect the pipeline graph in SageMaker Studio by going to the page for your pipeline.
```
import json
json.loads(pipeline.definition())
```

`upsert` creates or updates the pipeline.
```
pipeline.upsert(role_arn=role)
```
### Run the pipeline
```
execution = pipeline.start(parameters=pipeline_parameters)
execution.wait()
```
## Getting predictions from the endpoint
After the previous cell completes, you can check whether the endpoint has finished deploying.
We can use the `endpoint_name` to create a `HuggingFacePredictor` object that will be used to get predictions.
```
from sagemaker.huggingface import HuggingFacePredictor
endpoint_name = f"{model_id_}-{dataset_name_}"  # use the plain strings, not the pipeline Parameter objects
# check if endpoint is up and running
print(f"https://console.aws.amazon.com/sagemaker/home?region={region}#/endpoints/{endpoint_name}")
hf_predictor = HuggingFacePredictor(endpoint_name,sagemaker_session=sagemaker_session)
```
### Test data
Here are a couple of sample reviews we would like to classify as positive (`pos`) or negative (`neg`). Even though the reviews are mixed, an advanced Transformer-based model such as this Hugging Face model should handle them quite well.
```
sentiment_input1 = {"inputs":"Although the movie had some plot weaknesses, it was engaging. Special effects were mind boggling. Can't wait to see what this creative team does next."}
hf_predictor.predict(sentiment_input1)
sentiment_input2 = {"inputs":"There was some good acting, but the story was ridiculous. The other sequels in this franchise were better. It's time to take a break from this IP, but if they switch it up for the next one, I'll check it out."}
hf_predictor.predict(sentiment_input2)
```
## Cleanup Resources
The following cell will delete the resources created by the Lambda function and the Lambda itself.
Deleting other resources, such as the S3 bucket and the IAM role for the Lambda function, is the responsibility of the notebook user.
```
sm_client = boto3.client("sagemaker")
# Delete the Lambda function
step_deployment.func.delete()
# Delete the endpoint
hf_predictor.delete_endpoint()
```
# Assignment 2
For this assignment you'll be looking at 2017 data on immunizations from the CDC. Your datafile for this assignment is in [assets/NISPUF17.csv](assets/NISPUF17.csv). A data users guide for this, which you'll need to map the variables in the data to the questions being asked, is available at [assets/NIS-PUF17-DUG.pdf](assets/NIS-PUF17-DUG.pdf). **Note: you may have to go to your Jupyter tree (click on the Coursera image) and navigate to the assignment 2 assets folder to see this PDF file.**
## Question 1
Write a function called `proportion_of_education` which returns the proportion of children in the dataset who had a mother with the education levels equal to less than high school (<12), high school (12), more than high school but not a college graduate (>12) and college degree.
*This function should return a dictionary in the form of (use the correct numbers, do not round numbers):*
```
{"less than high school":0.2,
"high school":0.4,
"more than high school but not college":0.2,
"college":0.2}
```
```
import pandas as pd
def proportion_of_education():
df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
del df['Unnamed: 0']
df.sort_index(inplace=True)
df = df["EDUC1"].to_frame()
count = len(df.index)
lhs = df.loc[df["EDUC1"] == 1].count()["EDUC1"] / count
hs = df.loc[df["EDUC1"] == 2].count()["EDUC1"] / count
mhs = df.loc[df["EDUC1"] == 3].count()["EDUC1"] / count
college = df.loc[df["EDUC1"] == 4].count()["EDUC1"] / count
return {"less than high school": lhs,
"high school": hs,
"more than high school but not college": mhs,
"college": college
}
assert type(proportion_of_education())==type({}), "You must return a dictionary."
assert len(proportion_of_education()) == 4, "You have not returned a dictionary with four items in it."
assert "less than high school" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "high school" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "more than high school but not college" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "college" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
```
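An equivalent and more idiomatic way to compute the same proportions (assuming the same `EDUC1` coding of 1-4) is `Series.value_counts(normalize=True)`, which replaces the four separate boolean filters with one call; a sketch:

```python
import pandas as pd

# EDUC1 codes mapped to the labels the assignment expects
EDUC1_LABELS = {1: "less than high school",
                2: "high school",
                3: "more than high school but not college",
                4: "college"}

def proportions_from_educ(educ):
    """Return the share of each maternal-education level in a Series of
    EDUC1 codes, using value_counts(normalize=True)."""
    props = educ.value_counts(normalize=True)
    return {label: float(props.get(code, 0.0))
            for code, label in EDUC1_LABELS.items()}
```

On the real data, `proportions_from_educ(df["EDUC1"])` would produce the same dictionary as the filter-based solution above.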
## Question 2
Let's explore the relationship between being fed breastmilk as a child and getting a seasonal influenza vaccine from a healthcare provider. Return a tuple of the average number of influenza vaccines for those children we know received breastmilk as a child and those who we know did not.
*This function should return a tuple in the form (use the correct numbers):*
```
(2.5, 0.1)
```
```
def average_influenza_doses():
df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
del df['Unnamed: 0']
df.sort_index(inplace=True)
df = df[["CBF_01", "P_NUMFLU"]]
df = df.dropna()
df = df.groupby(["CBF_01"]).mean()
return (df.loc[1]["P_NUMFLU"], df.loc[2]["P_NUMFLU"])
assert len(average_influenza_doses())==2, "Return two values in a tuple, the first for yes and the second for no."
```
## Question 3
It would be interesting to see if there is any evidence of a link between vaccine effectiveness and the sex of the child. Calculate the ratio of the number of children who contracted chickenpox but were vaccinated against it (at least one varicella dose) versus those who were vaccinated but did not contract chickenpox. Return results by sex.
*This function should return a dictionary in the form of (use the correct numbers):*
```
{"male":0.2,
"female":0.4}
```
Note: To aid in verification, the `chickenpox_by_sex()['female']` value the autograder is looking for starts with the digits `0.0077`.
```
def chickenpox_by_sex():
df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
del df['Unnamed: 0']
df.sort_index(inplace=True)
df = df[["SEX", "HAD_CPOX", "P_NUMVRC"]]
df["SEX"] = df["SEX"].replace({1: "Male", 2: "Female"})
df = df.fillna(0)
df = df[(df["P_NUMVRC"]>0) & (df["HAD_CPOX"].isin((1,2)))]
    # number of vaccinated males who contracted chickenpox
    nmvc = df[(df["SEX"] == "Male") & (df["HAD_CPOX"] == 1)].count()["SEX"]
    # number of vaccinated males who did not contract chickenpox
    nmvnc = df[(df["SEX"] == "Male") & (df["HAD_CPOX"] == 2)].count()["SEX"]
    # number of vaccinated females who contracted chickenpox
    nfvc = df[(df["SEX"] == "Female") & (df["HAD_CPOX"] == 1)].count()["SEX"]
    # number of vaccinated females who did not contract chickenpox
    nfvnc = df[(df["SEX"] == "Female") & (df["HAD_CPOX"] == 2)].count()["SEX"]
return {"male":nmvc/nmvnc,"female":nfvc/nfvnc}
chickenpox_by_sex()
assert len(chickenpox_by_sex())==2, "Return a dictionary with two items, the first for males and the second for females."
```
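The four boolean filters above can also be collapsed into a single `pd.crosstab`; a sketch, demonstrated here on toy data rather than the CDC file:

```python
import pandas as pd

def cpox_ratio_by_sex(df):
    """Ratio of contracted (HAD_CPOX == 1) to not contracted
    (HAD_CPOX == 2) among vaccinated children, per sex, via a crosstab
    of counts instead of four separate filters."""
    counts = pd.crosstab(df["SEX"], df["HAD_CPOX"])
    ratio = counts[1] / counts[2]  # contracted / not contracted, per sex
    return ratio.to_dict()
```

Applied to the filtered DataFrame from `chickenpox_by_sex`, this returns the same two ratios keyed by sex.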
## Question 4
A correlation is a statistical relationship between two variables. If we wanted to know if vaccines work, we might look at the correlation between the use of the vaccine and whether it results in prevention of the infection or disease [1]. In this question, you are to see if there is a correlation between having had the chicken pox and the number of chickenpox vaccine doses given (varicella).
Some notes on interpreting the answer. The `had_chickenpox_column` is either `1` (for yes) or `2` (for no), and the `num_chickenpox_vaccine_column` is the number of doses a child has been given of the varicella vaccine. A positive correlation (e.g., `corr > 0`) means that an increase in `had_chickenpox_column` (which means more no’s) would also increase the values of `num_chickenpox_vaccine_column` (which means more doses of vaccine). If there is a negative correlation (e.g., `corr < 0`), it indicates that having had chickenpox is related to an increase in the number of vaccine doses.
Also, `pval` is the probability of observing a correlation between `had_chickenpox_column` and `num_chickenpox_vaccine_column` at least as extreme as the one measured, purely by chance. A small `pval` means that the observed correlation is highly unlikely to have occurred by chance. In this case, `pval` should be very small (it will end in `e-18`, indicating a very small number).
[1] This isn’t really the full picture, since we are not looking at when the dose was given. It’s possible that children had chickenpox and then their parents went to get them the vaccine. Does this dataset have the data we would need to investigate the timing of the dose?
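As a quick, self-contained illustration of the sign convention (with made-up numbers, not the NIS-PUF17 data): if children coded `2` (no chickenpox) tend to have received more doses, the Pearson correlation comes out positive.

```python
import numpy as np
import scipy.stats as stats

# Hypothetical toy data: had_cpox uses the survey coding (1 = yes, 2 = no)
had_cpox = np.array([1, 1, 1, 2, 2, 2])
num_doses = np.array([0, 0, 1, 1, 2, 2])  # more doses among the "no" group

corr, pval = stats.pearsonr(had_cpox, num_doses)
print(corr > 0)  # more "no" answers go with more doses -> positive correlation
```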
```
def corr_chickenpox():
import scipy.stats as stats
import numpy as np
import pandas as pd
# this is just an example dataframe
df=pd.DataFrame({"had_chickenpox_column":np.random.randint(1,3,size=(100)),
"num_chickenpox_vaccine_column":np.random.randint(0,6,size=(100))})
# here is some stub code to actually run the correlation
corr, pval=stats.pearsonr(df["had_chickenpox_column"],df["num_chickenpox_vaccine_column"])
# just return the correlation
# return corr
df = pd.read_csv("assets/NISPUF17.csv")
df.sort_index(inplace=True)
df = df[["HAD_CPOX", "P_NUMVRC"]]
df = df.dropna()
df = df[df["HAD_CPOX"]<=3]
corr, pval = stats.pearsonr(df["HAD_CPOX"],df["P_NUMVRC"])
return corr
assert -1<=corr_chickenpox()<=1, "You must return a float number between -1.0 and 1.0."
corr_chickenpox()
```
***
# **FloodProofs Labs - HMC Training - TimeSeries Analyzer**
<img style="float: left; padding-right: 80px; padding-left: 5px;" src="img/logo_hmc.png" width="200px" align="left" >
In the laboratory of **HMC time-series** we will perform the following points:
* Configure the libraries and the dependencies of the laboratory;
* Read the configuration file of the laboratory;
* Read the static datasets of terrain, river networks and outlet sections;
* Select the 'time run' and the 'outlet section' to perform the time-series analysis;
* Read the dynamic datasets of the time-series (collections and hydrographs);
* Plot the position of the analyzed outlet section;
* Plot the time-series of discharge and the time-series of the hmc average forcings.
## **Import libraries and dependencies**
```
%matplotlib inline
%matplotlib widget
# Libraries
from library.jupyter_generic.lib_jupyter_data_io_json import read_file_settings, read_file_ts_hydrograph
from library.jupyter_generic.lib_jupyter_data_io_generic import define_file_path, define_file_template, \
fill_file_template, create_dframe_ts, get_path_root, get_path_folders, get_folders_time
from library.jupyter_generic.lib_jupyter_data_geo_ascii import read_data_grid
from library.jupyter_generic.lib_jupyter_data_geo_shapefile import read_data_section, find_data_section
from library.jupyter_generic.lib_jupyter_data_io_netcdf import read_file_ts_collections
from library.jupyter_generic.lib_jupyter_plot_ts import plot_ts_discharge, plot_ts_forcing
from library.jupyter_generic.lib_jupyter_plot_map import plot_map_terrain
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
# Define configuration file
file_name_settings="fp_labs_analyzer_hmc_timeseries.json"
# Info
print(' ==> Libraries loaded')
```
### **Configure the flood-proofs laboratory**
- Load the configuration file:
```
# Read data from settings algorithm file
settings_info = read_file_settings(file_name_settings)
# Info
print(' ==> Settings information loaded')
```
- Define the static and dynamic file paths:
```
# Define static and dynamic file path(s)
file_path_dset_static = define_file_path(settings_info['source']['static'])
file_path_dset_dynamic = define_file_path(settings_info['source']['dynamic'])
file_path_dset_plot = define_file_path(settings_info['destination']['plot'])
# Define dynamic path root
file_path_dynamic_root = get_path_root(list(file_path_dset_dynamic.values())[0])
# Define list of available folders
list_folder_dynamic_root = get_path_folders(file_path_dynamic_root)
```
### **Read the static datasets**
- Read the terrain and river networks files in ascii format
```
# Read terrain datasets
darray_terrain = read_data_grid(file_path_dset_static['terrain'], var_limit_min=0, var_limit_max=None)
# Read river network datasets
darray_river_network = read_data_grid(file_path_dset_static['river_network'], var_limit_min=0, var_limit_max=1)
```
- Read the sections information in vector format
```
# Read sections shapefile
dset_section = read_data_section(file_path_dset_static['sections'])
# Define outlet sections list
outlet_section_list = ['_'.join([sec_domain, sec_name]) for sec_name, sec_domain in zip(dset_section['section_name'], dset_section['section_domain'])]
```
### **Select the "time run" and the "outlet section"**
- Select the time run to perform the time-series analysis
```
# Define list of available runs based on folders name
time_dynamic_list = get_folders_time(list_folder_dynamic_root)
# Display available time run
time_run_selection = widgets.Select(options=time_dynamic_list, description='Time Run', disabled=False)
display(time_run_selection)
# Parse the selection from the scroll menu
time_run_value = time_run_selection.value
# Set time run in the setting info object
settings_info['time_run'] = time_run_value
# Info time run
print(' ==> Time Run: ' + settings_info['time_run'] )
```
- Select the outlet section to perform the time-series analysis
```
# Display available outlet sections
outlet_sections_selection = widgets.Select(options=outlet_section_list, description='Outlet Sections', disabled=False)
display(outlet_sections_selection)
# Parse the selection from the scroll menu
outlet_section_value = outlet_sections_selection.value
basin_name, section_name = outlet_section_value.split('_')
# Set section name and basin in the setting info object
settings_info['info']['section_name'] = section_name
settings_info['info']['basin_name'] = basin_name
# Info section
print(' ==> SectionName: ' + settings_info['info']['section_name'] + ' -- BasinName: ' + settings_info['info']['basin_name'] )
```
- Define the information of section, domain and time
```
# Get domain, section and time information
info_section = find_data_section(dset_section, section_name=settings_info['info']['section_name'],
basin_name=settings_info['info']['basin_name'])
info_domain = settings_info['info']['domain_name']
info_time_run = settings_info['time_run']
# Fill dynamic file path(s)
file_template_filled = define_file_template(
info_time_run, section_name=info_section['section_name'], basin_name=info_section['basin_name'],
domain_name=info_domain, template_default=settings_info['template'])
file_path_dset_dynamic = fill_file_template(file_path_dset_dynamic,
template_filled=file_template_filled,
template_default=settings_info['template'])
file_path_dset_plot = fill_file_template(file_path_dset_plot,
template_filled=file_template_filled,
template_default=settings_info['template'])
# Info
print(' ==> SectionName: ' + info_section['section_name'] + ' -- BasinName: ' + info_section['basin_name'] + ' -- TimeRun: ' + info_time_run)
```
### **Read the dynamic datasets**
- Read the collections file in netcdf format
```
# Read the collections file
dframe_ts_cls, attrs_ts_cls = read_file_ts_collections(file_path_dset_dynamic['time_series_collections'])
# Info
print(' ==> Collections dynamic file ' + file_path_dset_dynamic['time_series_collections'] + '... LOADED')
```
- Read the hydrograph file in json format
```
# Read the hydrograph file
dframe_ts_hydro, attrs_ts = read_file_ts_hydrograph(file_path_dset_dynamic['time_series_hydrograph'])
# Info
print(' ==> Hydrograph dynamic file ' + file_path_dset_dynamic['time_series_hydrograph'] + ' ... LOADED')
```
- Select the variable(s) and create the dataframe
```
# Select output variable(s) to plot time-series
dframe_ts_discharge_obs = create_dframe_ts(dframe_ts_hydro, var_name_in='discharge_observed', var_name_out='discharge_observed')
dframe_ts_discharge_sim = create_dframe_ts(dframe_ts_hydro, var_name_in='discharge_simulated', var_name_out='discharge_simulated')
dframe_ts_rain = create_dframe_ts(dframe_ts_cls, var_name_in='rain', var_name_out='rain')
dframe_ts_sm = create_dframe_ts(dframe_ts_cls, var_name_in='soil_moisture', var_name_out='soil_moisture')
# Select forcing variable(s) to plot time-series
dframe_ts_airt = create_dframe_ts(dframe_ts_cls, var_name_in='air_temperature', var_name_out='air_temperature')
dframe_ts_incrad = create_dframe_ts(dframe_ts_cls, var_name_in='incoming_radiation', var_name_out='incoming_radiation')
dframe_ts_rh = create_dframe_ts(dframe_ts_cls, var_name_in='relative_humidity', var_name_out='relative_humidity')
dframe_ts_wind = create_dframe_ts(dframe_ts_cls, var_name_in='wind', var_name_out='wind')
```
### **Plot the dynamic datasets in time-series format**
- Create the plot for analyzing the hmc discharges output
```
# Plot ts discharge
file_name_ts_discharge = file_path_dset_plot['time_series_discharge']
plot_ts_discharge(file_name_ts_discharge, dframe_ts_discharge_sim, attrs_ts,
df_discharge_obs=dframe_ts_discharge_obs, df_rain=dframe_ts_rain,
df_soil_moisture=dframe_ts_sm)
```
- Create the plot for analyzing the hmc forcing datasets
```
# Plot ts forcing
file_name_ts_forcing = file_path_dset_plot['time_series_forcing']
plot_ts_forcing(file_name_ts_forcing,
df_rain=dframe_ts_rain, df_airt=dframe_ts_airt, df_incrad=dframe_ts_incrad,
df_rh=dframe_ts_rh, df_winds=dframe_ts_wind,
attrs_forcing=attrs_ts)
# Plot map terrain with section
file_name_section_locator = file_path_dset_plot['section_locator']
plot_map_terrain(file_name_section_locator, darray_terrain, darray_river_network, info_section,
mask_terrain=True)
```
**Training on-the-job**
- Download/Organize the static and dynamic datasets for a different "time run";
- Select a different case-study (time run);
- Select a different outlet section (section name, basin name);
- Add/change the variables in the time-series plot (check the variable names in the NetCDF forcing or outcome files);
- Add/change the plot of terrain datasets (for example using a different map background);
- ...
# Computer vision and deep learning - Laboratory 5
In this laboratory we'll work with a semantic segmentation model. The task of semantic segmentation implies the labeling/classification of __all__ the pixels in the input image.
You'll build and train a fully convolutional neural network inspired by U-Net.
Also, you will learn about how you can use various callbacks during the training of your model.
Finally, you'll implement several metrics suitable for evaluating segmentation models.
```
import os
import cv2
import glob
import shutil
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
```
## Data loading
As in the previous laboratory, we'll work with the OxfordPets dataset. Each image has a segmentation mask assigned; three classes are defined on each mask:
- Label 1: pet;
- Label 2: background;
- Label 3: border of the pet.
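A tiny sketch (a made-up 2×2 trimap patch) of how these raw labels look, and how they can be shifted to start at zero — the same shift the data generator further below applies:

```python
import numpy as np

# Hypothetical 2x2 trimap patch with the raw labels (1 = pet, 2 = background, 3 = border)
raw_mask = np.array([[1, 2],
                     [3, 2]])

# Shift labels to the 0..2 range expected by a sparse categorical loss
shifted = raw_mask - 1
print(shifted)  # pet -> 0, background -> 1, border -> 2
```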
Let's first write an offline processing step that will split our dataset into train/test sets, and will pre-process the input images.
```
!wget https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
!wget https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
!tar -xf images.tar.gz
!tar -xf annotations.tar.gz
!ls images/*.jpg | head -4
!ls annotations/trimaps/*.png | head -4
img = cv2.imread('images/Abyssinian_101.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = cv2.imread('annotations/trimaps/Abyssinian_101.png', cv2.IMREAD_GRAYSCALE)
plt.subplot(1, 3, 1)
plt.imshow(img)
plt.subplot(1, 3, 2)
plt.imshow(mask)
plt.subplot(1, 3, 3)
plt.imshow(mask, cmap="gray")
mask = cv2.resize(mask, ( 45//2, 31//2), interpolation=cv2.INTER_NEAREST)
print(mask)
def make_image_square(img, padding_mode='edge', padding_value=0):
height, width = img.shape[0], img.shape[1]
if height > width:
padding = ((0, 0), ((height - width) // 2, (height - width) // 2), (0, 0))
else:
padding = (((width - height) // 2, (width - height) // 2), (0, 0), (0, 0))
if padding_mode == 'edge':
return np.pad(img, padding, mode=padding_mode)
return np.pad(img, padding, mode=padding_mode, constant_values=padding_value)
def get_splits(root_dir):
image_paths = glob.glob(root_dir + "/*.jpg")
labels = ["_".join(os.path.basename(path).split("_")[:-1]) for path in image_paths]
class_names = sorted(list(set(labels)))
assert (len(class_names) == 37)
X = image_paths
y = np.array([class_names.index(label) for label in labels])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=len(class_names)*20,
                                                        random_state=13,
                                                        shuffle=True)
    # note: the (X_train, y_train, X_test, y_test) return order matches the unpacking in save_db
    return X_train, y_train, X_test, y_test
def save_splits(images_paths, anno_dir, out_dir, img_shape = (128, 128)):
for img_path in images_paths:
filename = os.path.basename(img_path)
anno_path = os.path.join(anno_dir, filename.replace('.jpg', '.png'))
img = cv2.imread(img_path)
if img is None:
print('Error while loading image: ', img_path)
continue
mask = cv2.imread(anno_path)
img = make_image_square(img, 'edge')
mask = make_image_square(mask, 'constant', 2) # 2 - color of the background pixels
img = cv2.resize(img, img_shape)
mask = cv2.resize(mask, img_shape, interpolation=cv2.INTER_NEAREST)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
np.save(os.path.join(out_dir, filename.replace('.jpg', '.img.npy')), img)
np.save(os.path.join(out_dir, filename.replace('.jpg', '.mask.npy')), mask)
def save_db(images_dir, anno_dir, outdir, img_shape = (128, 128)):
X_train, _, X_test, _ = get_splits(images_dir)
train_dir = os.path.join(outdir, 'train')
test_dir = os.path.join(outdir, 'test')
if os.path.exists(outdir):
shutil.rmtree(outdir)
os.mkdir(outdir)
os.mkdir(train_dir)
os.mkdir(test_dir)
save_splits(X_train, anno_dir, train_dir)
save_splits(X_test, anno_dir, test_dir)
def show_pair(img, mask):
plt.subplot(1, 2, 1)
plt.imshow(img)
plt.title('Image')
plt.subplot(1, 2, 2)
plt.imshow(mask)
plt.title('Mask')
plt.show()
save_db('./images', './annotations/trimaps', 'oxford_pets_seg')
```
You will use a slightly modified version of the data generator that you've written in the previous laboratory.
```
class DataGenerator(tf.keras.utils.Sequence):
def __init__(self, db_dir, batch_size,
shuffle=True):
self.batch_size = batch_size
self.shuffle = shuffle
self.image_paths, self.mask_paths = None, None
self.get_data(db_dir)
self.indices = np.arange(len(self.image_paths))
self.on_epoch_end()
def get_data(self, root_dir):
""""
Loads the paths to the images and their corresponding labels from the database directory
"""
self.image_paths = np.asarray(glob.glob(root_dir + "/*.img.npy"))
self.mask_paths = np.asarray([path.replace('.img.npy', '.mask.npy') for path in self.image_paths])
def __len__(self):
"""
Returns the number of batches per epoch: the total size of the dataset divided by the batch size
"""
return int(np.floor(len(self.image_paths) / self.batch_size))
def __getitem__(self, index):
""""
Generates a batch of data
"""
batch_indices = self.indices[index*self.batch_size : (index+1)*self.batch_size]
batch_x = np.asarray([np.load(img_path).astype(np.float32)/255.0 for img_path in self.image_paths[batch_indices]])
batch_y = np.asarray([np.expand_dims(np.load(mask_path) - 1, axis=-1) for mask_path in self.mask_paths[batch_indices]])
return batch_x, batch_y
def on_epoch_end(self):
""""
Called at the end of each epoch
"""
# if required, shuffle your data after each epoch
self.indices = np.arange(len(self.image_paths))
if self.shuffle:
# you might find np.random.shuffle useful here
np.random.shuffle(self.indices)
train_generator = DataGenerator("./oxford_pets_seg/train", 32)
batch_x, batch_y = train_generator[0]
print(len(batch_x))
fig, axes = plt.subplots(nrows=1, ncols=6, figsize=[16, 9])
for i in range(len(axes)//2):
axes[2*i].set_title('Image')
axes[2*i].imshow(batch_x[i])
axes[2*i + 1].set_title('Mask')
axes[2*i + 1].imshow(batch_y[i][:,:,0]*64)
plt.show()
```
## Building the model
The model that will be used in this laboratory is inspired by the [U-Net](https://arxiv.org/abs/1505.04597) architecture.
U-Net is a fully convolutional neural network comprising two symmetric paths: a contracting path (to capture context) and an expanding path (which enables precise localization).
The network also uses skip connections between corresponding layers in the downsampling and upsampling paths, directly forwarding high-resolution feature maps from the encoder to the decoder network.
An overview of the U-Net architecture is depicted in the figure below.
<img src="https://miro.medium.com/max/1400/1*J3t2b65ufsl1x6caf6GiBA.png"/>
## The downsampling path
For the downsampling path we'll use a convolutional neural network from the tensorflow.keras.applications module. We'll first load the pre-trained weights on ImageNet, and we'll "freeze" these weights during the training process.
In this example, we'll use the Mobile-Net architecture, but you can feel free to experiment with any other network.
The problem is that, to create the skip connections required by the U-Net architecture, we need access to the feature maps of some intermediate layers of the network, and these are not exposed by default.
The first step is to determine the names for the layers that will be included in the skip connections. So, let's load the MobileNetV2 architecture and use the _summary()_ method to identify these layers.
[MobileNetV2](https://keras.io/api/applications/) has a size of 16MB and achieves a top-1 accuracy of 71.3% on ImageNet.
```
input_shape = (128, 128, 3)
feature_extractor = tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False)
feature_extractor.summary()
# TODO select the names of the layers that will be included in the skip connections
downsample_skip_layers = ["block_1_expand_relu", "block_2_expand_relu", "block_5_expand_relu", "block_8_expand_relu"]
```
After identifying the layers that you want to use in the skip connections, you can access them with the _get\_layer()_ method defined in the _tensorflow.keras.Model_ class.
```
layer_act_map = feature_extractor.get_layer("block_1_expand_relu").output
print(layer_act_map)
for layer_name in downsample_skip_layers:
print(layer_name, '->', feature_extractor.get_layer(layer_name).output.shape)
```
## The upsampling path
In the upsampling path, we'll use transposed convolutions to progressively increase the resolution of the activation maps. The Keras layer for transposed convolutions is [Conv2DTranspose](https://keras.io/api/layers/convolution_layers/convolution2d_transpose/).
Let's write a function to implement an upsampling block, consisting of a transposed convolution, a batch normalization layer, and a ReLU activation.
Remember, the output size $W_o$ of a transposed convolutional layer is:
\begin{equation}
W_o = (W_i - 1) \cdot S - 2P + F
\end{equation},
where $W_i$ is the size of the input, $S$ is the stride, $P$ is the amount of padding and $F$ is the filter size.
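A quick numeric check of this formula (the concrete sizes below are illustrative):

```python
def conv_transpose_out_size(w_in, stride, padding, kernel):
    # W_o = (W_i - 1) * S - 2P + F
    return (w_in - 1) * stride - 2 * padding + kernel

# A 64-wide map upsampled with a 4x4 filter, stride 2 and padding 1 doubles in size
print(conv_transpose_out_size(64, stride=2, padding=1, kernel=4))  # -> 128
```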
```
def upsample_block(x, filters, size, stride = 2):
"""
x - the input of the upsample block
filters - the number of filters to be applied
size - the size of the filters
"""
# TODO your code here
# transposed convolution
# BN
# relu activation
x = tf.keras.layers.Convolution2DTranspose(kernel_size=size, filters=filters, strides=stride, padding="same", kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
output = tf.keras.layers.ReLU()(x)
return output
```
Now let's test this upsampling block
```
in_layer = feature_extractor.get_layer(downsample_skip_layers[0]).output
filter_sz = 3
num_filters = 16
for stride in [2, 4, 8]:
x = upsample_block(in_layer, num_filters, filter_sz, stride)
print('in shape: ', in_layer.shape, ' upsample with filter size ', filter_sz, '; stride ', stride, ' -> out shape ', x.shape)
```
## Putting it all together
Now we understand all the parts required to build the U-Net architecture.
Let's write the function _build\_unet()_ which will build our architecture. Of course, we'll use the functional API when writing the model.
```
def build_unet(input_shape, num_classes, trainable=False):
# define the input layer
inputs = tf.keras.layers.Input(shape=input_shape)
# TODO define the downsampling path as a MobilenetV2 architecture, pre-loaded with imagenet weights
mobilenet_v2 = tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False)
downsample_skip_layer_name = ["block_1_expand_relu", "block_2_expand_relu", "block_5_expand_relu", "block_8_expand_relu"]
# get access to the skip layers in the downsample path
down_stack = tf.keras.Model(inputs=mobilenet_v2.input, outputs=[mobilenet_v2.get_layer(name).output for name in downsample_skip_layer_name])
# TODO freeze the downsampling path
down_stack.trainable = trainable
skips = down_stack(inputs)
x = skips[-1]
filter_sz = 3
    for skip_layer in reversed(skips[:-1]):
        # determine the number of filters from the skip layer's channels,
        # apply an upsampling block, then add the skip connection
        x = upsample_block(x, skip_layer.shape[-1], filter_sz)
        x = tf.keras.layers.Concatenate()([x, skip_layer])
# add the last conv2d transpose layer with the number of filters equal to the number of segmentation classes
output = tf.keras.layers.Conv2DTranspose(filters=num_classes, kernel_size=3, strides=(2, 2), padding='same')(x)
# return the model
return tf.keras.Model(inputs=inputs, outputs=output)
model = build_unet((128, 128, 3), 3)
tf.keras.utils.plot_model(model, show_shapes=True)
```
## Training the model. Defining callbacks.
Now that we've built the model, we can proceed to the training step.
We are dealing with a multi-class segmentation problem (pet, pet-border and background pixels), so a categorical cross-entropy loss is appropriate; because the masks are integer-encoded rather than one-hot, the code below uses [tf.keras.losses.SparseCategoricalCrossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy).
When training the model you can use various [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback).
Define the following callbacks:
- [ModelCheckpoint](https://keras.io/api/callbacks/model_checkpoint/) to save the best model seen so far (e.g., checking every 5 epochs);
- [TerminateOnNaN](https://keras.io/api/callbacks/terminate_on_nan/) to stop the training process if a NaN loss is encountered;
- [EarlyStopping](https://keras.io/api/callbacks/early_stopping/) to stop training when the loss has stopped improving.
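One possible way to wire these callbacks into `model.fit` is sketched below; the checkpoint file name and the patience value are placeholders, not part of the assignment.

```python
import tensorflow as tf

# Hypothetical callback setup; 'best_model.h5' is a placeholder path
callbacks = [
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor='val_loss',
                                       save_best_only=True),
    tf.keras.callbacks.TerminateOnNaN(),
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3),
]
print(len(callbacks))  # pass this list via model.fit(..., callbacks=callbacks)
```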
```
train_generator = DataGenerator("./oxford_pets_seg/train", 32)
val_generator = DataGenerator("./oxford_pets_seg/test", 32)
def train(input_shape, num_classes, train_datagen, val_datagen, num_epochs=5):
model = build_unet(input_shape, num_classes)
    # TODO configure the training process: model.compile(...)
model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction="auto", name="sparse_categorical_crossentropy"
),
metrics=["accuracy"]
)
# train the model
history = model.fit(train_datagen, validation_data=val_datagen, epochs=num_epochs)
model.save('model_final.h5')
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure()
plt.plot(history.epoch, loss, 'r', label='Training loss')
plt.plot(history.epoch, val_loss, 'b', label='Validation loss')
plt.title('Loss evolution')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 1])
plt.legend()
plt.show()
return model
model = train((128, 128, 3), 3, train_generator, val_generator, 5)
# now let's see some predictions
batch_x, batch_y = val_generator[0]
predictions = model.predict(batch_x)
predictions = tf.argmax(predictions, axis=-1)
fig, axes = plt.subplots(nrows=6, ncols=3, figsize=[16, 9])
for i in range(len(axes)):
axes[i][0].imshow(batch_x[i])
axes[i][0].set_title('Image')
axes[i][1].imshow(batch_y[i][:, :, 0])
axes[i][1].set_title('GT Mask')
# pred_mask = model.predict(image)
axes[i][2].imshow(predictions[i] )
axes[i][2].set_title('Pred Mask')
plt.show()
```
## Evaluation metrics
Finally, you will implement several segmentation metrics to evaluate the model you've just trained. As usual, try to implement these metrics without using any for loops.
In the remainder of this section we'll use the following notation:
- $n_{ij}$ - the total number of pixels classified to class
j but actually belonging to class i; $i, j \in 1, .., C$;
- $t_i = \sum_{j = 1}^{C} n_{ij}$ - the total number of pixels belonging to class $i$ (in the ground truth segmentation mask);
- $C$ - the total number of classes in the segmentation problem.
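All of these quantities can be read off a confusion matrix, where rows index the true class and columns the predicted class, so entry $(i, j)$ plays the role of $n_{ij}$. A small made-up example:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical flattened masks for a 3-class problem
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

conf_mat = confusion_matrix(y_true, y_pred)  # conf_mat[i, j] corresponds to n_ij
t = conf_mat.sum(axis=1)                     # t_i: true pixel count of class i
print(conf_mat)
print(t)  # -> [2 2 2]
```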
### Mean pixel accuracy
Pixel accuracy is the simplest image segmentation metric; it is defined as the percentage of pixels that were correctly classified by the model.
\begin{equation}
p_a = \frac{1}{C} \frac{\sum_{i}^{C} n_{ii}}{\sum_{i}^{C} t_i}
\end{equation}
This metric is not that relevant for class imbalanced problems (which occurs for most segmentation problems).
### Intersection over Union (IoU)
The intersection over union metric is defined as the ratio between the area of intersection and the area of union between the predicted and the ground-truth segmentation masks of a single class.
In the case of a multi-class segmentation problem, we simply average the IoUs over all the classes. This metric is called mean Intersection over Union (mIoU).
\begin{equation}
mIoU = \frac{1}{C} \sum_{i = 1}^{C} \frac{n_{ii}}{t_i - n_{ii} + \sum_{j = 1}^{C} n_{ji}}
\end{equation}
The ideal value for this metric is 1; usually values lower than 0.6 indicate a very bad performance.
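For a single binary mask, the metric reduces to the intersection count over the union count. A tiny sketch with made-up masks:

```python
import numpy as np

# Hypothetical ground-truth and predicted binary masks
gt   = np.array([[1, 1, 0],
                 [0, 1, 0]])
pred = np.array([[1, 0, 0],
                 [0, 1, 1]])

intersection = np.logical_and(gt, pred).sum()
union = np.logical_or(gt, pred).sum()
print(intersection / union)  # -> 0.5
```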
### Frequency Weighted Intersection over Union
The frequency weighted intersection over union metric is similar to mean IoU, but each class's IoU is weighted by the relative frequency of that class's pixels.
\begin{equation}
fIoU = \left(\sum_{i = 1}^{C} t_i\right)^{-1} \sum_{i = 1}^{C} t_i \cdot \frac{n_{ii}}{t_i - n_{ii} + \sum_{j = 1}^{C} n_{ji}}
\end{equation}
The values of this metric lie in the interval [0, 1], and the ideal value for this metric is 1.
Compute and report these metrics for your trained model(s).
```
def mean_pixel_acc(y_true, y_pred):
"""
y_true - array of shape (batch_size, height, width) with the ground truth labels
    y_pred - array of shape (batch_size, height, width) with the predicted labels
"""
y_true = y_true.flatten().astype(int)
y_pred = y_pred.flatten().astype(int)
conf_mat = confusion_matrix(y_true, y_pred)
C = conf_mat.shape[0]
return (1/C) * (np.trace(conf_mat) / np.sum(np.sum(conf_mat, axis=1)))
def iou(y_true, y_pred):
"""
y_true - array of shape (batch_size, height, width) with the ground truth labels
    y_pred - array of shape (batch_size, height, width) with the predicted labels
"""
y_true = y_true.flatten().astype(int)
y_pred = y_pred.flatten().astype(int)
conf_mat = confusion_matrix(y_true, y_pred)
C = conf_mat.shape[0]
diag = np.diagonal(conf_mat)
return (1/C) * (np.sum(diag / (np.sum(conf_mat, axis=1) - diag + np.sum(conf_mat, axis=0)) ))
def fw_iou(y_true, y_pred):
"""
y_true - array of shape (batch_size, height, width) with the ground truth labels
    y_pred - array of shape (batch_size, height, width) with the predicted labels
"""
y_true = y_true.flatten().astype(int)
y_pred = y_pred.flatten().astype(int)
conf_mat = confusion_matrix(y_true, y_pred)
C = conf_mat.shape[0]
diag = np.diagonal(conf_mat)
ti = np.sum(conf_mat, axis=1)
return 1/np.sum(ti) * np.sum(ti * diag / (ti - diag + np.sum(conf_mat, axis=0)))
true = batch_y[2]
plt.imshow(true[:,:,0])
plt.imshow(batch_x[2])
print(batch_x[2].shape)
pred = predictions[2].numpy()
plt.imshow(pred)
# printing scores
print(mean_pixel_acc(true, pred))
print(iou(true, pred))
print(fw_iou(true, pred))
```
```
# !pip install econml
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
# IHDP: ForestRiesz
## Library Imports
```
from pathlib import Path
import os
import glob
from joblib import dump, load
import pandas as pd
import scipy
import scipy.stats
import scipy.special
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from utils.forestriesz import ForestRieszATE
from utils.ihdp_data import *
```
## Moment Definition
```
def moment_fn(x, test_fn): # Returns the moment for the ATE example, for each sample in x
t1 = np.hstack([np.ones((x.shape[0], 1)), x[:, 1:]])
t0 = np.hstack([np.zeros((x.shape[0], 1)), x[:, 1:]])
return test_fn(t1) - test_fn(t0)
```
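A self-contained sanity check of the moment function (restated here so the snippet runs on its own): if `test_fn` returns twice the treatment column, the moment `g(1, x) - g(0, x)` should be 2 for every sample.

```python
import numpy as np

def moment_fn(x, test_fn):
    # ATE moment: evaluate test_fn with the treatment column set to 1 and to 0
    t1 = np.hstack([np.ones((x.shape[0], 1)), x[:, 1:]])
    t0 = np.hstack([np.zeros((x.shape[0], 1)), x[:, 1:]])
    return test_fn(t1) - test_fn(t0)

x = np.random.rand(4, 3)  # first column plays the role of the treatment
moment = moment_fn(x, lambda t: 2 * t[:, 0])
print(moment)  # -> [2. 2. 2. 2.]
```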
## MAE Experiment
```
data_base_dir = "./data/IHDP/sim_data"
simulation_files = sorted(glob.glob("{}/*.csv".format(data_base_dir)))
nsims = 1000
np.random.seed(123)
sim_ids = np.random.choice(len(simulation_files), nsims, replace=False)
methods = ['dr', 'direct', 'ips', 'plugin']
true_ATEs = []
results = []
for it, sim in enumerate(sim_ids):
simulation_file = simulation_files[sim]
x = load_and_format_covariates(simulation_file, delimiter=' ')
t, y, y_cf, mu_0, mu_1 = load_other_stuff(simulation_file, delimiter=' ')
X = np.c_[t, x]
true_ATE = np.mean(mu_1 - mu_0)
true_ATEs.append(true_ATE)
y_scaler = StandardScaler(with_mean=True).fit(y)
y = y_scaler.transform(y)
est = ForestRieszATE(criterion='het', n_estimators=1000, min_samples_leaf=2,
min_var_fraction_leaf=0.001, min_var_leaf_on_val=True,
min_impurity_decrease = 0.01, max_samples=.8, max_depth=None,
warm_start=False, inference=False, subforest_size=1,
honest=True, verbose=0, n_jobs=-1, random_state=123)
est.fit(X[:, 1:], X[:, [0]], y.reshape(-1, 1))
params = tuple(x * y_scaler.scale_[0] for method in methods
for x in est.predict_ate(X, y, method = method)) + (true_ATE, )
results.append(params)
res = tuple(np.array(x) for x in zip(*results))
truth = res[-1:]
res_dict = {}
for it, method in enumerate(methods):
point, lb, ub = res[it * 3: (it + 1)*3]
res_dict[method] = {'point': point, 'lb': lb, 'ub': ub,
'MAE': np.mean(np.abs(point - truth)),
'std. err.': np.std(np.abs(point - truth)) / np.sqrt(nsims),
}
print("{} : MAE = {:.3f} +/- {:.3f}".format(method, res_dict[method]['MAE'], res_dict[method]['std. err.']))
path = './results/IHDP/ForestRiesz/MAE'
if not os.path.exists(path):
os.makedirs(path)
dump(res_dict, path + '/IHDP_MAE_RF.joblib')
```
### Table
```
path = './results/IHDP/ForestRiesz/MAE'
if not os.path.exists(path):
os.makedirs(path)
methods_str = ["DR", "Direct", "IPS", "\\midrule \n" +
"\\multicolumn{2}{l}{\\textbf{Benchmark:}} \\\\ \n RF Plug-in"]
with open(path + '/IHDP_MAE_RF.tex', "w") as f:
f.write("\\begin{tabular}{lc} \n" +
"\\toprule \n" +
"& MAE $\\pm$ std. err. \\\\ \n" +
"\\midrule \n" +
"\\multicolumn{2}{l}{\\textbf{Auto-DML:}} \\\\ \n")
for i, method in enumerate(methods):
f.write(" & ".join([methods_str[i], "{:.3f} $\\pm$ {:.3f}".format(res_dict[method]['MAE'],
res_dict[method]['std. err.'])]) + " \\\\ \n")
f.write("\\bottomrule \n \\end{tabular}")
```
## Coverage Experiment
```
data_base_dir = "./data/IHDP/sim_data_redraw_T"
simulation_files = sorted(glob.glob("{}/*.csv".format(data_base_dir)))
def rmse_fn(y_pred, y_true):
return np.sqrt(np.mean((y_pred - y_true)**2))
nsims = 100
np.random.seed(123)
sim_ids = np.random.choice(len(simulation_files), nsims, replace=False)
methods = ['dr', 'direct', 'ips']
true_ATEs = []
results = []
for it, sim in enumerate(sim_ids):
simulation_file = simulation_files[sim]
x = load_and_format_covariates(simulation_file, delimiter=' ')
t, y, y_cf, mu_0, mu_1 = load_other_stuff(simulation_file, delimiter=' ')
X = np.c_[t, x]
true_ATE = np.mean(mu_1 - mu_0)
true_ATEs.append(true_ATE)
y_scaler = StandardScaler(with_mean=True).fit(y)
y = y_scaler.transform(y)
est = ForestRieszATE(criterion='het', n_estimators=100, min_samples_leaf=2,
min_var_fraction_leaf=0.001, min_var_leaf_on_val=True,
min_impurity_decrease = 0.01, max_samples=.8, max_depth=None,
warm_start=False, inference=False, subforest_size=1,
honest=True, verbose=0, n_jobs=-1, random_state=123)
est.fit(X[:, 1:], X[:, [0]], y.reshape(-1, 1))
params = tuple(x * y_scaler.scale_[0] for method in methods
for x in est.predict_ate(X, y, method = method)) + (true_ATE, )
results.append(params)
res = tuple(np.array(x) for x in zip(*results))
truth = res[-1:]
res_dict = {}
for it, method in enumerate(methods):
point, lb, ub = res[it * 3: (it + 1)*3]
res_dict[method] = {'point': point, 'lb': lb, 'ub': ub,
'cov': np.mean(np.logical_and(truth >= lb, truth <= ub)),
'bias': np.mean(point - truth),
'rmse': rmse_fn(point, truth)
}
print("{} : bias = {:.3f}, rmse = {:.3f}, cov = {:.3f}".format(method, res_dict[method]['bias'], res_dict[method]['rmse'], res_dict[method]['cov']))
path = './results/IHDP/ForestRiesz/coverage'
if not os.path.exists(path):
os.makedirs(path)
dump(res_dict, path + '/IHDP_coverage_RF.joblib')
```
### Histogram
```
path = './results/IHDP/ForestRiesz/coverage'
if not os.path.exists(path):
os.makedirs(path)
method_strs = ["{}. Bias: {:.3f}, RMSE: {:.3f}, Coverage: {:.3f}".format(method, d['bias'], d['rmse'], d['cov'])
for method, d in res_dict.items()]
plt.title("\n".join(method_strs))
for method, d in res_dict.items():
plt.hist(np.array(d['point']), alpha=.5, label=method)
plt.axvline(x = np.mean(truth), label='true', color='red')
plt.legend()
plt.savefig(path + '/IHDP_coverage_RF.pdf', bbox_inches='tight')
plt.show()
```
## Multiple Output Models
+ Multi Task Elastic Net
+ Multi Task Models
```
import pandas as pd
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_squared_error
#### Read in Crime by year / month data set
## read in data
path = '../Homeworks/chicagoCrimesByYear.csv'
df = pd.read_csv(path).fillna(0)
## sort by year/month, then set them as the index
df = df.sort_values(by=['year', 'month'])
df.set_index(['year', 'month'], inplace=True)
df.head()
```
#### One Month Ahead Forecasting using multitask Elastic Net
This predicts all crime counts one month in advance and
uses a linear model (elastic net) at the core
```
# date shift
X = df.iloc[0:-1, :]
y = df.iloc[1:, :]
# builds Model
model = MultiTaskElasticNet().fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df.columns:
r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
#### One Month Ahead Forecasting using DecisionTreeRegression
This predicts all crime counts one month in advance
uses multiple decision Tree Regressors
```
# date shift
X = df.iloc[0:-1, :]
y = df.iloc[1:, :]
# builds Model
model = MultiOutputRegressor(DecisionTreeRegressor()).fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df.columns:
r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
#### One Year Ahead Forecasting using MultiTaskElasticNet
This predicts all crime counts one year in advance and
uses a linear model (elastic net) at the core
```
# date shift
df_grouped = df.groupby(df.index.get_level_values('year')).sum()
# filter out partial year
df_grouped = df_grouped.loc[df_grouped.index < 2020, :]
X = df_grouped.iloc[0:-1, :]
y = df_grouped.iloc[1:, :]
# builds Model
model = MultiTaskElasticNet().fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df.columns:
r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
#### Predict Next Year's crime trends
use all the data to create a prediction for next year
```
# extend the index one year ahead and drop the first year
index = list(df_grouped.index)
last_value = index[-1]
index.append(last_value + 1)
index = index[1:]
preds = pd.DataFrame(model.predict(df_grouped), columns=df_grouped.columns, index = index)
preds.tail()
```
### 1 - Write a program that receives a list of numbers and
- returns the largest element
- returns the sum of the elements
- returns the number of occurrences of the first element of the list
- returns the mean of the elements
- returns the value closest to the mean of the elements
- returns the sum of the negative elements
- returns the number of equal neighbors
```
lista = [0, 5, -6, 1, -9, 2, 4, 0, 1, 5, 7, 3.5, 7, 7, 7]
maior_numero = max(lista)
soma_elementos = sum(lista)
numero_ocorrencias = lista.count(lista[0])
media_elementos = soma_elementos/len(lista)
menor_diferenca = float('inf')
for numero in lista:
    if abs(numero - media_elementos) < menor_diferenca:
        mais_proximo = numero
        menor_diferenca = abs(numero - media_elementos)
soma_negativos = sum([numero for numero in lista if numero < 0])
vizinhos_iguais = 0
for i in range(len(lista)):
    if (i > 0 and lista[i] == lista[i - 1]) or (i < len(lista) - 1 and lista[i] == lista[i + 1]):
        vizinhos_iguais += 1
print(f'List: {lista}\n'
      f'Largest number: {maior_numero}\n'
      f'Sum of elements: {soma_elementos}\n'
      f'Occurrences of the first element: {numero_ocorrencias}\n'
      f'Mean of elements: {media_elementos}\n'
      f'Value closest to the mean: {mais_proximo}\n'
      f'Sum of negative values: {soma_negativos}\n'
      f'Number of equal neighbors: {vizinhos_iguais}'
      )
```
### 2 - Write a program that receives two lists and returns True if they are equal or False otherwise.
Two lists are equal if they contain the same values in the same order.
```
def saoIguais(lista1, lista2):
    # list equality already checks values and order
    return lista1 == lista2
lista1 = [1, 'g', 4, 5, 6, 7, 20.5]
lista2 = [20.5, 'g', 4, 5, 6, 7, 1]
lista3 = [1, 'g', 4, 5, 6, 7, 20.5]
print(saoIguais(lista1, lista2))
print(saoIguais(lista2, lista3))
print(saoIguais(lista1, lista3))
```
### 3 - Write a program that receives two lists and returns True if they have the same elements or False otherwise
Two lists have the same elements when they are made up of the same values, though not necessarily in the same order.
```
def saoIguais(lista1, lista2):
    # note: comparing sets ignores how many times each value appears
    return set(lista1) == set(lista2)
lista1 = [1, 'g', 4, 5, 6, 7, 20.5]
lista2 = [20.5, 'g', 4, 5, 6, 7, 1]
lista3 = [1, 'g', 4, 5, 6, 7, 20.5]
print(saoIguais(lista1, lista2))
print(saoIguais(lista2, lista3))
print(saoIguais(lista1, lista3))
```
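One caveat with the set-based comparison above: converting to `set` discards how many times each value appears, so lists with different duplicate counts can still compare as equal. When multiplicities matter, `collections.Counter` gives a stricter check — a small sketch (the name `mesmosElementos` is illustrative, not part of the exercise):

```python
from collections import Counter

def mesmosElementos(lista1, lista2):
    # Counter maps each value to its number of occurrences,
    # so both membership and duplicate counts are compared
    return Counter(lista1) == Counter(lista2)

print(mesmosElementos([1, 1, 2], [1, 2, 2]))      # False: counts differ
print(mesmosElementos([1, 'g', 4], [4, 1, 'g']))  # True: same multiset
```

This keeps the same restriction the `set` version already has: the elements must be hashable.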
### 4 - Write a program that iterates over a list with the following format:
[['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]. This list records the number of fouls each team committed in each match. In the list above, in the match between Brasil and Italia, Brasil committed 10 fouls and Italia committed 9.
The program must print:
- the total number of fouls in the championship
- the team that committed the most fouls
- the team that committed the fewest fouls
```
jogos = [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]
faltas_por_time = {}
for time1, time2, faltas in jogos:
    faltas_por_time.setdefault(time1, []).append(faltas[0])
    faltas_por_time.setdefault(time2, []).append(faltas[1])
total_faltas = sum(sum(faltas) for faltas in faltas_por_time.values())
mais_faltas = max(faltas_por_time, key=lambda time: sum(faltas_por_time[time]))
menos_faltas = min(faltas_por_time, key=lambda time: sum(faltas_por_time[time]))
print(
    f'Total fouls in the championship: {total_faltas}\n'
    f'Team with the most fouls: {mais_faltas}\n'
    f'Team with the fewest fouls: {menos_faltas}'
)
```
## Dictionaries
### 5 - Write a program that counts the number of vowels in a string and stores that count in a dictionary, where the key is the vowel in question.
```
string = 'aaaasdfpoiuuqer'
vogais = {'a':0,'e':0,'i':0,'o':0,'u':0}
for letra in string:
    if letra.lower() in vogais:
        vogais[letra.lower()] += 1
print(vogais)
```
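The same count can be built in one step with a dict comprehension over the fixed vowel set (same `string` as above); `str.count` makes one pass per vowel, which is fine for short strings:

```python
string = 'aaaasdfpoiuuqer'
# one count per vowel, case-insensitive
vogais = {v: string.lower().count(v) for v in 'aeiou'}
print(vogais)  # {'a': 4, 'e': 1, 'i': 1, 'o': 1, 'u': 2}
```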
### 6 - Write a program that reads two grades for several students and stores those grades in a dictionary, where the key is the student's name. Data entry must stop when an empty string is read as the name. Write a function that returns a student's average, given their name.
```
alunos = {}
while True:
    nome_aluno = input('Student name: ')
    if nome_aluno == '':
        break
    nota1 = float(input('Enter the first grade: '))
    nota2 = float(input('Enter the second grade: '))
    alunos[nome_aluno] = [nota1, nota2]

def media(nome):
    notas = alunos[nome]
    return sum(notas) / len(notas)

print(alunos)
```
### 7 - A go-kart track allows 10 laps for each of 6 drivers. Write a program that reads all lap times in seconds and stores them in a dictionary, where the key is the driver's name. At the end, report who set the best lap of the race and on which lap, plus the final standings in order (1st is the champion). The champion is the driver with the lowest average lap time.
```
corredores = {
'joão': {'1': 54, '2': 76,'3': 43,'4': 107,'5': 93,'6': 75,'7': 70,'8': 65,'9': 90,'10': 89},
'pedro': {'1': 46, '2': 43,'3': 63,'4': 100,'5': 43,'6': 58,'7': 90,'8': 40,'9': 70,'10': 95},
'pablo': {'1': 94, '2': 85,'3': 55,'4': 86,'5': 70,'6': 79,'7': 63,'8': 47,'9': 84,'10': 64},
'maria': {'1': 44, '2': 43,'3': 63,'4': 67,'5': 43,'6': 55,'7': 90,'8': 42,'9': 70,'10': 98},
'marcio': {'1': 80, '2': 93,'3': 54,'4': 63,'5': 86,'6': 59,'7': 70,'8': 81,'9': 69,'10': 64},
'julia': {'1': 50, '2': 87,'3': 70,'4': 59,'5': 43,'6': 57,'7': 72,'8': 100,'9': 87,'10': 94}
}
classificacao_final = {}
melhor_volta_geral = {'1': float("inf")}
for corredor, voltas in corredores.items():
menor_tempo = min(voltas.values())
melhor_volta_corredor = {volta: tempo for volta, tempo in voltas.items() if tempo == menor_tempo}
if next(iter(melhor_volta_corredor.values())) < next(iter(melhor_volta_geral.values())):
melhor_volta_geral = melhor_volta_corredor
media_tempos = sum(voltas.values())/len(voltas.values())
classificacao_final[corredor] = media_tempos
print(f'Best lap: {melhor_volta_geral}')
print('Final standings:')
for corredor, media in sorted(classificacao_final.items(), key=lambda classificacao: classificacao[1]):
    print(corredor + ' ' + str(media))
```
### 8 - Write a program to store a phone book in a dictionary.
Each person can have one or more phone numbers, and the dictionary key is the
person's name. Your program must have the following functions:
- incluirNovoNome – adds a new name to the phone book, with one or more
phone numbers. It must receive the name and the phone numbers as arguments.
- incluirTelefone – adds a phone number to a name already in the phone book.
If the name does not exist, you must ask whether the user wants to add it.
If the answer is yes, use the previous function to add the new name.
- excluirTelefone – removes a phone number from a person already in the phone
book. If the person has only one number, the person must be removed from the
phone book.
- excluirNome – removes a person from the phone book.
- consultarTelefone – returns a person's phone numbers.
```
agenda = {}
def incluirNovoNome(nome, telefones):
global agenda
agenda[nome] = telefones
def incluirTelefone(nome, telefone):
global agenda
if nome in agenda.keys():
agenda[nome].append(telefone)
else:
        if input("That person is not in the phone book. Add them (y/n)? ") == "y":
telefones = [telefone]
incluirNovoNome(nome, telefones)
def excluirTelefone(nome, telefone):
    global agenda
    if nome in agenda.keys():
        if telefone in agenda[nome]:
            agenda[nome].remove(telefone)
        if len(agenda[nome]) == 0:
            del agenda[nome]
def excluirNome(nome):
    global agenda
    if nome in agenda.keys():
        del agenda[nome]
def consultarTelefone(nome):
    global agenda
    if nome in agenda.keys():
        print(agenda[nome])
opcao = -1
while opcao != 0:
    opcao = int(input("\nChoose an option: \n1 - Add New Name\n2 - Add Phone\n3 - Remove Phone\n4 - Remove Name\n5 - Look Up Phones\n0 - Quit\n"))
    # ADD NEW NAME
    if opcao == 1:
        nome = input("Enter the person's name: ")
        telefones = [int(input("Enter one of the person's phone numbers: "))]
        telefone = -1
        while telefone != 0:
            telefone = int(input("Enter another phone number (or 0 to stop): "))
            if telefone != 0:
                telefones.append(telefone)
        incluirNovoNome(nome, telefones)
    # ADD PHONE
    elif opcao == 2:
        nome = input("Enter the person's name: ")
        telefone = int(input("Enter the person's new phone number: "))
        incluirTelefone(nome, telefone)
    # REMOVE PHONE
    elif opcao == 3:
        nome = input("Enter the person's name: ")
        telefone = int(input("Enter the phone number to remove: "))
        excluirTelefone(nome, telefone)
    # REMOVE NAME
    elif opcao == 4:
        excluirNome(input("Enter the person's name: "))
    # LOOK UP PHONES
    elif opcao == 5:
        consultarTelefone(input("Enter the person's name: "))
```
### 9 - Write a program that reads a text file containing a list of IP addresses and generates another file containing a report of the valid and invalid IP addresses.
```
import socket
ipsValidos = []
ipsInvalidos = []
# open for reading only; "w+" would truncate the input file
with open("ips.txt") as ips:
    for linha in ips:
        ip = linha.strip()  # inet_aton rejects trailing newlines
        try:
            socket.inet_aton(ip)
            # valid
            ipsValidos.append(ip)
        except socket.error:
            # invalid
            ipsInvalidos.append(ip)
with open("ipsvalidados.txt", "w") as saida:
    saida.write("[Valid addresses:]\n")
    for ip in ipsValidos:
        saida.write(ip + "\n")
    saida.write("[Invalid addresses:]\n")
    for ip in ipsInvalidos:
        saida.write(ip + "\n")
```
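Note that `socket.inet_aton` also accepts legacy shorthand such as `'127.1'`. If the report should only treat full dotted-quad addresses as valid, the standard-library `ipaddress` module is a stricter alternative — a sketch of just the validation step (the helper name `ip_valido` is illustrative):

```python
import ipaddress

def ip_valido(ip):
    # IPv4Address requires exactly four dot-separated octets in range 0-255
    try:
        ipaddress.IPv4Address(ip.strip())
        return True
    except ValueError:
        return False

print(ip_valido('192.168.0.1'))  # True
print(ip_valido('127.1'))        # False: shorthand rejected
print(ip_valido('999.0.0.1'))    # False: octet out of range
```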
### 10 - ACME Inc., a company with 500 employees, is having disk-space problems
on its file server. To try to solve this problem, the network administrator
needs to know how much space each user occupies, and
identify the users occupying the most space...
```
def paraMb(tamanho):
return float(tamanho / (1024*1024))
def usoMemoria(tamanho, total):
return float(tamanho / total) * 100
dic = {}
total = 0
usuarios = open("usuarios.txt")
for usuario in usuarios:
usuario = usuario.replace("\n", "")
usuario = usuario.split(" ")
nome = usuario[0]
memoria = usuario[1]
dic[nome] = int(memoria)
total += int(memoria)
usuarios.close()
media = total / len(dic.keys())
relatorio = open("relatório.txt", "w+")
relatorio.write("ACME Inc. - Disk space usage by user.\n")
relatorio.write("--------------------------------------------------------------\n")
relatorio.write("No.\tUser    \tSpace used\t% of use\n")
nr = 1
for usuario in dic:
relatorio.write(str(nr) + "\t" + usuario + "\t" + "{0:.2f}".format(paraMb(dic[usuario])) + " MB\t\t" + "{0:.2f}".format(usoMemoria(dic[usuario], total)) + "%\n")
nr += 1
relatorio.write("Total space used: " + "{0:.2f}".format(paraMb(total)) + " MB\n")
relatorio.write("Average space used: " + "{0:.2f}".format(paraMb(media)) + " MB")
relatorio.close()
```
```
--- Day 7: Internet Protocol Version 7 ---
While snooping around the local network of EBHQ, you compile a list of IP addresses (they're IPv7, of course; IPv6 is much too limited). You'd like to figure out which IPs support TLS (transport-layer snooping).
An IP supports TLS if it has an Autonomous Bridge Bypass Annotation, or ABBA. An ABBA is any four-character sequence which consists of a pair of two different characters followed by the reverse of that pair, such as xyyx or abba. However, the IP also must not have an ABBA within any hypernet sequences, which are contained by square brackets.
For example:
abba[mnop]qrst supports TLS (abba outside square brackets).
abcd[bddb]xyyx does not support TLS (bddb is within square brackets, even though xyyx is outside square brackets).
aaaa[qwer]tyui does not support TLS (aaaa is invalid; the interior characters must be different).
ioxxoj[asdfgh]zxcvbn supports TLS (oxxo is outside square brackets, even though it's within a larger string).
How many IPs in your puzzle input support TLS?
```
```
import re
HYPERNET = re.compile(r"\[([a-z]*)\]")
def check_abba(ip):
for a,b,c,d in zip(ip[:-3], ip[1:-2], ip[2:-1], ip[3:]):
if a==d and b==c and a!=b:
return True
return False
def check_tls(ip):
ip = ip.strip()
for hypernet in HYPERNET.findall(ip):
if check_abba(hypernet):
return False
for segment in HYPERNET.sub("#", ip).split('#'):
if check_abba(segment):
return True
return False
assert check_tls('abba[mnop]qrst[sdfdf]kjhjk')
assert not check_tls('abcd[bddb]xyyx')
assert not check_tls('aaaa[qwer]tyui')
assert check_tls('ioxxoj[asdfgh]zxcvbn')
with open('inputs/day7.txt', 'rt') as fd:
print(sum(1 for ip in filter(check_tls, fd)))
```
```
--- Part Two ---
You would also like to know which IPs support SSL (super-secret listening).
An IP supports SSL if it has an Area-Broadcast Accessor, or ABA, anywhere in the supernet sequences (outside any square bracketed sections), and a corresponding Byte Allocation Block, or BAB, anywhere in the hypernet sequences. An ABA is any three-character sequence which consists of the same character twice with a different character between them, such as xyx or aba. A corresponding BAB is the same characters but in reversed positions: yxy and bab, respectively.
For example:
aba[bab]xyz supports SSL (aba outside square brackets with corresponding bab within square brackets).
xyx[xyx]xyx does not support SSL (xyx, but no corresponding yxy).
aaa[kek]eke supports SSL (eke in supernet with corresponding kek in hypernet; the aaa sequence is not related, because the interior character must be different).
zazbz[bzb]cdb supports SSL (zaz has no corresponding aza, but zbz has a corresponding bzb, even though zaz and zbz overlap).
How many IPs in your puzzle input support SSL?
```
```
def find_aba(ip):
    for a, b, c in zip(ip[:-2], ip[1:-1], ip[2:]):
        if a == c and a != b:  # interior character must differ
            yield a + b + a, b + a + b
def check_ssl(ip):
ip = ip.strip()
hypernets = HYPERNET.findall(ip)
for segment in HYPERNET.sub("#", ip).split('#'):
for aba, bab in find_aba(segment):
            if any(bab in h for h in hypernets):
return True
return False
assert check_ssl("aba[bab]xyz")
assert not check_ssl("xyx[xyx]xyx")
assert check_ssl("aaa[kek]eke")
assert check_ssl("zazbz[bzb]cdb")
with open('inputs/day7.txt', 'rt') as fd:
print(sum(1 for ip in filter(check_ssl, fd)))
```
# Problem Set One
1) LOdd1Three0 : Set of strings over {0,1} with an odd # of 1s OR exactly three 0s.
* Hint on how to arrive at the language:
  - develop NFAs for the two cases and perform their union. Obtain a DFA.
  - develop REs for the two cases and perform the union.
* Testing the creations:
  - Come up with the language for an even # of 1s and, separately, for "other than three 0s".
  - Do two intersections.
  - Is the language empty?
2) Language of strings over {0,1} with exactly two occurrences of 0101 in it.
* Come up with it directly (take overlaps into account, i.e. 010101 has two occurrences in it).
* Come up with it in another way.
Notes:
* Most of the trouble students will have in this course is interpreting English (technical English).
* So again, read the writeup at the beginning of Module6 (should be ready soon today) and work on using the tool.
```
import sys
sys.path[0:0] = ['../..','../../3rdparty'] # Put these at the head of the search path
from jove.Module5_RE import re2nfa
from jove.Module4_NFA import min_dfa_brz, nfa2dfa
from jove.Module3_DFA import min_dfa, langeq_dfa, iso_dfa, pruneUnreach
from jove.Module3_DFA import union_dfa, intersect_dfa, totalize_dfa
from jove.DotBashers import dotObj_dfa
```
__Solutions__
```
RE_Odd1s = "0* 1 0* (1 0* 1 0*)*"
NFA_Odd1s = re2nfa(RE_Odd1s)
DO_Odd1s = dotObj_dfa(min_dfa(nfa2dfa(NFA_Odd1s)))
DO_Odd1s
RE_Ex3z = "1* 0 1* 0 1* 0 1* "
NFA_Ex3z = re2nfa(RE_Ex3z)
DO_Ex3z = dotObj_dfa(min_dfa(nfa2dfa(NFA_Ex3z)))
DO_Ex3z
RE_O13z = "0* 1 0* (1 0* 1 0*)* + 1* 0 1* 0 1* 0 1* "
NFA_O13z = re2nfa(RE_O13z)
MD_O13z = min_dfa(nfa2dfa(NFA_O13z))
DO_O13z = dotObj_dfa(MD_O13z)
DO_O13z
RE_O13z = "0* 1 0* (1 0* 1 0*)* + 1* 0 1* 0 1* 0 1* "
NFA_O13z = re2nfa(RE_O13z)
NMD_O13z = nfa2dfa(NFA_O13z)
MD_O13zB = min_dfa_brz(NMD_O13z)
DO_O13zB = dotObj_dfa(MD_O13zB)
DO_O13zB
iso_dfa(MD_O13z,MD_O13zB)
langeq_dfa(NMD_O13z,MD_O13z)
iso_dfa(NMD_O13z, MD_O13z)
dotObj_dfa(min_dfa(nfa2dfa(re2nfa("''"))))
D1 = min_dfa(nfa2dfa(re2nfa("aa")))
dotObj_dfa(D1)
D2 = min_dfa(nfa2dfa(re2nfa("bb")))
dotObj_dfa(D2)
D1
D2
D1or2 = min_dfa(union_dfa(D1,D2))
D1or2p = pruneUnreach(D1or2)
dotObj_dfa(D1or2)
dotObj_dfa(D1or2p)
D1and2 = min_dfa(intersect_dfa(D1,D2))
D1and2p = pruneUnreach(D1and2)
dotObj_dfa(D1and2)
dotObj_dfa(D1and2p)
d1=nfa2dfa(re2nfa("abcde"))
d2=nfa2dfa(re2nfa("abced"))
langeq_dfa(d1,d2,True)
dotObj_dfa(d1)
dotObj_dfa(d2)
d1a=nfa2dfa(re2nfa("aa*+bc"))
d2a=nfa2dfa(re2nfa("a(a*+bc)"))
langeq_dfa(d1a,d2a,True)
dotObj_dfa(d1a)
dotObj_dfa(d2a)
d1b=nfa2dfa(re2nfa("aaa*+aa*bc+bcaa*+bcbc"))
d2b=nfa2dfa(re2nfa("(aa*+bc)(aa*+bc)"))
langeq_dfa(d1b,d2b,True)
dotObj_dfa(d1b)
dotObj_dfa(d2b)
iso_dfa(d1b,d2b)
d1c=min_dfa(d1b)
d2c=min_dfa(d2b)
iso_dfa(d1c,d2c)
dotObj_dfa(d1c)
dotObj_dfa(d2c)
d1d=nfa2dfa(re2nfa("aaa*+aa*bc+bcaaa*+bcbc"))
d2d=nfa2dfa(re2nfa("(aa*+bc)(aa*+bc)"))
langeq_dfa(d1d,d2d,True)
d1d=nfa2dfa(re2nfa("a a a*+a a* b c+ b c a a a*+b c b c"))
d2d=nfa2dfa(re2nfa("(a a*+b c)(a a*+b c)"))
langeq_dfa(d1d,d2d,True)
dotObj_dfa(d1d)
dotObj_dfa(d2d)
d1d=nfa2dfa(re2nfa("james*+bond*"))
dotObj_dfa(d1d)
d1d=nfa2dfa(re2nfa("ja mes*+bo nd*"))
dotObj_dfa(d1d)
```
```
!pip install sqlalchemy
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, MetaData
import pandas as pd
engine = create_engine("sqlite:///./Website/static/sqlite/twitch.sqlite",echo = False)
# metadata = MetaData(bind=engine)
Base = automap_base()
connection = engine.connect()
Base.prepare(engine, reflect=True)
# Base.metadata.reflect(engine)
print(Base.classes.keys())
top_games = Base.classes.top_games
top_games.game_name
session = Session(engine)
columns = session.query(top_games).first()
columns.__dict__
for row in session.query(top_games.game_name).limit(100).all():
    print(row)
data_list = []
for row in session.query(top_games).all():
data_list.append(row.__dict__)
data_list = []
for row in session.query(top_games).all():
#print(row.__dict__)
data_list.append(row.__dict__["game_name"])
data_list
game_data = "./Website/static/csv/Twitch_game_data.csv"
game_df = pd.read_csv(game_data)
game_df.head()
clean_game = game_df.loc[:,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_16 = clean_game.loc[clean_game["Year"] == 2016,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_17 = clean_game.loc[clean_game["Year"] == 2017,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_18 = clean_game.loc[clean_game["Year"] == 2018,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_19 = clean_game.loc[clean_game["Year"] == 2019,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_20 = clean_game.loc[clean_game["Year"] == 2020,['Game',"Month","Year","Hours_watched","Hours_Streamed","Avg_viewers","Avg_channels"]]
clean_game_16.head()
clean_game_17.head()
clean_game_18.head()
clean_game_19.head()
clean_game_20.head()
clean_game_16.groupby("Month").sum().div(1000000).round(1)
clean_game_17.groupby("Month").sum().div(1000000).round(1)
clean_game_18.groupby("Month").sum().div(1000000).round(1)
clean_game_19.groupby("Month").sum().div(1000000).round(1)
clean_game_20.groupby("Month").sum().div(1000000).round(1)
clean_game_20.groupby("Month").sum().div(1000).round(1)
video_data = "./Website/static/csv/topvideos.csv"
video_df = pd.read_csv(video_data)
video_stats = video_df.loc[:, ["vid_length","video_views","total_view_time-calc-hours","channel_game","channel_lan","channel_mature","channel_partner"]]
list(range(16))
video_stats.head()
video_grouping = pd.DataFrame(video_stats.groupby("channel_game").sum())
video_grouping = video_grouping.reset_index()
video_grouping
video_grouping.loc[:, "channel_game"].tolist()
video_grouping.loc[:, "video_views"].tolist()
video_stats.loc[:, "channel_game"].tolist()
video_grouping.loc[:, "total_view_time-calc-hours"].round().astype(int).tolist()
video_grouping.loc[:, "vid_length"].tolist()
video_stats.loc[:, "vid_length"].tolist()
video_view_stats = video_stats.loc[:, ["video_views","channel_game"]]
video_view_stats = video_view_stats.sort_values("video_views")
video_view_stats
video_view_stats.loc[:, "video_views"].tolist()
video_view_stats.loc[:, "channel_game"].tolist()
video_stats.loc[:, "total_view_time-calc-hours"].round().astype(int).tolist()
video_stats["channel_mature"].value_counts()
video_stats["channel_partner"].value_counts()
video_stats["channel_lan"].value_counts()
```
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides an efficient way to build large neural networks.
```
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
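As a quick shape check, the flattening step can be sketched with NumPy (already imported in this notebook); `torch.Tensor.view` does the same reshaping on tensors, and `-1` tells the library to infer the remaining dimension:

```python
import numpy as np

batch = np.random.rand(64, 1, 28, 28)     # stand-in for a batch of MNIST images
flat = batch.reshape(batch.shape[0], -1)  # keep the batch dim, merge the rest
print(flat.shape)  # (64, 784)
```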
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Your solution
def sigmoid(x):
return 1 / (1 + torch.exp(-x))
batch_size = images.shape[0]
flattened_images = images.view(batch_size, -1)
input_width = flattened_images.shape[-1]
hidden_width = 256
output_width = 10
w1 = torch.randn(input_width, hidden_width)
w2 = torch.randn(hidden_width, output_width)
b1 = torch.randn(hidden_width)
b2 = torch.randn(output_width)
hidden = sigmoid(flattened_images.mm(w1) + b1)
out = hidden.mm(w2) + b2
out.shape # output of your network, should have shape (64,10)
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum up to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
def softmax(x):
exponents = torch.exp(x)
return exponents / exponents.sum(dim=1).view(-1, 1)
# Here, out should be the output of the network in the previous exercise, with shape (64, 10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
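One practical refinement, not required by the exercise: for very large logits, `torch.exp` can overflow to `inf`. Since softmax is unchanged when you shift each row by a constant, a common fix is to subtract the per-row maximum before exponentiating. A sketch:

```python
import torch

def stable_softmax(x):
    # Softmax is invariant to shifting each row by a constant,
    # so subtract the per-row max to keep torch.exp from overflowing.
    shifted = x - x.max(dim=1, keepdim=True).values
    exps = torch.exp(shifted)
    return exps / exps.sum(dim=1, keepdim=True)

logits = torch.tensor([[1000.0, 1001.0], [0.0, 0.0]])
print(stable_softmax(logits))  # finite rows that each sum to 1
```

With the naive version, `torch.exp(1000.0)` would already be `inf` and the probabilities would come out as `nan`.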
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
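A quick way to see what `dim` selects: summing a `(2, 3)` tensor along `dim=0` collapses the rows (one sum per column), while `dim=1` collapses the columns (one sum per row):

```python
import torch

t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
print(t.sum(dim=0))  # per-column sums: tensor([5., 7., 9.])
print(t.sum(dim=1))  # per-row sums: tensor([ 6., 15.])
```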
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the sigmoid activation function, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: tanh (hyperbolic tangent) and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
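You can compare these activations directly on a small tensor and see their different output ranges:

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(torch.sigmoid(x))  # values in (0, 1)
print(torch.tanh(x))     # values in (-1, 1)
print(torch.relu(x))     # negatives clamped to 0
```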
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
It's good practice to name your layers by their type, for instance 'fc' for a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
```
## Your solution here
class MyNetwork(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.softmax(self.fc3(x), dim=1)
return x
model = MyNetwork()
```
### Initializing weights and biases
The weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. They are tensors attached to the layer you defined; you can get them with `model.fc1.weight`, for instance.
```
print(model.fc1.weight)
print(model.fc1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
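In more recent PyTorch versions the same customization is usually written with the in-place helpers in `torch.nn.init` (note the trailing underscore), wrapped in `torch.no_grad()` so autograd doesn't track the modification. A sketch:

```python
import torch
from torch import nn

layer = nn.Linear(784, 128)

with torch.no_grad():
    nn.init.zeros_(layer.bias)               # biases to all zeros
    nn.init.normal_(layer.weight, std=0.01)  # weights ~ N(0, 0.01)

print(layer.bias.abs().max())  # tensor(0.)
```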
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() is not available on modern DataLoader iterators
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.
The operations are available by passing in the appropriate index. For example, if you want to get the first linear operation and look at its weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer index or by name:
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
# Data Wrangling: Linio Colombia data from the Computing section
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly
import nltk
from nltk import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
from wordcloud import WordCloud
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from pylab import rcParams
import warnings
#Database drivers
#pip install pymysql
#pip install sqlalchemy
from pymongo import MongoClient
print("Libraries loaded")
```
**Connecting to the database**
We set up and verify the connection to the database, which was previously created under the name "liniodb", so that later on we can manage and use the information directly from the DB.
```
#Database connection
cadena_conexion= "mongodb+srv://rootlinio:1234@cluster0.2bmqp.mongodb.net/liniodb?retryWrites=true&w=majority"
client=MongoClient(cadena_conexion)
#Select the database
db = client['liniodb']
print("Connection established")
```
We have a .csv file containing the review data for products in the Computing section of the Linio Colombia site. It associates some basic product attributes with the reviews left by users, so the data has the following attributes:
1. **Titulo** (product title): string describing the name of the product
2. **Precio** (price): string describing the commercial price of the product
3. **Reseñas** (reviews): plain-text comment describing the user's opinion of the product
4. **Estrellas** (stars): number of stars the product received
5. **Autor** (author): string with the name of the user who wrote the review
```
# Load the web-scraped dataset
data = pd.read_csv('Data/linio.csv')
data.index.names = ['IdReseña']
# First 10 rows of the DataFrame
data.head(10)
data.shape
```
**1. Dataset Structuring**
Looking at the columns of the imported dataset, note that the ```Autor``` column mixes two pieces of data: the name of the person who wrote the comment and the date it was written, separated by the expression ``` el ```.
```
data[['Autor','Fecha']] = data['Autor'].str.split(' el ',expand=True)
data.head()
```
As we can see, the columns needed for our study are all present, so there is no need to drop any of them.
**2. Dataset Cleaning**
As shown above, some fields are null, so we explore the null values in order to handle them and clean the data for a high-quality analysis.
First, let's count how many null or missing values exist in the dataset.
```
data.isnull().sum()
```
We can see that two columns in our dataset contain null or missing values: ```Reseñas``` and ```Estrellas```.
First we filter the data, which lets us view the null rows in a new DataFrame containing only those rows:
```
data[pd.isnull(data).any(axis=1)]
```
In this case we apply the *deletion treatment* to the ```Reseñas``` column: since we will analyze the text of the comments, observations missing this essential variable are of no interest to us, and the same applies to the ```Estrellas``` column. We do this using the ```.notna()``` method on the series of interest:
```
data = data[data['Reseñas'].notna()]
data
#verify that the null values were removed
data.isnull().sum()
```
After removing the null values, we check for duplicated rows, which add no information and can bias the data.
We can filter the DataFrame to look for duplicated rows:
```
data[data.duplicated()]
```
The filter reports three repeated entries, but on inspection they are clearly distinct items, so we do not remove them; doing so would discard information that may be useful for the study.
**3. Dataset Enriching**
To enrich the main dataset, we convert the ```Fecha``` column to a ```datetime``` format, since at this point it is stored as a ```string```.
This is done as follows:
```
data['Fecha'] = pd.to_datetime(data['Fecha'])
data['Fecha']
```
Next, we extract the months in more detail, since this information may be useful for a finer-grained analysis.
To do this, we run:
```
df = data.copy()
#datosL['Mes'] = datosL.apply(lambda row: '0'*(2-len(str(row['Fecha'].month))) + str(row['Fecha'].month) + '-' + row['Fecha'].month_name(), axis=1)
#datosL.head()
```
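A sketch of what the commented-out line above intends, written with pandas' `dt` accessor; note that the month names produced by `%B` depend on the system locale (English shown here):

```python
import pandas as pd

fechas = pd.Series(pd.to_datetime(['2021-01-15', '2021-11-03']))
# Zero-padded month number plus month name, e.g. '01-January'
meses = fechas.dt.strftime('%m-%B')
print(meses.tolist())
```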
# DB
**Database update: REVIEWS collection**
After cleaning the data, we save it to the MongoDB database named "liniodb"; for that we adjust the DataFrame so it can be stored in the DB correctly.
We drop some columns just for storage purposes, so this is **run only once**.
We also create and save a JSON copy of this information in case we need it later.
The code is commented out because it only needs to run once, after the first cleaning pass.
```
#REVIEWS collection
#Prepare the DataFrame for the database
# del(df['Precio'])
# del(df['Titulo'])
# df.index.names = ['IdReseña']
# df.reset_index(inplace=True)
# Select the database collection
#dr = db['reseñas']
#data_dict = df.to_dict("records")
# Insert collection
#dr.insert_many(data_dict)
```
**Database update: PRODUCTS collection**
After cleaning the data, we save it to the MongoDB database named "liniodb"; for that we adjust the DataFrame so it can be stored in the DB correctly.
We drop some columns just for storage purposes, so this is **run only once**.
We also create and save a JSON copy of this information in case we need it later.
The code is commented out because it only needs to run once, after the first cleaning pass.
```
#PRODUCTS collection
#Prepare the DataFrame for the database
#Drop unneeded columns
#del(df['Reseñas'])
#del(df['Estrellas'])
#del(df['Autor'])
#del(df['Fecha'])
# Drop repeated product ids, keeping a single entry per product
#df = df.drop_duplicates(df.columns[~df.columns.isin(['IdProducto'])], keep='first')
# Select the database collection
# dp = db['productos']
# data_dict = df.to_dict("records")
# Insert collection
# dp.insert_many(data_dict)
```
We export our clean dataset back to .csv format under the name
```linio_EDA.csv```:
```
data.to_csv('Data/linio_EDA.csv', index=False)
```
**Interacting with the database**
First we build the queries to fetch the information from the two database collections we will work with:
```
dp = db['productos'].find()
dfp=pd.DataFrame(list(dp))
del dfp['_id']
dr = db['reseñas'].find()
dfr=pd.DataFrame(list(dr))
del dfr['_id']
df = pd.read_csv('Data/linio_EDA.csv')
df.index.names = ['IdReseña']
```
## Exploratory Data Analysis (EDA)
```
dfr.head()
dfr.shape
```
Let's look at the number of reviews per product, to determine how many reviews we will analyze.
To do this, we run:
```
dfr['IdProducto'].value_counts()
df['Titulo'].value_counts()
```
The next step is to count the reviews per star rating.
To do this, we run:
```
cross_tab_stars = pd.crosstab(dfr['IdProducto'], dfr['Estrellas']).sort_index()
columns_stars = list(cross_tab_stars.columns)
cross_tab_stars
```
We can see how reviews with the same star rating are counted.
Next, we look at review length in words and how frequently each length occurs.
To do this, we run:
```
words_per_review = dfr.Reseñas.apply(lambda x: len(x.split(" ")))
words_per_review.hist(bins = 100)
plt.xlabel('Review length (words)')
plt.ylabel('Frequency')
plt.show()
print('Average words per review:', words_per_review.mean())
print('Skewness:', words_per_review.skew())
```
We can see that users on Linio Colombia are not very verbose when rating a product: the average review length is roughly 11 words.
Now we can look at the distribution of the star ratings:
```
# Count of reviews per star rating
percent_val = dfr['Estrellas'].value_counts()
percent_val
percent_val.plot.bar()
plt.show()
percent_val = 100 * dfr['Estrellas'].value_counts()/len(dfr)
percent_val
```
We can see a large imbalance in the star ratings: there are far more 5-star reviews than reviews rated 4.0, 3.0, 2.0 or 1.0.
# Text visualization with word clouds
Now we want a way to appreciate the distribution of the reviews, and a good way to do that is with a word cloud.
A word cloud can be generated as follows:
```
import nltk
from nltk import word_tokenize
from collections import Counter
from wordcloud import WordCloud
word_cloud_text = ' '.join(dfr['Reseñas'])  # join with spaces so words from adjacent reviews don't merge
wordcloud = WordCloud(max_font_size=100,
max_words=10,
background_color="white",
scale = 10,
width=800,
height=400
).generate(word_cloud_text)
plt.figure()
plt.imshow(wordcloud,
interpolation="bilinear")
plt.axis("off")
plt.show()
```
We can see that the resulting word cloud contains unwanted words that do not contribute to the study.
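One way to deal with those unwanted words is to filter stopwords out of the text before generating the cloud (the `WordCloud` constructor also accepts a `stopwords=` set). A minimal sketch; the stopword list here is a small illustrative subset, not the full NLTK Spanish list:

```python
# Small illustrative subset of Spanish stopwords (not the full NLTK list)
spanish_stopwords = {"el", "la", "de", "que", "y", "es", "muy", "un", "una"}

text = "el producto es muy bueno y la entrega fue muy rapida"
filtered = " ".join(w for w in text.split() if w not in spanish_stopwords)
print(filtered)  # 'producto bueno entrega fue rapida'
```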
# Standardizing the stars for sentiment analysis
For the sentiment analysis we ignore 3-star ratings, which represent neutral reviews, and classify the rest in a binary way as follows:
1. Reviews rated 4.0 or 5.0 stars take the value 1
2. Reviews rated 1.0 or 2.0 stars take the value 0
```
dfr["Estrellas"] = dfr["Estrellas"].astype(float).astype(int)
dfr['Sentimiento'] = np.where(dfr.Estrellas > 3,1,0)
##remove the neutral ratings
dfr = dfr[dfr.Estrellas != 3]
dfr['Sentimiento'].value_counts()
dfr.Sentimiento.value_counts().plot.bar()
plt.show()
```
There is a clear class imbalance.
```
dfr.head()
```
# Removing stopwords
We remove the words we consider irrelevant for the study.
```
#Spanish stopwords
from nltk.corpus import stopwords
noise_words = []
eng_stop_words = stopwords.words('spanish')  # note: despite the name, these are the Spanish stopwords
```
To remove these words we use the following function:
```
stop_words = set(eng_stop_words)
def stopwords_removal(stop_words, sentence):
return [word for word in nltk.word_tokenize(sentence) if word not in stop_words]
#Reviews without stopwords
dfr['ReseñaNew'] = dfr['Reseñas'].apply(lambda row: stopwords_removal(stop_words, row))
dfr[['Reseñas','ReseñaNew']]
```
# Creating a Machine Learning Model
```
dfr
```
The learning technique chosen for this project is **logistic regression**, since it adapts well to our type of data and is relatively easy to understand and apply.
With our data clean and the analysis complete, we start building the Machine Learning model, focusing on three important columns of our dataset:
1. **```Reseñas```**
2. **```Estrellas```**
3. **```Sentimiento```**
```
dfr[['Reseñas','Estrellas','Sentimiento']].head(5)
```
# Class Balancing
```
data = dfr[['Reseñas','Estrellas']]
target = dfr[['Sentimiento']]
from imblearn.over_sampling import RandomOverSampler, SMOTE
#Initialize the oversampling methods
#ROS
ros = RandomOverSampler()#random_state = 0
#SMOTE
smote = SMOTE()
#ROS duplicates samples from the under-represented class
dataRos, targetRos = ros.fit_resample(data,target)
BuenoRos = targetRos.sum()
MaloRos = targetRos.shape[0]- BuenoRos
print('Positive: ', BuenoRos, ' , Negative: ', MaloRos )
dataRos
```
# Building the Model
```
from sklearn.feature_extraction.text import CountVectorizer
vec2 = CountVectorizer()
X2 = vec2.fit_transform(dataRos['Reseñas'])
df2 = pd.DataFrame(X2.toarray(), columns = vec2.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
df2.head()
bow_counts = CountVectorizer(tokenizer= word_tokenize,
stop_words=noise_words,
ngram_range=(1,1))
bow_data = bow_counts.fit_transform(dataRos['Reseñas'])
bow_data
X_train_bow, X_test_bow, y_train_bow, y_test_bow = train_test_split(bow_data, # features
targetRos['Sentimiento'], # target variable
test_size = 0.2, # 20% test size
random_state = 0) # fixed random state for reproducibility
### Training the model
lr_model_all = LogisticRegression() # Logistic regression
lr_model_all.fit(X_train_bow, y_train_bow) # Fitting a logistic regression model
## Predicting the output
test_pred_lr_all = lr_model_all.predict(X_test_bow) # Class prediction
## Calculate key performance metrics
print("F1 score: ", f1_score(y_test_bow, test_pred_lr_all))
bow_counts = CountVectorizer(tokenizer= word_tokenize,
ngram_range=(1,4))
bow_data = bow_counts.fit_transform(dataRos['Reseñas'])
bow_data
X_train_bow, X_test_bow, y_train_bow, y_test_bow = train_test_split(bow_data,
targetRos['Sentimiento'],
test_size = 0.2,
random_state = 0)
lr_model_all_new = LogisticRegression(max_iter = 200)
lr_model_all_new.fit(X_train_bow, y_train_bow)
# Predicting the results
test_pred_lr_all = lr_model_all_new.predict(X_test_bow)
print("F1 score: ", f1_score(y_test_bow,test_pred_lr_all))
lr_weights = pd.DataFrame(list(zip(bow_counts.get_feature_names_out(), # get all the n-gram feature names
lr_model_all_new.coef_[0])), # the logistic regression coefficients
columns= ['Palabras','Frecuencia']) # column names
lr_weights.sort_values(['Frecuencia'], ascending = False)[:15] # top-15 most important features for positive reviews
lr_weights.sort_values(['Frecuencia'], ascending = False)[-15:] # top-15 most important features for negative reviews
```
# TF-IDF Model
```
from sklearn.feature_extraction.text import TfidfVectorizer
### Creating a Python object of the class TfidfVectorizer
tfidf_counts = TfidfVectorizer(tokenizer= word_tokenize, # type of tokenization
stop_words=noise_words, # list of stopwords
ngram_range=(1,1)) # n-gram range
tfidf_data = tfidf_counts.fit_transform(dataRos['Reseñas'])
tfidf_data
X_train_tfidf, X_test_tfidf, y_train_tfidf, y_test_tfidf = train_test_split(tfidf_data,
targetRos['Sentimiento'],
test_size = 0.2,
random_state = 0)
```
# Regression on TF-IDF Data
```
### Setting up the model class
lr_model_tf_idf = LogisticRegression()
## Training the model
lr_model_tf_idf.fit(X_train_tfidf,y_train_tfidf)
## Predicting the results
test_pred_lr_all = lr_model_tf_idf.predict(X_test_tfidf)
## Evaluating the model
print("F1 score: ",f1_score(y_test_tfidf, test_pred_lr_all))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test_tfidf, test_pred_lr_all))
from sklearn.metrics import classification_report
print(classification_report(y_test_tfidf, test_pred_lr_all))
```
# Digit Classifier using CNNs
```
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```
Loading the dataset using TensorFlow API:
```
mnist = tf.keras.datasets.mnist
(training_data, training_labels), (validation_data, validation_labels) = mnist.load_data()
```
Preprocessing data:
```
training_data = training_data.reshape(-1, 28, 28, 1)
validation_data = validation_data.reshape(-1, 28, 28, 1)
training_datagen = ImageDataGenerator(
rescale = 1 / 255,
rotation_range = 30,
width_shift_range = 0.2,
height_shift_range = 0.2,
zoom_range = 0.1,
shear_range = 0.1,
fill_mode = 'constant'
)
validation_datagen = ImageDataGenerator(
rescale = 1 / 255
)
training_generator = training_datagen.flow(
training_data,
training_labels
)
validation_generator = validation_datagen.flow(
validation_data,
validation_labels,
shuffle = False
)
```
Model:
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3, 3), activation = 'relu', input_shape = (28, 28, 1)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(64, (5, 5), activation = 'relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(64, (5, 5), activation = 'relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(64, (5, 5), activation = 'relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation = 'softmax')
])
model.summary()
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 0.03 * 0.95 ** (epoch / 10)
)
model.compile(loss = 'sparse_categorical_crossentropy',
optimizer = tf.keras.optimizers.SGD(learning_rate = 0.03),
metrics = ['accuracy']
)
history = model.fit(training_generator,
epochs = 100,
validation_data = validation_generator,
callbacks = [lr_schedule]
)
train_acc = history.history['accuracy']
train_loss = history.history['loss']
validation_acc = history.history['val_accuracy']
validation_loss = history.history['val_loss']
num_epochs = len(train_acc)
epochs = range(num_epochs)
plt.plot(epochs, train_acc)
plt.plot(epochs, validation_acc)
plt.title("Training and Validation Accuracy")
plt.axis([0, num_epochs, 0.95, 1])
plt.figure()
plt.plot(epochs, train_loss)
plt.plot(epochs, validation_loss)
plt.title("Training and Validation Loss")
plt.axis([0, num_epochs, 0, 0.15])
model.save("CNN_MNIST_V3.h5")
```
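The `LearningRateScheduler` callback above multiplies the base rate by $0.95^{epoch/10}$; a quick standalone check of the values that schedule produces over training:

```python
# Same formula as the LearningRateScheduler lambda above
def scheduled_lr(epoch):
    return 0.03 * 0.95 ** (epoch / 10)

for epoch in (0, 10, 50, 99):
    print(epoch, round(scheduled_lr(epoch), 5))
```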
Error Analysis
```
predictions = model.predict(validation_generator)
predictions = np.argmax(predictions, axis = 1)
len(predictions)
incorrect = []
for i in range(len(predictions)):
if predictions[i] != validation_labels[i]:
incorrect.append(i)
i = incorrect[0]
print(f"Expected: {validation_labels[i]}, Predicted: {predictions[i]}")
plt.imshow(validation_data[i].reshape(28, 28), cmap = 'gray')
plt.figure()
```
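The same mismatch bookkeeping can be written in vectorized NumPy on toy arrays; with the real run you would pass the model's argmax predictions and `validation_labels`:

```python
import numpy as np

# Toy predictions vs. labels standing in for the real model output
predictions = np.array([7, 2, 1, 0, 4])
labels      = np.array([7, 2, 8, 0, 4])

incorrect = np.flatnonzero(predictions != labels)  # indices of misclassified samples
accuracy = float((predictions == labels).mean())
print(incorrect)  # [2]
print(accuracy)   # 0.8
```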
```
import torch
import pandas as pd
import numpy as np
import sklearn
from collections import Counter
from sklearn.utils import Bunch
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from itertools import combinations
import re
import os
import torch.nn as nn
import matplotlib.pyplot as plt
```
# Data Loading
```
path = r"E:\github\movie_hatespeech_detection\data\fox_news\fox_news.csv"
df = pd.read_csv(path, index_col=0)
df = df.rename(columns={'class': 'label'})
df['label'] = df['label'].replace({2:1})
# DataFrame.append was removed in pandas 2.0; build the extra rows with pd.concat instead
df = pd.concat([df, pd.DataFrame([{'comment': 'I love you', 'label': 0},
{'comment': 'I hate you', 'label': 1}])], ignore_index=True)
df.tail()
path = r'E:\github\movie_hatespeech_detection\data\movies_for_training\all_movies.csv'
movie_data = pd.read_csv(path, index_col=0)
movie_data.head()
print(df.label.value_counts())
df.label.value_counts().plot(kind='pie', subplots=True, autopct='%1.0f%%', title='Hate Speech Distribution')
```
## Data Splitting
```
def split_dataset(df, seed):
df = df.copy()
test = df.loc[1513:1514]
df.drop(df.tail(1).index, inplace=True)
train = df.sample(frac=1, random_state=seed)
return train.comment.values, train.label.values, test.comment, test.label
categories = [0,1]
seed = 11
train, train_targets, test, test_targets = split_dataset(df, seed=seed)
train_size = len(train)
test_size = len(test)
print(train_size)
print(test_size)
def calculate_dataset_class_distribution(targets, categories):
df = pd.DataFrame({'category':targets})
s = df.category.value_counts(normalize=True)
s = s.reindex(categories)
return [s.index[0], s[0]], [s.index[1], s[1]]
train_class_distribution = calculate_dataset_class_distribution(train_targets, categories)
test_class_distribution = calculate_dataset_class_distribution(test_targets, categories)
print(train_class_distribution)
print(test_class_distribution)
train_ds = Bunch(data=train, target=train_targets)
test_ds = Bunch(data=test, target=test_targets)
```
## Building the Model
```
# Getting all the vocabularies and indexing to a unique position
vocab = Counter()
#Indexing words from the training data
for text in train_ds.data:
for word in text.split(' '):
vocab[word.lower()]+=1
#Indexing words from the test data
for text in test_ds.data:
for word in text.split(' '):
vocab[word.lower()]+=1
for text in movie_data.text.values:
for word in text.split(' '):
vocab[word.lower()]+=1
total_words = len(vocab)
def get_word_2_index(vocab):
word2index = {}
for i,word in enumerate(vocab):
word2index[word.lower()] = i
return word2index
word2index = get_word_2_index(vocab)
print(len(word2index))
print(word2index["the"]) # Showing the index of 'the'
print (total_words)
# define the network
class News_20_Net(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(News_20_Net, self).__init__()
self.layer_1 = nn.Linear(input_size,hidden_size, bias=True).cuda()
self.relu = nn.ReLU().cuda()
self.layer_2 = nn.Linear(hidden_size, hidden_size, bias=True).cuda()
self.output_layer = nn.Linear(hidden_size, num_classes, bias=True).cuda()
# accept input and return an output
def forward(self, x):
out = self.layer_1(x)
out = self.relu(out)
out = self.layer_2(out)
out = self.relu(out)
out = self.output_layer(out)
return out
def get_batch(df,i,batch_size):
batches = []
results = []
# Split into different batchs, get the next batch
texts = df.data[i*batch_size:i*batch_size+batch_size]
# get the targets
categories = df.target[i*batch_size:i*batch_size+batch_size]
#print(categories)
for text in texts:
# One slot per word in the vocabulary
layer = np.zeros(total_words,dtype=float)
for word in text.split(' '):
layer[word2index[word.lower()]] += 1
batches.append(layer)
# Map the raw labels to class indices
for category in categories:
#print(category)
index_y = -1
if category == 0:
index_y = 0
elif category == 1:
index_y = 1
elif category == 2:
index_y = 2
results.append(index_y)
# the training and the targets
return np.array(batches),np.array(results)
# Parameters
learning_rate = 0.001
num_epochs = 8
batch_size = 32
display_step = 1 # progress is recorded every display_step steps and printed every display_step*10 steps
# Network Parameters
hidden_size = 100 # 1st layer and 2nd layer number of features
input_size = total_words # Words in vocab
num_classes = len(categories) # Categories: "graphics","space","baseball","guns", "christian"
```
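As an aside, the bag-of-words construction used inside `get_batch` can be sketched in isolation. This is a toy example with made-up texts, not the actual newsgroup data:

```python
import numpy as np
from collections import Counter

# Toy corpus; the real code builds `vocab` from the train/test/movie texts.
texts = ["the cat sat", "the dog sat down"]
vocab = Counter(word for text in texts for word in text.split(' '))
word2index = {word: i for i, word in enumerate(vocab)}

def to_bow(text):
    # One float per vocabulary entry, counting occurrences in `text`.
    layer = np.zeros(len(vocab), dtype=float)
    for word in text.split(' '):
        layer[word2index[word]] += 1
    return layer

print(to_bow("the the cat"))  # 'the' appears twice -> [2. 1. 0. 0. 0.]
```

Each document becomes a count vector with one position per vocabulary word, which is exactly the dense input the network above consumes.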
## Training
```
results = []
news_net = News_20_Net(input_size, hidden_size, num_classes)
# Loss and Optimizer
criterion = nn.CrossEntropyLoss() # combines LogSoftmax and NLLLoss, so the network outputs raw logits
optimizer = torch.optim.Adam(news_net.parameters(), lr=learning_rate)
# Train the Model
for epoch in range(num_epochs):
    # determine the number of mini-batches based on the batch size and the size of the training data
total_batch = int(len(train_ds.data)/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x,batch_y = get_batch(train_ds,i,batch_size)
        articles = torch.FloatTensor(batch_x).to('cuda')
        labels = torch.LongTensor(batch_y).to('cuda')
# Forward + Backward + Optimize
optimizer.zero_grad() # zero the gradient buffer
outputs = news_net(articles)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
        if (i+1) % display_step == 0:
            results.append({'Epoch': epoch+1, 'Step': i+1, 'Loss': loss.item()})
        if (i+1) % (display_step*10) == 0:
            print({'Epoch': epoch+1, 'Step': i+1, 'Loss': loss.item()})
```
## Validation
```
# Test the Model
correct = 0
total = 0
total_test_data = len(test_ds.target)
iterates = total_test_data/batch_size # ignore last (<batch_size) batch
all_total = []
all_correct = []
labels_all = []
predicted_all = []
for i in range(int(iterates)):
batch_x_test,batch_y_test = get_batch(test_ds,i,batch_size)
articles = torch.FloatTensor(batch_x_test).to('cuda')
labels = torch.LongTensor(batch_y_test).to('cuda')
outputs = news_net(articles)
_, predicted = torch.max(outputs.data, 1)
labels_all.extend([x.item() for x in labels])
predicted_all.extend([x.item() for x in predicted])
report = classification_report(labels_all, predicted_all, output_dict=True)
df_report = pd.DataFrame(report).transpose()
df_report.round(2)
```
----
## Classification of Movies
```
def annotate_df(movie_df):
utterances = movie_df.text.values
predictions = []
batch = []
for text in utterances:
# Dimension, 196609
layer = np.zeros(total_words,dtype=float)
for word in text.split(' '):
layer[word2index[word.lower()]] += 1
batch.append(layer)
texts = torch.FloatTensor(batch).to('cuda')
outputs = news_net(texts)
_, predicted = torch.max(outputs.data, 1)
predictions.extend([x.item() for x in predicted])
result = []
for i, pred in enumerate(predictions):
result.append({'index': i, 'label_bow_fox_news': pred})
result_df = pd.DataFrame(result)
movie_df = movie_df.merge(result_df, right_index=True, left_index=True)
return movie_df
result_df = annotate_df(movie_data)
result_for_sana = result_df[['text', 'label_bow_fox_news']]
result_df
result_df.label_bow_fox_news.value_counts()
result_df.majority_answer.value_counts()
def get_classifications_results(df):
df['majority_answer'] = df['majority_answer'].replace({2:1})
labels_all = df.majority_answer.values
predicted_all = df.label_bow_fox_news.values
results_classification = classification_report(labels_all, predicted_all, output_dict=True)
df_report = pd.DataFrame(results_classification).transpose()
return df_report
get_classifications_results(result_df).round(2)
```
| github_jupyter |
# Example: Swiss Referenda
We propose in this notebook an example of how to use the `predikon` library to make vote predictions.
The data is a subsample (10%) of Swiss referenda results.
The full dataset can be found in the [submatrix-factorization](https://github.com/indy-lab/submatrix-factorization/blob/master/data/munvoteinfo.pkl) repo.
## Imports
```
import numpy as np
from predikon import LogisticSubSVD, GaussianSubSVD, WeightedAveraging
DATA_PATH = '../tests/data/'
```
## Load Data
Each entry `data[i,j]` is the percentage of "yes" in region `i` for referendum `j`.
A region in this dataset is a Swiss municipality.
The `weights` are the number of valid votes in each municipality.
The `outcomes` are the aggregate national outcomes for each referendum.
```
data = np.loadtxt(f'{DATA_PATH}/data.csv', dtype=float, delimiter=',')
weights = np.loadtxt(f'{DATA_PATH}/weights.csv', dtype=int, delimiter=',')
outcomes = np.loadtxt(f'{DATA_PATH}/outcomes.csv', dtype=float, delimiter=',')
```
## Prepare Data
The matrix `Y` contains the historical regional results up to vote `V`.
The vector `y` contains the regional results of the vote for which we would like to make predictions.
```
Y, y = data[:, :-1], data[:, -1]
ytrue = outcomes[-1]
R, V = Y.shape
print(f'Number of regions: {R:>3}')
print(f'Number of votes: {V:>3}')
```
## Set Observations
Set which regions are observed.
The unobserved regional results are `nan`.
```
# Fix the seed for reproducibility.
np.random.seed(200)
# Random permutation of the regions.
inds = np.random.permutation(R)
# Proportion of observed results.
p = 0.1
# Number of observations (10 %).
n = int(np.ceil(R * p))
# Set observations.
obs = inds[:n]
# Define new vector of (partial) regional results.
ynew = np.array([np.nan] * R)
ynew[obs] = y[obs]
```
## Evaluate Models
We evaluate three models:
1. A weighted average baseline
2. Our algorithm with a Gaussian likelihood
3. Our algorithm with a Bernoulli likelihood
We set the latent dimensions `D=10` and the regularizer `reg=1e-5`.
We report the predicted aggregated outcome, and we compare it against the true aggregate outcome.
An aggregate outcome is the weighted average of the regional observations and the regional predictions, where the weight is the number of valid votes in each region.
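To make the aggregation step concrete, here is a minimal standalone sketch; the regional "yes" shares and vote counts below are made up, not taken from the dataset:

```python
import numpy as np

# Hypothetical "yes" shares (observed or predicted) for four regions.
regional_yes = np.array([0.52, 0.47, 0.61, 0.40])
# Number of valid votes in each region.
valid_votes = np.array([1000, 2500, 500, 4000])

# Aggregate outcome: vote-weighted average of the regional results.
aggregate = np.dot(valid_votes, regional_yes) / valid_votes.sum()
print(f'Aggregate outcome: {aggregate*100:.2f}%')  # 45.00%
```

Note that the large fourth region pulls the aggregate below a naive unweighted mean of the regional shares.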
```
# Hyperparameters: number of latent dimensions and regularizers.
D, reg = 10, 1e-5
# Define models.
base = WeightedAveraging(Y, weighting=weights)
gaus = GaussianSubSVD(Y, weighting=weights, n_dim=D, add_bias=True, l2_reg=reg)
bern = LogisticSubSVD(Y, weighting=weights, n_dim=D, add_bias=True, l2_reg=reg)
for model in [base, gaus, bern]:
print(model)
# Predict missing results.
pred = model.fit_predict(ynew)
# Compute aggregate outcome.
    ypred = weights.dot(pred) / np.sum(weights)
print(f' Predicted outcome: {ypred*100:.2f}%')
print(f' True outcome: {ytrue*100:.2f}%')
print(f' Absolute diff.: {np.abs(ypred - ytrue)*100:.4f}\n')
```
<b>EnergyUsagePrediction.feature_eng.asum.v1_0_8.ipynb</b>
<br/>For my use case "Energy usage prediction based on historical weather and energy usage data". The original dataset can be downloaded from <a href="https://www.kaggle.com/taranvee/smart-home-dataset-with-weather-information">kaggle</a>.
<br/>The dataset used in this step (feature engineering) has already been transformed in the ETL step.
<br/>Data exploration is described/performed in "EnergyUsagePrediction.data_exp.asum.1_0_5.Ipynb"
<br/>ETL is described/performed in "EnergyUsagePrediction.etl.asum.1_0_8.Ipynb"
<br/>
<br/>This task transforms input columns of various relations into additional columns to improve model performance.
<br/>A subset of those features can be created in an initial task (for example, one-hot encoding of categorical variables or normalization of numerical variables).
<br/>Some others require business understanding or multiple iterations to be considered.
<br/>This task is one of those benefiting the most from the highly iterative nature of this method.
<br/>
<br/>In this task I will normalize the data columns so that they are easier to use by machine learning algorithms.
<br/>I will apply one-hot encoding to enum-like text columns (columns with just a few text values that indicate a category/type/state).
<br/>Additional features will be added, such as day of year and minute of day.
<br/>Load <i>smart-home-dataset-with-weather-information_filtered.csv</i> file into pandas dataframe
```
import types
import numpy as np
import pandas as pd
from botocore.client import Config
import ibm_boto3
def __iter__(self): return 0
# @hidden_cell
# The following code accesses a file in your IBM Cloud Object Storage. It includes your credentials.
# You might want to remove those credentials before you share the notebook.
client_x = ibm_boto3.client(service_name='s3',
ibm_api_key_id='[credentials]',
ibm_auth_endpoint="https://iam.cloud.ibm.com/oidc/token",
config=Config(signature_version='oauth'),
endpoint_url='https://s3.eu-geo.objectstorage.service.networklayer.com')
body = client_x.get_object(Bucket='xyz',Key='smart-home-dataset-with-weather-information_post_etl.csv')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
df = pd.read_csv(body)
df.head()
```
For usability we define constants for the labels.
```
df.dtypes
lbTimestamp = 'Timestamp'
lbTotalEneryUsage = 'TotalUsage_kW'
lbEneryGeneration = 'Generated_kW'
lbEneryUsageHouseOverall = 'HouseOverall_kW'
lbDishwasherUsage = 'DishWasher_kW'
lbFurnace1Usage = 'Furnace1_kW'
lbFurnace2Usage = 'Furnace2_kW'
lbHomeOfficeUsage = 'HomeOffice_kW'
lbFridgeUsage = 'Fridge_kW'
lbWineUsage = 'WineCellar_kW'
lbGarageDoorUsage = 'GarageDoor_kW'
lbKitchen12Usage = 'KitchenDevice12_kW'
lbKitchen14Usage = 'KitchenDevice14_kW'
lbKitchen38Usage = 'KitchenDevice38_kW'
lbBarnUsage = 'Barn_kW'
lbWellUsage = 'Well_kW'
lbMicrowaveUsage = 'Microwave_kW'
lbLivingRoomUsage = 'Living room_kW'
lbSolarGenerated = 'Solar_kW'
lbTemperature = 'Temperature_F'
#Textual weather indicators: clear-day, clear-night, cloudy, fog, partly-cloudy-day, partly-cloudy-night, rain, snow, wind
lbWeatherIndicator = 'WeatherIndicator'
#Humidity range [0,1]
lbHumidity = 'Humidity'
#Visibility range [0,10]
lbVisibility = 'Visibility'
#WeatherSummary: Breezy, Breezy and mostly cloudy, clear, drizzle, dry, flurries, flurries and breezy, foggy, heavy snow, light rain, light snow, mostly cloudy, overcast, partly cloudy, rain, rain and breezy, snow
lbWeatherSummary = 'WeatherSummary'
lbApparentTemperature = 'ApparentTemperature_F'
lbPressure = 'Pressure_hPa'
lbWindSpeed = 'WindSpeed'
lbCloudCover = 'cloudCover'
lbWindBearing = 'WindBearing'
#average intensity of rainfall: units could be mm/minute, mm/hour, inch/hour, or inch/minute
lbPrecipIntensity = 'PrecipIntensity'
lbDewPoint = 'dewPoint_F'
# chance of rain
lbPrecipProbability = 'PrecipProbability'
lbDayOfYear='dayOfYear'
lbHourOfDay='hourOfDay'
lbMinuteOfDay='minuteOfDay'
```
In order to get rid of the small number of text values in cloudCover, we replace them with the median value of cloudCover.
Furthermore, some values are floats stored as text while others are proper floats, so we will also convert the column to type float.
```
filtered = df[lbCloudCover]
median = (df[filtered != 'cloudCover'])[lbCloudCover].median()
print('median: ' + str(median))
df[lbCloudCover].replace('cloudCover', median, inplace=True)
```
We can now change the type to float
```
df[lbCloudCover] = df[lbCloudCover].astype(float)
df[lbCloudCover].unique()
```
Now let's convert the two text columns to one-hot encoded columns.
We can use pandas' get_dummies method to create one-hot encoded columns with a prefix.
We start with 'WeatherIndicator'.
```
df[lbWeatherIndicator].unique()
df = pd.concat([df,pd.get_dummies(df[lbWeatherIndicator], prefix='weatherIndicator')],axis=1)
# now drop the original column (you don't need it anymore)
df.drop([lbWeatherIndicator],axis=1, inplace=True)
```
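As a minimal standalone illustration of what `get_dummies` with a prefix produces (toy values, not the real weather column):

```python
import pandas as pd

indicator = pd.Series(['rain', 'clear-day', 'rain'])
dummies = pd.get_dummies(indicator, prefix='weatherIndicator')
# One column per distinct value; each row has exactly one "hot" entry.
print(dummies)
```

The columns are named `weatherIndicator_clear-day` and `weatherIndicator_rain`, and each row marks the single category observed in that row.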
Let's do the same for 'WeatherSummary'
```
df[lbWeatherSummary].unique()
df = pd.concat([df,pd.get_dummies(df[lbWeatherSummary], prefix='weatherSummary')],axis=1)
# now drop the original column (you don't need it anymore)
df.drop([lbWeatherSummary],axis=1, inplace=True)
```
So now all our data types are either type int or type float
```
df.dtypes
```
As described in part I, we need to convert the timestamp int values to proper 1-minute step values.
The first item starts at '2016-01-01 05:00:00'.
```
df[lbTimestamp] = pd.DatetimeIndex(pd.date_range('2016-01-01 05:00', periods=len(df), freq='min'))
print(df[lbTimestamp].min())
print(df[lbTimestamp].max())
```
We create new features dayOfYear, hourOfDay and minuteOfDay
```
df[lbDayOfYear] = df[lbTimestamp].apply(lambda x : x.dayofyear)
df[lbHourOfDay] = df[lbTimestamp].apply(lambda x : x.hour)
df[lbMinuteOfDay] = df[lbTimestamp].apply(lambda x : x.minute + 60*x.hour)
print('day of year examples:' + str(df[lbDayOfYear].iloc[0+3])+ ','+ str(df[lbDayOfYear].iloc[20 * 1440+ 8*60 + 5]))
print('hour of day examples:' + str(df[lbHourOfDay].iloc[0+3])+ ','+ str(df[lbHourOfDay].iloc[20 * 1440 + 8*60 + 5]))
print('minute of day examples:' + str(df[lbMinuteOfDay].iloc[0+3])+ ','+ str(df[lbMinuteOfDay].iloc[20 * 1440 + 8*60 + 5]))
```
Capping outliers: values beyond a fixed threshold are clipped to the threshold rather than dropped.
```
outlierFilter = 2
#median = float(df[lbTotalEneryUsage].median())
df[lbTotalEneryUsage] = np.where(df[lbTotalEneryUsage] > outlierFilter, outlierFilter, df[lbTotalEneryUsage])
df[lbTotalEneryUsage].max()
outlierFilter = 16
df[lbWindSpeed] = np.where(df[lbWindSpeed] > outlierFilter, outlierFilter, df[lbWindSpeed])
df[lbWindSpeed].max()
outlierFilterMax = 1030
outlierFilterMin = 1000
df[lbPressure] = np.where(df[lbPressure] > outlierFilterMax, outlierFilterMax, df[lbPressure])
df[lbPressure] = np.where(df[lbPressure] < outlierFilterMin, outlierFilterMin, df[lbPressure])
print(str(df[lbPressure].max()))
print(str(df[lbPressure].min()))
```
Now let's create normalized columns for our features of interest.
```
from sklearn import preprocessing

# Create a min-max scaler object (rescales each column to the range [0, 1])
min_max_scaler = preprocessing.MinMaxScaler()

# Columns to rescale; each gets a new '<name>_normalized' column
columns_to_normalize = [lbTemperature, lbWindSpeed, lbHumidity, lbPressure,
                        lbCloudCover, lbWindBearing, lbPrecipIntensity, lbDewPoint,
                        lbDayOfYear, lbHourOfDay, lbMinuteOfDay]
for col in columns_to_normalize:
    x = df[[col]].values.astype(float)
    # Fit the scaler to this column and transform it
    x_scaled = min_max_scaler.fit_transform(x)
    # Run the normalizer on the dataframe
    df[col + '_normalized'] = pd.DataFrame(x_scaled)
print(df.dtypes[0:32])
print(df.dtypes[32:])
```
Now let's save our post-feature-engineering dataset to a new csv file.
```
from project_lib import Project
project = Project(None,"[GUID]","p-[GUID]")
project.save_data(file_name = "smart-home-dataset-with-weather-information_post_feature_eng.csv",data = df.to_csv(index=False))
```
We have now extracted, transformed, and loaded our data. We can now continue with feature creation in part III
## investigations
Conclusions on mods:
- Freddie data only flags a mod in the month it occurs; Fannie keeps the flag for the duration of the mod.
- The mod_sticky_flg unifies the data: it is always Y after a mod
- Mods have significant prior dq
Conclusions on borr_asst_plan:
- Flags are sticky
- Can have multiple episodes of plans
- Some but not a lot of overlap with mods
- F: most had at most months_dq=1
- R: most were 1+ months_dq prior, mostly used by Freddie
- Tend to be short term in nature
- Fannie only reporting since 7/2020
Conclusions on interest rate reductions:
- Almost all ir reductions are marked as mods
- mods involve ir reduction about 60% of the time
Conclusions on Deferrals
- Fannie started reporting 7/2020. Freddie goes back in time.
- Some overlap with borr_asst_plan when defrl_amt > 0 first time
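The `mod_sticky_flg` unification described above can be sketched with numpy (a standalone toy; the flag values are made up, and the real columns live in the ClickHouse `monthly` arrays):

```python
import numpy as np

# Freddie-style flag: 'Y' only in the month the mod occurs.
mod_flg = np.array(['N', 'N', 'Y', 'N', 'N'])

# Sticky version: once a mod has happened, the flag stays 'Y' from then on.
mod_sticky_flg = np.where(np.maximum.accumulate(mod_flg == 'Y'), 'Y', 'N')
print(mod_sticky_flg)  # ['N' 'N' 'Y' 'Y' 'Y']
```

The running maximum over the boolean "is this a mod month" series is what turns a one-month flag into a flag that persists for the remaining life of the loan.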
```
import mortgage_imports.clickhouse_utilities as cu
import pandas as pd
import numpy as np
from muti import chu
# pandas options
pd.set_option('display.max_columns', None)
pd.set_option('display.width', 1000)
pd.set_option('display.colheader_justify', 'center')
pd.set_option('display.max_rows', 1000)
client = chu.make_connection()
qry = \
"""
SELECT
src_data,
m.mod_flg AS mf,
m.mod_sticky_flg AS msf,
COUNT(*) AS ln_mon
FROM
unified.frannie ARRAY JOIN monthly AS m
GROUP BY
src_data,
mf,
msf
ORDER BY
src_data,
mf,
msf;
"""
df1 = chu.run_query(qry, client, return_df=True)
df1.head(n=100)
# look at relations between mod_flg and dq on that date and max prior dq
# most frequently they are reset to current at the mod date and were 6+ months dq prior
qry = \
"""
SELECT
src_data,
prior_dq > 6 ? 6 : prior_dq AS prior_dq,
count(*) AS nl
FROM (
SELECT
src_data,
arrayMax(arraySlice(monthly.months_dq, 1, indexOf(monthly.mod_flg, 'Y'))) AS prior_dq
FROM
unified.frannie
WHERE
has(monthly.mod_flg, 'Y'))
GROUP BY src_data, prior_dq
ORDER BY src_data, prior_dq
"""
df2 = chu.run_query(qry, client, return_df=True)
df2.head(n=1000)
```
You can see the effect of the sticky mod flag: little effect on Fannie, a big effect on Freddie,
where the flag otherwise only appears in the month of the mod.
Modded loans tend to be very delinquent in their history.
```
# look at the relationship between mod_flg and borrower assistance plan
# R = repayment plan
# F = forbearance
# T = Trial
# N = no plan
# 7/9 = Not applicable/available (fannie)
# repayment plans: Freddie does more, about 1/3 are Y (standard) for mod_sticky_flg. None for fannie.
# forbearance: vast majority are not listed as mod -- perhaps coincidental overlap (ie forbearance on
# a loan that happened to be modified)
qry = \
"""
SELECT
src_data,
m.mod_flg AS mf,
m.mod_sticky_flg AS msf,
m.borr_asst_plan AS bap,
count(*) AS ln_mon
FROM
unified.frannie ARRAY JOIN monthly AS m
GROUP BY
src_data,
mf,
msf,
bap
ORDER BY
src_data,
mf,
msf,
bap;
"""
df = chu.run_query(qry, client, return_df=True)
df.sort_values(['src_data', 'bap', 'mf', 'msf'])[['src_data', 'bap', 'mf', 'msf', 'ln_mon']].head(n=1000)
# look at sticky-ness of borr_asst_plan: Fannie
# sticky
# code '7' seems to indicate end of forbearance
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'F')) AS bap
FROM
fannie.final
WHERE
has(monthly.borr_asst_plan, 'F')
ORDER BY rand32(1)
LIMIT 35
"""
df3 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df3['bap']))
# look at sticky-ness of borr_asst_plan: Freddie, sticky
# becomes unpopulated after forbearance ends
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'F')) AS bap
FROM
freddie.final
WHERE
has(monthly.borr_asst_plan, 'F')
ORDER BY rand32(1)
LIMIT 35
"""
df4 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df4['bap']))
# look at sticky-ness of borr_asst_plan: Fannie
# sticky
# code '7' seems to indicate end of repayment
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'R')) AS bap
FROM
fannie.final
WHERE
has(monthly.borr_asst_plan, 'R')
ORDER BY rand32(1)
LIMIT 35
"""
df5 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df5['bap']))
# look at sticky-ness of borr_asst_plan: Freddie
# sticky
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'R')) AS bap
FROM
freddie.final
WHERE
has(monthly.borr_asst_plan, 'R')
ORDER BY rand32(1)
LIMIT 35
"""
df6 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df6['bap']))
# look at sticky-ness of borr_asst_plan: Fannie
# sticky
# code '7' seems to indicate end of trial
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'T')) AS bap
FROM
fannie.final
WHERE
has(monthly.borr_asst_plan, 'T')
ORDER BY rand32(1)
LIMIT 35
"""
df7 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df7['bap']))
# look at sticky-ness of borr_asst_plan: Fannie
# sticky
# code '7' seems to indicate end of trial
qry = \
"""
SELECT
arraySlice(monthly.borr_asst_plan, indexOf(monthly.borr_asst_plan, 'T')) AS bap
FROM
freddie.final
WHERE
has(monthly.borr_asst_plan, 'T')
ORDER BY rand32(1)
LIMIT 35
"""
df8 = chu.run_query(qry, client, return_df=True)
print(np.asarray(df8['bap']))
# loans active 12/2020
qry = \
"""
SELECT
has(monthly.dt, toDate('2020-12-01')) AS dec2020,
count(*) AS nl
FROM
unified.frannie
GROUP BY dec2020
"""
df8b = chu.run_query(qry, client, return_df=True)
df8b.head()
# loans with bap = R
qry = \
"""
SELECT
has(monthly.borr_asst_plan, 'R') AS bap,
count(*) AS nl
FROM
unified.frannie
GROUP BY bap
"""
df8c = chu.run_query(qry, client, return_df=True)
df8c.head()
# loans with bap = F: first date distribution
qry = \
"""
SELECT
src_data,
arrayElement(monthly.dt, indexOf(monthly.borr_asst_plan, 'F')) AS dt,
count(*) AS nl
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'F')
GROUP BY src_data, dt
ORDER BY src_data, nl DESC
"""
df8d = chu.run_query(qry, client, return_df=True)
df8d.head(n=1000)
# loans with bap = F
qry = \
"""
SELECT
has(monthly.borr_asst_plan, 'F') AS bap,
count(*) AS nl
FROM
unified.frannie
GROUP BY bap
"""
df8b = chu.run_query(qry, client, return_df=True)
df8b.head()
# loans with bap = T
qry = \
"""
SELECT
has(monthly.borr_asst_plan, 'T') AS bap,
count(*) AS nl
FROM
unified.frannie
GROUP BY bap
"""
df8a = chu.run_query(qry, client, return_df=True)
df8a.head()
# look at relations between borr_asst_plan=F and max prior dq
qry = \
"""
SELECT
src_data,
prior_dq > 6 ? 6 : prior_dq AS prior_dq,
count(*) AS nl
FROM (
SELECT
src_data,
arrayMax(arraySlice(monthly.months_dq, 1, indexOf(monthly.borr_asst_plan, 'F'))) AS prior_dq
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'F'))
GROUP BY src_data, prior_dq
ORDER BY src_data, prior_dq
"""
df9 = chu.run_query(qry, client, return_df=True)
df9.head(n=1000)
# look at relations between borr_asst_plan=R and max prior dq
qry = \
"""
SELECT
src_data,
prior_dq > 6 ? 6 : prior_dq AS prior_dq,
count(*) AS nl
FROM (
SELECT
src_data,
arrayMax(arraySlice(monthly.months_dq, 1, indexOf(monthly.borr_asst_plan, 'R'))) AS prior_dq
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'R'))
GROUP BY src_data, prior_dq
ORDER BY src_data, prior_dq
"""
df9 = chu.run_query(qry, client, return_df=True)
df9.head(n=1000)
# of months bap=F distribution
qry = \
"""
SELECT
avg(num_months)
FROM (
SELECT
arrayElement(monthly.borr_asst_plan, length(monthly.borr_asst_plan)) AS last_element,
countEqual(monthly.borr_asst_plan, 'F') AS num_months
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'F')
AND last_element != 'F')
"""
df9a = chu.run_query(qry, client, return_df=True)
df9a.head(n=1000)
# of months bap=F distribution
qry = \
"""
SELECT
num_months,
count(*) AS num_loans
FROM (
SELECT
arrayElement(monthly.borr_asst_plan, length(monthly.borr_asst_plan)) AS last_element,
countEqual(monthly.borr_asst_plan, 'F') AS num_months
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'F')
AND last_element != 'F')
GROUP BY num_months
ORDER BY num_months
"""
df9a = chu.run_query(qry, client, return_df=True)
df9a.head(n=1000)
# of months bap=R distribution
qry = \
"""
SELECT
num_months,
count(*) AS num_loans
FROM (
SELECT
arrayElement(monthly.borr_asst_plan, length(monthly.borr_asst_plan)) AS last_element,
countEqual(monthly.borr_asst_plan, 'R') AS num_months
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'R')
AND last_element != 'R')
GROUP BY num_months
ORDER BY num_months
"""
df9a = chu.run_query(qry, client, return_df=True)
df9a.head(n=1000)
# of months bap=R distribution
qry = \
"""
SELECT
num_months,
count(*) AS num_loans
FROM (
SELECT
arrayElement(monthly.borr_asst_plan, length(monthly.borr_asst_plan)) AS last_element,
countEqual(monthly.borr_asst_plan, 'T') AS num_months
FROM
unified.frannie
WHERE
has(monthly.borr_asst_plan, 'T')
AND last_element != 'T')
GROUP BY num_months
ORDER BY num_months
"""
df9a = chu.run_query(qry, client, return_df=True)
df9a.head(n=1000)
```
Conclusions on borr_asst_plan:
- Flags are sticky
- Can have multiple episodes of plans
- Some but not a lot of overlap with mods
- F: most had at most months_dq=1
- R: most were 1+ months_dq prior, mostly used by Freddie
- Tend to be short term in nature
```
# look at relations between mod_flg and interest reductions
qry = \
"""
SELECT
src_data,
msf,
COUNT(*) AS ln
FROM (
SELECT
ln_id,
src_data,
ln_amort_cd,
arrayMap((ir, zb) -> IF(zb='!', ln_orig_ir - ir, 0.0), monthly.ir, monthly.zb_cd) AS delta_ir,
arrayMax(delta_ir) AS md,
arrayFirstIndex(x-> IF(x > 0.25, 1, 0), delta_ir) AS fbig,
arrayElement(monthly.mod_sticky_flg, fbig) AS msf
FROM
unified.frannie
WHERE
md > 0.25
AND ln_amort_cd = 'FRM')
GROUP BY src_data, msf
ORDER BY src_data, msf
"""
dfa = chu.run_query(qry, client, return_df=True)
dfa.head(n=1000)
# look at relations between mod_flg and interest reductions
qry = \
"""
SELECT
src_data,
sum(IF(max_delta_ir > 0.25, 1, 0)) AS ir_reduction,
count(*) AS ln_mods,
sum(IF(max_delta_ir > 0.25, 1, 0)) / count(*) AS ir_reduction_rate
FROM (
SELECT
ln_id,
src_data,
ln_amort_cd,
arrayMap((ir, zb) -> IF(zb='!', ln_orig_ir - ir, 0.0), monthly.ir, monthly.zb_cd) AS delta_ir,
arrayMax(delta_ir) AS max_delta_ir
FROM
unified.frannie
WHERE
has(monthly.mod_sticky_flg, 'Y')
AND ln_amort_cd = 'FRM')
GROUP BY src_data
ORDER BY src_data
"""
dfb = chu.run_query(qry, client, return_df=True)
dfb.head(n=1000)
```
Almost all ir reductions are marked as mods.
Mods involve an ir reduction about 60% of the time.
## Freddie/Fannie comparisons
```
# ln_purp_cd
qry = \
"""
SELECT
src_data,
ln_purp_cd,
count(*) AS nl
FROM
unified.frannie
GROUP BY src_data, ln_purp_cd
ORDER BY src_data, ln_purp_cd
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
dfx = dfx1.groupby('src_data')['nl'].sum()
dfx.name = 'tot'  # Series.rename returns a copy, so set the name attribute directly
dfx1 = dfx1.merge(dfx, on='src_data')
dfx1['distr'] = 100.0 * dfx1['nl'] / dfx1['tot']
dfx1[['src_data', 'ln_purp_cd', 'nl', 'distr']].head(n=100)
# ln_hrprog_flg
qry = \
"""
SELECT
src_data,
ln_hrprog_flg,
count(*) AS nl
FROM
unified.frannie
GROUP BY src_data, ln_hrprog_flg
ORDER BY src_data, ln_hrprog_flg
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
dfx = dfx1.groupby('src_data')['nl'].sum()
dfx.name = 'tot'
dfx1 = dfx1.merge(dfx, on='src_data')
dfx1['distr'] = 100.0 * dfx1['nl'] / dfx1['tot']
dfx1[['src_data', 'ln_hrprog_flg', 'nl', 'distr']].head(n=100)
# prop_type_cd
qry = \
"""
SELECT
src_data,
prop_type_cd,
count(*) AS nl
FROM
unified.frannie
GROUP BY src_data, prop_type_cd
ORDER BY src_data, prop_type_cd
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
dfx = dfx1.groupby('src_data')['nl'].sum()
dfx.name = 'tot'
dfx1 = dfx1.merge(dfx, on='src_data')
dfx1['distr'] = 100.0 * dfx1['nl'] / dfx1['tot']
dfx1[['src_data', 'prop_type_cd', 'nl', 'distr']].head(n=100)
# prop_occ_cd
qry = \
"""
SELECT
src_data,
prop_occ_cd,
count(*) AS nl
FROM
unified.frannie
GROUP BY src_data, prop_occ_cd
ORDER BY src_data, prop_occ_cd
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
dfx = dfx1.groupby('src_data')['nl'].sum()
dfx.name = 'tot'
dfx1 = dfx1.merge(dfx, on='src_data')
dfx1['distr'] = 100.0 * dfx1['nl'] / dfx1['tot']
dfx1[['src_data', 'prop_occ_cd', 'nl', 'distr']].head(n=100)
# ln_orig_ltv
qry = \
"""
SELECT
src_data,
min(ln_orig_ltv),
avg(ln_orig_ltv),
max(ln_orig_ltv)
FROM
unified.frannie
WHERE
ln_orig_ltv > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_orig_cltv
qry = \
"""
SELECT
src_data,
min(ln_orig_cltv),
avg(ln_orig_cltv),
max(ln_orig_cltv)
FROM
unified.frannie
WHERE
ln_orig_cltv > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# borr_orig_fico
qry = \
"""
SELECT
src_data,
min(borr_orig_fico),
avg(borr_orig_fico),
max(borr_orig_fico)
FROM
unified.frannie
WHERE
borr_orig_fico > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# borr_dti
qry = \
"""
SELECT
src_data,
min(borr_dti),
avg(borr_dti),
max(borr_dti)
FROM
unified.frannie
WHERE
borr_dti > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_orig_prin
qry = \
"""
SELECT
src_data,
min(ln_orig_prin),
avg(ln_orig_prin),
max(ln_orig_prin)
FROM
unified.frannie
WHERE
ln_orig_prin > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_orig_ir
qry = \
"""
SELECT
src_data,
min(ln_orig_ir),
avg(ln_orig_ir),
max(ln_orig_ir)
FROM
unified.frannie
WHERE
ln_orig_ir > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_mi_pct
qry = \
"""
SELECT
src_data,
min(ln_mi_pct),
avg(ln_mi_pct),
max(ln_mi_pct)
FROM
unified.frannie
WHERE
ln_mi_pct > 0
GROUP BY src_data
ORDER BY src_data
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_defrl_amt
qry = \
"""
SELECT
src_data,
m.mod_sticky_flg AS msf,
count(*) as nl
FROM
unified.frannie ARRAY JOIN monthly AS m
WHERE
m.defrl_amt > 0
GROUP BY src_data, msf
ORDER BY src_data, msf
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# mod flg when defrl_amt is first > 0
qry = \
"""
SELECT
src_data,
msf,
count(*) AS nl
FROM (
SELECT
src_data,
arrayFirstIndex(x -> IF(x > 0, 1, 0), monthly.defrl_amt) AS first_def,
arrayElement(monthly.mod_sticky_flg, first_def) AS msf,
arrayElement(monthly.borr_asst_plan, first_def) AS bap
FROM
unified.frannie
WHERE first_def > 0)
GROUP BY src_data, msf
ORDER BY src_data, msf
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# bap when defrl_amt is first > 0
qry = \
"""
SELECT
src_data,
bap,
count(*) AS nl
FROM (
SELECT
src_data,
arrayFirstIndex(x -> IF(x > 0, 1, 0), monthly.defrl_amt) AS first_def,
arrayElement(monthly.mod_sticky_flg, first_def) AS msf,
arrayElement(monthly.borr_asst_plan, first_def) AS bap
FROM
unified.frannie
WHERE first_def > 0)
GROUP BY src_data, bap
ORDER BY src_data, bap
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_defrl_amt
qry = \
"""
SELECT
src_data,
m.dt AS dt,
count(*) as nl
FROM
unified.frannie ARRAY JOIN monthly AS m
WHERE
m.defrl_amt > 0
GROUP BY src_data, dt
ORDER BY src_data, nl DESC
"""
dfx1 = chu.run_query(qry, client, return_df=True)
dfx1.head(n=1000)
# ln_defrl_amt
qry = \
"""
SELECT
src_data,
arrayMax(monthly.defrl_amt) AS da,
monthly.borr_asst_plan AS bap,
monthly.mod_sticky_flg AS msf,
monthly.defrl_amt AS mad,
monthly.dt AS dt
FROM
unified.frannie
WHERE
da > 0
ORDER BY rand32()
LIMIT 10
"""
dfx1 = chu.run_query(qry, client, return_df=True)
print(np.asarray(dfx1['mad']))
print(np.asarray(dfx1['msf']))
print(np.asarray(dfx1['dt']))
```
# Connect to PI Web API and SLTC database and create dataframe
# Develop initial model using obvious predictors
```
import json
import getpass
import requests
import pandas as pd
import numpy as np
import urllib3
import datetime
import dateutil.parser
import sklearn
import matplotlib
import types
import uuid
import io
urllib3.disable_warnings()
from requests.auth import HTTPBasicAuth
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn import neighbors
from matplotlib import pyplot as plt
# This cell does most of the data acquisition and cleanup work. It first connects to the PI System via the PI Web API
# (using basic or kerberos authentication) and then fetches up to 1000 event frames that match our criteria.
def read_config():
    with open('test_config.json') as c:
        config = json.load(c)
    return config

""" Create API call security method
@param security_method string: security method to use: basic or kerberos
@param user_name string: The user's credentials name
@param user_password string: The user's credentials password
"""
def call_security_method(security_method, user_name, user_password):
    if security_method.lower() == 'basic':
        return HTTPBasicAuth(user_name, user_password)
    # any other method (e.g. kerberos) is not handled here; fall back to no explicit auth object
    return None
""" Method to send HTTP GET requests
@param query: query string to execute
Also uses the test_config.json file to read the username and password for Basic Authentication
"""
def get(query):
data = read_config()
username = data['Username']
password = data['Password']
securitymethod = data.get('AuthType', 'basic')
verify_ssl = data.get('VerifySSL', True)
security_auth = call_security_method(securitymethod, username, password)
response = requests.get(query, auth=security_auth, verify=verify_ssl)
return response
""" Method to send HTTP POST requests
@param query: query string to execute
@param body: body of the request
Also uses the test_config.json file to read the username and password for Basic Authentication
"""
def post(query, body):
header = {
'content-type': 'application/json',
'X-Requested-With':'XmlHttpRequest'
}
data = read_config()
username = data['Username']
password = data['Password']
securitymethod = data.get('AuthType', 'basic')
verify_ssl = data.get('VerifySSL', True)
security_auth = call_security_method(securitymethod, username, password)
response = requests.post(query, auth=security_auth, verify=verify_ssl, json=body, headers=header)
return response
""" Method to send HTTP DELETE requests
@param query: query string to execute
Also uses the test_config.json file to read the username and password for Basic Authentication
"""
def delete(query):
data = read_config()
username = data['Username']
password = data['Password']
securitymethod = data.get('AuthType', 'basic')
verify_ssl = data.get('VerifySSL', True)
security_auth = call_security_method(securitymethod, username, password)
response = requests.delete(query, auth=security_auth, verify=verify_ssl)
return response
""" Method to get the database web ID of a given database path
@param path: path of the database. More info can be found here: https://your-server/piwebapi/help/topics/path-syntax
"""
def getDatabaseWebID(path):
data = read_config()
piwebapi_url = data['Resource']
getDatabaseQuery = piwebapi_url + "/assetdatabases?path=" + path
database = json.loads(get(getDatabaseQuery).text)
return database['WebId']
#get database WebID
data = read_config()
databasePath = "\\\\"+ data['AssetServerName'] + "\\" + data['AssetDatabaseName']
databaseWebID = getDatabaseWebID(databasePath)
# This query is built for targeting the PI Web API EventFrame Search endpoint.
# More information can be found under : https://your-server/piwebapi/help/controllers/eventframe/actions/geteventframesquery
# In this particular case, we are looking for up to 1000 event frames that belong to the "VAVCO startup" template and that
# occurred within the window given by startTime and endTime (here *-1d to *+1d). That explains our query parameters of
# template, startTime, endTime and maxCount.
# Additionally, we only keep event frames with a duration of less than 2 hours, so that we can filter out anomalies and
# clean up our dataset. That is why we decided to use the query feature of the search endpoints.
# More information about query syntax can be found here : https://your-server/piwebapi/help/topics/search-query-syntax
piwebapi_url = data['Resource']
getEFQuery = piwebapi_url + "/eventframes/search?databaseWebId="+\
databaseWebID + "&template=VAVCO%20startup&startTime=*-1d&endTime=*+1d&maxCount=1000&query=duration:<2h"
#send GET request to PI Web API
response = get(getEFQuery)
data = json.loads(response.text)
items = data['Items']
# initializing predictor arrays
efarray = []
setpoint = []
percentcooling = []
isMonday = []
dayofweek = []
outsideairtemp = []
outsidehumidity = []
#target variable
duration = []
for i in items:
attributes_response = get(i['Links']['Attributes'])
attributes = json.loads(attributes_response.text) #attributes
attribute_items = attributes['Items']
#Add eventframe name to the dataframe
efarray.append(i['Name'])
#parse Time
startTime = dateutil.parser.parse(i['StartTime'])
endTime = dateutil.parser.parse(i['EndTime'])
#add target duration in minutes
elapsedTime = endTime - startTime
duration.append(elapsedTime.total_seconds() / 60)  # total_seconds() also counts whole days, unlike .seconds
for j in attribute_items:
#add predictors to data frame
if j['Name'] == 'Setpoint Offset at start time':
value_response = get(j['Links']['Value'])
setpoint.append(json.loads(value_response.text)['Value'])
if j['Name'] == '% Cooling at VAV Start':
value_response = get(j['Links']['Value'])
percentcooling.append(json.loads(value_response.text)['Value'])
if j['Name'] == 'Outside Relative Humidity at VAV Start':
value_response = get(j['Links']['Value'])
outsidehumidity.append(json.loads(value_response.text)['Value'])
if j['Name'] == 'Outside Air Temperature at VAV Start':
value_response = get(j['Links']['Value'])
outsideairtemp.append(json.loads(value_response.text)['Value'])
#add IsMonday predictor
if startTime.weekday() == 0:
isMonday.append(1)
else:
isMonday.append(0)
dayofweek.append(startTime.weekday())
df = pd.DataFrame({
'VAVCO Startup' : efarray,
'Setpoint Offset at start time' : setpoint,
'% Cooling at VAV Start' : percentcooling,
'IsMonday' : isMonday,
'Duration' : duration,
'Day of Week' : dayofweek,
'Outside Relative Humidity at VAV Start' : outsidehumidity,
'Outside Air Temperature at VAV Start' : outsideairtemp
})
df.head()
X = df[['Outside Air Temperature at VAV Start', 'Outside Relative Humidity at VAV Start']]
Y = df['Duration']
x_train, x_test,y_train,y_test = train_test_split(X,Y,test_size =0.3)
#Using Linear Regression
regr1 = linear_model.LinearRegression()
regr1.fit(x_train, y_train)
print(regr1.score(x_test, y_test))
```
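`score` reports R² on the held-out split; when comparing models it can also help to report an error in the target's own units. A minimal, self-contained sketch on synthetic data (the feature and target values here are illustrative stand-ins, not data from the PI system above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# synthetic stand-in for the event-frame features and durations
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(x_train, y_train)
print("R^2:", model.score(x_test, y_test))
print("MAE:", mean_absolute_error(y_test, model.predict(x_test)))
```

Reporting both numbers makes it easier to judge whether an improvement in R² translates into a practically meaningful reduction in prediction error.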
# Develop model with better predictors
```
X = df[['% Cooling at VAV Start', 'Setpoint Offset at start time', 'IsMonday']]
Y = df['Duration']
x_train, x_test,y_train,y_test = train_test_split(X,Y,test_size =0.3)
#Using Linear Regression
regr = linear_model.LinearRegression()
regr.fit(x_train, y_train)
print(regr.score(x_test, y_test))
```
# Phase 2 : Predict for real-time predictors
```
def read_config():
with open('test_config.json') as c:
config = json.load(c)
return config
data = read_config()
# query elements
databasePath = "\\\\"+ data['AssetServerName'] + "\\" + data['AssetDatabaseName']
databaseWebID = getDatabaseWebID(databasePath)
piwebapi_url = data['Resource']
query = piwebapi_url + "/assetdatabases/"+ databaseWebID + "/elements?templateName=VAVCO&searchFullHierarchy=true"
elemResponse = get(query)
data = json.loads(elemResponse.text)
items = data['Items']
for i in items:
attributes_response = get(i['Links']['Attributes'])
attributes = json.loads(attributes_response.text) #attributes
attribute_items = attributes['Items']
for j in attribute_items:
if j['Name']== '% cooling':
value_response = get(j['Links']['Value'])
percentcooling = json.loads(value_response.text)['Value']
if j['Name']== 'Setpoint Offset Current':
value_response = get(j['Links']['Value'])
setpoint = json.loads(value_response.text)['Value']
if datetime.datetime.now().weekday() == 0:
isMonday = 1
else:
isMonday = 0
#clean up data: only predict if the predictor value is a float greater than 0 and not a dictionary (the NODATA case)
if isinstance(percentcooling, float):
if percentcooling > 0.0:
forecast = regr.predict([[percentcooling, setpoint, isMonday]])
print(forecast)
# data ingress using pi web api
body = {'Value' : forecast[0]}
for j in attribute_items:
if j['Name']=='Predicted Cooling Time':
query = piwebapi_url + "/streams/"+ j['WebId']+ "/value"
response = post(query, body)
print(response.status_code)
```
# Scatter Plots
```
plt.scatter(df['% Cooling at VAV Start'], df['Duration'])
plt.scatter(df['Setpoint Offset at start time'], df['Duration'])
plt.scatter(df['Outside Air Temperature at VAV Start'], df['Duration'])
```
## Macrorheology
As the material model is based on microscopic rather than macroscopic parameters (such as the bulk stiffness), the material parameters cannot be measured directly with a rheometer. Instead, the rheological experiments are "simulated" on the material model, and the resulting curves are used to fit the material parameters, so that the simulated rheological experiments on the material model match the measured rheological response of the material.
Here, we describe three different rheological experiments that can be simulated on the material model.
- Shear Rheometer
- Stretch Thinning
- Extensional Rheometer
The stretch experiment is needed to reliably fit the buckling behavior of the material later on, while either the Shear Rheometer or the Extensional Rheometer experiment can be used to fit the fiber stiffness and the strain stiffening.
This section first describes the functions that simulate these experiments on the material model and the next section explains how these functions can be used to fit the material parameters from experimental data.
### Shear Rheometer
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from saenopy import macro
from saenopy.materials import SemiAffineFiberMaterial
material = SemiAffineFiberMaterial(900, 0.0004, 0.0075, 0.033)
print(material)
gamma = np.arange(0.005, 0.3, 0.0001)
x, y = macro.getShearRheometerStress(gamma, material)
plt.loglog(x, y, "-", lw=3, label="model")
plt.xlabel("strain")
plt.ylabel("shear stress [Pa]")
plt.show()
```
### Stretcher
```
import numpy as np
import matplotlib.pyplot as plt
from saenopy import macro
from saenopy.materials import SemiAffineFiberMaterial
material = SemiAffineFiberMaterial(900, 0.0004, 0.0075, 0.033)
print(material)
lambda_h = np.arange(1-0.05, 1+0.07, 0.01)
lambda_v = np.arange(0, 1.1, 0.001)
x, y = macro.getStretchThinning(lambda_h, lambda_v, material)
plt.plot(x, y, lw=3, label="model")
plt.xlabel("horizontal stretch")
plt.ylabel("vertical contraction")
plt.ylim(0, 1.2)
plt.xlim(0.9, 1.2)
plt.show()
```
### Extensional Rheometer
```
import numpy as np
import matplotlib.pyplot as plt
from saenopy import macro
from saenopy.materials import SemiAffineFiberMaterial, LinearMaterial
material = SemiAffineFiberMaterial(900, 0.0004, 0.0075, 0.033)
print(material)
epsilon = np.arange(1, 1.17, 0.0001)
x, y = macro.getExtensionalRheometerStress(epsilon, material)
plt.plot(x, y, lw=3, label="model")
plt.xlabel("strain")
plt.ylabel("stress [Pa]")
plt.show()
```
## Fitting material parameters
```
from saenopy import macro
import numpy as np
# example data, stress-strain curves for collagen of three different concentrations
data0_6 = np.array([[4.27e-06,-2.26e-03],[1.89e-02,5.90e-01],[3.93e-02,1.08e+00],[5.97e-02,1.57e+00],[8.01e-02,2.14e+00],[1.00e-01,2.89e+00],[1.21e-01,3.83e+00],[1.41e-01,5.09e+00],[1.62e-01,6.77e+00],[1.82e-01,8.94e+00],[2.02e-01,1.17e+01],[2.23e-01,1.49e+01],[2.43e-01,1.86e+01],[2.63e-01,2.28e+01],[2.84e-01,2.71e+01]])
data1_2 = np.array([[1.22e-05,-1.61e-01],[1.71e-02,2.57e+00],[3.81e-02,4.69e+00],[5.87e-02,6.34e+00],[7.92e-02,7.93e+00],[9.96e-02,9.56e+00],[1.20e-01,1.14e+01],[1.40e-01,1.35e+01],[1.61e-01,1.62e+01],[1.81e-01,1.97e+01],[2.02e-01,2.41e+01],[2.22e-01,2.95e+01],[2.42e-01,3.63e+01],[2.63e-01,4.43e+01],[2.83e-01,5.36e+01],[3.04e-01,6.37e+01],[3.24e-01,7.47e+01],[3.44e-01,8.61e+01],[3.65e-01,9.75e+01],[3.85e-01,1.10e+02],[4.06e-01,1.22e+02],[4.26e-01,1.33e+02]])
data2_4 = np.array([[2.02e-05,-6.50e-02],[1.59e-02,8.46e+00],[3.76e-02,1.68e+01],[5.82e-02,2.43e+01],[7.86e-02,3.34e+01],[9.90e-02,4.54e+01],[1.19e-01,6.11e+01],[1.40e-01,8.16e+01],[1.60e-01,1.06e+02],[1.80e-01,1.34e+02],[2.01e-01,1.65e+02],[2.21e-01,1.96e+02],[2.41e-01,2.26e+02]])
# hold the buckling parameter constant, as it cannot be determined well with shear experiments
ds0 = 0.0004
# minimize 3 shear rheometer experiments with different collagen concentrations and, therefore, different k1 parameters,
# while keeping the other parameters the same
parameters, plot = macro.minimize([
[macro.getShearRheometerStress, data0_6, lambda p: (p[0], ds0, p[3], p[4])],
[macro.getShearRheometerStress, data1_2, lambda p: (p[1], ds0, p[3], p[4])],
[macro.getShearRheometerStress, data2_4, lambda p: (p[2], ds0, p[3], p[4])],
],
[900, 1800, 12000, 0.013, 0.1],
)
# print the resulting parameters
print(parameters)
# and plot the results
plot()
```
To fit all parameters, experiments of different types should be combined, e.g. a shear rheological experiment and a stretching experiment.
```
from saenopy import macro
import numpy as np
# example data, stress-strain curves and a stretch experiment
shear = np.array([[7.50e-03,2.78e-01],[1.25e-02,4.35e-01],[1.75e-02,6.44e-01],[2.25e-02,7.86e-01],[2.75e-02,9.98e-01],[3.25e-02,1.13e+00],[3.75e-02,1.52e+00],[4.25e-02,1.57e+00],[4.75e-02,1.89e+00],[5.25e-02,2.10e+00],[5.75e-02,2.46e+00],[6.25e-02,2.67e+00],[6.75e-02,3.15e+00],[7.25e-02,3.13e+00],[7.75e-02,3.83e+00],[8.25e-02,4.32e+00],[8.75e-02,4.35e+00],[9.25e-02,4.78e+00],[9.75e-02,5.45e+00],[1.02e-01,5.87e+00],[1.07e-01,6.16e+00],[1.13e-01,6.89e+00],[1.17e-01,7.89e+00],[1.22e-01,8.28e+00],[1.28e-01,9.13e+00],[1.33e-01,1.06e+01],[1.38e-01,1.10e+01],[1.42e-01,1.27e+01],[1.47e-01,1.39e+01],[1.52e-01,1.53e+01],[1.58e-01,1.62e+01],[1.63e-01,1.78e+01],[1.68e-01,1.89e+01],[1.72e-01,2.03e+01],[1.77e-01,2.13e+01],[1.82e-01,2.23e+01],[1.88e-01,2.38e+01],[1.93e-01,2.56e+01],[1.98e-01,2.78e+01],[2.03e-01,3.02e+01],[2.07e-01,3.28e+01],[2.12e-01,3.55e+01],[2.17e-01,3.83e+01],[2.23e-01,4.13e+01],[2.28e-01,4.48e+01],[2.33e-01,4.86e+01],[2.37e-01,5.27e+01],[2.42e-01,5.64e+01],[2.47e-01,6.08e+01],[2.53e-01,6.48e+01],[2.58e-01,6.93e+01],[2.63e-01,7.44e+01],[2.68e-01,7.89e+01],[2.73e-01,8.40e+01],[2.78e-01,8.91e+01],[2.82e-01,9.41e+01],[2.87e-01,1.01e+02],[2.92e-01,1.07e+02],[2.97e-01,1.12e+02],[3.02e-01,1.19e+02],[3.07e-01,1.25e+02],[3.12e-01,1.32e+02],[3.18e-01,1.39e+02],[3.23e-01,1.45e+02],[3.28e-01,1.53e+02],[3.33e-01,1.60e+02],[3.38e-01,1.67e+02],[3.43e-01,1.76e+02],[3.47e-01,1.83e+02],[3.52e-01,1.90e+02],[3.57e-01,1.99e+02],[3.62e-01,2.06e+02],[3.67e-01,2.15e+02],[3.72e-01,2.23e+02],[3.78e-01,2.31e+02],[3.83e-01,2.40e+02],[3.88e-01,2.48e+02],[3.93e-01,2.56e+02],[3.98e-01,2.56e+02],[4.03e-01,2.73e+02],[4.07e-01,2.77e+02],[4.12e-01,2.86e+02],[4.17e-01,2.97e+02],[4.22e-01,3.08e+02],[4.27e-01,3.15e+02],[4.32e-01,3.25e+02],[4.38e-01,3.33e+02],[4.43e-01,3.39e+02],[4.48e-01,3.51e+02],[4.53e-01,3.59e+02],[4.58e-01,3.69e+02],[4.63e-01,3.76e+02],[4.68e-01,3.83e+02],[4.72e-01,3.93e+02],[4.77e-01,3.97e+02],[4.82e-01,4.04e+02],[4.87e-01,4.13e+02],[4.92e-01,4.18e+02],[4.97e-01,4.31e+02],
[5.02e-01,4.38e+02],[5.07e-01,4.25e+02],[5.12e-01,4.48e+02],[5.17e-01,4.49e+02],[5.22e-01,4.56e+02],[5.27e-01,4.66e+02],[5.32e-01,4.70e+02],[5.37e-01,4.76e+02],[5.42e-01,4.82e+02],[5.47e-01,4.89e+02],[5.52e-01,4.99e+02],[5.57e-01,5.01e+02],[5.62e-01,5.06e+02],[5.68e-01,5.14e+02],[5.73e-01,5.15e+02],[5.78e-01,5.21e+02],[5.83e-01,5.28e+02],[5.88e-01,5.30e+02],[5.93e-01,5.38e+02],[5.98e-01,5.40e+02],[6.03e-01,5.41e+02],[6.08e-01,5.38e+02],[6.13e-01,5.39e+02],[6.18e-01,5.50e+02],[6.23e-01,5.56e+02],[6.27e-01,5.59e+02],[6.32e-01,5.68e+02],[6.37e-01,5.69e+02],[6.42e-01,5.70e+02],[6.47e-01,5.79e+02],[6.52e-01,5.78e+02],[6.57e-01,5.80e+02],[6.62e-01,5.83e+02],[6.67e-01,5.83e+02],[6.72e-01,5.89e+02],[6.77e-01,5.86e+02],[6.82e-01,5.88e+02],[6.88e-01,5.91e+02],[6.93e-01,5.86e+02],[6.98e-01,5.91e+02],[7.03e-01,5.91e+02],[7.08e-01,5.87e+02],[7.13e-01,5.89e+02],[7.18e-01,5.88e+02],[7.23e-01,5.89e+02],[7.28e-01,5.89e+02],[7.33e-01,5.81e+02],[7.38e-01,5.85e+02],[7.43e-01,5.86e+02],[7.48e-01,5.78e+02],[7.52e-01,5.78e+02],[7.57e-01,5.79e+02],[7.62e-01,5.76e+02],[7.67e-01,5.74e+02],[7.72e-01,5.70e+02],[7.77e-01,5.73e+02],[7.82e-01,5.70e+02],[7.87e-01,5.66e+02],[7.92e-01,5.69e+02],[7.97e-01,5.59e+02],[8.02e-01,5.50e+02],[8.07e-01,5.52e+02],[8.12e-01,5.52e+02],[8.18e-01,5.59e+02],[8.23e-01,5.58e+02],[8.28e-01,5.57e+02],[8.33e-01,5.59e+02],[8.38e-01,5.53e+02],[8.43e-01,5.55e+02],[8.48e-01,5.56e+02],[8.53e-01,5.49e+02],[8.58e-01,5.50e+02],[8.63e-01,5.47e+02],[8.68e-01,5.22e+02],[8.73e-01,5.44e+02],[8.77e-01,5.36e+02],[8.82e-01,5.38e+02],[8.87e-01,5.33e+02],[8.92e-01,5.28e+02],[8.97e-01,5.30e+02],[9.02e-01,5.23e+02],[9.07e-01,5.22e+02],[9.12e-01,5.18e+02],[9.17e-01,5.12e+02],[9.22e-01,5.11e+02],[9.27e-01,5.05e+02],[9.32e-01,5.03e+02],[9.38e-01,4.99e+02],[9.43e-01,4.88e+02],[9.48e-01,4.86e+02],[9.53e-01,4.82e+02],[9.58e-01,4.73e+02],[9.63e-01,4.70e+02],[9.68e-01,4.61e+02],[9.73e-01,4.56e+02],[9.78e-01,4.51e+02],[9.83e-01,4.41e+02],[9.88e-01,4.36e+02],[9.93e-01,4.22e+02],[9.98e-01,4.09e+02],
[1.00e+00,4.07e+02]])[:50]
stretch = np.array([[9.33e-01,1.02e+00],[9.40e-01,1.01e+00],[9.47e-01,1.02e+00],[9.53e-01,1.02e+00],[9.60e-01,1.02e+00],[9.67e-01,1.01e+00],[9.73e-01,1.01e+00],[9.80e-01,1.01e+00],[9.87e-01,1.01e+00],[9.93e-01,1.00e+00],[1.00e+00,1.00e+00],[1.01e+00,9.89e-01],[1.01e+00,9.70e-01],[1.02e+00,9.41e-01],[1.03e+00,9.00e-01],[1.03e+00,8.46e-01],[1.04e+00,7.76e-01],[1.05e+00,6.89e-01],[1.05e+00,6.02e-01],[1.06e+00,5.17e-01],[1.07e+00,4.39e-01],[1.07e+00,3.74e-01],[1.08e+00,3.17e-01],[1.09e+00,2.72e-01],[1.09e+00,2.30e-01],[1.10e+00,2.02e-01]])
# fit all 4 parameters simultaneously to two experiments
parameters, plot = macro.minimize([
[macro.getShearRheometerStress, shear, lambda p: (p[0], p[1], p[2], p[3])],
[macro.getStretchThinning, stretch, lambda p: (p[0], p[1], p[2], p[3])],
],
[900, 0.0004, 0.075, 0.33],
)
# print the resulting parameters
print(parameters)
# and plot the results
plot()
```
<h2>Python NumPy</h2>
To install it, run `pip install numpy`.
<b>What is a Python NumPy?</b>
NumPy is a Python package whose name stands for 'Numerical Python'. It is the core library for scientific computing in Python: it contains a powerful n-dimensional array object and provides tools for integrating with C, C++, and other languages. It is also useful for linear algebra, random number generation, and more. NumPy arrays can also be used as an efficient multi-dimensional container for generic data.
<b>NumPy Array:</b>
A NumPy array is a powerful N-dimensional array object arranged in rows and columns. We can initialize NumPy arrays from nested Python lists and access their elements.

Here, the elements are stored in their respective memory locations. The array is said to be two-dimensional because it has rows as well as columns, for example 4 rows and 3 columns.
<h3>Single & Multi dimensional Numpy Array:</h3>
```
import numpy as np
a=np.array([1,2,3])
print(a)
#Multi-dimensional Array:
a=np.array([(1,2,3),(4,5,6)])
print(a)
```
<h3>Python NumPy Array v/s List</h3>
We use a Python NumPy array instead of a list for three main reasons:

- Less memory
- Fast
- Convenient

The first reason to choose a NumPy array is that it occupies less memory than a list. It is also considerably faster in terms of execution, and at the same time very convenient to work with. These are the major advantages NumPy arrays have over lists.
```
import numpy as np
import time
import sys
S= range(1000)
print(sys.getsizeof(S)*len(S))
D= np.arange(1000)
print(D.size*D.itemsize)
```
The above output shows that the memory reported for the list-style container (denoted by S) is several times larger than the memory allocated by the NumPy array (the exact numbers depend on your Python version and platform). From this, you can conclude that there is a major difference between the two, which makes the NumPy array the preferred choice over a list.
```
#python numpy array is faster and more convenient when compared to list
import numpy as np
import time
import sys
SIZE = 1000000
L1= range(SIZE)
L2= range(SIZE)
A1= np.arange(SIZE)
A2=np.arange(SIZE)
start= time.time()
result=[x+y for x,y in zip(L1,L2)]
print((time.time()-start)*1000)
start=time.time()
result= A1+A2
print((time.time()-start)*1000)
```
In the above code, we have defined two lists and two NumPy arrays, and compared the time taken to add the lists versus the arrays. If you look at the output of the program, there is a significant difference between the two values (the exact timings depend on your machine): the list version takes several times longer than the NumPy version. Hence, NumPy arrays are faster than lists. Also note that for the lists we had to build the sum element-wise with a list comprehension over zip(L1, L2), whereas for the NumPy arrays we simply wrote A1+A2. That is why working with NumPy is much easier and more convenient compared to lists.
<h3>Python NumPy Operations</h3>
```
#ndim:
import numpy as np
a = np.array([(1,2,3),(4,5,6),(8,9,10)])
print(a.ndim) #Since the output is 2, it is a two-dimensional array (multi dimension).
```

<b>itemsize:</b>
You can calculate the byte size of each element. In the code below, we define a one-dimensional array and use the 'itemsize' attribute to find the size in bytes of each element.
```
import numpy as np
a = np.array([(1,2,3)])
print(a.itemsize)
```
<b>dtype:</b>
You can find the data type of the elements stored in an array using the 'dtype' attribute, which reports the data type along with its size. In the code below, we define an array and inspect its dtype.
```
import numpy as np
a = np.array([(1,1,1)])
print(a.dtype)
#Calculating size and shape of an array
import numpy as np
a = np.array([(1,2,3,4,5,6),(2,3,4,5,6,7)])
print(a.size)
print(a.shape)
```
<b>reshape:</b>
Reshape changes the number of rows and columns, giving a new view of the same data.

In the example below, an array with 2 rows and 3 columns is reshaped into one with 3 rows and 2 columns.
```
import numpy as np
a = np.array([(8,9,10),(11,12,13)])
print('Old -->',a)
a=a.reshape(3,2)
print('New-->',a)
```
<b>slicing:</b>
Slicing means extracting a particular set of elements from an array. It works much like slicing a Python list.

```
#We have an array and we need a particular element (say 3) out of a given array.
import numpy as np
a=np.array([(1,2,3,4),(3,4,5,6)])
print(a[0,2])
#Here, the array(1,2,3,4) is your index 0 and (3,4,5,6) is index 1 of the python numpy array.
#Therefore, we have printed the second element from the zeroth index.
#let’s say we need the 2nd element from the zeroth and first index of the array
import numpy as np
a=np.array([(1,2,3,4),(3,4,5,6)])
print(a[0:,2]) # Here colon represents all the rows, including zero.
#Now to get the 2nd element, we’ll call index 2 from both of the rows which gives us the value 3 and 5 respectively.
import numpy as np
a=np.array([(8,9),(10,11),(12,13)])
print(a[0:2,1])
#As you can see above, only 9 and 11 get printed. The row slice 0:2 selects rows 0 and 1 but excludes row 2,
#so only 9 and 11 are printed; with a[0:,1] you would get all the elements, i.e. [ 9 11 13].
```
<b>linspace:</b>
This is another operation in python numpy which returns evenly spaced numbers over a specified interval.
```
import numpy as np
a=np.linspace(10,1,5)
print(a) # prints 5 evenly spaced values from 10 down to 1
```
<b>Min, max, mean, sum, square root, standard deviation, etc.</b>
```
import numpy as np
a= np.array([19,23,56,10,19,76,84,90,12])
print(a.min())
print(a.max())
print(a.sum())
print(a.mean())
print(np.sqrt(a))
print(np.std(a))
a=np.array([(8,9),(10,11),(12,13)])
print(a.min())
print(a.max())
print(a.sum())
print(a.mean())
print(np.sqrt(a))
print(np.std(a))
```
<b>Calculating mean, median with numpy inbuilt functions</b>
```
import numpy as np
# 1D array
arr = [20, 2, 7, 1, 34,45,67]
print("arr : ", arr)
print("mean of arr : ", np.mean(arr))
print("median of arr : ", np.median(arr))
import numpy as np
# 2D array
arr = [[14, 17, 12, 33, 44],
[15, 6, 27, 8, 19],
[23, 2, 54, 1, 4, ]]
# median of the flattened array
print("\nmedian of arr, axis = None : ", np.median(arr))
print("\nmean of arr, axis = None : ", np.mean(arr))
# median along the axis = 0
print("\nmedian of arr, axis = 0 : ", np.median(arr, axis = 0))
print("\nmean of arr, axis = 0 : ", np.mean(arr, axis = 0))
# median along the axis = 1
print("\nmedian of arr, axis = 1 : ", np.median(arr, axis = 1))
print("\nmean of arr, axis = 1 : ", np.mean(arr, axis = 1))
out_arr = np.arange(3)
print("\nout_arr : ", out_arr)
print("median of arr, axis = 1 : ",
np.median(arr, axis = 1, out = out_arr))
```
<b>Arithmetic Operations</b>
```
#You can perform more operations on numpy array i.e addition, subtraction,multiplication and division of the two matrices.
import numpy as np
x= np.array([(1,2,3),(3,4,5)])
y= np.array([(1,2,3),(3,4,5)])
print(x+y)
print(x-y)
print(x*y)
print(x/y)
```
<b>Vertical & Horizontal Stacking</b>
If you want to concatenate two arrays rather than add them element-wise, you can do it in two ways: vertical stacking and horizontal stacking.
```
import numpy as np
x= np.array([(1,2,3),(3,4,5)])
y= np.array([(1,2,3),(3,4,5)])
print(np.vstack((x,y)))
print(np.hstack((x,y)))
```
<b>ravel</b>
There is one more operation, ravel, which flattens a NumPy array into a single one-dimensional array.
```
import numpy as np
x= np.array([(1,2,3),(3,4,5)])
print(x.ravel())
```
<b>Python Numpy Special Functions</b>
```
#There are various special functions available in numpy such as sine, cosine, tan, log etc
import numpy as np
import matplotlib.pyplot as plt
x= np.arange(0,3*np.pi,0.1)
y=np.sin(x)
plt.plot(x,y)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
x= np.arange(0,3*np.pi,0.1)
y=np.cos(x)
plt.plot(x,y)
plt.show()
#Exp
a= np.array([1,2,3])
print(np.exp(a))
#log
import numpy as np
import matplotlib.pyplot as plt
a= np.array([1,2,3])
print(np.log(a))
```
<b>Creating identity matrices, zero matrices, and matrix multiplication using NumPy</b>
```
#Identity matrix
import numpy as np
# 2x2 identity matrix: 1's on the main diagonal
b = np.identity(2, dtype = float)
print("Matrix b : \n", b)
a = np.identity(4)
print("\nMatrix a : \n", a)
#Zero matrix
import numpy as np
# 2x2 matrix of zeros
b = np.zeros((2,2), dtype = float)
print("Matrix b : \n", b)
a = np.zeros((4,4))
print("\nMatrix a : \n", a)
#Matrix multiplication
a = np.array([[1, 0],[0, 1]])
b = np.array([[4, 1],[2, 2]])
print(np.matmul(a, b))
#Matrix transpose
x = np.arange(4).reshape((2,2))
print(x)
print(np.transpose(x))
```
<h2>An Introduction to Pandas in Python</h2>
Pandas is a software library written for the Python programming language. It is used for data manipulation and analysis, and provides special data structures and operations for the manipulation of numerical tables and time series.
Pandas is a Python module which rounds up the capabilities of NumPy, SciPy and Matplotlib. The name pandas is derived from "panel data" and "Python data analysis".
```
#pip install pandas
import pandas as pd
```
<h3>Data structures in pandas</h3>
<b>Dataframe and series</b>
<b>A DataFrame is a two-dimensional array of values with both a row and a column index.</b>
<b>A Series is a one-dimensional array of values with an index.</b>

Where a DataFrame is the entire dataset, including all rows and columns, a Series is essentially a single column within that DataFrame.
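As a quick illustration of that relationship (the column names and values here are made up for the example):

```python
import pandas as pd

df = pd.DataFrame({"city": ["London", "Berlin"], "population": [8615246, 3562166]})
col = df["population"]   # selecting one column of a DataFrame yields a Series
print(type(df).__name__, type(col).__name__)
```

Selecting a single column with `df["population"]` gives back a Series, while the DataFrame itself holds all columns together.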
<h3>Series</h3>
A Series is a one-dimensional labelled array-like object. It is capable of holding any data type, e.g. integers, floats, strings, Python objects, and so on. It can be seen as a data structure with two arrays: one functioning as the index, i.e. the labels, and the other containing the actual data.
```
import pandas as pd
S = pd.Series([11, 28, 72, 3, 5, 8])
print(S)
```
We haven't defined an index in our example, but we see two columns in our output: The right column contains our data, whereas the left column contains the index. Pandas created a default index starting with 0 going to 5, which is the length of the data minus 1.
```
print(S.index)
print(S.values)
```
<b>Difference between Numpy array and Series</b>
There is often some confusion about whether Pandas is an alternative to Numpy, SciPy and Matplotlib. The truth is that it is built on top of Numpy. This means that Numpy is required by pandas. Scipy and Matplotlib on the other hand are not required by pandas but they are extremely useful. That's why the Pandas project lists them as "optional dependency".
```
import numpy as np
X = np.array([11, 28, 72, 3, 5, 8])
print(X)
print(S.values)
# both are the same type:
print(type(S.values), type(X))
#What is the actual difference
fruits = ['apples', 'oranges', 'cherries', 'pears'] # We can define Series objects with individual, arbitrary indices
quantities = [20, 33, 52, 10]
S = pd.Series(quantities, index=fruits)
print(S)
#adding two series with the same indices gives a new series with the same index, where the corresponding values are added
fruits = ['apples', 'oranges', 'cherries', 'pears']
S = pd.Series([20, 33, 52, 10], index=fruits)
S2 = pd.Series([17, 13, 31, 32], index=fruits)
print(S + S2)
print("sum of S: ", sum(S))
#The indices do not have to be the same for the Series addition. The index will be the "union" of both indices.
#If an index doesn't occur in both Series, the value for this Series will be NaN
fruits = ['peaches', 'oranges', 'cherries', 'pears']
fruits2 = ['raspberries', 'oranges', 'cherries', 'pears']
S = pd.Series([20, 33, 52, 10], index=fruits)
S2 = pd.Series([17, 13, 31, 32], index=fruits2)
print(S + S2)
#indices can be completely different, as in the following example.
#We have two indices. One is the Turkish translation of the English fruit names:
fruits = ['apples', 'oranges', 'cherries', 'pears']
fruits_tr = ['elma', 'portakal', 'kiraz', 'armut']
S = pd.Series([20, 33, 52, 10], index=fruits)
S2 = pd.Series([17, 13, 31, 32], index=fruits_tr)
print(S + S2)
```
<h3>Series indexing</h3>
```
print('Single Indexing',S['apples'])
print('@@@@@@@@@@@@@@@@')
print('Multi Indexing ', S[['apples', 'oranges', 'cherries']])
```
<h3>pandas.Series.apply</h3>
The function "func" will be applied to the Series and it returns either a Series or a DataFrame, depending on "func".

| Parameter | Meaning |
| --- | --- |
| func | A function; either a NumPy function that is applied to the entire Series, or a Python function that is applied to every single value of the Series |
| convert_dtype | A boolean value. If set to True (default), apply will try to find a better dtype for the elementwise function results. If False, leave as dtype=object |
| args | Positional arguments passed to "func" in addition to the values from the Series |
| **kwds | Additional keyword arguments passed as keywords to "func" |
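The `args` and `**kwds` parameters are simply forwarded to `func`; a minimal sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3])

def scale(x, factor=1):
    return x * factor

print(s.apply(scale, args=(10,)).tolist())   # positional extra argument
print(s.apply(scale, factor=100).tolist())   # keyword extra argument
```

This avoids having to wrap `func` in a lambda just to pass extra arguments.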
```
#Ex
S.apply(np.log)
# Let's assume we have the following task: check the amount of fruit of every kind.
# If there are fewer than 50 available, we augment the stock by 10:
S.apply(lambda x: x if x > 50 else x+10 )
S>30
#Conditioning in a series
S[S>30]
"apples" in S
#Creating Series Objects from Dictionaries
cities = {"London": 8615246,
"Berlin": 3562166,
"Madrid": 3165235,
"Rome": 2874038,
"Paris": 2273305,
"Vienna": 1805681,
"Bucharest": 1803425,
"Hamburg": 1760433,
"Budapest": 1754000,
"Warsaw": 1740119,
"Barcelona": 1602386,
"Munich": 1493900,
"Milan": 1350680}
city_series = pd.Series(cities)
print(city_series)
```
<h3>Handling missing data in pandas</h3>
One common problem in data analysis tasks is missing data. Pandas makes working with missing data as easy as possible.
```
my_cities = ["London", "Paris", "Zurich", "Berlin",
"Stuttgart", "Hamburg"]
my_city_series = pd.Series(cities,
index=my_cities)
my_city_series
```
Due to the NaN values, the population values for the other cities are turned into floats. There is no missing data in the following example, so the values are ints:
```
my_cities = ["London", "Paris", "Berlin", "Hamburg"]
my_city_series = pd.Series(cities,
index=my_cities)
my_city_series
#Finding whether a data is null or not
my_cities = ["London", "Paris", "Zurich", "Berlin",
"Stuttgart", "Hamburg"]
my_city_series = pd.Series(cities,
index=my_cities)
print(my_city_series.isnull())
print(my_city_series.notnull())
#Drop the nulls
print(my_city_series.dropna())
#Fill the nulls
print(my_city_series.fillna(0))
missing_cities = {"Stuttgart":597939, "Zurich":378884}
my_city_series.fillna(missing_cities)
#The values are still not integers; we can convert them to int
my_city_series = my_city_series.fillna(0).astype(int)
print(my_city_series)
#Further pandas topics will be covered in the week 4 classes
l1=[1,2,3]
l2=[10,4,5]
list(zip(l1,l2))
```
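As an alternative to filling with 0 and casting, newer pandas versions (1.0+) offer the nullable integer dtype `"Int64"` (capital I), which keeps missing entries as `<NA>` without upcasting the rest to float. A sketch, assuming a recent pandas:

```python
import pandas as pd

cities = {"London": 8615246, "Berlin": 3562166, "Hamburg": 1760433}
my_cities = ["London", "Zurich", "Berlin", "Hamburg"]

# astype("Int64") converts to pandas' nullable integer dtype:
# the missing Zurich entry becomes <NA> and the remaining values stay integers.
s = pd.Series(cities, index=my_cities).astype("Int64")
print(s)
```

With the plain `int64` dtype this conversion would fail, because NumPy integers cannot represent a missing value.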
<h3>Pandas Dataframes</h3>
```
#Creating a pandas dataframe from list of lists
import pandas as pd
Df = pd.DataFrame(data = [
['NJ', 'Towaco', 'Square'],
['CA', 'San Francisco', 'Oval'],
['TX', 'Austin', 'Triangle'],
['MD', 'Baltimore', 'Square'],
['OH', 'Columbus', 'Hexagon'],
['IL', 'Chicago', 'Circle']],
columns = ['State', 'City', 'Shape'])
Df
#Creating a DataFrame from a dict of ndarrays/lists
import pandas as pd
# initialise data of lists.
data = {'Name':['Tom', 'nick', 'krish', 'jack'], 'Age':[20, 21, 19, 18]}
# Create DataFrame
df = pd.DataFrame(data)
df
#Create an indexed DataFrame using arrays.
import pandas as pd
# initialise data of lists.
data = {'Name':['Tom', 'Jack', 'nick', 'juli'], 'marks':[99, 98, 95, 90]}
# Creates pandas DataFrame.
df = pd.DataFrame(data, index =['rank1', 'rank2', 'rank3', 'rank4'])
# print the data
df
l1=[1,2,3]
l2=[10,4,5]
list(zip(l1,l2))
#Creating a DataFrame using the zip() function.
#Two lists can be merged using list(zip()). Then create the pandas DataFrame by calling pd.DataFrame().
import pandas as pd
# List1
Name = ['tom', 'krish', 'nick', 'juli']
# List2
Age = [25, 30, 26, 22]
# get the list of tuples from two lists.
# and merge them by using zip().
list_of_tuples = list(zip(Name, Age))
# Assign data to tuples.
list_of_tuples
print(list_of_tuples)
# Converting lists of tuples into
# pandas Dataframe.
df = pd.DataFrame(list_of_tuples, columns = ['Name', 'Age'])
# Print data.
df
#Creating DataFrame from Dicts of series.
import pandas as pd
# Initialise data to Dicts of series.
d = {'one' : pd.Series([10, 20, 30, 40], index =['a', 'b', 'c', 'd']),
'two' : pd.Series([10, 20, 30, 40], index =['a', 'b', 'c', 'd'])}
# creates Dataframe.
df = pd.DataFrame(d)
# print the data.
df
#Load data from csv
import pandas as pd
df = pd.read_csv('RegularSeasonCompactResults.csv')
#Head,tail
df.head(5)
df.dtypes
df[['Wscore', 'Lscore']].head()
#Shape of dataset
df.shape
#Columns in dataset
df.columns
#we can call the describe() function to see statistics like mean, min, etc about each column of the dataset.
df.describe()
#Max ,min,mean,median
df.max()
df['Wscore'].max()
df['Wscore'].argmax()#Let's say we want to actually see the game(row) where this max score happened.
#We can call the argmax() function to identify the row index
df.iloc[[df['Wscore'].argmax()]]
#Let's take this a step further. Let's say you want to know the game with the highest scoring winning team (this is what we just calculated),
#but you then want to know how many points the losing team scored.
df.iloc[[df['Wscore'].argmax()]]['Lscore']
df[df['Wscore'] > 150]
#Extracting only values
df.values
df.values[0][1]
#Dataframe Iteration
df.isnull().sum()
#https://github.com/jonathanrocher/pandas_tutorial/blob/master/analyzing_and_manipulating_data_with_pandas_manual.pdf
#https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html
```
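The `#Dataframe Iteration` cell above only aggregates nulls per column; for genuine row-by-row iteration pandas provides `iterrows()` and the faster `itertuples()`. A minimal sketch on a toy frame (note that explicit iteration is slow — prefer vectorized operations where possible):

```python
import pandas as pd

df = pd.DataFrame({"Wscore": [78, 63, 81], "Lscore": [70, 56, 80]})

# iterrows() yields (index, Series) pairs; convenient but slow.
margins = []
for idx, row in df.iterrows():
    margins.append(row["Wscore"] - row["Lscore"])

# itertuples() yields lightweight namedtuples and is much faster.
margins_fast = [row.Wscore - row.Lscore for row in df.itertuples()]

print(margins)       # [8, 7, 1]
print(margins_fast)  # [8, 7, 1]
```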
<h3>Append,concatenate and merge dataframes</h3>
<h3>Append a dataframe</h3>
```
#Creating an empty dataframe
dfObj = pd.DataFrame(columns=['User_ID', 'UserName', 'Action'])
print( dfObj, sep='\n')
dfObj
# Append rows by adding dictionaries (note: DataFrame.append was removed in pandas 2.0; use pd.concat there)
dfObj = dfObj.append({'User_ID': 23, 'UserName': 'Riti', 'Action': 'Login'}, ignore_index=True)
dfObj = dfObj.append({'User_ID': 24, 'UserName': 'Aadi', 'Action': 'Logout'}, ignore_index=True)
dfObj = dfObj.append({'User_ID': 25, 'UserName': 'Jack', 'Action': 'Login'}, ignore_index=True)
print("Dataframe Contents ", dfObj, sep='\n')
#Create a completely empty DataFrame without any column names or indices
dfObj = pd.DataFrame()
print(dfObj)
# Append columns to the Empty DataFrame
dfObj['UserName'] = ['Riti', 'Aadi', 'Jack']
dfObj['Name'] = ['Riti', 'Aadi', 'Jack']
#dfObj['Name'] = ['Riti', 'Aadi', 'Jack']
print("Dataframe Contents ", dfObj, sep='\n') # A RangeIndex is formed automatically
#Create an empty Dataframe with column names & row indices but no data
dfObj = pd.DataFrame(columns=['User_ID', 'UserName', 'Action'], index=['a', 'b', 'c'])
print("Empty Dataframe", dfObj, sep='\n')
#Appending the data directly to the indices
dfObj.loc['a'] = [23, 'Riti', 'Login']
dfObj.loc['b'] = [24, 'Aadi', 'Logout']
dfObj.loc['c'] = [25, 'Jack', 'Login']
print("Dataframe Contents ", dfObj, sep='\n')
#Appending rows with for loop
import pandas as pd
cols = ['Zip']
lst = []
zip_code = 32100  # avoid shadowing the built-in zip()
for a in range(10):
    lst.append([zip_code])
    zip_code = zip_code + 1
df = pd.DataFrame(lst, columns=cols)
print(df)
```
<h3>Concatenate dataframe</h3>
```
#Concatenate two columns of dataframe in pandas python
import pandas as pd
import numpy as np
#Create a DataFrame
df1 = {
'State':['Arizona','Georgia','Newyork','Indiana','Florida'],
'State_code':['AZ','GG','NY','IN','SL'],
'Score':[62,47,55,74,31]}
df1 = pd.DataFrame(df1,columns=['State','State_code','Score'])
print(df1)
#Concatenate two string columns in pandas: let's concatenate two columns of the dataframe with '+' as shown below
df1['state_and_code'] = df1['State'] + df1['State_code']
print(df1)
#Concatenate two string columns with space
df1['state_and_code'] = df1['State'] +' '+ df1['State_code']
print(df1)
#Concatenate String and numeric column
df1['code_and_score'] = df1["State_code"]+ "-" + df1["Score"].map(str)
print(df1)
df2=df1.copy()
#Concating two dataframes
# Stack the DataFrames on top of each other, by default it is vertical concatenation
df3 = pd.concat([df1, df2], axis=0)
print(df3)
print('*' * 100)
# Place the DataFrames side by side
df4 = pd.concat([df1, df3.head(5)], axis=1)
print(df4)
#Concatenating with index resetting
df3 = pd.concat([df1, df2], axis=0,ignore_index=True)
print(df3)
#Concatenating pandas dataframes using .append()
dataflair_A = pd.DataFrame([['a', 1], ['b', 2]], columns=['letter', 'number'])
dataflair_B = pd.DataFrame([['c', 3], ['d', 4]], columns=['letter', 'number'])
result = dataflair_A.append(dataflair_B)
result
print('dataflair_A ->')
print(dataflair_A)
print('dataflair_B ->')
print(dataflair_B)
print('Result ->')
print(result)
```
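Note that `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0. On recent versions, `pd.concat` does the same job; a sketch of the equivalent of the `.append()` call above:

```python
import pandas as pd

dataflair_A = pd.DataFrame([['a', 1], ['b', 2]], columns=['letter', 'number'])
dataflair_B = pd.DataFrame([['c', 3], ['d', 4]], columns=['letter', 'number'])

# pd.concat stacks the frames vertically, just like the old .append();
# ignore_index=True renumbers the rows 0..3.
result = pd.concat([dataflair_A, dataflair_B], ignore_index=True)
print(result)
```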
<h3>Merge dataframes</h3>
#Why "Merge"?
You've probably encountered multiple data tables holding various bits of information that you would like to see all in one place — one dataframe in this case.
This is where the power of merge comes in: it efficiently combines multiple data tables into a single dataframe, in a nice and orderly fashion, for further analysis.
"Merging" two datasets is the process of bringing them together into one, aligning the rows from each based on common attributes or columns.
The words "merge" and "join" are used relatively interchangeably in pandas and other languages. Although pandas has both `merge` and `join` functions, they essentially do similar things.

To understand pd.merge, let's start with a simple line of code. The line below merges two dataframes — left_df and right_df — into one, based on their values in the column_name column available in both dataframes. With how='inner', this performs an inner merge that keeps only the values of column_name that match in both.
pd.merge(left_df, right_df, on='column_name', how='inner')
Since the how parameter accepts different values (pandas uses inner by default), we'll look into the options (left, right, inner, outer) and their use cases.
Quick look at the data:
user_usage — A first dataset containing users' monthly mobile usage statistics.
user_device — A second dataset containing details of an individual "use" of the system, with dates and device information.
android_device — A third dataset with device and manufacturer data, which lists all Android devices and their model codes.
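Since the CSV files loaded below live on a local disk, here is a self-contained preview of all four `how` values on two toy dataframes (column values invented for illustration):

```python
import pandas as pd

# Toy stand-ins for the user_usage / user_device CSVs
left_df = pd.DataFrame({'use_id': [1, 2, 3], 'monthly_mb': [100, 200, 300]})
right_df = pd.DataFrame({'use_id': [2, 3, 4],
                         'device': ['GT-I9505', 'SM-G930F', 'ONEPLUS A3003']})

# inner keeps only matching use_id values; left/right keep all keys from one
# side; outer keeps every key from both sides, filling gaps with NaN.
sizes = {how: len(pd.merge(left_df, right_df, on='use_id', how=how))
         for how in ['inner', 'left', 'right', 'outer']}
print(sizes)  # {'inner': 2, 'left': 3, 'right': 3, 'outer': 4}
```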
```
user_usage = pd.read_csv(r"D:\Data_Science\Batch1_Lessons\user_usage.csv")
user_device = pd.read_csv(r"D:\Data_Science\Batch1_Lessons\user_device.csv")
android_device = pd.read_csv(r"D:\Data_Science\Batch1_Lessons\android_devices.csv")
user_usage.head(5)
user_device.shape
user_device.head(5)
# INNER Merge
#Pandas uses “inner” merge by default. This keeps only the common values in both the left and right dataframes for the merged data.
#In our case, only the rows that contain use_id values that are common between user_usage and user_device remain in the merged data — inner_merge.
inner_merge = pd.merge(user_usage,user_device, on='use_id',how='inner')
inner_merge.head()
inner_merge.shape
android_device.head(5)
```
It’s important to note here that:
The column name use_id is shared between the user_usage and user_device.
The device column of user_device and Model column of the android_device dataframe contain common codes

```
#LEFT Merge
#Keep every row in the left dataframe.
#Where there are missing values of the “on” variable in the right dataframe, add empty / NaN values in the result.
left_merge = pd.merge(user_usage,user_device, on='use_id',how='left')
left_merge.head()
left_merge.tail()
```
As expected, the column use_id has been merged together. Rows of user_usage that have no match in the right dataframe — user_device — get NaN for the right-hand columns.
```
#RIGHT Merge
#To perform the right merge, we just repeat the code above by simply changing the parameter of how from left to right.
right_merge = pd.merge(user_usage,user_device, on='use_id',how='right')
right_merge.head()
right_merge.tail()
```
This time, rows with no match in the left dataframe — user_usage — get NaN for the left-hand columns.
```
inner_merge.tail()
```
Although the “inner” merge is used by Pandas by default, the parameter inner is specified above to be explicit.
With the operation above, the merged data — inner_merge — has a different size from the original left and right dataframes (user_usage & user_device), since only rows with common values are kept.
```
#Finally, we have “outer” merge.
#The “outer” merge combines all the rows for left and right dataframes with NaN when there are no matched values in the rows.
outer_merge = pd.merge(user_usage,user_device, on='use_id',how='outer',indicator=True)
outer_merge.head()
outer_merge.iloc[[0,1,200,201,350,351]]
```
To further illustrate how the “outer” merge works, we purposely specify certain rows of the outer_merge to understand where the rows originate from.
For the 1st and 2nd rows, the rows come from both the dataframes as they have the same values of use_id to be merged.
For the 3rd and 4th rows, the rows come from the left dataframe as the right dataframe doesn’t have the common values of use_id.
For the 5th and 6th rows, the rows come from the right dataframe as the left dataframe doesn’t have the common values of use_id.
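This origin information is exactly what the `indicator=True` flag records: the merged frame gains a `_merge` column with values `'both'`, `'left_only'`, or `'right_only'`. A toy sketch (invented values):

```python
import pandas as pd

left_df = pd.DataFrame({'use_id': [1, 2], 'monthly_mb': [10, 20]})
right_df = pd.DataFrame({'use_id': [2, 3], 'device': ['a', 'b']})

out = pd.merge(left_df, right_df, on='use_id', how='outer', indicator=True)
# _merge records where each row came from
origin = dict(zip(out['use_id'], out['_merge'].astype(str)))
print(origin)  # {1: 'left_only', 2: 'both', 3: 'right_only'}
```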
```
#Merge Dataframes with Different Column Names
#So we've talked about how to merge data in different ways — left, right, inner, and outer.
#But the on parameter only works when the column has the same name in both the left and right dataframes.
#When the names differ, we use left_on and right_on instead of on, as shown below.
left_merge = pd.merge(user_device,android_device, left_on='device',right_on='Model',how='left',indicator=True)
left_merge.head()
#Here we’ve merged user_device with android_device since they both contain common codes in their columns — device and Model respectively.
```
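Because the dataframes above come from local CSVs, here is a self-contained sketch of `left_on`/`right_on` with invented device codes:

```python
import pandas as pd

user_dev = pd.DataFrame({'use_id': [1, 2], 'device': ['GT-I9505', 'SM-G930F']})
android = pd.DataFrame({'Model': ['GT-I9505', 'SM-G930F'],
                        'Manufacturer': ['Samsung', 'Samsung']})

# left_on/right_on name the join column on each side when the names differ;
# both columns are kept in the result.
merged = pd.merge(user_dev, android, left_on='device', right_on='Model', how='left')
print(merged)
```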
<h2>A Brief Introduction to matplotlib for Data Visualization</h2>
```
#Install matplotlib (run this in a shell, not in Python):
# python3 -m pip install matplotlib
```
<h3>Data import and modules import</h3>
```
#We have to import pyplot to get a MATLAB-like plotting environment, and mlines to draw lines on a plot
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
#Lets import the data and work on it
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
with cbook.get_sample_data('goog.npz') as datafile:
    price_data = np.load(datafile)['price_data'].view(np.recarray)
price_data = price_data[-250:]  # get the most recent 250 trading days
type(price_data)
#We then transform the data in a way that is done quite often for time series, etc.
#We find the fractional change, $d_i$, between each observation and the one before it:
delta1 = np.diff(price_data.adj_close) / price_data.adj_close[:-1]
#We can also look at the transformations of different variables, such as volume and closing price:
# Marker size in units of points^2
volume = (15 * price_data.volume[:-2] / price_data.volume[0])**2
close = 0.003 * price_data.close[:-2] / 0.003 * price_data.open[:-2]
```
To actually plot this data, you can use the subplots() function from plt (matplotlib.pyplot). By default this generates the figure and the axes of a plot.
Here we will make a scatter plot of the differences between successive days. To elaborate, x is the difference between day i and the previous day. y is the difference between day i+1 and the previous day (i):
```
fig, ax = plt.subplots()
ax.scatter(delta1[:-1], delta1[1:], c=close, s=volume, alpha=0.5)
ax.set_xlabel(r'$\Delta_i$', fontsize=15)
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=15)
ax.set_title('Volume and percent change')
ax.grid(True)
fig.tight_layout()
plt.show() #plt.show() displays the plot for us.
```
<h3>Adding a Line</h3>
```
#We can add a line to this plot by providing x and y coordinates as lists to a Line2D instance:
import matplotlib.lines as mlines
fig, ax = plt.subplots()
line = mlines.Line2D([-.15,0.25], [-.07,0.09], color='red')
ax.add_line(line)
# reusing scatterplot code
ax.scatter(delta1[:-1], delta1[1:], c=close, s=volume, alpha=0.5)
ax.set_xlabel(r'$\Delta_i$', fontsize=15)
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=15)
ax.set_title('Volume and percent change')
ax.grid(True)
fig.tight_layout()
plt.show()
```
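On matplotlib 3.3 and newer, the same straight line can be drawn more directly with `Axes.axline`, which takes two points (or a point and a `slope=` keyword) and extends the line across the axes. A minimal sketch:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# axline draws an infinite straight line through the two given points
ax.axline((-0.15, -0.07), (0.25, 0.09), color='red')
ax.set_title('Line via axline')
```

Unlike a fixed `Line2D`, the `axline` line automatically extends to the current axes limits when you zoom or pan.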
<h3>Plotting Histograms</h3>
To plot a histogram, we follow a similar process and use the hist() function from pyplot. We will generate 10000 random data points, x, with a mean of 100 and standard deviation of 15.
The hist function takes the data, x, number of bins, and other arguments such as density, which normalizes the data to a probability density, or alpha, which sets the transparency of the histogram.
We will also add a line representing the normal density function with the same mean and standard deviation, computed directly with NumPy (the old `matplotlib.mlab.normpdf` helper has been removed from recent matplotlib):
```
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma*np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 30, density=True, facecolor='blue', alpha=0.75)
# add a 'best fit' line: the normal pdf, computed directly with NumPy
# (mlab.normpdf was removed from matplotlib)
y = np.exp(-0.5 * ((bins - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
l = plt.plot(bins, y, 'r--', linewidth=4)
plt.xlabel('IQ')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
```
<h3>Bar Charts</h3>
While histograms help us visualize densities, bar charts help us view counts of data. To plot a bar chart with matplotlib, we use the bar() function. This takes the counts and data labels as x and y, along with other arguments.
As an example, we could look at a sample of the number of programmers that use different languages:
```
import numpy as np
import matplotlib.pyplot as plt
objects = ('Python', 'C++', 'Java', 'Perl', 'Scala', 'Lisp')
y_pos = np.arange(len(objects))
performance = [10,8,6,4,2,1]
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('Usage')
plt.title('Programming language usage')
plt.show()
```
<h3>Boxplot</h3>
```
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# fake up some data
spread = np.random.rand(50) * 100
center = np.ones(25) * 50
flier_high = np.random.rand(10) * 100 + 100
flier_low = np.random.rand(10) * -100
data = np.concatenate((spread, center, flier_high, flier_low))
fig1, ax1 = plt.subplots()
ax1.set_title('Basic Plot')
ax1.boxplot(data)
```
<h3>Subplots</h3>
<b>The subplot() function allows you to plot different things in the same figure. In the following script, sine and cosine values are plotted.</b>
```
import numpy as np
import matplotlib.pyplot as plt
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
plt.figure(figsize=(10,4), dpi=120) # 10 is width, 4 is height
# Left hand side plot
plt.subplot(1,2,1) # (nRows, nColumns, axes number to plot)
plt.plot([1,2,3,4,5], [1,2,3,4,10], 'go') # green dots
plt.title('Scatterplot Greendots')
plt.xlabel('X'); plt.ylabel('Y')
plt.xlim(0, 6); plt.ylim(0, 12)
# Right hand side plot
plt.subplot(1,2,2)
plt.plot([1,2,3,4,5], [2,3,4,5,11], 'b*') # blue stars
plt.title('Scatterplot Bluestars')
plt.xlabel('X'); plt.ylabel('Y')
plt.xlim(0, 6); plt.ylim(0, 12)
plt.show()
```
#matplotlib.pyplot.subplot2grid(shape, loc, rowspan=1, colspan=1, fig=None, **kwargs)

| Parameter | Meaning |
|---|---|
| `shape` | Sequence of 2 ints: the number of rows and number of columns of the grid in which to place the axis. |
| `loc` | Sequence of 2 ints: the row number and column number at which to place the axis within the grid. |
| `rowspan` | int — number of rows for the axis to span downwards. |
| `colspan` | int — number of columns for the axis to span to the right. |
| `fig` | Figure, optional — figure to place the axis in; defaults to the current figure. |
| `**kwargs` | Additional keyword arguments are handed to `add_subplot`. |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Setup the subplot2grid Layout
fig = plt.figure(figsize=(10, 5))
ax1 = plt.subplot2grid((2,4), (0,0))
ax2 = plt.subplot2grid((2,4), (0,1))
ax3 = plt.subplot2grid((2,4), (0,2))
ax4 = plt.subplot2grid((2,4), (0,3))
ax5 = plt.subplot2grid((2,4), (1,0), colspan=2)
ax6 = plt.subplot2grid((2,4), (1,2))
ax7 = plt.subplot2grid((2,4), (1,3))
# Input Arrays
n = np.array([0,1,2,3,4,5])
x = np.linspace(0,5,10)
xx = np.linspace(-0.75, 1., 100)
# Scatterplot
ax1.scatter(xx, xx + np.random.randn(len(xx)))
ax1.set_title("Scatter Plot")
# Step Chart
ax2.step(n, n**2, lw=2)
ax2.set_title("Step Plot")
# Bar Chart
ax3.bar(n, n**2, align="center", width=0.5, alpha=0.5)
ax3.set_title("Bar Chart")
# Fill Between
ax4.fill_between(x, x**2, x**3, color="steelblue", alpha=0.5);
ax4.set_title("Fill Between");
# Time Series
dates = pd.date_range('2018-01-01', periods = len(xx))
ax5.plot(dates, xx + np.random.randn(len(xx)))
ax5.set_xticks(dates[::30])
ax5.set_xticklabels(dates.strftime('%Y-%m-%d')[::30])
ax5.set_title("Time Series")
# Box Plot
ax6.boxplot(np.random.randn(len(xx)))
ax6.set_title("Box Plot")
# Histogram
ax7.hist(xx + np.random.randn(len(xx)))
ax7.set_title("Histogram")
fig.tight_layout()
# Credits to
# Matplotlib tutorial
# Nicolas P. Rougier
# https://github.com/rougier/matplotlib-tutorial
```
# Sorting, searching, and counting
```
import numpy as np
np.__version__
author = 'kyubyong. longinglove@nate.com'
```
## Sorting
Q1. Sort x along the second axis.
```
x = np.array([[1,4],[3,1]])
out = np.sort(x, axis=1)
x.sort(axis=1)
assert np.array_equal(out, x)
print(out)
```
Q2. Sort pairs of surnames and first names and return their indices. (first by surname, then by name).
```
surnames = ('Hertz', 'Galilei', 'Hertz')
first_names = ('Heinrich', 'Galileo', 'Gustav')
print(np.lexsort((first_names, surnames)))
```
Q3. Get the indices that would sort x along the second axis.
```
x = np.array([[1,4],[3,1]])
out = np.argsort(x, axis=1)
print(out)
```
Q4. Partition x so that the element at index 5 is in its sorted position, with all smaller elements placed before it and all larger elements after it.
```
x = np.random.permutation(10)
print("x =", x)
print("\nCheck that the element at index 5 of the new array is 5, that the first five elements are all smaller than 5, and that the 6th through the last are bigger than 5\n")
out = np.partition(x, 5)
x.partition(5) # in-place equivalent
assert np.array_equal(x, out)
print(out)
```
Q5. Get the indices that would partition x so that the element at index 3 lands in its sorted position.
```
x = np.random.permutation(10)
print("x =", x)
partitioned = np.partition(x, 3)
indices = np.argpartition(x, 3)
print("partitioned =", partitioned)
print("indices =", indices)
assert np.array_equiv(x[indices], partitioned)
```
## Searching
Q6. Get the maximum and minimum values and their indices of x along the second axis.
```
x = np.random.permutation(10).reshape(2, 5)
print("x =", x)
print("maximum values =", np.max(x, 1))
print("max indices =", np.argmax(x, 1))
print("minimum values =", np.min(x, 1))
print("min indices =", np.argmin(x, 1))
```
Q7. Get the maximum and minimum values and their indices of x along the second axis, ignoring NaNs.
```
x = np.array([[np.nan, 4], [3, 2]])
print("maximum values ignoring NaNs =", np.nanmax(x, 1))
print("max indices =", np.nanargmax(x, 1))
print("minimum values ignoring NaNs =", np.nanmin(x, 1))
print("min indices =", np.nanargmin(x, 1))
```
Q8. Get the values and indices of the elements that are bigger than 2 in x.
```
x = np.array([[1, 2, 3], [1, 3, 5]])
print("Values bigger than 2 =", x[x > 2])
print("Their indices are ", np.nonzero(x > 2))
assert np.array_equiv(x[x>2], x[np.nonzero(x > 2)])
assert np.array_equiv(x[x>2], np.extract(x > 2, x))
```
Q9. Get the indices of the elements that are bigger than 2 in the flattened x.
```
x = np.array([[1, 2, 3], [1, 3, 5]])
print(np.flatnonzero(x > 2))
assert np.array_equiv(np.flatnonzero(x > 2), np.nonzero((x > 2).ravel())[0])
```
Q10. Check the elements of x and return 0 if it is less than 0, otherwise the element itself.
```
x = np.arange(-5, 4).reshape(3, 3)
print(np.where(x < 0, 0, x))
```
Q11. Get the indices where elements of y should be inserted to x to maintain order.
```
x = [1, 3, 5, 7, 9]
y = [0, 4, 2, 6]
np.searchsorted(x, y)
```
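`np.searchsorted` also takes a `side` argument that controls where the insertion point falls for values already present in the array. A short sketch:

```python
import numpy as np

x = np.array([1, 3, 5, 7, 9])
y = [0, 4, 2, 6]

idx = np.searchsorted(x, y)  # side='left' is the default
print(idx)  # [0 2 1 3]

# With side='right', insertion points for values already in x move past them:
print(np.searchsorted(x, [3, 5], side='left'))   # [1 2]
print(np.searchsorted(x, [3, 5], side='right'))  # [2 3]
```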
## Counting
Q12. Get the number of nonzero elements in x.
```
x = np.array([[0, 1, 7, 0, 0], [3, 0, 0, 2, 19]])
print(np.count_nonzero(x))
assert np.count_nonzero(x) == len(x[x != 0])
```
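A related counting tool is `np.unique` with `return_counts=True`, which tallies every distinct value at once. A short sketch:

```python
import numpy as np

x = np.array([0, 1, 7, 0, 0, 3, 0, 0, 2, 19])
# values holds the sorted distinct elements; counts their occurrence counts
values, counts = np.unique(x, return_counts=True)
print(values)  # [ 0  1  2  3  7 19]
print(counts)  # [5 1 1 1 1 1]
```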
# Building your Recurrent Neural Network - Step by Step
Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
**Notation**:
- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
- Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.
We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
Let's first import all the packages that you will need during this assignment.
```
import numpy as np
from rnn_utils import *
```
## 1 - Forward propagation for the basic Recurrent Neural Network
Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
<img src="images/RNN.png" style="width:500px;height:300px;">
<caption><center> **Figure 1**: Basic RNN model </center></caption>
Here's how you can implement an RNN:
**Steps**:
1. Implement the calculations needed for one time-step of the RNN.
2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.
Let's go!
## 1.1 - RNN cell
A Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
<img src="images/rnn_step_forward.png" style="width:700px;height:300px;">
<caption><center> **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ </center></caption>
**Exercise**: Implement the RNN-cell described in Figure (2).
**Instructions**:
1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.
3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache
4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cache
We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
```
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ### (≈2 lines)
    # compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)
    ### END CODE HERE ###

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
```
**Expected Output**:
<table>
<tr>
<td>
**a_next[4]**:
</td>
<td>
[ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
</td>
</tr>
<tr>
<td>
**a_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**yt[1]**:
</td>
<td>
[ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
</td>
</tr>
<tr>
<td>
**yt.shape**:
</td>
<td>
(2, 10)
</td>
</tr>
</table>
## 1.2 - RNN forward pass
You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
<img src="images/rnn_minusculas.png" style="width:800px;height:300px;">
<caption><center> **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. </center></caption>
**Exercise**: Code the forward propagation of the RNN described in Figure (3).
**Instructions**:
1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
2. Initialize the "next" hidden state as $a_0$ (initial hidden state).
3. Start looping over each time step, your incremental index is $t$ :
- Update the "next" hidden state and the cache by running `rnn_cell_forward`
- Store the "next" hidden state in $a$ ($t^{th}$ position)
- Store the prediction in y
- Add the cache to the list of caches
4. Return $a$, $y$ and caches
```
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and parameters["Wya"]
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    ### START CODE HERE ###
    # initialize "a" and "y" with zeros (≈2 lines)
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))

    # Initialize a_next (≈1 line)
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
```
**Expected Output**:
<table>
<tr>
<td>
**a[4][1]**:
</td>
<td>
[-0.99999375 0.77911235 -0.99861469 -0.99833267]
</td>
</tr>
<tr>
<td>
**a.shape**:
</td>
<td>
(5, 10, 4)
</td>
</tr>
<tr>
<td>
**y[1][3]**:
</td>
<td>
[ 0.79560373 0.86224861 0.11118257 0.81515947]
</td>
</tr>
<tr>
<td>
**y.shape**:
</td>
<td>
(2, 10, 4)
</td>
</tr>
<tr>
<td>
**cache[1][1][3]**:
</td>
<td>
[-1.1425182 -0.34934272 -0.20889423 0.58662319]
</td>
</tr>
<tr>
<td>
**len(cache)**:
</td>
<td>
2
</td>
</tr>
</table>
Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).
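The vanishing-gradient effect can be seen in a small illustrative sketch (all numbers below are made up for illustration and are not part of the graded exercise): backpropagating through many tanh RNN steps repeatedly multiplies the gradient by $W_{aa}^T$ and by the local tanh derivative $(1-a^2)$, which tends to shrink it exponentially.

```python
import numpy as np

# Illustrative only: repeated application of one backprop step through a tanh RNN cell
np.random.seed(0)
n_a = 5
Waa = 0.2 * np.random.randn(n_a, n_a)   # hypothetical small recurrent weights
a = np.tanh(np.random.randn(n_a, 1))    # a stand-in hidden state
grad = np.ones((n_a, 1))                # gradient arriving at the last time-step

norms = []
for t in range(20):
    grad = np.dot(Waa.T, grad) * (1 - a**2)  # one step of the chain rule
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])  # the norm typically decays sharply over the 20 steps
```

With weights this small the gradient norm collapses quickly; with large weights it can instead explode, which is why plain RNNs struggle with long-range dependencies.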
In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.
## 2 - Long Short-Term Memory (LSTM) network
The following figure shows the operations of an LSTM cell.
<img src="images/LSTM.png" style="width:500px;height:400px;">
<caption><center> **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. </center></caption>
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.
### About the gates
#### - Forget gate
For the sake of illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:
$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$
Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information.
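As a small illustration (with hand-picked gate values, not part of the graded exercise), a near-zero forget-gate component erases the corresponding component of the cell state, while a near-one component keeps it:

```python
import numpy as np

# Hypothetical values, for illustration only
gamma_f = np.array([0.01, 0.99])   # forget gate: first component ~0, second ~1
c_prev = np.array([3.0, 5.0])      # previous cell state

kept = gamma_f * c_prev            # element-wise multiplication
print(kept)                        # first component nearly erased, second mostly kept
```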
#### - Update gate
Once we forget that the subject being discussed is singular, we need a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:
$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$
Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
#### - Updating the cell
To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:
$$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$
Finally, the new cell state is:
$$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$
#### - Output gate
To decide which outputs we will use, we will use the following two formulas:
$$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$
Where in equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state.
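To see equations (4) and (6) in action, here is a tiny worked example with hand-picked gate values (purely illustrative, not part of the graded exercise):

```python
import numpy as np

# Hand-picked gate values for illustration
gamma_f = np.array([1.0, 0.0])   # keep the first component of c_prev, forget the second
gamma_u = np.array([0.0, 1.0])   # write the candidate only into the second component
gamma_o = np.array([1.0, 1.0])   # expose both components in the hidden state
c_prev = np.array([3.0, 5.0])
c_tilde = np.array([-2.0, 0.5])

c_next = gamma_f * c_prev + gamma_u * c_tilde   # equation (4): keeps 3.0, writes 0.5
a_next = gamma_o * np.tanh(c_next)              # equation (6)
print(c_next)
print(a_next)
```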
### 2.1 - LSTM cell
**Exercise**: Implement the LSTM cell described in Figure (4).
**Instructions**:
1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.
3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
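Step 1 stacks $a^{\langle t-1 \rangle}$ on top of $x^{\langle t \rangle}$. A quick shape check, with illustrative dimensions:

```python
import numpy as np

n_a, n_x, m = 5, 3, 10                 # illustrative dimensions
a_prev = np.zeros((n_a, m))            # hidden state at t-1
xt = np.zeros((n_x, m))                # input at t

# Stack along axis 0, so rows 0..n_a-1 are a_prev and rows n_a.. are xt
concat = np.concatenate((a_prev, xt), axis=0)
print(concat.shape)                    # (8, 10), i.e. (n_a + n_x, m)
```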
```
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
    concat = np.concatenate((a_prev, xt), axis=0)  # concat has shape (n_a + n_x, m)
    # Compute ft, it, cct, c_next, ot, a_next using the formulas given in Figure (4) (≈6 lines)
    # forget gate: Gamma_f<t> = sigmoid(Wf [a<t-1>, x<t>] + bf)
    ft = sigmoid(np.dot(Wf, concat) + bf)
    # update gate: Gamma_u<t> = sigmoid(Wi [a<t-1>, x<t>] + bi)
    it = sigmoid(np.dot(Wi, concat) + bi)
    # candidate value: c_tilde<t> = tanh(Wc [a<t-1>, x<t>] + bc)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    # cell state: c<t> = Gamma_f<t> * c<t-1> + Gamma_u<t> * c_tilde<t>
    c_next = ft * c_prev + it * cct
    # output gate: Gamma_o<t> = sigmoid(Wo [a<t-1>, x<t>] + bo)
    ot = sigmoid(np.dot(Wo, concat) + bo)
    # hidden state: a<t> = Gamma_o<t> * tanh(c<t>)
    a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
```
**Expected Output**:
<table>
<tr>
<td>
**a_next[4]**:
</td>
<td>
[-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
</td>
</tr>
<tr>
<td>
**a_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**c_next[2]**:
</td>
<td>
[ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
</td>
</tr>
<tr>
<td>
**c_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**yt[1]**:
</td>
<td>
[ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
</td>
</tr>
<tr>
<td>
**yt.shape**:
</td>
<td>
(2, 10)
</td>
</tr>
<tr>
<td>
**cache[1][3]**:
</td>
<td>
[-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
</td>
</tr>
<tr>
<td>
**len(cache)**:
</td>
<td>
10
</td>
</tr>
</table>
### 2.2 - Forward pass for LSTM
Now that you have implemented one step of an LSTM, you can iterate over it in a for-loop to process a sequence of $T_x$ inputs.
<img src="images/LSTM_rnn.png" style="width:500px;height:300px;">
<caption><center> **Figure 4**: LSTM over multiple time-steps. </center></caption>
**Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps.
**Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
```
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wy"].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((n_a, m))
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
# one LSTM cell: lstm_cell_forward(xt, a_prev, c_prev, parameters) return a_next, c_next, yt_pred, cache
a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
        # Note: use a Python list here; np.append(caches, cache) flattens the cache and breaks the LSTM backward pass
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1][1] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
```
**Expected Output**:
<table>
<tr>
<td>
**a[4][3][6]** =
</td>
<td>
0.172117767533
</td>
</tr>
<tr>
<td>
**a.shape** =
</td>
<td>
(5, 10, 7)
</td>
</tr>
<tr>
<td>
**y[1][4][3]** =
</td>
<td>
0.95087346185
</td>
</tr>
<tr>
<td>
**y.shape** =
</td>
<td>
(2, 10, 7)
</td>
</tr>
<tr>
<td>
**caches[1][1][1]** =
</td>
<td>
[ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
</td>
</tr>
<tr>
<td>
**c[1][2][1]** =
</td>
<td>
-0.855544916718
</td>
</tr>
<tr>
<td>
**len(caches)** =
</td>
<td>
2
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.
The rest of this notebook is optional, and will not be graded.
## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives of the cost with respect to the parameters in order to update them. Similarly, in recurrent neural networks you calculate the derivatives of the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.
### 3.1 - Basic RNN backward pass
We will start by computing the backward pass for the basic RNN-cell.
<img src="images/rnn_cell_backprop.png" style="width:500;height:300px;"> <br>
<caption><center> **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule from calculus. The chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b_a})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. </center></caption>
#### Deriving the one step backward functions:
To compute `rnn_cell_backward`, you need the following derivatives. It is a good exercise to derive them by hand.
The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that $\text{sech}(x)^2 = 1 - \tanh(x)^2$.
Similarly, for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b_a}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$.
The final two equations follow the same rule and are derived using the $\tanh$ derivative. The terms are arranged so that the dimensions match.
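You can sanity-check the $\tanh$ derivative formula numerically with a central finite difference (illustrative only, not part of the graded exercise):

```python
import numpy as np

x = 0.7
eps = 1e-6
# Central difference approximation of d/dx tanh(x)
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
# The analytic formula stated above
analytic = 1 - np.tanh(x)**2
print(abs(numeric - analytic))   # the two should agree to many decimal places
```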
```
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
    # compute the gradient of tanh with respect to a_next (≈1 line)
    # a_next corresponds to a<t> from the forward pass: a<t> = tanh(Waa a<t-1> + Wax x<t> + ba),
    # and the derivative of tanh(u) is (1 - tanh(u)**2) du
    dtanh = (1 - a_next**2) * da_next
    # compute the gradients with respect to xt and Wax (≈2 lines)
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)
    # compute the gradients with respect to a_prev and Waa (≈2 lines)
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)
    # compute the gradient with respect to ba (≈1 line)
    dba = np.sum(dtanh, keepdims=True, axis=-1)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dxt"][1][2]** =
</td>
<td>
-0.460564103059
</td>
</tr>
<tr>
<td>
**gradients["dxt"].shape** =
</td>
<td>
(3, 10)
</td>
</tr>
<tr>
<td>
**gradients["da_prev"][2][3]** =
</td>
<td>
0.0842968653807
</td>
</tr>
<tr>
<td>
**gradients["da_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]** =
</td>
<td>
0.393081873922
</td>
</tr>
<tr>
<td>
**gradients["dWax"].shape** =
</td>
<td>
(5, 3)
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]** =
</td>
<td>
-0.28483955787
</td>
</tr>
<tr>
<td>
**gradients["dWaa"].shape** =
</td>
<td>
(5, 5)
</td>
</tr>
<tr>
<td>
**gradients["dba"][4]** =
</td>
<td>
[ 0.80517166]
</td>
</tr>
<tr>
<td>
**gradients["dba"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
#### Backward pass through the RNN
Computing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.
**Instructions**:
Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time step, and update the other variables accordingly.
```
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
# caches -- tuple of values needed for the backward pass, contains (list of caches, x)
(caches, x) = (caches[0], caches[1])
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x, m, T_x))
dWax = np.zeros((n_a, n_x))
dWaa = np.zeros((n_a, n_a))
dba = np.zeros((n_a, 1))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
# Loop through all the time steps
for t in reversed(range(T_x)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = rnn_cell_backward(da[:,:,t]+ da_prevt, caches[t])
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = dxt
dWax += dWaxt
dWaa += dWaat
dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dx"][1][2]** =
</td>
<td>
[-2.07101689 -0.59255627 0.02466855 0.01483317]
</td>
</tr>
<tr>
<td>
**gradients["dx"].shape** =
</td>
<td>
(3, 10, 4)
</td>
</tr>
<tr>
<td>
**gradients["da0"][2][3]** =
</td>
<td>
-0.314942375127
</td>
</tr>
<tr>
<td>
**gradients["da0"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]** =
</td>
<td>
11.2641044965
</td>
</tr>
<tr>
<td>
**gradients["dWax"].shape** =
</td>
<td>
(5, 3)
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]** =
</td>
<td>
2.30333312658
</td>
</tr>
<tr>
<td>
**gradients["dWaa"].shape** =
</td>
<td>
(5, 5)
</td>
</tr>
<tr>
<td>
**gradients["dba"][4]** =
</td>
<td>
[-0.74747722]
</td>
</tr>
<tr>
<td>
**gradients["dba"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
### 3.2 - LSTM backward pass
#### 3.2.1 - One step backward
The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.)
#### 3.2.2 - Gate derivatives
$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$
$$d\tilde c^{\langle t \rangle} = \left(dc_{next}*\Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \Gamma_u^{\langle t \rangle} * da_{next}\right) * (1-(\tilde c^{\langle t \rangle})^2) \tag{8}$$
$$d\Gamma_u^{\langle t \rangle} = \left(dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}\right)*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$
$$d\Gamma_f^{\langle t \rangle} = \left(dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}\right)*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$
#### 3.2.3 - Parameter derivatives
$$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$
$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$
$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$
$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$
To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis=1) on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively, using the `keepdims=True` option.
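A quick shape check of the bias gradients, with illustrative dimensions: summing over axis 1 with `keepdims=True` keeps the result a column vector, matching the shape of the bias.

```python
import numpy as np

n_a, m = 5, 10
d_gamma_f = np.random.randn(n_a, m)   # stand-in for dGamma_f at one time-step

# Sum over the examples axis; keepdims preserves the (n_a, 1) column shape
dbf = np.sum(d_gamma_f, axis=1, keepdims=True)
print(dbf.shape)    # (5, 1), the same shape as bf
```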
Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.
$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$
Here, the weights in equation 15 are the first $n_a$ columns of each matrix (i.e. $W_f[:,:n_a]$, etc.).
$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$
$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$
where the weights in equation 17 are the columns from $n_a$ to the end (i.e. $W_f[:,n_a:]$, etc.).
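Because each gate's weight matrix multiplies the stacked vector $[a_{prev}; x_t]$, its first $n_a$ columns act on $a_{prev}$ and its remaining $n_x$ columns act on $x_t$. A shape check with illustrative dimensions:

```python
import numpy as np

n_a, n_x = 5, 3
Wf = np.random.randn(n_a, n_a + n_x)   # e.g. the forget-gate weights

W_a = Wf[:, :n_a]    # part multiplying a_prev, used for da_prev
W_x = Wf[:, n_a:]    # part multiplying x_t, used for dxt
print(W_a.shape, W_x.shape)   # (5, 5) (5, 3)
```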
**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
```
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
                        dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
    # Compute gate-related derivatives using equations (7) to (10) (≈4 lines)
    # Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
    # and c stands for the memory value
    # dGamma_o<t> = da_next * tanh(c_next) * Gamma_o<t> * (1 - Gamma_o<t>)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    # dc_tilde<t>: the (1 - cct**2) factor multiplies the entire sum
    dcct = (dc_next * it + ot * (1 - np.tanh(c_next)**2) * it * da_next) * (1 - cct**2)
    # dGamma_u<t>: the it * (1 - it) factor multiplies the entire sum
    dit = (dc_next * cct + ot * (1 - np.tanh(c_next)**2) * cct * da_next) * it * (1 - it)
    # dGamma_f<t>: the ft * (1 - ft) factor multiplies the entire sum
    dft = (dc_next * c_prev + ot * (1 - np.tanh(c_next)**2) * c_prev * da_next) * ft * (1 - ft)
    # Compute parameter-related derivatives. Use equations (11)-(14) (≈8 lines)
    concat = np.concatenate((a_prev, xt), axis=0)
    dWf = np.dot(dft, concat.T)
    dWi = np.dot(dit, concat.T)
    dWc = np.dot(dcct, concat.T)
    dWo = np.dot(dot, concat.T)
dbf = np.sum(dft,axis=1,keepdims=True)
dbi = np.sum(dit,axis=1,keepdims=True)
dbc = np.sum(dcct,axis=1,keepdims=True)
dbo = np.sum(dot,axis=1,keepdims=True)
    # Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
    Wf = parameters["Wf"]
    Wi = parameters["Wi"]
    Wc = parameters["Wc"]
    Wo = parameters["Wo"]
    da_prev = (np.dot(Wf[:, :n_a].T, dft) + np.dot(Wi[:, :n_a].T, dit)
               + np.dot(Wc[:, :n_a].T, dcct) + np.dot(Wo[:, :n_a].T, dot))
    dc_prev = dc_next * ft + ot * (1 - np.tanh(c_next)**2) * ft * da_next
    dxt = (np.dot(Wf[:, n_a:].T, dft) + np.dot(Wi[:, n_a:].T, dit)
           + np.dot(Wc[:, n_a:].T, dcct) + np.dot(Wo[:, n_a:].T, dot))
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dxt"][1][2]** =
</td>
<td>
3.23055911511
</td>
</tr>
<tr>
<td>
**gradients["dxt"].shape** =
</td>
<td>
(3, 10)
</td>
</tr>
<tr>
<td>
**gradients["da_prev"][2][3]** =
</td>
<td>
-0.0639621419711
</td>
</tr>
<tr>
<td>
**gradients["da_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dc_prev"][2][3]** =
</td>
<td>
0.797522038797
</td>
</tr>
<tr>
<td>
**gradients["dc_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWf"][3][1]** =
</td>
<td>
-0.147954838164
</td>
</tr>
<tr>
<td>
**gradients["dWf"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWi"][1][2]** =
</td>
<td>
1.05749805523
</td>
</tr>
<tr>
<td>
**gradients["dWi"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWc"][3][1]** =
</td>
<td>
2.30456216369
</td>
</tr>
<tr>
<td>
**gradients["dWc"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWo"][1][2]** =
</td>
<td>
0.331311595289
</td>
</tr>
<tr>
<td>
**gradients["dWo"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dbf"][4]** =
</td>
<td>
[ 0.18864637]
</td>
</tr>
<tr>
<td>
**gradients["dbf"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbi"][4]** =
</td>
<td>
[-0.40142491]
</td>
</tr>
<tr>
<td>
**gradients["dbi"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbc"][4]** =
</td>
<td>
[ 0.25587763]
</td>
</tr>
<tr>
<td>
**gradients["dbc"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbo"][4]** =
</td>
<td>
[ 0.13893342]
</td>
</tr>
<tr>
<td>
**gradients["dbo"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
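The expected values above come from a fixed random seed. More generally, an analytic backward pass like `lstm_cell_backward` can be sanity-checked against centered finite differences. The helper below is a generic, self-contained sketch (it is not part of the assignment, and for brevity the example checks a simple `tanh` function rather than the full LSTM cell):

```python
import numpy as np

def grad_check(f, grad_f, x, eps=1e-7):
    """Relative error between an analytic gradient and centered finite differences.

    f      : function mapping array x to a scalar
    grad_f : function mapping x to the analytic gradient (same shape as x)
    """
    analytic = grad_f(x)
    numeric = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + eps
        f_plus = f(x)
        x[idx] = old - eps
        f_minus = f(x)
        x[idx] = old                       # restore the original entry
        numeric[idx] = (f_plus - f_minus) / (2 * eps)
    denom = np.linalg.norm(analytic) + np.linalg.norm(numeric)
    return np.linalg.norm(analytic - numeric) / max(denom, 1e-12)

# Example: d/dx sum(tanh(x)) = 1 - tanh(x)**2
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
err = grad_check(lambda v: np.sum(np.tanh(v)),
                 lambda v: 1 - np.tanh(v) ** 2, x)
print(err)  # very small if the analytic gradient is correct
```

The same pattern applies to the LSTM gradients: wrap the forward pass so that it returns a scalar loss, and compare each entry of the returned gradient dictionary against the numeric slope.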
### 3.3 Backward pass through the LSTM RNN
This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimensions as your return variables. You will then iterate over all the time steps starting from the end, calling the one-step function you implemented for the LSTM at each iteration. You will then update the parameters by summing the per-step gradients. Finally, return a dictionary with the new gradients.
**Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.
```
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
dc_prevt = np.zeros((n_a, m))
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros((n_a, n_a + n_x))
dWc = np.zeros((n_a, n_a + n_x))
dWo = np.zeros((n_a, n_a + n_x))
dbf = np.zeros((n_a, 1))
dbi = np.zeros((n_a, 1))
dbc = np.zeros((n_a, 1))
dbo = np.zeros((n_a, 1))
# loop back over the whole sequence
for t in reversed(range(T_x)):
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:,:,t]+da_prevt, dc_prevt, caches[t])
# Note: da_prevt and dc_prevt are initialized to zero and never updated inside this loop; a fully
# recurrent backward pass would also set da_prevt = gradients["da_prev"] and
# dc_prevt = gradients["dc_prev"] at the end of each iteration.
# Retrieve the derivatives from gradients and add them to the overall dWf, dWi, ..., dbc, dbo
dx[:,:,t] = gradients["dxt"]
dWf = dWf+gradients["dWf"]
dWi = dWi+gradients["dWi"]
dWc = dWc+gradients["dWc"]
dWo = dWo+gradients["dWo"]
dbf = dbf+gradients["dbf"]
dbi = dbi+gradients["dbi"]
dbc = dbc+gradients["dbc"]
dbo = dbo+gradients["dbo"]
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients["da_prev"]
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
# Note: Wy and by are reused from the earlier test cell so that the random stream (and hence the expected output) is unchanged
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4) # gradients are provided for only the first 4 time steps, so T_x = 4 in the backward pass
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dx"][1][2]** =
</td>
<td>
[-0.00173313 0.08287442 -0.30545663 -0.43281115]
</td>
</tr>
<tr>
<td>
**gradients["dx"].shape** =
</td>
<td>
(3, 10, 4)
</td>
</tr>
<tr>
<td>
**gradients["da0"][2][3]** =
</td>
<td>
-0.095911501954
</td>
</tr>
<tr>
<td>
**gradients["da0"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWf"][3][1]** =
</td>
<td>
-0.0698198561274
</td>
</tr>
<tr>
<td>
**gradients["dWf"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWi"][1][2]** =
</td>
<td>
0.102371820249
</td>
</tr>
<tr>
<td>
**gradients["dWi"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWc"][3][1]** =
</td>
<td>
-0.0624983794927
</td>
</tr>
<tr>
<td>
**gradients["dWc"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWo"][1][2]** =
</td>
<td>
0.0484389131444
</td>
</tr>
<tr>
<td>
**gradients["dWo"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dbf"][4]** =
</td>
<td>
[-0.0565788]
</td>
</tr>
<tr>
<td>
**gradients["dbf"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbi"][4]** =
</td>
<td>
[-0.06997391]
</td>
</tr>
<tr>
<td>
**gradients["dbi"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbc"][4]** =
</td>
<td>
[-0.27441821]
</td>
</tr>
<tr>
<td>
**gradients["dbc"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbo"][4]** =
</td>
<td>
[ 0.16532821]
</td>
</tr>
<tr>
<td>
**gradients["dbo"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
### Congratulations!
Congratulations on completing this assignment. You now understand how recurrent neural networks work!
Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.
```
%pylab inline
from constantLowSkill3 import *
Vgrid = np.load("LowSkillWorker3.npy")
gamma
num = 10000
'''
x = [w,n,m,s,e,o]
x = [5,0,0,0,0,0]
'''
from jax import random
def simulation(key):
initE = random.choice(a = nE, p=E_distribution, key = key)
initS = random.choice(a = nS, p=S_distribution, key = key)
x = [5, 0, 0, initS, initE, 0]
path = []
move = []
for t in range(T_min, T_max):
_, key = random.split(key)
if t == T_max-1:
_,a = V(t,Vgrid[:,:,:,:,:,:,t],x)
else:
_,a = V(t,Vgrid[:,:,:,:,:,:,t+1],x)
xp = transition(t,a.reshape((1,-1)),x)
p = xp[:,-1]
x_next = xp[:,:-1]
path.append(x)
move.append(a)
x = x_next[random.choice(a = nS*nE, p=p, key = key)]
path.append(x)
return jnp.array(path), jnp.array(move)
%%time
# simulation part
keys = vmap(random.PRNGKey)(jnp.arange(num))
Paths, Moves = vmap(simulation)(keys)
# x = [w,n,m,s,e,o]
# x = [0,1,2,3,4,5]
ws = Paths[:,:,0].T
ns = Paths[:,:,1].T
ms = Paths[:,:,2].T
ss = Paths[:,:,3].T
es = Paths[:,:,4].T
os = Paths[:,:,5].T
cs = Moves[:,:,0].T
bs = Moves[:,:,1].T
ks = Moves[:,:,2].T
hs = Moves[:,:,3].T
actions = Moves[:,:,4].T
plt.plot(detEarning)
plt.figure(figsize = [16,8])
plt.title("The mean values of simulation")
plt.plot(range(20, T_max + 21),jnp.mean(ws + H*pt*os - ms,axis = 1), label = "wealth + home equity")
plt.plot(range(20, T_max + 21),jnp.mean(ws,axis = 1), label = "wealth")
plt.plot(range(20, T_max + 20),jnp.mean(cs,axis = 1), label = "consumption")
plt.plot(range(20, T_max + 20),jnp.mean(bs,axis = 1), label = "bond")
plt.plot(range(20, T_max + 20),jnp.mean(ks,axis = 1), label = "stock")
plt.legend()
plt.title("housing consumption")
plt.plot(range(20, T_max + 20),(hs).mean(axis = 1), label = "housing")
plt.title("housing consumption for renting people")
plt.plot(hs[:, jnp.where(os.sum(axis = 0) == 0)[0]].mean(axis = 1), label = "housing")
plt.title("house owner percentage in the population")
plt.plot(range(20, T_max + 21),(os).mean(axis = 1), label = "owning")
jnp.where(os[T_max - 1, :] == 0)
# agent number, x = [w,n,m,s,e,o]
agentNum = 35
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
# agent buying time collection (renter at t, owner at t+1)
agentTime = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 1)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 1))[0]:
agentTime.append([t, agentNum])
agentTime = jnp.array(agentTime)
# agent holding time collection (renter at both t and t+1)
agentHold = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 0)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 0))[0]:
agentHold.append([t, agentNum])
agentHold = jnp.array(agentHold)
plt.title("wealth level for buyer and renter")
www = (os*(ws+H*pt - ms)).sum(axis = 1)/(os).sum(axis = 1)
for age in range(30):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, ws[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, www[age], color = "green")
plt.scatter(age, ws[renter[:,0], renter[:,1]].mean(),color = "r")
plt.title("employment status for buyer and renter")
for age in range(31):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, es[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, es[renter[:,0], renter[:,1]].mean(),color = "r")
# At every age
plt.plot((os[:T_max,:]*ks/(ks+bs)).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks/(ks+bs)).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
# At every age
plt.plot((os[:T_max,:]*ks).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
```
# Introduction
Machine learning competitions are a great way to improve your data science skills and measure your progress.
In this exercise, you will create and submit predictions for a Kaggle competition. You can then improve your model (e.g. by adding features) and see how you stack up against others taking this micro-course.
The steps in this notebook are:
1. Build a Random Forest model with all of your data (**X** and **y**)
2. Read in the "test" data, which doesn't include values for the target. Predict home values in the test data with your Random Forest model.
3. Submit those predictions to the competition and see your score.
4. Optionally, come back to see if you can improve your model by adding features or changing your model. Then you can resubmit to see how that stacks up on the competition leaderboard.
## Recap
Here's the code you've written so far. Start by running it again.
```
# Code you have previously used to load data
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex7 import *
# Path of the file to read. We changed the directory structure to simplify submitting to a competition
iowa_file_path = '../input/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))
# Using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE for best value of max_leaf_nodes: {:,.0f}".format(val_mae))
# Define the model. Set random_state to 1
rf_model = RandomForestRegressor(random_state=1)
rf_model.fit(train_X, train_y)
rf_val_predictions = rf_model.predict(val_X)
rf_val_mae = mean_absolute_error(rf_val_predictions, val_y)
print("Validation MAE for Random Forest Model: {:,.0f}".format(rf_val_mae))
```
# Creating a Model For the Competition
Build a Random Forest model and train it on all of **X** and **y**.
```
# To improve accuracy, create a new Random Forest model which you will train on all training data
rf_model_on_full_data = ____
# fit rf_model_on_full_data on all data from the training data
____
```
# Make Predictions
Read the "test" data file and apply your model to make predictions.
```
# path to file you will use for predictions
test_data_path = '../input/test.csv'
# read test data file using pandas
test_data = ____
# create test_X which comes from test_data but includes only the columns you used for prediction.
# The list of columns is stored in a variable called features
test_X = ____
# make predictions which we will submit.
test_preds = ____
# The lines below show how to save predictions in the format used for competition scoring
# Just uncomment them.
#output = pd.DataFrame({'Id': test_data.Id,
# 'SalePrice': test_preds})
#output.to_csv('submission.csv', index=False)
```
Before submitting, run a check to make sure your `test_preds` have the right format.
```
rf_model_on_full_data = RandomForestRegressor()
rf_model_on_full_data.fit(X, y)
test_data_path = '../input/test.csv'
test_data = pd.read_csv(test_data_path)
test_X = test_data[features]
test_preds = rf_model_on_full_data.predict(test_X)
step_1.assert_check_passed()
# Check your answer
step_1.check()
# step_1.solution()
```
# Test Your Work
To test your results, you'll need to join the competition (if you haven't already). So open a new window by clicking on [this link](https://www.kaggle.com/c/home-data-for-ml-course). Then click on the **Join Competition** button.

Next, follow the instructions below:
#$SUBMIT_TO_COMP$
# Continuing Your Progress
There are many ways to improve your model, and **experimenting is a great way to learn at this point.**
The best way to improve your model is to add features. Look at the list of columns and think about what might affect home prices. Some features will cause errors because of issues like missing values or non-numeric data types.
The **[Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning)** micro-course will teach you how to handle these types of features. You will also learn to use **xgboost**, a technique giving even better accuracy than Random Forest.
# Other Micro-Courses
The **[Pandas](https://kaggle.com/Learn/Pandas)** micro-course will give you the data manipulation skills to quickly go from conceptual idea to implementation in your data science projects.
You are also ready for the **[Deep Learning](https://kaggle.com/Learn/Deep-Learning)** micro-course, where you will build models with better-than-human level performance at computer vision tasks.
# Data Analytics I
## Training and Test Samples
For the following analysis, we use web-scraped used car offers from the online advertisement platform *myLemons* (credit goes to Anthony Strittmatter). We restrict our sample to the compact cars BMW 320 series, Opel Astra, Mercedes C-class, VW Golf, and VW Passat. We select only used cars with a mileage between 10,000-200,000 km and an age between 1-20 years.
The data set contains the following variables:
|Variable name| Description|
|:----|:----|
|**Outcome variables** ||
|*first_price*| First asking price in 1,000 CHF |
|**Baseline covariates**| |
|*bmw_320, opel_astra, mercedes_c, vw_golf, vw_passat*| Dummies for the car producer and model|
|*mileage*| Mileage of the used car (in 1,000 km)|
|*age_car_years*| Age of the used car (in years)|
|*diesel*| Dummy for diesel engines |
|*private_seller*| Dummy for private seller (as opposed to professional used car sellers) |
|*other_car_owner*| Number of previous car owners |
|*guarantee*| Dummy indicating that the seller offers a guarantee for the used car|
|*maintenance_cert*| Dummy indicating that the seller has a complete maintenance certificate for the used car|
|*inspection*| Categorical variable for the duration until next general inspection (3 categories: new, 1-2 years, < 1 year) |
|*pm_green*| Dummy indicating that the used car has low particulate matter emissions|
|*co2_em*| CO2 emission (in g/km)|
|*euro_norm*| EURO emission norm under which the car is registered |
Furthermore, we generate some transformations of our covariates. The transformed covariates are:
|Variable name| Description|
|:----|:----|
|**Additional covariates** ||
|*mileage2, mileage3, mileage4, age_car_years2, age_car_years3, age_car_years4*| Squared, cubic, and quartic *mileage* and *age_car_years* |
|*mile_20, mile_30, mile_40, mile_50, mile_100, mile_150*| Dummies indicating that the used car has a mileage above 20,000km, 30,000km, 40,000km, 50,000km, 100,000km, or 150,000km |
|*age_3, age_6*| Dummies indicating that the used car is above 3 or 6 years old |
|*dur_next_ins_0*| Dummy indicating that the duration until the next general inspection is less than a year |
|*dur_next_ins_1_2*| Dummy indicating that the duration until the next general inspection is between 1 and 2 years |
|*new_inspection*| Dummy indicating that the used car has a new general inspection |
|*euro_1, euro_2, euro_3, euro_4, euro_5, euro_6*| Dummies for EURO emission norms |
### Load Data
```
### Load Data ###
# Set seed
set.seed(25112020)
# Load data frame
df_train <- read.csv("used_cars_train.csv",header=TRUE, sep=",") # Load training data
df_test <- read.csv("used_cars_test.csv",header=TRUE, sep=",") # Load test data
# Specify Outcome Variable
first_price_train <- as.matrix(df_train[,2])
first_price_test <- as.matrix(df_test[,2])
# Specify Covariates
# First Variable is the Intercept
covariates_train <- as.matrix(cbind(rep(1,nrow(df_train)),df_train[,c(3:ncol(df_train))]))
covariates_test <- as.matrix(cbind(rep(1,nrow(df_test)),df_test[,c(3:ncol(df_test))]))
print('Data frame successfully loaded.')
```
### Estimation
We estimate different linear models by OLS. We start with a model that contains only a constant, then successively increase the number of covariates. For each model, we calculate the MSE in the training and test sample.
```
### Estimation ###
# Generate Matrices to Store the Results
# (1) Matrix for the MSE: Rows = Models, Columns = In- and Out-of-sample MSE
mse <- matrix(NA, nrow = ncol(covariates_train), ncol = 2)
# (2) Matrix for the in-sample predictions: Rows = Observations, Columns = Models
y_hat_train <- matrix(NA,nrow = nrow(first_price_train), ncol = ncol(covariates_train))
# (3) Matrix for the out-of-sample predictions: Rows = Observations, Columns = Models
y_hat_test <- matrix(NA,nrow = nrow(first_price_test), ncol = ncol(covariates_train))
# Estimate Different OLS Models
# Start with a model containing only an intercept
# Add covariates one-by-one
for (c in (1:ncol(covariates_train))){
formular <- lm.fit(as.matrix(covariates_train[,c(1:c)]),first_price_train) # OLS regression
y_hat_train[,c] <- formular$fitted.values # Fitted values in training sample
coef <- as.matrix(formular$coefficients) # Store vector of coefficients
coef[is.na(coef)] <- 0 # Replace NAs with 0 (in case of perfect multicollinearity)
y_hat_test[,c] <- covariates_test[,c(1:c)] %*% coef # Fitted values in test sample
mse[c,1] <- round(mean((y_hat_train[,c] - first_price_train)^2),digits=3) # in-sample MSE on the training sample
mse[c,2] <- round(mean((y_hat_test[,c] - first_price_test)^2), digits=3) # out-of sample MSE on the test sample
}
# Add Column with Number of Covariates
mse <- data.frame(cbind(mse,seq(1,nrow(mse))))
colnames(mse) <- c("MSE.in", "MSE.out", "Covariates") # Name the columns
print('Models are estimated.')
```
### In-sample MSE Plot - Training Sample
```
### In-sample MSE ###
library(ggplot2)
ggplot(mse, aes(x = Covariates, y = MSE.in)) +
geom_line() +
ylab("In-sample MSE") +
xlab("Number of Covariates")
print(paste0("MSE for K = 1: ",mse[1,1]))
print(paste0("MSE for K = 10: ",mse[10,1]))
print(paste0("MSE for K = 40: ",mse[40,1]))
```
### Fit Plot - Training Sample
```
### Fit Plot - Training Sample ###
# Create data frame for plotting predictions from models with K = 1, K = 10 and K = 40
modelK <- c(rep("K=1",length(first_price_train)), rep("K=10",length(first_price_train)), rep("K=40",length(first_price_train)))
observedY <- c(rep(first_price_train, 3))
predictedY <- c(y_hat_train[,1], y_hat_train[,10], y_hat_train[,40])
fit.plot <- data.frame("observedY" = observedY, "predictedY" = predictedY, "modelK" = modelK)
# Plot
ggplot(fit.plot, aes(x=observedY, y = predictedY, col = modelK)) +
geom_point(size = 3) +
geom_abline(intercept = 0, slope = 1, color = "black", size = 1)
```
### Out-of-sample MSE - Test Sample
```
### Out-of-sample MSE ###
ggplot(mse, aes(x = Covariates, y = MSE.out)) +
geom_line() +
ylab("Out-of-sample MSE") +
xlab("Number of Covariates")
print(paste0("MSE for K = 1: ",mse[1,2]))
print(paste0("MSE for K = 10: ",mse[10,2]))
print(paste0("MSE for K = 40: ",mse[40,2]))
```
### Fit Plot - Test Sample
```
### Fit Plot - Test Sample ###
# Create data frame for plotting predictions from models with K = 1, K = 10 and K = 40
modelK <- c(rep("K=1",length(first_price_test)), rep("K=10",length(first_price_test)), rep("K=40",length(first_price_test)))
observedY <- c(rep(first_price_test, 3))
predictedY <- c(y_hat_test[,1], y_hat_test[,10], y_hat_test[,40])
fit.plot <- data.frame("observedY" = observedY, "predictedY" = predictedY, "modelK" = modelK)
# Plot
ggplot(fit.plot, aes(x=observedY, y = predictedY, col = modelK)) +
geom_point(size = 3) +
geom_abline(intercept = 0, slope = 1, color = "black", size = 1)
```
## Simulation of Bias-Variance Trade-Off
We load a large data set from *myLemons* with 104,721 observations and the same covariates as above. We set aside one observation as the test sample. We simulate the used car prices with the linear model
\begin{equation*}
price = X\beta_0 + \epsilon,
\end{equation*}
where $\beta_0$ is obtained from OLS estimates in the real data and $\epsilon \sim N(0,sd^2)$ (with $sd = 3$) adds artificial noise to the DGP. We draw $sub = 1500$ random subsamples from the data.
For each subsample, we estimate an OLS model on the simulated price. We add covariates successively: we start with a model containing only a constant, and the final model contains all observable covariates. We calculate the MSE, squared bias, and variance across all simulations for each size of the linear model,
\begin{equation*}
MSE = Var(\hat{Y}) + Bias(\hat{Y})^2 + Var(\epsilon).
\end{equation*}
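The decomposition above can be verified numerically. The lab's simulation below is written in R; as a language-neutral illustration, here is a toy version of the same experiment in Python/NumPy (the two-coefficient DGP, the seed, and all variable names are illustrative assumptions, not part of the lab): refit OLS on many random subsamples, predict at one fixed test point, and compare the average MSE with $Var(\hat{Y}) + Bias(\hat{Y})^2 + Var(\epsilon)$.

```python
import numpy as np

rng = np.random.default_rng(0)
sd = 3.0                          # sd of the irreducible noise
beta = np.array([2.0, -1.0])      # "true" coefficients of the toy DGP
x0 = np.array([1.0, 0.5])         # fixed test point (intercept, covariate)
y0 = x0 @ beta                    # de-noised true response at the test point

n_sub, n_obs = 2000, 70
preds = np.empty(n_sub)
mses = np.empty(n_sub)
for s in range(n_sub):
    X = np.column_stack([np.ones(n_obs), rng.normal(size=n_obs)])
    y = X @ beta + rng.normal(0.0, sd, n_obs)          # simulated training outcomes
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS fit on this subsample
    preds[s] = x0 @ coef                               # prediction at the test point
    mses[s] = (preds[s] - (y0 + rng.normal(0.0, sd))) ** 2  # vs. a noisy test outcome

variance = preds.var()
bias_sq = (preds.mean() - y0) ** 2
print(mses.mean(), variance + bias_sq + sd ** 2)  # the two should be close, up to simulation error
```

With a correctly specified model the squared bias is essentially zero, so the average MSE is dominated by the prediction variance plus the irreducible noise $sd^2$.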
```
############################# Simulation of Bias-Variance Trade-Off #############################
# Load data
data_raw <- read.csv("mylemon.csv",header=TRUE, sep=",") # Load larger data set
# Set starting values for random number generator
set.seed(100001)
# Split data
df_train <- data_raw[-1,] # drops the first observation -> training samples will be drawn from here
df_test <- data_raw[1,] # contains only the first observation -> test set
# Generate the DGP -> (1) Take 10 real covariates, (2) Choose their beta coefficients, (3) Simulate response
# (1) All covariates -> the first ten will generate the response
covariates_train <- as.matrix(cbind(rep(1,nrow(df_train)),df_train[,c(3:ncol(df_train))]))
covariates_test <- as.matrix(cbind(rep(1,nrow(df_test)),df_test[,c(3:ncol(df_test))]))
# (2) Beta - take the empirical OLS coefficients from the full model (= all covariates)
# Get Outcome
first_price_train <- as.matrix(df_train[,2])
first_price_test <- as.matrix(df_test[,2])
# Estimate the empirical coefficients for the first ten covariates
formular <- lm.fit(rbind(covariates_train[,c(1:10)],covariates_test[,c(1:10)]),rbind(first_price_train,first_price_test))
coef <- as.matrix(formular$coefficients) # Beta_0
coef[is.na(coef)] <- 0 # Fix multicollinearity issue
# (3) Simulate the car price based on empirical coefficients, observed covariates, and noise
# Noise
sd = 3 # sd of the irreducible noise
u_tr <- matrix(rnorm(nrow(first_price_train),0,sd),nrow= nrow(first_price_train), ncol =1) # Irreducible noise
# Simulate the price based on empirical coefficients, observed covariates, and noise
y_new_train <- covariates_train[,c(1:10)] %*% coef + u_tr # Simulated response for future training sets
# De-noised true response on the test set
y_0 <- covariates_test[,c(1:10)] %*% coef
# Set input parameters for the simulation
# (1) number of subsamples:
sub = 1500
# (2) each of the 1500 training sets has 70 random observations from the df_train:
p <- matrix(sample(seq(1,nrow(df_train)), 70*sub, replace = TRUE), ncol = sub) # each column contains 70 row indices
# Estimate different OLS models on simulated price
mse <- matrix(NA, nrow = ncol(covariates_train), ncol = sub) # Out-of-sample MSE, Rows = models, Columns = subsamples
y_hat_test <- matrix(NA, nrow = ncol(covariates_train), ncol = sub) # Predictions, Rows = models, Columns = subsamples
for (n in (1:sub)) { # Loop over subsamples
for (c in (1:ncol(covariates_train))){ # Loop over OLS models with different number of covariates
formular <- lm.fit(as.matrix(covariates_train[p[,n],c(1:c)]),y_new_train[p[,n],]) # Model for the 70 observations
coef <- as.matrix(formular$coefficients) # Hat Beta
coef[is.na(coef)] <- 0
y_hat_test[c,n] <- covariates_test[,c(1:c)] %*% coef # Predicted y for the test set
mse[c,n] <- mean((y_hat_test[c,n] - y_0 - rnorm(1,0,sd))^2) # MSE = mean(prediction - observed)^2 and observed = true response + noise
}
}
# Aggregate results across all subsamples
test <- matrix(NA, nrow = ncol(covariates_train), ncol = 3) # Rows = models, columns = [Var(hat f), bias^2(hat f), MSE]
for (c in (1:ncol(covariates_train))){
test[c,1] <- var(y_hat_test[c,])
test[c,2] <- (mean(y_hat_test[c,]) - y_0)^2
test[c,3] <- mean(mse[c,])
}
colnames(test) <- c("Variance", "Squared Bias", "MSE") # Column names for structure "test"
# Data frame for plot
library(reshape)
test <- melt(test) # Reshape data for the plot
colnames(test) <- c("Model", "Variable", "Value") # New column names
head(test)
ggplot(test, aes(x=Model, y= Value, col = Variable)) +
geom_line(size = 1) +
xlab("Number of covariates") +
ylab("") +
geom_hline(yintercept = sd^2, linetype=4) +
ggtitle("Bias-Variance Tradeoff")
```
### Your very own neural network
In this notebook, we're going to build a neural network using naught but pure numpy and steel nerves. It's going to be fun, I promise!

```
# use the preloaded keras datasets and models
! mkdir -p ~/.keras/datasets
! mkdir -p ~/.keras/models
! ln -s $(realpath ../readonly/keras/datasets/*) ~/.keras/datasets/
! ln -s $(realpath ../readonly/keras/models/*) ~/.keras/models/
from __future__ import print_function
import numpy as np
np.random.seed(42)
```
Here goes our main class: a layer that can .forward() and .backward().
```
class Layer:
"""
A building block. Each layer is capable of performing two things:
- Process input to get output: output = layer.forward(input)
- Propagate gradients through itself: grad_input = layer.backward(input, grad_output)
Some layers also have learnable parameters which they update during layer.backward.
"""
def __init__(self):
"""Here you can initialize layer parameters (if any) and auxiliary stuff."""
# A dummy layer does nothing
pass
def forward(self, input):
"""
Takes input data of shape [batch, input_units], returns output data [batch, output_units]
"""
# A dummy layer just returns whatever it gets as input.
return input
def backward(self, input, grad_output):
"""
Performs a backpropagation step through the layer, with respect to the given input.
To compute loss gradients w.r.t input, you need to apply chain rule (backprop):
d loss / d x = (d loss / d layer) * (d layer / d x)
Luckily, you already receive d loss / d layer as input, so you only need to multiply it by d layer / d x.
If your layer has parameters (e.g. dense layer), you also need to update them here using d loss / d layer
"""
# The gradient of a dummy layer is precisely grad_output, but we'll write it more explicitly
num_units = input.shape[1]
d_layer_d_input = np.eye(num_units)
return np.dot(grad_output, d_layer_d_input) # chain rule
```
### The road ahead
We're going to build a neural network that classifies MNIST digits. To do so, we'll need a few building blocks:
- Dense layer - a fully-connected layer, $f(X)=W \cdot X + \vec{b}$
- ReLU layer (or any other nonlinearity you want)
- Loss function - crossentropy
- Backprop algorithm - stochastic gradient descent with backpropagated gradients
Let's approach them one at a time.
### Nonlinearity layer
This is the simplest layer you can get: it simply applies a nonlinearity to each element of its input.
```
class ReLU(Layer):
def __init__(self):
"""ReLU layer simply applies elementwise rectified linear unit to all inputs"""
pass
def forward(self, input):
"""Apply elementwise ReLU to [batch, input_units] matrix"""
# <your code. Try np.maximum>
def backward(self, input, grad_output):
"""Compute gradient of loss w.r.t. ReLU input"""
relu_grad = input > 0
return grad_output*relu_grad
# some tests
from util import eval_numerical_gradient
x = np.linspace(-1,1,10*32).reshape([10,32])
l = ReLU()
grads = l.backward(x,np.ones([10,32])/(32*10))
numeric_grads = eval_numerical_gradient(lambda x: l.forward(x).mean(), x=x)
assert np.allclose(grads, numeric_grads, rtol=1e-3, atol=0),\
"gradient returned by your layer does not match the numerically computed gradient"
```
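If you want to check your forward pass against something, here is one possible fill for the blank above: a sketch using the hinted `np.maximum`, written as a stand-alone class (`ReLUSketch`) so it doesn't clash with the class you are completing.

```python
import numpy as np

class ReLUSketch:
    """One possible ReLU fill-in; same interface as the Layer stubs above."""
    def forward(self, input):
        # elementwise max(0, x); np.maximum broadcasts the scalar 0
        return np.maximum(0, input)

    def backward(self, input, grad_output):
        # derivative of ReLU is 1 where input > 0, else 0
        return grad_output * (input > 0)
```

The boolean mask `input > 0` gets cast to 0/1 when multiplied, which is exactly the `relu_grad` trick the provided `backward` already uses.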
#### Instant primer: lambda functions
In python, you can define functions in one line using the `lambda` syntax: `lambda param1, param2: expression`
For example: `f = lambda x, y: x+y` is equivalent to a normal function:
```
def f(x,y):
return x+y
```
For more information, click [here](http://www.secnetix.de/olli/Python/lambda_functions.hawk).
### Dense layer
Now let's build something more complicated. Unlike nonlinearity, a dense layer actually has something to learn.
A dense layer applies affine transformation. In a vectorized form, it can be described as:
$$f(X)= W \cdot X + \vec b $$
Where
* X is an object-feature matrix of shape [batch_size, num_features],
* W is a weight matrix [num_features, num_outputs]
* and b is a vector of num_outputs biases.
Both W and b are initialized during layer creation and updated each time backward is called.
```
class Dense(Layer):
def __init__(self, input_units, output_units, learning_rate=0.1):
"""
A dense layer is a layer which performs a learned affine transformation:
f(x) = <W*x> + b
"""
self.learning_rate = learning_rate
# initialize weights with small random numbers. We use normal initialization,
        # but surely there is something better. Try this once you've got it working: http://bit.ly/2vTlmaJ
self.weights = np.random.randn(input_units, output_units)*0.01
self.biases = np.zeros(output_units)
def forward(self,input):
"""
Perform an affine transformation:
f(x) = <W*x> + b
input shape: [batch, input_units]
output shape: [batch, output units]
"""
return #<your code here>
def backward(self,input,grad_output):
# compute d f / d x = d f / d dense * d dense / d x
# where d dense/ d x = weights transposed
grad_input = #<your code here>
# compute gradient w.r.t. weights and biases
grad_weights = #<your code here>
grad_biases = #<your code here>
assert grad_weights.shape == self.weights.shape and grad_biases.shape == self.biases.shape
# Here we perform a stochastic gradient descent step.
# Later on, you can try replacing that with something better.
self.weights = self.weights - self.learning_rate * grad_weights
self.biases = self.biases - self.learning_rate * grad_biases
return grad_input
```
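If the `<your code here>` blanks above give you trouble, here is one way they could be filled: a sketch that follows the shape conventions stated in the docstrings (`DenseSketch` is a stand-alone name, not the assignment's class). Note the weight gradient is a sum over the batch, as the tips below require.

```python
import numpy as np

class DenseSketch:
    """A possible fill for the Dense blanks; shapes follow the docstrings above."""
    def __init__(self, input_units, output_units, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.weights = np.random.randn(input_units, output_units) * 0.01
        self.biases = np.zeros(output_units)

    def forward(self, input):
        # [batch, in] @ [in, out] + [out] -> [batch, out]
        return input @ self.weights + self.biases

    def backward(self, input, grad_output):
        # d dense / d x = W^T, so the chain rule gives grad_output @ W^T
        grad_input = grad_output @ self.weights.T
        # parameter gradients: summed over the batch, not averaged
        grad_weights = input.T @ grad_output
        grad_biases = grad_output.sum(axis=0)
        # plain SGD step
        self.weights -= self.learning_rate * grad_weights
        self.biases -= self.learning_rate * grad_biases
        return grad_input
```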
### Testing the dense layer
Here we have a few tests to make sure your dense layer works properly. You can just run them, get 3 "well done"s and forget they ever existed.
... or not get 3 "well done"s and go fix stuff. If that is the case, here are some tips for you:
* Make sure you compute gradients for W and b as the __sum of gradients over the batch__, not the mean over gradients. Grad_output is already divided by batch size.
* If you're debugging, try saving gradients in class fields, like "self.grad_w = grad_w" or print first 3-5 weights. This helps debugging.
* If nothing else helps, try ignoring tests and proceed to network training. If it trains alright, you may be off by something that does not affect network training.
```
l = Dense(128, 150)
assert -0.05 < l.weights.mean() < 0.05 and 1e-3 < l.weights.std() < 1e-1,\
"The initial weights must have zero mean and small variance. "\
"If you know what you're doing, remove this assertion."
assert -0.05 < l.biases.mean() < 0.05, "Biases must be zero mean. Ignore if you have a reason to do otherwise."
# To test the outputs, we explicitly set weights with fixed values. DO NOT DO THAT IN ACTUAL NETWORK!
l = Dense(3,4)
x = np.linspace(-1,1,2*3).reshape([2,3])
l.weights = np.linspace(-1,1,3*4).reshape([3,4])
l.biases = np.linspace(-1,1,4)
assert np.allclose(l.forward(x),np.array([[ 0.07272727, 0.41212121, 0.75151515, 1.09090909],
[-0.90909091, 0.08484848, 1.07878788, 2.07272727]]))
print("Well done!")
# To test the grads, we use gradients obtained via finite differences
from util import eval_numerical_gradient
x = np.linspace(-1,1,10*32).reshape([10,32])
l = Dense(32,64,learning_rate=0)
numeric_grads = eval_numerical_gradient(lambda x: l.forward(x).sum(),x)
grads = l.backward(x,np.ones([10,64]))
assert np.allclose(grads,numeric_grads,rtol=1e-3,atol=0), "input gradient does not match numeric grad"
print("Well done!")
#test gradients w.r.t. params
def compute_out_given_wb(w,b):
l = Dense(32,64,learning_rate=1)
l.weights = np.array(w)
l.biases = np.array(b)
x = np.linspace(-1,1,10*32).reshape([10,32])
return l.forward(x)
def compute_grad_by_params(w,b):
l = Dense(32,64,learning_rate=1)
l.weights = np.array(w)
l.biases = np.array(b)
x = np.linspace(-1,1,10*32).reshape([10,32])
l.backward(x,np.ones([10,64]) / 10.)
return w - l.weights, b - l.biases
w,b = np.random.randn(32,64), np.linspace(-1,1,64)
numeric_dw = eval_numerical_gradient(lambda w: compute_out_given_wb(w,b).mean(0).sum(),w )
numeric_db = eval_numerical_gradient(lambda b: compute_out_given_wb(w,b).mean(0).sum(),b )
grad_w,grad_b = compute_grad_by_params(w,b)
assert np.allclose(numeric_dw,grad_w,rtol=1e-3,atol=0), "weight gradient does not match numeric weight gradient"
assert np.allclose(numeric_db,grad_b,rtol=1e-3,atol=0), "bias gradient does not match numeric bias gradient"
print("Well done!")
```
### The loss function
Since we want to predict probabilities, it would be logical for us to define softmax nonlinearity on top of our network and compute loss given predicted probabilities. However, there is a better way to do so.
If you write down the expression for crossentropy as a function of softmax logits (a), you'll see:
$$ loss = - log \space {e^{a_{correct}} \over {\underset i \sum e^{a_i} } } $$
If you take a closer look, you'll see that it can be rewritten as:
$$ loss = - a_{correct} + log {\underset i \sum e^{a_i} } $$
It's called Log-softmax and it's better than naive log(softmax(a)) in all aspects:
* Better numerical stability
* Easier to get derivative right
* Marginally faster to compute
So why not just use log-softmax throughout our computation and never actually bother to estimate probabilities?
Here you are! We've defined both loss functions for you so that you can focus on the neural network part.
```
def softmax_crossentropy_with_logits(logits,reference_answers):
"""Compute crossentropy from logits[batch,n_classes] and ids of correct answers"""
logits_for_answers = logits[np.arange(len(logits)),reference_answers]
xentropy = - logits_for_answers + np.log(np.sum(np.exp(logits),axis=-1))
return xentropy
def grad_softmax_crossentropy_with_logits(logits,reference_answers):
"""Compute crossentropy gradient from logits[batch,n_classes] and ids of correct answers"""
ones_for_answers = np.zeros_like(logits)
ones_for_answers[np.arange(len(logits)),reference_answers] = 1
softmax = np.exp(logits) / np.exp(logits).sum(axis=-1,keepdims=True)
return (- ones_for_answers + softmax) / logits.shape[0]
logits = np.linspace(-1,1,500).reshape([50,10])
answers = np.arange(50)%10
softmax_crossentropy_with_logits(logits,answers)
grads = grad_softmax_crossentropy_with_logits(logits,answers)
numeric_grads = eval_numerical_gradient(lambda l: softmax_crossentropy_with_logits(l,answers).mean(),logits)
assert np.allclose(numeric_grads,grads,rtol=1e-3,atol=0), "The reference implementation has just failed. Someone has just changed the rules of math."
```
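The numerical-stability claim is easy to verify. One caveat: the reference implementation above still exponentiates raw logits, so for very large logits a further safeguard, subtracting the row max before exponentiating, is commonly added. The sketch below compares naive log(softmax) against that shifted log-sum-exp form:

```python
import numpy as np

def logsumexp(a, axis=-1):
    # shift by the row max so np.exp never overflows
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))

logits = np.array([[1000.0, 0.0, -1000.0]])

with np.errstate(over='ignore', invalid='ignore', divide='ignore'):
    # naive log(softmax): exp(1000) overflows to inf, producing nan / -inf
    naive = np.log(np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True))

# log-softmax via the shifted log-sum-exp stays finite
stable = logits - logsumexp(logits)

print(naive)
print(stable)
```

The stable version returns the exact values `[[0., -1000., -2000.]]`, while the naive one is all nan / -inf.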
### Full network
Now let's combine what we've just built into a working neural network. As we announced, we're gonna use this monster to classify handwritten digits, so let's get them loaded.
```
import matplotlib.pyplot as plt
%matplotlib inline
from preprocessed_mnist import load_dataset
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset(flatten=True)
plt.figure(figsize=[6,6])
for i in range(4):
plt.subplot(2,2,i+1)
plt.title("Label: %i"%y_train[i])
plt.imshow(X_train[i].reshape([28,28]),cmap='gray');
```
We'll define the network as a list of layers, each applied on top of the previous one. In this setting, computing predictions and training becomes trivial.
```
network = []
network.append(Dense(X_train.shape[1],100))
network.append(ReLU())
network.append(Dense(100,200))
network.append(ReLU())
network.append(Dense(200,10))
def forward(network, X):
"""
Compute activations of all network layers by applying them sequentially.
Return a list of activations for each layer.
Make sure last activation corresponds to network logits.
"""
activations = []
input = X
# <your code here>
assert len(activations) == len(network)
return activations
def predict(network,X):
"""
Compute network predictions.
"""
logits = forward(network,X)[-1]
return logits.argmax(axis=-1)
def train(network,X,y):
"""
Train your network on a given batch of X and y.
You first need to run forward to get all layer activations.
Then you can run layer.backward going from last to first layer.
After you called backward for all layers, all Dense layers have already made one gradient step.
"""
# Get the layer activations
layer_activations = forward(network,X)
layer_inputs = [X]+layer_activations #layer_input[i] is an input for network[i]
logits = layer_activations[-1]
# Compute the loss and the initial gradient
loss = softmax_crossentropy_with_logits(logits,y)
loss_grad = grad_softmax_crossentropy_with_logits(logits,y)
# <your code: propagate gradients through the network>
return np.mean(loss)
```
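If you want a template for the two `<your code here>` blanks above, here is one possible shape for the forward loop and the backward walk. It is a sketch with stand-alone names (`forward_pass`, `backprop`, and a minimal `IdentityLayer` stand-in) so it doesn't clash with the notebook's own functions:

```python
import numpy as np

class IdentityLayer:
    """Minimal stand-in layer so the loop logic below is runnable on its own."""
    def forward(self, input):
        return input
    def backward(self, input, grad_output):
        return grad_output

def forward_pass(network, X):
    # run each layer on the previous layer's output, collecting all activations
    activations = []
    input = X
    for layer in network:
        input = layer.forward(input)
        activations.append(input)
    assert len(activations) == len(network)
    return activations

def backprop(network, layer_inputs, loss_grad):
    # walk the layers in reverse; each layer gets its own input and the upstream gradient
    grad = loss_grad
    for layer, inp in zip(reversed(network), reversed(layer_inputs[:-1])):
        grad = layer.backward(inp, grad)
    return grad
```

In `train`, `layer_inputs` is `[X] + layer_activations` as built above, and `loss_grad` is the output of `grad_softmax_crossentropy_with_logits`.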
Instead of tests, we provide you with a training loop that prints training and validation accuracies on every epoch.
If your implementations of forward and backward are correct, your accuracy should grow from 90~93% to >97% with the default network.
### Training loop
As usual, we split data into minibatches, feed each such minibatch into the network and update weights.
```
from tqdm import trange
def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
assert len(inputs) == len(targets)
if shuffle:
indices = np.random.permutation(len(inputs))
for start_idx in trange(0, len(inputs) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
from IPython.display import clear_output
train_log = []
val_log = []
for epoch in range(25):
for x_batch,y_batch in iterate_minibatches(X_train,y_train,batchsize=32,shuffle=True):
train(network,x_batch,y_batch)
train_log.append(np.mean(predict(network,X_train)==y_train))
val_log.append(np.mean(predict(network,X_val)==y_val))
clear_output()
print("Epoch",epoch)
print("Train accuracy:",train_log[-1])
print("Val accuracy:",val_log[-1])
plt.plot(train_log,label='train accuracy')
plt.plot(val_log,label='val accuracy')
plt.legend(loc='best')
plt.grid()
plt.show()
```
### Peer-reviewed assignment
Congratulations, you managed to get this far! There is just one quest left undone, and this time you'll get to choose what to do.
#### Option I: initialization
* Implement Dense layer with Xavier initialization as explained [here](http://bit.ly/2vTlmaJ)
To pass this assignment, you must conduct an experiment showing how Xavier initialization compares to default initialization on deep networks (5+ layers).
#### Option II: regularization
* Implement a version of Dense layer with L2 regularization penalty: when updating Dense Layer weights, adjust gradients to minimize
$$ Loss = Crossentropy + \alpha \cdot \underset i \sum {w_i}^2 $$
To pass this assignment, you must conduct an experiment showing whether regularization mitigates overfitting in the case of an abundantly large number of neurons. Consider tuning $\alpha$ for better results.
#### Option III: optimization
* Implement a version of Dense layer that uses momentum/rmsprop or whatever method worked best for you last time.
Most of those methods require persistent parameters like momentum direction or moving average grad norm, but you can easily store those params inside your layers.
To pass this assignment, you must conduct an experiment showing how your chosen method performs compared to vanilla SGD.
### General remarks
_Please read the peer-review guidelines before starting this part of the assignment._
In short, a good solution is one that:
* is based on this notebook
* runs in the default course environment with Run All
* its code doesn't cause spontaneous eye bleeding
* its report is easy to read.
_Formally we can't ban you from writing boring reports, but if you bored your reviewer to death, there's no one left alive to give you the grade you want._
### Bonus assignments
As a bonus assignment (no points, just swag), consider implementing Batch Normalization ([guide](https://gab41.lab41.org/batch-normalization-what-the-hey-d480039a9e3b)) or Dropout ([guide](https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5)). Note, however, that those "layers" behave differently when training and when predicting on test set.
* Dropout:
* During training: drop units randomly with probability __p__ and multiply everything by __1/(1-p)__
  * During final prediction: do nothing; pretend there's no dropout
* Batch normalization
  * During training, it subtracts the mean-over-batch, divides by the std-over-batch, and updates its running mean and variance.
* During final prediction, it uses accumulated mean and variance.
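The inverted-dropout recipe above can be sketched in a few lines. This is a sketch following the notebook's Layer interface, with a hypothetical `train` flag switching between training and prediction behavior:

```python
import numpy as np

class DropoutSketch:
    """Inverted dropout: drop with probability p, scale survivors by 1/(1-p)."""
    def __init__(self, p=0.5):
        self.p = p
        self.mask = None

    def forward(self, input, train=True):
        if not train:
            # at prediction time, do nothing: pretend there's no dropout
            return input
        # keep each unit with probability 1-p; rescaling keeps E[output] == input
        self.mask = (np.random.rand(*input.shape) > self.p) / (1.0 - self.p)
        return input * self.mask

    def backward(self, input, grad_output):
        # gradient flows only through the kept units, with the same scaling
        return grad_output * self.mask
```

The `1/(1-p)` rescaling during training is what lets prediction be a no-op: activations have the same expected magnitude in both modes.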
# Exploring Additional Data in Predicting Molecular Properties
In this competition we are asked to **predict magnetic interactions between a pair of atoms**. As we all know, literally everything is made of atoms. So if we could get deeper insight into the structure and dynamics of molecules, that could advance many fields of science including environmental science, pharmaceutical science, and materials science.
I like this kind of science competition, as I can definitely gain a reward. Winning a Kaggle competition is wonderful, but getting to know a new field of science is surely another reward, regardless of what your final result may be.
Here I explore **the additional data, which consist of 5 csv files (dipole_moments.csv, magnetic_shielding_tensors.csv, mulliken_charges.csv, potential_energy.csv, scalar_coupling_contributions.csv)**. They are provided only for the training data, so this information may not be super useful for our predictions. But those data are given. Why don't we explore what they are? At least we can learn new things and potentially get a better understanding of the training & test data!
I am not a chemist or anything, so your feedback is very welcome:D
## Files
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
```
## Libraries
```
# libraries
import seaborn as sns
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy import stats
import gc
import warnings
warnings.filterwarnings("ignore")
print("Libraries were loaded.")
```
## Loading data
```
# load data
train_df = pd.read_csv('../input/train.csv')
potential_energy_df = pd.read_csv('../input/potential_energy.csv')
mulliken_charges_df = pd.read_csv('../input/mulliken_charges.csv')
scalar_coupling_contributions_df = pd.read_csv('../input/scalar_coupling_contributions.csv')
magnetic_shielding_tensors_df = pd.read_csv('../input/magnetic_shielding_tensors.csv')
dipole_moments_df = pd.read_csv('../input/dipole_moments.csv')
structure_df = pd.read_csv('../input/structures.csv')
test_df = pd.read_csv('../input/test.csv')
print("All the data were loaded.")
```
## Basic information about each file
What does each file look like?
```
# What are inside those files?
dfs = [train_df, potential_energy_df, mulliken_charges_df,
scalar_coupling_contributions_df, magnetic_shielding_tensors_df,
dipole_moments_df, structure_df, test_df]
names = ["train_df", "potential_energy_df", "mulliken_charges_df",
"scalar_coupling_contributions_df", "magnetic_shielding_tensors_df",
"dipole_moments_df", "structure_df", "test_df"]
# display info about a DataFrame
def dispDF(df, name):
print("========== " + name + " ==========")
print("SHAPE ----------------------")
print(df.shape)
print('')
print("HEAD ----------------------")
print(df.head(5))
print('')
print("DATA TYPE ----------------")
print(df.dtypes)
print('')
print("UNIQUES -------------------")
print(df.nunique())
print('')
print("======================================")
pd.set_option('display.expand_frame_repr', False)
for df, name in zip(dfs, names):
dispDF(df, name)
```
Now we look at each file one by one:) I try to visualize the content of each file.
```
# colors
colors = sns.color_palette("cubehelix", 8)
sns.set()
subsample = 100
```
# Dipole moments
### What are the dipole moments?
Dipole moments quantify **a separation of charge between atoms**.
When a pair of different atoms forms an ionic or covalent bond, the atom with the stronger **electronegativity (F > O > Cl > N > Br > I > S > C > H > metals)** pulls the shared electrons away from the other. For example, in CO2, O has stronger electronegativity than C. That means O pulls electrons from C, generating a dipole moment between the two atoms.
However, CO2 as a molecule does not have a dipole moment because in its structure (O = C = O), the Os pull electrons away from C equally but in opposite directions, which leads to overall zero separation of charge.
H2O is another story. Unlike CO2, H2O has an asymmetric structure. Therefore, although O pulls electrons from the Hs, a separation of charge still remains.

Reference:
- CHEMISTRY LibreTexts: Dipole Moments
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Physical_Properties_of_Matter/Atomic_and_Molecular_Properties/Dipole_Moments
- CHEMISTRY LibreTexts: Electronegativity
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Physical_Properties_of_Matter/Atomic_and_Molecular_Properties/Electronegativity
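The cancellation argument above can be sketched numerically: for a set of point charges, the dipole moment is $\mu = \sum_i q_i \vec{r}_i$. The charges and coordinates below are toy values chosen only to illustrate the symmetry, not real partial charges or bond lengths:

```python
import numpy as np

def dipole(charges, positions):
    # mu = sum_i q_i * r_i  (toy point-charge model)
    return np.sum(np.asarray(charges)[:, None] * np.asarray(positions), axis=0)

# linear O=C=O: the two O pulls cancel exactly
co2 = dipole([-0.5, 1.0, -0.5], [[-1.16, 0, 0], [0, 0, 0], [1.16, 0, 0]])

# bent H-O-H: the two H contributions add up along the symmetry axis
h2o = dipole([-0.66, 0.33, 0.33], [[0, 0, 0], [0.76, 0.59, 0], [-0.76, 0.59, 0]])

print(np.linalg.norm(co2))  # ~0: symmetric, no net dipole
print(np.linalg.norm(h2o))  # > 0: asymmetric, a net dipole remains
```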
Let's see what the contents of the file look like.
### File description
> dipole_moments.csv - contains the molecular electric dipole moments. These are three dimensional vectors that indicate the charge distribution in the molecule. The first column (molecule_name) are the names of the molecule, the second to fourth column are the X, Y and Z components respectively of the dipole moment.
```
# display info about the data frame
dispDF(dipole_moments_df, "dipole moments")
```
Let's look at the charge distribution. Note that I subsampled the data to reduce the computational burden.
```
fig = plt.figure()
ax = Axes3D(fig)
# ax = fig.add_subplot(111, projection='3d')
scatter_colors = sns.color_palette("husl", 85003)
# 3D scatter
ax.scatter(dipole_moments_df['X'][::subsample], dipole_moments_df['Y'][::subsample],
dipole_moments_df['Z'][::subsample], s=30, alpha=0.5, c=scatter_colors[::subsample])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_title('Dipole Moment')
```
Each molecule has a different color. There seems to be no cluster or anything, but we can find some outliers.
Here is a histogram of the (squared) distances of molecules from the origin in this space.
```
dipole_moments_df['dm_distance'] = (dipole_moments_df['X']**2 + dipole_moments_df['Y']**2 + dipole_moments_df['Z']**2)  # squared norm of the dipole vector
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax = ax.flatten()
# original distribution
sns.distplot(dipole_moments_df['dm_distance'], color=colors[0], kde=False, norm_hist=False, ax=ax[0])
ax[0].set_xlabel('distance')
ax[0].set_title('dipole moment')
# in log
sns.distplot(np.log(dipole_moments_df['dm_distance'] + 0.00001), color=colors[0], kde=False, norm_hist=False, ax=ax[1])
ax[1].set_xlabel('log distance')
```
As you see, the original histogram (left) is highly right-skewed. Hence it becomes approximately normal after the log-transform (right).
Maybe it is worthwhile making a list of "outlier" molecules in terms of the distance in the dipole-moment-space. Here I set the threshold to be 100. This list may be useful for our predictions.
```
dipole_moments_df['dm_outliers'] = np.zeros(dipole_moments_df.shape[0]).astype(int)
dipole_moments_df.loc[dipole_moments_df['dm_distance'] > 100, 'dm_outliers'] = int(1)
print("outliers (dipole moments): " + str(np.sum(dipole_moments_df['dm_outliers'] == 1)) + " molecules")
dipole_moments_df.head(7)
```
# Magnetic shielding tensors
### What are the magnetic shielding tensors?
Magnetic shielding tensors describe the magnetic field surrounding the nucleus.
The **NMR (Nuclear Magnetic Resonance)** technique puts a strong magnetic field upon the molecule. Since the electrons orbit the nucleus (which, by the way, consists of neutrons and protons), they slightly shield it from that field: this so-called shielding alters how the external magnetic field affects the energy levels of the charged particles.
As the dynamics of these electrons are influenced by other atoms due to electronegativity, the shielding is inevitably influenced by the local structure and composition of atoms in the molecule. In other words, we can estimate them by NMR. **The magnetic shielding tensors are the resulting 3-by-3 tensors (matrices) representing the measured magnetic field surrounding the nucleus**.

Here the red solid lines indicate the principal components of the shift tensor (σxx, σyy, σzz). Connecting these principal components results in the black ellipses, which represents the electron field surrounding the nucleus.
Reference:
- CHEMISTRY LibreTexts: Chemical Shift (Shielding)
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Magnetic_Resonance_Spectroscopies/Nuclear_Magnetic_Resonance/NMR%3A_Theory/NMR_Interactions/Chemical_Shift_(Shielding)
### Data description
> magnetic_shielding_tensors.csv - contains the magnetic shielding tensors for all atoms in the molecules. The first column (molecule_name) contains the molecule name, the second column (atom_index) contains the index of the atom in the molecule, the third to eleventh columns contain the XX, YX, ZX, XY, YY, ZY, XZ, YZ and ZZ elements of the tensor/matrix respectively.
```
# display info about the data frame
dispDF(magnetic_shielding_tensors_df, "magnetic shielding tensors")
```
This file stores the 3-by-3 matrix (tensor). I only use the principal components (XX, YY, and ZZ) to visualize their distributions.
```
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
scatter_colors = sns.color_palette("husl", 29)
# 3D scatter
for i in range(29):
xx = magnetic_shielding_tensors_df.loc[magnetic_shielding_tensors_df['atom_index']==i, 'XX']
yy = magnetic_shielding_tensors_df.loc[magnetic_shielding_tensors_df['atom_index']==i, 'YY']
zz = magnetic_shielding_tensors_df.loc[magnetic_shielding_tensors_df['atom_index']==i, 'ZZ']
ax.scatter(xx[::subsample*10], yy[::subsample*10], zz[::subsample*10], s=30, alpha=0.5, c=scatter_colors[i])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_title('Magnetic shielding tensors')
```
It is hard to see, but there are 29 colors representing the 29 unique atom indices. Are there ... clusters? I mean, small (left side) and big (right side) ones?
# Potential energy
### What is the potential energy?
Potential energy is relatively easy to understand in physics. It is **the energy held by an object relative to others**. For example, in the figure below, you can easily imagine what would happen if the string was released. The elastic potential energy of the bow is then transformed into kinetic energy.

This example is essentially similar to the case with molecules. Atoms form a certain bond (e.g. ionic) to be a molecule such that the overall energy becomes lower (more stable). In other words, **the bonds between atoms have potential energy, which is converted into kinetic energy once the bonds are broken**.
Reference:
- CHEMISTRY LibreTexts: Potential energy https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Thermodynamics/Energies_and_Potentials/Potential_Energy
- [Wikipedia: Potential energy](https://en.wikipedia.org/wiki/Potential_energy)
> potential_energy.csv - contains the potential energy of the molecules. The first column (molecule_name) contains the name of the molecule, the second column (potential_energy) contains the potential energy of the molecule.
```
# display info about the data frame
dispDF(potential_energy_df, "potential energy")
# potential energy
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
# histogram
sns.distplot(potential_energy_df['potential_energy'],
kde=False, color=colors[2], ax=ax)
# median
pe_median = potential_energy_df['potential_energy'].median()
ax.axvline(pe_median, color='r', linestyle='--', lw=4)
ax.text(pe_median + 30, 15000, 'median = ' + str(pe_median), fontsize=12, color='r')
```
We can see that molecules are bimodally distributed with respect to the potential energy. Let's use a median split to form two clusters.
```
# median split
highPE_molecules = potential_energy_df.loc[potential_energy_df['potential_energy'] >= pe_median]
lowPE_molecules = potential_energy_df.loc[potential_energy_df['potential_energy'] < pe_median]
print(str(highPE_molecules.shape[0]) + " high potential energy molecules:")
highPE_molecules.head(7)
print(str(lowPE_molecules.shape[0]) + " low potential energy molecules:")
lowPE_molecules.head(7)
# low (0) and high (1) potential energy
potential_energy_df['energy_class'] = np.zeros(potential_energy_df.shape[0]).astype(int)
potential_energy_df.loc[potential_energy_df['potential_energy'] >= pe_median, 'energy_class'] = int(1)
potential_energy_df.head(7)
```
# Mulliken charges
### What are the Mulliken charges?
The Mulliken charges characterize **the electric charge distribution in a molecule**.
A molecular orbital consists of multiple atomic orbitals, so we can estimate the electric charge distribution by a linear combination of these orbitals. The Mulliken charge is assigned to each atom as the sum of the linear terms over all the orbitals that belong to the atom.

Here the net Mulliken charge distribution on H, C and N atoms in histidines is shown as an example (Alia et al., 2004).
Reference
- [Mulliken population analysis](https://en.wikipedia.org/wiki/Mulliken_population_analysis)
- [Alia et al (2004) Heteronuclear 2D ( 1 H- 13 C) MAS NMR Resolves the Electronic Structure of Coordinated Histidines in Light-Harvesting Complex II: Assessment of Charge Transfer and Electronic Delocalization Effect](https://www.researchgate.net/publication/8893081_Heteronuclear_2D_1_H-_13_C_MAS_NMR_Resolves_the_Electronic_Structure_of_Coordinated_Histidines_in_Light-Harvesting_Complex_II_Assessment_of_Charge_Transfer_and_Electronic_Delocalization_Effect)
### Data description
> mulliken_charges.csv - contains the mulliken charges for all atoms in the molecules. The first column (molecule_name) contains the name of the molecule, the second column (atom_index) contains the index of the atom in the molecule, the third column (mulliken_charge) contains the mulliken charge of the atom.
```
# display info about the data frame
dispDF(mulliken_charges_df, "mulliken charges")
# distribution of mulliken_charge
mulliken_charges_df["mulliken_charge"].hist()
```
Mulliken charges seem to be slightly above 0 in many cases.
# Structure
This structure.csv lets us visualize molecules in 3D (e.g. [How To: Easy Visualization of Molecules.](https://www.kaggle.com/borisdee/how-to-easy-visualization-of-molecules)). I wonder if this file can be useful beyond the visualization.
### Data description
> structure.zip - folder containing molecular structure (xyz) files, where the first line is the number of atoms in the molecule, followed by a blank line, and then a line for every atom, where the first column contains the atomic element (H for hydrogen, C for carbon etc.) and the remaining columns contain the X, Y and Z cartesian coordinates (a standard format for chemists and molecular visualization programs).
```
# display info about the data frame
dispDF(structure_df, "structure")
# # Let's visualize one molecule anyway
# !pip install ase
# import ase
# from ase import Atoms
# import ase.visualize
# positions = structure_df.loc[structure_df['molecule_name'] == 'dsgdb9nsd_000001', ['x', 'y', 'z']]
# symbols = structure_df.loc[structure_df['molecule_name'] == 'dsgdb9nsd_000001', 'atom']
# ase.visualize.view(Atoms(positions=positions, symbols=symbols), viewer="x3d")
```
~~This molecule (dsgdb9nsd_000001) is apparently CH4 (Methane). Looks pretty.~~
Since structure.csv contains the atom name in a column, it may be a good idea to add another column with electronegativity. Do you remember the electronegativity of each atom? Me neither. This is exactly what the internet is for.

taken from [What would cause an atom to have a low electronegativity value? ](https://socratic.org/questions/what-would-cause-an-atom-to-have-a-low-electronegativity-value)
According to this table, electronegativities are 2.20 for H, 2.55 for C, 3.04 for N, 3.44 for O, and 3.98 for F.
```
# add electronegativity to df
structure_df['electronegativity'] = structure_df['atom'].map({'H': 2.20, 'C': 2.55, 'N': 3.04, 'O': 3.44, 'F': 3.98})
structure_df.head(12)
```
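A toy sketch of what `Series.map` does here (the toy frame below is illustrative, not the competition data): symbols found in the dict are replaced by their Pauling electronegativity, and any symbol missing from the dict would become NaN, which is worth checking for after mapping.

```python
import pandas as pd

# Toy illustration of the mapping above; unknown symbols would map to NaN.
en = {'H': 2.20, 'C': 2.55, 'N': 3.04, 'O': 3.44, 'F': 3.98}
toy = pd.DataFrame({'atom': ['C', 'H', 'H', 'O']})
toy['electronegativity'] = toy['atom'].map(en)
```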
# Scalar coupling contributions
### What is the scalar coupling?
As electrons orbit the nucleus (neutrons + protons), the nucleus acts as a magnetic dipole whose spin takes random orientations. The scalar coupling (J-coupling in an isotropic liquid, an indirect dipole-dipole coupling) arises from **an indirect interaction between two nuclear spins**.
As the following data description says, the scalar coupling tells a lot about the molecule, such as **the Fermi contact interaction (interaction between an electron and an atomic nucleus)**, **the spin-dipolar interaction**, and **the paramagnetic (there are unpaired electrons in an orbital) or diamagnetic (electrons are paired or their total spin is 0) spin-orbit interaction (interaction of a particle's spin with its motion inside a potential)**.
Reference
- [Wikipedia: Fermi contact interaction](https://en.wikipedia.org/wiki/Fermi_contact_interaction)
- [Wikipedia: Spin-orbit interaction](https://en.wikipedia.org/wiki/Spin%E2%80%93orbit_interaction)
- [Diamagnetism and Paramagnetism](https://courses.lumenlearning.com/introchem/chapter/diamagnetism-and-paramagnetism/)
### Data description
> scalar_coupling_contributions.csv - The scalar coupling constants in train.csv (or corresponding files) are a sum of four terms. scalar_coupling_contributions.csv contain all these terms. The first column (molecule_name) are the name of the molecule, the second (atom_index_0) and third column (atom_index_1) are the atom indices of the atom-pair, the fourth column indicates the type of coupling, the fifth column (fc) is the Fermi Contact contribution, the sixth column (sd) is the Spin-dipolar contribution, the seventh column (pso) is the Paramagnetic spin-orbit contribution and the eighth column (dso) is the Diamagnetic spin-orbit contribution.
```
# display info about the data frame
dispDF(scalar_coupling_contributions_df, "scalar coupling contributions")
fig, ax = plt.subplots(2,1, figsize=(12, 8))
ax = ax.flatten()
# proportions of fc, sd, pso, dso in the scalar coupling constant
fc = scalar_coupling_contributions_df['fc']
sd = scalar_coupling_contributions_df['sd']
pso = scalar_coupling_contributions_df['pso']
dso = scalar_coupling_contributions_df['dso']
columns = ['fc', 'sd', 'pso', 'dso']
contribution_colors = sns.color_palette("Set2", 4)
nrows = scalar_coupling_contributions_df.shape[0]
for i, c in enumerate(columns):
    contributions = 100 * scalar_coupling_contributions_df[c].abs() \
        / (np.abs(fc) + np.abs(sd) + np.abs(pso) + np.abs(dso))
    ax[0].plot(np.arange(0, nrows, subsample*100), contributions[::subsample*100], c=contribution_colors[i], label=c)
ax[0].set_xlabel('molecule')
ax[0].set_ylabel('% of scalar coupling constant')
ax[0].legend()
# unique counts of molecular type
counts = np.zeros(scalar_coupling_contributions_df['type'].nunique())
for i, u in enumerate(scalar_coupling_contributions_df['type'].unique()):
    counts[i] = np.sum(scalar_coupling_contributions_df['type'].values == u)
sns.barplot(x=scalar_coupling_contributions_df['type'].unique(), y=counts, ax=ax[1])
```
From the top panel we can see that our target, the scalar coupling constant, is in most cases determined by the Fermi contact term, which tells us about an interaction between an electron and an atomic nucleus.
From the bottom panel we can see that the type of coupling is not uniformly distributed. We have **plenty of data for 3JHC, 2JHC, and 1JHC** but **a limited amount for 1JHN, 2JHN, and 3JHN**.
Let's see if these coupling types are related to the values of the interactions (fc, sd, pso, & dso). The following snippets were suggested by [Dmitrii Borkin](https://www.kaggle.com/hedgehoginfog). Thanks a lot!
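Incidentally, the manual counting loop in the snippet above can be collapsed into a single pandas call. A toy sketch (the series below is illustrative):

```python
import pandas as pd

# value_counts gives the same per-type totals as the manual np.sum loop,
# sorted by frequency, in one call.
types = pd.Series(['3JHC', '2JHC', '3JHC', '1JHN', '3JHC'])
counts = types.value_counts()
```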
```
# Values of each interaction vs molecular types
fig, ax = plt.subplots(4, 1, figsize=(20, 14))
ax = ax.flatten()
for i, col in enumerate(columns):
    means = scalar_coupling_contributions_df[["type", col]].groupby(by='type')[col].mean()
    SDs = scalar_coupling_contributions_df[["type", col]].groupby(by='type')[col].std()
    means.plot(kind="bar", yerr=SDs, ax=ax[i])
    ax[i].set_ylabel(col)
plt.tight_layout()
```
Interesting, **values of each interaction vary a lot across coupling types**. For example, the coupling type 2JHH gives rise to predominantly positive values of sd (the spin-dipolar interaction) and pso (the paramagnetic spin-orbit interaction) but negative values of fc (the Fermi contact interaction) and dso (the diamagnetic spin-orbit interaction).
# Combine all the files into one
It seems possible to combine all the additional files as one. Such a big file may be useful for our feature engineering.
Note that
**"dipole_moments" and "potential_energy"** files are about **molecule**.
**"magnetic_shielding_tensor", "mulliken_charge", and "structure"** files are about **an atom in a molecule**. Don't forget that only "structure" contains data for both train and test.
**"scalar_coupling_contributions"** file is about **"a pair of atoms in a molecule"**.
So we have to be careful not to lose information when combining these files.
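Before the real merges, here is a toy sketch of why granularity matters (the frames below are illustrative, not the competition data): left-joining a per-molecule table into a per-atom table repeats the molecule-level value once per atom, so no atom rows are lost and the molecule columns are simply duplicated.

```python
import pandas as pd

# Per-molecule table (one row per molecule) merged into a per-atom table
# (one row per atom): the left join preserves every atom row.
per_molecule = pd.DataFrame({'molecule_name': ['m1'], 'potential_energy': [-40.5]})
per_atom = pd.DataFrame({'molecule_name': ['m1', 'm1'], 'atom_index': [0, 1]})
merged = pd.merge(per_atom, per_molecule, on='molecule_name', how='left')
```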
```
# combine "dipole_moments_df" and "potential_energy_df" (The both have 85003 rows, information per molecule)
DM_PE_df = pd.merge(dipole_moments_df, potential_energy_df, on='molecule_name')
del dipole_moments_df, potential_energy_df, train_df, test_df
gc.collect()
print("There are {} rows and {} columns in DM_PE_df.".format(DM_PE_df.shape[0], DM_PE_df.shape[1]))
DM_PE_df.head(12)
# combine "magnetic_shielding_tensors_df" and "mulliken_charges_df" (The both have 1533537 rows, information per atom in a molecule)
MST_MC_df = pd.merge(magnetic_shielding_tensors_df, mulliken_charges_df, on=['molecule_name', 'atom_index'])
del magnetic_shielding_tensors_df, mulliken_charges_df
gc.collect()
print("There are {} rows and {} columns in MST_MC_df.".format(MST_MC_df.shape[0], MST_MC_df.shape[1]))
MST_MC_df.head(12)
# combine these two
MST_MC_DM_PE_df = pd.merge(MST_MC_df, DM_PE_df, on='molecule_name', how='left')
del MST_MC_df, DM_PE_df
gc.collect()
print("There are {} rows and {} columns in MST_MC_DM_PE_df.".format(MST_MC_DM_PE_df.shape[0], MST_MC_DM_PE_df.shape[1]))
MST_MC_DM_PE_df.head(12)
```
### reduce memory burden
Thanks to https://www.kaggle.com/speedwagon/permutation-importance
```
# lighter structure
def reduce_mem_usage(df, verbose=True):
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage().sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
    return df
MST_MC_DM_PE_df = reduce_mem_usage(MST_MC_DM_PE_df)
scalar_coupling_contributions_df = reduce_mem_usage(scalar_coupling_contributions_df)
# combine it with "scalar_coupling_contributions_df" (information per pair of atoms in a molecule)
combined_df0 = pd.merge(scalar_coupling_contributions_df, MST_MC_DM_PE_df,
left_on=['molecule_name','atom_index_0'], right_on=['molecule_name','atom_index'], how='left')
print("There are {} rows and {} columns in combined_df0.".format(combined_df0.shape[0], combined_df0.shape[1]))
combined_df0.head(12)
```
Here the combined file is based on "atom_index_0", but of course you can do the same using "atom_index_1" like the following. You can even merge these two big files together to have complete information about a pair of atoms. But I could not manage that due to the memory limit of this kernel.
```
# # combine it with "scaler_coupling_contributions_df" (information per a pair of atoms in a molecule)
# combined_df1 = pd.merge(scalar_coupling_contributions_df, MST_MC_DM_PE_df,
# left_on=['molecule_name','atom_index_1'], right_on=['molecule_name','atom_index'], how='left')
# del scalar_coupling_contributions_df, MST_MC_DM_PE_df
# gc.collect()
# print("There are {} rows and {} columns in combined_df0.".format(combined_df1.shape[0], combined_df1.shape[1]))
# combined_df1.head(12)
# # combine these two
# combined_df = pd.merge(combined_df0, combined_df1, on=['molecule_name'])
# del combined_df0, combined_df1
# gc.collect()
# print("There are {} rows and {} columns in combined_df.".format(combined_df.shape[0], combined_df.shape[1]))
# combined_df.head(12)
```
TO BE UPDATED...
## Load the MNIST dataset
```
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data('mnist/mnist.npz')
print(x_train.shape, type(x_train))
print(y_train.shape, type(y_train))
```
## Data processing: normalization
```
# Reshape each image from [28, 28] to [784,]
X_train = x_train.reshape(60000, 784)
X_test = x_test.reshape(10000, 784)
print(X_train.shape, type(X_train))
print(X_test.shape, type(X_test))
# Convert the data type to float32
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# Normalize pixel values to [0, 1]
X_train /= 255
X_test /= 255
```
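The reshape-and-scale pipeline above can be checked on toy data. A minimal sketch (the toy batch below is illustrative, not MNIST itself): flatten a batch of 28x28 uint8 images into 784-dimensional float32 vectors scaled into [0, 1].

```python
import numpy as np

# Toy batch of two 28x28 uint8 "images"; one pixel set to the max value 255.
toy = np.zeros((2, 28, 28), dtype=np.uint8)
toy[0, 0, 0] = 255
# Same transformation as above: flatten, cast, scale to [0, 1].
flat = toy.reshape(2, 784).astype('float32') / 255
```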
## Count the labels in the training data
```
import numpy as np
import matplotlib.pyplot as plt
label, count = np.unique(y_train, return_counts=True)
print(label, count)
fig = plt.figure()
plt.bar(label, count, width = 0.7, align='center')
plt.title("Label Distribution")
plt.xlabel("Label")
plt.ylabel("Count")
plt.xticks(label)
plt.ylim(0,7500)
for a,b in zip(label, count):
    plt.text(a, b, '%d' % b, ha='center', va='bottom', fontsize=10)
plt.show()
```
## Data processing: one-hot encoding
### Comparison of encoding schemes
| Binary | Gray code | One-hot |
| ------ | --------- | -------- |
| 000 | 000 | 00000001 |
| 001 | 001 | 00000010 |
| 010 | 011 | 00000100 |
| 011 | 010 | 00001000 |
| 100 | 110 | 00010000 |
| 101 | 111 | 00100000 |
| 110 | 101 | 01000000 |
| 111 | 100 | 10000000 |
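One-hot encoding can also be done without Keras. A minimal sketch (the toy labels below are illustrative): indexing an identity matrix by the label values produces the same result as `np_utils.to_categorical`.

```python
import numpy as np

# Each label selects the matching row of the identity matrix, i.e. a vector
# with a single 1 at the label's position.
labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]
```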
### Applying one-hot encoding

```
from keras.utils import np_utils
n_classes = 10
print("Shape before one-hot encoding: ", y_train.shape)
Y_train = np_utils.to_categorical(y_train, n_classes)
print("Shape after one-hot encoding: ", Y_train.shape)
Y_test = np_utils.to_categorical(y_test, n_classes)
print(y_train[0])
print(Y_train[0])
```
## Define the neural network with the Keras Sequential model
### Softmax layer

```
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
```
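What the final softmax layer computes for one sample can be written in two lines of NumPy. A hedged sketch (the logits below are illustrative): exponentiate the logits and normalize so the outputs form a probability distribution over the 10 classes.

```python
import numpy as np

# Softmax for a single vector of logits; subtracting the max before exp
# avoids overflow without changing the result.
def softmax(z):
    e = np.exp(z - z.max())  # shift by max for numerical stability
    return e / e.sum()
```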
## Compile the model
[model.compile()](https://keras.io/models/sequential/#compile)
```python
compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)
```
```
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
```
## Train the model and save metrics to `history`
[model.fit()](https://keras.io/models/sequential/#fit)
```python
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
```
```
history = model.fit(X_train,
Y_train,
batch_size=128,
epochs=5,
verbose=2,
validation_data=(X_test, Y_test))
```
## Visualize metrics
```
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.tight_layout()
plt.show()
```
## Save the model
[model.save()](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model)
You can use `model.save(filepath)` to save a Keras model into a single **HDF5 file** which will contain:
- the architecture of the model, allowing you to re-create the model
- the weights of the model
- the training configuration (loss, optimizer)
- the state of the optimizer, allowing you to resume training exactly where you left off.
You can then use `keras.models.load_model(filepath)` to reinstantiate your model. `load_model` will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).
```
import os
import tensorflow.gfile as gfile
save_dir = "./mnist/model/"
if gfile.Exists(save_dir):
    gfile.DeleteRecursively(save_dir)
gfile.MakeDirs(save_dir)
model_name = 'keras_mnist.h5'
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
```
## Load the model
```
from keras.models import load_model
mnist_model = load_model(model_path)
```
## Evaluate the model's classification results on the test set
```
loss_and_metrics = mnist_model.evaluate(X_test, Y_test, verbose=2)
print("Test Loss: {}".format(loss_and_metrics[0]))
print("Test Accuracy: {}%".format(loss_and_metrics[1]*100))
predicted_classes = mnist_model.predict_classes(X_test)
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
print("Classified correctly count: {}".format(len(correct_indices)))
print("Classified incorrectly count: {}".format(len(incorrect_indices)))
```
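`predict_classes` above is equivalent to taking an argmax over the class axis of the predicted probabilities. A minimal sketch (the probability matrix below is an illustrative stand-in for the model's softmax output):

```python
import numpy as np

# Each row is one sample's class probabilities; argmax along axis=1 picks
# the most likely class, which is what predict_classes returns.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
predicted = probs.argmax(axis=1)
```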
## Hospital Preparation
```
import requests
import math
import pprint
import time
import numpy as np
import pandas as pd
import urllib
# List from ATC EMS raw data
ems_hospitals = ["South Austin Hospital", "Seton Northwest", "North Austin Hospital", "Baylor Scott & White - Austin", "Baylor Scott & White - Lakeway", "Seton Southwest", "Dell Childrens Med Ctr", "Dell Seton Med Ctr", "Seton Med Ctr", "Saint Davids Med Ctr", "Westlake Hospital", "Heart Hospital"]
pprint.pprint(ems_hospitals)
# Setup and test Bing Maps API querying
BingMapsKey = "AgmNkDXiPX65nDUv6MUa0gRtHyrrGAMuD5n8mNQfl72ISI6xGZA37clQ0OcP56y-"
# Example
call = requests.get("https://dev.virtualearth.net/REST/v1/LocalSearch/?query=hospital&userLocation=47.602038,-122.333964&key=" + BingMapsKey)
pprint.pprint(call.json())
# Test Find a location by query
query = ems_hospitals[0]
encoded_query = urllib.parse.quote(query)
call = requests.get("http://dev.virtualearth.net/REST/v1/Locations?query=" + encoded_query + "&key=" + BingMapsKey)
pprint.pprint(call.json())
# Test local search
query = ems_hospitals[3]
area = "30.2672,-97.7431"
if("Lakeway" in query):
    area = "30.365307, -97.976154"
call = requests.get("https://dev.virtualearth.net/REST/v1/LocalSearch/?query=" + query + "&userLocation=" + area + "&maxResults=25&key=" + BingMapsKey)
pprint.pprint(call.json())
hospital_data_list = []
for h in ems_hospitals:
    query = h
    area = "30.2672,-97.7431"
    if("Lakeway" in query):
        area = "30.365307, -97.976154"
    call = requests.get("https://dev.virtualearth.net/REST/v1/LocalSearch/?query=" + query + "&userLocation=" + area + "&maxResults=25&key=" + BingMapsKey)
    i = 0
    while(1):
        hospital_data = call.json()['resourceSets'][0]['resources'][i]
        if("Animal" not in hospital_data['name'] and "Clinic" not in hospital_data['name']):
            hospital_data_list.append(hospital_data)
            break
        else:
            i += 1
# Verify data
pprint.pprint(ems_hospitals)
print()
#pprint.pprint(hospital_data_list)
for i in range(len(hospital_data_list)):
    print(hospital_data_list[i]['name'])
    #print(hospital_data_list[i]['point']['coordinates'])
# Export data to csv
hospital_names = []
hospital_coords = []
for i in range(len(hospital_data_list)):
    hospital_names.append(hospital_data_list[i]['name'])
    hospital_coords.append(hospital_data_list[i]['point']['coordinates'])
hospital_locations = np.array(hospital_coords)
dataset = pd.DataFrame({'hospital_name': hospital_names, "longitude": hospital_locations[:,1], "latitude": hospital_locations[:,0]})
print(dataset)
dataset.to_csv('ems_hospitals.csv', index = False)
```
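The request URLs above are built by string concatenation; `urllib.parse.urlencode` handles spaces and special characters in the query automatically. A hedged sketch (the key below is a placeholder, not a real Bing Maps key):

```python
import urllib.parse

# urlencode percent-encodes each parameter value and joins them with '&',
# so the query string stays valid even when the query contains spaces.
params = {'query': 'South Austin Hospital',
          'userLocation': '30.2672,-97.7431',
          'maxResults': 25,
          'key': 'YOUR_BING_MAPS_KEY'}
url = 'https://dev.virtualearth.net/REST/v1/LocalSearch/?' + urllib.parse.urlencode(params)
```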
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import scipy
import scipy.stats as stats
import scipy.optimize as opt
import statsmodels.api as sm
import theano.tensor as tt
from sklearn import preprocessing
%matplotlib inline
plt.style.use('bmh')
colors = ['#348ABD', '#A60628', '#7A68A6', '#467821', '#D55E00',
'#CC79A7', '#56B4E9', '#009E73', '#F0E442', '#0072B2']
# Many examples adapted from the official pymc3 documentation and Mark Dregan's tutorial on Bayesian techniques:
# https://github.com/markdregan/Bayesian-Modelling-in-Python
nypd = pd.read_csv("nypd_df.csv", parse_dates=True)
# The shape of the data is still recognizable with 10% of the data (smaller sample helps computational exploration)
nypd['responsetime'].sample(frac=.1).hist(bins=200, range=[0, 400])
df = nypd.sample(frac=.08)
y_obs = df['responsetime'].values
#Create out of sample (oos) dataframe to predict
oos = nypd[~nypd.index.isin(df.index)].sample(n=1000)
y_obs.shape
fig = plt.figure(figsize=(11,3))
_ = plt.title('Frequency of police response times')
_ = plt.xlabel('Response time (seconds)')
_ = plt.ylabel('Number of messages')
_ = plt.hist(y_obs,
range=[0, 400], bins=400, histtype='stepfilled')
```
A frequentist estimate of the value of $\mu$ for a Poisson distribution
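Before running the optimizer on the real response times, a quick sanity check on toy counts (the data below are illustrative): for a Poisson model the maximum-likelihood estimate of $\mu$ is just the sample mean, so the numerical optimizer should land on `np.mean(y)`.

```python
import numpy as np
from scipy import optimize, stats

# Minimize the negative Poisson log-likelihood of toy counts; the optimum
# should coincide with the sample mean (the closed-form Poisson MLE).
y = np.array([3, 4, 5, 4])
res = optimize.minimize_scalar(lambda mu: -stats.poisson.logpmf(y, mu=mu).sum(),
                               bounds=(0.1, 20), method='bounded')
```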
```
def poisson_logprob(mu, sign=-1):
    return np.sum(sign*stats.poisson.logpmf(y_obs, mu=mu))
freq_results = opt.minimize_scalar(poisson_logprob)
%time print("The estimated value of mu is: %s" % freq_results['x'])
fig = plt.figure(figsize=(11,3))
ax = fig.add_subplot(111)
x_lim = 290
mu = int(freq_results['x'])
for i in np.arange(x_lim):
    plt.bar(i, stats.poisson.pmf(i, mu), color=colors[3])
_ = ax.set_xlim(160, x_lim)
_ = ax.set_ylim(0, 0.03)
_ = ax.set_xlabel('Response time in minutes')
_ = ax.set_ylabel('Probability mass')
_ = ax.set_title('Estimated Poisson distribution for Police response time')
_ = plt.legend(['$\lambda$ = %s' % mu])
x = np.linspace(1, 600)
y_min = np.min([poisson_logprob(i, sign=1) for i in x])
y_max = np.max([poisson_logprob(i, sign=1) for i in x])
fig = plt.figure(figsize=(6,4))
_ = plt.plot(x, [poisson_logprob(i, sign=1) for i in x])
_ = plt.fill_between(x, [poisson_logprob(i, sign=1) for i in x],
y_min, color=colors[0], alpha=0.3)
_ = plt.title('Optimization of $\mu$')
_ = plt.xlabel('$\mu$')
_ = plt.ylabel('Log probability of $\mu$ given data')
_ = plt.vlines(freq_results['x'], y_max, y_min, colors='red', linestyles='dashed')
_ = plt.scatter(freq_results['x'], y_max, s=110, c='red', zorder=3)
_ = plt.ylim(ymin=y_min, ymax=0)
_ = plt.xlim(xmin=1, xmax=600)
with pm.Model() as model:
    mu = pm.Uniform('mu', lower=0, upper=400)
    likelihood = pm.Poisson('likelihood', mu=mu, observed=y_obs)
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(200000, step, start=start, progressbar=True)
_ = pm.traceplot(trace, vars=['mu'], lines={'mu': freq_results['x']})
fig = plt.figure(figsize=(11,3))
plt.subplot(121)
_ = plt.title('Burnin trace')
_ = plt.ylim(ymin=234, ymax=235)
_ = plt.plot(trace.get_values('mu')[:3000])
fig = plt.subplot(122)
_ = plt.title('Full trace')
_ = plt.ylim(ymin=234, ymax=235)
_ = plt.plot(trace.get_values('mu'))
_ = pm.autocorrplot(trace[:5000], varnames=['mu'])
with pm.Model() as model:
    mu = pm.Uniform('mu', lower=0, upper=400)
    y_est = pm.Poisson('y_est', mu=mu, observed=y_obs)
    y_pred = pm.Poisson('y_pred', mu=mu)
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(200000, step, start=start, progressbar=True)
x_lim = 400
burnin = 2000
y_pred = trace[burnin:].get_values('y_pred')
mu_mean = trace[burnin:].get_values('mu').mean()
fig = plt.figure(figsize=(10,6))
fig.add_subplot(211)
_ = plt.hist(y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
fig.add_subplot(212)
_ = plt.hist(y_obs, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
with pm.Model() as model:
    alpha = pm.Exponential('alpha', lam=1)
    mu = pm.Uniform('mu', lower=0, upper=400)
    y_pred = pm.NegativeBinomial('y_pred', mu=mu, alpha=alpha)
    y_est = pm.NegativeBinomial('y_est', mu=mu, alpha=alpha, observed=y_obs)
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(2000, step, start=start, progressbar=True)
_ = pm.traceplot(trace[100:], vars=['alpha', 'mu'])
x_lim = 400
y_pred = trace[burnin:].get_values('y_pred')
fig = plt.figure(figsize=(10,6))
fig.add_subplot(211)
_ = plt.hist(y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
fig.add_subplot(212)
_ = plt.hist(y_obs, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlabel('Response time in minutes')
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
# Use a theano shared variable to be able to exchange the data the model runs on
from theano import shared
df = df.rename(columns={"Distance Km": "distance"})
df_shared = shared(df[["distance", "isweekend", "samezip", "responsetime"]].values)
df[["distance", "isweekend", "samezip", "responsetime"]].values
shared_dist = shared(df["distance"].values)
shared_weekend = shared(df["isweekend"].values)
shared_zip = shared(df["samezip"].values)
shared_time = shared(df["responsetime"].values)
# Convert categorical variables to integer
le_precincts = preprocessing.LabelEncoder()
precincts_idx = le_precincts.fit_transform(df['PolicePrct'])
precincts = le_precincts.classes_
precincts_shared_idx = shared(precincts_idx)
n_precincts = len(precincts)
# Convert categorical variables to integer
le_borough = preprocessing.LabelEncoder()
boroughs_idx = le_borough.fit_transform(df['Borough'])
boroughs = le_borough.classes_
boroughs_shared_idx = shared(boroughs_idx)
n_boroughs = len(boroughs)
with pm.Model() as model:
    borough = pm.Normal('borough', mu=0, sd=100, shape=n_boroughs)
    precinct = pm.Normal('precinct', mu=0, sd=100, shape=n_precincts)
    slope_distance = pm.Normal('slope_distance', mu=0, sd=100)
    slope_is_weekend = pm.Normal('slope_is_weekend', mu=0, sd=100)
    slope_same_zip = pm.Normal('slope_same_zip', mu=0, sd=100)
    mu = tt.exp(borough[boroughs_shared_idx]
                + precinct[precincts_shared_idx]
                + slope_is_weekend*shared_weekend
                + slope_same_zip*shared_zip
                + slope_distance*shared_dist)
    alpha = pm.Exponential('alpha', lam=1.1)
    y_est = pm.NegativeBinomial('y_est', mu=mu, alpha=alpha, observed=y_obs)
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(200000, step, start=start, progressbar=True)
# Convert categorical variables to integer
le_precincts = preprocessing.LabelEncoder()
precincts_idx = le_precincts.fit_transform(df['PolicePrct'])
precincts = le_precincts.classes_
precincts_shared_idx = shared(precincts_idx)
n_precincts = len(precincts)
# Convert categorical variables to integer
le_borough = preprocessing.LabelEncoder()
boroughs_idx = le_borough.fit_transform(df['Borough'])
boroughs = le_borough.classes_
boroughs_shared_idx = shared(boroughs_idx)
n_boroughs = len(boroughs)
with pm.Model() as w_model:
    borough = pm.Normal('borough', mu=0, sd=100, shape=n_boroughs)
    precinct = pm.Normal('precinct', mu=0, sd=100, shape=n_precincts)
    slope_distance = pm.Normal('slope_distance', mu=0, sd=100)
    slope_is_weekend = pm.Normal('slope_is_weekend', mu=0, sd=100)
    slope_same_zip = pm.Normal('slope_same_zip', mu=0, sd=100)
    beta = tt.exp(borough[boroughs_shared_idx]
                  + precinct[precincts_shared_idx]
                  + slope_is_weekend*shared_weekend
                  + slope_same_zip*shared_zip
                  + slope_distance*shared_dist)
    alpha = pm.Exponential('alpha', lam=1.1)
    y_est = pm.Weibull('y_est', beta=beta, alpha=alpha, observed=y_obs)
    start = pm.find_MAP()
    step = pm.Metropolis()
    w_trace = pm.sample(200000, step, start=start, progressbar=True)
_ = pm.traceplot(w_trace[10000:])
_ = plt.figure(figsize=(10, 5))
_ = pm.forestplot(trace[10000:], vars=['borough'], ylabels=boroughs)
plt.suptitle("Borough Differences in NYPD Resolution Times, Weibull Regression", size=12)
oos = nypd[(~nypd.index.isin(df.index)) & (nypd.responsetime < 400)].sample(n=5000)
shared_dist.set_value(oos["Distance Km"].values)
shared_weekend.set_value(oos["isweekend"].values)
shared_zip.set_value(oos["samezip"].values)
shared_time.set_value(oos["responsetime"].values)
# Change values of the shared variables to predict out of sample distribution
precincts_idx = le_precincts.transform(oos['PolicePrct'])
precincts = le_precincts.classes_
precincts_shared_idx.set_value(precincts_idx)
n_precincts = len(precincts)
boroughs_idx = le_borough.transform(oos['Borough'])
boroughs = le_borough.classes_
boroughs_shared_idx.set_value(boroughs_idx)
n_boroughs = len(boroughs)
# Simply running PPC will use the updated values and do prediction
ppc = pm.sample_ppc(w_trace, model=w_model, samples=200)
np.power((oos['responsetime'].values - ppc['y_est'].mean(axis=0)), 2).mean()
pd.DataFrame({"Pred":(ppc['y_est'].mean(axis=0) * 10000), "Actual": oos['responsetime'].values}).plot(x="Actual",
y="Pred",
kind="scatter",
figsize=(16, 7))
plt.suptitle("Weibull Mixed Effects Predictions vs. Actual Resolution Time", size=17)
pd.DataFrame({"Pred":(ppc['y_est'].mean(axis=0)), "Actual": oos['responsetime'].values}).plot(kind="hist",
figsize=(15, 7),
alpha=.3)
plt.suptitle("Weibull Mixed Effects Predictions vs. Actual Resolution Time Density", size=17)
pd.DataFrame({"Pred":(ppc['y_est'].mean(axis=0))}).plot(kind="hist",figsize=(15,7), bins=40, alpha=.3)
plt.suptitle("Weibull Mixed Effects Predictions vs. Actual Resolution Time Density", size=17)
pd.DataFrame({"Observed Time to Resolution":y_obs}).plot(kind="hist",range=[0, 500], bins=50, figsize=(15,7), alpha=.7)
plt.suptitle("Time to Resolution, Sample of 311 NYPD complaints", size=15)
plt.xlabel("Minutes")
pm.summary(trace)
```
```
import pandas as pd, numpy as np
from scipy import stats
stations=pd.read_csv('data/stations.csv').set_index('ID')
c='ro'
df=pd.read_csv('data/'+c+'_ds.csv') #daily data
# df=pd.read_csv('data/'+c+'_hs.csv') #high_res data
df['time']=pd.to_datetime(df['time'])
df['year']=df['time'].dt.year
df['month']=df['time'].dt.month
df['day']=df['time'].dt.day
df['hour']=df['time'].dt.hour
df=df.set_index('time')
df=df.sort_index()
df.groupby('year').nunique()['ID'].plot()
history=df.groupby('ID').nunique()['year'].sort_values(ascending=False)
history=pd.DataFrame(history).join(stations)
history.head()
nepi=pd.read_excel(c+'/idojaras_'+c+'.xlsx')
```
Setup plot params
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.collections import PolyCollection
%matplotlib inline
import matplotlib as mpl
import matplotlib.font_manager as font_manager
path = 'KulimPark-Regular.ttf'
path2 = 'Symbola.ttf'
prop = font_manager.FontProperties(fname=path)
prop2 = font_manager.FontProperties(fname=path2)
color_ax='#E7CFBC'
color_bg='#FFF4EC'
color_obs_right0='#F2B880'
color_obs_left0=color_ax
color_pred_right0='#C98686'
color_pred_left0='#966B9D'
color_pred_talalt0='#59c687'
color_pred_nem_talalt0='#c95498'
font_size=12
s=40
obs_talalt_glyph0='★'
obs_nem_talalt_glyph0='☆'
pred_talalt_glyph0='✔️'
pred_nem_talalt_glyph0='✖️'
title_icon_right={'Temp':'☼','Wind':'►','Hail':'▲','Snow':'▲','Snow Depth':'▲','Rain':'☔️','Visib':'☀️'}
title_icon_left={'Temp':'❄️','Wind':'◄','Hail':'▼','Snow':'▼','Snow Depth':'▼','Rain':'☂️','Visib':'☁️'}
title_icon={'Temp':'♨️','Rain':'☂️','Hail':'❄️','Snow':'⛷️','Snow Depth':'⛄️','Wind':'☘','Cloud':'☁️','Visib':'☀️'}
def get_data(data,th):
    a1=pd.DataFrame(data[data<=th])
    a1['g']='left'
    a2=pd.DataFrame(data[data>th])
    a2['g']='right'
    a3=pd.concat([a1,a2])
    a3['x']='x'
    return a1,a2,a3
def violin_plot(data,th,ax,color_left,color_right):
    a=0.3
    a1,a2,a3=get_data(data,th)
    if len(a1)==1:
        a11=pd.DataFrame(a1)
        a11['x']='x'
        a11[a1.columns[0]]=[a1[a1.columns[0]].values[0]*1.1]
        a3=pd.concat([a3,a11])
    ax.axvline(0,color=color_ax)
    if a3.nunique()['g']>1:
        sns.violinplot(y=a1.columns[0], x='x',hue='g', data=a3, split=True, ax=ax,
                       inner=None,linewidth=1, scale="count", saturation=1)
        ax.get_children()[0].set_color(mpl.colors.colorConverter.to_rgba(color_left, alpha=a))
        ax.get_children()[0].set_edgecolor(color_left)
        ax.get_children()[1].set_color(mpl.colors.colorConverter.to_rgba(color_right, alpha=a))
        ax.get_children()[1].set_edgecolor(color_right)
        ax.legend().remove()
    else:
        if len(a1)>0:
            w=a1
            c=color_left
        else:
            w=a2
            c=color_right
        sns.violinplot(y=w.columns[0], data=w, ax=ax,
                       inner=None,linewidth=1, scale="count", saturation=1)
        ax.set_xlim([-1,0])
        ax.get_children()[0].set_color(mpl.colors.colorConverter.to_rgba(c, alpha=a))
        ax.get_children()[0].set_edgecolor(c)
spine_plot(datum,nep['Mondás'].strip(),mondas,nep['Jelentés'].strip(),nep['Kondíció'],nep['Mennyiség'],
observation_ts,observation_th,prediction_ts,prediction_th)
def setup_axes():
    fig,axes=plt.subplots(1,3,figsize=(8,5),gridspec_kw={'width_ratios': [1, 3, 1]})
    axi_top= axes[2].inset_axes([0.1, 0.65, 1, 0.3])
    axi_top.axis('off')
    axi_bottom= axes[2].inset_axes([0.1, 0, 1, 0.5])
    axi_bottom.axis('off')
    axes[0].axis('off')
    axes[1].axis('off')
    axes[2].axis('off')
    axes[0]=axes[0].inset_axes([0, 0.15, 1, 0.85])
    axes[1]=axes[1].inset_axes([0, 0.15, 1, 0.85])
    axes[0].axis('off')
    axes[1].axis('off')
    return fig, axes, axi_top, axi_bottom
def stem_plot(data,ax,color,s=s):
    data=pd.DataFrame(data)
    x=data.index
    y=data[data.columns[0]].values
    for i,e in enumerate(y):
        ax.plot([0,e],[x[i],x[i]],color=color)
    ax.scatter(y,x,s,color=color,zorder=10)
def stem2_plot(data,th,ax,color_left,color_right,s=s,axv_color=None):
    if axv_color==None:axv_color=color_right
    a1,a2,a3=get_data(data,th)
    stem_plot(a1,ax,color_left,s)
    stem_plot(a2,ax,color_right,s)
    ax.axvline(0,color=color_ax)
    #if th!=0:
    if True:
        ax.axvline(th,color=axv_color,ls='--',zorder=5)
def icons_plot(axes,kondicio,mennyiseg,observation_th,prediction_th):
    ylim=axes[0].get_ylim()
    xlim=axes[1].get_xlim()
    y_max_coord=ylim[0]+(ylim[1]-ylim[0])*1.05
    y_max_coord2=ylim[0]+(ylim[1]-ylim[0])*1.05 #1.04
    x_icon_coord_shift=(xlim[1]-xlim[0])*0.1
    axes[0].text(observation_th, y_max_coord, title_icon[kondicio],
                 horizontalalignment='center', color=color_obs_right0, fontproperties=prop2, fontsize=font_size*1.5)
    axes[1].text(prediction_th, y_max_coord, title_icon[mennyiseg],
                 horizontalalignment='center', color=color_ax, fontproperties=prop2, fontsize=font_size*1.5)
    axes[1].text(prediction_th+x_icon_coord_shift, y_max_coord2, title_icon_right[mennyiseg],
                 horizontalalignment='center', color=color_pred_right, fontproperties=prop2, fontsize=font_size*1.5)
    axes[1].text(prediction_th-x_icon_coord_shift, y_max_coord2, title_icon_left[mennyiseg],
                 horizontalalignment='center', color=color_pred_left, fontproperties=prop2, fontsize=font_size*1.5)
def talalat_plot_line(axes,n_prediction_ts_good,n_prediction_ts_bad,
n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt,
observation_th,prediction_th):
ylim=axes[0].get_ylim()
xlim=axes[0].get_xlim()
y_max_coord=ylim[0]+(ylim[1]-ylim[0])*(-0.07)
x_icon_coord_shift=(xlim[1]-xlim[0])*0.1
x_icon_coord_shift2=(xlim[1]-xlim[0])*0.27
axes[0].text(observation_th+x_icon_coord_shift, y_max_coord, obs_talalt_glyph,
horizontalalignment='center', color=color_obs_right, fontproperties=prop2)
axes[0].text(observation_th-x_icon_coord_shift, y_max_coord, obs_nem_talalt_glyph,
horizontalalignment='center', color=color_obs_left, fontproperties=prop2)
axes[0].text(observation_th+x_icon_coord_shift2, y_max_coord, n_prediction_ts_good,
horizontalalignment='center', color=color_obs_right, fontproperties=prop)
axes[0].text(observation_th-x_icon_coord_shift2, y_max_coord, n_prediction_ts_bad,
horizontalalignment='center', color=color_obs_left, fontproperties=prop)
axes[0].text(observation_th, y_max_coord, '|',
horizontalalignment='center', color=color_obs_right0, fontproperties=prop,fontsize=19)
xlim=axes[1].get_xlim()
x_icon_coord_shift=(xlim[1]-xlim[0])*0.04
x_icon_coord_shift2=(xlim[1]-xlim[0])*0.1
axes[1].text(prediction_th+x_icon_coord_shift, y_max_coord, pred_talalt_glyph,
horizontalalignment='center', color=color_pred_talalt, fontproperties=prop2)
axes[1].text(prediction_th-x_icon_coord_shift, y_max_coord, pred_nem_talalt_glyph,
horizontalalignment='center', color=color_pred_nem_talalt, fontproperties=prop2)
axes[1].text(prediction_th+x_icon_coord_shift2, y_max_coord, n_prediction_ts_good_talalt,
horizontalalignment='center', color=color_pred_talalt, fontproperties=prop)
axes[1].text(prediction_th-x_icon_coord_shift2, y_max_coord, n_prediction_ts_good_nem_talalt,
horizontalalignment='center', color=color_pred_nem_talalt, fontproperties=prop)
axes[1].text(prediction_th, y_max_coord, '|',
horizontalalignment='center', color=color_pred_right, fontproperties=prop,fontsize=19)
y_max_coord=ylim[0]+(ylim[1]-ylim[0])*(-0.14)
axes[0].text(observation_th, y_max_coord, 'feltétel',
horizontalalignment='center', color=color_obs_right0, fontproperties=prop)
axes[1].text(prediction_th, y_max_coord, 'jóslat',
horizontalalignment='center', color=color_pred_right, fontproperties=prop)
y_max_coord=ylim[0]+(ylim[1]-ylim[0])*(-0.13)
x_coord_shift=prediction_th+(prediction_th-xlim[0])*(-0.4)
axes[1].annotate('', xy=(x_coord_shift, y_max_coord),xycoords='data',annotation_clip=False,
xytext=(xlim[0], y_max_coord),arrowprops=dict(arrowstyle= '->',color=color_obs_right0))
def talalat_plot_violin(axes,n_prediction_ts_good,n_prediction_ts_bad,n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt):
y_icon_obs=0.65
y_icon_pred=0.5
if color_obs_right==color_obs_right0: x=0.72
else: x=0.47
axes[2].text(0.72, y_icon_obs, obs_talalt_glyph,
horizontalalignment='center', color=color_obs_right, fontproperties=prop2)
axes[2].text(0.9, y_icon_obs,n_prediction_ts_good,
horizontalalignment='center', color=color_obs_right, fontproperties=prop)
axes[2].text(0.47, y_icon_obs, obs_nem_talalt_glyph,
horizontalalignment='center', color=color_obs_left, fontproperties=prop2)
axes[2].text(0.29, y_icon_obs, n_prediction_ts_bad,
horizontalalignment='center', color=color_obs_left, fontproperties=prop)
axes[2].text(0.72, y_icon_pred, pred_talalt_glyph,
horizontalalignment='center', color=color_pred_talalt, fontproperties=prop2)
axes[2].text(0.9, y_icon_pred, n_prediction_ts_good_talalt,
horizontalalignment='center', color=color_pred_talalt, fontproperties=prop)
axes[2].text(0.47, y_icon_pred, pred_nem_talalt_glyph,
horizontalalignment='center', color=color_pred_nem_talalt, fontproperties=prop2)
axes[2].text(0.29, y_icon_pred, n_prediction_ts_good_nem_talalt,
horizontalalignment='center', color=color_pred_nem_talalt, fontproperties=prop)
axes[2].annotate('', xy=(0.59, y_icon_pred*1.04),xycoords='data',
xytext=(x, y_icon_obs*0.98),arrowprops=dict(arrowstyle= '->',color=color_obs_right0))
def talalat_plot(axes,ns,observation_th,prediction_th):
n_prediction_ts_good,n_prediction_ts_bad,n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt=ns
talalat_plot_line(axes,n_prediction_ts_good,n_prediction_ts_bad,
n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt,
observation_th,prediction_th)
talalat_plot_violin(axes,n_prediction_ts_good,n_prediction_ts_bad,
n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt)
def year_plot(data,ax,k):
y=data.values
x=data.index
ex=max(y)-min(y)
text_off=abs(ex*k)
text_align='left'
if y[0]<0:
text_off=-text_off
text_align='right'
ax.text(y[0]+text_off, x[0], str(x[0]),
horizontalalignment=text_align, verticalalignment='center',
color=color_ax, fontproperties=prop)
text_off=abs(text_off)
text_align='left'
if y[-1]<0:
text_off=-text_off
text_align='right'
ax.text(y[-1]+text_off, x[-1], str(x[-1]),
horizontalalignment=text_align, verticalalignment='center',
color=color_ax, fontproperties=prop)
def spine_plot(datum,title,mondas,jelentes,kondicio,mennyiseg,
observation_ts,observation_th,prediction_ts,prediction_th):
#data
prediction_ts_good=prediction_ts.loc[observation_ts[observation_ts>observation_th].index]
prediction_ts_bad=prediction_ts.loc[observation_ts[observation_ts<=observation_th].index]
n_prediction_ts_good=len(prediction_ts_good)
n_prediction_ts_bad=len(prediction_ts_bad)
if color_obs_right0!=color_obs_right:
prediction_ts_good,prediction_ts_bad=prediction_ts_bad,prediction_ts_good
prediction_ts_good_nem_talalt,prediction_ts_good_talalt,\
prediction_ts_good_joined=get_data(prediction_ts_good,prediction_th)
n_prediction_ts_good_talalt=len(prediction_ts_good_talalt)
n_prediction_ts_good_nem_talalt=len(prediction_ts_good_nem_talalt)
ns=[n_prediction_ts_good,n_prediction_ts_bad,n_prediction_ts_good_talalt,n_prediction_ts_good_nem_talalt]
#plots
fig, axes, axi_top, axi_bottom=setup_axes()
stem2_plot(observation_ts,observation_th,axes[0],color_obs_left,color_obs_right,s/2,color_obs_right0)
stem2_plot(prediction_ts_good,prediction_th,axes[1],color_pred_left,color_pred_right)
stem_plot(prediction_ts_bad,axes[1],color_ax)
violin_plot(observation_ts,observation_th,axi_top,color_obs_left,color_obs_right)
violin_plot(prediction_ts_good,prediction_th,axi_bottom,color_pred_left,color_pred_right)
#icons
icons_plot(axes,kondicio,mennyiseg,observation_th,prediction_th)
#hits
talalat_plot(axes,ns,observation_th,prediction_th)
#years
year_plot(observation_ts,axes[0],0.09)
year_plot(prediction_ts,axes[1],0.03)
#titles
len_ratio=0.15*(-1+(len(jelentes.split(',')[0])/len(jelentes.split(',')[1])))
fig.text(0.5+len_ratio,0.04,jelentes.split(',')[0]+',',color=color_obs_right0,
fontproperties=prop,fontsize=font_size*0.7,horizontalalignment='right')
if color_pred_talalt==color_pred_talalt0: color_pred_side=color_pred_right
else: color_pred_side=color_pred_left
fig.text(0.5+len_ratio,0.04,jelentes.split(',')[1],color=color_pred_side,
fontproperties=prop,fontsize=font_size*0.7,horizontalalignment='left')
if n_prediction_ts_good_nem_talalt>=n_prediction_ts_good_talalt:
color_title=color_pred_nem_talalt
verdict=pred_nem_talalt_glyph
else:
color_title=color_pred_talalt
verdict=pred_talalt_glyph
plt.suptitle(title,y=0.11,color=color_title,fontproperties=prop,fontsize=font_size)
fig.text(0.96,0.04,verdict, fontproperties=prop2,
horizontalalignment='right', color=color_title, fontsize=font_size*2, )
fig.text(0.04,0.045, datum, fontproperties=prop,
horizontalalignment='left', color=color_obs_right0, fontsize=font_size*2, )
plt.savefig(c+'/'+str(mondas)+'.png',dpi=300, facecolor=color_bg)
plt.show()
def filter_data(dz,observation_range,prediction_range):
dgs=[]
dhs=[]
for year in range(int(dz.min()['year']),int(dz.max()['year'])):
k=0
from_date=pd.to_datetime(str(year)+'-'+str(observation_range[k].month)+'-'+str(observation_range[k].day))
from_pred=pd.to_datetime(str(year)+'-'+str(prediction_range[k].month)+'-'+str(prediction_range[k].day))
k=1
to_date=pd.to_datetime(str(year)+'-'+str(observation_range[k].month)+'-'+str(observation_range[k].day))
to_pred=pd.to_datetime(str(year)+'-'+str(prediction_range[k].month)+'-'+str(prediction_range[k].day))
if to_pred<to_date:
to_pred+=pd.DateOffset(years=1)  # pd.to_timedelta does not accept 'Y' (year) units
dg=dz.loc[from_date:]
dg=dg[:to_date]
dg['pyear']=year
dgs.append(dg)
dh=dz.loc[from_pred:]
dh=dh[:to_pred]
dh['pyear']=year
dhs.append(dh)
return pd.concat(dgs),pd.concat(dhs)
dz=df.groupby(['time']).mean()
dz['year']=dz['year'].astype(int)
dz.head()
def set_direction(kondicio, mennyiseg):
if kondicio:
color_obs_right=color_obs_right0
color_obs_left=color_obs_left0
obs_talalt_glyph='★'
obs_nem_talalt_glyph='☆'
else:
color_obs_right=color_obs_left0
color_obs_left=color_obs_right0
obs_talalt_glyph='☆'
obs_nem_talalt_glyph='★'
if mennyiseg:
color_pred_talalt=color_pred_talalt0
color_pred_nem_talalt=color_pred_nem_talalt0
pred_talalt_glyph='✔️'
pred_nem_talalt_glyph='✖️'
else:
color_pred_talalt=color_pred_nem_talalt0
color_pred_nem_talalt=color_pred_talalt0
pred_talalt_glyph='✖️'
pred_nem_talalt_glyph='✔️'
return color_obs_right,color_obs_left,obs_talalt_glyph,obs_nem_talalt_glyph,\
color_pred_talalt,color_pred_nem_talalt,pred_talalt_glyph,pred_nem_talalt_glyph
def get_sign(sign,key):
positive=True
if (('-' in sign) or ('+' in sign)):
if sign=='-':
positive=False
elif sign=='+':
positive=True
elif (('<' in sign) or ('>' in sign)):
if '<' in sign:
positive=False
elif '>' in sign:
positive=True
return positive
universal_normalize=['XTEMP','XVSB','XSPD']
def get_ts_data(data,key,sign):
ts=data.groupby('year').mean()[key]
if (('-' in sign) or ('+' in sign)):
th=ts.mean()
else:
th=float(sign[1:])
if key in universal_normalize:
th-=ts.mean()
ts-=ts.mean()
return ts,th
def get_comp_data(observation_data,obs_key,obs_sign,prediction_data,pred_key,pred_sign):
ertek_sign=True
irany_sign=True
observation_ts=observation_data.groupby('year').mean()[obs_key]
prediction_ts=prediction_data.groupby('year').mean()[pred_key]
prediction_th=observation_ts.mean()
observation_ts-=observation_ts.mean()
observation_th=observation_ts.min()*1.01
prediction_th-=prediction_ts.mean()
prediction_ts-=prediction_ts.mean()
if obs_sign=='A':
if pred_sign=='A':
observation_th=0
prediction_th=0
else:
irany_sign=False
return observation_ts,observation_th,prediction_ts,prediction_th,ertek_sign,irany_sign
mennyiseg_key={'Temp':'XTEMP','Snow Depth':'XSD','Wind':'XSPD','Rain':'YPCP','Visib':'XVSB',
'Snow':'YSNW','Hail':'YHAL'}
nepi=pd.read_excel(c+'/idojaras_'+c+'.xlsx')
mondasok=nepi['ID'].values
# mondasok=range(60,61)
mondasok=[55]
for mondas in mondasok:
nep=nepi.loc[mondas]
if str(nep['Mennyiség'])!='nan':
print(mondas)
obs_key=mennyiseg_key[nep['Kondíció']]
pred_key=mennyiseg_key[nep['Mennyiség']]
observation_range=[nep['Dátum:mettől']+pd.to_timedelta('-1D'),nep['Dátum:meddig']+pd.to_timedelta('+2D')]
prediction_range=[nep['Periódus:mettől'],nep['Periódus:meddig']+pd.to_timedelta('+1D')]
observation_data,prediction_data=filter_data(dz,observation_range,prediction_range)
#comparison
if str(nep['Érték']) in ['A','B']:
observation_ts,observation_th,prediction_ts,prediction_th,ertek_sign,irany_sign=\
get_comp_data(observation_data,obs_key,nep['Érték'],\
prediction_data,pred_key,nep['Irány'])
#time series
else:
ertek_sign=get_sign(nep['Érték'],obs_key)
irany_sign=get_sign(nep['Irány'],pred_key)
observation_ts,observation_th=get_ts_data(observation_data,obs_key,nep['Érték'])
prediction_ts,prediction_th=get_ts_data(prediction_data,pred_key,nep['Irány'])
color_obs_right,color_obs_left,obs_talalt_glyph,obs_nem_talalt_glyph,\
color_pred_talalt,color_pred_nem_talalt,pred_talalt_glyph,pred_nem_talalt_glyph=\
set_direction(ertek_sign, irany_sign)
datum=str(nep['Dátum:mettől'].month)+'.'+str(nep['Dátum:mettől'].day)+'.'
datum=str(nep['Dátums'])[:3]+'. '+str(nep['Dátum:mettől'].day)
spine_plot(datum,nep['Mondás'].strip(),mondas,nep['Jelentés'].strip(),nep['Kondíció'],nep['Mennyiség'],
observation_ts,observation_th,prediction_ts,prediction_th)
```
# Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
We also want to make the device selection dynamic, in case someone wants to use a GPU.
```
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets
import torchvision.transforms as transforms
%matplotlib inline
```
### Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA tensors are the same as typical tensors, except that they utilize GPUs for computation.
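A minimal sketch of this dynamic GPU/CPU selection (illustrative code; the notebook itself stores the equivalent flag as `Env.train_on_gpu` below):

```
import torch

# Select the device once, then move tensors/models with .to(device);
# the same script then runs unchanged on machines with or without CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(2, 3).to(device)  # lives on the GPU when one is available
print(x.device)                  # e.g. cuda:0 or cpu
```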
```
class Env:
"""
The Env class is used as a namespace for constants
"""
batch_size = 20
valid_size = 0.2
num_workers = 0
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_on_gpu = torch.cuda.is_available()
epochs = 20
valid_loss_min = np.Inf
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
```
---
## Load the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
```
train_data = datasets.CIFAR10(root='', download=True, train=True, transform=Env.transform)
test_data = datasets.CIFAR10(root='', download=False, train=False, transform=Env.transform)
#create train test and validation data loader.
indices = list(range(len(train_data)))
np.random.shuffle(indices)
split = int(np.floor(Env.valid_size * len(train_data)))
train_sampler = SubsetRandomSampler(indices[split:])
valid_sampler = SubsetRandomSampler(indices[:split])
train_loader = torch.utils.data.DataLoader(train_data,
num_workers=Env.num_workers,
sampler=train_sampler,
batch_size=Env.batch_size)
valid_loader = torch.utils.data.DataLoader(train_data,
num_workers=Env.num_workers,
sampler=valid_sampler,
batch_size=Env.batch_size)
test_loader = torch.utils.data.DataLoader(test_data,
num_workers=Env.num_workers,
batch_size=Env.batch_size)
#visualize data
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0)))
dataiter = iter(train_loader)
images, labels = next(dataiter)  # use the built-in next(); the .next() method is not available in recent PyTorch
images = images.numpy()
fig = plt.figure(figsize=(20,4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20 // 2, idx+1, xticks=[], yticks=[])  # integer division: newer matplotlib rejects float subplot counts
imshow(images[idx])
ax.set_title(Env.classes[labels[idx]])
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
<img src='notebook_ims/2_layer_conv.png' height=50% width=50% />
#### TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.
It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
#### Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`.
For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
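As a sanity check, the formula can be wrapped in a small helper (an illustrative sketch, not part of the original notebook):

```
def conv_output_size(w, f, s=1, p=0):
    """Spatial output size of a conv layer: (W - F + 2P)/S + 1."""
    return (w - f + 2 * p) // s + 1

print(conv_output_size(7, 3, s=1, p=0))   # 5  (7x7 input, 3x3 filter, stride 1, pad 0)
print(conv_output_size(7, 3, s=2, p=0))   # 3  (same filter, stride 2)
print(conv_output_size(32, 3, s=1, p=1))  # 32 (padding 1 preserves CIFAR-10's 32x32 size)
```

The last call matches the model below, where each 3x3 convolution uses padding 1 and only the 2x2 max-pooling layers halve the spatial size (32 → 16 → 8 → 4, hence the `64*4*4` input to `fc1`).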
```
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
self.pool = nn.MaxPool2d(2,2)
self.fc1 = nn.Linear(64*4*4, 500)
self.fc2 = nn.Linear(500, 10)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
x = x.view(-1, 64 * 4 * 4)
x = self.dropout(x)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
model = NeuralNet()
print(model)
if Env.train_on_gpu:
model.cuda()
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
---
## Train the Network
Remember to look at how the training and validation loss decrease over time; if the validation loss ever starts to increase, that indicates possible overfitting, which is why we save a checkpoint whenever the validation loss reaches a new minimum.
```
valid_loss_min = np.inf
for epoch in range(1, Env.epochs + 1):
train_loss = 0.0
valid_loss = 0.0
model.train()
for images, targets in train_loader:
if Env.train_on_gpu:
images, targets = images.cuda(), targets.cuda()
optimizer.zero_grad()
output = model(images)
loss = criterion(output, targets)
loss.backward()
optimizer.step()
train_loss += loss.item() * images.size(0)
model.eval()
for images, targets in valid_loader:
if Env.train_on_gpu:
images, targets = images.cuda(), targets.cuda()
output = model(images)
loss = criterion(output, targets)
valid_loss += loss.item() * images.size(0)
train_loss = train_loss / len(train_loader.dataset)
valid_loss = valid_loss / len(valid_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(epoch, train_loss, valid_loss))
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'cnn.pt')
valid_loss_min = valid_loss
```
### Load the Model with the Lowest Validation Loss
```
model.load_state_dict(torch.load('cnn.pt'))
```
---
## Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
```
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for data, target in test_loader:
# move tensors to GPU if CUDA is available
if Env.train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not Env.train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(Env.batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
Env.classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (Env.classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # use the built-in next(); the .next() method is not available in recent PyTorch
images.numpy()
# move model inputs to cuda, if GPU available
if Env.train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not Env.train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(30, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20 // 2, idx+1, xticks=[], yticks=[])  # integer division: newer matplotlib rejects float subplot counts
imshow(images[idx])
ax.set_title("{} ({})".format(Env.classes[preds[idx]], Env.classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
```
# TMDb movie dataset analysis
## Table of Contents
<ul>
<li><a href="#intro">Introduction</a></li>
<li><a href="#wrangling">Data Wrangling</a></li>
<li><a href="#eda">Exploratory Data Analysis</a></li>
<li><a href="#conclusions">Conclusions</a></li>
</ul>
<a id='intro'></a>
## Introduction
> The Movie Database (TMDb) is an open-source database of movies and TV series. Its use is free and accessible to any user. With this facility, we will use this open data to analyze multiple movies and apply data visualization for a more complete analysis of correlations and possible trends in popularity, profitability, genres, and various other insights.
>
> You can find the link to download the TMDb dataset [here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/July/5b57919a_data-set-options/data-set-options.pdf).
> An in-depth examination of the data raised several questions to be analyzed. Some of them are:
>> 1. Which movie had the highest revenue?
>> 2. Which movie had the lowest revenue?
>> 3. What is the most popular movie?
>> 4. What is the most profitable movie?
>> 5. What is the correlation between revenue and popularity?
>> 6. Which genres had the highest revenue?
>> 7. What is the correlation between popularity, budget, revenue and vote?
>> 8. Was the movie with the biggest budget more profitable?
>> 9. Which was the most profitable production company?
>> 10. Which are the 3 most voted movies?
>> 11. What is the most popular genre?
>> 12. Which genre had the highest release per year?
>> 13. Which director has made the most movies?
>> 14. What is the relationship between runtime and popularity?
>> 15. What is the average runtime evolution per year?
>> 16. Does the length of films relate to revenue?
>> 17. Number of movies over the years increased?
```
# Import packages for this analysis
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
<a id='wrangling'></a>
## Data Wrangling
> Download the dataset and inspect the data to understand its structure and see whether any changes are needed or any unused data should be deleted. We will keep only relevant data. There are no specifications about the currency used in the dataset, so we will assume it is in US dollars.
### General Properties
```
# Load 'tmdb-movies.csv 'and read the CSV File using Pandas read_csv function
# View dataset using head() function
df = pd.read_csv('tmdb-movies.csv')
df.head(5)
# view dimensions of dataset
df.shape
# check which columns have missing values with info()
df.info()
# describe the dataset for more informations
df.describe()
# check for missing value count for each features in the dataset
df.isnull().sum()
# Check if there is any duplicates data in tmdb dataset
df.duplicated().sum()
```
### Data Cleaning
> - Replace the missing data with 0;
> - Drop duplicate data;
> - Delete the columns (homepage, tagline, overview, imdb_id) that are not needed for this analysis;
> - The genres and production companies columns are separated by '|'; split the data to facilitate the analysis.
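As a quick illustration of the split-and-explode step, here is a hypothetical one-row frame (toy data made up for the example, not taken from the dataset):

```
import pandas as pd

# Toy frame demonstrating the split + explode pattern used below
toy = pd.DataFrame({'title': ['Avatar'], 'genres': ['Action|Adventure|Fantasy']})
toy['genres'] = toy['genres'].str.split(pat='|')  # string -> list of genres
toy = toy.explode('genres')                       # one row per list element
print(toy['genres'].tolist())                     # ['Action', 'Adventure', 'Fantasy']
```

After `explode`, the title (and every other column) is repeated on each new row, which is what lets us later group by individual genres.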
```
# Replace all NaN elements with 0s
df.fillna(0, inplace=True)
# confirm changes
df.isnull().sum()
# drop duplicate rows
df.drop_duplicates(inplace=True)
# check number of duplicates to confirm the change made
df.duplicated().sum()
# drop columns that i wont use
df.drop(['homepage', 'tagline', 'overview', 'imdb_id'], axis=1, inplace=True)
# confirm changes
df.head(1)
# Use the function 'split' to separate the string and create a list with the genres
df.genres = df.genres.str.split(pat="|")
df.head(3)
# Use the function 'explode' to create new rows for the same movie, one per genre
df = df.explode('genres')
df.head()
# Use the function 'split' to separate the string and create a list with the production companies
df.production_companies = df.production_companies.str.split(pat="|")
df.head(3)
# Use the function 'explode' to create new rows for the same movie, one per production company
df = df.explode('production_companies')
df.head()
```
<a id='eda'></a>
## Exploratory Data Analysis
> Once the data cleaning is carried out, we will use formulas, statistics and graphical analysis to answer the questions.
### Understanding the data distribution in the dataset
```
# Plot a histogram for each numeric column in the dataset for better visualization and understanding of the data distribution
df.hist(figsize=(15, 10));
# use 'mean()' function to find the mean of runtime
df.runtime.mean()
```
The mean film duration is 105.08 minutes.
```
# plot a box chart of runtime to understand the distribution
df['runtime'].plot(kind='box', figsize=(10,8))
# Set labels and title
plt.title("Runtime distribution")
plt.xlabel('Runtime')
plt.ylabel('Duration (in minutes)');
```
There is a higher concentration of durations between 80 and 160 minutes, with a few above 400 minutes.
```
# Use the 'unique()' function to see all categorical values in the genres column
df.genres.unique()
# Use the 'nunique()' function to count how many different genres are in the dataset
df.genres.nunique()
```
There are 20 different genres in this dataset.
```
# Count each genre type to see the distribution of genres
df.genres.value_counts()
# Plot a bar chart to see the genre distribution
g = sns.countplot(data=df, x='genres')
sns.set(rc={'figure.figsize':(8,10)})
# Set title and labels
plt.xticks(rotation=90)
g.set_title('Genres Distribution')
g.set_xlabel("Genres",fontsize=15)
g.set_ylabel("Quantity",fontsize=15)
```
The genre with the largest number of movies is drama, followed by comedy and thriller.
```
# Use 'describe()' function to see the statistics of vote average
df.vote_average.describe()
```
The mean of vote average is 5.97. The lowest vote is 0.87 and the highest vote is 9.2.
```
# Plot a box chart to understand the vote distribution
df['vote_average'].plot(kind='box',figsize=(10,8))
plt.title("Average Vote distribution")
plt.xlabel('Votes')
plt.ylabel('Quantity');
# Plot a bar chart to have another visual look in the vote distribution
df['vote_average'].hist(figsize=(10,8))
plt.title("Average Vote distribution")
plt.xlabel('Votes')
plt.ylabel('Quantity');
```
There is a higher concentration of vote average between 5 and 7 in the vote range.
## Questions
### 1. Which movie had the highest revenue?
```
# use the function 'nlargest' to find the max value in the 'revenue' column
df.nlargest(1, 'revenue')
```
**Avatar** was the movie with the highest revenue.
The movie was directed by James Cameron and grossed just over 2.7 billion dollars.
### 2. Which movie had the lowest revenue?
```
# use the function 'nsmallest' to find the min value in the 'revenue' column
df.nsmallest(1, 'revenue')
```
**Wild Card**, directed by Simon West, was the movie with the lowest revenue.
### 3. What is the most popular movie?
```
# Use the same 'nlargest' function we previously used to find the highest value in the popularity column to filter the most popular movie
df.nlargest(1, 'popularity')
```
The most popular movie was **Jurassic World**. Directed by Colin Trevorrow, it received a popularity score of 32.985763 in the TMDb dataset.
### 4. What is the most profitable movie?
```
# To find profitability, subtract the 'budget' column from the 'revenue' column
prof = df.revenue - df.budget
prof
# use the function max() to find the highest value of profitability
prof.max()
# use the function 'idxmax()' to find the index of highest profit movie.
prof.idxmax()
# Locate the index of highest profit movie using 'df.loc[]'
df.loc[prof.idxmax()]
```
The most profitable movie is **Avatar**, with more than 2.5B dollars of profit. Avatar is an adventure, action, fantasy and science fiction film directed by James Cameron.
### 5. What is the correlation between revenue and popularity?
```
# Let`s have a quick look in the revenue distribution
df.revenue.plot(figsize=(10,8))
plt.title("Revenue distribution")
plt.xlabel('Id')
plt.ylabel('Revenue');
# To undertand a little more and have a visual look, plot popularity
df.popularity.plot(figsize=(10,8))
plt.title("Popularity distribution")
plt.xlabel('Id')
plt.ylabel('Popularity');
# To visually check if there is any correlation, let's plot revenue and popularity together using the scatter plot
df.plot(x='revenue', y='popularity', kind= 'scatter', figsize=(16,8), c='b')
plt.title("Popularity and revenue correlation");
plt.xlabel('Revenue')
plt.ylabel('Popularity');
# We can see that there is a correlation between the two. To be more precise, let's calculate the correlation between revenue and popularity.
df['revenue'].corr(df['popularity'])
```
As visually perceived from the scatter chart, we can confirm the strong correlation between revenue and popularity. The **correlation coefficient is 0.6597**.
### 6. Which genres had the highest revenue?
```
# Check the genres quantity using 'value_counts()' function
df.genres.value_counts()
# Group by the genres column and sum the revenue for each genre
df1 = df.groupby(['genres'])['revenue'].agg('sum')
df1
# sort the series created in descending order
df1.sort_values(ascending=False, inplace=True)
df1
# Let's visualize the results by plotting a bar chart
df1.plot(kind='bar',figsize=(16,8))
plt.title("Revenue per genres")
plt.xlabel('Genre')
plt.ylabel('Revenue in dollar');
```
The genre with the **highest revenue is Action**, followed by **Adventure**, **Drama** and **Thriller**.
### 7. What is the correlation between popularity, budget, revenue and vote?
```
# To calculate the correlation between popularity, budget, revenue and vote average use 'corr()' function
colist=list(['popularity','budget','revenue','vote_average'])
colist = df[colist].corr()
colist
# To visualise the correlation, plot a heatmap using the seaborn library
import seaborn as sns  # (if not already imported earlier in the notebook)
sns.heatmap(colist)
plt.title("Correlations")
```
We can note that there is a positive correlation between all of them, but **vote_average** has a very low correlation with the others, **less than 0.26**.
**Popularity**, **revenue** and **budget** have **correlations above 0.5**, indicating a strong relationship between them.
### 8. Was the movie with the biggest budget more profitable?
```
# Use the 'prof' series created before to locate the movie with the highest profit
df.loc[prof.idxmax()]
# Find the movie with the highest budget
df.nlargest(1, 'budget')
```
The movie with the biggest budget is The Warrior's Way, while the most profitable is Avatar.
We can conclude that the movie with the **highest budget** is **not necessarily the most profitable** one.
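The same comparison can be sketched in plain Python; the figures below are illustrative placeholders, not the actual TMDb values:

```python
# Made-up budget/revenue figures for three movies (indices 0, 1, 2)
budgets  = [425_000_000, 237_000_000, 200_000_000]
revenues = [11_087_569, 2_781_505_847, 1_500_000_000]

profits = [r - b for b, r in zip(budgets, revenues)]
most_profitable = max(range(len(profits)), key=profits.__getitem__)
biggest_budget = max(range(len(budgets)), key=budgets.__getitem__)
print(most_profitable, biggest_budget)  # different indices: the top budget lost money
```

This mirrors what `prof.idxmax()` and `df.nlargest(1, 'budget')` do on the real dataframe.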
### 9. Which was the most profitable production company?
```
# Use the 'value_counts()' function for a quick look at which companies made the most movies
df.production_companies.value_counts()
# Group by production company and sum the revenue for each one
dfsum = df.groupby(['production_companies'])['revenue'].agg('sum')
dfsum
# Sort the resulting series in descending order
dfsum.sort_values(ascending=False, inplace=True)
dfsum
# Select the first row, which contains the company with the highest total revenue
dfsum.head(1)
```
**Warner Bros** earned over **$169B** in revenue, the highest of any production company.
### 10. Which are the 3 most voted movies?
```
# Group by original title and sum the vote count for each movie
dfvo = df.groupby(['original_title'])['vote_count'].agg('sum')
dfvo
# Sort the resulting series in descending order
dfvo.sort_values(ascending=False, inplace=True)
dfvo
# Select the first 3 rows to get the 3 movies with the most votes
dfvo.head(3)
```
The 3 most-voted movies were **Inception** with 146505 votes, followed by **Avatar** with 135328 votes and **The Dark Knight** with 134912 votes.
### 11. What is the most popular genre?
```
# Group by genre and sum the popularity for each genre
dfpo = df.groupby(['genres'])['popularity'].agg('sum')
dfpo
# Sort the resulting series in descending order
dfpo.sort_values(ascending=False, inplace=True)
dfpo
# Select the first row to get the most popular genre
dfpo.head(1)
```
With 7896.548341 points, **Drama** is the most popular genre.
### 12. Which genre had the highest release per year?
```
# Group by genre and count the releases per year for each genre
dfre = df.groupby(['genres'])['release_year'].agg('value_counts')
dfre
# Sort the resulting series in descending order
dfre.sort_values(ascending=False, inplace=True)
dfre
# Select the first row to get the genre with the most releases in a single year
dfre.head(1)
```
The genre with the most releases in a single year was **Drama**, with **759 movies released**. The **year** with the highest number of film releases was **2014**.
### 13. Which director has made the most movies?
```
# Group by director and count the number of movies for each director
dfdi = df.groupby(['director'])['original_title'].agg('count')
dfdi
# Sort the resulting series in descending order
dfdi.sort_values(ascending=False, inplace=True)
dfdi
# Select the first row to get the director who made the most movies
dfdi.head(1)
```
**Ridley Scott** was the director who made the most movies, with a total of **250 movies**.
### 14. What is the relationship between runtime and popularity?
```
# Let's take a quick look at the runtime distribution
df.runtime.plot()
plt.title("Runtime distribution")
plt.xlabel('Id')
plt.ylabel('Runtime');
# Plot popularity to compare with runtime
df.popularity.plot()
plt.title("Popularity distribution")
plt.xlabel('Id')
plt.ylabel('Popularity');
# To better analyse the relationship between runtime and popularity, plot a scatter graph to check for correlation
df.plot(x='runtime', y='popularity', kind= 'scatter', figsize=(16,8))
plt.title("Popularity and runtime correlation")
plt.xlabel('Runtime')
plt.ylabel('Popularity');
# To calculate the correlation between popularity and runtime use 'corr()' function
rtlist=list(['popularity','runtime'])
rtlist = df[rtlist].corr()
rtlist
```
We can see in the correlation graph that there is little or no correlation, but let's take a closer look at the range where runtimes are most concentrated.
```
# Filter the runtime using the 'query' function and select the range with the highest concentration
rtfil = df.query('runtime > 70 & runtime < 200')
rtfil
# Plot the filtered dataframe correlation
rtfil.plot(x='runtime', y='popularity', kind= 'scatter', figsize=(16,8))
plt.title("Popularity and runtime correlation")
plt.xlabel('Runtime')
plt.ylabel('Popularity');
```
We can notice a **bigger concentration** of runtimes between **80 and 160 minutes**, but the **correlation** between runtime and popularity is **very small**: the computed correlation coefficient is **0.169**.
### 15. What is the average runtime evolution per year?
```
# Group the release_year column to the runtime and average the runtime for each year
dfev = df.groupby(['release_year'])[['runtime']].agg('mean')
dfev
# Sort the series in ascending order
dfev.sort_values('release_year', ascending=True, inplace=True)
dfev
# Plot the average runtime evolution per year
dfev.plot(kind='line', figsize=(16,8))
plt.title("Runtime evolution per year")
plt.xlabel('Release per year')
plt.ylabel('Runtime');
```
With the visualization of the line graph, we can see that the average runtime of films has **decreased** over the years.
### 16. Does the length of films relate to revenue?
```
# To calculate the correlation between runtime and revenue use 'corr()' function
corre=list(['runtime','revenue'])
df[corre].corr()
# plot a scatter graph to visualize the correlation between runtime and revenue
df.plot(x='runtime', y='revenue', kind= 'scatter', figsize=(16,8), alpha=0.8, c='b')
plt.title("Runtime and revenue correlation")
plt.xlabel('Runtime')
plt.ylabel('Revenue');
```
The **correlation coefficient** between runtime and revenue is **0.2089**, which means they have a low correlation. This result is consistent with the scatter graph.
### 17. Number of movies over the years increased?
```
# Group by release year and count the titles for each year
dfmo = df.groupby(['release_year'])['original_title'].agg('count')
dfmo
# For a better look at the results, plot a line graph
dfmo.plot(kind='line', figsize=(16,8))
plt.title("Release per year evolution")
plt.xlabel('Year')
plt.ylabel('Release per year');
```
The data show that **116** films were released in **1960**, while **3357** were released in **2015**. We can see the **significant increase** in new films over the years in the chart above.
<a id='conclusions'></a>
## Conclusions
> Having performed the data analysis, we can draw some conclusions:
>> - The number of releases per year has increased substantially over the years;
>> - Runtime has a low correlation with revenue and has decreased over the years;
>> - Drama was the most popular genre and had the most releases per year;
>> - Action was the genre with the highest revenue;
>> - Revenue, budget and popularity have a high correlation between them;
>> - Warner Bros is the company that earned the most up to 2015.
### Limitations
> - The dataset contains a considerable number of NaN values that were replaced by zero, which may affect the accuracy of the analyses.
> - The currency used for budget and revenue is not specified; it was assumed to be US dollars.
> - This analysis may run slightly slower because the split function was used to separate genres and production companies for some specific analyses.
> - The dataset has many rows with revenue equal to zero; since we do not know whether these values are genuine or simply missing, this analysis may be incomplete.
> - Remember that correlation does not imply causation.
# Mask R-CNN - Inspect Datasets
Inspect and visualize data loading and pre-processing code.
```
import os
import sys
#import itertools
#import math
#import logging
#import json
#import re
import random
#from collections import OrderedDict
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
from samples.tabletop import tabletop
from samples.tabletop import datasets
from samples.tabletop import configurations
%matplotlib inline
```
## Configuration
```
config = configurations.TabletopConfigTraining()
DATASET_ROOT_DIR = os.path.join(ROOT_DIR, "datasets/RAL_experiments")
```
## Dataset
```
# Load dataset
# Get the dataset from the releases page
dataset = datasets.TabletopDataset()
dataset.load_dataset(DATASET_ROOT_DIR, "val")
# Actually load image paths
dataset.prepare()
#print("Image Count: {}".format(len(dataset.image_ids)))
#print("Class Count: {}".format(dataset.num_classes))
#for i, info in enumerate(dataset.class_info):
# print("{:3}. {:50}".format(i, info['name']))
```
## Display Samples
Load and display images and masks.
```
# Load and display random samples
image_ids = np.random.choice(dataset.image_ids, 10)
for image_id in image_ids:
image = dataset.load_image(image_id)
try:
mask, class_ids = dataset.load_mask(image_id)
except AssertionError:
print("No mask available for image {}".format(dataset.image_info[image_id]))
continue
visualize.display_top_masks(image, mask, class_ids, dataset.class_names)
```
## Bounding Boxes
Rather than using bounding box coordinates provided by the source datasets, we compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updated masks rather than computing a bounding box transformation for each type of image transformation.
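To make the idea concrete, here is a minimal single-mask sketch of the computation (a simplified stand-in for `utils.extract_bboxes`, assuming the usual `(y1, x1, y2, x2)` convention with exclusive lower-right coordinates):

```python
import numpy as np

def bbox_from_mask(mask):
    """Return (y1, x1, y2, x2) for a 2D boolean mask; (0, 0, 0, 0) if empty."""
    rows = np.any(mask, axis=1)  # which rows contain any mask pixels
    cols = np.any(mask, axis=0)  # which columns contain any mask pixels
    if not rows.any():
        return (0, 0, 0, 0)
    y1, y2 = np.where(rows)[0][[0, -1]]
    x1, x2 = np.where(cols)[0][[0, -1]]
    return (int(y1), int(x1), int(y2) + 1, int(x2) + 1)  # y2/x2 exclusive

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 2:5] = True
print(bbox_from_mask(mask))  # (1, 2, 3, 5)
```

If the mask is rotated or cropped, re-running this function on the transformed mask yields the new box directly.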
```
# Load random image and mask.
image_id = random.choice(dataset.image_ids)
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id ", image_id, dataset.image_reference(image_id))
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
```
# Results of running sorts on 1000 numbers
### Visualizing the algorithms
["VisuAlgo" link](https://visualgo.net/en)
### Generating random numbers
```
import numpy as np
import matplotlib.pyplot as plt
import time
import random
np.random.seed(777)
data = np.random.randint(1,1e+8,size=1000)
print(data[:5], "len :", len(data))
```
### Sort algorithm (bubble)
```
def bubble_sort(data):
start_time = time.time()
for _ in range(0, len(data)):
        # If a full pass makes no swaps, the data is already sorted: stop early
        swapped = False
        for i in range(0, len(data)-1):
            if data[i] > data[i+1]:
                data[i], data[i+1] = data[i+1], data[i]
                swapped = True
        if not swapped:
end_time = time.time()
sorting_time = end_time - start_time
return data, sorting_time
```
### Visualizing the results
```
%matplotlib inline
sorted_data, time_series = [], []
for _ in range(10):
sorted_data, sorting_time = bubble_sort(data)
time_series.append(sorting_time)
time_mean = np.mean(time_series)
time_mean_x = np.linspace(0,10,num=10)
time_mean_bubble = np.full(shape=time_mean_x.shape, fill_value=time_mean)
plt.style.use("seaborn")
plt.plot(time_series)
plt.plot(time_mean_bubble,'r--',label='bubble')
plt.title("time spent trend", fontsize=15)
plt.xlabel("# of trials", fontsize=15)
plt.ylabel("time spent (bubble sort)", fontsize=15)
plt.legend(fontsize=15)
print("sorted data :", sorted_data[:10], "/ average time spent :", time_mean)
import time
import random
np.random.seed(777)
data = np.random.randint(1,1e+8,size=1000)
print(data[:5], "len :", len(data))
```
### Sort algorithm (insertion)
```
def insert_sort(data):
start_time = time.time()
for _ in range(0, len(data)):
        swapped = False
        # If not yet sorted, keep iterating
        for i in range(1, len(data)):
            if data[i] < data[i-1]:
                data[i], data[i-1] = data[i-1], data[i]
                swapped = True
        # If a pass made no swaps, the data is sorted: return
        if not swapped:
end_time = time.time()
sorting_time = end_time - start_time
return data, sorting_time
```
### Visualizing the results
```
%matplotlib inline
sorted_data, time_series = [], []
for _ in range(10):
sorted_data, sorting_time = insert_sort(data)
time_series.append(sorting_time)
time_mean = np.mean(time_series)
time_mean_x = np.linspace(0,10,num=10)
time_mean_insert = np.full(shape=time_mean_x.shape, fill_value=time_mean)
plt.style.use("seaborn")
plt.plot(time_series)
plt.plot(time_mean_bubble,'r--', label='bubble')
plt.plot(time_mean_insert,'b--', label='insert')
plt.title("time spent trend", fontsize=15)
plt.xlabel("# of trials", fontsize=15)
plt.ylabel("time spent (insert sort)", fontsize=15)
plt.legend(fontsize=15)
print("sorted data :", sorted_data[:10], "/ average time spent :", time_mean)
import time
import random
np.random.seed(777)
data = np.random.randint(1,1e+8,size=1000)
print(data[:5], "len :", len(data))
```
### Sort algorithm (selection)
```
def select_sort(data):
start_time = time.time()
for stand in range(0, len(data)-1):
min_idx = stand
for idx in range(stand+1, len(data)):
if data[min_idx] > data[idx]:
min_idx = idx
data[min_idx], data[stand] = data[stand], data[min_idx]
end_time = time.time()
sorting_time = end_time - start_time
return data, sorting_time
```
### Visualizing the results
```
%matplotlib inline
sorted_data, time_series = [], []
for _ in range(10):
sorted_data, sorting_time = select_sort(data)
time_series.append(sorting_time)
time_mean = np.mean(time_series)
time_mean_x = np.linspace(0,10,num=10)
time_mean_select = np.full(shape=time_mean_x.shape, fill_value=time_mean)
plt.style.use("seaborn")
plt.plot(time_series)
plt.plot(time_mean_bubble,'r--', label='bubble')
plt.plot(time_mean_insert,'b--', label='insert')
plt.plot(time_mean_select,'g--', label='select')
plt.title("time spent trend", fontsize=15)
plt.xlabel("# of trials", fontsize=15)
plt.ylabel("time spent (select sort)", fontsize=15)
plt.legend(fontsize=15)
print("sorted data :", sorted_data[:10], "/ average time spent :", time_mean)
import time
import random
np.random.seed(777)
data = np.random.randint(1,1e+8,size=1000)
print(data[:5], "len :", len(data))
```
### Sort algorithm (merge)
```
def merge(left, right):
merged = []
left_idx, right_idx = 0, 0
while left_idx < len(left) and right_idx < len(right):
if left[left_idx] < right[right_idx]:
merged.append(left[left_idx])
left_idx += 1
else:
merged.append(right[right_idx])
right_idx += 1
while left_idx < len(left):
merged.append(left[left_idx])
left_idx += 1
while right_idx < len(right):
merged.append(right[right_idx])
right_idx += 1
return merged
def merge_split(data):
if len(data) == 1:
return data
else:
median = int(len(data)/2)
left = merge_split(data[:median])
right = merge_split(data[median:])
return merge(left, right)
def merge_sort(data):
start_time = time.time()
result = merge_split(data)
end_time = time.time()
sorting_time = end_time - start_time
return result, sorting_time
```
### Visualizing the results
```
%matplotlib inline
sorted_data, time_series = [], []
for _ in range(10):
sorted_data, sorting_time = merge_sort(data)
time_series.append(sorting_time)
time_mean = np.mean(time_series)
time_mean_x = np.linspace(0,10,num=10)
time_mean_merge = np.full(shape=time_mean_x.shape, fill_value=time_mean)
plt.style.use("seaborn")
plt.plot(time_series)
plt.plot(time_mean_bubble,'r--', label='bubble')
plt.plot(time_mean_insert,'b--', label='insert')
plt.plot(time_mean_select,'g--', label='select')
plt.plot(time_mean_merge,'y--', label='merge')
plt.title("time spent trend", fontsize=15)
plt.xlabel("# of trials", fontsize=15)
plt.ylabel("time spent (merge sort)", fontsize=15)
plt.legend(fontsize=15)
print("sorted data :", sorted_data[:10], "/ average time spent :", time_mean)
import time
import random
np.random.seed(777)
data = np.random.randint(1,1e+8,size=1000)
print(data[:5], "len :", len(data))
```
### Sort algorithm (quick)
```
def q_sort(data):
if len(data) <= 1:
return data
left, right = [], []
pivot = data[0]
for idx in range(1, len(data)):
if pivot > data[idx]:
left.append(data[idx])
else:
right.append(data[idx])
return q_sort(left) + [pivot] + q_sort(right)
def quick_sort(data):
start_time = time.time()
result = q_sort(data)
end_time = time.time()
sorting_time = end_time - start_time
return result, sorting_time
```
### Visualizing the results
```
%matplotlib inline
sorted_data, time_series = [], []
for _ in range(10):
sorted_data, sorting_time = quick_sort(data)
time_series.append(sorting_time)
time_mean = np.mean(time_series)
time_mean_x = np.linspace(0,10,num=10)
time_mean_quick = np.full(shape=time_mean_x.shape, fill_value=time_mean)
plt.style.use("seaborn")
plt.plot(time_series)
plt.plot(time_mean_bubble,'r--', label='bubble')
plt.plot(time_mean_insert,'b--', label='insert')
plt.plot(time_mean_select,'g--', label='select')
plt.plot(time_mean_merge,'y--', label='merge')
plt.plot(time_mean_quick,'k--', label='quick')
plt.title("time spent trend", fontsize=15)
plt.xlabel("# of trials", fontsize=15)
plt.ylabel("time spent (quick sort)", fontsize=15)
plt.legend(fontsize=15)
print("sorted data :", sorted_data[:10], "/ average time spent :", time_mean)
```
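As a baseline, Python's built-in `sorted` (Timsort, O(n log n)) handles the same kind of input far faster than the quadratic sorts above; this cell is self-contained and uses the standard `random` module instead of NumPy:

```python
import random
import time

# Generate 1000 random integers in the same range as the experiments above
random.seed(777)
data = [random.randint(1, 10**8) for _ in range(1000)]

start_time = time.time()
result = sorted(data)          # Timsort: hybrid merge/insertion sort
builtin_time = time.time() - start_time

print(result[:5], "/ time spent :", builtin_time)
```

Comparing this timing against the averages plotted above makes the asymptotic gap between O(n^2) and O(n log n) concrete.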
# Lecture 10 (https://bit.ly/intro_python_10)
Today:
* Finishing up sequences:
* Iterators vs. lists
* Generators and the yield keyword
* Modules:
* Some useful modules
* Hierarchical namespaces
* Making your own modules
* The main() function
* PEP 8 (v. briefly)
* A revisit to debugging, now that we're writing longer programs:
* Looking at different error types (syntax, runtime, logical)
# Generators vs. lists
```
# Recall that range can be used to iterate through a sequence of numbers:
for i in range(10):
print("i is", i)
# We can convert range to a list
x = list(range(10)) # Makes a list [ 0, 1, ... 9 ]
print(x)
```
**But isn't range a list to start with?**
```
# No!
x = range(10)
print(x)
# So what is the type of range:
x = range(10) # So what is a range?
print(type(x))
```
A range, or (as we'll see in a minute) generator function, is a promise to produce a sequence when asked.
Essentially, you can think of it like a function you can call repeatedly to get
successive values from an underlying sequence, e.g. 1, 2, ... etc.
Why not just make a list? In a word: memory.
```
x = list(range(100)) # This requires allocating memory to store 100 integers
print(x)
x = range(100) # This does not make the list, so the memory for the list is never allocated.
print(x)
# This requires only the memory for j, i and the Python system
# Compute the sum of integers from 1 (inclusive) to 100 (exclusive)
j = 0
for i in range(100):
j += i
print(j)
# Alternatively, this requires memory for j, i and the list of 100 integers:
j = 0
for i in list(range(100)):
j += i
print(j)
```
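We can make the memory difference concrete with `sys.getsizeof` (exact byte counts vary by Python version; note it reports the list object's own size, not the integers it references):

```python
import sys

# A range stores only start/stop/step, while a list stores every element
lazy = range(1_000_000)
eager = list(range(1_000_000))

print(sys.getsizeof(lazy))   # a few dozen bytes, regardless of length
print(sys.getsizeof(eager))  # millions of bytes for the list object alone
```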
* Range, as an iterator, is the promise to produce a sequence of integers, but this does not require they all exist in memory at the same time.
* With a list, however, by definition, all the elements are present in memory.
* As a general guide, if we can be "lazy", and avoid ever building a complete sequence in memory, then we should be lazy about evaluation of sequences.
* So how do you code a function like range? This is where the "yield" keyword comes in, which allows you to create generator functions.
# Yield keyword
With *return* you exit a function completely, returning a value. The internal state of the function is lost.
Yield is like return, in that you return a value from the function and temporarily the function exits, however the state of the function is not lost, and the function can be resumed to return more values.
This allows a function to act like an iterator over a sequence, where the function incrementally yields values, one for each successive resumption of the function.
It's easiest to understand this by example:
```
def make_numbers(m):
i = 0
while i < m:
yield i
i += 1
for i in make_numbers(10):
print("i is now", i)
# What is the type?
x = make_numbers(5)
print(type(x))
```
Why use yield to write generator functions?:
* Shorter, cleaner code - here we saved all the messing around with lists
* More efficient in memory - we never have to construct the complete list in memory, rather we keep track of a limited amount of state in memory that represents where we are in the sequence of permutations.
# Challenge 1
```
# Write a generator function to enumerate numbers in the Collatz 3n + 1 sequence
# Recall:
# The sequence is produced iteratively such that the next term is determined
# by the current value, n. If n is even then the next term in the sequence is n/2 otherwise
# it is n*3 + 1. The sequence terminates when n equals 1 (i.e. 1 is always the last integer returned).
def collatz(n):
pass #replace with your code
for i in collatz(11):
print(i)
```
# Modules
* A language like Python has vast libraries of useful functions, classes, etc. See https://pypi.org/:
* As of Dec 2020 there are over 270K different Python "packages" in PyPi.
* To make it possible to use these, and importantly, ensure the namespace of our code does not explode in size, Python has a hierarchical system for managing these libraries using "modules" and "packages".
```
# From a user perspective, modules are variables, functions, objects etc. defined separately
# to the code we're working on.
import math # This line "imports" the math module, so that we can refer to it
math.log10(100) # Now we're calling a function from the math module to compute log_10(100)
# The math module contains lots of math functions and constants
dir(math) # As I've shown you before, use dir to list the contents of an object or module
# Use help() to give you info (Note: this is great to use in the interactive interpretor)
help(math.sqrt) # e.g. get info on the math.sqrt function - this is pulling the doc string of the function
```
* In general, the Python standard library provides loads of useful modules for all sorts of things: https://docs.python.org/3/py-modindex.html
* Standard library packages are installed as part of a default Python installation - they are part of every Python of that version (e.g. 3.XX)
```
# For example, the random module from the standard library provides loads of functions
# for making random numbers/choices - this is useful for games, algorithms
# and machine learning where stochastic behaviour is needed
import random
def make_random_ints(num, lower_bound, upper_bound):
"""
Generate a list containing num random ints between lower_bound
and upper_bound. upper_bound is an open bound.
"""
rng = random.Random() # Create a random number generator
# Makes a sequence of random numbers using rng.randrange()
return [ rng.randrange(lower_bound, upper_bound) for i in range(num) ]
make_random_ints(10, 0, 6) # Make a sequence of 10 random numbers, each in the
                           # half-open interval [0, 6)
```
* There is a much larger universe of open source Python packages you can install from: https://pypi.org/
# Challenge 2
```
# Use the median function from the statistics module to calculate the median of the following list:
l = [ 1, 8, 3, 4, 2, 8, 7, 2, 6 ]
```
# Namespaces and dot notation
* To recap, the namespace is all the identifiers (variables, functions, classes (to be covered soon), and modules) available to a line of code (See previous notes on scope and name space rules).
* In Python (like most programming languages), namespaces are organized hierarchically into subpieces using modules and functions and classes.
* If all identifiers were in one namespace without any hierarchy then we would get lots of collisions between names, and this would result in ambiguity. (see Module1.py and Module2.py example in textbook: http://openbookproject.net/thinkcs/python/english3e/modules.html)
* The upshot is if you want to use a function from another module you need to import it into the "namespace" of your code and use '.' notation:
```
import math # Imports the math module into the current namespace
# The '.' syntax is a way of indicating membership
math.sqrt(2) # sqrt is a function that "belongs" to the math module
# (Later we'll see this notation reused with objects)
```
# Import statements
* As you've seen, to import a module just write "import x", where x is the module name.
**Import from**
* You can also import a specific function, class or object from a module into your program's namespace using the import from syntax:
```
from math import sqrt
sqrt(2.0) # Now sqrt is a just a function in the current program's name space,
# no dot notation required
```
If you want to import all the functions from a module you can use:
```
from math import * # Import all functions from math
# But, this is generally a BAD IDEA, because you need to be sure
# this doesn't bring in things that will collide with other things
# used by the program
log(10)
#etc.
```
More useful is the "as" modifier
```
from math import sqrt as square_root # This imports the sqrt function from math
# but names it square_root. This is useful if you want to abbreviate a long function
# name, or if you want to import two separate things with the same name
square_root(2.0)
```
# Challenge 3
```
# Write a statement to import the 'beep' function from the 'curses' module
```
# Writing your own modules
You can write your own modules.
* Create a file whose name is x.py, where x is the name of the module you want to create.
* Edit x.py to contain the stuff you want
* Create a new python file, call it y.py, in the same directory as x.py and include "import x" at the top of y.py.
(NOTE: do demo)
**Packages**
Packages are collections of modules, organized hierarchically (and accessed using the dot notation).
Beyond the scope here, but you can look more at environment setup to create your own "packages". If you're curious see: https://docs.python.org/3/tutorial/modules.html#packages
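To get a feel for how the dot notation mirrors the directory layout, here is a self-contained sketch that builds a throwaway package at runtime (`mypkg` and `helper` are invented names for illustration):

```python
import os
import sys
import tempfile

# Build a tiny package on disk:
#   mypkg/
#       __init__.py   (marks the directory as a package)
#       utils.py      (defines helper())
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "mypkg")
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg_dir, "utils.py"), "w") as f:
    f.write("def helper():\n    return 'hello from mypkg.utils'\n")

sys.path.insert(0, tmp)         # make the package findable
from mypkg.utils import helper  # dot notation mirrors the directory layout
print(helper())
```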
# The main() function
* You may write a program and then want to reuse some of the functions by importing them into another program. In this case you are treating the original program as a module.
* The problem is that when you import a module it is executed.
* Question: How do you stop the original program from running when you import it as a module?
* Answer: By putting the logic for the program in a "main()", which is only called if the program is being run by user, not imported as a module.
```
def some_useful_function():
"""Defines a function that would be useful to
other programs outside of main"""
pass
def main():
x = input()
print("python main function, x is:", x)
# Put the program logic in this function
if __name__ == '__main__': # This will only be true
# when the program is executed by a user
main()
print(__name__) # The name of the current module
type(__name__)
```
**Live demo!**
# PEP8: Use Style
It is easy to rush and write poorly structured, hard-to-read code. However, generally, this proves a false economy, resulting in longer debug cycles, a larger maintenance burden (like, what was I thinking?) and less code reuse.
Although many sins have nothing to do with the cosmetics of the code, some can be fixed by adopting a consistent, sane set of coding conventions. Python did this with Python Enhancement Proposal (PEP) 8:
https://www.python.org/dev/peps/pep-0008/
Some things PEP-8 covers:
* use 4 spaces (instead of tabs) for indentation - you can make your text editor do this (insert spaces for tabs)
* limit line length to 78 characters
* when naming identifiers, use CamelCase for classes (we’ll get to those) and lowercase_with_underscores for functions and variables
* place imports at the top of the file
* keep function definitions together
* use docstrings to document functions
* use two blank lines to separate function definitions from each other
* keep top level statements, including function calls, together at the bottom of the program
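A small file following these conventions might look like this (illustrative only):

```python
import math


def circle_area(radius):
    """Return the area of a circle with the given radius."""
    return math.pi * radius ** 2


def main():
    print(circle_area(2.0))


if __name__ == '__main__':
    main()
```

Note the import at the top, the docstring, the 4-space indentation, the two blank lines between definitions, and the top-level call at the bottom.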
# Debugging Revisited
We mentioned earlier that a lot of programming is debugging. Now we're going to debug programs and understand the different errors you can get.
There are three principle types of error:
- syntax errors
- runtime errors
- semantic/logical errors
# Syntax Errors
* when what you've written is not valid Python
```
# Syntax errors - when what you've written is not valid Python
for i in range(10)
print(i) # What's wrong with this?
# Syntax errors - when what you've written is not valid Python
for i in range(10):
print(i) # What's wrong with this?
# Syntax errors - when what you've written is not valid Python
for i in range(10):
""" This loop will print stuff ""
print(i)
# Syntax errors - when what you've written is not valid Python
# (note, this kind of print statement was legal in Python 2.XX and earlier)
print "Forgetting parentheses"
```
# Runtime Errors
* when the program crashes during runtime because it
tries to do something invalid
```
# Runtime errors - when the program errors out during runtime because it
# tries to do something invalid
print("This is an integer: " + 10)
# Runtime errors - when the program errors out during runtime because it
# tries to do something invalid
assert 1 + 1 == 3
```
# Semantic Errors (aka Logical Errors)
* when the program runs and exits without error, but produces an unexpected result
```
# Semantic errors - when the program runs and exits without error,
# but produces an unexpected result
j = int(input("Input a number: "))
x = 1
for i in range(1, j): # should be range(1, j+1):
x = x * i
print(str(j) + " factorial is " + str(x))
```
In my experience syntax errors are easy to fix, runtime errors are generally solvable fast, but semantic errors can take the longest time to fix
**Debug strategies**
To debug a failing program, you can:
* Use print statements dotted around the code to figure out what code is doing at specific points of time (remember to remove / comment these out when you're done!)
* Use a debugger - this allows you to step through execution, line-by-line, seeing what the program is up to at each step. (PyCharm has a nice interface to the Python debugger)
* Write unit-tests for individual parts of the code
* Use assert to check that expected properties are true during runtime
* Stare hard at it! Semantic errors will generally require you to question your program's logic.
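For instance, a couple of asserts can double as lightweight unit tests (a made-up example, not part of the homework):

```python
def mean(values):
    """Return the arithmetic mean of a non-empty sequence."""
    # An assert documents (and checks) an assumption at runtime
    assert len(values) > 0, "mean() requires at least one value"
    return sum(values) / len(values)

# Tiny unit-test style checks
assert mean([1, 2, 3]) == 2.0
assert mean([10]) == 10.0
print("all checks passed")
```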
# Challenge 4
See if you can get this to work:
```
import time
# Try debugging the following - a number guessing program
# It has all three types of errors
print("Think of a number from 1 to 100")
time.sleep(3)
min = 1
max = 100
while max == min
i = (min + max) // 2
answer = input("Is your number greater than " + str(i) + " Type YES or NO: ")
assert answer == "YES" or answer == "YES" # Check the value is what we expect
if answer == "YES":
min = i+1
else:
max = i
print("Your number is: " + str(min))
```
# Reading
Open book chapter 12: http://openbookproject.net/thinkcs/python/english3e/modules.html
# Homework
ZyBook Reading 10
| github_jupyter |
# Lab 9: Spatial-Temporal patterns of Natural Disturbance
Building on the ForestFire model from the previous lab, compute the fire interval distribution (times between successive fires) and the patch size distribution (sizes of forested patches in the steady-state landscape pattern).
Plot these data on log-log axes using the techniques from Ch. 10 to see if the distributions appear to be "heavy tailed", and thus whether this system exhibits properties of self-organized criticality.
```
!pip install empiricaldist
import numpy as np
from matplotlib import pyplot as plt
from empiricaldist import Pmf
```
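As a minimal sketch of the interval computation and the log-log check described above — the `fire_times` array here is stand-in data, whereas in the lab it would come from the ForestFire simulation — a heavy-tailed distribution appears roughly linear on log-log axes:

```python
import numpy as np

# Stand-in fire times; in the lab these come from the ForestFire simulation.
rng = np.random.default_rng(17)
fire_times = np.cumsum(rng.pareto(1.5, size=500).astype(int) + 1)

intervals = np.diff(fire_times)  # times between successive fires
sizes, counts = np.unique(intervals, return_counts=True)
# plt.loglog(sizes, counts, '.')  # heavy tails look roughly linear on log-log axes
```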
### Create Continuous Patches
This is a surprisingly challenging problem to solve in the general case given how good our visual system is at identifying them!
The idea I had here was to start by giving each occupied cell a unique value, then "grow" patches from occupied cells by allowing the smallest of these unique values to propagate to neighbouring cells. Repeat until the propagation is finished.
```
neighbourhood = np.array([
[0, 1, 0],
[1, 1, 1],
[0, 1, 0],
])
def min_neighbour(a):
""" Return the smallest non-zero neighbourhood value or 0 if centre cell is a zero """
p = a*neighbourhood
centre = tuple(d//2 for d in a.shape)
return np.min(p[p>0]) if a[centre] else 0
def consolidate(array):
""" return copy of array with adjacent cells consolidated into a patch with the lowest value among occupied neighbours """
rows, cols = array.shape
k = neighbourhood.shape[0]
array = np.pad(array, 1, 'constant')
return np.array([
[min_neighbour(array[row:row+k, col:col+k]) for col in range(cols) ]
for row in range(rows)
])
def patchify(array, category):
""" Return an array with each contiguous patch identified by a unique integer
array: array of int categorical values
category: the int category value to identify patches
return: array of same shape with a unique value identifying cells in each patch and zeros elsewhere
"""
patches = np.zeros(array.shape, dtype=np.uint)
patches[array==category] = range(1, len(array[array==category])+1)
patches_growing = np.array([True,])
while np.any(patches_growing):
prev_patches = patches
patches = consolidate(prev_patches)
        patches_growing = patches != prev_patches  # patches keep growing until the consolidate step stabilizes
return patches
```
## Code fragments...
```
# Create an array of patches from occupied cells of a forest array
patches = patchify(forest.array, OCCUPIED)
draw_array(patches, cmap='Greens', vmin=0, vmax=np.max(patches))
def plot_patch_sizes(patch_sizes, min_size=1, scale='linear', plot_type='bar'):
""" plot the distribution of patch sizes for the array of patch sizes """
plot_options = dict(xlabel='patch size', ylabel='N patches', xscale=scale, yscale=scale)
fig, ax = plt.subplots(figsize=(6, 6), subplot_kw=plot_options)
ax.set_title("Patch Size Distribution")
# get unique patch size classes with count of patches in each size class
size_classes, counts = np.unique(patch_sizes[patch_sizes>=min_size], return_counts=True)
if plot_type == 'bar' and scale == 'linear':
ax.bar(size_classes, counts)
else:
ax.plot(size_classes, counts)
n_patches = len(patch_sizes)
print('Number of patches:', n_patches, 'Unique patch size classes:', len(size_classes))
single_cell_patches = np.sum(patch_sizes[patch_sizes==1])
print('Number of single cell patches:', single_cell_patches, '({pct}%)'.format(pct=round(100*single_cell_patches/n_patches)))
print('Largest patch size:', np.max(patch_sizes))
# get list of unique patches with count of cells in each patch
patch_ids, patch_sizes = np.unique(patches[patches>0], return_counts=True)
# plot the patch size distribution as a bar chart
plot_patch_sizes(patch_sizes, )
```
| github_jupyter |
```
import numpy as np
from tifffile import imread, imsave
from glob import glob
import random
import tqdm
from matplotlib import pyplot as plt
from sklearn.feature_extraction import image
X = sorted(glob("/Users/prakash/Desktop/flywing/images/*.tif"))
Y = sorted(glob("/Users/prakash/Desktop/flywing/gt/*.tif"))
X = list(map(imread,X))
Y = list(map(imread,Y))
plt.subplot(121); plt.imshow(X[9],cmap='gray'); plt.axis('off'); plt.title('Raw image'); plt.show()
rng = np.random.RandomState(42)
ind = rng.permutation(len(X))
n_test = int(round(0.2*len(X)))
ind_pretrn, ind_test = ind[:-n_test], ind[-n_test:]
X_test, Y_test = [X[i] for i in ind_test] , [Y[i] for i in ind_test]
X_pretrn, Y_pretrn = [X[i] for i in ind_pretrn] , [Y[i] for i in ind_pretrn]
print('number of images: %3d' % len(X))
print('- training+validation: %3d' % len(X_pretrn))
print('- test: %3d' % len(X_test))
for i in range(len(X_test)):
imsave('/Users/prakash/Desktop/flywing/test/images/'+str(i)+'.tif', X_test[i])
imsave('/Users/prakash/Desktop/flywing/test/gt/'+str(i)+'.tif', Y_test[i])
count = 0
for i in range(len(X_pretrn)):
patchesimages = image.extract_patches_2d(X_pretrn[i], patch_size=(128,128), max_patches=10, random_state=0)
patchesmasks = image.extract_patches_2d(Y_pretrn[i], patch_size=(128,128), max_patches=10, random_state=0)
for j in range(0, np.shape(patchesimages)[0]):
imsave('/Users/prakash/Desktop/flywing/patches/images/'+str(count).zfill(4)+'.tif', patchesimages[j])
imsave('/Users/prakash/Desktop/flywing/patches/gt/'+str(count).zfill(4)+'.tif', patchesmasks[j])
count+=1
X_pretrn= sorted(glob('/Users/prakash/Desktop/flywing/patches/images/*.tif'))
Y_pretrn= sorted(glob('/Users/prakash/Desktop/flywing/patches/gt/*.tif'))
X_test = sorted(glob('/Users/prakash/Desktop/flywing/test/images/*.tif'))
Y_test = sorted(glob('/Users/prakash/Desktop/flywing/test/gt/*.tif'))
X_pretrn = list(map(imread,X_pretrn))
Y_pretrn = list(map(imread,Y_pretrn))
X_test = list(map(imread,X_test))
Y_test = list(map(imread,Y_test))
print('- training+validation: %3d' % len(X_pretrn))
print('- test: %3d' % len(X_test))
rng = np.random.RandomState(42)
ind = rng.permutation(len(X_pretrn))
n_val = int(round(0.15 * len(X_pretrn)))
ind_train, ind_val = ind[:-n_val], ind[-n_val:]
X_val, Y_val = [X_pretrn[i] for i in ind_val] , [Y_pretrn[i] for i in ind_val]
X_train, Y_train = [X_pretrn[i] for i in ind_train] , [Y_pretrn[i] for i in ind_train]
print('- training: %3d' % len(X_train))
print('- validation: %3d' % len(X_val))
X_train = np.array(X_train)
X_test = np.array(X_test)
X_val = np.array(X_val)
Y_train = np.array(Y_train)
Y_test = np.array(Y_test)
Y_val = np.array(Y_val)
i = 9
img, lbl = X_train[i], Y_train[i]
plt.figure(figsize=(16,10))
plt.subplot(121); plt.imshow(img,cmap='gray'); plt.axis('off'); plt.title('Raw image (train)')
plt.subplot(122); plt.imshow(lbl,cmap='gray'); plt.axis('off'); plt.title('Mask image (train)')
plt.show()
print(np.min(img),np.max(img))
print(np.min(lbl),np.max(lbl))
def noisy(image, sigma):
row,col= image.shape
mean = 0
img=np.array(image).astype(np.float32)
gauss = np.random.normal(mean,sigma,(row,col))
gauss = gauss.reshape(row,col)
noisy = img + gauss
return noisy
std=10.0
X_train10 = np.array([noisy(x,std) for x in X_train])
X_test10 = np.array([noisy(x,std) for x in X_test])
X_val10 = np.array([noisy(x,std) for x in X_val])
std=20.0
X_train20 = np.array([noisy(x,std) for x in X_train])
X_test20 = np.array([noisy(x,std) for x in X_test])
X_val20 = np.array([noisy(x,std) for x in X_val])
np.savez_compressed('/Users/prakash/Desktop/flywing/train/train_data_n0.npz', X_train=X_train, Y_train=Y_train, X_val=X_val, Y_val=Y_val)
np.savez_compressed('/Users/prakash/Desktop/flywing/test/test_data_n0.npz', X_test=X_test, Y_test=Y_test)
np.savez_compressed('/Users/prakash/Desktop/flywing/train/train_data_n10.npz', X_train=X_train10, Y_train=Y_train, X_val=X_val10, Y_val=Y_val)
np.savez_compressed('/Users/prakash/Desktop/flywing/test/test_data_n10.npz', X_test=X_test10, Y_test=Y_test)
np.savez_compressed('/Users/prakash/Desktop/flywing/train/train_data_n20.npz', X_train=X_train20, Y_train=Y_train, X_val = X_val20, Y_val = Y_val)
np.savez_compressed('/Users/prakash/Desktop/flywing/test/test_data_n20.npz', X_test=X_test20, Y_test=Y_test)
```
| github_jupyter |
```
%%javascript
$('#appmode-leave').hide();
$('#copy-binder-link').hide();
$('#visit-repo-link').hide();
import ipywidgets as ipw
import json
import random
import time
import pandas as pd
import os
import webbrowser
import math
from IPython.display import display, Markdown
# set kinetic parameters
with open("rate_parameters.json") as infile:
jsdata = json.load(infile)
params = jsdata["kin4"]
```
Copyright **Jacob Martin and Paolo Raiteri**, January 2021
## Determination of the Rate Law \#4
You are studying the rate of decomposition of species `A` by monitoring its concentration with time.
Collect enough data to determine the order of the reaction, the rate constant, and the half-life.
### Instructions:
- Use the slide bar(s) below to select the times at which you perform the measurement.
- Click `Perform measurement` to run the virtual experiment and collect the result.
- Click `Download CSV` to export the complete data set for all the experiments as a CSV file.
- Note that every time you `Restart laboratory` some parameters of the experiments may change.
```
# define path to results.csv file
respath = os.path.join(os.getcwd(), "..", "results.csv")
# delete existing result file and setup rng
if os.path.exists(respath):
os.remove(respath)
#random.seed(params["error"].get("seed", 0))
t = int( time.time() * 1000.0 )
random.seed( ((t & 0xff000000) >> 24) +
((t & 0x00ff0000) >> 8) +
((t & 0x0000ff00) << 8) +
((t & 0x000000ff) << 24) )
class system:
def __init__(self, vol=0, conc=0, press=0):
self.vol = vol
self.conc = conc
self.press = press
class data:
def __init__(self, start=-1, error=0, label='none', units='pure', value=0,
minval=-1, maxval=3):
self.start = start
self.minval = minval
self.maxval = maxval
self.error = error
self.label = label
self.units = units
self.value = value
# Experiment setup (+ hidden parameters)
system = system()
def initialiseExperiment():
global n
global system
global columns_list
global scatter
scatter = 0.1
n = []
columns_list = []
n.append(len(args)) # number of input adjustable parameters
n.append(len(result)) # number of results for the experiment
for i in range(0, n[0]):
columns_list.append(f"{args[i].label} [{args[i].units}]")
for i in range(0, n[1]):
columns_list.append(f"{result[i].label} [{result[i].units}]")
# Random initial concentration
system.conc = random.random()
# Adjustable input parameters
def initialiseVariables():
global logScale
logScale = True
global args
args = []
args.append(
data(
label = "Elapsed time",
minval = 0,
maxval = 5,
start = 1.,
units = "s",
value = 0.
)
)
# Results
def initialiseResults():
global result
result = []
result.append(
data(
label = "[A]",
start = 0.,
error = random.random() / 10.,
units = "mol/L"
)
)
def measure():
res = system.conc * (math.exp(-params["k"] * args[0].value.value))
return res
initialiseVariables()
out_P = ipw.Output()
out_L = ipw.Output()
with out_L:
display(Markdown("[Download CSV](../results.csv)"))
def calc(btn):
out_P.clear_output()
# Measurement result
result[0].value = measure()
# Random error
result[0].error = result[0].value * scatter * (0.5 - random.random()) * 2
# Output result
out_R[0].value = f"{result[0].value + result[0].error:.3e}"
# Read previous lines
res = pd.read_csv(respath)
var_list = []
for i in range(0, n[0]):
var_list.append(args[i].value.value)
for i in range(0, n[1]):
var_list.append(result[i].value + result[i].error)
# Append result
res.loc[len(res)] = var_list
res.to_csv(respath, index=False)
with out_P:
display(res.tail(50))
def reset(btn):
if os.path.exists(respath):
os.remove(respath)
initialiseResults()
initialiseExperiment()
res = pd.DataFrame(columns=columns_list)
res.to_csv(respath, index=False)
with out_P:
out_P.clear_output()
display(res.tail(50))
# interactive buttons ---
btn_reset = ipw.Button(description="Restart Laboratory", layout=ipw.Layout(width="150px"))
btn_reset.on_click(reset)
btn_calc = ipw.Button(description="Perform measurement", layout=ipw.Layout(width="150px"))
btn_calc.on_click(calc)
# ---
reset(btn_reset)
rows = []
for i in range(0, n[0]):
if logScale:
args[i].value = ipw.FloatLogSlider(value=args[i].start, min=args[i].minval, max=args[i].maxval)
else:
args[i].value = ipw.FloatSlider(value=args[i].start, min=args[i].minval, max=args[i].maxval)
rows.append(ipw.HBox([ipw.Label(value=f"{args[i].label} [{args[i].units}]:",
layout=ipw.Layout(width="250px")),
args[i].value]))
out_R = []
for i in range(0, n[1]):
out_R.append(ipw.Label(value=""))
rows.append(ipw.HBox([ipw.Label(value=f"Measured {result[i].label} [{result[i].units}]:",
layout=ipw.Layout(width="250px")),
out_R[i]]))
rows.append(ipw.HBox([btn_reset, btn_calc, out_L]))
rows.append(ipw.HBox([out_P]))
ipw.VBox(rows)
```
| github_jupyter |
# Imports
```
from pymongo import MongoClient
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neighbors import BallTree
from gensim import matutils, models
import gensim
import matplotlib.pyplot as plt
%matplotlib inline
```
# Getting Data From MongoDB
```
client = MongoClient()
db = client.yelp
reviews = db.reviews
business = db.business
databus = list(business.find({"name": "Mr Hoagie"}))
print(databus)
databuspd = pd.DataFrame(databus)
databuspd
busid = databuspd['business_id'][0]
datarev = list(reviews.find({"business_id": busid}))
reviews = []
for review in range(len(datarev)):
reviews.append(datarev[review]['text'])
reviews
```
# Preprocessing with CountVectorizer
```
count_vectorizer = CountVectorizer(ngram_range = (1,2), stop_words = 'english', token_pattern = "\\b[a-z][a-z]+\\b")
reviewcounts = count_vectorizer.fit_transform(reviews).transpose()
reviewcounts.shape
corpus = matutils.Sparse2Corpus(reviewcounts)
id2word = dict((v,k) for k,v in count_vectorizer.vocabulary_.items())
print(id2word)
```
# LDA
```
id2word
lda = models.LdaModel(corpus = corpus, num_topics = 5, id2word = id2word, passes = 5)
lda.print_topics()
lda_corpus = lda[corpus]
lda_docs = [doc for doc in lda_corpus]
lda_docs
```
*The LDA topics are hard to interpret.*
# Word2Vec
# Using Pretrained Vectors from Google
```
goog_model = gensim.models.Word2Vec.load('googmodel')
review_vec = []
for review in range(len(reviews)):
result = []
words = reviews[review].split()
for word in words:
try:
result.append(goog_model[word])
except:
pass
review_vec.append(np.array(result).mean(axis=0))
X = np.array(review_vec)
inert = []
for n in range(1, 6):
kmeans = KMeans(n_clusters = n, random_state=0).fit(X)
clusters = kmeans.predict(X)
inert.append(kmeans.inertia_)
xaxis = range(1,6,1)
plt.xlabel('Clusters')
plt.ylabel('Inertia')
plt.plot(xaxis,inert)
```
3 clusters was chosen as optimal
```
kmeans = KMeans(n_clusters = 3, random_state=0).fit(X)
clusters = kmeans.predict(X)
centroids = kmeans.cluster_centers_
goog_words = goog_model.syn0
```
# Minkowski Distance
```
tree = BallTree(goog_words, leaf_size=2)
wordindexes = []
for centroid in centroids:
    dist, ind = tree.query(centroid.reshape(1, -1), k=3)
wordindexes.append(ind)
wordindices = []
for index in range(len(wordindexes)):
for num in range(len(wordindexes[index][0])):
if wordindexes[index][0][num] not in wordindices:
wordindices.append(wordindexes[index][0][num])
for elem in wordindices:
    print(goog_model.index2word[elem])
```
Better to use cosine distance for NLP.
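A toy illustration of the difference (made-up vectors): two vectors pointing in the same direction but with different magnitudes are identical under cosine distance yet far apart under Euclidean distance, which matters when averaged word vectors vary in magnitude.

```python
import numpy as np

a = np.array([1.0, 1.0])
b = np.array([10.0, 10.0])  # same direction, 10x the magnitude

cosine_dist = 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
euclidean_dist = np.linalg.norm(a - b)
print(cosine_dist, euclidean_dist)  # ~0.0 vs ~12.73
```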
# Cosine Distance with Nearest Neighbors
```
from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=5, algorithm='brute', metric='cosine')
neigh.fit(goog_words)
wordindex_cos = []
for centroid in centroids:
    dist_cos, ind_cos = neigh.kneighbors(centroid.reshape(1, -1), n_neighbors=100)
wordindex_cos.append(ind_cos)
wordindices = []
for index in range(len(wordindex_cos)):
for num in range(len(wordindex_cos[index][0])):
if wordindex_cos[index][0][num] not in wordindices:
wordindices.append(wordindex_cos[index][0][num])
for elem in wordindices:
    print(goog_model.index2word[elem])
```
This prints many words completely unrelated to the reviews. Try the Stanford pretrained Word2Vec instead.
# Stanford Wikipedia
```
stan_model = gensim.models.Word2Vec.load('w2v')
from nltk.corpus import stopwords
stop = set(stopwords.words('english'))
review_vec = []
for review in reviews:
result = []
words = review.split()
for word in words:
if word not in stop:
try:
result.append(stan_model[word.lower()])
except:
pass
review_vec.append(np.array(result).mean(axis = 0))
stan_model.similar_by_vector(review_vec[0], topn = 500)
```
Many words are not related to food, price, restaurants, etc. Try a different strategy:
removing the first feature (the first vector component) may give more relevant neighbours.
```
stan_words = stan_model.syn0
stan_words = stan_words[:, 1:]
from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=100, algorithm='brute', metric='cosine')
neigh.fit(stan_words)
review_test = review_vec[0][1:]
distances_stan, indices_stan = neigh.kneighbors(review_test.reshape(1, -1), n_neighbors=100)
indices_stan = indices_stan[0]
for index in indices_stan:
    print(stan_model.index2word[index])
```
# TFIDF
```
reviews = db.reviews
datarev2 = list(reviews.find())
reviewdict = {}
for review in datarev2:
bizid = review['business_id']
bizrev = review['text']
try:
reviewdict[bizid] = reviewdict[bizid] + bizrev
except:
reviewdict[bizid] = bizrev
reviewlist_df = pd.DataFrame(reviewdict.items())
import pickle
with open('./reviewlist_df.pkl','wb') as f:
pickle.dump(reviewlist_df, f)
tfidf = TfidfVectorizer(stop_words="english",
token_pattern="\\b[a-zA-Z][a-zA-Z]+\\b",
min_df=10)
tfidf_vecs = tfidf.fit_transform(reviewlist_df.iloc[:,1])
with open('./tfidf_vecs.pkl','wb') as f:
pickle.dump([tfidf, tfidf_vecs], f)
id2words = dict((v, k) for k,v in tfidf.vocabulary_.items())
vocabulary = tfidf.vocabulary_
with open('./words.pkl','wb') as f:
pickle.dump([vocabulary, id2words], f)
```
# Load Pickles here:
```
biz_id = '_qvxFHGbnbrAPeWBVifJEQ'
import pickle
with open('./reviewlist_df.pkl','rb') as f:
reviewlist_df = pickle.load(f)
with open('./tfidf_vecs.pkl','rb') as f:
tfidf, tfidf_vecs = pickle.load(f)
with open('./words.pkl', 'rb') as f:
vocabulary, id2words = pickle.load(f)
biz_index = reviewlist_df[reviewlist_df[0] == biz_id].index[0]
tfidf_df = pd.DataFrame(tfidf_vecs.todense())
top_10 = tfidf_df.iloc[biz_index,:].argsort()[-10:]
for i in top_10[::-1]:
    print(id2words[i])
review_summ = {}
for res in range(len(reviewlist_df[0])):
top_10 = tfidf_df.iloc[res,:].argsort()[-10:]
top_10_list = []
for i in top_10[::-1]:
top_10_list.append(id2words[i])
review_summ[reviewlist_df[0][res]] = top_10_list
with open('./review_summ.pkl','wb') as f:
pickle.dump(review_summ, f)
```
| github_jupyter |
# TensorQTL QTL association testing
This pipeline conducts QTL association tests using tensorQTL.

## Input
- `--molecular-pheno`: the bed.gz file containing the table describing the molecular phenotype, with an accompanying tbi index.
- `genotype_list`: a list of whole-genome plink files, one per chromosome.
- `grm_list`: a file listing the GRM matrices generated by the GRM module of this pipeline.
- `covariate`: a file with `#id` plus the sample names as column names and one covariate per row: fixed and known covariates as well as hidden covariates recovered from factor analysis.
## Output
A set of summary statistics files for each chromosome, covering both nominal significance for each test and region (gene) level association evidence.
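As a quick illustration of the expected covariate layout (the sample and covariate names here are made up), the file has `#id` plus sample names as columns and one covariate per row; loading then transposes it to samples × covariates, as in the steps below:

```python
import io
import pandas as pd

# Hypothetical covariate table: '#id' column names each covariate,
# remaining columns are sample IDs, one row per covariate.
cov_text = (
    "#id\tsample_1\tsample_2\n"
    "sex\t1\t0\n"
    "age\t73\t68\n"
    "PC1\t0.02\t-0.01\n"
)
cov_df = pd.read_csv(io.StringIO(cov_text), sep="\t", index_col=0).T
print(cov_df.shape)  # (2, 3): 2 samples x 3 covariates
```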
**FIXME: please fix the statement below**
# Command interface
```
sos run TensorQTL.ipynb -h
```
## Example
**FIXME: add it**
## Global parameter settings
This section outlines the parameters that can be set from the command interface.
**FIXME: same comments as in APEX.ipynb**
```
[global]
# Path to the input molecular phenotype files, per chromosome, in bed.gz format.
parameter: molecular_pheno_list = path
# Covariate file, in similar format as the molecular_pheno
parameter: covariate = path
# Genotype file in plink trio format, per chromosome
parameter: genotype_file_list = path
# An optional subset of region list containing a column of ENSG gene_id to limit the analysis
parameter: region_list = path("./")
# Path to the work directory of the analysis.
parameter: cwd = path('./')
# Specify the number of jobs per run.
parameter: job_size = 2
# Container option for software to run the analysis: docker or singularity
parameter: container = ''
# Prefix for the analysis output
parameter: name = 'ROSMAP'
# Specify the scanning window for the up and downstream radius to analyze around the region of interest, in units of bp
parameter: window = ['1000000']
import pandas as pd
molecular_pheno_chr_inv = pd.read_csv(molecular_pheno_list,sep = "\t")
geno_chr_inv = pd.read_csv(genotype_file_list,sep = "\t")
input_inv = molecular_pheno_chr_inv.merge(geno_chr_inv, on = "#id")
input_inv = input_inv.values.tolist()
```
## QTL Sumstat generation
This step generates the cis-QTL summary statistics and vcov (covariate-adjusted LD) files for downstream analysis from summary statistics. The analysis is done per chromosome to reduce running time.
## Cis QTL Sumstat generation via tensorQTL
```
[TensorQTL_cis_1]
input: for_each = "input_inv"
output: f'{cwd:a}/{path(_input_inv[1]):bnnn}.cis_qtl_pairs.{_input_inv[0]}.parquet',
        f'{cwd:a}/{path(_input_inv[1]):bnnn}.empirical.cis_sumstats.txt',
        long_table = f'{cwd:a}/{path(_input_inv[1]):bnnn}.nominal.cis_long_table.txt'
task: trunk_workers = 1, trunk_size = 1, walltime = '12h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand= "$[ ]", stderr = f'{_output[0]}.stderr', container = container,stdout = f'{_output[0]}.stdout'
touch $[_output[0]].time_stamp
python: expand= "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout' , container = container
import pandas as pd
import numpy as np
import tensorqtl
from tensorqtl import genotypeio, cis, trans
    ## Define parameters
plink_prefix_path = $[path(_input_inv[2]):nr]
expression_bed = $[path(_input_inv[1]):r]
covariates_file = "$[covariate]"
Prefix = "$[_output[0]:nnn]"
## Loading Data
phenotype_df, phenotype_pos_df = tensorqtl.read_phenotype_bed(expression_bed)
### Filter by the optional keep gene
##if $[region_list].is_file():
## region = pd.read_csv("$[region_list]","\t")
## keep_gene = region["gene_ID"].to_list()
## phenotype_df = phenotype_df.query('gene_ID in keep_gene')
## phenotype_pos_df = phenotype_pos_df.query('gene_ID in keep_gene')
covariates_df = pd.read_csv(covariates_file, sep='\t', index_col=0).T
pr = genotypeio.PlinkReader(plink_prefix_path)
genotype_df = pr.load_genotypes()
variant_df = pr.bim.set_index('snp')[['chrom', 'pos']]
## Retaining only common samples
phenotype_df = phenotype_df[np.intersect1d(phenotype_df.columns, covariates_df.index)]
phenotype_df = phenotype_df[np.intersect1d(phenotype_df.columns, genotype_df.columns)]
covariates_df = covariates_df.transpose()[np.intersect1d(phenotype_df.columns, covariates_df.index)].transpose()
## cis-QTL mapping: nominal associations for all variant-phenotype pairs
cis.map_nominal(genotype_df, variant_df,
phenotype_df,
phenotype_pos_df,
Prefix, covariates_df=covariates_df)
## Load the parquet and save it as txt
pairs_df = pd.read_parquet("$[_output[0]]")
pairs_df.columns.values[6] = "pval"
pairs_df.columns.values[7] = "beta"
pairs_df.columns.values[8] = "se"
pairs_df = pairs_df.assign(
alt = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[-1])).assign(
ref = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[-2])).assign(
pos = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[0].split(":")[1])).assign(
chrom = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split(":")[0]))
pairs_df.to_csv("$[_output[2]]", sep='\t',index = None)
cis_df = cis.map_cis(genotype_df, variant_df,
phenotype_df,
phenotype_pos_df,
covariates_df=covariates_df, seed=999)
cis_df.index.name = "gene_id"
cis_df.to_csv("$[_output[1]]", sep='\t')
```
## Trans QTL Sumstat generation via tensorQTL
```
[TensorQTL_trans_1]
input: for_each = "input_inv"
output: f'{cwd:a}/{path(_input_inv[1]):bnnn}.trans_sumstats.txt'
parameter: batch_size = 10000
parameter: pval_threshold = 1e-5
parameter: maf_threshold = 0.05
task: trunk_workers = 1, trunk_size = 1, walltime = '12h', mem = '40G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand= "$[ ]", stderr = f'{_output[0]}.stderr', container = container,stdout = f'{_output[0]}.stdout'
touch $[_output[0]].time_stamp
python: expand= "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout',container =container
import pandas as pd
import numpy as np
import tensorqtl
from tensorqtl import genotypeio, cis, trans
    ## Define parameters
plink_prefix_path = $[path(_input_inv[2]):nr]
expression_bed = $[path(_input_inv[1]):r]
covariates_file = "$[covariate]"
Prefix = "$[_output[0]:nnn]"
## Loading Data
phenotype_df, phenotype_pos_df = tensorqtl.read_phenotype_bed(expression_bed)
##### Filter by the optional keep gene
##if $[region_list].is_file():
## region = pd.read_csv("$[region_list]","\t")
## keep_gene = region["gene_ID"].to_list()
## phenotype_df = phenotype_df.query('gene_ID in keep_gene')
## phenotype_pos_df = phenotype_pos_df.query('gene_ID in keep_gene')
covariates_df = pd.read_csv(covariates_file, sep='\t', index_col=0).T
pr = genotypeio.PlinkReader(plink_prefix_path)
genotype_df = pr.load_genotypes()
variant_df = pr.bim.set_index('snp')[['chrom', 'pos']]
## Retaining only common samples
phenotype_df = phenotype_df[np.intersect1d(phenotype_df.columns, covariates_df.index)]
    covariates_df = covariates_df.transpose()[np.intersect1d(phenotype_df.columns, covariates_df.index)].transpose()
## Trans analysis
trans_df = trans.map_trans(genotype_df, phenotype_df, covariates_df, batch_size=$[batch_size],
return_sparse=True, pval_threshold=$[pval_threshold], maf_threshold=$[maf_threshold])
## Filter out cis signal
    trans_df = trans.filter_cis(trans_df, phenotype_pos_df.T.to_dict(), variant_df, window=$[window[0]])
## Output
trans_df.columns.values[1] = "gene_ID"
trans_df.columns.values[6] = "pval"
trans_df.columns.values[7] = "beta"
trans_df.columns.values[8] = "se"
    trans_df = trans_df.assign(
alt = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[1])).assign(
ref = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[2])).assign(
pos = lambda dataframe: dataframe['variant_id'].map(lambda variant_id:variant_id.split("_")[0]))
trans_df.to_csv("$[_output[0]]", sep='\t')
```
**FIXME: we can consolidate these steps. I'll take a look myself after we have the MWE test**
```
[TensorQTL_cis_2]
input: output_from("TensorQTL_cis_1")["long_table"], group_by = "all"
output: f'{cwd:a}/{name}.TensorQTL_recipe.tsv'
python: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
import csv
import pandas as pd
data_tempt = pd.DataFrame({
"#chr" : [int(x.split(".")[-4].replace("chr","")) for x in [$[_input:r,]]],
"sumstat_dir" : [$[_input:r,]]
})
data_tempt.to_csv("$[_output]",index = False,sep = "\t" )
[TensorQTL_trans_2]
input: group_by = "all"
output: f'{cwd:a}/{name}.TensorQTL_recipe.tsv'
python: expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
import csv
import pandas as pd
data_tempt = pd.DataFrame({
        "#chr" : [int(x.split(".")[-5].replace("chr","")) for x in [$[_input:r,]]],
"sumstat_dir" : [$[_input:r,]]
})
data_tempt.to_csv("$[_output]",index = False,sep = "\t" )
```
| github_jupyter |
# Identify Customer Segments
In this project, we apply unsupervised learning techniques to identify segments of the population that form the core customer base for a global services company in Germany.
These segments can then be used to direct marketing campaigns towards the audiences with the highest expected rate of return.
The data have been provided by Bertelsmann Arvato Analytics.
##### 🔴Import Libraries
```
# basic
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import seaborn as sns
import time
# sklearn
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix, accuracy_score
# magic
%matplotlib inline
```
# 🔵 Load the Data
There are four files associated with this project (not including this one):
- `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).
- `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).
- `Data_Dictionary.md`: Detailed information file about the features in the provided datasets.
- `AZDIAS_Feature_Summary.csv`: Summary of feature attributes for demographics data; 85 features (rows) x 4 columns
We will use this information to cluster the general population into groups with similar demographic properties. Then, we will see how the people in the customers dataset fit into those created clusters.
The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase.
Under-represented clusters will also be analysed to further understand our customer characteristics.
This information can then be used for further applications, such as targeting for a marketing campaign.
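As a sketch of that comparison step (the function and the toy labels below are made up for illustration), once cluster labels exist for both datasets we can compare per-cluster proportions; a ratio well above 1 marks an over-represented segment:

```python
import numpy as np

def cluster_representation(pop_labels, cust_labels, n_clusters):
    """Ratio of each cluster's share among customers to its share in the population."""
    pop = np.bincount(pop_labels, minlength=n_clusters) / len(pop_labels)
    cust = np.bincount(cust_labels, minlength=n_clusters) / len(cust_labels)
    return cust / pop  # > 1: over-represented; < 1: under-represented

ratios = cluster_representation(np.array([0, 0, 1, 1, 2, 2]),
                                np.array([0, 0, 0, 1, 2, 2]), 3)
print(ratios)  # cluster 0 is over-represented among customers here
```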
#### 🔴Data Import and Exploration
```
# Load in the general demographics data
azdias = pd.read_csv('./data/Udacity_AZDIAS_Subset.csv', sep=';')
# Load in the feature summary file
feat_info = pd.read_csv('./data/AZDIAS_Feature_Summary.csv', sep=';')
# Check the structure of the data
display(azdias.head())
print('Shape of demographics data: ', azdias.shape)
print(azdias.dtypes)
# Display feature information
display(feat_info)
```
# 🔵Preprocessing
## 🔵Assess Missing Data
The feature summary file contains a summary of properties for each demographics data column. We will use this file to help us make cleaning decisions during this stage of the project.
At first, we will assess the demographics data in terms of missing data.
#### 🔵Convert Missing Value Codes to NaNs
The fourth column of the feature attributes summary (loaded in above as `feat_info`) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. `[-1,0]`), this will get read in as a string object. We'll need to do a little bit of parsing to make use of it to identify and clean the data. We will convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value.
#### 🔴Identify missing or unknown data values and convert to NaNs
```
# Calculate naturally missing values (NaN) for later use
nat_nan = azdias.isnull().sum().sum()
# Helper function
def convert_nan(column):
# Extract list of encodings from feat_info for feature
codes_str = list(feat_info[feat_info['attribute'] == column.name]['missing_or_unknown'])[0]
# Parse string
codes = codes_str.replace('[', '').replace(']', '').split(',')
# Replace missing value with NaN
return column.apply(lambda x: np.NaN if str(x) in codes else x)
# Convert missing data into NaN using the helper function convert_nan
azdias = azdias.apply(convert_nan)
```
#### 🔴 Compare Naturally Missing Data vs Categorised as Missing
```
total_nan = azdias.isnull().sum().sum()
non_nat_nan = azdias.isnull().sum().sum() - nat_nan
total_values = azdias.size
percent_non_nat_nan = 100 * non_nat_nan / total_values
percent_nat_nan = 100 * nat_nan / total_values
percent_nan = 100 * total_nan / total_values
print('Data that are categorised as "missing" or "unknown": {}, ({}%)'.format(non_nat_nan, round(percent_non_nat_nan, 2)))
print('Data that are naturally missing: {}, ({}%)'.format(nat_nan, round(percent_nat_nan, 2)))
print('Total NaN: {}, ({}%)'.format(total_nan, round(percent_nan, 2)))
```
#### 🔵Assess Missing Data in Each Column
How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. We will use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualize the distribution of missing value counts to find these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project we will just remove them from the dataframe.
#### 🔴Identify Hard Outliers (~40-80% NaN)
```
# Plot histogram of # NaN
total_nan_cols = azdias.isnull().sum() # NaNs per Column
n, bins, patches = plt.hist(total_nan_cols, bins=15) # Plot Histogram
plt.ylabel('# Features')
plt.xlabel('# NaNs')
plt.title('Dataset with Hard Outliers')
plt.show()
# Plot histogram of % NaN
ptotal_nan_cols = (azdias.isnull().mean())
n, bins, patches = plt.hist(ptotal_nan_cols, bins=15)
plt.ylabel('# Features')
plt.xlabel('% NaNs')
plt.show()
# Identify Outliers, using 40% NaN as the cut-off point
outliers_index = list(ptotal_nan_cols[(ptotal_nan_cols > 0.4)].index)
outliers_value = list(ptotal_nan_cols[(ptotal_nan_cols > 0.4)].values)
print('Outliers (many NaNs): \n')
outliers = pd.Series(outliers_value, outliers_index)
display(pd.DataFrame(outliers, columns = ['Proportion of NaNs']))
```
#### 🔴Drop Hard Outliers (from azdias and feat_info)
```
# Create new dataset without hard outliers
azdias_1 = azdias.drop(outliers_index, axis=1)
print('Dropped {} features.'.format(azdias.shape[1] - azdias_1.shape[1]))
# Create new feat_info without hard outliers (for later use)
drop_index = []
for i in range(feat_info.shape[0]):
if feat_info['attribute'][i] in outliers_index:
drop_index.append(i)
feat_info_1 = feat_info.drop(drop_index, axis=0)
print('Dropped {} features.'.format(feat_info.shape[0] - feat_info_1.shape[0]))
```
#### 🔴Identify Soft Outliers (~8-15% NaN)
```
# Plot soft histogram of # NaN
total_nan_cols_soft = azdias_1.isnull().sum() # Define NaNs per Column
n, bins, patches = plt.hist(total_nan_cols_soft, bins=20) # Plot Histogram
plt.ylabel("# Features")
plt.xlabel("# NaNs")
plt.title('Dataset with Soft Outliers')
plt.show()
# Plot soft histogram of % NaN
ptotal_nan_cols_soft = (azdias_1.isnull().mean())
n, bins, patches = plt.hist(ptotal_nan_cols_soft, bins=20)
plt.ylabel('# Features')
plt.xlabel('% NaNs')
plt.show()
# Identify Soft Outliers (between 8% - 16% NaN)
outliers_index_soft = list(ptotal_nan_cols_soft[(ptotal_nan_cols_soft > 0.08) & (ptotal_nan_cols_soft < 0.16)].index)
outliers_value_soft = list(ptotal_nan_cols_soft[(ptotal_nan_cols_soft > 0.08) & (ptotal_nan_cols_soft < 0.16)].values)
# Define series and display
outliers_soft = pd.Series(outliers_value_soft, outliers_index_soft)
outliers_soft = pd.DataFrame(outliers_soft, columns = ['Proportion of NaNs'])
display(outliers_soft)
```
#### 🔴Identify Patterns in Soft Outliers
```
# Plot number of NaNs vs Corresponding Feature
plt.plot(outliers_soft.index, 100*outliers_soft.values, 'bo', markersize=4)
plt.ylabel('% NaN')
plt.xlabel('Features')
plt.tick_params(
axis='x',
bottom=False,
top=False,
labelbottom=False)
plt.rcParams['figure.figsize'] = 10, 3
plt.show()
```
#### 🔴Features with close to 0 NaNs
```
clean_features = list(ptotal_nan_cols[(ptotal_nan_cols < 0.01)].index)
print('There are {} features with less than 1% NaNs!'.format(len(clean_features)))
```
#### 🟡Discussion
- Are there any patterns in missing values?
- Which columns were removed from the dataset?
From the initial histogram we can see that there are three significant outliers whose values are largely NaN (~40-80% NaN).
These features (which I decided to delete due to their abundance of missing values) are the following:
- Best-ager typology (AGER_TYP)
- Year of birth (GEBURTSJAHR)
- Consumer pattern over past 12 months (KK_KUNDENTYP)
Then, there is a larger set of features with fewer NaNs (8-15% NaNs) which I refer to as "soft outliers".
From the plot we can see that many of those features share the exact same amount of NaNs.
Finally, out of the 85 features (initially), we can see that there are 35 that have close to 0 NaNs (<1%).
# 🔵Assess Missing Data in Each Row
Now, we'll perform a similar assessment for the rows of the dataset.
Then, we will divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.
In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups.
We will select at least five of these columns and compare the distribution of values.
#### 🔴Plot histogram of Rows vs Number of NaNs
```
# Plot histogram of # NaN per row
total_nan_rows = azdias_1.isnull().sum(axis=1)
n, bins, patches = plt.hist(total_nan_rows, bins=30)
plt.ylabel("# Feature Vectors")
plt.xlabel("# NaNs (rows)")
plt.title('Missing Values per Row')
plt.show()
# Plot histogram of % NaN per row
ptotal_nan_rows = (azdias_1.isnull().mean(axis=1))
n, bins, patches = plt.hist(ptotal_nan_rows, bins=30)
plt.ylabel('# Feature Vectors')
plt.xlabel('% NaNs (rows)')
plt.show()
```
#### 🔴Divide data into two subsets based on NaN count
```
#Divide the data into two subsets based on the number of missing values in each row
sub1_index = list(ptotal_nan_rows[ptotal_nan_rows < 0.3].index)
sub2_index = list(ptotal_nan_rows[ptotal_nan_rows >= 0.3].index)
azdias_2 = azdias_1.drop(sub2_index, axis=0)
sub_outliers = azdias_1.drop(sub1_index, axis=0)
# Count Outliers in Rows
print('The number of outliers (rows) is {} (~{}% of the original dataset)'.format(azdias_1.shape[0] - azdias_2.shape[0], round(100*(azdias_1.shape[0] - azdias_2.shape[0])/azdias_1.shape[0])))
```
🔵 Depending on what we observe in our comparison, there will be implications for how we approach our conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values look very different from the data with few or no missing values, then we should flag those data as special. We'll revisit them later on.
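Beyond side-by-side plots, a quick way to quantify such a comparison is to look at normalized value counts per subset. A minimal sketch with toy data (the column name and values are hypothetical):

```python
import pandas as pd

# Hypothetical values of one feature in the two subsets
low_nan = pd.Series([1, 1, 2, 3, 3, 3], name="FEATURE_X")
high_nan = pd.Series([2, 2, 2, 3], name="FEATURE_X")

# normalize=True converts counts to proportions, so subsets of
# different sizes can be compared directly
dist_low = low_nan.value_counts(normalize=True).sort_index()
dist_high = high_nan.value_counts(normalize=True).sort_index()

comparison = pd.DataFrame({"low_nan": dist_low, "high_nan": dist_high}).fillna(0)
print(comparison)
```

Large per-value gaps between the two columns would indicate that the high-NaN rows are qualitatively different.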
#### 🔴Plot 5 columns for both datasets and compare the relative % value count
```
# Pick 5 random columns with 0 NaN
np.random.seed(42)
s0 = azdias_1.isna().sum()
s1 = s0[(s0 == 0)]
s2 = pd.Series(s1.index)
column_list = s2.sample(5).values
print(column_list)
# Use helper function to compare the feature values for the two subsets
def col_diff(col_list):
for column in col_list:
fig, axes = plt.subplots(1, 2, figsize=(15, 2))
# Define barplot for Low NaN Rows (make y-axis percentage so that difference in row vectors is accounted for)
ax1 = sns.barplot(x=azdias_2[column],
y=azdias_2[column],
data=azdias_2[column],
estimator=lambda x: len(x) / len(azdias_2[column]) * 100,
color='c', ax=axes[0])
ax1.set(ylabel="Percent")
axes[0].set_title('Low NaN Rows')
axes[0].set_yticks(np.arange(0, 101, 25))
# Define barplot for High NaN Rows (make y-axis percentage so that difference in row vectors is accounted for)
ax2 = sns.barplot(x=sub_outliers[column],
y=sub_outliers[column],
data=sub_outliers[column],
estimator=lambda x: len(x) / len(sub_outliers[column]) * 100,
color='r', ax=axes[1])
ax2.set(ylabel="Percent")
axes[1].set_title('High NaN Rows')
axes[1].set_yticks(np.arange(0, 101, 25))
plt.show()
col_diff(column_list)
```
#### 🟡Discussion
Reporting your observations regarding missing data in rows.
- Are the data with lots of missing values qualitatively different from data with few or no missing values?
Firstly, we noticed that about 10% of the rows have more than 30% NaNs. Then, we split the dataset into two subsets based on that threshold.
From the plots we observe that the data vary significantly between the two subsets (for all 5 features that were picked at random). Thus, we conclude that removing the rows that are high in NaN values might lead to unwanted bias.
## 🔵Select and Re-Encode Features
Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, we need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. We check the third column of the feature summary (`feat_info`) for a summary of types of measurement.
- For numeric and interval data, these features can be kept without changes.
- Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, we will make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).
- Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.
In the first two parts of this sub-step, we will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether we will keep, drop, or re-encode each. Then, in the last part, we will create a new dataframe with only the selected and engineered columns.
```
# Number of features for each data type
display(pd.DataFrame({'Type': feat_info_1['type'].value_counts().index, 'Count': feat_info_1['type'].value_counts().values}))
```
#### 🔵Re-Encode Categorical Features
For categorical data, we would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, we perform one of the following:
- For binary (two-level) categoricals that take numeric values: keep them without needing to do anything.
- There is one binary variable that takes on non-numeric values. For this one, we re-encode the values as numbers.
- For multi-level categoricals (three or more values), we choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html))
#### 🔴Extract Categorical Features
```
# Extract categorical features
categorical = feat_info_1[feat_info_1['type'] == 'categorical']['attribute'].values
# Extract their unique value count
categorical_values = pd.DataFrame(azdias_2[categorical].nunique(dropna=True), columns=['Unique Values'])
# Extract Mixed features
mixed = feat_info_1[feat_info_1['type'] == 'mixed']['attribute'].values
display(categorical_values)
```
#### 🔴Extract Binary & Multi-Level Categorical Features
```
binary_categorical = categorical_values[categorical_values['Unique Values'] == 2]
multi_categorical = categorical_values[categorical_values['Unique Values'] > 2]
print('There are {} binary categorical features and {} multi-level categorical features.\n'.format(len(binary_categorical), len(categorical)-len(binary_categorical)))
print('Binary:\n')
display(binary_categorical)
print('Multi-Level:\n')
display(multi_categorical)
```
#### 🔴Extract Numerical & Non-numerical Features
```
numerical = []
non_numerical = []
for i, dtype in zip(feat_info_1.index, azdias_1.dtypes):
if dtype == object:
non_numerical.append(feat_info_1['attribute'][i])
else:
numerical.append(feat_info_1['attribute'][i])
# Check if dimensions check out with original dataset
if (len(numerical) + len(non_numerical)) == azdias_1.shape[1]:
print('Checks out')
else:
print('Something\'s wrong')
# Check if all binary categoricals are numericals
for feature in binary_categorical.index:
if feature in non_numerical:
print('{} is non-numerical!'.format(feature))
```
#### 🔴Encode OST_WEST_KZ
```
azdias_2["OST_WEST_KZ"] = azdias_2["OST_WEST_KZ"].map({"W": 1, "O": 2}, na_action="ignore")
```
#### 🔴Examine Correlation between multi-level features to determine which are safe to remove
```
multilevel = pd.DataFrame(azdias_2[multi_categorical.index])
corr = multilevel.corr()
display(corr[corr > 0.8])
display(corr[corr < -0.8])
```
#### 🔴Drop Highly Correlated Multi-Level Features
`LP_FAMILIE_FEIN` and `LP_FAMILIE_GROB` are very highly correlated, as well as `LP_STATUS_FEIN` and `LP_STATUS_GROB`. Thus, we decide to drop `LP_FAMILIE_FEIN` and `LP_STATUS_FEIN`.
```
drop = ['LP_FAMILIE_FEIN', 'LP_STATUS_FEIN'] # Feature names to be dropped
azdias_2 = azdias_2.drop(drop, axis=1) # Drop from azdias
multi_categorical = multi_categorical.drop(drop, axis = 0) # Drop from multilevel categoricals
```
#### 🔴One-Hot Encode Multi-Level Categorical Features
```
multilevel_names = multi_categorical.index
# Check datatypes
azdias_2[multilevel_names].dtypes
```
In order to one-hot encode these features, we first convert them to strings so that `pd.get_dummies` treats each numeric level as a separate category.
```
azdias_onehot = pd.get_dummies(azdias_2[multilevel_names].astype('str'))
azdias_3 = azdias_2.drop(multilevel_names, axis=1)
azdias_3 = pd.concat([azdias_3, azdias_onehot], axis=1)
print('One-hot encoding added {} features. Total features: {}.'.format(azdias_3.shape[1]-azdias_2.shape[1], azdias_3.shape[1]))
```
#### 🔴Remove NaN Columns
```
nan_columns = [column for column in azdias_3.columns.tolist() if column[-4:] == "_nan"]
azdias_3 = azdias_3.drop(columns=nan_columns)
print('Total features: {}.'.format(azdias_3.shape[1]))
```
#### 🟡Discussion
In the analysis we found 5 binary and 14 multi-level categoricals. One of the binaries (`OST_WEST_KZ`) was non-numerical, so it was re-encoded with numerical values. **All but two** multi-level categoricals were one-hot encoded to ensure that no potentially useful information is lost.
(`LP_FAMILIE_FEIN` and `LP_STATUS_FEIN` were dropped due to their high correlation (> 0.98) with `LP_FAMILIE_GROB` and `LP_STATUS_GROB` respectively.)
**One-Hot Encoding**
Since we cast the multi-level values to strings before encoding, every column containing at least one NaN produced a "nan" dummy variable. NaNs can sometimes be meaningful and hold valuable information, but since no such pattern is obvious here (and we already have plenty of data), we dropped these extra NaN columns.
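The effect can be reproduced on a toy column (hypothetical values): casting to string turns NaN into the literal category `'nan'`, which `pd.get_dummies` then encodes like any other level.

```python
import numpy as np
import pandas as pd

# Toy multi-level column containing a missing value
col = pd.Series([1.0, 2.0, np.nan, 2.0])

dummies = pd.get_dummies(col.astype("str"))
print(list(dummies.columns))  # ['1.0', '2.0', 'nan']

# Dropping the 'nan' indicator mirrors the cleanup performed above
dummies = dummies.drop(columns=[c for c in dummies.columns if c.endswith("nan")])
```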
#### 🔵Engineer Mixed-Type Features
There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis.
There are two in particular that deserve attention:
- "PRAEGENDE_JUGENDJAHRE"
- "CAMEO_INTL_2015"
#### 🔴Define Function to engineer the two mixed features
```
# Investigate "PRAEGENDE_JUGENDJAHRE" and "CAMEO_INTL_2015"
display(pd.DataFrame(azdias_3['PRAEGENDE_JUGENDJAHRE'].head()))
display(pd.DataFrame(azdias_3['CAMEO_INTL_2015'].head(5)))
def mixed_engineering(df):
df1 = df['PRAEGENDE_JUGENDJAHRE']
df2 = df['CAMEO_INTL_2015']
# Define mapping dictionary for the first feature
df1_map = {
1.0: [40, 1], 2.0: [40, 2], 3.0: [50, 1], 4.0: [50, 2], 5.0: [60, 1],
6.0: [60, 2], 7.0: [60, 2], 8.0: [70, 1], 9.0: [70, 2], 10.0: [80, 1],
11.0: [80, 2], 12.0: [80, 1], 13.0: [80, 2], 14.0: [90, 1], 15.0: [90, 2]
}
# Define new decade and movement features for the first feature
decade = df1.map(lambda x: df1_map[x][0], na_action='ignore')
decade = decade.rename('DEKADE')
movement = df1.map(lambda x: df1_map[x][1], na_action='ignore')
movement = movement.rename('TRUPPENBEWEGUNG')
# Define new wealth and life_stage features for the second feature
wealth = df2.map(lambda x: int(x) // 10, na_action='ignore')
wealth = wealth.rename('WEALTH')
life_stage = df2.map(lambda x: int(x) % 10, na_action='ignore')
life_stage = life_stage.rename('LIFE_STAGE')
# Remove mixed features and add new features to df
df = df.drop(columns=[df1.name, df2.name])
df[decade.name] = decade
df[movement.name] = movement
df[wealth.name] = wealth
df[life_stage.name] = life_stage
return df
azdias_3 = mixed_engineering(azdias_3)
display(azdias_3[['DEKADE', 'TRUPPENBEWEGUNG', 'WEALTH', 'LIFE_STAGE']].head())
```
#### 🟡Discussion
- `"PRAEGENDE_JUGENDJAHRE"` combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, we created two new variables to capture the other two dimensions: an interval-type variable for decade (40s - 90s), and a binary variable for movement (1, 2).
- `"CAMEO_INTL_2015"` combines information on two axes: wealth and life stage. We broke up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables, `wealth` and `life_stage`.
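As a quick sanity check of the digit split, here is the arithmetic on one example code:

```python
# CAMEO_INTL_2015 code '51': tens digit = wealth, ones digit = life stage
code = "51"
wealth = int(code) // 10      # 5
life_stage = int(code) % 10   # 1
print(wealth, life_stage)
```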
#### 🔵Complete Feature Selection
In order to finish this step up, we need to make sure that our dataframe now only has the columns that we want to keep. To summarize, the dataframe should consist of the following:
- All numeric, interval, and ordinal type columns from the original dataset.
- Binary categorical features (all numerically-encoded).
- Engineered features from other multi-level categorical features and mixed features.
#### 🔴Confirm that columns check out
```
# Do whatever you need to in order to ensure that the dataframe only contains the columns that should be passed to the algorithm functions.
# Check that binary categorical feature is indeed numerical
print('Binary categorical dtype: ', azdias_3['OST_WEST_KZ'].dtypes)
# Check that the two mixed features were successfully dropped
assert 'PRAEGENDE_JUGENDJAHRE' not in azdias_3.columns
assert 'CAMEO_INTL_2015' not in azdias_3.columns
# Check that the two mixed features were successfully replaced
assert 'DEKADE' in azdias_3.columns
assert 'TRUPPENBEWEGUNG' in azdias_3.columns
assert 'WEALTH' in azdias_3.columns
assert 'LIFE_STAGE' in azdias_3.columns
# Check that dependent features were successfully dropped
if ('LP_FAMILIE_GROB' not in azdias_3.columns) and ('LP_STATUS_FEIN' not in azdias_3.columns):
print('Everything appears to be fine!')
```
### 🔵🔵Create a Cleaning Function
Even though we've finished cleaning up the general population demographics data, it's important to look ahead and realize that we'll need to perform the same cleaning steps on the customer demographics data. In this substep, we create the function below to execute the main feature selection, encoding, and re-engineering steps performed above. Then, when it comes to the customer data later on, we can just run this function on that DataFrame to get the trimmed dataset in a single step.
#### 🔴Create Dataset Cleaning Function
```
def clean_data(df):
'''
Perform feature trimming, re-encoding, and engineering for demographics
data
INPUT: Demographics DataFrame
OUTPUT: Trimmed and cleaned demographics DataFrame
'''
# Convert missing value codes into NaNs
df = df.apply(convert_nan)
# Remove hard outliers (Columns)
df = df.drop(outliers_index, axis=1)
# Remove outliers (rows with more than 30% NaN)
p_nan = df.isna().mean(axis=1)
high_nan = list(p_nan[p_nan > 0.3].index)
df = df.drop(high_nan, axis=0)
# Re-encode OST_WEST_KZ (numerical values)
df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map({'W': 1, 'O': 2}, na_action='ignore')
# Drop dependent multi-level features
df = df.drop(columns=["LP_FAMILIE_FEIN", "LP_STATUS_FEIN"])
# One-hot encode multi-level features
df_onehot = pd.get_dummies(df[multilevel_names].astype('str'))
df_temp = df.drop(multilevel_names, axis=1)
df = pd.concat([df_temp, df_onehot], axis=1)
# Remove nan columns (resulting from one-hot encoding)
nan_cols = [column for column in df.columns.tolist() if column[-4:] == "_nan"]
df = df.drop(columns=nan_cols)
# Engineer new features from mixed ones
df = mixed_engineering(df)
return df
```
# 🔵Feature Transformation
## 🔵Apply Feature Scaling
Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features.
#### 🔴Calculate percentage of NaN
```
# Count % NaN
total_values = azdias_3.shape[0] * azdias_3.shape[1]
total_nan = azdias_3.isna().sum().sum()
print('NaN values make up {:.2f}% of the dataset.'.format(100*total_nan/total_values))
```
#### 🔴Deal with NaN values (using an Imputer)
Using an imputer, we have the ability to replace NaN values with other values of our liking.
A good tactic could be to replace NaN values with the median of that column, but this may lead to unreasonable, non-integer values for categorical data.
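The problem with median imputation on categorical codes can be demonstrated on a toy column (hypothetical values):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical categorical column coded 1-3, with one missing entry
col = pd.DataFrame({"CAT": [1.0, 2.0, 3.0, 3.0, np.nan]})

median_imp = SimpleImputer(strategy="median").fit_transform(col)
mode_imp = SimpleImputer(strategy="most_frequent").fit_transform(col)

print(median_imp.ravel())  # NaN becomes 2.5 -- not a valid code
print(mode_imp.ravel())    # NaN becomes 3.0 -- stays within the code set
```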
```
# Fit-transform imputer to dataset
imputer = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imputer_fit = imputer.fit(azdias_3)
imputer_transform = imputer_fit.transform(azdias_3)
# Create dataset with 0 NaNs
azdias_4 = pd.DataFrame(imputer_transform, columns=azdias_3.columns)
# Check if process was successful
if azdias_4.isna().sum().sum() == 0:
print('Jolly good!')
```
#### 🔴Apply feature scaling (standardisation)
```
# Apply feature scaling to the general population demographics data
scaler = StandardScaler()
scaler_fit = scaler.fit(azdias_4)
scaler_transform = scaler_fit.transform(azdias_4)
azdias_5 = pd.DataFrame(scaler_transform, columns=azdias_4.columns)
# Quickly check whether head() values are reasonable
azdias_5.head()
```
### 🟡Discussion
#### Missing Values
An imputer was used to deal with NaN values. The missing values were replaced with the most frequent value in their respective column (to avoid unreasonable results in categorical features).
#### Feature Scaling
For feature scaling we simply standardised the data.
## 🔵Perform Dimensionality Reduction
On our scaled data, we are now ready to apply dimensionality reduction techniques. We choose to use sklearn's Principal Component Analysis.
The steps are documented below.
#### 🔴Create new dataset with PCA
```
# Instantiate
pca_test = PCA(azdias_5.shape[1])
# Fit-transform
azdias_pca_test = pca_test.fit_transform(azdias_5)
```
#### 🔴Create Scree Plot Helper Function
This will help us determine the number of components we want, using the PCA instance defined above.
```
def scree_plot(pca):
'''
Creates a scree plot associated with the principal components
INPUT: pca - a fitted scikit-learn PCA instance
OUTPUT: None
'''
num_components = len(pca.explained_variance_ratio_)
ind = np.arange(num_components) # Number of components
var = pca.explained_variance_ratio_ # Variance explained by each component
plt.figure(figsize=(10, 6))
ax = plt.subplot(111)
cumvar = np.cumsum(var) # Cumulative sum of variance explained
ax.bar(ind, var)
ax.plot(ind, cumvar)
ax.xaxis.set_tick_params(width=0)
ax.yaxis.set_tick_params(width=2, length=12)
ax.set_xlabel("# Principal Component")
ax.set_ylabel("Variance Explained (%)")
plt.title('Explained Variance Per Principal Component')
scree_plot(pca_test)
# Number of components
ind = np.arange(len(pca_test.explained_variance_ratio_))
# Variance explained by each component
var = pca_test.explained_variance_ratio_
# Cumulative sum of variance explained
cumvar = pd.Series(np.cumsum(var))
```
#### 🔴Pick components so that ~80% of variance is explained
```
# Clip cumulative variance series at cutoff variance
cutoff = 0.808 # Define cutoff
clip1 = cumvar.clip(upper = cutoff) # Clip
clip2 = clip1[clip1 != cutoff] # Remove clipped
num_components = len(clip2) # Return number of principal components to keep
print('To retain ~80% of the information, {} principal components will be used'.format(num_components))
```
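An equivalent, more direct way to get the same count is `np.searchsorted` on the cumulative variance. A sketch with a toy variance profile (the real values come from `pca.explained_variance_ratio_`):

```python
import numpy as np

# Toy explained-variance ratios (hypothetical)
var = np.array([0.4, 0.2, 0.15, 0.1, 0.08, 0.07])
cumvar = np.cumsum(var)

# Index of the first entry reaching 80%, +1 to convert to a component count
num_components = int(np.searchsorted(cumvar, 0.8)) + 1
print(num_components)  # 4 -> cumvar[3] = 0.85 >= 0.8
```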
#### 🔴Re-apply PCA
This time for 85 components.
```
# Re-apply PCA to the data while selecting for number of components to retain.
pca = PCA(num_components)
azdias_pca = pca.fit_transform(azdias_5)
scree_plot(pca)
```
### 🟡Discussion
- How many principal components / transformed features are we retaining for the next step of the analysis?
We decided to retain 85 principal components, since that is the number needed to explain ~80% of the variance in the data; judging from the rate of change of the explained variance in the scree plot, this seemed like a good cut-off point.
## 🔵Interpret Principal Components
Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.
As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature.
To investigate the features, we will map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. We use the data dictionary document to help us understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.
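The unit-norm property can be verified directly on any fitted PCA. A small sketch on random data (the variable names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
pca_demo = PCA(n_components=3).fit(X)

# Each row of components_ holds one component's feature weights,
# and every row has unit Euclidean norm
norms = np.linalg.norm(pca_demo.components_, axis=1)
print(np.allclose(norms, 1.0))  # True
```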
#### 🔴Create helper function to display sorted feature weights for a principal component
```
def interpret_comp(comp, feat_names):
'''
Input: principal component (obtained from pca.components_)
Output: DataFrame containing Feature names and their corresponding weight (for the top 5 positive and negative values)
'''
# Create feature weights dataframe for given component
comp_w = pd.DataFrame({'Feature': feat_names, 'Weight': comp})
# Sort the weights in descending order
comp_w = comp_w.sort_values(by='Weight', ascending=False)
# Crop dataframe to keep 5 first and last rows
comp_w = pd.concat([comp_w.iloc[0:5, :], comp_w.iloc[-5:, :]])
# Reset indices
comp_w = comp_w.reset_index(drop=True)
return comp_w
```
#### 🔴Use helper function on our top 3 components
```
# Extract component weights
comp1 = pca.components_[0]
comp2 = pca.components_[1]
comp3 = pca.components_[2]
# Define feature names
feat_names = azdias_5.columns
# Display dataframes
print('------------------------------------\n First component:\n')
display(interpret_comp(comp1, feat_names))
print('------------------------------------\n Second component:\n')
display(interpret_comp(comp2, feat_names))
print('------------------------------------\n Third component:\n')
display(interpret_comp(comp3, feat_names))
```
### 🟡Discussion
Reporting our observations from detailed investigation of the first few principal components generated.
- Can we interpret positive and negative values from them in a meaningful way?
------------------------------
### 1st Component
Top 5 **positive** features:
- LP_STATUS_GROB: Social status, rough scale (ascending)
- PLZ8_ANTG3: Number of 6-10 family houses in the PLZ8 region
- WEALTH: Wealth (engineered feature)
- HH_EINKOMMEN_SCORE: Estimated household net income
- PLZ8_ANTG4: Number of 10+ family houses in the PLZ8 region
Top 5 **negative** features:
- MOBI_REGIO: Movement patterns
- PLZ8_ANTG1: Number of 1-2 family houses in the PLZ8 region
- FINANZ_MINIMALIST: Financial typology (descending)
- KBA05_GBZ: Number of buildings in the microcell
- KBA05_ANTG1: Number of 1-2 family houses in the microcell
**Interpretation**:
This component appears to be associated with a person's wealth (positive correlation), and population density in a given region (positive correlation).
This is because the latent feature is positively correlated with wealth and high-density regions and negatively correlated with low-density regions.
------------------------------
### 2nd Component
Top 5 **positive** features:
- ALTERSKATEGORIE_GROB: Estimated age based on given name analysis
- FINANZ_VORSORGER: Financial typology (descending)
- ZABEOTYP_3: Energy consumption typology
- SEMIO_ERL: Personality typology (descending affinity)
- SEMIO_LUST: Personality typology (descending affinity)
Top 5 **negative** features:
- DEKADE: Decade of birth
- FINANZ_SPARER: Financial typology (descending)
- SEMIO_REL: Personality typology (descending affinity)
- FINANZ_UNAUFFAELLIGER: Financial typology (descending)
- SEMIO_TRADV: Personality typology
**Interpretation**:
This component appears to mostly be associated with a person's age (positive correlation).
Specifically, we can see positive correlation with (fair/average) energy consumption and some personality traits (as well as financial activity).
Negative correlation is with the person's decade of birth and again some financial activity and personality traits.
------------------------------
### 3rd Component
Top 5 **positive** features:
- SEMIO_VERT: Personality typology (descending affinity)
- SEMIO_SOZ: Personality typology (descending affinity)
- SEMIO_FAM: Personality typology (descending affinity)
- SEMIO_KULT: Personality typology (descending affinity)
- FINANZ_MINIMALIST: Financial typology (descending)
Top 5 **negative** features:
- ANREDE_KZ: Gender (1 male, 2 female)
- SEMIO_KAEM: Personality typology (descending affinity)
- SEMIO_DOM: Personality typology (descending affinity)
- SEMIO_KRIT: Personality typology (descending affinity)
- SEMIO_ERL: Personality typology (descending affinity)
**Interpretation**:
This component is difficult to decipher due to the lack of insight into its top features.
It is highly correlated with personality types and appears to distinguish between men and women.
Specifically, the component points in the direction of male and hence is negatively correlated with the female gender.
# 🔵Clustering
## 🔵Apply Clustering to General Population
We've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, we will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.
#### 🔴Elbow Method
Here we perform K-Means clustering on our dataset using cluster numbers from 1 to 10.
Then, we plot the absolute value of the score (average within-cluster distances) to see how the rate of change of the score changes.
Using the elbow method, we will attempt to pick the ideal number of clusters by choosing the point where the rate of change of the score gets significantly closer to 0 abruptly.
```
k = np.arange(1, 11, 1)
scores = []
for i in k:
model = KMeans(i, n_jobs=-1)
model = model.fit(azdias_pca)
scores.append(np.abs(model.score(azdias_pca)))
plt.plot(k, scores, linestyle='--', marker='o', color='b')
plt.show()
```
#### 🔴Refit K-Means For 6 Clusters
We chose 6 clusters because 2 seemed too low for this many latent features, and because around 6 we notice a fairly sudden change in the rate of change of the score (i.e. additional clusters contribute less and less).
```
np.random.seed(42)
start = time.time()
model = KMeans(6, n_jobs=-1).fit(azdias_pca)
end = time.time()
print('Time taken to fit K-Means: {:.2f}s'.format(end - start))
# Store predictions for step 3.3
azdias_pred = model.predict(azdias_pca)
```
## 🔵Apply All Steps to the Customer Data
Now that we have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. In the last step of the project, we will interpret how the general population clusters apply to the customer data.
```
# Load in the customer demographics data
customers = pd.read_csv("data/Udacity_CUSTOMERS_Subset.csv", delimiter=";")
```
#### 🔴Clean Customer Data and Check Feature Count
```
# Use cleaning function
customers = clean_data(customers)
# Check dimensions
print('Azdias feature count: ', azdias_5.shape[1])
print('Customers feature count: ', customers.shape[1])
# Check which columns are missing from the customers dataset
missing_columns = []
for column in azdias_5.columns:
if column not in customers.columns:
missing_columns.append(column)
print(missing_columns)
```
These are one-hot encoded features that were not created for the customers dataset because the corresponding category levels never appear in it.
Thus, we will add these columns to the customers dataset and fill them with 0.
#### 🔴Add Missing Columns to `customers` (one-hot encoding)
```
for column in missing_columns:
customers[column] = 0
# Check if dimensions now match
if azdias_5.shape[1] == customers.shape[1]:
print('All is well!')
```
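The loop-plus-fill approach above works; an equivalent one-liner is pandas' `reindex`, which adds any missing columns and fills them in one call. A sketch with toy DataFrames standing in for the real `azdias_5`/`customers` data:

```python
import pandas as pd

# Toy stand-ins for the real datasets (hypothetical values)
azdias_toy = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C_1': [0, 1], 'C_2': [1, 0]})
customers_toy = pd.DataFrame({'A': [5], 'B': [6], 'C_1': [1]})  # 'C_2' never occurred

# reindex adds every column present in azdias_toy but missing here, filled with 0,
# and also guarantees the column order matches
customers_toy = customers_toy.reindex(columns=azdias_toy.columns, fill_value=0)

print(list(customers_toy.columns))     # same columns, same order as azdias_toy
print(customers_toy['C_2'].tolist())   # the added column is all zeros
```

A side benefit of `reindex` is the guaranteed column order, which matters later when the fitted scaler and PCA are applied to both datasets.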
#### 🔴Deal with NaN values (using the same Imputer object as we did for azdias)
```
customers_transform = imputer_fit.transform(customers)
customers = pd.DataFrame(customers_transform, columns=customers.columns)
# Check if process was successful
if customers.isna().sum().sum() == 0:
print('Jolly good! 0 NaN')
else:
print('Something\'s wrong! There are still {} NaN'.format(customers.isna().sum().sum()))
```
#### 🔴Apply feature scaling, PCA, and Clustering to the `customers` dataset
```
# Feature Scaling
customers_transform = scaler.transform(customers)
customers = pd.DataFrame(data=customers_transform, columns=customers.columns)
# PCA
customers_pca = pca.transform(customers)
# K-Means Clustering
customers_pred = model.predict(customers_pca)
```
## 🔵Compare Customer Data to Demographics Data
In this final substep, we will compare the two cluster distributions to see where the strongest customer base for the company is.
Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If the company's customer base were universal, then the cluster assignment proportions should be fairly similar between the two. If only particular segments of the population are interested in the company's products, then we should see a mismatch from one to the other.

If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid), that suggests the people in that cluster are a target audience for the company. On the other hand, if the proportion of the data in a cluster is larger in the general population than in the customer data (e.g. only 2% of customers are closest to a population centroid that captures 6% of the data), that suggests that group of persons is outside of the target demographics.
#### 🔴Compare Proportion of People in Each Cluster
Compare how many people are in each cluster for each dataset in order to identify underrepresented and overrepresented clusters.
```
# Convert predictions to DataFrames
population = pd.DataFrame(dict(cluster=azdias_pred))
customer = pd.DataFrame(dict(cluster=customers_pred))
# Create bar plots for both datasets
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
ax1 = sns.barplot(x='cluster',
y='cluster',
data=population,
estimator=lambda x: len(x) / len(population) * 100,
color='c', ax=axes[0])
ax2 = sns.barplot(x='cluster',
y='cluster',
data=customer,
estimator=lambda x: len(x) / len(customer) * 100,
color='r', ax=axes[1])
# Set titles
axes[0].set_title('General Population')
axes[1].set_title('Customers')
# Set x, y labels
ax1.set(xlabel = 'Cluster', ylabel='Percent')
ax2.set(xlabel = 'Cluster', ylabel='Percent')
# Set y ticks
axes[0].set_yticks(np.arange(0, 101, 25))
axes[1].set_yticks(np.arange(0, 101, 25))
plt.show()
```
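Beyond the bar plots, the over/under-representation can be quantified directly. A minimal sketch with made-up label arrays standing in for `azdias_pred` and `customers_pred`:

```python
import numpy as np

# Hypothetical cluster assignments (stand-ins for azdias_pred / customers_pred)
population_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
customer_pred = np.array([0, 0, 0, 0, 0, 0, 1, 2, 2, 2])

n_clusters = 3
pop_prop = np.bincount(population_pred, minlength=n_clusters) / len(population_pred)
cust_prop = np.bincount(customer_pred, minlength=n_clusters) / len(customer_pred)

# Positive difference -> overrepresented among customers, negative -> underrepresented
diff = cust_prop - pop_prop
for cluster, d in enumerate(diff):
    print('cluster {}: {:+.0%}'.format(cluster, d))
```

Sorting `diff` then gives the overrepresented and underrepresented clusters directly, instead of reading them off the bar heights.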
#### 🔴Investigate Overrepresented-Underrepresented Clusters (customer)
We examine the weight of each principal component in each cluster centroid to determine what features we want to target.
**Decision-Making Process for Overrepresented Cluster**:
- If the weight is high and positive: we want individuals that rank *high* in this dimension.
- If the weight is high and negative: we want individuals that rank *low* in this dimension.
**Decision-Making Process for Underrepresented Cluster**:
- If the weight is high and positive: we want individuals that rank *low* in this dimension.
- If the weight is high and negative: we want individuals that rank *high* in this dimension.
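These rules can be applied programmatically by sorting a centroid's component weights by absolute magnitude. A sketch with a made-up centroid vector (a stand-in for one row of `model.cluster_centers_`), applying the overrepresented-cluster rule:

```python
import numpy as np

# Hypothetical centroid in PCA space (stand-in for model.cluster_centers_[i])
centroid = np.array([2.5, -1.8, 0.1, -0.2, 0.9])

# Rank components by absolute weight, largest first
order = np.argsort(np.abs(centroid))[::-1]
for idx in order[:3]:
    # Overrepresented-cluster rule: positive weight -> target high, negative -> target low
    sign = 'high' if centroid[idx] > 0 else 'low'
    print('component {}: weight {:+.1f} -> target individuals ranking {}'.format(
        idx, centroid[idx], sign))
```

For an underrepresented cluster the `high`/`low` labels would simply be flipped.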
##### Weights
The weights are taken from the cluster centroids (`model.cluster_centers_`): each centroid coordinate is the cluster's mean value along the corresponding principal component.
```
# Graph Overrepresented Clusters
or_clusters = [0, 2]
#principal_components = np.zeros(1, 85)
pc0, pc1, pc4 = 0, 0, 0
for cluster_number in or_clusters:
or_cluster = cluster_number
or_cluster_centroid = pd.DataFrame(model.cluster_centers_[or_cluster], columns=['Component Weight'])
or_cluster_centroid.plot.bar(figsize=(15, 5))
plt.title("Overrepresented Cluster ({})".format(str(or_cluster)), fontdict={"fontsize": 18})
plt.show()
# Extract important principal components
pc0 += model.cluster_centers_[cluster_number][0]
pc1 += model.cluster_centers_[cluster_number][1]
pc4 += model.cluster_centers_[cluster_number][4]
# Graph Underrepresented Clusters
ur_clusters = [1, 3, 5]
for cluster_number in ur_clusters:
or_cluster = cluster_number
or_cluster_centroid = pd.DataFrame(model.cluster_centers_[or_cluster], columns=['Component Weight'])
or_cluster_centroid.plot.bar(figsize=(15, 5))
plt.title("Underrepresented Cluster ({})".format(str(or_cluster)), fontdict={"fontsize": 18})
plt.show()
# Extract important principal components
pc0 -= model.cluster_centers_[cluster_number][0]
pc1 -= model.cluster_centers_[cluster_number][1]
pc4 -= model.cluster_centers_[cluster_number][4]
```
#### 🔴Principal Component Assessment
Visually, the most heavily weighted principal components appear to be the 1st, 2nd, and 5th ones.
So let's calculate how much they matter:
```
print('Principal component favour (the higher the better): ')
print('1st Component: {:.2f}'.format(pc0))
print('2nd Component: {:.2f}'.format(pc1))
print('5th Component: {:.2f}'.format(pc4))
```
From this we can see that we want to target individuals that are high in features associated with the 1st Component, the 2nd Component and the 5th Component, the first being by far the most important.
So let's remind ourselves what the most important principal components measure:
------------------------------
### 1st Component
Top 5 **positive** features:
- LP_STATUS_GROB: Social status, rough scale (ascending)
- PLZ8_ANTG3: Number of 6-10 family houses in the PLZ8 region
- WEALTH: Wealth (engineered feature)
- HH_EINKOMMEN_SCORE: Estimated household net income
- PLZ8_ANTG4: Number of 10+ family houses in the PLZ8 region
Top 5 **negative** features:
- MOBI_REGIO: Movement patterns
- PLZ8_ANTG1: Number of 1-2 family houses in the PLZ8 region
- FINANZ_MINIMALIST: Financial typology (descending)
- KBA05_GBZ: Number of buildings in the microcell
- KBA95_ANTG1: Number of 1-2 family houses in the microcell
**Interpretation**:
This component appears to be associated with a person's wealth (positive correlation), and population density in a given region (positive correlation).
This is because the latent feature is positively correlated with wealth and high-density regions and negatively correlated with low-density regions.
------------------------------
### 2nd Component
Top 5 **positive** features:
- ALTERSKATEGORIE_GROB: Estimated age based on given name analysis
- FINANZ_VORSORGER: Financial typology (descending)
- ZABEOTYP_3: Energy consumption typology
- SEMIO_ERL: Personality typology (descending affinity)
- SEMIO_LUST: Personality typology (descending affinity)
Top 5 **negative** features:
- DEKADE: Decade of birth
- FINANZ_SPARER: Financial typology (descending)
- SEMIO_REL: Personality typology (descending affinity)
- FINANZ_UNAUFFAELLIGER: Financial typology (descending)
- SEMIO_TRADV: Personality typology
**Interpretation**:
This component appears to mostly be associated with a person's age (positive correlation).
Specifically, we can see positive correlation with (fair/average) energy consumption and some personality traits (as well as financial activity).
Negative correlation is with the person's decade of birth and again some financial activity and personality traits.
------------------------------
### 🟡Discussion: Compare Customer Data to Demographics Data
Can we describe segments of the population that are relatively popular with the mail-order company, or relatively unpopular with the company?
Our analysis has conclusively shown that the group that is most popular with the mail-order company involves people that are *wealthy*, and live in *densely populated regions*.
The component capturing these characteristics carries more than 5 times the weight of our 2nd best predictor.
The 2nd best predictor suggests relatively weak links with a person's age (the older a person, the more likely they are to be associated with the company).
Also, there seems to be minor association with mediocre energy consumption.
---
# Conclusion
Overall, I would recommend that the company targets wealthy people that live in densely populated regions.
---
**Note**: The 5th component is negligible compared to the first two, but we can see that it also mainly measures wealth and population density, which agrees with our previous predictions.
```
comp4 = pca.components_[4]
display(interpret_comp(comp4, feat_names))
```
| github_jupyter |
# Decision Trees and Random Forests in Python
This is the code for the lecture video which goes over tree methods in Python. Reference the video lecture for the full explanation of the code!
Jose also wrote a [blog post](https://medium.com/@josemarcialportilla/enchanted-random-forest-b08d418cb411#.hh7n1co54) explaining the general logic of decision trees and random forests which you can check out.
## Import Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Get the Data
```
df = pd.read_csv('kyphosis.csv')
df.head()
```
- This dataset represents patients who had kyphosis, a spinal condition, and then underwent a corrective spinal surgery. The dataframe records whether or not the kyphosis condition was still present after the operation.
- Age : Age of the patient in months (the patients are children).
- Number : Number of vertebrae involved in the operation.
- Start : Number of the first (topmost) vertebra that was operated on.
## EDA
We'll just check out a simple pairplot for this small dataset.
```
sns.pairplot(df,hue='Kyphosis',palette='Set1')
```
## Train Test Split
Let's split up the data into a training set and a test set!
```
from sklearn.model_selection import train_test_split
X = df.drop('Kyphosis',axis=1)
y = df['Kyphosis']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
```
## Decision Trees
We'll start just by training a single decision tree.
```
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
```
## Prediction and Evaluation
Let's evaluate our decision tree.
```
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))
```
## Tree Visualization
Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydotplus library, but here is an example of what it looks like and the code to execute it:
```
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn releases
from sklearn.tree import export_graphviz
import pydotplus
features = list(df.columns[1:])
features
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, feature_names=features,filled=True, rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("tree.pdf")
graph.write_jpeg("tree.jpeg")
```
## Random Forests
Now let's compare the decision tree model to a random forest.
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
print(confusion_matrix(y_test,rfc_pred))
print(classification_report(y_test,rfc_pred))
```
| github_jupyter |
# Simple applications of NLTK
In the following notebook, we exemplify some applications of the NLTK (Natural Language Toolkit) library for NLP. For an overview of important terminology in NLP, we refer to [here](https://www.kdnuggets.com/2017/02/natural-language-processing-key-terms-explained.html). For a nice overview of applications of NLTK in python, consider this [link](https://likegeeks.com/nlp-tutorial-using-python-nltk/). To get a broader perspective on other libraries for NLP in python, go [here](https://medium.com/@srimanikantapalakollu/top-5-natural-language-processing-python-libraries-for-data-scientist-32463d36feae).
### The points we focus on here are:
- Splitting of a text into tokens (sentences and words)
- Stemming (a way to strip word endings so that related word forms share a common stem)
- POS (Part-of-Speech) tagging (to identify the type of word: noun, verb, adjective, etc.)
- Relevance of words / sentiment analysis
For this, we use data originally retrieved from [here](https://www.kaggle.com/snap/amazon-fine-food-reviews). In the repository a processed version of it is provided. For a gentle (and more detailed) introduction to the first part (tokenizing), check also [here](https://github.com/andreaspts/ML_NLP_Analyses/blob/master/simple_exploration_into_regular_expressions.ipynb). For the last part, we use the information gained before to train a (multinomial) naive Bayes model. (The latter was explored [here](https://github.com/andreaspts/ML_Naive_Bayes_for_Classification).)
```
#import relevant packages
import nltk
```
## 1) Tokenizing
```
#tokenization via a trained machine learning model
text = "He went into a supermarket in St. Louis. There, he bought wine for 10$."
sentences = nltk.sent_tokenize(text)
print(sentences)
```
Notice that the tokenizer (a machine learning model) is smart enough to split at a full stop and not at the '.' after 'St.'!
```
#splitting the sentence is done via the regular expression method
for sentence in sentences:
print(nltk.word_tokenize(sentence))
len(nltk.word_tokenize(sentence))
```
## 2) POS-Tagging
POS tagging can be done in various ways (i.e. via different machine learning methods) via the NLTK library. A nice overview with performance check can be found [here](https://natemccoy.github.io/2016/10/27/evaluatingnltktaggerstutorial.html).
The following discussion is based on the perceptron tagger.
```
#to assign a tag (what type) to the tokens
for sentence in sentences:
print(nltk.pos_tag(nltk.word_tokenize(sentence)))
```
The tags for each object are called treebank tags.
```
#develop model which filters for adjectives
for sentence in sentences:
tagged_words = nltk.pos_tag(nltk.word_tokenize(sentence))
final_sentence = []
for tagged_word in tagged_words:
final_sentence.append(tagged_word[0] + "/" + tagged_word[1])
#print(final_sentence)
print(" ".join(final_sentence))
```
These tagged sentences could be further processed using machine learning methods.
## 3) Stemming
An overview of stemming algorithms for various European languages can be found [here](https://snowballstem.org/algorithms/).
```
#importing relevant packages
from nltk.stem import SnowballStemmer
s = SnowballStemmer("english")
s.stem("cars")
s.stem("quickly")
s.stem("followed")
```
## 4) Lemmatizing
Lemmatizers work in a similar fashion as stemmers but are able to retrieve word associations (at the cost of computational time).
```
#import relevant packages
from nltk.stem.wordnet import WordNetLemmatizer
l = WordNetLemmatizer()
l.lemmatize("going", "v") #associate to verb
sentence = "He is going to the supermarket down the street."
words_tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
from nltk.corpus import wordnet
def get_wordnet_pos(treebank_tag):
if treebank_tag.startswith("J"):
return wordnet.ADJ
elif treebank_tag.startswith("V"):
return wordnet.VERB
elif treebank_tag.startswith("N"):
return wordnet.NOUN
elif treebank_tag.startswith("R"):
return wordnet.ADV
else:
return wordnet.NOUN #if we can't identify the type, we return that it is a noun (crude)
for word in words_tagged:
print(l.lemmatize(word[0], get_wordnet_pos(word[1])))
```
## 5) Application: Combination of NLTK and machine learning for sentiment analysis
```
#import relevant packages
import pandas as pd
#define data frame
df = pd.read_csv("./Reviews_10000.csv.bz2")
df.head()
df.shape
df.info()
#define variable/text column with descriptions
df = df#.sample(100)
texts = df["Text"]#.sample(500)
#print(texts)
#splitting text into sentences
texts_transformed = []
for review in texts:
sentences = nltk.sent_tokenize(review)
adjectives = []
#tokenizing sentences into words
for sentence in sentences:
words = nltk.word_tokenize(sentence)
#proceed with pos-tagging
words_tagged = nltk.pos_tag(words)
for word_tagged in words_tagged:
#filter for adjectives
if word_tagged[1] == "JJ":
adjectives.append(word_tagged[0])
texts_transformed.append(" ".join(adjectives))
len(texts_transformed)
#print(texts_transformed)
#importing relevant packages
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
#define variables for training
X = texts_transformed
Y = (df["Score"] >=3)
#train/test splitting
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 0, test_size = 0.2)
#words and their occurrence have to be transformed into integers
cv = CountVectorizer(max_features = 100) # take the max_features most often appearing words
cv.fit(X_train)
X_train = cv.transform(X_train)
X_test = cv.transform(X_test)
X_train.shape
#define Naive Bayes ML model: MultinomialNB
model = MultinomialNB()
model.fit(X_train, Y_train)
print("Training score: " + str(model.score(X_train, Y_train)))
print("Test score: " + str(model.score(X_test, Y_test)))
#the names of the max_features are given and its order matters for the naive Bayes approach
cv.get_feature_names()
#the parameters of the MultinomialNB are given
model.coef_[0]
#we assemble these together to get a list of tuples
adj = list(zip(model.coef_[0], cv.get_feature_names()))
#sort this list according to coefficients in the tuples
adj = sorted(adj)
for i in adj:
print(i)
```
We observe that adjectives with a negative connotation receive model coefficients that are more negative than those of adjectives with a positive connotation.
| github_jupyter |
### OCP Data Preprocessing Tutorial
This notebook provides an overview of converting ASE Atoms objects to PyTorch Geometric Data objects. To better understand the raw data contained within OC20, check out the following tutorial first: https://github.com/Open-Catalyst-Project/ocp/blob/master/docs/source/tutorials/data_playground.ipynb
```
from ocpmodels.preprocessing import AtomsToGraphs
import ase.io
from ase.build import bulk
from ase.build import fcc100, add_adsorbate, molecule
from ase.constraints import FixAtoms
from ase.calculators.emt import EMT
from ase.optimize import BFGS
```
### Generate toy dataset: Relaxation of CO on Cu
```
adslab = fcc100("Cu", size=(2, 2, 3))
ads = molecule("CO")
add_adsorbate(adslab, ads, 3, offset=(1, 1))
cons = FixAtoms(indices=[atom.index for atom in adslab if (atom.tag == 3)])
adslab.set_constraint(cons)
adslab.center(vacuum=13.0, axis=2)
adslab.set_pbc(True)
adslab.set_calculator(EMT())
dyn = BFGS(adslab, trajectory="CuCO_adslab.traj", logfile=None)
dyn.run(fmax=0, steps=1000)
raw_data = ase.io.read("CuCO_adslab.traj", ":")
print(len(raw_data))
```
### Convert Atoms object to Data object
The AtomsToGraphs class takes in several arguments to control how Data objects are created:
- max_neigh (int): Maximum number of neighbors a given atom is allowed to have, discarding the furthest
- radius (float): Cutoff radius to compute nearest neighbors around
- r_energy (bool): Write energy to Data object
- r_forces (bool): Write forces to Data object
- r_distances (bool): Write distances between neighbors to Data object
- r_edges (bool): Write neighbor edge indices to Data object
- r_fixed (bool): Write indices of fixed atoms to Data object
```
a2g = AtomsToGraphs(
max_neigh=50,
radius=6,
r_energy=True,
r_forces=True,
r_distances=False,
r_edges=True,
r_fixed=True,
)
data_objects = a2g.convert_all(raw_data, disable_tqdm=True)
data = data_objects[0]
data
data.atomic_numbers
data.cell
data.edge_index #neighbor idx, source idx
from torch_geometric.utils import degree
# Degree corresponds to the number of neighbors a given node has. Note there is no more than max_neigh neighbors for
# any given node.
degree(data.edge_index[1])
data.fixed
data.force
data.pos
data.y
```
### Adding additional info to your Data objects
In addition to the above information, the OCP repo requires several other pieces of information for your data to work
with the provided trainers:
- sid (int): A unique identifier for a particular system. Does not affect your model performance, used for prediction saving
- fid (int) (S2EF only): If training for the S2EF task, your data must also contain a unique frame identifier for atoms objects coming from the same system.
- tags (tensor): Tag information - 0 for adsorbate, 1 for surface, 2 for subsurface. Optional, can be used for training.
Other information may be added here as well if you choose to incorporate other information in your models/frameworks.
```
data_objects = []
for idx, system in enumerate(raw_data):
data = a2g.convert(system)
data.fid = idx
data.sid = 0 # All data points come from the same system, arbitrarily define this as 0
data_objects.append(data)
data = data_objects[100]
data
data.sid
data.fid
```
Resources:
- https://github.com/Open-Catalyst-Project/ocp/blob/6604e7130ea41fabff93c229af2486433093e3b4/ocpmodels/preprocessing/atoms_to_graphs.py
- https://github.com/Open-Catalyst-Project/ocp/blob/master/scripts/preprocess_ef.py
| github_jupyter |
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer).
# Design an online chat
## Constraints and assumptions
* Assume we'll focus on the following workflows:
* Text conversations only
* Users
* Add a user
* Remove a user
* Update a user
* Add to a user's friends list
* Add friend request
* Approve friend request
* Reject friend request
* Remove from a user's friends list
* Create a group chat
* Invite friends to a group chat
* Post a message to a group chat
* Private 1-1 chat
* Invite a friend to a private chat
* Post a message to a private chat
* No need to worry about scaling initially
## Solution
```
%%writefile online_chat.py
from abc import ABCMeta
from enum import Enum
class UserService(object):
def __init__(self):
self.users_by_id = {} # key: user id, value: User
def add_user(self, user_id, name, pass_hash): ...
def remove_user(self, user_id): ...
def add_friend_request(self, from_user_id, to_user_id): ...
def approve_friend_request(self, from_user_id, to_user_id): ...
def reject_friend_request(self, from_user_id, to_user_id): ...
class User(object):
def __init__(self, user_id, name, pass_hash):
self.user_id = user_id
self.name = name
self.pass_hash = pass_hash
self.friends_by_id = {} # key: friend id, value: User
self.friend_ids_to_private_chats = {} # key: friend id, value: private chats
self.group_chats_by_id = {} # key: chat id, value: GroupChat
self.received_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest
self.sent_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest
def message_user(self, friend_id, message): ...
def message_group(self, group_id, message): ...
def send_friend_request(self, friend_id): ...
def receive_friend_request(self, friend_id): ...
def approve_friend_request(self, friend_id): ...
def reject_friend_request(self, friend_id): ...
class Chat(metaclass=ABCMeta):
def __init__(self, chat_id):
self.chat_id = chat_id
self.users = []
self.messages = []
class PrivateChat(Chat):
def __init__(self, chat_id, first_user, second_user):
super(PrivateChat, self).__init__(chat_id)
self.users.append(first_user)
self.users.append(second_user)
class GroupChat(Chat):
def add_user(self, user): ...
def remove_user(self, user): ...
class Message(object):
def __init__(self, message_id, message, timestamp):
self.message_id = message_id
self.message = message
self.timestamp = timestamp
class AddRequest(object):
def __init__(self, from_user_id, to_user_id, request_status, timestamp):
self.from_user_id = from_user_id
self.to_user_id = to_user_id
self.request_status = request_status
self.timestamp = timestamp
class RequestStatus(Enum):
UNREAD = 0
READ = 1
ACCEPTED = 2
REJECTED = 3
```
| github_jupyter |
# **Astro 160 Fall 2020 Essential Pieces of Code v.1**
## *Hunter Hall — hall@berkeley.edu — October 21, 2020*
### Edited by Ziyi Lu, Oct 2020
##### *These are simply taken from some of my lab assignments during the Fall of 2019. Feel free to make any edits you'd like to them, but when doing so, please make a new cell below the original code so that we can see why we edited the original as we did. - Hunter*
##### Formated for consistency & clarity. Added additional comments & edited some for precision.
Optimised code, esp. unnecessary append()s, where assignment should be used. They take so much more time & space. -Ziyi
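Ziyi's point about append() vs. preallocation can be sketched as follows (toy shapes, not tied to any lab dataset): both approaches produce the same array, but assigning into a preallocated array avoids repeatedly growing a Python list and then converting it at the end.

```python
import numpy as np

N_ROWS, N_COLS = 100, 2048  # hypothetical: e.g. 100 files x 2048 CCD pixels

# Naive: grow a Python list, then convert to an array
rows_list = []
for i in range(N_ROWS):
    rows_list.append(np.full(N_COLS, float(i)))
naive = np.array(rows_list)

# Better: preallocate once w known shape, assign into known slots
prealloc = np.zeros((N_ROWS, N_COLS))
for i in range(N_ROWS):
    prealloc[i] = np.full(N_COLS, float(i))

print(np.array_equal(naive, prealloc))  # same result either way
```

The payoff grows w the data: the preallocated version touches each row's memory exactly once, while the list version also pays for intermediate list storage & a full copy at `np.array(...)`.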
```
#Import all important packages
import matplotlib.pyplot as plt
import numpy as np
# import os
import glob
# Settings below only apply for Jupyter
#This make plots inline
%matplotlib inline
#This increases image resolution
%config InlineBackend.figure_format = 'retina'
```
# 1.0 **Importing .txt or .csv Files**
## *Linked below are files from Hunter's 2019 datasets from Labs 1 and 2 that are used in this example — note that the glob function wasn't necessarily used in Lab 1, but that it still works for .csv and .txt files:*
https://drive.google.com/file/d/1ZeIZHzrKRpZVqp-3zwkk0u72AKEZX2AN/view?usp=sharing
## 1.1 Importing One File
```
# Define a variable that navigates to the csv file
# Make immediately clear variable names to avoid confusion of collaborators / future u
lab1_data_path = 'example_data/lab1_pmt_data/lab1_data_10k_.8b_1r.csv'
# Define a variable that represents the thing we want to assign our csv data to
lab1_data = np.loadtxt(lab1_data_path, delimiter=',', dtype='int32')
# In the np.loadtxt function, we must set a delimiter
# It is ',' per common practice, but this may not always be the case
# We specify the data type as a 32-bit integer to avoid lowered precision
# Parameters like delimiter are best declared as variables at the start of the script,
# instead of written ad hoc inline
# esp. they r repeated
# b/c DRY: Don't Repeat Yrslf
# That make typos less likely & changes easier
# Constants not subject to change r usu. CAPITALISED
DELIMITER = ','
DTYPE = 'int32'
lab1_data = np.loadtxt(lab1_data_path, delimiter=DELIMITER, dtype=DTYPE)
# Check to see if the data from csv file you've imported makes sense
# — for Lab 1, we should get an array with two columns
# in which the first represents the Channel Number of the Photomultiplier Tubes (PMTs)
# and the second represents the time-stamps
# Let's print out indicators about our data
#--------------------------------------#
print(type(lab1_data)) # Check to make sure that the class is an array
print(np.shape(lab1_data)) # Check to make sure that the dimensions of the data makes sense
# (9892, 2) means the array has 2 axes, the 0th having 9892 elements & the 1st having 2
# NumPy counts from 0 like most other computer programmes
print(lab1_data) # Check to make sure that the values of the data make sense
#--------------------------------------#
# Now, let's go from the imported array to useful data
# Obtains an array of useful data
#--------------------------------------#
time_step = lab1_data[:,1]
# This step gets rid of the 0th (first) column since we don't care about the Channel Number
# The ':' copies all 9892 elements along the 0th axis
# The 1 selects the 1st element out of the 2 along the 1st axis
# The 0th element is the Channel Numbers & the 1st is the time-stamps
#--------------------------------------#
# Let's print out indicators about our data again
# In practice, u may define this process as a function() to avoid repetition
# Instructions for function definition in 1.2
#--------------------------------------#
print(type(time_step)) # Check to make sure that the class is an array
print(np.shape(time_step)) # Check to make sure that the dimensions of the data makes sense
print(time_step) # Check to make sure these time-step values make sense from our data
#--------------------------------------#
# Merging all steps above into one cell
# Navigate to raw data
#--------------------------------------#
lab1_data_path = 'example_data/lab1_pmt_data/lab1_data_10k_.8b_1r.csv'
#--------------------------------------#
# Import data
#--------------------------------------#
lab1_data = np.loadtxt(lab1_data_path, delimiter=',', dtype='int32')
#--------------------------------------#
# Obtains an array of useful data (time-stamps)
#--------------------------------------#
time_step = lab1_data[:,1]
#--------------------------------------#
# Displays the final output we've made
#--------------------------------------#
print(time_step)
#--------------------------------------#
```
## 1.2 Importing Multiple Files with *glob*
```
# Instead of importing and manipulating data step-by-step like in 1.1,
# we will define a reusable function to do multiple steps
# This is a naive version with append()s that is slow & space-consuming
def get_signal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
'''
# Note format of 'docstring'
# Write one for every function in case anyone forgets
# CAPITALISE input parameters
# This represents the pixels of the 1D CCD used in Lab 2
# This range is set as there are 2048 pixels in the CCD
# The result is from 0 to 2047
pixel = np.arange(0, 2048)
signal = [] # This represents the signal/intensities of each pixel in the 1D CCD used in Lab 2
# Here, we implement the glob.glob feature to unpack all .txt files in the lab2_neon_data folder
for file in glob.glob(file_path):
pixNum, s = np.genfromtxt(file, dtype=(float), skip_header=17, skip_footer=1, unpack=True)
# If we open one of the neon.txt files in a text editor,
# we can see it has a header that is 17 lines long, and a footer that is one line long,
# so we exclude those with the np.genfromtxt function
signal.append(s)
return(pixel, signal) # This will make the function output an array of pixels and an array of corresponding signals
# Appending is usu. a bad idea, esp. if u know the length of the output array already
# It takes much time & space unnecessarily
# Define a null array w known shape & assign values to it instead
def get_signal_optimal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
This one replaces append()s w/ assignments to save time & space.
'''
# These constants better b defined at start of the whole code
CCD_PIXELS = 2048
LENGTH_HEADER = 17
LENGTH_FOOTER = 1
pixel = np.arange(0, CCD_PIXELS)
files = glob.glob(file_path)
n_files = len(files)
signal = np.zeros((n_files, CCD_PIXELS)) # An empty 2D array of zeros
for i in range(n_files):
pixNum, s = np.genfromtxt(files[i], dtype=(float), skip_header=LENGTH_HEADER, skip_footer=LENGTH_FOOTER, unpack=True)
signal[i] = s
return(pixel, signal)
# Now we will run the function from the previous cell using neon data from Lab 2
# which was provided in the dataset download link at the start of this notebook
lab2_file_path = 'example_data/lab2_neon_data/*.txt'
# Define a variable that navigates to the folder where all of the neon.txt files are located
# * is a wildcard representing a string of characters of any length, including 0
# This will output an array of pixels and an array of corresponding signals for each file in the neon dataset
pixel, signal = get_signal(lab2_file_path)
# What we really want though is an averaged signal for each pixel across the entire neon dataset,
# so in order to do this we will take the mean of the pixel-by-pixel signals across each file in the dataset
signal_avg_neon = np.mean(signal, axis=0) #axis=0 makes the mean move through each file in the dataset
# Without setting the axis, the mean will be of a flattened 1D array containing every number in it
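# Toy illustration of the axis argument (not the neon data):
demo = np.array([[1., 3.],
                 [5., 7.]])
# np.mean(demo, axis=0) averages down each column (i.e. across the files): array([3., 5.])
# np.mean(demo) averages the flattened array: 4.0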
# Now, let's check to make sure our results make sense
#Let's print out indicators about our data
#--------------------------------------#
print(type(signal_avg_neon)) # Check to make sure that the class is an array
print(np.shape(signal_avg_neon)) # Check that the dimensions of the data make sense: in this case we should have 2048 values, one for each of the 2048 pixels in the CCD
print(signal_avg_neon) # Check to make sure that the values of the data make sense
#--------------------------------------#
# Let's plot it to make sure it looks like the correct spectrum too
#--------------------------------------#
plt.figure()
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams.update({'font.size': 12})
plt.plot(pixel,signal_avg_neon)
plt.title('Neon (Raw Data)')
plt.xlabel('Pixel Number')
plt.ylabel('Raw Signal [ADU]')
plt.show()
#--------------------------------------#
# Merging all steps above into one cell
# Function
#--------------------------------------#
def get_signal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
'''
# This represents the pixels of the 1D CCD used in Lab 2
# This range is set as there are 2048 pixels in the CCD
# The result is from 0 to 2047
    pixel = np.arange(0, 2048)
    signal = [] # This represents the signal/intensities of each pixel in the 1D CCD used in Lab 2
    for file in glob.glob(file_path):
        # Each neon.txt file has a 17-line header and a 1-line footer, which np.genfromtxt skips
        pixNum, s = np.genfromtxt(file, dtype=float, skip_header=17, skip_footer=1, unpack=True)
        signal.append(s)
    return (pixel, signal) # Output an array of pixels and a list of corresponding signals
def get_signal_optimal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
This one replaces append()s w/ assignments to save time & space.
'''
CCD_PIXELS = 2048
LENGTH_HEADER = 17
LENGTH_FOOTER = 1
pixel = np.arange(0, CCD_PIXELS)
files = glob.glob(file_path)
n_files = len(files)
signal = np.zeros((n_files, CCD_PIXELS)) # An empty 2D array of zeros
for i in range(n_files):
        pixNum, s = np.genfromtxt(files[i], dtype=float, skip_header=LENGTH_HEADER, skip_footer=LENGTH_FOOTER, unpack=True)
signal[i] = s
return(pixel, signal)
#--------------------------------------#
# Navigate to raw data
#--------------------------------------#
lab2_file_path = 'example_data/lab2_neon_data/*.txt'
#--------------------------------------#
# Import data by executing function
#--------------------------------------#
pixel, signal = get_signal(lab2_file_path)
#--------------------------------------#
# Average files in the dataset
#--------------------------------------#
signal_avg_neon = np.mean(signal, axis=0)
#--------------------------------------#
# Plotting
#--------------------------------------#
plt.figure()
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams.update({'font.size': 12})
plt.plot(pixel,signal_avg_neon)
plt.title('Neon (Raw Data)')
plt.xlabel('Pixel Number')
plt.ylabel('Raw Signal [ADU]')
plt.show()
#--------------------------------------#
```
# 2.0 **Peak Finding and Centroiding**
## *The data used in this section is the same data from section 1.0*
## 2.1 1D Peak Finding Algorithm
```
# We are going to use the same neon spectrum from the previous section,
# so let's re-run the final cell from the previous section to make sure we are starting with the fresh, unchanged raw data
# Everything below is identical to the final cell from the previous section
#-------------------------------------------------------------------------#
# Function
#--------------------------------------#
def get_signal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
'''
# This represents the pixels of the 1D CCD used in Lab 2
# This range is set as there are 2048 pixels in the CCD
# The result is from 0 to 2047
    pixel = np.arange(0, 2048)
    signal = [] # This represents the signal/intensities of each pixel in the 1D CCD used in Lab 2
    for file in glob.glob(file_path):
        # Each neon.txt file has a 17-line header and a 1-line footer, which np.genfromtxt skips
        pixNum, s = np.genfromtxt(file, dtype=float, skip_header=17, skip_footer=1, unpack=True)
        signal.append(s)
    return (pixel, signal) # Output an array of pixels and a list of corresponding signals
def get_signal_optimal(file_path):
'''
Reads signals from a file at FILE_PATH generated by a 1D CCD.
This one replaces append()s w/ assignments to save time & space.
'''
CCD_PIXELS = 2048
LENGTH_HEADER = 17
LENGTH_FOOTER = 1
pixel = np.arange(0, CCD_PIXELS)
files = glob.glob(file_path)
n_files = len(files)
signal = np.zeros((n_files, CCD_PIXELS)) # An empty 2D array of zeros
for i in range(n_files):
        pixNum, s = np.genfromtxt(files[i], dtype=float, skip_header=LENGTH_HEADER, skip_footer=LENGTH_FOOTER, unpack=True)
signal[i] = s
return(pixel, signal)
#--------------------------------------#
# Navigate to raw data
#--------------------------------------#
lab2_file_path = 'example_data/lab2_neon_data/*.txt'
#--------------------------------------#
# Import data by executing function
#--------------------------------------#
pixel, signal = get_signal(lab2_file_path)
#--------------------------------------#
# Average files in the dataset
#--------------------------------------#
signal_avg_neon = np.mean(signal, axis=0)
#--------------------------------------#
# Plotting
#--------------------------------------#
plt.figure()
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams.update({'font.size': 12})
plt.plot(pixel,signal_avg_neon)
plt.title('Neon (Raw Data)')
plt.xlabel('Pixel Number')
plt.ylabel('Raw Signal [ADU]')
plt.show()
#--------------------------------------#
# Now, we want to write some code that will give us the signal values of the peaks in the neon raw spectrum
# We don't want ALL of the peaks in the spectrum,
# because there are many small peaks that we don't necessarily need to use for the wavelength solution or spectroscopy later.
# So we set a minimum signal threshold, in this case 200 ADU,
# to only return the peaks that are above this threshold instead of all peaks in the spectrum.
threshold_neon = 200
peaks_neon = []
peaks_index_neon = [] # x positions of the peaks, or rather, their index
for i in range(1, len(signal_avg_neon) - 1): # Skip the first and last pixels, since each check compares against both neighbours
if (threshold_neon <= signal_avg_neon[i]) \
and (signal_avg_neon[i - 1] <= signal_avg_neon[i]) \
and (signal_avg_neon[i] >= signal_avg_neon[i + 1]): #three conditions to be a peak
peaks_neon.append(signal_avg_neon[i])
peaks_index_neon.append(i)
# Next, we want to confirm that the peak finding function worked,
# so we will print out the peak signal values, their pixel locations, and then plot them over the original spectrum
# to see if everything lines up
# Let's print out indicators about our data
#--------------------------------------#
# This prints out two columns with the first representing the pixel location of the peak
# and the second representing the signal value at that peak
print(np.hstack((np.vstack(peaks_index_neon),np.vstack(peaks_neon))))
# Output will be in the format below:
# Pixel number, signal
#--------------------------------------#
# Let's plot the spectrum as well as a scatter plot of our peaks to make sure it looks correct
#--------------------------------------#
plt.figure()
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams.update({'font.size': 12})
plt.plot(pixel, signal_avg_neon)
plt.scatter(peaks_index_neon, peaks_neon, color='red', marker='o', label='Peaks')
plt.title('Neon (Raw Data)')
plt.xlabel('Pixel Number')
plt.ylabel('Raw Signal [ADU]')
plt.legend()
plt.show()
#--------------------------------------#
```
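As an aside, the same threshold-plus-local-maximum logic can also be reproduced with SciPy's `find_peaks` (a sketch on toy data; SciPy is an extra dependency not otherwise used in this notebook):

```python
import numpy as np
from scipy.signal import find_peaks

# Toy signal standing in for signal_avg_neon
sig = np.array([0., 50., 300., 50., 0., 250., 500., 250., 0.])

# height=200 mirrors the 200 ADU threshold used above
peaks_idx, props = find_peaks(sig, height=200)
print(peaks_idx)              # [2 6]
print(props['peak_heights'])  # [300. 500.]
```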
## 2.2 1D Centroiding
### *Tutorial I followed during 2019 to help make my Lab 2 centroiding function:*
https://prappleizer.github.io/Tutorials/Centroiding/centroiding_tutorial.html
```
#We want to write some code for the neon data
# that will give us the x-coordinate/pixel position of the centroids of our primary peaks.
# We will be using the peak finding code from section 2.1 for the initial pixel position guesses,
# so make sure that has been run before this cell.
# This is a naive implementation w/ inefficient append()s
centroids_neon = [] # Pixel coordinates for all the centroids
FWHM = [] # Full-width-half-max signal values for each centroid
for i in peaks_index_neon: # We are using the pixel indices from our original peak finding function
half_max = signal_avg_neon[i] / 2.
xmin = np.where(signal_avg_neon[i:0:-1] <= half_max)[0][0] # np.nonzero() is preferred
xmax = np.where(signal_avg_neon[i:-1] <= half_max)[0][0]
x_range = pixel[i - xmin:i + xmax]
I_range = signal_avg_neon[i - xmin:i + xmax]
x_range = np.array(x_range) # Preferred way is x_range.copy()
I_range = np.array(I_range)
x_com = np.sum(x_range * I_range / np.sum(I_range)) # x_com stands for X Center of Mass,
# which is the x-coordinate of our centroid, since a centroid is a center of mass
centroids_neon.append(x_com)
FWHM.append(half_max)
# This is an optimised refactoring w/ append()s replaced by assignments
# & other functions replaced by officially recommended versions
n_peaks_neon = len(peaks_index_neon)
centroids_neon = np.zeros((n_peaks_neon,)) # Pixel coordinates for all the centroids
FWHM = np.zeros((n_peaks_neon,)) # Full-width-half-max signal values for each centroid
for i in range(n_peaks_neon): # We are using the pixel indices from our original peak finding function
peak = peaks_index_neon[i]
half_max = signal_avg_neon[peak] / 2.
FWHM[i] = (half_max)
xmin = np.nonzero(signal_avg_neon[peak:0:-1] <= half_max)[0][0]
xmax = np.nonzero(signal_avg_neon[peak:-1] <= half_max)[0][0]
x_range = pixel[peak - xmin:peak + xmax].copy()
I_range = signal_avg_neon[peak - xmin:peak + xmax].copy()
I_total = np.sum(I_range) # Avoid duplicate calculation
x_com = np.sum(x_range * I_range / I_total) # x_com stands for X Center of Mass
centroids_neon[i] = (x_com)
# Similar to with the peak finding code,
# we want to confirm that the centroiding code worked,
# so we will print out the FWHM signal values, their new CENTROID locations,
# and then plot them over the original spectrum to see if everything lines up
# Let's print out indicators about our data
#--------------------------------------#
#This prints out two columns with the first representing the pixel location of the peak
# and the second representing the signal value at that peak
print(np.hstack((np.vstack(centroids_neon),np.vstack(FWHM))))
#Output will be in the format below:
#Centroid x-coordinate/pixel, FWHM signal value
#--------------------------------------#
# Let's plot the spectrum as well as vertical bars on our centroid x-coordinates/pixels
# to make sure it looks correct
#-------------------------------------------------------------------------#
# This will plot the vertical lines on the centroid x-coordinates
#--------------------------------------#
def plot_vert(x):
    '''
    Plots a vertical green dash-dot line at x-value X
    '''
    plt.axvline(x, color='green', ls='-.')
    # ls sets the line style ('-.' is dash-dot)
for i in centroids_neon[1:]: # Call our plotting function on every centroid except the first
plot_vert(i)
#--------------------------------------#
# Let's plot the spectrum as well as the vertical centroid locations to make sure it looks correct
#--------------------------------------#
# Here, I am setting the minimum and maximum pixels that I want to view in the plot,
# since the neon peaks will be easier to see in this pixel range
min_pixel = 1300
max_pixel = 2000
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams.update({'font.size': 12})
plt.axvline(centroids_neon[0], color='green', ls='-.', label='Centroid')
# Label only this first centroid line, so the legend doesn't repeat "Centroid" for every line
plt.plot(pixel[min_pixel:max_pixel], signal_avg_neon[min_pixel:max_pixel]) # Plot the actual spectrum
plt.scatter(centroids_neon, FWHM, color='red', marker='o', label='FWHM')
plt.title('Neon (Raw Data)')
plt.xlabel('Pixel Number')
plt.ylabel('Raw Signal [ADU]')
plt.legend()
plt.show()
#--------------------------------------#
#-------------------------------------------------------------------------#
```
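The centroid computed above is just an intensity-weighted mean (a center of mass); a minimal standalone sketch on a toy peak:

```python
import numpy as np

# Toy peak: pixels 10..14 with a symmetric intensity profile
x = np.arange(10, 15)
I = np.array([1., 4., 10., 4., 1.])

# Intensity-weighted center of mass, as in the centroiding loop above
x_com = np.sum(x * I) / np.sum(I)
print(x_com)  # 12.0, since the profile is symmetric about pixel 12
```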
```
import numpy as onp
import jax.numpy as np
from jax import random, vmap
from jax.config import config
config.update("jax_enable_x64", True)
from scipy.optimize import minimize
from pyDOE import lhs
import matplotlib.pyplot as plt
from matplotlib import rc
from scipy.interpolate import griddata
from jaxbo.models import MultipleIndependentOutputsGP, GP
from jaxbo.utils import normalize, normalize_constraint, compute_w_gmm
from jaxbo.test_functions import *
from jax.scipy.stats import norm
import jaxbo.acquisitions as acquisitions
from jaxbo.input_priors import uniform_prior, gaussian_prior
onp.random.seed(1234)
# Example from
# https://asmedigitalcollection.asme.org/mechanicaldesign/article/141/12/121001/975244?casa_token=45A-r7iV9IUAAAAA:ji-aHZ_T_HQ5Q1xgNxloqrG2LjOpFkXMItdWnuGH9d02MysONc3VTfrtM8GSB5oTdE2jcQ
# Section 4, and constraint in section 4.2
def f(x):
x1, x2 = x[0], x[1]
a = 1.0
b = 5.1 / (4*np.pi**2)
c = 5 / np.pi
r = 6
s = 10
t = 1 / (8*np.pi)
f = a * (x2 - b*x1**2 + c*x1 -r)**2 + s * (1-t) * np.cos(x1) + s
return f
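# Sanity check (standalone, plain numpy): f above is the standard Branin
# function; its global minimum value is s*t = 10/(8*pi) ~= 0.397887,
# attained at e.g. (x1, x2) = (pi, 2.275).
def _branin_check(x1, x2):
    b, c, t = 5.1 / (4 * onp.pi**2), 5 / onp.pi, 1 / (8 * onp.pi)
    return (x2 - b * x1**2 + c * x1 - 6) ** 2 + 10 * (1 - t) * onp.cos(x1) + 10
assert abs(_branin_check(onp.pi, 2.275) - 10 / (8 * onp.pi)) < 1e-6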
def constraint1(x):
x1, x2 = (x[0]-2.5)/7.5, (x[1] - 7.5)/7.5
g1 = (4 - 2.1*x1**2 + 1./3*x1**4)*x1**2 + x1*x2 + (-4+4*x2**2)*x2**2 + 3*np.sin(6*(1-x1)) + 3*np.sin(6*(1-x2))
return g1 - 6.
# Dimension of the problem
dim = 2
# Boundary of the domain
lb = np.array([-5.0, 0.0])
ub = np.array([10.0, 15.0])
bounds = {'lb': lb, 'ub': ub}
# Visualization of the function and constraints in 2D grid
nn = 100
xx = np.linspace(lb[0], ub[0], nn)
yy = np.linspace(lb[1], ub[1], nn)
XX, YY = np.meshgrid(xx, yy)
X_star = np.concatenate([XX.flatten()[:,None],
YY.flatten()[:,None]], axis = 1)
y_f_star = vmap(f)(X_star)
y1_c_star = vmap(constraint1)(X_star)
Y_f_star = griddata(onp.array(X_star), onp.array(y_f_star), (onp.array(XX), onp.array(YY)), method='cubic')
Y1_c_star = griddata(onp.array(X_star), onp.array(y1_c_star), (onp.array(XX), onp.array(YY)), method='cubic')
plt.figure(figsize = (16, 5))
plt.subplot(1, 2, 1)
fig = plt.contourf(XX, YY, Y_f_star)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact objective')
plt.colorbar(fig)
plt.subplot(1, 2, 2)
fig = plt.contourf(XX, YY, Y1_c_star)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'constraint1')
plt.colorbar(fig)
# Visualize the feasible domain and the location of the best value of this problem
judge1 = (y1_c_star >= 0)
total_judge = judge1
valid_index = np.where(total_judge)
#print(valid_index)
valid_x = X_star[valid_index]
valid_y = y_f_star[valid_index]
#print(valid_x.shape, valid_y.shape)
idx_best = np.argmin(valid_y)
x_best = valid_x[idx_best]
y_best = valid_y[idx_best]
plt.figure(figsize = (6,4))
fig = plt.contourf(XX, YY, Y_f_star)
plt.plot(valid_x[:,0], valid_x[:, 1], 'r.', markersize = 2.)
plt.plot(x_best[0], x_best[1], 'y.', markersize = 8.)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact objective')
plt.colorbar(fig)
print("best y", y_best, "best x", x_best)
true_x = x_best
true_y = y_best
# Problem settings
# Number of initial data for objective and constraints
N_f = 20
N_c = 50
noise_f = 0.00
noise_c = 0.01
nIter = 10
# Define prior distribution
p_x = uniform_prior(lb, ub)
# JAX-BO setting
options = {'kernel': 'RBF',
'input_prior': p_x,
'constrained_criterion': 'LCBC',
'criterion': 'LW_CLSF',
'kappa': 2.0,
'nIter': nIter}
gp_model = MultipleIndependentOutputsGP(options)
# JAX-BO setting for constraint
options_constraint = {'kernel': 'RBF',
'criterion': 'LW_CLSF',
'input_prior': p_x,
'kappa': 2.0,
'nIter': nIter}
gp_model_constraint = GP(options_constraint)
# Domain bounds (already defined before where we visualized the data)
bounds = {'lb': lb, 'ub': ub}
# Initial training data for objective
X_f = lb + (ub-lb)*lhs(dim, N_f)
y_f = vmap(f)(X_f)
y_f = y_f + noise_f*y_f_star.std(0)*onp.random.normal(0, 1, size=y_f.shape)
# Initial training data for constraints
X_c = lb + (ub-lb)*lhs(dim, N_c)
y1_c = vmap(constraint1)(X_c)
y1_c = y1_c + noise_c*y1_c_star.std(0)*onp.random.normal(0, 1, size=y1_c.shape)
# Visualize the initial data for objective and constraints
plt.figure(figsize = (10,5))
plt.subplot(1, 2, 1)
fig = plt.contourf(XX, YY, Y_f_star)
plt.plot(X_f[:,0], X_f[:,1], 'ro', label = "Initial objective data")
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact objective')
plt.colorbar(fig)
plt.subplot(1, 2, 2)
fig = plt.contourf(XX, YY, Y1_c_star)
plt.plot(X_c[:,0], X_c[:,1], 'bo', label = "Initial constraint data")
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'constraint1')
plt.colorbar(fig)
plt.legend()
# Main Bayesian optimization loop
rng_key = random.PRNGKey(0)
for it in range(options['nIter']):
print('-------------------------------------------------------------------')
print('------------------------- Iteration %d/%d -------------------------' % (it+1, options['nIter']))
print('-------------------------------------------------------------------')
# Fetch normalized training data (for objective and all the constraints)
norm_batch_f, norm_const_f = normalize(X_f, y_f, bounds)
norm_batch_c1, norm_const_c1 = normalize(X_c, y1_c, bounds)
# Define a list using the normalized data and the normalizing constants
norm_batch_list = [norm_batch_f, norm_batch_c1]
norm_const_list = [norm_const_f, norm_const_c1]
    # Train GP model with 10 random restarts
print('Train GP...')
rng_key = random.split(rng_key, 2)[0]
opt_params_list = gp_model.train(norm_batch_list,
rng_key,
num_restarts = 10)
# Fit GMM
if options['constrained_criterion'] == 'LW_LCBC' or options['constrained_criterion'] == 'LW_CLSF' or options['constrained_criterion'] == 'LW-US':
print('Fit GMM...')
rng_key = random.split(rng_key)[0]
kwargs = {'params': opt_params_list,
'batch': norm_batch_list,
'norm_const': norm_const_list,
'bounds': bounds,
'rng_key': rng_key}
gmm_vars = gp_model.fit_gmm(**kwargs, N_samples = 10000)
else:
gmm_vars = None
    # Find the next acquisition point with 50 random restarts
print('Computing next acquisition point (objective)...')
kwargs = {'params': opt_params_list,
'batch': norm_batch_list,
'norm_const': norm_const_list,
'bounds': bounds,
'kappa': options['kappa'],
'gmm_vars': gmm_vars,
'rng_key': rng_key}
# Acquire objective data
new_X_f,_,_ = gp_model.constrained_compute_next_point_lbfgs(num_restarts=50, **kwargs)
new_y_f = vmap(f)(new_X_f) # This is the output of the solver for generating the objective function
    new_y_f = new_y_f + noise_f*y_f_star.std(0)*onp.random.normal(0, 1, size=new_y_f.shape)
#################### Fit GP for constraint ##################
# Fetch transformed data for only constraint
norm_batch_c1, norm_const_c1 = normalize_constraint(X_c, y1_c, bounds)
# Train GP model
print('Train GP...')
rng_key = random.split(rng_key)[0]
opt_params = gp_model_constraint.train(norm_batch_c1,
rng_key,
num_restarts = 50)
# Fit GMM
if options_constraint['criterion'] == 'LW-LCB' or options_constraint['criterion'] == "LW_CLSF":
print('Fit GMM...')
rng_key = random.split(rng_key)[0]
kwargs = {'params': opt_params,
'batch': norm_batch_c1,
'norm_const': norm_const_c1,
'bounds': bounds,
'kappa': gp_model_constraint.options['kappa'],
'rng_key': rng_key}
gmm_vars = gp_model_constraint.fit_gmm(**kwargs, N_samples = 10000)
else:
gmm_vars = None
# Compute next point via minimizing the acquisition function
print('Computing next acquisition point...')
kwargs = {'params': opt_params,
'batch': norm_batch_c1,
'norm_const': norm_const_c1,
'bounds': bounds,
'kappa': gp_model_constraint.options['kappa'],
'gmm_vars': gmm_vars,
'rng_key': rng_key}
# Acquire constraint data
new_X_c,_,_ = gp_model_constraint.compute_next_point_lbfgs(num_restarts=50, **kwargs)
new_y1_c = vmap(constraint1)(new_X_c) # This is the output of the solver for generating the constraint1 functions
    new_y1_c = new_y1_c + noise_c*y1_c_star.std(0)*onp.random.normal(0, 1, size=new_y1_c.shape)
# # Augment training data
print('Updating data-set...')
X_f = np.concatenate([X_f, new_X_f], axis = 0)
X_c = np.concatenate([X_c, new_X_c], axis = 0)
y_f = np.concatenate([y_f, new_y_f], axis = 0)
y1_c = np.concatenate([y1_c, new_y1_c], axis = 0)
# # Print current best
print('True location: ({}), True value: {}'.format(true_x, true_y))
print('New location: ({}), New value: {}'.format(new_X_f, new_y_f))
# # Making prediction on the posterior objective and all constraints
mean, std = gp_model.predict(X_star, **kwargs)
mean = onp.array(mean * norm_const_list[-1]["sigma_y"] + norm_const_list[-1]["mu_y"])
Y1_c_pred = griddata(onp.array(X_star), mean, (onp.array(XX), onp.array(YY)), method='cubic')
# Visualize the final outputs
kwargs = {'params': opt_params_list,
'batch': norm_batch_list,
'norm_const': norm_const_list,
'bounds': bounds,
'kappa': gp_model.options['kappa'],
'rng_key': rng_key,
'gmm_vars': gmm_vars}
# Making prediction on the posterior objective and all constraints
mean, std = gp_model.predict_all(X_star, **kwargs)
mean = onp.array(mean)
std = onp.array(std)
mean[0:1,:] = mean[0:1,:] * norm_const_list[0]['sigma_y'] + norm_const_list[0]['mu_y']
std[0:1,:] = std[0:1,:] * norm_const_list[0]['sigma_y']
# Compute the weight
if options['constrained_criterion'] == 'LW_LCBC':
w_pred = compute_w_gmm(X_star, **kwargs)
# Compute the upper and lower bounds of the posterior distributions
lower = mean - 2.0*std
upper = mean + 2.0*std
print(mean.shape, std.shape, lower.shape, upper.shape)
# Evaluate the acquisition function
acq_fn1 = lambda x: gp_model.constrained_acquisition(x, **kwargs)
LW_LCBCacq = vmap(acq_fn1)(X_star)
# Compute the ratio and weights derived by the constraints and convert everything into numpy for plotting
ratio1 = mean[1,:] / std[1,:]
weight1 = norm.cdf(mean[1,:]/std[1,:])
LW_LCBCacq = onp.array(LW_LCBCacq)
mean = onp.array(mean)
std = onp.array(std)
ratio1 = onp.array(ratio1)
weight1 = onp.array(weight1)
y_f_pred = onp.array(mean[0,:])
y1_c_pred = onp.array(mean[1,:])
y_f_std = onp.array(std[0,:])
try:
w_pred = onp.array(w_pred)
except:
w_pred = onp.ones_like(y_f_std)
kappa = 2.
# Convert the numpy variable into grid data for visualization
Y_f_pred = griddata(onp.array(X_star), y_f_pred, (onp.array(XX), onp.array(YY)), method='cubic')
Y1_c_pred = griddata(onp.array(X_star), y1_c_pred, (onp.array(XX), onp.array(YY)), method='cubic')
Y_f_std = griddata(onp.array(X_star), y_f_std, (onp.array(XX), onp.array(YY)), method='cubic')
Ratio1 = griddata(onp.array(X_star), ratio1, (onp.array(XX), onp.array(YY)), method='cubic')
Weight1 = griddata(onp.array(X_star), weight1, (onp.array(XX), onp.array(YY)), method='cubic')
LW_LCBCacq = griddata(onp.array(X_star), LW_LCBCacq.flatten(), (onp.array(XX), onp.array(YY)), method='cubic')
W_pred = griddata(onp.array(X_star), w_pred.flatten(), (onp.array(XX), onp.array(YY)), method='cubic')
LCBacq = Y_f_pred - 3. - kappa*Y_f_std
# Visualization
plt.figure(figsize = (16,10))
plt.subplot(2, 4, 1)
fig = plt.contourf(XX, YY, Y1_c_star)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact constraint1')
plt.colorbar(fig)
plt.subplot(2, 4, 2)
fig = plt.contourf(XX, YY, Y1_c_pred)
plt.plot(X_c[:,0], X_c[:,1], 'r.')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Pred constraint1')
plt.colorbar(fig)
plt.subplot(2, 4, 3)
fig = plt.contourf(XX, YY, Ratio1)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Ratio1')
plt.colorbar(fig)
plt.subplot(2, 4, 4)
fig = plt.contourf(XX, YY, np.clip(Weight1, 0, np.inf))
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Weight1')
plt.colorbar(fig)
plt.subplot(2, 4, 5)
fig = plt.contourf(XX, YY, Y_f_star)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact objective')
plt.colorbar(fig)
plt.subplot(2, 4, 6)
fig = plt.contourf(XX, YY, Y_f_pred)
plt.plot(X_f[:,0], X_f[:,1], 'r.')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Pred objective')
plt.colorbar(fig)
plt.subplot(2, 4, 7)
fig = plt.contourf(XX, YY, LCBacq)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'LCB')
plt.colorbar(fig)
plt.subplot(2, 4, 8)
fig = plt.contourf(XX, YY, LW_LCBCacq)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'LW_LCBC')
plt.colorbar(fig)
# Data we collected and the ground truth
plt.figure(figsize = (15, 5))
plt.subplot(1, 3, 1)
fig = plt.contourf(XX, YY, Y_f_star)
plt.plot(valid_x[:,0], valid_x[:, 1], 'r.', markersize = 2.)
plt.plot(true_x[0], true_x[1], 'k.', markersize = 10.)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Exact objective')
plt.colorbar(fig)
plt.subplot(1, 3, 2)
fig = plt.contourf(XX, YY, Y_f_pred)
plt.plot(X_f[:,0], X_f[:,1], 'r.')
plt.plot(true_x[0], true_x[1], 'k.', markersize = 10.)
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Pred objective')
plt.colorbar(fig)
plt.subplot(1, 3, 3)
fig = plt.contourf(XX, YY, W_pred)
plt.plot(X_f[:,0], X_f[:,1], 'r.')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Pred output weight')
plt.colorbar(fig)
```
# Basic Example
## Vector Addition
Add a fixed value to an array with numbers in the range [0..99].
The example uses the vector addition kernel included in the [hello world](https://github.com/Xilinx/Vitis_Accel_Examples/tree/63bae10d581df40cf9402ed71ea825476751305d/hello_world) application of the [Vitis Accel Examples' Repository](https://github.com/Xilinx/Vitis_Accel_Examples/tree/63bae10d581df40cf9402ed71ea825476751305d).

See below for a [breakdown of the code](#Step-by-step-walkthrough-of-the-example).
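The computation itself is simple enough to model in plain NumPy before involving the FPGA (a software reference, not part of the accelerated flow):

```python
import numpy as np

# Same data layout as the accelerated version below
in1 = np.random.randint(0, 100, size=(1024, 1024), dtype=np.uint32)
in2 = np.full((1024, 1024), 200, dtype=np.uint32)

# Element-wise addition: the result the vadd kernel is expected to reproduce
out = in1 + in2
```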
```
import pynq
import numpy as np
# program the device
ol = pynq.Overlay("intro.xclbin")
vadd = ol.vadd_1
# allocate buffers
size = 1024*1024
in1_vadd = pynq.allocate((1024, 1024), np.uint32)
in2_vadd = pynq.allocate((1024, 1024), np.uint32)
out = pynq.allocate((1024, 1024), np.uint32)
# initialize input
in1_vadd[:] = np.random.randint(low=0, high=100, size=(1024, 1024), dtype=np.uint32)
in2_vadd[:] = 200
# send data to the device
in1_vadd.sync_to_device()
in2_vadd.sync_to_device()
# call kernel
vadd.call(in1_vadd, in2_vadd, out, size)
# get data from the device
out.sync_from_device()
# check results
msg = "SUCCESS!" if np.array_equal(out, in1_vadd + in2_vadd) else "FAILURE!"
print(msg)
# clean up
del in1_vadd
del in2_vadd
del out
ol.free()
```
## Step-by-step walkthrough of the example
### Overlay download
First, let's import `pynq`, download the overlay, and assign the vadd kernel IP to a variable called `vadd`.
```
import pynq
ol = pynq.Overlay("intro.xclbin")
vadd = ol.vadd_1
```
### Buffer allocation
Let's first take a look at the signature of the vadd kernel. To do so, we use the `.signature` property. The accelerator takes two input vectors, the output vector, and the vectors' size as arguments.
```
vadd.signature
```
Data types in the signature that have the *pointer* (`*`) qualifier represent *buffers* that must be allocated in memory. Non-pointer data types represent registers and are set directly when the kernel is executed with `.call()`.
Buffer allocation is carried out by [`pynq.allocate`](https://pynq.readthedocs.io/en/v2.5/pynq_libraries/allocate.html), which provides the same interface as a [`numpy.ndarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html).
The `numpy.ndarray` constructor represents the low-level API to instantiate multidimensional arrays in NumPy.
```python
import numpy as np
foo = np.ndarray(shape=(10,), dtype=int)
```
The `pynq.allocate` API provides a buffer object that can be used to interact with both host and device buffers. Host and FPGA buffers here are transparently managed, and the user is only presented with a single interface for both. The user is only asked to explicitly sync host and FPGA buffers before and after a kernel call through the `.sync_to_device()` and `.sync_from_device()` API, as will be shown later. If you are familiar with the PYNQ embedded API `sync_to_device` and `sync_from_device` are the mirrored buffer equivalent to `flush` and `invalidate` functions used for cache-coherent buffers.
In this case we're going to create 3 1024x1024 arrays, two input and one output. Since the kernel uses unsigned integers we specify `u4` as data type when performing allocation, which is shorthand for `numpy.uint32`, as explained in the [`numpy.dtypes`](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#arrays-dtypes) documentation.
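As a quick check of the shorthand (plain NumPy, independent of PYNQ):

```python
import numpy as np

# 'u4' means a 4-byte (32-bit) unsigned integer, i.e. numpy.uint32
assert np.dtype('u4') == np.uint32
assert np.dtype('u4').itemsize == 4  # bytes per element
```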
```
size = 1024*1024
in1_vadd = pynq.allocate((1024, 1024), 'u4')
in2_vadd = pynq.allocate((1024, 1024), 'u4')
out = pynq.allocate((1024, 1024), 'u4')
```
We can use numpy to easily initialize one of the input arrays with random data, with numbers in the range [0, 100). We instead set all the elements of the second input array to a fixed value so we can see at a glance whether the addition was successful.
```
import numpy as np
in1_vadd[:] = np.random.randint(low=0, high=100, size=(1024, 1024), dtype='u4')
in2_vadd[:] = 200
```
### Run the kernel
Before we can start the kernel we need to make sure that the buffers are synced to the FPGA card. We do this by calling `.sync_to_device()` on each of our input arrays.
To start the accelerator, we can use the `.call()` function and pass the kernel arguments. The function will take care of correctly setting the `register_map` of the IP and send the start signal. We pass the arguments to `.call()` following the `.signature` we previously inspected.
Once the kernel has completed, we can `.sync_from_device()` the output buffer to ensure that data from the FPGA is transferred back to the host memory.
We use the `%%timeit` magic to get the average execution time. This magic will automatically decide how many runs to perform to get a reliable average.
```
%%timeit
in1_vadd.sync_to_device()
in2_vadd.sync_to_device()
vadd.call(in1_vadd, in2_vadd, out, size)
out.sync_from_device()
```
Finally, let's compare the FPGA results with software, using [`numpy.array_equal`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_equal.html)
```
np.array_equal(out, in1_vadd + in2_vadd)
```
## Cleaning up
Finally, we have to deallocate the buffers and free the FPGA context using `Overlay.free`.
In case buffers are used as output of a cell, we will have to use the [`%xdel`](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-xdel) magic to also remove any reference to these buffers in Jupyter/IPython. IPython holds on to references of cell outputs so a standard `del` isn’t sufficient to remove all references to the array and hence trigger the memory to be freed.
The same effect can also be achieved by *shutting down* the notebook.
```
%xdel in1_vadd
%xdel in2_vadd
%xdel out
ol.free()
```
Copyright (C) 2020 Xilinx, Inc
# Week - 3 Segmenting and Clustering Neighbourhoods in Toronto
#### Note that all three parts of the assignment are covered in this same notebook
#### Load Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
from geopy.geocoders import Nominatim
import requests
from pandas.io.json import json_normalize
import matplotlib.cm as cm
import matplotlib.colors as colors
from sklearn.cluster import KMeans
from bs4 import BeautifulSoup
import xml
import folium
print('Libraries imported.')
```
# Part 1
## Scrape Wikipedia
```
url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup = BeautifulSoup(url,'lxml')
```
## Locate Table and use tags to find postal code by Borough and Neighbourhood
```
table_post = soup.find('table')
fields = table_post.find_all('td')
postcode = []
borough = []
neighbourhood = []
for i in range(0, len(fields), 3):
postcode.append(fields[i].text.strip())
borough.append(fields[i+1].text.strip())
neighbourhood.append(fields[i+2].text.strip())
df_pc = pd.DataFrame(data=[postcode, borough, neighbourhood]).transpose()
df_pc.columns = ['Postcode', 'Borough', 'Neighbourhood']
df_pc.head()
print("Number of Columns",df_pc.shape[1])
print("Number of Rows",df_pc.shape[0])
```
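The row-lists-then-transpose pattern used above can be seen on toy data (hypothetical sample values, not the scraped table):

```python
import pandas as pd

postcode = ['M1A', 'M3A']
borough = ['Not assigned', 'North York']
neighbourhood = ['Not assigned', 'Parkwoods']

# Each list becomes a row; transpose() flips them into columns
df = pd.DataFrame(data=[postcode, borough, neighbourhood]).transpose()
df.columns = ['Postcode', 'Borough', 'Neighbourhood']
print(df.shape)  # (2, 3): one row per postcode, one column per field
```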
# Part 2
## Remove "Not assigned" and then Aggregate
```
df_pc['Borough'].replace('Not assigned',np.nan,inplace=True)
df_pc.head()
df_pc.dropna(subset=['Borough'], inplace=True)
df_pc.head()
df_pcn = df_pc.groupby(['Postcode', 'Borough'])['Neighbourhood'].apply(', '.join).reset_index()
df_pcn.columns = ['Postcode', 'Borough', 'Neighbourhood']
df_pcn
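# Toy illustration of the groupby + ', '.join aggregation used above
# (hypothetical sample values):
_toy = pd.DataFrame({'Postcode': ['M5A', 'M5A'],
                     'Borough': ['Downtown Toronto', 'Downtown Toronto'],
                     'Neighbourhood': ['Regent Park', 'Harbourfront']})
_toy_joined = _toy.groupby(['Postcode', 'Borough'])['Neighbourhood'].apply(', '.join).reset_index()
# _toy_joined['Neighbourhood'][0] is 'Regent Park, Harbourfront'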
df_pcn['Neighbourhood'].replace('Not assigned', "Queen's Park", inplace=True)
df_pcn
df_geo = pd.read_csv('http://cocl.us/Geospatial_data')
df_geo.columns = ['Postcode', 'Latitude', 'Longitude']
df_geo.head()
df_data = pd.merge(df_pcn,df_geo ,on=['Postcode'],how='inner')
df_data.head()
print('The dataframe has {} boroughs and {} neighborhoods.'.format(
len(df_data['Borough'].unique()),
df_data.shape[0]
)
)
address = 'Toronto, Canada'
geolocator = Nominatim(user_agent="ny_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto City are {}, {}.'.format(latitude, longitude))
```
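The `groupby(...).apply(', '.join)` step above collapses the multiple neighbourhood rows that share a postcode into one comma-separated string. A minimal toy sketch of that aggregation, using made-up rows:

```python
import pandas as pd

# Toy frame with two rows sharing a postcode, mimicking the shape of df_pc
df = pd.DataFrame({'Postcode': ['M5A', 'M5A', 'M9V'],
                   'Neighbourhood': ['Harbourfront', 'Regent Park', 'Albion Gardens']})

# One row per postcode, neighbourhoods joined into a single string
joined = df.groupby('Postcode')['Neighbourhood'].apply(', '.join).reset_index()
print(joined)
```

`groupby` sorts by the key, so `M5A` comes out as `'Harbourfront, Regent Park'` and `M9V` stays a single name.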
# Part 3
### Select Only Toronto neighbourhoods
```
df1=df_data[df_data['Borough'].str.contains('Toronto')]
df_tor=df1.reset_index(drop=True)
df_tor
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_data['Latitude'], df_data['Longitude'], df_data['Borough'], df_data['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
```
### Exploring the 17th Neighbourhood
```
df_tor.loc[17, 'Neighbourhood']
neighborhood_latitude = df_tor.loc[17, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = df_tor.loc[17, 'Longitude'] # neighborhood longitude value
neighborhood_name = df_tor.loc[17, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
CLIENT_ID = 'Enter YOURS' # your Foursquare ID
CLIENT_SECRET = 'Enter Yours' # your Foursquare Secret
VERSION = '20180604'
LIMIT = 30
radius = 500  # search radius in metres; used in the request URL below
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url
results = requests.get(url).json()
results
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
nearby_venues.shape
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
```
## Explore Neighborhoods in Toronto
```
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT)
results = requests.get(url).json()["response"]['groups'][0]['items']
venues_list.append([( name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=df_tor['Neighbourhood'], latitudes=df_tor['Latitude'],longitudes=df_tor['Longitude'])
toronto_venues.head()
toronto_venues.groupby('Neighborhood').count()
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
```
## Analyse each neighbourhood
```
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
toronto_grouped.shape
```
### Top 5 Venues
```
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# helper that returns the top num_top_venues venue categories for a neighbourhood row
def return_most_common_venues(row, num_top_venues):
    row_categories = row.iloc[1:]
    row_categories_sorted = row_categories.sort_values(ascending=False)
    return row_categories_sorted.index.values[0:num_top_venues]
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
    neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
```
## Clustering Neighborhoods
```
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
df_tor.rename(columns={'Neighbourhood':'Neighborhood'},inplace=True)
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = df_tor
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index(['Neighborhood']), on='Neighborhood')
toronto_merged.head() # check the last columns!
```
### Visualize the clusters
```
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
```
#### Cluster 1
```
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
```
#### Cluster 2
```
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
```
#### Cluster 3
```
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
```
#### Cluster 4
```
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
```
#### Cluster 5
```
toronto_merged.loc[toronto_merged['Cluster Labels'] == 4, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
```
```
# Standard libraries
import pandas as pd
import numpy as np
from scipy import stats
# Visualization
import matplotlib.pyplot as plt
import datetime
import os
# os.system("pip install wrds") # TODO: Probably put this in utils.py
import wrds
# os.system("pip install pandas-datareader")
import pandas_datareader.data as web
# os.system("pip install seaborn")
import seaborn as sns
pd.set_option('display.max_columns', None)
from sklearn.linear_model import LinearRegression
# Note we don't actually need pandas_datareader. Could import/use YahooFinanceAPI which gives same info
stock_price_aa_records = pd.read_csv("../data/stock_price_aa_records.csv")
display(stock_price_aa_records)
# Cleaning and Dummy encoding
stock_price_aa_records['Type of Info'] = stock_price_aa_records['Type of Info'].str.replace(" ", "")
stock_price_aa_records['Attack'] = stock_price_aa_records['Attack'].str.replace("; ", "|")
stock_price_aa_records = pd.concat([stock_price_aa_records.drop(columns='Type of Info'), stock_price_aa_records['Type of Info'].str.get_dummies(sep="|").add_suffix(" (Type of Info)")], axis=1)
stock_price_aa_records = pd.concat([stock_price_aa_records.drop(columns='Attack'), stock_price_aa_records['Attack'].str.get_dummies(sep="|").add_suffix(" (Attack)")], axis=1)
stock_price_aa_records = pd.concat([stock_price_aa_records.drop(columns='SIC Code'), stock_price_aa_records['SIC Code'].str.get_dummies(sep="|").add_suffix(" (Industry)")], axis=1)
stock_price_aa_records = pd.concat([stock_price_aa_records.drop(columns='Region'), stock_price_aa_records['Region'].str.get_dummies(sep="|").add_suffix(" (Region)")], axis=1)
stock_price_aa_records.drop(columns = ['ND (Type of Info)', 'ND (Attack)', 'Mining (Industry)', 'Foreign (Region)'], inplace = True)
display(stock_price_aa_records)
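# --- Aside: how str.get_dummies works ---
# The calls above split a pipe-delimited multi-label column into one 0/1
# indicator column per distinct token. A toy illustration with made-up values
# (not the real dataset):
toy = pd.DataFrame({'Attack': ['Phishing|Malware', 'Malware', 'Insider']})
toy_dummies = toy['Attack'].str.get_dummies(sep='|')
print(toy_dummies)  # columns (alphabetical): Insider, Malware, Phishing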
# IMPORTANT NOTE
# I think we should predict percent stock price changes instead of actual dollar stock price change.
# But I could see using the latter if you want to use number of records or stuff like that.
lst = []
months_after = 12 #Toggle this value
col = []
for i in range(0, months_after + 1):
col.append("Stock Price (%s months DoD)" % i)
stock_prices = pd.DataFrame()
n = 1
for x in col[1:]:
stock_prices[n] = stock_price_aa_records.apply(lambda row: (row[x] - row[col[0]])/row[col[0]], axis = 1)
n += 1
test = stock_price_aa_records.drop(columns=['Company name', 'Ticker',
'Date of Breach', 'Date Became Aware of Breach',
'Date of Disclosure', 'Information'])
test.drop(columns=col, inplace = True)
test.drop(columns = ['median stock forecast', 'mean stock forecast'], inplace = True)
table = test
table = pd.concat([test, stock_prices], axis=1, join='inner')
display(table)
# Correlation matrix using absolute value of correlation
plt.subplots(figsize = (20,11))
corr = test.corr().abs()
sns.heatmap(corr, annot = True, cmap = "Blues", fmt = '.1g', linewidths=.5,)
plt.show()
```
# Querying JSON Databases
This notebook demonstrates how to query hierarchical databases with our toy query language BQL.
We start with loading a demo JSON dataset with data on departments and employees of the city of Chicago ([source](https://data.cityofchicago.org/Administration-Finance/Current-Employee-Names-Salaries-and-Position-Title/xzkq-xp2w)). In general, we can treat any JSON document with a regular structure as a hierarchical database.
```
%cd -q ..
from citydb_json import citydb
```
This is the structure of the `citydb` database:
```
{
"departments": [
{
"name": ...,
"employees": [
{
"name": ...,
"surname": ...,
"position": ...,
"salary": ...
},
... ]
},
... ]
}
```
The top-level **City** object has the following fields:
* `departments`: an array of department objects.
**Department** objects have the following fields:
* `name`: the name of the department.
* `employees`: an array of employee objects.
**Employee** objects have the following fields:
* `name`: employee's first name.
* `surname`: employee's last name.
* `position`: employee's title.
* `salary`: annual salary of the employee.
Next, we import the BQL library.
```
from bql import *
```
The BQL query language is embedded in Python, which means any BQL query is a regular Python function which maps JSON input to JSON output. We call such functions _JSON combinators_.
Two trivial examples of JSON combinators are:
* `Const(val)`, which maps all input to the same output value;
* `Here()`, which returns its input unchanged.
```
C = Const(42)
C(None), C(42), C([1, 2, 3])
I = Here()
I(None), I(42), I([1, 2, 3])
```
More impressive is combinator `Field(name)` that extracts a field value from a JSON object.
```
F = Field('x')
F({'x': 24, 'y': 42})
```
By composing two field extractors, we can build a query that produces **the names of all departments**.
```
Departments = Field('departments')
Name = Field('name')
Dept_Names = Departments >> Name
dept_names = Dept_Names(citydb)
dept_names[:5]
```
What does the `>>` operator do exactly? Fundamentally, `(A >> B)` composes `A` and `B` by sending the output of `A` to the input of `B`.
$$
(A \gg B):\; x \;\overset{A}{\longmapsto}\; y \;\overset{B}{\longmapsto}\; z \quad
\text{(where $y = A(x),\, z = B(y)$)}
$$
However, if we directly apply this rule to evaluate the expression ``(Departments >> Name)(citydb)``, we will fail because `citydb['departments']['name']` does not exist.
To make this work, we need to clarify the composition rule. Namely, expression `(A >> B)(x)`, when `A(x)` is an array, applies `B` to _each_ element of the array.
$$
(A \gg B):\; x \;\overset{A}{\longmapsto}\; [y_1,\, y_2,\, \ldots] \;\overset{B}{\longmapsto}\; [z_1,\, z_2,\, \ldots] \quad
\text{(when $A(x) = [y_1,\, y_2\, \ldots],\, B(y_k) = z_k$)}
$$
Moreover, when `B` itself produces array values, all `B` outputs are combined into one array, which becomes the output of `(A >> B)`.
$$
(A \gg B):\; x \;\overset{A}{\longmapsto}\; [y_1,\, y_2,\, \ldots] \;\overset{B}{\longmapsto}\; [z_{11},\, z_{12},\, \ldots\, z_{21},\, z_{22},\, \ldots] \quad
\text{(when also $B(y_k)$ are arrays $[z_{k1},\, z_{k2},\, \ldots]$)}
$$
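The composition rules above can be sketched in plain Python. This is a simplified model of what `>>` does (map over array outputs and flatten nested arrays), not BQL's actual implementation:

```python
def compose(a, b):
    """Simplified model of the BQL `>>` operator."""
    def combined(x):
        y = a(x)
        if not isinstance(y, list):
            return b(y)          # scalar output: plain function composition
        out = []
        for item in y:           # array output: apply b to each element
            z = b(item)
            # if b itself produces arrays, merge them into one flat array
            out.extend(z if isinstance(z, list) else [z])
        return out
    return combined

# Toy data shaped like citydb
db = {'departments': [{'name': 'FIRE'}, {'name': 'POLICE'}]}
Departments = lambda x: x['departments']
Name = lambda x: x['name']
print(compose(Departments, Name)(db))  # ['FIRE', 'POLICE']
```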
The last feature is used when we list **the names of all employees**.
```
Employees = Field('employees')
Empl_Names = Departments >> Employees >> Name
empl_names = Empl_Names(citydb)
empl_names[:5]
```
Dual to `Field(name)`, combinator `Select(...)` *constructs* JSON objects. Parameters of `Select(...)` are combinators that construct object fields. Here is a trivial example.
```
S = Select(x=Const(42), y=Here())
S(24)
```
Let us use `Select(...)` to generate **the name and the number of employees for each department**.
```
Depts_With_Size = Departments >> Select(name=Name, size=Count(Employees))
depts_with_size = Depts_With_Size(citydb)
depts_with_size[:5]
```
Here, combinator `Count(Employees)` returns the length of the `employees` array. In general, `Count(F)` lets `F` process its input expecting the output of `F` to be an array, then returns the length of the array.
$$
\operatorname{Count}(F):\; x \;\overset{F}{\longmapsto}\; [y_1,\, y_2,\, \ldots\, y_N] \;\overset{\operatorname{len}}{\longmapsto}\; N
$$
(You might have expected `Employees >> Count()`, but that would make the `>>` operator non-associative.)
Array combinators such as `Count(...)` are called *aggregate combinators*. The following aggregate combinators are defined in BQL: `Count()`, `Min()`, `Max()`, `First()`.
```
Num_Depts = Count(Departments)
Num_Depts(citydb)
Salary = Field('salary')
Top_Salary = Max(Departments >> Employees >> Salary)
Top_Salary(citydb)
One_Empl = First(Departments >> Employees)
One_Empl(citydb)
Three_Depts = First(Departments >> Name, Const(3))
Three_Depts(citydb)
Half_Depts = First(Departments >> Name, Count(Departments)//2)
Half_Depts(citydb)
```
Combinator `Filter(P)` applies predicate `P` to its input. If the predicate condition is not satisfied, the input is dropped, otherwise it is returned unchanged. Let us use `Filter()` to find **the departments with more than 1000 employees**.
```
Size = Field('size')
Large_Depts = Depts_With_Size >> Filter(Size > 1000)
Large_Depts(citydb)
```
Here, combinator `Depts_With_Size`, which adds `size` field to each department object, is composed with combinator `Filter(Size > 1000)`, which gathers the departments that satisfy condition `Size > 1000`.
In the following example, we use `Filter()` to find **the number of employees whose annual salary exceeds 200k**.
```
Num_Well_Paid_Empls = \
Count(Departments >> Employees >> Filter(Salary >= 200000))
Num_Well_Paid_Empls(citydb)
```
Now suppose we'd like to find **the number of employees with salary in a certain range**, but we don't know the range in advance. In this case, we can construct a *parameterized query*.
```
Min_Salary = Ref('min_salary')
Max_Salary = Ref('max_salary')
Num_Empls_By_Salary_Range = \
Count(Departments >> Employees >> Filter((Salary >= Min_Salary) & (Salary < Max_Salary)))
```
To run the `Num_Empls_By_Salary_Range` query, we need to supply it with parameters `min_salary` and `max_salary`.
```
Num_Empls_By_Salary_Range(citydb, {'min_salary': 200000, 'max_salary': 1000000})
Num_Empls_By_Salary_Range(citydb, {'min_salary': 100000, 'max_salary': 200000})
Num_Empls_By_Salary_Range(citydb, {'min_salary': 0, 'max_salary': 100000})
```
The query knows which parameters it needs.
```
Num_Empls_By_Salary_Range.refs()
```
The last feature we discuss here is the ability to assign parameter values dynamically.
Consider a query: find **the top salary for each department**. It could be easily implemented using `Max()` aggregate.
```
Depts_With_Max_Salary = \
Departments >> Select(name=Name, max_salary=Max(Employees >> Salary))
Depts_With_Max_Salary(citydb)[:5]
```
Now let us ask a slightly different question: find **the employees with the highest salary at their department**. We may try to use the `Filter()` combinator as follows.
```
Highest_Paid_Empls_By_Dept = \
Departments >> Employees >> Filter(Salary == Max_Salary)
```
But the filter condition `(Salary == Max_Salary)` is problematic since we cannot supply `max_salary` as a query parameter. Instead it must be calculated dynamically for each department. The `Given(...)` combinator does exactly that.
```
Highest_Paid_Empls_By_Dept = \
Departments >> \
Given(
Employees >> Filter(Salary == Max_Salary),
max_salary=Max(Employees >> Salary))
Highest_Paid_Empls_By_Dept(citydb)[:5]
```
Notably, `Highest_Paid_Empls_By_Dept` requires no parameters despite the fact that its definition refers to `max_salary`.
```
Highest_Paid_Empls_By_Dept.refs()
```
```
# Initialize Otter
import otter
grader = otter.Notebook("ps5.ipynb")
```
# Econ 140 – Problem Set 5
Before getting started on the assignment, run the cell at the very top that imports `otter` and the cell below which will import the packages we need.
**Important:** As mentioned in problem set 0, if you leave this notebook alone for a while and come back, to save memory datahub will "forget" which code cells you have run, and you may need to restart your kernel and run all of the cells from the top. That includes this code cell that imports packages. If you get `<something> not defined` errors, this is because you didn't run an earlier code cell that you needed to run. It might be this cell or the `otter` cell above.
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
```
---
## Problem 1. Instrumental Variable Estimation
Consumption of gasoline is a critical component of household expenditures, and increasingly it is the focus of intense public policy debate, given concerns over greenhouse-gas emissions. For these reasons alone, economists would like to find accurate estimates of the price elasticity of demand for gasoline by American consumers. The data file `gasoline.csv` contains monthly data on U.S. consumption of gasoline from 1978 to 2002.
```
gas = pd.read_csv("gasoline.csv")
gas.head()
```
<!-- BEGIN QUESTION -->
**Question 1.a.**
Estimate a simple linear demand equation by regressing the quantity of gas `quantgas` consumed on the price of a gallon of gas `pricegas`. What is your estimate of the price coefficient from the OLS estimation? Remember to use robust standard errors, and to always include a constant.
<!--
BEGIN QUESTION
name: q1_a
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.b.**
Use your OLS estimates to express the price elasticity of demand evaluated at the average price of gas. Does it make economic sense?
*Hint: Express the price elasticity when demand is linear.*
<!--
BEGIN QUESTION
name: q1_b
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.c.**
Now introduce per capita personal income `persincome` as a regressor in the linear demand model and re-estimate using OLS. How has your estimate of price coefficient changed?
This question is for your code, the next is for your explanation.
<!--
BEGIN QUESTION
name: q1_c
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.d.**
Explain.
<!--
BEGIN QUESTION
name: q1_d
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.e.**
Do you think that the above regression suffers from omitted variable bias? If so, can you determine the sign of the bias?
<!--
BEGIN QUESTION
name: q1_e
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.f.**
Give reasons why you should suspect that the gasoline price would be correlated with the error term even after you introduced personal income into the regression. Evaluate whether the monthly sales of autos in the U.S. (`carsales`) would serve as a good instrument for the price of gas. Explain.
<!--
BEGIN QUESTION
name: q1_f
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.g.**
Estimate the first stage of a two stage least squares estimation by regressing price of gasoline on the sales of cars. Also include personal income. Perform a test that determines whether car sales is a “strong instrument.”
This question is for your code, the next is for your explanation.
<!--
BEGIN QUESTION
name: q1_g
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.h.**
Explain.
<!--
BEGIN QUESTION
name: q1_h
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.i.**
Can you suggest another instrument that is likely to be a better instrument than car sales?
<!--
BEGIN QUESTION
name: q1_i
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.j.**
Now perform the second stage of the TSLS estimation and report any change in the size of the coefficient on gasoline price as a result of using the instrumental variable.
*Hint: `results.fittedvalues` will give you an array of the $\hat y$ values.*
This question is for your code, the next is for your explanation.
<!--
BEGIN QUESTION
name: q1_j
manual: true
-->
```
gas['pricegas_hat'] = results_1g.fittedvalues
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.k.**
Explain.
<!--
BEGIN QUESTION
name: q1_k
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.l.**
Is the TSLS estimate of the price coefficient statistically significant? Do you have any reason to doubt the reported values of the standard errors from the second stage? Explain.
<!--
BEGIN QUESTION
name: q1_l
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.m.**
Suppose you were instead interested in studying how the supply of gas is influenced by its price. Would you feel comfortable regressing the quantity of gas produced on its price? Why?
<!--
BEGIN QUESTION
name: q1_m
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.n.**
Also included in the dataset is the BLS monthly price index for consumer purchases of “transportation services” over the same sample period `transindex`. Perform TSLS estimation using this price index as an instrument. Evaluate the results of the first and second stages.
This question is for your code, the next is for your explanation.
<!--
BEGIN QUESTION
name: q1_n
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.o.**
Explain.
<!--
BEGIN QUESTION
name: q1_o
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 1.p.**
Assume that you are told that at least one of the instruments above is not exogenous (it could be both). Based on your empirical results using these data, decide what you consider the “best” estimate of the price coefficient. It doesn't have to be one of the above instruments. Explain your reasoning.
<!--
BEGIN QUESTION
name: q1_p
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
---
## Problem 2. Experiments
Senior management at Ctrip, China's largest travel agency, is interested in allowing their Shanghai call center employees to work from home (telecommute). Allowing telecommuting may not only reduce office rental costs but it may also lower the high attrition rates the firm was experiencing by saving the employees from long commutes. However, management is also worried that employees may be less productive if they telecommute. To determine the effects of telecommuting on productivity, Ctrip decided to run an experiment wherein participants were allowed to work from home for several days over a 9 month period. They asked employees in the airfare and hotel departments whether they would be interested in volunteering for this experiment, and not all employees agreed to participate. Each employee who volunteered for the experiment was then assigned a random share of work days over the 9 months that they must work from home. The file `ctrip.csv` contains data from all 994 employees of Ctrip.
| Variable | Description | Units |
|-|-|-|
| **personid** | person ID | |
| **age** | age | years |
| **tenure** | tenure at Ctrip | months |
| **grosswage** | monthly gross salary | 1000s of CNY |
| **children** | whether person has children | |
| **bedroom** | whether person has independent bedroom to work in | |
| **commute** | daily commute in minutes | minutes |
| **men** | whether person is male | |
| **married** | whether person is married | |
| **volunteer** | whether person volunteers for experiment (work from home) | |
| **high_educ** | tertiary education and above | |
| **WFHShare** | share of work days worked from home during experiment | |
| **calls** | average number of calls taken per week during experiment | |
```
ctrip = pd.read_csv("ctrip.csv")
ctrip.head()
```
<!-- BEGIN QUESTION -->
**Question 2.a.**
What percentage of employees volunteered to participate in the experiment?
*Hint: Check out the [`Series.value_counts()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html) function.*
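As a toy illustration of `value_counts` on a made-up Series (not the `ctrip` data):

```python
import pandas as pd

s = pd.Series(['yes', 'no', 'yes', 'yes'])
print(s.value_counts())                 # counts per unique value
print(s.value_counts(normalize=True))   # shares that sum to 1
```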
<!--
BEGIN QUESTION
name: q2_a
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.b.i.**
Use the variables `commute` as a dependent variable in a bivariate linear regression where `volunteer` is the explanatory variable.
<!--
BEGIN QUESTION
name: q2_b_1
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.b.ii.**
Interpret the coefficient on `volunteer` and comment on its statistical significance.
<!--
BEGIN QUESTION
name: q2_b_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.c.i.**
Use the variable `tenure` as a dependent variable in a bivariate linear regression where `volunteer` is the explanatory variable.
<!--
BEGIN QUESTION
name: q2_c_1
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.c.ii.**
Interpret the coefficient on `volunteer` and comment on its statistical significance.
<!--
BEGIN QUESTION
name: q2_c_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.d.i.**
Impressed by your recent econometrics training, Ctrip hires you as a consultant to analyze the results from their experiment. To begin with, you estimate a bivariate linear regression model of the productivity of workers, measured by the log of the average number of calls taken per week (call this variable `ln_calls`), on the variable `WFHShare` (work from home share).
*Hint: Add the argument [`missing='drop'`](https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLS.html) when constructing your OLS model to drop the missing entries.*
<!--
BEGIN QUESTION
name: q2_d_1
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.d.ii.**
Interpret the regression coefficient on `WFHShare` in words. Is the effect statistically significant?
<!--
BEGIN QUESTION
name: q2_d_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.e.**
Has the Ctrip company achieved the ideal of a randomized controlled experiment, so that we can view the estimated effects of working from home on productivity in causal terms?
<!--
BEGIN QUESTION
name: q2_e
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.g.i.**
Create a dummy variable called `longcommute`, equal to one if the employee has a commute of 120 minutes or more (i.e. 2 hours), and add it to the `ctrip` DataFrame.
*Hint: First create a boolean column for `longcommute` then cast it into integers using [`Series.astype(int)`](https://pandas.pydata.org/docs/reference/api/pandas.Series.astype.html).*
<!--
BEGIN QUESTION
name: q2_g_1
manual: true
-->
```
ctrip['longcommute'] = (...).astype(int)
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.g.ii.**
How would you expect that including `longcommute` as a second explanatory variable would alter the coefficient on `WFHShare` – would it increase, decrease, or stay the same? Explain.
<!--
BEGIN QUESTION
name: q2_g_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.h.i.**
Management believes that commute (the travel time from home to office and back) is an important determinant of a worker’s productivity. They have two hypotheses:
1. Employees who face a longer commute time are generally less productive than workers who have shorter commute times.
2. The effects of `WFHShare` on productivity is larger for those who face a longer commute.
Estimate a regression of `ln_calls`, with `WFHShare`, `longcommute`, and their interaction (call it `WFHShareXlongcommute`) as explanatory variables.
*Hint: Once again you will need to add the argument [`missing='drop'`](https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLS.html) when constructing your OLS model to drop the missing entries.*
<!--
BEGIN QUESTION
name: q2_h_1
manual: true
-->
```
...
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.h.ii.**
Do your results support hypothesis (i), hypothesis (ii), both hypotheses, or neither one? Explain.
<!--
BEGIN QUESTION
name: q2_h_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.i.**
If the coefficient on `longcommute` is statistically insignificant, would this lead you to drop `longcommute` from the regression model in part (h)? Explain your answer.
<!--
BEGIN QUESTION
name: q2_i
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 2.j.**
Using the regression in part (h) and without estimating any other regression, write the estimated equation for the simple regression of `ln_calls` on `WFHShare` using only data for those with a commute of fewer than 120 minutes. You must show your solution to obtain full credit.
<!--
BEGIN QUESTION
name: q2_j
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
---
## Problem 3. Natural Experiments
“Sin taxes” have not been the only way in which governments have attempted to reduce the consumption of cigarettes. In 1970, the U.S. passed a law that banned the advertising of cigarettes on radio and television. The ban took effect in 1971. The accompanying data file `cigads.csv` contains data on annual per capita consumption of tobacco measured in terms of “Annual grams of Tobacco Sold per Adult (15+)” for both the U.S. and Canada, 1968-1990 (`CIGSPC`). Also included in that file is a measure of the price of cigarettes given by the “Real Price of 20 grams Cents” for both countries (`PRICE`).
```
cigads = pd.read_csv("cigads.csv")
cigads.head()
```
<!-- BEGIN QUESTION -->
**Question 3.a.**
Treating the ban in cigarette advertising as a quasi-experiment, perform a differences-in-differences analysis of the effect of the ban on the consumption of tobacco. Fill in the table that indicates the conclusion of your analysis.
The work for the top left box has been done for you.
<!--
BEGIN QUESTION
name: q3_a
manual: true
-->
```
# Mean of annual grams of Tobacco Sold per Adult (15+) across the pre-treatment periods in Canada
pre_period = cigads[cigads['YEAR'] <= 1970]
np.mean(pre_period[pre_period['COUNTRY'] == "CAN"]['CIGSPC'])
```
<!-- END QUESTION -->
| | Before | After | After - Before |
|:----------------------------------- | :--------- | :----- | :---- |
| **Canada** | ... | ... | ... |
| **USA** | ... | ... | ... |
| **USA - Canada** | ... | ... | ... |
_Type your answer here, replacing this text._
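The four cell means and the differences-in-differences number can be computed with a groupby pivot; a sketch on toy numbers (not the `cigads` data):

```python
import pandas as pd

# Toy panel (values illustrative): consumption by country and period.
toy = pd.DataFrame({
    "COUNTRY": ["CAN", "CAN", "USA", "USA"] * 2,
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],
    "CIGSPC":  [100, 98, 110, 96, 102, 100, 112, 98],
})

# Cell means: rows = country, columns = before (0) / after (1).
cell = toy.groupby(["COUNTRY", "post"])["CIGSPC"].mean().unstack("post")
diff = cell[1] - cell[0]            # after - before, per country
did = diff["USA"] - diff["CAN"]     # differences-in-differences
print(cell)
print("DiD estimate:", did)
```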
<!-- BEGIN QUESTION -->
**Question 3.b.i.**
Now create a dummy variable `post` indicating whether the ban was in effect in a given time period, plus a dummy variable `treat` distinguishing the treatment group (i.e., the U.S.) from the control group (i.e., Canada). Regress tobacco consumption on these two dummies and on the interaction between the two (you can call this `treatpost`).
*Hint: Once again you will need to first create boolean columns, then cast them into integers using [`Series.astype(int)`](https://pandas.pydata.org/docs/reference/api/pandas.Series.astype.html).*
<!--
BEGIN QUESTION
name: q3_b_1
manual: true
-->
```
cigads['post'] = (...).astype(int)
cigads['treat'] = (...).astype(int)
cigads['treatpost'] = ...
model_3b = ...
results_3b = ...
results_3b.summary()
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 3.b.ii.**
How do your results compare to your differences-in-differences estimate from part (a)?
<!--
BEGIN QUESTION
name: q3_b_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 3.c.i.**
Finally, recognizing that price also affects consumption, introduce the price variable into the regression from (b).
<!--
BEGIN QUESTION
name: q3_c_1
manual: true
-->
```
model_3c = ...
results_3c = ...
results_3c.summary()
```
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 3.c.ii.**
Report your results and compare to those from (b).
<!--
BEGIN QUESTION
name: q3_c_2
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
<!-- BEGIN QUESTION -->
**Question 3.d.**
Why would you expect that the price of a pack of cigarettes might be correlated with the error term? Note that some economists have argued that the advertising ban reduced competition among cigarette makers by eliminating one dimension on which they compete for customers, which in turn led to higher prices.
<!--
BEGIN QUESTION
name: q3_d
manual: true
-->
_Type your answer here, replacing this text._
<!-- END QUESTION -->
---
## Submission
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
```
# Save your notebook first, then run this cell to export your submission.
grader.export()
```
```
from __future__ import division  # must precede other statements in the cell

import pandas as pd
import glob
import synapseclient
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
%matplotlib inline
plt.rcParams["font.family"] = "arial"
tableau10 = [(78, 121, 167), (242, 142, 43), (225, 87, 89),
(118, 183, 178), (89, 161, 79), (237, 201, 72),
(176, 122, 161), (225, 157, 167), (156, 117, 95),
(186, 176, 172)]
for i in range(len(tableau10)):
r, g, b = tableau10[i]
tableau10[i] = (r / 255., g / 255., b / 255.)
```
# GENIE germline filtering
## Variants per sample
In this IPython notebook we will explore the number of variants per sample across GENIE, as well as by institution, before and after applying our common variants filter. We also look at the number of somatic calls at potential germline sites (ExAC_AC_Adj > 1 and ExAC_FILTER=PASS), before and after applying our common variants filter, across GENIE and for each individual center.
<br><br>
GENIE VCFs were annotated with VCF2MAF v1.6.12: <br>
source: https://github.com/mskcc/vcf2maf/blob/v1.6.12/docs/vep_maf_readme.txt
The following was generated on GENIE release 0.6: https://www.synapse.org/#!Synapse:syn7887009
## Results:
Our common variants filter shifts the distribution of variants per sample to the left. 128 samples had no variants which passed our common variants filter; at least one sample from each center fell into this category, but the counts were weighted towards Johns Hopkins and DFCI (51 and 47 samples, respectively). Filtering had the most effect on centers such as Vanderbilt (mean variants per sample reduced from 11.13 to 9.2 after common variants filtering) and Dana-Farber (mean variants per sample reduced from 8.28 to 7.70).
We quantified the reduction in samples with somatic calls at potential germline sites (ExAC_AC_Adj > 1 and ExAC_FILTER=PASS), after implementing the “common_variant” filter. 7171 of the 17289 GENIE samples (41.5%) contained at least one such call, which reduced to 5454 samples (31.5%) after filtering. Strict germline filtering increased the number of GENIE samples with zero somatic calls at potential germline sites, from 10118 to 11835. Overall, the strict germline filtering pipeline decreased somatic calls at potential germline sites, from 13.3% to 8.7% of all variants in GENIE.
<a id='table.of.contents'></a>
## Table of Contents
1. <a href='#all.data'>Variants per Sample, GENIE</a>
2. <a href='#center.data'>Variants per Sample, by center</a>
3. <a href='#all.germline'>Number of Somatic Calls at Potential Germline Sites, GENIE</a>
4. <a href='#center.germline'>Number of Somatic Calls at Potential Germline Sites, by center</a>
```
#mkdir -p current_release; cd current_release/
#synapse get -r syn7887009
df = pd.read_csv('current_release/data_mutations_extended.txt',
sep = '\t', comment = '#', low_memory = False)
# Prep dataframe to handle allelic fraction
df['t_alt_count'] = pd.to_numeric(df['t_alt_count'], errors='coerce').fillna(0)
df['t_depth'] = pd.to_numeric(df['t_depth'], errors='coerce').fillna(0)
df['i_tumor_f'] = df['t_alt_count'].divide(df['t_depth'])
df['i_tumor_f'] = df['i_tumor_f'].fillna(0)
```
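The coercion step in the cell above is what makes the division safe; a minimal demo of the `errors='coerce'` behavior on made-up values:

```python
import pandas as pd

# errors='coerce' turns unparseable entries into NaN instead of raising,
# and fillna(0) then makes the column safe for arithmetic.
raw = pd.Series(['12', '0', '.', None])
clean = pd.to_numeric(raw, errors='coerce').fillna(0)
print(clean.tolist())  # [12.0, 0.0, 0.0, 0.0]
```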
<a id='all.data'></a>
## 1. Number of Variants per Sample - GENIE
<a href='#table.of.contents'>Return to top</a><br>
We are interested in seeing the number of variants per sample across GENIE before and after applying our common variants filter. The change in variants per sample across the entire GENIE cohort is visualized and summary statistics are acquired.
<b>Results</b>: We find that our distribution of variants per sample is, as expected, shifted left after applying our common variants filter. Our filter reduces the mean variants per sample from 6.8 to 6.45 with large variance (14.44 and 14.24). The variance is likely due to some centers having matched normals (MSK and Princess Margaret) while the other centers had unmatched samples. This is investigated in the next section.
```
columns = ['sample', 'center', '# variants', '# common variants', '# uncommon variants']
variants_per_sample = pd.DataFrame([], columns = columns)
variants_per_sample.loc[:,'sample'] = df['Tumor_Sample_Barcode'].unique()
common_variant_classes = ['common_variant,common_variant', 'common_variant']
variants_per_sample.loc[:,['# variants', '# common variants', '# uncommon variants']] = [0]
for sample_ in df['Tumor_Sample_Barcode'].unique():
index_ = variants_per_sample[variants_per_sample['sample'] == sample_].index.tolist()[0]
df_sample_ = df[df['Tumor_Sample_Barcode'] == sample_]
variants_per_sample.loc[index_,'sample'] = df_sample_['Tumor_Sample_Barcode'].tolist()[0]
variants_per_sample.loc[index_,'center'] = df_sample_['Center'].tolist()[0]
idx_common_variants = df_sample_['FILTER'].isin(common_variant_classes)
variants_per_sample.loc[index_, '# variants'] = len(df_sample_)
variants_per_sample.loc[index_, '# common variants'] = len(df_sample_[idx_common_variants])
variants_per_sample.loc[index_, '# uncommon variants'] = len(df_sample_[~idx_common_variants])
fig = plt.figure(figsize = (14,6))
ax = plt.subplot()
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
colors = [tableau10[0], tableau10[4]]
labels = ['Pre-Filter', 'Post-Filter']
data = [variants_per_sample['# variants'], variants_per_sample['# uncommon variants']]
plt.hist(data, bins = variants_per_sample['# variants'].max(),
color = colors, label = labels, alpha = 0.95)
plt.xlim(0, 20)
plt.ylim(0, 3250)
ind = np.arange(20)
width = 0.40
ax.set_xticks(ind + 0.5)
ax.set_xticklabels(range(0,20))
ylow = 0
ymax = 9000
for y in range(ylow, ymax, 500):
plt.plot(range(-1, 20 + 1), [y] * (len(range(-1, 20)) + 1), "--", lw=0.5, color="black", alpha=0.6)
plt.legend(fontsize=12)
plt.ylabel('Number of Samples', fontsize = 16)
plt.xlabel('Number of Variants per Sample', fontsize = 16)
plt.title('Variants per sample, pre and post common variants filter', fontsize = 18)
plt.savefig('figures/6.1.variants_per_sample.pdf', bbox_inches = 'tight')
plt.show()
print 'For GENIE samples...Variants per Sample'
print '...mean number of variants:', variants_per_sample.loc[:,'# variants'].mean()
print '...median number of variants:', variants_per_sample.loc[:,'# variants'].median()
print '...std dev of variants:', variants_per_sample.loc[:,'# variants'].std()
print '...max number of variants:', variants_per_sample.loc[:, '# variants'].max()
print ''
print '...mean number of common variants:', variants_per_sample.loc[:,'# common variants'].mean()
print '...median number of common variants:', variants_per_sample.loc[:,'# common variants'].median()
print '...std dev of common variants:', variants_per_sample.loc[:,'# common variants'].std()
print '...max number of common variants:', variants_per_sample.loc[:, '# common variants'].max()
print ''
print '...mean number of remaining variants:', variants_per_sample.loc[:,'# uncommon variants'].mean()
print '...median number of remaining variants:', variants_per_sample.loc[:,'# uncommon variants'].median()
print '...std dev of remaining variants:', variants_per_sample.loc[:,'# uncommon variants'].std()
print '...max number of remaining variants:', variants_per_sample.loc[:, '# uncommon variants'].max()
print ''
```
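The per-sample loop above recomputes masks for every sample; an equivalent vectorized sketch with `groupby` (same column names assumed, toy rows standing in for the GENIE mutation table) produces the same counts in one pass:

```python
import pandas as pd

common_variant_classes = ['common_variant,common_variant', 'common_variant']

# Toy MAF-style frame standing in for the GENIE mutation table.
toy = pd.DataFrame({
    'Tumor_Sample_Barcode': ['S1', 'S1', 'S1', 'S2', 'S2'],
    'Center': ['DFCI', 'DFCI', 'DFCI', 'MSK', 'MSK'],
    'FILTER': ['PASS', 'common_variant', 'PASS', 'PASS', 'PASS'],
})

per_sample = toy.groupby('Tumor_Sample_Barcode').agg(
    center=('Center', 'first'),
    n_variants=('FILTER', 'size'),
    n_common=('FILTER', lambda f: f.isin(common_variant_classes).sum()),
)
per_sample['n_uncommon'] = per_sample['n_variants'] - per_sample['n_common']
print(per_sample)
```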
<a id='center.data'></a>
## 2. Variants Per Sample, per center
<a href='#table.of.contents'>Return to top</a><br>
Summary statistics are gathered for the change in variants per sample on a per institution basis.
```
df['Center'].value_counts()
centers = ['DFCI', 'GRCC', 'JHH', 'MDA', 'MSK', 'NKI', 'VICC', 'UHN']
vec = range(0, len(centers))
for i in vec:
center_ = centers[i]
print center_
print ''
df_center = df[df['Center'] == center_]
vps_center = variants_per_sample[variants_per_sample['center'] == center_]
print 'Reporting somatic calls at germline sites for center:', str(center_)
total_mutations_pre = len(df_center)
vps_pre = vps_center['# variants'].sum()
pct_vps_pre = vps_pre / total_mutations_pre
total_mutations_post = len(df_center[df_center['FILTER'] != 'common_variant'])
vps_post = vps_center['# uncommon variants'].sum()
pct_vps_post = vps_post / total_mutations_post
print 'For GENIE samples...Variants Per Sample'
print 'Total number of samples:', str(len(vps_center))
print 'Total number of variants (Pre-Filter):', str(vps_center['# variants'].sum())
print 'Total number of variants (Post-Filter):', str(vps_center['# uncommon variants'].sum())
print ''
print '...mean number of variants per sample:', vps_center.loc[:,'# variants'].mean()
print '...median number of variants per sample:', vps_center.loc[:,'# variants'].median()
print '...std dev of variants per sample:', vps_center.loc[:,'# variants'].std()
print '...max number of variants per sample:', vps_center.loc[:, '# variants'].max()
print ''
print '...mean number of common variants per sample:', vps_center.loc[:,'# common variants'].mean()
print '...median number of common variants per sample:', vps_center.loc[:,'# common variants'].median()
print '...std dev of common variants per sample:', vps_center.loc[:,'# common variants'].std()
print '...max number of common variants per sample:', vps_center.loc[:, '# common variants'].max()
print ''
print '...mean number of remaining variants per sample:', vps_center.loc[:,'# uncommon variants'].mean()
print '...median number of remaining variants per sample:', vps_center.loc[:,'# uncommon variants'].median()
print '...std dev of remaining variants per sample:', vps_center.loc[:,'# uncommon variants'].std()
print '...max number of remaining variants per sample:', vps_center.loc[:, '# uncommon variants'].max()
print ''
print ''
print ''
print '----------------'
```
<a id='all.germline'></a>
## 3. Number of Somatic Calls at Potential Germline Sites
<a href='#table.of.contents'>Return to top</a><br>
We are interested in seeing how the number of somatic calls at potential germline sites changes before and after applying our common variants filter. This is not only useful for quantifying re-identification risk but also improves our confidence that the dataset is truly somatic.
To identify which of our somatic sites are present at potential germline sites, we are defining a potential germline site as a genomic position that meets the following conditions based on ExAC -
1. The annotation ExAC_AC_AN_Adj > 1
2. ExAC_FILTER == 'PASS'
The rationale for this choice is that the variant must be present in ExAC and also be defined as likely germline by their own internal filter.
<b>Results</b>: The figure below quantifies the reduction in samples with somatic calls at potential germline sites (ExAC_AC_AN_Adj > 1 and ExAC_FILTER=PASS), after implementing the “common_variant” filter. 7171 of the 17285 GENIE samples (41.5%) contained at least one such call, which reduced to 5454 samples (31.6%) after filtering. Overall, the strict germline filtering pipeline decreased somatic calls at potential germline sites, from 13.3% to 8.75% of all variants in GENIE.
```
df = pd.read_csv('current_release/data_mutations_extended.txt',
sep = '\t', comment = '#', low_memory = False)
# Prep dataframe to handle allelic fraction
df['t_alt_count'] = pd.to_numeric(df['t_alt_count'], errors='coerce').fillna(0)
df['t_depth'] = pd.to_numeric(df['t_depth'], errors='coerce').fillna(0)
df['i_tumor_f'] = df['t_alt_count'].divide(df['t_depth'])
df['i_tumor_f'] = df['i_tumor_f'].fillna(0)
df['ExAC_Alleles'] = pd.DataFrame(df['ExAC_AC_AN_Adj'].fillna('0/1').str.split('/').tolist(), columns = ['A', 'B'])['A'].tolist()
idx_germ = (df['ExAC_Alleles'].astype(int) > 1) & (df['ExAC_FILTER'] == 'PASS')
df_germ = df[idx_germ]
pgv_per_sample = pd.DataFrame([], columns = ['sample', 'center', '# variants', '# common variants', '# uncommon variants'])
pgv_per_sample.loc[:,'sample'] = df['Tumor_Sample_Barcode'].unique()
common_variant_classes = ['common_variant,common_variant', 'common_variant']
pgv_per_sample.loc[:,['# variants', '# common variants', '# uncommon variants']] = [0]
for sample_ in df_germ['Tumor_Sample_Barcode'].unique():
index_ = pgv_per_sample[pgv_per_sample['sample'] == sample_].index.tolist()[0]
df_sample_ = df_germ[df_germ['Tumor_Sample_Barcode'] == sample_]
pgv_per_sample.loc[index_,'sample'] = df_sample_['Tumor_Sample_Barcode'].tolist()[0]
pgv_per_sample.loc[index_,'center'] = df_sample_['Center'].tolist()[0]
pgv_per_sample.loc[index_, '# variants'] = len(df_sample_)
idx_common_variants = df_sample_['FILTER'].isin(common_variant_classes)
pgv_per_sample.loc[index_, '# common variants'] = len(df_sample_[idx_common_variants])
pgv_per_sample.loc[index_, '# uncommon variants'] = len(df_sample_[~idx_common_variants])
pgv_per_sample['# variants'].value_counts().head(10)
pgv_per_sample['# uncommon variants'].value_counts().head(10)
variants_per_sample.head()
hfont = {'fontname': 'arial'}  # font kwargs used by the text/label calls below
fig = plt.figure(figsize = (14,6))
ax = plt.subplot()
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
colors = [tableau10[0], tableau10[4]]
labels = ['Pre-Filter', 'Post-Filter']
data = [pgv_per_sample['# variants'], pgv_per_sample['# uncommon variants']]
histogram1 = plt.hist(data, bins = 51, color = colors, label = labels, alpha = 0.95, width = 0.40)
plt.xlim(0, 11)
plt.ylim(0, 12000)
ind = np.arange(11)
width = 0.40
ax.set_xticks(ind + 0.5)
ax.set_xticklabels([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], **hfont)
ylow = 0
ymax = 13000
for y in range(ylow, ymax, 1000):
plt.plot(range(-1, 12 + 1), [y] * (len(range(-1, 12)) + 1), "--", lw=0.5, color="black", alpha=0.6)
plt.legend()
plt.ylabel('Number of Samples', fontsize = 14, **hfont)
plt.xlabel('Number of Somatic Calls at Potential Germline Sites in a Sample', fontsize = 14, **hfont)
vec = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
def histlabel(vec):
for vec_ in vec:
pre_value = pgv_per_sample['# variants'].value_counts()[vec_ - 1]
post_value = pgv_per_sample['# uncommon variants'].value_counts()[vec_ - 1]
ax.text(vec_ - 0.7, 1.05*pre_value, pre_value,
ha = 'center', va = 'bottom', fontsize = 11, **hfont)
ax.text(vec_ - 0.3, 1.05*post_value, post_value,
ha = 'center', va = 'bottom', fontsize = 11, **hfont)
histlabel(vec)
pre_value = pgv_per_sample['# variants'].value_counts()[0]
post_value = pgv_per_sample['# uncommon variants'].value_counts()[0]
ax.text(1 - 0.72, 1.05*pre_value, pre_value,
ha = 'center', va = 'bottom', fontsize = 11, **hfont)
ax.text(1 - 0.31, 1.03*post_value, post_value,
ha = 'center', va = 'bottom', fontsize = 11, **hfont)
plt.savefig('figures/6.3.Somatic_Germline_per_sample.pdf', bbox_inches = 'tight')
plt.show()
pgv_per_sample['# variants'].value_counts().head(20)
pgv_per_sample['# uncommon variants'].value_counts().head(20)
print 'For GENIE samples...Somatic Calls at Potential Germline Sites'
print '...mean number of variants:', pgv_per_sample.loc[:,'# variants'].mean()
print '...median number of variants:', pgv_per_sample.loc[:,'# variants'].median()
print '...std dev of variants:', pgv_per_sample.loc[:,'# variants'].std()
print '...max number of variants:', pgv_per_sample.loc[:, '# variants'].max()
print ''
print '...mean number of common variants:', pgv_per_sample.loc[:,'# common variants'].mean()
print '...median number of common variants:', pgv_per_sample.loc[:,'# common variants'].median()
print '...std dev of common variants:', pgv_per_sample.loc[:,'# common variants'].std()
print '...max number of common variants:', pgv_per_sample.loc[:, '# common variants'].max()
print ''
print '...mean number of remaining variants:', pgv_per_sample.loc[:,'# uncommon variants'].mean()
print '...median number of remaining variants:', pgv_per_sample.loc[:,'# uncommon variants'].median()
print '...std dev of remaining variants:', pgv_per_sample.loc[:,'# uncommon variants'].std()
print '...max number of remaining variants:', pgv_per_sample.loc[:, '# uncommon variants'].max()
print ''
```
### How many samples contain somatic calls at potential germline sites, before and after filtering?
```
total_samples = len(df['Tumor_Sample_Barcode'].unique())
total_samples_pgv_pre = len(df_germ['Tumor_Sample_Barcode'].unique())
pct_samples_pgv_pre = total_samples_pgv_pre / total_samples
idx_post = pgv_per_sample['# uncommon variants'] != 0
total_samples_pgv_post = len(pgv_per_sample[idx_post].loc[:,'sample'].unique())
pct_samples_pgv_post = total_samples_pgv_post / total_samples
print 'Number of GENIE samples that contained at least one somatic call at a potential germline site...'
print 'Total number of samples in GENIE:', str(total_samples)
print 'Pre-filtering:', str(len(df_germ['Tumor_Sample_Barcode'].unique())), '(', str(pct_samples_pgv_pre), ')'
print 'Post-filtering:', str(total_samples_pgv_post), '(', str(pct_samples_pgv_post), ')'
```
### How many somatic calls at potential germline sites, before and after filtering?
```
total_mutations_pre = len(df)
pgv_pre = pgv_per_sample['# variants'].sum()
pct_pgv_pre = pgv_pre / total_mutations_pre
total_mutations_post = len(df[df['FILTER'] != 'common_variant'])
pgv_post = pgv_per_sample['# uncommon variants'].sum()
pct_pgv_post = pgv_post / total_mutations_post
print 'Number of calls at potential germline sites...'
print ''
print '---Pre-Filter---'
print 'Total # of variants:', str(total_mutations_pre)
print 'Total # calls at potential germline sites:', str(pgv_pre)
print 'Percent calls at potential germline sites:', str(pct_pgv_pre)
print ''
print '---Post-Filter---'
print 'Total # of variants:', str(total_mutations_post)
print 'Total # calls at potential germline sites:', str(pgv_post)
print 'Percent calls at potential germline sites:', str(pct_pgv_post)
```
<a id='center.germline'></a>
## 4. Number of Somatic Calls at Potential Germline Sites, per Center
<a href='#table.of.contents'>Return to top</a><br>
Summary statistics are gathered for the change in somatic calls at potential germline sites on a per institution basis.
```
centers = ['DFCI', 'GRCC', 'JHH', 'MDA', 'MSK', 'NKI', 'VICC', 'UHN']
vec = range(0, len(centers))
for i in vec:
center_ = centers[i]
print center_
df_center = df[df['Center'] == center_]
pgv_center = pgv_per_sample[pgv_per_sample['center'] == center_]
print 'Reporting somatic calls at germline sites for center:', str(center_)
total_mutations_pre = len(df_center)
pgv_pre = pgv_center['# variants'].sum()
pct_pgv_pre = pgv_pre / total_mutations_pre
total_mutations_post = len(df_center[df_center['FILTER'] != 'common_variant'])
pgv_post = pgv_center['# uncommon variants'].sum()
pct_pgv_post = pgv_post / total_mutations_post
print 'Number of calls at potential germline sites...'
print ''
print '---Pre-Filter---'
print 'Total # of variants:', str(total_mutations_pre)
print 'Total # calls at potential germline sites:', str(pgv_pre)
print 'Percent calls at potential germline sites:', str(pct_pgv_pre)
print ''
print '---Post-Filter---'
print 'Total # of variants:', str(total_mutations_post)
print 'Total # calls at potential germline sites:', str(pgv_post)
print 'Percent calls at potential germline sites:', str(pct_pgv_post)
print ''
print ''
print '----------------'
```
# Amazon Comprehend PII detection
Amazon Comprehend has added new capabilities to detect PII (personally identifiable information) entities in text. In this notebook, we will explore different ways to access and use the Comprehend PII detection service.
## Overview
1. [PII detection via Console](#console)
1. [PII detection via CLI](#cli)
1. [Async APIs to Redact PII](#redact)
1. [Async APIs to Redact / Mask PII Entities](#mask)
1. [Cleanup](#cleanup)
## PII detection via Console <a class="anchor" id="console"/>
To get started with Amazon Comprehend, all you need is an [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/).
In the [Amazon Comprehend console](https://console.aws.amazon.com/comprehend/v2/home?region=us-east-1#home), under the Input Text section, select the **Built-in** analysis type, paste the following text into the input box, and click **Analyze**:
```
Good morning, everybody. My name is Van Bokhorst Serdar, and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address SerdarvanBokhorst@dayrep.com. My address is 2657 Koontz Lane, Los Angeles, CA. My phone number is 818-828-6231. My Social security number is 548-95-6370. My Bank account number is 940517528812 and routing number 195991012. My credit card number is 5534816011668430, Expiration Date 6/1/2022, my C V V code is 121, and my pin 123456. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away. Let's check.
```
1. Which entities do you see detected under **Insights** `PII` tab?
2. Examine the JSON response for one of these entities so you can see how `BeginOffset` and `EndOffset` could be used to highlight text.
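The `BeginOffset`/`EndOffset` pairs in the response are character indices into the submitted text. A sketch of offset-based redaction using a hand-made response (shaped like the service's JSON, not an actual API call):

```python
# A response shaped like Comprehend's detect-pii-entities output (hand-made
# sample, not an actual API call), used to show offset-based redaction.
text = "My name is Jane Doe and my phone number is 818-828-6231."
entities = [
    {"Type": "NAME",  "BeginOffset": 11, "EndOffset": 19},
    {"Type": "PHONE", "BeginOffset": 43, "EndOffset": 55},
]

# Splice right-to-left so earlier offsets stay valid as the string changes.
redacted = text
for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
    redacted = (redacted[:ent["BeginOffset"]]
                + f'[{ent["Type"]}]'
                + redacted[ent["EndOffset"]:])
print(redacted)  # My name is [NAME] and my phone number is [PHONE].
```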
## PII detection via CLI <a class="anchor" id="cli"/>
Let's try to use the [AWS CLI](https://aws.amazon.com/cli/) for PII detection.
1. Confirm you have the AWS CLI setup and configured using something like this `aws sagemaker list-notebook-instances`
```
#!aws sagemaker list-notebook-instances
```
2. Now let's try to identify PII entities using the command line.
```
!aws comprehend detect-pii-entities \
--language-code en --text \
"Good morning, everybody. My name is Van Bokhorst Serdar, and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address SerdarvanBokhorst@dayrep.com. My address is 2657 Koontz Lane, Los Angeles, CA. My phone number is 818-828-6231. My Social security number is 548-95-6370. My Bank account number is 940517528812 and routing number 195991012. My credit card number is 5534816011668430, Expiration Date 6/1/2022, my C V V code is 121, and my pin 123456. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away from this document. Let's check."
```
Install `jq`, a lightweight and flexible command-line JSON processor, for parsing the output.
```
# open a new terminal and install jq
# install jq
!apt-get update
!apt-get install jq
!aws comprehend detect-pii-entities \
--language-code en --text \
"Good morning, everybody. My name is Van Bokhorst Serdar, and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address SerdarvanBokhorst@dayrep.com. My address is 2657 Koontz Lane, Los Angeles, CA. My phone number is 818-828-6231. My Social security number is 548-95-6370. My Bank account number is 940517528812 and routing number 195991012. My credit card number is 5534816011668430, Expiration Date 6/1/2022, my C V V code is 121, and my pin 123456. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away from this document. Let's check." \
| jq -r '.Entities[] | .Type '
```
## Async APIs to Redact PII Entities<a class="anchor" id="redact"/>
Let's look at the input content we want to redact. During redaction, each PII entity will be replaced with the name of its entity type.
```
!aws s3 cp s3://ai-ml-services-lab/public/labs/comprehend/pii/input/redact/pii-s3-input.txt -
```
### Async request
1. Using the async APIs with an input file in S3, we can redact the content.
```
!aws comprehend start-pii-entities-detection-job \
--input-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/input/redact/pii-s3-input.txt" \
--output-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/" \
--mode "ONLY_REDACTION" \
--redaction-config PiiEntityTypes="BANK_ACCOUNT_NUMBER","BANK_ROUTING","CREDIT_DEBIT_NUMBER","CREDIT_DEBIT_CVV","CREDIT_DEBIT_EXPIRY","PIN","EMAIL","ADDRESS","NAME","PHONE","SSN",MaskMode="REPLACE_WITH_PII_ENTITY_TYPE" \
--data-access-role-arn "arn:aws:iam::<ACCT>:role/ComprehendBucketAccessRole" \
--job-name "comprehend-blog-redact-001" \
--language-code "en"
```
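The same request can be sketched with boto3. The helper below only builds the call's kwargs (the S3 paths and role ARN are the lab's placeholders, the entity-type list is truncated for brevity, and the actual API call is left commented out):

```python
# Sketch: the CLI job above expressed as boto3 kwargs. The S3 URIs and role
# ARN are placeholders from the lab, not real resources.
def build_redaction_job_params(input_uri, output_uri, role_arn, job_name):
    """kwargs for comprehend.start_pii_entities_detection_job."""
    return {
        "InputDataConfig": {"S3Uri": input_uri},
        "OutputDataConfig": {"S3Uri": output_uri},
        "Mode": "ONLY_REDACTION",
        "RedactionConfig": {
            # Truncated list for brevity; the CLI call enumerates all types.
            "PiiEntityTypes": ["BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "SSN"],
            "MaskMode": "REPLACE_WITH_PII_ENTITY_TYPE",
        },
        "DataAccessRoleArn": role_arn,
        "JobName": job_name,
        "LanguageCode": "en",
    }

params = build_redaction_job_params(
    "s3://ai-ml-services-lab/public/labs/comprehend/pii/input/redact/pii-s3-input.txt",
    "s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/",
    "arn:aws:iam::<ACCT>:role/ComprehendBucketAccessRole",
    "comprehend-blog-redact-001",
)
# import boto3
# comprehend = boto3.client("comprehend")
# response = comprehend.start_pii_entities_detection_job(**params)
```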
2. Monitor redaction job
```
#!aws comprehend describe-pii-entities-detection-job --job-id "1fbe531aafad163b2fd3bf7287525482"
```
### Output
Let's look at the output:
```
#!aws s3 cp s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/<acct>-PII-1fbe531aafad163b2fd3bf7287525482/output/pii-s3-input.txt.out -
```
## Async APIs to Redact / Mask PII Entities<a class="anchor" id="mask"/>
Let's look at the input content we want to redact. During redaction, each PII entity will be masked with the `*` character.
```
!aws s3 cp s3://ai-ml-services-lab/public/labs/comprehend/pii/input/mask/pii-s3-input.txt -
```
### Async request
1. Using the async APIs with an input file in S3, we can redact the content, masking each detected entity.
```
!aws comprehend start-pii-entities-detection-job \
--input-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/input/mask/pii-s3-input.txt" \
--output-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/output/mask/" \
--mode "ONLY_REDACTION" \
--redaction-config PiiEntityTypes="BANK_ACCOUNT_NUMBER","BANK_ROUTING","CREDIT_DEBIT_NUMBER","CREDIT_DEBIT_CVV","CREDIT_DEBIT_EXPIRY","PIN","EMAIL","ADDRESS","NAME","PHONE","SSN",MaskMode="MASK",MaskCharacter="*" \
--data-access-role-arn "arn:aws:iam::<ACCT>:role/ComprehendBucketAccessRole" \
--job-name "comprehend-blog-redact-mask-001" \
--language-code "en"
```
2. Monitor redaction masking job
```
#!aws comprehend describe-pii-entities-detection-job --job-id "46e49284a3ea037d48f80371c053bf74"
```
### Output
Let's look at the output:
```
!aws s3 cp s3://ai-ml-services-lab/public/labs/comprehend/pii/output/mask/<Acct>-PII-46e49284a3ea037d48f80371c053bf74/output/pii-s3-input.txt.out -
```
## Cleanup <a class="anchor" id="cleanup"/>
TBD: clean up all the resources created in this lab.
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Data augmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This tutorial demonstrates manual image manipulations and augmentation using `tf.image`.
Data augmentation is a common technique to improve results and avoid overfitting; see [Overfitting and Underfitting](../keras/overfit_and_underfit.ipynb) for other techniques.
## Setup
```
!pip install git+https://github.com/tensorflow/docs
import urllib
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
AUTOTUNE = tf.data.experimental.AUTOTUNE
import tensorflow_docs as tfdocs
import tensorflow_docs.plots
import tensorflow_datasets as tfds
import PIL.Image
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)
import numpy as np
```
Let's check the data augmentation features on an image and then augment a whole dataset later to train a model.
Download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg), by Von.grzanka, for augmentation.
```
image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
```
Read and decode the image to tensor format.
```
image_string=tf.io.read_file(image_path)
image=tf.image.decode_jpeg(image_string,channels=3)
```
A function to visualize and compare the original and augmented image side by side.
```
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
```
## Augment a single image
### Flipping the image
Flip the image horizontally with `tf.image.flip_left_right` (there is also `tf.image.flip_up_down` for a vertical flip).
```
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
```
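On a toy nested-list "image" (values standing in for pixels), the horizontal and vertical flips amount to simple list reversals:

```python
img = [[1, 2, 3],
       [4, 5, 6]]

flipped_lr = [row[::-1] for row in img]  # mirrors each row, like tf.image.flip_left_right
flipped_ud = img[::-1]                   # reverses the row order, like tf.image.flip_up_down

print(flipped_lr)  # [[3, 2, 1], [6, 5, 4]]
print(flipped_ud)  # [[4, 5, 6], [1, 2, 3]]
```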
### Grayscale the image
Grayscale an image.
```
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()
```
### Saturate the image
Saturate an image by providing a saturation factor.
```
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
```
### Change image brightness
Change the brightness of image by providing a brightness factor.
```
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
```
### Rotate the image
Rotate an image by 90 degrees.
```
rotated = tf.image.rot90(image)
visualize(image, rotated)
```
### Center crop the image
Crop the image from the center, keeping the central fraction you specify.
```
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image,cropped)
```
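As a rough pure-Python analogue of what `central_crop` does (the real op works on tensors and computes its offsets slightly differently):

```python
def central_crop(img, central_fraction):
    """Keep the central `central_fraction` of a 2-D list 'image'."""
    h, w = len(img), len(img[0])
    ch, cw = max(1, int(h * central_fraction)), max(1, int(w * central_fraction))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

img = [[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]]
print(central_crop(img, 0.5))  # [[6, 7], [10, 11]]
```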
See the `tf.image` reference for details about available augmentation options.
## Augment a dataset and train a model with it
Train a model on an augmented dataset.
Note: The problem solved here is somewhat artificial. It trains a densely connected network to be shift invariant by jittering the input images. It's much more efficient to use convolutional layers instead.
```
dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
num_train_examples= info.splits['train'].num_examples
```
Write a function to augment the images. Map it over the dataset. This returns a dataset that augments the data on the fly.
```
def convert(image, label):
image = tf.image.convert_image_dtype(image, tf.float32) # Cast and normalize the image to [0,1]
return image, label
def augment(image,label):
image,label = convert(image, label)
image = tf.image.resize_with_crop_or_pad(image, 34, 34) # Add 6 pixels of padding
image = tf.image.random_crop(image, size=[28, 28, 1]) # Random crop back to 28x28
image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness
return image,label
BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048
```
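The pad-then-random-crop combination above effectively jitters the digit's position. In one dimension, and assuming zero padding, the idea looks like:

```python
import random

def random_shift(row, pad=3):
    padded = [0] * pad + row + [0] * pad   # like resize_with_crop_or_pad
    start = random.randint(0, 2 * pad)     # like random_crop picking a random offset
    return padded[start:start + len(row)]

row = [1, 2, 3, 4, 5]
shifted = random_shift(row)
print(len(shifted))  # 5 -- same size as the input, contents shifted left or right
```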
Create the augmented dataset.
```
augmented_train_batches = (
train_dataset
# Only train on a subset, so you can quickly see the effect.
.take(NUM_EXAMPLES)
.cache()
.shuffle(num_train_examples//4)
# The augmentation is added here.
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
```
And a non-augmented one for comparison.
```
non_augmented_train_batches = (
train_dataset
# Only train on a subset, so you can quickly see the effect.
.take(NUM_EXAMPLES)
.cache()
.shuffle(num_train_examples//4)
# No augmentation.
.map(convert, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
```
Set up the validation dataset. This doesn't change whether or not you're using augmentation.
```
validation_batches = (
test_dataset
.map(convert, num_parallel_calls=AUTOTUNE)
.batch(2*BATCH_SIZE)
)
```
Create and compile the model. The model is a two-layer, fully connected neural network without convolution.
```
def make_model():
model = tf.keras.Sequential([
layers.Flatten(input_shape=(28, 28, 1)),
layers.Dense(4096, activation='relu'),
layers.Dense(4096, activation='relu'),
layers.Dense(10)
])
model.compile(optimizer = 'adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
```
Train the model, **without** augmentation:
```
model_without_aug = make_model()
no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
```
Train it again with augmentation:
```
model_with_aug = make_model()
aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
```
## Conclusion:
In this example, the augmented model converges to an accuracy of ~95% on the validation set, slightly higher (+1%) than the model trained without data augmentation.
```
plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])
```
In terms of loss, the non-augmented model is obviously in the overfitting regime. The augmented model, while a few epochs slower, is still training correctly and clearly not overfitting.
```
plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "loss")
plt.title("Loss")
plt.ylim([0,1])
```
This notebook scrapes the Alma and Primo release notes in the Ex Libris Knowledge Center and creates a CSV file with headings for each item. Recommend reviewing the results for comprehensiveness.
```
import csv
import requests
from bs4 import BeautifulSoup
def get_headings(url, year, month):
"""Scrapes headings of items in release notes URL.
url param is page for target month's release notes
"""
r = requests.get(url)
    soup = BeautifulSoup(r.text, "html.parser")  # specify the parser to avoid a warning
headings = soup.find_all("h3")
items = []
# class used in current month's sections, e.g. 201812BASE
# also found in the release notes URL
datekey = str(year) + str(month) + 'BASE'
# get the major items
for heading in headings:
for parent in heading.find_parents("div"):
if datekey in parent.attrs["class"]:
items.append(heading.get_text())
# get additional enhancements
enhancements = []
small_items = soup.find_all("li", class_=datekey)
# get text from all the elements in each enhancement li tag
for small in small_items:
pieces = []
#get the text parts (siblings)
siblings = small.find("br").next_siblings
for sibling in siblings:
#if the part is a string, strip the whitespace and newlines and add to pieces
if sibling.string:
piece = sibling.string.replace("\n", "").strip()
pieces.append(piece)
enhancements.append(" ".join(pieces))
#add the enhancements to the major items, with a divider header
items.append("RESOLVED ISSUES")
items.extend(enhancements)
return items
def make_csv(filename, rows):
"""Creates a CSV file with columns for actions. """
with open(filename, 'w', newline='') as f:
writer = csv.writer(f, delimiter=',', quoting=csv.QUOTE_ALL)
header = ['Heading','Reviewer(s)','Relevant to GW?','Needs discussion?', 'Notes']
writer.writerow(header)
for row in rows:
writer.writerow([row])
```
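One subtlety in `get_headings`: the `month` argument must already be zero-padded (`"01"`, not `1`), which is why the Primo call below passes a string. A hypothetical helper that formats the key explicitly would avoid the pitfall:

```python
def make_datekey(year, month):
    # zero-pad the month so January yields "201901BASE", not "20191BASE"
    return f"{year}{int(month):02d}BASE"

print(make_datekey(2018, 12))    # 201812BASE
print(make_datekey(2019, "01"))  # 201901BASE
```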
alma_url is the URL for the month's release notes. Update the parameters in the call to get_headings() for the current year and month.
```
alma_url = "https://knowledge.exlibrisgroup.com/Alma/Release_Notes/010_2018/001Alma_2018_Release_Notes?mon=201812BASE"
alma_results = get_headings(alma_url, 2018, 12)
filename = "alma-201812.csv"
make_csv(filename, alma_results)
```
primo_url is the URL for the month's release notes. Update the parameters in the call to get_headings() for the current year and month.
```
primo_url = "https://knowledge.exlibrisgroup.com/Primo/Release_Notes/002Primo_VE/0972019/002Primo_VE_2019_Release_Notes?mon=201901BASE"
primo_results = get_headings(primo_url, 2019, "01")
primo_filename = "primo-201901.csv"
make_csv(primo_filename, primo_results)
```
Import the resulting CSV files into Google Sheets and add a link to the websites for getting further information.
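If you would rather combine the Alma and Primo results into one file before importing, a small sketch using the csv module (the filenames are the ones produced above):

```python
import csv

def combine_csvs(paths, out_path):
    """Concatenate CSV files, keeping the header row from the first file only."""
    with open(out_path, 'w', newline='') as out:
        writer = csv.writer(out, quoting=csv.QUOTE_ALL)
        for i, path in enumerate(paths):
            with open(path, newline='') as f:
                rows = list(csv.reader(f))
            writer.writerows(rows if i == 0 else rows[1:])
```

For example, `combine_csvs(['alma-201812.csv', 'primo-201901.csv'], 'release-notes.csv')`.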
# Lab 03: Estimate proportions using SGD
Task: Debug some code to use stochastic gradient descent to estimate two proportions.
# Scenario
Suppose I have two boxes (A and B), each of which has a bunch of small beads in it. Peeking inside, it looks like there are 3 different colors of beads (red, orange, and yellow), but the two boxes have very different color mixes.
Each box has a lever on it. When I push the lever, a bead comes out of the box. (We can assume it's a random one, and we'll put the bead back in the box it came from so we don't lose beads.)
My friend suggests we play a game: they'll pick a box and press the lever a few times; I have to guess what color beads are going to come out. But I complain that I'm never going to be able to guess 100% correctly, since the boxes have mixtures of beads in them. So here's what they propose: I can spread out my one guess among the different colors, e.g., 0.5 for red and 0.25 for orange or yellow--as long as they add up to 1. Okay...sounds good?
Even though there's no way I could count the number of each color bead in each box (way too many!), I think I can do well at this game after a few rounds. What do you think?
## Setup
```
import torch
from torch import tensor
import matplotlib.pyplot as plt
%matplotlib inline
torch.manual_seed(10);
```
### 1. Define the true (hidden) proportions
Define the true proportions of the 3 colors in each box.
```
boxes = tensor([
[600, 550, 350],
[100, 1300, 100]
]).float()
```
### 2. Define how we're going to get observations.
Here's how the friend is going to pick which box. We'll get to see which box they pick.
```
def pick_box():
return int(torch.rand(1) < .5)
pick_box()
def draw_beads(box, num_beads):
return torch.multinomial(boxes[box], num_beads, replacement=True)
example_beads = draw_beads(box=0, num_beads=5); example_beads
```
# Task
The code below plays this game, but it encounters some major problems: it crashes, and even once you fix the crashes, it still doesn't learn the correct proportions.
Debug the code below so that running `get_guesses` gives a good estimate of the true proportions of each color in the given box.
**Mathy Notes**:
* Guessing the true proportions for each box minimizes the cross-entropy loss between observations and guesses (in expectation). So your loss function should be cross-entropy (the negative log of the probability given to the observed sample).
* To ensure that the guesses are valid probability distributions, I recommend you store the *logits* instead of *probabilities*. The `softmax` function turns logits into probabilities. (The `log_softmax` function turns logits into log-probabilities aka logprobs.)
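As a plain-Python illustration of those notes (the actual lab code should use `torch.softmax`/`log_softmax`, which are numerically stable and differentiable):

```python
import math

def softmax(logits):
    """Turn an arbitrary list of logits into a valid probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, observed):
    # negative log of the probability assigned to the observed color
    return -math.log(probs[observed])

probs = softmax([2.0, 1.0, 0.1])
print(abs(sum(probs) - 1.0) < 1e-9)  # True -- softmax always sums to 1
```

Note that the loss is smaller when the observed color was given a higher probability, which is exactly what the game rewards.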
# Solution
First, let's compute the true proportions: divide the counts (in `boxes`) by the total number of beads in each box. Use `sum`, and pass `keepdim=True`.
```
# your code here
# boxes.sum(___)
# boxes / _____
```
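For reference, the same normalization sketched with plain lists (the tensor version divides `boxes` by `boxes.sum(dim=1, keepdim=True)` so the division broadcasts row-wise):

```python
counts = [[600, 550, 350],   # same counts as `boxes`
          [100, 1300, 100]]
# divide each count by its row total to get per-box proportions
true_props = [[c / sum(row) for c in row] for row in counts]
print(true_props[0][0])  # 0.4
```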
### 3. Define how we're going to make a guess
```
params = tensor([
[.25, .4, .35],
[1/3, 1/3, 1/3]])
def get_guess(box):
guesses_for_box = params[box]
return guesses_for_box # <-- you will need to change this line to ensure that the result is a valid probability distribution
example_guess = get_guess(0); example_guess
```
### 4. Define how score is computed.
We can get the probabilities of the actual beads using an indexing trick. For example:
```
example_guess[example_beads]
def score_guesses(guess, beads): # <-- note that this is a "score" (higher = better)... you may want to change it to be a "loss" (lower = better).
probs_for_observed_beads = guess[beads]
return probs_for_observed_beads.mean() # <-- you will need to change this line so that we're using cross-entropy loss
score_guesses(example_guess, example_beads)
```
### 5. Use stochastic gradient descent to learn the proportions.
```
params = torch.ones((2, 3)) / 3.0
params.requires_grad_()
scores = []
for i in range(50):
box = pick_box() # friend picks a box
my_guess = get_guess(box) # I make a guess
# Check that my guess is valid.
assert (my_guess > 0).all()
assert (my_guess.sum() - 1.0).abs() < .01
beads = draw_beads(box, 10) # friend draws a bunch of beads
score = score_guesses(my_guess, beads) # friend computes my score
scores.append(score.item())
# I figure out how I should have guessed differently
score.backward()
params.data -= params.grad
# Plot the scores
plt.plot(scores)
# Show the proportions. These should be very close to the true proportions.
torch.stack([get_guess(box=0), get_guess(box=1)])
```
# Plagiarism Detection Model
Now that you've created training and test data, you are ready to define and train a model. Your goal in this notebook, will be to train a binary classification model that learns to label an answer file as either plagiarized or not, based on the features you provide the model.
This task will be broken down into a few discrete steps:
* Upload your data to S3.
* Define a binary classification model and a training script.
* Train your model and deploy it.
* Evaluate your deployed classifier and answer some questions about your approach.
To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.
> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.
It will be up to you to explore different classification models and decide on a model that gives you the best performance for this dataset.
---
## Load Data to S3
In the last notebook, you should have created two files: a `train.csv` and a `test.csv` file with the features and class labels for the given corpus of plagiarized/non-plagiarized text data.
>The below cells load in some AWS SageMaker libraries and creates a default bucket. After creating this bucket, you can upload your locally stored data to S3.
Save your train and test `.csv` feature files, locally. To do this you can run the second notebook "2_Plagiarism_Feature_Engineering" in SageMaker or you can manually upload your files to this notebook using the upload icon in Jupyter Lab. Then you can upload local files to S3 by using `sagemaker_session.upload_data` and pointing directly to where the training data is saved.
```
import pandas as pd
import boto3
import sagemaker
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# session and role
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
# create an S3 bucket
bucket = sagemaker_session.default_bucket()
```
## EXERCISE: Upload your training data to S3
Specify the `data_dir` where you've saved your `train.csv` file. Decide on a descriptive `prefix` that defines where your data will be uploaded in the default S3 bucket. Finally, create a pointer to your training data by calling `sagemaker_session.upload_data` and passing in the required parameters. It may help to look at the [Session documentation](https://sagemaker.readthedocs.io/en/stable/session.html#sagemaker.session.Session.upload_data) or previous SageMaker code examples.
You are expected to upload your entire directory. Later, the training script will only access the `train.csv` file.
```
# should be the name of directory you created to save your features data
data_dir = 'plagiarism_data'
# set prefix, a descriptive name for a directory
prefix = 'plagiarism'
# upload all data to S3
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
print(input_data)
```
### Test cell
Test that your data has been successfully uploaded. The below cell prints out the items in your S3 bucket and will throw an error if it is empty. You should see the contents of your `data_dir` and perhaps some checkpoints. If you see any other files listed, then you may have some old model files that you can delete via the S3 console (though, additional files shouldn't affect the performance of model developed in this notebook).
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# confirm that data is in S3 bucket
empty_check = []
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
empty_check.append(obj.key)
print(obj.key)
assert len(empty_check) !=0, 'S3 bucket is empty.'
print('Test passed!')
```
---
# Modeling
Now that you've uploaded your training data, it's time to define and train a model!
The type of model you create is up to you. For a binary classification task, you can choose to go one of three routes:
* Use a built-in classification algorithm, like LinearLearner.
* Define a custom Scikit-learn classifier, a comparison of models can be found [here](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html).
* Define a custom PyTorch neural network classifier.
It will be up to you to test out a variety of models and choose the best one. Your project will be graded on the accuracy of your final model.
---
## EXERCISE: Complete a training script
To implement a custom classifier, you'll need to complete a `train.py` script. You've been given the folders `source_sklearn` and `source_pytorch` which hold starting code for a custom Scikit-learn model and a PyTorch model, respectively. Each directory has a `train.py` training script. To complete this project **you only need to complete one of these scripts**; the script that is responsible for training your final model.
A typical training script:
* Loads training data from a specified directory
* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)
* Instantiates a model of your design, with any specified hyperparams
* Trains that model
* Finally, saves the model so that it can be hosted/deployed, later
### Defining and training a model
Much of the training script code is provided for you. Almost all of your work will be done in the `if __name__ == '__main__':` section. To complete a `train.py` file, you will:
1. Import any extra libraries you need
2. Define any additional model training hyperparameters using `parser.add_argument`
3. Define a model in the `if __name__ == '__main__':` section
4. Train the model in that same section
Below, you can use `!pygmentize` to display an existing `train.py` file. Read through the code; all of your tasks are marked with `TODO` comments.
**Note: If you choose to create a custom PyTorch model, you will be responsible for defining the model in the `model.py` file,** and a `predict.py` file is provided. If you choose to use Scikit-learn, you only need a `train.py` file; you may import a classifier from the `sklearn` library.
```
# directory can be changed to: source_sklearn or source_pytorch
!pygmentize source_sklearn/train.py
```
### Provided code
If you read the code above, you can see that the starter code includes a few things:
* Model loading (`model_fn`) and saving code
* Getting SageMaker's default hyperparameters
* Loading the training data by name, `train.csv` and extracting the features and labels, `train_x`, and `train_y`
If you'd like to read more about model saving with [joblib for sklearn](https://scikit-learn.org/stable/modules/model_persistence.html) or with [torch.save](https://pytorch.org/tutorials/beginner/saving_loading_models.html), click on the provided links.
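For orientation, here is a hypothetical sketch of how a completed `source_sklearn/train.py` might fit these pieces together, assuming a `RandomForestClassifier` (an illustrative choice; any sklearn classifier works) and the hyperparameter names used in the estimator cell below. The real starter script already supplies most of this scaffolding:

```python
import argparse
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # SageMaker passes these directories in through environment variables
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', '.'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAIN', '.'))
    # model hyperparameters, forwarded from the estimator's `hyperparameters` dict
    parser.add_argument('--n_estimators', type=int, default=50)
    parser.add_argument('--max_depth', type=int, default=5)
    return parser.parse_args(argv)


def train(args):
    # train.csv: label in the first column, features in the rest (no header row)
    data = pd.read_csv(os.path.join(args.data_dir, 'train.csv'), header=None)
    train_y, train_x = data.iloc[:, 0], data.iloc[:, 1:]

    model = RandomForestClassifier(n_estimators=args.n_estimators,
                                   max_depth=args.max_depth)
    model.fit(train_x, train_y)

    # save the fitted model where SageMaker expects to find it
    joblib.dump(model, os.path.join(args.model_dir, 'model.joblib'))
    return model

# In the real script this runs under: if __name__ == '__main__': train(parse_args())
```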
---
# Create an Estimator
When a custom model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained; the `train.py` function you specified above. To run a custom training script in SageMaker, construct an estimator, and fill in the appropriate constructor arguments:
* **entry_point**: The path to the Python script SageMaker runs for training and prediction.
* **source_dir**: The path to the training script directory `source_sklearn` OR `source_pytorch`.
* **role**: Role ARN, which was specified, above.
* **train_instance_count**: The number of training instances (should be left at 1).
* **train_instance_type**: The type of SageMaker instance for training. Note: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.
* **sagemaker_session**: The session used to train on Sagemaker.
* **hyperparameters** (optional): A dictionary `{'name':value, ..}` passed to the train function as hyperparameters.
Note: For a PyTorch model, there is another optional argument **framework_version**, which you can set to the latest version of PyTorch, `1.0`.
## EXERCISE: Define a Scikit-learn or PyTorch estimator
To import your desired estimator, use one of the following lines:
```
from sagemaker.sklearn.estimator import SKLearn
```
```
from sagemaker.pytorch import PyTorch
```
```
from sagemaker.sklearn.estimator import SKLearn
# your import and estimator code, here
estimator = SKLearn(
entry_point="train.py",
source_dir="source_sklearn",
train_instance_type="ml.c4.xlarge",
role=role,
sagemaker_session=sagemaker_session,
hyperparameters={'n_estimators': 50 , 'max_depth':5})
```
## EXERCISE: Train the estimator
Train your estimator on the training data stored in S3. This should create a training job that you can monitor in your SageMaker console.
```
%%time
# Train your estimator on S3 training data
estimator.fit({'train': input_data})
```
## EXERCISE: Deploy the trained model
After training, deploy your model to create a `predictor`. If you're using a PyTorch model, you'll need to create a trained `PyTorchModel` that accepts the trained `<model>.model_data` as an input parameter and points to the provided `source_pytorch/predict.py` file as an entry point.
To deploy a trained model, you'll use `<model>.deploy`, which takes in two arguments:
* **initial_instance_count**: The number of deployed instances (1).
* **instance_type**: The type of SageMaker instance for deployment.
Note: If you run into an instance error, it may be because you chose the wrong training or deployment instance_type. It may help to refer to your previous exercise code to see which types of instances we used.
```
%%time
# uncomment, if needed
# from sagemaker.pytorch import PyTorchModel
# deploy your model to create a predictor
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
---
# Evaluating Your Model
Once your model is deployed, you can see how it performs when applied to our test data.
The provided cell below, reads in the test data, assuming it is stored locally in `data_dir` and named `test.csv`. The labels and features are extracted from the `.csv` file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import os
# read in test data, assuming it is stored locally
test_data = pd.read_csv(os.path.join(data_dir, "test.csv"), header=None, names=None)
# labels are in the first column
test_y = test_data.iloc[:,0]
test_x = test_data.iloc[:,1:]
```
## EXERCISE: Determine the accuracy of your model
Use your deployed `predictor` to generate predicted, class labels for the test data. Compare those to the *true* labels, `test_y`, and calculate the accuracy as a value between 0 and 1.0 that indicates the fraction of test data that your model classified correctly. You may use [sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) for this calculation.
**To pass this project, your model should get at least 90% test accuracy.**
```
# First: generate predicted, class labels
test_y_preds = predictor.predict(test_x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test that your model generates the correct number of labels
assert len(test_y_preds)==len(test_y), 'Unexpected number of predictions.'
print('Test passed!')
# Second: calculate the test accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(test_y, test_y_preds)
print(accuracy)
## print out the array of predicted and true labels, if you want
print('\nPredicted class labels: ')
print(test_y_preds)
print('\nTrue class labels: ')
print(test_y.values)
```
### Question 1: How many false positives and false negatives did your model produce, if any? And why do you think this is?
**Answer**:
False positives: 1, false negatives: 1.
The model achieved an accuracy of 92%, so I think the performance is pretty good.
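Those counts can be read directly off the predictions; with hypothetical label lists (not the real test set), counting false positives and false negatives looks like:

```python
true  = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical true labels
preds = [1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical predicted labels

# false positive: predicted plagiarized (1) when the answer was original (0)
fp = sum(1 for t, p in zip(true, preds) if t == 0 and p == 1)
# false negative: predicted original (0) when the answer was plagiarized (1)
fn = sum(1 for t, p in zip(true, preds) if t == 1 and p == 0)
print(fp, fn)  # 1 1
```

`sklearn.metrics.confusion_matrix(test_y, test_y_preds)` gives the same counts on the real test data.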
### Question 2: How did you decide on the type of model to use?
**Answer**:
I figured a tree-based model was a good place to start because such models are efficient at producing high-accuracy results.
----
## EXERCISE: Clean up Resources
After you're done evaluating your model, **delete your model endpoint**. You can do this with a call to `.delete_endpoint()`. You need to show, in this notebook, that the endpoint was deleted. Any other resources, you may delete from the AWS console, and you will find more instructions on cleaning up all your resources, below.
```
# uncomment and fill in the line below!
# <name_of_deployed_predictor>.delete_endpoint()
predictor.delete_endpoint()
```
### Deleting S3 bucket
When you are *completely* done with training and testing models, you can also delete your entire S3 bucket. If you do this before you are done training your model, you'll have to recreate your S3 bucket and upload your training data again.
```
# deleting bucket, uncomment lines below
bucket_to_delete = boto3.resource('s3').Bucket(bucket)
bucket_to_delete.objects.all().delete()
```
### Deleting all your models and instances
When you are _completely_ done with this project and do **not** ever want to revisit this notebook, you can choose to delete all of your SageMaker notebook instances and models by following [these instructions](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html). Before you delete this notebook instance, I recommend at least downloading a copy and saving it, locally.
---
## Further Directions
There are many ways to improve or add on to this project to expand your learning or make this more of a unique project for you. A few ideas are listed below:
* Train a classifier to predict the *category* (1-3) of plagiarism and not just plagiarized (1) or not (0).
* Utilize a different and larger dataset to see if this model can be extended to other types of plagiarism.
* Use language or character-level analysis to find different (and more) similarity features.
* Write a complete pipeline function that accepts a source text and submitted text file, and classifies the submitted text as plagiarized or not.
* Use API Gateway and a lambda function to deploy your model to a web application.
These are all just options for extending your work. If you've completed all the exercises in this notebook, you've completed a real-world application, and can proceed to submit your project. Great job!
#<font color = blue> Installing Packages</font>
```
!pip install nltk
!pip install joblib
```
#<font color = blue> Importing Libraries</font>
```
# NLP
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from collections import Counter
# Display results
from prettytable import PrettyTable
# Modeling
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score, classification_report
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
# Plotting libraries
import matplotlib.cm as cm
from matplotlib import rcParams
import matplotlib.pyplot as plt
import seaborn as sns
# Math
import numpy as np
import pandas as pd
# Random libraries
import warnings
warnings.filterwarnings('ignore')
import joblib
```
#<font color = blue> Data preparation & Text Pre-processing </font>
<font color = blue> Data reading </font>
```
# Select the significant columns
cols = ['dialect', 'Text']
# File path
file = '/content/drive/MyDrive/Aim Technology/Model/Dialects-with-text-clean.json'
# Read the file
Data = pd.read_json(file, dtype='string')
# Select the required cols from the previous df
dialects_df = Data[cols]
# Show a sample from the df
dialects_df.head()
# Null values check
dialects_df.isnull().sum()
# NA values check
dialects_df.isna().sum()
# Count of each target class
classes = dialects_df['dialect']
classes.value_counts()
# Plot each class count
sns.countplot(data= dialects_df, x = "dialect")
plt.show()
```
<font color = blue> Tokenization </font>
```
# Text tokenization
tokenizer = RegexpTokenizer(r'\w+')
dialects_df.loc[:, "Text"] = dialects_df.loc[:, "Text"].apply(tokenizer.tokenize)
# Show a sample
dialects_df["Text"].head()
```
<font color = blue> Stop Words Removal </font>
```
# Stop words list
stopwords_list = stopwords.words('arabic')
# Converting the list into a single string in order to show it in a more readable format
listToStr = ' '.join([str(elem) for elem in stopwords_list])
# Show the stopwords list
listToStr
```
> > <font color = blue> Count of tokens before stop words removal </font>
```
# Text information (before stop words removal)
all_words = [word for tokens in dialects_df["Text"] for word in tokens]
sentences_length = [len(tokens) for tokens in dialects_df["Text"]]
# Get the unique words of the dialects dataset
VOCAB = sorted(list(set(all_words)))
# Print the results summary
print("The total number of words is %s, with a vocabulary size of %s unique words" % (len(all_words), len(VOCAB)))
print("And the max sentence length is %s words" % max(sentences_length))
# top 25 words in dialects (before stop words removal)
counted_words = Counter(all_words)
top_25 = counted_words.most_common(25)
print("The top 25 words are:", end = '\n\n')
top_25
# Removing stop words
dialects_df["Text"] = dialects_df["Text"].apply(lambda x: [item for item in x if item not in stopwords_list])
# Show a sample from the df after tokenization
dialects_df.head()
# Text information
all_words = [word for tokens in dialects_df["Text"] for word in tokens]
sentences_length = [len(tokens) for tokens in dialects_df["Text"]]
# Get the unique words of the dialects dataset
VOCAB = sorted(list(set(all_words)))
# Print the results summary
print("The total number of words is %s, with a vocabulary size of %s unique words" % (len(all_words), len(VOCAB)))
print("And the max sentence length is %s words" % max(sentences_length))
# top 25 words in dialects
counted_words = Counter(all_words)
top_25 = counted_words.most_common(25)
top_25
# Draw the top 25
words = []
counts = []
# Create a separate list for words and counts
for letter, count in top_25:
    words.append(letter)
    counts.append(count)
# Print the words and counts
print("The top 25 words after stop word removal are: ", words, sep = '\n')
print("\n\nAnd their counts are: ", counts, sep = '\n', end = '\n\n')
# Plotting the words and their counts
colors = cm.rainbow(np.linspace(0, 1, 10))
rcParams['figure.figsize'] = 10, 7
plt.title('Top Words in Dialects')
plt.xlabel('Count')
plt.ylabel('Words')
plt.barh(words, counts, color=colors);
```
<font color = blue> Stemming </font>
```
from nltk.stem.snowball import SnowballStemmer
ar_stemmer = SnowballStemmer("arabic")
# dialects_df["Text"] = dialects_df["Text"].apply(lambda tokens: ar_stemmer.stem(' '.join(tokens)))
```
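The application of the stemmer is left commented out above. Note that `ar_stemmer.stem(' '.join(tokens))` would pass the whole joined sentence to the stemmer as a single string; a minimal sketch of applying the Arabic Snowball stemmer to each token instead (the sample words are only illustrative):

```python
from nltk.stem.snowball import SnowballStemmer

ar_stemmer = SnowballStemmer("arabic")

# Stem every token on its own instead of stemming the joined sentence
def stem_tokens(tokens):
    return [ar_stemmer.stem(token) for token in tokens]

print(stem_tokens(["يلعبون", "الكتاب"]))
```

On the dataframe this would be applied as `dialects_df["Text"].apply(stem_tokens)`.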
# <font color = blue> ML Model </font>
<font color = blue> Data splitting </font>
```
# Make a copy from the tokenized df and convert tokens into sentence
df = dialects_df.copy()
# Convert tokens into sentences
df['Text'] = df['Text'].apply(lambda j: ' '.join(str(token) for token in j))
# Show a sample
df.head()
# Label encoding initialization
encoder = preprocessing.LabelEncoder()
# Fit into dialects and transform it
df['dialect'] = encoder.fit_transform(df['dialect'])
# Show a sample
df.head()
# Split the dataset into X and y
X = df['Text']
y = df['dialect']
# Splitting the dataset into 80% training and 20% testing
all_x_train, all_x_test, all_y_train, all_y_test = train_test_split(X, y, test_size=0.20, random_state=2)
# Split the training data into training and validation
x_train, x_valid, y_train, y_valid = train_test_split(all_x_train, all_y_train, test_size=0.20, random_state=2)
```
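Splitting off 20% twice yields roughly a 64/16/20 train/validation/test split (0.8 × 0.8 = 0.64 of the data ends up in the final training set). A quick check of the proportions on toy data:

```python
from sklearn.model_selection import train_test_split

# Toy data, used only to verify the proportions of the two-stage split
X = list(range(1000))
y = [i % 5 for i in X]

all_x_train, all_x_test, all_y_train, all_y_test = train_test_split(
    X, y, test_size=0.20, random_state=2)
x_train, x_valid, y_train, y_valid = train_test_split(
    all_x_train, all_y_train, test_size=0.20, random_state=2)

print(len(x_train), len(x_valid), len(all_x_test))  # → 640 160 200
```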
<font color = blue> Feature extraction using TFIDF</font>
```
# Vectorize the Data using tfidfVectorizer
# vectorizer = TfidfVectorizer(max_features=1000, ngram_range=(1, 3))
vectorizer = TfidfVectorizer(max_features=1000, analyzer='char_wb', ngram_range=(3, 5), min_df=.01, max_df=.3)
# Fit on the training data
vectorizer.fit(x_train)
# Transform the training, validation and test data
x_train = vectorizer.transform(x_train)
x_valid = vectorizer.transform(x_valid)
x_test = vectorizer.transform(all_x_test)
```
<font color = blue> MultinomialNB </font>
```
# A function to print the classification report
def print_report(clf, x_test, y_test):
    y_pred = clf.predict(x_test)
    report = classification_report(y_test, y_pred)
    print(report)
    print("accuracy: {:0.3f}".format(accuracy_score(y_test, y_pred)))
# Fit the MultinomialNB on the data
mnb = MultinomialNB()
mnb.fit(x_train.toarray(), y_train)
# Predict the validating data
prediction = mnb.predict(x_valid.toarray())
# Print the validation accuracy
print("Validation Accuracy: ", accuracy_score(y_valid,prediction))
print("Validation F1Score: ", f1_score(y_valid,prediction, average='macro'))
# Predict the test data
predt_test = mnb.predict(x_test.toarray())
# Calculating test accuracy
mnb_acc = accuracy_score(all_y_test, predt_test)
mnb_f1 = f1_score(all_y_test,predt_test, average='macro')
# Print the test accuracy
print("Test Accuracy: ", mnb_acc)
print("Test F1Score: ", mnb_f1)
```
<font color = blue>LogisticRegression </font>
```
# Fit the LogisticRegression on the data
LR= LogisticRegression(penalty = 'l2', C = 1)
LR.fit(x_train.toarray(), y_train)
# Predict the validating data
prediction = LR.predict(x_valid.toarray())
# Print the validation accuracy
print("Validation Accuracy: ", accuracy_score(y_valid,prediction))
print("Validation F1Score: ", f1_score(y_valid,prediction, average='macro'))
# Predict the test data
predt_test = LR.predict(x_test.toarray())
# Print the test accuracy
print("Test Accuracy: ", accuracy_score(all_y_test, predt_test))
print("Test F1Score: ", f1_score(all_y_test,predt_test, average='macro'))
# Fit the LogisticRegression on the SCALED data
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(x_train.todense(), y_train) # apply scaling on training data
# pipe.score(x_valid.todense(), y_valid)
# Predict the validating data
prediction = pipe.predict(x_valid.todense())
# Print the validation accuracy
print("Validation Accuracy: ", accuracy_score(y_valid,prediction))
print("Validation F1Score: ", f1_score(y_valid,prediction, average='macro'))
# Predict the test data
predt_test = pipe.predict(x_test.todense())
# Calculating test accuracy
lr_acc = accuracy_score(all_y_test, predt_test)
lr_f1 = f1_score(all_y_test,predt_test, average='macro')
# Print the test accuracy
print("Test Accuracy: ", lr_acc)
print("Test F1Score: ", lr_f1)
```
<font color = blue> DecisionTreeClassifier </font>
```
# Fit the DecisionTreeClassifier on the data
classifier = DecisionTreeClassifier(random_state=0)
classifier.fit(x_train.toarray(), y_train)
# Predict the validating data
prediction = classifier.predict(x_valid.toarray())
# Print the validation accuracy
print("Validation Accuracy: ", accuracy_score(y_valid,prediction))
print("Validation F1Score: ", f1_score(y_valid,prediction, average='macro'))
# Predict the test data
predt_test = classifier.predict(x_test.toarray())
# Calculating test accuracy
tree_acc = accuracy_score(all_y_test, predt_test)
tree_f1 = f1_score(all_y_test,predt_test, average='macro')
# Print the test accuracy
print("Test Accuracy: ", tree_acc)
print("Test F1Score: ", tree_f1)
# Print the classification report of the decision tree
print_report(classifier, x_test.toarray(), all_y_test)
```
<font color = blue> Comparing results </font>
```
# Comparison of all algorithms Results
x = PrettyTable()
print('\n')
print("Comparison of all algorithms on F1 score")
x.field_names = ["Model", "Macro F1"]
x.add_row(["MultinomialNB Algorithm", round(mnb_f1,2)])
x.add_row(["LogisticRegression Algorithm", round(lr_f1,2)])
x.add_row(["DecisionTree Algorithm", round(tree_f1,2)])
print(x)
```
<font color = blue> As we can see, the decision tree classifier performs best, so we will retrain it on the full training data </font>
```
# vectorizer = TfidfVectorizer(max_features=1000, ngram_range=(1, 3))
vectorizer = TfidfVectorizer(max_features=1000, analyzer='char_wb', ngram_range=(3, 5), min_df=.01, max_df=.3)
# Fit on the whole train data
vectorizer.fit(all_x_train)
# Transform the whole data
all_x_train = vectorizer.transform(all_x_train)
all_x_test = vectorizer.transform(all_x_test)
# Fit the DecisionTreeClassifier on the data
classifier = DecisionTreeClassifier(random_state=0)
classifier.fit(all_x_train.toarray(), all_y_train)
# Predict the test data
predt_test = classifier.predict(all_x_test.toarray())
# Calculating test accuracy
tree_acc = accuracy_score(all_y_test, predt_test)
tree_f1 = f1_score(all_y_test,predt_test, average='macro')
# Print the test accuracy
print("Test Accuracy: ", tree_acc)
print("Test F1Score: ", tree_f1)
# Print the classification report of the decision tree
print_report(classifier, all_x_test.toarray(), all_y_test)
```
<font color = blue> Saving the model</font>
```
# Save the model as a pickle in a file
# files path
path = '/content/drive/MyDrive/Aim Technology/Model'
# Files saving
joblib.dump(classifier, path + '/decisionTree.pkl')
joblib.dump(vectorizer, path + '/TFIDF.pkl')
joblib.dump(encoder, path + '/labelIncoder.pkl')
# Load the model from the file (using the same path the files were saved to)
model_from_joblib = joblib.load(path + '/decisionTree.pkl')
vect_from_joblib = joblib.load(path + '/TFIDF.pkl')
label_from_joblib = joblib.load(path + '/labelIncoder.pkl')
# Make prediction check
# Text samples
test_text = np.array('دير بالك عحالك')
# test_text = np.array('شلون تسوي انت !! انت مينون')
# test_text = np.array(' الحمد لله سيدنا وانت حالك')
# test_text = np.array('ههههه هاي قوية بعضلات')
# test_text = np.array('كيفك شو الاخبار حبيبي')
# test_text = np.array('كيفك شو الاخبار ')
# test_text = np.array('ازيك عامل ايه')
# test_text = np.array('ازيك عامل ايه واحشني')
# test_text = np.array('ازيك عامل ايه يابني')
# test_text = np.array('فينك يا شبح')
# test_text = np.array('وينك شو الاخبار')
# test_text = np.array('كابتن اسلام !! فينك يا عم ماحدش بيشوفك ليه؟!!')
# test_text = np.array('حمد الله علي سلامتك يا فمر')
# Convert text into series
test_text_series = pd.Series(test_text, index=[0])
# TFIDF transformation
transformed_test_text = vect_from_joblib.transform(test_text_series)
# Predict
predicted_label= model_from_joblib.predict(transformed_test_text)
# Print the predicted label
label_from_joblib.classes_[predicted_label]
```
<font color = blue> Save the df</font>
```
# Reset the index, in order to be allowed for saving
df_ml = dialects_df.copy()
df_ml.reset_index(inplace=True)
# Save the data in a file
dir = '/content/drive/MyDrive/Aim Technology/Model'
df_ml.to_json(dir + '/Tokenized-data-for-DL.json', index = True)
```
# <font color = blue> DL Model </font>
```
!pip install utils
!pip install tensorflow --upgrade
# Import libraries
import pandas as pd
import joblib
from sklearn.model_selection import train_test_split
from keras import metrics
import numpy as np
import matplotlib.pyplot as plt
# from keras.layers import Embedding, Dense, Dropout, Input
# from keras.layers import MaxPooling1D, Conv1D, Flatten, LSTM
# from keras.preprocessing import sequence
# from tensorflow import keras
# from sklearn import preprocessing
import tensorflow as tf
# from tensorflow.keras import layers
# from tensorflow.keras import losses
print(tf.__version__)
# File path
dir = '/content/drive/MyDrive/Aim Technology/Model'
# Check the saved file
DL_df = pd.read_json(dir + '/Tokenized-data-for-DL.json',
dtype = 'string')
DL_df.head()
# Find the max length of the sentence
max_sequence_len = 0
for sentence in DL_df['Text']:
    max_sequence_len = max(len(sentence), max_sequence_len)
print(max_sequence_len)
# The total number of words
vocab_size = sum([len(sent) for sent in DL_df['Text']])
print(vocab_size)
# Load the vectorizer and the label encoder
vect_from_joblib = joblib.load( dir + '/TFIDF.pkl')
label_from_joblib = joblib.load(dir + '/labelIncoder.pkl')
# train = DL_df.iloc[:450000]
# valid = DL_df.iloc[450000:500000]
# test = DL_df.iloc[500000:]
DL_df['dialect'] = label_from_joblib.transform(DL_df['dialect'])
# Split the dataset into X and y
X = DL_df['Text']
y = DL_df['dialect']
# Splitting the dataset into 80% training and 20% testing
all_x_train, all_x_test, all_y_train, all_y_test = train_test_split(X, y, test_size=0.20, random_state=2)
# Split the training data into training and validation
x_train, x_valid, y_train, y_valid = train_test_split(all_x_train, all_y_train, test_size=0.20, random_state=2)
x_train.head()
xx_train = vect_from_joblib.transform(x_train.apply(lambda j: ' '.join(str(token) for token in j)))
xx_valid = vect_from_joblib.transform(x_valid.apply(lambda j: ' '.join(str(token) for token in j)))
xx_test = vect_from_joblib.transform(all_x_test.apply(lambda j: ' '.join(str(token) for token in j)))
# Now let's pad all the sentences to that max length
x_train_padded = np.zeros((xx_train.shape[0], max_sequence_len))
for i, sent in enumerate(xx_train.toarray()):
    x_train_padded[i, :len(sent)] = sent[:max_sequence_len]
# Now let's pad all the sentences to that max length
x_valid_padded = np.zeros((xx_valid.shape[0], max_sequence_len))
for i, sent in enumerate(xx_valid.toarray()):
    x_valid_padded[i, :len(sent)] = sent[:max_sequence_len]
# Now let's pad all the sentences to that max length
x_test_padded = np.zeros((xx_test.shape[0], max_sequence_len))
for i, sent in enumerate(xx_test.toarray()):
    x_test_padded[i, :len(sent)] = sent[:max_sequence_len]
# Number of dialect classes, recovered from the saved label encoder
num_classes = len(label_from_joblib.classes_)
model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.SimpleRNN(64),
    tf.keras.layers.Dense(64, activation='relu'),
    # Multi-class output: one unit per dialect with softmax (a single sigmoid unit only fits binary targets)
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
# Labels are integer-encoded, so use sparse categorical cross-entropy
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy, optimizer='Adam',
              metrics=['accuracy'])
epochs = 2
model.summary()
history = model.fit(x_train_padded, y_train, epochs=epochs, batch_size=128,
                    validation_data=(x_valid_padded, y_valid))
# history = model.fit(
# x_train_padded, y_train,
# epochs=epochs)
test_loss, test_acc = model.evaluate(x_test_padded, all_y_test)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
```
Trying another model
```
# Another model
# embedding_dim = 100
# max_length = 16
# max_features = 500
# model = tf.keras.Sequential([
# layers.Embedding(max_features + 1, embedding_dim, trainable=False),
# layers.Dropout(0.2),
# layers.Conv1D(64, 5, activation='relu'),
# layers.MaxPooling1D(pool_size=4),
# layers.LSTM(64),
# layers.Dense(1, activation='softmax')])
# model.summary()
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy, optimizer='SGD',
              metrics=['accuracy'])
epochs = 10
# history = model.fit(
# xx_train,
# # batch_size=batch_size,
# validation_data=all_x_test,
# epochs=epochs)
history = model.fit(
xx_train.todense(), y_train,
epochs=epochs)
def plot_graphs(history, string):
    plt.plot(history.history[string])
    # Validation curves only exist when the model was fit with validation_data
    if 'val_' + string in history.history:
        plt.plot(history.history['val_' + string])
        plt.legend([string, 'val_' + string])
    else:
        plt.legend([string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
```
## 04. Explain the Optimizer in Detail
### Instantiating an Optimizer Object
From the Basic Tutorial, we know that UltraOpt provides the following optimizers:
|Optimizer|Description|
|-----|---|
|ETPE| Embedding-Tree-Parzen-Estimator, an optimization algorithm devised by the author of UltraOpt. Building on the TPE algorithm[<sup>[4]</sup>](#refer-anchor-4), it embeds categorical variables into low-dimensional continuous variables,<br>and improves on several other aspects as well. ETPE can outperform HyperOpt's TPE algorithm in some scenarios. |
|Forest |A Bayesian optimization algorithm based on random forests. Its probabilistic model uses the `skopt.learning.forest` model[<sup>[2]</sup>](#refer-anchor-2) from the `scikit-optimize`[<sup>[1]</sup>](#refer-anchor-1) package,<br>and it borrows the local search method from `SMAC3`[<sup>[3]</sup>](#refer-anchor-3).|
|GBRT| A Bayesian optimization algorithm based on Gradient Boosting Regression Trees,<br>whose probabilistic model uses the `skopt.learning.gbrt` model from the `scikit-optimize` package. |
|Random| Random search. |
When calling the `ultraopt.fmin` function with the optimizer's default parameters, it is enough to pass the optimizer's name, for example:
```python
from ultraopt import fmin
result = fmin(evaluate_function, config_space, optimizer="ETPE")
```
But if we want to adjust the optimizer's parameters, such as changing the number of trees in `ForestOptimizer`'s random forest, or some of `ETPEOptimizer`'s parameters concerning cold start and the number of samples, we need to import the corresponding optimizer class from the `ultraopt.optimizer` package and instantiate it as an optimizer object.
Import `fmin` and the optimizer classes you need:
```
from ultraopt import fmin
from ultraopt.optimizer import ETPEOptimizer
from ultraopt.optimizer import RandomOptimizer
from ultraopt.optimizer import ForestOptimizer
from ultraopt.optimizer import GBRTOptimizer
```
Import a configuration space and an evaluation function for testing:
```
from ultraopt.tests.mock import config_space, evaluate
```
Instantiate a `ForestOptimizer` with the parameters you want:
```
optimizer = ForestOptimizer(n_estimators=20)  # surrogate model: a random forest with 20 trees
```
Pass the optimizer object to the `fmin` function to start the optimization process:
```
fmin(evaluate, config_space, optimizer)
```
### Implement an Optimization Process Outside of fmin
An optimizer has three important functions:
- `initialize(config_space, ...)` : initializes the optimizer with the configuration space and other parameters
- `ask(n_points=None, ...)` : asks the optimizer to recommend `n_points` configurations (1 by default)
- `tell(config, loss, ...)` : tells the optimizer how good a previously recommended configuration `config` turned out to be (the smaller the `loss`, the better)
The optimizer's workflow is shown in the following figure:
```
from graphviz import Digraph; g = Digraph()
g.node("config space", shape="ellipse"); g.node("optimizer", shape="box")
g.node("config", shape="ellipse"); g.node("loss", shape="circle"); g.node("evaluator", shape="box")
g.edge("config space", "optimizer", label="initialize"); g.edge("optimizer", "config", label="<<b>ask</b>>", color='blue')
g.edge("config","evaluator" , label="send to"); g.edge("evaluator","loss" , label="evaluate")
g.edge("config", "optimizer", label="<<b>tell</b>>", color='red'); g.edge("loss", "optimizer", label="<<b>tell</b>>", color='red')
g.graph_attr['rankdir'] = 'LR'; g
```
The `evaluator` in the figure evaluates a configuration `config` and returns a loss `loss`.
For example, in an AutoML problem the evaluator works as follows:
1. Convert the config into a machine-learning instance
2. Train the machine-learning instance on the training set
3. Compute the corresponding evaluation metric on the validation set
4. Transform the metric so that `smaller is better`, and return it as `loss`
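The four steps above can be sketched as a toy evaluator. This is only an illustrative sketch under assumptions (synthetic data, and a decision tree whose `max_depth` stands in for the config); it is not the concrete evaluator built in the next tutorial, and it is named `toy_evaluate` so as not to clash with the `evaluate` function imported from `ultraopt.tests.mock`:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real AutoML task
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

def toy_evaluate(config: dict) -> float:
    # 1. Convert the config into a machine-learning instance
    model = DecisionTreeClassifier(max_depth=config["max_depth"], random_state=0)
    # 2. Train it on the training set
    model.fit(X_train, y_train)
    # 3. Compute the evaluation metric on the validation set (accuracy: higher is better)
    acc = model.score(X_valid, y_valid)
    # 4. Transform the metric so that smaller is better
    return 1.0 - acc

print(toy_evaluate({"max_depth": 3}))
```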
We will implement a concrete evaluator in the next tutorial. With this knowledge in hand, can we step outside the `fmin` function and implement an optimization process ourselves? The answer is yes.
In UltraOpt's design philosophy, the optimizer only needs to provide the three interfaces above; the evaluator and the distributed strategy can both be designed by the user.
#### First, walk through the whole process in a single iteration
> Step 1. First, instantiate and initialize the optimizer:
```
optimizer = ETPEOptimizer()
optimizer.initialize(config_space)
```
> Step 2. Call the optimizer's `ask` function to obtain a recommended configuration:
```
recommend_config, config_info = optimizer.ask()
recommend_config
config_info
```
> Step 3. Use the evaluator (here, the `evaluate` function) to judge how good the configuration is:
```
loss = evaluate(recommend_config)
loss
```
> Step 4. Pass the observation `config, loss` to the optimizer through the `tell` function:
```
optimizer.tell(recommend_config, loss)
```
#### Organizing the above process into a for loop
```
optimizer = ETPEOptimizer()
optimizer.initialize(config_space)
losses = []
best_losses = []
for _ in range(100):
    config, _ = optimizer.ask()
    loss = evaluate(config)
    optimizer.tell(config, loss)
    losses.append(loss)
    best_losses.append(min(losses))
import pylab as plt
plt.grid(alpha=0.2)
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.plot(range(20, 100), best_losses[20:]);
```
You may wonder whether `ask` can only recommend one configuration at a time, or whether it can recommend several. It can recommend several.
#### A MapReduce-style strategy: `ask` for multiple configurations and evaluate them in parallel
```
from joblib import Parallel, delayed
n_parallels = 3
optimizer = ETPEOptimizer()
optimizer.initialize(config_space)
for _ in range(100 // n_parallels):
    config_info_pairs = optimizer.ask(n_points=n_parallels)
    losses = Parallel(n_jobs=n_parallels)(
        delayed(evaluate)(config)
        for config, _ in config_info_pairs
    )
    for j, (loss, (config, _)) in enumerate(zip(losses, config_info_pairs)):
        optimizer.tell(config, loss, update_model=(j == n_parallels - 1))  # update the model only when telling the last observation of the batch
```
**References**
<div id="refer-anchor-1"></div>
- [1] https://github.com/scikit-optimize/scikit-optimize
<div id="refer-anchor-2"></div>
- [2] [Hutter, F. et al. “Algorithm runtime prediction: Methods & evaluation.” Artif. Intell. 206 (2014): 79-111.](https://arxiv.org/abs/1211.0906)
<div id="refer-anchor-3"></div>
- [3] [Hutter F., Hoos H.H., Leyton-Brown K. (2011) Sequential Model-Based Optimization for General Algorithm Configuration. In: Coello C.A.C. (eds) Learning and Intelligent Optimization. LION 2011. Lecture Notes in Computer Science, vol 6683. Springer, Berlin, Heidelberg.](https://link.springer.com/chapter/10.1007/978-3-642-25566-3_40)
<div id="refer-anchor-4"></div>
- [4] [James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS'11). Curran Associates Inc., Red Hook, NY, USA, 2546–2554.](https://dl.acm.org/doi/10.5555/2986459.2986743)