<img src='https://bit.ly/2VnXWr2' width='100' align='left'>
# Final project: NLP to predict Myers-Briggs Personality Type
### Imports
```
!pip freeze > requirements4.txt
# Data Analysis
import pandas as pd
import numpy as np
# Data Visualization
import seaborn as sns
import matplotlib.pyplot as plt
# Ignore noise warning
import warnings
warnings.filterwarnings('ignore')
# Work with pickles
import pickle
#Metrics
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error, accuracy_score, balanced_accuracy_score
from sklearn.metrics import precision_score, recall_score, f1_score, multilabel_confusion_matrix, confusion_matrix
from sklearn.metrics import classification_report
# Model training and evaluation
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
```
## 5. Hyperparameter Tuning of the Models (Types)
Although the metrics of the different models are already quite good, we can still improve their performance. Therefore, the parameters of each model need to be fine-tuned.
```
result_svd_vec_types = pd.read_csv('data/output_csv/result_svd_vec_types.csv')
result_svd_vec_types.drop(['Unnamed: 0'], axis=1, inplace=True)
X = result_svd_vec_types.drop(['type','enfj', 'enfp', 'entj', 'entp', 'esfj', 'esfp', 'estj', 'estp','infj', 'infp', 'intj',
'intp', 'isfj', 'isfp', 'istj', 'istp'], axis=1).values
y = result_svd_vec_types['type'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.2)
print ((X_train.shape),(y_train.shape),(X_test.shape),(y_test.shape))
```
<img src='https://www.nicepng.com/png/detail/148-1486992_discover-the-most-powerful-ways-to-automate-your.png' width='1000'>
```
raise SystemExit('Stop right there! The following cells take some time to complete.')
```
As there are quite a few parameters, I will show the parameter grids I used and then the model training with the best results.
These grids were used during the tuning in Google Colab, searching over two or three parameters at a time.
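With grids this large, an exhaustive search over all parameters at once is infeasible, which is why the tuning was split up. As a hedged sketch of a cheaper alternative (on synthetic data, with an illustrative small grid — none of these numbers come from the project above), ``RandomizedSearchCV``, which is already imported, samples a fixed number of settings from the grid instead of trying every combination:

```python
# Illustrative sketch: RandomizedSearchCV evaluates only n_iter sampled
# settings from the grid instead of every combination.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Small synthetic classification problem (stand-in for the real features)
X_demo, y_demo = make_classification(n_samples=200, n_features=10, random_state=42)

# Toy grid: 3 * 3 * 3 = 27 combinations in total
param_distributions = {
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 10, 30],
    'min_samples_leaf': [1, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions,
    n_iter=5,              # only 5 of the 27 combinations are evaluated
    cv=3,
    scoring='f1_weighted',
    random_state=42,
)
search.fit(X_demo, y_demo)
print(search.best_params_)
```

A randomized pass like this narrows the region of interest, after which a focused ``GridSearchCV`` (as below) can refine it.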
### RandomForest Tuning
##### GridSearchCV
```
random_forest = RandomForestClassifier(random_state = 42)
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
n_estimators.extend(np.arange(50, 200).tolist())  # extend, not append: append would insert the whole list as a single (invalid) candidate value
param_grid = {'class_weight': [None,'balanced'],
'criterion': ['gini', 'entropy'],
'max_depth': max_depth,
'max_features': ['auto', 'sqrt', 'log2'],
'n_estimators' : n_estimators,
'min_samples_leaf': np.arange(1,20),
'min_samples_split': np.arange(2,25),
'bootstrap': [True, False],
'oob_score': [True, False]
}
grid = GridSearchCV(random_forest, param_grid, cv=3, scoring='f1_weighted', verbose=2, n_jobs=-1, refit=True)
grid.fit(X_train, y_train)
grid.best_estimator_
print(grid.best_params_)
```
### GradientBooster Tuning
##### GridSearchCV
```
gradient_booster = GradientBoostingClassifier(random_state = 42)
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
n_estimators.extend(np.arange(50, 200).tolist())  # extend, not append: append would insert the whole list as a single (invalid) candidate value
param_grid = {'loss':['deviance', 'exponential'],
'learning_rate': [0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2],
'max_depth': max_depth,
'n_estimators' : n_estimators,
'min_samples_leaf': np.arange(1,20),
'min_samples_split': np.arange(2,25),
'max_features':['auto', 'sqrt', 'log2'],
'criterion': ['friedman_mse', 'mse', 'mae'],
'subsample':[0.5, 0.618, 0.8, 0.85, 0.9, 0.95, 1.0]
}
grid = GridSearchCV(gradient_booster, param_grid, cv=3, scoring='f1_weighted', verbose=2, n_jobs=-1, refit=True)
grid.fit(X_train, y_train)
grid.best_estimator_
print(grid.best_params_)
```
### Final results
```
def baseline_report(model, X_train, X_test, y_train, y_test, name):
strat_k_fold = StratifiedKFold(n_splits=5, shuffle=True)
model.fit(X_train, y_train)
accuracy = np.mean(cross_val_score(model, X_train, y_train, cv=strat_k_fold, scoring='accuracy'))
precision = np.mean(cross_val_score(model, X_train, y_train, cv=strat_k_fold, scoring='precision_weighted'))
recall = np.mean(cross_val_score(model, X_train, y_train, cv=strat_k_fold, scoring='recall_weighted'))
f1score = np.mean(cross_val_score(model, X_train, y_train, cv=strat_k_fold, scoring='f1_weighted'))
y_pred = model.predict(X_test)
mcm = multilabel_confusion_matrix(y_test, y_pred)
tn = mcm[:, 0, 0]
tp = mcm[:, 1, 1]
fn = mcm[:, 1, 0]
fp = mcm[:, 0, 1]
specificities = tn / (tn+fp)
specificity = specificities.mean()  # macro-average over the 16 classes
df_model = pd.DataFrame({'model' : [name],
'accuracy' : [accuracy],
'precision' : [precision],
'recall' : [recall],
'f1score' : [f1score],
'specificity' : [specificity]
})
return df_model
models = {'randomforest': RandomForestClassifier(random_state = 42, bootstrap=False, class_weight = 'balanced',
criterion = 'gini', max_depth = 50, max_features = 'sqrt',
min_samples_leaf = 5, min_samples_split = 12, n_estimators = 1800,
oob_score = False),
'gradientboosting': GradientBoostingClassifier(random_state = 42, loss = 'deviance', max_depth = 3, n_estimators = 1600,
max_features = 'sqrt', learning_rate = 0.075, criterion = 'friedman_mse',
subsample = 0.9, min_samples_leaf = 6, min_samples_split = 15)
}
models_df = pd.concat([baseline_report(model, X_train, X_test, y_train, y_test, name) for (name, model) in models.items()])
models_df.to_csv('data/output_csv/models_tuned_types.csv')
models_df
```
## Conclusions
The tuned model has a weighted F1 score of 0.652; in other words, it predicts the MBTI personality type correctly about 65.2% of the time.
While that may not seem particularly outstanding, this is a 16-class classification problem, so the random-guessing baseline sits at 6.25%. Predictions from this model are therefore more than 10 times more accurate than guessing.
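A quick check of the arithmetic behind that claim, using only the numbers reported above:

```python
# With 16 equally likely classes, random guessing is right 1/16 of the time.
baseline = 1 / 16
model_f1 = 0.651957  # weighted F1 score reported above

print(f"baseline: {baseline:.2%}")                 # -> baseline: 6.25%
print(f"ratio: {model_f1 / baseline:.1f}x better than guessing")
```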
# Linear models in practice
In this lab session we will go over some important aspects of using linear models (and to some degree also neighborhood based models) in practice.
In particular, we will do some simple preprocessing and feature engineering. We will use the Boston housing dataset for this again.
## Data Scaling
Before we get started, we want to look at the data in more detail (you can look at ``boston.DESCR`` if you need a reminder about the data).
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_boston
boston = load_boston()
X, y = boston.data, boston.target
```
### Task
Fill in the code below to create a scatter plot of each variable against the target. These univariate relationships can provide some insight into how to best model the problem.
```
fig, axes = plt.subplots(3, 5, figsize=(20, 10))
for i, ax in enumerate(axes.ravel()):
    if i > 12:
        # hide the two unused axes (13 features on a 3x5 grid)
        ax.set_visible(False)
        continue
    # plot the i-th feature in X against the target and label the axes
    ax.scatter(X[:, i], y)
    ax.set_xlabel(boston.feature_names[i])
    ax.set_ylabel("MEDV")
```
The plot shows that for some features there is a clear dependence between the feature and the target. We can also see that the distributions of the different variables are quite different, as well as the ranges. We can see the difference between the ranges even more clearly if we do a box-plot of the features.
### Task
Create a boxplot of X. Make sure the axes are labeled properly.
```
plt.boxplot(X)
plt.xticks(range(1, 14), boston.feature_names);
```
Having features of different orders of magnitude can be very detrimental to many models, for example regularized linear models and neighborhood-based models. For linear regression without regularization, scaling the input usually makes no difference, but it can be beneficial for obtaining a more stable solution.
There is a regression equivalent to ``KNeighborsClassifier``, called ``KNeighborsRegressor``. It simply predicts the mean of the targets of the ``n_neighbors`` closest training points.
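A minimal sketch of that behaviour on made-up 1-D data (all values here are illustrative, not from the Boston dataset):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Five 1-D training points with targets equal to 10x the feature value
X_toy = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_toy = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

knr_toy = KNeighborsRegressor(n_neighbors=3).fit(X_toy, y_toy)

# The 3 nearest neighbors of 1.9 are 2.0, 1.0 and 3.0,
# so the prediction is the mean of (20, 10, 30) = 20
print(knr_toy.predict([[1.9]]))  # -> [20.]
```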
### Task
Split the boston housing data into training and test set, apply the ``KNeighborsRegressor`` and compute the test set $R^2$.
```
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
knr = KNeighborsRegressor(n_neighbors=3)
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=0)
knr.fit(X_train, y_train)
knr.score(X_test, y_test)
```
The model is not very accurate and the problem should be clear from the box plot above: the TAX feature has a much larger magnitude than any other feature. That means this feature will completely dominate which data points are considered neighbors, and all other features will have little influence. A simple solution is to scale all the features to have the same scale. A common choice is to scale all features to zero mean and unit variance. We can do this with the ``StandardScaler`` from the ``sklearn.preprocessing`` module. It's important that we compute the mean and standard deviation of the data only on the training set, and then use these estimates for both the training and the test set.
### Task
Use the ``StandardScaler`` to rescale the Boston housing dataset. You can estimate the mean and variance by calling the ``fit`` method with the training data ``X_train`` (``y_train`` is not required). You can then scale the data using the ``transform`` method.
Store the scaled training data in to ``X_train_scaled`` and the scaled test data into ``X_test_scaled``. Compute mean and variance for both ``X_train_scaled`` and ``X_test_scaled``. Are they what you expect?
Now use the ``KNeighborsRegressor`` on the scaled data. The results should improve.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print(X_train_scaled.mean(axis=0), X_train_scaled.std(axis=0))
print(X_test_scaled.mean(axis=0), X_test_scaled.std(axis=0))
knr.fit(X_train_scaled, y_train)
knr.score(X_test_scaled, y_test)
```
## Regularization
Coming back to linear models, we want to see what the effect of different amounts of regularization in a linear model is. Remember that the l2-penalized linear regression is called ``Ridge``.
### Task
Fit the Ridge model with the different values of ``alpha`` that are given. Record the ``R^2`` on the training and test set, and create a line plot comparing the two.
```
alphas = np.logspace(-3, 3, 7)
np.set_printoptions(suppress=True)
print(alphas)
from sklearn.linear_model import Ridge
train_scores = []
test_scores = []
for alpha in alphas:
    # instantiate ridge with this particular alpha
    ridge = Ridge(alpha=alpha)
    # compute training and test scores, store them in train_scores and test_scores
    ridge.fit(X_train_scaled, y_train)
    train_scores.append(ridge.score(X_train_scaled, y_train))
    test_scores.append(ridge.score(X_test_scaled, y_test))

# plot train_scores and test_scores against alpha (log scale)
plt.semilogx(alphas, train_scores, label="train scores")
plt.semilogx(alphas, test_scores, label="test scores")
plt.xlabel("alpha")
plt.legend()
```
In this case small values of alpha lead to the best training and test performances. This means little regularization is beneficial, and we could have also just used ``LinearRegression``.
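That trade-off is visible directly in the coefficients: increasing ``alpha`` shrinks them towards zero. A small sketch on synthetic data (names and numbers are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic regression problem with known coefficients plus a little noise
rng = np.random.RandomState(0)
X_syn = rng.randn(100, 5)
y_syn = X_syn @ np.array([3.0, -2.0, 1.5, 0.0, 4.0]) + rng.randn(100) * 0.5

norms = []
for alpha in [0.001, 1.0, 1000.0]:
    ridge = Ridge(alpha=alpha).fit(X_syn, y_syn)
    norms.append(np.linalg.norm(ridge.coef_))

# The coefficient norm decreases as the regularization gets stronger
print(norms)
```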
## Polynomial Features
Looking at the univariate plots of features vs target that we did in the beginning of this lab, it's clear that not all the relationships between features and target are linear. The relationship for ``RM`` looks somewhat linear, but the relationship for ``LSTAT`` is clearly not.
There are several ways to deal with non-linear relationships. A particularly simple one is adding polynomials of the original features as additional features, for example adding $\text{LSTAT}^2$. We will also add interactions between features, which further increases the power of the model.
Both of these are implemented in the ``PolynomialFeatures`` transformation in ``sklearn.preprocessing``.
### Task
Transform the scaled data ``X_train_scaled`` and ``X_test_scaled`` with the ``PolynomialFeatures`` transformation.
How many features does the transformed data have? Why?
Build a ridge model using the transformed data. Is it worse or better than the model we built before?
Now repeat the exercise from above and investigate the influence of different values of ``alpha`` on the model. How does the plot look different than before?
```
from sklearn.preprocessing import PolynomialFeatures
poly_transform = PolynomialFeatures(degree=2)
# your code goes here...
```
# Categorical Variables
Another common issue in working with linear models and related models is dealing with categorical variables. Linear models can't by themselves deal with categorical variables, but a simple way around that is using dummy variables, also known as one-hot encoding. Let's run through a small example with ``pandas``. Imagine you have a dataset with an integer column ``salary`` and a categorical column ``boro`` that takes the value of a borough of New York City:
```
import pandas as pd
df = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],
                   'boro': ['Manhattan', 'Queens', 'Manhattan', 'Brooklyn', 'Brooklyn', 'Bronx']})
df
```
To create a one-hot representation of the data, we can simply call the ``get_dummies`` function in pandas. It will represent each categorical feature as several new features, one for each category. All values of the features will be zero, except for the one feature representing the category assigned to this datapoint. So there will always be exactly one entry of "1" for each group of features.
```
pd.get_dummies(df)
```
By default pandas encodes all variables that contain strings, objects, or categories (a pandas concept that we won't explain here - in this example we simply use strings). Sometimes you might encounter data in which someone has already "helpfully" encoded categories as integers:
```
df = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],
'boro': [0, 1, 0, 2, 2, 3]})
df
```
Calling ``get_dummies`` on this data will not do anything:
```
pd.get_dummies(df)
```
To transform a categorical feature that's already encoded as an integer, you can pass it explicitly:
```
pd.get_dummies(df, columns=['boro'])
```
# Exercise
Apply dummy encoding and scaling to the "adult" dataset consisting of income data from the 1990s census.
The goal is to predict whether a person will make less or more than \$50k a year, so this is a binary classification problem.
Use logistic regression on the problem. You need to separate the income variable, which is the target, from the rest of the data frame, which will be the features.
Bonus: identify important features and visualize them.
```
data = pd.read_csv("https://github.com/amueller/ml-training-advanced/raw/master/notebooks/data/adult.csv", index_col=0)
# solution here ...
```
# 100 pandas puzzles
Inspired by [100 Numpy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.
Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.
If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...
- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)
- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)
Enjoy the puzzles!
\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.*
## Importing pandas
### Getting started and checking your pandas setup
Difficulty: *easy*
**1.** Import pandas under the name `pd`.
```
import pandas as pd
```
**2.** Print the version of pandas that has been imported.
```
pd.__version__
```
**3.** Print out all the version information of the libraries that are required by the pandas library.
```
pd.show_versions()
```
## DataFrame basics
### A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty: *easy*
Note: remember to import numpy using:
```python
import numpy as np
```
Consider the following Python dictionary `data` and Python list `labels`:
``` python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
```
df = pd.DataFrame(data, index=labels)
```
**5.** Display a summary of the basic information about this DataFrame and its data.
```
df.info()
# ...or...
df.describe()
```
**6.** Return the first 3 rows of the DataFrame `df`.
```
df.iloc[:3]
# or equivalently
df.head(3)
```
**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
```
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
```
**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
```
df.loc[df.index[[3, 4, 8]], ['animal', 'age']]
```
**9.** Select only the rows where the number of visits is greater than 3.
```
df[df['visits'] > 3]
```
**10.** Select the rows where the age is missing, i.e. is `NaN`.
```
df[df['age'].isnull()]
```
**11.** Select the rows where the animal is a cat *and* the age is less than 3.
```
df[(df['animal'] == 'cat') & (df['age'] < 3)]
```
**12.** Select the rows where the age is between 2 and 4 (inclusive).
```
df[df['age'].between(2, 4)]
```
**13.** Change the age in row 'f' to 1.5.
```
df.loc['f', 'age'] = 1.5
```
**14.** Calculate the sum of all visits (the total number of visits).
```
df['visits'].sum()
```
**15.** Calculate the mean age for each different animal in `df`.
```
df.groupby('animal')['age'].mean()
```
**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
```
df.loc['k'] = ['dog', 5.5, 2, 'no']  # values in column order: animal, age, visits, priority
# and then deleting the new row...
df = df.drop('k')
```
**17.** Count the number of each type of animal in `df`.
```
df['animal'].value_counts()
```
**18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order.
```
df.sort_values(by=['age', 'visits'], ascending=[False, True])
```
**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
```
df['priority'] = df['priority'].map({'yes': True, 'no': False})
```
**20.** In the 'animal' column, change the 'snake' entries to 'python'.
```
df['animal'] = df['animal'].replace('snake', 'python')
```
**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
```
df.pivot_table(index='animal', columns='visits', values='age', aggfunc='mean')
```
## DataFrames: beyond the basics
### Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: *medium*
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
**22.** You have a DataFrame `df` with a column 'A' of integers. For example:
```python
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
```
How do you filter out rows which contain the same integer as the row immediately above?
```
df.loc[df['A'].shift() != df['A']]
```
**23.** Given a DataFrame of numeric values, say
```python
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
```
how do you subtract the row mean from each element in the row?
```
df.sub(df.mean(axis=1), axis=0)
```
**24.** Suppose you have DataFrame with 10 columns of real numbers, for example:
```python
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
```
Which column of numbers has the smallest sum? (Find that column's label.)
```
df.sum().idxmin()
```
**25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
```
len(df) - df.duplicated(keep=False).sum()
# or perhaps more simply...
len(df.drop_duplicates(keep=False))
```
The next three puzzles are slightly harder...
**26.** You have a DataFrame that consists of 10 columns of floating-point numbers. Suppose that exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the *column* which contains the *third* NaN value.
(You should return a Series of column labels.)
```
(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
```
**27.** A DataFrame has a column of groups 'grps' and a column of numbers 'vals'. For example:
```python
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
```
For each *group*, find the sum of the three greatest values.
```
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
```
**28.** A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.
```
df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()
```
## DataFrames: harder problems
### These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).
Difficulty: *hard*
**29.** Consider a DataFrame `df` where there is an integer column 'X':
```python
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
```
For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be `[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]`. Make this a new column 'Y'.
```
izero = np.r_[-1, np.flatnonzero(df['X'] == 0)]  # indices of zeros, with -1 as a sentinel before the start
idx = np.arange(len(df))
df['Y'] = idx - izero[np.searchsorted(izero - 1, idx) - 1]
# http://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series/
# credit: Behzad Nouri
```
Here's an alternative approach based on a [cookbook recipe](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping):
```
x = (df['X'] != 0).cumsum()
y = x != x.shift()
df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()
```
And another approach using a groupby:
```
df['Y'] = df.groupby((df['X'] == 0).cumsum()).cumcount()
# We're off by one before we reach the first zero.
first_zero_idx = (df['X'] == 0).idxmax()
df['Y'].iloc[0:first_zero_idx] += 1
```
**30.** Consider a DataFrame containing rows and columns of purely numerical data. Create a list of the row-column index locations of the 3 largest values.
```
df.unstack().sort_values()[-3:].index.tolist()
# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/
# credit: DSM
```
**31.** Given a DataFrame with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
```
def replace(group):
mask = group<0
group[mask] = group[~mask].mean()
return group
df.groupby(['grps'])['vals'].transform(replace)
# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/
# credit: unutbu
```
**32.** Implement a rolling mean over groups with window size 3, which ignores NaN value. For example consider the following DataFrame:
```python
>>> df = pd.DataFrame({'group': list('aabbabbbabab'),
'value': [1, 2, 3, np.nan, 2, 3,
np.nan, 1, 7, 3, np.nan, 8]})
>>> df
group value
0 a 1.0
1 a 2.0
2 b 3.0
3 b NaN
4 a 2.0
5 b 3.0
6 b NaN
7 b 1.0
8 a 7.0
9 b 3.0
10 a NaN
11 b 8.0
```
The goal is to compute the Series:
```
0 1.000000
1 1.500000
2 3.000000
3 3.000000
4 1.666667
5 3.000000
6 3.000000
7 2.000000
8 3.666667
9 2.000000
10 4.500000
11 4.000000
```
E.g. the first window of size three for group 'b' has the values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN, the value in the new column at this row index should be 3.0: just the two non-NaN values are used to compute the mean, (3+3)/2.
```
g1 = df.groupby(['group'])['value'] # group values
g2 = df.fillna(0).groupby(['group'])['value'] # fillna, then group values
s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count() # compute means
s.reset_index(level=0, drop=True).sort_index() # drop/sort index
# http://stackoverflow.com/questions/36988123/pandas-groupby-and-rolling-apply-ignoring-nans/
```
## Series and DatetimeIndex
### Exercises for creating and manipulating Series with datetime data
Difficulty: *easy/medium*
pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.
**33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`.
```
dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B')
s = pd.Series(np.random.rand(len(dti)), index=dti)
```
**34.** Find the sum of the values in `s` for every Wednesday.
```
s[s.index.weekday == 2].sum()
```
**35.** For each calendar month in `s`, find the mean of values.
```
s.resample('M').mean()
```
**36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred.
```
s.groupby(pd.Grouper(freq='4M')).idxmax()  # pd.TimeGrouper is deprecated in favour of pd.Grouper
```
**37.** Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.
```
pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')
```
## Cleaning Data
### Making a DataFrame easier to work with
Difficulty: *easy/medium*
It happens all the time: someone gives you data containing malformed strings, Python lists and missing data. How do you tidy it up so you can get on with the analysis?
Take this monstrosity as the DataFrame to use in the following puzzles:
```python
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
```
(It's some flight data I made up; it's not meant to be accurate in any way.)
**38.** Some values in the FlightNumber column are missing. These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Fill in these missing numbers and make the column an integer column (instead of a float column).
```
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)
```
**39.** The From\_To column would be better as two separate columns! Split each string on the underscore delimiter `_` to give a new temporary DataFrame with the correct values. Assign the correct column names to this temporary DataFrame.
```
temp = df.From_To.str.split('_', expand=True)
temp.columns = ['From', 'To']
```
**40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
```
temp['From'] = temp['From'].str.capitalize()
temp['To'] = temp['To'].str.capitalize()
```
**41.** Delete the From_To column from `df` and attach the temporary DataFrame from the previous questions.
```
df = df.drop('From_To', axis=1)
df = df.join(temp)
```
**42**. In the Airline column, you can see that some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`.
```
df['Airline'] = df['Airline'].str.extract(r'([a-zA-Z\s]+)', expand=False).str.strip()
# note: using .strip() gets rid of any leading/trailing spaces
```
**43**. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named `delays`, rename the columns `delay_1`, `delay_2`, etc. and replace the unwanted RecentDelays column in `df` with `delays`.
```
# there are several ways to do this, but the following approach is possibly the simplest
delays = df['RecentDelays'].apply(pd.Series)
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns)+1)]
df = df.drop('RecentDelays', axis=1).join(delays)
```
The DataFrame should look much better now.
## Using MultiIndexes
### Go beyond flat DataFrames with additional index levels
Difficulty: *medium*
Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.
The set of puzzles below explores how you might use multiple index levels to enhance data analysis.
To warm up, we'll make a Series with two index levels.
**44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`.
```
letters = ['A', 'B', 'C']
numbers = list(range(10))
mi = pd.MultiIndex.from_product([letters, numbers])
s = pd.Series(np.random.rand(30), index=mi)
```
**45.** Check that the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex).
```
s.index.is_lexsorted()
# or more verbosely...
s.index.lexsort_depth == s.index.nlevels
```
**46**. Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series.
```
s.loc[:, [1, 3, 6]]
```
**47**. Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level.
```
s.loc[pd.IndexSlice[:'B', 5:]]
# or equivalently without IndexSlice...
s.loc[slice(None, 'B'), slice(5, None)]
```
**48**. Sum the values in `s` for each label in the first level (you should have Series giving you a total for labels A, B and C).
```
s.sum(level=0)
```
**49**. Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`?
```
# One way is to use .unstack()...
# This method should convince you that s is essentially
# just a regular DataFrame in disguise!
s.unstack().sum(axis=0)
```
**50**. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.
```
new_s = s.swaplevel(0, 1)
# check
new_s.index.is_lexsorted()
# sort
new_s = new_s.sort_index()
```
## Minesweeper
### Generate the numbers for safe squares in a Minesweeper grid
Difficulty: *medium* to *hard*
If you've ever used an older version of Windows, there's a good chance you've played with [Minesweeper](https://en.wikipedia.org/wiki/Minesweeper_(video_game)). If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.
In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares.
**51**. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.
```
X = 5
Y = 4
```
To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'` containing every coordinate for this grid. That is, the DataFrame should start:
```
x y
0 0 0
1 0 1
2 0 2
```
```
# pd.tools.util.cartesian_product was removed in later pandas
# versions; MultiIndex.from_product achieves the same thing.
p = pd.MultiIndex.from_product([np.arange(X), np.arange(Y)])
df = pd.DataFrame(p.to_list(), columns=['x', 'y'])
```
**52**. For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). The probability of a mine occurring at each location should be 0.4.
```
# One way is to draw samples from a binomial distribution.
df['mine'] = np.random.binomial(1, 0.4, X*Y)
```
**53**. Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.)
```
# Here is one way to solve using merges.
# It's not necessarily the optimal way, just
# the solution I thought of first...
df['adjacent'] = \
    df.merge(df + [ 1,  1, 0], on=['x', 'y'], how='left')\
      .merge(df + [ 1, -1, 0], on=['x', 'y'], how='left')\
      .merge(df + [-1,  1, 0], on=['x', 'y'], how='left')\
      .merge(df + [-1, -1, 0], on=['x', 'y'], how='left')\
      .merge(df + [ 1,  0, 0], on=['x', 'y'], how='left')\
      .merge(df + [-1,  0, 0], on=['x', 'y'], how='left')\
      .merge(df + [ 0,  1, 0], on=['x', 'y'], how='left')\
      .merge(df + [ 0, -1, 0], on=['x', 'y'], how='left')\
      .iloc[:, 3:]\
      .sum(axis=1)
# An alternative solution is to pivot the DataFrame
# to form the "actual" grid of mines and use convolution.
# See https://github.com/jakevdp/matplotlib_pydata2013/blob/master/examples/minesweeper.py
from scipy.signal import convolve2d
mine_grid = df.pivot_table(columns='x', index='y', values='mine')
# Summing each 3x3 neighbourhood counts the centre square too,
# so subtract the grid itself to get adjacent-only counts.
counts = convolve2d(mine_grid.values, np.ones((3, 3)), mode='same').astype(int)
df['adjacent'] = (counts - mine_grid.values).ravel('F')
```
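To see why the convolution trick works, here is a tiny hand-checkable sketch (the 3x3 grid and single centre mine are made-up example data):

```python
import numpy as np
from scipy.signal import convolve2d

# Toy 3x3 grid with a single mine in the centre.
mines = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]])

# convolve2d with a 3x3 kernel of ones sums each square's 3x3
# neighbourhood, including the square itself; subtracting the
# grid leaves only the count of *adjacent* mines.
counts = convolve2d(mines, np.ones((3, 3), dtype=int), mode='same') - mines
```

Every square touching the centre sees exactly one mine, and the mined square itself sees zero.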
**54**. For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN.
```
df.loc[df['mine'] == 1, 'adjacent'] = np.nan
```
**55**. Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate.
```
df.drop('mine', axis=1)\
  .set_index(['y', 'x']).unstack()
```
## Plotting
### Visualize trends and patterns in data
Difficulty: *medium*
To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.
**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:
```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```
matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to ```plt```.
```%matplotlib inline``` tells the notebook to show plots inline, instead of creating them in a separate window.
```plt.style.use('ggplot')``` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.
For starters, make a scatter plot of this random data, but use black X's instead of the default markers.
```df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})```
Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck!
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
df.plot.scatter("xs", "ys", color = "black", marker = "x")
```
**57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.
(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)
*The chart doesn't have to be pretty: this isn't a course in data viz!*
```
df = pd.DataFrame({"productivity": [5,2,3,1,4,5,6,7,8,3,4,8,9],
                   "hours_in":     [1,9,6,5,3,9,2,9,1,7,4,2,2],
                   "happiness":    [2,1,3,2,3,1,2,3,1,2,2,1,3],
                   "caffeinated":  [0,0,1,1,0,0,0,0,1,1,0,1,0]})
```
```
df = pd.DataFrame({"productivity": [5,2,3,1,4,5,6,7,8,3,4,8,9],
                   "hours_in":     [1,9,6,5,3,9,2,9,1,7,4,2,2],
                   "happiness":    [2,1,3,2,3,1,2,3,1,2,2,1,3],
                   "caffeinated":  [0,0,1,1,0,0,0,0,1,1,0,1,0]})

df.plot.scatter("hours_in", "productivity", s=df.happiness * 30, c=df.caffeinated)
```
**58.** What if we want to plot multiple things? Pandas allows you to pass in a matplotlib *Axis* object for plots, and plots will also return an Axis object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)
```
df = pd.DataFrame({"revenue":     [57,68,63,71,72,90,80,62,59,51,47,52],
                   "advertising": [2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
                   "month":       range(12)})
```
```
df = pd.DataFrame({"revenue":     [57,68,63,71,72,90,80,62,59,51,47,52],
                   "advertising": [2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
                   "month":       range(12)})
ax = df.plot.bar("month", "revenue", color = "green")
df.plot.line("month", "advertising", secondary_y = True, ax = ax)
ax.set_xlim((-1,12))
```
Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.

This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this.
The below cell contains helper functions. Call ```day_stock_data()``` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call ```plot_candlestick(df)``` on your properly aggregated and formatted stock data to print the candlestick chart.
```
# This function is designed to create semi-interesting random stock price data
import numpy as np

def float_to_time(x):
    return str(int(x)) + ":" + str(int(x%1 * 60)).zfill(2) + ":" + str(int(x*60 % 1 * 60)).zfill(2)

def day_stock_data():
    # NYSE is open from 9:30 to 4:00
    time = 9.5
    price = 100
    results = [(float_to_time(time), price)]
    while time < 16:
        elapsed = np.random.exponential(.001)
        time += elapsed
        if time > 16:
            break
        price_diff = np.random.uniform(.999, 1.001)
        price *= price_diff
        results.append((float_to_time(time), price))
    df = pd.DataFrame(results, columns=['time', 'price'])
    df.time = pd.to_datetime(df.time)
    return df

def plot_candlestick(agg):
    fig, ax = plt.subplots()
    for time in agg.index:
        ax.plot([time.hour] * 2, agg.loc[time, ["high", "low"]].values, color="black")
        ax.plot([time.hour] * 2, agg.loc[time, ["open", "close"]].values, color=agg.loc[time, "color"], linewidth=10)
    ax.set_xlim((8, 16))
    ax.set_ylabel("Price")
    ax.set_xlabel("Hour")
    ax.set_title("OHLC of Stock Value During Trading Day")
    plt.show()
```
**59.** Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices
```
df = day_stock_data()
df.head()
df.set_index("time", inplace = True)
agg = df.resample("H").ohlc()
agg.columns = agg.columns.droplevel()
agg["color"] = (agg.close > agg.open).map({True:"green",False:"red"})
agg.head()
```
**60.** Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the ```plot_candlestick(df)``` function above, or matplotlib's [```plot``` documentation](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.plot.html) if you get stuck.
```
plot_candlestick(agg)
```
*More exercises to follow soon...*
# TensorFlow Tutorial #03
# PrettyTensor
by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)
/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)
## Introduction
The previous tutorial showed how to implement a Convolutional Neural Network in TensorFlow, which required low-level knowledge of how TensorFlow works. It was complicated and easy to make mistakes.
This tutorial shows how to use the add-on package for TensorFlow called [PrettyTensor](https://github.com/google/prettytensor), which is also developed by Google. PrettyTensor provides much simpler ways of constructing Neural Networks in TensorFlow, thus allowing us to focus on the idea we wish to implement and not worry so much about low-level implementation details. This also makes the source-code much shorter and easier to read and modify.
Most of the source-code in this tutorial is identical to Tutorial #02 except for the graph-construction which is now done using PrettyTensor, as well as some other minor changes.
This tutorial builds directly on Tutorial #02 and it is recommended that you study that tutorial first if you are new to TensorFlow. You should also be familiar with basic linear algebra, Python and the Jupyter Notebook editor.
## Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See the previous tutorial for a more detailed description of convolution.
```
from IPython.display import Image
Image('images/02_network_flowchart.png')
```
The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
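As a sanity check on the arithmetic above, the tensor shapes at each stage can be tabulated as follows (a sketch; the batch size of 32 is an arbitrary assumption, not part of the tutorial):

```python
# Rough shape bookkeeping for the network described above.
batch = 32
inp   = (batch, 28, 28, 1)   # input images
conv1 = (batch, 14, 14, 16)  # after conv1 + 2x2 max-pooling
conv2 = (batch, 7, 7, 36)    # after conv2 + 2x2 max-pooling

# Flattening conv2 (excluding the batch dimension) gives the
# input length of the first fully-connected layer.
flat_len = conv2[1] * conv2[2] * conv2[3]
```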
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
```
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
```
tf.__version__
```
PrettyTensor version:
```
pt.__version__
```
## Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
```
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
```
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
```
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
```
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
```
data.test.cls = np.argmax(data.test.labels, axis=1)
```
## Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
```
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
```
### Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
```
def plot_images(images, cls_true, cls_pred=None):
    assert len(images) == len(cls_true) == 9

    # Create figure with 3x3 sub-plots.
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)

    for i, ax in enumerate(axes.flat):
        # Plot image.
        ax.imshow(images[i].reshape(img_shape), cmap='binary')

        # Show true and predicted classes.
        if cls_pred is None:
            xlabel = "True: {0}".format(cls_true[i])
        else:
            xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])

        # Show the classes as the label on the x-axis.
        ax.set_xlabel(xlabel)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Plot a few images to see if data is correct
```
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
```
## TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
* Placeholder variables used for inputting data to the graph.
* Variables that are going to be optimized so as to make the convolutional network perform better.
* The mathematical formulas for the convolutional network.
* A cost measure that can be used to guide the optimization of the variables.
* An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
### Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
```
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
```
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
```
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
```
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
```
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
```
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
```
y_true_cls = tf.argmax(y_true, dimension=1)
```
## TensorFlow Implementation
This section shows the original source-code from Tutorial #02 which implements the Convolutional Neural Network directly in TensorFlow. The code is not actually used in this Notebook and is only meant for easy comparison to the PrettyTensor implementation below.
The thing to note here is how many lines of code there are and the low-level details of how TensorFlow stores its data and performs the computation. It is easy to make mistakes even for fairly small Neural Networks.
### Helper-functions
In the direct TensorFlow implementation, we first make some helper-functions which will be used several times in the graph construction.
These two functions create new variables in the TensorFlow graph that will be initialized with random values.
```
def new_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def new_biases(length):
    return tf.Variable(tf.constant(0.05, shape=[length]))
```
The following helper-function creates a new convolutional network. The input and output are 4-dimensional tensors (aka. 4-rank tensors). Note the low-level details of the TensorFlow API, such as the shape of the weights-variable. It is easy to make a mistake somewhere which may result in strange error-messages that are difficult to debug.
```
def new_conv_layer(input,              # The previous layer.
                   num_input_channels, # Num. channels in prev. layer.
                   filter_size,        # Width and height of filters.
                   num_filters,        # Number of filters.
                   use_pooling=True):  # Use 2x2 max-pooling.

    # Shape of the filter-weights for the convolution.
    # This format is determined by the TensorFlow API.
    shape = [filter_size, filter_size, num_input_channels, num_filters]

    # Create new weights aka. filters with the given shape.
    weights = new_weights(shape=shape)

    # Create new biases, one for each filter.
    biases = new_biases(length=num_filters)

    # Create the TensorFlow operation for convolution.
    # Note the strides are set to 1 in all dimensions.
    # The first and last stride must always be 1,
    # because the first is for the image-number and
    # the last is for the input-channel.
    # But e.g. strides=[1, 2, 2, 1] would mean that the filter
    # is moved 2 pixels across the x- and y-axis of the image.
    # The padding is set to 'SAME' which means the input image
    # is padded with zeroes so the size of the output is the same.
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')

    # Add the biases to the results of the convolution.
    # A bias-value is added to each filter-channel.
    layer += biases

    # Use pooling to down-sample the image resolution?
    if use_pooling:
        # This is 2x2 max-pooling, which means that we
        # consider 2x2 windows and select the largest value
        # in each window. Then we move 2 pixels to the next window.
        layer = tf.nn.max_pool(value=layer,
                               ksize=[1, 2, 2, 1],
                               strides=[1, 2, 2, 1],
                               padding='SAME')

    # Rectified Linear Unit (ReLU).
    # It calculates max(x, 0) for each input pixel x.
    # This adds some non-linearity to the formula and allows us
    # to learn more complicated functions.
    layer = tf.nn.relu(layer)

    # Note that ReLU is normally executed before the pooling,
    # but since relu(max_pool(x)) == max_pool(relu(x)) we can
    # save 75% of the relu-operations by max-pooling first.

    # We return both the resulting layer and the filter-weights
    # because we will plot the weights later.
    return layer, weights
```
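The claim that `relu(max_pool(x)) == max_pool(relu(x))` follows from ReLU being monotonically non-decreasing, and can be checked numerically. A small NumPy sketch (`relu` and `max_pool_2x2` are hypothetical stand-ins for the TensorFlow ops):

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def max_pool_2x2(a):
    # Non-overlapping 2x2 max-pooling of a 2-D array.
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A monotone non-decreasing function commutes with max,
# so applying relu before or after pooling gives the same result.
x = np.random.randn(8, 8)
same = np.allclose(relu(max_pool_2x2(x)), max_pool_2x2(relu(x)))
```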
The following helper-function flattens a 4-dim tensor to 2-dim so we can add fully-connected layers after the convolutional layers.
```
def flatten_layer(layer):
    # Get the shape of the input layer.
    layer_shape = layer.get_shape()

    # The shape of the input layer is assumed to be:
    # layer_shape == [num_images, img_height, img_width, num_channels]

    # The number of features is: img_height * img_width * num_channels
    # We can use a function from TensorFlow to calculate this.
    num_features = layer_shape[1:4].num_elements()

    # Reshape the layer to [num_images, num_features].
    # Note that we just set the size of the second dimension
    # to num_features and the size of the first dimension to -1
    # which means the size in that dimension is calculated
    # so the total size of the tensor is unchanged from the reshaping.
    layer_flat = tf.reshape(layer, [-1, num_features])

    # The shape of the flattened layer is now:
    # [num_images, img_height * img_width * num_channels]

    # Return both the flattened layer and the number of features.
    return layer_flat, num_features
```
The following helper-function creates a fully-connected layer.
```
def new_fc_layer(input,          # The previous layer.
                 num_inputs,     # Num. inputs from prev. layer.
                 num_outputs,    # Num. outputs.
                 use_relu=True): # Use Rectified Linear Unit (ReLU)?

    # Create new weights and biases.
    weights = new_weights(shape=[num_inputs, num_outputs])
    biases = new_biases(length=num_outputs)

    # Calculate the layer as the matrix multiplication of
    # the input and weights, and then add the bias-values.
    layer = tf.matmul(input, weights) + biases

    # Use ReLU?
    if use_relu:
        layer = tf.nn.relu(layer)

    return layer
```
### Graph Construction
The Convolutional Neural Network will now be constructed using the helper-functions above. Without the helper-functions this would have been very long and confusing.
Note that the following code will not actually be executed. It is just shown here for easy comparison to the PrettyTensor code below.
The previous tutorial used constants defined elsewhere so they could be changed easily. For example, instead of having `filter_size=5` as an argument to `new_conv_layer()` we had `filter_size=filter_size1` and then defined `filter_size1=5` elsewhere. This made it easier to change all the constants.
```
if False:  # Don't execute this! Just show it for easy comparison.
    # First convolutional layer.
    layer_conv1, weights_conv1 = \
        new_conv_layer(input=x_image,
                       num_input_channels=num_channels,
                       filter_size=5,
                       num_filters=16,
                       use_pooling=True)

    # Second convolutional layer.
    layer_conv2, weights_conv2 = \
        new_conv_layer(input=layer_conv1,
                       num_input_channels=16,
                       filter_size=5,
                       num_filters=36,
                       use_pooling=True)

    # Flatten layer.
    layer_flat, num_features = flatten_layer(layer_conv2)

    # First fully-connected layer.
    layer_fc1 = new_fc_layer(input=layer_flat,
                             num_inputs=num_features,
                             num_outputs=128,
                             use_relu=True)

    # Second fully-connected layer.
    layer_fc2 = new_fc_layer(input=layer_fc1,
                             num_inputs=128,
                             num_outputs=num_classes,
                             use_relu=False)

    # Predicted class-label.
    y_pred = tf.nn.softmax(layer_fc2)

    # Cross-entropy for the classification of each image.
    cross_entropy = \
        tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
                                                labels=y_true)

    # Loss aka. cost-measure.
    # This is the scalar value that must be minimized.
    loss = tf.reduce_mean(cross_entropy)
```
## PrettyTensor Implementation
This section shows how to make the exact same implementation of a Convolutional Neural Network using PrettyTensor.
The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Neural Network. This is a bit similar to the helper-functions we implemented above, but it is even simpler because PrettyTensor also keeps track of each layer's input and output dimensionalities, etc.
```
x_pretty = pt.wrap(x_image)
```
Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
```
with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=16, name='layer_conv1').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=36, name='layer_conv2').\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=128, name='layer_fc1').\
        softmax_classifier(num_classes=num_classes, labels=y_true)
```
That's it! We have now created the exact same Convolutional Neural Network in a few simple lines of code that required many complex lines of code in the direct TensorFlow implementation.
Using PrettyTensor instead of TensorFlow, we can clearly see the network structure and how the data flows through the network. This allows us to focus on the main ideas of the Neural Network rather than low-level implementation details. It is simple and pretty!
### Getting the Weights
Unfortunately, not everything is pretty when using PrettyTensor.
Further below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using PrettyTensor, all the variables of the layers are created indirectly by PrettyTensor. We therefore have to retrieve the variables from TensorFlow.
We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). PrettyTensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
```
def get_weights_variable(layer_name):
    # Retrieve an existing variable named 'weights' in the scope
    # with the given layer_name.
    # This is awkward because the TensorFlow function was
    # really intended for another purpose.

    with tf.variable_scope(layer_name, reuse=True):
        variable = tf.get_variable('weights')

    return variable
```
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
```
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
```
### Optimization Method
PrettyTensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the Neural Network to classify the input images.
It is unclear from the documentation for PrettyTensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
```
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
```
### Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the Neural Network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
```
y_pred_cls = tf.argmax(y_pred, dimension=1)
```
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
```
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
```
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
```
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
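The cast-and-average trick is easy to see outside TensorFlow; a NumPy sketch with made-up prediction results:

```python
import numpy as np

# Hypothetical vector of per-image correctness flags.
correct = np.array([True, False, True, True])

# Casting booleans to floats maps False -> 0.0 and True -> 1.0,
# so the mean is exactly the fraction of correct predictions.
acc = correct.astype(np.float32).mean()
```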
## TensorFlow Run
### Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
```
session = tf.Session()
```
### Initialize variables
The variables for `weights` and `biases` must be initialized before we start optimizing them.
```
session.run(tf.global_variables_initializer())
```
### Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
```
train_batch_size = 64
```
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
```
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
    # Ensure we update the global variable rather than a local copy.
    global total_iterations
    # Start-time used for printing time-usage below.
    start_time = time.time()
    for i in range(total_iterations,
                   total_iterations + num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)
        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}
        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)
        # Print status every 100 iterations.
        if i % 100 == 0:
            # Calculate the accuracy on the training-set.
            acc = session.run(accuracy, feed_dict=feed_dict_train)
            # Message for printing.
            msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
            # Print it.
            print(msg.format(i + 1, acc))
    # Update the total number of iterations performed.
    total_iterations += num_iterations
    # Ending time.
    end_time = time.time()
    # Difference between start and end-times.
    time_dif = end_time - start_time
    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
```
### Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
```
def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.
    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.
    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.
    # Negate the boolean array.
    incorrect = (correct == False)
    # Get the images from the test-set that have been
    # incorrectly classified.
    images = data.test.images[incorrect]
    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]
    # Get the true classes for those images.
    cls_true = data.test.cls[incorrect]
    # Plot the first 9 images.
    plot_images(images=images[0:9],
                cls_true=cls_true[0:9],
                cls_pred=cls_pred[0:9])
```
### Helper-function to plot confusion matrix
```
def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.
    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.
    # Get the true classifications for the test-set.
    cls_true = data.test.cls
    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_true,
                          y_pred=cls_pred)
    # Print the confusion matrix as text.
    print(cm)
    # Plot the confusion matrix as an image.
    plt.matshow(cm)
    # Make various adjustments to the plot.
    plt.colorbar()
    tick_marks = np.arange(num_classes)
    plt.xticks(tick_marks, range(num_classes))
    plt.yticks(tick_marks, range(num_classes))
    plt.xlabel('Predicted')
    plt.ylabel('True')
    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
Computing the classifications for all the images in the test-set takes a while, which is why the results are re-used: the helper functions above are called directly from this function, so the classifications don't have to be recalculated by each of them.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
```
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):
    # Number of images in the test-set.
    num_test = len(data.test.images)
    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=int)
    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.
    # The starting index for the next batch is denoted i.
    i = 0
    while i < num_test:
        # The ending index for the next batch is denoted j.
        j = min(i + test_batch_size, num_test)
        # Get the images from the test-set between index i and j.
        images = data.test.images[i:j, :]
        # Get the associated labels.
        labels = data.test.labels[i:j, :]
        # Create a feed-dict with these images and labels.
        feed_dict = {x: images,
                     y_true: labels}
        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j
    # Convenience variable for the true class-numbers of the test-set.
    cls_true = data.test.cls
    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)
    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()
    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / num_test
    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))
    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)
    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
```
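The start-index/end-index bookkeeping in `print_test_accuracy()` is a general batching pattern; stripped of the TensorFlow calls it looks like this (`batch_ranges` is an illustrative helper, not part of the original code):

```python
def batch_ranges(num_items, batch_size):
    """Yield (start, end) index pairs covering 0..num_items in batches."""
    i = 0
    while i < num_items:
        # The last batch is smaller when batch_size does not divide num_items.
        j = min(i + batch_size, num_items)
        yield i, j
        i = j

print(list(batch_ranges(10, 4)))  # [(0, 4), (4, 8), (8, 10)]
```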
## Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
```
print_test_accuracy()
```
## Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
```
optimize(num_iterations=1)
print_test_accuracy()
```
## Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
```
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
```
## Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
```
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
```
## Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
```
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
```
## Visualization of Weights and Layers
When the Convolutional Neural Network was implemented directly in TensorFlow, we could easily plot both the convolutional weights and the images that were output from the different layers. When using PrettyTensor instead, we can also retrieve the weights as shown above, but we cannot so easily retrieve the images that are output from the convolutional layers. So in the following we only plot the weights.
### Helper-function for plotting convolutional weights
```
def plot_conv_weights(weights, input_channel=0):
    # Assume weights are TensorFlow ops for 4-dim variables
    # e.g. weights_conv1 or weights_conv2.
    # Retrieve the values of the weight-variables from TensorFlow.
    # A feed-dict is not necessary because nothing is calculated.
    w = session.run(weights)
    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    w_max = np.max(w)
    # Number of filters used in the conv. layer.
    num_filters = w.shape[3]
    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))
    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)
    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # See new_conv_layer() for details on the format
            # of this 4-dim tensor.
            img = w[:, :, input_channel, i]
            # Plot image.
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')
        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])
    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
```
plot_conv_weights(weights=weights_conv1)
```
### Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.
Note again that positive weights are red and negative weights are blue.
```
plot_conv_weights(weights=weights_conv2, input_channel=0)
```
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
```
plot_conv_weights(weights=weights_conv2, input_channel=1)
```
### Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
```
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
```
## Conclusion
PrettyTensor allows you to implement Neural Networks using a much simpler syntax than a direct implementation in TensorFlow. This lets you focus on your ideas rather than low-level implementation details. It makes the code much shorter and easier to understand, and you will make fewer mistakes.
However, there are some inconsistencies and awkward designs in PrettyTensor, and it can be difficult to learn because the documentation is short and confusing. Hopefully this gets better in the future (this was written in July 2016).
There are alternatives to PrettyTensor including [TFLearn](https://github.com/tflearn/tflearn) and [Keras](https://github.com/fchollet/keras).
## Exercises
These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
You may want to backup this Notebook before making any changes.
* Change the activation function to sigmoid for all the layers.
* Use sigmoid in some layers and relu in others. Can you use `defaults_scope` for this?
* Use l2loss in all layers. Then try it for only some of the layers.
* Use PrettyTensor's reshape for `x_image` instead of TensorFlow's. Is one better than the other?
* Add a dropout-layer after the fully-connected layer. If you want a different `keep_prob` during training and testing then you will need a placeholder variable and set it in the feed-dict.
* Replace the 2x2 max-pooling layers with stride=2 in the convolutional layers. Is there a difference in classification accuracy? What if you optimize it again and again? The difference is random, so how would you measure if there really is a difference? What are the pros and cons of using max-pooling vs. stride in the conv-layer?
* Change the parameters for the layers, e.g. the kernel, depth, size, etc. What is the difference in time usage and classification accuracy?
* Add and remove some convolutional and fully-connected layers.
* What is the simplest network you can design that still performs well?
* Retrieve the bias-values for the convolutional layers and print them. See `get_weights_variable()` for inspiration.
* Remake the program yourself without looking too much at this source-code.
* Explain to a friend how the program works.
## License (MIT)
Copyright (c) 2016 by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
import os
from google.colab import drive
drive.mount('/content/gdrive')
%cd '/content/gdrive/My Drive/machine'
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import RandomizedSearchCV
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor
data = pd.read_excel('Data_Train.xlsx')
data
data.info()
data.isnull().sum()
data = data.dropna()
data
```
## Extracting the day and month
```
data["Journey_day"] = pd.to_datetime(data.Date_of_Journey, format="%d/%m/%Y").dt.day
data["Journey_month"] = pd.to_datetime(data.Date_of_Journey, format="%d/%m/%Y").dt.month
data = data.drop('Date_of_Journey',axis=1)
data
# Similar to Date_of_Journey we can extract values from Dep_Time
# Extracting Hours
data["Dep_hour"] = pd.to_datetime(data["Dep_Time"]).dt.hour
# Extracting Minutes
data["Dep_min"] = pd.to_datetime(data["Dep_Time"]).dt.minute
# Now we can drop Dep_Time as it is of no use
data.drop(["Dep_Time"], axis = 1, inplace = True)
# Similar to Date_of_Journey we can extract values from Arrival_Time
# Extracting Hours
data["Arrival_hour"] = pd.to_datetime(data.Arrival_Time).dt.hour
# Extracting Minutes
data["Arrival_min"] = pd.to_datetime(data.Arrival_Time).dt.minute
# Now we can drop Arrival_Time as it is of no use
data.drop(["Arrival_Time"], axis = 1, inplace = True)
data
```
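The `.dt.day` / `.dt.month` accessors used above work on any datetime Series; a tiny self-contained example with made-up dates:

```python
import pandas as pd

# Two made-up journey dates in the same day/month/year format as the dataset.
s = pd.Series(["24/03/2019", "1/05/2019"])
journey = pd.to_datetime(s, format="%d/%m/%Y")

print(list(journey.dt.day))    # [24, 1]
print(list(journey.dt.month))  # [3, 5]
```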
## Extracting hours and minutes separately
```
# Duration is the difference between the departure time and the arrival time
# Assigning and converting Duration column into list
duration = list(data["Duration"])
for i in range(len(duration)):
    if len(duration[i].split()) != 2:    # Check if duration contains only hours or only minutes
        if "h" in duration[i]:
            duration[i] = duration[i].strip() + " 0m"   # Adds 0 minutes
        else:
            duration[i] = "0h " + duration[i]           # Adds 0 hours
duration_hours = []
duration_mins = []
for i in range(len(duration)):
    duration_hours.append(int(duration[i].split(sep="h")[0]))               # Extracts hours from duration
    duration_mins.append(int(duration[i].split(sep="m")[0].split()[-1]))    # Extracts minutes from duration
data["Duration_hours"] = duration_hours
data["Duration_mins"] = duration_mins
data.drop(["Duration"], axis = 1, inplace = True)
data["Airline"].value_counts()
# Airline vs Price
sns.catplot(y = "Price", x = "Airline", data = data.sort_values("Price", ascending = False), kind="boxen", height = 6, aspect = 3)
plt.show()
# As Airline is Nominal Categorical data we will perform OneHotEncoding
Airline = data[["Airline"]]
Airline = pd.get_dummies(Airline, drop_first= True)
Airline.head()
data["Source"].value_counts()
# Source vs Price
sns.catplot(y = "Price", x = "Source", data = data.sort_values("Price", ascending = False), kind="boxen", height = 4, aspect = 3)
plt.show()
# As Source is Nominal Categorical data we will perform OneHotEncoding
Source = data[["Source"]]
Source = pd.get_dummies(Source, drop_first= True)
Source.head()
data["Destination"].value_counts()
# As Destination is Nominal Categorical data we will perform OneHotEncoding
Destination = data[["Destination"]]
Destination = pd.get_dummies(Destination, drop_first = True)
Destination.head()
# Additional_Info contains almost 80% no_info
# Route and Total_Stops are related to each other
data.drop(["Route", "Additional_Info"], axis = 1, inplace = True)
data["Total_Stops"].value_counts()
# As this is case of Ordinal Categorical type we perform LabelEncoder
# Here Values are assigned with corresponding keys
data.replace({"non-stop": 0, "1 stop": 1, "2 stops": 2, "3 stops": 3, "4 stops": 4}, inplace = True)
data
data.drop(["Airline", "Source", "Destination"], axis = 1, inplace = True)
# Concatenate dataframe --> train_data + Airline + Source + Destination
data = pd.concat([data, Airline, Source, Destination], axis = 1)
data
test_data = pd.read_excel('Test_set.xlsx')
```
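The duration normalization used in this notebook (pad "19h" to "19h 0m" and "45m" to "0h 45m", then split) can be wrapped in a single helper; `parse_duration` is a sketch written for illustration:

```python
def parse_duration(s):
    """Parse strings like '2h 50m', '19h' or '45m' into (hours, minutes)."""
    s = s.strip()
    if len(s.split()) != 2:  # only hours or only minutes are present
        s = s + " 0m" if "h" in s else "0h " + s
    hours_part, mins_part = s.split()
    return int(hours_part.rstrip("h")), int(mins_part.rstrip("m"))

print(parse_duration("2h 50m"))  # (2, 50)
print(parse_duration("19h"))     # (19, 0)
print(parse_duration("45m"))     # (0, 45)
```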
## Doing the Same Preprocessing for test data
```
# Preprocessing
print("Test data Info")
print("-"*75)
print(test_data.info())
print()
print()
print("Null values :")
print("-"*75)
test_data.dropna(inplace = True)
print(test_data.isnull().sum())
# EDA
# Date_of_Journey
test_data["Journey_day"] = pd.to_datetime(test_data.Date_of_Journey, format="%d/%m/%Y").dt.day
test_data["Journey_month"] = pd.to_datetime(test_data["Date_of_Journey"], format = "%d/%m/%Y").dt.month
test_data.drop(["Date_of_Journey"], axis = 1, inplace = True)
# Dep_Time
test_data["Dep_hour"] = pd.to_datetime(test_data["Dep_Time"]).dt.hour
test_data["Dep_min"] = pd.to_datetime(test_data["Dep_Time"]).dt.minute
test_data.drop(["Dep_Time"], axis = 1, inplace = True)
# Arrival_Time
test_data["Arrival_hour"] = pd.to_datetime(test_data.Arrival_Time).dt.hour
test_data["Arrival_min"] = pd.to_datetime(test_data.Arrival_Time).dt.minute
test_data.drop(["Arrival_Time"], axis = 1, inplace = True)
# Duration
duration = list(test_data["Duration"])
for i in range(len(duration)):
    if len(duration[i].split()) != 2:    # Check if duration contains only hours or only minutes
        if "h" in duration[i]:
            duration[i] = duration[i].strip() + " 0m"   # Adds 0 minutes
        else:
            duration[i] = "0h " + duration[i]           # Adds 0 hours
duration_hours = []
duration_mins = []
for i in range(len(duration)):
    duration_hours.append(int(duration[i].split(sep="h")[0]))               # Extracts hours from duration
    duration_mins.append(int(duration[i].split(sep="m")[0].split()[-1]))    # Extracts minutes from duration
# Adding Duration column to test set
test_data["Duration_hours"] = duration_hours
test_data["Duration_mins"] = duration_mins
test_data.drop(["Duration"], axis = 1, inplace = True)
# Categorical data
print("Airline")
print("-"*75)
print(test_data["Airline"].value_counts())
Airline = pd.get_dummies(test_data["Airline"], drop_first= True)
print()
print("Source")
print("-"*75)
print(test_data["Source"].value_counts())
Source = pd.get_dummies(test_data["Source"], drop_first= True)
print()
print("Destination")
print("-"*75)
print(test_data["Destination"].value_counts())
Destination = pd.get_dummies(test_data["Destination"], drop_first = True)
# Additional_Info contains almost 80% no_info
# Route and Total_Stops are related to each other
test_data.drop(["Route", "Additional_Info"], axis = 1, inplace = True)
# Replacing Total_Stops
test_data.replace({"non-stop": 0, "1 stop": 1, "2 stops": 2, "3 stops": 3, "4 stops": 4}, inplace = True)
# Concatenate dataframe --> test_data + Airline + Source + Destination
data_test = pd.concat([test_data, Airline, Source, Destination], axis = 1)
data_test.drop(["Airline", "Source", "Destination"], axis = 1, inplace = True)
print()
print()
print("Shape of test data : ", data_test.shape)
data_test
data.columns
X = data.loc[:, ['Total_Stops', 'Journey_day', 'Journey_month', 'Dep_hour',
'Dep_min', 'Arrival_hour', 'Arrival_min', 'Duration_hours',
'Duration_mins', 'Airline_Air India', 'Airline_GoAir', 'Airline_IndiGo',
'Airline_Jet Airways', 'Airline_Jet Airways Business',
'Airline_Multiple carriers',
'Airline_Multiple carriers Premium economy', 'Airline_SpiceJet',
'Airline_Trujet', 'Airline_Vistara', 'Airline_Vistara Premium economy',
'Source_Chennai', 'Source_Delhi', 'Source_Kolkata', 'Source_Mumbai',
'Destination_Cochin', 'Destination_Delhi', 'Destination_Hyderabad',
'Destination_Kolkata', 'Destination_New Delhi']]
X.head()
y = data['Price']
# Finds correlation between Independent and dependent attributes
plt.figure(figsize = (18,18))
sns.heatmap(data.corr(), annot = True)
plt.show()
# Important feature using ExtraTreesRegressor
selection = ExtraTreesRegressor()
selection.fit(X, y)
#plot graph of feature importances for better visualization
plt.figure(figsize = (12,8))
feat_importances = pd.Series(selection.feature_importances_, index=X.columns)
feat_importances.nlargest(20).plot(kind='barh')
plt.show()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
```
## Model Evaluation
```
rf.score(X_train, y_train)
rf.score(X_test, y_test)
sns.distplot(y_test-y_pred)
plt.show()
plt.scatter(y_test, y_pred, alpha = 0.5)
plt.xlabel("y_test")
plt.ylabel("y_pred")
plt.show()
print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
metrics.r2_score(y_test, y_pred)
```
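As a sanity check on the metrics printed above, MAE, MSE and RMSE can be computed by hand with NumPy on toy values; RMSE is simply the square root of MSE:

```python
import numpy as np

# Made-up ground-truth prices and predictions.
y_test = np.array([3000.0, 4500.0, 7000.0])
y_pred = np.array([3100.0, 4400.0, 6800.0])

mae = np.mean(np.abs(y_test - y_pred))   # mean absolute error
mse = np.mean((y_test - y_pred) ** 2)    # mean squared error
rmse = np.sqrt(mse)                      # RMSE = sqrt(MSE)

print(mae, mse, rmse)
```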
## Trying out Multiple Algorithms
```
svr = LinearSVR()
dtr = DecisionTreeRegressor()
gbr = GradientBoostingRegressor()
knr = KNeighborsRegressor()
xgb = XGBRegressor()
models = list([svr,dtr,gbr,knr,xgb])
%%time
model_rmse = []
model_r2 = []
for i in models:
    i.fit(X_train, y_train)
    i_pred = i.predict(X_test)
    model_rmse.append(np.sqrt(metrics.mean_squared_error(y_test, i_pred)))
    model_r2.append(metrics.r2_score(y_test, i_pred))
model_r2
model_rmse
```
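To read `model_rmse` and `model_r2` side by side, the two lists can be collected into a DataFrame; the scores below are made up for illustration, not the notebook's actual results:

```python
import pandas as pd

model_names = ["LinearSVR", "DecisionTree", "GradientBoosting", "KNeighbors", "XGB"]
model_rmse = [4500.0, 2300.0, 2100.0, 3200.0, 1900.0]   # made-up values
model_r2 = [0.05, 0.71, 0.78, 0.55, 0.84]               # made-up values

# One row per model, sortable by either metric.
results = pd.DataFrame({"model": model_names, "rmse": model_rmse, "r2": model_r2})
print(results.sort_values("rmse").iloc[0]["model"])  # XGB has the lowest RMSE here
```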
## Hyperparameter Tuning
```
#Randomized Search CV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 100, stop = 1200, num = 12)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(5, 30, num = 6)]
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10, 15, 100]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 5, 10]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf}
# Random search of parameters, using 5 fold cross validation,
# search across 100 different combinations
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid,scoring='neg_mean_squared_error', n_iter = 10, cv = 5, verbose=2, random_state=42, n_jobs = 1)
rf_random.fit(X_train,y_train)
np.sqrt(abs(rf_random.best_score_))
rf_random.best_params_
rf = RandomForestRegressor(max_depth=20, max_features='auto', min_samples_split=15, min_samples_leaf=1, n_estimators=700)
rf.fit(X_train,y_train)
prediction = rf.predict(X_test)
print(metrics.r2_score(y_test,prediction))
print(np.sqrt(metrics.mean_squared_error(y_test,prediction)))
#Saving the model
import pickle
pickle.dump(rf,open('flight_price.pkl','wb'))
model = pickle.load(open('flight_price.pkl','rb'))
prediction = model.predict(data_test)
```
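For intuition, the sampling step that `RandomizedSearchCV` performs (evaluate a fixed number of randomly drawn parameter combinations instead of the full grid) can be sketched without sklearn; `sample_params` is a hypothetical helper written for illustration only:

```python
import random

param_grid = {
    "n_estimators": [100, 300, 700, 1200],
    "max_depth": [5, 10, 20, 30],
    "min_samples_leaf": [1, 2, 5, 10],
}

def sample_params(grid, n_iter, seed=42):
    """Draw n_iter random combinations from the grid (with replacement)."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in grid.items()} for _ in range(n_iter)]

candidates = sample_params(param_grid, n_iter=10)
print(len(candidates))  # 10 combinations instead of the full 4*4*4 = 64 grid
```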
# Notebook Segmentation
## Detectron2 baseline
This notebook presents a baseline model for segmenting text in school notebooks using the detectron2 framework. You can (and are encouraged to) use other models (e.g. UNET, mmdet) or write your own from scratch.
# 0. Installing the libraries
Installation of the libraries this baseline runs under.
```
!nvidia-smi
# !pip install gdown
# !gdown https://drive.google.com/uc?id=1VOojDMJe7RAxryQ2QKXrqA7CvhsnzJ_z
# !rm -rf __MACOSX
# !rm -rf data
# %%capture
# !unzip -u /home/jovyan/nto_final_data.zip
# !mv data/train_segmentation data/train
# !pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install git+https://github.com/facebookresearch/detectron2.git
# !pip install opencv-python
# !pip install tensorflow==2.1.0
# %%capture
# !pip install albumentations==0.4.6
# !pip install pycocotools
# !pip install imutils
# !git clone https://github.com/MarkPotanin/copy_paste_aug_detectron2.git
# !cp ./copy_paste_aug_detectron2/coco.py ./
# !cp ./copy_paste_aug_detectron2/copy_paste.py ./
# !cp ./copy_paste_aug_detectron2/visualize.py ./
```
## 1. Load the libraries needed to build and train the model
```
import cv2
import random
import json
import os
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore")
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
import shutil
import torch, torchvision
import detectron2
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog,DatasetCatalog
from detectron2.data.datasets import register_coco_instances, load_coco_json
from detectron2.data import detection_utils as utils
from detectron2.engine import DefaultTrainer
from detectron2.engine import HookBase
from detectron2.evaluation.evaluator import DatasetEvaluator
import pycocotools.mask as mask_util
from matplotlib import pyplot as plt
from tqdm import tqdm
import numpy as np
import gc
import logging
logger = logging.getLogger('detectron2')
logger.setLevel(logging.CRITICAL)
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model
from detectron2.evaluation import COCOEvaluator
import detectron2.data.transforms as T
from detectron2.data import build_detection_train_loader, build_detection_test_loader
import copy
import json
from detectron2.evaluation import inference_on_dataset
def clear_cache():
    gc.collect()
    torch.cuda.empty_cache()
    gc.collect()
```
Before moving on to loading the data, let's check whether GPU resources are available to us.
```
print('GPU: ' + str(torch.cuda.is_available()))
```
# 2. Validation dataset
To validate our models, it is worth carving a validation set out of the training data. We split the dataset into two parts, one for training and one for validation, by simply writing the original annotation information separately into two new annotation files.
```
# Load the train annotations
with open('data/train/annotations.json') as f:
annotations = json.load(f)
len(annotations['images'])
# Empty dict for the validation annotations
annotations_val = {}
# Same category list as in train
annotations_val['categories'] = annotations['categories']
# Empty dict for the new train annotations
annotations_train = {}
# Same category list as in train
annotations_train['categories'] = annotations['categories']
# Put every 10th image from the original train into validation, the rest into the new train
annotations_val['images'] = []
annotations_train['images'] = []
for num, img in enumerate(annotations['images']):
    if num % 10 == 0:
        annotations_val['images'].append(img)
    else:
        annotations_train['images'].append(img)
# Keep in the validation annotation list only the annotations that belong to validation images,
# and in the new train annotation list only those that belong to its images
val_img_id = [i['id'] for i in annotations_val['images']]
train_img_id = [i['id'] for i in annotations_train['images']]
annotations_val['annotations'] = []
annotations_train['annotations'] = []
for annot in annotations['annotations']:
    if annot['image_id'] in val_img_id:
        annotations_val['annotations'].append(annot)
    elif annot['image_id'] in train_img_id:
        annotations_train['annotations'].append(annot)
    else:
        print('Annotation belongs to neither split')
for i, element in enumerate(annotations_train["images"]):
    if element["file_name"] == "41_3.JPG":
        print(element["id"])
        del annotations_train["images"][i]
for i, element in enumerate(annotations_train["annotations"]):
    if element["image_id"] == 405:
        print("Done")
        del annotations_train["annotations"][i]
for i, element in enumerate(annotations_val["images"]):
    if element["file_name"] == "41_3.JPG":
        print(element["id"])
        del annotations_val["images"][i]
for i, element in enumerate(annotations_val["annotations"]):
    if element["image_id"] == 405:
        print("Done")
        del annotations_val["annotations"][i]
try: os.remove("data/train/images/41_3.JPG")
except: pass
clear_cache()
```
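The split performed above (every 10th image to validation) can also be written as a small standalone helper; `split_every_nth` is illustrative, not part of the notebook:

```python
def split_every_nth(images, n=10):
    """Every n-th image (indices 0, n, 2n, ...) goes to validation, the rest to train."""
    val, train = [], []
    for num, img in enumerate(images):
        (val if num % n == 0 else train).append(img)
    return train, val

# With 654 images this reproduces a 588/66 split.
images = ["img_{}.jpg".format(i) for i in range(654)]
train, val = split_every_nth(images)
print(len(train), len(val))  # 588 66
```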
Done! The annotations for validation and for the new training set are ready; now we simply save them in json format and put them in their folders. We name the files annotations_new.json so that the new set of train annotations (without the val subset) does not overwrite the original annotations.
```
if not os.path.exists('data/val'):
os.makedirs('data/val')
if not os.path.exists('data/val/images'):
os.makedirs('data/val/images')
```
Copy the images that belong to validation into the val/images folder.
```
for i in annotations_val['images']:
shutil.copy('data/train/images/'+i['file_name'],'data/val/images/')
```
Write the new annotation files for train and val.
```
with open('data/val/annotations_new.json', 'w') as outfile:
json.dump(annotations_val, outfile)
with open('data/train/annotations_new.json', 'w') as outfile:
json.dump(annotations_train, outfile)
```
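A quick sanity check of the save format: `json.dump` followed by `json.load` round-trips the annotation dict losslessly (written to a temporary file here, so the notebook's data/ folder is untouched):

```python
import json
import os
import tempfile

# Minimal made-up COCO-style annotation dict.
ann = {"categories": [{"id": 1, "name": "text"}], "images": [], "annotations": []}
path = os.path.join(tempfile.gettempdir(), "annotations_demo.json")

with open(path, "w") as f:
    json.dump(ann, f)
with open(path) as f:
    loaded = json.load(f)

print(loaded == ann)  # True: the round trip preserves the structure
```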
# 3. Registering the dataset
Register the splits in detectron2 so they can later be fed into model training.
```
for d in ['train', 'val']:
    DatasetCatalog.register("my_dataset_" + d,
                            lambda d=d: load_coco_json("./data/{}/annotations_new.json".format(d),
                                                       image_root="./data/train/images",
                                                       dataset_name="my_dataset_" + d,
                                                       extra_annotation_keys=['bbox_mode']))
```
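The `lambda d=d:` default argument in the registration loop is deliberate: a plain closure would see only the final value of the loop variable once the loop has finished. A minimal demonstration of the difference:

```python
# Late binding: all three lambdas share the loop variable and see its final value.
late = [lambda: d for d in ["train", "val", "test"]]
print([f() for f in late])   # ['test', 'test', 'test']

# A default argument captures the current value at definition time.
bound = [lambda d=d: d for d in ["train", "val", "test"]]
print([f() for f in bound])  # ['train', 'val', 'test']
```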
Once registered, the splits can be loaded so we can inspect them visually. First, load the training set into **dataset_dicts_train**.
```
dataset_dicts_train = DatasetCatalog.get("my_dataset_train")
train_metadata = MetadataCatalog.get("my_dataset_train")
```
And the validation set into **dataset_dicts_val**.
```
dataset_dicts_val = DatasetCatalog.get("my_dataset_val")
val_metadata = MetadataCatalog.get("my_dataset_val")
```
Let's look at the size of the resulting datasets; in Python this is done with the built-in **len()** function.
```
print('Training set size (images): {}'.format(len(dataset_dicts_train)))
print('Validation set size (images): {}'.format(len(dataset_dicts_val)))
```
So we have 588 images for training and 66 for checking quality.
**Let's look at the annotated photos from the validation set.**
```
import os
from ipywidgets import interact  # provides the @interact decorator
@interact
def show_images(file=range(len(dataset_dicts_val))):
example = dataset_dicts_val[file]
image = utils.read_image(example["file_name"], format="RGB")
plt.figure(figsize=(3,3),dpi=200)
visualizer = Visualizer(image[:, :, ::-1], metadata=val_metadata, scale=0.5)
vis = visualizer.draw_dataset_dict(example)
plt.imshow(vis.get_image()[:, :,::-1])
plt.show()
dataset_dicts_val[0]["file_name"]
```
# Copy-Paste Augmentation
```
# import scripts
# https://github.com/MarkPotanin/copy_paste_aug_detectron2/blob/main/detectron2_copypaste.ipynb
from copy_paste import CopyPaste
from coco import CocoDetectionCP
import albumentations as A
from pycocotools import mask
from skimage import measure
aug_list = [A.Resize(1960, 1960),\
A.OneOf([A.HorizontalFlip(),
A.RandomRotate90()], p=0.75),
A.OneOf([A.Blur(),
A.MotionBlur(),
A.GaussNoise(),
A.ImageCompression(quality_lower=75)],p=0.5),
CopyPaste(blend=True, sigma=1, pct_objects_paste=0.3, p=1.0) #pct_objects_paste is a guess
]
transform = A.Compose(
aug_list, bbox_params=A.BboxParams(format="coco")
)
data = CocoDetectionCP(
'./data/train/images',
'./data/train/annotations_new.json',
transform
)
data_id_to_num = {i:q for q,i in enumerate(data.ids)}
ALL_IDS = list(data_id_to_num.keys())
dataset_dicts_train = [i for i in dataset_dicts_train if i['image_id'] in ALL_IDS]
BOX_MODE = dataset_dicts_train[0]['annotations'][0]['bbox_mode']
class MyMapper:
"""Mapper which uses `detectron2.data.transforms` augmentations"""
def __init__(self, cfg, is_train: bool = True):
self.is_train = is_train
mode = "training" if is_train else "inference"
#print(f"[MyDatasetMapper] Augmentations used in {mode}: {self.augmentations}")
def __call__(self, dataset_dict):
dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
img_id = dataset_dict['image_id']
aug_sample = data[data_id_to_num[img_id]]
image = aug_sample['image']
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
bboxes = aug_sample['bboxes']
box_classes = np.array([b[-2] for b in bboxes])
boxes = np.stack([b[:4] for b in bboxes], axis=0)
mask_indices = np.array([b[-1] for b in bboxes])
masks = aug_sample['masks']
annos = []
for enum,index in enumerate(mask_indices):
curr_mask = masks[index]
fortran_ground_truth_binary_mask = np.asfortranarray(curr_mask)
encoded_ground_truth = mask.encode(fortran_ground_truth_binary_mask)
ground_truth_area = mask.area(encoded_ground_truth)
ground_truth_bounding_box = mask.toBbox(encoded_ground_truth)
contours = measure.find_contours(curr_mask, 0.5)
annotation = {
"segmentation": [],
"iscrowd": 0,
"bbox": ground_truth_bounding_box.tolist(),
"category_id": train_metadata.thing_dataset_id_to_contiguous_id[box_classes[enum]] ,
"bbox_mode":BOX_MODE
}
for contour in contours:
contour = np.flip(contour, axis=1)
segmentation = contour.ravel().tolist()
annotation["segmentation"].append(segmentation)
annos.append(annotation)
image_shape = image.shape[:2] # h, w
instances = utils.annotations_to_instances(annos, image_shape)
dataset_dict["instances"] = utils.filter_empty_instances(instances)
clear_cache()
return dataset_dict
```
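For intuition about what `mask.toBbox` returns in the loop above, the tight box around a binary mask can be sketched in pure NumPy (`mask_to_bbox` is a hypothetical helper for illustration, not the pycocotools implementation):

```python
import numpy as np

def mask_to_bbox(binary_mask):
    # Tight [x, y, width, height] box around the nonzero pixels of a mask,
    # in the same COCO-style convention that mask.toBbox uses.
    ys, xs = np.nonzero(binary_mask)
    x0, y0 = xs.min(), ys.min()
    return [float(x0), float(y0), float(xs.max() - x0 + 1), float(ys.max() - y0 + 1)]

m = np.zeros((8, 8), dtype=np.uint8)
m[2:5, 3:7] = 1  # a 3x4 rectangle of foreground pixels
print(mask_to_bbox(m))  # -> [3.0, 2.0, 4.0, 3.0]
```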
# 4. Model Training
**4.1. Define the configuration**
Before we start working with the model itself, we need to define its parameters and the training specification.
Create a configuration and load the model architecture with weights pre-trained for object detection on COCO, a dataset containing 80 popular object categories and more than 300,000 images.
```
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
# cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = "/home/jovyan/model_final (1).pth"
```
You can also browse other architectures in the [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md).
Now we set the parameters of the model itself and of its training.
```
height, width = 10000, 10000
for element in annotations_train["images"]:
height = min(height, element["height"])
width = min(width, element["width"])
print(height, width)
# Load the names of the training and test sets into the config
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.TEST.EVAL_PERIOD = 20
# cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 64
# It often makes sense to shrink the images somewhat so that training runs
# faster, so we can specify the sizes to which the smallest and largest
# sides of the original image will be resized.
cfg.INPUT.MIN_SIZE_TRAIN = 1960
cfg.INPUT.MAX_SIZE_TRAIN = 1960
cfg.INPUT.MIN_SIZE_TEST = cfg.INPUT.MIN_SIZE_TRAIN
cfg.INPUT.MAX_SIZE_TEST = cfg.INPUT.MAX_SIZE_TRAIN
# We must also tell the model below which detection confidence to ignore
# results. That is, if it finds an object in the image but the probability
# of a correct detection is below this threshold, it will not report it.
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.1
# We must also specify the channel order of the input image. Note that it is
# Blue Green Red (BGR), not the usual RGB - a peculiarity of this model.
cfg.INPUT.FORMAT = 'BGR'
# To feed data into the model faster, we load it in parallel with 4 workers.
cfg.DATALOADER.NUM_WORKERS = 4
# The next parameter sets the number of images in the batch on which the
# model performs one training iteration (one weight update).
cfg.SOLVER.IMS_PER_BATCH = 1
# Set the learning rate
cfg.SOLVER.BASE_LR = 0.01
# Tell the model after how many training steps to reduce the learning rate
cfg.SOLVER.STEPS = (1500,)
# The factor by which the learning rate is reduced
cfg.SOLVER.GAMMA = 0.1
# Set the total number of training iterations
cfg.SOLVER.MAX_ITER = 17000
# Specify the number of classes in our dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
# Set how often (in training steps) to save the model weights to a file.
# We can load this file later to test our trained model on new data.
cfg.SOLVER.CHECKPOINT_PERIOD = cfg.TEST.EVAL_PERIOD
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
# cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = True
# And specify the folder where model checkpoints and training logs are saved.
cfg.OUTPUT_DIR = './output'
# Create the folder if it does not exist
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
# To delete checkpoints from previous runs, run this command.
#%rm output/*
class custom_mapper:
def __init__(self, cfg):
self.transform_list = [
T.ResizeShortestEdge(
[cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST],
cfg.INPUT.MAX_SIZE_TEST),
T.RandomBrightness(0.9, 1.1),
T.RandomContrast(0.9, 1.1),
T.RandomSaturation(0.9, 1.1),
T.RandomLighting(0.9)
]
print(f"[custom_mapper]: {self.transform_list}")
def __call__(self, dataset_dict):
dataset_dict = copy.deepcopy(dataset_dict)
image = utils.read_image(dataset_dict["file_name"], format="BGR")
image, transforms = T.apply_transform_gens(self.transform_list, image)
dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
annos = [
utils.transform_instance_annotations(obj, transforms, image.shape[:2])
for obj in dataset_dict.pop("annotations")
if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(annos, image.shape[:2])
dataset_dict["instances"] = utils.filter_empty_instances(instances)
return dataset_dict
def f1_loss(y_true, y_pred):
    # Cast to bool so `~` and `&` are logical operations rather than
    # bitwise operations on integer arrays (e.g. ~1 == -2 for ints).
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    epsilon = 1e-7
    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)
    f1 = 2 * precision * recall / (precision + recall + epsilon)
    return f1
CHECKPOINTS_RESULTS = []
class F1Evaluator(DatasetEvaluator):
def __init__(self):
self.loaded_true = np.load('data/train/binary.npz')
self.val_predictions = {}
self.f1_scores = []
def reset(self):
self.val_predictions = {}
self.f1_scores = []
def process(self, inputs, outputs):
for input, output in zip(inputs, outputs):
filename = input["file_name"].split("/")[-1]
if filename != "41_3.JPG":
true = self.loaded_true[filename].reshape(-1)
prediction = output['instances'].pred_masks.cpu().numpy()
mask = np.add.reduce(prediction)
mask = (mask > 0).reshape(-1)
self.f1_scores.append(f1_loss(true, mask))
def evaluate(self):
global CHECKPOINTS_RESULTS
result = np.mean(self.f1_scores)
CHECKPOINTS_RESULTS.append(result)
return {"meanF1": result}
class AugTrainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg):
return build_detection_train_loader(cfg, mapper=custom_mapper(cfg))
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
return F1Evaluator()
class MyTrainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg, sampler=None):
return build_detection_train_loader(
cfg, mapper=MyMapper(cfg, True), sampler=sampler
)
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
return F1Evaluator()
```
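As a quick sanity check, the pixel-wise F1 computed by the evaluator can be exercised on toy boolean masks (a self-contained sketch that re-implements the same formula):

```python
import numpy as np

def f1_loss(y_true, y_pred):
    # Same formula as the evaluator's metric, on boolean arrays
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    epsilon = 1e-7
    precision = tp / (tp + fp + epsilon)
    recall = tp / (tp + fn + epsilon)
    return 2 * precision * recall / (precision + recall + epsilon)

# tp=1, fp=1, fn=1 -> precision = recall = 0.5 -> F1 = 0.5
print(round(f1_loss([1, 1, 0, 0], [1, 0, 1, 0]), 3))  # -> 0.5
```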
**4.2. Train the model**
The following three lines of code start the training process. There may be warnings, which can be ignored; they are just information about the training.
```
%rm output/*
trainer = MyTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
print()
del trainer
clear_cache()
!ls ./output
```
Use the trained model to check quality on the validation set.
```
list(enumerate(CHECKPOINTS_RESULTS, start=1))
with open("CHECKPOINTS_RESULTS.txt", "w") as f:
f.write(str(CHECKPOINTS_RESULTS))
```
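Since one checkpoint is written per evaluation (`CHECKPOINT_PERIOD == EVAL_PERIOD`), the best entry of `CHECKPOINTS_RESULTS` can be mapped back to a checkpoint iteration. A sketch with hypothetical scores:

```python
# Map the best mean-F1 entry back to a checkpoint iteration, assuming one
# evaluation (and one checkpoint) every `eval_period` training steps.
eval_period = 20
results = [0.61, 0.68, 0.66, 0.71, 0.70]  # hypothetical CHECKPOINTS_RESULTS
best = max(range(len(results)), key=results.__getitem__)
best_iter = (best + 1) * eval_period
print(best_iter, results[best])  # -> 80 0.71
```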
```
import numpy as np
import scipy.misc
import pylab
import torch
from datetime import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from mpl_toolkits.axes_grid1 import make_axes_locatable
from wavelets_pytorch_2.alltorch.wavelets import Morlet, Ricker, DOG, Paul
from wavelets_pytorch_2.alltorch.transform import WaveletTransformTorch #WaveletTransform,
import torch.utils.data as data_utils
from IPython.display import clear_output
from tqdm import tqdm_notebook,tqdm
from sklearn.preprocessing import normalize
import matplotlib.image as mpimg
import time
import wfdb as wf
# from wavelets_pytorch_2.backup_pytorchwavelets.wavelets import Morlet, Ricker, DOG
# from wavelets_pytorch_2.backup_pytorchwavelets.transform import WaveletTransformTorch #WaveletTransform,
# `from __future__` imports are unnecessary under Python 3 (and must appear
# at the top of a cell), so they are dropped here.
from glob import glob  # needed for the path listing below

paths = glob('qtdb/*.dat')
paths = [path[:-4] for path in paths]
paths.sort()
#from examples.plot import plot_scalogram
def plot_scalogram(power, scales, t, normalize_columns=True, cmap=None, ax=None, scale_legend=True):
"""
Plot the wavelet power spectrum (scalogram).
:param power: np.ndarray, CWT power spectrum of shape [n_scales,signal_length]
:param scales: np.ndarray, scale distribution of shape [n_scales]
:param t: np.ndarray, temporal range of shape [signal_length]
:param normalize_columns: boolean, whether to normalize spectrum per timestep
:param cmap: matplotlib cmap, please refer to their documentation
:param ax: matplotlib axis object, if None creates a new subplot
:param scale_legend: boolean, whether to include scale legend on the right
:return: ax, matplotlib axis object that contains the scalogram
"""
if not cmap: cmap = plt.get_cmap("plasma")#("coolwarm")
if ax is None: fig, ax = plt.subplots()
if normalize_columns: power = power/np.max(power, axis=0)
T, S = np.meshgrid(t, scales)
cnt = ax.contourf(T, S, power, 500, cmap=cmap)
# Fix for saving as PDF (aliasing)
for c in cnt.collections:
c.set_edgecolor("face")
ax.set_yscale('log')
ax.set_ylabel("Scale (Log Scale)")
ax.set_xlabel("Time (s)")
ax.set_title("Wavelet Power Spectrum")
if scale_legend:
def format_axes_label(x, pos):
return "{:.2f}".format(x)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(cnt, cax=cax, ticks=[np.min(power), 0, np.max(power)],
format=ticker.FuncFormatter(format_axes_label))
return ax
```
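To see what the `normalize_columns` option does without running a full CWT, here is a minimal synthetic sketch (random data standing in for a power spectrum):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Synthetic "power spectrum": 20 scales x 100 time steps
rng = np.random.default_rng(0)
scales = np.geomspace(0.01, 1.0, 20)
t = np.linspace(0, 1, 100)
power = rng.random((20, 100)) + 1e-6

# Per-timestep normalization, as applied inside plot_scalogram:
# each column (time step) is divided by its own maximum across scales.
power_norm = power / np.max(power, axis=0)

fig, ax = plt.subplots()
T, S = np.meshgrid(t, scales)
ax.contourf(T, S, power_norm, 50, cmap=plt.get_cmap("plasma"))
ax.set_yscale("log")
```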
## Dataset load & Config Wavelet params
```
newlabels = []
newdata = []
newdata2 = []
count = 0
cnt=0
for path in tqdm_notebook(paths):
dat= wf.rdrecord(path)
try:
ann1p = wf.rdann(path, 'q1c')
except:
cnt=cnt+1
#print(cnt)
continue
beats = ann1p.sample
symb1 = np.array(ann1p.symbol)
data_ = dat.p_signal[:,0]
data_2 = dat.p_signal[:,1]
symb1[symb1=='('] = 0
symb1[symb1=='p'] = 2
symb1[symb1==')'] = 1
symb1[symb1=='N'] = 3
symb1[symb1=='t'] = 4
beat_len = dat.fs
brack_open = np.where(np.array(symb1) == '0')
brack_open_index = beats[tuple(brack_open)]
brack_close = np.where(np.array(symb1) == '1')
brack_close_index = beats[tuple(brack_close)]
p_open = np.where(np.array(symb1) == '2')
p_open_index = beats[tuple(p_open)]
qrs_open = np.where(np.array(symb1) == '3')
qrs_open_index = beats[tuple(qrs_open)]
t_open = np.where(np.array(symb1) == '4')
t_open_index = beats[tuple(t_open)]
label_data = np.zeros(data_.shape)
#print (label_data.shape)
open_bracket = brack_open_index.tolist()
close_bracket = brack_close_index.tolist()
for ii in range(p_open_index.shape[0]):
try:
p_index = p_open_index[ii]
q_index = qrs_open_index[ii]
t_index = t_open_index[ii]
open_index = open_bracket[2*ii:2*ii+2]
close_index = close_bracket[3*ii:3*ii+3]
input_data = data_[open_index[0]:open_index[0]+150]
#input_data= replaceRandom(input_data, 5)
input_data_2 = data_2[open_index[0]:open_index[0]+150]
# P -- 1, Q -- 2, T -- 3
label_data[open_index[0]:close_index[0]] = 1
label_data[open_index[1]:close_index[1]] = 2
label_data[t_index:close_index[2]] = 3
except:
cnt=cnt+1
continue
max_val = np.max(input_data)
min_val = np.min(input_data)
#output_data = label_data[open_index[0]:close_index[-1]]
output_data = label_data[open_index[0]:open_index[0]+150]
if len(set(output_data))<3:
#print(set(output_data))
continue
newdata.append(input_data)
newdata2.append(input_data_2)
newlabels.append(output_data)
# pmin,pmax = open_index[0]-open_index[0],close_index[0]-open_index[0]
# qmin,qmax = open_index[1]-open_index[0],close_index[1]-open_index[0]
# tmin,tmax = t_index-open_index[0],close_index[2]-open_index[0]
# lab.figure()
# lab.plot(input_data)
# lab.fill([pmin,pmax,pmax,pmin],[min_val,min_val,max_val,max_val],'r',alpha=0.2)
# lab.fill([qmin,qmax,qmax,qmin],[min_val,min_val,max_val,max_val],'g',alpha=0.2)
# lab.fill([tmin,tmax,tmax,tmin],[min_val,min_val,max_val,max_val],'y',alpha=0.2)
# lab.savefig(os.path.join('./Annotated_ecg_images/',str(ii)+'.jpg'))
# lab.close()
# lab.figure()
# lab.plot(input_data)
# lab.plot(output_data)
# lab.savefig(os.path.join('./Annotated_ecg_images/',str(ii)+'.jpg'))
# if count == 500:
# break
count += 1
newdata = np.asarray(newdata)
newlabels = np.asarray(newlabels)/3
newdata_norm = normalize(newdata, axis=1, norm='l2', copy=True, return_norm=False)  # normalize imported from sklearn.preprocessing above
#newlabels_norm = preprocessing.normalize(newlabels, axis=1, norm= 'l1' ,copy=True, return_norm=False)
# newdata.shape, newlabels.shape, newdata2.shape
dat = newdata_norm#np.load('../seg_dataset/X.npy')
label = newlabels #np.load('../seg_dataset/Y.npy')
#dat = normalize(dat,axis=0)
def show(data):
pylab.jet()
pylab.imshow(data)
pylab.colorbar()
pylab.show()
pylab.clf()
x = np.linspace(0, 1, num=dat[0].shape[0])
dt = 1
#label[np.where(label==1)] = 0
#label[np.where(label==3)] = 0
# label = normalize(label,axis=1)
print(dat.shape, label.shape)
print(np.unique(label))
fps = 150
dt = 1.0/fps
dt1 = 1.0/fps
dj = 0.125
unbias = False
batch_size = 32
#wavelet = Morlet(w0=2)
wavelet = Paul(m=8)
#wavelet1 = Morlet(w0=10)
t_min = 0
t_max = dat[0].shape[0]/fps
t = np.linspace(t_min, t_max, int((t_max - t_min) * fps))  # num must be an integer
```
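The number of scales the transform will produce can be estimated from `dt` and `dj`, assuming the library follows the common Torrence & Compo convention (an assumption about its internals): scales are `s_j = s0 * 2**(j*dj)` for `j = 0..J` with `J = log2(N*dt/s0)/dj`.

```python
import numpy as np

# Estimate the CWT scale distribution under the Torrence & Compo convention
# (an assumption about the wavelets library internals).
fps = 150
N = 150          # samples per beat window
dt = 1.0 / fps
dj = 0.125
s0 = 2 * dt      # smallest resolvable scale
J = int(np.log2(N * dt / s0) / dj)
scales = s0 * 2 ** (dj * np.arange(J + 1))
print(len(scales))  # -> 50
```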
## Sequential wavelet transform
```
ecg_wavelet =[]
label_wavelet =[]
scale =[]
# for ind in tqdm_notebook(range(dat.shape[0])):
# # Finding wavelet coeffs for ECG
# ecg = torch.from_numpy(dat[ind]).float()
# wa_ecg_torch = WaveletTransformTorch(dt, dj, wavelet, unbias=unbias, cuda=True)
# power_ecg_torch = wa_ecg_torch.power(ecg).type(torch.FloatTensor)#.numpy())
# # ecg_wavelet.append(power_ecg_torch)
# # Finding wavelet coeffs for labels
# wa_label_torch = WaveletTransformTorch(dt, dj, wavelet, unbias=unbias, cuda=True)
# power_label_torch = wa_label_torch.power(torch.from_numpy(label[ind]).float()).type(torch.FloatTensor)#.numpy())
# # label_wavelet.append(power_label_torch)#.unsqueeze(0))
# # scales_label_torch = torch.from_numpy(wa_label_torch.fourier_periods).type(torch.FloatTensor)
# if ind ==0:
# scales_ecg_torch = torch.from_numpy(wa_ecg_torch.fourier_periods).type(torch.FloatTensor)
# scale.append(scales_ecg_torch)
wa_ecg_torch = WaveletTransformTorch(dt, dj, wavelet, unbias=unbias, cuda=True)
power_ecg_torch = wa_ecg_torch.power(torch.from_numpy(dat).float()).type(torch.FloatTensor).unsqueeze(1)
wa_label_torch = WaveletTransformTorch(dt, dj, wavelet, unbias=unbias, cuda=True)
power_label_torch = wa_label_torch.power(torch.from_numpy(label).float()).type(torch.FloatTensor).unsqueeze(1)
scales = wa_ecg_torch.fourier_periods
cwt_label = wa_label_torch._cwt_op
cwt_ecg = wa_ecg_torch._cwt_op
cwt_label_real = torch.from_numpy(cwt_label.real).type(torch.FloatTensor).unsqueeze(1)
cwt_label_imag = torch.from_numpy(cwt_label.imag).type(torch.FloatTensor).unsqueeze(1)
cwt_ecg_real = torch.from_numpy(cwt_ecg.real).type(torch.FloatTensor).unsqueeze(1)
cwt_ecg_imag = torch.from_numpy(cwt_ecg.imag).type(torch.FloatTensor).unsqueeze(1)
#cwt_op = torch.from_numpy(cwt_op)#.unsqueeze(1)
#cwt_op = cwt_op.type(torch.FloatTensor)
del cwt_label, cwt_ecg  # keep power_ecg_torch/power_label_torch and the transforms: they are plotted below
cwt_label_real.shape, cwt_label_imag.shape, cwt_ecg_real.shape, cwt_ecg_imag.shape
# power_ecg_torch.size(), power_label_torch.size(), scales.shape,cwt_op_real.shape
ind = np.random.randint(0,dat.shape[0],1).squeeze()
fig, ax = plt.subplots(2, 2, figsize=(16,10))
ax = ax.flatten()
ax[0].plot(t, dat[ind])
ax[0].set_title(r'ECG signal')
ax[0].set_xlabel('Samples')
plot_scalogram(power_ecg_torch.numpy()[ind].squeeze(), scales, t, ax=ax[1])
#ax[1].axhline(1.0 / random_frequencies[0], lw=1, color='k')
ax[1].set_title('ECG Scalogram')#.format(1.0/random_frequencies[0]))
ax[1].set_ylabel('')
ax[1].set_yticks([])
ax[2].plot(t, label[ind])
ax[2].set_title(r'Label')
ax[2].set_xlabel('Samples')
plot_scalogram(power_label_torch.numpy()[ind].squeeze(),scales, t, ax=ax[3])
#ax[1].axhline(1.0 / random_frequencies[0], lw=1, color='k')
ax[3].set_title('Label Scalogram')#.format(1.0/random_frequencies[0]))
ax[3].set_ylabel('')
ax[3].set_yticks([])
# plot_scalogram(power_torch1.numpy(), scales_torch1.numpy(), t, ax=ax[2])
# #ax[1].axhline(1.0 / random_frequencies[0], lw=1, color='k')
# ax[2].set_title('Scalogram dt=10/fs')#.format(1.0/random_frequencies[0]))
# ax[2].set_ylabel('')
# ax[2].set_yticks([])
plt.tight_layout()
plt.show()
tot_x = torch.stack([cwt_ecg_real,cwt_ecg_imag],1).squeeze(2)
tot_y = torch.stack([cwt_label_real,cwt_label_imag],1).squeeze(2)
# del cwt_ecg_real,cwt_ecg_imag,cwt_label_real,cwt_label_imag
tot_x.shape, tot_y.shape, cwt_ecg_real.shape, cwt_ecg_imag.shape
tot_dat = torch.cat([tot_x,tot_y],1)
tot_x.shape,tot_y.shape, tot_dat.shape, torch.stack([tot_x,tot_y],1).shape
del tot_x,tot_y
indices = torch.randperm(len(tot_dat))
valid_size = 257
train_size = 1000
train_indices = indices[:len(indices)-valid_size][:train_size or None]
test_indices = indices[len(indices)-valid_size:] #if valid_size else None
train_pow = tot_dat[train_indices]
train_x = train_pow[:,:2]
train_y = train_pow[:,2:]
test_pow = tot_dat[test_indices]
test_x = test_pow[:,:2]
test_y = test_pow[:,2:]
train_x.shape, train_y.shape,test_x.shape, test_y.shape
del tot_dat, train_pow,test_pow
# torch.cat([test_x,test_y],1).shape
train_dat = torch.cat([train_x,train_y],1)
test_dat = torch.cat([test_x,test_y],1)
torch.save(train_dat, 'wavelet_dataset/train_dat_cwt.pt')
# torch.save(train_cwt, 'wavelet_dataset/train_cwt.pt')
torch.save(test_dat, 'wavelet_dataset/test_dat_cwt.pt')
# torch.save(test_cwt, 'wavelet_dataset/test_cwt.pt')
train_set = data_utils.TensorDataset(train_x, train_y)
train_set
np.save('wavelet_dataset/scales.npy',scales)
```
```
import sys
sys.path.insert(0, '../..')
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import torch
from tqdm import tnrange, tqdm_notebook
from causal_meta.utils.data_utils import generate_data_categorical
from causal_meta.bivariate.categorical import StructuralModel
N = 10
model = StructuralModel(N, dtype=torch.float64)
optimizer = torch.optim.SGD(model.modules_parameters(), lr=1e-1)
meta_optimizer = torch.optim.RMSprop([model.w], lr=1e-2)
num_runs = 1 # 10
num_training = 1 # 100
num_transfer = 1000
num_gradient_steps = 2
train_batch_size = 1000
transfer_batch_size = 10
alphas = np.zeros((num_runs, num_training, num_transfer))
for j in tnrange(num_runs):
model.w.data.zero_()
for i in tnrange(num_training, leave=False):
# Step 1: Sample a joint distribution before intervention
pi_A_1 = np.random.dirichlet(np.ones(N))
pi_B_A = np.random.dirichlet(np.ones(N), size=N)
transfers = tnrange(num_transfer, leave=False)
for k in transfers:
# Step 2: Train the modules on the training distribution
model.set_ground_truth(pi_A_1, pi_B_A)
# Step 3: Sample a joint distribution after intervention
pi_A_2 = np.random.dirichlet(np.ones(N))
# Step 4: Do k steps of gradient descent for adaptation on the
# distribution after intervention
model.zero_grad()
loss = torch.tensor(0., dtype=torch.float64)
for _ in range(num_gradient_steps):
x_train = torch.from_numpy(generate_data_categorical(transfer_batch_size, pi_A_2, pi_B_A))
loss += -torch.mean(model(x_train))
optimizer.zero_grad()
inner_loss_A_B = -torch.mean(model.model_A_B(x_train))
inner_loss_B_A = -torch.mean(model.model_B_A(x_train))
inner_loss = inner_loss_A_B + inner_loss_B_A
inner_loss.backward()
optimizer.step()
# Step 5: Update the structural parameter alpha
meta_optimizer.zero_grad()
loss.backward()
meta_optimizer.step()
# Log the values of alpha
alpha = torch.sigmoid(model.w).item()
alphas[j, i, k] = alpha
transfers.set_postfix(alpha='{0:.4f}'.format(alpha), grad='{0:.4f}'.format(model.w.grad.item()))
alphas_50 = np.percentile(alphas.reshape((-1, num_transfer)), 50, axis=0)
fig = plt.figure(figsize=(9, 5))
ax = plt.subplot(1, 1, 1)
ax.tick_params(axis='both', which='major', labelsize=13)
ax.axhline(1, c='lightgray', ls='--')
ax.axhline(0, c='lightgray', ls='--')
ax.plot(alphas_50, lw=2, color='k')
ax.set_xlim([0, num_transfer - 1])
ax.set_xlabel('Number of episodes', fontsize=14)
ax.set_ylabel(r'$\sigma(\gamma)$', fontsize=14)
plt.show()
```
```
!pip install dynet
!git clone https://github.com/neubig/nn4nlp-code.git
from __future__ import print_function
import time
start = time.time()
from collections import Counter, defaultdict
import random
import math
import sys
import argparse
import dynet as dy
import numpy as np
# format of files: each line is "word1 word2 ..."
train_file = "nn4nlp-code/data/ptb/train.txt"
test_file = "nn4nlp-code/data/ptb/valid.txt"
w2i = defaultdict(lambda: len(w2i))
def read(fname):
"""
Read a file where each line is of the form "word1 word2 ..."
Yields lists of the form [word1, word2, ...]
"""
with open(fname, "r") as fh:
for line in fh:
sent = [w2i[x] for x in line.strip().split()]
sent.append(w2i["<s>"])
yield sent
train = list(read(train_file))
nwords = len(w2i)
test = list(read(test_file))
S = w2i["<s>"]
assert (nwords == len(w2i))
# DyNet Starts
model = dy.Model()
trainer = dy.AdamTrainer(model)
# Lookup parameters for word embeddings
EMBED_SIZE = 64
HIDDEN_SIZE = 128
WORDS_LOOKUP = model.add_lookup_parameters((nwords, EMBED_SIZE))
# Word-level LSTM (layers=1, input=64, output=128, model)
RNN = dy.LSTMBuilder(1, EMBED_SIZE, HIDDEN_SIZE, model)
# Softmax weights/biases on top of LSTM outputs
W_sm = model.add_parameters((nwords, HIDDEN_SIZE))
b_sm = model.add_parameters(nwords)
# Build the language model graph
def calc_lm_loss(sent):
dy.renew_cg()
# parameters -> expressions
W_exp = dy.parameter(W_sm)
b_exp = dy.parameter(b_sm)
# initialize the RNN
f_init = RNN.initial_state()
# get the wids and masks for each step
tot_words = len(sent)
# start the rnn by inputting "<s>"
s = f_init.add_input(WORDS_LOOKUP[S])
# feed word vectors into the RNN and predict the next word
losses = []
for wid in sent:
# calculate the softmax and loss
score = W_exp * s.output() + b_exp
loss = dy.pickneglogsoftmax(score, wid)
losses.append(loss)
# update the state of the RNN
wemb = WORDS_LOOKUP[wid]
s = s.add_input(wemb)
return dy.esum(losses), tot_words
# Indices of the training sentences (shuffled each epoch below); materialize
# the range as a list so random.shuffle can modify it in place under Python 3
train_order = list(range(len(train)))
print("startup time: %r" % (time.time() - start))
# Perform training
start = time.time()
i = all_time = dev_time = all_tagged = this_words = this_loss = 0
for ITER in range(100):
random.shuffle(train_order)
for sid in train_order:
i += 1
if i % int(500) == 0:
trainer.status()
print(this_loss / this_words, file=sys.stderr)
all_tagged += this_words
this_loss = this_words = 0
all_time = time.time() - start
if i % int(10000) == 0:
dev_start = time.time()
dev_loss = dev_words = 0
for sent in test:
loss_exp, mb_words = calc_lm_loss(sent)
dev_loss += loss_exp.scalar_value()
dev_words += mb_words
dev_time += time.time() - dev_start
train_time = time.time() - start - dev_time
print("nll=%.4f, ppl=%.4f, words=%r, time=%.4f, word_per_sec=%.4f" % (
dev_loss / dev_words, math.exp(dev_loss / dev_words), dev_words, train_time, all_tagged / train_time))
# train on the minibatch
loss_exp, mb_words = calc_lm_loss(train[sid])
this_loss += loss_exp.scalar_value()
this_words += mb_words
loss_exp.backward()
trainer.update()
print("epoch %r finished" % ITER)
trainer.update_epoch(1.0)
```
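The `nll` and `ppl` values printed during evaluation are linked: perplexity is simply the exponential of the average per-word negative log-likelihood. A sketch with hypothetical accumulated totals:

```python
import math

dev_loss, dev_words = 5310.2, 1000  # hypothetical accumulated totals
nll = dev_loss / dev_words          # average per-word negative log-likelihood
ppl = math.exp(nll)                 # perplexity
print(round(nll, 4), round(ppl, 2))
```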
---
_You are currently looking at **version 1.3** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._
---
# Assignment 1 - Introduction to Machine Learning
For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
```
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print(cancer.DESCR) # Print the data set description
```
The object returned by `load_breast_cancer()` is a scikit-learn Bunch object, which is similar to a dictionary.
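Because a Bunch behaves like a dictionary, it supports both key-style and attribute-style access, which is why both `cancer['data']` and `cancer.data` appear in scikit-learn examples:

```python
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
# Dictionary-style and attribute-style access return the same object
assert cancer['data'] is cancer.data
print(cancer.data.shape, cancer.target.shape)  # (569, 30) (569,)
```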
```
cancer.keys()
```
### Question 0 (Example)
How many features does the breast cancer dataset have?
*This function should return an integer.*
```
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the number of features of the breast cancer dataset, which is an integer.
# The assignment question description will tell you the general format the autograder is expecting
return len(cancer['feature_names'])
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
```
### Question 1
Scikit-learn works with lists, numpy arrays, scipy sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. A DataFrame does, however, make many things easier, such as munging data, so let's practice creating a classifier with a pandas DataFrame.
Convert the sklearn.dataset `cancer` to a DataFrame.
*This function should return a `(569, 31)` DataFrame with *
*columns =*
['mean radius', 'mean texture', 'mean perimeter', 'mean area',
'mean smoothness', 'mean compactness', 'mean concavity',
'mean concave points', 'mean symmetry', 'mean fractal dimension',
'radius error', 'texture error', 'perimeter error', 'area error',
'smoothness error', 'compactness error', 'concavity error',
'concave points error', 'symmetry error', 'fractal dimension error',
'worst radius', 'worst texture', 'worst perimeter', 'worst area',
'worst smoothness', 'worst compactness', 'worst concavity',
'worst concave points', 'worst symmetry', 'worst fractal dimension',
'target']
*and index = *
RangeIndex(start=0, stop=569, step=1)
```
def answer_one():
# Your code here
dataset = pd.DataFrame(np.hstack((cancer['data'], cancer['target'].reshape(-1, 1))), columns=cancer['feature_names'].tolist()+['target'])
return dataset
answer_one()
```
### Question 2
What is the class distribution? (i.e. how many instances of `malignant` (encoded 0) and how many `benign` (encoded 1)?)
*This function should return a Series named `target` of length 2 with integer values and index =* `['malignant', 'benign']`
```
def answer_two():
cancerdf = answer_one()
# Your code here
classdict = dict(zip(cancer.target_names, [0, 1]))
benign = int(cancerdf.target.sum())
malignant = len(cancerdf) - benign
return pd.Series({'malignant': malignant, 'benign': benign})
answer_two()
```
### Question 3
Split the DataFrame into `X` (the data) and `y` (the labels).
*This function should return a tuple of length 2:* `(X, y)`*, where*
* `X`*, a pandas DataFrame, has shape* `(569, 30)`
* `y`*, a pandas Series, has shape* `(569,)`.
```
def answer_three():
cancerdf = answer_one()
# Your code here
y = cancerdf.target
X = cancerdf.drop(['target'], axis=1)
return X, y
print(answer_three()[0].shape)
print(answer_three()[1].shape)
```
### Question 4
Using `train_test_split`, split `X` and `y` into training and test sets `(X_train, X_test, y_train, and y_test)`.
**Set the random number generator state to 0 using `random_state=0` to make sure your results match the autograder!**
*This function should return a tuple of length 4:* `(X_train, X_test, y_train, y_test)`*, where*
* `X_train` *has shape* `(426, 30)`
* `X_test` *has shape* `(143, 30)`
* `y_train` *has shape* `(426,)`
* `y_test` *has shape* `(143,)`
```
from sklearn.model_selection import train_test_split
def answer_four():
X, y = answer_three()
# Your code here
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
return X_train, X_test, y_train, y_test
print(answer_four()[0].shape)
print(answer_four()[1].shape)
print(answer_four()[2].shape)
print(answer_four()[3].shape)
```
### Question 5
Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with `X_train`, `y_train` and using one nearest neighbor (`n_neighbors = 1`).
*This function should return a * `sklearn.neighbors.classification.KNeighborsClassifier`.
```
from sklearn.neighbors import KNeighborsClassifier
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
# Your code here
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)
return clf
answer_five()
```
### Question 6
Using your knn classifier, predict the class label using the mean value for each feature.
Hint: You can use `cancerdf.mean()[:-1].values.reshape(1, -1)` which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier).
*This function should return a numpy array either `array([ 0.])` or `array([ 1.])`*
```
def answer_six():
cancerdf = answer_one()
means = cancerdf.mean()[:-1].values.reshape(1, -1)
# Your code here
clf = answer_five()
label = clf.predict(means)
return label
answer_six()
```
### Question 7
Using your knn classifier, predict the class labels for the test set `X_test`.
*This function should return a numpy array with shape `(143,)` and values either `0.0` or `1.0`.*
```
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
pred = knn.predict(X_test)
return pred
answer_seven().shape
```
### Question 8
Find the score (mean accuracy) of your knn classifier using `X_test` and `y_test`.
*This function should return a float between 0 and 1*
```
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
score = knn.score(X_test, y_test)
return score
answer_eight()
```
### Optional plot
Try using the plotting function below to visualize the different prediction scores between training and test sets, as well as malignant and benign cells.
```
def accuracy_plot():
import matplotlib.pyplot as plt
%matplotlib notebook
X_train, X_test, y_train, y_test = answer_four()
# Find the training and testing accuracies by target value (i.e. malignant, benign)
mal_train_X = X_train[y_train==0]
mal_train_y = y_train[y_train==0]
ben_train_X = X_train[y_train==1]
ben_train_y = y_train[y_train==1]
mal_test_X = X_test[y_test==0]
mal_test_y = y_test[y_test==0]
ben_test_X = X_test[y_test==1]
ben_test_y = y_test[y_test==1]
knn = answer_five()
scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y),
knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]
plt.figure()
# Plot the scores as a bar chart
    # Bar chart: matplotlib.pyplot.bar(x, height, alpha=1, width=0.8, color=, edgecolor=, label=, lw=3)
bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])
# directly label the score onto the bars
for bar in bars:
height = bar.get_height()
plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2),
ha='center', color='w', fontsize=11)
# remove all the ticks (both axes), and tick labels on the Y axis
    plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=True)
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8);
plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)
```
Uncomment the plotting function to see the visualization.
**Comment out** the plotting function when submitting your notebook for grading.
```
accuracy_plot()
```
## Auxiliary Code
```
d = pd.DataFrame(np.hstack((cancer['data'], cancer['target'].reshape(-1, 1))), columns=cancer['feature_names'].tolist()+['target'])
d.columns
int(d.target.sum())
classdict = dict(zip(cancer.target_names, [0, 1]))
classdict
cancerdf = answer_one()
means = cancerdf.mean()[:-1].values.reshape(1,-1)
clf = answer_five()
clf.predict(means)
```
This notebook generates the main figure of the paper.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
pd.options.display.max_rows = 999
from tang_jcompneuro.model_fitting_postprocess import load_data_generic
cnn_mapping_dict = {
'b.9': 'R_max',
'b.2': 'R_max_Q',
'b.5': 'R_max_HALF',
'b.9_avg': 'R_avg',
'b.9_sq': 'S_max',
'b.9_avg_sq': 'S_avg',
'b.9_halfsq': 'HS_max',
'b.9_avg_halfsq': 'HS_avg',
'b.9_abs': 'A_max',
'b.9_avg_abs': 'A_avg',
'b.9_linear': 'L_max',
'b.9_avg_linear': 'L_avg',
'b.9_threshold': 'T',
'b.9_nonthreshold': 'NT',
'b.9_avgpool': 'AVG',
'b.9_maxpool': 'MAX',
}
def modelname_alternative(model_type, model_subtype, _1, _2):
if model_type == 'cnn':
assert not _2
if _1:
suffix = cnn_mapping_dict[model_subtype] + '_all'
else:
suffix = cnn_mapping_dict[model_subtype]
elif model_type == 'glm':
suffix = model_subtype
else:
raise NotImplementedError
# dollar is later used to find those relevant models.
return f'{model_type}${suffix}'
# well, I guess I don't need to be that fancy.
# just manually doing it should be fine.
# also more flexible, as I can control order more freely.
def check_all(squared, score_col_name):
models_to_examine = [
('cnn', 'b.9'),
# ('cnn', 'b.9', True),
# as # of parameter control.
# ('cnn', 'b.5'),
# ('cnn', 'b.2'),
# ('cnn', 'b.9_avg', True),
# ('cnn', 'b.9_abs', True),
# ('cnn', 'b.9_avg_abs', True),
# ('cnn', 'b.9_linear', True),
# ('cnn', 'b.9_avg_linear', True),
# ('cnn', 'b.9_sq', True),
# ('cnn', 'b.9_avg_sq', True),
# ('cnn', 'b.9_halfsq', True),
# ('cnn', 'b.9_avg_halfsq', True),
# ('cnn', 'b.9_avg'),
# ('cnn', 'b.9_abs'),
# ('cnn', 'b.9_avg_abs'),
('cnn', 'b.9_linear'),
('cnn', 'b.9_avg_linear'),
# ('cnn', 'b.9_sq'),
# ('cnn', 'b.9_avg_sq'),
# ('cnn', 'b.9_halfsq'),
# ('cnn', 'b.9_avg_halfsq'),
('cnn', 'b.9_threshold', True, False, ('b.9', 'b.9_halfsq', 'b.9_avg', 'b.9_avg_halfsq')),
('cnn', 'b.9_nonthreshold', True, False, ('b.9_abs', 'b.9_sq', 'b.9_avg_abs', 'b.9_avg_sq')),
('cnn', 'b.9_avgpool', True, False, (
'b.9_avg_abs', 'b.9_avg_sq', 'b.9_avg', 'b.9_avg_halfsq')),
('cnn', 'b.9_maxpool', True, False, ('b.9_abs', 'b.9_sq','b.9', 'b.9_halfsq',)),
# ('glm', 'GLM_all', True, False, ('fpower_poisson', 'linear_poisson', 'gqm.2_poisson', 'gqm.4_poisson', 'gqm.8_poisson')),
# ('glm', 'GLM_all_overfit', True, True, ('fpower_poisson', 'linear_poisson', 'gqm.2_poisson', 'gqm.4_poisson', 'gqm.8_poisson')),
]
return load_data_generic(models_to_examine, load_naive=False, metric='ccnorm_5', squared=squared,
score_col_name=score_col_name, modelname_alternative=modelname_alternative,
# datasets_to_check=('MkA_Shape',)
)
df_all_cc2 = check_all(squared=True, score_col_name='cc2').xs(100, level='percentage').sort_index()
df_all_cc2
# seems that I don't need those small init ones.
# using the default one already looks good enough, in terms of mean performance.
HO_neuron_perf = df_all_cc2.apply(lambda x: x['cc2']['HO']['mean'], axis=1).unstack('subset')
HO_neuron_perf
HO_fail = df_all_cc2.apply(lambda x: np.sum(x['cc2']['HO']['raw']==0), axis=1).unstack('subset')
HO_fail
OT_neuron_perf = df_all_cc2.apply(lambda x: x['cc2']['OT']['mean'], axis=1).unstack('subset')
OT_neuron_perf
# so nobody actually fails.
OT_fail = df_all_cc2.apply(lambda x: np.sum(x['cc2']['OT']['raw']==0), axis=1).unstack('subset')
OT_fail
import os.path
from tang_jcompneuro import dir_dictionary
from collections import OrderedDict
from tang_jcompneuro.plotting import (image_subset_and_neuron_subset_list,
show_one_decomposed_bar,
show_one_decomposed_scatter,
# show_one_basic
)
from tang_jcompneuro.cell_classification import get_ready_to_use_classification
cell_class_dict_coarse = get_ready_to_use_classification(coarse=True, readonly=True)
cell_class_dict_fine = get_ready_to_use_classification(coarse=False, readonly=True)
def fetch_data_mean(dataset, img_subset, neuron_subset, model_type, model_subtype):
if neuron_subset == 'OT':
return OT_neuron_perf.at[(dataset, f'{model_type}${model_subtype}'), img_subset]
elif neuron_subset == 'HO':
return HO_neuron_perf.at[(dataset, f'{model_type}${model_subtype}'), img_subset]
else:
raise NotImplementedError
def fetch_data_raw(dataset, img_subset, neuron_subset, model_type, model_subtype):
return df_all_cc2.at[(dataset, img_subset, f'{model_type}${model_subtype}'), 'cc2'][neuron_subset]['raw']
def get_local_index_mask(dataset, neuron_subset):
coarse_mask = cell_class_dict_coarse[dataset][neuron_subset]
fine_this = cell_class_dict_fine[dataset][neuron_subset]
result = []
sum_now = 0
mask_start = np.zeros((coarse_mask.sum(),), dtype=np.bool_)
for v in fine_this.values():
assert v.shape == coarse_mask.shape
assert v.dtype == coarse_mask.dtype == np.bool_
value_to_add = v[coarse_mask]
sum_now += value_to_add.sum()
result.append(value_to_add)
assert mask_start.shape == value_to_add.shape
mask_start = np.logical_or(mask_start, value_to_add)
assert coarse_mask.sum() == sum_now
assert np.array_equal(mask_start, np.ones((coarse_mask.sum(),), dtype=np.bool_))
# again, check that this mask is a good one.
return result
# ok. time to work on plots.
# # https://github.com/leelabcnbc/tang_jcompneuro/blob/master/thesis_plots/v1_fitting/comparison_among_all_non_vgg_models_decomposed_by_fine_subsets.ipynb
def draw_one_stuff(dataset, save=None, letter_bias=0):
models_to_work_on = [('cnn', x) for x in ('R_max',
'NT_all', 'T_all', 'AVG_all', 'MAX_all',
'L_avg', 'L_max')]
models_to_work_on = models_to_work_on[::-1]
assert len(set([x[1] for x in models_to_work_on])) == len(models_to_work_on)
# model_pairs_to_check = [
# # ('GLM_all', 'S_avg'),
# # two T vs NT
# ('HS_avg', 'S_avg'),
# ('HS_max', 'S_max'),
# ('R_max', 'S_max'),
# ('R_avg', 'S_avg'),
# ('R_avg', 'A_avg'),
# ('R_max', 'A_max'),
# # ('A_avg', 'S_avg'),
# # ('A_max', 'S_max'),
# # ('A_avg', 'A_max'),
# ('L_avg', 'L_max'),
# # two HS vs R
# ('HS_max', 'R_max'),
# ('HS_avg', 'R_avg'),
# # ('A_max', 'R_max_HALF'), # in case all things are just due to expressiveness.
# # in this case.
# # ('A_max', 'R_max_Q'), # in case all things are just due to expressiveness.
# ]
# model_pairs_to_check = [
# ('glm_all', 'S_avg_ALL'),
# ('HS_avg_ALL', 'S_avg_ALL'),
# ('A_avg_ALL', 'S_avg_ALL'),
# ('A_max_ALL', 'S_max_ALL'),
# ('A_avg_ALL', 'A_max_ALL'),
# ]
# spotlight_items = [('HS_max', 'S_max', 'OT', 'all'), # seems to prove my point best, averaged over two monkeys.
# ('HS_max', 'S_max', 'HO', 'all'),
# ('HS_max', 'R_max', 'HO', 'all'),
# ]
# spotlight_work_count = 0
# assert len(spotlight_items) == 3
monkey = {'MkA_Shape': 'A', 'MkE2_Shape': 'B'}[dataset]
# draw one by one.
num_panel = len(image_subset_and_neuron_subset_list)
plt.close('all')
fig, axes = plt.subplots(1, num_panel, sharex=False, sharey=True, squeeze=False,
figsize=(5.5,3.5))
# fig_explore, axes_explore = plt.subplots(len(model_pairs_to_check), num_panel, sharex=True, sharey=True,
# squeeze=False, figsize=(5.5,5.5/3*len(model_pairs_to_check)))
# fig_sl, axes_sl = plt.subplots(1, len(spotlight_items), sharex=True, sharey=True,
# squeeze=False, figsize=(5.5,5.5/3))
# assert axes_explore.shape == (len(model_pairs_to_check), num_panel)
for idx, (ax, (img_subset, neuron_subset)) in enumerate(zip(axes.ravel(), image_subset_and_neuron_subset_list)):
# data_x = df_all_cc2.at[('MkA_Shape', img_subset, model_name_x_real), 'cc2'][neuron_subset]['raw']
# data_y = df_all_cc2.at[('MkA_Shape', img_subset, model_name_y_real), 'cc2'][neuron_subset]['raw']
# show_one_basic(data_x, data_y, title=f'{neuron_subset} neurons\n{img_subset} stimuli',
# ax=ax,mean_title='mean $CC_\mathrm{norm}^2$', xlabel=model_name_x,
# ylabel=model_name_y if idx == 0 else None)
print(img_subset, neuron_subset)
color_bias = {'HO': 0, 'OT': 5}[neuron_subset]
# gather data.
# for each model, collect subsets in chunks.
# and divide data by fine subsets
stat_raw_array = [fetch_data_raw(dataset, img_subset, neuron_subset, x, y) for x, y in models_to_work_on]
stat_mean_ref_array = np.asarray([fetch_data_mean(dataset, img_subset, neuron_subset, x, y) for x, y in models_to_work_on])
stat_chunks_array = []
raw_chunks_array = []
local_index_mask_all = get_local_index_mask(dataset, neuron_subset)
for mask_this in local_index_mask_all:
stat_chunks_array.append([x[mask_this].sum()/mask_this.size for x in stat_raw_array])
raw_chunks_array.append(np.asarray([x[mask_this] for x in stat_raw_array]))
stat_chunks_array = np.asarray(stat_chunks_array)
assert stat_chunks_array.shape == (len(local_index_mask_all), len(models_to_work_on))
# print(stat_chunks_array)
stat_mean_ref_array_debug = stat_chunks_array.sum(axis=0)
assert stat_mean_ref_array_debug.shape == stat_mean_ref_array.shape
assert np.allclose(stat_mean_ref_array_debug, stat_mean_ref_array)
stat_name_array = [x[1] for x in models_to_work_on]
# print(stat_name_array)
# ok. pass into my fancy function and draw!
show_one_decomposed_bar(stat_chunks_array, stat_name_array,
ax=ax,
xlabel='mean $CC_\mathrm{norm}^2$',
title=f'{neuron_subset} neurons\n{img_subset} stimuli',
color_bias=color_bias, set_ylabel=True if idx==0 else False,
ylabel_styles=[None]*2 + ['italic']*4 + ['bold'],
letter_map=idx+letter_bias,color_list=[[ '#00FF00', '#BFFFBF', 'black', '#AAAAAA', 'blue',
'#BFBFFF', 'red']]*7,
height=0.8)
# save space
# fig.suptitle(f'CNN variants on monkey {monkey}')
# adjust figure
fig.subplots_adjust(top=0.85, bottom=0.15, left=0.15, right=0.99, hspace=0.05, wspace=0.1)
# fig_explore.subplots_adjust(top=1, bottom=0, left=0, right=1, hspace=0.0, wspace=0.0)
# fig_sl.subplots_adjust(top=1, bottom=0, left=0, right=1, hspace=0.0, wspace=0.0)
if save is not None:
save_dir = os.path.join(dir_dictionary['plots'], 'main', 'cnn_detailed_for_slides')
os.makedirs(save_dir, exist_ok=True)
fig.savefig(os.path.join(save_dir, f'{save}_bars.pdf'), dpi=300, transparent=True)
# fig_explore.savefig(os.path.join(save_dir, f'{save}_explore.pdf'), dpi=300)
# fig_sl.savefig(os.path.join(save_dir, f'{save}_spotlight.pdf'), dpi=300)
plt.show()
draw_one_stuff('MkA_Shape', 'A')
draw_one_stuff('MkE2_Shape', 'E2')
```
```
""" Script that evaluates reaction coordinates using the SGOOP method.
Probabilities are calculated using MD trajectories. Transition rates are
found using the maximum caliber approach.
For unbiased simulations, use rc_eval().
For biased simulations, calculate unbiased probabilities and analyze them with sgoop().
The original method was published by Tiwary and Berne, PNAS 2016, 113, 2839.
Author: Zachary Smith zsmith7@terpmail.umd.edu
Original Algorithm: Pratyush Tiwary ptiwary@umd.edu
Contributor: Pablo Bravo Collado ptbravo@uc.cl"""
import numpy as np
import scipy.optimize as opt
from scipy import signal
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
"""User Defined Variables"""
in_file = 'ZS.traj' # Input file
rc_bin = 20 # Bins over RC
wells = 2 # Expected number of wells with barriers > kT
d = 1 # Distance between indexes for transition
prob_cutoff = 1e-5 # Minimum nonzero probability
"""Auxiliary Variables"""
SG = [] # List of Spectral Gaps
RC = [] # List of Reaction Coordinates
P = [] # List of probabilities on RC
SEE = [] # SGOOP Eigen exp
SEV = [] # SGOOP Eigen values
SEVE = [] # SGOOP Eigen vectors
"""Load MD File"""
data_array = np.loadtxt(in_file)
def rei():
# Reinitializes arrays for new runs
global SG,RC,P,SEE,SEV,SEVE
SG = []
RC = []
P = []
SEE = []
SEV = []
SEVE = []
def normalize_rc(rc):
# Normalizes input RC
squares=0
for i in rc:
squares+=i**2
denom=np.sqrt(squares)
return np.array(rc)/denom
def generate_rc(i):
# Generates a unit vector with angle pi*i
x=np.cos(np.pi*i)
y=np.sin(np.pi*i)
return (x,y)
def md_prob(rc):
# Calculates probability along a given RC
global binned
proj=[]
for v in data_array:
proj.append(np.dot(v,rc))
rc_min=np.min(proj)
rc_max=np.max(proj)
binned=(proj-rc_min)/(rc_max-rc_min)*(rc_bin-1)
binned=np.array(binned).astype(int)
prob=np.zeros(rc_bin)
for point in binned:
prob[point]+=1
return prob/prob.sum() # Normalize
def set_bins(rc,bins,rc_min,rc_max):
# Sets bins from an external source
global binned, rc_bin
rc_bin = bins
proj = np.dot(data_array,rc)
binned=(proj-rc_min)/(rc_max-rc_min)*(rc_bin-1)
binned=np.array(binned).astype(int)
def clean_whitespace(p):
# Removes values of imported data that do not match MaxCal data
global rc_bin, binned
bmin = np.min(binned)
bmax = np.max(binned)
rc_bin = bmax - bmin + 1
binned -= bmin
return p[bmin:bmax+1]
def eigeneval(matrix):
# Returns eigenvalues, eigenvectors, and negative exponents of eigenvalues
eigenValues, eigenVectors = np.linalg.eig(matrix)
idx = eigenValues.argsort() # Sorting by eigenvalues
eigenValues = eigenValues[idx] # Order eigenvalues
eigenVectors = eigenVectors[:,idx] # Order eigenvectors
eigenExp = np.exp(-eigenValues) # Calculate exponentials
return eigenValues, eigenExp, eigenVectors
def mu_factor(binned,p):
# Calculates the prefactor on SGOOP for a given RC
# Returns the mu factor associated with the RC
# NOTE: mu factor depends on the choice of RC!
# <N>, number of neighbouring transitions on each RC
J = 0
N_mean = 0
D = 0
for I in binned:
N_mean += (np.abs(I-J) <= d)*1
J = np.copy(I)
N_mean = N_mean/len(binned)
# Denominator
for j in range(rc_bin):
for i in range(rc_bin):
if (np.abs(i-j) <= d) and (i != j):
D += np.sqrt(p[j]*p[i])
MU = N_mean/D
return MU
def transmat(MU,p):
# Generates transition matrix
S = np.zeros([rc_bin, rc_bin])
# Non diagonal terms
for j in range(rc_bin):
for i in range(rc_bin):
if (p[i] != 0) and (np.abs(i-j) <= d and (i != j)) :
S[i, j] = MU*np.sqrt(p[j]/p[i])
for i in range(rc_bin):
S[i,i] = -S.sum(1)[i] # Diagonal terms
S = -np.transpose(S) # Transpose and fix sign
return S
def spectral():
# Calculates spectral gap for appropriate number of wells
SEE_pos=SEE[-1][SEV[-1]>-1e-10] # Removing negative eigenvalues
SEE_pos=SEE_pos[SEE_pos>0] # Removing negative exponents
gaps=SEE_pos[:-1]-SEE_pos[1:]
if np.shape(gaps)[0]>=wells:
return gaps[wells-1]
else:
return 0
def sgoop(rc,p):
# SGOOP for a given probability density on a given RC
# Start here when using probability from an external source
MU = mu_factor(binned,p) # Calculated with MaxCal approach
S = transmat(MU,p) # Generating the transition matrix
sev, see, seve = eigeneval(S) # Calculating eigenvalues and vectors for the transition matrix
SEV.append(sev) # Recording values for later analysis
SEE.append(see)
SEVE.append(seve)
sg = spectral() # Calculating the spectral gap
SG.append(sg)
return sg
def biased_prob(rc,old_rc):
# Calculates probabilities while "forgetting" original RC
global binned
bias_prob=md_prob(old_rc)
bias_bin=binned
proj=[]
for v in data_array:
proj.append(np.dot(v,rc))
rc_min=np.min(proj)
rc_max=np.max(proj)
binned=(proj-rc_min)/(rc_max-rc_min)*(rc_bin-1)
binned=np.array(binned).astype(int)
prob=np.zeros(rc_bin)
for i in range(np.shape(binned)[0]):
prob[binned[i]]+=1/bias_prob[bias_bin[i]] # Dividing by RAVE-like weights
return prob/prob.sum() # Normalize
def best_plot():
# Displays the best RC for 2D data
best_rc=np.ceil(np.arccos(RC[np.argmax(SG)][0])*180/np.pi)
plt.figure()
cmap=plt.cm.get_cmap("jet")
hist = np.histogram2d(data_array[:,0],data_array[:,1],100)
hist = hist[0]
prob = hist/np.sum(hist)
potE=-np.ma.log(prob)
potE-=np.min(potE)
np.ma.set_fill_value(potE,np.max(potE))
plt.contourf(np.transpose(np.ma.filled(potE)),cmap=cmap)
plt.title('Best RC = {0:.2f} Degrees'.format(best_rc))
origin=[50,50]
rcx=np.cos(np.pi*best_rc/180)
rcy=np.sin(np.pi*best_rc/180)
plt.quiver(*origin,rcx,rcy,scale=.1,color='grey');
plt.quiver(*origin,-rcx,-rcy,scale=.1,color='grey');
def rc_eval(rc):
# Unbiased SGOOP on a given RC
# Input type: array of weights
"""Save RC for Calculations"""
rc = normalize_rc(rc)
RC.append(rc)
"""Probabilities and Index on RC"""
prob=md_prob(rc)
P.append(prob)
"""Main SGOOP Method"""
sg = sgoop(rc,prob)
return sg
def biased_eval(rc,bias_rc):
# Biased SGOOP on a given RC with bias along a second RC
# Input type: array of weights, probability from original RC
"""Save RC for Calculations"""
rc = normalize_rc(rc)
RC.append(rc)
"""Probabilities and Index on RC"""
prob=biased_prob(rc,bias_rc)
P.append(prob)
"""Main SGOOP Method"""
sg = sgoop(rc,prob)
return sg
```
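Since the script above loads `ZS.traj` at import time, here is a minimal, self-contained sketch of the RC-scanning idea on synthetic data. `project_and_bin` is a hypothetical helper that mirrors what `md_prob()` does (project the trajectory onto a unit vector and histogram the projection); it is not part of the original script.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2D trajectory: two Gaussian wells separated along x
traj = np.vstack([rng.normal([-1.0, 0.0], 0.2, size=(500, 2)),
                  rng.normal([+1.0, 0.0], 0.2, size=(500, 2))])

n_bins = 20  # plays the role of rc_bin

def project_and_bin(rc, data, bins=n_bins):
    # Project data onto the unit vector rc and histogram the projection,
    # mirroring md_prob() before the MaxCal step.
    rc = np.asarray(rc) / np.linalg.norm(rc)
    proj = data @ rc
    idx = ((proj - proj.min()) / (proj.max() - proj.min()) * (bins - 1)).astype(int)
    prob = np.bincount(idx, minlength=bins).astype(float)
    return prob / prob.sum()

# Scan unit vectors at angles pi*i, as generate_rc() does
for i in np.linspace(0.0, 1.0, 5, endpoint=False):
    rc = (np.cos(np.pi * i), np.sin(np.pi * i))
    prob = project_and_bin(rc, traj)
    print(f"angle = {180 * i:5.1f} deg, occupied bins = {np.count_nonzero(prob)}")
```

The RC that best separates the two wells concentrates the probability into two well-separated clumps of bins, which is what the spectral-gap step then rewards.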
# AUTO-DROP HIGHLY CORRELATED COLUMNS - GANESH RAM GURURAJAN
**Explanation:**
Steps:
1. First, pass the DataFrame into the function
2. Get the corr() DataFrame using the **Pearson method**
3. Filter with the condition **corr[corr >= 0.85]**
4. **Set the diagonal to np.nan, because the diagonal of corr() is always 1.0**
5. **Drop all columns and rows that are entirely np.nan**
6. If corr() has shape (0, 0), there are no highly correlated columns
7. Otherwise, while corr() is not of shape (0, 0), repeatedly remove both the row and the column with the highest correlation value in the whole matrix, again dropping any rows or columns that become entirely np.nan. This steadily shrinks the correlation matrix.
8. Finally, drop the recorded columns from the original DataFrame and return the DF
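For comparison, a common non-iterative idiom for the same goal masks the upper triangle of the absolute correlation matrix and drops every column that correlates too strongly with an earlier one. This is a hedged sketch, not the function defined below, and `drop_correlated_upper_triangle` is a hypothetical name:

```python
import numpy as np
import pandas as pd

def drop_correlated_upper_triangle(df, threshold=0.85):
    # Keep only the strict upper triangle so each pair is inspected once
    corr = df.corr(method='pearson').abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

rng = np.random.RandomState(0)
a = rng.randn(100)
demo = pd.DataFrame({'a': a,
                     'b': rng.randn(100),                 # independent noise
                     'c': a * 2 + 0.01 * rng.randn(100)}) # near-duplicate of 'a'
print(list(drop_correlated_upper_triangle(demo).columns))  # 'c' is dropped
```

The iterative approach below can remove fewer columns in some edge cases, since it re-examines the matrix after each removal rather than deciding everything in one pass.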
### Module Imports - Problem - 3
```
import pandas as pd
import numpy as np
import seaborn as sns # FOR VISUALIZATION OF HEATMAP
```
## **THE FUNCTION** - PROBLEM 3
```
def dropHighlyCorrelatedColumns(df):
'''
    This method removes the minimum number of highly correlated columns using the Pearson method.
'''
# INITIAL FEW STEPS
corr = df.corr(method='pearson') # CORR dataFrame consisting only correlation
corr = corr[(corr >= 0.85)] # Filtering dateFrame with corr() >= 0.85
for column in corr.columns: # np.nan Diagonal, as corr() of diag is 1.0 always
corr.loc[column][column] = np.nan
corr.dropna(axis=1,how='all',inplace=True) # Drop all columns with absolute NaN
corr.dropna(axis=0,how='all',inplace=True) # Drop all row with absolute NaN
###################### THIS IS THE IMPORTANT PART ######################
if corr.shape!=(0,0): # If shape of the current dataFrame is not (0,0)
removed_cols = [] # Stored the names of columns to be removed from original dataframe
while corr.shape != (0,0): # While Correlation DF is not NONE:
corr_dict = {} # Keep removing highly correlated columns in the descending order of
for column in corr.columns: # Correlation
corr_dict[corr[column].max()] = column
try:
val = max(corr_dict)
corr.drop(corr_dict[val],inplace=True)
corr.drop(corr_dict[val],axis=1,inplace=True)
corr.dropna(axis=1,how='all',inplace=True)
corr.dropna(axis=0,how='all',inplace=True)
removed_cols.append(corr_dict[val])
del corr_dict[val]
except ValueError: # When corr_dict is empty, it means all columns have been noted
break
df.drop(removed_cols,axis=1,inplace=True) # Remove the columns from the original DF
print("\nRemoved Columns are {}".format(removed_cols)) # Print the removed columns
else:
print('There are no highly correlated columns') # No need of removal of columns if all corr() is less than 0.85
return df # In any case return DF
```
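As a quick sanity check of steps 2–5 on synthetic data (a hedged sketch; the `demo` frame is made up for illustration), the diagonal masking and dropna filtering leave only the genuinely correlated pair:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
a = rng.randn(200)
demo = pd.DataFrame({'a': a,
                     'b': rng.randn(200),               # independent noise
                     'c': a + 0.01 * rng.randn(200)})   # near-copy of 'a'

corr = demo.corr(method='pearson')
np.fill_diagonal(corr.values, np.nan)          # step 4: the diagonal is always 1.0
high = (corr[corr >= 0.85]
        .dropna(axis=1, how='all')             # step 5: drop all-NaN columns...
        .dropna(axis=0, how='all'))            # ...and all-NaN rows
print(high)  # only the a/c pair survives the 0.85 filter
```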
## - - - - - TEST YOUR DATA HERE - - - - - PROBLEM 3
```
############### CHANGE THE NEXT LINE TO LOAD YOUR DATA ###############
# df = pd.read_csv('health.csv')
################ DO NOT CHANGE THIS ################
# This is the resultant DATA FRAME
new_df = dropHighlyCorrelatedColumns(df)
```
## VISUALIZE HERE
#### FIRST CELL IS THE **HEAT MAP OF ORIGINAL DATA**
#### SECOND CELL IS THE **HEAT MAP OF NEW DATA**
#### THIRD CELL IS TO VIEW **NEW_DATA.corr() > 0.85**
```
######################### THIS IS HEAT MAP OF CORR() OF ORIGINAL DATAFRAME ##############################
# VISUALIZE HERE
# UNCOMMENT THE BELOW LINE
# sns.heatmap(df.corr())
######################### THIS IS HEAT MAP OF CORR() OF NEW DATAFRAME ##############################
# VISUALIZE HERE
# UNCOMMENT THE BELOW LINE
# sns.heatmap(new_df.corr())
###################### VIEW NEW_DF.CORR() > 0.85 HERE ######################
# UNCOMMENT THE BELOW LINE
# new_df.corr() > 0.85
```
# Independent component analysis
Here we'll learn about independent component analysis (ICA), a matrix decomposition method that's an alternative to PCA.
```
import ipywidgets
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import signal
from scipy.spatial import distance
import seaborn as sns
from sklearn.decomposition import FastICA, PCA
sns.set(style='white', context='notebook')
%matplotlib inline
```
## Independent Component Analysis (ICA)
ICA was originally created for the "cocktail party problem" for audio processing. It's an incredible feat that our brains are able to filter out all these different sources of audio, automatically!

(I really like how smug that guy looks - it's really over the top)
[Source](http://www.telegraph.co.uk/news/science/science-news/9913518/Cocktail-party-problem-explained-how-the-brain-filters-out-unwanted-voices.html)
### Cocktail party problem
Given multiple sources of sound (people talking, the band playing, glasses clinking), how do you distinguish independent sources of sound? Imagine at a cocktail party you have multiple microphones stationed throughout, and you get to hear all of these different sounds.

[Source](http://www.slideserve.com/vladimir-kirkland/ica-and-isa-using-schweizer-wolff-measure-of-dependence)
### What if you applied PCA to the cocktail party problem?
What would you get if you applied PCA instead of ICA to the mixed microphone signals?
Example adapted from the excellent [scikit-learn documentation](http://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html).
```
###############################################################################
# Generate sample data
np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
S /= S.std(axis=0) # Standardize data
# Mix data
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
# Compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Reconstruct signals
A_ = ica.mixing_ # Get estimated mixing matrix
# We can `prove` that the ICA model applies by reverting the unmixing.
assert np.allclose(X, np.dot(S_, A_.T) + ica.mean_)
# For comparison, compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components
###############################################################################
# Plot results
plt.figure()
models = [X, S, S_, H]
names = ['Observations (mixed signal)',
'True Sources',
'ICA recovered signals',
'PCA recovered signals']
colors = sns.color_palette('colorblind')
for ii, (model, name) in enumerate(zip(models, names), 1):
plt.subplot(4, 1, ii)
plt.title(name)
for sig, color in zip(model.T, colors):
plt.plot(sig, color=color)
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.46)
sns.despine()
plt.show()
```
### Discussion
1. What do you get when you apply PCA to the cocktail party problem?
2. What is the difference between “orthogonal” features (PCA) and “independent” features (ICA)? Indicate all that apply.
- Orthogonal features explain distinct gene expression patterns
- Orthogonal features describe average gene expression patterns
- Independent features explain distinct gene expression patterns
- Independent features describe average gene expression patterns
- Orthogonal features can correlate with each other, independent features do not.
- Independent features can correlate with each other, orthogonal features do not.
## PCA vs ICA
Which one should you use? Well, that depends on your question :)
PCA and ICA have different goals. PCA wants to find the things that change the greatest across your data, and ICA wants to find individual signals. Let's take a look at this by running both PCA and ICA on data that we're all familiar with - faces!
The "Olivetti Faces Dataset" is a commonly used face-recognition dataset in machine learning.
```
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
import logging
from time import time
from numpy.random import RandomState
import matplotlib.pyplot as plt
plt.close('all')
from sklearn.datasets import fetch_olivetti_faces
from sklearn.cluster import MiniBatchKMeans
from sklearn import decomposition
image_shape = (64, 64)
rng = RandomState(0)
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
###############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
###############################################################################
def plot_gallery(title, images, n_col=5, n_row=5, cmap=plt.cm.viridis):
plt.figure(figsize=(2. * n_col/2, 2.26 * n_row/2))
plt.suptitle(title)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
if comp.min() < 0:
vmax = max(comp.max(), -comp.min())
vmin = -vmax
else:
vmin = comp.min()
vmax = comp.max()
plt.imshow(comp.reshape(image_shape), cmap=cmap,
interpolation='nearest',
vmin=vmin, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
###############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces[:25], cmap=plt.cm.gray)
```
The first figure and its subpanels show the first 25 (out of 400) faces in the dataset.
So now let's explore!
```
def explore_pca_ica(algorithm, n_components):
# establish size of the figure to plot by the number
# of rows and columns of subplots
n_row = 1
n_col = 1
while n_row * n_col < n_components:
if n_col > n_row:
n_row += 1
else:
n_col += 1
kwargs = dict(whiten=True, n_components=n_components)
if algorithm == 'PCA':
decomposer = PCA(**kwargs)
elif algorithm == 'ICA':
kwargs['random_state'] = 2016
kwargs['max_iter'] = 200
kwargs['tol'] = 0.001
decomposer = FastICA(**kwargs)
t0 = time()
decomposer.fit(X=faces_centered)
train_time = (time() - t0)
print("done in %0.3fs" % train_time)
plot_gallery('%s - Train time %.1fs' % (algorithm, train_time),
decomposer.components_[:n_components], n_col=n_col, n_row=n_row)
ipywidgets.interact(explore_pca_ica,
algorithm=ipywidgets.Dropdown(options=['PCA', 'ICA'], value='PCA',
description='Matrix decomposition algorithm'),
n_components=ipywidgets.IntSlider(min=2, max=50, value=12));
```
This plot shows you the *components* of the data.
Notice that in PCA, these are "eigenfaces," that is, the first face is the most average face that explains most of the data. The next components shows where the next largest amount of variance is. As you continue, the components of PCA goes into the edge cases of the different faces so you can reconstruct more and more faces.
For ICA, we don't get an "eigenface." Instead, ICA goes right into the discrete signals. Notice that some of the ICA components actually look like an individual person's face, not an average of people's faces. ICA is pulling out individual people who had their photo taken multiple times in the dataset, and reconstructing them.
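The ordering difference can be checked on synthetic data (a hedged sketch, not the faces dataset): PCA components arrive sorted by explained variance, while FastICA's unmixing rows come in no particular order:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
# Three independent non-Gaussian sources mixed into five observed features
S = rng.laplace(size=(1000, 3))
A = rng.randn(3, 5)
X = S @ A

pca = PCA(n_components=3).fit(X)
print(pca.explained_variance_ratio_)     # monotonically decreasing

ica = FastICA(n_components=3, random_state=0, max_iter=1000).fit(X)
print(ica.components_.shape)             # (3, 5) unmixing rows, unordered
```

This is why the first few PCA components are stable as you add more, while changing `n_components` can shuffle which signals ICA recovers.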
### Exercise 2
Discuss the questions below while you play with the sliders.
1. How does the number of components influence the decomposition by PCA? (indicate all that apply)
- You get to see more distinct signals in the data
- It changes the components
- It doesn't affect the first few components
- You get to see more of the "special cases" in the variation of the data
2. How does the number of components influence the decomposition by ICA? (indicate all that apply)
- You get to see more distinct signals in the data
- It changes the components
- It doesn't affect the first few components
- You get to see more of the "special cases" in the variation of the data
3. What does the first component of PCA represent? (Check all that apply)
- The features that change the most across the data
- One distinct subset of features that appears independently of all other features
- The axis of the "loudest" features in the dataset
- A particular set of facial features that appear together and not with other features
3. What does the first component of ICA represent? (Check all that apply)
- The features that change the most across the data
- One distinct subset of features that appears independently of all other features
- The axis of the "loudest" features in the dataset
- A particular set of facial features that appear together and not with other features
#### The punchline
Which should you use, PCA or ICA? Again, it depends on your question!
PCA tells you which are the largest varying genes in your data. ICA tells you which genes contribute to discrete signals from specific populations in your data. Let's look at some more biologically relevant (though still small) datasets.
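To make the contrast concrete, here is a minimal sketch (assuming only `numpy` and `scikit-learn`) of the classic blind-source-separation toy: PCA finds orthogonal directions of maximal variance in the mixed signals, while ICA recovers the original independent sources.

```python
# Toy blind-source separation: two independent signals, linearly mixed.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                    # smooth sinusoidal source
s2 = np.sign(np.sin(3 * t))           # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))  # sources + a little noise
A = np.array([[1.0, 1.0], [0.5, 2.0]])  # mixing matrix
X = S @ A.T                             # observed mixtures

pca = PCA(n_components=2).fit(X)
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
S_est = ica.fit_transform(X)            # recovered sources (up to sign/scale)

# PCA components are orthonormal axes ordered by explained variance;
# ICA components are unordered and need not be orthogonal.
print(pca.explained_variance_ratio_)
print(S_est.shape)
```

Each recovered ICA column correlates strongly with one of the true sources, while the leading PCA axis mixes both; this is the same distinction the sliders above illustrate on faces and genes.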
```
from time import time
import ipywidgets
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import offsetbox
import pandas as pd
from sklearn import (manifold, datasets, decomposition, ensemble,
discriminant_analysis, random_projection)
import seaborn as sns
sns.set(context='notebook', style='white')
%matplotlib inline
np.random.seed(2016)
n_samples = 10
n_genes = 20
half_genes = int(n_genes/2)
half_samples = int(n_samples/2)
size = n_samples * n_genes
genes = ['Gene_{}'.format(str(i+1).zfill(2)) for i in range(n_genes)]
samples = ['Sample_{}'.format(str(i+1).zfill(2)) for i in range(n_samples)]
mouse_data = pd.DataFrame(np.random.randn(size).reshape(n_samples, n_genes), index=samples, columns=genes)
# Add biological variance
mouse_data.iloc[:half_samples, :half_genes] += 1
mouse_data.iloc[:half_samples, half_genes:] += -1
mouse_data.iloc[half_samples:, half_genes:] += 1
mouse_data.iloc[half_samples:, :half_genes] += -1
# Z_score within genes
mouse_data = (mouse_data - mouse_data.mean())/mouse_data.std()
# Biological samples
mouse_groups = pd.Series(dict(zip(mouse_data.index, (['Mouse_01'] * int(n_samples/2)) + (['Mouse_02'] * int(n_samples/2)))),
name="Mouse")
mouse_groupby = pd.Series(mouse_groups.index, index=mouse_groups.values)
mouse_palette = ['lightgrey', 'black']
mouse_to_color = dict(zip(['Mouse_01', 'Mouse_02'], mouse_palette))
mouse_colors = [mouse_to_color[mouse_groups[x]] for x in samples]
# Gene colors
gene_colors = (['SeaGreen'] * half_genes) + (['MediumPurple'] * half_genes)
mouse_row_colors = mouse_colors
mouse_col_colors = gene_colors
g = sns.clustermap(mouse_data, row_colors=mouse_row_colors, col_cluster=False, row_cluster=False,
linewidth=0.5, col_colors=mouse_col_colors,
cbar_kws=dict(label='Normalized Expression'))
plt.setp(g.ax_heatmap.get_yticklabels(), rotation=0);
g.fig.suptitle('Mouse data')
np.random.seed(2016)
n_samples = 10
n_genes = 20
half_genes = int(n_genes/2)
half_samples = int(n_samples/2)
size = n_samples * n_genes
genes = ['Gene_{}'.format(str(i+1).zfill(2)) for i in range(n_genes)]
samples = ['Sample_{}'.format(str(i+1).zfill(2)) for i in range(n_samples)]
pseudotime_data = pd.DataFrame(np.random.randn(size).reshape(n_samples, n_genes), index=samples, columns=genes)
# Add "pseudotime"
pseudotime_data.iloc[:, :half_genes] = pseudotime_data.iloc[:, :half_genes].add(np.square(np.arange(n_samples)/2), axis=0)
pseudotime_data.iloc[:, half_genes:] = pseudotime_data.iloc[:, half_genes:].add(np.square(np.arange(n_samples)[::-1]/2), axis=0)
# Normalize genes using z-scores
pseudotime_data = (pseudotime_data - pseudotime_data.mean())/pseudotime_data.std()
pseudotime_row_colors = sns.color_palette('BrBG', n_colors=n_samples)
pseudotime_col_colors = sns.color_palette("PRGn", n_colors=n_genes)
pseudotime_palette = pseudotime_row_colors
tidy = pseudotime_data.unstack().reset_index()
tidy = tidy.rename(columns={'level_0': 'Gene', 'level_1': "Sample", 0:'Normalized Expression'})
tidy.head()
# sns.factorplot was renamed sns.catplot (with kind='point') in seaborn >= 0.9
g = sns.catplot(data=tidy, hue='Gene', palette=pseudotime_col_colors, x='Sample',
                y='Normalized Expression', kind='point', aspect=2)
# g.map(plt.plot, x='Sample', y='Normalized Expression')
g = sns.clustermap(pseudotime_data, row_colors=pseudotime_row_colors, col_cluster=False, row_cluster=False,
linewidth=0.5, col_colors=pseudotime_col_colors,
cbar_kws=dict(label='Normalized Expression'))
plt.setp(g.ax_heatmap.get_yticklabels(), rotation=0);
g.fig.suptitle('Pseudotime data')
methods = [
'Matrix decomposition: PCA',
'Matrix decomposition: ICA',
]
def explore_clustering(dataset, method, n_components):
    if dataset == 'Mouse':
        data = mouse_data
        row_colors = mouse_row_colors
        col_colors = mouse_col_colors
        hue = mouse_groups.copy()
        palette = mouse_palette
    elif dataset == 'Pseudotime':
        data = pseudotime_data
        row_colors = pseudotime_row_colors
        col_colors = pseudotime_col_colors
        hue = pd.Series(data.index, index=data.index)
        palette = pseudotime_palette
    hue.name = 'hue'
    # Copy the full name of the method
    fullname = str(method)
    if method.startswith('Clustering'):
        method = method.split()[-1]
        t0 = time()
        g = sns.clustermap(data, row_colors=row_colors, method=method,
                           xticklabels=[], yticklabels=[])
        g.fig.suptitle('{} linkage of the {} data (time {:.2f}s)'.format(fullname, dataset, time() - t0))
    else:
        max_iter = 100
        random_state = 0
        n_init = 1
        if method.endswith('PCA'):
            estimator = decomposition.PCA(n_components=n_components)
        elif method.endswith('ICA'):
            estimator = decomposition.FastICA(max_iter=max_iter, n_components=n_components,
                                              random_state=random_state)
        t0 = time()
        smushed = estimator.fit_transform(data)
        smushed = pd.DataFrame(smushed, index=data.index)
        smushed = smushed.join(hue)
        title = "{} embedding of the {} (time {:.2f}s)".format(fullname, dataset, time() - t0)
        sns.pairplot(smushed, hue='hue', palette=palette)
        # fig, ax = plt.subplots()
        # ax.scatter(smushed[:, 0], smushed[:, 1], color=row_colors, s=40)
        # ax.set(title=title)
        # sns.despine()
# Note: explore_clustering takes no 'metric' argument, so no distance-metric
# dropdown is passed to interact.
ipywidgets.interact(explore_clustering,
                    dataset=ipywidgets.Dropdown(options=['Mouse', 'Pseudotime'], value='Mouse',
                                                description='Dataset'),
                    method=ipywidgets.Dropdown(options=methods, value='Matrix decomposition: PCA',
                                               description='Unsupervised learning method'),
                    n_components=ipywidgets.IntSlider(min=2, max=10, value=4));
```
## Discussion questions
1. What do additional components of PCA reveal?
2. How does the number of ICA components affect the mouse data? The pseudotime data?
# Graph exploration and sampling
Here we experiment with network exploration using the main graph samplers found in the literature. The principle is the following: the initial graph is too large to be handled, and we need to extract a representative part of it for analysis. We hope this reduced subgraph is representative of the large one, and indeed, each method comes with theoretical guarantees to that effect.
The different samplers are designed to preserve particular graph properties when subsampling. We will see which properties are associated with each sampler and learn to select the most suitable one for a given application.
```
import networkx as nx
import littleballoffur as lbof
import matplotlib.pyplot as plt
from collections import Counter
```
The main samplers are coded in the Python module called "little ball of fur" https://github.com/benedekrozemberczki/littleballoffur and we will use it here (`pip install littleballoffur`).
The documentation can be found here:
* https://little-ball-of-fur.readthedocs.io
Let us load one of the datasets available in the module.
```
# load a graph
#reader = lbof.GraphReader("facebook")
reader = lbof.GraphReader("github")
G = reader.get_graph()
print('Number of nodes: {}, number of edges: {}.'.format(G.number_of_nodes(),G.number_of_edges()))
```
Let us suppose this graph is too big for our analysis. We need to get a reduced version of it. We can define the size of this reduced dataset.
```
# number of nodes in the subgraph
number_of_nodes = int(0.01*G.number_of_nodes())
print('Number of nodes in the subgraph:',number_of_nodes)
```
There exist several ways of sampling a graph. When you have access to the full graph, you may sample edges or nodes at random; this is the first family of methods. Alternatively, you can start from an initial group of nodes and collect part of the graph by exploring (following connections) from them. We shall focus on these latter approaches.
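For contrast with the exploration methods studied below, the first family can be sketched in a few lines. This is an illustrative helper written with plain `networkx`, not part of `littleballoffur`:

```python
# Uniform random node sampling: keep n_nodes nodes chosen uniformly at
# random and the induced subgraph (only the edges among the kept nodes).
import random
import networkx as nx

def random_node_sample(G, n_nodes, seed=0):
    rng = random.Random(seed)
    nodes = rng.sample(list(G.nodes()), n_nodes)
    return G.subgraph(nodes).copy()

# Demo on a synthetic scale-free graph
G_demo = nx.barabasi_albert_graph(1000, 3, seed=1)
H = random_node_sample(G_demo, 100)
print(H.number_of_nodes(), H.number_of_edges())
```

Induced subgraphs of uniformly sampled nodes are known to underestimate node degrees (most of a hub's neighbors are dropped), which is one motivation for the exploration-based samplers below.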
## Exploring the network
### General principle
For the applications we have in mind, we want to start the exploration from an initial set of nodes. For example, it could be a particular user or group of users in a social network that are posting about a topic we are interested in. We want to know more about this topic and related topics appearing in the exchanges. Our goal is to explore the network around this initial group. Hence we plan to use the exploration methods.
The exploration scheme is shown in the following figure. Starting from an initial node, the neighborhood is explored, randomly selecting a subset of edges (with possibly different probability weights for different edges). The process is then iterated on the newly sampled nodes.

<center>Left: snowball exploration, which follows all edges. Right: spikyball exploration, which follows a subset of edges at each step.</center>
Among the most popular explorations are:
- Metropolis-Hastings random walk sampler ([paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.4864&rep=rep1&type=pdf)) and ([paper](https://core.ac.uk/download/pdf/192275476.pdf)),
- Forest Fire sampler ([paper](https://cs.stanford.edu/people/jure/pubs/sampling-kdd06.pdf)).
MHRW algorithm (with $k_v$ the degree of node $v$):

The Spikyball ([paper](https://www.mdpi.com/1999-4893/13/11/275)) adds more flexibility to the previous sampling schemes while keeping the exploration efficient. At each step of the exploration, all the out-going edges are collected. Depending on their weights and on the number of these connections at the source and target nodes, some edges are selected. You can bias the sampling toward:
* nodes with high degree (parameter $\alpha$),
* nodes connected with large weights (parameter $\beta$),
* neighbors connected to several nodes already sampled (parameter $\gamma$).

One step of the exploration is illustrated in the following figure. The sampled graph is in the middle, in purple. The green nodes are the neighbors that can be selected for the next exploration step. Among all the out-going edges, only a subset is selected: the ones drawn as solid straight lines.

### Application
In the following, we experiment with the exploration methods on toy graphs. The goal is to understand the different possibilities for exploring a network, the different parameters, and their impact on the sampled network. The initial group of nodes is chosen at random within the exploration functions of `littleballoffur`, so we do not focus on a specific region of the network but rather on the way the exploration is performed. Later on, we will choose the initial nodes ourselves and apply the exploration to real networks.
We save the graphs as `gexf` files in order to visualize them with Gephi.
```
# Metropolis Hasting random walk sampler
sampler = lbof.MetropolisHastingsRandomWalkSampler(number_of_nodes = number_of_nodes)
GMH = sampler.sample(G)
nx.write_gexf(GMH, 'data/gmh.gexf')
print('Subgraph with {} nodes and {} edges.'.format(GMH.number_of_nodes(),GMH.number_of_edges()))
# Forest Fire sampler
sampler = lbof.ForestFireSampler(number_of_nodes = number_of_nodes)
GFF = sampler.sample(G)
nx.write_gexf(GFF, 'data/gff.gexf')
print('Subgraph with {} nodes and {} edges.'.format(GFF.number_of_nodes(),GFF.number_of_edges()))
```
A more general and flexible exploration approach called "Spikyball" ([paper](https://www.mdpi.com/1999-4893/13/11/275)) can be used.
```
# Fireball sampler is similar to the Forest Fire sampler
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.1, mode='fireball',
initial_nodes_ratio=0.001)
GFB = sampler.sample(G)
nx.write_gexf(GFB, 'data/gfb.gexf')
print('Subgraph with {} nodes and {} edges.'.format(GFB.number_of_nodes(), GFB.number_of_edges()))
# Coreball sampler
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.1, mode='coreball',
initial_nodes_ratio=0.001)
GCB = sampler.sample(G)
# Remove isolated nodes
GCB = nx.Graph(GCB)
GCB.remove_nodes_from(list(nx.isolates(GCB)))
nx.write_gexf(GCB, 'data/gcb.gexf')
print('Subgraph with {} nodes and {} edges.'.format(GCB.number_of_nodes(), GCB.number_of_edges()))
# Coreball 2 sampler
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.1, mode='coreball',
initial_nodes_ratio=0.001, distrib_coeff=2)
GCB2 = sampler.sample(G)
# Remove isolated nodes
GCB2 = nx.Graph(GCB2)
GCB2.remove_nodes_from(list(nx.isolates(GCB2)))
nx.write_gexf(GCB2, 'data/gcb2.gexf')
print('Subgraph with {} nodes and {} edges.'.format(GCB2.number_of_nodes(), GCB2.number_of_edges()))
```
**Exercise**: Visualize some of the sampled graphs with Gephi.
### Visualisation of the graphs (from Gephi)
Metropolis-Hasting RW, Forest Fire, Fireball and Coreball
<table><tr>
<td><img src="figures/gmh.png" alt="MetropolisHasting" width="200"> </td>
<td><img src="figures/gff.png" alt="Forest Fire" width="200"></td>
<td><img src="figures/gfb.png" alt="Fireball" width="200"></td>
<td><img src="figures/gcb.png" alt="Coreball" width="200"></td>
</tr></table>
### Degree distribution
Let us look at the degree distribution of these networks.
```
# A function to plot the degree distribution of a given graph
def plot_degree(G, glabel):
    m = 1  # minimal degree to display
    degree_freq = nx.degree_histogram(G)
    degrees = range(len(degree_freq))
    plt.scatter(degrees[m:], degree_freq[m:], label=glabel)
    return max(degrees), max(degree_freq)
plt.figure(figsize=(12, 8))
mx1,my1 = plot_degree(GMH,'MH')
mx2,my2 = plot_degree(GFF,'FF')
mx3,my3 = plot_degree(GFB,'FB')
mx4,my4 = plot_degree(GCB,'CB')
mx5,my5 = plot_degree(GCB2,'CB2')
plt.xlim(1, max([mx1,mx2,mx3,mx4,mx5]))
plt.ylim(1, max([my1,my2,my3,my4,my5]))
plt.yscale('log')
plt.xscale('log')
plt.xlabel('Degree')
plt.ylabel('Frequency')
plt.legend()
plt.show()
```
**Remark 1:** this visualization is simple to code but not the most appropriate. A more precise curve could be obtained by grouping degrees into bins whose ranges follow a logarithmic scale.
**Remark 2:** The size of the sampled graphs is good for visualization but too small for having good statistics on the degree distribution.
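A sketch of the log-binned alternative mentioned in Remark 1 (assuming `numpy` and a `networkx` graph; bin edges follow a logarithmic scale and each bin is represented by its geometric center):

```python
import numpy as np
import networkx as nx

def log_binned_degree(G, n_bins=20):
    degrees = np.array([d for _, d in G.degree()])
    # bin edges spaced logarithmically from 1 to the maximum degree
    edges = np.logspace(0, np.log10(degrees.max() + 1), n_bins)
    hist, edges = np.histogram(degrees, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    return centers, hist

G_demo = nx.barabasi_albert_graph(2000, 3, seed=0)
centers, hist = log_binned_degree(G_demo)
print(len(centers), len(hist))  # -> 19 19
```

The result can then be drawn with `plt.loglog(centers, hist)` in place of the raw scatter used above, giving a much smoother tail for heavy-tailed degree distributions.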
### A larger subgraph
We increase the size of the sampled subgraph to have better statistics on the degree distribution.
```
number_of_nodes = 3000
# MHRW
sampler = lbof.MetropolisHastingsRandomWalkSampler(number_of_nodes = number_of_nodes)
GMH = sampler.sample(G)
print('MHRW subgraph with {} nodes and {} edges.'.format(GMH.number_of_nodes(),GMH.number_of_edges()))
# FF
sampler = lbof.ForestFireSampler(number_of_nodes = number_of_nodes)
GFF = sampler.sample(G)
print('FF subgraph with {} nodes and {} edges.'.format(GFF.number_of_nodes(),GFF.number_of_edges()))
# Fireball
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.05, mode='fireball',
initial_nodes_ratio=0.001)
GFB = sampler.sample(G)
print('FB subgraph with {} nodes and {} edges.'.format(GFB.number_of_nodes(), GFB.number_of_edges()))
# Coreball
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.1, mode='coreball',
initial_nodes_ratio=0.001)
GCB = sampler.sample(G)
print('CB subgraph with {} nodes and {} edges.'.format(GCB.number_of_nodes(), GCB.number_of_edges()))
# Coreball 2
sampler = lbof.SpikyBallSampler(number_of_nodes = number_of_nodes, sampling_probability=0.1, mode='coreball',
initial_nodes_ratio=0.001, distrib_coeff=2)
GCB2 = sampler.sample(G)
print('CB2 subgraph with {} nodes and {} edges.'.format(GCB2.number_of_nodes(), GCB2.number_of_edges()))
# Plot degree distribution
plt.figure(figsize=(12, 8))
mx1,my1 = plot_degree(GMH,'MH')
mx2,my2 = plot_degree(GFF,'FF')
mx3,my3 = plot_degree(GFB,'FB')
mx4,my4 = plot_degree(GCB,'CB')
mx5,my5 = plot_degree(GCB2,'CB2')
plt.xlim(1, max([mx1,mx2,mx3,mx4,mx5]))
plt.ylim(1, max([my1,my2,my3,my4,my5]))
plt.yscale('log')
plt.xscale('log')
plt.xlabel('Degree')
plt.ylabel('Frequency')
plt.legend()
plt.show()
```
### Degrees in the initial graph
How do the different methods perform with respect to node degrees in the initial graph? Do they collect more nodes with a high degree or not?
```
# Add degree as a node property in the initial graph
# Then we can collect it easily in the subsampled graph
nx.set_node_attributes(G,dict(G.degree()), name='degree')
# Plot, for the nodes of a sampled subgraph, the distribution of their degrees in the initial graph
def plot_d_init(G, subgraph, glabel):
    m = 1  # minimal degree to display
    d = [G.nodes[i]['degree'] for i in subgraph.nodes()]
    counter_dic = Counter(d)
    degrees = list(counter_dic.keys())
    degree_freq = list(counter_dic.values())
    plt.scatter(degrees[m:], degree_freq[m:], label=glabel)
    return max(degrees), max(degree_freq)
plt.figure(figsize=(12, 8))
mx1,my1 = plot_d_init(G,GMH,'MH')
mx2,my2 = plot_d_init(G,GFF,'FF')
mx3,my3 = plot_d_init(G,GFB,'FB')
mx4,my4 = plot_d_init(G,GCB,'CB')
mx5,my5 = plot_d_init(G,GCB2,'CB2')
plt.xlim(1, max([mx1,mx2,mx3,mx4,mx5]))
plt.ylim(1, max([my1,my2,my3,my4,my5]))
plt.yscale('log')
plt.xscale('log')
plt.xlabel('Degree')
plt.ylabel('Frequency')
plt.legend()
plt.show()
```
## Exercise
Redo the experiments with different parameters for the fireball and coreball and a different graph.
What are the best exploration methods
* for collecting more high degree nodes?
* for keeping the same degree distribution?
* for exploring a larger part of the network?
```
#loading packages
import nltk
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#nltk.download()
#loading text data
text=pd.read_csv("C:/Users/Admin/Desktop/text.csv")
print(text.head())
#count Words
text['word_count'] = text['text'].apply(lambda x: len(str(x).split(" ")))
text[['text','word_count']].head()
#count characters
text['char_count'] = text['text'].str.len()
text[['text','char_count']].head()
#Average Word Length
def avg_word(sentence):
    words = sentence.split()
    return sum(len(word) for word in words) / len(words)
text['avg_word'] = text['text'].apply(lambda x: avg_word(x))
text[['text','avg_word']].head()
#count of stopwords
from nltk.corpus import stopwords
stop = stopwords.words('english')
text['stopwords'] = text['text'].apply(lambda x: len([x for x in x.split() if x in stop]))
text[['text','stopwords']].head()
#Number of hashtags
text['hashtags'] = text['text'].apply(lambda x: len([x for x in x.split() if x.startswith('#')]))
text[['text','hashtags']].head()
#count of numerics
text['numerics'] = text['text'].apply(lambda x: len([x for x in x.split() if x.isdigit()]))
text[['text','numerics']].head()
#number of Uppercase words
text['upper'] = text['text'].apply(lambda x: len([x for x in x.split() if x.isupper()]))
text[['text','upper']].head()
#Basic Pre-processing
#making all as Lower case
text['text'] = text['text'].apply(lambda x: " ".join(x.lower() for x in x.split()))
text['text'].head()
#Removing Punctuation
text['text'] = text['text'].str.replace(r'[^\w\s]', '', regex=True)
text['text'].head()
#Removal of Stop Words
from nltk.corpus import stopwords
stop = stopwords.words('english')
text['text'] = text['text'].apply(lambda x: " ".join(x for x in x.split() if x not in stop))
text['text'].head()
# tokenization and find the frequency
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
t=[]
for i in range(len(text['text'])):
    tokenized_text = sent_tokenize(text['text'][i])
    tokenized_word = word_tokenize(text['text'][i])
    t = t + tokenized_word
#print(t)
freq = pd.Series(' '.join(text['text']).split()).value_counts()[:10]
print(freq)
fdist = FreqDist(t)
print(fdist)
#plotting the frequency of words
fdist.plot(25,cumulative=False)
plt.show()
#Common word removal
freq = list(freq.index)
text['text'] = text['text'].apply(lambda x: " ".join(x for x in x.split() if x not in freq))
text['text'].head()
#Rare words removal
freq = pd.Series(' '.join(text['text']).split()).value_counts()[-10:]
freq.head()
freq = list(freq.index)
text['text']= text['text'].apply(lambda x: " ".join(x for x in x.split() if x not in freq))
text['text'].head()
#Stemming
from nltk.stem import PorterStemmer
st = PorterStemmer()
text['text'][:5].apply(lambda x: " ".join([st.stem(word) for word in x.split()]))
#Lemmatization
from textblob import Word
text['text'] =text['text'].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))
text['text'].head()
#POS Tagging
nltk.pos_tag(t)[0:5]
#Text Processing
#N-grams(2)
from textblob import TextBlob
TextBlob(text['text'][0]).ngrams(2)
#Term frequency
tf1 = (text['text'][1:2]).apply(lambda x: pd.value_counts(x.split(" "))).sum(axis = 0).reset_index()
tf1.columns = ['words','tf']
tf1.head()
#Inverse Document Frequency
for i, word in enumerate(tf1['words']):
    tf1.loc[i, 'idf'] = np.log(text.shape[0] / (len(text[text['text'].str.contains(word)])))
tf1.head()
#Term Frequency – Inverse Document Frequency (TF-IDF)
tf1['tfidf'] = tf1['tf'] * tf1['idf']
tf1.head()
#making worldcloud
from wordcloud import WordCloud, STOPWORDS
from nltk.corpus import stopwords
stop_words=set(stopwords.words("english"))
print(stop_words)
comment_words = ' '
for words in t:
    comment_words = comment_words + words + ' '
wordcloud = WordCloud(width = 600, height = 600,
background_color='white' ,
stopwords = stop_words,
min_font_size = 10).generate(comment_words)
# plot the WordCloud image
plt.figure(figsize = (5, 5), facecolor = None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad = 0)
text.head()
#splitting training and testing data
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(text['text'],text['sentiment'],test_size= 0.20,random_state=15)
#Encoding the predicted variable
from sklearn.preprocessing import LabelEncoder
Encoder = LabelEncoder()
ytrain = Encoder.fit_transform(ytrain)
ytest = Encoder.transform(ytest)  # reuse the encoder fitted on the training labels
#vectorization
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer(max_features=5000)
vec.fit(text['text'])
trainx = vec.transform(xtrain)
testx = vec.transform(xtest)
print(trainx[5])
#Fit a Naviebayes text classifier
from sklearn import model_selection, naive_bayes
Naive = naive_bayes.MultinomialNB()
Naive.fit(trainx,ytrain)
ypred = Naive.predict(testx)
from sklearn.metrics import accuracy_score
print("Naive Bayes Accuracy Score -> ",accuracy_score(ypred,ytest)*100,"%")
#confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix = pd.crosstab(ypred,ytest, rownames=['Actual'], colnames=['Predicted'])
print (confusion_matrix)
import seaborn as sns
sns.heatmap(confusion_matrix, annot=True)
```
# Essentiality analysis
The goal of this analysis is to ensure the model correctly predicts the presence or absence of cell growth for well-known mutants. Unlike for _E. coli_, extensive mutant libraries have not been characterized for C. therm. Rather, a few mutants of biotechnological relevance have been relatively well studied.
## Background
### HydG-ech derived mutants
The standard for GEM model validation is the prediction of essentiality phenotypes. In particular, those relevant to metabolic engineering of C. thermocellum should be predicted accurately, to prevent designs which grow in silico but are lethal in vivo.
In the publication
> Thompson, R. Adam, et al. "Elucidating central metabolic redox obstacles hindering ethanol production in Clostridium thermocellum." Metabolic engineering 32 (2015): 207-219."
different lethal phenotypes captured by a core model are presented. The experimental evidence is provided in the figure below.
Fig. 5. Growth characteristics of parent strain (triangles) and ΔhydG Δech (circles) in MTC media (filled symbols) or MTC with the PFL inhibitor hypophosphite (open symbols). To investigate redox bottlenecks, no additional electron sink (A), 20 mM fumarate (B), 20 mM 2-ketoisovalerate (C), or 2 g/L total sulfate (D) were included in the medium to probe NADH, NAD(P)H, and Fdrd, respectively.
<img src="fig5_thompson2015.jpg" alt="Drawing" style="width: 100px;"/>
Based on this evidence, the following phenotypes should be captured by the model:
1. hydG-ech-pfl deletion is lethal
2. hydG-ech-pfl deletion can recover growth in the presence of an external electron sink, either sulfate or kiv.
Phenotype 1 is the most important.
## LL1210 related mutants
The publication
> Tian, Liang, et al. "Simultaneous achievement of high ethanol yield and titer in Clostridium thermocellum." Biotechnology for biofuels 9.1 (2016): 116.
studies the mutant with deletion of hydG, ldh, pfl, pta-ackA
| Strain name | Description | Growth rate μ (h−1) |
|-------------|------------------------------------------------------------|---------------------|
| AG553 | C. thermocellum DSM1313 Δhpt ΔhydG Δldh Δpfl Δpta-ack [10] | 0.06 ± 0.01 |
| AG601 | Selected from AG553 after first stage adaptive evolution | 0.10 ± 0.01 |
| LL1210 | Selected from AG601 after second stage adaptive evolution | 0.22 ± 0.02 |
As a reference, a wild type in Avicel tubes grows at 0.33-0.39 (h-1) (see the extracellular flux table). While it is not crucial to capture the quantitative change in growth rate, it is important to ensure that:
3. hydg-ldh-pfl-pta/ack mutant can grow
# Model simulations
```
%matplotlib inline
import os
import sys
#sys.path.append('/home/sergio/Dropbox/cthermgem-dev')
#os.chdir('/home/sergio/Dropbox/cthermgem-dev')
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import csv
import numpy as np
import tools.conf_model
import cobra as cb
import settings
from tools.essentiality import *
model = cb.io.load_json_model(os.path.join(settings.INTERMEDIATE_MODEL_ROOT, 'iSG_5.json'))
set_conditions(model, medium_str='cellb', secretion='common_secretion')
model.objective = 'BIOMASS_CELLOBIOSE'
#Several features were updated after this notebook was written; to make the code reproducible, these features must be reverted to their original state:
model.reactions.EX_h2s_e.bounds = (0,1000) # Enable sulfide secretion
model.reactions.ACS.bounds = (0,1000) # Enable ACS
```
## Phenotype 1: hydG-ech-pfl mutant cannot grow in minimal medium
```
r_wt = model.optimize()
print('Growth rate of wt: {:.2f}'.format(r_wt.objective_value))
mut_ko = ['BIF','H2ASE_syn', 'PFL', 'ECH']
with model as tmodel:
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
    r = tmodel.optimize()
    print('Growth rate of hydG-ech-pfl ko: {:.2f} \t fraction of wt: {:.2f}'.format(r.objective_value, r.objective_value/r_wt.objective_value))
    tmodel.summary()
```
The growth rate of this mutant is 48% that of the wild type. GEMs are known to overpredict growth since they do not account for important kinetic and regulatory limitations. However, lethality is often considered to correspond to a reduction of 80-90% relative to the theoretical maximum. Using that reference, we can say that __the model is failing to predict the lethality phenotype.__
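The threshold argument can be made explicit with a small helper (a sketch independent of `cobra`; the 20% cutoff is the convention stated above, not a value from the model):

```python
# A knockout is called lethal when the mutant growth rate falls below a
# given fraction (default 20%) of the wild-type maximum, i.e. a reduction
# of 80% or more.
def is_lethal(mutant_growth, wt_growth, min_fraction=0.2):
    return mutant_growth <= min_fraction * wt_growth

print(is_lethal(0.48, 1.0))  # -> False: 48% of wild type is not called lethal
print(is_lethal(0.17, 1.0))  # -> True: below the 20% cutoff
```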
We observe sulfide secretion; this pathway was studied by Thompson 2015 and further analyzed by [Biswas 2017](https://biotechnologyforbiofuels.biomedcentral.com/articles/10.1186/s13068-016-0684-x), who observed a range of 10-20 umol depending on the mutant.
```
with model as tmodel:
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
    tmodel.reactions.EX_h2s_e.knock_out()
    rmut = tmodel.optimize()
    tmodel.summary()
with model as tmodel:
    tmodel.reactions.EX_h2s_e.knock_out()
    r = tmodel.optimize()
    print('WT-EX_h2s_e gr: {:.2f}'.format(r.objective_value))
    print('Fraction of wt growth: {:.2f}'.format(rmut.objective_value/r.objective_value))
deleted_rxns = ['EX_h2s_e']
```
The elimination of sulfide secretion further reduces the growth rate of the mutant, to 39% of the wild type.
The current understanding of this mutant is that POR cannot produce sufficient acetyl-CoA for growth due to the accumulation of reduced ferredoxin. This leaves two hypotheses that could still explain growth:
1. An alternative source of Acetyl-CoA is active.
2. An alternative ferredoxin (or indirectly nad(p)h) oxidation pathway is active.
```
# Sources of accoa
with model as tmodel:
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
    tmodel.metabolites.accoa_c.summary()
    tmodel.reactions.ACS.knock_out()
    rmut = tmodel.optimize()
    tmodel.summary()
    tmodel.metabolites.accoa_c.summary()
with model as tmodel:
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
    tmodel.reactions.ACS.knock_out()
    r = tmodel.optimize()
    print('\nWT-EX_h2s_e gr: {:.2f}'.format(r.objective_value))
    mut_gr = rmut.objective_value
    print('Fraction of wt growth: {:.2f}'.format(mut_gr/r.objective_value))
deleted_rxns.append('ACS')
```
We observed that ACS was capable of generating the majority of the acetyl-CoA required for growth. ACS was added to the model based on the genome annotation, but there is no experimental evidence of this reaction in C. therm. Furthermore, acetate consumption phenotypes have not been observed. So the current knowledge leads us to believe that ACS is not active in C. therm.
These modifications reduce growth to 17% of the wild type. This is often considered lethal, even for more curated models, since it is below 20% of the maximum. We will dig a bit further to identify which pathways still support growth.
Investigating potential errors in redox metabolism is much more challenging due to its highly redundant and interconnected nature, as well as its interactions with major metabolic reactions which cannot be deleted. Thus, we will perform a systematic deletion analysis to identify which single deletions further reduce growth.
```
with model as tmodel:
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
    # wt
    r_wt = tmodel.optimize()
    wt_del = cb.flux_analysis.single_reaction_deletion(tmodel)
    # mut
    [tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
    mut_del = cb.flux_analysis.single_reaction_deletion(tmodel)
MIN_GROWTH = 0.2 * r_wt.objective_value
mut_essen = set(mut_del.index[mut_del['growth'] <= MIN_GROWTH])
wt_essen = set(wt_del.index[wt_del['growth'] <= MIN_GROWTH])
wt_eff = set(wt_del.index[wt_del['growth'] < 0.9*r_wt.objective_value])  # deletions which reduce wt growth below 90% of the original value
print('Essential in: WT: {}; Mut: {}, Mutant and not WT (excluding reactions which reduce wt growth below 90%): {}\n'.format(len(wt_essen), len(mut_essen), len((mut_essen-wt_essen)-wt_eff)))
keyrxn = mut_essen - wt_essen - wt_eff
for rxn in keyrxn:
    rxnid = list(rxn)[0]
    print('{} \t {}'.format(rxnid, model.reactions.get_by_id(rxnid).reaction))
```
Several electron sinks support growth. Notably, MTHFC deletion makes PFL essential.
## Conclusion for Phenotype 1
1. ACS was providing an acetyl-CoA source which is likely not present or relevant in C. therm. There is no evidence of this reaction in C. therm.
2. Sulfide secretion provided an electron sink. When high amounts of sulfate are provided in the medium, growth is observed. Otherwise we consider this pathway not to be relevant, and thus remove sulfide secretion.
Modification of these two features reduces growth rate below 20% of the wild type maximum, which is considered lethal.
# Phenotype 2: Do fumarate, sulfate, or ketoisovalerate recover growth in the mutant?
In the core model publication cited at the beginning, the addition of the following reactions in separate instances led the model to predict growth:
| ID | Formula | Genes |
|-------|-------------------------------------------------------|-------------------|
| FUM1 | FUM_ext = FUM . | |
| FUM2 | FUM + NADH = SUCC + NAD . | Clo1313_2640;3018 |
| FUM3 | SUCC = SUCC_ext. | |
| ISOV1 | AKIV_ext = AKIV . | |
| ISOV2 | AKIV + fdox + 2 NADPH = IBOH + CO2 + fdred + 2 NADP . | Clo1313_0382-383 |
| ISOV3 | IBOH = IBOH_ext . | |
| SULF1 | SO4_ext = SO4 . | |
| SULF2 | SO4 + fdred = SO3 + fdox . | Clo1313_0118-124 |
| SULF3 | SO3 + fdred = Sulfide + fdox . | Clo1313_0118-124 |
| SULF4 | Sulfide = Sulf_ext . | |
As previously noted, only sulfate and KIV addition recover growth (albeit not as in the wild type, but rather with low growth or a long lag phase)
## Fumarate
```
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
# allow fumarate input
tmodel.reactions.EX_fum_e.bounds = (-1000,0)
# Include reaction converting fumarate to succinate, which is not present in the GEM
FUM2 = cb.Reaction(id='FUM2')
tmodel.add_reaction(FUM2)
tmodel.reactions.FUM2.reaction = 'fum_c + nadh_c + h_c => succ_c + nad_c'
# allow succinate secretion
tmodel.reactions.EX_succ_e.bounds = (0,1000)
# tmodel.objective = 'FUM2' # check the reaction is not blocked
r = tmodel.optimize()
tmodel.summary()
mut_fum_gr = r.objective_value
```
In the model, adding fumarate recovers growth, consistent with the redox-imbalance hypothesis.
## Sulfate
Sulfate is essential for the model, since it is used to provide sulfur for cysteine biosynthesis, as demonstrated by the plot below.
However, we can simulate high concentrations of sulfate by enabling h2s secretion.
```
x = np.linspace(-10,0,20)
y = []
with model:
for l in np.nditer(x):
model.reactions.EX_so4_e.lower_bound = l
y.append(model.optimize().objective_value)
plt.scatter(x,y)
plt.xlabel('EX_so4_e (lower bound)')
plt.ylabel('Growth rate')
print(model.reactions.EX_h2s_e.bounds)
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
# allow sulfide secretion
tmodel.reactions.EX_h2s_e.bounds = (0,1000)
# tmodel.objective = 'FUM2' # check the reaction is not blocked
mut_sul_gr = tmodel.optimize().objective_value
tmodel.summary()
```
Indeed, SO4 uptake increases significantly, and the resulting H2S efflux enables growth.
## KIV
Here we enable the isobutanol pathway by supplying one of its intermediates.
```
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in mut_ko]
tmodel.reactions.EX_ibutoh_e.bounds = (0,1000)
sk = tmodel.add_boundary(tmodel.metabolites.get_by_id('3mob_c'))
mut_kiv_gr = tmodel.optimize().objective_value
tmodel.summary()
```
Again this fully recovers growth. Unlike fumarate, growth rate goes back to the wild-type level.
## Conclusion for phenotype 2:
The predictions are consistent with the core model, which accurately represented the experimental observations, with the exception of fumarate.
## Phenotype 3: Growth of LL1210
As shown below, the strain is able to grow, consistent with experimental observations.
```
# ll1210 mutant
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
r_wt = model.optimize()
wt_gr = r_wt.objective_value
tmodel.reactions.BIF.knock_out() #hydg
tmodel.reactions.H2ASE_syn.knock_out() #hydg
tmodel.reactions.PFL.knock_out()
tmodel.reactions.LDH_L.knock_out()
tmodel.reactions.PTAr.knock_out()
tmodel.reactions.ACKr.knock_out()
r = tmodel.optimize()
ll1210_gr = r.objective_value
tmodel.summary()
print('Growth rate of wt: {:.2f}, growth rate of Mut: {:.2f}, fraction: {:.2f}'.format(
r_wt.objective_value, r.objective_value, r.objective_value/r_wt.objective_value))
```
## Fractions of growth reduction for different mutants/strains
```
## Add other mutants:
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
tmodel.reactions.BIF.knock_out() #hydg
tmodel.reactions.H2ASE_syn.knock_out() #hydg
hydg_gr = tmodel.optimize().objective_value
tmodel.reactions.ECH.knock_out()
hydgech_gr = tmodel.optimize().objective_value
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
tmodel.reactions.BIF.knock_out() #hydg
tmodel.reactions.H2ASE_syn.knock_out() #hydg
tmodel.reactions.PTAr.knock_out()
tmodel.reactions.ACKr.knock_out()
hydgpta_gr = tmodel.optimize().objective_value
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
tmodel.reactions.LDH_L.knock_out()
ldh_gr = tmodel.optimize().objective_value
tmodel.reactions.PTAr.knock_out()
tmodel.reactions.ACKr.knock_out()
ldhpta_gr = tmodel.optimize().objective_value
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
tmodel.reactions.PTAr.knock_out()
tmodel.reactions.ACKr.knock_out()
pta_gr = tmodel.optimize().objective_value
import csv # imported here in case earlier cells were not run
with open('mutant_gr.csv', 'w') as f:
w = csv.writer(f, delimiter=',', lineterminator='\n')
mut = 'hydG-ech-pfl'
ll1210 = 'hydG-pfl-ldh-pta-ack'
w.writerow(['Strain', 'Medium', 'Fraction of WT growth rate'])
w.writerow(['hydG','MTC',hydg_gr/wt_gr])
w.writerow(['hydG-ech','MTC',hydgech_gr/wt_gr])
w.writerow(['hydG-pta-ack','MTC', hydgpta_gr/wt_gr])
w.writerow([mut,'MTC', mut_gr/wt_gr])
w.writerow([mut,'MTC+fumarate', mut_fum_gr/wt_gr])
w.writerow([mut,'MTC+sulfate', mut_sul_gr/wt_gr])
w.writerow([mut,'MTC+ketoisovalerate', mut_kiv_gr/wt_gr])
w.writerow([ll1210, 'MTC', ll1210_gr/wt_gr])
w.writerow(['ldh', 'MTC', ldh_gr/wt_gr])
w.writerow(['pta-ack','MTC',pta_gr/wt_gr])
w.writerow(['ldh-pta-ack','MTC',ldhpta_gr/wt_gr])
# pta-ack can still make significant amounts of acetate in-silico:
with model as tmodel:
[tmodel.reactions.get_by_id(rxn_id).knock_out() for rxn_id in deleted_rxns]
tmodel.reactions.LDH_L.knock_out()
tmodel.reactions.PTAr.knock_out()
tmodel.reactions.ACKr.knock_out()
tmodel.optimize()
tmodel.summary()
tmodel.metabolites.ac_c.summary()
```
```
import os
import cv2
import numpy as np
import argus
from argus import Model
from argus.callbacks import MonitorCheckpoint, EarlyStopping, LoggingToFile
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from src.dataset import SaltDataset
from src.transforms import SimpleDepthTransform, DepthTransform, SaltTransform
from src.argus_models import SaltProbModel, SaltMetaModel
from src import config
import matplotlib.pyplot as plt
%matplotlib inline
image_size = (128, 128)
val_folds = [0]
train_folds = [1, 2, 3, 4]
train_batch_size = 64
val_batch_size = 64
depth_trns = SimpleDepthTransform()
train_trns = SaltTransform(image_size, True, 'pad')
val_trns = SaltTransform(image_size, False, 'pad')
train_dataset = SaltDataset(config.TRAIN_FOLDS_PATH, train_folds, train_trns, depth_trns)
val_dataset = SaltDataset(config.TRAIN_FOLDS_PATH, val_folds, val_trns, depth_trns)
train_loader = DataLoader(train_dataset, batch_size=train_batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=val_batch_size, shuffle=False)
# Draw a list of images in a row
def draw(imgs):
n = len(imgs) # Number of images in a row
plt.figure(figsize=(7,n*7))
for i in range(n):
plt.subplot(1, n, i+1)
plt.axis('off')
plt.imshow(imgs[i])
plt.show()
n_images_to_draw = 3
for img, trg in train_loader:
for i in range(n_images_to_draw):
img_i = img[i, 0, :, :].numpy()
cumsum_i = img[i, 1, :, :].numpy()
trg_i = trg[i, 0, :, :].numpy()
draw([img_i, cumsum_i, trg_i])
break
```
# Train prob
```
params = {
'nn_module': ('DPNProbUnet', {
'num_classes': 1,
'num_channels': 3,
'encoder_name': 'dpn92',
'dropout': 0
}),
'loss': ('FbBceProbLoss', {
'fb_weight': 0.95,
'fb_beta': 2,
'bce_weight': 0.9,
'prob_weight': 0.85
}),
'prediction_transform': ('ProbOutputTransform', {
'segm_thresh': 0.5,
'prob_thresh': 0.5
}),
'optimizer': ('Adam', {'lr': 0.0001}),
'device': 'cuda'
}
model = SaltMetaModel(params)
callbacks = [
MonitorCheckpoint('/workdir/data/experiments/test_022', monitor='val_crop_iout', max_saves=3),
EarlyStopping(monitor='val_crop_iout', patience=50),
LoggingToFile('/workdir/data/experiments/test_022/log.txt')
]
model.fit(train_loader,
val_loader=val_loader,
max_epochs=1000,
callbacks=callbacks,
metrics=['crop_iout'])
from argus import load_model
experiment_name = 'test_025'
lr_steps = [
(300, 0.0001),
(300, 0.00003),
(300, 0.00001),
(300, 0.000003),
(1000, 0.0000001)
]
for i, (epochs, lr) in enumerate(lr_steps):
print(i, epochs, lr)
if not i:
model = SaltMetaModel(params)
else:
model = load_model(f'/workdir/data/experiments/{experiment_name}/model-last.pth')
callbacks = [
MonitorCheckpoint(f'/workdir/data/experiments/{experiment_name}', monitor='val_crop_iout', max_saves=2),
EarlyStopping(monitor='val_crop_iout', patience=50),
LoggingToFile(f'/workdir/data/experiments/{experiment_name}/log.txt')
]
model.set_lr(lr)
model.fit(train_loader,
val_loader=val_loader,
max_epochs=epochs,
callbacks=callbacks,
metrics=['crop_iout'])
```
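As a side note, the staged `(epochs, lr)` schedule above can be flattened into a per-epoch learning-rate list; a minimal stdlib-only sketch (in practice, early stopping with `patience=50` would cut each stage short):

```python
def expand_lr_steps(lr_steps):
    """Expand (epochs, lr) stages into one lr value per epoch."""
    per_epoch = []
    for epochs, lr in lr_steps:
        per_epoch.extend([lr] * epochs)
    return per_epoch

lr_steps = [(300, 1e-4), (300, 3e-5), (300, 1e-5), (300, 3e-6), (1000, 1e-7)]
schedule = expand_lr_steps(lr_steps)
print(len(schedule))  # 2200 scheduled epochs in total
```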
# Search Terms
This project starts with curated collections of terms, including ERP terms, and potential associations, such as cognitive and disease terms. Automated literature collection then collects information from papers using those terms, using [LISC](https://lisc-tools.github.io/).
Current analysis takes two forms:
- `Words` analyses: analyze text data from articles that discuss ERP-related research
- This approach collects text and metadata from papers, and builds data driven profiles for ERP components
- `Count` analyses: search for co-occurrences of terms, between ERPs and associated terms
- This approach looks for patterns based on how commonly terms occur together
This notebook introduces the terms that are used in the project.
```
from collections import Counter
# Import Base LISC object to load and check search terms
from lisc.objects.base import Base
from lisc.utils.io import load_txt_file
import seaborn as sns
sns.set_context('talk')
# Import custom project code
import sys
sys.path.append('../code')
from plts import plot_latencies
# Set the location of the terms
term_dir = '../terms/'
# Load a test object to check the terms
erps = Base()
```
## ERP Terms
First, we can check the list of search terms used to find articles about ERP components.
```
# Load erps and labels terms from file
erps.add_terms('erps.txt', directory=term_dir)
erps.add_labels('erp_labels.txt', directory=term_dir)
# Check the number of ERP terms
print('Number of ERP terms: {}'.format(erps.n_terms))
```
In the list below, the leftmost term is the label of the search term (not necessarily used as a search term itself); any terms to the right of the colon are the search terms that were used. Synonyms are separated by commas and were combined in searches with an OR operator.
```
# Check list of search terms for the ERP components
erps.check_terms()
```
### ERP Exclusion Terms
Some articles use our terms of interest with a different meaning (for example, the term 'P100' in reference to an antibody), so we use exclusion terms to filter out these unrelated papers.
These terms are added to the overall search term with a NOT operator, excluding articles that contain them.
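As a rough illustration of how synonyms and exclusions combine into a single search string (this mirrors the OR/NOT description above, not LISC's actual query construction):

```python
def build_query(synonyms, exclusions=()):
    """Combine synonym terms with OR and exclusion terms with NOT."""
    query = '(' + ' OR '.join('"{}"'.format(s) for s in synonyms) + ')'
    for term in exclusions:
        query += ' NOT "{}"'.format(term)
    return query

q = build_query(['P100', 'P1'], exclusions=['antibody', 'protein'])
print(q)  # ("P100" OR "P1") NOT "antibody" NOT "protein"
```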
```
# Add exclusion words
erps.add_terms('erps_exclude.txt', term_type='exclusions', directory=term_dir)
# Check the ERP exclusion terms used
erps.check_terms('exclusions')
```
### ERP Latencies
We also include an annotation of the typical latency of each ERP component.
```
# Load canonical latency information
labels = load_txt_file('erp_labels.txt', term_dir, split_elements=False)
latencies = load_txt_file('latencies.txt', term_dir, split_elements=False)
latency_dict = {label : latency.split(', ') for label, latency in zip(labels, latencies)}
# Extract the labelled polarities and latencies for each ERP
polarities = [el[0] for el in latency_dict.values()]
latencies = [int(el[1]) for el in latency_dict.values()]
# Check the count of polarities
polarity_counts = Counter(polarities)
print(polarity_counts)
# Plot the ERP latencies
plot_latencies(polarities, latencies)
# Print the ERP latency for each component
print('Typical ERP latency:')
for label, lat in zip(labels, latencies):
print(' {:s}\t:\t{:4d}'.format(label, lat))
```
## Association Terms
As well as search terms for ERP components, we collected lists of potential association terms.
Groups of association terms include:
- cognitive terms
- disorder-related terms
### Cognitive Terms
First, we curated a list of cognitive-related association terms, to investigate cognition related investigations using ERPs.
```
# Load cognitive terms from file
cogs = Base()
cogs.add_terms('cognitive.txt', directory=term_dir)
# Check the number of ERP terms
print('Number of cognitive terms: {}'.format(cogs.n_terms))
# Check the cognitive terms used
cogs.check_terms()
```
### Disorder Related Terms
Finally, we curated a list of disorder-related terms to search for clinically-related applications of ERP analyses.
```
# Load the disorder related terms from file
disease = Base()
disease.add_terms('disorders.txt', directory=term_dir)
# Check the number of ERP terms
print('Number of disorder terms: {}'.format(disease.n_terms))
# Check the disease terms
disease.check_terms()
```
```
import pandas as pd
import requests
import tweepy
import yaml
import os
import json
import numpy as np
```
# Read PitchBook data in
## Start with companies that HAVE received VC funding. These are labeled with VC investment = 1
```
vc_general_info_df = pd.read_excel("../data/raw/CA_VC_PitchBook/Company_General_Information.xlsx",header=6)
vc_general_info_df.head(2)
# interesting columns= Company ID (primary key), Description, Company Name, HQ Post Code,
#Primary Industry Code, Primary Contact, Year Founded, Active Investors
vc_general_info_df.info()
vc_last_financing_df = pd.read_excel("../data/raw/CA_VC_PitchBook/Last_Financing_Details.xlsx",header=6)
vc_last_financing_df .head(2)
# interesting columns = Company ID ( primary key), Company Name, Growth Rate, Size Multiple, last financing date,
# last financing Size, Last financing valuation, Last Financing Deal Type 2
# Note : Only want series A or later, filter OUT the seed rounds
vc_last_financing_df.info()
vc_company_financials_df = pd.read_excel("../data/raw/CA_VC_PitchBook/Public_Company_Financials.xlsx",header=6)
vc_company_financials_df.head(2)
# Interesting columns are NOTHING
vc_company_financials_df.info()
vc_social_web_df = pd.read_excel("../data/raw/CA_VC_PitchBook/Social_and_Web_Presence.xlsx",header=6)
vc_social_web_df.head(2)
# interesting columns = company id (primary key), company name, growth rate, size multiple, majestic referring domains
# facebook likes, Tiwtter followers, Employees, Total raised
vc_social_web_df.tail()
```
# Join these four dataframes into one
```
vc_general_info_colDrop_df = vc_general_info_df[["Company ID", "Description", "Company Name", "HQ Post Code", "Primary Industry Code",
"Primary Contact", "Year Founded", "Active Investors","HQ Location"]]
vc_last_financing_colDrop_df =vc_last_financing_df[["Company ID", "Growth Rate", "Size Multiple",
"Last Financing Date","Last Financing Size","Last Financing Valuation",
"Last Financing Deal Type 2 "]]
vc_social_web_colDrop_df =vc_social_web_df [["Company ID", "Growth Rate",
"Size Multiple", "Majestic Referring Domains",
"Facebook Likes", "Twitter Followers", "Employees", "Total Raised"]]
final_vc_df = vc_general_info_colDrop_df.merge(vc_last_financing_colDrop_df, on='Company ID').merge(vc_social_web_colDrop_df,
on='Company ID')
final_vc_df.info()
final_vc_df.tail(20)
final_vc_df.drop(['Growth Rate_y', 'Size Multiple_y'], axis=1, inplace=True)
final_vc_df.rename(columns={'Growth Rate_x':'Growth Rate',"Size Multiple_x":'Size Multiple',
"Company Name_x":"Company Name"},inplace=True) # rename cols
final_vc_df.head(2)
```
### Filter out the seed rounds
- Only want Series A
```
final_vc_df['Last Financing Deal Type 2 '].unique()
excluded_types = ['Seed', 'Angel', 'Series B', 'Series B1', 'Series B2', 'Series B3',
'Series C', 'Series C1', 'Series C3', 'Series CC', 'Series D',
'Series E', 'Series F', 'Series G', 'Series 2', 'Series 3']
final_vc_financeTypeFilter_df = final_vc_df.loc[
~final_vc_df['Last Financing Deal Type 2 '].isin(excluded_types), :]
final_vc_financeTypeFilter_df.info()
final_vc_financeTypeFilter_df.head(2)
```
# Drop companies missing the zip code, year founded, and primary contact
- Can't impute this
```
final_vc_dropFinanceZipYear_df = final_vc_financeTypeFilter_df.loc[
(final_vc_financeTypeFilter_df['HQ Post Code'].isnull()==False) &
(final_vc_financeTypeFilter_df['Year Founded'].isnull()==False) &
(final_vc_financeTypeFilter_df['Primary Contact'].isnull()==False),: ]
final_vc_dropFinanceZipYear_df .info()
final_vc_dropFinanceZipYear_df .describe()
```
# Impute missing values
- Growth Rate: impute median
- Size Multiple: impute median
- Last Financing Date: don't really need this (only needed for angel companies)
- Last Financing Size: impute median
- Last Financing Valuation: drop this (too many nulls)
- Majestic Referring Domains: impute median
- Facebook Likes: impute median
- Twitter Followers: impute median
- Employees: impute median
- Total Raised: impute median
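The cells below do this with pandas, but the rule itself is simple; a stdlib-only sketch of median imputation for a single column (the values are made up):

```python
from statistics import median

def impute_median(column):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in column if v is not None]
    med = median(observed)
    return [med if v is None else v for v in column]

likes = [10, None, 30, 50, None]
print(impute_median(likes))  # [10, 30, 30, 50, 30]
```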
```
# first drop last financing valuation (not enough data)
final_vc_dropFinanceZipYear_df.drop(['Last Financing Valuation'],axis=1,inplace=True)
median_values={}
for row in final_vc_dropFinanceZipYear_df.describe(): # get median values
if row =='Last Financing Valuation': # don't have enough data for this
pass
else:
median_values[row]=final_vc_dropFinanceZipYear_df.describe()[row]["50%"]
median_values
imputed_final_df = final_vc_dropFinanceZipYear_df.copy()
for key in median_values: # update the nan values with the median
updated_col = final_vc_dropFinanceZipYear_df.loc[:,key].copy()
updated_col = updated_col.fillna(median_values[key])
imputed_final_df.loc[:,key] = updated_col
imputed_final_df.head(2)
```
### Add a 1 for the column VC_Invested
```
imputed_final_df['VC_invested']=1
imputed_final_df.head(20)
```
# Tweepy to find usernames of the people
```
def username_search(name, company, state, c = 20):
"""Run a search on twitter for the given name. Returns the first username (should be the most relevant).
Looks to match a state location with the state location of the company.
First try searching for the person's name + company. If that does not work, try just searching for the
person's name.
Count is for the number of results to return; defaults to twenty. If the user is not in the first twenty
results, it is probably not the correct user."""
state = state.lower()
credentials = yaml.safe_load(open(os.path.expanduser('~/.ssh/api_credentials.yml')))
auth = tweepy.OAuthHandler(credentials['twitter']['consumer_key'], credentials['twitter']['consumer_secret'],)
auth.set_access_token(credentials['twitter']['token'], credentials['twitter']['token_secret'])
api = tweepy.API(auth,wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
try: # search the name and the company
tweets = api.search_users(q=str(name)+" "+str(company), count=c)
test = tweets[0].screen_name # raises IndexError when there are no results, triggering the name-only fallback
screen_n = None
for result in tweets:
location = result.location.lower().split(" ") # see if the location is in the companies state
if state in location:
return result.screen_name
if screen_n == None:
return 'NaN'
else:
return screen_n
except Exception as e: # try just the name
try:
tweets = api.search_users(q=name, count = c)
screen_n = None
for result in tweets:
if state in result.location.lower().split(" "):
return result.screen_name
if screen_n == None:
return "NaN"
else:
return screen_n
except Exception as e:
return "NaN"
```
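The state check inside `username_search` compares whitespace-separated tokens of the profile location; the isolated rule, sketched:

```python
def state_matches(state, profile_location):
    """True if the state abbreviation appears as a whitespace-separated
    token of the (lowercased) Twitter profile location."""
    return state.lower() in profile_location.lower().split(" ")

print(state_matches('CA', 'San Francisco, CA'))        # True
print(state_matches('ca', 'hickory, north carolina'))  # False
```

Unlike a plain substring test, token matching avoids false positives such as 'ca' inside 'carolina'.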
### Test the accuracy of this api for finding usernames
```
test_names = ['Jonathan Hilgart None NJ','Tim Cook Apple CA', "Smita Bakshi Zyante CA","Rob Fuggetta zuberance CA",
"Charles Hogan tranzlogic CA",
"Duke Chung travelbank CA", "Jake Fields treeline CA","Eric Min Zwift CA","Dominic Lewis Transifex CA",
"Dimitris Glezos Transifex CA"]
names_username = {"Jonathan Hilgart":"topofthehil",'Tim Cook':"tim_cook", "Smita Bakshi":"sbaksh",
"Rob Fuggetta":"robfuggetta", "Charles Hogan":"Tranzlogic",
"Duke Chung":"Duke_Chung", "Jake Fields":"JakeFields","Eric Min":"werkdodger",
"Dominic Lewis":"domwlewis", "Dimitris Glezos":"glezos"}
test_search= {}
for name in test_names:
first,last,company,state = name.split(" ")
result = username_search(first+" "+last,company,state)
print(result)
test_search[name]=result
username_intersection = set(test_search.values()).intersection(set(names_username.values()))
len(username_intersection)/len(names_username.keys())
"ca" in "hickory, north carolina usa location"
test_search
```
#### Decent! About 50% accuracy in finding Twitter handles
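Note that the set-intersection check above ignores which name produced which handle; a stricter, paired accuracy could be computed like this (the names here are illustrative):

```python
def paired_accuracy(expected, found):
    """Fraction of names whose found handle matches the expected one."""
    hits = sum(1 for name, handle in expected.items()
               if found.get(name) == handle)
    return hits / len(expected)

expected = {'Tim Cook': 'tim_cook', 'Eric Min': 'werkdodger'}
found = {'Tim Cook': 'tim_cook', 'Eric Min': 'NaN'}
print(paired_accuracy(expected, found))  # 0.5
```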
### Add the Twitter username to the pandas df for the VC dataframe
```
twitter_usernames_vc_df = []
for idx,row in enumerate(imputed_final_df.iterrows()):
location = row[1]['HQ Location'].split(",")[1].strip(" ")
company = row[1]['Company Name']
founder = row[1]['Primary Contact']
twitter_usernames_vc_df.append(username_search(founder,company, location ))
if idx%100 ==0:
print(f"Finished {idx/len(imputed_final_df)}")
imputed_final_df['Twitter_Username'] = twitter_usernames_vc_df
imputed_final_df.tail()
```
## Drop NaN Usernames
- Also, drop last financing Date and last financing deal type 2 (don't need for the VC round)
```
final_acce_incub_df = imputed_final_df[
(imputed_final_df.Twitter_Username!='NaN') ]
final_acce_incub_df .head()
final_acce_incub_df .info()
final_acce_incub_df.to_csv("../data/processed/PitchBook_CA_VCInvest=1.csv")
t = pd.read_csv("../data/processed/PitchBook_CA_VCInvest=1.csv")
final_acce_incub_df.tail()
# Manual check of the returned handles: 0, 1, 1, 1, 0 (amir), 1, 1, 1, 0.5, 0, 1
# => 7.5/11, roughly 68% accuracy
```
<a href="https://colab.research.google.com/github/seopbo/nlp_tutorials/blob/main/single_text_classification_(nsmc)_BERT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Single text classification - BERT
- We use `klue/bert-base` as the pre-trained language model.
- https://huggingface.co/klue/bert-base
- We use `nsmc` as the example dataset for the single text classification task.
- https://huggingface.co/datasets/nsmc
## Setup
You can check which GPU has been allocated by running the code cell below.
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
```
Running the code cell below installs and loads the libraries needed for this notebook.
```
!pip install torch
!pip install transformers
!pip install datasets
!pip install -U scikit-learn
import torch
import transformers
import datasets
```
## Preprocess data
1. Load the subword tokenizer used by `klue/bert-base`.
2. Load `nsmc` using the `datasets` library.
3. Using the subword tokenizer from step 1, transform the `nsmc` data into training examples suitable for single text classification.
- Build `[CLS] tok 1 ... tok N [SEP]` and transform it into a list of integers.
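A model-free sketch of that wrapping step, using a toy vocabulary (the real tokenizer maps subwords, not whole words, and its ids differ):

```python
# Toy vocabulary; real BERT vocabularies map subwords, not whole words.
vocab = {'[CLS]': 2, '[SEP]': 3, '[UNK]': 1, 'good': 7, 'movie': 8}

def encode(tokens, vocab):
    """Wrap tokens with [CLS]/[SEP] and map them to integer ids."""
    wrapped = ['[CLS]'] + tokens + ['[SEP]']
    return [vocab.get(t, vocab['[UNK]']) for t in wrapped]

print(encode(['good', 'movie'], vocab))  # [2, 7, 8, 3]
```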
Load `nsmc` and create `train_ds`, `valid_ds`, and `test_ds`.
```
from datasets import load_dataset
cs = load_dataset("nsmc", split="train")
cs = cs.train_test_split(0.1)
test_cs = load_dataset("nsmc", split="test")
train_cs = cs["train"]
valid_cs = cs["test"]
```
We define the transform function and apply it.
```
from transformers import AutoTokenizer, AutoConfig
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
config = AutoConfig.from_pretrained("klue/bert-base")
print(tokenizer.__class__)
print(config.__class__)
from typing import Union, List, Dict
def transform(sentences: Union[str, List[str]], tokenizer) -> Dict[str, List[List[int]]]:
if isinstance(sentences, str):
sentences = [sentences]
return tokenizer(text=sentences, add_special_tokens=True, padding=False, truncation=False)
samples = train_cs[:2]
transformed_samples = transform(samples["document"], tokenizer)
print(samples)
print(transformed_samples)
train_ds = train_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
valid_ds = valid_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
test_ds = test_cs.map(lambda data: transform(data["document"], tokenizer), remove_columns=["id", "document"], batched=True).rename_column("label", "labels")
```
## Prepare model
We load `klue/bert-base` to perform single text classification.
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("klue/bert-base", num_labels=2)
print(model.__class__)
```
## Train model
We train using the `Trainer` class.
- https://huggingface.co/transformers/custom_datasets.html?highlight=trainer#fine-tuning-with-trainer
```
import numpy as np
from transformers.data.data_collator import DataCollatorWithPadding
from sklearn.metrics import accuracy_score
def compute_metrics(p):
pred, labels = p
pred = np.argmax(pred, axis=1)
accuracy = accuracy_score(y_true=labels, y_pred=pred)
return {"accuracy": accuracy}
batchify = DataCollatorWithPadding(
tokenizer=tokenizer,
padding="longest",
)
# check the mini-batch structure
batchify(train_ds[:2])
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='./results',
evaluation_strategy="steps",
eval_steps=1000,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
learning_rate=1e-4,
weight_decay=0.01,
adam_beta1=.9,
adam_beta2=.95,
adam_epsilon=1e-8,
max_grad_norm=1.,
num_train_epochs=2,
lr_scheduler_type="linear",
warmup_steps=100,
logging_dir='./logs',
logging_strategy="steps",
logging_first_step=True,
logging_steps=100,
save_strategy="epoch",
seed=42,
dataloader_drop_last=False,
dataloader_num_workers=2
)
trainer = Trainer(
args=training_args,
data_collator=batchify,
model=model,
train_dataset=train_ds,
eval_dataset=valid_ds,
compute_metrics=compute_metrics
)
trainer.train()
trainer.evaluate(test_ds)
```
```
import utilities as utils
# This code is used to scale to processing numerous datasets
data_path_1: str = '../../../Data/phase1/'
data_set_1: list = [ 'Darknet_reduced_features.csv' ]
data_set: list = data_set_1
file_path_1 = utils.get_file_path(data_path_1)
file_set: list = list(map(file_path_1, data_set_1))
current_job: int = 0
utils.data_set = data_set
utils.file_set = file_set
print(f'We will be cleaning {len(file_set)} files:')
utils.pretty(file_set)
```
## Label Analysis
Now we load the data and separate the dataset by label, giving us a traffic dataset and an application dataset. We also want to investigate how merging the Non-Tor and NonVPN labels affects the clustering, so we rename the samples under these labels as Regular and produce a second traffic dataset from them.
```
dataset : dict = utils.examine_dataset(1)
dataset = utils.package_data_for_inspection_with_label(
utils.reduce_feature_to_values(dataset['Dataset'], 'Traffic Type', ['Tor', 'VPN', 'Non-Tor'] ), 'Dataset_2')
dataset['Dataset'] = utils.rename_values_in_column(dataset, [('Traffic Type', {'Non-Tor': 'Regular'})])
traffic_dataset : dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(dataset, ['Application Type']), 'Traffic_Dataset_2_Tor_VPN_Regular')
application_dataset : dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(dataset, ['Traffic Type']), 'Application_Dataset_1')
```
# SMOTE Prototype
Here we fit a first SMOTE model on the whole traffic dataset. This gives us some data to experiment with; the actual train/test splits are done later.
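SMOTE's core step is linear interpolation between a minority sample and one of its nearest neighbours; a minimal stdlib sketch of that step (not imbalanced-learn's implementation):

```python
import random

def smote_point(x, neighbor, rng):
    """Synthesize a point on the segment between x and a neighbor."""
    gap = rng.random()  # uniform in [0, 1)
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

rng = random.Random(0)
x, neighbor = [1.0, 2.0], [3.0, 6.0]
synthetic = smote_point(x, neighbor, rng)
print(synthetic)  # lies on the segment between x and neighbor
```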
```
def create_and_visualize_smote(df: utils.pd.DataFrame, target_label: str, ratio_dict: dict) -> list:
'''
Function creates and visualizes SMOTE with the given ratio
Parameters:
df: dataframe to be used for SMOTE
target_label: the label used for prediction
ratio_dict: dictionary of the ratio to be used for SMOTE for each class
'''
X = df.drop(target_label, axis=1)
y = df[target_label]
model = utils.SMOTE(sampling_strategy=ratio_dict)
X, y = model.fit_resample(X, y)
counter = utils.Counter(y)
for k,v in counter.items():
per = v / len(y) * 100
print('Class=%s, n=%d (%.3f%%)' % (k, v, per))
# plot the distribution
utils.pyplot.bar(counter.keys(), counter.values())
utils.pyplot.show()
return utils.pd.concat([X, utils.pd.DataFrame(y)], axis=1)
def get_largest_class_sample_size(df: utils.pd.DataFrame, column_name: str) -> int:
'''
Function returns the largest class sample size
'''
return df.groupby(column_name).size().max()
largest_class = get_largest_class_sample_size(traffic_dataset['Dataset'], 'Traffic Type')
ratio_dict = {"VPN": largest_class, "Regular": largest_class, "Tor": largest_class}
fake_df_traffic = create_and_visualize_smote(traffic_dataset['Dataset'], 'Traffic Type', ratio_dict)
largest_class = get_largest_class_sample_size(application_dataset['Dataset'], 'Application Type')
ratio_dict = {"audio-streaming": largest_class, "browsing": largest_class, "chat": largest_class, "email": largest_class, "file-transfer": largest_class, "p2p": largest_class, "video-streaming": largest_class, "voip": largest_class}
fake_df_application = create_and_visualize_smote(application_dataset['Dataset'], 'Application Type', ratio_dict)
fake_traffic_dataset: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_traffic), [] ),
'smote_balanced_traffic_dataset_labels_equal'
)
fake_application_dataset: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application), [] ),
'smote_balanced_application_dataset_labels_equal'
)
fake_traffic_dataset['Dataset'].to_csv('./synthetic/smote_balanced_traffic_dataset_labels_equal.csv', index=False)
fake_application_dataset['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_equal.csv', index=False)
```
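The `ratio_dict` construction above (every class upsampled to the size of the largest class) can be sketched without pandas:

```python
from collections import Counter

def equalizing_strategy(labels):
    """Map every class to the size of the largest class, i.e. the
    sampling_strategy dict passed to SMOTE in the cells above."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target for cls in counts}

labels = ['Tor'] * 3 + ['VPN'] * 5 + ['Regular'] * 10
print(equalizing_strategy(labels))  # {'Tor': 10, 'VPN': 10, 'Regular': 10}
```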
# Traffic Type Synthetic Data Generation
### We will begin with 20,000 of each traffic type
```
def create_and_filter_new_model(model: utils.Model_data, column_name: str, value: str, prune_column: str = None) -> utils.Model_data:
'''
Function returns a new model with the given column name and value
'''
new_model : dict = utils.copy.deepcopy(model)
new_model['Dataset'] = model["Dataset"][model["Dataset"][column_name] == value]
if prune_column is not None:
new_model['Dataset'] = utils.prune_dataset(new_model, [prune_column])
return new_model
def downsample(df: utils.pd.DataFrame, column_name: str, size : int) -> utils.pd.DataFrame:
'''
Function returns a dataframe with `size` rows sampled from each group of `column_name`
'''
return df.groupby(column_name, group_keys=False).apply(lambda df: df.sample(size))
vpn_experiment_2 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'VPN')
tor_experiment_2 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Tor')
regular_experiment_2 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Regular')
regular_experiment_2['Dataset'] = downsample(regular_experiment_2['Dataset'], 'Traffic Type', 20000)
vpn_experiment_2['Dataset'] = downsample(vpn_experiment_2['Dataset'], 'Traffic Type', 20000)
ratio_dict = {"VPN": 20000, "Regular": 20000, "Tor": 20000}
fake_df_traffic_2 = create_and_visualize_smote(utils.pd.concat([regular_experiment_2['Dataset'],vpn_experiment_2['Dataset'],tor_experiment_2['Dataset']]), 'Traffic Type', ratio_dict)
fake_traffic_dataset_2: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_traffic_2), [] ),
'Fake_Traffic_Dataset_2_20_15_5'
)
fake_traffic_dataset_2['Dataset'].to_csv('./synthetic/smote_balanced_traffic_dataset_20_20_20.csv', index=False)
```
### Next is a 20,000 Regular, 15,000 VPN, and 5,000 Tor split
```
vpn_experiment_3 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'VPN')
tor_experiment_3 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Tor')
regular_experiment_3 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Regular')
regular_experiment_3['Dataset'] = downsample(regular_experiment_3['Dataset'], 'Traffic Type', 20000)
vpn_experiment_3['Dataset'] = downsample(vpn_experiment_3['Dataset'], 'Traffic Type', 15000)
ratio_dict = {"VPN": 15000, "Regular": 20000, "Tor": 5000}
fake_df_traffic_3 = create_and_visualize_smote(utils.pd.concat([regular_experiment_3['Dataset'],vpn_experiment_3['Dataset'],tor_experiment_3['Dataset']]), 'Traffic Type', ratio_dict)
fake_traffic_dataset_3: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_traffic_3), [] ),
'Fake_Traffic_Dataset_3_20_15_5'
)
fake_traffic_dataset_3['Dataset'].to_csv('./synthetic/smote_balanced_traffic_dataset_labels_20_15_5.csv', index=False)
```
### Finally, a 30,000 Regular, 20,000 VPN, and 10,000 Tor split
```
vpn_experiment_4 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'VPN')
tor_experiment_4 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Tor')
regular_experiment_4 : utils.Model_data = create_and_filter_new_model(traffic_dataset, 'Traffic Type', 'Regular')
regular_experiment_4['Dataset'] = downsample(regular_experiment_4['Dataset'], 'Traffic Type', 30000)
vpn_experiment_4['Dataset'] = downsample(vpn_experiment_4['Dataset'], 'Traffic Type', 20000)
ratio_dict = {"VPN": 20000, "Regular": 30000, "Tor": 10000}
fake_df_traffic_4 = create_and_visualize_smote(utils.pd.concat([regular_experiment_4['Dataset'],vpn_experiment_4['Dataset'],tor_experiment_4['Dataset']]), 'Traffic Type', ratio_dict)
fake_traffic_dataset_4: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_traffic_4), [] ),
'Fake_Traffic_Dataset_4_30_20_10'
)
fake_traffic_dataset_4['Dataset'].to_csv('./synthetic/smote_balanced_traffic_dataset_labels_30_20_10.csv', index=False)
```
# Application Type Synthetic Data Generation
```
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
def downsample_and_run_smote(dataset : dict, ratio_dict: dict, target_label: str) -> utils.pd.DataFrame:
models : list = []
    df : utils.pd.DataFrame = utils.pd.DataFrame()
for key, value in ratio_dict.items():
models.append(create_and_filter_new_model(dataset, target_label, key))
if len(models[-1]['Dataset']) > value:
models[-1]['Dataset'] = downsample(models[-1]['Dataset'], target_label, value)
        df = utils.pd.concat([df, models[-1]['Dataset']])
return create_and_visualize_smote(df, target_label, ratio_dict)
```
### All application types with 10,000 samples
```
fake_df_application_2 = downsample_and_run_smote(application_dataset, {"audio-streaming": 10000, "browsing": 10000, "chat": 10000, "file-transfer": 10000, "email": 10000,
"p2p": 10000, "video-streaming": 10000, "voip": 10000}, 'Application Type')
fake_application_dataset_2: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_2), [] ),
'Fake_Application_Dataset_2_10_10_10'
)
fake_application_dataset_2['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_10_10_10.csv', index=False)
```
### All application types with 15,000 samples
```
fake_df_application_3 = downsample_and_run_smote(application_dataset, {"audio-streaming": 15000, "browsing": 15000, "chat": 15000, "file-transfer": 15000, "email": 15000,
"p2p": 15000, "video-streaming": 15000, "voip": 15000}, 'Application Type')
fake_application_dataset_3: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_3), [] ),
'Fake_Application_Dataset_3_15_15_15'
)
fake_application_dataset_3['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_15_15_15.csv', index=False)
```
### All application types with 20,000 samples
```
fake_df_application_4 = downsample_and_run_smote(application_dataset, {"audio-streaming": 20000, "browsing": 20000, "chat": 20000, "file-transfer": 20000, "email": 20000,
"p2p": 20000, "video-streaming": 20000, "voip": 20000}, 'Application Type')
fake_application_dataset_4: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_4), [] ),
'Fake_Application_Dataset_4_20_20_20'
)
fake_application_dataset_4['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_20_20_20.csv', index=False)
```
### All application types with 25,000 samples
```
fake_df_application_5 = downsample_and_run_smote(application_dataset, {"audio-streaming": 25000, "browsing": 25000, "chat": 25000, "file-transfer": 25000, "email": 25000,
"p2p": 25000, "video-streaming": 25000, "voip": 25000}, 'Application Type')
fake_application_dataset_5: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_5), [] ),
'Fake_Application_Dataset_5_25_25_25'
)
fake_application_dataset_5['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_25_25_25.csv', index=False)
```
### All application types with 30,000 samples
```
fake_df_application_6 = downsample_and_run_smote(application_dataset, {"audio-streaming": 30000, "browsing": 30000, "chat": 30000, "file-transfer": 30000, "email": 30000,
"p2p": 30000, "video-streaming": 30000, "voip": 30000}, 'Application Type')
fake_application_dataset_6: dict = utils.package_data_for_inspection_with_label(
utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_6), [] ),
'Fake_Application_Dataset_6_30_30_30'
)
fake_application_dataset_6['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_30_30_30.csv', index=False)
```
### Proportional Splits
```
fake_df_application_7 = downsample_and_run_smote(application_dataset, {"audio-streaming": 9000, "browsing": 16000, "chat": 10000, "file-transfer": 10000, "email": 10000,
"p2p": 20000, "video-streaming": 10000, "voip": 5000}, 'Application Type')
fake_application_dataset_7: dict = utils.package_data_for_inspection_with_label(
    utils.prune_dataset(utils.package_data_for_inspection(fake_df_application_7), [] ),
'Fake_Application_Dataset_7_proportional'
)
fake_application_dataset_7['Dataset'].to_csv('./synthetic/smote_balanced_application_dataset_labels_proportional.csv', index=False)
print(f'Last Execution: {utils.datetime.datetime.now()}')
assert False, 'Nothing after this point is included in the study'
```
# Day 18
## Part I
OK, today's puzzle is about token parsing and stack handling. First we need to parse the whole expression; a token can be one of three things: a number, an operator (plus or multiply), or a left/right parenthesis. For code clarity, enums are used to represent both the type of each parsed token and the subsequent calculation state, of which there are only two: addition and multiplication.
As an aside, writing enums in Python is a much worse experience than in Rust: they cannot carry values, which forces the token-parsing function below to return a 3-tuple instead. Sigh.
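For comparison, here is a hedged sketch of how a value-carrying token could look in Python using a `dataclass` alongside an enum (my own illustration; the actual solution below sticks with the 3-tuple):

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DIGIT = 'digit'
    OPERATOR = 'operator'
    PARENTHESES = 'parentheses'

@dataclass
class Token:
    kind: Kind
    value: object  # int for digits, str for operators/parentheses

tok = Token(Kind.DIGIT, 42)
print(tok)
```

This bundles the type tag and the payload into one object, which is roughly what Rust enum variants give you for free.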
```
from enum import Enum
TokenType = Enum('TokenType', ('digit', 'operator', 'parentheses'))
CalculateMode = Enum('CalculateMode', ('add', 'multiply'))
```
Now for the first real challenge: parsing the whole expression. We read and analyze it character by character. Left parentheses, plus signs, and multiplication signs are straightforward: return the symbol, its type, and the remaining string. A space has two cases: if it follows a number, return that number; otherwise ignore it and continue. Although every number in the input is a single digit, for generality a buffer is still used to accumulate digits and return them as the numeric value. Finally, a right parenthesis also has two cases: if it follows a number, return the number first without advancing the parsing position, so that the right parenthesis itself is returned on the next call:
```
from typing import Any, Tuple
import string
def next_token(data: str) -> Tuple[Any, TokenType, str]:
data = data.rstrip()
buf = ''
for i, c in enumerate(data):
if c == '(':
return c, TokenType.parentheses, data[i+1:]
if c == ')':
if buf:
return int(buf), TokenType.digit, data[i:]
else:
return c, TokenType.parentheses, data[i+1:]
if c == '+' or c == '*':
return c, TokenType.operator, data[i+1:]
if c == ' ':
if buf:
return int(buf), TokenType.digit, data[i+1:]
else:
continue
if c in string.digits:
buf += c
if buf:
return int(buf), TokenType.digit, ''
```
Run a quick test to see how the parser behaves:
```
d = '1 + (2 * 3) + (4 * (5 + 6))'
while d:
t, ttype, remain = next_token(d)
print(f'{t:<2}, {ttype:^40} {remain}')
d = remain
```
Looks good. Next, the evaluation logic for a single Part I expression. Recursion is used to evaluate the parts inside parentheses; a stack would also work, but the recursive logic is clearly simpler:
```
def part1_do_math(data: str) -> Tuple[int, str]:
result = 0
    # Initial calculation state: addition
mode = CalculateMode.add
while data:
token, token_type, remain = next_token(data)
        # Right parenthesis: return the result and the remaining string
if token == ')':
return result, remain
        # Left parenthesis: recursively evaluate the parenthesized part, then
        # multiply or add it to the running result depending on the state
if token == '(':
ret, remain = part1_do_math(remain)
if mode == CalculateMode.multiply:
result *= ret
else:
result += ret
        # Digit: multiply or add to the running result depending on the state
if token_type == TokenType.digit:
if mode == CalculateMode.multiply:
result *= token
else:
result += token
        # Plus or multiply sign: update the calculation state
if token == '+':
mode = CalculateMode.add
if token == '*':
mode = CalculateMode.multiply
        # Continue evaluating with the remaining string
data = remain
return result
```
Unit tests:
```
assert(part1_do_math('1 + 2 * 3 + 4 * 5 + 6') == 71)
assert(part1_do_math('1 + (2 * 3) + (4 * (5 + 6))') == 51)
assert(part1_do_math('2 * 3 + (4 * 5)') == 26)
assert(part1_do_math('5 + (8 * 3 + 9 + 3 * 4 * 3)') == 437)
assert(part1_do_math('5 * 9 * (7 * 3 * 3 + 9 * 3 + (8 + 6 * 4))') == 12240)
assert(part1_do_math('((2 + 4 * 9) * (6 + 9 * 8 + 6) + 6) + 2 + 4 * 2') == 13632)
```
A function to read the input file:
```
from typing import List
def read_input(input_file: str) -> List[str]:
with open(input_file) as fn:
return fn.readlines()
```
Sum up the results of all expressions for Part I:
```
def part1_solution(datas: List[str]) -> int:
return sum(part1_do_math(data) for data in datas)
```
Get the Part I answer:
```
datas = read_input('input.txt')
part1_solution(datas)
```
## Part II
Part II calls for a stack algorithm (it could also be done purely with recursion, but here I implement the stack myself with a list). On multiplication, push the operand onto the stack; on addition, pop the top, add, and push the sum back. To produce the final result, simply multiply together all the numbers left on the stack. numpy is used here only for the `prod` ufunc; `reduce` would work just as well:
```
import numpy as np
def do_math_part2(data: str) -> Tuple[int, str]:
mode = CalculateMode.add
stack = []
while data:
token, token_type, remain = next_token(data)
        # Right parenthesis: return the product of the stack contents and the remaining string
if token == ')':
return np.prod(stack), remain
        # Left parenthesis: recursively evaluate; in multiply mode push the
        # result, in add mode pop, add, and push the sum back
if token == '(':
ret, remain = do_math_part2(remain)
if mode == CalculateMode.multiply:
stack.append(ret)
else:
x = stack.pop() if stack else 0
stack.append(x + ret)
        # Digit: push in multiply mode; in add mode pop, add the digit, push back
if token_type == TokenType.digit:
if mode == CalculateMode.multiply:
stack.append(token)
else:
x = stack.pop() if stack else 0
stack.append(x + token)
        # Update the calculation state
if token == '+':
mode = CalculateMode.add
if token == '*':
mode = CalculateMode.multiply
data = remain
return np.prod(stack)
```
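To see why the stack trick gives addition higher precedence than multiplication, here is a tiny standalone sketch (my own illustration, independent of the tokenizer above) on the flat expression `2 * 3 + 4`:

```python
from functools import reduce

def eval_flat(tokens):
    # Evaluate a flat (parenthesis-free) token list where '+' binds
    # tighter than '*', using the same push/pop-and-add stack idea.
    stack, mode = [], 'add'
    for tok in tokens:
        if tok == '+':
            mode = 'add'
        elif tok == '*':
            mode = 'mul'
        elif mode == 'mul':
            stack.append(tok)
        else:
            stack.append((stack.pop() if stack else 0) + tok)
    # Everything left on the stack is a multiplication operand.
    return reduce(lambda a, b: a * b, stack, 1)

print(eval_flat([2, '*', 3, '+', 4]))  # 2 * (3 + 4) = 14
```

The `3 + 4` is folded on the stack before the deferred multiplication by 2 ever happens, which is exactly the Part II precedence rule.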
Unit tests:
```
assert(do_math_part2('1 + 2 * 3 + 4 * 5 + 6') == 231)
assert(do_math_part2('1 + (2 * 3) + (4 * (5 + 6))') == 51)
assert(do_math_part2('2 * 3 + (4 * 5)') == 46)
assert(do_math_part2('5 + (8 * 3 + 9 + 3 * 4 * 3)') == 1445)
assert(do_math_part2('5 * 9 * (7 * 3 * 3 + 9 * 3 + (8 + 6 * 4))') == 669060)
assert(do_math_part2('((2 + 4 * 9) * (6 + 9 * 8 + 6) + 6) + 2 + 4 * 2') == 23340)
```
Sum up the expression results for Part II:
```
def part2_solution(datas: List[str]) -> int:
return sum(do_math_part2(data) for data in datas)
```
Get the Part II answer:
```
part2_solution(datas)
```
# Fine-tuning Transformer models and test prediction for GLUE tasks, using *torchdistill*
## 1. Make sure you have access to GPU/TPU
Google Colab: Runtime -> Change runtime type -> Hardware accelerator: "GPU" or "TPU"
```
!nvidia-smi
```
## 2. Clone torchdistill repository to use its example code and configuration files
```
!git clone https://github.com/yoshitomo-matsubara/torchdistill
```
## 3. Install dependencies and *torchdistill*
```
!pip install -r torchdistill/examples/hf_transformers/requirements.txt
!pip install torchdistill
```
## (Optional) Configure Accelerate for a ~2x training speedup with mixed precision
If you are **NOT** using Google Colab Pro, fine-tuning a base-sized model for the following 9 tasks on a Tesla K80 will take more than 12 hours (the maximum session lifetime for free Google Colab users).
By using mixed-precision training, you can complete all 9 fine-tuning jobs within that limit.
[This table](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#mixed-precision-training) gives you a good idea of how long it takes to fine-tune BERT-Base on a Titan RTX with and without mixed precision.
```
!accelerate config
```
## 4. Fine-tune Transformer models for GLUE tasks
The following examples demonstrate how to fine-tune pretrained BERT-Base (uncased) on each of the datasets in GLUE.
**Note**: Test splits for GLUE tasks in the `datasets` package are not labeled, so you use only the training and validation splits in this example, following [Hugging Face's example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification).
### 4.1 CoLA task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/cola/ce/bert_base_uncased.yaml \
--task cola \
--log log/glue/cola/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.2 SST-2 task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/sst2/ce/bert_base_uncased.yaml \
--task sst2 \
--log log/glue/sst2/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.3 MRPC task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/mrpc/ce/bert_base_uncased.yaml \
--task mrpc \
--log log/glue/mrpc/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.4 STS-B task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/stsb/mse/bert_base_uncased.yaml \
--task stsb \
--log log/glue/stsb/mse/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.5 QQP task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/qqp/ce/bert_base_uncased.yaml \
--task qqp \
--log log/glue/qqp/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.6 MNLI task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/mnli/ce/bert_base_uncased.yaml \
--task mnli \
--log log/glue/mnli/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.7 QNLI task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/qnli/ce/bert_base_uncased.yaml \
--task qnli \
--log log/glue/qnli/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.8 RTE task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/rte/ce/bert_base_uncased.yaml \
--task rte \
--log log/glue/rte/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
### 4.9 WNLI task
```
!accelerate launch torchdistill/examples/hf_transformers/text_classification.py \
--config torchdistill/configs/sample/glue/wnli/ce/bert_base_uncased.yaml \
--task wnli \
--log log/glue/wnli/ce/bert_base_uncased.txt \
--private_output leaderboard/glue/standard/bert_base_uncased/
```
# 5. Validate your prediction files for GLUE leaderboard
To make sure your prediction files contain the right number of samples (lines), run `wc -l <your prediction dir path>/*`; you should see the following output.
```
1105 AX.tsv
1064 CoLA.tsv
9848 MNLI-mm.tsv
9797 MNLI-m.tsv
1726 MRPC.tsv
5464 QNLI.tsv
390966 QQP.tsv
3001 RTE.tsv
1822 SST-2.tsv
1380 STS-B.tsv
147 WNLI.tsv
426320 total
```
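If you prefer to do the same check from Python, here is a hedged sketch (the helper name is my own; the expected counts are copied from the table above, shown here as a subset):

```python
from pathlib import Path

# Expected submission line counts, taken from the table above (subset).
EXPECTED_COUNTS = {'CoLA.tsv': 1064, 'RTE.tsv': 3001, 'WNLI.tsv': 147}

def find_count_mismatches(pred_dir, expected=EXPECTED_COUNTS):
    """Return {filename: (actual, expected)} for files whose line count is off."""
    mismatches = {}
    for name, exp in expected.items():
        path = Path(pred_dir) / name
        actual = sum(1 for _ in path.open()) if path.exists() else 0
        if actual != exp:
            mismatches[name] = (actual, exp)
    return mismatches
```

An empty dict means every listed file has the expected number of lines; a missing file is reported with an actual count of 0.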
```
!wc -l leaderboard/glue/standard/bert_base_uncased/*
```
## 6. Zip the submission files and download to make a submission
```
!zip bert_base_uncased-submission.zip leaderboard/glue/standard/bert_base_uncased/*
```
Download the zip file from "Files" menu.
To submit the file to the GLUE system, refer to their webpage.
https://gluebenchmark.com/
## 7. More sample configurations, models, datasets...
You can find more [sample configurations](https://github.com/yoshitomo-matsubara/torchdistill/tree/master/configs/sample/) in the [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) repository.
If you would like to use larger datasets e.g., **ImageNet** and **COCO** datasets and models in `torchvision` (or your own modules), refer to the [official configurations](https://github.com/yoshitomo-matsubara/torchdistill/tree/master/configs/official) used in some published papers.
Experiments with such large datasets and models will require you to use your own machine due to limited disk space and session time (12 hours for free version and 24 hours for Colab Pro) on Google Colab.
# Colab examples for knowledge distillation
You can find Colab examples for knowledge distillation experiments in the [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) repository.
### Tutorial 5 (Date 18 Sep, 2019)
Today's Attendance: https://forms.gle/uSf8JqubE4nDT65d9
Today's Notebook link: https://bit.ly/2lUP3EK
Topics
- Semantic Role Labelling
- PropBank, FrameNet
- PredPatt
- Semantic Parsers
- Other tasks - WSD, Coreference resolution
A lot of theory content and most of the SRL examples in this notebook are taken from Chapter 18 of Jurafsky and Martin's book - https://web.stanford.edu/~jurafsky/slp3/18.pdf
# 1. Semantic Role Labelling
To answer **"who did what to whom"**, one needs a meaningful understanding of an utterance, and this has been of interest in natural language understanding for quite a long time.
## 1.1 Semantic Roles
Let's take an example.
"The boy broke a glass."
Let's draw the dependency tree for the above sentence.
```
import spacy
nlp = spacy.load('en')
from spacy import displacy
doc1 = nlp(u"The boy broke a glass.")
displacy.render(doc1, style="dep", jupyter=True, options={'compact':False})
```
- Semantic roles are the abstract roles that arguments play in the event denoted by the predicate.
For example, in the above case, the event being talked about is "break", which has two arguments. The first argument is the boy who broke something, and he can be thought of as the AGENT of the event. The object "glass" can be called the THEME. Labels like AGENT and THEME are called semantic roles.
- These roles are defined in databases like PropBank and FrameNet.
#### Some Thematic Roles
Agent can be considered a thematic role. We can think of it as representing "volitional causation". The objects of verbs like break are prototypically inanimate, and they are affected by the event of breaking.
- Agent: the volitional causer of an event
- Theme: The participant most directly affected by an event
- Experiencer: The experiencer of an event
- Instrument: An instrument used in an event
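Putting these labels on the earlier example, a hand-made annotation (my own illustration, not the output of any parser) might look like:

```python
# Hand-annotated semantic roles for "The boy broke a glass."
# (illustrative only; this is not parser output)
annotation = {
    'predicate': 'broke',
    'AGENT': 'The boy',   # volitional causer of the breaking event
    'THEME': 'a glass',   # participant most directly affected by it
}
print(annotation['AGENT'], '->', annotation['THEME'])
```

This is the kind of shallow structure a semantic role labeler is expected to recover automatically.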
#### "There's no universally agreed upon set of roles."
#### Why are these roles important? Can't we just use syntactic parsers?
These roles help us build a shallow semantic representation that cannot be expressed by syntactic parses alone.
For example, you might say that the subject of the predicate is always the agent. But consider sentence 2.
1. Jon broke the window.
Jon -- agent, the window -- theme
2. The window was broken by Jon.
The window -- theme, Jon -- agent.
3. Jon broke the window with a hammer.
hammer -- the instrument
4. The hammer broke the window.
hammer -- instrument, window -- theme
## 1.2 PropBank
Roles are defined for each definition of a verb in PropBank.
Frames from PropBank: https://github.com/propbank/propbank-frames/blob/master/frames
#### Semantic Role Labelling is the task of assigning semantic roles to a given input sentence.
#### Semantic Role Labelling generally has four sub-tasks:
1. predicate identification
2. predicate disambiguation
3. argument identification
4. argument classification
#### Let's use the AllenNLP library to try a (probably) state-of-the-art SRL model
```
!pip install allennlp
```
#### Download the SRL model from allennlp
```
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz")
predictor.predict(sentence="The boy broke the window with a hammer.")
predictor.predict(sentence="The window was broken by the boy.")
```
You can also check out their online demo at: https://demo.allennlp.org/semantic-role-labeling/
## 1.3 Quick note on FrameNet
FrameNet is another semantic role labelling project which assigns semantic roles, but it considers the different verbs and nouns that can evoke the same frame.
For example, consider the following sentences:
1. [The price of bananas] **increased** [5%.]
2. [The price of bananas] **rose** [5%].
3. There has been a [5%] **rise** in [the price of bananas.]
We want to infer that the price of bananas went up, and it went up by 5%.
```
predictor.predict(sentence="The price of bananas increased 5%.")
predictor.predict(sentence="The price of bananas rose 5%.")
predictor.predict(sentence="There has been a rise in the price of bananas")
```
In FrameNet, roles are specific to a frame instead of individual words as in PropBank.
A frame is a concept like "air-travel". It could have roles related to: "reservation, flight, travel, buy, price, cost, fare, rates, meal, plane" etc.
Distinction between core-roles non-core roles.
# 2. PredPatt
- Predicate-argument extraction from Universal Dependencies.
- Part of the decomp project (http://decomp.io/).
- Creates a shallow semantic layer from the syntactic structure of Universal Dependencies
Details: https://github.com/hltcoe/PredPatt
Papers:
- https://www.aclweb.org/anthology/W17-6944/
- https://www.aclweb.org/anthology/D16-1177/
```
!pip install git+https://github.com/hltcoe/PredPatt.git
conllu = """
1 You you PRON PRP Case=Nom|Person=2|PronType=Prs 2 nsubj _ _
2 wonder wonder VERB VBP Mood=Ind|Tense=Pres|VerbForm=Fin 0 root _ _
3 if if SCONJ IN _ 6 mark _ _
4 he he PRON PRP Case=Nom|Gender=Masc|Number=Sing|Person=3|PronType=Prs 6 nsubj _ _
5 was be AUX VBD Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin 6 aux _ _
6 manipulating manipulate VERB VBG Tense=Pres|VerbForm=Part 2 advcl _ _
7 the the DET DT Definite=Def|PronType=Art 8 det _ _
8 market market NOUN NN Number=Sing 6 dobj _ _
9 with with ADP IN _ 12 case _ _
10 his he PRON PRP$ Gender=Masc|Number=Sing|Person=3|Poss=Yes|PronType=Prs 12 nmod:poss _ _
11 bombing bombing NOUN NN Number=Sing 12 compound _ _
12 targets target NOUN NNS Number=Plur 6 nmod _ SpaceAfter=No
13 . . PUNCT . _ 2 punct _ _
"""
```
#### Create a PredPatt object and extract predicates
Details at: https://github.com/hltcoe/PredPatt/blob/master/tutorial.ipynb
```
from predpatt import load_conllu, PredPatt, PredPattOpts
options = PredPattOpts(resolve_relcl=True, borrow_arg_for_relcl=True, resolve_conj=False, cut=True)
```
#### Print Relation triples
```
conll_example = [ud_parse for sent_id, ud_parse in load_conllu(conllu)][0]
print(conll_example.pprint(K=3))
```
#### Create Predicate object and print predicate and arguments
```
pred_object = PredPatt(conll_example, opts=options)
print(" ".join([token.text for token in pred_object.tokens]))
print(pred_object.pprint(color=True))
## Predicate instances
pred_object.instances
## Tokens
print(pred_object.tokens)
```
# 3. Semantic Parsers
Wikipedia defines semantic parsing as the task of converting a natural language utterance to a logical form: a machine-understandable representation of its meaning.
## 3.1 TRIPS Parser
TRIPS Ontology and TRIPS Lexicon can be found at: http://www.cs.rochester.edu/research/trips/lexicon/browse-ont-lex.html
- TRIPS has its own semantic roles which are defined differently than PropBank.
- These roles are more readable (unlike Arg0, Arg1, etc.).
- The roles are not specific to individual verbs.
Details about the parser: https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15377/14522
TRIPS Parser online demo: http://trips.ihmc.us/parser/cgi/parse?input=the+bat+ate+the+fruit
#### Pytrips
https://pypi.org/project/pytrips/
```
!pip install pytrips
!pip install nltk
import nltk
nltk.download('wordnet')
```
#### Loading TRIPS Ontology in python
```
from pytrips.ontology import load
ont = load()
## Ontology types for a lexicon item
ont['w::bat']
print(ont['nonhuman-animal'])
type(ont['nonhuman-animal'])
ont['nonhuman-animal'].path_to_root()
## Lowest Common Ancestor
ont['nonhuman-animal'] ^ ont['manufactured-object']
```
#### Going up and down in the hierarchy
```
print(ont['nonhuman-animal'].parent)
print(ont['phys-object'].children)
```
#### Trips-web
```
!pip install git+https://github.com/mrmechko/trips-web.git
!trips-web "He ate the fish." > parse.json
import json
with open('parse.json') as f:
trips_data = json.load(f)
trips_data
trips_web_parse = trips_data['parse'][0]
ont['want'].children
```
#### Simplify trips-web format
```
def rkeys(d, keys=['root']):
return {x: d[x] for x in d if x not in keys}
def remove_all_roots(dct):
'''
Collapse all 'root' keys in the dict
'''
if list(dct.keys())==['root']:
return dct['root']
else:
ans_dict = {}
root = dct['root']
#print(root)
dict_without_root = rkeys(dct)
#print(dict_without_root)
ans_dict[root] = dict_without_root
for key in ans_dict[root]:
ans_dict[root][key] = remove_all_roots(ans_dict[root][key])
return ans_dict
def simplify_format(t, root_id, explored=set()):
'''
Input: t, a target graph output from trips-web
root_id = root_id of t = t['root'][1:]
Output: a simplified dict which matches the format of
pattern_graphs as described in the project
'''
ans_dict = {}
ans_dict['root'] = t[root_id]["type"]
explored.add(root_id)
for role in t[root_id]['roles']:
curr_dict = t[root_id]['roles']
if curr_dict[role][0] == "#":
## To avoid running into cycles we keep a note of the explored nodes
if curr_dict[role][1:] not in explored:
explored.add(curr_dict[role][1:])
## recursive call
ans_dict[role.lower()] = simplify_format(t, curr_dict[role][1:], explored)
else:
## this id has already been explored
ans_dict[role.lower()] = {"root": t[curr_dict[role][1:]]['type']}
return ans_dict
def create_simplified_graph(trips_web_parse):
return remove_all_roots(simplify_format(trips_web_parse,
trips_web_parse['root'][1:]))
dict_graph = create_simplified_graph(trips_web_parse)
print(dict_graph)
```
#### Draw simplified TRIPS graph from the dictionary tree
Source: https://stackoverflow.com/questions/13688410/dictionary-object-to-decision-tree-in-pydot
```
!pip install pydot
import pydot
def draw(parent_name, child_name):
edge = pydot.Edge(parent_name, child_name)
graph.add_edge(edge)
def visit(node, parent=None):
for k,v in node.items():
if isinstance(v, dict):
# We start with the root node whose parent is None
# we don't want to graph the None node
if parent:
draw(parent, k)
visit(v, k)
else:
draw(parent, k)
# drawing the label using a distinct name
draw(k, k+'_'+v)
graph = pydot.Dot(graph_type='graph')
visit(dict_graph)
from IPython import display
display.Image(graph.create_png(), width=800)
```
## 3.2 A quick note on AMR Parser
Abstract Meaning Representation. Another semantic parser which uses PropBank roles as its semantic roles.
https://amr.isi.edu/language.html
#### Differences between TRIPS and AMR:
Taken from lecture notes CSC447 by James Allen
- AMR has semantic roles from PropBank, TRIPS uses its own semantic roles.
- AMR doesn't have word-senses for each word in the sentence, TRIPS does.
- AMR has co-reference resolution, TRIPS doesn't.
- The Senses in AMR are not organized in an ontology like TRIPS.
Current State-of-the-Art on AMR Parsing: https://www.aclweb.org/anthology/P19-1009
# 4. Some other NLP tasks
You can check various NLP tasks here: http://nlpprogress.com/
## 4.1 Word-Sense Disambiguation (WSD)
WSD is the task of identifying the sense of a word in a given context.
#### SupWSD Python API
```
!pip install supwsd
from it.si3p.supwsd.api import SupWSD
from it.si3p.supwsd.config import Model, Language
import nltk
```
Register for the api key here: https://supwsd.net/supwsd/register.jsp
```
sup_key = '' ## Use your API-key here
#### Sentence to wordnet senses ####
def sent_to_wn_senses(sent, system="supwsd",
ims_object = None,
word=None,
supwsd_apikey=None):
'''
Input: sent: A string of words
    Output: list containing wordnet-senses of each word in the sent
with respective probabilities of those senses
'''
if system == "supwsd":
ans = []
for result in SupWSD(supwsd_apikey).disambiguate(sent, Language.EN, Model.SEMCOR, True):
token=result.token
if not result.miss():
sense_lst = []
for sense in result.senses:
sense_lst.append((sense.id, sense.probability))
ans.append([token.word, sense_lst])
else:
ans.append([token.word, [(str(result.sense()), 1.0)]])
return ans
sent_to_wn_senses(r"The bat ate the fruit.", supwsd_apikey = sup_key)
```
The above word-senses are based on WordNet (https://wordnet.princeton.edu/)
## 4.2 Co-reference Resolution
#### Use Allennlp library
```
!pip install allennlp
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/coref-model-2018.02.05.tar.gz")
predictor.predict(document="The woman reading a newspaper sat on the bench with her dog.")
```
#### Visualize it here: https://demo.allennlp.org/coreference-resolution/
```
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from tensorflow import keras
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels),(test_images, test_labels) = mnist.load_data()
first_train_image = training_images[0]
plt.imshow(first_train_image)
np.set_printoptions(linewidth=150)
print(training_images[0])
first_train_image = training_images[0]
plt.figure(figsize=(30, 15))
plt.imshow(first_train_image)
first_train_image.shape
first_test_image = test_images[0]
plt.figure(figsize=(30,15))
plt.imshow(first_test_image)
np.set_printoptions(linewidth=150)
print(test_images[0])
first_test_image.shape
training_images = training_images / 255.0 #Data normalization
test_images = test_images / 255.0
np.min(training_images[0]), np.max(training_images[0])  # values now lie in [0, 1]
model = keras.models.Sequential([tf.keras.layers.Flatten(),
                                 tf.keras.layers.Dense(128,activation=tf.nn.relu), # relu: outputs x for x > 0, else 0
tf.keras.layers.Dense(10,activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss= 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(training_images,training_labels,epochs=5)
model.summary()
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,logs={}):
if(logs.get('loss')<0.4):
      print("\nLoss dropped below 0.4, so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels),(test_images, test_labels) = mnist.load_data()
training_images = training_images / 255.0 #Data normalization
test_images = test_images / 255.0
model = keras.models.Sequential([tf.keras.layers.Flatten(),
                                 tf.keras.layers.Dense(512,activation=tf.nn.relu), # relu: outputs x for x > 0, else 0
tf.keras.layers.Dense(256,activation=tf.nn.relu),
tf.keras.layers.Dense(128,activation=tf.nn.relu),
tf.keras.layers.Dense(64,activation=tf.nn.relu),
tf.keras.layers.Dense(10,activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss= 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(training_images,
training_labels,
epochs=5,
          callbacks=[callbacks])
```
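Each row of `classifications` above is a softmax distribution over the 10 Fashion-MNIST classes. A minimal standalone sketch (with made-up probabilities, not real model output) of turning such a row into a predicted label:

```python
import numpy as np

# One made-up softmax row over the 10 Fashion-MNIST classes.
probs = np.array([0.01, 0.01, 0.02, 0.02, 0.03, 0.05, 0.05, 0.10, 0.11, 0.60])
predicted_class = int(np.argmax(probs))
print(predicted_class)  # index of the largest probability
```

This is why `print(classifications[0])` shows ten numbers summing to 1: the predicted class is simply the index of the largest one, which you can compare against the true label.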
# gis-utils demo
This notebook illustrates some of the basic functionality of ``gis-utils``
```
from pathlib import Path
import numpy as np
import pandas as pd
import geopandas as gp
import rasterio
import gisutils
import matplotlib.pyplot as plt
```
## Working with shapefiles
```
#nhdplus_path = Path.home() / 'Documents/NHDPlus/'
#files = [nhdplus_path / 'NHDPlusGL/NHDPlus04/NHDSnapshot/Hydrography/NHDFlowline.shp',
# nhdplus_path / 'NHDPlusMS/NHDPlus07/NHDSnapshot/Hydrography/NHDFlowline.shp']
files = ['data/NHDPlus04_flowline.shp',
'data/NHDPlus07_flowline.shp']
# make the output folder
Path('output').mkdir(exist_ok=True)
```
#### Get the Coordinate Reference System for a shapefile
(as a [``pyproj.CRS``](https://pyproj4.github.io/pyproj/stable/api/crs/crs.html) instance)
```
crs = gisutils.get_shapefile_crs(files[0])
crs
```
#### read multiple shapefiles into a single DataFrame
Note: for most applications, [``Geopandas``](https://geopandas.org/index.html) does what ``shp2df`` and ``df2shp`` do and more, with a nicer interface (the ``GeoDataFrame``).
```
df = gisutils.shp2df(files)
df.head()
```
#### write back out to a shapefile
```
gisutils.df2shp(df, 'output/combined.shp', crs=4269)
```
#### writing .DBF files
this is probably the most significant thing that ``gisutils`` does that GeoPandas doesn't
```
gisutils.df2shp(df.drop('geometry', axis=1), 'output/combined.dbf')
```
## Working with Coordinate Reference Systems (CRS)
### Reprojection
No need to specify what type of CRS representation you're using!
##### individual point
from NAD83 Geographic CRS (degrees; EPSG:4269) to NAD83 Albers; (EPSG:5070)
```
x5070, y5070 = gisutils.project((-91.87370, 34.93738), 4269, 5070)
x5070, y5070
gisutils.project((x5070, y5070), 5070, 4269)
```
##### vector of points:
from NAD83 Geographic CRS (degrees; EPSG:4269) to NAD83 / Wisconsin Transverse Mercator (EPSG:3070)
```
df.geometry.values[0]
x, y = df.geometry.values[0].coords.xy
x3070, y3070 = gisutils.project((x, y), 4269, 3070)
#x3070, y3070
```
##### shapely geometry object
```
geom_3070 = gisutils.project(df.geometry.values[0], 4269, 3070)
geom_4269 = gisutils.project(geom_3070, 3070, 4269)
geom_4269.almost_equals(df.geometry.values[0])
```
##### sequence of shapely geometry objects
```
projected = gisutils.project(df.geometry, 4269, 3070)
```
### Raster reprojection
```
from rasterio.plot import show
rasterfile = 'data/top.tif'
with rasterio.open(rasterfile) as src:
    show(src)
```
Reproject from NAD83 Albers (EPSG:5070) to NAD83 Geographic CRS (degrees; EPSG:4269)
```
gisutils.projection.project_raster(rasterfile, 'output/projected.tif', 4269)
with rasterio.open('output/projected.tif') as src:
    show(src)
```
#### Note: ``project_raster`` can also be used for resampling
(with output to the same or a different CRS)
```
gisutils.projection.project_raster(rasterfile, 'output/projected_low_res.tif', 5070,
resolution=1e4
)
with rasterio.open('output/projected_low_res.tif') as src:
    show(src)
```
### Get an equivalent `pyproj.CRS` object for WTM, 3 ways:
* [`pyproj.CRS`](https://pyproj4.github.io/pyproj/stable/examples.html#using-crs) allows for simple and robust specification and comparison of CRS
#### using a PROJ string
([Not recommended anymore](https://pyproj4.github.io/pyproj/stable/gotchas.html#what-is-the-best-format-to-store-the-crs-information))
```
proj_str_3070 = (('+proj=tmerc +lat_0=0 +lon_0=-90 +k=0.9996 '
'+x_0=520000 +y_0=-4480000 +ellps=GRS80 '
'+datum=NAD83 +units=m +no_defs'))
crs_3070 = gisutils.get_authority_crs(proj_str_3070)
crs_3070
```
#### The proj4 and EPSG representations are correctly identified as the same!
```
crs_3070 == gisutils.get_authority_crs(3070)
wkt_3070 = ('PROJCS["NAD83 / Wisconsin Transverse Mercator",'
'GEOGCS["GCS_North_American_1983",'
'DATUM["D_North_American_1983",'
'SPHEROID["GRS_1980",6378137,298.257222101]],'
'PRIMEM["Greenwich",0],'
'UNIT["Degree",0.017453292519943295]],'
'PROJECTION["Transverse_Mercator"],'
'PARAMETER["latitude_of_origin",0],'
'PARAMETER["central_meridian",-90],'
'PARAMETER["scale_factor",0.9996],'
'PARAMETER["false_easting",520000],'
'PARAMETER["false_northing",-4480000],'
'UNIT["Meter",1]]')
```
#### works for WKT too
* note that there are many different flavors of WKT; for WTM, [spatialreference.org lists at least 3](https://spatialreference.org/ref/epsg/3070/)
```
crs_3070 == gisutils.get_authority_crs(wkt_3070)
```
## Working with Rasters
#### get the crs for a raster
(as a [``pyproj.CRS``](https://pyproj4.github.io/pyproj/stable/api/crs/crs.html) instance)
```
gisutils.raster.get_raster_crs(rasterfile)
```
### Sample raster values at a set of points
For example, we can make a grid of x, y locations that covers the same area as the raster in ``rasterfile``.
```
with rasterio.open(rasterfile) as src:
    print(src.meta)
```
#### ``gisutils.get_values_at_points()``
* takes a raster file, scalar or sequence-like x, y values
* optionally, a ``point_crs`` argument that specifies the CRS for the x, y locations, which are then reprojected to the CRS of the raster prior to sampling
* an interpolation method argument (``"nearest"`` or ``"linear"``)
* a 1D array of sampled values for each x, y is returned
```
nrow, ncol = 700, 700
x = np.arange(ncol) * 1000 + 177955.0
y = 1604285.0 - (np.arange(nrow) * 1000)[::-1]
X, Y = np.meshgrid(x, y)
sampled = gisutils.get_values_at_points(rasterfile,
x=X.ravel(), y=Y.ravel(),
points_crs=None)
sampled = np.reshape(sampled, (nrow, ncol))
plt.pcolormesh(X, Y, sampled, shading='auto')
plt.gca().set_aspect(1)
```
### Convert a set of points to a raster
Demonstrate this by making a DataFrame of the values sampled by ``get_values_at_points()``, and then writing that out to a shapefile. Use every 10th point.
```
from shapely.geometry import Point
stride = 10
geoms = [Point(x, y) for x, y in zip(X[::stride].ravel(), Y[::stride].ravel())]
df = gp.GeoDataFrame({'geometry': geoms,
'elevation': sampled[::stride].ravel()},
crs=5070)
df.to_file('output/sampled.shp')
gisutils.raster.points_to_raster('output/sampled.shp',
data_col='elevation',
output_resolution=1000,
outfile='surface.tif')
with rasterio.open('surface.tif') as src:
    show(src)
```
### Write a numpy array to a raster
First read ``rasterfile`` to an array:
```
with rasterio.open(rasterfile) as src:
    array = src.read(1)
array
plt.imshow(array)
```
#### get the lower left corner of the array in ``rasterfile``
Note: we could simply use the upper left corner instead, via the ``xul`` and ``yul`` arguments below, but oftentimes we want to work with the lower left.
```
xll = src.transform[2]
yll = src.transform[5] + src.height * src.transform[4]
xll, yll
```
#### write a raster
* with a lower left corner of xll, yll in NAD83 North American Albers (epsg:5070)
* 1,000 meter vertical and horizontal spacing
* nodata mask for all values == -9999
```
gisutils.raster.write_raster('output/surface2.tif', array, xll=xll, yll=yll,
dx=1000, dy=1000, rotation=0, nodata=-9999,
crs = 5070)
```
#### note that a separate ``.tif.msk`` file is written, containing the nodata mask
* this allows software like ``rasterio`` or ``QGIS`` to mask the nodata values by default
```
with rasterio.open('output/surface2.tif') as src:
    show(src)
```
#### alternatively, a masked array can be supplied to the same effect
```
masked_array = np.ma.masked_array(array, array==-9999)
plt.imshow(masked_array)
gisutils.raster.write_raster('output/surface2.tif', masked_array, xll=xll, yll=yll,
dx=1000, dy=1000, rotation=0,
crs = 5070)
```
Table of Contents
=================
* [numpy array](#numpy-array)
* [arange](#arange)
* [linspace](#linspace)
* [eye](#eye)
* [random](#random)
* [rand](#rand)
* [randn](#randn)
* [randint](#randint)
* [reshape](#reshape)
* [dtype](#dtype)
* [numpy_indexing_and_slicing](#numpy_indexing_and_slicing)
* [indexing](#indexing)
* [slice](#slice)
* [slice vs copy](#slice-vs-copy)
* [2d_array](#2d_array)
* [conditional Fetching in Arrays](#conditional-fetching-in-arrays)
```
lst = [1,2,3,4]
import numpy as np
```
# numpy array
Converts a list to an array.
```
np.array(lst)
mat=[(1,2),(3,4)]
np.array(mat)
```
## arange
Similar to Python's built-in `range`, but returns an array.
```
np.arange(0,30,2)
np.zeros(3)
```
> Create a 7x7 matrix of zeros by passing a tuple
```
np.zeros((7,7))
np.ones(1)
```
> Create a 3x4 matrix of ones by passing a tuple
```
np.ones((3,4))
```
## linspace
```
Signature:
np.linspace(
start,
stop,
num=50,
endpoint=True,
retstep=False,
dtype=None,
axis=0,
)
Docstring:
Return evenly spaced numbers over a specified interval.
Returns `num` evenly spaced samples, calculated over the
interval [`start`, `stop`].
The endpoint of the interval can optionally be excluded.
Examples
--------
>>> np.linspace(2.0, 3.0, num=5)
array([ 2. , 2.25, 2.5 , 2.75, 3. ])
>>> np.linspace(2.0, 3.0, num=5, endpoint=False)
array([ 2. , 2.2, 2.4, 2.6, 2.8])
>>> np.linspace(2.0, 3.0, num=5, retstep=True)
(array([ 2. , 2.25, 2.5 , 2.75, 3. ]), 0.25)
```
```
np.linspace(0,10,5)
np.linspace(0,10,35)
```
## eye
```
np.eye(4)
```
## random
### rand
Random numbers between 0 and 1 (uniform distribution).
Docstring:
rand(d0, d1, ..., dn)
Random values in a given shape.
```
np.random.rand()
np.random.rand(5) # single Dimension
np.random.rand(3,4) # Two Dimension
```
### randn
Docstring:
randn(d0, d1, ..., dn)
Return a sample (or samples) from the "standard normal" distribution.
```
np.random.randn()
np.random.randn(3)
np.random.randn(3,3)
```
### randint
Docstring:
randint(low, high=None, size=None, dtype='l')
Return random integers from `low` (inclusive) to `high` (exclusive).
```
np.random.randint(100)
np.random.randint(1,100,4)
arr = np.arange(25)
arr
ranarr = np.random.randint(1,100,25)
ranarr
```
## reshape
Docstring:
a.reshape(shape, order='C')
Returns an array containing the same data with a new shape.
```
ranarr.reshape(5,5)
ranarr.max() # max Value in the arr
ranarr.min() # min Value in the arr
ranarr.argmax() # Index of max Value in the arr
ranarr.argmin() # Index of min Value in the arr
ranarr.shape # shape of the array (its dimensions)
ranarr=ranarr.reshape(5,5)
ranarr.shape
```
## dtype
```
Create a data type object.
A numpy array is homogeneous, and contains elements described by a
dtype object. A dtype object can be constructed from different
combinations of fundamental numeric types.
Parameters
----------
obj
Object to be converted to a data type object.
align : bool, optional
Add padding to the fields to match what a C compiler would output
for a similar C-struct. Can be ``True`` only if `obj` is a dictionary
or a comma-separated string. If a struct dtype is being created,
this also sets a sticky alignment flag ``isalignedstruct``.
copy : bool, optional
Make a new copy of the data-type object. If ``False``, the result
may just be a reference to a built-in data-type object.
Examples
--------
Using array-scalar type:
>>> np.dtype(np.int16)
dtype('int16')
```
```
ranarr.dtype # data type of the array's elements, e.g. int64
s=[1,2,'3']
ss= np.array(s)
ss.dtype
```
> **A simple way to import specific NumPy functions**
```
from numpy.random import randint
randint(3,10,4)
```
# numpy_indexing_and_slicing
## indexing
```
lst=[0,1,2,3,4,5,6,7,8,9,10]
lst
lst[0]
import numpy as np
npArr=np.arange(10)
npArr
npArr[0]
npArr
npArr[0:]
npArr[:5]
npArr[0:5]
npArr[5:]
npArr[0:5]=100 # <------ Note : Broadcast the value to range of index
npArr
```
## slice
```
a=np.arange(10,20)
a
slice_of_a = a[0:5]
slice_of_a[:]
slice_of_a[]=99 # <------ NOTE : this FAILS (SyntaxError)
slice_of_a[:]=99 # <------ NOTE : this PASSES; "[:]" assigns to every element
slice_of_a
```
Now check out this:
```
a
```
Strange, isn't it? We only changed `slice_of_a`, so why did the values of `a` change too?
**NumPy slices are views (references) into the original array, not copies, so a change made through `slice_of_a` is a change made to `a`.**
**Why does NumPy do this?**
----> **To avoid unnecessary memory use.**
So **how do you copy an array, then?**
Answer: **`array.copy()`**
```
b=np.arange(20,30)
b
bcopy = b.copy()
bcopy
bcopy[0:3]=0
bcopy
b # <------ NOTE : original Source not changed...
```
## slice vs copy
```
b=np.arange(80,90)
b
bcopy=b.copy()
bcopy
bslice=b[:]
bslice
bcopy[0:5]=5
bcopy
b # <------- NOTE : Source not Changed
bslice[0:5]=5
bslice
b # <------- NOTE : Source Changed
```
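The view-vs-copy distinction can also be checked directly with `np.shares_memory` (a NumPy helper not used elsewhere in this notebook); a minimal sketch:

```python
import numpy as np

a = np.arange(80, 90)

# A basic slice is a view: it shares memory with the source array.
bslice = a[:]
print(np.shares_memory(a, bslice))   # True

# .copy() allocates a new buffer, so changing the copy cannot touch the source.
bcopy = a.copy()
print(np.shares_memory(a, bcopy))    # False
```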
## 2d_array
**Now let's move on to 2-D arrays.**
```
arr_2d=np.array([[1,2,3],[4,5,6],[7,8,9]])
arr_2d
```
>There are two general methods to access elements of a 2-D array:
- a. double-bracket notation: `arr[i][j]`
- b. single-bracket notation with a comma: `arr[i, j]`
```
arr_2d[1][2]
arr_2d[1,2] # <------ NOTE : Single bracket Notation
```
> **Accessing Specific Elements in Array/ Sub Sections**
```
arr_2d
arr_2d[:2,1:] # <---- Analyse this Carefully
```
> Now Select 5,6,8,9
```
arr_2d[1:,1:]
```
### conditional fetching in arrays
> We can apply a condition to an array to get an array of boolean values, and then use it to access the elements of the array where the condition is true.
```
arr=np.arange(1,11)
arr
arr>5 #<-------- NOTE : Here comparing the values of arr which are greater than 5
boolean = arr>5
boolean
```
> These boolean values show which elements of `arr` satisfy the condition; this is essentially a form of **filtering**.
```
arr[boolean]
```
> More concisely, all the elements of `arr` greater than 5 can be fetched as below:
```
arr[arr>5] # <----- same as arr[boolean]
arr[arr<7]
```
> **Now let's combine `reshape` with 2-D slicing**
```
arr=np.arange(50).reshape(5,10)
arr
arr[1:3,3:5]
```
# numpy operations
```
import numpy as np
arr=np.arange(0,10)
arr
arr + arr
arr * arr
```
> Notice how division errors differ between plain Python and NumPy arrays: Python raises `ZeroDivisionError`, while NumPy issues a warning and returns `nan`/`inf`.
```
1/0
arr/arr
1/arr
arr **2
```
> **Numpy Universal array Functions**
- [https://docs.scipy.org/doc/numpy/reference/ufuncs.html](https://docs.scipy.org/doc/numpy/reference/ufuncs.html)
```
np.sqrt(arr)
np.exp(arr)
np.max(arr)
```
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# NumPy Exercises
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks, and then you'll be asked some more complicated questions.
#### Import NumPy as np
```
import numpy as np
```
#### Create an array of 10 zeros
```
np.zeros(10)
```
#### Create an array of 10 ones
```
np.ones(10)
```
#### Create an array of 10 fives
```
np.ones(10)*5
np.zeros(10)+5
```
#### Create an array of the integers from 10 to 50
```
np.arange(10,51)
```
#### Create an array of all the even integers from 10 to 50
```
np.arange(10,52,2)
```
#### Create a 3x3 matrix with values ranging from 0 to 8
```
np.arange(9).reshape(3,3)
```
#### Create a 3x3 identity matrix
```
np.eye(3)
```
#### Use NumPy to generate a random number between 0 and 1
```
np.random.rand()
```
#### Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
```
np.random.randn(25)
```
#### Create the following matrix:
```
array([[ 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1 ],
[ 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2 ],
[ 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3 ],
[ 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4 ],
[ 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5 ],
[ 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6 ],
[ 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7 ],
[ 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8 ],
[ 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9 ],
[ 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99, 1. ]])
```
> **This can be done in two ways, as shown below**
```
np.linspace(0.01,1.00,100).reshape(10,10)
np.arange(1,101).reshape(10,10)/100
```
#### Create an array of 20 linearly spaced points between 0 and 1:
```
np.linspace(0,1,20)
```
## Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
```
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
#array([[12, 13, 14, 15],
# [17, 18, 19, 20],
# [22, 23, 24, 25]])
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# 20
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# array([[ 2],
# [ 7],
# [12]])
mat[:3,1].reshape(3,1) # NOTE : SEE THIS
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# array([21, 22, 23, 24, 25])
mat[4]
mat[-1]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# array([[16, 17, 18, 19, 20],
# [21, 22, 23, 24, 25]])
mat[3:]
```
### Now do the following
#### Get the sum of all the values in mat
```
mat.sum()
np.sum(mat)
```
#### Get the standard deviation of the values in mat
```
np.std(mat)
mat.std()
```
#### Get the sum of all the columns in mat
```
mat.sum(axis=0)
```
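As a follow-up (an addition, not one of the original exercises), row sums use `axis=1` in the same way:

```python
import numpy as np

mat = np.arange(1, 26).reshape(5, 5)

# axis=0 collapses the rows (column sums); axis=1 collapses the columns (row sums).
print(mat.sum(axis=1))   # [ 15  40  65  90 115]
```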
# Great Job!
# Introduction to Python
In this lesson we will learn the basics of the Python programming language (version 3). We won't learn everything about Python but enough to do some basic machine learning.
<img src="figures/python.png" width=350>
# Variables
Variables are objects in Python that can hold anything with numbers or text. Let's look at how to create some variables.
```
# Numerical example
x = 5
print (x)
# Text example
x = "hello"
print (x)
# Variables can be used with each other
a = 1
b = 2
c = a + b
print (c)
```
Variables can come in lots of different types. Even within numerical variables, you can have integers (int), floats (float), etc. All text based variables are of type string (str). We can see what type a variable is by printing its type.
```
# int variable
x = 5
print (x)
print (type(x))
# float variable
x = 5.0
print (x)
print (type(x))
# text variable
x = "5"
print (x)
print (type(x))
# boolean variable
x = True
print (x)
print (type(x))
```
It's good practice to know what types your variables are. When you want to use numerical operations on them, they need to be compatible.
```
# int variables
a = 5
b = 3
print (a + b)
# string variables
a = "5"
b = "3"
print (a + b)
```
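One way to make incompatible types work together (a small addition to the cells above) is explicit conversion with `int()` and `str()`:

```python
# Mixing incompatible types directly raises an error:
a = "5"
b = 3
# a + b  # TypeError: can only concatenate str (not "int") to str

# Convert explicitly so the types match:
print(int(a) + b)    # 8  (numeric addition)
print(a + str(b))    # 53 (string concatenation)
```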
# Lists
Lists are objects in Python that can hold an ordered sequence of numbers **and** text.
```
# Creating a list
list_x = [3, "hello", 1]
print (list_x)
# Adding to a list
list_x.append(7)
print (list_x)
# Accessing items at specific location in a list
print ("list_x[0]: ", list_x[0])
print ("list_x[1]: ", list_x[1])
print ("list_x[2]: ", list_x[2])
print ("list_x[-1]: ", list_x[-1]) # the last item
print ("list_x[-2]: ", list_x[-2]) # the second to last item
# Slicing
print ("list_x[:]: ", list_x[:])
print ("list_x[2:]: ", list_x[2:])
print ("list_x[1:3]: ", list_x[1:3])
print ("list_x[:-1]: ", list_x[:-1])
# Length of a list
len(list_x)
# Replacing items in a list
list_x[1] = "hi"
print (list_x)
# Combining lists
list_y = [2.4, "world"]
list_z = list_x + list_y
print (list_z)
```
# Tuples
Tuples are also objects in Python that can hold data but you cannot replace their values (for this reason, tuples are called immutable, whereas lists are known as mutable).
```
# Creating a tuple
tuple_x = (3.0, "hello")
print (tuple_x)
# Adding values to a tuple
tuple_x = tuple_x + (5.6,)
print (tuple_x)
# Trying to change a tuple's value (you can't; this should produce an error)
tuple_x[1] = "world"
```
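Since tuples can't be modified in place, a common workaround (not shown in the original cells) is to convert to a list, change it, and convert back:

```python
tuple_x = (3.0, "hello", 5.6)

# Tuples are immutable, so build a modified copy by going through a list.
as_list = list(tuple_x)
as_list[1] = "world"
tuple_x = tuple(as_list)
print(tuple_x)   # (3.0, 'world', 5.6)
```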
# Dictionaries
Dictionaries are Python objects that hold key-value pairs. In the example dictionary below, the keys are the "name" and "eye_color" variables. They each have a value associated with them. A dictionary cannot have two of the same keys.
```
# Creating a dictionary
dog = {"name": "dog",
"eye_color": "brown"}
print (dog)
print (dog["name"])
print (dog["eye_color"])
# Changing the value for a key
dog["eye_color"] = "green"
print (dog)
# Adding new key-value pairs
dog["age"] = 5
print (dog)
# Length of a dictionary
print (len(dog))
```
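A common next step (an addition beyond the cells above) is looping over a dictionary's key-value pairs with `.items()`:

```python
dog = {"name": "dog", "eye_color": "green", "age": 5}

# .items() yields (key, value) pairs, preserving insertion order.
for key, value in dog.items():
    print(key, "->", value)

# .keys() and .values() give each half on its own.
print(list(dog.keys()))     # ['name', 'eye_color', 'age']
print(list(dog.values()))   # ['dog', 'green', 5]
```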
# If statements
You can use `if` statements to conditionally do something.
```
# If statement
x = 4
if x < 1:
    score = "low"
elif x <= 4:
    score = "medium"
else:
    score = "high"
print (score)
# If statement with a boolean
x = True
if x:
    print ("it worked")
```
# Loops
In Python, you can use `for` loop to iterate over the elements of a sequence such as a list or tuple, or use `while` loop to do something repeatedly as long as a condition holds.
```
# For loop
x = 1
for i in range(3): # goes from i=0 to i=2
    x += 1 # same as x = x + 1
    print ("i={0}, x={1}".format(i, x)) # printing with multiple variables
# Loop through items in a list
x = 1
for i in [0, 1, 2]:
    x += 1
    print ("i={0}, x={1}".format(i, x))
# While loop
x = 3
while x > 0:
    x -= 1 # same as x = x - 1
    print (x)
```
# Functions
Functions are a way to modularize reusable pieces of code.
```
# Create a function
def add_two(x):
    x += 2
    return x
# Use the function
score = 0
score = add_two(x=score)
print (score)
# Function with multiple inputs
def join_name(first_name, last_name):
    joined_name = first_name + " " + last_name
    return joined_name
# Use the function
first_name = "John"
last_name = "Doe"
joined_name = join_name(first_name=first_name, last_name=last_name)
print (joined_name)
```
# Classes
Classes are a fundamental piece of object oriented programming in Python.
```
# Creating the class
class Pets(object):
    # Initialize the class
    def __init__(self, species, color, name):
        self.species = species
        self.color = color
        self.name = name

    # For printing
    def __str__(self):
        return "{0} {1} named {2}.".format(self.color, self.species, self.name)

    # Example function
    def change_name(self, new_name):
        self.name = new_name
# Creating an instance of a class
my_dog = Pets(species="dog", color="orange", name="Rover",)
print (my_dog)
print (my_dog.name)
# Using a class's function
my_dog.change_name(new_name="Sparky")
print (my_dog)
print (my_dog.name)
```
# Additional resources
This was a very quick look at Python and we'll be learning more in future lessons. If you want to learn more right now before diving into machine learning, check out this free course: [Free Python Course](https://www.codecademy.com/learn/learn-python)
# Hyper-Param and Iterative jobs
MLRun supports iterative tasks for automatic and distributed execution of many tasks with variable parameters. This can be used for purposes such as:
* Parallel loading and preparation of many data objects
* Model training with different parameter sets and/or algorithms
* Parallel testing with many test vector options
MLRun iterations can be viewed as child runs under the main task/run; each child run gets a set of parameters computed/selected from the input hyper parameters based on the chosen strategy ([Grid](#grid-search-default), [List](#list-search), [Random](#random-search) or [Custom](#custom-iterator)).
The different iterations can run in parallel over multiple containers (using Dask or Nuclio runtimes which manage the workers), read more in the [**Parallel Execution Over Containers**](#parallel-execution-over-containers) section.
The hyper parameters and options are specified in the `task` or the {py:meth}`~mlrun.runtimes.BaseRuntime.run` command through the `hyperparams` (for hyper param values) and `hyper_param_options` (for {py:class}`~mlrun.model.HyperParamOptions`) properties; see the examples below. Hyper parameters can also be loaded directly from a CSV or JSON file (by setting the `param_file` hyper option).
The hyper params are specified as a struct of `key: list` values, for example: `{"p1": [1,2,3], "p2": [10,20]}`. The values can be of any type (int, string, float, ...), and the lists are used to compute the parameter combinations using one of the following strategies:
1. Grid Search (`grid`) - running all the parameter combinations
2. Random (`random`) - running a sampled set from all the parameter combinations
3. List (`list`) - running the first parameter from each list followed by the 2nd from each list and so on, note that all the lists must be of equal size.
MLRun also supports a 4th `custom` option, which allows determining the parameter combination per run programmatically.
You can specify a selection criterion to select the best run among the different child runs by setting the `selector` option; this marks that result as the parent (iteration 0) result, and marks the best result in the user interface.
You can also specify the `stop_condition` to stop execution of child runs when some criterion based on the returned results is met (for example `stop_condition="accuracy>=0.9"`).
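As a rough plain-Python sketch (not MLRun's actual implementation) of how the strategies expand hyper param lists such as `{"p1": [1,2,3], "p2": [10,20]}` into parameter sets:

```python
import itertools
import random

hyperparams = {"p1": [1, 2, 3], "p2": [10, 20]}
keys = list(hyperparams)

# grid: the Cartesian product of all the value lists (3 * 2 = 6 combinations)
grid = [dict(zip(keys, combo)) for combo in itertools.product(*hyperparams.values())]

# list: pair values positionally (the lists must be of equal length for this strategy)
paired = [dict(zip(keys, combo)) for combo in zip([1, 2], [10, 20])]

# random: a sampled subset of the grid, capped at max_iterations
sampled = random.sample(grid, k=2)

print(len(grid))   # 6
print(paired)      # [{'p1': 1, 'p2': 10}, {'p1': 2, 'p2': 20}]
```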
## Examples
**Base dummy function:**
```
import mlrun
def hyper_func(context, p1, p2):
    print(f"p1={p1}, p2={p2}, result={p1 * p2}")
    context.log_result("multiplier", p1 * p2)
```
### Grid Search (default)
```
grid_params = {"p1": [2,4,1], "p2": [10,20]}
task = mlrun.new_task("grid-demo").with_hyper_params(grid_params, selector="max.multiplier")
run = mlrun.new_function().run(task, handler=hyper_func)
```
**UI Screenshot:**
<br><br>
<img src="_static/images/hyper-params.png" alt="hyper-params" width="800"/>
### Random Search
In random search, MLRun chooses random parameter combinations; we limit the number of combinations using the `max_iterations` attribute.
```
grid_params = {"p1": [2,4,1,3], "p2": [10,20,30]}
task = mlrun.new_task("random-demo")
task.with_hyper_params(grid_params, selector="max.multiplier", strategy="random", max_iterations=4)
run = mlrun.new_function().run(task, handler=hyper_func)
```
### List Search
In this example we also show how to use the `stop_condition` option
```
list_params = {"p1": [2,3,7,4,5], "p2": [15,10,10,20,30]}
task = mlrun.new_task("list-demo").with_hyper_params(
list_params, selector="max.multiplier", strategy="list", stop_condition="multiplier>=70")
run = mlrun.new_function().run(task, handler=hyper_func)
```
### Custom Iterator
We can define a child iteration context under the parent/main run; each child run is logged independently.
```
def handler(context: mlrun.MLClientCtx, param_list):
    best_multiplier = total = 0
    for param in param_list:
        with context.get_child_context(**param) as child:
            hyper_func(child, **child.parameters)
            multiplier = child.results['multiplier']
            total += multiplier
            if multiplier > best_multiplier:
                child.mark_as_best()
                best_multiplier = multiplier

    # log result at the parent
    context.log_result('avg_multiplier', total / len(param_list))
param_list = [{"p1":2, "p2":10}, {"p1":3, "p2":30}, {"p1":4, "p2":7}]
run = mlrun.new_function().run(handler=handler, params={"param_list": param_list})
```
## Parallel Execution Over Containers
When working with compute-intensive or long-running tasks we would like to run our iterations over a cluster of containers; at the same time, we don't want to bring up too many containers, and would rather limit the number of parallel tasks.
MLRun supports distributing the child runs over Dask or Nuclio clusters. This is handled automatically by MLRun; the user only needs to deploy the Dask or Nuclio function used by the workers and set the level of parallelism in the task. The execution can be controlled from the client/notebook, or by a job (immediate or scheduled) which controls the execution.
### Code example (single task)
```
# mark the start of a code section which will be sent to the job
# mlrun: start-code
import socket
import pandas as pd
def hyper_func2(context, data, p1, p2, p3):
    print(data.as_df().head())
    context.logger.info(f"p2={p2}, p3={p3}, r1={p2 * p3} at {socket.gethostname()}")
    context.log_result("r1", p2 * p3)
    raw_data = {
        "first_name": ["Jason", "Molly", "Tina", "Jake", "Amy"],
        "age": [42, 52, 36, 24, 73],
        "testScore": [25, 94, 57, 62, 70],
    }
    df = pd.DataFrame(raw_data, columns=["first_name", "age", "testScore"])
    context.log_dataset("mydf", df=df, stats=True)
# mlrun: end-code
```
### Running the workers using Dask
In the following example we create a new function and execute the parent/controller as an MLRun `job` and the different child runs over a Dask cluster (MLRun Dask function).
#### Define a Dask Cluster (using MLRun serverless Dask)
```
dask_cluster = mlrun.new_function("dask-cluster", kind='dask', image='mlrun/ml-models')
dask_cluster.apply(mlrun.mount_v3io()) # add volume mounts
dask_cluster.spec.service_type = "NodePort" # open interface to the dask UI dashboard
dask_cluster.spec.replicas = 2 # define two containers
uri = dask_cluster.save()
uri
# initialize the dask cluster and get its dashboard url
dask_cluster.client
```
#### Define the Parallel Work
We set the `parallel_runs` attribute to indicate how many child tasks to run in parallel, and set the `dask_cluster_uri` to point to our Dask cluster (if we don't set the cluster URI, a local Dask cluster is used). We can also set the `teardown_dask` flag to free up all the Dask resources after completion.
```
grid_params = {"p2": [2,1,4,1], "p3": [10,20]}
task = mlrun.new_task(params={"p1": 8}, inputs={'data': 'https://s3.wasabisys.com/iguazio/data/iris/iris_dataset.csv'})
task.with_hyper_params(
grid_params, selector="r1", strategy="grid", parallel_runs=4, dask_cluster_uri=uri, teardown_dask=True
)
```
**Define a job that will take our code (using code_to_function) and run it over the cluster**
```
fn = mlrun.code_to_function(name='hyper-tst', kind='job', image='mlrun/ml-models')
run = fn.run(task, handler=hyper_func2)
```
### Running the workers using Nuclio
Nuclio is a high-performance serverless engine which can process many events in parallel. It can also separate initialization from execution: certain parts of the code (imports, loading data, etc.) can be done once per worker instead of on every run.
By default Nuclio processes events (HTTP, stream, ...); there is a special Nuclio kind which runs MLRun jobs (`nuclio:mlrun`).
**Notes:**
* Nuclio tasks are relatively short (preferably under 5 minutes); use them for running many iterations where each individual run takes less than 5 minutes.
* Use `context.logger` to drive text outputs (vs `print()`)
#### Create a nuclio:mlrun function
```
fn = mlrun.code_to_function(name='hyper-tst2', kind='nuclio:mlrun', image='mlrun/mlrun')
# replicas * workers need to match or exceed parallel_runs
fn.spec.replicas = 2
fn.with_http(workers=2)
fn.deploy()
```
#### Run the parallel task over the function
```
# this is required to fix a Jupyter issue with asyncio (not required outside of Jupyter)
# run it only once
import nest_asyncio
nest_asyncio.apply()
grid_params = {"p2": [2,1,4,1], "p3": [10,20]}
task = mlrun.new_task(params={"p1": 8}, inputs={'data': 'https://s3.wasabisys.com/iguazio/data/iris/iris_dataset.csv'})
task.with_hyper_params(
grid_params, selector="r1", strategy="grid", parallel_runs=4, max_errors=3
)
run = fn.run(task, handler=hyper_func2)
```
```
%load_ext autoreload
%autoreload 2
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "1"
import torch
from torch import nn
import numpy as np
from tqdm.auto import tqdm
from matplotlib import pyplot as plt
from sparse_causal_model_learner_rl.learners import rl_learner
from sparse_causal_model_learner_rl.config import Config
from sparse_causal_model_learner_rl.sacred_gin_tune.sacred_wrapper import load_config_files
import gin
import ray
import pickle
from causal_util.helpers import CPU_Unpickler, lstdct2dctlst
gin.enter_interactive_mode()
from copsolver.frank_wolfe_solver import FrankWolfeSolver
from commondescentvector.multi_objective_cdv import MultiObjectiveCDV
load_config_files(['../vectorincrement/config/ve5_toy_digits.gin',
'../sparse_causal_model_learner_rl/configs/ve5_digits_rec.gin',
'../sparse_causal_model_learner_rl/configs/server_collect_big.gin'])
gin.bind_parameter('Config.load_new_config', True)
gin.bind_parameter('Config.load_checkpoint', '/home/sergei/ray_results/ve5_toy_digits_ve5_digits_rec_server_collect_big/main_fcn_98f13_00000_0_2021-02-20_08-59-46/checkpoint_5000/')
ray.init('10.90.40.13:6379')
f = open(gin.query_parameter('Config.load_checkpoint') + '/checkpoint', 'rb')
#l = CPU_Unpickler(f).load()
l = pickle.load(f)
def tensor_list_shape(lst):
    return [x.shape for x in lst]

def tensor_list_flatten(lst):
    return torch.cat([x.flatten() for x in lst])

def tensor_list_unflatten(t, shape):
    tensors = []
    used_n = 0
    for s in shape:
        curr_n = int(np.prod([int(d) for d in s]))  # d renamed to avoid shadowing the t parameter
        tensors.append(t[used_n:used_n + curr_n].reshape(s))
        used_n += curr_n
    return tensors
def aggregate_qcop(losses_vals, grads, loss_keys, execution):
grad_shape = tensor_list_shape(list(grads.values())[0])
grads_flat = {key: tensor_list_flatten(val) for key, val in grads.items()}
solver = FrankWolfeSolver()
cdv = MultiObjectiveCDV(copsolver=solver)#, max_empirical_losses=[losses_vals[k] for k in loss_keys],
# normalized=True)
alpha_loss, alphas = cdv.get_descent_vector(
losses=[losses_vals[key] for key in loss_keys],
gradients=np.array([grads_flat[key].detach().cpu().numpy() for key in loss_keys]))
aggregated_gradient = sum([grads_flat[key] * alphas[i] for i, key in enumerate(loss_keys)]).detach()
aggregated_gradient_list = tensor_list_unflatten(aggregated_gradient, grad_shape)
return aggregated_gradient_list
def aggregate_sum(losses_vals, grads, loss_keys, execution):
grad_shape = tensor_list_shape(list(grads.values())[0])
grads_flat = {key: tensor_list_flatten(val) for key, val in grads.items()}
aggregated_gradient = sum([grads_flat[key] * execution[key][1] for i, key in enumerate(loss_keys)]).detach()
aggregated_gradient_list = tensor_list_unflatten(aggregated_gradient, grad_shape)
return aggregated_gradient_list
def get_grads():
grads_now = []
for p in l.params_for_optimizers['opt1']:
grads_now.append(p.grad)
return [x.clone() for x in grads_now]
losses = l.config.get('losses')
execution = {key: (losses[key]['fcn'], losses[key]['coeff']) for key in l.config.get('execution')['opt1']}
np.sum([np.prod(x.shape) for x in l.params_for_optimizers['opt1']])
tracked = []
for i in tqdm(range(1000)):
losses_vals = {}
grads = {}
loss_keys = sorted(execution.keys())
ctx = l.collect_and_get_context()
for key, (loss, coeff) in execution.items():
loss_local_cache = {}
l.optimizer_objects['opt1'].zero_grad()
loss_value = loss(**ctx, loss_local_cache=loss_local_cache)
# print(key, loss_value['loss'].item())
loss_value['loss'].backward()
losses_vals[key] = loss_value['loss'].item()
grads[key] = get_grads()
# l.optimizer_objects['opt1'].zero_grad()
# l.optimizer_objects['opt1'].step()
agg_grads = aggregate_qcop(losses_vals, grads, loss_keys, execution)
# agg_grads = aggregate_sum(losses_vals, grads, loss_keys, execution)
for param, grad in zip(l.params_for_optimizers['opt1'], agg_grads):
param.grad = grad.clone()
l.optimizer_objects['opt1'].step()
tracked.append(losses_vals)
hist = lstdct2dctlst(tracked)
plt.figure(figsize=(15, 5))
for i, key in enumerate(sorted(hist.keys())):
plt.subplot(1, len(hist), i + 1)
plt.title(key)
plt.plot(hist[key])
plt.yscale('log')
plt.show()
```
Standard aggregation

QCOP with no norm

qcop with norm

```
!echo -e "if True:\n\
import os\n\
from setuptools import setup\n\
\n\
setup(\
name = \"mamo\",\n\
version = \"0.0.1\",\n\
author = \"Swisscom\",\n\
license = \"BSD\",\n\
packages=['copsolver', 'commondescentvector'],\n\
classifiers=[\n\
\"Development Status :: 3 - Alpha\",\n\
\"Topic :: Utilities\",\n\
\"License :: OSI Approved :: BSD License\",\n\
],\n\
)\n\
"> ../mamo/setup.py
!pip install -e ../mamo/
```
---
# Loading a PyTorch Model in C++
```
import torch
import torchvision
```
## Step 1: Converting Your PyTorch Model to Torch Script
```
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
output = traced_script_module(torch.ones(1, 3, 224, 224))
print(output[0, :5])
```
## Converting to Torch Script via Annotation
```
class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output
```
Because the `forward` method of this module uses control flow that is dependent on the input, it is not suitable for tracing. Instead, we can convert it to a `ScriptModule` by subclassing it from `torch.jit.ScriptModule` and adding a `@torch.jit.script_method` annotation to the model’s `forward` method:
```
class MyModule(torch.jit.ScriptModule):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    @torch.jit.script_method
    def forward(self, input):
        if bool(input.sum() > 0):
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

my_script_module = MyModule(2, 3)
```
## Step 2: Serializing Your Script Module to a File
Once you have a `ScriptModule` in your hands, either from tracing or annotating a PyTorch model, you are ready to serialize it to a file. Later on, you’ll be able to load the module from this file in C++ and execute it without any dependency on Python. Say we want to serialize the `ResNet18` model shown earlier in the tracing example. To perform this serialization, simply call save on the module and pass it a filename:
```
traced_script_module.save("model.pt")
```
This will produce a `model.pt` file in your working directory. We have now officially left the realm of Python and are ready to cross over to the sphere of C++.
```
!ls -lah
```
## Step 3: Loading Your Script Module in C++
To load your serialized PyTorch model in C++, your application must depend on the PyTorch C++ API – also known as LibTorch. The LibTorch distribution encompasses a collection of shared libraries, header files and CMake build configuration files. While CMake is not a requirement for depending on LibTorch, it is the recommended approach and will be well supported into the future. For this tutorial, we will be building a minimal C++ application using CMake and LibTorch that simply loads and executes a serialized PyTorch model.
### A Minimal C++ Application
```
!cat resnet18-app.cpp
```
### Depending on LibTorch and Building the Application
```
!cat CMakeLists.txt
```
The last thing we need to build the example application is the LibTorch distribution. You can always grab the latest stable release from the [download page](https://pytorch.org/get-started/locally/#start-locally) on the PyTorch website. If you download and unzip the latest archive, you should receive a folder with the following directory structure:
```
!ls -l ../../; ls -l ../../libtorch
```
- The `lib/` folder contains the shared libraries you must link against,
- The `include/` folder contains header files your program will need to include,
- The `share/` folder contains the necessary CMake configuration to enable the simple find_package(Torch) command above.
The last step is building the application. For this, assume our example directory is laid out like this:
```
!ls -l
```
We can now run the following commands to build the application from within the `resnet18-app/` folder:
```
!cmake -DCMAKE_PREFIX_PATH=../../libtorch .
!ls -l
!make
!ls -l resnet18-app
```
If we supply the path to the serialized `ResNet18` model we created earlier to the resulting `resnet18-app` binary, we should be rewarded with a friendly "ok":
```
!./resnet18-app
!./resnet18-app model.pt
```
## Step 4: Executing the Script Module in C++
Having successfully loaded our serialized `ResNet18` in C++, we are now just a couple lines of code away from executing it! Let’s add those lines to our C++ application’s `main()` function:
```
!cat resnet18-app.cpp
```
Let’s try it out by re-compiling our application and running it with the same serialized model:
```
!make
!./resnet18-app model.pt
```
For reference, the output in Python previously was:
```sh
tensor([ 0.0808, 0.5042, 0.7030, -0.9465, 0.4538], grad_fn=<SliceBackward>)
```
Looks like a good match!
## Step 5: Getting Help and Exploring the API
This tutorial has hopefully equipped you with a general understanding of a PyTorch model’s path from Python to C++. With the concepts described in this tutorial, you should be able to go from a vanilla, “eager” PyTorch model, to a compiled `ScriptModule` in Python, to a serialized file on disk and – to close the loop – to an executable `script::Module` in C++.
Of course, there are many concepts we did not cover. To learn more, go to: https://pytorch.org/tutorials/advanced/cpp_export.html#step-5-getting-help-and-exploring-the-api
---
# Project 3: Smart Beta Portfolio and Portfolio Optimization
## Overview
Smart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. A Smart Beta portfolio generally gives investors exposure or "beta" to one or more types of market characteristics (or factors) that are believed to predict prices, while giving investors a broad, diversified exposure to a particular market. Smart Beta portfolios generally target momentum, earnings quality, low volatility, and dividends, or some combination. Smart Beta portfolios are generally rebalanced infrequently and follow relatively simple rules or algorithms that are passively managed. Model changes to these types of funds are also rare, requiring prospectus filings with the U.S. Securities and Exchange Commission in the case of US-focused mutual funds or ETFs. Smart Beta portfolios are generally long-only: they do not short stocks.
In contrast, a purely alpha-focused quantitative fund may use multiple models or algorithms to create a portfolio. The portfolio manager retains discretion in upgrading or changing the types of models and how often to rebalance the portfolio in an attempt to maximize performance in comparison to a stock benchmark. Managers may have discretion to short stocks in portfolios.
Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods.
One way to design a portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results.
For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this:
Companies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice.
You may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse.
So the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility).
Smart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF.
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.
The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
```
## Market Data
### Load Data
For this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid.
```
df = pd.read_csv('../../data/project_3/eod-quotemedia.csv')
percent_top_dollar = 0.2
high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)
df = df[df['ticker'].isin(high_volume_symbols)]
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume')
dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends')
```
### View Data
To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
```
project_helper.print_dataframe(close)
```
# Part 1: Smart Beta Portfolio
In Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs.
Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index.
## Index Weights
The index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is close prices and volume data:
```
Prices
A B ...
2013-07-08 2 2 ...
2013-07-09 5 6 ...
2013-07-10 1 2 ...
2013-07-11 6 5 ...
... ... ... ...
Volume
A B ...
2013-07-08 100 340 ...
2013-07-09 240 220 ...
2013-07-10 120 500 ...
2013-07-11 10 100 ...
... ... ... ...
```
The weights created from the function `generate_dollar_volume_weights` should be the following:
```
A B ...
2013-07-08 0.126.. 0.194.. ...
2013-07-09 0.759.. 0.377.. ...
2013-07-10 0.075.. 0.285.. ...
2013-07-11 0.037.. 0.142.. ...
... ... ... ...
```
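As a quick sanity check, the scheme above can be reproduced in a few lines of pandas using only the two tickers shown (the resulting weights differ from the table because the `...` columns are omitted): dollar volume is `close * volume`, normalized across each date (row).

```
import pandas as pd

close = pd.DataFrame({'A': [2, 5, 1, 6], 'B': [2, 6, 2, 5]})
volume = pd.DataFrame({'A': [100, 240, 120, 10], 'B': [340, 220, 500, 100]})

dollar_volume = close * volume
# Divide each row by that row's total dollar volume
weights = dollar_volume.div(dollar_volume.sum(axis=1), axis='index')
print(weights.round(3))  # every row sums to 1.0
```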
```
def generate_dollar_volume_weights(close, volume):
    """
    Generate dollar volume weights.

    Parameters
    ----------
    close : DataFrame
        Close price for each ticker and date
    volume : DataFrame
        Volume for each ticker and date

    Returns
    -------
    dollar_volume_weights : DataFrame
        The dollar volume weights for each ticker and date
    """
    assert close.index.equals(volume.index)
    assert close.columns.equals(volume.columns)

    #TODO: Implement function
    df_weights = close*volume

    # I think the example given is misleading.
    # The way it is illustrated, it looks like you look at each stock
    # and weight that individual stock to the observed dates.
    #
    # What I think they want is for each date, what is the weight of
    # the stocks.
    #
    # .sum(axis=1) means add up along the row (the date) and not the stock
    # the column.
    #
    # .div(axis='index') means
    # Divide each row of a DataFrame by another DataFrame vector
    # https://stackoverflow.com/questions/22642162/python-divide-each-row-of-a-dataframe-by-another-dataframe-vector
    dollar_volume_weights = df_weights.div(df_weights.sum(axis=1),
                                           axis='index')
    # # DEBUG
    # #
    # print("DEBUG - close")
    # print(close.head())
    # print("DEBUG - volume")
    # print(volume.head())
    # print("DEBUG - df_weights")
    # print(df_weights.head())
    # print("DEBUG - df_weights.sum()")
    # print(df_weights.sum())
    # print("DEBUG - dollar_volume_weights")
    # print(dollar_volume_weights)
    return dollar_volume_weights

project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)
```
### View Data
Let's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap.
```
index_weights = generate_dollar_volume_weights(close, volume)
project_helper.plot_weights(index_weights, 'Index Weights')
```
## Portfolio Weights
Now that we have the index weights, let's choose the portfolio weights based on dividends. You would normally calculate the weights based on trailing dividend yield, but we'll simplify this by just calculating the total dividend yield over time.
Implement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead.
For example, assume the following is `dividends` data:
```
Dividends
A B
2013-07-08 0 0
2013-07-09 0 1
2013-07-10 0.5 0
2013-07-11 0 0
2013-07-12 2 0
... ... ...
```
The weights created from the function `calculate_dividend_weights` should be the following:
```
A B
2013-07-08 NaN NaN
2013-07-09 0 1
2013-07-10 0.333.. 0.666..
2013-07-11 0.333.. 0.666..
2013-07-12 0.714.. 0.285..
... ... ...
```
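The worked example above can be reproduced directly with pandas: cumulative-sum the dividends per stock, then normalize each date (row) across stocks. The first row is `0/0`, hence `NaN`.

```
import pandas as pd

dividends = pd.DataFrame({'A': [0, 0, 0.5, 0, 2],
                          'B': [0, 1, 0, 0, 0]})
cum = dividends.cumsum()
# Divide each row by that row's total cumulative dividend
weights = cum.div(cum.sum(axis=1), axis='index')
print(weights.round(3))  # NaN row, then 0/1, 1/3 vs 2/3, 1/3 vs 2/3, 5/7 vs 2/7
```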
```
def calculate_dividend_weights(dividends):
    """
    Calculate dividend weights.

    Parameters
    ----------
    dividends : DataFrame
        Dividend for each stock and date

    Returns
    -------
    dividend_weights : DataFrame
        Weights for each stock and date
    """
    #TODO: Implement function
    # DEBUG
    #
    #print(dividends)

    # How I believe this works is that for each stock, you do the cumulative sum of the dividends.
    # Then for that date, you check the weight of the return for all the stocks.
    df_cumulative_sum = dividends.cumsum()

    # This is similar to the last problem cell.
    #
    # .sum(axis=1) means add up along the row (the date) and not the stock
    # the column.
    #
    # .div(axis='index') means
    # Divide each row of a DataFrame by another DataFrame vector
    # https://stackoverflow.com/questions/22642162/python-divide-each-row-of-a-dataframe-by-another-dataframe-vector
    dividend_weights = df_cumulative_sum.div(df_cumulative_sum.sum(axis=1),
                                             axis='index')
    return dividend_weights

project_tests.test_calculate_dividend_weights(calculate_dividend_weights)
```
### View Data
Just like the index weights, let's generate the ETF weights and view them using a heatmap.
```
etf_weights = calculate_dividend_weights(dividends)
project_helper.plot_weights(etf_weights, 'ETF Weights')
```
## Returns
Implement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns.
```
def generate_returns(prices):
    """
    Generate returns for ticker and date.

    Parameters
    ----------
    prices : DataFrame
        Price for each ticker and date

    Returns
    -------
    returns : DataFrame
        The returns for each ticker and date
    """
    #TODO: Implement function
    # Returns (not log returns): (today's price minus yesterday's price) divided by yesterday's price.
    # NOTE: .shift(1) goes back one day.
    # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html
    returns = (prices - prices.shift(1))/prices.shift(1)
    return returns

project_tests.test_generate_returns(generate_returns)
```
### View Data
Let's generate the closing returns using `generate_returns` and view them using a heatmap.
```
returns = generate_returns(close)
project_helper.plot_returns(returns, 'Close Returns')
```
## Weighted Returns
With the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights.
```
def generate_weighted_returns(returns, weights):
    """
    Generate weighted returns.

    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    weights : DataFrame
        Weights for each ticker and date

    Returns
    -------
    weighted_returns : DataFrame
        Weighted returns for each ticker and date
    """
    assert returns.index.equals(weights.index)
    assert returns.columns.equals(weights.columns)

    # # DEBUG
    # #
    # print("returns")
    # print(returns)
    # print("weights")
    # print(weights)

    #TODO: Implement function
    weighted_returns = returns.mul(weights)
    return weighted_returns

project_tests.test_generate_weighted_returns(generate_weighted_returns)
```
### View Data
Let's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap.
```
index_weighted_returns = generate_weighted_returns(returns, index_weights)
etf_weighted_returns = generate_weighted_returns(returns, etf_weights)
project_helper.plot_returns(index_weighted_returns, 'Index Returns')
project_helper.plot_returns(etf_weighted_returns, 'ETF Returns')
```
## Cumulative Returns
To compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF cumulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns.
```
def calculate_cumulative_returns(returns):
    """
    Calculate cumulative returns.

    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date

    Returns
    -------
    cumulative_returns : Pandas Series
        Cumulative returns for each date
    """
    #TODO: Implement function
    # # DEBUG
    # #
    # print('returns')
    # print(returns)

    df_return_of_stocks_per_day = returns.sum(axis=1)

    # The cumulative return formula:
    # cumulative return is the cumprod of (1 + r_t) where r_t is the daily return.
    #
    # ex: cumulative = (1 + return1) * (1 + return2) * (1 + return3) - 1
    # https://stackoverflow.com/questions/40811246/pandas-cumulative-return-function
    df_temp_a = 1.0 + df_return_of_stocks_per_day
    cumulative_returns = df_temp_a.cumprod()

    # # DEBUG
    # #
    # print('returns')
    # print(returns)
    # print('cumulative_returns')
    # print(cumulative_returns)
    return cumulative_returns

project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)
```
### View Data
Let's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two.
```
index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)
etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')
```
## Tracking Error
In order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark.
For reference, we'll be using the following annualized tracking error function:
$$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$
Where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns.
_Note: When calculating the sample standard deviation, the delta degrees of freedom is 1, which is also the default value._
```
def tracking_error(benchmark_returns_by_date, etf_returns_by_date):
    """
    Calculate the tracking error.

    Parameters
    ----------
    benchmark_returns_by_date : Pandas Series
        The benchmark returns for each date
    etf_returns_by_date : Pandas Series
        The ETF returns for each date

    Returns
    -------
    tracking_error : float
        The tracking error
    """
    assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index)

    #TODO: Implement function
    ds_delta = benchmark_returns_by_date - etf_returns_by_date
    tracking_error = np.sqrt(252) * ds_delta.std()
    return tracking_error

project_tests.test_tracking_error(tracking_error)
```
### View Data
Let's generate the tracking error using `tracking_error`.
```
smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1))
print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error))
```
# Part 2: Portfolio Optimization
Now, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1.
We want to both minimize the portfolio variance and also want to closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index.
$Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose.
Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index.
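As a toy numeric illustration of this trade-off (numpy only, made-up two-stock numbers):

```
import numpy as np

P = np.array([[0.04, 0.01],
              [0.01, 0.09]])       # toy covariance of two stocks
index = np.array([0.5, 0.5])       # toy index weights
lam = 2.0                          # the scaling factor lambda

def objective(x):
    # portfolio variance + lambda * L2 distance from the index weights
    return x @ P @ x + lam * np.linalg.norm(x - index)

print(objective(np.array([0.5, 0.5])))  # 0.0375: zero distance, all variance
print(objective(np.array([0.7, 0.3])))  # lower variance, but penalized distance
```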
## Covariance
Implement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance.
If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use [`Numpy.cov`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time. For any `NaN` values, you can replace them with zeros using the [`DataFrame.fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html) function.
The covariance matrix $\mathbf{P} =
\begin{bmatrix}
\sigma^2_{1,1} & ... & \sigma_{1,m} \\
... & ... & ...\\
\sigma_{m,1} & ... & \sigma^2_{m,m} \\
\end{bmatrix}$
```
def get_covariance_returns(returns):
    """
    Calculate covariance matrices.

    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date

    Returns
    -------
    returns_covariance : 2 dimensional Ndarray
        The covariance of the returns
    """
    #TODO: Implement function
    # NOTE: rowvar
    # If rowvar is True (default), then each row represents a variable,
    # with observations in the columns. Otherwise, the relationship is
    # transposed: each column represents a variable, while the rows
    # contain observations.
    # https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html
    returns_covariance = np.cov(returns.fillna(value=0),
                                rowvar=False)
    return returns_covariance

project_tests.test_get_covariance_returns(get_covariance_returns)
```
### View Data
Let's look at the covariance generated from `get_covariance_returns`.
```
covariance_returns = get_covariance_returns(returns)
covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns)
covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns))))
covariance_returns_correlation = pd.DataFrame(
covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation),
covariance_returns.index,
covariance_returns.columns)
project_helper.plot_covariance_returns_correlation(
covariance_returns_correlation,
'Covariance Returns Correlation Matrix')
```
### portfolio variance
We can write the portfolio variance $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$
Recall that the $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called the quadratic form.
We can use the cvxpy function `quad_form(x,P)` to get the quadratic form.
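A tiny numeric check of the quadratic form (numpy only; inside the optimization, `cvx.quad_form(x, P)` plays the same role for a cvxpy variable `x`):

```
import numpy as np

P = np.array([[0.04, 0.01],
              [0.01, 0.09]])   # toy 2x2 covariance matrix
x = np.array([0.6, 0.4])       # fixed toy weights

variance = x.T @ P @ x
print(variance)  # 0.36*0.04 + 2*0.24*0.01 + 0.16*0.09 = 0.0336
```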
### Distance from index weights
We want portfolio weights that track the index closely. So we want to minimize the distance between them.
Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So: $\sqrt{\sum_{1}^{n}(weight_i - indexWeight_i)^2}$ Can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.html#norm)
`norm(x, p=2, axis=None)`. The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights.
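The equivalence is easy to check numerically (numpy only; `cvx.norm(x - index)` computes the same L2 norm for a cvxpy variable):

```
import numpy as np

x = np.array([0.5, 0.3, 0.2])       # toy portfolio weights
index = np.array([0.4, 0.4, 0.2])   # toy index weights

d_manual = np.sqrt(np.sum((x - index) ** 2))  # Pythagorean form
d_norm = np.linalg.norm(x - index)            # L2 norm, the default p=2
print(d_manual, d_norm)                       # both sqrt(0.02)
```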
### objective function
We want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights.
We also want to choose a `scale` constant, which is $\lambda$ in the expression.
$\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$
This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio. The higher the value of `scale` ($\lambda$), the more closely the optimized weights track the index weights.
We can find the objective function using cvxpy `objective = cvx.Minimize()`. Can you guess what to pass into this function?
### constraints
We can also define our constraints in a list. For example, you'd want the weights to sum to one, so $\sum_{1}^{n}x = 1$. You may also need to go long only, which means no shorting, so no negative weights: $x_i \geq 0$ for all $i$. You could save a variable as `[x >= 0, sum(x) == 1]`, where `x` was created using `cvx.Variable()`.
### optimization
So now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$.
cvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object.
The `Problem` object has a function solve(), which returns the minimum of the solution. In this case, this is the minimum variance of the portfolio.
It also updates the vector $\mathbf{x}$.
We can check out the values of $x_A$ and $x_B$ that gave the minimum portfolio variance by using `x.value`
```
import cvxpy as cvx
def get_optimal_weights(covariance_returns, index_weights, scale=2.0):
"""
Find the optimal weights.
Parameters
----------
covariance_returns : 2 dimensional Ndarray
The covariance of the returns
index_weights : Pandas Series
Index weights for all tickers at a period in time
scale : int
The penalty factor for weights the deviate from the index
Returns
-------
x : 1 dimensional Ndarray
The solution for x
"""
assert len(covariance_returns.shape) == 2
assert len(index_weights.shape) == 1
assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0]
#TODO: Implement function
    # Based on Lesson 18 - Portfolio Optimization,
    # 9. Exercise: cvxpy advanced optimization.
m = covariance_returns.shape[0]
x = cvx.Variable(m)
portfolio_variance = cvx.quad_form(x, covariance_returns)
distance_to_index = cvx.norm( x - index_weights )
objective = cvx.Minimize(portfolio_variance + scale*distance_to_index)
constraints = [x >= 0, sum(x) == 1 ]
cvx.Problem( objective,
constraints ).solve()
# NOTE: Notice the period for x.value
# .value is a python property
x_values = x.value
return x_values
project_tests.test_get_optimal_weights(get_optimal_weights)
```
## Optimized Portfolio
Using the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalancing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights; we'll use the index weights from the last date in the data.
```
raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1])
optimal_single_rebalance_etf_weights = pd.DataFrame(
np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)),
returns.index,
returns.columns)
```
With our ETF weights built, let's compare them to the index. Run the next cell to calculate the ETF returns and compare them to the index returns.
```
optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights)
optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')
optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1))
print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error))
```
## Rebalance Portfolio Over Time
The single optimized ETF portfolio used the same weights for the entire history. These might not be the optimal weights for every point in the period. Let's rebalance the portfolio over the same period instead of using fixed weights. Implement `rebalance_portfolio` to rebalance a portfolio.
Rebalance the portfolio every n days, where n is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optimal weights using `get_optimal_weights` and `get_covariance_returns`.
```
def rebalance_portfolio(returns, index_weights, shift_size, chunk_size):
"""
Get weights for each rebalancing of the portfolio.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
index_weights : DataFrame
Index weight for each ticker and date
shift_size : int
The number of days between each rebalance
chunk_size : int
The number of days to look in the past for rebalancing
Returns
-------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
"""
assert returns.index.equals(index_weights.index)
assert returns.columns.equals(index_weights.columns)
assert shift_size > 0
assert chunk_size >= 0
    #TODO: Implement function
    # Approach (suggested by mentors in the Student Hub):
    # - Rebalance every shift_size days; at each rebalance, look back
    #   chunk_size days of returns (so the loop can't start before chunk_size).
    # - For each window, compute the covariance with get_covariance_returns(),
    #   then the optimal weights with get_optimal_weights(), using the index
    #   weights at the end of the window.
    # - Loop from 0 to len(returns) - chunk_size in increments of shift_size,
    #   slicing returns[index : index + chunk_size] and
    #   index_weights.iloc[index + chunk_size - 1].
    # - Append each result to all_rebalance_weights and return the list.
index_start = 0
index_end = index_weights.shape[0] - chunk_size
index_step_size = shift_size
all_rebalance_weights = []
for index in np.arange(index_start,index_end,index_step_size):
covariance_returns = get_covariance_returns(returns[ index : index + chunk_size ])
current_rebalance_weights = get_optimal_weights( covariance_returns,
index_weights.iloc[ index + chunk_size - 1 ],
scale=2.0 )
all_rebalance_weights.append(current_rebalance_weights)
return all_rebalance_weights
project_tests.test_rebalance_portfolio(rebalance_portfolio)
```
Run the following cell to rebalance the portfolio using `rebalance_portfolio`.
```
chunk_size = 250
shift_size = 5
all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size)
```
## Portfolio Turnover
With the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement `get_portfolio_turnover` to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom:
$ AnnualizedTurnover =\frac{SumTotalTurnover}{NumberOfRebalanceEvents} * NumberofRebalanceEventsPerYear $
$ SumTotalTurnover =\sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $ Where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $.
$ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $
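As a quick worked example of the formulas above (toy numbers, not project data): with two assets rebalanced twice, five trading days apart, the annualized turnover can be computed as follows:

```
import numpy as np

# Weights at three successive rebalance points (2 assets, toy data)
weights = np.array([[0.50, 0.50],
                    [0.60, 0.40],
                    [0.55, 0.45]])

# Sum of absolute weight changes between consecutive rebalances
sum_total_turnover = np.abs(np.diff(weights, axis=0)).sum()  # 0.20 + 0.10 = 0.30

rebalance_count = len(weights) - 1       # 2 rebalance events
shift_size, trading_days = 5, 252
annualized = sum_total_turnover / rebalance_count * (trading_days / shift_size)
print(annualized)  # (0.30 / 2) * (252 / 5), about 7.56
```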
```
def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252):
"""
    Calculate portfolio turnover.
Parameters
----------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
shift_size : int
The number of days between each rebalance
rebalance_count : int
Number of times the portfolio was rebalanced
n_trading_days_in_year: int
Number of trading days in a year
Returns
-------
portfolio_turnover : float
The portfolio turnover
"""
assert shift_size > 0
assert rebalance_count > 0
    #TODO: Implement function
    # NOTE: all_rebalance_weights is a list of Ndarrays. Converted to a pandas
    # DataFrame because its methods are more convenient here.
    df_all_rebalance_weights = pd.DataFrame(all_rebalance_weights)
    # NOTE: Notice the negative one (-1) in .shift(-1).
    # The formula subtracts x_t - x_(t+1): the weights at the current
    # rebalance minus the weights at the next one.
    df_temp_a = df_all_rebalance_weights - df_all_rebalance_weights.shift(-1)
    #
    # Per Lesson 18 - Portfolio Optimization - 10. Rebalancing, take the
    # absolute value of the differences. NaNs from the final shift are filled
    # with zero so they don't affect the sums. Then sum the absolute
    # differences per stock, and sum those per-stock totals: a sum of a sum.
    SumTotalTurnover = df_temp_a.abs().fillna(0).sum().sum()
    # Number of rebalance events: rebalance_count
    # Number of rebalance events per year: n_trading_days_in_year / shift_size
    portfolio_turnover = (SumTotalTurnover / rebalance_count) * (n_trading_days_in_year / shift_size)
return portfolio_turnover
project_tests.test_get_portfolio_turnover(get_portfolio_turnover)
```
Run the following cell to get the portfolio turnover from `get_portfolio_turnover`.
```
print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1))
```
That's it! You've built a smart beta portfolio in part 1 and did portfolio optimization in part 2. You can now submit your project.
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a passed or not-passed grade. You can continue to the next section while you wait for feedback.
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
<img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
# Training HugeCTR Model with Pre-trained Embeddings
In this notebook, we will train a deep neural network for predicting a user's rating (a binary target: 1 for ratings `>3` and 0 for ratings `<=3`). The two categorical features are `userId` and `movieId`.
We will also make use of the movies' pretrained embeddings, extracted in the previous notebooks.
## Loading pretrained movie features into non-trainable embedding layer
```
# loading NVTabular movie encoding
import pandas as pd
import os
INPUT_DATA_DIR = './data'
movie_mapping = pd.read_parquet(os.path.join(INPUT_DATA_DIR, "workflow-hugectr/categories/unique.movieId.parquet"))
movie_mapping.tail()
feature_df = pd.read_parquet('feature_df.parquet')
print(feature_df.shape)
feature_df.head()
feature_df.set_index('movieId', inplace=True)
from tqdm import tqdm
import numpy as np
num_tokens = len(movie_mapping)
embedding_dim = 2048+1024
hits = 0
misses = 0
# Prepare embedding matrix
embedding_matrix = np.zeros((num_tokens, embedding_dim))
print("Loading pretrained embedding matrix...")
for i, row in tqdm(movie_mapping.iterrows(), total=len(movie_mapping)):
movieId = row['movieId']
if movieId in feature_df.index:
embedding_vector = feature_df.loc[movieId]
# embedding found
embedding_matrix[i] = embedding_vector
hits += 1
else:
misses += 1
print("Found features for %d movies (%d misses)" % (hits, misses))
embedding_dim
embedding_matrix
```
Next, we write the pretrained embedding to a raw format supported by HugeCTR.
Note: As of version 3.2, HugeCTR only supports a maximum embedding size of 1024. Hence, we shall be using the first 512 elements of the image embedding plus the first 512 elements of the text embedding.
```
import struct
PRETRAINED_EMBEDDING_SIZE = 1024
def convert_pretrained_embeddings_to_sparse_model(keys, pre_trained_sparse_embeddings, hugectr_sparse_model, embedding_vec_size):
os.system("mkdir -p {}".format(hugectr_sparse_model))
with open("{}/key".format(hugectr_sparse_model), 'wb') as key_file, \
open("{}/emb_vector".format(hugectr_sparse_model), 'wb') as vec_file:
for i, key in enumerate(keys):
vec = np.concatenate([pre_trained_sparse_embeddings[i,:int(PRETRAINED_EMBEDDING_SIZE/2)], pre_trained_sparse_embeddings[i, 1024:1024+int(PRETRAINED_EMBEDDING_SIZE/2)]])
key_struct = struct.pack('q', key)
vec_struct = struct.pack(str(embedding_vec_size) + "f", *vec)
key_file.write(key_struct)
vec_file.write(vec_struct)
keys = list(movie_mapping.index)
convert_pretrained_embeddings_to_sparse_model(keys, embedding_matrix, 'hugectr_pretrained_embedding.model', embedding_vec_size=PRETRAINED_EMBEDDING_SIZE) # HugeCTR not supporting embedding size > 1024
```
## Define and train model
In this section, we define and train the model. The model comprises trainable embedding layers for the categorical features (`userId`, `movieId`) and a pretrained (non-trainable) embedding layer for the movie features.
We will write the model to `./model.py` and execute it afterwards.
First, we need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
```
import nvtabular as nvt
from nvtabular.ops import get_embedding_sizes
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow-hugectr"))
embeddings = get_embedding_sizes(workflow)
print(embeddings)
#{'userId': (162542, 512), 'movieId': (56586, 512), 'movieId_duplicate': (56586, 512)}
```
We use `graph_to_json` to convert the model to a JSON configuration, required for the inference.
```
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
INPUT_DATA_DIR = './data/'
solver = hugectr.CreateSolver(
vvgpu=[[0]],
batchsize=2048,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
reader = hugectr.DataReaderParams(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source=[INPUT_DATA_DIR + "train-hugectr/_file_list.txt"],
eval_source=INPUT_DATA_DIR + "valid-hugectr/_file_list.txt",
check_type=hugectr.Check_t.Non,
slot_size_array=[162542, 56586, 21, 56586],
)
model = hugectr.Model(solver, reader, optimizer)
model.add(
hugectr.Input(
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam("data1", nnz_per_slot=[1, 1, 2], is_fixed_length=False, slot_num=3),
hugectr.DataReaderSparseParam("movieId", nnz_per_slot=[1], is_fixed_length=True, slot_num=1)
],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=3000,
embedding_vec_size=16,
combiner="sum",
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
optimizer=optimizer,
)
)
# pretrained embedding
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=3000,
embedding_vec_size=1024,
combiner="sum",
sparse_embedding_name="pretrained_embedding",
bottom_name="movieId",
optimizer=optimizer,
)
)
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,
bottom_names = ["sparse_embedding1"],
top_names = ["reshape1"],
leading_dim=48))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,
bottom_names = ["pretrained_embedding"],
top_names = ["reshape2"],
leading_dim=1024))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Concat,
bottom_names = ["reshape1", "reshape2"],
top_names = ["concat1"]))
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["concat1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
# Load the pretrained embedding layer
model.load_sparse_weights({"pretrained_embedding": "./hugectr_pretrained_embedding.model"})
model.freeze_embedding("pretrained_embedding")
model.fit(max_iter=10001, display=100, eval_interval=200, snapshot=5000)
model.graph_to_json(graph_config_file="hugectr-movielens.json")
```
We train our model.
```
!python model.py
```
```
import os
os.chdir('../')
import DeepPurpose.DTI as models
from DeepPurpose.utils import *
from DeepPurpose.dataset import *
X_drug, X_target, y = load_process_DAVIS('./data/', binary=False)
drug_encoding = 'Morgan'
target_encoding = 'CNN'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 1)
# use the parameter settings provided in the paper: https://arxiv.org/abs/1801.10193
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256,
cnn_target_filters = [32,64,96],
cnn_target_kernels = [4,8,12]
)
model = models.model_initialize(**config)
model.train(train, val, test)
model.save_model('./model_morgan_cnn_davis')
drug_encoding = 'Morgan'
target_encoding = 'AAC'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 1)
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256
)
model = models.model_initialize(**config)
model.train(train, val, test)
model.save_model('./model_morgan_aac_davis')
drug_encoding = 'Daylight'
target_encoding = 'AAC'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 2)
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256
)
model = models.model_initialize(**config)
model.train(train, val, test)
drug_encoding = 'Daylight'
target_encoding = 'AAC'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 3)
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256
)
model = models.model_initialize(**config)
model.train(train, val, test)
model.save_model('./model_daylight_aac_davis')
drug_encoding = 'Daylight'
target_encoding = 'AAC'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 4)
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256
)
model = models.model_initialize(**config)
model.train(train, val, test)
drug_encoding = 'Daylight'
target_encoding = 'AAC'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2], random_seed = 5)
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256
)
model = models.model_initialize(**config)
model.train(train, val, test)
```
# Crawling YouTube
> ## 1. Calling the YouTube API
[YouTube crawling - reference material - calling the API](https://blog.naver.com/doublet7411/221511344483)
- (Step 1) Go to GCP (Google Cloud Platform)
    - https://console.developers.google.com
- (Step 2) In the API library, select "YouTube Data API v3"
    - Click Enable
    - Click the key-shaped icon on the left (Credentials)
    - Click the `+ Create credentials` button at the top -> click `API key`
    - Copy the API key
- (Step 3) Use Postman to confirm the call works correctly
    - Postman: install `Tabbed Postman` from the Chrome Web Store
    - Run the `Tabbed Postman` Chrome extension.
    - url: `https://www.googleapis.com/youtube/v3/search`
    - Request method: `GET`
    - Click the `URL params` button
    - URL parameter: `key`
    - Value: `the API key issued in (Step 2)`
    - Postman result

> ## 2. Querying YouTube Data in JSON Format by Search Conditions
- [YouTube crawling reference material - YouTube crawling 2](https://blog.naver.com/doublet7411/221514043955)
- Tabbed Postman
    - (URL Parameter Key) `key` => (Value) `the key issued by the YouTube API`
    - (URL Parameter Key) `part` => (Value) `snippet`
    - (URL Parameter Key) `order` => (Value) `date`
    - (URL Parameter Key) `q` => (Value) `the YouTube name to search for (query)`
<br>
- Install the Python packages
    - apiclient
```
pip install apiclient
```
    - oauth2client
```
pip install oauth2client
```
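Before switching to the client library, the same request Postman sends can be composed in plain Python (a sketch; `YOUR_API_KEY` is a placeholder for the key issued in Step 2):

```
from urllib.parse import urlencode

# Build the search URL with the same URL parameters used in Postman
params = {
    "key": "YOUR_API_KEY",   # placeholder - use the key issued in Step 2
    "part": "snippet",
    "order": "date",
    "q": "search query",
}
url = "https://www.googleapis.com/youtube/v3/search?" + urlencode(params)
print(url)
```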
```
!pip install apiclient
!pip install oauth2client
!pip install --upgrade google-api-python-client
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client.tools import argparser
DEVELOPER_KEY = "(the YouTube API key issued in Step 2)"
YOUTUBE_API_SERVICE_NAME="youtube"
YOUTUBE_API_VERSION="v3"
def youtube_search(query):
    print('Search results => ', query)
    # crawling object
    youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                    developerKey=DEVELOPER_KEY)
    # crawl the search results:
    # call the search list; the method returns the results
    # that match the specified query.
search_response= youtube.search().list(
q=query,
part="snippet",
maxResults=50
).execute()
videos=[]
channels=[]
playlists=[]
    # Append each result to the appropriate list:
    # matched videos, channels, and playlists are collected as lists.
for search_result in search_response.get('items', []):
if search_result['id']['kind']=='youtube#video':
videos.append('%s (%s)' % (search_result['snippet']['title'],
search_result['id']['videoId']))
elif search_result['id']['kind']=='youtube#channel':
channels.append('%s (%s)' %(search_result['snippet']['title'],
search_result['id']['channelId']))
elif search_result['id']['kind']=='youtube#playlist':
playlists.append('%s (%s)' %(search_result['snippet']['title'],
search_result['id']['playlistId']))
return videos, channels, playlists
search_queries=['서울 강남 맛집', '서울 홍대 맛집', '서울 가성비 맛집',
'서울 종로 맛집', '경기도 맛집', '서울 건국대 맛집']
videos, channels, playlists = youtube_search(search_queries[0])
videos
youtube_search(search_queries[0])
for query in search_queries:
youtube_search(query)
```
```
# import the required libraries
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import os
import cv2
import tensorflow as tf
from tensorflow.keras import layers, optimizers
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
```
## Loading Images from the Disk
```
# function that would read an image provided the image path, preprocess and return it back
def read_and_preprocess(img_path):
img = cv2.imread(img_path, cv2.IMREAD_COLOR) # reading the image
img = cv2.resize(img, (256, 256)) # resizing it (I just like it to be powers of 2)
img = np.array(img, dtype='float32') # convert its datatype so that it could be normalized
img = img/255 # normalization (now every pixel is in the range of 0 and 1)
return img
X_train = [] # To store train images
y_train = [] # To store train labels
# labels -
# 0 - Covid
# 1 - Viral Pneumonia
# 2 - Normal
train_path = './dataset/train/' # path containing training image samples
for folder in os.scandir(train_path):
for entry in os.scandir(train_path + folder.name):
X_train.append(read_and_preprocess(train_path + folder.name + '/' + entry.name))
if folder.name[0]=='C':
y_train.append(0) # Covid
elif folder.name[0]=='V':
y_train.append(1) # Viral Pneumonia
else:
y_train.append(2) # Normal
X_train = np.array(X_train)
X_train.shape # We have 1955 training samples in total
y_train = np.array(y_train)
y_train.shape
```
## Visualizing the Dataset
```
covid_count = len(y_train[y_train==0])
pneumonia_count = len(y_train[y_train==1])
normal_count = len(y_train[y_train==2])
plt.title("Train Images for Each Label")
plt.bar(["Covid", "Viral Pneumonia", "Normal"],[covid_count, pneumonia_count, normal_count])
# We have more samples of Normal and Viral Pneumonia than Covid
# Plotting 2 images per disease
import random
title = {0:"Covid", 1:"Viral Pneumonia", 2:"Normal"}
rows = 2
columns = 3
for i in range(2):
fig = plt.figure(figsize=(7,7))
fig.add_subplot(rows, columns, 1)
pos = random.randint(0, covid_count)
plt.imshow(X_train[pos])
plt.title(title[y_train[pos]])
fig.add_subplot(rows, columns, 2)
pos = random.randint(covid_count, covid_count+pneumonia_count)
plt.imshow(X_train[pos])
plt.title(title[y_train[pos]])
fig.add_subplot(rows, columns, 3)
pos = random.randint(covid_count+pneumonia_count, covid_count+pneumonia_count+normal_count)
plt.imshow(X_train[pos])
plt.title(title[y_train[pos]])
```
## Image Augmentation
Augmentation is the process of creating new training samples by altering the available data. <br>
It not only increases the number of samples for training the model but also prevents the model from overfitting the training data, since it makes relevant features in the image location-invariant. <br>
Although there are various ways of doing so, like random zoom, increasing/decreasing brightness, and rotating the images, most of them do not make sense for health-related data, as real-world data is almost always of high quality. <br>
So we applied only one type of image augmentation in this model: <b>Horizontal Flip</b>. <br>
Now, even if we try to classify horizontally flipped images, we can expect to get correct predictions.
```
plt.imshow(X_train[0])
plt.title("Original Image")
X_new = np.fliplr(X_train[0])
plt.imshow(X_new)
plt.title("Horizontally Flipped Image")
X_aug = []
y_aug = []
for i in range(0, len(y_train)):
X_new = np.fliplr(X_train[i])
X_aug.append(X_new)
y_aug.append(y_train[i])
X_aug = np.array(X_aug)
y_aug = np.array(y_aug)
X_train = np.append(X_train, X_aug, axis=0) # appending augmented images to original training samples
X_train.shape
y_train = np.append(y_train, y_aug, axis=0)
y_train.shape
# Now we have 3910 samples in total
```
## Splitting the Data for Training and Validation
```
from sklearn.model_selection import train_test_split
# We have split our data in a way that -
# 1. The samples are shuffled
# 2. The ratio of each class is maintained (stratify)
# 3. We get same samples every time we split our data (random state)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.15, shuffle=True, stratify=y_train, random_state=123)
# we will use 3323 images for training the model
y_train.shape
# we will use 587 images for validating the model's performance
y_val.shape
```
## Designing and Training the Model
```
model = tf.keras.Sequential([
Conv2D(filters=32, kernel_size=(2,2), activation='relu', input_shape=(256, 256, 3)),
MaxPooling2D((4,4)),
Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
MaxPooling2D((3,3)),
Dropout(0.3), # for regularization
Conv2D(filters=64, kernel_size=(4,4), activation='relu', padding='same'),
Conv2D(filters=128, kernel_size=(5,5), activation='relu', padding='same'),
MaxPooling2D((2,2)),
Dropout(0.4),
Conv2D(filters=128, kernel_size=(5,5), activation='relu', padding='same'),
MaxPooling2D((2,2)),
Dropout(0.5),
Flatten(), # flattening for feeding into ANN
Dense(512, activation='relu'),
Dropout(0.5),
Dense(256, activation='relu'),
Dropout(0.3),
Dense(128, activation='relu'),
Dense(3, activation='softmax')
])
model.summary()
# Slowing down the learning rate
opt = optimizers.Adam(learning_rate=0.0001)
# compile the model
model.compile(loss = 'sparse_categorical_crossentropy', optimizer=opt, metrics= ["accuracy"])
# use early stopping to exit training if validation loss is not decreasing even after certain epochs (patience)
earlystopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20)
# save the best model with least validation loss
checkpointer = ModelCheckpoint(filepath="covid_classifier_weights.h5", verbose=1, save_best_only=True)
history = model.fit(X_train, y_train, epochs = 100, validation_data=(X_val, y_val), batch_size=32, shuffle=True, callbacks=[earlystopping, checkpointer])
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# save the model architecture to json file for future use
model_json = model.to_json()
with open("covid_classifier_model.json","w") as json_file:
json_file.write(model_json)
```
## Evaluating the Saved Model Performance
```
# Load pretrained model (best saved one)
with open('covid_classifier_model.json', 'r') as json_file:
json_savedModel= json_file.read()
# load the model
model = tf.keras.models.model_from_json(json_savedModel)
model.load_weights('covid_classifier_weights.h5')
model.compile(loss = 'sparse_categorical_crossentropy', optimizer=opt, metrics= ["accuracy"])
```
### Loading the Test Images
```
X_test = [] # To store test images
y_test = [] # To store test labels
test_path = './dataset/test/'
for folder in os.scandir(test_path):
for entry in os.scandir(test_path + folder.name):
X_test.append(read_and_preprocess(test_path + folder.name + '/' + entry.name))
if folder.name[0]=='C':
y_test.append(0)
elif folder.name[0]=='V':
y_test.append(1)
else:
y_test.append(2)
X_test = np.array(X_test)
y_test = np.array(y_test)
X_test.shape # We have 185 images for testing
covid_count = len(y_test[y_test==0])
pneumonia_count = len(y_test[y_test==1])
normal_count = len(y_test[y_test==2])
plt.title("Test Images for Each Label")
plt.bar(["Covid", "Viral Pneumonia", "Normal"],[covid_count, pneumonia_count, normal_count])
# making predictions
predictions = model.predict(X_test)
predictions.shape
# Obtain the predicted class from the model prediction
predict = []
for i in predictions:
predict.append(np.argmax(i))
predict = np.asarray(predict)
# Obtain the accuracy of the model
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, predict)
accuracy
# plot the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predict)
plt.figure(figsize = (7,7))
sns.heatmap(cm, annot=True, cmap='Blues')
# The model misclassified one Covid case as Normal
from sklearn.metrics import classification_report
report = classification_report(y_test, predict)
print(report)
```
<small><small><i>
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/002_Python_String_Methods)**
</i></small></small>
# Python String `endswith()`
The **`endswith()`** method returns True if a string ends with the specified suffix. If not, it returns False.
**Syntax**:
```python
str.endswith(suffix[, start[, end]])
```
## `endswith()` Parameters
The **`endswith()`** method takes three parameters:
* **suffix** - String or tuple of suffixes to be checked.
* **start (Optional)** - Beginning position where **suffix** is to be checked within the string.
* **end (Optional)** - Ending position where **suffix** is to be checked within the string.
## Return Value from `endswith()`
The **`endswith()`** method returns a boolean.
* It returns **True** if the string ends with the specified suffix.
* It returns **False** if the string doesn't end with the specified suffix.
```
# Example 1: endswith() Without start and end Parameters
text = "Python is easy to learn."
result = text.endswith('to learn')
# returns False
print(result)
result = text.endswith('to learn.')
# returns True
print(result)
result = text.endswith('Python is easy to learn.')
# returns True
print(result)
# Example 2: endswith() With start and end Parameters
text = "Python programming is easy to learn."
# start parameter: 7
# "programming is easy to learn." string is searched
result = text.endswith('learn.', 7)
print(result)
# Both start and end is provided
# start: 7, end: 26
# "programming is easy" string is searched
result = text.endswith('is', 7, 26)
# Returns False
print(result)
result = text.endswith('easy', 7, 26)
# returns True
print(result)
```
## Passing Tuple to `endswith()`
It's possible to pass a tuple of suffixes to the **`endswith()`** method in Python.
If the string ends with any item of the tuple, **`endswith()`** returns **`True`**. If not, it returns **`False`**.
```
# Example 3: endswith() With Tuple Suffix
text = "programming is easy"
result = text.endswith(('programming', 'python'))
# prints False
print(result)
result = text.endswith(('python', 'easy', 'java'))
#prints True
print(result)
# With start and end parameter
# 'programming is' string is checked
result = text.endswith(('is', 'an'), 0, 14)
# prints True
print(result)
```
If you need to check if a string starts with the specified prefix, you can use **[startswith() method in Python](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String_Methods/041_Python_String_startswith%28%29.ipynb)**.
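For symmetry, here is a quick sketch of **`startswith()`** using the same kind of example string as above (the string itself is only an illustration); it accepts the same optional start/end positions and tuple of prefixes:

```python
text = "Python is easy to learn."

print(text.startswith('Python'))        # True
print(text.startswith('is', 7))         # True: checking begins at index 7
print(text.startswith(('Java', 'Py')))  # True: matches if any prefix in the tuple matches
```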
```
import sys
sys.path.append('../scripts/')
from mcl import *
from kf import *
class EstimatedLandmark(Landmark):
def __init__(self):
super().__init__(0,0)
self.cov = None # changed
def draw(self, ax, elems):
if self.cov is None:
return
##draw a blue star at the estimated position##
c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="blue")
elems.append(c)
elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10))
##draw the error ellipse##
e = sigma_ellipse(self.pos, self.cov, 3)
elems.append(ax.add_patch(e))
class MapParticle(Particle): ###fastslam6mapparticle
def __init__(self, init_pose, weight, landmark_num):
super().__init__(init_pose, weight)
self.map = Map()
for i in range(landmark_num):
self.map.append_landmark(EstimatedLandmark())
def drawing_params(self, hat_x, landmark, distance_dev_rate, direction_dev):
##linearize the observation function##
ell = np.hypot(*(hat_x[0:2] - landmark.pos))
Qhat_zt = matQ(distance_dev_rate*ell, direction_dev)
hat_zt = IdealCamera.observation_function(hat_x, landmark.pos)
H_m = - matH(hat_x, landmark.pos)[0:2,0:2]
H_xt = matH(hat_x, landmark.pos)
##compute the covariance of the sensor-value distribution from the particle pose and the map##
Q_zt = H_m.dot(landmark.cov).dot(H_m.T) + Qhat_zt
return hat_zt, Q_zt, H_xt
def gauss_for_drawing(self, hat_x, R_t, z, landmark, distance_dev_rate, direction_dev):
hat_zt, Q_zt, H_xt = self.drawing_params(hat_x, landmark, distance_dev_rate, direction_dev)
K = R_t.dot(H_xt.T).dot(np.linalg.inv(Q_zt + H_xt.dot(R_t).dot(H_xt.T)))
return K.dot(z - hat_zt) + hat_x, (np.eye(3) - K.dot(H_xt)).dot(R_t)
def motion_update2(self, nu, omega, time, motion_noise_stds, observation, distance_dev_rate, direction_dev): # changed
##build the distribution after motion##
M = matM(nu, omega, time, motion_noise_stds)
A = matA(nu, omega, time, self.pose[2])
R_t = A.dot(M).dot(A.T)
hat_x = IdealRobot.state_transition(nu, omega, time, self.pose)
for d in observation:
hat_x, R_t = self.gauss_for_drawing(hat_x, R_t, d[0], self.map.landmarks[d[1]], distance_dev_rate, direction_dev)
self.pose = multivariate_normal(mean=hat_x, cov=R_t + np.eye(3)*1.0e-10).rvs() # add a small covariance term because the matrix can be rank-deficient
def init_landmark_estimation(self, landmark, z, distance_dev_rate, direction_dev):
landmark.pos = z[0]*np.array([np.cos(self.pose[2] + z[1]), np.sin(self.pose[2] + z[1])]).T + self.pose[0:2]
H = matH(self.pose, landmark.pos)[0:2,0:2]
Q = matQ(distance_dev_rate*z[0], direction_dev)
landmark.cov = np.linalg.inv(H.T.dot( np.linalg.inv(Q) ).dot(H))
def observation_update_landmark(self, landmark, z, distance_dev_rate, direction_dev):
estm_z = IdealCamera.observation_function(self.pose, landmark.pos)
if estm_z[0] < 0.01:
return
H = - matH(self.pose, landmark.pos)[0:2,0:2]
Q = matQ(distance_dev_rate*estm_z[0], direction_dev)
K = landmark.cov.dot(H.T).dot( np.linalg.inv(Q + H.dot(landmark.cov).dot(H.T)) )
##update the weight##
Q_z = H.dot(landmark.cov).dot(H.T) + Q
self.weight *= multivariate_normal(mean=estm_z, cov=Q_z).pdf(z)
##update the landmark estimate##
landmark.pos = K.dot(z - estm_z) + landmark.pos
landmark.cov = (np.eye(2) - K.dot(H)).dot(landmark.cov)
def observation_update(self, observation, distance_dev_rate, direction_dev):
for d in observation:
z = d[0]
landmark = self.map.landmarks[d[1]]
if landmark.cov is None:
self.init_landmark_estimation(landmark, z, distance_dev_rate, direction_dev)
else:
self.observation_update_landmark(landmark, z, distance_dev_rate, direction_dev)
class FastSlam(Mcl): ###fastslam6fastslam
def __init__(self, init_pose, particle_num, landmark_num, motion_noise_stds={"nn":0.19, "no":0.001, "on":0.13, "oo":0.2}, \
distance_dev_rate=0.14, direction_dev=0.05):
super().__init__(None, init_pose, particle_num, motion_noise_stds, distance_dev_rate, direction_dev)
self.particles = [MapParticle(init_pose, 1.0/particle_num, landmark_num) for i in range(particle_num)]
self.ml = self.particles[0]
self.motion_noise_stds = motion_noise_stds # added
def motion_update(self, nu, omega, time, observation): # rewritten
##build a list of landmarks that have already been observed##
not_first_obs = []
for d in observation:
if self.particles[0].map.landmarks[d[1]].cov is not None: # decide using the first particle's map
not_first_obs.append(d)
if len(not_first_obs) > 0:
for p in self.particles: p.motion_update2(nu, omega, time, self.motion_noise_stds, not_first_obs,\
self.distance_dev_rate, self.direction_dev) # new update rule
else:
for p in self.particles: p.motion_update(nu, omega, time, self.motion_noise_rate_pdf) # original update rule
def observation_update(self, observation):
for p in self.particles:
p.observation_update(observation, self.distance_dev_rate, self.direction_dev)
self.set_ml()
self.resampling()
def draw(self, ax, elems):
super().draw(ax, elems)
self.ml.map.draw(ax, elems)
class FastSlam2Agent(EstimationAgent): ###fastslam6agent
def __init__(self, time_interval, nu, omega, estimator):
super().__init__(time_interval, nu, omega, estimator)
def decision(self, observation=None):
self.estimator.motion_update(self.prev_nu, self.prev_omega, self.time_interval, observation) # sensor values added
self.prev_nu, self.prev_omega = self.nu, self.omega
self.estimator.observation_update(observation)
return self.nu, self.omega
def trial():
time_interval = 0.1
world = World(30, time_interval, debug=False)
##create the true map##
m = Map()
for ln in [(-4,2), (2,-3), (3,3)]: m.append_landmark(Landmark(*ln))
world.append(m)
## create the robot ##
init_pose = np.array([0,0,0]).T
pf = FastSlam(init_pose,100, len(m.landmarks))
a = FastSlam2Agent(time_interval, 0.2, 10.0/180*math.pi, pf) # agent changed
r = Robot(init_pose, sensor=Camera(m), agent=a, color="red")
world.append(r)
world.draw()
trial()
```
### Anomaly Detection
* What are Outliers ?
* Statistical Methods for Univariate Data
* Using Gaussian Mixture Models
* Fitting an elliptic envelope
* Isolation Forest
* Local Outlier Factor
* Using clustering method like DBSCAN
### 1. Outliers
* Data points that don't follow the general trend (or distribution) of the rest of the data are known as outliers.
* Data points that follow the general trend are known as inliers.
* Learning models are impacted by the presence of outliers.
* Anomaly detection is another use of outlier detection, in which we look for unusual behaviour.
* Data points detected as outliers can be deleted from the dataset.
* Alternatively, outliers can be flagged before the data is used in learning methods
### 2. Statistical Methods for Univariate Data
* Using Standard Deviation Method - zscore
* Using Interquartile Range Method - IQR
##### Using Standard Deviation Method
* If univariate data follows a Gaussian distribution, we can use the standard deviation (z-score) to figure out where a data point lies relative to the rest
```
import numpy as np
data = np.random.normal(size=1000)
```
* Adding More Outliers
```
data[-5:] = [3.5,3.6,4,3.56,4.2]
from scipy.stats import zscore
```
* Detecting Outliers
```
data[np.abs(zscore(data)) > 3]
```
##### Using Interquartile Range
* For univariate data not following a Gaussian distribution, the IQR is a way to detect outliers
```
from scipy.stats import iqr
data = np.random.normal(size=1000)
data[-5:]=[-2,9,11,-3,-21]
iqr_value = iqr(data)
lower_threshold = np.percentile(data,25) - iqr_value*1.5
upper_threshold = np.percentile(data,75) + iqr_value*1.5
upper_threshold
lower_threshold
data[np.where(data < lower_threshold)]
data[np.where(data > upper_threshold)]
```
### 3. Using Gaussian Mixture Models
* Data might contain more than one peak in its distribution.
* Trying to fit such multimodal data with a unimodal model won't give a good fit.
* GMM allows us to fit such multimodal data.
* Configuration involves the number of mixture components, n_components.
* covariance_type controls the shape of each cluster:
    - full : each cluster is modelled as an ellipse with arbitrary orientation
    - spherical : each cluster is spherical, like in k-means
    - diag : each cluster is an axis-aligned ellipse
* We will see how GMM can be used to find outliers
```
# Number of samples per component
n_samples = 500
# Generate random sample, two components
np.random.seed(0)
C = np.array([[0., -0.1], [1.7, .4]])
C2 = np.array([[1., -0.1], [2.7, .2]])
#X = np.r_[np.dot(np.random.randn(n_samples, 2), C)]
#.7 * np.random.randn(n_samples, 2) + np.array([-6, 3])]
X = np.r_[np.dot(np.random.randn(n_samples, 2), C),np.dot(np.random.randn(n_samples, 2), C2)]
import matplotlib.pyplot as plt
%matplotlib inline
X[-5:] = [[4,-1],[4.1,-1.1],[3.9,-1],[4.0,-1.2],[4.0,-1.3]]
plt.scatter(X[:,0], X[:,1],s=5)
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=3)
gmm.fit(X)
pred = gmm.predict(X)
pred[:50]
plt.scatter(X[:,0], X[:,1],s=10,c=pred)
```
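The scatter plot above only shows cluster assignments; to actually flag outliers with a GMM you can threshold the per-sample log-likelihood from `score_samples`. A minimal sketch (the synthetic data, the variable names, and the 2nd-percentile cutoff are all illustrative choices, not part of the original cell):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# two well-separated clusters of inliers
X = np.vstack([rng.randn(250, 2), rng.randn(250, 2) + [6, 6]])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

new_points = np.array([[0.0, 0.0],      # near the first cluster
                       [20.0, -20.0]])  # far from both clusters
log_dens = gmm.score_samples(new_points)  # per-sample log-likelihood under the mixture

# flag points whose density falls below e.g. the 2nd percentile of the training scores
threshold = np.percentile(gmm.score_samples(X), 2)
flags = log_dens < threshold              # only the far point is flagged
```

The cutoff plays the same role as `contamination` in the estimators below: it trades false positives against missed anomalies.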
### 4. Fitting Elliptical Envelope
* The assumption here is that regular data comes from a known distribution (a Gaussian distribution)
* The inlier location & covariance are estimated robustly, using `Mahalanobis distances`, which are less impacted by outliers.
* It computes a robust covariance fit of the data.
```
from sklearn.datasets import make_blobs  # this import was missing in the original cell
X,_ = make_blobs(n_features=2, centers=2, cluster_std=2.5, n_samples=1000)
plt.scatter(X[:,0], X[:,1],s=10)
from sklearn.covariance import EllipticEnvelope
ev = EllipticEnvelope(contamination=.1)
ev.fit(X)
cluster = ev.predict(X)
plt.scatter(X[:,0], X[:,1],s=10,c=cluster)
```
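Because the envelope is built from a robust covariance estimate, the squared Mahalanobis distance of each point can also be read out directly via `mahalanobis()`. A small sketch (the planted outlier and `contamination=0.01` are illustrative choices):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.randn(300, 2)                 # roughly Gaussian inliers
X = np.vstack([X, [[10.0, 10.0]]])    # one clear outlier at index 300

ev = EllipticEnvelope(contamination=0.01, random_state=0).fit(X)
d2 = ev.mahalanobis(X)                # squared Mahalanobis distances to the robust centre

farthest = np.argmax(d2)              # index of the planted outlier
label = ev.predict(X)[-1]             # -1: predicted as an outlier
```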
### 5. Isolation Forest
* Based on random forests
* Useful for detecting outliers in high-dimensional datasets.
* The algorithm recursively selects a random feature & a random split value.
* Random partitioning produces noticeably shorter paths for anomalies.
* When a forest of random trees collectively produces shorter path lengths for particular samples, those samples are highly likely to be anomalies.
```
rng = np.random.RandomState(42)
# Generate train data
X = 0.3 * rng.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * rng.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
from sklearn.ensemble import IsolationForest
data = np.r_[X_train,X_test,X_outliers]
iso = IsolationForest(contamination='auto')  # note: the old behaviour='new' flag was deprecated in scikit-learn 0.22 and removed in 0.24
iso.fit(data)
pred = iso.predict(data)
plt.scatter(data[:,0], data[:,1],s=10,c=pred)
```
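Besides the -1/+1 labels from `predict`, `score_samples` gives a continuous anomaly ranking, where lower scores correspond to shorter average path lengths. A small sketch using the current scikit-learn API (data and names are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X = 0.3 * rng.randn(200, 2)           # tight cluster of inliers
X = np.vstack([X, [[4.0, 4.0]]])      # one isolated point at index 200

iso = IsolationForest(random_state=42).fit(X)
scores = iso.score_samples(X)         # lower score = more anomalous

most_anomalous = np.argmin(scores)    # the isolated point ranks first
```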
### 6. Local Outlier Factor
* Based on nearest neighbours
* Suited for moderately high-dimensional datasets
* LOF computes a score reflecting the degree of abnormality of each data point.
* LOF Calculation
    - The local density of a point is estimated from its k-nearest neighbours.
    - The LOF of a point is the ratio of the average local density of its k-nearest neighbours to its own local density.
    - An abnormal data point is expected to have a smaller local density.
* LOF tells you not only whether a data point is an outlier, but also how much of an outlier it is relative to the rest of the data
```
from sklearn.neighbors import LocalOutlierFactor
lof = LocalOutlierFactor(n_neighbors=25,contamination=.1)
pred = lof.fit_predict(data)
s = np.abs(lof.negative_outlier_factor_)
plt.scatter(data[:,0], data[:,1],s=s*10,c=pred)
```
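The negative outlier factor used for the marker sizes above can also be inspected directly: it is roughly -1 for inliers and much more negative for outliers. A minimal sketch (the planted point and `n_neighbors=20` are illustrative choices):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X = rng.randn(100, 2)                 # inliers
X = np.vstack([X, [[8.0, 8.0]]])      # one isolated point at index 100

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)           # -1 for outliers, +1 for inliers

worst = np.argmin(lof.negative_outlier_factor_)  # the isolated point
label = labels[-1]                               # -1
```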
### 7. Outlier Detection using DBSCAN
* DBSCAN is a density-based clustering method
* It groups data points that are close to each other.
* Unlike k-means, it doesn't assign points by distance to a cluster centroid
* Data not close enough to any cluster is left unassigned (label -1) & these points can be anomalies
* eps controls how close a point must be to a cluster to be considered part of it
```
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=.3)
dbscan.fit(data)
plt.scatter(data[:,0], data[:,1],s=s*10,c=dbscan.labels_)
```
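Since points DBSCAN leaves unclustered get the label `-1`, anomalies can be pulled out with a simple boolean mask; a minimal sketch (the synthetic cluster and `eps`/`min_samples` values are illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
X = 0.2 * rng.randn(150, 2)           # one dense cluster
X = np.vstack([X, [[5.0, 5.0]]])      # one isolated point at index 150

db = DBSCAN(eps=0.3, min_samples=5).fit(X)
anomalies = X[db.labels_ == -1]       # label -1 marks noise / potential anomalies

noise_label = db.labels_[-1]          # -1: the isolated point is noise
```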
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
import torch
from scipy.io import loadmat
from tqdm import tqdm_notebook as tqdm
%matplotlib inline
use_cuda = torch.cuda.is_available()
device = torch.device('cuda:0' if use_cuda else 'cpu')
# Add new methods here.
# methods = ['hesaff', 'hesaffnet', 'delf', 'delf-new', 'superpoint','d2-fast-ap-final', 'd2-fast-ap-revised']
# names = ['Hes. Aff. + Root-SIFT', 'HAN + HN++', 'DELF', 'DELF New', 'SuperPoint', 'D2-Net-Fast-AP', 'D2-Net-Fast-AP-2']
# colors = ['green', 'orange', 'purple', 'cyan', 'blue', 'black', 'red']
# linestyles = ['--', '-', '-', '--', '-', '-', '--']
methods = ['d2-net', 'd2-ms', 'd2-net-trained', 'd2-net-trained', 'd2-fast-ap-revised-ms', 'd2-fast-ap-revised']
names = ['D2-Net', 'D2-Net MS', 'D2-Net Trained', 'D2-Net Trained MS', 'D2-Net Fast AP 1', 'D2-Net Fast AP 2']
colors = ['purple', 'cyan', 'orange', 'green', 'black', 'red']
linestyles = ['-', '-', '--', '--', '-', '--']
# Change here if you want to use top K or all features.
top_k = 2000
# top_k = None
n_i = 52
n_v = 56
dataset_path = 'hpatches-sequences-release'
lim = [1, 15]
rng = np.arange(lim[0], lim[1] + 1)
def mnn_matcher(descriptors_a, descriptors_b):
device = descriptors_a.device
sim = descriptors_a @ descriptors_b.t()
nn12 = torch.max(sim, dim=1)[1]
nn21 = torch.max(sim, dim=0)[1]
ids1 = torch.arange(0, sim.shape[0], device=device)
mask = (ids1 == nn21[nn12])
matches = torch.stack([ids1[mask], nn12[mask]])
return matches.t().data.cpu().numpy()
def benchmark_features(read_feats):
seq_names = sorted(os.listdir(dataset_path))
n_feats = []
n_matches = []
seq_type = []
i_err = {thr: 0 for thr in rng}
v_err = {thr: 0 for thr in rng}
for seq_idx, seq_name in tqdm(enumerate(seq_names), total=len(seq_names)):
keypoints_a, descriptors_a = read_feats(seq_name, 1)
n_feats.append(keypoints_a.shape[0])
for im_idx in range(2, 7):
keypoints_b, descriptors_b = read_feats(seq_name, im_idx)
n_feats.append(keypoints_b.shape[0])
matches = mnn_matcher(
torch.from_numpy(descriptors_a).to(device=device),
torch.from_numpy(descriptors_b).to(device=device)
)
homography = np.loadtxt(os.path.join(dataset_path, seq_name, "H_1_" + str(im_idx)))
pos_a = keypoints_a[matches[:, 0], : 2]
pos_a_h = np.concatenate([pos_a, np.ones([matches.shape[0], 1])], axis=1)
pos_b_proj_h = np.transpose(np.dot(homography, np.transpose(pos_a_h)))
pos_b_proj = pos_b_proj_h[:, : 2] / pos_b_proj_h[:, 2 :]
pos_b = keypoints_b[matches[:, 1], : 2]
dist = np.sqrt(np.sum((pos_b - pos_b_proj) ** 2, axis=1))
n_matches.append(matches.shape[0])
seq_type.append(seq_name[0])
if dist.shape[0] == 0:
dist = np.array([float("inf")])
for thr in rng:
if seq_name[0] == 'i':
i_err[thr] += np.mean(dist <= thr)
else:
v_err[thr] += np.mean(dist <= thr)
seq_type = np.array(seq_type)
n_feats = np.array(n_feats)
n_matches = np.array(n_matches)
return i_err, v_err, [seq_type, n_feats, n_matches]
def summary(stats):
seq_type, n_feats, n_matches = stats
print('# Features: {:f} - [{:d}, {:d}]'.format(np.mean(n_feats), np.min(n_feats), np.max(n_feats)))
print('# Matches: Overall {:f}, Illumination {:f}, Viewpoint {:f}'.format(
np.sum(n_matches) / ((n_i + n_v) * 5),
np.sum(n_matches[seq_type == 'i']) / (n_i * 5),
np.sum(n_matches[seq_type == 'v']) / (n_v * 5))
)
def generate_read_function(method, extension='ppm'):
def read_function(seq_name, im_idx):
aux = np.load(os.path.join(dataset_path, seq_name, '%d.%s.%s' % (im_idx, extension, method)))
if top_k is None:
return aux['keypoints'], aux['descriptors']
else:
assert('scores' in aux)
ids = np.argsort(aux['scores'])[-top_k :]
return aux['keypoints'][ids, :], aux['descriptors'][ids, :]
return read_function
def sift_to_rootsift(descriptors):
return np.sqrt(descriptors / np.expand_dims(np.sum(np.abs(descriptors), axis=1), axis=1) + 1e-16)
def parse_mat(mat):
keypoints = mat['keypoints'][:, : 2]
raw_descriptors = mat['descriptors']
l2_norm_descriptors = raw_descriptors / np.expand_dims(np.sum(raw_descriptors ** 2, axis=1), axis=1)
descriptors = sift_to_rootsift(l2_norm_descriptors)
if top_k is None:
return keypoints, descriptors
else:
assert('scores' in mat)
ids = np.argsort(mat['scores'][0])[-top_k :]
return keypoints[ids, :], descriptors[ids, :]
if top_k is None:
cache_dir = 'cache'
else:
cache_dir = 'cache-top'
if not os.path.isdir(cache_dir):
os.mkdir(cache_dir)
errors = {}
for method in methods:
output_file = os.path.join(cache_dir, method + '.npy')
print(method)
if method == 'hesaff':
read_function = lambda seq_name, im_idx: parse_mat(loadmat(os.path.join(dataset_path, seq_name, '%d.ppm.hesaff' % im_idx), appendmat=False))
else:
if method == 'delf' or method == 'delf-new':
read_function = generate_read_function(method, extension='png')
else:
read_function = generate_read_function(method)
if os.path.exists(output_file):
print('Loading precomputed errors...')
errors[method] = np.load(output_file, allow_pickle=True)
else:
errors[method] = benchmark_features(read_function)
np.save(output_file, errors[method])
summary(errors[method][-1])
```
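The mutual nearest-neighbour check at the heart of `mnn_matcher` above does not depend on PyTorch; a plain NumPy sketch of the same logic (assuming rows of each array are L2-normalised descriptors; the toy identity descriptors are just for illustration):

```python
import numpy as np

def mnn_matcher_np(desc_a, desc_b):
    # similarity matrix between the two descriptor sets
    sim = desc_a @ desc_b.T
    nn12 = sim.argmax(axis=1)          # best match in b for every a
    nn21 = sim.argmax(axis=0)          # best match in a for every b
    ids1 = np.arange(sim.shape[0])
    mask = ids1 == nn21[nn12]          # keep only mutual nearest neighbours
    return np.stack([ids1[mask], nn12[mask]], axis=1)

a = np.eye(3)                # three orthogonal unit descriptors
b = np.eye(3)[[1, 0, 2]]     # same descriptors with the first two swapped
matches = mnn_matcher_np(a, b)   # pairs (0,1), (1,0), (2,2)
```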
# Plotting
```
plt_lim = [1, 15]
plt_rng = np.arange(plt_lim[0], plt_lim[1] + 1)
plt.rc('axes', titlesize=25)
plt.rc('axes', labelsize=25)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [(i_err[thr] + v_err[thr]) / ((n_i + n_v) * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Overall')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylabel('MMA')
plt.ylim([0, 1])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
plt.legend()
plt.subplot(1, 3, 2)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [i_err[thr] / (n_i * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Illumination')
plt.xlabel('threshold [px]')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylim([0, 1])
plt.gca().axes.set_yticklabels([])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
plt.subplot(1, 3, 3)
for method, name, color, ls in zip(methods, names, colors, linestyles):
i_err, v_err, _ = errors[method]
plt.plot(plt_rng, [v_err[thr] / (n_v * 5) for thr in plt_rng], color=color, ls=ls, linewidth=3, label=name)
plt.title('Viewpoint')
plt.xlim(plt_lim)
plt.xticks(plt_rng)
plt.ylim([0, 1])
plt.gca().axes.set_yticklabels([])
plt.grid()
plt.tick_params(axis='both', which='major', labelsize=20)
if top_k is None:
plt.savefig('hseq.pdf', bbox_inches='tight', dpi=300)
else:
plt.savefig('hseq-top.pdf', bbox_inches='tight', dpi=300)
```
```
%pylab inline
import pandas as pd
import tensorflow as tf
import glob
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.python.ops import resources
from tqdm import tqdm_notebook
from multiprocessing import Pool
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
np.random.seed(42)
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import average_precision_score
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper', font_scale=2)
BUFFER_SIZE= 100
from pyvirchow.io.operations import get_annotation_bounding_boxes
from pyvirchow.io.operations import get_annotation_polygons
from pyvirchow.io.operations import path_leaf
from pyvirchow.io.operations import read_as_rgb
from pyvirchow.io.operations import WSIReader
from pyvirchow.io.tiling import get_all_patches_from_slide
from pyvirchow.io.tiling import save_images_and_mask, generate_tiles, generate_tiles_fast
from pyvirchow.morphology.patch_extractor import TissuePatch
from pyvirchow.morphology.mask import get_common_interior_polygons
from tqdm import tqdm
from multiprocessing import Pool
from pyvirchow.segmentation import label_nuclei, summarize_region_properties
from pyvirchow.deep_model.model import slide_level_map
from collections import defaultdict
import joblib
import numpy as np
from six import iteritems
import pandas as pd
train_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented.tsv')
is_tumor_idx = list(train_samples.columns).index('is_tumor')
def order(frame,var):
if type(var) is str:
var = [var] #let the command take a string or list
varlist =[w for w in frame.columns if w not in var]
frame = frame[var+varlist]
return frame
# Parameters
num_steps = 500 # Total steps to train
batch_size = 1  # 1024  # The number of samples per batch
num_classes = 2 # 2 classes
num_features = 46 # columns in the feature
num_trees = 100
max_nodes = 10000
def _parse_csv(rows_string_tensor):
"""Takes the string input tensor and returns tuple of (features, labels)."""
# Last dim is the label.
num_columns = num_features + 1
columns = tf.decode_csv(rows_string_tensor,
record_defaults=[[0.0]] * num_columns ,
field_delim='\t')
label = columns[0]
print(label)
#label = columns[is_tumor_idx]
#stack1 = tf.stack(columns[0: is_tumor_idx])
#stack2 = tf.stack(columns[is_tumor_idx+1:])
#return tf.stack([columns[x0: is_tumor_idx], columns[is_tumor_idx+1:]]), tf.cast(label, tf.int32)
return tf.cast(tf.stack(columns[1:]), tf.float32), tf.cast(label, tf.int32)
#return tf.cast(label, tf.int32), tf.cast(label, tf.int32)
def input_fn(file_names, batch_size):
"""The input_fn."""
dataset = tf.data.TextLineDataset(file_names).skip(1)
# Skip the first line (which does not have data).
dataset = dataset.map(_parse_csv)
dataset = dataset.shuffle(buffer_size=BUFFER_SIZE)
dataset = dataset.batch(batch_size)
iterator = tf.data.Iterator.from_structure(dataset.output_types,
dataset.output_shapes)
next_batch = iterator.get_next()
init_op = iterator.make_initializer(dataset)
return init_op, next_batch
X = tf.placeholder(tf.float32, shape=[None, num_features])
# For random forest, labels must be integers (the class id)
Y = tf.placeholder(tf.int32, shape=[None])
# Random Forest Parameters
hparams = tensor_forest.ForestHParams(num_classes=num_classes,
num_features=num_features,
num_trees=num_trees,
max_nodes=max_nodes).fill()
# Build the Random Forest
forest_graph = tensor_forest.RandomForestGraphs(hparams)
# Get training graph and loss
train_op = forest_graph.training_graph(X, Y)
loss_op = forest_graph.training_loss(X, Y)
# Measure the accuracy
infer_op, _, _ = forest_graph.inference_graph(X)
correct_prediction = tf.equal(tf.argmax(infer_op, 1), tf.cast(Y, tf.int64))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize the variables (i.e. assign their default value) and forest resources
init_vars = tf.group(tf.global_variables_initializer(),
resources.initialize_resources(resources.shared_resources()))
# Start TensorFlow session
sess = tf.Session()
# Run the initializer
sess.run(init_vars)
train_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented.tsv')
train_samples = train_samples.drop(columns='0')
validation_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_with_mask_segmented.tsv')
validation_samples = validation_samples.drop(columns='0')
train_samples_labels = pd.read_table(
'/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_with_mask.tsv'
)
validation_samples_labels = pd.read_table(
'/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_with_mask.tsv'
)
train_samples = order(train_samples, ['is_tumor'])
train_samples.is_tumor = train_samples.is_tumor.astype('int32')
validation_samples_labels.is_tumor = validation_samples_labels.is_tumor.astype('int32')
train_samples_labels.is_tumor = train_samples_labels.is_tumor.astype('int32')
train_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented_with_labels.tsv',
index=False,
header=True,
sep='\t')
validation_samples = order(validation_samples, ['is_tumor'])
validation_samples.is_tumor = validation_samples.is_tumor.astype('int32')
validation_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_segmented_with_labels.tsv',
index=False,
header=True,
sep='\t')
validation_samples_labels_tumor = validation_samples_labels[validation_samples_labels.is_tumor==1].sample(frac=0.45, random_state=42)
validation_samples_labels_normal = validation_samples_labels[validation_samples_labels.is_tumor==0].sample(frac=0.45, random_state=43)
train_samples_labels_tumor = train_samples_labels[train_samples_labels.is_tumor==1].sample(frac=0.45, random_state=42)
train_samples_labels_normal = train_samples_labels[train_samples_labels.is_tumor==0].sample(frac=0.45, random_state=43)
validation_samples_labels = pd.concat([validation_samples_labels_tumor, validation_samples_labels_normal]).sample(frac=1, random_state=43)
train_samples_labels = pd.concat([train_samples_labels_tumor, train_samples_labels_normal]).sample(frac=1, random_state=43)
#validation_samples['img_path'] = ''
#validation_samples['mask_path'] = ''
# Sample only half the points
train_samples_tumor = train_samples[train_samples.is_tumor==True].sample(frac=0.45, random_state=42)
train_samples_normal = train_samples[train_samples.is_tumor==False].sample(frac=0.45, random_state=43)
validation_samples_tumor = validation_samples[validation_samples.is_tumor==True].sample(frac=0.45, random_state=42)
validation_samples_normal = validation_samples[validation_samples.is_tumor==False].sample(frac=0.45, random_state=43)
train_samples = pd.concat([train_samples_tumor, train_samples_normal]).sample(frac=1, random_state=42)
train_samples['img_path'] = train_samples_labels.img_path.tolist()
train_samples['is_tissue'] = train_samples_labels.is_tissue.tolist()
train_samples['uid'] = train_samples_labels.uid.tolist()
train_samples['mask_path'] = train_samples_labels.mask_path.tolist()
train_samples.loc[train_samples.img_path!=train_samples.img_path, 'img_path'] = 'xxx'
train_samples.loc[train_samples.mask_path!=train_samples.mask_path, 'mask_path'] = 'yyy'
train_samples = train_samples.dropna()
train_samples = train_samples.reset_index(drop=True)
validation_samples = pd.concat([validation_samples_tumor, validation_samples_normal]).sample(frac=1, random_state=43)
validation_samples['img_path'] = validation_samples_labels.img_path.tolist()
validation_samples['is_tissue'] = validation_samples_labels.is_tissue.tolist()
validation_samples['uid'] = validation_samples_labels.uid.tolist()
validation_samples['mask_path'] = validation_samples_labels.mask_path.tolist()
validation_samples.loc[validation_samples.img_path!=validation_samples.img_path, 'img_path'] = 'xxx'
validation_samples.loc[validation_samples.mask_path!=validation_samples.mask_path, 'mask_path'] = 'yyy'
validation_samples = validation_samples.dropna()
validation_samples = validation_samples.reset_index(drop=True)
train_samples = train_samples.dropna()
validation_samples = validation_samples.dropna()
train_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented_with_labels_subsampled_v1.tsv',
index=False,
header=True,
sep='\t')
validation_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_segmented_with_labels_subsampled_v1.tsv',
index=False,
header=True,
sep='\t')
train_samples.head()
len(train_samples_labels.index)
train_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented.tsv')
len(train_samples.index)
print(train_samples.columns)
len(validation_samples.index)
training_init_op, training_next_batch = input_fn(['/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_segmented_with_labels_subsampled.tsv'],
1024)
for epoch in range(num_steps):
sess.run(training_init_op)
while True:
try:
training_features_batch, training_label_batch = sess.run(training_next_batch)
except tf.errors.OutOfRangeError:
break
_, l = sess.run([train_op, loss_op],
feed_dict={X: training_features_batch,
Y: training_label_batch})
acc = sess.run(accuracy_op,
feed_dict={X: training_features_batch,
Y: training_label_batch})
print('Step %i, Loss: %f, Acc: %f' % (epoch, l, acc))
validation_df = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_segmented_with_labels_subsampled_v1.tsv').copy()
validation_df = order(validation_df, ['is_tumor', 'is_tissue', 'uid', 'img_path', 'mask_path'])
validation_df['pred_probability'] = None
validation_df['pred_label'] = None
for idx, row in validation_df.iterrows():
validation_label_batch = row.values[0]
validation_features_batch = row.values[5:-2]
prob = sess.run(infer_op,
feed_dict={X: [validation_features_batch]})[0][1]
validation_df.loc[idx, 'pred_probability'] = prob
if prob>0.5:
validation_df.loc[idx, 'pred_label'] = 1
else:
validation_df.loc[idx, 'pred_label'] = 0
validation_df.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_segmented_with_labels_subsampled_v1_predicted.tsv',
sep='\t',
index=False,
header=True)
#len(validation_acc)
plt.hist(validation_acc)
validation_prob = []
validation_labels = []
sess.run(validation_init_op)
while True:
try:
validation_features_batch, validation_label_batch = sess.run(validation_next_batch)
validation_labels.append(validation_label_batch[0])
except tf.errors.OutOfRangeError:
break
acc = sess.run(infer_op,
feed_dict={X: validation_features_batch})
validation_prob.append(acc[0][1])
joblib.dump(validation_prob, '/Z/personal-folders/interns/saket/github/pyvirchow/pickles/random_forest_valid_prob.joblib.pickle')
joblib.dump(validation_labels, '/Z/personal-folders/interns/saket/github/pyvirchow/pickles/random_forest_valid_true.joblib.pickle')
joblib.dump(validation_acc, '/Z/personal-folders/interns/saket/github/pyvirchow/pickles/random_forest_valid_acc.joblib.pickle')
average_precision = average_precision_score(validation_labels, validation_prob)
precision, recall, _ = precision_recall_curve(validation_labels, validation_prob)
fig, ax = plt.subplots(figsize=(8, 8))
ax.step(recall, precision, color='b', alpha=0.2,
where='post')
ax.fill_between(recall, precision, step='post', alpha=0.2,
color='b')
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.set_ylim([0.0, 1.05])
ax.set_xlim([0.0, 1.0])
ax.set_title('2-class Precision-Recall curve: AP={0:0.2f}'.format(
average_precision))
fig.tight_layout()
fig.savefig('presentation_images/random_forest_PRAUC.pdf')
```
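The `order` helper defined near the top of the cell above simply moves the given column(s) to the front of the DataFrame, which is what lets `is_tumor` act as the first (label) column when the TSV is parsed. A self-contained version with a tiny illustrative DataFrame:

```python
import pandas as pd

def order(frame, var):
    """Move the column(s) named in `var` to the front of the DataFrame."""
    if isinstance(var, str):
        var = [var]                      # accept a single name or a list
    rest = [c for c in frame.columns if c not in var]
    return frame[var + rest]

df = pd.DataFrame({'feat_a': [1], 'feat_b': [2], 'is_tumor': [0]})
cols = order(df, 'is_tumor').columns.tolist()   # ['is_tumor', 'feat_a', 'feat_b']
```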
# Patch stuff
```
wsi = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_005.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_005.json'
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf/'
os.makedirs(savedir, exist_ok=True)
img_mask_dir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_img_and_mask/'
basename = path_leaf(wsi).replace('.tif', '')
#if basename!= 'tumor_110':
# continue
patchsize = 256
saveto = os.path.join(savedir, basename + '.joblib.pickle')
saveto_original = os.path.join(savedir,
basename + '.original.joblib.pickle')
all_samples = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005.tsv')
if 'img_path' not in all_samples.columns:
assert img_mask_dir is not None, 'Need to provide directory if img_path column is missing'
tile_loc = all_samples.tile_loc.astype(str)
tile_loc = tile_loc.str.replace(' ', '').str.replace(
')', '').str.replace('(', '')
all_samples[['row', 'col']] = tile_loc.str.split(',', expand=True)
all_samples['img_path'] = img_mask_dir + '/' + all_samples[[
'uid', 'row', 'col'
]].apply(
lambda x: '_'.join(x.values.tolist()),
axis=1) + '.img.joblib.pickle'
all_samples['mask_path'] = img_mask_dir + '/' + all_samples[[
'uid', 'row', 'col'
]].apply(
lambda x: '_'.join(x.values.tolist()),
axis=1) + '.mask.joblib.pickle'
if not os.path.isfile('/tmp/white.img.pickle'):
white_img = np.ones(
[patchsize, patchsize, 3], dtype=np.uint8) * 255
joblib.dump(white_img, '/tmp/white.img.pickle')
# Definitely not a tumor and hence all black
if not os.path.isfile('/tmp/white.mask.pickle'):
white_img_mask = np.ones(
[patchsize, patchsize], dtype=np.uint8) * 0
joblib.dump(white_img_mask, '/tmp/white.mask.pickle')
all_samples.loc[all_samples.is_tissue == False,
'img_path'] = '/tmp/white.img.pickle'
all_samples.loc[all_samples.is_tissue == False,
'mask_path'] = '/tmp/white.mask.pickle'
for idx, row in all_samples.iterrows():
f = row['img_path']
if not os.path.isfile(f):
row['savedir'] = img_mask_dir
row['patch_size'] = patchsize
row['index'] = idx
save_images_and_mask(row)
print(all_samples.head())
all_samples.to_csv('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask.tsv',
index=False,
header=True, sep='\t')
testing_init_op, testing_next_batch = input_fn(['/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_with_mask_segmented.tsv'],
batch_size)
tumor005_segdf = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/tumor_005_segmented.fixed.segmented.tsv')
tumor005_segdf.head()
n_samples = len(tumor005_segdf.index)
n_samples
slide = WSIReader(wsi, 40)
n_cols = int(slide.dimensions[0] / patchsize)
n_rows = int(slide.dimensions[1] / patchsize)
assert n_rows * n_cols == n_samples, 'Some division error;'
print('Total: {}'.format(n_samples))
"""
def generate_rows(samples, num_samples, batch_size=32):
while True: # Loop forever so the generator never terminates
for offset in range(0, num_samples, batch_size):
batch_samples = samples.iloc[offset:offset + batch_size]
is_tissue = batch_samples.is_tissue.tolist()
is_tumor = batch_samples.is_tumor.astype('int32').tolist()
features = []
batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
for _, batch_sample in batch_samples.iterrows():
row = batch_samples.values
features.append(row)
X_train = np.array(features)
y_train = np.array(labels)
yield X_train, y_train
"""
def generate_rows(samples, num_samples, batch_size=1):
while True: # Loop forever so the generator never terminates
for offset in range(0, num_samples, batch_size):
batch_samples = samples.iloc[offset:offset + batch_size]
#is_tissue = batch_samples.is_tissue.tolist()
#is_tumor = batch_samples.is_tumor.astype('int32').tolist()
features = []
labels = []
#batch_samples = batch_samples.copy().drop(columns=['is_tissue', 'is_tumor'])
for _, batch_sample in batch_samples.iterrows():
row = batch_sample.values
label = int(batch_sample.is_tumor)
if batch_sample.is_tissue:
feature = pd.read_table(os.path.join('/Z/personal-folders/interns/saket/github/pyvirchow', batch_sample.segmented_tsv))
feature = feature.drop(columns=['is_tumor', 'is_tissue'])
assert len(feature.columns) == 46
features.append(feature.loc[0].values)
else:
values = [0.0]*46
features.append(values)
labels.append(label)
X_train = np.array(features, dtype=np.float32)
y_train = np.array(labels)
#print(X_train)
#print(y_train)
yield X_train, y_train
predicted_thumbnails = list()
batch_size = 1
"""
sess.run(testing_init_op)
while True:
try:
testing_features_batch, testing_label_batch = sess.run(testing_next_batch)
except tf.errors.OutOfRangeError:
break
preds = sess.run(infer_op,
feed_dict={X: testing_features_batch})
predicted_thumbnails.append(preds)
"""
true_labels = []
for offset in tqdm_notebook(list(range(0, n_samples, batch_size))):
batch_samples = tumor005_segdf.iloc[offset:offset + batch_size]
X_test, true_label = next(
generate_rows(batch_samples, batch_size))
true_labels.append(true_label)
if batch_samples.is_tissue.nunique(
) == 1 and batch_samples.iloc[0].is_tissue == False:
# all patches in this row do not have tissue, skip them all
#predicted_thumbnails.append(
# np.zeros(batch_size, dtype=np.float32))
predicted_thumbnails.append(0)
else:
preds = sess.run(infer_op,
feed_dict={X: X_test})
predicted_thumbnails.append(preds[0][1])
predicted_thumbnails = np.asarray(predicted_thumbnails)
savedir = '/Z/personal-folders/interns/saket/github/pyvirchow/data/wsi_heatmap_rf'
saveto = os.path.join(savedir, 'tumor_005.job.pickle')
os.makedirs(savedir, exist_ok=True)
output_thumbnail_preds = predicted_thumbnails.reshape(
n_rows, n_cols)
joblib.dump(output_thumbnail_preds, saveto)
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds, cmap='coolwarm')
plt.colorbar(x)
fig.tight_layout()
fig, ax = plt.subplots()
sns.set_style('white')
x = ax.imshow(output_thumbnail_preds > 0.5, cmap='gray')
#plt.colorbar(x)
fig.tight_layout()
saver = tf.train.Saver()
saver.save(sess, '/Z/personal-folders/interns/saket/github/pyvirchow/models/random_forest_all_train.tf.model')
df = pd.read_table('../data/patch_df/tumor_001_with_mask_segmented.segmented.tsv')
df1 = df[df.segmented_tsv==df.segmented_tsv]
df1.head()
x = pd.read_table(df1.loc[693, 'segmented_tsv'])
x
x = pd.read_table('/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_segmented_tumor001/tumor_001_75_63.segmented_summary.tsv')
x
train_samples
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
clf = RandomForestClassifier(n_jobs=-1, random_state=0)
features = train_samples.columns
clf.fit(train_samples.loc[:, features[1:]], train_samples.is_tumor)
predictions = clf.predict(validation_samples.loc[:, features[1:]])
print ("Train Accuracy :: {} ".format(accuracy_score(train_samples.is_tumor,
clf.predict(train_samples.loc[:, features[1:]]))))
print ("Test Accuracy :: {} ".format(accuracy_score(validation_samples.is_tumor,
predictions)))
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
importances
for f in range(train_samples.shape[1]-1):
print("%d. feature %s (%f)" % (f + 1, train_samples.columns[indices[f]+1],
importances[indices[f]]))
std = np.std([tree.feature_importances_ for tree in clf.estimators_],
axis=0)
sns.set_context('talk', font_scale=2)
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_title('Feature importances')
ax.barh(list(features[1:][indices])[:15],list(importances[indices])[:15],
yerr=list(std[indices])[:15], align="center")
#ax.set_xticks(range(X.shape[1]), indices)
fig.tight_layout()
fig.savefig('presentation_images/rf_feature_importances.pdf')
```
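The reshape step above (a flat, row-major list of per-patch predictions turned into an `n_rows` by `n_cols` heatmap grid) can be sketched in plain Python. This is only an illustration of the layout; `to_grid` and the sample values are not part of the notebook's code.

```python
def to_grid(preds, n_rows, n_cols):
    """Reshape a flat, row-major list of per-patch predictions
    into an n_rows x n_cols grid (a list of rows)."""
    if len(preds) != n_rows * n_cols:
        raise ValueError("prediction count must equal n_rows * n_cols")
    return [preds[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]

# A 2 x 3 slide grid: patches are ordered row by row, as in the WSI loop above.
grid = to_grid([0.1, 0.2, 0.9, 0.0, 0.8, 0.3], n_rows=2, n_cols=3)
print(grid)  # -> [[0.1, 0.2, 0.9], [0.0, 0.8, 0.3]]
```

The `assert n_rows * n_cols == n_samples` check earlier plays the same role as the length check here: a flat prediction vector can only be reshaped when the counts match exactly.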
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom Layers
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are **best-effort**, there is no guarantee that they are accurate or reflect the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, please contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
We recommend using the high-level API `tf.keras` for building neural networks. That said, most TensorFlow APIs can also be used with eager execution.
```
import tensorflow as tf
```
## Layers: common sets of useful operations
When writing code for machine learning models, you usually want to work at a higher level of abstraction than individual operations and the manipulation of individual variables.
Many machine learning models can be expressed as compositions and stacks of relatively simple layers. TensorFlow provides a set of many common layers, as well as easy ways to write application-specific layers from scratch or as compositions of existing layers.
TensorFlow includes the full [Keras](https://keras.io) API in the tf.keras package. Keras layers are very convenient when building your own models.
```
# In the tf.keras.layers package, layers are objects.
# To construct a layer, all you do is create the object.
# For most layers, the first argument is the number of output dimensions or channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, because it can be inferred
# the first time the layer is used. You can still specify it manually by passing
# it as an argument, which is useful when building complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
```
See the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers) for the full list of pre-existing layers. It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
```
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful methods. For example, `layer.variables` lets you
# inspect all of a layer's variables, and `layer.trainable_variables` gives
# the trainable ones. Here, the fully-connected layer has weight and bias variables.
layer.variables
# These variables can also be reached through convenient accessors.
layer.kernel, layer.bias
```
## Implementing custom layers
The best way to implement your own layer is to extend the tf.keras.Layer class and implement the following methods:
* `__init__`, where you do all input-independent initialization
* `build`, where you know the `shape` of the input and do the rest of the initialization
* `call`, where you do the forward computation
Note that you do not have to wait until `build` is called to create your variables; you can also create them in `__init__`. However, the advantage of creating them in `build` is that they can be defined lazily, based on the `shape` of the inputs the layer will operate on. Creating variables in `__init__`, by contrast, means the `shape` needed to create them must be specified explicitly.
```
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
self.kernel = self.add_variable("kernel",
shape=[int(input_shape[-1]),
self.num_outputs])
def call(self, input):
return tf.matmul(input, self.kernel)
layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.trainable_variables)
```
Code is generally easier to read and maintain when it uses standard layers wherever possible, because readers are familiar with how standard layers behave. If you want a layer that is not in `tf.keras.layers`, consider filing a [GitHub issue](http://github.com/tensorflow/tensorflow/issues/new) or, better yet, sending a pull request.
## Models: composing layers
In machine learning, many interesting layer-like things are implemented by composing existing layers. For example, a residual block in ResNet is a composition of convolutions, batch normalizations, and a shortcut.
The main class for defining a layer-like thing that contains other layers is tf.keras.Model. You implement one by inheriting from this class.
```
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.trainable_variables])
```
In most cases, however, models are composed of layers called one after another. The tf.keras.Sequential class lets you implement this in very little code.
```
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1),
input_shape=(
None, None, 3)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
```
# Next steps
Now, go back to the previous notebook and rework the linear regression example to use layers and models, giving it a more structured implementation.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Model-Zoo----Convolutional-Neural-Network-(VGG16)" data-toc-modified-id="Model-Zoo----Convolutional-Neural-Network-(VGG16)-1"><span class="toc-item-num">1 </span>Model Zoo -- Convolutional Neural Network (VGG16)</a></span><ul class="toc-item"><li><span><a href="#Imports" data-toc-modified-id="Imports-1.1"><span class="toc-item-num">1.1 </span>Imports</a></span></li><li><span><a href="#Settings-and-Dataset" data-toc-modified-id="Settings-and-Dataset-1.2"><span class="toc-item-num">1.2 </span>Settings and Dataset</a></span></li><li><span><a href="#Model" data-toc-modified-id="Model-1.3"><span class="toc-item-num">1.3 </span>Model</a></span></li><li><span><a href="#Training" data-toc-modified-id="Training-1.4"><span class="toc-item-num">1.4 </span>Training</a></span></li><li><span><a href="#Evaluation" data-toc-modified-id="Evaluation-1.5"><span class="toc-item-num">1.5 </span>Evaluation</a></span></li></ul></li></ul></div>
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*
Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
- Runs on CPU (not recommended here) or GPU (if available)
# Model Zoo -- Convolutional Neural Network (VGG16)
## Imports
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device:', device)
# Hyperparameters
random_seed = 1
learning_rate = 0.001
num_epochs = 10
batch_size = 128
# Architecture
num_features = 784
num_classes = 10
##########################
### CIFAR-10 DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.CIFAR10(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.CIFAR10(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
```
## Model
```
##########################
### MODEL
##########################
class VGG16(torch.nn.Module):
def __init__(self, num_features, num_classes):
super(VGG16, self).__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=3,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_5 = nn.Sequential(
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.classifier = nn.Sequential(
nn.Linear(512, 4096),
nn.Linear(4096, 4096),
nn.Linear(4096, num_classes)
)
for m in self.modules():
if isinstance(m, torch.nn.Conv2d):
#n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
#m.weight.data.normal_(0, np.sqrt(2. / n))
m.weight.detach().normal_(0, 0.05)
if m.bias is not None:
m.bias.detach().zero_()
elif isinstance(m, torch.nn.Linear):
m.weight.detach().normal_(0, 0.05)
m.bias.detach().zero_()
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
x = self.block_5(x)
logits = self.classifier(x.view(-1, 512))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = VGG16(num_features=num_features,
num_classes=num_classes)
model = model.to(device)
##########################
### COST AND OPTIMIZER
##########################
cost_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
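The "same" padding arithmetic quoted in the comments above, p = (s(o-1) - w + k)/2, can be checked with a small helper. This is only a sketch of the formula, not part of the model code; `same_padding` is an illustrative name.

```python
def same_padding(w, k, s=1):
    """Padding p that keeps the output size equal to the input size w,
    derived from (w - k + 2*p)/s + 1 = o with o = w."""
    p = (s * (w - 1) - w + k) / 2
    if p != int(p) or p < 0:
        raise ValueError("no non-negative integer 'same' padding for these parameters")
    return int(p)

# 3x3 kernel, stride 1, on a 32x32 CIFAR-10 image: (1*(32-1) - 32 + 3)/2 = 1,
# matching padding=1 used in every conv block above.
print(same_padding(32, 3))  # -> 1
```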
## Training
```
def compute_accuracy(model, data_loader, train=False, validation=False):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
if not train:
if not validation and i < 10:
continue
elif validation and i >= 10:
continue
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = cost_fn(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model.eval()
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d | Train: %.3f%% | Valid: %.3f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader, train=True),
compute_accuracy(model, train_loader, train=False, validation=True)))
```
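The `compute_accuracy` function above carves a makeshift validation set out of the training loader: the first 10 batches serve as validation, the rest as the training-accuracy sample. A plain-Python sketch of that split logic (with illustrative names) makes the behavior explicit:

```python
def split_batches(batch_indices, n_val_batches=10):
    """Mirror the split in compute_accuracy above: the first n_val_batches
    batches act as validation, all later batches as the training sample."""
    val = [i for i in batch_indices if i < n_val_batches]
    rest = [i for i in batch_indices if i >= n_val_batches]
    return val, rest

val, rest = split_batches(range(15))
print(len(val), len(rest))  # -> 10 5
```

Note that because the training loader shuffles each epoch, the patterns landing in those first 10 batches change from call to call; a fixed held-out split would be more rigorous.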
## Evaluation
```
with torch.set_grad_enabled(False): # save memory during inference
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
```
```
#%cd ..
#!mkdir .kaggle
#from google.colab import files
#up = files.upload()
#!chmod 600 /root/.kaggle/kaggle.json
#!kaggle competitions download -c web-traffic-time-series-forecasting
#!unzip train_1.csv.zip
!ls
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
df = pd.read_csv('train_1.csv')
df.head()
df.info()
data_start_date = df.columns[1]
data_end_date = df.columns[-1]
print('Data ranges from %s to %s' % (data_start_date, data_end_date))
def plot_random_series(df, n_series):
sample = df.sample(n_series, random_state=8)
page_labels = sample['Page'].tolist()
series_samples = sample.loc[:,data_start_date:data_end_date]
plt.figure(figsize=(10,6))
for i in range(series_samples.shape[0]):
np.log1p(pd.Series(series_samples.iloc[i]).astype(np.float64)).plot(linewidth=1.5)
plt.title('Randomly Selected Wikipedia Page Daily Views Over Time (Log(views) + 1)')
plt.legend(page_labels)
plot_random_series(df, 6)
from datetime import timedelta
pred_steps = 14
pred_length=timedelta(pred_steps)
first_day = pd.to_datetime(data_start_date)
last_day = pd.to_datetime(data_end_date)
val_pred_start = last_day - pred_length + timedelta(1)
val_pred_end = last_day
train_pred_start = val_pred_start - pred_length
train_pred_end = val_pred_start - timedelta(days=1)
enc_length = train_pred_start - first_day
train_enc_start = first_day
train_enc_end = train_enc_start + enc_length - timedelta(1)
val_enc_start = train_enc_start + pred_length
val_enc_end = val_enc_start + enc_length - timedelta(1)
print('Train encoding:', train_enc_start, '-', train_enc_end)
print('Train prediction:', train_pred_start, '-', train_pred_end, '\n')
print('Val encoding:', val_enc_start, '-', val_enc_end)
print('Val prediction:', val_pred_start, '-', val_pred_end)
print('\nEncoding interval:', enc_length.days)
print('Prediction interval:', pred_length.days)
date_to_index = pd.Series(index=pd.Index([pd.to_datetime(c) for c in df.columns[1:]]),
data=[i for i in range(len(df.columns[1:]))])
series_array = df[df.columns[1:]].values
def get_time_block_series(series_array, date_to_index, start_date, end_date):
inds = date_to_index[start_date:end_date]
return series_array[:,inds]
def transform_series_encode(series_array):
series_array = np.log1p(np.nan_to_num(series_array)) # filling NaN with 0
series_mean = series_array.mean(axis=1).reshape(-1,1)
series_array = series_array - series_mean
series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1))
return series_array, series_mean
def transform_series_decode(series_array, encode_series_mean):
series_array = np.log1p(np.nan_to_num(series_array)) # filling NaN with 0
series_array = series_array - encode_series_mean
series_array = series_array.reshape((series_array.shape[0],series_array.shape[1], 1))
return series_array
from keras.models import Model
from keras.layers import Input, LSTM, Dense
from keras.optimizers import Adam
latent_dim = 50 # LSTM hidden units
dropout = .20
# Define an input series and encode it with an LSTM.
encoder_inputs = Input(shape=(None, 1))
encoder = LSTM(latent_dim, dropout=dropout, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the final states. These represent the "context"
# vector that we use as the basis for decoding.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
# This is where teacher forcing inputs are fed in.
decoder_inputs = Input(shape=(None, 1))
# We set up our decoder using `encoder_states` as initial state.
# We return full output sequences and return internal states as well.
# We don't use the return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, dropout=dropout, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(1) # 1 continuous output at each timestep
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
first_n_samples = 20000
batch_size = 2**11
epochs = 100
# sample of series from train_enc_start to train_enc_end
encoder_input_data = get_time_block_series(series_array, date_to_index,
train_enc_start, train_enc_end)[:first_n_samples]
encoder_input_data, encode_series_mean = transform_series_encode(encoder_input_data)
# sample of series from train_pred_start to train_pred_end
decoder_target_data = get_time_block_series(series_array, date_to_index,
train_pred_start, train_pred_end)[:first_n_samples]
decoder_target_data = transform_series_decode(decoder_target_data, encode_series_mean)
# lagged target series for teacher forcing
decoder_input_data = np.zeros(decoder_target_data.shape)
decoder_input_data[:,1:,0] = decoder_target_data[:,:-1,0]
decoder_input_data[:,0,0] = encoder_input_data[:,-1,0]
model.compile(Adam(), loss='mean_absolute_error')
history = model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('Mean Absolute Error Loss')
plt.title('Loss Over Time')
plt.legend(['Train','Valid'])
# from our previous model - mapping encoder sequence to state vectors
encoder_model = Model(encoder_inputs, encoder_states)
# A modified version of the decoding stage that takes in predicted target inputs
# and encoded state vectors, returning predicted target outputs and decoder state vectors.
# We need to hang onto these state vectors to run the next step of the inference loop.
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, 1))
# Populate the first target sequence with end of encoding series pageviews
target_seq[0, 0, 0] = input_seq[0, -1, 0]
# Sampling loop for a batch of sequences - we will fill decoded_seq with predictions
# (to simplify, here we assume a batch of size 1).
decoded_seq = np.zeros((1,pred_steps,1))
for i in range(pred_steps):
output, h, c = decoder_model.predict([target_seq] + states_value)
decoded_seq[0,i,0] = output[0,0,0]
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, 1))
target_seq[0, 0, 0] = output[0,0,0]
# Update states
states_value = [h, c]
return decoded_seq
encoder_input_data = get_time_block_series(series_array, date_to_index, val_enc_start, val_enc_end)
encoder_input_data, encode_series_mean = transform_series_encode(encoder_input_data)
decoder_target_data = get_time_block_series(series_array, date_to_index, val_pred_start, val_pred_end)
decoder_target_data = transform_series_decode(decoder_target_data, encode_series_mean)
def predict_and_plot(encoder_input_data, decoder_target_data, sample_ind, enc_tail_len=50):
encode_series = encoder_input_data[sample_ind:sample_ind+1,:,:]
pred_series = decode_sequence(encode_series)
encode_series = encode_series.reshape(-1,1)
pred_series = pred_series.reshape(-1,1)
target_series = decoder_target_data[sample_ind,:,:1].reshape(-1,1)
encode_series_tail = np.concatenate([encode_series[-enc_tail_len:],target_series[:1]])
x_encode = encode_series_tail.shape[0]
plt.figure(figsize=(10,6))
plt.plot(range(1,x_encode+1),encode_series_tail)
plt.plot(range(x_encode,x_encode+pred_steps),target_series,color='orange')
plt.plot(range(x_encode,x_encode+pred_steps),pred_series,color='teal',linestyle='--')
plt.title('Encoder Series Tail of Length %d, Target Series, and Predictions' % enc_tail_len)
plt.legend(['Encoding Series','Target Series','Predictions'])
predict_and_plot(encoder_input_data, decoder_target_data, 100)
predict_and_plot(encoder_input_data, decoder_target_data, 6007)
predict_and_plot(encoder_input_data, decoder_target_data, 33000)
```
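The teacher-forcing construction above (the decoder input is the target series lagged by one step, seeded with the final encoder value) can be sketched in plain Python, independent of Keras. The names here are illustrative stand-ins for the arrays in the notebook.

```python
def lagged_decoder_input(target, last_encoder_value):
    """Build the teacher-forcing input: position 0 holds the final encoder
    value, and position t (t >= 1) holds the target at t - 1."""
    return [last_encoder_value] + list(target[:-1])

# A target series of 4 prediction steps, with the encoder series ending at 10.0.
dec_in = lagged_decoder_input([11.0, 12.0, 13.0, 14.0], last_encoder_value=10.0)
print(dec_in)  # -> [10.0, 11.0, 12.0, 13.0]
```

This is exactly what the two NumPy assignments to `decoder_input_data` do above, one slice for the lag and one for the seed value.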
# Xarray Oceanography Example
## El Niño / Southern Oscillation (ENSO) from Sea Surface Temperature (SST)
**Author:** [Ryan Abernathey](http://rabernat.github.io)
According to [NOAA](https://www.esrl.noaa.gov/psd/enso/):
> El Niño and La Niña, together called the El Niño Southern Oscillation (ENSO), are periodic departures from expected sea surface temperatures (SSTs) in the equatorial Pacific Ocean. These warmer or cooler than normal ocean temperatures can affect weather patterns around the world by influencing high and low pressure systems, winds, and precipitation. ENSO may bring much needed moisture to a region while causing extremes of too much or too little water in others.
In this notebook, we use the python [xarray](http://xarray.pydata.org/en/latest/) package to examine SST data from NOAA's [Extended Reconstructed Sea Surface Temperature (ERSST) v5](https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5) product.
We use [holoviews](http://holoviews.org/) to interactively visualize the data.
We then demonstrate how easy it is to calculate the [Niño3.4 index](https://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst/).
Finally, we use the [EOFS](https://ajdawson.github.io/eofs/index.html) package to compute the Empirical Orthogonal Functions of global SST.
```
import xarray as xr
from matplotlib import pyplot as plt
import numpy as np
import hvplot.xarray
import holoviews as hv
hv.extension('bokeh')
%matplotlib inline
```
### Access data directly from NOAA using the OPeNDAP protocol
```
url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/noaa.ersst.v5/sst.mnmean.nc'
ds = xr.open_dataset(url)
ds
```
### Select only the more recent part of the data.
```
ds = ds.sel(time=slice('1960', '2018'))
ds
# size of data in MB
ds.nbytes/1e6
# actually download the data
ds.load()
```
### Interactive Plot
```
%output holomap='scrubber' fps=3
ds.sst.hvplot('lon', 'lat', cmap='Magma').redim.range(sst=(-2, 30))
```
### Calculate Climatology and Monthly Anomaly
Xarray makes this particularly easy
```
sst_clim = ds.sst.groupby('time.month').mean(dim='time')
sst_anom = ds.sst.groupby('time.month') - sst_clim
```
Detrend signal
```
from scipy.signal import detrend
sst_anom_detrended = xr.apply_ufunc(detrend, sst_anom.fillna(0),
kwargs={'axis': 0}).where(~sst_anom.isnull())
```
### Plot global mean
```
# For a global average, we need to weigh the points by cosine of latitude.
# This is not built into xarray because xarray is not specific to geoscientific data.
weights = np.cos(np.deg2rad(ds.lat)).where(~sst_anom[0].isnull())
weights /= weights.mean()
(sst_anom * weights).mean(dim=['lon', 'lat']).plot(label='raw')
(sst_anom_detrended * weights).mean(dim=['lon', 'lat']).plot(label='detrended')
plt.grid()
plt.legend()
sst_anom_detrended.hvplot('lon', 'lat', cmap='RdBu_r').redim.range(sst=(-2, 2))
```
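The cosine-of-latitude weighting in the cell above can be sketched without xarray; grid cells shrink toward the poles, so each point is weighted by cos(latitude) and the weights are normalized to a mean of 1. The `lats` values here are illustrative.

```python
import math

def cos_lat_weights(lats):
    """Area weights proportional to cos(latitude), normalized so the
    mean weight is 1, mirroring the xarray computation above."""
    w = [math.cos(math.radians(lat)) for lat in lats]
    mean = sum(w) / len(w)
    return [wi / mean for wi in w]

weights = cos_lat_weights([-60, -30, 0, 30, 60])
# Equatorial points get the largest weight; the mean weight is 1 by construction.
```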
### Calculate Oceanic Niño Index
```
sst_anom_nino34 = sst_anom_detrended.sel(lat=slice(5, -5), lon=slice(190, 240))
sst_anom_nino34_mean = sst_anom_nino34.mean(dim=('lon', 'lat'))
oni = sst_anom_nino34_mean.rolling(time=3, center=True).mean()
```
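The centered three-month rolling mean that turns the Niño3.4 anomaly into the ONI above can be sketched in plain Python. This is a simplified stand-in for `DataArray.rolling`; window ends are left as `None`, roughly mirroring the NaNs xarray produces there.

```python
def centered_rolling_mean(values, window=3):
    """Centered moving average; positions without a full window get None."""
    half = window // 2
    out = [None] * len(values)
    for i in range(half, len(values) - half):
        out[i] = sum(values[i - half:i + half + 1]) / window
    return out

print(centered_rolling_mean([1.0, 2.0, 3.0, 4.0, 5.0]))
# -> [None, 2.0, 3.0, 4.0, None]
```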
Plot it:
```
oni.plot()
plt.grid()
plt.ylabel('Anomaly (deg. C)');
```
Compare to the [official version](https://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst/) from NOAA:

### Composite the global SST on the Niño3.4 index
```
positive_oni = ((oni>0.5).astype('b').rolling(time=5, center=True).sum()==5)
negative_oni = ((oni<-0.5).astype('b').rolling(time=5, center=True).sum()==5)
positive_oni.astype('i').plot(marker='.', label='positive')
(-negative_oni.astype('i')).plot(marker='.', label='negative')
plt.legend()
sst_anom.where(positive_oni).mean(dim='time').plot()
plt.title('SST Anomaly - Positive Niño3.4');
sst_anom.where(negative_oni).mean(dim='time').plot()
plt.title('SST Anomaly - Negative Niño3.4');
```
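The `rolling(...).sum()==5` trick above flags only months that sit inside a fully above-threshold 5-month window. A minimal pandas sketch with a made-up index series shows the behaviour:

```python
import pandas as pd

# Hypothetical monthly index values: a warm spell surrounded by neutral months
oni = pd.Series([0.1, 0.6, 0.7, 0.8, 0.9, 0.7, 0.6, 0.2, 0.0])

# A month counts as part of an event only if every month in the centered
# 5-month window around it exceeds the 0.5 threshold
exceed = (oni > 0.5).astype(int)
persistent = exceed.rolling(5, center=True).sum() == 5

print(persistent.tolist())
# [False, False, False, True, True, False, False, False, False]
```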
### Calculate Empirical Orthogonal Functions
```
from eofs.xarray import Eof
solver = Eof(sst_anom_detrended, weights=np.sqrt(weights))
eof1 = solver.eofsAsCorrelation(neofs=1)
pc1 = solver.pcs(npcs=1, pcscaling=1)
eof1.sel(mode=0).plot()
pc1.sel(mode=0).plot()
plt.grid()
```
# Installing the NAG library and running this notebook
This notebook depends on the NAG library for Python to run. Please read the instructions in the [Readme.md](https://github.com/numericalalgorithmsgroup/NAGPythonExamples/blob/master/local_optimization/Readme.md#install) file to download, install and obtain a licence for the library.
Instructions on how to run the notebook can be found [here](https://github.com/numericalalgorithmsgroup/NAGPythonExamples/blob/master/local_optimization/Readme.md#jupyter).
# Simple Nonlinear Least-Squares Fitting Example
This example demonstrates how to fit data to a model using weighted nonlinear least squares.
**handle_solve_bxnl** (`e04gg`) is a bound-constrained nonlinear least squares trust-region solver (BXNL) from the NAG optimization modelling suite, aimed at small- to medium-scale problems. It solves the problem:
$$
\begin{array}{ll}
{\underset{x \in \mathbb{R}^{n_{\text{var}}}}{minimize}\ } &
\frac{1}{2} \sum_{i=1}^{n_{\text{res}}} w_i r_i(x)^2 + \frac{\sigma}{p}\|x\|^p_2\\
\text{subject to} & l_{x} \leq x \leq u_{x}
\end{array}
$$
where $r_i(x), i=1,\dots,n_{\text{res}}$, are smooth nonlinear functions called residuals, $w_i, i=1,\dots,n_{\text{res}}$, are weights (all equal to 1 by default), and the rightmost term is a regularization term with parameter $\sigma\geq0$ and power $p>0$. The constraint elements $l_x$ and $u_x$ are $n_{\text{var}}$-dimensional vectors defining the bounds on the variables.
Typically in a calibration or data fitting context, the residuals will be defined as the difference between the observed values $y_i$ at $t_i$ and the values provided by a nonlinear model $\phi(t;x)$, i.e., $$r_i(x)≔y_i-\phi(t_i;x).$$
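For concreteness, the objective above can be evaluated directly for a given residual vector. This is only a hedged sketch — `bxnl_objective` is an illustrative helper, not part of the NAG API:

```python
import numpy as np

def bxnl_objective(r, w, x, sigma=0.0, p=2.0):
    """0.5 * sum_i w_i r_i^2 + (sigma/p) * ||x||_2^p  (weighted LSQ + regularization)."""
    return 0.5 * np.sum(w * r**2) + (sigma / p) * np.linalg.norm(x)**p

r = np.array([0.5, -1.0])   # residuals y_i - phi(t_i; x)
w = np.ones(2)              # unit weights (the default)
x = np.array([3.0, 4.0])    # parameter vector, ||x||_2 = 5

print(bxnl_objective(r, w, x))             # 0.5*(0.25 + 1.0) = 0.625
print(bxnl_objective(r, w, x, sigma=0.1))  # adds (0.1/2)*25 = 1.25 -> 1.875
```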
The following example illustrates the usage of `e04gg` to fit $\alpha$-particle etched nuclear track data from a PADC target to a mixed distribution. A target sheet is scanned and track diameters (red wedges in Figure 1 below) are recorded into a histogram, and a mixed Normal and log-Normal model is fitted to the experimental histogram (see Figure 2).

**Figure 1**: PADC with etched $\alpha$ particle tracks.
`e04gg` is used to fit the six parameter model
$$
\begin{array}{ll}
\phi\big(t, x = (a, b, A_l, \mu, \sigma, A_g)\big) = \text{log-Normal}(a, b, A_l) + \text{Normal}(\mu, \sigma^2, A_g)\\
\text{subject to } 0 \leq x,
\end{array}$$
using the histogram heights reported in Figure 2.
```
import numpy as np
import math
import matplotlib.pyplot as plt
from naginterfaces.base import utils
from naginterfaces.library import opt
# problem data
# number of observations
nres = 64
# observations
diameter = range(1, nres+1)
density = [
0.0722713864, 0.0575221239, 0.0604719764, 0.0405604720, 0.0317109145,
0.0309734513, 0.0258112094, 0.0228613569, 0.0213864307, 0.0213864307,
0.0147492625, 0.0213864307, 0.0243362832, 0.0169616519, 0.0095870206,
0.0147492625, 0.0140117994, 0.0132743363, 0.0147492625, 0.0140117994,
0.0140117994, 0.0132743363, 0.0117994100, 0.0132743363, 0.0110619469,
0.0103244838, 0.0117994100, 0.0117994100, 0.0147492625, 0.0110619469,
0.0132743363, 0.0206489676, 0.0169616519, 0.0169616519, 0.0280235988,
0.0221238938, 0.0235988201, 0.0221238938, 0.0206489676, 0.0228613569,
0.0184365782, 0.0176991150, 0.0132743363, 0.0132743363, 0.0088495575,
0.0095870206, 0.0073746313, 0.0110619469, 0.0036873156, 0.0051622419,
0.0058997050, 0.0014749263, 0.0022123894, 0.0029498525, 0.0014749263,
0.0007374631, 0.0014749263, 0.0014749263, 0.0007374631, 0.0000000000,
0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000
]
# Define the data structure to be passed to the callback functions
data = {'d': diameter, 'y': density}
# Plot histogram of PADC etch track diameter count (densities)
dh = np.arange(1, 10*nres+9)/10.0
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('PADC etch track diameter histogram', fontsize=16)
ax.set_xlabel('Diameter (nm)')
ax.set_ylabel('Density')
ax.set_xlim(xmin=1, xmax=65)
ax.bar(diameter, data['y'], color='lightsteelblue')
ax.grid()
plt.show()
```
**Figure 2**: Histogram of etched track diameter of $\alpha$ particles. Bar heights are the data that will be fitted using the aggregated model $\phi(t; x)$.
```
# Define Normal and log-Normal distributions
def lognormal(d, a, b, Al):
return Al/(d*b*np.sqrt(2*math.pi))*np.exp(-((np.log(d)-a)**2)/(2*b**2))
def gaussian(d, mu, sigma, Ag):
return Ag*np.exp(-0.5*((d-mu)/sigma)**2)/(sigma*np.sqrt(2*math.pi))
```
In terms of solving this problem, the function to minimize is the sum of squared residuals using the model $\phi(t; x)$
and the data pair (`diameter`, `density`). The parameter vector is $x = (a, b, A_l, \mu, \sigma, A_g)$. The next step is to define a function to return the residual vector
$\text{lsqfun}(x) := \big[r_1(x), r_2(x), \dots, r_{n_{\text{res}}}(x)\big]$.
```
# Define the least-square function as a mixture of Normal and log-Normal
# functions. Also add its first derivatives
def lsqfun(x, nres, inform, data):
"""
Objective function callback passed to the least squares solver.
x = (a, b, Al, mu, sigma, Ag)
"""
rx = np.zeros(nres)
d = data['d']
y = data['y']
a = x[0]
b = x[1]
Al = x[2]
mu = x[3]
sigma = x[4]
Ag = x[5]
for i in range(nres):
rx[i] = lognormal(d[i], a, b, Al) + gaussian(d[i], mu, sigma, Ag) - y[i]
return rx, inform
def lsqgrd(x, nres, rdx, inform, data):
"""
Computes the Jacobian of the least square residuals.
x = (a, b, Al, mu, sigma, Ag)
"""
n = len(x)
d = data['d']
a = x[0]
b = x[1]
Al = x[2]
mu = x[3]
sigma = x[4]
Ag = x[5]
for i in range(nres):
# log-Normal derivatives
l = lognormal(d[i], a, b, Al)
# dl/da
rdx[i*n+0] = (np.log(d[i])-a)/(b**2) * l
# dl/db
rdx[i*n+1] = ((np.log(d[i])-a)**2 - b**2)/b**3 * l
# dl/dAl
rdx[i*n+2] = lognormal(d[i], a, b, 1.0)
# Gaussian derivatives
g = gaussian(d[i], mu, sigma, Ag)
# dg/dmu
rdx[i*n+3] = (d[i] - mu) / sigma**2 * g
# dg/dsigma
rdx[i*n+4] = ((d[i] - mu)**2 - sigma**2)/sigma**3 * g
# dg/dAg
rdx[i*n+5] = gaussian(d[i], mu, sigma, 1.0)
return rdx, inform
# parameter vector: x = (a, b, Al, mu, sigma, Ag)
nvar = 6
# Initialize the model handle
handle = opt.handle_init(nvar)
# Define a dense nonlinear least-squares objective function
opt.handle_set_nlnls(handle, nres)
# Add weights for each residual
weights = np.ones(nres)
weights[55:63] = 5.0
weights /= weights.sum()
# Define the reliability of the measurements (weights)
opt.handle_set_get_real(handle, 'rw', rarr=weights)
# Restrict parameter space (0 <= x)
opt.handle_set_simplebounds(handle, np.zeros(nvar), 100.0*np.ones(nvar))
# Set some optional parameters to control the output of the solver
for option in [
'Print Options = NO',
'Print Level = 1',
'Print Solution = X',
'Bxnl Iteration Limit = 100',
'Bxnl Use weights = YES',
# Add cubic regularization term (avoid overfitting)
'Bxnl Reg Order = 3',
'Bxnl Glob Method = REG',
]:
opt.handle_opt_set(handle, option)
# Use an explicit I/O manager for abbreviated iteration output:
iom = utils.FileObjManager(locus_in_output=False)
# Define initial guess (starting point)
x = np.array([1.63, 0.88, 1.0, 30, 1.52, 0.24], dtype=float)
```
Call the solver
```
# Call the solver
slv = opt.handle_solve_bxnl(handle, lsqfun, lsqgrd, x, nres, data=data, io_manager=iom)
```
The optimal solution $x$ provides the unfolded parameters for the two distributions, Normal and log-Normal (blue and red curves in Figure 4). Adding these together produces the aggregated curve (shown in green in Figures 3 and 4), which is the curve actually used to perform the fit. The optimal solution is
```
# Optimal parameter values
# Al * log-Normal(a, b):
aopt = slv.x[0]
bopt = slv.x[1]
Alopt = slv.x[2]
# Ag * gaussian(mu, sigma):
muopt = slv.x[3]
sigmaopt = slv.x[4]
Agopt = slv.x[5]
```
and the objective function value is
```
print(slv.rinfo[0])
```
The next plot in Figure 3 illustrates the mixed-distribution fit over the histogram:
```
lopt = lognormal(dh, aopt, bopt, Alopt)
gopt = gaussian(dh, muopt, sigmaopt, Agopt)
w = lopt + gopt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('PADC etch track diameter histogram and fit', fontsize=16)
ax.set_xlabel('Diameter (nm)')
ax.set_ylabel('Density')
ax.set_xlim(xmin=1, xmax=65)
ax.bar(diameter, data['y'], color='lightsteelblue')
ax.plot(dh, w, '-', linewidth=3, color='tab:green')
ax.grid()
ax.legend(['Aggregated', 'Measured track diameter density'])
plt.show()
```
**Figure 3**: Histogram with aggregated fit.
The plot below in Figure 4 shows the unfolded fit: the log-Normal distribution in red and the Normal one in blue:
```
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('PADC etch track diameter histogram unfolding', fontsize=16)
ax.set_xlabel('Diameter (nm)')
ax.set_ylabel('Density')
ax.set_xlim(xmin=1, xmax=65)
ax.bar(diameter, data['y'], color='lightsteelblue')
ax.plot(dh, lopt, '-', linewidth=4, color='tab:red')
ax.plot(dh, gopt, '-', linewidth=4, color='tab:blue')
ax.plot(dh, w, '-', linewidth=3, color='tab:green')
ax.grid()
glab = 'Unfolded Normal($\\mu=%1.2f$, $\\sigma=%1.2f, A=%1.2f$)' % (muopt, sigmaopt, Agopt)
llab = 'Unfolded log-Normal($a=%1.2f$, $b=%1.2f, A=%1.2f$)' % (aopt, bopt, Alopt)
ax.legend([llab, glab, 'Aggregated', 'Measured track diameter density'])
plt.show()
```
**Figure 4**: Aggregated model used for the fitting (green curve) and unfolded models (blue and red curves).
Optimal parameter values are reported in the legend.
Finally, clean up and destroy the handle
```
# Destroy the handle:
opt.handle_free(handle)
```
## Linear and Polynomial Regression for Pumpkin Pricing - Lesson 3
Load up required libraries and dataset. Convert the data to a dataframe containing a subset of the data:
- Only get pumpkins priced by the bushel
- Convert the date to a month
- Calculate the price to be an average of high and low prices
- Convert the price to reflect the pricing by bushel quantity
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime
pumpkins = pd.read_csv('../../data/US-pumpkins.csv')
pumpkins.head()
pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
new_columns = ['Package', 'Variety', 'City Name', 'Month', 'Low Price', 'High Price', 'Date']
pumpkins = pumpkins.drop([c for c in pumpkins.columns if c not in new_columns], axis=1)
price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
month = pd.DatetimeIndex(pumpkins['Date']).month
day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
new_pumpkins = pd.DataFrame(
{'Month': month,
'DayOfYear' : day_of_year,
'Variety': pumpkins['Variety'],
'City': pumpkins['City Name'],
'Package': pumpkins['Package'],
'Low Price': pumpkins['Low Price'],
'High Price': pumpkins['High Price'],
'Price': price})
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/1.1
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price*2
new_pumpkins.head()
```
A scatterplot reminds us that we only have month data from August through December. We probably need more data to be able to draw conclusions in a linear fashion.
```
new_pumpkins.plot.scatter('Month','Price')
new_pumpkins.plot.scatter('DayOfYear','Price')
```
Let's see if there is correlation:
```
print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
```
The correlation looks pretty small, but there seems to be another, more important relationship: the price points in the plot above form several distinct clusters. Let's make a plot that shows the different pumpkin varieties:
```
ax=None
colors = ['red','blue','green','yellow']
for i,var in enumerate(new_pumpkins['Variety'].unique()):
ax = new_pumpkins[new_pumpkins['Variety']==var].plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
```
For the time being, let's concentrate only on one variety - **pie type**.
```
pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
print(pie_pumpkins['DayOfYear'].corr(pie_pumpkins['Price']))
pie_pumpkins.plot.scatter('DayOfYear','Price')
```
### Linear Regression
We will use Scikit-learn to train a linear regression model:
```
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split
X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
y = pie_pumpkins['Price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
lin_reg = LinearRegression()
lin_reg.fit(X_train,y_train)
pred = lin_reg.predict(X_test)
mse = np.sqrt(mean_squared_error(y_test,pred))
print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
plt.scatter(X_test,y_test)
plt.plot(X_test,pred)
```
The slope of the line can be determined from linear regression coefficients:
```
lin_reg.coef_, lin_reg.intercept_
```
We can use the trained model to predict price:
```
# Pumpkin price on programmer's day
lin_reg.predict([[256]])
```
### Polynomial Regression
Sometimes the relationship between the features and the result is inherently non-linear. For example, pumpkin prices might be high in winter (months 1-2), drop over summer (months 5-7), and then rise again. Linear regression is unable to capture this relationship accurately.
In this case, we may consider adding extra features. A simple way is to use polynomials of the input features, which results in **polynomial regression**. In Scikit-learn, we can automatically pre-compute polynomial features using pipelines:
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
pipeline.fit(X_train,y_train)
pred = pipeline.predict(X_test)
mse = np.sqrt(mean_squared_error(y_test,pred))
print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
score = pipeline.score(X_train,y_train)
print('Model determination: ', score)
plt.scatter(X_test,y_test)
plt.plot(sorted(X_test),pipeline.predict(sorted(X_test)))
```
### Encoding varieties
In the ideal world, we want to be able to predict prices for different pumpkin varieties using the same model. To take variety into account, we first need to convert it to numeric form, i.e. **encode** it. There are several ways we can do it:
* Simple numeric encoding, which builds a table of the different varieties and then replaces each variety name by its index in that table. This is not the best idea for linear regression, because linear regression takes the numeric value of the index into account, and that value is unlikely to correlate with the price.
* One-hot encoding, which replaces the `Variety` column by 4 different columns, one for each variety, each containing 1 if the corresponding row is of the given variety, and 0 otherwise.
The code below shows how we can one-hot encode a variety:
```
pd.get_dummies(new_pumpkins['Variety'])
```
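To see concretely why the first option (plain numeric encoding) can mislead a linear model, consider a minimal sketch; the variety names here are just examples:

```python
import pandas as pd

varieties = pd.Series(['PIE TYPE', 'FAIRYTALE', 'MINIATURE', 'PIE TYPE'])

# Numeric label encoding: categories get arbitrary integer codes
# (alphabetical here), so a linear model would treat MINIATURE (1) as lying
# "between" FAIRYTALE (0) and PIE TYPE (2) -- a meaningless ordering
codes = varieties.astype('category').cat.codes
print(codes.tolist())        # [2, 0, 1, 2]

# One-hot encoding avoids this: one independent 0/1 column per variety
one_hot = pd.get_dummies(varieties)
print(one_hot.shape)         # (4, 3)
```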
### Linear Regression on Variety
We will now use the same code as above, but instead of `DayOfYear` we will use our one-hot-encoded variety as input:
```
X = pd.get_dummies(new_pumpkins['Variety'])
y = new_pumpkins['Price']
def run_linear_regression(X,y):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
lin_reg = LinearRegression()
lin_reg.fit(X_train,y_train)
pred = lin_reg.predict(X_test)
mse = np.sqrt(mean_squared_error(y_test,pred))
print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
score = lin_reg.score(X_train,y_train)
print('Model determination: ', score)
run_linear_regression(X,y)
```
We can also try using other features in the same manner, and combining them with numerical features, such as `Month` or `DayOfYear`:
```
X = pd.get_dummies(new_pumpkins['Variety']) \
.join(new_pumpkins['Month']) \
.join(pd.get_dummies(new_pumpkins['City'])) \
.join(pd.get_dummies(new_pumpkins['Package']))
y = new_pumpkins['Price']
run_linear_regression(X,y)
```
### Polynomial Regression
Polynomial regression can also be used with categorical features that are one-hot-encoded. The code to train polynomial regression would essentially be the same as we have seen above.
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline.fit(X_train,y_train)
pred = pipeline.predict(X_test)
mse = np.sqrt(mean_squared_error(y_test,pred))
print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
score = pipeline.score(X_train,y_train)
print('Model determination: ', score)
```
```
import scanpy as sp
import scrublet as scr
import pandas as pd
import matplotlib.pyplot as plt
first = sp.read_10x_h5('/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/more-late/Mm_CD45_IDH_SMAR_6mice_filtered_feature_bc_matrix.h5')
print(first.shape)
print(type(first))
print(first.obs_names)
print(first.var_names)
print(type(first.X))
scrub = scr.Scrublet(first.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
print(doublet_scores)
scrub.call_doublets(threshold=0.45)
scrub.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/first_doublet_scores.csv")
rh1 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_RH_1_v3_filtered_feature_bc_matrix.h5")
scrub_rh1 = scr.Scrublet(rh1.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_rh1.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_rh1.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/rh1_doublet_scores.csv")
rh2 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_RH_2_v3_filtered_feature_bc_matrix.h5")
scrub_rh2 = scr.Scrublet(rh2.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_rh2.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_rh2.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/rh2_doublet_scores.csv")
rh3 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_RH_3_v3_filtered_feature_bc_matrix.h5")
scrub_rh3 = scr.Scrublet(rh3.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_rh3.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_rh3.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/rh3_doublet_scores.csv")
rh4 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_RH_4_v3_filtered_feature_bc_matrix.h5")
scrub_rh4 = scr.Scrublet(rh4.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_rh4.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_rh4.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/rh4_doublet_scores.csv")
wt1 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_WT_1_v3_filtered_feature_bc_matrix.h5")
scrub_wt1 = scr.Scrublet(wt1.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_wt1.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_wt1.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/wt1_doublet_scores.csv")
wt2 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_WT_2_v3_filtered_feature_bc_matrix.h5")
scrub_wt2 = scr.Scrublet(wt2.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_wt2.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_wt2.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/wt2_doublet_scores.csv")
wt3 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_WT_3_v3_filtered_feature_bc_matrix.h5")
scrub_wt3 = scr.Scrublet(wt3.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_wt3.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_wt3.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/wt3_doublet_scores.csv")
wt4 = sp.read_10x_h5("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/counts/Early CD45 GL261 10x run/ea_WT_4_v3_filtered_feature_bc_matrix.h5")
scrub_wt4 = scr.Scrublet(wt4.X, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub_wt4.scrub_doublets(min_counts=2,
min_cells=3,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub_wt4.plot_histogram()
pd.DataFrame(doublet_scores).to_csv("/home/roman/Documents/Single cell analysis/10x-mouse-early-vs-late-wt-gbm/data/wt4_doublet_scores.csv")
```
```
import fastai
from fastai import * # Quick access to most common functionality
from fastai.vision import * # Quick access to computer vision functionality
from fastai.callbacks import *
from torchvision.models import vgg16_bn
PATH = Path('/DATA/kaggle/imgnetloc/ILSVRC/Data/CLS-LOC/')
PATH_TRN = PATH/'train'
sz_lr=72
scale,bs = 4,24
sz_hr = sz_lr*scale
classes = list(PATH_TRN.iterdir())
fnames_full = []
for class_folder in progress_bar(classes):
for fname in class_folder.iterdir():
fnames_full.append(fname)
np.random.seed(42)
keep_pct = 0.02
#keep_pct = 1.
keeps = np.random.rand(len(fnames_full)) < keep_pct
image_fns = np.array(fnames_full, copy=False)[keeps]
len(image_fns)
valid_pct=0.1
src = (ImageToImageList(image_fns)
.random_split_by_pct(valid_pct, seed=42)
.label_from_func(lambda x: x))
def get_data(bs, sz_lr, sz_hr, num_workers=12, **kwargs):
# tfms = get_transforms(do_flip=True, flip_vert=True,
# max_lighting=0.0, max_rotate=0.0,max_zoom=0.0,max_warp=0.0)
tfms = [[dihedral_affine(p=0.75), crop_pad(row_pct=0.5, col_pct=0.5)],
[crop_pad(row_pct=0.5, col_pct=0.5)]]
data = (src
.transform(tfms, size=sz_lr)
.transform_labels(size=sz_hr)
.databunch(bs=bs, num_workers=num_workers, **kwargs)
.normalize(imagenet_stats, do_y=True))
return data
sz_lr = 72
scale,bs = 4,24
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
data.train_ds[0:3]
x,y = data.dl().one_batch()
x.shape, y.shape
def make_img(x, idx=0):
return Image(torch.clamp(data.denorm(x.cpu()),0,1)[idx])
idx=5
x_img = make_img(x, idx)
y_img = make_img(y, idx)
x_img.show(), y_img.show()
```
## Model
```
wn = lambda x: torch.nn.utils.weight_norm(x)
def conv(ni, nf, kernel_size=3, actn=True):
layers = [wn(nn.Conv2d(ni, nf, kernel_size, padding=kernel_size//2))]
if actn: layers.append(nn.ReLU(True))
return nn.Sequential(*layers)
class ResSequential(nn.Module):
def __init__(self, layers, res_scale=1.0):
super().__init__()
self.res_scale = res_scale
self.m = nn.Sequential(*layers)
def forward(self, x):
x = x + self.m(x) * self.res_scale
return x
def res_block(nf):
return ResSequential(
[conv(nf, nf), conv(nf, nf, actn=False)],
0.1)
def upsample(ni, nf, scale):
layers = []
for i in range(int(math.log(scale,2))):
layers += [conv(ni, nf*4), nn.PixelShuffle(2)]
return nn.Sequential(*layers)
class SrResnet(nn.Module):
def __init__(self, nf, scale, n_res=8):
super().__init__()
features = [conv(3, 64)]
for i in range(n_res): features.append(res_block(64))
features += [conv(64,64), upsample(64, 64, scale),
# nn.BatchNorm2d(64),
conv(64, 3, actn=False)]
self.features = nn.Sequential(*features)
def forward(self, x): return self.features(x)
def icnr(x, scale, init=nn.init.kaiming_normal_):
new_shape = [int(x.shape[0] / (scale ** 2))] + list(x.shape[1:])
subkernel = torch.zeros(new_shape)
subkernel = init(subkernel)
subkernel = subkernel.transpose(0, 1)
subkernel = subkernel.contiguous().view(subkernel.shape[0],
subkernel.shape[1], -1)
kernel = subkernel.repeat(1, 1, scale ** 2)
transposed_shape = [x.shape[1]] + [x.shape[0]] + list(x.shape[2:])
kernel = kernel.contiguous().view(transposed_shape)
kernel = kernel.transpose(0, 1)
return kernel
model = SrResnet(64, scale)
#model = torch.load('old.pth')
# wd=1e-7
# learn = Learner(data, nn.DataParallel(model,[0,2]), loss_func=F.mse_loss, opt_func=torch.optim.Adam, wd=wd, true_wd=False)
sz_lr = 288
scale,bs = 4,12
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
learn = Learner(data, nn.DataParallel(model), loss_func=F.mse_loss)
learn.lr_find()
learn.recorder.plot()
learn = learn.load('pixel_v2')
lr = 1e-3
learn.fit_one_cycle(1, lr)
lr = 1e-3
learn.fit_one_cycle(1, lr)
lr = 2e-4
learn.fit_one_cycle(1, lr)
learn.save('pixel_v2')
sz_lr = 72
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
learn = Learner(data, nn.DataParallel(model), loss_func=F.mse_loss)
learn = learn.load('pixel_v2')
def plot_x_y_pred(x, pred, y, figsize):
rows=x.shape[0]
fig, axs = plt.subplots(rows,3,figsize=figsize)
for i in range(rows):
make_img(x, i).show(ax=axs[i, 0])
make_img(pred, i).show(ax=axs[i, 1])
make_img(y, i).show(ax=axs[i, 2])
plt.tight_layout()
x, y = next(iter(learn.data.valid_dl))
y_pred = model(x)
x[0:3].shape
make_img(y_pred.detach(), 2).show(), make_img(y.detach(), 2).show()
def plot_some(learn, do_denorm=True):
x, y = next(iter(learn.data.valid_dl))
y_pred = model(x)
y_pred = y_pred.detach()
x = x.detach()
y = y.detach()
plot_x_y_pred(x[0:3], y_pred[0:3], y[0:3], figsize=y_pred.shape[-2:])
plot_some(learn)
m_vgg_feat = vgg16_bn(True).cuda().eval().features
requires_grad(m_vgg_feat, False)
blocks = [i-1 for i,o in enumerate(children(m_vgg_feat))
if isinstance(o,nn.MaxPool2d)]
blocks, [m_vgg_feat[i] for i in blocks]
class FeatureLoss(nn.Module):
def __init__(self, m_feat, layer_ids, layer_wgts):
super().__init__()
self.m_feat = m_feat
self.loss_features = [self.m_feat[i] for i in layer_ids]
self.hooks = hook_outputs(self.loss_features, detach=False)
self.wgts = layer_wgts
self.metrics = {}
self.metric_names = ['L1'] + [f'feat_{i}' for i in range(len(layer_ids))]
for name in self.metric_names: self.metrics[name] = 0.
def make_feature(self, bs, o, clone=False):
feat = o.view(bs, -1)
if clone: feat = feat.clone()
return feat
def make_features(self, x, clone=False):
bs = x.shape[0]
self.m_feat(x)
return [self.make_feature(bs, o, clone) for o in self.hooks.stored]
def forward(self, input, target):
out_feat = self.make_features(target, clone=True)
in_feat = self.make_features(input)
l1_loss = F.l1_loss(input,target)/100
self.feat_losses = [l1_loss]
self.feat_losses += [F.mse_loss(f_in, f_out)*w
for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
for i,name in enumerate(self.metric_names): self.metrics[name] = self.feat_losses[i]
self.metrics['L1'] = l1_loss
self.loss = sum(self.feat_losses)
return self.loss
class ReportLossMetrics(LearnerCallback):
_order = -20 #Needs to run before the recorder
def on_train_begin(self, **kwargs):
self.metric_names = self.learn.loss_func.metric_names
self.learn.recorder.add_metric_names(self.metric_names)
def on_epoch_begin(self, **kwargs):
self.metrics = {}
for name in self.metric_names:
self.metrics[name] = 0.
self.nums = 0
def on_batch_end(self, last_target, train, **kwargs):
if not train:
bs = last_target.size(0)
for name in self.metric_names:
self.metrics[name] += bs * self.learn.loss_func.metrics[name]
self.nums += bs
def on_epoch_end(self, **kwargs):
if self.nums:
metrics = [self.metrics[name]/self.nums for name in self.metric_names]
self.learn.recorder.add_metrics(metrics)
sz_lr = 32
scale,bs = 4,24
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
feat_loss = FeatureLoss(m_vgg_feat, blocks[0:4], [0.25,0.25,0.25,0.25])
model = SrResnet(64, scale)
learn = Learner(data, nn.DataParallel(model), loss_func=feat_loss, callback_fns=[ReportLossMetrics])
learn.load('pixel_v2')
model = learn.model.module
nres = 8
conv_shuffle = model.features[nres+2][0][0]
kernel = icnr(conv_shuffle.weight, scale=scale)
conv_shuffle.weight.data.copy_(kernel);
conv_shuffle = model.features[nres+2][2][0]
kernel = icnr(conv_shuffle.weight, scale=scale)
conv_shuffle.weight.data.copy_(kernel);
learn.freeze_to(999)
for i in range(10,12): requires_grad(model.features[i], True)
learn.lr_find()
learn.recorder.plot()
sz_lr = 128
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
model = SrResnet(64, scale)
learn = Learner(data, nn.DataParallel(model), loss_func=feat_loss, callback_fns=[ReportLossMetrics])
learn = learn.load('enhance_feat_v2')
lr=1e-3
learn.unfreeze()
learn.fit_one_cycle(1, lr)
learn.save('enhance_feat_v2')
learn = learn.load('enhance_feat_v2')
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
lr=1e-5
learn.fit_one_cycle(1, lr)
learn.save('enhance_feat2')
learn = learn.load('enhance_feat2')
sz_lr = 72
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
learn = Learner(data, nn.DataParallel(model), loss_func=F.mse_loss)
learn = learn.load('enhance_feat_v2')
plot_some(learn)
sz_lr = 72
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
learn = Learner(data, nn.DataParallel(model), loss_func=F.mse_loss)
learn = learn.load('pixel_v2')
plot_some(learn)
```
# Alternating direction algorithms for l1 problems in compressive sensing
We provide a port of [YALL1 basic](http://yall1.blogs.rice.edu/) package. This is built on top of JAX and can be used to solve the following $\ell_1$ minimization problems.
The basis pursuit problem
$$
\tag{BP}
{\min}_{x} \| W x\|_{w,1} \; \text{s.t.} \, A x = b
$$
The L1/L2 minimization or basis pursuit denoising problem
$$
\tag{L1/L2}
{\min}_{x} \| W x\|_{w,1} + \frac{1}{2\rho}\| A x - b \|_2^2
$$
The L1 minimization problem with L2 constraints
$$
\tag{L1/L2con}
{\min}_{x} \| W x\|_{w,1} \; \text{s.t.} \, \| A x - b \|_2 \leq \delta
$$
We also support the corresponding non-negative counterparts.
The nonnegative basis pursuit problem
$$
\tag{BP+}
{\min}_{x} \| W x\|_{w,1} \; \text{s.t.} \, A x = b \, \, \text{and} \, x \succeq 0
$$
The nonnegative L1/L2 minimization or basis pursuit denoising problem
$$
\tag{L1/L2+}
{\min}_{x} \| W x\|_{w,1} + \frac{1}{2\rho}\| A x - b \|_2^2 \; \text{s.t.} \, x \succeq 0
$$
The nonnegative L1 minimization problem with L2 constraints
$$
\tag{L1/L2con+}
{\min}_{x} \| W x\|_{w,1} \; \text{s.t.} \, \| A x - b \|_2 \leq \delta \, \, \text{and} \, x \succeq 0
$$
In the above, $W$ is a sparsifying basis such that $\alpha = W^T x$ is a sparse representation of $x$ in $W$.
For simple examples, we can assume $W = I$ is the identity basis.
The $\| \cdot \|_{w,1}$ is the weighted L1 (semi-) norm defined as
$$
\|x \|_{w,1} = \sum_{i=1}^n w_i |x_i|
$$
for a given non-negative weight vector $w$. In the simplest case, we take $w_i = 1$ for all $i$, which reduces it to the familiar $\ell_1$ norm.
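As a quick sanity check, the weighted $\ell_1$ norm above can be computed directly. A NumPy sketch (the `x` and `w` values here are made up for illustration):

```python
import numpy as np

def weighted_l1(x, w):
    """Weighted l1 (semi-)norm: sum_i w_i * |x_i|."""
    return float(np.sum(w * np.abs(x)))

x = np.array([1.0, -2.0, 0.0, 3.0])
w = np.array([1.0, 0.5, 2.0, 1.0])

print(weighted_l1(x, w))           # 1*1 + 0.5*2 + 2*0 + 1*3 = 5.0
print(weighted_l1(x, np.ones(4)))  # w = 1 reduces to the plain l1 norm: 6.0
```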
Import relevant libraries
```
from jax.config import config
config.update("jax_enable_x64", True)
import jax
import jax.numpy as jnp
import numpy as np
from jax import random
from jax import jit, grad, vmap
norm = jnp.linalg.norm
import cr.sparse as crs
import cr.sparse.dict as crdict
import cr.sparse.data as crdata
import cr.sparse.lop as lop
from cr.sparse.cvx.adm import yall1
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
```
Set up a problem with a random sensing matrix with orthonormal rows
```
N = 1000
M = 300
K = 50
key = random.PRNGKey(0)
key1, key2, key3, key4 = random.split(key, 4)
A = crdict.random_orthonormal_rows(key1, M, N)
crs.has_orthogonal_rows(A)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.imshow(A, extent=[0, 2, 0, 1])
plt.gray()
plt.colorbar()
plt.title(r'$A$');
x, omega = crdata.sparse_normal_representations(key2, N, K, 1)
x = jnp.squeeze(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(x, markerfmt='.');
# Convert A into a linear operator
A = lop.matrix(A)
```
## Standard sparse recovery problems for compressive sensing
### Basis pursuit
The simple form of basis pursuit problem is:
$$
{\min}_{x} \| x\|_{1} \; \text{s.t.} \, A x = b
$$
```
# Compute the measurements
b0 = A.times(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(b0, markerfmt='.');
sol = yall1.solve(A, b0)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-x)/norm(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(x, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, b0).x.block_until_ready()
```
### Basis pursuit denoising
The simple form of L1-L2 unconstrained minimization or basis pursuit denoising is:
$$
{\min}_{x} \| x\|_{1} + \frac{1}{2\rho}\| A x - b \|_2^2
$$
```
sigma = 0.01
noise = sigma * random.normal(key3, (M,))
crs.snr(b0, noise)
b = b0 + noise
sol = yall1.solve(A, b, rho=0.01)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-x)/norm(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(x, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, b, rho=0.01).x.block_until_ready()
```
### Basis pursuit with inequality constraints
The simple form of L1 minimization with L2 constraints or basis pursuit with inequality constraints is:
$$
{\min}_{x} \| x\|_{1} \; \text{s.t.} \, \| A x - b \|_2 \leq \delta
$$
```
delta = float(norm(noise))
delta
sol = yall1.solve(A, b, delta=delta)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-x)/norm(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(x, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, b, delta=delta).x.block_until_ready()
```
## Non-negative counterparts
In this case, the signal $x$, with sparse representation $\alpha = W^T x$, has only non-negative entries, i.e. every non-zero entry of $x$ is positive. This is typical for images.
Let us construct a sparse representation with non-negative entries.
```
xp = jnp.abs(x)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(xp, markerfmt='.');
```
### Non-negative basis pursuit
The simple form of basis pursuit for non-negative $x$ is:
$$
{\min}_{x} \| x\|_{1} \; \text{s.t.} \, A x = b \, \, \text{and} \, x \succeq 0
$$
```
b0p = A.times(xp)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(b0p, markerfmt='.');
sol = yall1.solve(A, b0p, nonneg=True)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-xp)/norm(xp)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(xp, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, b0p, nonneg=True).x.block_until_ready()
```
### Non-negative basis pursuit denoising
The simple form of L1-L2 unconstrained minimization with non-negative $x$ is:
$$
{\min}_{x} \| x\|_{1} + \frac{1}{2\rho}\| A x - b \|_2^2 \; \text{s.t.} \, x \succeq 0
$$
```
crs.snr(b0p, noise)
bp = b0p + noise
sol = yall1.solve(A, bp, nonneg=True, rho=0.01)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-xp)/norm(xp)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(xp, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, bp, nonneg=True, rho=0.01).x.block_until_ready()
```
### Non-negative basis pursuit with inequality constraints
The simple form of L1 minimization with L2 constraints for non-negative $x$ is:
$$
{\min}_{x} \| x\|_{1} \; \text{s.t.} \, \| A x - b \|_2 \leq \delta \, \, \text{and} \, x \succeq 0
$$
```
# Note nonneg=True for the non-negative variant of the problem
sol = yall1.solve(A, bp, delta=delta, nonneg=True)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-xp)/norm(xp)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(xp, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
%timeit yall1.solve(A, bp, delta=delta, nonneg=True).x.block_until_ready()
```
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from os.path import join
plt.style.use(["seaborn", "thesis"])
plt.rcParams["figure.figsize"] = (8, 4)
```
# Dataset
```
from SCFInitialGuess.utilities.dataset import extract_triu_batch, AbstractDataset
from sklearn.model_selection import train_test_split
#data_path = "../thesis/dataset/EthenT/"
#postfix = "EthenT"
#dim = 72
#N_ELECTRONS = 16
#basis = "6-311++g**"
#data_path = "../thesis/dataset/TSmall/"
#postfix = "TSmall"
#dim = 70
#N_ELECTRONS = 30
#basis = "3-21++g*"
data_path = "../thesis/dataset/ethen0/"
postfix = "ethen0_6-311++g**"
dim = 72
N_ELECTRONS = 30
basis = "6-311++g**"
#data_path = "../../../butadien/data/"
#postfix = ""
#dim = 26
def split(x, y, ind):
return x[:ind], y[:ind], x[ind:], y[ind:]
#[S, P] = np.load()
S = np.load(join(data_path, "S" + postfix + ".npy"))
P = np.load(join(data_path, "P" + postfix + ".npy"))
#index = np.load(join(data_path, "index" + postfix + ".npy"))
#ind = int(0.8 * len(S))
ind = 0
molecules = np.load(join(data_path, "molecules" + postfix + ".npy"))
molecules = (molecules[:ind], molecules[ind:])
#molecules = ([], molecules)
s_triu_norm, mu, std = AbstractDataset.normalize(S)
s_train, p_train, s_test, p_test = split(S.reshape(-1, dim, dim), P.reshape(-1, dim, dim), ind)
#s_test = S.reshape(-1, dim, dim)
#p_test = S.reshape(-1, dim, dim)
from SCFInitialGuess.utilities.analysis import make_results_str, measure_all_quantities
from SCFInitialGuess.utilities.dataset import StaticDataset
dataset = StaticDataset(
train=(s_train, p_train),
#train=(None, None),
validation=(None, None),
test=(s_test, p_test),
mu=mu,
std=std
)
def f(x,y):
return 4*x + 6 * y
f(13,3)
from pyscf.scf import hf
h_test = [hf.get_hcore(mol.get_pyscf_molecule()) for mol in molecules[1]]
```
# Utilities
```
from SCFInitialGuess.utilities.dataset import reconstruct_from_triu
def embedd(x, y):
    # Overwrite the entries of x selected by the (global) atomic block mask
    # with the corresponding entries of y
    p = x.copy()
    p[mask] = (y.copy())[mask]
    return p
def embedd_batch(p_batch):
p_embedded = []
for (p_guess, p_conv) in zip(p_batch, p_test):
p_embedded.append(embedd(p_guess, p_conv))
return np.array(p_embedded)
from SCFInitialGuess.utilities.constants import number_of_basis_functions as N_BASIS
mol = molecules[1][0]
mask = np.zeros((dim, dim))
current_dim = 0
for atom in mol.species:
# calculate block range
index_start = current_dim
current_dim += N_BASIS[basis][atom]
index_end = current_dim
# calculate logical vector
L = np.arange(dim)
L = np.logical_and(index_start <= L, L < index_end)
m = np.logical_and.outer(L, L)
mask = np.logical_or(mask, m)
#mask
import seaborn as sns
sns.heatmap(mask.astype("int"))
np.mean(np.abs(p_test.flatten() - embedd_batch(p_test).flatten()))
from pyscf.scf import hf
def fock_from_density_batch(p_batch):
f = []
for p, s, h, mol in zip(p_batch, s_test, h_test, molecules[1]):
f.append(hf.get_fock(None, h1e=h, s1e=s, vhf=hf.get_veff(mol=mol.get_pyscf_molecule(), dm=p), dm=p))
return np.array(f)
from SCFInitialGuess.utilities.dataset import density_from_fock
def density_from_fock_batch(f_batch):
p = []
for (s, f, mol) in zip(s_test, f_batch, molecules[1]):
p.append(density_from_fock(f, s, mol.get_pyscf_molecule()))
return np.array(p)
```
# GWH
```
from pyscf.scf import hf
p_gwh = np.array([
hf.init_guess_by_wolfsberg_helmholtz(mol.get_pyscf_molecule()) for mol in molecules[1]
]).astype("float64")
from SCFInitialGuess.utilities.analysis import make_results_str, measure_all_quantities, mf_initializer
print(make_results_str(measure_all_quantities(
p_gwh,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
```
# Embedded GWH
```
p_embedded_gwh = embedd_batch(p_gwh)
from SCFInitialGuess.utilities.analysis import mf_initializer as mf_initializer
print(make_results_str(measure_all_quantities(
p_embedded_gwh,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
51/1001
```
# Embedded GWH + 1 Iteration
```
f_embedded_gwh = fock_from_density_batch(p_embedded_gwh)
p_embedded_gwh_test = density_from_fock_batch(f_embedded_gwh)
from SCFInitialGuess.utilities.analysis import mf_initializer as mf_initializer
print(make_results_str(measure_all_quantities(
p_embedded_gwh_test,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
```
# SAD
```
from pyscf.scf import hf
p_sad = np.array([
hf.init_guess_by_atom(mol.get_pyscf_molecule()) for mol in molecules[1]
]).astype("float64")
sns.heatmap((abs(p_sad[0]) < 1e-12).astype("int"))
sns.heatmap((abs(p_gwh[0]) < 0.05).astype("int"))
sns.heatmap((abs(p_test[30]) < 0.05).astype("int"))
from SCFInitialGuess.utilities.analysis import make_results_str, measure_all_quantities, mf_initializer
print(make_results_str(measure_all_quantities(
p_sad,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
47/1001
```
# Embedded zeros
```
p_embedded_zeros = embedd_batch(np.zeros(p_test.shape))
sns.heatmap((abs(p_embedded_zeros[0]) < 0.05).astype("int"))
from SCFInitialGuess.utilities.analysis import mf_initializer as mf_initializer
print(make_results_str(measure_all_quantities(
p_embedded_zeros,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
```
# Embedded zeros + 1 Iteration?
```
f_embedded_zeros = fock_from_density_batch(p_embedded_zeros)
p_embedded_zeros_test = density_from_fock_batch(f_embedded_zeros,)
from SCFInitialGuess.utilities.analysis import mf_initializer, make_results_str, measure_all_quantities
print(make_results_str(measure_all_quantities(
p_embedded_zeros_test,
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
```
```
77/1001
```
# Embedded GWH w/ Self-Overlap
```
from SCFInitialGuess.utilities.constants import number_of_basis_functions as N_BASIS
mol = molecules[1][0]
mask_self_overlap = np.zeros((dim, dim))
current_dim_i = 0
for i, atom_i in enumerate(mol.species):
# calculate block range
index_start_i = current_dim_i
current_dim_i += N_BASIS[basis][atom_i]
index_end_i = current_dim_i
# calculate logical vector
L_i = np.arange(dim)
L_i = np.logical_and(index_start_i <= L_i, L_i < index_end_i)
current_dim_j = 0
for j, atom_j in enumerate(mol.species):
#print(str(i) + ", " + str(j))
#print(str(atom_i) + ", " + str(atom_j))
# calculate block range
index_start_j = current_dim_j
current_dim_j += N_BASIS[basis][atom_j]
index_end_j = current_dim_j
if i == j:
continue
if atom_i == atom_j:
# calculate logical vector
L_j = np.arange(dim)
L_j = np.logical_and(index_start_j <= L_j, L_j < index_end_j)
m = np.logical_and.outer(L_i, L_j)
mask_self_overlap = np.logical_or(mask_self_overlap, m)
#mask
import seaborn as sns
sns.heatmap(mask_self_overlap.astype("int"))
from SCFInitialGuess.utilities.dataset import reconstruct_from_triu
def embedd_self_ovlp(x, y):
p = x.copy()
p[mask_self_overlap] = (y.copy())[mask_self_overlap]
return p
def embedd_batch_self_ovlp(p_batch):
p_embedded = []
for (p_guess, p_conv) in zip(p_batch, p_test):
p_embedded.append(embedd_self_ovlp(p_guess, p_conv))
return np.array(p_embedded)
p_embedded_gwh_self_ovlp = embedd_batch_self_ovlp(p_embedded_gwh)
sns.heatmap(p_embedded_gwh_self_ovlp[0] - p_test[0], square=True)
from SCFInitialGuess.utilities.analysis import mf_initializer, make_results_str, measure_all_quantities
print(make_results_str(measure_all_quantities(
density_from_fock_batch(fock_from_density_batch(p_embedded_gwh_self_ovlp)),
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
6/(len(p_test))
```
# Embedded GWH w/ OFF overlap
```
from SCFInitialGuess.utilities.constants import number_of_basis_functions as N_BASIS
mol = molecules[1][0]
mask_off_overlap = np.zeros((dim, dim))
current_dim_i = 0
for i, atom_i in enumerate(mol.species):
# calculate block range
index_start_i = current_dim_i
current_dim_i += N_BASIS[basis][atom_i]
index_end_i = current_dim_i
# calculate logical vector
L_i = np.arange(dim)
L_i = np.logical_and(index_start_i <= L_i, L_i < index_end_i)
current_dim_j = 0
for j, atom_j in enumerate(mol.species):
#print(str(i) + ", " + str(j))
#print(str(atom_i) + ", " + str(atom_j))
# calculate block range
index_start_j = current_dim_j
current_dim_j += N_BASIS[basis][atom_j]
index_end_j = current_dim_j
if i == j:
continue
if atom_i != atom_j:
# calculate logical vector
L_j = np.arange(dim)
L_j = np.logical_and(index_start_j <= L_j, L_j < index_end_j)
m = np.logical_and.outer(L_i, L_j)
mask_off_overlap = np.logical_or(mask_off_overlap, m)
#mask
import seaborn as sns
sns.heatmap(mask_off_overlap.astype("int"))
from SCFInitialGuess.utilities.dataset import reconstruct_from_triu
def embedd_off_ovlp(x, y):
p = x.copy()
p[mask_off_overlap] = (y.copy())[mask_off_overlap]
return p
def embedd_batch_off_ovlp(p_batch):
p_embedded = []
for (p_guess, p_conv) in zip(p_batch, p_test):
p_embedded.append(embedd_off_ovlp(p_guess, p_conv))
return np.array(p_embedded)
p_embedded_gwh_off_ovlp = embedd_batch_off_ovlp(p_embedded_gwh)
sns.heatmap(p_embedded_gwh_off_ovlp[0] - p_test[0], square=True)
from SCFInitialGuess.utilities.analysis import mf_initializer, make_results_str, measure_all_quantities
print(make_results_str(measure_all_quantities(
density_from_fock_batch(fock_from_density_batch(p_embedded_gwh_off_ovlp)),
dataset,
molecules[1],
N_ELECTRONS,
mf_initializer,
dim,
is_triu=False,
is_dataset_triu=False,
s=S[ind:]
)))
7/len(p_test)
```
# Routing optimization in a humanitarian context
> Note: All notebooks need the [environment dependencies](https://github.com/GIScience/openrouteservice-examples#local-installation)
> as well as an [openrouteservice API key](https://openrouteservice.org/dev/#/signup) to run
Routing optimization generally solves the [Vehicle Routing Problem](https://en.wikipedia.org/wiki/Vehicle_routing_problem)
(a simple example being the more widely known [Traveling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem)).
A more complex example would be the distribution of goods by a fleet of multiple vehicles to dozens of locations,
where each vehicle has certain time windows in which it can operate and each delivery location has certain time windows
in which it can be served (e.g. opening times of a supermarket).
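To build some intuition for why such problems need dedicated solvers, here is a tiny nearest-neighbour heuristic for the single-vehicle case, in plain Python on a made-up symmetric distance matrix. Real VRP solvers (like the one used below) handle capacities and time windows and produce far better tours than this greedy sketch:

```python
def nearest_neighbour_route(dist, start=0):
    """Greedy tour: always drive to the closest unvisited location."""
    n = len(dist)
    route, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda j: dist[here][j])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Toy symmetric distance matrix between 4 locations (0 = depot)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(nearest_neighbour_route(dist))  # → [0, 1, 3, 2]
```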
In this example we'll look at a real-world scenario of **distributing medical goods during disaster response**
following one of the worst tropical cyclones ever recorded in Africa: **Cyclone Idai**.
*Cyclone Idai floods in false color image on 19.03.2019; © Copernicus Sentinel-1 -satellite (modified Copernicus Sentinel data (2019), processed by ESA, CC BY-SA 3.0 IGO), [source](http://www.esa.int/spaceinimages/Images/2019/03/Floods_imaged_by_Copernicus_Sentinel-1)*
In this scenario, a humanitarian organization shipped much needed medical goods to Beira, Mozambique, which were then
dispatched to local vehicles to be delivered across the region.
The supplies included vaccinations and medications for water-borne diseases such as Malaria and Cholera,
so distribution efficiency was critical to contain disastrous epidemics.
We'll solve this complex problem with the **optimization** endpoint of [openrouteservice](https://openrouteservice.org).
```
import folium
from folium.plugins import BeautifyIcon
import pandas as pd
import openrouteservice as ors
```
## The logistics setup
In total, 20 sites were identified as in need of the medical supplies, while 3 vehicles were scheduled for delivery.
Let's assume there was only one type of goods, e.g. standard moving boxes full of one medication.
(In reality there were dozens of different good types, which can be modelled with the same workflow,
but that'd unnecessarily bloat this example).
The **vehicles** were all located in the port of Beira and had the same following constraints:
- operation time windows from 8:00 to 20:00
- loading capacity of 300 *[arbitrary unit]*
The **delivery locations** were mostly in the Beira region, but some extended ~200 km to the north of Beira.
Their needs range from 10 to 148 units of the arbitrary medication goods
(consult the file located at `../resources/data/idai_health_sites.csv`). Let's look at them on a map.
```
# First define the map centered around Beira
m = folium.Map(location=[-18.63680, 34.79430], tiles='cartodbpositron', zoom_start=8)
# Next load the delivery locations from CSV file at ../resources/data/idai_health_sites.csv
# ID, Lat, Lon, Open_From, Open_To, Needed_Amount
deliveries_data = pd.read_csv(
'../resources/data/idai_health_sites.csv',
index_col="ID",
parse_dates=["Open_From", "Open_To"]
)
# Plot the locations on the map with more info in the ToolTip
for location in deliveries_data.itertuples():
tooltip = folium.map.Tooltip("<h4><b>ID {}</b></p><p>Supplies needed: <b>{}</b></p>".format(
location.Index, location.Needed_Amount
))
folium.Marker(
location=[location.Lat, location.Lon],
tooltip=tooltip,
icon=BeautifyIcon(
icon_shape='marker',
number=int(location.Index),
spin=True,
text_color='red',
background_color="#FFF",
inner_icon_style="font-size:12px;padding-top:-5px;"
)
).add_to(m)
# The vehicles are all located at the port of Beira
depot = [-19.818474, 34.835447]
folium.Marker(
location=depot,
icon=folium.Icon(color="green", icon="bus", prefix='fa'),
setZIndexOffset=1000
).add_to(m)
m
```
## The routing problem setup
Now that we have described the setup sufficiently, we can start to set up our actual Vehicle Routing Problem.
For this example we're using the FOSS library of [Vroom](https://github.com/VROOM-Project/vroom), which has
[recently seen](http://k1z.blog.uni-heidelberg.de/2019/01/24/solve-routing-optimization-with-vroom-ors/) support for
openrouteservice and is available through our APIs.
To properly describe the problem in algorithmic terms, we have to provide the following information:
- **vehicles start/end address**: vehicle depot in Beira's port
- **vehicle capacity**: 300
- **vehicle operational times**: 08:00 - 20:00
- **service location**: delivery location
- **service time windows**: individual delivery location's time window
- **service amount**: individual delivery location's needs
We defined all these parameters either in code above or in the data sheet located in
`../resources/data/idai_health_sites.csv`.
Now we only have to wrap this information into our code and send a request to the openrouteservice optimization service at
[`https://api.openrouteservice.org/optimization`](https://openrouteservice.org/dev/#/api-docs/optimization/post).
```
# Define the vehicles
# https://openrouteservice-py.readthedocs.io/en/latest/openrouteservice.html#openrouteservice.optimization.Vehicle
vehicles = list()
for idx in range(3):
vehicles.append(
ors.optimization.Vehicle(
id=idx,
start=list(reversed(depot)),
# end=list(reversed(depot)),
capacity=[300],
time_window=[1553241600, 1553284800] # Fri 8-20:00, expressed in POSIX timestamp
)
)
# Next define the delivery stations
# https://openrouteservice-py.readthedocs.io/en/latest/openrouteservice.html#openrouteservice.optimization.Job
deliveries = list()
for delivery in deliveries_data.itertuples():
deliveries.append(
ors.optimization.Job(
id=delivery.Index,
location=[delivery.Lon, delivery.Lat],
service=1200, # Assume 20 minutes at each site
amount=[delivery.Needed_Amount],
time_windows=[[
int(delivery.Open_From.timestamp()), # VROOM expects UNIX timestamp
int(delivery.Open_To.timestamp())
]]
)
)
```
With that set up we can now perform the actual request and let openrouteservice calculate the optimal vehicle schedule
for all deliveries.
```
# Initialize a client and make the request
ors_client = ors.Client(key='your_key') # Get an API key from https://openrouteservice.org/dev/#/signup
result = ors_client.optimization(
jobs=deliveries,
vehicles=vehicles,
geometry=True
)
# Add the output to the map
for color, route in zip(['green', 'red', 'blue'], result['routes']):
decoded = ors.convert.decode_polyline(route['geometry']) # Route geometry is encoded
gj = folium.GeoJson(
name='Vehicle {}'.format(route['vehicle']),
data={"type": "FeatureCollection", "features": [{"type": "Feature",
"geometry": decoded,
"properties": {"color": color}
}]},
style_function=lambda x: {"color": x['properties']['color']}
)
gj.add_child(folium.Tooltip(
"""<h4>Vehicle {vehicle}</h4>
<b>Distance</b> {distance} m <br>
<b>Duration</b> {duration} secs
""".format(**route)
))
gj.add_to(m)
folium.LayerControl().add_to(m)
m
```
## Data view
Plotting it on a map is nice, but let's add a little more context in the form of data tables.
First the overall trip schedule:
### Overall schedule
```
# Only extract relevant fields from the response
extract_fields = ['distance', 'amount', 'duration']
data = [{key: route[key] for key in extract_fields} for route in result['routes']]
vehicles_df = pd.DataFrame(data)
vehicles_df.index.name = 'vehicle'
vehicles_df
```
So every vehicle's capacity is almost fully exploited. That's good.
How about a look at the individual service stations:
```
# Create a list to display the schedule for all vehicles
stations = list()
for route in result['routes']:
vehicle = list()
for step in route["steps"]:
vehicle.append(
[
step.get("job", "Depot"), # Station ID
step["arrival"], # Arrival time
step["arrival"] + step.get("service", 0), # Departure time
]
)
stations.append(vehicle)
```
Now we can look at each individual vehicle's timetable:
### Vehicle 0
```
df_stations_0 = pd.DataFrame(stations[0], columns=["Station ID", "Arrival", "Departure"])
df_stations_0['Arrival'] = pd.to_datetime(df_stations_0['Arrival'], unit='s')
df_stations_0['Departure'] = pd.to_datetime(df_stations_0['Departure'], unit='s')
df_stations_0
```
### Vehicle 1
```
df_stations_1 = pd.DataFrame(stations[1], columns=["Station ID", "Arrival", "Departure"])
df_stations_1['Arrival'] = pd.to_datetime(df_stations_1['Arrival'], unit='s')
df_stations_1['Departure'] = pd.to_datetime(df_stations_1['Departure'], unit='s')
df_stations_1
```
### Vehicle 2
```
df_stations_2 = pd.DataFrame(stations[2], columns=["Station ID", "Arrival", "Departure"])
df_stations_2['Arrival'] = pd.to_datetime(df_stations_2['Arrival'], unit='s')
df_stations_2['Departure'] = pd.to_datetime(df_stations_2['Departure'], unit='s')
df_stations_2
```
<h1 id="Introduction-to-Python-and-Natural-Language-Technologies">Introduction to Python and Natural Language Technologies</h1>
<h2 id="Laboratory-06,-NLP-Introduction">Laboratory 06, NLP Introduction</h2>
<p><strong>March 18, 2020</strong></p>
<p><strong>Ádám Kovács</strong></p>
<p>During this laboratory we are going to use the classification dataset of SemEval 2019 - Task 6, called Identifying and Categorizing Offensive Language in Social Media.</p>
<h2 id="Preparation">Preparation</h2>
<p style="padding-left: 40px;"><a href="http://sandbox.hlt.bme.hu/~adaamko/glove.6B.100d.txt" target="_blank" rel="noopener">Download GloVe</a> (and place it into this directory)</p>
<p style="padding-left: 40px;">Download the dataset (with python code)</p>
```
import os
if not os.path.isdir('./data'):
os.mkdir('./data')
import urllib.request
# urlretrieve downloads the file directly to the given path
urllib.request.urlretrieve("http://sandbox.hlt.bme.hu/~adaamko/offenseval.tsv", "data/offenseval.tsv")
```
# 1. Train a Logistic Regression on the dataset
Use a CountVectorizer for featurizing your data. You can reuse the code presented during the lecture
## 1.1 Read in the dataset into a Pandas DataFrame
Use `pd.read_csv` with the correct parameters to read in the dataset. If done correctly, the `DataFrame` should have 3 columns:
`id`, `tweet`, and `subtask_a`.
```
import pandas as pd
import numpy as np
def read_dataset():
# YOUR CODE HERE
raise NotImplementedError()
train_data_unprocessed = read_dataset()
assert type(train_data_unprocessed) == pd.core.frame.DataFrame
assert len(train_data_unprocessed.columns) == 3
assert (train_data_unprocessed.columns == ['id', 'tweet', 'subtask_a']).all()
```
## 1.2 Convert `subtask_a` into a binary label
The task is to classify the given tweets into two categories: _offensive (OFF)_ and _not offensive (NOT)_. For machine learning algorithms you will need integer labels instead of strings. Add a new column to the dataframe called `label` and transform the `subtask_a` column into a binary integer label.
```
def transform(train_data):
# YOUR CODE HERE
raise NotImplementedError()
from pandas.api.types import is_numeric_dtype
train_data = transform(train_data_unprocessed)
assert "label" in train_data
assert is_numeric_dtype(train_data.label)
assert (train_data.label.isin([0,1])).all()
train_data.groupby("label").size()
```
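For reference, mapping string labels to integers is usually a one-liner in pandas. A toy sketch (on a made-up frame, not the lab data):

```python
import pandas as pd

toy = pd.DataFrame({"subtask_a": ["OFF", "NOT", "OFF"]})
# Map each string class to a binary integer label
toy["label"] = toy["subtask_a"].map({"OFF": 1, "NOT": 0})
print(toy["label"].tolist())  # → [1, 0, 1]
```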
## 1.3 Initialize CountVectorizer and _train_ it on the _tweet_ column of the dataset
The _training_ will prepare the vocabulary for us, so we will be able to use it for training a LogisticRegression model later. Set `max_features` to 5000 so the vocabulary won't be too big for training. Also filter out English `stop_words`.
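Conceptually, a count vectorizer just builds a word-to-index vocabulary and counts occurrences per document. A stripped-down pure-Python version of the idea (ignoring `max_features` and stop-word filtering):

```python
from collections import Counter

def fit_vocabulary(texts):
    """Assign an index to every word seen in the corpus."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def to_counts(text, vocab):
    """Bag-of-words vector: one count per vocabulary entry."""
    counts = Counter(text.lower().split())
    return [counts.get(word, 0) for word in vocab]

vocab = fit_vocabulary(["the dog barks", "the cat sleeps"])
print(to_counts("the dog and the cat", vocab))  # → [2, 1, 0, 1, 0]
```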
```
# We will need to use a random seed for our methods so they will be reproducible
SEED = 1234
from sklearn.feature_extraction.text import CountVectorizer
def prepare_vectorizer(train_data):
# YOUR CODE HERE
raise NotImplementedError()
vectorizer = prepare_vectorizer(train_data)
transformed = vectorizer.transform(["hello this is the intro to nlp"])
assert transformed.dtype == np.dtype('int64')
assert transformed.shape == (1, 5000)
```
## 1.4 Featurize the dataset with the prepared CountVectorizer, and split it into _train_ and _test_ dataset
You should use the random seed when splitting the dataset. The training and test datasets should be split 70% to 30%.
```
import gensim
from tqdm import tqdm
from sklearn.model_selection import train_test_split as split
def vectorize_to_bow(tr_data, tst_data, vectorizer):
# YOUR CODE HERE
raise NotImplementedError()
def get_features_and_labels(data, labels, vectorizer):
# tr_data,tst_data,tr_labels,tst_labels = split...
# ...
# tr_vecs, tst_vecs = vectorize_to_bow(...
# YOUR CODE HERE
raise NotImplementedError()
tr_vecs, tr_labels, tst_vecs, tst_labels = get_features_and_labels(train_data.tweet, train_data.label, vectorizer)
assert tr_vecs.shape == (9268, 5000)
assert tr_labels.shape == (9268,)
assert tst_vecs.shape == (3972, 5000)
assert tst_labels.shape == (3972,)
assert tr_vecs[0].toarray().shape == (1, 5000)
# Import a bunch of stuff from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# We will train a LogisticRegression algorithm for the classification
lr = LogisticRegression(n_jobs=-1)
```
## 1.5 Train and evaluate your method!
```
# Training on the train dataset
# YOUR CODE HERE
raise NotImplementedError()
from sklearn.utils.validation import check_is_fitted
from sklearn.exceptions import NotFittedError
try:
check_is_fitted(lr)
except NotFittedError as e:
assert None, repr(e)
from sklearn.metrics import accuracy_score
# Evaluation on the test dataset
def preds(lr, tst_vecs):
# YOUR CODE HERE
raise NotImplementedError()
# If you have done everything right, the accuracy should be around 75%
lr_pred = preds(lr, tst_vecs)
assert lr_pred.shape == (3972,)
print("Logistic Regression Test accuracy : {}".format(
accuracy_score(tst_labels, lr_pred)))
```
## 1.1 Change to TfidfVectorizer, and also change the configuration
Look up the documentation of [TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html). It has a lot of parameters to play with.
This time, change the parameters to include _maximum_ of __10000__ features. Also include filtering of _stopwords_ and _lowercasing_ the features. (hint: look at the parameter names in the documentation)
Also, [_n-gram_](https://en.wikipedia.org/wiki/N-gram) features can improve the performance of the model. A bigram is an n-gram with n=2, a trigram with n=3, etc.
Bigram features include not only the single words in the vocabulary, but the frequency of every occurring bigram in the text (e.g. the vocabulary will include not only the words _brown_ and _dog_ but also __brown dog__).
Change the configuration of the _TfidfVectorizer_ to also include the _bigrams_ and _trigrams_ in the vocabulary.
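Under the hood, an `ngram_range` like `(1, 3)` simply enumerates all unigrams, bigrams and trigrams of the token stream. A pure-Python sketch of that enumeration:

```python
def ngrams(tokens, n_min=1, n_max=3):
    """All n-grams with n_min <= n <= n_max, as space-joined strings."""
    out = []
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            out.append(" ".join(tokens[i:i + n]))
    return out

print(ngrams("the quick brown dog".split(), 1, 2))
# → ['the', 'quick', 'brown', 'dog', 'the quick', 'quick brown', 'brown dog']
```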
```
from sklearn.feature_extraction.text import TfidfVectorizer
def prepare_tfidf_vectorizer(train_data):
# YOUR CODE HERE
raise NotImplementedError()
tr_vecs, tr_labels, tst_vecs, tst_labels = get_features_and_labels(
train_data.tweet, train_data.label, prepare_tfidf_vectorizer(train_data))
# Train and evaluate!
lr = LogisticRegression(n_jobs=-1)
#lr.fit...
#lr_pred = ..
# YOUR CODE HERE
raise NotImplementedError()
from sklearn.utils.validation import check_is_fitted
from sklearn.exceptions import NotFittedError
try:
check_is_fitted(lr)
except NotFittedError as e:
assert None, repr(e)
```
## 1.2 Write a custom tokenizer for TfidfVectorizer
Right now, the vectorizer uses its own tokenizer for creating the vocabulary. You can also create a custom function and tell the vectorizer to use that when tokenizing the text.
Use [spacy](https://spacy.io/) for tokenization and write your own function.
Your function should:
- get a sentence as an input
- run spacy on the input text
- return a token list that includes:
- filtering of stop words
- filtering of punctuation
- lemmatizing the text
- lowercasing the text
```
import spacy
nlp = spacy.load("en")
def spacy_tokenizer(sentence):
# YOUR CODE HERE
raise NotImplementedError()
vectorizer_with_spacy = TfidfVectorizer(
max_features=10000, tokenizer=spacy_tokenizer)
assert (spacy_tokenizer("This is the NLP lab, this text should not contain any punctuations and stopwords, and the text should be lowercased.") == [
'nlp', 'lab', 'text', 'contain', 'punctuation', 'stopword', 'text', 'lowercase'])
X = vectorizer_with_spacy.fit(train_data.tweet)
tr_vecs, tr_labels, tst_vecs, tst_labels = get_features_and_labels(train_data.tweet, train_data.label, X)
# Train and evaluate!
# If you have done everything right you should get the same or a little better performance than the standard
# TfidfVectorizer and CountVectorizer
lr = LogisticRegression(n_jobs=-1)
#lr.fit...
#lr_pred = ..
# YOUR CODE HERE
raise NotImplementedError()
```
# 2. Word embeddings
## 2.1 Transform word vectors to sentence vector taking the average of the word vectors
Word vectors transform words to a vector space where similar words have similar vectors.
These vectors can be used as features for ML algorithms. But to featurize a sentence, you first need to create a _sentence vector_ from the vectors of its words. The easiest way of transforming word vectors into a sentence vector is to take the average of all the word vectors.

```
#Load the embedding file
import gensim
# note: raw GloVe files may first need conversion to word2vec format
# (e.g. with gensim's glove2word2vec script)
embedding_file = "glove.6B.100d.txt"
model = gensim.models.KeyedVectors.load_word2vec_format(embedding_file, binary=False)
vectorizer = model  # the KeyedVectors object itself holds the word vectors
vocab_length = len(model.vocab)
```
**Your transform function should:**
- tokenize the sentence with the spacy tokenizer
- get the embedding vector:
- get the embedding vector from the model if the word is in the vocabulary
- initialize a vector with zeros with the same dimension if the word is not in the vocabulary
- take the mean of the word vectors to return a sentence vector
```
def transform(words):
# YOUR CODE HERE
raise NotImplementedError()
assert transform("this is a nlp lab").shape == (100,)
```
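To illustrate the averaging scheme itself, here is a sketch with a tiny hand-made vocabulary standing in for the GloVe model (the names `toy_vectors` and `sentence_vector` are illustrative only):

```python
import numpy as np

# Toy 3-dimensional "embeddings" standing in for the GloVe model
toy_vectors = {
    "hello": np.array([1.0, 0.0, 0.0]),
    "world": np.array([0.0, 1.0, 0.0]),
}
DIM = 3

def sentence_vector(tokens):
    """Average the word vectors; unknown words contribute a zero vector."""
    vecs = [toy_vectors.get(t, np.zeros(DIM)) for t in tokens]
    return np.mean(vecs, axis=0)
```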
**We can now calculate similarities between sentences the same way we did between words! For this we need the cosine_similarity function!**
```
from sklearn.metrics.pairwise import cosine_similarity
print(cosine_similarity(transform("hello my name is adam").reshape(
1, -1), transform("hello my name is andrea").reshape(1, -1))[0][0])
assert cosine_similarity(transform("hello my name is adam").reshape(
1, -1), transform("hello my name is andrea").reshape(1, -1)).shape == (1, 1)
```
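Under the hood, cosine similarity is just the normalized dot product; a one-function sketch:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: dot product of u and v divided by their norms."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```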
## 2.4 Finding Analogies
Word vectors have been shown to sometimes have the ability to solve analogies.
As discussed in the lecture, for the analogy "man : king :: woman : x" (read: man is to king as woman is to x), x is _queen_.
Find more examples of analogies that hold according to these vectors (i.e. the intended word is ranked top)!
Also find an example of analogy that does not hold according to these vectors!
Summarize your findings in a few sentences.
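With gensim, analogy queries are typically phrased as `model.most_similar(positive=['king', 'woman'], negative=['man'])`. The underlying vector arithmetic can be sketched with toy vectors (all vectors below are made up for illustration):

```python
import numpy as np

# Toy vectors illustrating the arithmetic: king - man + woman ≈ queen
vecs = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 1.0]),
    "king":  np.array([1.0, 0.5]),
    "queen": np.array([0.0, 1.5]),
    "apple": np.array([5.0, 5.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : x by returning the word closest to b - a + c."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return min(candidates, key=lambda w: np.linalg.norm(vecs[w] - target))
```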
```
# YOUR CODE HERE
raise NotImplementedError()
```
## 2.5 Bias in word vectors
It's important to be cognizant of the biases (gender, race, sexual orientation etc.) implicit in our word embeddings. Bias in word vectors can be dangerous because it can incorporate stereotypes through applications that employ these models.
Run the cell below to examine a sample of the gender bias present in the data. Try to come up with other examples that reflect biases in datasets (gender, race, sexual orientation, etc.).
Summarize your findings in a few sentences.
```
print(model.most_similar(positive=['woman', 'doctor'], negative=['man']))
print(model.most_similar(positive=['man', 'doctor'], negative=['woman']))
# YOUR CODE HERE
raise NotImplementedError()
```
# ================ PASSING LEVEL ====================
# 3. Logistic regression using word vectors
These sentence vectors can be used as feature vectors for classifiers. Rewrite the featurizing process and transform each sentence into a sentence vector using the embedding model!
__Note: it is OK if your model is not better than the other classifiers__
```
def vectorize_to_embedding(tr_data, tst_data):
# YOUR CODE HERE
raise NotImplementedError()
def get_features_and_labels(data, labels):
# YOUR CODE HERE
raise NotImplementedError()
tr_vecs, tr_labels, tst_vecs, tst_labels = get_features_and_labels(train_data.tweet, train_data.label)
assert tr_vecs[0].shape == (100,)
# Train and evaluate!
lr = LogisticRegression(n_jobs=-1)
#lr.fit...
#lr_pred = ..
# YOUR CODE HERE
raise NotImplementedError()
```
## 3.1 Ensemble model
Try out other classifiers from [sklearn](https://scikit-learn.org/stable/supervised_learning.html). Choose three and build a [VotingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html) with the chosen classifiers. If the _voting_ strategy is set to _hard_, it will do a majority vote among the classifiers and choose the class with the most votes.
Make a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) with a TfidfVectorizer and with your ensemble model. Pipeline objects make it easy to assemble several steps together and make your machine learning pipeline executable in just one step.
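A hedged sketch of how such a pipeline can be wired together (the toy texts and the three chosen classifiers are illustrative; the exercise asks you to pick your own):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

# Hard voting: each classifier casts one vote, majority wins
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", MultinomialNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard")

# The Pipeline vectorizes and classifies in a single fit/predict call
pipe = Pipeline([("tfidf", TfidfVectorizer()), ("vote", ensemble)])

texts = ["good movie", "great film", "bad movie", "awful film"]
labels = [1, 1, 0, 0]
pipe.fit(texts, labels)
preds = pipe.predict(["good film", "awful movie"])
```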
```
from sklearn.pipeline import Pipeline
from sklearn.ensemble import VotingClassifier
def make_pipeline_ensemble(tweet, label):
# YOUR CODE HERE
raise NotImplementedError()
pipeline = make_pipeline_ensemble(train_data.tweet, train_data.label)
assert type(pipeline) == Pipeline
assert type(pipeline.steps[0][1]) == TfidfVectorizer
assert type(pipeline.steps[1][1]) == VotingClassifier
# Train and evaluate!
# YOUR CODE HERE
raise NotImplementedError()
```
## 3.2 __Evaluate your classifiers separately as well. Summarize your results in a cell below. Did the ensemble model improve your performance?__
```
# YOUR CODE HERE
raise NotImplementedError()
```
# ================ EXTRA LEVEL ====================
# Muography
#### Roland Grinis - Researcher at MIPT Nuclear Physics Methods lab - CTO at GrinisRIT (grinisrit.com)
#### Danila Riazanov - Student at MIPT, JetBrains Research trainee
Code available within `NOA` [github.com/grinisrit/noa](https://github.com/grinisrit/noa) - Differentiable Programming Algorithms in `C++17` over [LibTorch](https://pytorch.org/cppdocs)
## Installation
The `conda` environment provided with the repository has all the required dependencies. For this particular tutorial we will need the following `python` packages:
```
import torch
from torch.utils.cpp_extension import load
import matplotlib.pyplot as plt
```
Now we need to build and load `C++17/CUDA` extensions for `PyTorch`, set up the locations:
```
!mkdir -p build
noa_location = '../..'
```
If you are running this on Google Colab, you need to clone `NOA` and set `noa_location` accordingly:
```python
!git clone https://github.com/grinisrit/noa.git
noa_location = 'noa'
```
Also, make sure that `ninja` and `g++-9` or higher are available. The following commands will do that for you:
```python
!pip install Ninja
!add-apt-repository ppa:ubuntu-toolchain-r/test -y
!apt update
!apt upgrade -y
!apt install gcc-9 g++-9
!update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 100 --slave /usr/bin/g++ g++ /usr/bin/g++-9
!gcc --version
!g++ --version
!nvcc --version
```
Finally, you get the extensions into `python` by calling `load`:
```
muons = load(name='muons',
build_directory='./build',
sources=[f'{noa_location}/docs/pms/muons.cc'],
extra_include_paths=[f'{noa_location}/include'],
extra_cflags=['-Wall -Wextra -Wpedantic -O3 -std=c++17'],
verbose=False)
muons_cuda = load(name='muons_cuda',
build_directory='./build',
sources=[f'{noa_location}/docs/pms/muons.cu'],
extra_include_paths=[f'{noa_location}/include'],
extra_cflags=['-Wall -Wextra -Wpedantic -O3 -std=c++17'],
extra_cuda_cflags=['-std=c++17 --extended-lambda'],
verbose=False) if torch.cuda.is_available() else None
```
## Differential Cross-Section calculations
Differential cross-sections (DCS) are implemented in `<noa/pms/dcs.hh>` for `CPU` and `<noa/pms/dcs.cuh>` for `CUDA` within the namespace `noa::pms::dcs`.
Here, we demonstrate the calculations for muons passing through the standard rock. In `<noa/pms/physics.hh>` you will find:
```cpp
constexpr ParticleMass MUON_MASS = 0.10565839; // GeV/c^2
constexpr AtomicElement<Scalar> STANDARD_ROCK =
AtomicElement<Scalar>{
22., // Atomic mass in g/mol
0.1364E-6, // Mean Excitation in GeV
11 // Atomic number
};
```
Let's get a range of kinetic and recoil energies:
```
kinetic_energies = torch.linspace(1e-3, 1e6, 10000).double()
recoil_energies = 0.0505 * kinetic_energies
kinetic_energies_gpu = kinetic_energies.cuda()
recoil_energies_gpu = recoil_energies.cuda()
```
From now on, we shall get ourselves into the namespace:
```cpp
using namespace noa::pms;
```
On `CPU` the DCS computation kernel can be mapped on a tensor via the utility `dcs::vmap`. The user is expected to provide a `result` tensor with the same options as `kinetic_energies` which will get populated with the calculation values:
```cpp
const auto result = torch::zeros_like(kinetic_energies);
```
### Bremsstrahlung
The bremsstrahlung process corresponds to radiation emitted due to deceleration when two charged particles interact. We will consider here the example of muons. At high energies ($E \geq 1$
TeV) this process contributes about 40% of the average muon energy loss.
* D. Groom et. al. [Muon stopping power and range tables 10 MeV - 100 TeV](https://pdg.lbl.gov/2014/AtomicNuclearProperties/adndt.pdf)
##### Input:
* $K$ - The projectile initial kinetic energy, in GeV
* $q$ - The kinetic energy lost to the photon, in GeV
* $A$ - The mass number of the target atom, in g/mol
* $Z$ - The charge number of the target atom.
* $m_\mu$ - muon rest mass $0.10565839$ GeV
##### Output
The DCS (differential cross-section) in $\text{m}^{2}$/kg:
\begin{equation}
\frac{\text{d}\sigma}{\text{d}q} = \alpha Z(2\frac{m_e}{m_\mu})^{2}(\frac{4}{3}(\frac{1}{\nu} - 1) + \nu)(\Phi_\text{in}(\delta) + Z\Phi_n(\delta) ) \frac{N_a}{A \cdot 10^{-3}}
\end{equation}
Here:
\begin{equation}
E = K + m_\mu
\end{equation}
and:
* $\nu = \frac{q}{E}$ is the fraction of the muon's energy transferred to the photon
* $N_a$ the Avogadro number $6.02214199 \cdot 10^{23} \text{mol}^{-1}$
* $m_e$ electron rest mass $0.51099891003 \cdot 10^{-3}$ GeV
* $\alpha$ fine structure constant $1/137.03599976$
We have the contribution from (screened) nucleus:
\begin{equation}
\Phi_n(\delta) = \ln \left( \frac{BZ^{\frac{-1}{3}}(m_\mu + \delta(D_n\sqrt{e} - 2))}{(m_e + \delta \sqrt{e}BZ^{\frac{-1}{3}})D_n} \right),
\end{equation}
where $D_n = 1.54A^{0.27}$, $B = 182.7$ ($B = 202.4$ for hydrogen), and $e = 2.71828\ldots$ is Euler's number,
and the contributions from atomic electrons:
\begin{equation}
\Phi_\text{in}(\delta) = \ln \left( \frac{m_\mu BZ^{-2/3} \sqrt{e}}{(m_e + \delta BZ^{-2/3} \sqrt{e})(\frac{m_\mu \delta}{m_e^{2}} + \sqrt{e})} \right),
\end{equation}
where $B = 1429$ ($B = 446$ for hydrogen).
Both are evaluated at:
\begin{equation}
\delta = \frac{m_\mu^{2}\nu}{2E(1 - \nu)}
\end{equation}
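As a cross-check, the formulas above can be transcribed into plain Python. This is a sketch only: units and overall normalization follow the text exactly as written, and the standard-rock parameters are used as defaults.

```python
import math

# Constants from the text (GeV units)
MUON_MASS = 0.10565839            # muon rest mass, GeV/c^2
ELECTRON_MASS = 0.51099891003e-3  # electron rest mass, GeV/c^2
ALPHA = 1.0 / 137.03599976        # fine structure constant
AVOGADRO = 6.02214199e23          # Avogadro number, mol^-1
SQRT_E = math.sqrt(math.e)        # square root of Euler's number

def bremsstrahlung_dcs(K, q, A=22.0, Z=11.0, m=MUON_MASS):
    """Bremsstrahlung DCS per the formulas above (standard-rock defaults)."""
    me = ELECTRON_MASS
    E = K + m                     # total energy
    nu = q / E                    # fractional energy transfer
    delta = m * m * nu / (2.0 * E * (1.0 - nu))

    # Screened-nucleus contribution Phi_n (B = 182.7; 202.4 for hydrogen)
    Dn = 1.54 * A ** 0.27
    Bn = 202.4 if Z == 1 else 182.7
    phi_n = math.log(Bn * Z ** (-1.0 / 3.0) * (m + delta * (Dn * SQRT_E - 2.0))
                     / ((me + delta * SQRT_E * Bn * Z ** (-1.0 / 3.0)) * Dn))

    # Atomic-electron contribution Phi_in (B = 1429; 446 for hydrogen)
    Bi = 446.0 if Z == 1 else 1429.0
    phi_in = math.log(m * Bi * Z ** (-2.0 / 3.0) * SQRT_E
                      / ((me + delta * Bi * Z ** (-2.0 / 3.0) * SQRT_E)
                         * (m * delta / (me * me) + SQRT_E)))

    return (ALPHA * Z * (2.0 * me / m) ** 2
            * (4.0 / 3.0 * (1.0 / nu - 1.0) + nu)
            * (phi_in + Z * phi_n)
            * AVOGADRO / (A * 1e-3))
```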
For `CPU` we have:
```cpp
dcs::vmap(dcs::pumas::bremsstrahlung)(
result, kinetic_energies, recoil_energies, STANDARD_ROCK, MUON_MASS);
```
For `CUDA` we create the lambda function directly ourselves:
```cpp
dcs::pumas::cuda::vmap_bremsstrahlung(
result, kinetic_energies, recoil_energies, STANDARD_ROCK, MUON_MASS);
```
```
brems = muons.bremsstrahlung(kinetic_energies, recoil_energies)
brems[:5]
brems_gpu = muons_cuda.bremsstrahlung(kinetic_energies_gpu, recoil_energies_gpu);
brems_gpu[:5]
(brems - brems_gpu.cpu()).abs().sum()
%timeit muons_cuda.bremsstrahlung(kinetic_energies_gpu, recoil_energies_gpu);
%timeit muons.bremsstrahlung(kinetic_energies, recoil_energies);
```
### Pair Production
Direct electron pair production is one of the most important muon interaction processes. At TeV muon energies,
the pair production cross section exceeds those of other muon interaction processes over a range of energy transfers
between 100 MeV and $0.1 E_\mu$. The average energy loss for pair production increases linearly with muon energy, and
in the TeV region this process contributes more than half the total energy loss rate.
To adequately describe the number of pairs produced, the average energy loss and the stochastic energy loss distribution,
the differential cross section behavior over an energy transfer range of $5 \text{ MeV} \leq \epsilon \leq 0.1 E_\mu$ must be accurately
reproduced. This is because the main contribution to the total cross section is given by transferred energies $5 \text{ MeV}
\leq \epsilon \leq 0.01 E_\mu$, and because the contribution to the average muon energy loss is determined mostly in the region
$0.001 E_\mu \leq \epsilon \leq 0.1 E_\mu$.
##### Input:
* Z - The charge number of the target atom.
* A - The mass number of the target atom.
* mass - The projectile rest mass, in GeV
* K - The projectile initial kinetic energy.
* q - The kinetic energy lost to the photon ($E - E^{\text{'}}$).
##### Output:
DCS in m^2/kg.
#### Definitions and Applicability
Theory from https://geant4-userdoc.web.cern.ch/UsersGuides/PhysicsReferenceManual/fo/PhysicsReferenceManual.pdf (Page 151)
Coefficients for the Gaussian quadrature from:
https://pomax.github.io/bezierinfo/legendre-gauss.html.
The formula for the differential cross section applies when:
* $E_\mu \gg \mu$ ($E \geq 2$-$5$ GeV) and $E_\mu \leq 10^{15} - 10^{17}$ eV. If muon energies exceed this limit, the LPM (Landau-Pomeranchuk-Migdal) effect may become important, depending on the material.
* The muon energy transfer $q$ lies between $q_\text{min} = 4m_e$ and $q_\text{max} = E_\mu - \frac{3\sqrt{e}}{4} M_\mu Z^{1/3}$, although the formal lower limit is $q \gg 2m_e$, and the formal upper limit requires $E_\mu^\text{'} \gg \mu$.
* $Z \leq 40 - 50$. For higher $Z$, the Coulomb correction is important but has not been sufficiently studied theoretically.
##### Formulae
\begin{equation}
\frac{\text{d}\sigma}{\text{d}q}(Z, A, E, q) = \frac{4}{3\pi} \frac{Z(Z + \xi(Z))}{A} N_A {(\alpha r_e)}^{2} (\frac{1 - \nu}{q}) \int_{p}[\Phi_e + (m_e/M_\mu)^{2}\Phi_\mu]dp,
\end{equation}
where $\Phi_{e,\mu} = B_{e,\mu} {L_{e,\mu}}^{'}$ and $\Phi_{e,\mu} = 0 \text{ whenever } \Phi_{e,\mu}<0$.
$B_e$ and $B_\mu$ do not depend on Z, A, and are given by
\begin{equation}
B_e = [(2 + \rho^{2})(1 + \beta) + \xi(3 + \rho^{2})]\ln(1 + \frac{1}{\xi}) + \frac{1 - \rho^{2} - \beta}{1 + \xi} - (3 + \rho^{2})
\approx \frac{1}{2\xi}[(3 - \rho^{2}) + 2\beta(1 + \rho^{2})] \text{ for } \xi \geq 10^{3}
\end{equation}
\begin{equation}
B_\mu = [(1 + \rho^{2})(1 + \frac{3\beta}{2}) - \frac{1}{\xi}(1 + 2\beta)(1 - \rho^{2})]\ln(1 + \xi) + \frac{\xi(1 - \rho{2} - \beta)}{1 + \xi} + (1 + 2\beta)(1 - \rho^{2})
\approx \frac{\xi}{2}[(5 - \rho^{2}) + \beta(3 + \rho^{2})] \text{ for } \xi \leq 10^{-3}
\end{equation}
Also,
\begin{equation}
\xi = \frac{\mu^{2}\nu^{2}}{4m^{2}} \frac{(1 - \rho^{2})}{1 - \nu}; \text{ }
\beta = \frac{\nu^{2}}{2(1 - \nu)}
\end{equation}
\begin{equation}
L_e^\text{'} = \ln\left(\frac{A^{*}Z^{-1/3} \sqrt{(1 + \xi)(1 + Y_e)}}{1 + \frac{2m\sqrt{e}A^{*}Z^{-1/3}(1 + \xi)(1 + Y_\mu)}{E\nu(1 - \rho^{2})}}\right) - \frac{1}{2}\ln\left[1 + \left(\frac{3mZ^{1/3}}{2\mu}\right)^{2}(1 + \xi)(1 + Y_e)\right]
\end{equation}
\begin{equation}
L_\mu ^\text{'} = \ln\left(\frac{(\frac{M_\mu}{m_e})A^{*}Z^{-1/3} \sqrt{(1 + \frac{1}{\xi})(1 + Y_e)}}{1 + \frac{2m\sqrt{e}A^{*}Z^{-1/3}(1 + \xi)(1 + Y_\mu)}{E\nu(1 - \rho^{2})}}\right) - \ln\left(\frac{3}{2} Z^{1/3}\sqrt{(1 + \frac{1}{\xi})(1 + Y_\mu)}\right)
\end{equation}
For faster computing, the expressions for $L_{e, \mu}^\text{'}$ are further algebraically transformed. The functions $L_{e, \mu}^\text{'}$ include the nuclear size correction in comparison with the parameterization.
\begin{equation}
Y_e = \frac{5 - \rho^2 + 4\beta(1 + \rho^2)}{2(1 + 3\beta)\ln(3 + 1/\xi) - \rho^2 - 2\beta(2 - \rho^2)}
\end{equation}
\begin{equation}
Y_\mu = \frac{4 + \rho^2 + 3\beta(1 + \rho^2)}{(1 + \rho^2)(\frac{3}{2} + 2\beta)\ln(3 + \xi) + 1 - \frac{3}{2}\rho^2}
\end{equation}
\begin{equation}
\rho_{max} = \left(1 - \frac{6M_\mu^2}{E^2(1 - \nu)}\right)\sqrt{1 - \frac{4m_e}{E\nu}}
\end{equation}
```cpp
dcs::vmap(dcs::pumas::pair_production)(
result, kinetic_energies, recoil_energies, STANDARD_ROCK, MUON_MASS);
```
```
ppair = muons.pair_production(kinetic_energies, recoil_energies)
ppair[:5]
```
### Photonuclear
Input:
* Z - The charge number of the target atom.
* A - The mass number of the target atom.
* ml - The projectile rest mass, in GeV
* K - The projectile initial kinetic energy.
* q - The kinetic energy lost to the photon.
Output:
DCS in m^2/kg.
#### Definitions and Applicability
Theory from: https://arxiv.org/pdf/hep-ph/9712415.pdf, https://arxiv.org/pdf/hep-ph/0012350.pdf
The inelastic interaction of muons with nuclei is important at high muon energies ($E \geq 10$ GeV), and at relatively high
energy transfers $\nu$ ($\nu/E \geq 10^{-2}$). It is especially important for light materials and for the study of detector response
to high energy muons, muon propagation and muon-induced hadronic background. The average energy loss for this
process increases almost linearly with energy, and at TeV muon energies constitutes about 10% of the energy loss rate.
The main contribution to the cross section $\sigma(E, \nu)$ and energy loss comes from the low $Q^2$ region ($Q^2 \ll 1$ GeV$^2$).
In this domain, many simplifications can be made in the theoretical consideration of the process in order to obtain
convenient and simple formulae for the cross section. Most widely used are the expressions given by Borog and
Petrukhin [BP75], and Bezrukov and Bugaev [BB81]. Results from these authors agree within 10% for the differential
cross section and within about 5% for the average energy loss, provided the same photonuclear cross section, $\sigma_{\gamma N}$, is
used in the calculations.
##### Formulae
The differential cross section can be written in the form
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{4\pi\alpha^2}{Q^4}\frac{F_2(x, Q^2)}{x}(1-y-\frac{Mxy}{2E} + (1 - \frac{2m_l^2}{Q^2})\frac{y^2(1 + \frac{4M^2x^2}{Q^2})}{2(1 + R(x, Q^2))})
\end{equation}
Where $F_2$ - a nucleon structure function
\begin{equation}
F_2(x, Q^2) = \frac{Q^2}{Q^2 + m_0^2} (F_2^P(x, Q^2) + F_2^R(x, Q^2))
\end{equation}
\begin{equation}
F_2 ^{R}(x, Q^{2}) = c_R(t)x_R^{a_R(t)}(1 - x)^{b_R(t)}
= c_R(t)e^{a_R(t)\ln x_R+ b_R(t)\ln(1 - x)}
\end{equation}
\begin{equation}
F_2 ^{P}(x, Q^{2}) = c_P(t)x_P^{a_P(t)}(1 - x)^{b_P(t)}
= c_P(t)e^{\ln(x_P^{a_P(t)}(1 - x)^{b_P(t)})}
= c_P(t)e^{\ln x_P^{a_P(t)}+ \ln(1 - x)^{b_P(t)}}
= c_P(t)e^{a_P(t)\ln x_P+ b_P(t)\ln(1 - x)}
\end{equation}
Take $x$ as $x = \frac{Q^2}{2Mq}$
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{4\pi\alpha^2 F_2}{Q^4} \frac{2Mq}{Q^2}(1 - y - \frac{My}{2E}\frac{Q^2}{2Mq} + (1 - \frac{2m_l^2}{Q^2})\frac{y^2(1 + (\frac{2MQ^2}{2MqQ})^2)}{2(1 + R)})
\end{equation}
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{8\pi\alpha^2F_2Mq}{Q^6}(1 - y - \frac{Q^2y}{4Eq} + (1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + (y\frac{Q}{q})^2}{2(1 + R)})
\end{equation}
Take into account, that $y = \frac{q}{E}$
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{8\pi\alpha^2F_2Mq}{Q^6}(1 - y - \frac{Q^2}{4E^2} + (1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{2(1 + R)})
\end{equation}
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{8\pi\alpha^2F_2Mq}{Q^6}(1 - y + (1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{2(1 + R)}) - \frac{8\pi\alpha^2F_2Mq}{Q^6}\frac{Q^2}{4E^2}
\end{equation}
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{8\pi\alpha^2F_2Mq}{Q^6}(1 - y + (1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{2(1 + R)}) - \frac{8\pi\alpha^2F_2Mq}{4Q^4E^2}
\end{equation}
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}x} = \frac{8\pi\alpha^2F_2Mq}{Q^2}((\frac{1 - y + \frac{1}{2}(1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{1 + R}}{Q^4}) - \frac{1}{4Q^2E^2})
\end{equation}
Let's see on $x = \frac{Q^2}{2Mq}$
\begin{equation}
\frac{\text{d}x}{\text{d}q} = \frac{-Q^2}{2Mq^2}
\end{equation}
And
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}q}\frac{2Mq^2}{-Q^2} = \frac{8\pi\alpha^2F_2Mq}{Q^2}((\frac{1 - y + \frac{1}{2}(1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{1 + R}}{Q^4}) - \frac{1}{4Q^2E^2})
\end{equation}
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}q} = \frac{-4\pi\alpha^2F_2}{q}((\frac{1 - y + \frac{1}{2}(1 - \frac{2m_l^2}{Q^2}) \frac{y^2 + \frac{Q^2}{E^2}}{1 + R}}{Q^4}) - \frac{1}{4Q^2E^2})
\end{equation}
This equation is not yet fully correct: approximation factors must still be added. Finally, we have:
\begin{equation}
\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}q} = \frac{c_f F_2}{q} (\frac{1-y+\frac{1}{2}(1 - \frac{2m_l^2}{Q^2}) \frac {y^2 + \frac{Q^2}{E^2}}{1 + R}}{Q^4} - \frac{1}{4E^2Q^2})
\end{equation}
with a constant factor $c_f = 2.603096 \cdot 10^{-35}$. Then integrate this equation using Gaussian Quadrature from $Q_\text{min}^2 = \frac{m_l^2y^2}{1 - y} $ to $Q_\text{max}^2 = 2MEy - ((M + m_\pi)^2 - M^2) $
\begin{equation}
\frac{\text{d}\sigma}{\text{d}q} = \int{\frac{\text{d}\sigma(x, Q^2)}{\text{d}Q^2\text{d}q}}\text{d}Q^2
\end{equation}
```cpp
dcs::vmap(dcs::pumas::photonuclear)(
result, kinetic_energies, recoil_energies, STANDARD_ROCK, MUON_MASS);
```
```
photonuc = muons.photonuclear(kinetic_energies, recoil_energies)
photonuc[:5]
```
### Ionisation
Input:
* A - The mass number of the target atom.
* I - The mean excitation of the target atom.
* Z - The charge number of the target atom.
* mu - The projectile rest mass, in GeV
* K - The projectile initial kinetic energy.
* q - The kinetic energy lost to the photon.
Output:
DCS in m^2/kg.
### Definitions and Applicability
Theory from: Salvat et al., NIMB316 (2013) 144-159, Sokalski et al., Phys.Rev.D64 (2001) 074015 (MUM)
The differential cross section for ionisation is computed following Salvat et al., NIMB316 (2013) 144-159, considering only close interactions for DELs. In addition, a radiative correction is applied according to Sokalski et al., Phys.Rev.D64 (2001) 074015 (MUM).
\begin{equation}
\frac{\text{d}\sigma}{\text{d}q} = \frac{CEZ}{A(\frac{1}{2P_2} + \frac{P_2W_\text{max}}{E^2W_\text{max} - qP_2})}(1 + \Delta_{e\gamma}),
\end{equation}
where $P_2 = E^2 - M_\mu^2$, $W_\text{max} = \frac{2m_e P_2}{M_\mu^2 + m_e(m_e + 2E)}$ and $C = 1.535336 \cdot 10^{-5}$
$\Delta_{e\gamma}$ - Radiative correction
\begin{equation}
\Delta_{e\gamma} = \frac{\alpha}{2\pi}\ln(1 + \frac{2\nu E}{m_e})(\ln(\frac{4E^2(1 - \nu)}{M_\mu^2}) - \ln(1 + \frac{2\nu E}{m_e}))
\end{equation}
If we take into account, that $\nu = \frac{q}{E}$
\begin{equation}
\Delta_{e\gamma} = \frac{\alpha}{2\pi}\ln(1 + \frac{2q}{m_e})(\ln(\frac{4E(E - q)}{M_\mu^2}) - \ln(1 + \frac{2q}{m_e}))
\end{equation}
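The radiative correction alone is simple enough to transcribe directly. A sketch assuming GeV units as in the text:

```python
import math

ALPHA = 1.0 / 137.03599976        # fine structure constant
ELECTRON_MASS = 0.51099891003e-3  # electron rest mass, GeV
MUON_MASS = 0.10565839            # muon rest mass, GeV

def radiative_correction(E, q, m=MUON_MASS):
    """Delta_e_gamma radiative correction to the ionisation DCS (GeV units)."""
    me = ELECTRON_MASS
    L = math.log(1.0 + 2.0 * q / me)
    return ALPHA / (2.0 * math.pi) * L * (math.log(4.0 * E * (E - q) / m ** 2) - L)
```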
```cpp
dcs::vmap(dcs::pumas::ionisation)(
result, kinetic_energies, recoil_energies, STANDARD_ROCK, MUON_MASS);
```
```
ionis = muons.ionisation(kinetic_energies, recoil_energies)
ionis[:5]
def plot_recoil_energy(E, sample = 1000):
recoil_energies = torch.zeros(sample).double()
kinetic_energies = torch.full((sample,), E).double() #tensor with one energy level
for i in range(sample):
recoil_energies[i] = E*(0.05 + i*(1 - 0.05)/sample)
brems = muons.bremsstrahlung(kinetic_energies, recoil_energies)
ppair = muons.pair_production(kinetic_energies, recoil_energies)
photonuc = muons.photonuclear(kinetic_energies, recoil_energies)
ionis = muons.ionisation(kinetic_energies, recoil_energies)
fig, ax = plt.subplots()
ax.plot(recoil_energies.numpy(), brems.numpy(), label = 'bremsstrahlung')
ax.plot(recoil_energies.numpy(), ppair.numpy(), label = 'pair production')
ax.plot(recoil_energies.numpy(), photonuc.numpy(), label = 'photonuclear')
ax.plot(recoil_energies.numpy(), ionis.numpy(), label = 'ionisation')
ax.set_xlabel('The kinetic energy lost to the photon, GeV', fontsize = 20)
ax.set_ylabel('DCS, m^2/kg', fontsize = 20)
ax.set_title(f"Energy loss at {E} GeV")
fig.set_figwidth(20)
fig.set_figheight(10)
ax.legend()
plt.show()
plot_recoil_energy(1e-3)
plot_recoil_energy(10.0)
plot_recoil_energy(1e3)
plot_recoil_energy(1e6)
```
### Modelling high energy DCS
An interpolation model in $\tau = q/K$ is used for $K \geq 10$ GeV. For Bremsstrahlung we re-write:
\begin{equation}
\frac{K}{E} \frac{\text{d}\sigma}{\text{d}q} = \frac{K \alpha Z}{q}(2\frac{m_e}{m_\mu})^{2}(\frac{4}{3}(1 - \nu) + \nu^2)(\Phi_\text{in}(\delta) + Z\Phi_n(\delta) ) \frac{N_a}{A \cdot 10^{-3}}
\end{equation}
Setting $X=\ln(\tau)$ and $Y=\ln(1-\tau)$, we fit the model:
\begin{equation}
\ln \left( \frac{K}{E} \frac{\text{d}\sigma}{\text{d}q} \right) = \sum_{i=0}^{6} a_i X^i + b_1 Y + b_2 Y^2
\end{equation}
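A least-squares fit of this model can be sketched with NumPy (the function name and the synthetic check are illustrative, not part of `NOA`):

```python
import numpy as np

def fit_dcs_model(tau, log_dcs):
    """Least-squares fit of ln((K/E) dsigma/dq) = sum_{i=0..6} a_i X^i + b_1 Y + b_2 Y^2,
    with X = ln(tau) and Y = ln(1 - tau). Returns the 9 fitted coefficients."""
    X = np.log(tau)
    Y = np.log1p(-tau)
    # Design matrix: columns 1, X, ..., X^6, Y, Y^2
    design = np.column_stack([X ** i for i in range(7)] + [Y, Y ** 2])
    coeffs, *_ = np.linalg.lstsq(design, log_dcs, rcond=None)
    return coeffs
```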
# CSCA08 Introduction to Computer Science I
## 2018 - Fall
### Anya Tafliovich
```
##ODD BEHAVIOURS
##the end index when slicing strings can be greater than the length of sliced string
i = 'abcde'
i[3:13032]
```
## Defining a Function
```
#def is used to start a function, the function comes after
#everything inside (argument_name:argument type) are parameters and argument types
#-> denotes return type
##HEADER
def func(arg1: str, arg2: str) -> str:
##DOCSTRING & EXAMPLES
""" Return blah blah based on blah blah.
>>>func('str', 'str')
*returns whatever function is supposed to return*
"""
##FUNCTION CODE
conjunction = arg1 + arg2
##RETURN statement(ends the function)
return conjunction
```
## Using Non-Built-in Functions
```
##import + module name will import the entire module
import math
##from module import + function will import a specific function
from math import log
##to use the function in your code you must call the module
print(math.log(10))
```
## if, else, elif
```
##the code block under 'if' will run if the condition is satisfied
def greater(x:int) -> str:
if x > 3:
return 'Greater than 3!'
print(greater(4))
##the code block under 'else' will run if none of the previous ifs are satisfied
def greaterelse(x:int) -> str:
if x > 3:
return 'Greater than 3!'
else:
return 'Less than 3!'
print(greaterelse(2))
##elif condition will be checked if the previous if wasn't satisfied
def greaterelif(x:int) -> str:
if x > 3:
return 'Greater than 3!'
elif x < 3:
return 'Less than 3!'
elif x == 3:
return 'This number is 3!'
print(greaterelif(3))
```
#### putting 'else' or 'elif' before an if will result in a syntax error
## Logical and/or
```
##an and/or conditional can be written as if statements
##x > 3 and x < 4 is equivalent to the following
def checkand(x:float) -> bool:
if x <= 3:
return False
if x >= 4:
return False
else:
return True
print(checkand(5))
print(checkand(3.5))
##x > 3 or x < 4 is equivalent to the following
def checkor(x:float) -> bool:
if x > 3:
return True
if x < 4:
return True
else:
return False
print(checkor(3.5))
```
## Lists
```
##lists are denoted with []s and can host different types of objects at the same time
listA = ['a', 2, 5]
print(type(listA))
##list indexes like string indexes start at 0
print(listA[0])
##forloops can be used to iterate through lists
listA = ['1', 4, 6, 'a', 3.5, 'hello']
def printelements(lista: list) -> str:
for element in listA:
print(element)
print(printelements(listA))
```
## Sorting Algorithms
### Bubble Sort
Bubble sort looks at two adjacent elements at a time, starting from the beginning; if the first element is greater than the second, they swap. This continues until the list is fully sorted.

Complexity = Quadratic
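A straightforward implementation of the description above (a sketch, not the course's reference code):

```python
def bubble_sort(lst: list) -> list:
    """Return a sorted copy of lst using bubble sort."""
    result = lst[:]
    n = len(result)
    for i in range(n - 1):
        ##after each pass the largest unsorted element 'bubbles' to the end
        for j in range(n - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result
```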
### Insertion Sort
Insertion sort creates subarrays within the array, and sorts the subarray locally. Insertion sort will start with an array the size of 2 (up to index 1). After X runs of insertion sort, a subarray of size X will be sorted.

Complexity = Quadratic
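The growing sorted prefix can be seen in code (a sketch, not the course's reference code):

```python
def insertion_sort(lst: list) -> list:
    """Return a sorted copy of lst using insertion sort."""
    result = lst[:]
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        ##shift larger elements of the sorted prefix one slot to the right
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result
```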
### Selection Sort
Selection sort scans the list for the smallest element and switches places with the first index, then it scans the rest of the list (from index 1 to the length of the list) and repeats the first step. After X runs of selection sort, a subarray of size X will be sorted.
However, this sorting method results in a different unsorted subarray from insertion sort.

Complexity = Quadratic
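Again, a sketch following the description (not the course's reference code):

```python
def selection_sort(lst: list) -> list:
    """Return a sorted copy of lst using selection sort."""
    result = lst[:]
    n = len(result)
    for i in range(n - 1):
        ##find the smallest element in the unsorted suffix
        smallest = i
        for j in range(i + 1, n):
            if result[j] < result[smallest]:
                smallest = j
        result[i], result[smallest] = result[smallest], result[i]
    return result
```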
<table align="left" width="100%"> <tr>
<td style="background-color:#ffffff;"><a href="https://qsoftware.lu.lv/index.php/qworld/" target="_blank"><img src="..\images\qworld.jpg" width="35%" align="left"></a></td>
<td align="right" style="background-color:#ffffff;vertical-align:bottom;horizontal-align:right">
prepared by Özlem Salehi (<a href="http://qworld.lu.lv/index.php/qturkey/" target="_blank">QTurkey</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h1> <font color="blue"> Solutions for </font> Discrete Fourier Transform</h1>
<h3>Task 1 (on paper)</h3>
<a id="task1"></a>
Given $x=\myvector{1 \\ 2}$, apply $DFT$ to obtain $y$.
<h3>Solution </h3>
In this example $N=2$, $x_0=1$ and $x_1=2$. Hence, we have
$$y_k=\frac{1}{\sqrt{2}} \sum_{j=0}^{1}e^{\frac{2\pi i j k}{2}}x_j.$$
Replacing $k=0$,
$$
y_0=\frac{1}{\sqrt{2}} \sum_{j=0}^{1}e^{\frac{2\pi ij\cdot 0}{2} }x_j= \frac{1}{\sqrt{2}} (x_0+x_1) = \frac{3}{\sqrt{2}}
$$
and $k=1$,
$$
y_1=\frac{1}{\sqrt{2}} \sum_{j=0}^{1}e^{\frac{2\pi ij\cdot 1}{2}}x_j= \frac{1}{\sqrt{2}} \biggl( e^{\frac{2\pi i \cdot0 \cdot1}{2}} x_0 + e^{\frac{2\pi i \cdot 1 \cdot 1}{2}} x_1 \biggr) = \frac{1+ 2e^{\pi i}}{\sqrt{2}}=\frac{-1}{\sqrt{2}}.
$$
We can conclude that $y=\myvector{\frac{3}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} } $.
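The hand computation can be checked numerically using the same convention as in the tasks below:

```python
from cmath import exp
from math import pi, sqrt

# Numerical check of the hand computation for x = (1, 2)
x = [1, 2]
N = len(x)
y = []
for k in range(N):
    s = sum(exp(2 * pi * 1j * j * k / N) * x[j] for j in range(N))
    y.append(s / sqrt(N))

# y[0] ≈ 3/sqrt(2), y[1] ≈ -1/sqrt(2)
```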
<a id="task2"></a>
<h3>Task 2</h3>
Create the following list in Python (1 0 0 0 0 1 0 0 0 0 1 0 0 0 0), where every 5th value is a 1. Then compute its $DFT$ using Python and visualize it.
<h3>Solution </h3>
```
#Create an empty list
x=[]
#Number of elements in the list
N=100
#We set every fifth number as a 1
for i in range(N):
if i%5==0:
x.append(1)
else:
x.append(0)
from cmath import exp
from math import pi
from math import sqrt
#We calculate Fourier Transform of the list
y=[]
for k in range(N):
s=0
for j in range(N):
s+= exp(2*pi*1j*j*k/N)*x[j]
s*=1/sqrt(N)
y.append(s)
y
import matplotlib.pyplot as plt
#Visualizing the transformed list
plt.plot(y)
plt.show()
```
<a id="task3"></a>
<h3>Task 3</h3>
Repeat Task 2, where this time every 6th value is a 1 and the rest are 0.
<h3>Solution </h3>
```
# Create the input list: every sixth entry is 1, the rest are 0
N = 100
x = []
for i in range(N):
    if i % 6 == 0:
        x.append(1)
    else:
        x.append(0)

from cmath import exp
from math import pi, sqrt

# Compute the Discrete Fourier Transform of the list
y = []
for k in range(N):
    s = 0
    for j in range(N):
        s += exp(2 * pi * 1j * j * k / N) * x[j]
    s *= 1 / sqrt(N)
    y.append(s)

import matplotlib.pyplot as plt

# Visualize the magnitude of the transformed list
# (the DFT values are complex, so we plot their absolute values)
plt.plot([abs(v) for v in y])
plt.show()
```
# K-means clustering
### Part I: Demo of concepts
We're first going to learn what exactly the 'k' and the 'means' part of K-means clustering represent.
#### Step 1: Understand the dataset and the task
I'm starting to plant my flower garden for the spring. I bought seeds for two types of flowers, but upon opening the packets I dropped all of them, and they are now mixed together. I have a very particular vision for my garden: I want the shorter yellow flowers to form one line and the taller purple flowers to form another. Because I also have infinite patience, I measured the width (kernel_width column in 'seeds.csv') and the groove length (kernel_groove_length column in 'seeds.csv') of every single seed I picked back up from the ground. Based on these two measurements alone, can I separate out the seeds?
#### Step 2: Load the dataset
```
import numpy as np
data = np.loadtxt('seeds.csv',delimiter=',',skiprows=1)
X = data[:,0:2] # The first column contains kernel_width, the second column contains kernel_groove_length
#Notice that we don't have a 'y'!
```
#### Step 3: Apply k-means cluster with k = 2
```
from sklearn.cluster import KMeans
model = KMeans(n_clusters=2,random_state=0).fit(X)
```
#### Step 4: Visualize the dataset and the clusters
```
import seaborn as sns
import matplotlib.pyplot as plt
ax = sns.scatterplot(x=X[:,0],y=X[:,1],hue=model.labels_)
ax.legend(title='Cluster')
ax.plot(model.cluster_centers_[0,0],model.cluster_centers_[0,1], 'Db')
ax.plot(model.cluster_centers_[1,0],model.cluster_centers_[1,1], 'Dr')
ax.set_xlabel('Kernel width')
ax.set_ylabel('Kernel groove length')
plt.show()
```
#### Step 5: Compare cluster centers to mean of all datapoints
Based on the results, why do you think this is called k-means clustering?
```
cluster0_center = model.cluster_centers_[0,:]
cluster1_center = model.cluster_centers_[1,:]
cluster0_mean = np.mean(X[model.labels_ ==0], axis=0)
cluster1_mean = np.mean(X[model.labels_ ==1], axis=0)
print('Cluster 0 center: ' + str(cluster0_center))
print('Cluster 0 mean : ' + str(cluster0_mean))
print('Cluster 1 center: ' + str(cluster1_center))
print('Cluster 1 mean : ' + str(cluster1_mean))
```
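The name comes from Lloyd's algorithm: alternate between assigning each point to its nearest center and moving each center to the *mean* of its assigned points, so at convergence each center coincides with its cluster's mean. A minimal NumPy sketch, a toy illustration rather than the exact `sklearn` implementation (the blob data and seed-point initialization below are made up for the demo):

```python
import numpy as np

def kmeans_lloyd(X, centers, n_iter=10):
    """Minimal Lloyd's algorithm: alternate assignment and mean-update steps."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

rng = np.random.default_rng(0)
blob_a = rng.normal(0.0, 0.1, size=(20, 2))   # cluster around (0, 0)
blob_b = rng.normal(3.0, 0.1, size=(20, 2))   # cluster around (3, 3)
X_toy = np.vstack([blob_a, blob_b])
centers, labels = kmeans_lloyd(X_toy, X_toy[[0, -1]])  # one seed from each blob
# After convergence, each center equals the mean of its cluster's points.
```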
### K-means clustering on handwriting images
Now we try K-means on much higher dimensional data, and think about why clustering is considered an unsupervised learning.
#### Step 1: Understand the data and the task
'mnist_01.csv' contains MNIST black-and-white images of handwritten digits 0 and 1. Each sample comes from an image, 28 pixels by 28 pixels. We 'flatten' this square image to a flat vector that contains the pixel values (pixel value = 0 means there's no writing/ink in the pixel, going up to value 255). Each row in the spreadsheet corresponds to one such flattened image; each column corresponds to one pixel. Let's see if we can cluster the images and recover the 0's and 1's that the handwriting represents.
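The flattening step itself is just a reshape; a tiny sketch with a made-up array standing in for one image:

```python
import numpy as np

# A made-up 28x28 "image": flattening turns it into a 784-long feature
# vector (one row of the spreadsheet), and reshaping recovers the square image.
img = np.arange(28 * 28).reshape(28, 28)
flat = img.reshape(-1)           # shape (784,)
restored = flat.reshape(28, 28)  # back to shape (28, 28)
```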
#### Step 2: Load the dataset
```
data = np.loadtxt('mnist_01.csv',delimiter=',')
X = data[:,1:]
y = data[:,0] # The dataset comes with the 'answers' as to whether the image represents digit 0 or 1.
```
#### Step 3: Cluster the images
```
model = KMeans(n_clusters=2,random_state=0).fit(X)
```
#### Step 4: Visualize the cluster centers
Looking back at Step 3 (did we use the 'y' variable?), and the results below, why do you think this is called an 'unsupervised' learning?
```
centroid1 = model.cluster_centers_[0,:].reshape((28,28))
plt.gray()
plt.imshow(centroid1)
plt.show()
centroid2 = model.cluster_centers_[1,:].reshape((28,28))
plt.gray()
plt.imshow(centroid2)
plt.show()
```
# Principal Component Analysis (PCA)
Here we're going to learn about PCA, which comes from a family of powerful unsupervised learning methods called dimensionality reduction.
#### Step 1: Understand the data and the task
You saw above that a datapoint may originally be represented using a lot of features (in the case of an MNIST image, one image is represented with 28 x 28 = 784 features). The number of features is sometimes referred to as the dimension of the dataset, and many real-world datasets are high dimensional (think about the number of features collected on you in a single hospital admission). You also saw above that although the images are represented with 784 pixels/features, there were fundamentally two underlying patterns/groupings of the images. Here we're going to explore whether we can reduce the dimension of the dataset, i.e., represent each data point with just 2 features or 'faux'/pseudo-pixels, while still preserving those underlying patterns.
#### Step 2: Get the top 2 principal components
```
from sklearn.decomposition import PCA
print('Dimension of original dataset: '
+ str(X.shape[0]) + ' datapoints, ' + str(X.shape[1]) + ' features')
model = PCA(n_components=2).fit(X)
```
#### Step 3: Project the data to 2D space using the two principal components
How many 'faux' pixels are we using to represent each image with PCA? <br>
What do you notice about the 'clumps' of datapoints (which represent images) in this 2D space? <br>
Can you guess which handwritten digit (0 or 1) an image belonging to one of the clumps would represent?
```
X_project = np.dot(X, model.components_.T)
print('Dimension of the dataset after reduction with PCA: '
+ str(X_project.shape[0]) + ' datapoints, ' + str(X_project.shape[1]) + ' features')
ax = sns.scatterplot(x=X_project[:,0],y=X_project[:,1])
ax.set_xlabel('Principal component 1')
ax.set_ylabel('Principal component 2')
plt.show()
```
#### Step 4: Now we 'cheat' and label each datapoint with its true digit
Was your guess correct from step 3? In hindsight, what does the shape of each 'clump' represent?
```
X_project = np.dot(X, model.components_.T)
ax = sns.scatterplot(x=X_project[:,0],y=X_project[:,1],hue=y)
ax.legend(title='True digit')
ax.set_xlabel('Principal component 1')
ax.set_ylabel('Principal component 2')
plt.show()
```
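Under the hood, the principal components are the top right singular vectors of the mean-centered data matrix (equivalently, eigenvectors of its covariance matrix). A minimal NumPy sketch with random stand-in data, which should agree with `sklearn`'s `PCA` up to the sign of each component:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))            # 100 datapoints, 5 features

A_centered = A - A.mean(axis=0)          # PCA centers the data first
U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)
components = Vt[:2]                      # top 2 principal components
A_2d = A_centered @ components.T         # project to 2 dimensions

# The components form an orthonormal set, and A_2d is the reduced dataset.
```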
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Working with sparse tensors
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/sparse_tensor"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/sparse_tensor.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
When working with tensors that contain a lot of zero values, it is important to store them in a space- and time-efficient manner. Sparse tensors enable efficient storage and processing of tensors that contain a lot of zero values. Sparse tensors are used extensively in encoding schemes like [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) as part of data pre-processing in NLP applications and for pre-processing images with a lot of dark pixels in computer vision applications.
## Sparse tensors in TensorFlow
TensorFlow represents sparse tensors through the `tf.SparseTensor` object. Currently, sparse tensors in TensorFlow are encoded using the coordinate list (COO) format. This encoding format is optimized for hyper-sparse matrices such as embeddings.
The COO encoding for sparse tensors consists of:
* `values`: A 1D tensor with shape `[N]` containing all nonzero values.
* `indices`: A 2D tensor with shape `[N, rank]`, containing the indices of the nonzero values.
* `dense_shape`: A 1D tensor with shape `[rank]`, specifying the shape of the tensor.
A ***nonzero*** value in the context of a `tf.SparseTensor` is a value that's not explicitly encoded. It is possible to explicitly include zero values in the `values` of a COO sparse matrix, but these "explicit zeros" are generally not included when referring to nonzero values in a sparse tensor.
Note: `tf.SparseTensor` does not require that indices/values be in any particular order, but several ops assume that they're in row-major order. Use `tf.sparse.reorder` to create a copy of the sparse tensor that is sorted in the canonical row-major order.
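The COO layout itself is framework-independent. Here is a plain-Python sketch of the round trip between a dense matrix and its `indices`/`values`/`dense_shape` triple — an illustration only, not TensorFlow's internal implementation — using the same example matrix that appears later with `tf.sparse.from_dense`:

```python
def dense_to_coo(dense):
    """Collect the indices, values, and shape of the nonzero entries."""
    indices, values = [], []
    for r, row in enumerate(dense):
        for c, v in enumerate(row):
            if v != 0:
                indices.append([r, c])
                values.append(v)
    return indices, values, [len(dense), len(dense[0])]

def coo_to_dense(indices, values, dense_shape):
    """Rebuild the dense matrix from its COO components."""
    rows, cols = dense_shape
    dense = [[0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v
    return dense

m = [[1, 0, 0, 8], [0, 0, 0, 0], [0, 0, 3, 0]]
indices, values, shape = dense_to_coo(m)
print(indices, values, shape)  # [[0, 0], [0, 3], [2, 2]] [1, 8, 3] [3, 4]
```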
## Creating a `tf.SparseTensor`
Construct sparse tensors by directly specifying their `values`, `indices`, and `dense_shape`.
```
import tensorflow as tf
st1 = tf.SparseTensor(indices=[[0, 3], [2, 4]],
values=[10, 20],
dense_shape=[3, 10])
```
<img src="images/sparse_tensor.png">
When you use the `print()` function to print a sparse tensor, it shows the contents of the three component tensors:
```
print(st1)
```
It is easier to understand the contents of a sparse tensor if the nonzero `values` are aligned with their corresponding `indices`. Define a helper function to pretty-print sparse tensors such that each nonzero value is shown on its own line.
```
def pprint_sparse_tensor(st):
    """Pretty-print a tf.SparseTensor with one nonzero value per line."""
    s = "<SparseTensor shape=%s \n values={" % (st.dense_shape.numpy().tolist(),)
    for (index, value) in zip(st.indices, st.values):
        s += "\n  %s: %s" % (index.numpy().tolist(), value.numpy().tolist())
    return s + "}>"
print(pprint_sparse_tensor(st1))
```
You can also construct sparse tensors from dense tensors by using `tf.sparse.from_dense`, and convert them back to dense tensors by using `tf.sparse.to_dense`.
```
st2 = tf.sparse.from_dense([[1, 0, 0, 8], [0, 0, 0, 0], [0, 0, 3, 0]])
print(pprint_sparse_tensor(st2))
st3 = tf.sparse.to_dense(st2)
print(st3)
```
## Manipulating sparse tensors
Use the utilities in the `tf.sparse` package to manipulate sparse tensors. Ops like `tf.math.add` that you can use for arithmetic manipulation of dense tensors do not work with sparse tensors.
Add sparse tensors of the same shape by using `tf.sparse.add`.
```
st_a = tf.SparseTensor(indices=[[0, 2], [3, 4]],
values=[31, 2],
dense_shape=[4, 10])
st_b = tf.SparseTensor(indices=[[0, 2], [7, 0]],
values=[56, 38],
dense_shape=[4, 10])
st_sum = tf.sparse.add(st_a, st_b)
print(pprint_sparse_tensor(st_sum))
```
Use `tf.sparse.sparse_dense_matmul` to multiply sparse tensors with dense matrices.
```
st_c = tf.SparseTensor(indices=([0, 1], [1, 0], [1, 1]),
values=[13, 15, 17],
dense_shape=(2,2))
mb = tf.constant([[4], [6]])
product = tf.sparse.sparse_dense_matmul(st_c, mb)
print(product)
```
Put sparse tensors together by using `tf.sparse.concat` and take them apart by using `tf.sparse.slice`.
```
sparse_pattern_A = tf.SparseTensor(indices = [[2,4], [3,3], [3,4], [4,3], [4,4], [5,4]],
values = [1,1,1,1,1,1],
dense_shape = [8,5])
sparse_pattern_B = tf.SparseTensor(indices = [[0,2], [1,1], [1,3], [2,0], [2,4], [2,5], [3,5],
[4,5], [5,0], [5,4], [5,5], [6,1], [6,3], [7,2]],
values = [1,1,1,1,1,1,1,1,1,1,1,1,1,1],
dense_shape = [8,6])
sparse_pattern_C = tf.SparseTensor(indices = [[3,0], [4,0]],
values = [1,1],
dense_shape = [8,6])
sparse_patterns_list = [sparse_pattern_A, sparse_pattern_B, sparse_pattern_C]
sparse_pattern = tf.sparse.concat(axis=1, sp_inputs=sparse_patterns_list)
print(tf.sparse.to_dense(sparse_pattern))
sparse_slice_A = tf.sparse.slice(sparse_pattern_A, start = [0,0], size = [8,5])
sparse_slice_B = tf.sparse.slice(sparse_pattern_B, start = [0,5], size = [8,6])
sparse_slice_C = tf.sparse.slice(sparse_pattern_C, start = [0,10], size = [8,6])
print(tf.sparse.to_dense(sparse_slice_A))
print(tf.sparse.to_dense(sparse_slice_B))
print(tf.sparse.to_dense(sparse_slice_C))
```
If you're using TensorFlow 2.4 or above, use `tf.sparse.map_values` for elementwise operations on nonzero values in sparse tensors.
```
st2_plus_5 = tf.sparse.map_values(tf.add, st2, 5)
print(tf.sparse.to_dense(st2_plus_5))
```
Note that only the nonzero values were modified – the zero values stay zero.
Equivalently, you can follow the design pattern below for earlier versions of TensorFlow:
```
st2_plus_5 = tf.SparseTensor(
st2.indices,
st2.values + 5,
st2.dense_shape)
print(tf.sparse.to_dense(st2_plus_5))
```
## Using `tf.SparseTensor` with other TensorFlow APIs
Sparse tensors work transparently with these TensorFlow APIs:
* `tf.keras`
* `tf.data`
* `tf.Train.Example` protobuf
* `tf.function`
* `tf.while_loop`
* `tf.cond`
* `tf.identity`
* `tf.cast`
* `tf.print`
* `tf.saved_model`
* `tf.io.serialize_sparse`
* `tf.io.serialize_many_sparse`
* `tf.io.deserialize_many_sparse`
* `tf.math.abs`
* `tf.math.negative`
* `tf.math.sign`
* `tf.math.square`
* `tf.math.sqrt`
* `tf.math.erf`
* `tf.math.tanh`
* `tf.math.bessel_i0e`
* `tf.math.bessel_i1e`
Examples are shown below for a few of the above APIs.
### `tf.keras`
A subset of the `tf.keras` API supports sparse tensors without expensive casting or conversion ops. The Keras API lets you pass sparse tensors as inputs to a Keras model. Set `sparse=True` when calling `tf.keras.Input` or `tf.keras.layers.InputLayer`. You can pass sparse tensors between Keras layers, and also have Keras models return them as outputs. If you use sparse tensors in `tf.keras.layers.Dense` layers in your model, they will output dense tensors.
The example below shows you how to pass a sparse tensor as an input to a Keras model if you use only layers that support sparse inputs.
```
x = tf.keras.Input(shape=(4,), sparse=True)
y = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(x, y)
sparse_data = tf.SparseTensor(
indices = [(0,0),(0,1),(0,2),
(4,3),(5,0),(5,1)],
values = [1,1,1,1,1,1],
dense_shape = (6,4)
)
model(sparse_data)
model.predict(sparse_data)
```
### `tf.data`
The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is `tf.data.Dataset`, which represents a sequence of elements in which each element consists of one or more components.
#### Building datasets with sparse tensors
Build datasets from sparse tensors using the same methods that are used to build them from `tf.Tensor`s or NumPy arrays, such as `tf.data.Dataset.from_tensor_slices`. This op preserves the sparsity (or sparse nature) of the data.
```
dataset = tf.data.Dataset.from_tensor_slices(sparse_data)
for element in dataset:
    print(pprint_sparse_tensor(element))
```
#### Batching and unbatching datasets with sparse tensors
You can batch (combine consecutive elements into a single element) and unbatch datasets with sparse tensors using the `Dataset.batch` and `Dataset.unbatch` methods respectively.
```
batched_dataset = dataset.batch(2)
for element in batched_dataset:
    print(pprint_sparse_tensor(element))

unbatched_dataset = batched_dataset.unbatch()
for element in unbatched_dataset:
    print(pprint_sparse_tensor(element))
```
You can also use `tf.data.experimental.dense_to_sparse_batch` to batch dataset elements of varying shapes into sparse tensors.
#### Transforming Datasets with sparse tensors
Transform and create sparse tensors in Datasets using `Dataset.map`.
```
transform_dataset = dataset.map(lambda x: x*2)
for i in transform_dataset:
    print(pprint_sparse_tensor(i))
```
### tf.train.Example
`tf.train.Example` is a standard protobuf encoding for TensorFlow data. When using sparse tensors with `tf.train.Example`, you can:
* Read variable-length data into a `tf.SparseTensor` using `tf.io.VarLenFeature`. However, you should consider using `tf.io.RaggedFeature` instead.
* Read arbitrary sparse data into a `tf.SparseTensor` using `tf.io.SparseFeature`, which uses three separate feature keys to store the `indices`, `values`, and `dense_shape`.
### `tf.function`
The `tf.function` decorator precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Sparse tensors work transparently with both `tf.function` and [concrete functions](https://www.tensorflow.org/guide/function#obtaining_concrete_functions).
```
@tf.function
def f(x, y):
    return tf.sparse.sparse_dense_matmul(x, y)
a = tf.SparseTensor(indices=[[0, 3], [2, 4]],
values=[15, 25],
dense_shape=[3, 10])
b = tf.sparse.to_dense(tf.sparse.transpose(a))
c = f(a,b)
print(c)
```
## Distinguishing missing values from zero values
Most ops on `tf.SparseTensor`s treat missing values and explicit zero values identically. This is by design — a `tf.SparseTensor` is supposed to act just like a dense tensor.
However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero.
Note: This is generally not the intended usage of `tf.SparseTensor`s, and you might also want to consider other techniques for encoding this, such as using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically.
Note that some ops like `tf.sparse.reduce_max` do not treat missing values as if they were zero. For example, when you run the code block below, the expected output is `0`. However, because of this exception, the output is `-3`.
```
print(tf.sparse.reduce_max(tf.sparse.from_dense([-5, 0, -3])))
```
In contrast, when you apply `tf.math.reduce_max` to a dense tensor, the output is 0 as expected.
```
print(tf.math.reduce_max([-5, 0, -3]))
```
## Further reading and resources
* Refer to the [tensor guide](https://www.tensorflow.org/guide/tensor) to learn about tensors.
* Read the [ragged tensor guide](https://www.tensorflow.org/guide/ragged_tensor) to learn how to work with ragged tensors, a type of tensor that lets you work with non-uniform data.
* Check out this object detection model in the [TensorFlow Model Garden](https://github.com/tensorflow/models) that uses sparse tensors in a [`tf.Example` data decoder](https://github.com/tensorflow/models/blob/9139a7b90112562aec1d7e328593681bd410e1e7/research/object_detection/data_decoders/tf_example_decoder.py).
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import models
import os
import random
import cv2
df = pd.read_csv("full_df.csv")
df.sample(5)
img_dir = r'C:\Users\KIIT\Documents\Deep Learning\preprocessed_images'
df = df.iloc[:,1:7]
df.head()
s1 = df['Left-Diagnostic Keywords']
print(s1)
s2 = df['Right-Diagnostic Keywords']
s2
# Collapse any keyword containing 'hyper' to the single label 'Hypertension'
for i in range(6392):
    if 'hyper' in s1[i]:
        s1[i] = 'Hypertension'
for i in range(6392):
    if 'hyper' in s2[i]:
        s2[i] = 'Hypertension'
df.sample(10)
df_left_hyp = df[df['Left-Diagnostic Keywords'] == 'Hypertension']
df_left_hyp.head()
len(df_left_hyp)
df_rt_hyp = df[df['Right-Diagnostic Keywords'] == 'Hypertension']
df_rt_hyp.head()
len(df_rt_hyp)
df_hyp_filenames = pd.concat([df_left_hyp['Left-Fundus'], df_rt_hyp['Right-Fundus']], ignore_index=True)  # Series.append was removed in pandas 2.0
df_hyp_filenames.head()
len(df_hyp_filenames)
img = df_hyp_filenames[34]
image = cv2.imread(os.path.join(img_dir, img))
plt.imshow(image)
print(image.shape)
print(img)
# Grid of fundus images labeled with hypertension
plt.figure(figsize=(8,8))
for i in range(9):
    img = df_hyp_filenames[i+9]
    image = cv2.imread(os.path.join(img_dir, img))
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.subplot(3, 3, i+1)
    plt.imshow(image_rgb)
    plt.xlabel('Filename: {}\nHypertension'.format(df_hyp_filenames[i+9]))
plt.tight_layout()
df_left_nor = df[df['Left-Diagnostic Keywords'] == 'normal fundus']
df_left_nor.head()
df_rt_nor = df[df['Right-Diagnostic Keywords'] == 'normal fundus']
df_rt_nor.head()
df_nor_filenames = pd.concat([df_left_nor['Left-Fundus'], df_rt_nor['Right-Fundus']], ignore_index=True)  # Series.append was removed in pandas 2.0
df_nor_filenames
df_nor_filenames = df_nor_filenames.sample(400)
df_nor_filenames = df_nor_filenames.reset_index(drop=True)
df_nor_filenames.head()
# Grid of normal-eye images
plt.figure(figsize=(8,8))
for i in range(9):
    img = df_nor_filenames[i+9]
    image = cv2.imread(os.path.join(img_dir, img))
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.subplot(3, 3, i+1)
    plt.imshow(image_rgb)
    plt.xlabel('Filename: {}\nNormal'.format(df_nor_filenames[i+9]))
plt.tight_layout()
df_hyp_filenames = pd.DataFrame(df_hyp_filenames, columns = ["filename"])
df_hyp_filenames["label"] = "Hypertension"
df_hyp_filenames.head()
df_nor_filenames = pd.DataFrame(df_nor_filenames, columns = ["filename"])
df_nor_filenames["label"] = "normal"
df_nor_filenames.head()
df_combined = pd.concat([df_hyp_filenames, df_nor_filenames], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
df_combined
df_combined = df_combined.sample(785)
df_combined = df_combined.reset_index(drop=True)
df_combined
a = np.array(df_combined.filename)
a.shape
paths = []
type(paths)
# Build the full path to each image file
for i in range(785):
    img = a[i]
    image = os.path.join(img_dir, img)
    paths.append(image)
paths
# Read each image and resize it to 224x224
data = []
for i in range(785):
    img = paths[i]
    image = cv2.imread(img)
    image = cv2.resize(image, (224, 224))
    data.append(image)
len(data)
data = np.array(data)
```
## Scaling the Data in the range of 0 to 1
```
data = data/255
x = data
# Encode labels: Hypertension -> 1, normal -> 0
y = []
for i in df_combined.label:
    if i == 'Hypertension':
        y.append(1)
    else:
        y.append(0)
y = np.array(y)
len(x)
len(y)
```
## Splitting the Data into the test,train,val
```
from sklearn.model_selection import train_test_split
x_train,x_val,y_train,y_val = train_test_split(x,y,test_size=0.2)
x_val,x_test,y_val,y_test = train_test_split(x_val,y_val,test_size=0.5)
print(len(x_train))
print(len(x_val))
print(len(x_test))
```
### Data Augmentation
```
from tensorflow.keras import layers
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=(224,224,3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
from tensorflow.keras.models import Sequential
num_classes = 2
model = models.Sequential([
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes,activation = 'softmax')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the final Dense layer already applies softmax
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=30)
```
Training was interrupted partway through because accuracy was not improving after 20 epochs, so we decided to switch to a pretrained model.
# Transfer Learning
### MobileNet V2 Model
```
import numpy as np
import cv2
import PIL.Image as Image
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
pretrained_model_without_top_layer = hub.KerasLayer(
feature_extractor_model, input_shape=(224, 224, 3), trainable=False)
num_of_classes = 2
model = tf.keras.Sequential([
pretrained_model_without_top_layer,
tf.keras.layers.Dense(num_of_classes)
])
model.summary()
model.compile(
optimizer="adam",
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10)
model.evaluate(x_val,y_val)
model.evaluate(x_test,y_test)
y_pre = model.predict(x_test)
y_pred = [np.argmax(i) for i in y_pre]
y_pred[:10]
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred))
```
## EfficientNet B4 Model
```
model1 = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/tensorflow/efficientnet/b4/feature-vector/1",
trainable=False,input_shape=(224, 224, 3)),
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model1.summary()
model1.compile(
    optimizer="nadam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the softmax layer above outputs probabilities
    metrics=['accuracy'])
model1.fit(x_train, y_train, epochs=10)
model1.evaluate(x_val,y_val)
```
# VGG-16 Model
```
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.models import Sequential
from keras import layers,models
from keras.layers import Flatten,Dense
from keras.models import Model
IMAGE_SIZE = [224,224]
vgg = VGG16(input_shape=IMAGE_SIZE+[3],weights='imagenet',include_top=False)
for layer in vgg.layers:  # don't shadow the imported `layers` module
    layer.trainable = False
x = Flatten()(vgg.output)
prediction = Dense(2,activation = 'softmax')(x)
model2 = Model(inputs = vgg.input, outputs = prediction)
model2.summary()
model2.compile(optimizer = 'adam',
               loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the output layer already applies softmax
metrics = ['accuracy'])
model2.fit(x_train,y_train,epochs=10)
```
## ResNet-50 Model
```
num_classes = 2
model3 = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/tensorflow/resnet_50/classification/1",
trainable=False,input_shape=(224, 224, 3)),
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model3.summary()
model3.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # the softmax layer above outputs probabilities
    metrics=['accuracy'])
model3.fit(x_train, y_train, epochs=10)
model3.evaluate(x_val,y_val)
model3.evaluate(x_test,y_test)
y_pre = model3.predict(x_val)
y_pred = [np.argmax(i) for i in y_pre]
y_pred[:10]
y_val[:10]
print(classification_report(y_val,y_pred))
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/distribution_strategy_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/distribution_strategy_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/distribution_strategy_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
The `tf.distribute.Strategy` API is an easy way to distribute your training
across multiple devices/machines. Our goal is to allow users to use existing
models and training code with minimal changes to enable distributed training.
Currently, core TensorFlow includes `tf.distribute.MirroredStrategy`. This
does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it creates copies of all variables in the model's layers on each
device. It then uses all-reduce to combine gradients across the devices before
applying them to the variables to keep them in sync.
Many other strategies will soon be
available in core TensorFlow. You can find more information about them in the
[README](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).
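The all-reduce step can be illustrated independently of TensorFlow: each replica computes a gradient on its shard of the batch, the gradients are averaged, and every replica receives the same combined result. A toy sketch, assuming plain Python lists stand in for per-replica gradients:

```python
def all_reduce_mean(per_replica_grads):
    """Average gradients elementwise across replicas; every replica then
    applies the same averaged gradient, keeping variables in sync."""
    n = len(per_replica_grads)
    return [sum(g) / n for g in zip(*per_replica_grads)]

# Two replicas, each holding gradients for three variables
# (values chosen to be exactly representable in binary floating point).
grads_replica_0 = [0.25, -0.5, 1.0]
grads_replica_1 = [0.75, -0.25, 0.0]
combined = all_reduce_mean([grads_replica_0, grads_replica_1])
print(combined)  # [0.5, -0.375, 0.5], identical on every replica
```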
## Example with Keras API
The easiest way to get started with multiple GPUs on one machine using `MirroredStrategy` is with `tf.keras`.
```
from __future__ import absolute_import, division, print_function
# Import TensorFlow
!pip install tf-nightly-gpu-2.0-preview
import tensorflow_datasets as tfds
import tensorflow as tf
import os
```
## Download the dataset
Download the MNIST dataset to train our model on. Use [TensorFlow Datasets](https://www.tensorflow.org/datasets) to load the dataset. This returns a dataset in `tf.data` format.
```
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
`with_info=True` returns the metadata for the entire dataset.
In this example, `ds_info.splits.total_num_examples = 70000`.
```
num_train_examples = ds_info.splits['train'].num_examples
num_test_examples = ds_info.splits['test'].num_examples
BUFFER_SIZE = num_train_examples
BATCH_SIZE = 64
```
## Input data pipeline
```
def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label
train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
```
## Define Distribution Strategy
To distribute a Keras model on multiple GPUs using `MirroredStrategy`, we first instantiate a `MirroredStrategy` object.
```
strategy = tf.distribute.MirroredStrategy()
print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
## Create the model
Create and compile the Keras model in the `strategy.scope`.
```
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    # TODO(yashkatariya): Add accuracy when b/122371345 is fixed.
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam())
    #metrics=['accuracy'])
```
## Define the callbacks.
The callbacks used here are:
* *Tensorboard*: This callback writes a log for Tensorboard which allows you to visualize the graphs.
* *Model Checkpoint*: This callback saves the model after every epoch.
* *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch.
```
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can use a complicated decay equation too.
def decay(epoch):
    if epoch < 3:
        return 1e-3
    elif epoch >= 3 and epoch < 7:
        return 1e-4
    else:
        return 1e-5
# Callback for printing the LR at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('\nLearning rate for epoch {} is {}'.format(
            epoch + 1, model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
```
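The piecewise decay schedule above is easy to sanity-check in isolation. Restating the function here so the snippet is self-contained:

```python
# Step-wise learning-rate schedule: 1e-3 for the first 3 epochs,
# 1e-4 up to (but not including) epoch 7, and 1e-5 afterwards.
def decay(epoch):
    if epoch < 3:
        return 1e-3
    elif epoch < 7:
        return 1e-4
    return 1e-5

# The schedule as LearningRateScheduler would see it over 10 epochs.
schedule = [decay(epoch) for epoch in range(10)]
```

`LearningRateScheduler` calls this function with the epoch index at the start of each epoch, so the printed rates from `PrintLR` should follow this exact staircase.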
## Train and evaluate
To train the model, call the Keras `fit` API with the input dataset that was
created earlier, just as in the non-distributed case.
```
model.fit(train_dataset, epochs=10, callbacks=callbacks)
```
As you can see below, the checkpoints are getting saved.
```
# check the checkpoint directory
!ls {checkpoint_dir}
```
Let's load the latest checkpoint and see how the model performs on the test dataset.
Call `evaluate` as before using appropriate datasets.
```
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss = model.evaluate(eval_dataset)
print ('Eval loss: {}'.format(eval_loss))
```
You can download the tensorboard logs and then use the following command to see the output.
```
tensorboard --logdir=path/to/log-directory
```
```
!ls -sh ./logs
```
## What's next?
Read the [distribution strategy guide](../../../guide/distribute_strategy.ipynb).
Try the [distribution strategy with training loops](training_loops.ipynb) tutorial to use `tf.distribute.Strategy` with custom training loops.
`tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try, we welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
---
# Speed Skydiving Analysis and Scoring 2019
Analyze one or more FlySight files with speed skydiving data.
This document implements scoring techniques compatible with the FAI World Air Sports Federation [Speed Skydiving Competition Rules, 2019 Edition](https://www.fai.org/sites/default/files/documents/2019_ipc_cr_speedskydiving.pdf) (PDF, 428 KB).
## Environment setup
```
from ssscore import COURSE_END
from ssscore import DEG_IN_RAD
from ssscore import FLYSIGHT_SAMPLE_TIME
from ssscore import FLYSIGHT_TIME_FORMAT
from ssscore import RESOURCE_PATH
from ssscore import SPEED_INTERVAL
from ssscore import VALID_MSL
import dateutil.parser
import inspect
import math
import os
import os.path
import shutil
import pandas as pd
import ssscore
```
### Known drop zones AMSL in meters
The `ssscore` module defines these altitudes; the DZ name corresponds to the symbolic constant, e.g. Bay Area Skydiving ::= `BAY_AREA_SKYDIVING`. The altitudes were culled from public airport information available on the Worldwide Web.
```
from ssscore.elevations import DZElevations # in meters
dir(DZElevations)
```
#### Set the appropriate DZ elevation
```
DZ_AMSL = DZElevations.BAY_AREA_SKYDIVING.value
DZ_AMSL
```
## FlySight data sources
1. Copy the FlySight `Tracks/YY-MM-dd` directory of interest to the `DATA_LAKE` directory;
the `DATA_LAKE` can also be an external mount, a Box or Dropbox share, anything that
can be mapped to a directory -- even a whole drive!
1. Make a list of all CSV files in the FlySight `DATA_LAKE`
1. Move the CSV to the `DATA_SOURCE` bucket directory
### Define the data lake and data source

<a id="l_data-def"></a>
```
DATA_LAKE = os.path.join('.', RESOURCE_PATH)
DATA_SOURCE = os.path.join('.', 'data-sources', ssscore.RESOURCE_PATH)
ssscore.updateFlySightDataSource(DATA_LAKE, DATA_SOURCE)
```
## Top speed and pitch analysis on a single data file
User selects a valid FlySight data path and file name. Call the `calculateSpeedAndPitchFor(fileName)` function with this file name. The function produces:
* Maximum mean speed within a 3-second interval within the course
* Flight pitch in degrees at max speed
* Max speed
* Min speed
All speeds are reported in km/h.
### Explanation of the code in calculateSpeedAndPitchFor(fileName)
1. Discard all source entries outside of the exit altitude to 1,700 m AGL course
1. Resolve the max speed and pitch within the course
1. Calculate the max mean speed for the 3-second interval near the max speed
1. Return the results in a Series object for later inclusion in a data frame with results from multiple jumps
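The 3-second max-mean-speed rule in step 3 can be sketched with a rolling mean. This is an illustrative stand-in for the notebook's windowing code, not its actual implementation; it assumes FlySight's 5 Hz sample rate, so a 3-second window holds 15 samples:

```python
import pandas as pd

SAMPLES_PER_WINDOW = 15  # 3 s at an assumed 5 Hz FlySight sample rate

# Hypothetical descent speeds in km/h: slow start, fast plateau, slowdown
velD = pd.Series([100.0] * 10 + [120.0] * 15 + [110.0] * 10)

# Mean speed over every complete 3-second window; the score is the best one
rolling_mean = velD.rolling(window=SAMPLES_PER_WINDOW).mean()
best_window_speed = rolling_mean.max()
```

Here the best 3-second window sits entirely on the 120 km/h plateau, so the scored speed is 120 km/h even though individual samples reach no higher.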
### Specify the FlySight data file to analyze
```
DATA_SOURCE = os.path.join('.', 'data-sources', ssscore.RESOURCE_PATH)
FLYSIGHT_DATA_FILE = 'FlySight-test-file.csv'
def _discardDataOutsideCourse(flightData):
    maxHeight = flightData['hAGL', '(m)'].max()
    height = flightData['hAGL', '(m)']
    descentVelocity = flightData['velD', '(m/s)']
    flightData = flightData[(height <= maxHeight) & (height >= COURSE_END) & (descentVelocity >= 0.0)]
    return flightData
import datetime
import time
def _convertToUnixTime(dateString):
    """
    Converts the dateString in FLYSIGHT_TIME_FORMAT into Unix time,
    expressed in hundredths of a second (i.e. 100*timestamp)
    """
    timestamp = datetime.datetime.strptime(dateString, FLYSIGHT_TIME_FORMAT)
    epoch = datetime.datetime(1970, 1, 1)
    return int(100.0*(timestamp-epoch).total_seconds())
def _selectValidSpeedAnalysisWindowsIn(flightData):
    startTime = flightData['unixTime'].iloc[0]
    stopTime = flightData['unixTime'].iloc[-1]
    windows = []
    unixTime = flightData['unixTime']
    for intervalStart in range(startTime, stopTime, FLYSIGHT_SAMPLE_TIME):
        intervalEnd = intervalStart+SPEED_INTERVAL
        window = flightData[(unixTime >= intervalStart) & (unixTime < intervalEnd)]
        # Keep only windows that contain a complete set of samples
        if len(window) == (SPEED_INTERVAL/FLYSIGHT_SAMPLE_TIME):
            windows.append(window)
    return windows
def _calculateCourseSpeedUsing(flightData):
    """
    Returns the valid 3-second speed analysis windows within the course.
    """
    windows = _selectValidSpeedAnalysisWindowsIn(flightData)
    return windows
def maxHorizontalSpeedFrom(flightData, maxVerticalSpeed):
    atMaxSpeed = flightData[flightData['velD', '(m/s)'] == maxVerticalSpeed]
    velN = atMaxSpeed['velN', '(m/s)'].iloc[0]
    velE = atMaxSpeed['velE', '(m/s)'].iloc[0]
    return math.sqrt(velN**2+velE**2)  # magnitude of the horizontal velocity vector
"""
Adjusts the flight data to compensate for DZ elevation AMSL.
flightData - the raw FlySight data frame
elevation - the elevation, in meters, to adjust
All hMSL values in flightData will be offset by +elevation meters.
"""
def adjustElevation(flightData, elevation):
flightData['hAGL', '(m)'] = flightData['hMSL', '(m)']-elevation
return flightData
def calculateSpeedAndPitchFor(fileName, elevation = 0.00):
    """
    Accepts a file name to a FlySight data file.
    Returns a Series with the results of a speed skydiving jump.
    """
    flightData = adjustElevation(pd.read_csv(fileName, header = [0, 1]), elevation)
    flightData = _discardDataOutsideCourse(flightData)
    flightData['unixTime'] = flightData['time'].iloc[:,0].apply(_convertToUnixTime)
    return _calculateCourseSpeedUsing(flightData)
calculateSpeedAndPitchFor(os.path.join(DATA_SOURCE, FLYSIGHT_DATA_FILE), elevation = DZ_AMSL)
```
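The timestamp conversion in `_convertToUnixTime` can be checked standalone. `FLYSIGHT_TIME_FORMAT` actually comes from `ssscore`; the ISO-like format string below is only an assumption for this sketch:

```python
import datetime

FLYSIGHT_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'  # assumed format for this sketch

def convertToUnixTime(dateString):
    # Unix time in hundredths of a second, as in the notebook's helper
    timestamp = datetime.datetime.strptime(dateString, FLYSIGHT_TIME_FORMAT)
    epoch = datetime.datetime(1970, 1, 1)
    return int(100.0 * (timestamp - epoch).total_seconds())

# 2019-01-01T00:00:00Z is 1546300800 s after the Unix epoch, so half a
# second later should convert to 154630080050 hundredths of a second.
hundredths = convertToUnixTime('2019-01-01T00:00:00.50Z')
```

Working in integer hundredths of a second keeps the window arithmetic in `_selectValidSpeedAnalysisWindowsIn` exact, since `range` needs integer steps.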
## Listing FlySight data files with valid data
This setup generates a list of FlySight data files ready for analysis. It discards any warm-up FlySight data files: those that show no significant change in elevation MSL across the complete data set.
The list generator uses the `DATA_SOURCE` global variable. [Change the value of `DATA_SOURCE`](#l_data-def) if necessary.
1. Generate the list of available data files
1. Discard FlySight warm up files by rejecting files without jump data (test: minimal altitude changes)

```
def listDataFilesIn(bucketPath):
    """
    Generate a sorted list of files available in a given bucketPath.
    File names appear in reverse lexicographical order.
    """
    filesList = pd.Series([os.path.join(bucketPath, fileName)
                           for fileName in sorted(os.listdir(bucketPath), reverse = True)
                           if fileName.lower().endswith('.csv')])
    return filesList
def _hasValidJumpData(fileName):
    flightData = pd.read_csv(fileName, header = [0, 1])['hMSL', '(m)']
    return flightData.std() >= VALID_MSL

def selectValidFlySightFilesFrom(dataFiles):
    included = dataFiles.apply(_hasValidJumpData)
    return pd.Series(dataFiles)[included]
dataFiles = selectValidFlySightFilesFrom(listDataFilesIn(DATA_SOURCE))
dataFiles
```
## Top speed and pitch analysis on all tracks in the data lake
Takes all the FlySight files in a bucket, detects the ones with valid data, and runs performance analysis over them. Packs all the results in a data frame, then calculates:
* Average speed
* Max average speed
### Populate data sources
```
DATA_LAKE = './data-lake'
DATA_SOURCE = './data-sources/ciurana'
ssscore.updateFlySightDataSource(DATA_LAKE, DATA_SOURCE)
# DATA_LAKE = './data-lake'
# DATA_SOURCE = './data-sources/landgren'
#
# ssscore.updateFlySightDataSource(DATA_LAKE, DATA_SOURCE)
```
### Analyze all files in the bucket
```
allCompetitionJumps = selectValidFlySightFilesFrom(listDataFilesIn(DATA_SOURCE)).apply(calculateSpeedAndPitchFor)
allCompetitionJumps
```
### Summary of results
```
summary = pd.Series(
[
len(allCompetitionJumps),
allCompetitionJumps['maxSpeed'].mean(),
allCompetitionJumps['pitch'].mean(),
allCompetitionJumps['maxSpeed'].max(),
allCompetitionJumps['pitch'].max(),
],
[
'totalJumps',
'meanSpeed',
'pitch',
'maxSpeed',
'maxPitch',
])
summary
```
---
# Interleaved Randomized Benchmarking
* **Last Updated:** August 19, 2019
* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.2, qiskit-aer 0.2
## Introduction
**Interleaved Randomized Benchmarking** is a variant of the Randomized Benchmarking (RB) method that is used for benchmarking individual Clifford gates via randomization. The protocol consists of interleaving random gates between the given Clifford gate of interest. The protocol estimates the gate error of the given Clifford.
The method is based on the paper *"Efficient measurement of quantum gate error by interleaved randomized benchmarking"*(https://arxiv.org/abs/1203.4550).
This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module in order to perform interleaved RB.
```
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
```
## Select the Parameters of the Interleaved RB Run
First, we need to choose the regular RB parameters:
- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.
- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.
- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]] or [[0,1]].
- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.
- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).
- **align_cliffs:** If true adds a barrier across all qubits in rb_pattern after each set of Cliffords.
As well as another parameter for interleaved RB:
- **interleaved_gates:** A list of gates of Clifford elements that will be interleaved. The length of the list must equal the length of the rb_pattern.
In this example we have 3 qubits Q0,Q1,Q2.
We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously,
where there are three times as many 1Q Clifford gates (matching `length_multiplier = [1,3]` below).
```
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
#Interleaved Clifford gates (2-qubits and 1-qubit)
interleaved_gates = [['h 0', 'cx 0 1'],['x 0']]
```
## Generate Interleaved RB sequences
We generate RB sequences. We start with a small example (so it doesn't take too long to run).
In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits,
we run the function `rb.randomized_benchmarking_seq`.
This function returns:
- **rb_original_circs:** A list of lists of circuits for the original RB sequences (separate list for each seed).
- **xdata:** The Clifford lengths (with multiplier if applicable).
As well as:
- **rb_interleaved_circs**: A list of lists of circuits for the interleaved RB sequences (separate list for each seed).
```
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
#rb_opts['align_cliffs'] = True
rb_opts['interleaved_gates'] = interleaved_gates
rb_original_circs, xdata, rb_interleaved_circs = rb.randomized_benchmarking_seq(**rb_opts)
```
As an example, we print the circuit corresponding to the first original and interleaved RB sequences:
```
#Original RB circuits
print (rb_original_circs[0][0])
#Interleaved RB circuits
print (rb_interleaved_circs[0][0])
```
## Define the noise model
We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
```
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
```
We can execute the original and interleaved RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, `result_list`.
```
#Original RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
original_result_list = []
original_qobj_list = []
import time
for rb_seed, rb_circ_seed in enumerate(rb_original_circs):
    print('Compiling seed %d' % rb_seed)
    new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
    qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
    print('Simulating seed %d' % rb_seed)
    job = backend.run(qobj, noise_model=noise_model,
                      backend_options={'max_parallel_experiments': 0})
    original_result_list.append(job.result())
    original_qobj_list.append(qobj)
print("Finished Simulating Original Circuits")
#Interleaved RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
interleaved_result_list = []
interleaved_qobj_list = []
import time
for rb_seed, rb_circ_seed in enumerate(rb_interleaved_circs):
    print('Compiling seed %d' % rb_seed)
    new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
    qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
    print('Simulating seed %d' % rb_seed)
    job = backend.run(qobj, noise_model=noise_model,
                      backend_options={'max_parallel_experiments': 0})
    interleaved_result_list.append(job.result())
    interleaved_qobj_list.append(qobj)
print("Finished Simulating Interleaved Circuits")
```
## Fit the results
We fit the results of the original RB circuits and the interleaved RB circuits into an exponentially decaying function and obtain the *Errors per Clifford* $\alpha$ and $\alpha_C$ of the original and interleaved sequences, respectively.
```
#Create the original and interleaved RB fitter
original_rb_fit = rb.RBFitter(original_result_list, xdata, rb_opts['rb_pattern'])
interleaved_rb_fit = rb.RBFitter(interleaved_result_list, xdata, rb_opts['rb_pattern'])
```
### Calculate the interleaved gate error fidelity
From the values of $\alpha$ and $\alpha_C$ we obtain the gate error of the interleaved Clifford $C$, namely $r_C=1-$(average gate fidelity of the interleaved Clifford $C$), which is estimated by:
$$ EPC^{est} = r_C^{est} = \frac{(2^n-1)(1-\alpha_C/\alpha)}{2^n}$$
and must lie in the range given by certain systematic error bounds:
$$[r_C^{est}-E,r_C^{est}+E]$$
for each of the patterns.
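As a quick numeric illustration of the estimator (the decay parameters below are made-up values, not fitted ones): with $n=2$ qubits, $\alpha=0.99$ and $\alpha_C=0.98$,

```python
import math

n = 2                         # number of qubits in the pattern
alpha, alpha_c = 0.99, 0.98   # hypothetical fitted decay parameters

# EPC estimate for the interleaved Clifford:
# (2^n - 1) * (1 - alpha_C / alpha) / 2^n
epc_est = (2**n - 1) * (1 - alpha_c / alpha) / 2**n
```

which gives roughly 0.0076. Note that the ratio $\alpha_C/\alpha$ is what isolates the interleaved gate's contribution from the error of the random Cliffords surrounding it.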
```
#Calculate the joint fitter
joint_rb_fit = rb.InterleavedRBFitter(original_result_list, interleaved_result_list, xdata, rb_opts['rb_pattern'])
#Print the joint fitter parameters
for patt_ind, pattern in enumerate(rb_pattern):
    print('pattern:', patt_ind, '-', len(pattern), 'qubit interleaved RB:', joint_rb_fit.fit_int[patt_ind])
#Plot the joint RB data
plt.figure(figsize=(15, 6))
for i in range(2):
    ax = plt.subplot(1, 2, i+1)
    pattern_ind = i
    # Plot the essence by calling plot_rb_data
    joint_rb_fit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
    # Add title and label
    ax.set_title('%d Qubit interleaved RB' % (len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
---
# Generalized Multi-Tissue Modeling
The measured PGSE diffusion signal depends on Echo Time (TE), gradient strength (G), orientation $\textbf{n}$, pulse separation $\Delta$ and pulse duration $\delta$. The signal representation can be separated in terms of amplitude and the shape:
\begin{equation}\label{eq:separation}
S(G, \textbf{n}, \Delta, \delta, TE)=S_0(TE)\cdot E(G,\textbf{n}, \Delta, \delta).
\end{equation}
where we can notice that the amplitude only depends on TE and the shape on all the others. In most models (NODDI, SMT, VERDICT etc.) ONLY the signal shape is fitted. Fitting the signal itself has only been explored in Multi-Tissue models like Multi-Tissue CSD.
In Dmipy, we generalize multi-tissue modeling to ANY MC, MC-SM and MC-SH model.
## Setting up a Multi-Tissue model : Example Multi-Tissue Ball and Stick
### Instantiate base models
```
from dmipy.signal_models import gaussian_models, cylinder_models
# setting base models
ball = gaussian_models.G1Ball()
stick = cylinder_models.C1Stick()
models = [ball, stick]
# setting arbitrary S0 values for the example
S0_ball = 12.
S0_stick = 8.
S0_responses = [S0_ball, S0_stick]
```
### Instantiate Multi-tissue Ball and Stick for standard MC-model
A multi-tissue model is created exactly as a regular one - only the S0 response values need to be given upon multi-compartment model instantiation.
```
from dmipy.core import modeling_framework
mt_BAS_standard = modeling_framework.MultiCompartmentModel(
models=models,
S0_tissue_responses=S0_responses)
```
### Simulate some test signal
The multi-tissue model only differs from a standard model when fitting it to data. When simulating data it will still generate the signal attenuation.
```
from dmipy.data.saved_acquisition_schemes import wu_minn_hcp_acquisition_scheme
scheme = wu_minn_hcp_acquisition_scheme()
# generate test data.
params = {
'G1Ball_1_lambda_iso': 3e-9,
'C1Stick_1_mu': [0., 0.],
'C1Stick_1_lambda_par': 1.7e-9,
'partial_volume_0': 0.5, # equal volume fractions as SIGNAL fractions
'partial_volume_1': 0.5
}
# total signal intensity is 10
S0_signal = 10.
S = mt_BAS_standard.simulate_signal(scheme, params) * S0_signal
S.shape
```
### Fit Multi-Tissue model and compare estimated fractions
We can fit the model as usual; the multi-tissue optimization occurs AFTER the standard optimization, so it is an independent step that naturally follows other approaches.
```
mt_BAS_standard_fit = mt_BAS_standard.fit(scheme, S)
```
We now have access to the **signal fractions** based on the signal attenuation, and to the non-normalized and normalized **volume fractions** based on the signal.
```
sig_fracts = mt_BAS_standard_fit.fitted_parameters
vol_fracts = mt_BAS_standard_fit.fitted_multi_tissue_fractions
vol_fracts_norm = mt_BAS_standard_fit.fitted_multi_tissue_fractions_normalized
```
We can see that as we added equal signal fractions of the ball and stick, that indeed the estimated signal fractions are equal to each other.
```
sig_fracts
```
But the non-normalized volume fractions after the secondary optimization (which only estimates the linear volume fractions and does not impose unity) are now scaled according to the tissue-specific S0 responses:
```
vol_fracts
```
Since the normalized volume fractions are usually what we are after, we provide those as well. Here we can see that while the signal fractions are indeed equal, the **volume** fractions, in terms of signal produced by diffusing particles, are in fact 0.4 and 0.6.
```
vol_fracts_norm
```
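The conversion behind these numbers is easy to verify by hand: weighting each signal fraction by the inverse of its model's S0 response and renormalizing reproduces the 0.4/0.6 split. This is a sketch of the idea, not dmipy's internal code:

```python
# S0 responses per tissue, as set earlier in this notebook
S0_responses = {'ball': 12.0, 'stick': 8.0}
# Equal SIGNAL fractions, as simulated above
signal_fractions = {'ball': 0.5, 'stick': 0.5}

# Volume fractions are proportional to signal fraction / S0 response:
# a tissue with a higher S0 needs less volume to produce the same signal.
raw = {k: signal_fractions[k] / S0_responses[k] for k in signal_fractions}
total = sum(raw.values())
volume_fractions = {k: v / total for k, v in raw.items()}
```

Because the ball's S0 response (12) is higher than the stick's (8), the same signal fraction corresponds to less ball volume, giving 0.4 ball and 0.6 stick.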
Note that we can only set the S0 response to a single value per model for an MC-model (not voxel-varying), and data with multiple TE (so multiple S0 responses per model) is currently not implemented.
## Multi-Tissue Spherical Mean Modeling
Setting up Multi-Tissue modeling is exactly the same when setting up MC-spherical mean models:
```
mt_BAS_sm = modeling_framework.MultiCompartmentSphericalMeanModel(
models=models,
S0_tissue_responses=S0_responses)
mt_BAS_sm_fit = mt_BAS_sm.fit(scheme, S)
mt_BAS_sm_fit.fitted_parameters
mt_BAS_sm_fit.fitted_multi_tissue_fractions
```
## Multi-Tissue Spherical Harmonics Modeling
Similarly, MC-SH models are instantiated the same
```
mt_BAS_sh = modeling_framework.MultiCompartmentSphericalHarmonicsModel(
models=models,
S0_tissue_responses=S0_responses)
```
But the fitting procedure is implemented slightly differently, as a 1-step optimization, because fitting spherical harmonics is already convex.
## Implications
Including tissue responses allows correcting a *signal fraction* estimate into a *volume fraction* estimate. Fitting an MC-model representing tissues with different *true* S0 responses without including these estimates will result in biased volume fraction estimates.
As a good example, CSF has a much higher S0 response than white matter. This results in vast overestimation of CSF volume fractions in models such as NODDI. We illustrate this in the Multi-Tissue NODDI [example](https://nbviewer.jupyter.org/github/AthenaEPI/dmipy/blob/master/examples/example_multi_tissue_noddi.ipynb).
---
<a href="https://colab.research.google.com/github/Lawrence-Krukrubo/SQL_for_Data_Science/blob/main/sql_for_data_analysis2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<b><h1>SQL Joins...</h1></b>
We connect to MySQL server and workbench and make analysis with the parch-and-posey database.
This notebook contains the practicals of the course **SQL for Data Analysis** at Udacity.
```
# First we install mysql-connector-python
!pip install mysql-connector-python
print('done')
# we import some required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pprint import pprint
import time
print('Done!')
```
**Next, we create a connection to the parch-and-posey DataBase in MySQL Work-Bench**
```
import mysql
from mysql.connector import Error
from getpass import getpass
try:
    connection = mysql.connector.connect(host='localhost',
                                         database='parch_and_posey',
                                         user=input('Enter UserName:'),
                                         password=getpass('Enter Password:'))
    if connection.is_connected():
        db_Info = connection.get_server_info()
        print("Connected to MySQL Server version ", db_Info)
        cursor = connection.cursor()
        cursor.execute("select database();")
        record = cursor.fetchone()
        print("You're connected to database: ", record)
except Error as e:
    print("Error while connecting to MySQL", e)
```
Let's see the first 3 rows of each of the tables in the parch and posey database
```
# let's run the show tables command
cursor.execute('show tables')
out = cursor.fetchall()
out
```
Defining a method that converts a select query to a data frame
```
def query_to_df(query):
    st = time.time()
    # Assert every query ends with a semi-colon
    try:
        assert query.endswith(';')
    except AssertionError:
        return 'ERROR: Query Must End with ;'
    # So we never have more than 20 rows displayed
    pd.set_option('display.max_rows', 20)
    # Process the query and build a DataFrame from all results
    cursor.execute(query)
    colNames = [column[0] for column in cursor.description]
    df = pd.DataFrame(cursor.fetchall(), columns=colNames)
    print(f'Query ran for {time.time()-st} secs!')
    return df
# 1. For the accounts table
query = 'SELECT * FROM accounts LIMIT 3;'
query_to_df(query)
# 2. For the orders table
query = 'SELECT * FROM orders LIMIT 3;'
query_to_df(query)
# 3. For the region table
query = 'SELECT * FROM region LIMIT 3;'
query_to_df(query)
# 4. For the web_events table
query = 'SELECT * FROM web_events LIMIT 3;'
query_to_df(query)
# 5. For the sales_reps table
query = 'SELECT * FROM sales_reps LIMIT 3;'
query_to_df(query)
```
<h3>Overview</h3>
Writing Joins is the real strength and magic of SQL. Joins are used to read data from multiple tables to power your analysis.
<h3>Database Normalization</h3>
When creating a database, it is really important to think about how data will be stored. This is known as normalization, and it is a huge part of most SQL classes. If you are in charge of setting up a new database, it is important to have a thorough understanding of database normalization.
There are essentially three ideas that are aimed at database normalization:
* Are the tables storing logical groupings of the data?
* Can I make changes in a single location, rather than in many tables for the same information?
* Can I access and manipulate data quickly and efficiently?
This is discussed in detail [here](https://www.itprotoday.com/sql-server/sql-design-why-you-need-database-normalization).
<h3><b>Joins</b></h3>
The whole purpose of `JOIN` statements is to allow us to pull data from more than one table at a time.
Again - `JOINs` are useful for allowing us to pull data from multiple tables. This is both simple and powerful all at the same time.
With the addition of the `JOIN` statement to our toolkit, we will also be adding the `ON` statement.
We use `ON` clause to specify a `JOIN` condition which is a logical statement to combine the table in `FROM` and `JOIN` statements.
<h3>Join Statement Analysis</h3>
```
SELECT orders.*
FROM orders
JOIN accounts
ON orders.account_id = accounts.id;
```
The `SELECT` clause indicates which column(s) of data you'd like to see in the output (For Example, orders.* gives us all the columns in orders table in the output). The `FROM` clause indicates the first table from which we're pulling data, and the `JOIN` indicates the second table. The `ON` clause specifies the column on which you'd like to merge the two tables together.
```
query = 'SELECT orders.* FROM orders JOIN accounts \
ON orders.account_id = accounts.id;'
query_to_df(query)
```
**What to Notice**
We are able to pull data from two tables:
* orders
* accounts
Above, we are only pulling data from the orders table since in the `SELECT` statement we only reference columns from the orders table.
The `ON` statement holds the two columns that get linked across the two tables.
**Additional Information**
If we wanted to only pull individual elements from either the orders or accounts table, we can do this by using the exact same information in the `FROM` and `ON` statements. However, in your `SELECT` statement, you will need to know how to specify tables and columns in the `SELECT` statement:
The table name is always before the period.<br>
The column you want from that table is always after the period.
For example, if we want to pull only the account name and the dates in which that account placed an order, but none of the other columns, we can do this with the following query:
```
SELECT accounts.name, orders.occurred_at
FROM orders
JOIN accounts
ON orders.account_id = accounts.id;
```
```
query = 'SELECT accounts.name, orders.occurred_at FROM orders JOIN accounts ON \
orders.account_id = accounts.id;'
query_to_df(query)
```
This query only pulls two columns, not all the information in these two tables. Alternatively, the below query pulls all the columns from both the accounts and orders table.
```
SELECT *
FROM orders
JOIN accounts
ON orders.account_id = accounts.id;
```
```
query = 'SELECT * FROM orders JOIN accounts ON orders.account_id = accounts.id;'
query_to_df(query)
```
**Quiz Questions**
1. Try pulling all the data from the accounts table, and all the data from the orders table.
2. Try pulling standard_qty, gloss_qty, and poster_qty from the orders table, and the website and the primary_poc from the accounts table.
```
# Try pulling all the data from the accounts table, and all the data from the orders table.
query = 'SELECT * FROM accounts JOIN orders ON accounts.id = orders.account_id;'
query_to_df(query)
```
Another way to select all columns from the two above tables is...
```
query = """SELECT orders.*, accounts.*
FROM accounts
JOIN orders
ON accounts.id = orders.account_id;"""
query_to_df(query)
```
Notice this result is the same as if you switched the tables in the `FROM` and `JOIN`. <br>Additionally, which side of the `=` a column is listed doesn't matter.
Personally, I think it makes sense to keep it uniform... <br>Meaning make the table at the left side of the `=` be the first table selected, while that at the right side be the second table.
```
# Try pulling standard_qty, gloss_qty, and poster_qty from the orders table,
# and the website and the primary_poc from the accounts table.
query = 'SELECT orders.standard_qty, orders.gloss_qty, orders.poster_qty, \
accounts.website, accounts.primary_poc FROM orders JOIN accounts ON \
orders.account_id = accounts.id;'
query_to_df(query)
```
<h3>Entity Relationship Diagrams:</h3>
From the last lesson, you might remember that an entity relationship diagram (ERD) is a common way to view data in a database. It is also a key element to understanding how we can pull data from multiple tables.
It will be beneficial to have an idea of what the ERD looks like for Parch & Posey handy,
<img src='https://video.udacity-data.com/topher/2017/October/59e946e7_erd/erd.png'>
**Tables & Columns**
In the Parch & Posey database there are 5 tables:
* web_events
* accounts
* orders
* sales_reps
* region
You will notice some of the columns in the tables have PK or FK next to the column name, while other columns don't have a label at all.
If you look a little closer, you might notice that the PK is associated with the first column in every table. The PK here stands for primary key. A primary key exists in every table, and it is a column that has a unique value for every row.
If you look at the first few rows of any of the tables in our database, you will notice that this first, PK, column is always unique. For this database it is always called id, but that is not true of all databases.
<h4>Keys</h4>
**Primary Key (PK):**
A primary key is a unique column in a particular table. This is the first column in each of our tables. Here, those columns are all called id, but that doesn't necessarily have to be the name. It is common that the primary key is the first column in our tables in most databases.
**Foreign Key (FK):**
A foreign key is a column in one table that is a primary key in a different table. We can see in the Parch & Posey ERD that the foreign keys are:
* region_id
* account_id
* sales_rep_id
Each of these is linked to the primary key of another table. An example is shown in the image below:<br>**Note that a table can have multiple foreign-keys, but one primary-key**
<img src='https://video.udacity-data.com/topher/2017/August/598d2378_screen-shot-2017-08-10-at-8.23.48-pm/screen-shot-2017-08-10-at-8.23.48-pm.png'>
<h4>Primary - Foreign Key Link</h4>
In the above image you can see that:
* The `region_id` is the foreign key.
* The `region_id` is linked to `id` - this is the **primary-foreign key link** that connects these two tables.
* The crow's foot shows that the FK can actually appear in many rows in the sales_reps table.
* While the single line tells us that the PK id appears only once for each row in the region table.
* If you look through the rest of the database, you will notice this is always the case for a primary-foreign key relationship. In the next concept, you can make sure you have this down!
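The primary-foreign key link can be demonstrated end-to-end with Python's built-in `sqlite3` and a toy version of the `region`/`sales_reps` tables (the rows here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

# region.id is the PK; sales_reps.region_id is the FK pointing at it
cur.execute('CREATE TABLE region (id INTEGER PRIMARY KEY, name TEXT)')
cur.execute('CREATE TABLE sales_reps (id INTEGER PRIMARY KEY, name TEXT, '
            'region_id INTEGER REFERENCES region(id))')
cur.executemany('INSERT INTO region VALUES (?, ?)',
                [(1, 'Northeast'), (2, 'Midwest')])
cur.executemany('INSERT INTO sales_reps VALUES (?, ?, ?)',
                [(10, 'Ann', 1), (11, 'Bob', 1), (12, 'Cy', 2)])

# One region row can match many sales_reps rows (the crow's foot)
rows = cur.execute(
    'SELECT sales_reps.name, region.name '
    'FROM sales_reps '
    'JOIN region ON sales_reps.region_id = region.id '
    'ORDER BY sales_reps.id').fetchall()
```

Notice that the region 'Northeast' appears twice in the result: the PK side matches once per table row, while the FK side can match many times.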
<h4>JOIN Revisited</h4>
Let's look back at the first JOIN we wrote.
```
SELECT orders.*
FROM orders
JOIN accounts
ON orders.account_id = accounts.id;
```
Here is the ERD for these two tables:
<img src='https://video.udacity-data.com/topher/2017/August/598dfda7_screen-shot-2017-08-11-at-11.54.30-am/screen-shot-2017-08-11-at-11.54.30-am.png'>
**Notice**
Notice our SQL query has the two tables we would like to join - one in the `FROM` and the other in the `JOIN`. Then in the `ON`, we will ALWAYS have the PK equal to the FK:
This is how we join any two tables: by linking the PK and FK (generally in an `ON` statement).
<h4>JOIN More than Two Tables</h4>
This same logic can actually assist in joining more than two tables together. Look at the three tables below.
<img src='https://video.udacity-data.com/topher/2017/August/598e2e15_screen-shot-2017-08-11-at-3.21.34-pm/screen-shot-2017-08-11-at-3.21.34-pm.png'>
**The Code**
If we wanted to join all three of these tables, we could use the same logic. The code below pulls all of the data from all of the joined tables.
```
SELECT *
FROM web_events
JOIN accounts
ON web_events.account_id = accounts.id
JOIN orders
ON accounts.id = orders.account_id
```
Alternatively, we can create a `SELECT` statement that could pull specific columns from any of the three tables. <br>Again, our `JOIN` holds a table, and `ON` is a link for our PK to equal the FK.
To pull specific columns, the `SELECT` statement will need to specify the table that you are wishing to pull the column from, as well as the column name.
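As a concrete sketch of pulling specific columns across all three tables (self-contained: sqlite3 stands in for the notebook's MySQL connection, with invented rows):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE accounts   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE web_events (id INTEGER PRIMARY KEY, account_id INTEGER, channel TEXT);
CREATE TABLE orders     (id INTEGER PRIMARY KEY, account_id INTEGER, total_amt_usd REAL);
INSERT INTO accounts VALUES (1, 'Walmart');
INSERT INTO web_events VALUES (100, 1, 'direct');
INSERT INTO orders VALUES (1000, 1, 99.5);
""")

# Specify table.column for each selected column
rows = conn.execute("""
    SELECT web_events.channel, accounts.name, orders.total_amt_usd
    FROM web_events
    JOIN accounts ON web_events.account_id = accounts.id
    JOIN orders ON accounts.id = orders.account_id;
""").fetchall()
print(rows)  # one matching row, with one column from each table
```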
<h3>Alias:</h3>
When we `JOIN` tables together, it is nice to give each table an alias. Frequently an alias is just the first letter of the table name. You actually saw something similar for column names in the Arithmetic Operators concept.
Example:
```
FROM tablename AS t1
JOIN tablename2 AS t2
```
Frequently, you might also see these statements without the `AS` statement. Each of the above could be written in the following way instead, and they would still produce the exact same results:
```
FROM tablename t1
JOIN tablename2 t2
```
and
```
SELECT col1 + col2 total, col3
```
<h3>Aliases for Columns in Resulting Table</h3>
While aliasing tables is the most common use case, an alias can also be applied to the selected columns so that the resulting table has more readable column names.
Example:
```
SELECT t1.column1 aliasname, t2.column2 aliasname2
FROM tablename AS t1
JOIN tablename2 AS t2
```
The aliasname fields are what show up in the returned table instead of t1.column1 and t2.column2.
```
aliasname aliasname2
example row example row
example row example row
```
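The same behavior can be sketched in a runnable, self-contained way (sqlite3 with an invented table), showing the alias coming back as the result's column header:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE tablename (column1 TEXT)")
conn.execute("INSERT INTO tablename VALUES ('example row')")

cur = conn.execute("SELECT column1 AS aliasname FROM tablename")
# cursor.description holds the column headers of the result set
print([d[0] for d in cur.description])  # ['aliasname']
```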
<h4>Questions</h4>
1. Provide a table for all web_events associated with account name of Walmart. There should be three columns. Be sure to include the primary_poc, time of the event, and the channel for each event. Additionally, you might choose to add a fourth column to assure only Walmart events were chosen.
2. Provide a table that provides the region for each sales_rep along with their associated accounts. Your final table should include three columns: the region name, the sales rep name, and the account name. Sort the accounts alphabetically (A-Z) according to account name.
3. Provide the name for each region for every order, as well as the account name and the unit price they paid (total_amt_usd/total) for the order. Your final table should have 3 columns: region name, account name, and unit price. A few accounts have 0 for total, so I divided by (total + 0.01) to assure not dividing by zero.
```
# Preview the sales_reps table
query = 'SELECT * FROM sales_reps LIMIT 3;'
query_to_df(query)
# Preview the orders table
query = 'SELECT * FROM orders LIMIT 3;'
query_to_df(query)
# Preview the accounts table
query = 'SELECT * FROM accounts LIMIT 3;'
query_to_df(query)
# Preview the region table
query = 'SELECT * FROM region LIMIT 3;'
query_to_df(query)
# Preview the web_events table
query = 'SELECT * FROM web_events LIMIT 3;'
query_to_df(query)
```
Provide a table for all web_events associated with account name of Walmart. There should be three columns. Be sure to include the primary_poc, time of the event, and the channel for each event. Additionally, you might choose to add a fourth column to assure only Walmart events were chosen.
```
query = 'SELECT accounts.primary_poc, web_events.occurred_at, web_events.channel,\
accounts.name FROM accounts JOIN web_events ON \
accounts.id = web_events.account_id WHERE accounts.name LIKE "Walmart%";'
query_to_df(query)
```
Provide a table that provides the region for each sales_rep along with their associated accounts. Your final table should include three columns: the region name, the sales rep name, and the account name. Sort the accounts alphabetically (A-Z) according to account name.
```
query_to_df(
"SELECT r.name region, s.name sales_rep, a.name account \
FROM region r JOIN sales_reps s ON r.id = s.region_id \
JOIN accounts a ON a.sales_rep_id = s.id ORDER BY a.name;"
)
```
Provide the name for each region for every order, as well as the account name and the unit price they paid (total_amt_usd/total) for the order. Your final table should have 3 columns: region name, account name, and unit price. A few accounts have 0 for total, so I divided by (total + 0.01) to assure not dividing by zero.
```
query_to_df(
"SELECT r.name region_name, a.name acct_name, \
(o.total_amt_usd / (o.total+0.01)) unit_price \
FROM region r JOIN sales_reps s ON r.id = s.region_id\
JOIN accounts a ON a.sales_rep_id = s.id\
JOIN orders o ON o.account_id = a.id;"
)
```
<h3>Inner, Left, Right, Outer Joins</h3>
**JOINs**
The `INNER JOIN`, which we saw by just using `JOIN`, only returns rows that match in both tables.
For the `RIGHT JOIN` and `LEFT JOIN`, all rows from one of the two tables are returned, even when they have no match in the other.
If there is no matching information in the JOINed table, then you will have columns with empty cells. These empty cells introduce a new data type called NULL. You will learn about NULLs in detail in the next lesson; for now, a quick introduction: you can consider any cell without data as NULL.
<h3>JOIN Check In</h3>
**INNER JOINs**
Notice every JOIN we have done up to this point has been an `INNER JOIN`. That is, we have always pulled rows only if they exist as a match across two tables.
Our new JOINs allow us to pull rows that might only exist in one of the two tables. This will introduce a new data type called NULL. This data type will be discussed in detail in the next lesson.
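Here is a minimal runnable illustration of that NULL behavior (sqlite3 with invented rows; 'Apple' is an account with no orders):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders   (id INTEGER PRIMARY KEY, account_id INTEGER, total INTEGER);
INSERT INTO accounts VALUES (1, 'Walmart'), (2, 'Apple');
INSERT INTO orders VALUES (10, 1, 5);
""")

# LEFT JOIN keeps every accounts row; the unmatched one gets NULL (None)
rows = conn.execute(
    "SELECT a.name, o.total FROM accounts a "
    "LEFT JOIN orders o ON o.account_id = a.id;"
).fetchall()
print(rows)
```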
**Quick Note**
You might see the SQL syntax of
`LEFT OUTER JOIN`
OR
`RIGHT OUTER JOIN`
These are the exact same commands as the `LEFT JOIN` and `RIGHT JOIN` we learned about in the previous video.
**OUTER JOINS**
The last type of join is a `full outer join`. This will return the `inner join` result set, as well as any unmatched rows from either of the two tables being joined.
That is, it also returns the rows from each table that have no match in the other. The use cases for a `full outer join` are very rare.
You can see examples of outer joins at the link [here](http://www.w3resource.com/sql/joins/perform-a-full-outer-join.php) and a description of the rare use cases here. We will not spend time on these given the few instances you might need to use them.
Similar to the above, you might see the language `FULL OUTER JOIN`, which is the same as `OUTER JOIN`.
<h3><b>Facts</b></h3>
1. A `LEFT JOIN` and `RIGHT JOIN` do the same thing if we change the tables that are in the `FROM` and `JOIN` statements.
2. A `LEFT JOIN` will at least return all the rows that are in an `INNER JOIN`.
3. `JOIN` and `INNER JOIN` are the same.
4. A `LEFT OUTER JOIN` is the same as `LEFT JOIN`.
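Facts 2 and 3 are easy to verify directly. A sketch with sqlite3 and invented tables (facts 1 and 4 are left as syntax notes, since older sqlite versions lack `RIGHT JOIN`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY);
CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (10, 1);
""")

inner = conn.execute("SELECT * FROM a JOIN b ON b.a_id = a.id").fetchall()
inner2 = conn.execute("SELECT * FROM a INNER JOIN b ON b.a_id = a.id").fetchall()
left = conn.execute("SELECT * FROM a LEFT JOIN b ON b.a_id = a.id").fetchall()

print(inner == inner2)          # fact 3: JOIN is the same as INNER JOIN
print(set(inner) <= set(left))  # fact 2: LEFT JOIN contains every INNER JOIN row
```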
<b><h3>Tip:</h3></b>
If you have two or more columns in your SELECT that have the same name after the table name, such as accounts.name and sales_reps.name, you will need to alias them. Otherwise the result will only show one of the columns. You can alias them like accounts.name AS AccountName, sales_reps.name AS SalesRepName.
<h3>Q1</h3>
Provide a table that provides the region for each sales_rep along with their associated accounts. This time only for the Midwest region. Your final table should include three columns: the region name, the sales rep name, and the account name. Sort the accounts alphabetically (A-Z) according to account name.
```
query = 'SELECT r.name region, s.name sales_rep, a.name acct \
FROM region r JOIN sales_reps s ON r.id = s.region_id \
AND r.name LIKE "Midwest%" JOIN accounts a on \
a.sales_rep_id = s.id ORDER BY a.name;'
query_to_df(query)
```
<h3>Q2</h3>
Provide a table that provides the region for each sales_rep along with their associated accounts. This time only for accounts where the sales rep has a first name starting with S and in the Midwest region. Your final table should include three columns: the region name, the sales rep name, and the account name. Sort the accounts alphabetically (A-Z) according to account name.
```
query = 'SELECT r.name region_, s.name sales_rep, a.name acct FROM region r \
JOIN sales_reps s ON r.id = s.region_id AND s.name LIKE "S%" AND \
r.name LIKE "Midwest%" JOIN accounts a ON a.sales_rep_id = s.id \
ORDER BY a.name;'
query_to_df(query)
```
<h3>Q3</h3>
Provide a table that provides the region for each sales_rep along with their associated accounts. This time only for accounts where the sales rep has a last name starting with K and in the Midwest region. Your final table should include three columns: the region name, the sales rep name, and the account name. Sort the accounts alphabetically (A-Z) according to account name.
```
query = 'SELECT r.name region, s.name sales_rep, a.name acct FROM region r \
JOIN sales_reps s ON r.id = s.region_id AND s.name LIKE "% K%" AND \
r.name LIKE "Midwest" JOIN accounts a ON a.sales_rep_id = s.id \
ORDER BY a.name;'
query_to_df(query)
```
<h3>Q4</h3>
Provide the name for each region for every order, as well as the account name and the unit price they paid (total_amt_usd/total) for the order. However, you should only provide the results if the standard order quantity exceeds 100. Your final table should have 3 columns: region name, account name, and unit price. In order to avoid a division by zero error, adding .01 to the denominator here is helpful total_amt_usd/(total+0.01).
```
query = 'SELECT r.name region, a.name acct, (o.total_amt_usd / (o.total+0.01)) \
unit_price FROM region r JOIN sales_reps s ON r.id = s.region_id JOIN \
accounts a ON a.sales_rep_id = s.id JOIN orders o ON a.id = o.account_id \
AND o.standard_qty > 100;'
query_to_df(query)
```
<h3>Q5</h3>Provide the name for each region for every order, as well as the account name and the unit price they paid (total_amt_usd/total) for the order. However, you should only provide the results if the standard order quantity exceeds 100 and the poster order quantity exceeds 50. Your final table should have 3 columns: region name, account name, and unit price. Sort for the smallest unit price first. In order to avoid a division by zero error, adding .01 to the denominator here is helpful (total_amt_usd/(total+0.01)).
```
query = 'SELECT r.name region_name, a.name acct_name, (o.total_amt_usd/(o.total+0.01)) \
unit_price FROM region r JOIN sales_reps s ON r.id = s.region_id JOIN accounts a ON \
s.id = a.sales_rep_id JOIN orders o ON o.account_id = a.id WHERE \
o.standard_qty > 100 AND o.poster_qty > 50 ORDER BY unit_price;'
query_to_df(query)
```
### Q6
Provide the name for each region for every order, as well as the account name and the unit price they paid (total_amt_usd/total) for the order. However, you should only provide the results if the standard order quantity exceeds 100 and the poster order quantity exceeds 50. Your final table should have 3 columns: region name, account name, and unit price. Sort for the largest unit price first. In order to avoid a division by zero error, adding .01 to the denominator here is helpful (total_amt_usd/(total+0.01)).
```
query = 'SELECT r.name region_name, a.name acct_name, (o.total_amt_usd/(o.total+0.01)) \
unit_price FROM region r JOIN sales_reps s ON r.id = s.region_id JOIN accounts a \
ON a.sales_rep_id = s.id JOIN orders o ON o.account_id = a.id WHERE \
o.standard_qty > 100 AND o.poster_qty > 50 ORDER BY unit_price DESC;'
query_to_df(query)
```
### Q7
What are the different channels used by account id 1001? Your final table should have only 2 columns: account name and the different channels. You can try SELECT DISTINCT to narrow down the results to only the unique values.
```
query = 'SELECT DISTINCT a.name acct_name, w.channel channels FROM accounts a \
JOIN web_events w ON w.account_id = a.id WHERE a.id = 1001;'
query_to_df(query)
```
### Q8
Find all the orders that occurred in 2015. Your final table should have 4 columns: occurred_at, account name, order total, and order total_amt_usd.
```
query = 'SELECT o.occurred_at occurred_at, a.name acct_name, o.total total_qty, \
o.total_amt_usd total_usd FROM orders o JOIN accounts a ON a.id = \
o.account_id WHERE occurred_at LIKE "%2015%" ORDER BY occurred_at DESC;'
query_to_df(query)
```
### Q9
What are the different channels used by account id 1001? Sort by the count of most frequently used channel descending. Your query should return 4 columns:- account-name, account-id, channels, count.
```
query = 'SELECT DISTINCT a.name acct_name, a.id acct_id, w.channel channels, \
COUNT(w.channel) count FROM web_events w JOIN accounts a ON a.id = \
w.account_id WHERE w.account_id = 1001 GROUP BY acct_name,\
channels ORDER BY count DESC;'
query_to_df(query)
```
<h2>Recap</h2>
### Primary and Foreign Keys
You learned a key element for JOINing tables in a database has to do with primary and foreign keys:
* **primary keys** - are unique for every row in a table. These are generally the first column in our database (like you saw with the id column for every table in the Parch & Posey database).
* **foreign keys** - are the primary key appearing in another table, which allows the rows to be non-unique.
Choosing the set up of data in our database is very important, but not usually the job of a data analyst. This process is known as Database Normalization.
### JOINs
In this lesson, you learned how to combine data from multiple tables using JOINs. The three JOIN statements you are most likely to use are:
* **JOIN** - an INNER JOIN that only pulls data that exists in both tables.
* **LEFT JOIN** - pulls all the data that exists in both tables, as well as all of the rows from the table in the `FROM`, even if they do not have a match in the `JOIN` table.
* **RIGHT JOIN** - pulls all the data that exists in both tables, as well as all of the rows from the table in the `JOIN`, even if they do not have a match in the `FROM` table.
There are a few more advanced JOINs that we did not cover here, and they are used in very specific use cases. [UNION and UNION ALL](https://www.w3schools.com/sql/sql_union.asp), [CROSS JOIN](http://www.w3resource.com/sql/joins/cross-join.php), and the tricky [SELF JOIN](https://www.w3schools.com/sql/sql_join_self.asp). These are more advanced than this course will cover, but it is useful to be aware that they exist, as they are useful in special cases.
### Alias
You learned that you can alias tables and columns using AS or not using it. This allows you to be more efficient in the number of characters you need to write, while at the same time you can assure that your column headings are informative of the data in your table.
### Looking Ahead
The next lesson is aimed at aggregating data. You have already learned a ton, but SQL might still feel a bit disconnected from statistics and Excel-like platforms. Aggregations will let you write more complex SQL queries, which assist in answering questions like:
* Which channel generated more revenue?
* Which account had an order with the most items?
* Which sales_rep had the most orders? or least orders? How many orders did they have?
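As a small taste of what's coming, here is a hedged sketch of the first question (sqlite3 with invented rows; `SUM` and `GROUP BY` are covered properly in the next lesson):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE web_events (account_id INTEGER, channel TEXT);
CREATE TABLE orders     (account_id INTEGER, total_amt_usd REAL);
INSERT INTO web_events VALUES (1, 'direct'), (2, 'organic');
INSERT INTO orders VALUES (1, 100.0), (2, 40.0);
""")

# Which channel generated more revenue?
rows = conn.execute("""
    SELECT w.channel, SUM(o.total_amt_usd) AS total_usd
    FROM web_events w
    JOIN orders o ON o.account_id = w.account_id
    GROUP BY w.channel
    ORDER BY total_usd DESC;
""").fetchall()
print(rows)  # the top row is the highest-revenue channel
```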
```
# closing connection and cursor for the day
if connection.is_connected():
cursor.close()
connection.close()
print("MySQL connection is closed")
```
## Task 1: Classifying Documents
#### Using Tokenization (and basic bag-of-words features)
Here is the code we went over at the start, to get started classifying documents by sentiment.
```
import re
import random
import nltk
from nltk.corpus import movie_reviews
# Read in a list of document (wordlist, category) tuples, and shuffle
docs_tuples = [(movie_reviews.words(fileid), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)[:200]]
random.shuffle(docs_tuples)
# Create a list of the most frequent words in the entire corpus
movie_words = [word.lower() for (wordlist, cat) in docs_tuples for word in wordlist]
all_wordfreqs = nltk.FreqDist(movie_words)
top_wordfreqs = all_wordfreqs.most_common()[:1000]
feature_words = [x[0] for x in top_wordfreqs]
# Define a function to extract features of the form contains(word) for each document
def document_features(doc_toks):
document_words = set(doc_toks)
features = {}
for word in feature_words:
features['contains({})'.format(word)] = 1 if word in document_words else 0
return features
# Create feature sets of document (features, category) tuples
featuresets = [(document_features(wordlist), cat) for (wordlist, cat) in docs_tuples]
# Separate train and test sets, train the classifier, print accuracy and best features
train_set, test_set = featuresets[:-100], featuresets[-100:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
print(classifier.show_most_informative_features(10))
```
#### Using POS Tagging
We left the first part of the code the same as above, but created a new list of most common adjectives as our feature words:
```
# Create a list of the most frequent adjectives in the entire corpus
from nltk import FreqDist
movie_tokstags = nltk.pos_tag(movie_words)
movie_adjs = [tok for (tok,tag) in movie_tokstags if re.match('JJ', tag)]
all_adjfreqs = FreqDist(movie_adjs)
top_adjfreqs = all_adjfreqs.most_common()[:1000]
feature_words = [x[0] for x in top_adjfreqs]
```
Then we left the document_features() function and remaining code the same:
```
# Define a function to extract features of the form contains(word) for each document
def document_features(doc_toks):
document_words = set(doc_toks)
features = {}
for word in feature_words:
features['contains({})'.format(word)] = 1 if word in document_words else 0
return features
# Create feature sets of document (features, category) tuples
featuresets = [(document_features(wordlist), cat) for (wordlist, cat) in docs_tuples]
# Separate train and test sets, train the classifier, print accuracy and best features
train_set, test_set = featuresets[:-100], featuresets[-100:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
print(classifier.show_most_informative_features(10))
```
#### Using Phrase Chunking
Now we created a new list of most common noun phrases, and also modified the line where we use the document_features() function to extract the noun phrase features from each document's noun phrase list:
```
# Create a list of the most frequent noun phrases in the entire corpus
from nltk import RegexpParser
grammar = "NP: {<JJ><NN.*>}"
cp = RegexpParser(grammar)
def extract_nps(wordlist):
wordlist_tagged = nltk.pos_tag(wordlist)
wordlist_chunked = cp.parse(wordlist_tagged)
nps = []
for node in wordlist_chunked:
if type(node)==nltk.tree.Tree and node.label()=='NP':
phrase = [tok for (tok, tag) in node.leaves()]
nps.append(' '.join(phrase))
return nps
docs_tuples_nps = [(extract_nps(wordlist), cat) for (wordlist, cat) in docs_tuples]
movie_nps = [np for (nplist, cat) in docs_tuples_nps for np in nplist]
all_npfreqs = FreqDist(movie_nps)
top_npfreqs = all_npfreqs.most_common()[:1000]
feature_nps = [x[0] for x in top_npfreqs]
# Create feature sets of document (features, category) tuples
featuresets = [(document_features(nplist), cat) for (nplist, cat) in docs_tuples_nps]
```
We left the last part of the code the same:
```
# Separate train and test sets, train the classifier, print accuracy and best features
train_set, test_set = featuresets[:-100], featuresets[-100:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
print(classifier.show_most_informative_features(10))
```
This actually doesn't do that well. One reason is that we're using more complex sequences of words as our noun phrase features, each of which is going to appear far less frequently across documents. We might need to increase the number of noun phrases we use, or limit the pattern we're looking for to a single adjective followed by a single noun (leaving out articles, etc). But it might also be the case that adjectives are really the best features to use for sentiment classification in the domain of movie reviews, and the version of this task we did using POS tags was the right way to go.
## Task 2. Information Extraction
#### Using Tokenization (and basic keyword search)
Here is the code we went over at the start, to initially extract election-related sentences.
```
from nltk.corpus import brown
# Read in all news docs as a list of sentences, each sentence a list of tokens
news_docs = [brown.sents(fileid) for fileid in brown.fileids(categories='news')]
# Create regular expression to search for election-related words
elect_regexp = 'elect|vote'
# Loop through documents and extract each sentence containing an election-related word
elect_sents = []
for doc in news_docs:
for sent in doc:
for tok in sent:
if re.match(elect_regexp, tok):
elect_sents.append(sent)
break # Break out of last for loop, so we only add the sentence once
len(elect_sents)
```
#### Using POS Tagging
We used the election-related sentences we identified in the first step (so we don't waste time tagging irrelevant text). Then we looped through each sentence, ran the POS tagger, and extracted all the nouns.
```
# Extract nouns from election-related sentences
elect_nouns = []
for sent in elect_sents:
sent_tagged = nltk.pos_tag(sent)
for (tok, tag) in sent_tagged:
if re.match('N', tag):
elect_nouns.append(tok)
print(len(elect_nouns))
print(elect_nouns[:50])
```
We can add a check to see if the sentence has a token matching the election regexp that's tagged as a verb, once we've POS-tagged the sentence, and only add the sentence's nouns if the sentence passes this more specific test.
```
# Extract nouns if the sentence contains an election-related verb
elect_nouns = []
for sent in elect_sents:
sent_nouns = []
contains_elect_verb = False
sent_tagged = nltk.pos_tag(sent)
for (tok, tag) in sent_tagged:
if re.match('V', tag) and re.match(elect_regexp, tok):
contains_elect_verb = True
elif re.match('N', tag):
sent_nouns.append(tok)
if contains_elect_verb:
elect_nouns.extend(sent_nouns)
print(len(elect_nouns))
print(elect_nouns[:50])
```
#### Using Phrase Chunking and NER Tagging
Next we used the NLTK NER tagger (which chunks a sentence into named entity noun phrases, labeled by entity category), to extract named entities for either people or organizations mentioned in election-related sentences.
```
elect_entities = {'ORGANIZATION':[], 'PERSON':[]}
for sent in elect_sents:
sent_tagged = nltk.pos_tag(sent)
sent_nes = nltk.ne_chunk(sent_tagged)
for node in sent_nes:
if type(node)==nltk.tree.Tree and node.label() in elect_entities:
phrase = [tok for (tok, tag) in node.leaves()]
elect_entities[node.label()].append(' '.join(phrase))
for key, value in elect_entities.items():
print(key, value, '\n')
```
We also extracted noun phrases if they appeared right before or after an election-related word.
```
grammar = "NP: {<DT>?<JJ>*<NN.*>+}"
cp = RegexpParser(grammar)
entities_before = []
entities_after = []
for sent in elect_sents:
sent_tokstags = nltk.pos_tag(sent)
sent_chunks = cp.parse(sent_tokstags)
for n in range(len(sent_chunks)):
node = sent_chunks[n]
if type(node)!=nltk.tree.Tree and re.match(elect_regexp, node[0]):
if n > 0:
node_prev = sent_chunks[n-1]
if type(node_prev)==nltk.tree.Tree:
phrase = ' '.join([tok for (tok, tag) in node_prev.leaves()])
entities_before.append(phrase)
if n < len(sent_chunks)-1:
node_after = sent_chunks[n+1]
if type(node_after)==nltk.tree.Tree:
phrase = ' '.join([tok for (tok, tag) in node_after.leaves()])
entities_after.append(phrase)
print('BEFORE:', entities_before)
print('AFTER:', entities_after)
```
#### Using Dependency Parsing
```
import os
from nltk.parse.stanford import StanfordDependencyParser
os.environ['STANFORD_PARSER'] = '/Users/natalieahn/Documents/SourceCode/stanford-parser-full-2015-12-09'
os.environ['STANFORD_MODELS'] = '/Users/natalieahn/Documents/SourceCode/stanford-parser-full-2015-12-09'
dependency_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
sents_parsed = dependency_parser.parse_sents(elect_sents)
sents_parseobjs = [obj for sent in sents_parsed for obj in sent]
elect_winners = []
for sent_parseobj in sents_parseobjs:
sent_triples = sent_parseobj.triples()
for triple in sent_triples:
# Insert your code here
if re.match('win|won|defeat|gain|secure|achieve|got', triple[0][0]):
if re.match('nsubj', triple[1]):
elect_winners.append(triple[2][0])
elif re.match('elect|vote|choose|pick', triple[0][0]):
if re.match('dobj', triple[1]):
elect_winners.append(triple[2][0])
print(elect_winners)
```
<a href="https://colab.research.google.com/github/olgOk/QCircuit/blob/master/tutorials/How_to_build_Advanced_Circuit.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Advanced Circuit
by Olga Okrut
Install frameworks, and import libraries
```
!pip install tensornetwork jax jaxlib colorama qcircuit
from qcircuit import QCircuit as qc
import numpy as np
```
Moving further, let's create a larger quantum circuit of three qubits, using more logic gates.
In this section we will build the following quantum circuit:

As you can see, there are new logic gates in the circuit. Let's start by discussing their meaning and how they affect a circuit or a qubit.
The Y gate rotates the state vector of the qubit by the angle π around the y axis of the [Bloch sphere](https://en.wikipedia.org/wiki/Bloch_sphere).
The Y gate is defined as follows:
$ Y = \begin{pmatrix}
0 & -i \\
i & 0 \end{pmatrix} $,
where $ i = \sqrt{-1} $ .
Acting with the Y gate on initial state vector, we acquire the following state of the quantum circuit:
$ Y * (1|0> + 0|1>) = \begin{pmatrix}
0 & -i \\
i & 0 \end{pmatrix} * \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ i \end{pmatrix}$
As you can see, the state flipped from 0 to 1 and was rotated around the y axis, so the 1 becomes *i*. The effect of the Y gate on a qubit is the combined effect of the X gate and the Z gate (described below).
The CY (controlled-Y) gate acts on a control and a target qubit, just like the CX gate. It performs a Y on the target whenever the control is in state `|1⟩`. Again, if two qubits are in superposition and we apply a CY gate to this state:
$ CY * 1/ \sqrt{2} * (1|00> + 0|01> +1|10> + 0|11>) =
1/\sqrt{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\\end{pmatrix} * \begin{pmatrix} 1 \\ 0 \\1 \\ 0 \end{pmatrix} = 1/\sqrt{2} \begin{pmatrix} 1 \\ 0 \\ 0 \\ i \end{pmatrix}$
The Z gate has the property of flipping the sign of the state vector, thus `|+⟩` changes to `|-⟩`, and vice versa.
$ Z * (0|0> + 1|1>) = \begin{pmatrix}
1 & 0 \\
0 & -1 \end{pmatrix} * \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}$
As you can see, the state vector is `0|0> - 1|1>`. The probabilities of the outcomes are unchanged, but the state has been rotated by the angle π around the z axis.
The controlled-Z gate (CZ), like the CX and CY gates, acts on a control and a target qubit. It performs a Z on the target whenever the control is in state `|1⟩`. Suppose two qubits are in superposition (we applied a Hadamard gate on them a moment before), and we apply a CZ gate to this state:
$CZ * 1/ \sqrt{2}*(1|00> + 0|01> +1|10> + 0|11>) = 1/\sqrt{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\\end{pmatrix} * \begin{pmatrix} 1 \\ 0 \\1 \\ 0 \end{pmatrix} = 1/\sqrt{2} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$
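Before wiring up the circuit, the single-qubit claims above can be sanity-checked with plain numpy (a sketch independent of the QCircuit library):

```python
import numpy as np

# Single-qubit gate matrices from the definitions above
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

ket0 = np.array([1, 0])   # |0>
ket1 = np.array([0, 1])   # |1>

print(Y @ ket0)  # [0, i]: |0> flips to |1> and picks up phase i
print(Z @ ket1)  # [0, -1]: the sign of |1> flips

# Y acts like Z followed by X, up to a global phase of i
print(np.allclose(Y, 1j * X @ Z))  # True
```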
Now the easiest part - create this quantum circuit using QCircuit simulator!
```
# Create three qubits quantum circuit
circuit_size = 3
my_circuit = qc.QCircuit(circuit_size)
# applying gates on the qubits
my_circuit.Y(0)
my_circuit.H(1)
my_circuit.X(2)
my_circuit.CX(control=[0], target=1)
my_circuit.H(0)
my_circuit.CZ(control=[1], target=2)
my_circuit.CY(control=[0], target=2)
my_circuit.CY(control=[0], target=2)
my_circuit.H(0)
my_circuit.CZ(control=[1], target=2)
my_circuit.CX(control=[0], target=1)
my_circuit.Y(0)
my_circuit.H(1)
my_circuit.X(2)
# get amplitude measurement and bitstring sampling
print("amplitude: ")
my_circuit.get_amplitude()
print("bitstring:")
bitstr, max_str = my_circuit.get_bitstring()
for index in range(2 ** circuit_size):
b = np.binary_repr(index, width=circuit_size)
probability = bitstr[index]
print("|" + b + "> probability " + str(probability))
# state vector
state_vector = my_circuit.get_state_vector()
print("state vector", state_vector)
# visualize
my_circuit.visualize()
```
# Particle on a sphere
The Hamiltonian $H$ for a particle on a sphere is given by
\begin{align}
H &= \frac{p^2}{2m} = \frac{L^2}{2 I}
\end{align}
## Exercise 1
$\mathbf{L} = \mathbf{r} \times \mathbf{p}$ is the angular momentum operator
where $\mathbf{r} = (x, y, z)$ is the positional operator and $\mathbf{p} = (p_x, p_y, p_z)$ is the momentum operator
Show that:
\begin{align}
L_x &= - i \hbar \left(y \frac{\partial}{\partial z} - z \frac{\partial}{\partial y} \right) \\
L_y &= - i \hbar \left(z \frac{\partial}{\partial x} - x \frac{\partial}{\partial z} \right) \\
L_z &= - i \hbar \left(x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right)
\end{align}
Hint: $p_x = -i\hbar\frac{\partial}{\partial x}$
Hint 2: To do this exercise you need to compute a cross product
## Exercise 2
Find the commutator relations $[L_x, L_y]$ and $[L^2, L_z]$.
Do $L_x$ and $L_y$ commute?
Do $L^2$ and $L_z$ commute?
Hint: $[a, b] = ab - ba$
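If you want to check your pen-and-paper answer for $[L_x, L_y]$, here is a symbolic sketch with sympy, using the differential-operator forms from Exercise 1:

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

# Angular momentum components as differential operators (Exercise 1)
def Lx(g):
    return -sp.I * hbar * (y * sp.diff(g, z) - z * sp.diff(g, y))

def Ly(g):
    return -sp.I * hbar * (z * sp.diff(g, x) - x * sp.diff(g, z))

def Lz(g):
    return -sp.I * hbar * (x * sp.diff(g, y) - y * sp.diff(g, x))

# [Lx, Ly] f - i*hbar*(Lz f) should vanish, confirming [Lx, Ly] = i*hbar*Lz
comm = sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - sp.I * hbar * Lz(f))
print(comm)
```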
## Exercise 3
The particle on a sphere is given by:
\begin{align}
L^2 Y_{m_l}^l (\varphi, \theta) = \hbar^2 l(l+1) Y_{m_l}^l (\varphi, \theta) \\
L_z Y_{m_l}^l (\varphi, \theta) = \hbar m_l Y_{m_l}^l (\varphi, \theta)
\end{align}
where $l = 0, 1, 2, ...$ and $m_l = -l, -l+1,...,0,..., l-1, l$
Make a surface plot of a sphere where the particle can be located
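A minimal matplotlib sketch for this exercise (angle names follow the parametrization used later in Exercise 5: `phi` polar, `theta` azimuthal):

```python
import numpy as np
import matplotlib.pyplot as plt

# Parametrize the unit sphere
theta = np.linspace(0, 2 * np.pi, 50)   # azimuthal angle
phi = np.linspace(0, np.pi, 50)         # polar angle
theta, phi = np.meshgrid(theta, phi)

x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, z, alpha=0.3)
plt.show()
```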
## Exercise 4
Find the analytic expressions for $Y_{m_l}^l (\varphi, \theta)$, for instance in the lecture notes on explain everything (code ATEHWGWY).
Make Python functions for $L = 0$, $L = 1$ and $L = 2$ for all combinations of $m_l$
```
import numpy as np
# phi: polar angle, theta: azimuthal angle; Condon-Shortley phase convention
def Y_10(phi, theta):
    return np.sqrt(3 / (4 * np.pi)) * np.cos(phi)
def Y_11(phi, theta):
    return -np.sqrt(3 / (8 * np.pi)) * np.sin(phi) * np.exp(1j * theta)
def Y_1m1(phi, theta):
    return np.sqrt(3 / (8 * np.pi)) * np.sin(phi) * np.exp(-1j * theta)
# or more generally using sph_harm from scipy.special
from scipy.special import sph_harm
```
## Exercise 5
The parametrization for the probability densities are:
\begin{align}
x &= \cos \theta \sin \phi |Y_{m_l}^l (\varphi, \theta)|^2 \\
y &= \sin \theta \sin \phi |Y_{m_l}^l (\varphi, \theta)|^2 \\
z &= \cos \phi |Y_{m_l}^l (\varphi, \theta)|^2
\end{align}
Give an explanation of why the parametrization looks like this.
Plot the probability density of $|Y_{m_l}^l (\varphi, \theta)|^2$ for $L = 0$, $L = 1$
and $L=2$ for $m_l = -l, -l+1,...,0,..., l-1, l$.
Try to plot them with the sphere from Exercise 3; here it is a good idea to add the keyword `alpha=0.3` for the sphere plot.
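For instance, the $l=1$, $m_l=0$ density is $|Y_{0}^{1}|^2 = \frac{3}{4\pi}\cos^2\phi$; a sketch of plotting it with the parametrization above:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 60)   # azimuthal angle
phi = np.linspace(0, np.pi, 60)         # polar angle
theta, phi = np.meshgrid(theta, phi)

# |Y_0^1|^2 = (3 / (4 pi)) cos^2(phi), the l=1, m_l=0 density
dens = 3 / (4 * np.pi) * np.cos(phi) ** 2

# Parametrization from the exercise text
x = np.cos(theta) * np.sin(phi) * dens
y = np.sin(theta) * np.sin(phi) * dens
z = np.cos(phi) * dens

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, z)
plt.show()
```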
## Exercise 6
The formulas for $Y_{m_l}^l (\varphi, \theta)$ for $l=1$ and $m_l=-1,0,1$ represent the three $p$ orbitals. It is easy to identify $p_z$ as the function $Y_{0}^1$, whereas $p_x$ and $p_y$ must be obtained as combinations of $Y_{-1}^1$ and $Y_{1}^1$.
1. Make the appropriate linear combinations to obtain the $p_x$ and $p_y$ orbitals. Hint: $p_x$ and $p_y$ are real functions
2. Compute the parametric surface for the probability density of $p_x$ and $p_y$ as in exercise 5
3. Plot $p_x$ and $p_y$
Advanced (not mandatory): repeat the points above for $d$ and $f$ orbitals.
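A sketch of the real combinations (assuming the Condon-Shortley sign convention for $Y_1^1$ and $Y_{-1}^1$; with other conventions the combinations pick up an overall sign):

```python
import numpy as np

def Y_11(phi, theta):    # l=1, m=+1 (Condon-Shortley phase)
    return -np.sqrt(3 / (8 * np.pi)) * np.sin(phi) * np.exp(1j * theta)

def Y_1m1(phi, theta):   # l=1, m=-1
    return np.sqrt(3 / (8 * np.pi)) * np.sin(phi) * np.exp(-1j * theta)

def p_x(phi, theta):
    # (Y_{-1}^1 - Y_1^1)/sqrt(2) = sqrt(3/(4*pi)) * sin(phi) * cos(theta)
    return ((Y_1m1(phi, theta) - Y_11(phi, theta)) / np.sqrt(2)).real

def p_y(phi, theta):
    # i*(Y_{-1}^1 + Y_1^1)/sqrt(2) = sqrt(3/(4*pi)) * sin(phi) * sin(theta)
    return (1j * (Y_1m1(phi, theta) + Y_11(phi, theta)) / np.sqrt(2)).real
```

The `.real` is safe because the imaginary parts cancel exactly; the densities $|p_x|^2$ and $|p_y|^2$ can then be plotted with the same parametrization as in Exercise 5.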
| github_jupyter |
# Image prediction accuracy analysis
- Josh Montague (MIT License)
In this notebook, we'll look at the TSV dump from the mysql db that recorded the prediction accuracies (top1, top5, none) from the webapp.
Note: the dialog and some of the choices were specific to my data. The outputs should be sufficiently general that they work with your data, though. Feel free to modify the text and conclusions if it bothers you!
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('bmh')
%matplotlib inline
project_dir = '/path/to/repo/'
# we don't need the id or URL for this
results = (pd.read_csv(project_dir + 'rdata/db-results-export.tsv', sep='\t')
.drop(['id','link'], axis=1)
)
results.head()
# make sure they're in time order (snowflake id encodes timing)
results = results.sort_values('tweet_id', ascending=True)
#results.tail()
# distribution of labels?
results['label_id'].value_counts().sort_index()
```
Remember that:
```
0 = top1
1 = top5
2 = None
```
Also recall that the `3`s are from a bug in the UI buttons, and should be `2`s. I think there were < 10 total uses of the buttons, so there are probably a handful of 2s that are 1s and 1s that are 0s. But since they're one to two orders of magnitude less common, who cares.
Just drop the 3s for now.
```
results = results.query('label_id != 3')
results['label_id'].value_counts().sort_index()
```
Quick (and unsurprising) take: most of the predictions are incorrect.
Given the random things people post on Twitter, this was expected from the outset.
# Analyze score distributions
Let's start by looking at the scores and their distributions. Start by recalling the format of the data.
We can begin by simply looking at the distributions of the prediction probabilities. Recall that these are the ordered (1 to 5), most-probable predictions for each image.
For annoying historical reasons, the remainder of the notebook uses a dataframe named `adj_results`, so make that connection here.
```
adj_results = results
adj_results.head()
axes = (adj_results[['score1','score2','score3','score4','score5']]
.plot.hist(bins=100,
subplots=True,
#sharey=True,
figsize=(8,8))
)
plt.xlabel('prediction probability')
axes[0].set_title('distribution of top-5 prediction probabilities ({} images)'.format(len(adj_results)))
```
These distributions are consistent with the notion that `score1` is always the most probable result, `score2` less so, and so on. There is never a case in which the fifth-most likely label is predicted to be highly likely.
Interestingly, note that most of the distribution weight for `score1` (the most probable prediction) is still at very low probability. This indicates that the model is generally not confident in its predictions for our data set. Another way of saying it is that this out-of-the-box model was not designed for the task of labeling random Twitter images, and as such it doesn't do a spectacular job of it.
But that's fine, and expected. We're looking to see if we can still get some use out of it.
If you uncomment the `sharey` kwarg, you can also see more clearly that all the probability masses do sum to 1, but the `score1` distribution is just much more spread out over the probability range.
Note that this view doesn't account for (or display) the ground truth that we applied. Let's encode that in the frame with a text label.
```
# add a column that maps the label_id to the text labels
label_dict = {0:'top1', 1:'top5', 2:'none'}
adj_results['truth'] = adj_results['label_id'].apply(lambda x: label_dict[x])
adj_results[['label_id','truth']].head()
```
## Prediction probability distributions per label
Now that we have the labels, let's use them in a similar set of distributions as above.
```
adj_results.head()
cols = ['score1','score2','score3','score4','score5']
labels = ['top1','top5','none']
# make 3 separate charts (one for each text label)
# each one will have subplots for the 5 scores
for label in labels:
axes = (adj_results.query('truth == @label')[cols].plot.hist(bins=100,
subplots=True,
#sharey=True,
figsize=(8,6)
)
)
# put the title on the top subplot
axes[0].set_title('score distribution for label={}'.format(label))
plt.legend()
plt.xlabel('prediction probability')
```
Pretty interesting shapes. Given that the distributions for each label are pretty unique, we could probably train some other model to predict the labels based on all the top 5 scores. Interesting idea, but not the point right now; we want to know more about how to use the `top1` predictions.
Of note: in both `top5` and `none`, the `score1` (highest probability prediction) can be as high as 0.9 or more. Similarly, even with the `top1` label, some `score1` values are as low as 0.1. **This makes it clear that we can't blindly use the `score1` probability.**
We need something like an ROC curve (FPR v TPR) on `score1` to understand the tradeoffs.
## ROC curve
How do we construct an ROC curve for this data? [Wiki ref for calculations](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)
- assume `score1` is the prediction probability of a binary classifier
- assume ground truth is binary: the label is `top1` or it's not
- vary probability decision-making threshold from 0 to 1
- at each threshold
- everything above threshold is "prediction positive"
- everything below threshold is "prediction negative"
- at all times (fixed)
- actual data with `top1` / "1" labels are the "condition positive"
- everything else is "condition negative"
- TPR = TP / cond pos
- FPR = FP / cond neg
- plot TPR vs. FPR at each threshold value
```
sub_df = adj_results[['score1','truth']].copy(deep=True)
# for easier comparisons, let's convert 'top1' to 1 and everything else to 0
sub_df['truth'] = sub_df['truth'].apply(lambda x: 1 if x == 'top1' else 0)
#del sub_df['pred_acc']
sub_df.rename(columns={'score1':'pred'}, inplace=True)
sub_df.head()
# threshold range
t_rng = np.linspace(0,1,100)
# ground truth (fixed)
cond_pos = len(sub_df.query('truth == 1'))
cond_neg = len(sub_df.query('truth == 0'))
# use this copy of the frame above for flipping the preds at each threshold
# `tmp_df` will be our binary label holder
tmp_df = sub_df.copy(deep=True)
tpr_list = []
fpr_list = []
for t in t_rng:
# flip to 0 or 1 based on this threshold
tmp_df['pred'] = sub_df['pred'].apply(lambda x: 1 if x > t else 0)
# calculate TPR (TP / cond pos)
tp = len(tmp_df.query('pred == 1 and truth == 1'))
tpr = tp / cond_pos
tpr_list.append(tpr)
# calculate FPR (FP / cond neg)
fp = len(tmp_df.query('pred == 1 and truth == 0'))
fpr = fp / cond_neg
fpr_list.append(fpr)
#print('tpr_list: ', tpr_list)
#print('fpr_list: ', fpr_list)
fig = plt.figure(figsize=(6,6))
plt.plot(fpr_list, tpr_list, '--.', label='model')
plt.plot(t_rng, t_rng, ':', c='k', label='diagonal')
plt.xlabel('FPR')
plt.ylabel('TPR')
# rough estimate of auc
auc = sum(tpr_list)/len(tpr_list)
plt.title('ROC curve for top1 predictions (AUC ~ {:.2f})'.format(auc))
plt.legend()
```
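The same curve (and a proper trapezoidal AUC, rather than the rough TPR average used above) is available off the shelf from scikit-learn's `roc_curve` and `auc`. A sketch on synthetic stand-in data, since `sub_df` lives in the cells above and `sklearn` is an extra dependency here; `truth` and `pred` play the role of the corresponding `sub_df` columns:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(42)
# stand-in for sub_df: binary ground truth plus a score correlated with it
truth = rng.integers(0, 2, size=500)
pred = np.clip(0.3 * truth + rng.normal(0.35, 0.2, size=500), 0, 1)

fpr, tpr, thresholds = roc_curve(truth, pred)
roc_auc = auc(fpr, tpr)  # trapezoidal integration of the curve
print('AUC = {:.2f}'.format(roc_auc))
```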
The last step would be to choose a threshold value for the prediction probability and then say that's where we operate. How do we choose the best threshold?
Some googling suggests that the "optimal" choice is typically dependent on the industry and context of the problem. That said, [some texts](https://books.google.com/books?id=JzT_CAAAQBAJ&lpg=PT43&dq=Selecting%20an%20Optimal%20Threshold%20ROC%20curve&pg=PT43#v=onepage&q=Selecting%20an%20Optimal%20Threshold%20ROC%20curve&f=false) suggest there are two common answers:
- the point that is closest to "ideal" (i.e. (0,1))
- the point that is furthest from the "informationless diagonal"
Let's calculate both and see what they look like.
### Closest to "ideal"
```
# create an array of the TPR and FPR values
fptp_list = []
for f,t in zip(fpr_list, tpr_list):
fptp_list.append([f, t])
# NB: fptp starts at (1,1) at t=0; reverse order (start at (0,0)) for
# same convention as t_rng
fptp = np.array(fptp_list)[::-1]
# create an array of the "ideal" values
ideal_list = []
for _ in fptp:
ideal_list.append([0,1])
ideal = np.array(ideal_list)
# calculate pair-wise (euclidean) distances
ideal_dist = np.array([np.linalg.norm(a-b) for a,b in zip(ideal, fptp)])
# get the index of the smallest distance
ideal_position = np.argmin(ideal_dist)
# ideal_dist should be 1.0 at both ends!
#plt.plot(ideal_dist)
```
### Furthest from diagonal
```
# create an array of the diagonal values
# NB: diag starts at (0,0)
diag = np.array([[x, x] for x in t_rng])
# calculate pair-wise (euclidean) distances
# (fptp was already reversed above so that, like diag, it starts at (0,0))
diag_dist = np.array([np.linalg.norm(a-b) for a,b in zip(diag, fptp)])
# get the index of the largest distance
diag_position = np.argmax(diag_dist)
# diag_dist should be 0 at both ends!
#plt.plot(diag_dist)
```
### Compare results
```
print('The threshold closest to the "ideal" point in the ROC curve is at:')
print(' (FPR, TPR)={}'.format(fptp[ideal_position]))
print(' with prediction threshold={:.3f}'.format(t_rng[ideal_position]))
print('The threshold furthest from the "informationless diagonal" in the ROC curve is at:')
print(' (FPR, TPR)={}'.format(fptp[diag_position]))
print(' with prediction threshold={:.3f}'.format(t_rng[diag_position]))
fig = plt.figure(figsize=(6,6))
plt.plot(fpr_list, tpr_list, '--.', label='model')
plt.plot(t_rng, t_rng, ':', c='k', label='diagonal')
# diagonal threshold
plt.plot(*fptp[diag_position], 'o', markersize=10, alpha=0.6,
label='furthest from diagonal (t={:.2f})'.format(t_rng[diag_position])
)
plt.plot([fptp[diag_position][0], fptp[diag_position][0]],
fptp[diag_position],
'k--', alpha=0.5
)
# ideal threshold
plt.plot(*fptp[ideal_position], 'o', markersize=10, alpha=0.6,
label='closest to ideal (t={:.2f})'.format(t_rng[ideal_position])
)
plt.plot([ideal[0][0], fptp[ideal_position][0]],
[ideal[0][1], fptp[ideal_position][1]],
'k--', alpha=0.5
)
plt.xlabel('FPR')
plt.ylabel('TPR')
# rough estimate of auc
auc = sum(tpr_list)/len(tpr_list)
plt.title('ROC curve for top1 predictions (AUC ~ {:.2f})'.format(auc))
plt.legend()
```
Since there is some random sampling involved in generating the first region of data, these threshold values can vary a bit. In repeated runs, they typically range from 0.5 to 0.6, and sometimes they are the same point.
Let's use the "furthest from the diagonal" point and move on to recalculate other parts of the confusion matrix params.
```
# set threshold
t = t_rng[diag_position]
# ground truth still fixed at cond_pos, cond_neg
# use this copy of the frame above for flipping the preds at each threshold
tmp_df = sub_df.copy(deep=True)
tpr_list = []
fpr_list = []
# flip to 0 or 1 based on this threshold
tmp_df['pred'] = sub_df['pred'].apply(lambda x: 1 if x > t else 0)
# calculate TPR (TP / cond pos)
tp = len(tmp_df.query('pred == 1 and truth == 1'))
tpr = tp / cond_pos
tpr_list.append(tpr)
# calculate FPR (FP / cond neg)
fp = len(tmp_df.query('pred == 1 and truth == 0'))
fpr = fp / cond_neg
fpr_list.append(fpr)
# calculate TN + FN
tn = len(tmp_df.query('pred == 0 and truth == 0'))
fn = len(tmp_df.query('pred == 0 and truth == 1'))
# accuracy
accuracy = (tp + tn) / len(tmp_df)
print('total accuracy: {:.3f}'.format(accuracy))
```
# Common categories of predictions
Ok, so we've got a mid- to high-70s% accuracy image classifier by setting the threshold (reflected in `tmp_df`), but it's possible that there are class imbalances or other sorts of bias in the *types* of images we can accurately predict.
What are the most common accurately predicted images, and what fraction of the data do they comprise?
We can use the index of tmp_df to join back onto the larger frame with the actual labels.
**Most common TP labels**
```
# tmp_df still has our binary labels
# most common labels for TP
(tmp_df.query('pred == 1 and truth == 1')
#.head()
.join(adj_results[['tweet_id','keyword1']], how='left')
.groupby(by='keyword1')
.count()
.sort_values('tweet_id', ascending=False)[['tweet_id']]
.rename(columns={'tweet_id':'count'})
.head(15)
)
```
From this, we see some data that feels consistent with my experience hand-labeling things: the model is really good at recognizing suits and websites in images. Recall that in my hand-labeling, I counted anything that was a mobile app or desktop screenshot as "web_site". Then, however, the counts drop off pretty quickly.
**Most common FP categories**
```
# tmp_df still has our binary labels
# most common labels for FP
(tmp_df.query('pred == 1 and truth == 0')
#.head()
.join(adj_results[['tweet_id','keyword1']], how='left')
.groupby(by='keyword1')
.count()
.sort_values('tweet_id', ascending=False)[['tweet_id']]
.rename(columns={'tweet_id':'count'})
.head(15)
)
```
Not a huge shock: there are a bunch of screenshots, so web_site is also at the top of the false positive list. I'm not sure what to make of envelope and menu making it into the top of that list, either.
**Most common FN categories**
```
# tmp_df still has our binary labels
# most common labels for FN
(tmp_df.query('pred == 0 and truth == 1')
#.head()
.join(adj_results[['tweet_id','keyword1']], how='left')
.groupby(by='keyword1')
.count()
.sort_values('tweet_id', ascending=False)[['tweet_id']]
.rename(columns={'tweet_id':'count'})
.head(15)
)
```
**Most common TN categories**
```
# tmp_df still has our binary labels
# most common labels for TN
(tmp_df.query('pred == 0 and truth == 0')
#.head()
.join(adj_results[['tweet_id','keyword1']], how='left')
.groupby(by='keyword1')
.count()
.sort_values('tweet_id', ascending=False)[['tweet_id']]
.rename(columns={'tweet_id':'count'})
.head(15)
)
```
# Conclusion
So, is this image classifier useful? Maybe. It's pretty noisy, though. I do think there's probably utility in attaching labels to tweets when the prediction probability is above, say, one of these thresholds.
To see what you think:
- run the cell below
- copy/paste the link and compare to the corresponding `keyword1`
- (it's ok that it has my username, it works anyway!)
Re-run the cell as many times as necessary.
```
# for true positives
tw_id, kw1 = (tmp_df.query('pred == 1 and truth == 1')
# for false positives
#tw_id, kw1 = (tmp_df.query('pred == 1 and truth == 0')
.join(adj_results[['tweet_id','keyword1']], how='left')
.sample(n=1)[['tweet_id','keyword1']].values[0]
)
print('prediction: {}\n'.format(kw1))
print('URL to copy-paste: {}'.format('https://www.twitter.com/jrmontag/status/{}'.format(tw_id)))
```
# Follow up
There are many places that this work could go.
### similar, better labels
For one, now that all of these pieces exist, it would be straightforward (one line of code) to swap out the VGG16 model for [any other pre-trained model](https://keras.io/applications/#available-models). As seen in [this diagram](https://culurciello.github.io/tech/2016/06/04/nets.html) the top1 performance varies (generally slightly higher than VGG16), but other models may also be faster to evaluate according to that post.
If the task were done similarly (top1, top5, none), then all of this code could be re-used as-is (make a new db table, though). The turnaround time for a next round (like this one) would be significantly less. Ballpark, based on my time logs:
- a couple hours to ensure no other code changes are needed
- an overnight (or couple hour) data collection
- an overnight (or couple hour) prediction generation
- a few hours to label the images
- a couple hours to evaluate using this notebook
For a total of maybe 10 hours of person time.
### image captions
Another super intriguing angle to take would be to explore some of the more recent developments in image *captioning* (vs. labeling). There are a handful of examples, most notably [Google's "Show and Tell" research](https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html). Unfortunately, it doesn't seem like there are open-source weights for this model yet. Perhaps in the near future.
### faster analysis
One other choice that might make the whole process faster is to skip the notion of "top5" results and treat the output as a binary "top1" or "nothing".
| github_jupyter |
<span style="font-family:Papyrus; font-size:3em;">Homework 2</span>
<span style="font-family:Papyrus; font-size:2em;">Cross Validation</span>
# Problem
In this homework, you will use cross validation to analyze the effect on model quality
of the number of model parameters and the noise in the observational data.
You do this analysis in the context of design of experiments.
The two factors are (i) number of model parameters and (ii) the noise in the observational data;
the response will be the $R^2$ of the model (actually the $R^2$ averaged across the folds of
cross validation).
You will investigate models of linear pathways with 2, 4, 6, 8, 10 parameters.
For example, a two-parameter model uses $S_1 \xrightarrow{v_1} S_2 \xrightarrow{v_2} S_3$,
where $v_i = k_i s_i$, $k_i$ is a parameter to estimate, and $s_i$ is the concentration of $S_i$.
The initial concentration of $S_1 = 10$, and the true value of $k_i$ is $i$. Thus, for a two parameter model,
$k_1 = 1$, $k_2 = 2$.
You will generate the synthetic data by adding a
noise term to the true model.
The noise term is drawn from a normal distribution with mean 0
and standard deviations of 0.2, 0.5, 0.8, 1.0, and 1.5, depending on the experiment.
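Adding the noise term can be sketched as follows (the variable names and the stand-in trajectory are made up for illustration; the homework's own `generateObserved` helper does the same thing via `makeSyntheticData`):

```python
import numpy as np

rng = np.random.default_rng(0)
noise_std = 0.5                            # one of 0.2, 0.5, 0.8, 1.0, 1.5
true_values = np.linspace(10.0, 0.0, 100)  # stand-in for a simulated concentration
observed = true_values + rng.normal(0.0, noise_std, size=true_values.shape)
```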
You will design experiments, implement codes to run them, run the experiments, and interpret the results.
The raw output of these experiments will be
a table structured as the one below.
Cell values will be the average $R^2$ across the folds of the cross validation done with
one level for each factor.
| | 2 | 4 | 6 | 8 | 10 |
| -- | -- | -- | -- | -- | -- |
| 0.2 | ? | ? | ? | ? | ? |
| 0.5 | ? | ? | ? | ? | ? |
| 0.8 | ? | ? | ? | ? | ? |
| 1.0 | ? | ? | ? | ? | ? |
| 1.5 | ? | ? | ? | ? | ? |
1. (2 pt) **Generate Models.** Write (or generate) the models in Antimony, and produce plots for their true values. Use a simulation time
of 10 and 100 points.
1. (1 pt) **Generate Synthetic Data.** Write a function that creates synthetic data given the parameters std
and numParameter.
1. (1 pt) **Extend ``CrossValidator``.** You will extend ``CrossValidator`` (in ``common/util_crossvalidation.py``)
by creating a subclass ``ExtendedCrossValidator`` that has the method
``calcAvgRsq``. The method takes no argument (except ``self``) and returns the average value of
$R^2$ for the folds. Don't forget to document the function and include at least one test.
1. (4 pt) **Implement ``runExperiments``.** This function has inputs: (a) list of the number of parameters for the
models to study and (b) list of the standard deviations of the noise terms.
It returns a dataframe with: columns are the number of parameters; rows (index) are the standard deviations of noise;
and values are the average $R^2$ for the folds defined by the levels of the factors.
Run experiments that produce the tables described above using five-fold cross validation and 100 simulation points.
1. (4 pt) **Calculate Effects.** Using the baseline standard deviation of noise of 0.8 and number of parameters of 6, calculate $\mu$, $\alpha_{i,k_i}$, and
$\gamma_{i,k_i,j,k_j}$.
1. (3 pt) **Analysis.** Answer the following questions
1. What is the effect on $R^2$ as the number of parameters increases? Why?
1. How does the noise standard deviation affect $R^2$? Why?
1. What are the interaction effects and how do they influence the response (average $R^2$)?
**Please do your homework in a copy of this notebook, maintaining the sections.**
# Programming Preliminaries
This section provides the setup to run your python codes.
```
IS_COLAB = False
#
if IS_COLAB:
!pip install tellurium
!pip install SBstoat
#
# Constants for standalone notebook
if not IS_COLAB:
CODE_DIR = "/home/ubuntu/advancing-biomedical-models/common"
else:
from google.colab import drive
drive.mount('/content/drive')
CODE_DIR = "/content/drive/My Drive/Winter 2021/common"
import sys
sys.path.insert(0, CODE_DIR)
import util_crossvalidation as ucv
from SBstoat.namedTimeseries import NamedTimeseries, TIME
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tellurium as te
END_TIME = 5
NUM_POINT = 100
NOISE_STD = 0.5
# Column names
C_NOISE_STD = "noisestd"
C_NUM_PARAMETER = "no. parameters"
C_VALUE = "value"
#
NOISE_STDS = [0.2, 0.5, 0.8, 1.0, 1.5]
NUM_PARAMETERS = [2, 4, 6, 8, 10]
def isSame(collection1, collection2):
"""
Determines if two collections have the same elements.
"""
diff = set(collection1).symmetric_difference(collection2)
return len(diff) == 0
# Tests
assert(isSame(range(3), [0, 1, 2]))
assert(not isSame(range(4), range(3)))
```
# Generate Models
```
MODEL = """
S1 -> S2; k1*S1
S2 -> S3; k2*S2
S1 = 10
S2 = 0
k1 = 1
k2 = 2
"""
rr = te.loada(MODEL)
class LinearModel(object):
def __init__(self, numParameter, s1Value=10):
"""
numParameter: int
number of parameters in the model
s1Value: float
Initial value for S1
"""
self.numParameter = numParameter
self.s1Value = s1Value
#
# modelStr - Antimony model
# parameterDct - true values of parameters
self.modelStr, self.parameterDct = self._generateModel()
self.parameterNames = list(self.parameterDct.keys())
# Calculated by other methods
self.resultTS = None # Simulation result
def _generateModel(self):
"""
Constructs an antimony model.
Returns
-------
str: antimony model
list-str: model parameters
dict:
key: parameterName
value: true value
"""
def mkSpeciesInitialization(idx, value=0):
return "\nS%d = %2.2f" % (idx, value)
#
reactionStr = ""
initializationStr = ""
parameterDct = {}
for idx in range(1, self.numParameter+1):
if idx == 1:
value = self.s1Value
else:
value = 0
parameterName = "k%d" % idx
parameterDct[parameterName] = idx
initializationStr += mkSpeciesInitialization(idx, value)
if idx == self.numParameter:
initializationStr += mkSpeciesInitialization(idx + 1, 0)
initializationStr += "\n%s = %2.2f" % (parameterName, 1.0*idx)
reactionStr += "\nS%d -> S%d; k%d*S%d" % (idx, idx+1, idx, idx)
modelStr = "%s\n%s" % (reactionStr, initializationStr)
return modelStr, parameterDct
def simulate(self, endTime=END_TIME, numPoint=NUM_POINT):
"""
Simulates the model.
Parameters
----------
endTime: float
end of the simulation
numPoint: int
number of points in the simulation
"""
rr = te.loada(self.modelStr)
arr = rr.simulate(0, endTime, numPoint)
self.resultTS = NamedTimeseries(namedArray=arr)
def plotTrueModel(self, **kwargs):
"""
Plots the result of a simulation.
Parameters
----------
kwargs: dict
arguments passed to plot
Returns
-------
Matplotlib.Axes
"""
if self.resultTS is None:
self.simulate()
return ucv.plotTS(self.resultTS, linetype="line", **kwargs)
# Tests
numParameter = 3
model = LinearModel(numParameter)
_ = te.loada(model.modelStr)
assert(len(model.parameterDct) == numParameter)
# All species and constants are present
for term in ["S", "k"]:
trues = ["%s%d" % (term, n) in model.modelStr for n in range(1, numParameter+1)]
assert(all(trues))
# simulate
model.simulate()
assert(len(model.resultTS) > 0)
# plotResult
model.plotTrueModel(title="test", isPlot=False)
for numParameter in NUM_PARAMETERS:
model = LinearModel(numParameter)
model.plotTrueModel(title="numParameter: %d" % numParameter)
```
# Generate Synthetic Data
```
class ExtendedLinearModel(LinearModel):
"""Extends Model by providing generated observations."""
def __init__(self, *args, **kwargs):
"""
numParameter: int
number of parameters in the model
"""
super().__init__(*args, **kwargs)
self.noiseStd = None
self.observedTS = None
def generateObserved(self, noiseStd=NOISE_STD, **kwargs):
"""
Creates synthetic data for the model.
model: Model
noiseStd: float
kwargs: dict
optional keywords passed to run model
"""
self.noiseStd = noiseStd
self.simulate(**kwargs)
self.observedTS = ucv.makeSyntheticData(fittedTS=self.resultTS, std=noiseStd)
def plotTrueAndGeneratedData(self, isPlot=True):
ax = self.plotTrueModel(isPlot=isPlot)
if self.observedTS is None:
self.generateObserved()
title = "parameters: %d noise std: %2.2f" % (self.numParameter, self.noiseStd)
ucv.plotTS(self.observedTS, title=title, isPlot=isPlot, ax=ax)
# Tests
model = ExtendedLinearModel(3)
model.generateObserved()
assert(len(model.observedTS) == len(model.resultTS))
# plotTrueAndGeneratedData
model.plotTrueAndGeneratedData()
# Here are some examples of the data
for numParameter in [2, 4, 6]:
model = ExtendedLinearModel(numParameter)
model.plotTrueAndGeneratedData()
```
# ``ExtendedCrossValidator``
Hint: Subclass using ``class ExtendedCrossValidator(ucv.CrossValidator):``.
```
class ExtendedCrossValidator(ucv.CrossValidator):
"""Calculates the average Rsq for the folds."""
def calcAvgRsq(self):
"""
Calculates the average Rsq across the folds in the cross validation.
Returns
-------
float
"""
return np.mean(self.rsqs)
# Tests
numFold = 5
numParameter = 4
model = ExtendedLinearModel(numParameter)
model.generateObserved(noiseStd=1.5)
validator = ExtendedCrossValidator(numFold, model.modelStr, model.observedTS, model.parameterNames,
trueParameterDct=model.parameterDct, method="least_squares",
lower=0, upper=100)
validator.execute()
avgRsq = validator.calcAvgRsq()
assert(avgRsq >= 0)
assert(avgRsq <= 1.0)
```
# Implement ``runExperiments``
```
def runExperiments(numParameters, noiseStds, numFold=5, **kwargs):
"""
Constructs tables for the results of computational experiments.
Experiments are described by the collection of values for the
number of parameters in the model and the standard deviation of noise.
Parameters
----------
numParameters: list-int
noiseStds: list-float
numFold: int
kwargs: dict
optional arguments for ExtendedCrossValidator
Returns
-------
pd.DataFrame: rsqDF - average R2 for folds
columns: NUM_PARAMETER in model
index: NOISE_STD of observational data
values: average R2
"""
def mkTable(dct):
"""
Creates a dataframe table from a dictionary.
Columns are the number of parameters.
Rows are the standard deviation of the noise.
"""
df = pd.DataFrame(dct)
df = df.set_index(C_NOISE_STD)
tableDF = df.pivot_table(values=C_VALUE,columns=C_NUM_PARAMETER,
index=C_NOISE_STD)
return tableDF
def updateDct(dct, noiseStd, numParameter, value):
"""
Update entries in the dictionary.
"""
dct[C_NOISE_STD].append(noiseStd)
dct[C_NUM_PARAMETER].append(numParameter)
dct[C_VALUE].append(value)
#
rsqDct = {C_NOISE_STD: [], C_NUM_PARAMETER: [], C_VALUE: []}
for noiseStd in noiseStds:
for numParameter in numParameters:
model = ExtendedLinearModel(numParameter)
model.generateObserved(noiseStd=noiseStd)
validator = ExtendedCrossValidator(numFold, model.modelStr, model.observedTS,
model.parameterNames,
trueParameterDct=model.parameterDct, **kwargs)
validator.execute()
updateDct(rsqDct, noiseStd, numParameter,
np.mean(validator.rsqs))
rsqDF = mkTable(rsqDct)
return rsqDF
# Tests
numParameters = [2, 4, 10]
noiseStds = [0.5, 0.8, 1.5]
rsqDF = runExperiments(numParameters, noiseStds)
for df in [rsqDF]:
assert(isSame(numParameters, df.columns))
assert(isSame(noiseStds, df.index))
RSQ_DF = runExperiments(NUM_PARAMETERS, NOISE_STDS)
RSQ_DF
```
# Calculate Effects
Here, we calculate $\mu$, $\alpha_{i, k_i}$, and $\gamma_{i, k_i, j, k_j}$.
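For reference, these quantities come from the usual two-factor effects model: each table cell (the average $R^2$ at noise level $k_i$ and parameter-count level $k_j$) is decomposed as

\begin{align}
\bar{R}^2_{k_i, k_j} = \mu + \alpha_{i,k_i} + \alpha_{j,k_j} + \gamma_{i,k_i,j,k_j}
\end{align}

where $\mu$ is the response at the baseline (center) cell, the $\alpha$ terms are the main effects of each factor relative to $\mu$, and $\gamma$ is the interaction left over after the main effects are subtracted.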
```
def calculateEffects(tableDF):
"""
Calculates mu (the center-cell baseline), the alpha main effects for the
rows and columns, and the gamma interaction terms of tableDF.
"""
columns = tableDF.columns.tolist()
rows = tableDF.index.tolist()
numRow = len(rows)
numCol = len(columns)
#
if numRow % 2 != 1:
raise ValueError("Must have an odd number of rows.")
if numCol % 2 != 1:
raise ValueError("Must have an odd number of columns.")
centerRow = numRow // 2
centerCol = numCol // 2
#
mu = tableDF.loc[rows[centerRow], columns[centerCol]]
#
rowAlphaSer = tableDF.loc[rows, columns[centerCol]] - mu
columnAlphaSer = tableDF.loc[rows[centerRow], columns] - mu
# Calculate the values for each entry
gammaDF = tableDF.copy()
for column in columns:
for row in rows:
gammaDF.loc[row, column] = gammaDF.loc[row, column] \
- mu - rowAlphaSer.loc[row] - columnAlphaSer.loc[column]
#
return mu, rowAlphaSer, columnAlphaSer, gammaDF
# Tests
numParameters = [2, 4, 6]
noiseStds = [0.5, 0.8, 1.5]
rsqDF = runExperiments(numParameters, noiseStds)
mu, rowAlphaSer, columnAlphaSer, gammaDF = calculateEffects(rsqDF)
assert((0 <= mu) and (mu <= 1.0))
assert(len(rowAlphaSer) == len(noiseStds))
assert(len(columnAlphaSer) == len(numParameters))
```
Calculate Rsqs and then calculate mu, alpha, gamma
```
mu, noiseStdAlphaSer, numParameterAlphaSer, gammaDF = calculateEffects(RSQ_DF)
mu
# Alpha for noise standard deviations
noiseStdAlphaSer
# Alpha for number of parameters
numParameterAlphaSer
gammaDF
```
# Analysis
**What is the effect on $R^2$ as the number of parameters increases? Why?**
$R^2$ decreases because there are more parameters to estimate with the same data.
**How does the noise standard deviation affect $R^2$? Why?**
$R^2$ decreases with noise.
**What are the interaction effects and how do they influence the response (average $R^2$)?**
The interaction effects are mostly small, and they generally increase with the number of parameters in the model.
| github_jupyter |
# User Defined Functions
From time to time you hit a wall where you need a simple transformation, but Spark does not offer an appropriate function in the `pyspark.sql.functions` module. Fortunately you can simply define new functions, so called *user defined functions*, or *UDFs* for short.
```
from pyspark.sql.functions import *
from pyspark.sql.types import *
df = spark.createDataFrame([('Alice & Bob',12),('Thelma & Louise',17)],['name','age'])
df.toPandas()
import html
html.escape("Thelma & Louise")
import html
html_encode = udf(lambda s: html.escape(s), StringType())
result = df.select(html_encode('name').alias('html_name'))
result.toPandas()
```
As an alternative, you can also use a Python decorator for declaring a UDF:
```
@udf(StringType())
def html_encode(s):
return html.escape(s)
result = df.select(html_encode('name').alias('html_name'))
result.toPandas()
```
## Complex return types
PySpark also supports complex return types, for example structs (or also arrays)
```
@udf(StructType([
StructField("org_name", StringType()),
StructField("html_name", StringType())
]))
def html_encode(s):
return (s,html.escape(s))
result = df.select(html_encode('name').alias('both_names'))
result.toPandas()
```
## SQL Support
If you want to use a Python UDF inside a SQL query, you also need to register it, so PySpark knows its name.
```
html_encode = spark.udf.register("html_encode", lambda s: html.escape(s), StringType())
df.createOrReplaceTempView("famous_pairs")
result = spark.sql("SELECT html_encode(name) FROM famous_pairs")
result.toPandas()
```
# Pandas UDFs
"Normal" Python UDFs are pretty expensive (in terms of execution time), since for every record the following steps need to be performed:
* record is serialized inside JVM
* record is sent to an external Python process
* record is deserialized inside Python
* record is Processed in Python
* result is serialized in Python
* result is sent back to JVM
* result is deserialized and stored inside result DataFrame
This does not only sound like a lot of work, it actually is. Therefore Python UDFs are an order of magnitude slower than native UDFs written in Scala or Java, which run directly inside the JVM.
But since Spark 2.3 an alternative approach is available for defining Python UDFs: so called *Pandas UDFs*. Pandas is a commonly used Python framework which also offers DataFrames (Pandas DataFrames, not Spark DataFrames). Spark 2.3 can convert a Spark DataFrame inside the JVM into a shareable memory buffer by using a library called *Arrow*. Python can then treat this memory buffer as a Pandas DataFrame and work directly on this shared memory.
This approach has two major advantages:
* No need for serialization and deserialization, since data is shared directly in memory between the JVM and Python
* Pandas has lots of very efficient implementations in C for many functions
Due to these two facts, Pandas UDFs are much faster and should be preferred over traditional Python UDFs whenever possible.
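The cost of per-record round trips can be felt even outside Spark. The following is a minimal sketch (plain Python and NumPy, not a Spark benchmark) comparing row-at-a-time processing with a whole-column vectorized operation on the same data:

```python
import time
import numpy as np

def plus_one_loop(values):
    # row-at-a-time: one Python-level call per record
    return [v + 1.0 for v in values]

def plus_one_vectorized(values):
    # whole-column operation, executed in C
    return values + 1.0

data = np.arange(200_000, dtype=np.float64)

t0 = time.perf_counter()
slow = plus_one_loop(data)
t1 = time.perf_counter()
fast = plus_one_vectorized(data)
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.4f}s, vectorized: {t2 - t1:.4f}s")
```

Both produce identical results, but the vectorized path is typically far faster; Pandas UDFs exploit exactly this effect by handing Python whole batches instead of single records.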
```
r = spark.range(0,100)
df = r.withColumn('v', r.id.cast("double")).withColumn("group", r.id % 5)
df.limit(10).toPandas()
```
## Classic UDF Approach
As an example, let's create a function which simply increments a numeric column by one. First let us have a look using a traditional Python UDF:
```
from pyspark.sql.functions import udf
# Use udf to define a row-at-a-time udf
@udf('double')
# Input/output are both a single double value
def plus_one(v):
return v + 1
result = df.withColumn('v2', plus_one(df.v))
result.limit(10).toPandas()
```
## Pandas UDF
Increment a value using a Pandas UDF. The Pandas UDF receives a `pandas.Series` object and also has to return a `pandas.Series` object.
```
from pyspark.sql.functions import pandas_udf, PandasUDFType
# Use pandas_udf to define a Pandas UDF
@pandas_udf('double', PandasUDFType.SCALAR)
# Input/output are both a pandas.Series of doubles
def pandas_plus_one(v):
return v + 1
result = df.withColumn('v2', pandas_plus_one(df.v))
result.limit(10).toPandas()
```
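The series-in/series-out contract can also be exercised with plain pandas, without a SparkSession. This is a minimal sketch of the same `pandas_plus_one` logic applied to a standalone `pandas.Series`:

```python
import pandas as pd

def pandas_plus_one(v: pd.Series) -> pd.Series:
    # receives a whole batch as a Series, returns a Series of equal length
    return v + 1

batch = pd.Series([1.0, 2.0, 3.0])
result = pandas_plus_one(batch)
print(result.tolist())  # [2.0, 3.0, 4.0]
```

Spark calls the function once per Arrow batch, so any function that maps a `pandas.Series` to an equal-length `pandas.Series` works here.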
## Grouped Pandas Aggregate UDFs
Since version 2.4.0, Spark also supports Pandas aggregation functions. This is the only way to implement custom aggregation functions in Python. Note that this type of UDF does not support partial aggregation and all data for a group or window will be loaded into memory.
```
@pandas_udf("double", PandasUDFType.GROUPED_AGG)
def mean_udf(v):
return v.mean()
result = df.groupBy("group").agg(mean_udf(df.v))
result.toPandas()
```
## Grouped Pandas Map UDFs
The example above transforms all records independently, but only one column at a time. Spark also offers so-called *grouped Pandas UDFs*, which operate on complete groups of records (as created by a `groupBy` method). This is a great means to replace the *User Defined Aggregation Functions* (UDAFs) that are missing in PySpark.
For example, let's subtract the mean of a group from all entries of that group. In Spark this could be achieved directly using windowed aggregations, but let's first have a look at a Python implementation which does not use grouped Pandas UDFs.
```
import pandas as pd
from pyspark.sql.functions import udf, collect_list, explode
from pyspark.sql.types import ArrayType, DoubleType

@udf(ArrayType(DoubleType()))
def subtract_mean(values):
series = pd.Series(values)
center = series - series.mean()
return [x for x in center]
groups = df.groupBy('group').agg(collect_list(df.v).alias('values'))
result = groups.withColumn('center', explode(subtract_mean(groups.values))).drop('values')
result.limit(10).toPandas()
```
The example is also incomplete, as the `id` column is now missing.
### Using Pandas Grouped UDFs
Now let's try to implement the same function using a Pandas grouped UDF
```
@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
# Input/output are both a pandas.DataFrame
def subtract_mean(pdf):
return pdf.assign(v=pdf.v - pdf.v.mean())
result = df.groupby('group').apply(subtract_mean)
result.limit(10).toPandas()
```
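The per-group computation performed by the grouped map UDF can be reproduced on a plain pandas DataFrame, which is handy for testing the function logic without a cluster. A sketch, assuming the same column names as above:

```python
import pandas as pd

pdf = pd.DataFrame({
    "id": range(10),
    "v": [float(i) for i in range(10)],
    "group": [i % 5 for i in range(10)],
})

# subtract each group's mean from its members, keeping all other columns
pdf["v_centered"] = pdf["v"] - pdf.groupby("group")["v"].transform("mean")
print(pdf.head())
```

After centering, every group's mean is zero, which is exactly what `subtract_mean` produces per group in Spark.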
# Example of grouped regressions
In this section, we demonstrate a slightly more advanced example, using Pandas grouped transformations to perform many ordinary least squares model fits in parallel. We reuse the weather data and try to predict the temperature of all stations with a very simple model per station.
```
%matplotlib inline
```
### Load Data
First we load data of a single year.
```
storageLocation = "s3://dimajix-training/data/weather"
from pyspark.sql.types import *
from pyspark.sql.functions import *
rawWeatherData = spark.read.text(storageLocation + "/2003")
weather_all = rawWeatherData.select(
substring(col("value"),5,6).alias("usaf"),
substring(col("value"),11,5).alias("wban"),
to_timestamp(substring(col("value"),16,12),"yyyyMMddHHmm").alias("timestamp"),
to_timestamp(substring(col("value"),16,12),"yyyyMMddHHmm").cast("long").alias("ts"),
substring(col("value"),42,5).alias("report_type"),
substring(col("value"),61,3).alias("wind_direction"),
substring(col("value"),64,1).alias("wind_direction_qual"),
substring(col("value"),65,1).alias("wind_observation"),
(substring(col("value"),66,4).cast("float") / lit(10.0)).alias("wind_speed"),
substring(col("value"),70,1).alias("wind_speed_qual"),
(substring(col("value"),88,5).cast("float") / lit(10.0)).alias("air_temperature"),
substring(col("value"),93,1).alias("air_temperature_qual")
)
```
## Analysis of one station
First we only analyse a single station, just to check our approach and the expressiveness of our model. It won't be a very good fit, but it will be good enough for our needs to demonstrate the concept.
So first we pick a single station, and we also only keep those records with a valid temperature measurement.
```
weather_single = weather_all.where("usaf='954920' and wban='99999'").cache()
pdf = weather_single.where(weather_single.air_temperature_qual == 1).toPandas()
pdf
```
### Create Feature Space
Our model will simply predict the temperature depending on the time of day and day of year. We use sine and cosine features with a daily period and a yearly period for fitting the model.
```
import numpy as np
import math
seconds_per_day = 24*60*60
seconds_per_year = 365*seconds_per_day
# Add sin and cos as features for fitting
pdf['daily_sin'] = np.sin(pdf['ts']/seconds_per_day*2.0*math.pi)
pdf['daily_cos'] = np.cos(pdf['ts']/seconds_per_day*2.0*math.pi)
pdf['yearly_sin'] = np.sin(pdf['ts']/seconds_per_year*2.0*math.pi)
pdf['yearly_cos'] = np.cos(pdf['ts']/seconds_per_year*2.0*math.pi)
# Make a plot, just to check how it looks
pdf[0:200].plot(x='timestamp', y=['daily_sin','daily_cos','air_temperature'], figsize=[16,6])
```
### Fit model
Now that we have the temperature and some features, we fit a simple model.
```
import statsmodels.api as sm
# define target variable y
y = pdf['air_temperature']
# define feature variables X
X = pdf[['ts', 'daily_sin', 'daily_cos', 'yearly_sin', 'yearly_cos']]
X = sm.add_constant(X)
# fit model
model = sm.OLS(y, X).fit()
# perform prediction
pdf['pred'] = model.predict(X)
# Make a plot of real temperature vs predicted temperature
pdf[0:200].plot(x='timestamp', y=['pred','air_temperature'], figsize=[16,6])
```
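The statsmodels fit above solves an ordinary least squares problem. The same coefficients can be recovered with NumPy's least-squares solver; here is a sketch on synthetic data (not the weather set), where the true intercept and slope are known:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 1.5 + 2.0 * x + rng.normal(scale=0.1, size=200)

# design matrix with a constant column, analogous to sm.add_constant
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [1.5, 2.0]
```

`sm.OLS(y, X).fit().params` would give the same numbers; statsmodels additionally provides standard errors and diagnostics.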
### Inspect Model
Now let us inspect the model, in order to find a way to store it in a Pandas DataFrame
```
model.params
type(model.params)
```
Create a DataFrame from the model parameters
```
x_columns = X.columns
pd.DataFrame([[model.params[i] for i in x_columns]], columns=x_columns)
```
## Perform OLS for all stations
Now we want to create a model for all stations. First we filter the data again, such that we only have valid temperature measurements.
```
valid_weather = weather_all.filter(weather_all.air_temperature_qual == 1)
```
### Feature extraction
Now we generate the same features, but this time we use Spark instead of Pandas operations. This simplifies later model fitting.
```
import math
seconds_per_day = 24*60*60
seconds_per_year = 365*seconds_per_day
features = valid_weather.select(
valid_weather.usaf,
valid_weather.wban,
valid_weather.air_temperature,
valid_weather.ts,
lit(1.0).alias('const'),
sin(valid_weather.ts * 2.0 * math.pi / seconds_per_day).alias('daily_sin'),
cos(valid_weather.ts * 2.0 * math.pi / seconds_per_day).alias('daily_cos'),
sin(valid_weather.ts * 2.0 * math.pi / seconds_per_year).alias('yearly_sin'),
cos(valid_weather.ts * 2.0 * math.pi / seconds_per_year).alias('yearly_cos')
)
features.limit(10).toPandas()
```
### Fit Models
Now we use a Spark Pandas grouped UDF in order to fit models for all weather stations in parallel.
```
group_columns = ['usaf', 'wban']
y_column = 'air_temperature'
x_columns = ['ts', 'const', 'daily_sin', 'daily_cos', 'yearly_sin', 'yearly_cos']
schema = features.select(*group_columns, *x_columns).schema
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def ols(pdf):
group = [pdf[g].iloc[0] for g in group_columns]
y = pdf[y_column]
X = pdf[x_columns]
model = sm.OLS(y, X).fit()
return pd.DataFrame([group + [model.params[i] for i in x_columns]], columns=group_columns + x_columns)
models = features.groupby(weather_all.usaf, weather_all.wban).apply(ols).cache()
models.limit(10).toPandas()
```
## Inspect and compare results
Now let's pick the same station again, and compare the model to the original model.
```
models.where("usaf='954920' and wban='99999'").toPandas()
model.params
```
## Numpy
<img src= "https://1.bp.blogspot.com/-3P-ULcc-aSc/XwINe8CjKgI/AAAAAAAAFFg/LhApCVa2YqUBlnqDDNW4NudSS398L5gjACLcBGAsYHQ/s640/NumPy%2Barrays.png">
#### A Multidimensional Array Object
```
import numpy as np
l1 = np.arange(1,13)
l1
l1.shape
np.arange(1,13).reshape((2,-1))
np.arange(1,13).reshape((-1,2))
np.arange(1,13).reshape((4,3))
# l1 = np.arange(1,13).reshape((4,3))
# l1 = np.arange(1,13).reshape((3,4))
# l1 = np.arange(1,13).reshape((2,6))
# l1 = np.arange(1,13).reshape((6,2))
# l1 = np.arange(1,13).reshape((12,1))
l1 = np.arange(1,13).reshape((2,-1)) # -1 is used for getting automatic possible row/col value
print(l1)
print("Dimension:", l1.ndim)
print("Type:", l1.dtype)
print("Size:", l1.size)
print("Itemsize:", l1.itemsize)
print("Shape:", l1.shape)
```
<img src="https://www.pythoninformer.com/img/numpy/3d-array-stack.png">
<img src="https://www.pythoninformer.com/img/numpy/3d-array.png">
`l1[i, j, k]` follows the order `l1[slide, row, column]`
<img src="https://cdn-images-1.medium.com/max/2000/1*_D5ZvufDS38WkhK9rK32hQ.jpeg">
**0D: Scalar, 1D: Vector, 2D: Matrix, 3D: Tensor/Cube, 4D: Vector of Tensors/Cubes, 5D: Matrix of Tensors/Cubes**
```
np.arange(4*3*2*2).reshape(4,3,2,2)
np.arange(4*3*2*2).reshape(4,-1,2,2)
l1 = np.arange(1,28).reshape((3,3,3))
print(l1)
print("Dimension:", l1.ndim)
print("Type:", l1.dtype)
print("Size:", l1.size)
print("Itemsize:", l1.itemsize)
print("Shape:", l1.shape)
l1 = np.arange(1,28).reshape((3,3,3))
print(l1)
# l1[i,j,k]
# l1[slide, row, column]
# l1[2]
# l1[1,2]
#l1[1,2,1]
l1[2]
l1[1,2]
l1[1,2,1]
l1 = np.array(['a','b','c'])
print(l1)
# 0 1 2
print(l1[[True,True,False]])
print(l1[[True,False,True]])
ages = np.arange(1,50)
adult = []
for age in ages:
if age >18:
adult.append(True)
else:
adult.append(False)
adult
ages[adult]
adult = [True for age in ages if age>18]  # this only collects True values, but we want both True and False, so:
adult
adult = [True if age>18 else False for age in ages]
adult
ages[adult]
ages[ages>18] #more easy way
l1 = np.arange(1,28).reshape((3,3,3))
l1[l1 % 3 == 0]
l1 = np.arange(1,28).reshape((3,3,3))
print(l1)
print(l1[l1 % 3 == 0])
l1 = np.random.randn(3,3,3)
print(l1)
print("Dimension:", l1.ndim)
print("Type:", l1.dtype)
print("Size:", l1.size)
print("Itemsize:", l1.itemsize)
print("Shape:", l1.shape)
l1 + 2
l1 * 2
l1 / 2
l1 ** 2
l1 ** 0.5 # under-root
l1 % 2
l1 % 2 == 0
```
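The boolean-mask pattern shown above can also be written with `np.where`, which acts as a vectorized if/else over the whole array. A small sketch using the same ages example:

```python
import numpy as np

ages = np.arange(1, 50)

# values selected by a boolean mask
adults = ages[ages > 18]

# np.where: elementwise choice between two values
labels = np.where(ages > 18, "adult", "minor")

print(adults[:3])     # first three adult ages
print(labels[17:20])  # labels around the age-18 boundary
```

This replaces the explicit loop used earlier to build the `adult` list with a single vectorized expression.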
# Getting Started with images
```
import cv2
img = cv2.imread('l.png', -1)
print(img)
cv2.imshow('image', img)
k=cv2.waitKey(0)
if k==27:
cv2.destroyAllWindows()
```
# Getting Started with Videos
`cv2.VideoWriter` is used to write and save the video.
`out.write()` copies each captured frame to the output file.
```
import cv2
import numpy as np
cap=cv2.VideoCapture(0);
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out=cv2.VideoWriter('output.avi',fourcc,20.0,(640,480))  # .avi extension matches the XVID codec
print(cap.isOpened())
while True:
ret, frame=cap.read()
#read properties
if ret == True:
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
out.write(frame)
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
cv2.imshow("frame",frame)
if cv2.waitKey(1) & 0xFF== ord('q'):
break
cap.release()
out.release()
cv2.destroyAllWindows()
```
# Draw geometric shapes on Images
```
import cv2
import numpy as np
#img=cv2.imread('l.png',1)
img = np.zeros([512, 512,3])
img=cv2.line(img,(0,0),(220,220),(0,255,0),1)
img=cv2.arrowedLine(img,(0,0),(220,220),(0,255,0),5)
img=cv2.rectangle(img,(20,20),(100,100),(0,0,255),10)
img=cv2.circle(img,(250,250),100,(0,0,255),-1)
img=cv2.putText(img,"Hello",(150,150),cv2.FONT_ITALIC,5,(255,0,0),10,cv2.LINE_AA)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Setting Camera Parameters
```
import cv2
cap=cv2.VideoCapture(0)
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
cap.set(3,2200)
cap.set(4,1500)
print(cap.get(3))
print(cap.get(4))
while cap.isOpened():
ret,frame = cap.read()
if(ret == True):
cv2.imshow('Frame',frame)
if cv2.waitKey(1) & 0xFF ==ord('q'):
break
else:
break
cap.release()
cv2.destroyAllWindows()
```
# Show Date and time
```
import cv2
import datetime
cap=cv2.VideoCapture(0)
while cap.isOpened():
ret,frame=cap.read()
if ret == True:
font=cv2.FONT_ITALIC
text='Width: '+str(cap.get(3))+' Height: '+str(cap.get(4))
datet=str(datetime.datetime.now())
frame=cv2.putText(frame,datet,(200,200),font,1,(255,255,0),1,cv2.LINE_AA)
cv2.imshow("frame",frame)
if cv2.waitKey(1) & 0xFF== ord('q'):
break
else:
break
cap.release()
cv2.destroyAllWindows()
```
# Handle mouse Events
```
import numpy as np
import cv2
# events =[i for i in dir(cv2) if 'EVENT' in i]
# print(events)
def click_event(event,x,y,flags,param):
if event==cv2.EVENT_LBUTTONDOWN:
print(x,' ',y)
font=cv2.FONT_ITALIC
cv2.putText(img,str(x)+' '+str(y),(x,y),font,0.5,(255,100,100),1,cv2.LINE_AA)
cv2.imshow('image',img)
if event==cv2.EVENT_RBUTTONDOWN:
blue = img[y,x, 0]
green = img[y,x, 1]
red = img[y,x,2]
font=cv2.FONT_ITALIC
strg=str(blue)+', '+str(green)+', '+str(red)
cv2.putText(img,strg,(x,y),font,0.5,(int(blue),int(green),int(red)),1,cv2.LINE_AA)
cv2.imshow('image',img)
img=cv2.imread('l.png',1)
cv2.imshow('image',img)
cv2.setMouseCallback('image',click_event)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Another example, drawing connected points and sampling pixel colors:
```
import numpy as np
import cv2
# events =[i for i in dir(cv2) if 'EVENT' in i]
# print(events)
def click_event(event,x,y,flags,param):
count=0
if event==cv2.EVENT_LBUTTONDOWN:
cv2.circle(img,(x,y),3,(50,100,150),-1)
points.append((x,y))
if len(points)>=2:
cv2.line(img,points[-1],points[-2],(150,100,50))
font=cv2.FONT_ITALIC
cv2.putText(img,str(x)+' '+str(y),(x,y),font,0.5,(255,100,100),1,cv2.LINE_AA)
cv2.imshow('image',img)
if event==cv2.EVENT_RBUTTONDOWN:
blue=img[x,y,0]
green=img[x,y,1]
red=img[x,y,2]
myimg=np.zeros((12,12,3),np.uint8)
i=count
myimg[i]=[blue,green,red]
print(len(myimg))
count=count+5
cv2.imshow('myimg',myimg)
img=cv2.imread('l.png',1)
cv2.imshow('image',img)
count=0
points=[]
cv2.setMouseCallback('image',click_event)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Basic Arithmetic operations on images
```
import numpy as np
import cv2
img=cv2.imread('l.png')
eye=img[254:279,323:353]
img[22:47,234:264]=eye
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# BITWISE OPERATIONS
```
import cv2
import numpy as np
img1=np.zeros((194,259,3),np.uint8)
img2=cv2.imread('HappyFish.jpg')
print(img1.shape)
print(img2.shape)
img1=cv2.rectangle(img1,(100,0),(200,100),(255,255,255),-1)
bitANd=cv2.bitwise_and(img1,img2)
cv2.imshow("i1",img1)
cv2.imshow("i2",img2)
cv2.imshow("bit",bitANd)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
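`cv2.bitwise_and` on `uint8` images is an elementwise AND of the pixel bytes, which can be verified with NumPy's `&` operator. A sketch without OpenCV (the array shapes here are illustrative):

```python
import numpy as np

a = np.zeros((4, 4), np.uint8)
a[1:3, 1:3] = 255               # white square acting as a mask
b = np.full((4, 4), 200, np.uint8)

masked = a & b                  # same result as cv2.bitwise_and(a, b)
print(masked[1, 1], masked[0, 0])
```

Since 255 is all ones in binary, AND-ing with a white region passes pixel values through unchanged, while black regions zero them out, which is why white shapes act as masks.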
# TRACKBARs
Trackbars are useful for changing a value of an image dynamically.
```
import numpy as np
import cv2
def nothing(x):
print(x)
img=cv2.imread('HappyFish.jpg')
#img=np.zeros((250,500,3),np.uint8)
cv2.namedWindow('image')
cv2.createTrackbar('B','image',0,255,nothing)
cv2.createTrackbar('G','image',0,255,nothing)
cv2.createTrackbar('R','image',0,255,nothing)
switch = '0: OFF\n 1: ON'
cv2.createTrackbar(switch,'image',0,1,nothing)
while(1):
cv2.imshow('image',img)
if cv2.waitKey(1) & 0xFF==ord('q'):
break
b = cv2.getTrackbarPos('B','image')
g = cv2.getTrackbarPos('G','image')
r = cv2.getTrackbarPos('R','image')
s = cv2.getTrackbarPos(switch,'image')
if s==0:
img[:]=0
else:
img[:]=[b,g,r]
#cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Object Detection and Object Tracking using HSV(Hue,Saturation and Value) Color Space
```
import cv2
import numpy as np
def nothing(x):
print(x)
cap=cv2.VideoCapture(0)
cv2.namedWindow("tracking")
cv2.createTrackbar("LH","tracking",0,255,nothing)
cv2.createTrackbar("LS","tracking",0,255,nothing)
cv2.createTrackbar("LV","tracking",0,255,nothing)
cv2.createTrackbar("UH","tracking",255,255,nothing)
cv2.createTrackbar("US","tracking",255,255,nothing)
cv2.createTrackbar("UV","tracking",255,255,nothing)
while True:
#frame=cv2.imread('detect_blob.png')
ret,frame = cap.read()
hsv=cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
l_h=cv2.getTrackbarPos("LH","tracking")
l_s=cv2.getTrackbarPos("LS","tracking")
l_v=cv2.getTrackbarPos("LV","tracking")
u_h=cv2.getTrackbarPos("UH","tracking")
u_s=cv2.getTrackbarPos("US","tracking")
u_v=cv2.getTrackbarPos("UV","tracking")
l_b=np.array([l_h,l_s,l_v])
u_b=np.array([u_h,u_s,u_v])
mask = cv2.inRange(hsv,l_b,u_b)
res = cv2.bitwise_and(frame,frame,mask=mask)
cv2.imshow('Frame',frame)
cv2.imshow('mask',mask)
cv2.imshow('res',res)
if cv2.waitKey(1) & 0xFF==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
# Simple image Thresholding
```
import cv2
import numpy as np
img = cv2.imread('left08.jpg',0)
ret,th1= cv2.threshold(img,50,255, cv2.THRESH_BINARY)
ret,th2= cv2.threshold(img,205,255, cv2.THRESH_BINARY_INV)
ret,th3= cv2.threshold(img,105,255, cv2.THRESH_TRUNC)
ret,th4= cv2.threshold(img,105,255, cv2.THRESH_TOZERO)
ret,th5= cv2.threshold(img,105,255, cv2.THRESH_TOZERO_INV)
cv2.imshow("image",img)
cv2.imshow("threshold1",th1)
cv2.imshow("threshold2",th2)
cv2.imshow("threshold3",th3)
cv2.imshow("threshold4",th4)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
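For `THRESH_BINARY`, the rule is just a comparison: pixels above the threshold become the max value, the rest become 0 (and `THRESH_BINARY_INV` swaps the two). A NumPy sketch of the same rule on a tiny array:

```python
import numpy as np

img = np.array([[10, 60], [120, 250]], np.uint8)
thresh, maxval = 50, 255

# THRESH_BINARY: 10 -> 0, all pixels above 50 -> 255
th_binary = np.where(img > thresh, maxval, 0).astype(np.uint8)
# THRESH_BINARY_INV: the opposite assignment
th_binary_inv = np.where(img > thresh, 0, maxval).astype(np.uint8)

print(th_binary)
print(th_binary_inv)
```

`THRESH_TRUNC` and `THRESH_TOZERO` follow similar elementwise rules (clamp to the threshold, or zero out below it).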
# Adaptive Thresholding
```
import cv2
import numpy as np
img = cv2.imread('left08.jpg',0)
ret,th1= cv2.threshold(img,12,155, cv2.THRESH_BINARY)
#th2=cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,11,21)
#th3=cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,21)
#th4=cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,11,10)
#th5=cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,10)
cv2.imshow("image",img)
cv2.imshow("threshold1",th1)
#cv2.imshow("threshold2",th2)
#cv2.imshow("threshold3",th3)
#cv2.imshow("th4",th4)
#cv2.imshow("th5",th5)
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2
import matplotlib.pyplot as plt
img=cv2.imread('HappyFish.jpg')
cv2.imshow('image', img)
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('left08.jpg',0)
ret,th1= cv2.threshold(img,50,255, cv2.THRESH_BINARY)
th1=cv2.cvtColor(th1,cv2.COLOR_GRAY2RGB)
ret,th2= cv2.threshold(img,205,255, cv2.THRESH_BINARY_INV)
th2=cv2.cvtColor(th2,cv2.COLOR_GRAY2RGB)
ret,th3= cv2.threshold(img,105,255, cv2.THRESH_TRUNC)
th3=cv2.cvtColor(th3,cv2.COLOR_GRAY2RGB)
ret,th4= cv2.threshold(img,105,255, cv2.THRESH_TOZERO)
th4=cv2.cvtColor(th4,cv2.COLOR_GRAY2RGB)
ret,th5= cv2.threshold(img,105,255, cv2.THRESH_TRIANGLE)
th5=cv2.cvtColor(th5,cv2.COLOR_GRAY2RGB)
titles=['Original image','BINARY','BINARY_INV','TRUNC','TOZERO','TRIANGLE']
images=[img,th1,th2,th3,th4,th5]
for i in range(6):
plt.subplot(2,3,i+1)
plt.imshow(images[i],'gray')
plt.title(titles[i])
cv2.imshow("image",img)
cv2.imshow("threshold1",th1)
cv2.imshow("threshold2",th2)
cv2.imshow("threshold3",th3)
cv2.imshow("threshold4",th4)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Color Filtering
```
import cv2
import numpy as np
cap=cv2.VideoCapture(0)
while True:
ret,frame=cap.read()
hsv=cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
lower_red=np.array([100,100,50])
upper_red=np.array([180,185,255])
mask=cv2.inRange(hsv,lower_red,upper_red)
res=cv2.bitwise_and(frame,frame,mask=mask)
cv2.imshow('frame',frame)
cv2.imshow('mask',mask)
cv2.imshow('res',res)
if cv2.waitKey(1) & 0xFF==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
# Morphological Transformations
Morphological transformations are simple operations based on the image shape.
They are normally performed on binary images.
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
img=cv2.imread('smarties.png',0)
_,mask=cv2.threshold(img,220,255,cv2.THRESH_BINARY_INV)
kernel=np.ones((5,5),np.uint8)
dilation=cv2.dilate(mask,kernel,iterations=2)
erosion=cv2.erode(mask, kernel,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernel)
closing = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernel)
mg = cv2.morphologyEx(mask,cv2.MORPH_GRADIENT,kernel)
mbh = cv2.morphologyEx(mask,cv2.MORPH_BLACKHAT,kernel)
e= cv2.morphologyEx(mask,cv2.MORPH_TOPHAT,kernel)  # MORPH_ELLIPSE is a kernel shape, not a morphologyEx operation; top-hat used instead
merge=cv2.bitwise_and(dilation,erosion)
title=['image','mask','dilation','erosion','opening','closing','mg','mbh','e']
image=[img,mask,dilation,erosion,opening,closing,mg,mbh,e]
for i in range(9):
plt.subplot(2,5,i+1),plt.imshow(image[i],'gray')
plt.title(title[i])
plt.xticks([]),plt.yticks([])
cv2.destroyAllWindows()
```
# Smoothing and Blurring images
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
img= cv2.imread('opencv-logo.png')
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
kernel=np.ones((5,5),np.float32)/25
dst = cv2.filter2D(img,-1,kernel)
titles=['image','dst']
images=[img,dst]
for i in range(2):
plt.subplot(1,2,i+1)
plt.imshow(images[i],'gray')
plt.xticks([]),plt.yticks([])
```
# Image Gradient
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
img= cv2.imread('opencv-logo.png')
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
lap = cv2.Laplacian(img, cv2.CV_64F)      # Laplacian gradient; Sobel is another option
dst = np.uint8(np.absolute(lap))
titles=['image','dst']
images=[img,dst]
for i in range(2):
plt.subplot(1,2,i+1)
plt.imshow(images[i],'gray')
plt.xticks([]),plt.yticks([])
```
# Contours
```
import cv2
import numpy as np
img=cv2.imread('detect_blob.png',-1)
img3=np.zeros(img.shape,np.uint8)
imggray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh=cv2.threshold(imggray,127,255,cv2.THRESH_BINARY)
#img2=cv2.bitwise_and(img,cv2.)
ret,thresh1=cv2.threshold(imggray,0,155,0)
contours,heirarchy=cv2.findContours(thresh,cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
print("number of Contours: "+str(len(contours)))
cv2.drawContours(img3,contours,-1,(0,255,150),1)
cv2.imshow("imagr",img3)
cv2.imshow('imge1',imggray)
#cv2.imshow("thres",thresh)
#cv2.imshow("thres1",thresh1)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Motion Detection
```
import cv2
import numpy as np
cap=cv2.VideoCapture(0)
ret,frame1=cap.read()
ret,frame2=cap.read()
while cap.isOpened():
diff = cv2.absdiff(frame1,frame2)
gray=cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
blur= cv2.GaussianBlur(gray,(5,5), 0)
_, thresh= cv2.threshold(blur,20, 255, cv2.THRESH_BINARY)
cv2.imshow("video",thresh)
frame1 = frame2
ret,frame2=cap.read()
if cv2.waitKey(40)==27:
break
cap.release()
cv2.destroyAllWindows()
```
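`cv2.absdiff` computes a saturating absolute difference on `uint8` frames. Plain `uint8` subtraction would wrap around at zero, so a NumPy equivalent needs a wider intermediate type (a small sketch on toy frames):

```python
import numpy as np

frame1 = np.array([[10, 200]], np.uint8)
frame2 = np.array([[50, 100]], np.uint8)

# widen to a signed type before subtracting, then take the absolute value
diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16)).astype(np.uint8)
print(diff)  # absolute differences 40 and 100
```

Naively computing `frame1 - frame2` on `uint8` would give 216 instead of 40 for the first pixel, which is why motion detection relies on `absdiff` rather than raw subtraction.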
```
import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext()
sqlContext = pyspark.sql.SQLContext(sc)
import re
import csv
import random
import ujson as json
from itertools import izip
from operator import add, itemgetter
from collections import Counter, defaultdict
from urlparse import urljoin
from boto.s3.connection import S3Connection
from boto.s3.connection import Key
from datetime import datetime
from time import time
s3baseuri = "s3n://"
def zip_sum(*x):
return [sum(i) for i in izip(*x)]
def trim_link_protocol(s):
idx = s.find('://')
return s if idx == -1 else s[idx+3:]
def get_timestamp():
return datetime.fromtimestamp(time()).strftime('%Y%m%d%H%M%S')
def write_file_to_s3(localfile, s3_bucket, s3_filename):
conn = S3Connection(key, secret)
bucket = conn.get_bucket(s3_bucket)
if len(list(bucket.list(s3_filename))) == 0:
k = Key(bucket)
k.key = s3_filename
k.set_contents_from_filename(localfile)
def get_mention_aligned_links(doc):
text = doc['full_text']
for m in doc['mentions']:
mention_start, mention_stop = m['start'], m['stop']
# filter mentions which occur outside of document full_text
if mention_start >= 0 and mention_stop > mention_start:
link_start = mention_stop+2
# naively detect whether this mention sits inside a markdown link anchor
if text[mention_start-1] == '[' and text[mention_stop:link_start] == '](':
link_stop = text.find(')', link_start)
if text[link_start:link_stop].startswith('http://'):
link_start += 7
elif text[link_start:link_stop].startswith('https://'):
link_start += 8
if link_stop != -1:
yield slice(link_start, link_stop), slice(mention_start,mention_stop)
def get_links(doc):
for m in re.finditer('(?<!\\\\)\[(([^]]|(\\\\]))+)(]\(\s*(http[s]?://)?)([^)]+)\s*\)', doc['full_text']):
parts = m.groups()
a, uri = parts[0], parts[5]
if uri and not a.startswith('www') and not a.startswith('http') and not 'secure.adnxs.com' in uri:
if 'digg.com' in uri:
continue # todo: add check for anchor diversity to filter this kind of thing
mention_start = m.start() + 1
mention_stop = mention_start + len(parts[0])
link_start = mention_stop + len(parts[3])
link_stop = link_start + len(parts[5])
yield slice(link_start, link_stop), slice(mention_start, mention_stop)
import base64
import urlparse
def resolve_hardcoded_redirects(l):
try:
if l.startswith('www.prweb.net'):
l = base64.b64decode(l[len('www.prweb.net/Redirect.aspx?id='):])
elif l.startswith('cts.businesswire.com/ct/') or l.startswith('ctt.marketwire.com/'):
l = urlparse.parse_qs(l)['url'][0]
except: pass
return trim_link_protocol(l)
anchor_filters = set([
'facebook',
'twitter',
'zacks investment research',
'reuters',
'linkedin',
'marketbeat'
])
if False:
def get_link_labels(doc):
text = doc['full_text']
aligned_spans = set()
for l, a in get_mention_aligned_links(doc):
aligned_spans.add((l.start, l.stop))
uri = text[l]
if not 'search' in uri and not text[a].lower().strip() in anchor_filters:
yield (1.0, uri)
for l, a in get_links(doc):
if (l.start, l.stop) not in aligned_spans:
yield (0.0, text[l])
def get_anchor_target_pairs(doc):
text = doc['full_text']
aligned_spans = set()
for l, a in get_mention_aligned_links(doc):
aligned_spans.add((l.start, l.stop))
yield (text[a], resolve_hardcoded_redirects(text[l]), True)
for l, a in get_links(doc):
if (l.start, l.stop) not in aligned_spans:
is_mention = False
if text[a].startswith('@') and not ' ' in text[a]:
is_mention = True # twitter NER = solved
yield (text[a], resolve_hardcoded_redirects(text[l]), is_mention)
```
URI Classification
```
def normalize_uri(uri):
uri = uri.lower()
if uri.startswith('//'):
uri = uri[2:]
if uri.startswith('www.'):
uri = uri[4:]
# trim uri protocol
idx = uri.find('://')
uri = uri[idx+3:] if idx != -1 else uri
# convert 'blah.com/users.php?id=bob' into 'blah.com/users.php/id=bob'
uri = re.sub('([a-z]+)\?', r"\1/", uri)
# convert 'blah.com/users#bob' into 'blah.com/users/bob'
uri = uri.replace('#', '/')
parts = uri.rstrip('/').split('/')
suffix = parts[-1].lower()
if len(parts) > 1 and suffix.startswith('index') or suffix.startswith('default'):
parts = parts[:-1]
if len(parts) > 1:
parts[-1] = '<eid>'
else:
parts.append('<nil>')
return '/'.join(parts)
#normalize_uri('vanityfair.com/index.aspx?rofl')
def get_uri_domain(uri):
return uri.split('/')[0]
def get_uri_features(uri):
features = []
uri_parts = re.sub('[0-9]', 'N', uri).split('/')
dom = uri_parts[0]
uri_parts[0] = "<domain>"
features += list('/'.join(p) for p in izip(uri_parts, uri_parts[1:]))
features += [dom+':'+f for f in features]
features += uri_parts
dom_parts = dom.split('.')
if len(dom_parts) >= 3:
features.append('SD:' + '.'.join(dom_parts[:-2]))
return features
from pyspark.ml.classification import NaiveBayes, LogisticRegression
from pyspark.ml.feature import HashingTF, StringIndexer, CountVectorizer
def balance_dataset(dataset, minor = 1.0, major = 0.0):
major_count = dataset.filter(dataset.label == major).count()
minor_count = dataset.filter(dataset.label == minor).count()
return dataset.filter(dataset.label == major)\
.sample(withReplacement=False, fraction=minor_count/float(major_count))\
.unionAll(dataset.filter(dataset.label == minor))
def stats_at_p(r, p):
tp = 1.0 if (r['label'] == 1.0 and r['probability'][1] >= p) else 0.0
fp = 1.0 if (r['label'] == 0.0 and r['probability'][1] >= p) else 0.0
fn = 1.0 if (r['label'] == 1.0 and r['probability'][1] < p) else 0.0
return p, (tp, fp, fn)
def evaluate(dataset, ps = None):
if ps == None:
ps = [0.5]
stats_by_p = dataset\
.flatMap(lambda r: (stats_at_p(r, p) for p in ps))\
.reduceByKey(lambda a, b: [x+y for x,y in zip(a, b)])\
.filter(lambda (p, (tp, fp, fn)): (tp+fp) > 0 and (tp+fn) > 0)\
.mapValues(lambda (tp, fp, fn): ((float(tp) / (tp+fp)), (float(tp) / (tp+fn))))\
.mapValues(lambda (p, r): (p, r, 2 * (p*r/(p+r))))\
.collect()
return stats_by_p
classifier = LogisticRegression(featuresCol="hashed_features")
REBUILD_CORPUS = False
raw_corpus_path = s3baseuri + 'abbrevi8-rnd/kb/live/20160301/articles'
link_corpus_path = s3baseuri + 'abbrevi8-rnd/web/links/seed/'
anchor_target_pairs = sc\
.textFile(raw_corpus_path)\
.map(json.loads)\
.flatMap(get_anchor_target_pairs)
if REBUILD_CORPUS:
train, test = [
split.flatMap(lambda (prefix, instances): instances)\
.map(lambda (uri, is_mention): (uri, 1.0 if is_mention else 0.0, get_uri_features(uri)))\
.repartition(128)\
.cache()
for split in
anchor_target_pairs\
.map(lambda (anchor, target, is_mention): (normalize_uri(target), is_mention))\
.groupByKey()\
.filter(lambda (k,vs): len(vs) >= 10)\
.mapValues(Counter)\
.mapValues(lambda cs: cs[True] > cs[False])\
.map(lambda (uri, is_mention): (get_uri_domain(uri), (uri, is_mention)))\
.groupByKey()\
.randomSplit([0.9, 0.1])
]
sqlContext\
.createDataFrame(train, ['uri','label','features'])\
.write.mode('overwrite')\
.save(link_corpus_path + 'train')
sqlContext\
.createDataFrame(test, ['uri','label','features'])\
.write.mode('overwrite')\
.save(link_corpus_path + 'test')
train = sqlContext.load(link_corpus_path + 'train')
test = sqlContext.load(link_corpus_path + 'test')
full = train.unionAll(test)
train.filter(train['label']==1.0).count(), train.filter(train['label']==0.0).count()
hashing_tf = HashingTF(inputCol="features", outputCol="hashed_features", numFeatures=500000)
train = hashing_tf.transform(train)
test = hashing_tf.transform(test)
```
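The `HashingTF` stage maps each string feature to an index in a fixed-size vector via a hash function (the "hashing trick"), so no vocabulary needs to be built. The idea can be sketched in plain Python; note this uses Python's built-in `hash` purely for illustration, not the hash Spark uses:

```python
def hashing_tf(features, num_features=16):
    # term-frequency vector via the hashing trick:
    # each feature string increments the bucket its hash falls into
    vec = [0] * num_features
    for f in features:
        vec[hash(f) % num_features] += 1
    return vec

# feature strings shaped like those produced by get_uri_features
v = hashing_tf(["<domain>/<eid>", "SD:blog", "<domain>/<eid>"])
print(sum(v))  # 3: one count per input feature
```

Collisions (two features hashing to the same bucket) are tolerated; with `numFeatures=500000` as above they are rare enough not to hurt the classifier.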
Dev Evaluation
```
dev_model = classifier.fit(balance_dataset(train).repartition(64))
train_prs = evaluate(dev_model.transform(train), ps=[p/40. for p in xrange(1, 40)])
dev_prs = evaluate(dev_model.transform(test), ps=[p/40. for p in xrange(1, 40)])
#test_prs = evaluate(dev_model.transform(hashing_tf.transform(labeled_uris)), ps=[p/20. for p in xrange(1, 20)])
print 'Evaluation @ Confidence >= 0.5'
print 'Train P/R=(%.2f, %.2f), F=%.3f' % dict(train_prs)[0.5]
print ' Dev P/R=(%.2f, %.2f), F=%.3f' % dict(dev_prs)[0.5]
c, (p_c, r_c, f_c) = sorted(dev_prs, key=lambda (c, (p,r,f)): f, reverse=True)[0]
print 'Confidence @ Optimal Dev F1 >= %.3f' % c
print 'Train P/R=(%.2f, %.2f), F=%.3f' % dict(train_prs)[c]
print ' Dev P/R=(%.2f, %.2f), F=%.3f' % dict(dev_prs)[c]
```
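The `stats_at_p`/`evaluate` pipeline above computes precision, recall, and F1 at each confidence cutoff. The same arithmetic on a small in-memory list, as a sketch:

```python
def pr_f1(labels, probs, p=0.5):
    # count true positives, false positives, false negatives at cutoff p
    tp = sum(1 for l, s in zip(labels, probs) if l == 1.0 and s >= p)
    fp = sum(1 for l, s in zip(labels, probs) if l == 0.0 and s >= p)
    fn = sum(1 for l, s in zip(labels, probs) if l == 1.0 and s < p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

print(pr_f1([1.0, 1.0, 0.0, 0.0], [0.9, 0.4, 0.8, 0.1]))  # (0.5, 0.5, 0.5)
```

Sweeping `p` over a grid, as `evaluate` does with its list of cutoffs, traces out the precision/recall trade-off used to pick the optimal dev F1 threshold.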
Full Model
```
model = classifier.fit(balance_dataset(hashing_tf.transform(full)).repartition(128))
uris = [normalize_uri(u) for u in [
'facebook.com/efoim',
'twitter.com/person',
'twitter.com/person/status/1231',
'linkedin.com/company/zcbvx',
'linkedin.com/in/zcbvx',
'en.wikipedia.org/wiki/someone',
'en.wikipedia.org/w/index.php?id=123',
'www.nytimes.com/topic/person/sheldon-silver',
]]
model.transform(
hashing_tf.transform(
sqlContext.createDataFrame(
[(u, get_uri_features(u)) for u in uris],
['uri','features'])))\
.map(lambda r: (r['uri'], r['probability'][1]))\
.collect()
cc_base_path = 's3n://aws-publicdatasets/'
cc_crawl_root = 'common-crawl/crawl-data/CC-MAIN-2016-07'
def parse_wats(lines):
def to_wat(record):
if record and len(record) >= 10 and record[9].startswith('{"Envelope":{'):
return json.loads('\n'.join(record[9:]))
return None
record = []
for line in lines:
if line == 'WARC/1.0':
w = to_wat(record)
if w: yield w
record = [line]
else:
record.append(line)
w = to_wat(record)
if w: yield w
def extract_links(record):
nil = {}
url = record\
.get('Envelope', nil)\
.get('WARC-Header-Metadata', nil)\
.get('WARC-Target-URI', None)
if url:
links = record\
.get('Envelope', nil)\
.get('Payload-Metadata', nil)\
.get('HTTP-Response-Metadata', nil)\
.get('HTML-Metadata', nil)\
.get('Links', [])
for link in links:
if 'text' in link and 'url' in link:
try:
yield (url, link['text'], urljoin(url, link['url']))
except:
pass
cc_wat_paths = ','.join(sc\
.textFile(cc_base_path + cc_crawl_root + '/wat.paths.gz')\
.map(lambda path: cc_base_path + path)\
.takeSample(False, 32))
anchor_stoplist = sc\
.textFile(cc_wat_paths)\
.mapPartitions(parse_wats)\
.flatMap(extract_links)\
.filter(lambda (s, a, t): t.startswith('http://') or t.startswith('https://'))\
.map(lambda (s, a, t): (a.lower(), 1))\
.reduceByKey(add)\
.sortBy(lambda (k,v): v, ascending=False)\
.map(lambda (k, v): k)\
.take(50)
anchor_stoplist = set(anchor_stoplist)
cc_base_path
cc_wat_paths = ','.join(sc\
.textFile(cc_base_path + cc_crawl_root + '/wat.paths.gz')\
.map(lambda path: 's3://aws-publicdatasets/' + path)\
.sample(False, 0.1)\
.collect())
cc_links = sqlContext.createDataFrame(
sc\
.textFile(cc_wat_paths)\
.mapPartitions(parse_wats, preservesPartitioning=True)\
.flatMap(extract_links)\
.filter(lambda (s, a, t): t.startswith('http://') or t.startswith('https://'))\
.filter(lambda (s, a, t): a.lower() not in anchor_stoplist)\
.repartition(4096)\
.map(lambda (s, a, t): (s, a, t, get_uri_features(normalize_uri(t))))
, ['source', 'anchor', 'target', 'features'])
predicted_links = model.transform(hashing_tf.transform(cc_links))
predicted_links\
.map(lambda r: (r['source'], r['anchor'], r['target'], r['probability'][1]))\
.filter(lambda (s,a,t,p): p >= 0.825)\
.map(json.dumps)\
    .saveAsTextFile(s3baseuri + 'abbrevi8-rnd/web/links/cc0.1/', 'org.apache.hadoop.io.compress.GzipCodec')
```
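The record-splitting logic inside `parse_wats` — accumulate lines until the next `WARC/1.0` sentinel, then flush the buffered record — can be isolated into a small, testable generator. A minimal sketch of just that splitting step (the JSON envelope check is omitted):

```python
def split_records(lines, sentinel='WARC/1.0'):
    """Yield lists of lines, starting a new record at each sentinel line."""
    record = []
    for line in lines:
        if line == sentinel:
            if record:
                yield record  # flush the previous record
            record = [line]
        else:
            record.append(line)
    if record:
        yield record  # flush the final record

lines = ['WARC/1.0', 'a', 'b', 'WARC/1.0', 'c']
print(list(split_records(lines)))
# [['WARC/1.0', 'a', 'b'], ['WARC/1.0', 'c']]
```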
```
import pandas as pd
import numpy as np
from tqdm import tqdm
import torch
import os
from sklearn.metrics import silhouette_score
import umap
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
# !pip install pymagnitude
from pymagnitude import Magnitude
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
data_path = os.path.join('gdrive', 'My Drive', 'Colab Notebooks', 'text_clustering', 'million-headlines', 'abcnews-date-text.csv')
# dataframe
df_all = pd.read_csv(data_path)
# get year from date and put that in a new column "year"
df_all['year'] = df_all['publish_date'].apply(lambda s: str(s)[:4])
print('shape of dataframe:', df_all.shape)
print('number of news headlines in different years:')
print(df_all.groupby('year')['year'].count())
# get news headlines from 2017 only
df = df_all.loc[df_all['year'] == "2017"]
print('shape of dataframe:', df.shape)
sentences = df['headline_text'].values
print('headlines:')
print(sentences)
# download glove vectors
!wget -P glove/ http://magnitude.plasticity.ai/glove/light/glove.6B.100d.magnitude
# Load Magnitude GloVe vectors
glove_vectors = Magnitude('glove/glove.6B.100d.magnitude')
sentences_tok = list(map(str.split, sentences))
# GLOVE embeddings
sentence_embs = glove_vectors.query(sentences_tok)
# compute sentence embedding as mean of word embeddings
sentence_embs = sentence_embs.mean(1)
print(sentence_embs.shape)
################
# kmeans utils #
################
def forgy(X, n_clusters):
_len = len(X)
indices = np.random.choice(_len, n_clusters, replace=False)
initial_state = X[indices]
return initial_state
def do_kmeans_clustering(X, n_clusters, distance='euclidean', tol=1e-4, device=torch.device('cpu')):
print(f'k-means clustering on {device}..')
if distance == 'euclidean':
pairwise_distance_function = pairwise_distance
elif distance == 'cosine':
pairwise_distance_function = pairwise_cosine
else:
raise NotImplementedError
X = X.float()
# transfer to device
X = X.to(device)
initial_state = forgy(X, n_clusters)
iteration = 0
tqdm_meter = tqdm()
while True:
dis = pairwise_distance_function(X, initial_state)
choice_cluster = torch.argmin(dis, dim=1)
initial_state_pre = initial_state.clone()
for index in range(n_clusters):
selected = torch.nonzero(choice_cluster == index).squeeze().to(device)
selected = torch.index_select(X, 0, selected)
initial_state[index] = selected.mean(dim=0)
center_shift = torch.sum(torch.sqrt(torch.sum((initial_state - initial_state_pre) ** 2, dim=1)))
# increment iteration
iteration = iteration + 1
# update tqdm meter
tqdm_meter.set_postfix(iteration=f'{iteration}', center_shift=f'{center_shift ** 2}', tol=f'{tol}')
if center_shift ** 2 < tol:
break
return choice_cluster.cpu(), initial_state.cpu()
def kmeans_predict(X, cluster_centers, distance='euclidean', device=torch.device('cpu')):
print(f'predicting on {device}..')
if distance == 'euclidean':
pairwise_distance_function = pairwise_distance
elif distance == 'cosine':
pairwise_distance_function = pairwise_cosine
else:
raise NotImplementedError
X = X.float()
# transfer to device
X = X.to(device)
dis = pairwise_distance_function(X, cluster_centers)
choice_cluster = torch.argmin(dis, dim=1)
return choice_cluster.cpu()
'''
calculation of pairwise distances between all input rows;
returns the full N*N matrix of (squared Euclidean) distances,
not a condensed one-dimensional form
'''
def pairwise_distance(data1, data2=None, device=torch.device('cpu')):
r'''
using broadcast mechanism to calculate pairwise euclidean distance of data
the input data is N*M matrix, where M is the dimension
we first expand the N*M matrix into N*1*M matrix A and 1*N*M matrix B
then a simple elementwise operation of A and B will handle the pairwise operation of points represented by data
'''
if data2 is None:
data2 = data1
data1, data2 = data1.to(device), data2.to(device)
# N*1*M
A = data1.unsqueeze(dim=1)
# 1*N*M
B = data2.unsqueeze(dim=0)
dis = (A - B) ** 2.0
# return N*N matrix for pairwise distance
dis = dis.sum(dim=-1).squeeze()
return dis
def pairwise_cosine(data1, data2=None, device=torch.device('cpu')):
r'''
using broadcast mechanism to calculate pairwise cosine distance of data
the input data is N*M matrix, where M is the dimension
we first expand the N*M matrix into N*1*M matrix A and 1*N*M matrix B
then a simple elementwise operation of A and B will handle the pairwise operation of points represented by data
'''
if data2 is None:
data2 = data1
data1, data2 = data1.to(device), data2.to(device)
# N*1*M
A = data1.unsqueeze(dim=1)
# 1*N*M
B = data2.unsqueeze(dim=0)
# normalize the points | [0.3, 0.4] -> [0.3/sqrt(0.09 + 0.16), 0.4/sqrt(0.09 + 0.16)] = [0.3/0.5, 0.4/0.5]
A_normalized = A / A.norm(dim=-1, keepdim=True)
B_normalized = B / B.norm(dim=-1, keepdim=True)
cosine = A_normalized * B_normalized
# return N*N matrix for pairwise distance
cosine_dis = 1 - cosine.sum(dim=-1).squeeze()
return cosine_dis
def group_pairwise(X, groups, device=torch.device('cpu'), fun=lambda r, c: pairwise_distance(r, c).cpu()):
group_dict = {}
for group_index_r, group_r in enumerate(groups):
for group_index_c, group_c in enumerate(groups):
R, C = X[group_r], X[group_c]
R, C = R.to(device), C.to(device)
group_dict[(group_index_r, group_index_c)] = fun(R, C)
return group_dict
# set device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# find good k (number of clusters)
sil_scores = []
for k in [2, 3, 4, 5, 6]:
# k-means clustering
cluster_ids_x, cluster_centers = do_kmeans_clustering(
X=torch.from_numpy(sentence_embs),
n_clusters=k,
distance='cosine',
device=device
)
sil_scores.append(silhouette_score(sentence_embs, cluster_ids_x.numpy(), metric='cosine'))
# plot silhouette scores
plt.figure(figsize=(6, 4), dpi=160)
plt.plot([2, 3, 4, 5, 6], sil_scores, color='xkcd:bluish purple')
plt.xticks([2, 3, 4, 5, 6])
plt.xlabel('number of clusters', fontsize=14)
plt.ylabel('silhouette score', fontsize=14)
plt.title('finding optimum number of clusters', fontsize=14)
plt.show()
# cluster using k with max silhoutte score
num_clusters = np.argmax(sil_scores) + 2 # 0th index means 2 clusters, ..
cluster_ids_x, cluster_centers = do_kmeans_clustering(
X=torch.from_numpy(sentence_embs),
n_clusters=num_clusters,
distance='cosine',
device=device
)
# UMAP
emb_umap = umap.UMAP(metric='cosine', verbose=True).fit_transform(sentence_embs)
# plot
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()]
plt.figure(figsize=(6, 6), dpi=160)
for cluster_id in range(num_clusters):
plt.scatter(
emb_umap[cluster_ids_x == cluster_id][:, 0],
emb_umap[cluster_ids_x == cluster_id][:, 1],
color=cols[cluster_id],
label=f'cluster {cluster_id}'
)
plt.title('UMAP Sentence Embeddings', fontsize=14)
plt.legend(fontsize=10)
plt.show()
np.random.choice(sentences[cluster_ids_x == 0], 20, replace=False)
np.random.choice(sentences[cluster_ids_x == 1], 20, replace=False)
```
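The broadcast-based `pairwise_cosine` above computes `1 - cos(θ)` for every row pair. A plain-Python cross-check of that distance for a single pair (standard library only, no torch):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```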
# Mining the Social Web
## Mining Web Pages
This Jupyter Notebook provides an interactive way to follow along with and explore the examples from the video series. The intent behind this notebook is to reinforce the concepts in a fun, convenient, and effective way.
```
# Downloading nltk packages used in this example
nltk.download('maxent_ne_chunker')
nltk.download('words')
ne_chunks = list(nltk.chunk.ne_chunk_sents(pos_tagged_tokens))
print(ne_chunks)
ne_chunks[0].pprint()
```
## Using boilerpipe to extract the text from a web page
Example blog post:
http://radar.oreilly.com/2010/07/louvre-industrial-age-henry-ford.html
```
# May also require the installation of Java runtime libraries
# pip install boilerpipe3
from boilerpipe.extract import Extractor
# If you're interested, learn more about how Boilerpipe works by reading
# Christian Kohlschütter's paper: http://www.l3s.de/~kohlschuetter/boilerplate/
URL='https://www.oreilly.com/ideas/ethics-in-data-project-design-its-about-planning'
extractor = Extractor(extractor='ArticleExtractor', url=URL)
print(extractor.getText())
```
## Using feedparser to extract the text (and other fields) from an RSS or Atom feed
```
import feedparser # pip install feedparser
FEED_URL='http://feeds.feedburner.com/oreilly/radar/atom'
fp = feedparser.parse(FEED_URL)
for e in fp.entries:
print(e.title)
print(e.links[0].href)
print(e.content[0].value)
```
## Harvesting blog data by parsing feeds
```
import os
import sys
import json
import feedparser
from bs4 import BeautifulSoup
from nltk import clean_html
FEED_URL = 'http://feeds.feedburner.com/oreilly/radar/atom'
def cleanHtml(html):
if html == "": return ""
return BeautifulSoup(html, 'html5lib').get_text()
fp = feedparser.parse(FEED_URL)
print("Fetched {0} entries from '{1}'".format(len(fp.entries), fp.feed.title))
blog_posts = []
for e in fp.entries:
blog_posts.append({'title': e.title, 'content'
: cleanHtml(e.content[0].value), 'link': e.links[0].href})
out_file = os.path.join('feed.json')
f = open(out_file, 'w+')
f.write(json.dumps(blog_posts, indent=1))
f.close()
print('Wrote output file to {0}'.format(f.name))
```
## Starting to write a web crawler
```
import httplib2
import re
from bs4 import BeautifulSoup
http = httplib2.Http()
status, response = http.request('http://www.nytimes.com')
soup = BeautifulSoup(response, 'html5lib')
links = []
for link in soup.findAll('a', attrs={'href': re.compile("^http(s?)://")}):
links.append(link.get('href'))
for link in links:
print(link)
```
```
Create an empty graph
Create an empty queue to keep track of nodes that need to be processed
Add the starting point to the graph as the root node
Add the root node to a queue for processing
Repeat until some maximum depth is reached or the queue is empty:
Remove a node from the queue
For each of the node's neighbors:
If the neighbor hasn't already been processed:
Add it to the queue
Add it to the graph
Create an edge in the graph that connects the node and its neighbor
```
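The crawler pseudocode above maps directly onto a breadth-first traversal. A minimal runnable sketch, where the hypothetical `get_neighbors` callable stands in for fetching a page and extracting its links:

```python
from collections import deque

def crawl(root, get_neighbors, max_depth=2):
    """Breadth-first crawl following the pseudocode above.

    `get_neighbors` is a stand-in for fetching a page and extracting
    its links; here it is any callable mapping a node to its neighbors.
    Returns the discovered graph as an adjacency dict.
    """
    graph = {root: []}              # root node added to the graph
    queue = deque([(root, 0)])      # root node queued for processing
    while queue:
        node, depth = queue.popleft()
        if depth >= max_depth:      # stop expanding past max depth
            continue
        for neighbor in get_neighbors(node):
            if neighbor not in graph:           # not processed yet
                graph[neighbor] = []
                queue.append((neighbor, depth + 1))
            graph[node].append(neighbor)        # edge node -> neighbor
    return graph

# Toy link structure instead of live HTTP requests
links = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
print(crawl('a', lambda u: links.get(u, [])))
```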
## Using NLTK to parse web page data
**Naive sentence detection based on periods**
```
text = "Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow."
print(text.split("."))
```
**More sophisticated sentence detection**
```
import nltk # Installation instructions: http://www.nltk.org/install.html
# Downloading nltk packages used in this example
nltk.download('punkt')
sentences = nltk.tokenize.sent_tokenize(text)
print(sentences)
harder_example = """My name is John Smith and my email address is j.smith@company.com.
Mostly people call Mr. Smith. But I actually have a Ph.D.!
Can you believe it? Neither can most people..."""
sentences = nltk.tokenize.sent_tokenize(harder_example)
print(sentences)
```
**Word tokenization**
```
text = "Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow."
sentences = nltk.tokenize.sent_tokenize(text)
tokens = [nltk.word_tokenize(s) for s in sentences]
print(tokens)
```
**Part of speech tagging for tokens**
```
# Downloading nltk packages used in this example
nltk.download('maxent_treebank_pos_tagger')
pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]
print(pos_tagged_tokens)
```
**Alphabetical list of part-of-speech tags used in the Penn Treebank Project**
See: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
| # | POS Tag | Meaning |
|:-:|:-------:|:--------|
| 1 | CC | Coordinating conjunction|
|2| CD |Cardinal number|
|3| DT |Determiner|
|4| EX |Existential there|
|5| FW |Foreign word|
|6| IN |Preposition or subordinating conjunction|
|7| JJ |Adjective|
|8| JJR |Adjective, comparative|
|9| JJS |Adjective, superlative|
|10| LS |List item marker|
|11| MD |Modal|
|12| NN |Noun, singular or mass|
|13| NNS |Noun, plural|
|14| NNP |Proper noun, singular|
|15| NNPS |Proper noun, plural|
|16| PDT |Predeterminer|
|17| POS |Possessive ending|
|18| PRP |Personal pronoun|
|19| PRP\$ |Possessive pronoun|
|20| RB |Adverb|
|21| RBR |Adverb, comparative|
|22| RBS |Adverb, superlative|
|23| RP |Particle|
|24| SYM |Symbol|
|25| TO |to|
|26| UH |Interjection|
|27| VB |Verb, base form|
|28| VBD |Verb, past tense|
|29| VBG |Verb, gerund or present participle|
|30| VBN |Verb, past participle|
|31| VBP |Verb, non-3rd person singular present|
|32| VBZ |Verb, 3rd person singular present|
|33| WDT |Wh-determiner|
|34| WP |Wh-pronoun|
|35| WP\$|Possessive wh-pronoun|
|36| WRB |Wh-adverb|
**Named entity extraction/chunking for tokens**
```
# Downloading nltk packages used in this example
nltk.download('maxent_ne_chunker')
nltk.download('words')
jim = "Jim bought 300 shares of Acme Corp. in 2006."
tokens = nltk.word_tokenize(jim)
jim_tagged_tokens = nltk.pos_tag(tokens)
ne_chunks = nltk.chunk.ne_chunk(jim_tagged_tokens)
ne_chunks
ne_chunks = [nltk.chunk.ne_chunk(ptt) for ptt in pos_tagged_tokens]
ne_chunks[0].pprint()
ne_chunks[1].pprint()
ne_chunks[0]
ne_chunks[1]
```
## Using NLTK’s NLP tools to process human language in blog data
```
import json
import nltk
BLOG_DATA = "resources/ch06-webpages/feed.json"
blog_data = json.loads(open(BLOG_DATA).read())
# Download nltk packages used in this example
nltk.download('stopwords')
# Customize your list of stopwords as needed. Here, we add common
# punctuation and contraction artifacts.
stop_words = nltk.corpus.stopwords.words('english') + [
'.',
',',
'--',
'\'s',
'?',
')',
'(',
':',
'\'',
'\'re',
'"',
'-',
'}',
'{',
u'—',
']',
'[',
'...'
]
for post in blog_data:
sentences = nltk.tokenize.sent_tokenize(post['content'])
words = [w.lower() for sentence in sentences for w in
nltk.tokenize.word_tokenize(sentence)]
fdist = nltk.FreqDist(words)
# Remove stopwords from fdist
for sw in stop_words:
del fdist[sw]
# Basic stats
num_words = sum([i[1] for i in fdist.items()])
num_unique_words = len(fdist.keys())
# Hapaxes are words that appear only once
num_hapaxes = len(fdist.hapaxes())
top_10_words_sans_stop_words = fdist.most_common(10)
print(post['title'])
print('\tNum Sentences:'.ljust(25), len(sentences))
print('\tNum Words:'.ljust(25), num_words)
print('\tNum Unique Words:'.ljust(25), num_unique_words)
print('\tNum Hapaxes:'.ljust(25), num_hapaxes)
print('\tTop 10 Most Frequent Words (sans stop words):\n\t\t', \
'\n\t\t'.join(['{0} ({1})'.format(w[0], w[1]) for w in top_10_words_sans_stop_words]))
print()
```
## A document summarization algorithm based principally upon sentence detection and frequency analysis within sentences
```
import json
import nltk
import numpy
BLOG_DATA = "feed.json"
blog_data = json.loads(open(BLOG_DATA).read())
N = 100 # Number of words to consider
CLUSTER_THRESHOLD = 5 # Distance between words to consider
TOP_SENTENCES = 5 # Number of sentences to return for a "top n" summary
stop_words = nltk.corpus.stopwords.words('english') + [
'.',
',',
'--',
'\'s',
'?',
')',
'(',
':',
'\'',
'\'re',
'"',
'-',
'}',
'{',
u'—',
'>',
'<',
'...'
]
# Approach taken from "The Automatic Creation of Literature Abstracts" by H.P. Luhn
def _score_sentences(sentences, important_words):
scores = []
sentence_idx = 0
for s in [nltk.tokenize.word_tokenize(s) for s in sentences]:
word_idx = []
# For each word in the word list...
for w in important_words:
try:
# Compute an index for where any important words occur in the sentence.
word_idx.append(s.index(w))
except ValueError: # w not in this particular sentence
pass
word_idx.sort()
# It is possible that some sentences may not contain any important words at all.
        if len(word_idx) == 0: continue
# Using the word index, compute clusters by using a max distance threshold
# for any two consecutive words.
clusters = []
cluster = [word_idx[0]]
i = 1
while i < len(word_idx):
if word_idx[i] - word_idx[i - 1] < CLUSTER_THRESHOLD:
cluster.append(word_idx[i])
else:
clusters.append(cluster[:])
cluster = [word_idx[i]]
i += 1
clusters.append(cluster)
# Score each cluster. The max score for any given cluster is the score
# for the sentence.
max_cluster_score = 0
for c in clusters:
significant_words_in_cluster = len(c)
# true clusters also contain insignificant words, so we get
# the total cluster length by checking the indices
total_words_in_cluster = c[-1] - c[0] + 1
score = 1.0 * significant_words_in_cluster**2 / total_words_in_cluster
if score > max_cluster_score:
max_cluster_score = score
scores.append((sentence_idx, max_cluster_score))
sentence_idx += 1
return scores
def summarize(txt):
sentences = [s for s in nltk.tokenize.sent_tokenize(txt)]
normalized_sentences = [s.lower() for s in sentences]
words = [w.lower() for sentence in normalized_sentences for w in
nltk.tokenize.word_tokenize(sentence)]
fdist = nltk.FreqDist(words)
# Remove stopwords from fdist
for sw in stop_words:
del fdist[sw]
top_n_words = [w[0] for w in fdist.most_common(N)]
scored_sentences = _score_sentences(normalized_sentences, top_n_words)
# Summarization Approach 1:
# Filter out nonsignificant sentences by using the average score plus a
# fraction of the std dev as a filter
avg = numpy.mean([s[1] for s in scored_sentences])
std = numpy.std([s[1] for s in scored_sentences])
mean_scored = [(sent_idx, score) for (sent_idx, score) in scored_sentences
if score > avg + 0.5 * std]
# Summarization Approach 2:
# Another approach would be to return only the top N ranked sentences
top_n_scored = sorted(scored_sentences, key=lambda s: s[1])[-TOP_SENTENCES:]
top_n_scored = sorted(top_n_scored, key=lambda s: s[0])
# Decorate the post object with summaries
return dict(top_n_summary=[sentences[idx] for (idx, score) in top_n_scored],
mean_scored_summary=[sentences[idx] for (idx, score) in mean_scored])
for post in blog_data:
post.update(summarize(post['content']))
print(post['title'])
print('=' * len(post['title']))
print()
print('Top N Summary')
print('-------------')
print(' '.join(post['top_n_summary']))
print()
print('Mean Scored Summary')
print('-------------------')
print(' '.join(post['mean_scored_summary']))
print()
```
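The cluster score used in `_score_sentences` (Luhn's significance measure) is worth seeing on a concrete case: a cluster whose significant words sit at token indices 2, 4, and 6 spans 5 tokens, so it scores 3² / 5 = 1.8. A standalone sketch of just that formula:

```python
def cluster_score(word_indices):
    """Luhn score: (significant words)^2 / total token span of the cluster."""
    significant = len(word_indices)
    span = word_indices[-1] - word_indices[0] + 1  # includes insignificant words
    return 1.0 * significant ** 2 / span

print(cluster_score([2, 4, 6]))  # 3**2 / 5 = 1.8
```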
## Visualizing document summarization results with HTML output
```
import os
from IPython.display import IFrame
from IPython.core.display import display
HTML_TEMPLATE = """<html>
<head>
<title>{0}</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body>{1}</body>
</html>"""
for post in blog_data:
# Uses previously defined summarize function.
post.update(summarize(post['content']))
# You could also store a version of the full post with key sentences marked up
# for analysis with simple string replacement...
for summary_type in ['top_n_summary', 'mean_scored_summary']:
post[summary_type + '_marked_up'] = '<p>{0}</p>'.format(post['content'])
for s in post[summary_type]:
post[summary_type + '_marked_up'] = \
post[summary_type + '_marked_up'].replace(s, '<strong>{0}</strong>'.format(s))
filename = post['title'].replace("?", "") + '.summary.' + summary_type + '.html'
f = open(os.path.join(filename), 'wb')
html = HTML_TEMPLATE.format(post['title'] + ' Summary', post[summary_type + '_marked_up'])
f.write(html.encode('utf-8'))
f.close()
print("Data written to", f.name)
# Display any of these files with an inline frame. This displays the
# last file processed by using the last value of f.name...
print()
print("Displaying {0}:".format(f.name))
display(IFrame('files/{0}'.format(f.name), '100%', '600px'))
```
## Extracting entities from a text with NLTK
```
import nltk
import json
BLOG_DATA = "feed.json"
blog_data = json.loads(open(BLOG_DATA).read())
for post in blog_data:
sentences = nltk.tokenize.sent_tokenize(post['content'])
tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]
pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]
# Flatten the list since we're not using sentence structure
# and sentences are guaranteed to be separated by a special
# POS tuple such as ('.', '.')
pos_tagged_tokens = [token for sent in pos_tagged_tokens for token in sent]
all_entity_chunks = []
previous_pos = None
current_entity_chunk = []
for (token, pos) in pos_tagged_tokens:
if pos == previous_pos and pos.startswith('NN'):
current_entity_chunk.append(token)
elif pos.startswith('NN'):
if current_entity_chunk != []:
# Note that current_entity_chunk could be a duplicate when appended,
# so frequency analysis again becomes a consideration
all_entity_chunks.append((' '.join(current_entity_chunk), pos))
current_entity_chunk = [token]
previous_pos = pos
# Store the chunks as an index for the document
# and account for frequency while we're at it...
post['entities'] = {}
for c in all_entity_chunks:
post['entities'][c] = post['entities'].get(c, 0) + 1
# For example, we could display just the title-cased entities
print(post['title'])
print('-' * len(post['title']))
proper_nouns = []
for (entity, pos) in post['entities']:
if entity.istitle():
print('\t{0} ({1})'.format(entity, post['entities'][(entity, pos)]))
print()
```
## Discovering interactions between entities
```
import nltk
import json
BLOG_DATA = "feed.json"
def extract_interactions(txt):
sentences = nltk.tokenize.sent_tokenize(txt)
tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]
pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]
entity_interactions = []
for sentence in pos_tagged_tokens:
all_entity_chunks = []
previous_pos = None
current_entity_chunk = []
for (token, pos) in sentence:
if pos == previous_pos and pos.startswith('NN'):
current_entity_chunk.append(token)
elif pos.startswith('NN'):
if current_entity_chunk != []:
all_entity_chunks.append((' '.join(current_entity_chunk),
pos))
current_entity_chunk = [token]
previous_pos = pos
if len(all_entity_chunks) > 1:
entity_interactions.append(all_entity_chunks)
else:
entity_interactions.append([])
assert len(entity_interactions) == len(sentences)
return dict(entity_interactions=entity_interactions,
sentences=sentences)
blog_data = json.loads(open(BLOG_DATA).read())
# Display selected interactions on a per-sentence basis
for post in blog_data:
post.update(extract_interactions(post['content']))
print(post['title'])
print('-' * len(post['title']))
for interactions in post['entity_interactions']:
print('; '.join([i[0] for i in interactions]))
print()
```
## Visualizing interactions between entities with HTML output
```
import os
import json
import nltk
from IPython.display import IFrame
from IPython.core.display import display
BLOG_DATA = "feed.json"
HTML_TEMPLATE = """<html>
<head>
<title>{0}</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body>{1}</body>
</html>"""
blog_data = json.loads(open(BLOG_DATA).read())
for post in blog_data:
post.update(extract_interactions(post['content']))
# Display output as markup with entities presented in bold text
post['markup'] = []
for sentence_idx in range(len(post['sentences'])):
s = post['sentences'][sentence_idx]
for (term, _) in post['entity_interactions'][sentence_idx]:
s = s.replace(term, '<strong>{0}</strong>'.format(term))
post['markup'] += [s]
filename = post['title'].replace("?", "") + '.entity_interactions.html'
f = open(os.path.join(filename), 'wb')
html = HTML_TEMPLATE.format(post['title'] + ' Interactions', ' '.join(post['markup']))
f.write(html.encode('utf-8'))
f.close()
print('Data written to', f.name)
# Display any of these files with an inline frame. This displays the
# last file processed by using the last value of f.name...
print('Displaying {0}:'.format(f.name))
display(IFrame('files/{0}'.format(f.name), '100%', '600px'))
```
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job of capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similariy between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u, v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(u ** 2))
# Compute the L2 norm of v (≈1 line)
norm_v = np.linalg.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot / (norm_u * norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
        if w in [word_a, word_b, word_c]:
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
    print('{} -> {} :: {} -> {}'.format(*triad, complete_analogy(*triad, word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
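The two measures in the bullets above can be sketched in a few lines; `cosine_similarity` here is a stand-in for the version defined earlier in the notebook, and the vectors are made up for illustration:

```python
import numpy as np

# Minimal sketch of the two similarity measures discussed above.
def cosine_similarity(u, v):
    # Angle-based: invariant to the length of the vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def l2_distance(u, v):
    # Length-based: sensitive to the scale of the vectors.
    return np.linalg.norm(u - v)

u = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(u, 2 * u))  # ~1.0: scaling does not change the angle
print(l2_distance(u, 2 * u))        # nonzero: scaling does change the distance
```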
Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ represents the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
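As the parenthetical above suggests, a better gender direction comes from averaging several difference vectors. A minimal sketch of that averaging, using tiny made-up 4-d embeddings as a stand-in for the notebook's 50-d GloVe `word_to_vec_map` (the values below are assumptions for illustration only):

```python
import numpy as np

# Toy stand-in for the notebook's GloVe word_to_vec_map (values invented).
word_to_vec_map = {
    'man':    np.array([ 0.6, 0.2, 0.1, 0.5]),
    'woman':  np.array([-0.4, 0.3, 0.1, 0.5]),
    'father': np.array([ 0.7, 0.1, 0.2, 0.4]),
    'mother': np.array([-0.5, 0.2, 0.2, 0.4]),
    'boy':    np.array([ 0.5, 0.4, 0.0, 0.3]),
    'girl':   np.array([-0.6, 0.5, 0.0, 0.3]),
}

# Average several female-minus-male difference vectors to estimate "gender".
pairs = [('woman', 'man'), ('mother', 'father'), ('girl', 'boy')]
g_avg = np.mean([word_to_vec_map[f] - word_to_vec_map[m] for f, m in pairs], axis=0)
print(g_avg.shape)  # (4,) -- one vector with the embedding dimensionality
```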
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of cosine similarity means versus a negative one.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
    print(w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
    print(w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender-specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50-dimensional space can be split into two parts: the bias direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49-dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
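A quick numeric check of formulas (2) and (3) on made-up vectors (not the GloVe data) shows that the debiased vector really is orthogonal to $g$:

```python
import numpy as np

# Toy vectors chosen for easy arithmetic (assumptions, not notebook data).
g = np.array([1.0, 2.0, 0.0])  # bias direction
e = np.array([3.0, 1.0, 4.0])  # embedding to neutralize

# Formula (2): projection of e onto g.
e_bias_component = (np.dot(e, g) / np.sum(g * g)) * g
# Formula (3): remove the bias component.
e_debiased = e - e_bias_component

print(np.dot(e_debiased, g))  # 0.0 -- no component left along g
```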
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection onto a vector-axis $v$ and its projection onto the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where: $u_B = \frac{u \cdot v}{||v||_2^2} v$ and $u_{\perp} = u - u_B$
-->
```
def neutralize(word, g, word_to_vec_map):
    """
    Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
    This function ensures that gender-neutral words are zero in the gender subspace.
    Arguments:
        word -- string indicating the word to debias
        g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
        word_to_vec_map -- dictionary mapping words to their corresponding vectors.
    Returns:
        e_debiased -- neutralized word vector representation of the input "word"
    """
    ### START CODE HERE ###
    # Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
    e = word_to_vec_map[word]
    # Compute e_biascomponent using the formula given above. (≈ 1 line)
    e_biascomponent = np.dot(e, g) / np.sum(g * g) * g
    # Neutralize e by subtracting e_biascomponent from it
    # e_debiased should be equal to its orthogonal projection. (≈ 1 line)
    e_debiased = e - e_biascomponent
    ### END CODE HERE ###
    return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:**
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:**
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralization to "babysit" we can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words is equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized embeddings are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||_2} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||_2} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
    """
    Debias gender-specific words by following the equalize method described in the figure above.
    Arguments:
        pair -- pair of strings of gender-specific words to debias, e.g. ("actress", "actor")
        bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
        word_to_vec_map -- dictionary mapping words to their corresponding vectors
    Returns
        e_1 -- word vector corresponding to the first word
        e_2 -- word vector corresponding to the second word
    """
    ### START CODE HERE ###
    # Step 1: Select word vector representation of each word. Use word_to_vec_map. (≈ 2 lines)
    w1, w2 = pair
    e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
    # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
    mu = (e_w1 + e_w2) / 2
    # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
    mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
    mu_orth = mu - mu_B
    # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈ 2 lines)
    e_w1B = np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
    e_w2B = np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
    # Step 5: Adjust the bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈ 2 lines)
    corrected_e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
    corrected_e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)
    # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈ 2 lines)
    e1 = corrected_e_w1B + mu_orth
    e2 = corrected_e_w2B + mu_orth
    ### END CODE HERE ###
    return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
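As a self-contained sanity check of equations (4)-(12), the same arithmetic can be run on toy 2-d vectors (assumed values, not the GloVe data) to confirm that the equalized pair ends up symmetric about the bias axis: equal-magnitude, opposite-sign cosine similarity:

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

bias_axis = np.array([1.0, 0.0])
e_w1 = np.array([-0.2, 0.6])  # toy stand-in for e.g. "actress"
e_w2 = np.array([0.5, 0.4])   # toy stand-in for e.g. "actor"

mu = (e_w1 + e_w2) / 2                                              # (4)
mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis ** 2) * bias_axis   # (5)
mu_orth = mu - mu_B                                                 # (6)
e_w1B = np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2) * bias_axis  # (7)
e_w2B = np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2) * bias_axis  # (8)
scale = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2)))
corrected_e_w1B = scale * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)  # (9)
corrected_e_w2B = scale * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)  # (10)
e1 = corrected_e_w1B + mu_orth                                      # (11)
e2 = corrected_e_w2B + mu_orth                                      # (12)

# Same magnitude, opposite sign, as the expected output above illustrates.
print(cosine(e1, bias_axis), cosine(e2, bias_axis))
```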
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
```
import numpy as np
import matplotlib.pyplot as plt
```
## MPI PLOTS FOR CASE STUDY 1
```
def MPILOSS(X, Y, Y1, Y2, Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.plot(X, Y3, linewidth=3)
    plt.grid()
    plt.title('MPI Log Loss')
    plt.xlabel('<-----Number of Epochs ---->')
    plt.ylabel('<----Log Loss ----->')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
X = range(1,51)
Y = [0.5752236684370319, 0.5747826888869936, 0.5747565455395952, 0.5747600533784686, 0.5747640867582152, 0.5747669428940565, 0.5747692791005982, 0.5747715339634348, 0.5747738794656616, 0.5747763559670855, 0.5747789512746269, 0.574781636158388, 0.5747843787913223, 0.5747871500989727, 0.5747899253749825, 0.5747926844261079, 0.5747954111815087, 0.574798093145454, 0.5748007208426399, 0.5748032873106945, 0.574805787655865, 0.5748082186724898, 0.5748105785211305, 0.5748128664584575, 0.5748150826118612, 0.5748172277922536, 0.5748193033392387, 0.574821310993579, 0.5748232527925895, 0.5748251309847315, 0.5748269479602393, 0.5748287061951041, 0.574830408206161, 0.5748320565153787, 0.5748336536217643, 0.5748352019795485, 0.5748367039815367, 0.5748381619466977, 0.5748395781112091, 0.5748409546223204, 0.5748422935344875, 0.5748435968073395, 0.5748448663051052, 0.5748461037971914, 0.5748473109596628, 0.5748484893774136, 0.5748496405468604, 0.5748507658790166, 0.5748518667028325, 0.5748529442687101]
Y1=[0.6339635512620139, 0.6304404969428715, 0.6287863553761855, 0.6279738852991021, 0.6276156820950709, 0.6275030043187826, 0.6275186603871724, 0.6275962587455842, 0.6276985594206315, 0.6278052613572751, 0.62790590302034, 0.6279956798869201, 0.6280729698470366, 0.6281378719120901, 0.6281913501291941, 0.6282347398754922, 0.6282694708969774, 0.6282969192973948, 0.6283183354045916, 0.6283348154259755, 0.6283472975522393, 0.6283565709388462, 0.6283632907373439, 0.6283679952390973, 0.6283711229471666, 0.6283730284451766, 0.6283739965544023, 0.6283742546284109, 0.628373983031675, 0.628373323948644, 0.628372388712509, 0.6283712638527655, 0.6283700160526373, 0.6283686961904849, 0.6283673426188101, 0.6283659838134242, 0.6283646405054056, 0.628363327390435, 0.6283620544942408, 0.6283608282592517, 0.6283596524059716, 0.6283585286128809, 0.6283574570505828, 0.6283564367992205, 0.6283554661726903, 0.6283545429686687, 0.6283536646597799, 0.6283528285382431, 0.6283520318238921, 0.6283512717434965]
Y2= [4.3169806574173295, 0.7657766602060129, 0.769690724777485, 0.7729097812582286, 0.7733627491222795, 0.7731758464238829, 0.7728184512237485, 0.7724022710038205, 0.7719646329897042, 0.7715229563186605, 0.7710868291225863, 0.7706617506651021, 0.7702508087535918, 0.769855612524523, 0.769476856737671, 0.769114673341886, 0.768768852829496, 0.7684389840059467, 0.7681245420849621, 0.767824943892049, 0.7675395821106334, 0.7672678462023657, 0.7670091348935252, 0.7667628633610751, 0.7665284671193087, 0.7663054038783407, 0.7660931541767686, 0.7658912212905127, 0.7656991307288572, 0.7655164295078882, 0.7653426853157339, 0.7651774856370379, 0.7650204368752074, 0.764871163493498, 0.7647293071856621, 0.7645945260808633, 0.7644664939842131, 0.7643448996525244, 0.764229446103937, 0.7641198499598332, 0.7640158408173039, 0.763917160650533, 0.763823563239572, 0.763734813625115, 0.7636506875880524, 0.7635709711526054, 0.7634954601120466, 0.7634239595760131, 0.7633562835384984, 0.7632922544656726]
Y3= [4.117093141691398, 0.6759358131235735, 0.6641330238240843, 0.6610314407590852, 0.6583841085657133, 0.6563961526004317, 0.6549515972049651, 0.6538930082478842, 0.6530996514020225, 0.6524886462115117, 0.6520044887461877, 0.6516099701960769, 0.6512798871332653, 0.6509969285109589, 0.6507490224403096, 0.6505276204636077, 0.6503265777259422, 0.6501414130606888, 0.6499688134986213, 0.6498062978984815, 0.6496519855014967, 0.6495044345712297, 0.6493625284419923, 0.649225394029864, 0.649092342836467, 0.6489628277185545, 0.6488364108349197, 0.6487127396078527, 0.648591528496725, 0.648472545034421, 0.6483555990256269, 0.6482405341164672, 0.6481272211618898, 0.6480155529701483, 0.6479054401125224, 0.6477968075645875, 0.6476895920020299, 0.6475837396155378, 0.647479204340059, 0.6473759464167363, 0.6472739312232124, 0.6471731283213282, 0.6470735106814633, 0.6469750540507997, 0.6468777364390526, 0.6467815377002195, 0.6466864391928595, 0.6465924235046332, 0.6464994742294108, 0.6464075757873561]
def MPILOSS_2Workers(X, Y, Y1, Y2):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.grid()
    plt.title('Comparing Log Loss with 2 Workers for each Framework')
    plt.xlabel('<-----Number of Epochs ---->')
    plt.ylabel('<----Log Loss ----->')
    plt.legend(['MPI', 'DTF', 'SPARK'], prop={'size': 18})
    plt.show()
x = range(1,51)
Y=[0.6339635512620139, 0.6304404969428715, 0.6287863553761855, 0.6279738852991021, 0.6276156820950709, 0.6275030043187826, 0.6275186603871724, 0.6275962587455842, 0.6276985594206315, 0.6278052613572751, 0.62790590302034, 0.6279956798869201, 0.6280729698470366, 0.6281378719120901, 0.6281913501291941, 0.6282347398754922, 0.6282694708969774, 0.6282969192973948, 0.6283183354045916, 0.6283348154259755, 0.6283472975522393, 0.6283565709388462, 0.6283632907373439, 0.6283679952390973, 0.6283711229471666, 0.6283730284451766, 0.6283739965544023, 0.6283742546284109, 0.628373983031675, 0.628373323948644, 0.628372388712509, 0.6283712638527655, 0.6283700160526373, 0.6283686961904849, 0.6283673426188101, 0.6283659838134242, 0.6283646405054056, 0.628363327390435, 0.6283620544942408, 0.6283608282592517, 0.6283596524059716, 0.6283585286128809, 0.6283574570505828, 0.6283564367992205, 0.6283554661726903, 0.6283545429686687, 0.6283536646597799, 0.6283528285382431, 0.6283520318238921, 0.6283512717434965]
Y1= [0.715948034274896,0.70289717140913040,0.6905844020679819,0.6717724559631897,0.6527273647994417,0.6474447804910045,0.6464336144796086,0.6460104962764943,0.6457092440153327,0.6454408117073506,0.645183126813747,0.644929985585447,0.6446795702746679,0.6444313744231123,0.6441852977984192,0.6439413671299447,0.6436996481093187,0.643460217301526,0.6432231529318845,0.6429885316349159,0.6427564271345373,0.642526909609488,0.6423000453465729,0.6420758965311065,0.641854521093217,0.6416359725517277,0.6414202998121223,0.6412075468906367,0.64099775255328,0.6407909498749865,0.6405871657383971,0.6403864203026565,0.6401887264793377,0.639994089454955,0.6398025062977499,0.6396139656811805,0.6394284477486862,0.6392459241348711,0.6390663581483045,0.6388897051116094,0.6387159128462027,0.6385449222824692,0.638376668171623,0.638211079873034,0.6380480821902578,0.6378875962300637,0.6377295402610623, 0.6375738305516471,0.6374203821705329,0.6372691097368535]
Y2 = [0.64478, 0.355754, 0.293243, 0.266243, 0.247183, 0.231421, 0.203099,
      0.178408, 0.165613, 0.150086, 0.144821, 0.139987, 0.135579, 0.132019,
      0.128864, 0.127886, 0.126838, 0.126100, 0.124865, 0.123924, 0.122745,
      0.122014, 0.121243, 0.120626, 0.119942, 0.119472, 0.119122, 0.118633,
      0.118291, 0.118060, 0.117807, 0.117551, 0.117263, 0.117040, 0.116894,
      0.116704, 0.116533, 0.116394, 0.116199, 0.116074, 0.115979, 0.115858,
      0.115757, 0.115682, 0.115613, 0.115471, 0.115361, 0.115289, 0.115224,
      0.115166]
# NOTE: this redefinition shadows the loss-plotting MPILOSS_2Workers above;
# it is used for the accuracy comparison, so the y-axis label is set accordingly.
def MPILOSS_2Workers(X, Y, Y1, Y2):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.grid()
    plt.title('Comparing Accuracy with 2 Workers for each Framework')
    plt.xlabel('<-----Number of Epochs ---->')
    plt.ylabel('<----Accuracy ----->')
    plt.legend(['MPI', 'DTF', 'SPARK'], prop={'size': 18})
    plt.show()
y1 =[0.6619549273236431, 0.6606214161888252, 0.6601546872916388, 0.6598213095079344, 0.6598879850646753, 0.6598213095079344, 0.6602213628483797, 0.660088011734898, 0.6602213628483797, 0.6602880384051206, 0.6603547139618615, 0.660088011734898, 0.6602213628483797, 0.6600213361781571, 0.6598879850646753, 0.6598879850646753, 0.6598213095079344, 0.6599546606214162, 0.6598879850646753, 0.6599546606214162, 0.6597546339511935, 0.6598213095079344, 0.6598213095079344, 0.6598879850646753, 0.6598879850646753, 0.6598879850646753, 0.6598879850646753, 0.6598879850646753, 0.6599546606214162, 0.6599546606214162, 0.6599546606214162, 0.6599546606214162, 0.6599546606214162, 0.6599546606214162, 0.6598879850646753, 0.6598879850646753, 0.6598879850646753, 0.6598213095079344, 0.6598213095079344, 0.6598213095079344, 0.6598213095079344, 0.6598213095079344, 0.6597546339511935, 0.6596879583944526, 0.6596879583944526, 0.6596879583944526, 0.6596879583944526, 0.6596879583944526, 0.6596879583944526, 0.6596879583944526]
y2 =[0.4188,0.518,0.5868,0.6158,0.63046,0.63046,0.635,0.638,0.6389,0.6399,0.6404,0.6405,0.6404,0.64040,0.6404,0.6406,0.6406,0.6408,0.6406,0.6408,0.6410,0.6411,0.6413,0.64132,0.64153,0.6412,0.64155,0.6423,0.64331,0.64345,0.6467,0.6478,0.6489, 0.6511, 0.6514,0.6518, 0.6519,0.65213,0.6532,0.6538,0.6541,0.6543,0.65464,0.65493,0.6551,0.655127, 0.65521,0.65533, 0.65535, 0.65537]
y3 =[0.7129763968187947, 0.7129763968187949, 0.7129763968187969, 0.7129763968187939, 0.7129763968187944, 0.7129763968187947, 0.7129763968187963, 0.7129763968187965, 0.7129763968187962, 0.7129763968187925, 0.7129763968187942, 0.7129763968187948, 0.7129763968187958, 0.712976396818794, 0.7129763968187962, 0.7129763968187981, 0.7129763968187963, 0.7129763968187969, 0.7129763968187957, 0.712976396818795, 0.7129763968187955, 0.7129763968187922, 0.7129763968187969, 0.7129763968187948, 0.712976396818795, 0.7129763968187948, 0.7129763968187925, 0.7129763968187943, 0.7129763968187948, 0.712976396818795, 0.7129763968187983, 0.7129763968187953, 0.7129763968187978, 0.7129763968187971, 0.7129763968187947, 0.7129763968187985, 0.7129763968187945, 0.7129763968187984, 0.7129763968187951, 0.712976396818794, 0.7129763968187937, 0.7129763968187957, 0.7129763968187957, 0.7129763968187934, 0.7129763968187969, 0.712976396818796, 0.7129763968187952, 0.7129763968187934, 0.7129763968187987, 0.7129763968187948]
len(y3)
x = range (1,51)
MPILOSS_2Workers(x,y1,y2,y3)
len(y2)
MPILOSS_2Workers(x, Y, Y1, Y2)  # plots the loss data, but note the helper was redefined above with an accuracy title
MPILOSS(X,Y,Y1,Y2,Y3)
```
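The plotting helpers above are near-duplicates that differ only in title, labels, and legend, which is also what allowed `MPILOSS_2Workers` to be accidentally redefined. As a sketch (not part of the original notebook), they could be collapsed into a single parameterized function; the headless `Agg` backend and toy data below are assumptions for a self-contained example:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_series(x, series, labels, title, xlabel, ylabel):
    """Plot several curves over the same x-axis and return the figure."""
    fig = plt.figure(figsize=(10, 5))
    for y in series:
        plt.plot(x, y, linewidth=3)
    plt.grid()
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.legend(labels, prop={'size': 18})
    return fig

# Example call with toy data standing in for the notebook's loss lists.
x = range(1, 6)
fig = plot_series(
    x,
    [[i * 0.1 for i in x], [i * 0.2 for i in x]],
    ['1 Worker', '2 Workers'],
    'MPI Log Loss', 'Number of Epochs', 'Log Loss',
)
print(len(fig.axes[0].lines))  # 2 -- one Line2D per series
```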
## Time Taken Vs Loss
```
def MPITIME(X, X1, X2, X3, Y, Y1, Y2, Y3):
    plt.figure(figsize=(10, 5))
    # Each worker count has its own time axis, so pair X with Y, X1 with Y1, etc.
    plt.plot(X, Y, linewidth=3)
    plt.plot(X1, Y1, linewidth=3)
    plt.plot(X2, Y2, linewidth=3)
    plt.plot(X3, Y3, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Loss')
    plt.xlabel('<-----Time Taken (Seconds) ---->')
    plt.ylabel('<----Log Loss ----->')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
X = [ 84.49820589 , 164.47475089 , 252.57880662 ,332.79226585 , 414.24528569
, 495.48336532 , 576.92765014 ,656.86370173 ,739.25705991 , 818.6356752
, 899.43099506 ,979.77890556 ,1059.83193274 ,1140.55341788 ,1221.16421704
,1301.57192919 ,1381.55797474 ,1462.8466231 ,1542.80778309 ,1623.07644973
,1703.95276138 ,1783.9787478 ,1877.87199879 ,1959.04781022 ,2038.93317987
,2120.90320994 ,2204.96414807 ,2285.77911995 ,2364.96363536 ,2445.87305488
,2525.83184811 ,2606.97834018 ,2686.7101263 ,2766.99245824 ,2847.33861059
,2927.87637653 ,3008.97332209 ,3089.85983705 ,3169.59447478 ,3248.61982474
,3329.64010794 ,3409.53455712 ,3489.92070007 ,3570.4199765 ,3650.18577672
,3729.27832163 ,3809.19140942 ,3888.32132533 ,3972.68657469 ,4072.03480607]
X1 = [ 39.26806437 ,72.57745821 , 106.05096296 , 139.00448176 , 172.29533115
,205.57932461 ,238.56777642 ,272.72148373 ,306.97009107 ,340.38344931
,373.43555898 ,406.51144664 ,440.63416142 ,474.60564955 ,508.90255478
,542.17474864 ,575.40848085 ,609.02158613 ,643.00820186 ,676.24514888
,709.44564522 ,743.47336864 ,777.49872282 ,811.2797804 ,844.40053887
,877.45420111 ,911.26939514 ,945.73890616 ,980.09765186 ,1013.67860485
,1047.14104555 ,1080.20068922 ,1114.26367958 ,1147.5959727 ,1180.67114349
,1214.17691088 ,1249.57923752 ,1282.92121595 ,1316.03580045 ,1349.34485825
,1383.64024229 ,1417.82322523 ,1450.86176798 ,1484.00772195 ,1517.43152646
,1551.42711311 ,1585.98403106 ,1619.55400393 ,1653.27497918 ,1686.40623821]
X2 =[ 21.52811306 ,38.23503125 , 54.76111853 ,71.63844112 , 88.13312931
,104.95305264 ,121.63066064 ,138.29328139 ,154.91264137 ,171.76515176
,188.66564091 ,205.63517836 ,222.38042774 ,238.85313769 ,255.54502948
,272.10224998 ,288.88213652 ,305.7769563 ,322.77716893 ,339.34941018
,356.19008146 ,372.9176443 ,389.60832767 ,406.48996101 ,423.47337075
,440.48152254 ,457.20850426 ,473.89012363 ,490.52530129 ,507.22085219
,523.93095134 ,541.1336942 ,558.17591968 ,575.27343437 ,592.28010516
,608.99618434 ,625.73063412 ,642.3602348 ,659.11609194 ,675.81944694
,693.08620451 ,710.07428502 ,727.20617417 ,743.93632063 ,760.58642419
,777.35678163 ,793.94443998 ,810.66035554 ,827.57658372 ,844.72874445]
X3 = [13.97246734947475522,21.811070889318216,31.64744130874169,40.74003029681262,49.815186276729946,58.967551522560825,68.04791670633495,77.01047306327018,86.02742898126962,95.27113142652524,104.45450471130971,113.5906742803636,122.91588332906758,132.12772087722624,140.94381443626116,149.88122359343106,158.87287187114634,167.76169047180883,176.78346066315862,185.55280467653938,194.6135262991993,203.5145031174834,212.46495168095862,221.43934643233297,230.34223443215524,239.06863715830696,248.05189950993372,257.06174738411573,265.9514390632794,274.8598084869154,283.6458457189874,292.5765608041329,301.4662027643717,310.3892163520977,319.5273972457835,328.6362838168061,337.3893189635055,346.40289787308575,355.3349454960626,364.30210631952286,373.23001034576737,382.0595156741583,390.98750644406755,400.0585248217358,409.0105047987563,418.08025939881554,427.1664674383319,436.18917065299684,445.3574709920031,454.49862514818597]
Y = [0.5752236684370319, 0.5747826888869936, 0.5747565455395952, 0.5747600533784686, 0.5747640867582152, 0.5747669428940565, 0.5747692791005982, 0.5747715339634348, 0.5747738794656616, 0.5747763559670855, 0.5747789512746269, 0.574781636158388, 0.5747843787913223, 0.5747871500989727, 0.5747899253749825, 0.5747926844261079, 0.5747954111815087, 0.574798093145454, 0.5748007208426399, 0.5748032873106945, 0.574805787655865, 0.5748082186724898, 0.5748105785211305, 0.5748128664584575, 0.5748150826118612, 0.5748172277922536, 0.5748193033392387, 0.574821310993579, 0.5748232527925895, 0.5748251309847315, 0.5748269479602393, 0.5748287061951041, 0.574830408206161, 0.5748320565153787, 0.5748336536217643, 0.5748352019795485, 0.5748367039815367, 0.5748381619466977, 0.5748395781112091, 0.5748409546223204, 0.5748422935344875, 0.5748435968073395, 0.5748448663051052, 0.5748461037971914, 0.5748473109596628, 0.5748484893774136, 0.5748496405468604, 0.5748507658790166, 0.5748518667028325, 0.5748529442687101]
Y1=[0.6339635512620139, 0.6304404969428715, 0.6287863553761855, 0.6279738852991021, 0.6276156820950709, 0.6275030043187826, 0.6275186603871724, 0.6275962587455842, 0.6276985594206315, 0.6278052613572751, 0.62790590302034, 0.6279956798869201, 0.6280729698470366, 0.6281378719120901, 0.6281913501291941, 0.6282347398754922, 0.6282694708969774, 0.6282969192973948, 0.6283183354045916, 0.6283348154259755, 0.6283472975522393, 0.6283565709388462, 0.6283632907373439, 0.6283679952390973, 0.6283711229471666, 0.6283730284451766, 0.6283739965544023, 0.6283742546284109, 0.628373983031675, 0.628373323948644, 0.628372388712509, 0.6283712638527655, 0.6283700160526373, 0.6283686961904849, 0.6283673426188101, 0.6283659838134242, 0.6283646405054056, 0.628363327390435, 0.6283620544942408, 0.6283608282592517, 0.6283596524059716, 0.6283585286128809, 0.6283574570505828, 0.6283564367992205, 0.6283554661726903, 0.6283545429686687, 0.6283536646597799, 0.6283528285382431, 0.6283520318238921, 0.6283512717434965]
Y2= [4.3169806574173295, 0.7657766602060129, 0.769690724777485, 0.7729097812582286, 0.7733627491222795, 0.7731758464238829, 0.7728184512237485, 0.7724022710038205, 0.7719646329897042, 0.7715229563186605, 0.7710868291225863, 0.7706617506651021, 0.7702508087535918, 0.769855612524523, 0.769476856737671, 0.769114673341886, 0.768768852829496, 0.7684389840059467, 0.7681245420849621, 0.767824943892049, 0.7675395821106334, 0.7672678462023657, 0.7670091348935252, 0.7667628633610751, 0.7665284671193087, 0.7663054038783407, 0.7660931541767686, 0.7658912212905127, 0.7656991307288572, 0.7655164295078882, 0.7653426853157339, 0.7651774856370379, 0.7650204368752074, 0.764871163493498, 0.7647293071856621, 0.7645945260808633, 0.7644664939842131, 0.7643448996525244, 0.764229446103937, 0.7641198499598332, 0.7640158408173039, 0.763917160650533, 0.763823563239572, 0.763734813625115, 0.7636506875880524, 0.7635709711526054, 0.7634954601120466, 0.7634239595760131, 0.7633562835384984, 0.7632922544656726]
Y3= [4.117093141691398, 0.6759358131235735, 0.6641330238240843, 0.6610314407590852, 0.6583841085657133, 0.6563961526004317, 0.6549515972049651, 0.6538930082478842, 0.6530996514020225, 0.6524886462115117, 0.6520044887461877, 0.6516099701960769, 0.6512798871332653, 0.6509969285109589, 0.6507490224403096, 0.6505276204636077, 0.6503265777259422, 0.6501414130606888, 0.6499688134986213, 0.6498062978984815, 0.6496519855014967, 0.6495044345712297, 0.6493625284419923, 0.649225394029864, 0.649092342836467, 0.6489628277185545, 0.6488364108349197, 0.6487127396078527, 0.648591528496725, 0.648472545034421, 0.6483555990256269, 0.6482405341164672, 0.6481272211618898, 0.6480155529701483, 0.6479054401125224, 0.6477968075645875, 0.6476895920020299, 0.6475837396155378, 0.647479204340059, 0.6473759464167363, 0.6472739312232124, 0.6471731283213282, 0.6470735106814633, 0.6469750540507997, 0.6468777364390526, 0.6467815377002195, 0.6466864391928595, 0.6465924235046332, 0.6464994742294108, 0.6464075757873561]
MPITIME(X,X1,X2,X3,Y,Y1,Y2,Y3)
def plotTM(X,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.plot(X, Y3, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for MPI')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
x = range(1,51)
Y = [ 84.49820589 , 164.47475089 , 252.57880662 ,332.79226585 , 414.24528569
, 495.48336532 , 576.92765014 ,656.86370173 ,739.25705991 , 818.6356752
, 899.43099506 ,979.77890556 ,1059.83193274 ,1140.55341788 ,1221.16421704
,1301.57192919 ,1381.55797474 ,1462.8466231 ,1542.80778309 ,1623.07644973
,1703.95276138 ,1783.9787478 ,1877.87199879 ,1959.04781022 ,2038.93317987
,2120.90320994 ,2204.96414807 ,2285.77911995 ,2364.96363536 ,2445.87305488
,2525.83184811 ,2606.97834018 ,2686.7101263 ,2766.99245824 ,2847.33861059
,2927.87637653 ,3008.97332209 ,3089.85983705 ,3169.59447478 ,3248.61982474
,3329.64010794 ,3409.53455712 ,3489.92070007 ,3570.4199765 ,3650.18577672
,3729.27832163 ,3809.19140942 ,3888.32132533 ,3972.68657469 ,4072.03480607]
Y1 = [ 39.26806437 ,72.57745821 , 106.05096296 , 139.00448176 , 172.29533115
,205.57932461 ,238.56777642 ,272.72148373 ,306.97009107 ,340.38344931
,373.43555898 ,406.51144664 ,440.63416142 ,474.60564955 ,508.90255478
,542.17474864 ,575.40848085 ,609.02158613 ,643.00820186 ,676.24514888
,709.44564522 ,743.47336864 ,777.49872282 ,811.2797804 ,844.40053887
,877.45420111 ,911.26939514 ,945.73890616 ,980.09765186 ,1013.67860485
,1047.14104555 ,1080.20068922 ,1114.26367958 ,1147.5959727 ,1180.67114349
,1214.17691088 ,1249.57923752 ,1282.92121595 ,1316.03580045 ,1349.34485825
,1383.64024229 ,1417.82322523 ,1450.86176798 ,1484.00772195 ,1517.43152646
,1551.42711311 ,1585.98403106 ,1619.55400393 ,1653.27497918 ,1686.40623821]
Y2 =[ 21.52811306 ,38.23503125 , 54.76111853 ,71.63844112 , 88.13312931
,104.95305264 ,121.63066064 ,138.29328139 ,154.91264137 ,171.76515176
,188.66564091 ,205.63517836 ,222.38042774 ,238.85313769 ,255.54502948
,272.10224998 ,288.88213652 ,305.7769563 ,322.77716893 ,339.34941018
,356.19008146 ,372.9176443 ,389.60832767 ,406.48996101 ,423.47337075
,440.48152254 ,457.20850426 ,473.89012363 ,490.52530129 ,507.22085219
,523.93095134 ,541.1336942 ,558.17591968 ,575.27343437 ,592.28010516
,608.99618434 ,625.73063412 ,642.3602348 ,659.11609194 ,675.81944694
,693.08620451 ,710.07428502 ,727.20617417 ,743.93632063 ,760.58642419
,777.35678163 ,793.94443998 ,810.66035554 ,827.57658372 ,844.72874445]
Y3 = [13.97246734947475522,21.811070889318216,31.64744130874169,40.74003029681262,49.815186276729946,58.967551522560825,68.04791670633495,77.01047306327018,86.02742898126962,95.27113142652524,104.45450471130971,113.5906742803636,122.91588332906758,132.12772087722624,140.94381443626116,149.88122359343106,158.87287187114634,167.76169047180883,176.78346066315862,185.55280467653938,194.6135262991993,203.5145031174834,212.46495168095862,221.43934643233297,230.34223443215524,239.06863715830696,248.05189950993372,257.06174738411573,265.9514390632794,274.8598084869154,283.6458457189874,292.5765608041329,301.4662027643717,310.3892163520977,319.5273972457835,328.6362838168061,337.3893189635055,346.40289787308575,355.3349454960626,364.30210631952286,373.23001034576737,382.0595156741583,390.98750644406755,400.0585248217358,409.0105047987563,418.08025939881554,427.1664674383319,436.18917065299684,445.3574709920031,454.49862514818597]
plotTM(x,Y,Y1,Y2,Y3)
```
## Speedup Graph
```
number_processes=[1,2,4,8]
Time=[4072.03480607,1686.40623821,844.72874445,454.49862514818597]
multip_stat=dict(zip(number_processes,Time))
def plot(multip_stat):
    keys = np.array(sorted(multip_stat))
    speedup = []
    for p in keys:
        # speedup S(p) = T(1) / T(p), relative to the single-worker baseline
        speedup.append(multip_stat[1] / multip_stat[p])
    plt.figure(figsize=(10, 5))
    plt.scatter(keys, speedup)
    plt.plot(keys, speedup, linewidth=3)
    plt.grid()
    plt.title('Speedup Plot')
    plt.legend(['MPI'])
    plt.ylabel('Speedup')
    plt.xlabel('Number of Workers')
plot(multip_stat)
```
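Speedup alone can overstate scaling, so it is common to also look at parallel efficiency, E(p) = S(p) / p, which normalizes speedup by the worker count. Below is a minimal sketch using the MPI worker counts and wall times from the cell above; the helper name `parallel_efficiency` is ours, not part of the original code:

```python
def parallel_efficiency(times_by_workers):
    """Return {workers: speedup / workers} given {workers: wall time in seconds}."""
    t1 = times_by_workers[1]  # single-worker baseline
    return {p: (t1 / t) / p for p, t in times_by_workers.items()}

# MPI timings for 1, 2, 4 and 8 workers (seconds), as in the speedup cell above
mpi_times = {1: 4072.03480607, 2: 1686.40623821, 4: 844.72874445, 8: 454.49862514818597}
eff = parallel_efficiency(mpi_times)
for p in sorted(eff):
    print(f"{p} workers: efficiency {eff[p]:.3f}")
```

Efficiency above 1.0 (as these MPI numbers give) is super-linear scaling, which usually points to a caching effect or an uneven single-worker baseline rather than ideal parallelism.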
## EFFICIENCY CURVE
```
X = range(1,51)
Y=[0.6339635512620139, 0.6304404969428715, 0.6287863553761855, 0.6279738852991021, 0.6276156820950709, 0.6275030043187826, 0.6275186603871724, 0.6275962587455842, 0.6276985594206315, 0.6278052613572751, 0.62790590302034, 0.6279956798869201, 0.6280729698470366, 0.6281378719120901, 0.6281913501291941, 0.6282347398754922, 0.6282694708969774, 0.6282969192973948, 0.6283183354045916, 0.6283348154259755, 0.6283472975522393, 0.6283565709388462, 0.6283632907373439, 0.6283679952390973, 0.6283711229471666, 0.6283730284451766, 0.6283739965544023, 0.6283742546284109, 0.628373983031675, 0.628373323948644, 0.628372388712509, 0.6283712638527655, 0.6283700160526373, 0.6283686961904849, 0.6283673426188101, 0.6283659838134242, 0.6283646405054056, 0.628363327390435, 0.6283620544942408, 0.6283608282592517, 0.6283596524059716, 0.6283585286128809, 0.6283574570505828, 0.6283564367992205, 0.6283554661726903, 0.6283545429686687, 0.6283536646597799, 0.6283528285382431, 0.6283520318238921, 0.6283512717434965]
Y1= [0.715948034274896,0.70289717140913040,0.6905844020679819,0.6717724559631897,0.6527273647994417,0.6474447804910045,0.6464336144796086,0.6460104962764943,0.6457092440153327,0.6454408117073506,0.645183126813747,0.644929985585447,0.6446795702746679,0.6444313744231123,0.6441852977984192,0.6439413671299447,0.6436996481093187,0.643460217301526,0.6432231529318845,0.6429885316349159,0.6427564271345373,0.642526909609488,0.6423000453465729,0.6420758965311065,0.641854521093217,0.6416359725517277,0.6414202998121223,0.6412075468906367,0.64099775255328,0.6407909498749865,0.6405871657383971,0.6403864203026565,0.6401887264793377,0.639994089454955,0.6398025062977499,0.6396139656811805,0.6394284477486862,0.6392459241348711,0.6390663581483045,0.6388897051116094,0.6387159128462027,0.6385449222824692,0.638376668171623,0.638211079873034,0.6380480821902578,0.6378875962300637,0.6377295402610623, 0.6375738305516471,0.6374203821705329,0.6372691097368535]
Y2= [0.645195,
0.352708,
0.295527,
0.265963,
0.247322,
0.230956,
0.203196,
0.178851,
0.165108,
0.152322,
0.147029,
0.142655,
0.137994,
0.134703,
0.132134,
0.130995,
0.130018,
0.129495,
0.128310,
0.127241,
0.126372,
0.125682,
0.124841,
0.124271,
0.123461,
0.122849,
0.122410,
0.121970,
0.121787,
0.121498,
0.121191,
0.121012,
0.120614,
0.120391,
0.120137,
0.119942,
0.119739,
0.119518,
0.119383,
0.119210,
0.119077,
0.118965,
0.118904,
0.118791,
0.118728,
0.118653,
0.118529,
0.118472,
0.118374,
0.118332,
0.118280]
```
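The cell above defines the efficiency-curve series but never renders them, and `Y2` carries one more entry than `X` (51 versus 50), so a direct `plt.plot(X, Y2)` would raise a length-mismatch error. Below is a minimal sketch of a guard that truncates every series to the shortest one before plotting; the helper name `plot_aligned` and the toy data are ours:

```python
import matplotlib
matplotlib.use('Agg')  # draw off-screen so the sketch also runs headless
import matplotlib.pyplot as plt

def plot_aligned(x, *series, labels=None):
    """Plot every series against x, truncating all of them to the shortest length."""
    x = list(x)
    n = min(len(x), *(len(s) for s in series))
    plt.figure(figsize=(10, 5))
    for s in series:
        plt.plot(x[:n], list(s)[:n], linewidth=3)
    if labels:
        plt.legend(labels)
    plt.grid()
    return n  # how many points were actually kept

# toy stand-ins for X, Y, Y1, Y2 above; one series is deliberately one entry longer
kept = plot_aligned(range(1, 51), [0.6] * 50, [0.7] * 50, [0.8] * 51,
                    labels=['1 Worker', '2 Workers', '4 Workers'])
print(kept)
```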
## DTF PLOTS FOR CASE STUDY 1
```
def DTFLOSS(X,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.plot(X, Y3, linewidth=3)
    plt.grid()
    plt.title('DTF Log Loss')
    plt.xlabel('<-----Number of Epochs---->')
    plt.ylabel('<----Log Loss----->')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
X = range(1,51)
Y = [0.8032,0.7307,0.6966,0.6719,0.6570,0.6511,0.6483,0.6467,0.6456,0.6449,0.6443,0.6438,0.6434,0.6431,0.6428,0.6425,0.6423,0.6421,0.6419,0.6417,0.6416,0.6414,0.6413,0.6412,0.6410,0.6409,0.6408,0.6407,0.6406,0.6406,0.6405,0.6404,0.6403,0.6402,0.6401,0.6401,0.6400,0.6399,0.6399,0.6398,0.6397,0.6397,0.6396,0.6395,0.6395, 0.6394,0.6393,0.6393,0.6392,0.6392]
Y1= [0.715948034274896,0.70289717140913040,0.6905844020679819,0.6717724559631897,0.6527273647994417,0.6474447804910045,0.6464336144796086,0.6460104962764943,0.6457092440153327,0.6454408117073506,0.645183126813747,0.644929985585447,0.6446795702746679,0.6444313744231123,0.6441852977984192,0.6439413671299447,0.6436996481093187,0.643460217301526,0.6432231529318845,0.6429885316349159,0.6427564271345373,0.642526909609488,0.6423000453465729,0.6420758965311065,0.641854521093217,0.6416359725517277,0.6414202998121223,0.6412075468906367,0.64099775255328,0.6407909498749865,0.6405871657383971,0.6403864203026565,0.6401887264793377,0.639994089454955,0.6398025062977499,0.6396139656811805,0.6394284477486862,0.6392459241348711,0.6390663581483045,0.6388897051116094,0.6387159128462027,0.6385449222824692,0.638376668171623,0.638211079873034,0.6380480821902578,0.6378875962300637,0.6377295402610623, 0.6375738305516471,0.6374203821705329,0.6372691097368535]
Y2 = [0.7038,0.6787,0.6654,0.6563, 0.6514,0.6487,0.6470, 0.6459,0.6451,0.6445,0.6440,0.6436,0.6432,0.6429,0.6426,0.6423,0.6420,0.6417,0.6415,0.6413,0.6410,0.6408,0.6406,0.6403,0.6401,0.6399,0.6397,0.6394,0.6392,0.6390,0.6387,0.6385,0.6382,0.6380,0.6378,0.6375,0.6373,0.6370,0.6368,0.6366,0.6363,0.6361,0.6359,0.6357,0.6354,0.6352,0.6350,0.6348,0.6347,0.6345]
Y3 = [0.8455,0.7903,0.7331, 0.6984, 0.6801, 0.6694, 0.6626, 0.6581,0.6550,0.6529,0.6513,0.6502, 0.6494,0.6488,0.6483, 0.6478, 0.6474, 0.6471, 0.6468, 0.6465, 0.6462, 0.6460, 0.6457, 0.6455,0.6453, 0.6451,0.6449, 0.6446, 0.6444,0.6443, 0.6441,0.6439, 0.6437,0.6435, 0.6433,0.6432,0.6430,0.6429,0.6427, 0.6426, 0.6424, 0.6423,0.6421,0.6420, 0.6419,0.6417,0.6416,0.6415,0.6414,0.6413]
DTFLOSS(X,Y,Y1,Y2,Y3)
def DTFTIME(X,X1,X2,X3,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    # each worker count has its own elapsed-time axis (X..X3) and loss curve (Y..Y3)
    plt.plot(X, Y, linewidth=3)
    plt.plot(X1, Y1, linewidth=3)
    plt.plot(X2, Y2, linewidth=3)
    plt.plot(X3, Y3, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Loss for DTF')
    plt.xlabel('<-----Time Taken (Seconds)---->')
    plt.ylabel('<----Log Loss----->')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
X= [ 16.87558913, 33.29074454, 49.76412535, 67.17805433, 83.68378305, 100.06425428, 116.22213483, 132.54741764, 147.88943338, 162.56206965, 177.82378531, 190.71737599, 206.07281613, 223.09576797, 241.42640066, 259.35313749, 277.19583058, 293.11428428, 310.34029913, 328.12470937, 343.81842685, 358.02016902, 371.23597646, 389.42547941, 407.68673635, 425.49094319, 442.56747818, 461.19541144, 478.54700518, 496.92521 , 513.99962282, 531.47870708, 549.16265583, 565.7648344 , 582.9304204 , 599.21490431, 615.5532937 , 635.68406606, 650.53418756, 665.50302958, 681.26839638, 696.57588673, 712.4587276 , 729.03097391, 744.76885128, 760.5973568 , 776.25076389, 791.98429418, 806.69811916, 823.22756433]
X1=[ 17.60946488, 32.49609518, 46.8456769 , 59.97366548, 72.78248382, 86.13100553, 100.13036442, 114.00211406, 128.93156004, 142.99975872, 157.10885191, 170.58976483, 188.99921346, 206.14611602, 223.39433408, 240.71215391, 258.75031304, 276.02193213, 293.29474902, 309.94306254, 323.73790812, 338.0604918 , 352.12470007, 366.19486022, 379.79518127, 395.5195744 , 412.09480929, 430.4447825 , 447.55368066, 464.86246061, 482.21042061, 500.27792668, 517.51594067, 532.06836963, 545.85312557, 560.88961601, 575.24216127, 589.46198869, 603.06028223, 617.0636332 , 632.25353575, 647.99967837, 665.16695642, 682.51943541, 699.68227649, 716.22630692, 733.0169704 , 750.16986108, 765.39399242, 779.67661476]
X2= [9.58374453, 18.13167834, 26.97000527, 35.51926875, 44.30734992, 52.96224642, 62.22135305, 70.8706727 , 78.6865859 , 87.47659445, 96.63693261, 107.45961523, 117.88072157, 125.78734231, 134.49494171, 142.06098008, 151.19932675, 158.74457216, 167.92506218, 175.69069195, 184.59641671, 193.18962574, 201.25483251, 209.77101421, 217.58441687, 226.29052091, 234.21030354, 243.17912483, 251.8932929 , 260.80951738, 268.50533104, 276.47641182, 285.18519044, 294.05454183, 301.72342682, 310.51598287, 318.07915044, 327.10508418, 336.06855035, 344.62669945, 352.48610854, 361.0362792 , 370.79127121, 379.2261126 , 388.59513116, 397.7535193 , 406.98942423, 414.94340825, 423.73010254, 431.81974101]
X3= [7.98568463, 16.9736433 , 25.74850178, 35.59157944, 45.00773907, 54.25701904, 63.24004149, 71.19508004, 79.79809189, 87.90475798, 96.6304338 , 104.54486966, 113.17023444, 123.03141642, 130.62548566, 139.42713165, 147.11088109, 155.99784803, 164.6090188 , 172.8154459 , 181.48556924, 190.1205256 , 199.11744905, 206.6556344 , 215.63762879, 223.34040642, 232.16977572, 239.85537601, 248.73208237, 257.14199877, 266.95903158, 275.50639963, 284.51115441, 292.41836596, 301.78489256, 310.22965097, 319.20965195, 326.91940737, 335.80662513, 344.42260027, 352.51230812, 361.24297714, 369.43441153, 378.62784195, 386.79733586, 396.04761386, 403.80425334, 412.72740793, 420.42915154, 429.66435623]
Y = [0.8032,0.7307,0.6966,0.6719,0.6570,0.6511,0.6483,0.6467,0.6456,0.6449,0.6443,0.6438,0.6434,0.6431,0.6428,0.6425,0.6423,0.6421,0.6419,0.6417,0.6416,0.6414,0.6413,0.6412,0.6410,0.6409,0.6408,0.6407,0.6406,0.6406,0.6405,0.6404,0.6403,0.6402,0.6401,0.6401,0.6400,0.6399,0.6399,0.6398,0.6397,0.6397,0.6396,0.6395,0.6395, 0.6394,0.6393,0.6393,0.6392,0.6392]
Y1= [0.715948034274896,0.70289717140913040,0.6905844020679819,0.6717724559631897,0.6527273647994417,0.6474447804910045,0.6464336144796086,0.6460104962764943,0.6457092440153327,0.6454408117073506,0.645183126813747,0.644929985585447,0.6446795702746679,0.6444313744231123,0.6441852977984192,0.6439413671299447,0.6436996481093187,0.643460217301526,0.6432231529318845,0.6429885316349159,0.6427564271345373,0.642526909609488,0.6423000453465729,0.6420758965311065,0.641854521093217,0.6416359725517277,0.6414202998121223,0.6412075468906367,0.64099775255328,0.6407909498749865,0.6405871657383971,0.6403864203026565,0.6401887264793377,0.639994089454955,0.6398025062977499,0.6396139656811805,0.6394284477486862,0.6392459241348711,0.6390663581483045,0.6388897051116094,0.6387159128462027,0.6385449222824692,0.638376668171623,0.638211079873034,0.6380480821902578,0.6378875962300637,0.6377295402610623, 0.6375738305516471,0.6374203821705329,0.6372691097368535]
Y2 = [0.7038,0.6787,0.6654,0.6563, 0.6514,0.6487,0.6470, 0.6459,0.6451,0.6445,0.6440,0.6436,0.6432,0.6429,0.6426,0.6423,0.6420,0.6417,0.6415,0.6413,0.6410,0.6408,0.6406,0.6403,0.6401,0.6399,0.6397,0.6394,0.6392,0.6390,0.6387,0.6385,0.6382,0.6380,0.6378,0.6375,0.6373,0.6370,0.6368,0.6366,0.6363,0.6361,0.6359,0.6357,0.6354,0.6352,0.6350,0.6348,0.6347,0.6345]
Y3 = [0.8455,0.7903,0.7331, 0.6984, 0.6801, 0.6694, 0.6626, 0.6581,0.6550,0.6529,0.6513,0.6502, 0.6494,0.6488,0.6483, 0.6478, 0.6474, 0.6471, 0.6468, 0.6465, 0.6462, 0.6460, 0.6457, 0.6455,0.6453, 0.6451,0.6449, 0.6446, 0.6444,0.6443, 0.6441,0.6439, 0.6437,0.6435, 0.6433,0.6432,0.6430,0.6429,0.6427, 0.6426, 0.6424, 0.6423,0.6421,0.6420, 0.6419,0.6417,0.6416,0.6415,0.6414,0.6413]
DTFTIME(X,X1,X2,X3,Y,Y1,Y2,Y3)
def DTFTM(X,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=5)
    plt.plot(X, Y3, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for DTF')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 12})
    plt.show()
X =range(1,51)
Y= [ 16.87558913, 33.29074454, 49.76412535, 67.17805433, 83.68378305, 100.06425428, 116.22213483, 132.54741764, 147.88943338, 162.56206965, 177.82378531, 190.71737599, 206.07281613, 223.09576797, 241.42640066, 259.35313749, 277.19583058, 293.11428428, 310.34029913, 328.12470937, 343.81842685, 358.02016902, 371.23597646, 389.42547941, 407.68673635, 425.49094319, 442.56747818, 461.19541144, 478.54700518, 496.92521 , 513.99962282, 531.47870708, 549.16265583, 565.7648344 , 582.9304204 , 599.21490431, 615.5532937 , 635.68406606, 650.53418756, 665.50302958, 681.26839638, 696.57588673, 712.4587276 , 729.03097391, 744.76885128, 760.5973568 , 776.25076389, 791.98429418, 806.69811916, 823.22756433]
Y1=[ 17.60946488, 32.49609518, 46.8456769 , 59.97366548, 72.78248382, 86.13100553, 100.13036442, 114.00211406, 128.93156004, 142.99975872, 157.10885191, 170.58976483, 188.99921346, 206.14611602, 223.39433408, 240.71215391, 258.75031304, 276.02193213, 293.29474902, 309.94306254, 323.73790812, 338.0604918 , 352.12470007, 366.19486022, 379.79518127, 395.5195744 , 412.09480929, 430.4447825 , 447.55368066, 464.86246061, 482.21042061, 500.27792668, 517.51594067, 532.06836963, 545.85312557, 560.88961601, 575.24216127, 589.46198869, 603.06028223, 617.0636332 , 632.25353575, 647.99967837, 665.16695642, 682.51943541, 699.68227649, 716.22630692, 733.0169704 , 750.16986108, 765.39399242, 779.67661476]
Y2= [9.58374453, 18.13167834, 26.97000527, 35.51926875, 44.30734992, 52.96224642, 62.22135305, 70.8706727 , 78.6865859 , 87.47659445, 96.63693261, 107.45961523, 117.88072157, 125.78734231, 134.49494171, 142.06098008, 151.19932675, 158.74457216, 167.92506218, 175.69069195, 184.59641671, 193.18962574, 201.25483251, 209.77101421, 217.58441687, 226.29052091, 234.21030354, 243.17912483, 251.8932929 , 260.80951738, 268.50533104, 276.47641182, 285.18519044, 294.05454183, 301.72342682, 310.51598287, 318.07915044, 327.10508418, 336.06855035, 344.62669945, 352.48610854, 361.0362792 , 370.79127121, 379.2261126 , 388.59513116, 397.7535193 , 406.98942423, 414.94340825, 423.73010254, 431.81974101]
Y3= [7.98568463, 16.9736433 , 25.74850178, 35.59157944, 45.00773907, 54.25701904, 63.24004149, 71.19508004, 79.79809189, 87.90475798, 96.6304338 , 104.54486966, 113.17023444, 123.03141642, 130.62548566, 139.42713165, 147.11088109, 155.99784803, 164.6090188 , 172.8154459 , 181.48556924, 190.1205256 , 199.11744905, 206.6556344 , 215.63762879, 223.34040642, 232.16977572, 239.85537601, 248.73208237, 257.14199877, 266.95903158, 275.50639963, 284.51115441, 292.41836596, 301.78489256, 310.22965097, 319.20965195, 326.91940737, 335.80662513, 344.42260027, 352.51230812, 361.24297714, 369.43441153, 378.62784195, 386.79733586, 396.04761386, 403.80425334, 412.72740793, 420.42915154, 429.66435623]
DTFTM(X,Y,Y1,Y2,Y3)
number_processes=[1,2,4,8]
Time=[ 823.22756433,779.67661476,431.81974101,429.66435623]
multip_stat=dict(zip(number_processes,Time))
def plot(multip_stat):
    keys = np.array(sorted(multip_stat))
    speedup = []
    for p in keys:
        speedup.append(multip_stat[1] / multip_stat[p])
    plt.figure(figsize=(10, 5))
    plt.scatter(keys, speedup)
    plt.plot(keys, speedup, linewidth=3)
    plt.grid()
    plt.title('Speedup Plot')
    plt.legend(['DTF'])
    plt.ylabel('Speedup')
    plt.xlabel('Number of Workers')
plot(multip_stat)
```
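Note that the `Y` arrays in the timing cells are cumulative elapsed times (they grow roughly linearly with the epoch count), even though the y-axis label reads "time taken per epoch". The per-epoch durations can be recovered with `np.diff`. A minimal sketch, using the first five cumulative 8-worker DTF timings from the cell above (the variable names are ours):

```python
import numpy as np

# first five cumulative DTF 8-worker timings from the cell above (seconds)
cumulative = np.array([7.98568463, 16.9736433, 25.74850178, 35.59157944, 45.00773907])

# np.diff with prepend=0 yields the duration of each individual epoch
per_epoch = np.diff(cumulative, prepend=0.0)
print(per_epoch.round(2))
```

Plotting `per_epoch` instead of `cumulative` would make the y-axis label accurate and expose any epoch-to-epoch jitter that the cumulative curve smooths out.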
## SPARK PLOTS
```
def SPARKLOSS(X,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=1)
    plt.plot(X, Y1, linewidth=2)
    plt.plot(X, Y2, linewidth=1.5)
    plt.plot(X, Y3, linewidth=1.25)
    plt.grid()
    plt.title('SPARK Log Loss')
    plt.xlabel('<-----Number of Epochs---->')
    plt.ylabel('<----Log Loss----->')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 18})
    plt.show()
X = range(1,52)
Y = [0.645347,0.355342,0.291245,0.264423,0.245507,0.229386,0.201665,0.177214,0.165123,0.150110,0.144644,0.140692,0.135483,0.131878,0.129706,
0.128063,0.126949,0.125901, 0.125133,0.124536, 0.123058,0.122290,0.121661, 0.120862,0.120282,0.120047, 0.119488,0.119126,
0.118604,0.118480, 0.118222, 0.117920,0.117718, 0.117397, 0.117170,0.116907,0.116660,0.116522, 0.116314, 0.116209, 0.116058, 0.115942,0.115853,0.115788,0.115714,0.115575,0.115500, 0.115402, 0.115321,0.115258,0.115167]
Y1 = [0.644787,
0.355754,
0.293243,
0.266243,
0.247183,
0.231421,
0.203099,
0.178408,
0.165613,
0.150086,
0.144821,
0.139987,
0.135579,
0.132019,
0.128864,
0.127886,
0.126838,
0.126100,
0.124865,
0.123924,
0.122745,
0.122014,
0.121243,
0.120626,
0.119942,
0.119472,
0.119122,
0.118633,
0.118291,
0.118060,
0.117807,
0.117551,
0.117263,
0.117040,
0.116894,
0.116704,
0.116533,
0.116394,
0.116199,
0.116074,
0.115979,
0.115858,
0.115757,
0.115682,
0.115613,
0.115471,
0.115361,
0.115289,
0.115224,
0.115166,
0.115091]
Y2 = [0.645195,
0.352708,
0.295527,
0.265963,
0.247322,
0.230956,
0.203196,
0.178851,
0.165108,
0.152322,
0.147029,
0.142655,
0.137994,
0.134703,
0.132134,
0.130995,
0.130018,
0.129495,
0.128310,
0.127241,
0.126372,
0.125682,
0.124841,
0.124271,
0.123461,
0.122849,
0.122410,
0.121970,
0.121787,
0.121498,
0.121191,
0.121012,
0.120614,
0.120391,
0.120137,
0.119942,
0.119739,
0.119518,
0.119383,
0.119210,
0.119077,
0.118965,
0.118904,
0.118791,
0.118728,
0.118653,
0.118529,
0.118472,
0.118374,
0.118332,
0.118280]
Y3 = [ 0.645144,
0.356428,
0.303088,
0.269499,
0.251233,
0.236853,
0.207137,
0.182165,
0.167221,
0.151761,
0.146721,
0.141207,
0.137439,
0.134450,
0.131992,
0.130443,
0.129550,
0.128930,
0.127765,
0.126999,
0.125778,
0.125363,
0.124436,
0.123879,
0.123365,
0.122195,
0.121899,
0.121388,
0.121060,
0.120773,
0.120359,
0.120119,
0.119921,
0.119535,
0.119257,
0.118987,
0.118798,
0.118638,
0.118425,
0.118257,
0.118122,
0.117997,
0.117838,
0.117735,
0.117644,
0.117542,
0.117443,
0.117283,
0.117204,
0.117129,
0.117057]
SPARKLOSS(X,Y,Y1,Y2,Y3)
## SPEEDUP GRAPH
number_processes=[1,2,4,8]
Time=[91.999,101.06,116.606,207.522]
multip_stat=dict(zip(number_processes,Time))
def plot(multip_stat):
    keys = np.array(sorted(multip_stat))
    speedup = []
    for p in keys:
        speedup.append(multip_stat[1] / multip_stat[p])
    plt.figure(figsize=(10, 5))
    plt.scatter(keys, speedup)
    plt.plot(keys, speedup, linewidth=3)
    plt.grid()
    plt.title('Speedup Plot')
    plt.legend(['SPARK'])
    plt.ylabel('Speedup')
    plt.xlabel('Number of Workers')
plot(multip_stat)
number_processes_MPI=[1,2,4,8]
Time_MPI=[4072.03480607,1686.40623821,844.72874445,454.49862514818597]
number_processes_DTF=[1,2,4,8]
Time_DTF=[ 823.22756433,779.67661476,431.81974101,429.66435623]
number_processes_SPARK=[1,2,4,8]
Time_SPARK=[91.999,101.06,116.606,207.522]
multip_stat_MPI=dict(zip(number_processes_MPI,Time_MPI))
multip_stat_DTF=dict(zip(number_processes_DTF,Time_DTF))
multip_stat_SPARK=dict(zip(number_processes_SPARK,Time_SPARK))
def plot(multip_stat_MPI, multip_stat_DTF, multip_stat_SPARK):
    keys = np.array(sorted(multip_stat_MPI))
    speedup_MPI = []
    speedup_DTF = []
    speedup_SPARK = []
    for p in keys:
        speedup_MPI.append(multip_stat_MPI[1] / multip_stat_MPI[p])
        speedup_DTF.append(multip_stat_DTF[1] / multip_stat_DTF[p])
        speedup_SPARK.append(multip_stat_SPARK[1] / multip_stat_SPARK[p])
    plt.scatter(keys, speedup_MPI)
    plt.plot(keys, speedup_MPI)
    plt.scatter(keys, speedup_DTF)
    plt.plot(keys, speedup_DTF)
    plt.scatter(keys, speedup_SPARK)
    plt.plot(keys, speedup_SPARK)
    plt.grid()
    plt.title('Speedup Plot')
    plt.legend(['MPI', 'DTF', 'Spark'])
    plt.ylabel('Speedup')
    plt.xlabel('Number of Workers')
plot(multip_stat_MPI,multip_stat_DTF,multip_stat_SPARK)
# hand-computed parallel efficiency values (speedup / workers)
MPI_EFFICIENCY = [1.0, 1.207]
DTF_EFFICIENCY = [1.0]
SPARK_EFFICIENCY = [1.0]
```
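The Spark timings above actually grow with the worker count (about 92 s on one worker versus 208 s on eight), so its "speedup" is in fact a slowdown. A minimal sketch that tabulates speedup for all three frameworks from the timings in this section (the numbers are taken from the cells above; the helper name `speedups` is ours):

```python
def speedups(times_by_workers):
    """Return {workers: T(1) / T(workers)} for a dict of wall times."""
    t1 = times_by_workers[1]
    return {p: t1 / t for p, t in times_by_workers.items()}

# wall times in seconds for 1, 2, 4 and 8 workers, as in the speedup cells above
timings = {
    'MPI':   {1: 4072.03480607, 2: 1686.40623821, 4: 844.72874445, 8: 454.49862514818597},
    'DTF':   {1: 823.22756433, 2: 779.67661476, 4: 431.81974101, 8: 429.66435623},
    'SPARK': {1: 91.999, 2: 101.06, 4: 116.606, 8: 207.522},
}
for name, times in timings.items():
    s = speedups(times)
    print(name, {p: round(v, 2) for p, v in sorted(s.items())})
```

Spark's sub-1.0 speedup here most likely reflects fixed coordination overhead dominating a short job, not a flaw in Spark itself: its single-worker baseline is already over 40x faster than MPI's.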
## CASE STUDY 2: NETFLIX DATA SET
```
def DTFTM(X,Y,Y1,Y2,Y3):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=5)
    plt.plot(X, Y3, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for DTF')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '2 Workers', '4 Workers', '8 Workers'], prop={'size': 12})
    plt.show()
X = range(1,51)
Y1_MPI = [ 39.26806437 ,72.57745821 , 106.05096296 , 139.00448176 , 172.29533115
,205.57932461 ,238.56777642 ,272.72148373 ,306.97009107 ,340.38344931
,373.43555898 ,406.51144664 ,440.63416142 ,474.60564955 ,508.90255478
,542.17474864 ,575.40848085 ,609.02158613 ,643.00820186 ,676.24514888
,709.44564522 ,743.47336864 ,777.49872282 ,811.2797804 ,844.40053887
,877.45420111 ,911.26939514 ,945.73890616 ,980.09765186 ,1013.67860485
,1047.14104555 ,1080.20068922 ,1114.26367958 ,1147.5959727 ,1180.67114349
,1214.17691088 ,1249.57923752 ,1282.92121595 ,1316.03580045 ,1349.34485825
,1383.64024229 ,1417.82322523 ,1450.86176798 ,1484.00772195 ,1517.43152646
,1551.42711311 ,1585.98403106 ,1619.55400393 ,1653.27497918 ,1686.40623821]
Y1_DTF=[ 17.60946488, 32.49609518, 46.8456769 , 59.97366548, 72.78248382, 86.13100553, 100.13036442, 114.00211406, 128.93156004, 142.99975872, 157.10885191, 170.58976483, 188.99921346, 206.14611602, 223.39433408, 240.71215391, 258.75031304, 276.02193213, 293.29474902, 309.94306254, 323.73790812, 338.0604918 , 352.12470007, 366.19486022, 379.79518127, 395.5195744 , 412.09480929, 430.4447825 , 447.55368066, 464.86246061, 482.21042061, 500.27792668, 517.51594067, 532.06836963, 545.85312557, 560.88961601, 575.24216127, 589.46198869, 603.06028223, 617.0636332 , 632.25353575, 647.99967837, 665.16695642, 682.51943541, 699.68227649, 716.22630692, 733.0169704 , 750.16986108, 765.39399242, 779.67661476]
Y2_SPARK = [0.00011801719665527344 ,13.048691272735596
,25.986998558044434
,39.0324273109436
, 48.58980917930603
,57.305105447769165
,66.05820512771606
,74.56391334533691
,83.11631631851196
,92.0088939666748
,100.54578685760498
,109.27140808105469
,117.82947421073914
,126.4936170578003
,135.16798162460327
,143.5839183330536
,152.37627696990967
,160.9868664741516
,169.7529172897339
,178.12082934379578
,186.9383897781372
,195.49775624275208
,204.4240026473999
,213.08508133888245
,221.8182110786438
,230.47983479499817
,238.9033019542694
,247.49867463111877
,256.251647233963
,264.93320059776306
,273.65161871910095
,283.66929149627686
,293.13853192329407
,302.1566424369812
,310.8365910053253
,319.69861006736755
,328.4101662635803
,337.20888900756836
,345.73415756225586
,354.9532382488251
,364.00493121147156
,373.05722999572754
,382.04206705093384
,391.415762424469
,400.55709624290466
,409.3212435245514
,417.94327425956726
,426.6168210506439
,435.37222814559937
,445.71824502944946]
## CASE STUDY 2
def MPIRMSE(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('RMSE vs Number of Epochs for MPI')
    plt.xlabel('<-----Number of Epochs---->')
    plt.ylabel('<----RMSE----->')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 18})
    plt.show()
x = range(1,16)
y= [1.69502094,1.05884242,0.96390224,0.93960407,0.92669 ,0.91870369
,0.9140874 ,0.9110303 ,0.9085862 ,0.90634117 ,0.90416345 ,0.90206751
,0.90010877 ,0.89831842 ,0.89669002]
y1= [2.14025451 ,1.6820769 , 1.481788 , 1.35927638 ,1.22573769 ,1.13528224,
1.05926993 ,1.04452829 ,1.00289506 ,0.98887135, 0.97327298 ,0.96323273, 0.96098556 ,0.95147152,0.948912135]
MPIRMSE(x,y,y1)
def MPITIME(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for MPI')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 12})
    plt.show()
x = range(1,16)
y= [806.0806279115131,
1599.763174903852,
2400.0820581246226,
3150.619616557611,
3892.4219823332023,
4638.169487658593,
5380.002374167871,
6140.009267118039,
6881.306907654285,
7623.962474184056,
8372.484476228525,
9114.23567239234,
9857.62695838877,
11178.364797786697,
12676.97734226488]
y1=[464.39217762563567,
940.6333063229067,
1616.1098495667393,
2192.1590670754194,
2668.9769540420784,
3135.9788324302826,
3609.2469609745203,
4081.974774519367,
4550.694962379344,
5018.33306217123,
5492.441388081985,
5957.0939225837465,
6432.357136358925,
6902.667810202041,
7382.633217463543]
MPITIME(x,y,y1)
def DTFRMSE(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('RMSE vs Number of Epochs for DTF')
    plt.xlabel('<-----Number of Epochs---->')
    plt.ylabel('<----RMSE----->')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 18})
    plt.show()
x = range(1,16)
y=[1.20759
,0.98176
,0.96576
,0.93121
,0.9217
,0.9201
,0.9157
,0.9111
,0.9083
,0.9075
,0.9061
,0.9031
,0.8975
,0.8945
,0.8944]
y1 = [1.21483,
1.01961,
0.99742,
0.98133,
0.97976,
0.96442,
0.9566,
0.94281,
0.93648,
0.93028,
0.92934,
0.92642,
0.92137,
0.91915,
0.918524]
DTFRMSE(x,y,y1)
def DTFTIME(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for DTF')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 12})
    plt.show()
y = [  # NOTE: the list opener and its first two epoch timings are missing in the source
    261.3881688117981,
342.48716402053833,
423.025009393692,
504.36061835289,
585.5085747241974,
665.3532092571259,
747.458996295929,
828.7041790485382,
913.6621732711792,
994.4222047328949,
1075.538974761963,
1156.75257396698,
1237.388594865799]
y1=[67.99266147613525,
136.59420084953308,
218.74961066246033,
301.0985417366028,
383.6787850856781,
466.48629689216614,
552.7772421836853,
615.2632458209991,
689.0136721134186,
745.8860630989075,
811.0473728179932,
905.7610654830933,
996.2549510002136,
1079.2812526226044,
1165.9989485740662]
DTFTIME(x,y,y1)
## SPARK PLOTTING
def SPARKRMSE(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('RMSE vs Number of Epochs for SPARK')
    plt.xlabel('<-----Number of Epochs---->')
    plt.ylabel('<----RMSE----->')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 18})
    plt.show()
x = range(1,16)
y = [15.108495501124244,1.3080305327961523,0.981843014595477,0.928033533314255,
0.9077694279555009,0.8980359266984432,0.8925805624215477,0.8891363208210896,
0.8866853306744105,0.8849392145165782,0.8834497193391307,0.8824382735395588,0.8816500662349076
,0.8809881030862765,0.8802436484850389]
y1=[15.132788426122861,1.309908083880348,0.9807624615345205,0.9273805274781128,0.907063637584297
,0.907063637584297,0.8922852419552114,0.8891393314791373,0.8871308272142933,0.8853383504173695,0.8840656875466764, 0.8828589723089326,0.8822197339872899,0.8816838728090692,0.8812077570578511]
SPARKRMSE(x,y,y1)
def PLOT_(X,Y,Y1,Y2):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.plot(X, Y2, linewidth=3)
    plt.grid()
    plt.title('RMSE vs Number of Epochs Comparison for 4 Workers')
    plt.xlabel('Number of Epochs')
    plt.ylabel('RMSE')
    plt.legend(['MPI', 'DTF', 'SPARK'], prop={'size': 12})
    plt.show()
x = range(1,16)
y1= [2.14025451 ,1.6820769 , 1.481788 , 1.35927638 ,1.22573769 ,1.13528224,
1.05926993 ,1.04452829 ,1.00289506 ,0.98887135, 0.97327298 ,0.96323273, 0.96098556 ,0.95147152,0.948912135]
y2 = [1.21483,
1.01961,
0.99742,
0.98133,
0.97976,
0.96442,
0.9566,
0.94281,
0.93648,
0.93028,
0.92934,
0.92642,
0.92137,
0.91915,
0.918524]
y3= [15.132788426122861,1.309908083880348,0.9807624615345205,0.9273805274781128,0.907063637584297
,0.907063637584297,0.8922852419552114,0.8891393314791373,0.8871308272142933,0.8853383504173695,0.8840656875466764, 0.8828589723089326,0.8822197339872899,0.8816838728090692,0.8812077570578511]
PLOT_(x,y1,y2,y3)
def SPARKTIME(X,Y,Y1):
    plt.figure(figsize=(10, 5))
    plt.plot(X, Y, linewidth=3)
    plt.plot(X, Y1, linewidth=3)
    plt.grid()
    plt.title('Time Taken vs Number of Epochs for SPARK')
    plt.xlabel('Number of Epochs')
    plt.ylabel('Time Taken per Epoch in Seconds')
    plt.legend(['1 Worker', '4 Workers'], prop={'size': 12})
    plt.show()
x =range(1,16)
y= [4.638733625411987 ,
27.605796337127686,
51.98823523521423 ,
77.05124998092651 ,
102.0006492137909,
126.59807181358337,
151.20991778373718 ,
175.78695034980774,
200.34900951385498 ,
225.66183638572693,
250.93925309181213,
275.6295952796936 ,
300.21891617774963,
324.75528860092163,
349.45509099960327]
y1=[7.299824476242065,
34.34166097640991,
60.289286851882935,
85.95397067070007 ,
111.33996343612671 ,
136.57783579826355,
161.6672704219818 ,
186.6448700428009,
212.63845252990723 ,
237.59115529060364 ,
262.39741587638855,
288.0878973007202,
313.2458395957947,
338.2634127140045,
364.03453063964844 ]
SPARKTIME(x,y,y1)
number_processes_MPI=[1,4]
Time_MPI=[12676.97734226488,7382.633217463543]
number_processes_DTF=[1,4]
Time_DTF=[ 1237.3885,1165.9989]
number_processes_SPARK=[1,4]
Time_SPARK=[349.45509099960327,364.03453063964844]
multip_stat_MPI=dict(zip(number_processes_MPI,Time_MPI))
multip_stat_DTF=dict(zip(number_processes_DTF,Time_DTF))
multip_stat_SPARK=dict(zip(number_processes_SPARK,Time_SPARK))
def plot(multip_stat_MPI,multip_stat_DTF,multip_stat_SPARK):
keys =sorted(multip_stat_MPI)
keys = np.array(keys)
speedup_MPI = []
speedup_DTF =[]
speedup_SPARK =[]
for number_processes_MPI in keys:
speedup_MPI.append(multip_stat_MPI[1]/multip_stat_MPI[number_processes_MPI])
for number_processes_DTF in keys:
speedup_DTF.append(multip_stat_DTF[1]/multip_stat_DTF[number_processes_DTF])
for number_processes_SPARK in keys:
speedup_SPARK.append(multip_stat_SPARK[1]/multip_stat_SPARK[number_processes_SPARK])
plt.scatter(keys,speedup_MPI)
plt.plot(keys,speedup_MPI)
plt.scatter(keys,speedup_DTF)
plt.plot(keys,speedup_DTF)
plt.scatter(keys,speedup_SPARK)
plt.plot(keys,speedup_SPARK)
plt.grid()
plt.title('Speedup Plot')
plt.legend(['MPI','DTF','Spark'])
plt.ylabel(' Speedup')
plt.xlabel('Number of Workers')
plot(multip_stat_MPI,multip_stat_DTF,multip_stat_SPARK)
```
## CASE STUDY 3
```
def SPARKTIME(X,Y,Y1):
plt.figure(figsize =(12,6))
plt.plot(X,Y,linewidth=3)
plt.plot(X,Y1,linewidth=3)
plt.grid()
plt.title('Time Taken vs Number of Epochs for SPARK ')
plt.xlabel('Number of Epochs')
plt.ylabel('time taken Per Epoch in seconds ')
plt.legend(['1 Worker','2 Worker'],prop ={'size' :12})
plt.show()
X = range (1,21)
y = [20.83647584915161, 115.49169039726257, 212.71930932998657, 309.04385590553284, 403.9351382255554, 499.31290578842163, 594.6380987167358, 689.6739726066589, 785.23308634758, 880.0320150852203, 977.2057845592499, 1073.3094279766083, 1168.636705160141, 1263.3567852973938, 1358.2580618858337, 1452.7283022403717, 1547.9805326461792, 1643.1987478733063, 1737.3049399852753, 1833.7379133701324]
y1 =[21.095101356506348, 119.65490746498108, 212.026864528656, 303.77784752845764, 397.72406673431396, 490.35750699043274, 582.486631155014, 675.4774732589722, 768.2065455913544, 858.8493058681488, 950.1117322444916, 1041.1729612350464, 1131.5110013484955, 1223.269261598587, 1314.6060457229614, 1406.1503412723541, 1497.7301285266876, 1589.4273250102997, 1680.055240392685, 1771.4874005317688]
SPARKTIME(X,y,y1)
def DTFTIME(X,Y,Y1):
plt.figure(figsize =(12,6))
plt.plot(X,Y,linewidth=3)
plt.plot(X,Y1,linewidth=3)
plt.grid()
plt.title('Time Taken vs Number of Epochs for DTF ')
plt.xlabel('Number of Epochs')
plt.ylabel('time taken Per Epoch in seconds ')
plt.legend(['1 Worker','4 Worker'],prop ={'size' :12})
plt.show()
x = range(1,21)
y = [ 408.47553005, 520.02519894, 660.30865974, 724.61447926,
800.78354359, 904.73757248, 990.26633606, 1087.3455534 ,
1206.96430607, 1298.29965649, 1375.60501442, 1473.01621971,
1568.57109604, 1724.43269978, 1836.82773399, 1928.4815506 ,
2044.63057194, 2134.16728535, 2221.11963501, 2331.75720863]
y1 = [ 270.40916174, 395.67714955, 511.48857478, 608.87956694,
703.32401045, 784.99682006, 872.14281919, 959.68447898,
1052.7241711 , 1155.33583027, 1253.81897676, 1356.24870756,
1448.94714718, 1534.71159497, 1629.61876714, 1713.66093295,
1804.63636951, 1900.28650759, 1990.71106394, 2081.04176102]
DTFTIME(x,y,y1)
def SPARKLOSS(X,Y,Y1):
plt.figure(figsize =(10,5))
plt.plot(X,Y, linewidth =3)
plt.plot(X,Y1,linewidth =3)
plt.grid()
plt.title(' Perplexity Loss vs No. Epochs for SPARK ')
plt.xlabel('<-----Number of Epochs ---->')
plt.ylabel('<----Perplexity Loss----->')
plt.legend(['1 Worker','4 Worker'],prop={'size': 18})
plt.show()
x = range(1,21)
y = [0.0035778083456073656, 0.003577535698440121, 0.003576712928425346, 0.00357665688686634, 0.003576380397226235, 0.0035731053595235906, 0.003572268241898214, 0.0035712441362565532, 0.003571083428146677, 0.003571069451978039094, 0.0035707672031630927, 0.00357064345297587114, 0.003570219004415181, 0.003570210706273984, 0.0035702080837832527, 0.003561812552346963, 0.003561533554574813, 0.003561334752870344, 0.003561290187092556, 0.00356128229671359]
y1 = [0.0035762989807367944, 0.003575624648158023, 0.0035752578524294398, 0.00357409967024672774, 0.0035748131913721707, 0.0035749223268236564, 0.0035687351702780065, 0.0035671440007062826, 0.003564636677295121, 0.003563066717400673, 0.0035629292411101913, 0.00356191898369454955, 0.0035618033387627308, 0.0035611047044410878, 0.0035611027840049598, 0.003561101829779855, 0.003561101540203999, 0.0035611010456287103, 0.0035611010095395401, 0.003561011007362615927]
SPARKLOSS(x,y,y1)
def DTFLOSS(X,Y,Y1):
plt.figure(figsize =(10,5))
plt.plot(X,Y, linewidth =3)
plt.plot(X,Y1,linewidth =3)
plt.grid()
plt.title(' Perplexity Loss vs No. Epochs for DTF ')
plt.xlabel('<-----Number of Epochs ---->')
plt.ylabel('<----Perplexity Loss----->')
plt.legend(['1 Worker','4 Worker'],prop={'size': 18})
plt.show()
x = range(1,21)
y1 = [0.2195059 , 0.19203686, 0.14618968, 0.12653376, 0.09865199, 0.07479476, 0.06974027, 0.06495479, 0.06364685, 0.06090322, 0.06017076, 0.06018017, 0.0601708 , 0.06016365, 0.06015678, 0.0601508 , 0.06014525, 0.06014027, 0.06013574, 0.06013172]
y2 = [0.21913235, 0.18677527, 0.18061693, 0.17703079, 0.16613667, 0.16010431, 0.14957219, 0.14604075, 0.13332193, 0.11301769, 0.1006305 , 0.09423706, 0.08705594, 0.08310994, 0.07549633, 0.07122252, 0.06739104, 0.06353338, 0.0619188 , 0.060173031 ]
DTFLOSS(x,y1,y2)
np.log10([[1.657699866, 1.55609771, 1.40019874, 1.338239237, 1.255023895, 1.187940696, 1.1741951241, 1.16132771, 1.15783547, 1.15054398, 1.148605165, 1.1486300432, 1.1486052664, 1.1485863612, 1.1485681737, 1.1485523551, 1.1485376906, 1.1485245148, 1.1485125465, 1.1485019151]])
np.log10([[1.65627465, 1.53735891, 1.51571285, 1.50324855, 1.4660091 , 1.44578697, 1.41114677, 1.39971866, 1.35932069, 1.297232107, 1.26075441, 1.24233024, 1.2219570528, 1.2109046348, 1.1898612612, 1.178209509, 1.16786069, 1.157532998, 1.153237626, 1.15072488]])
number_processes_MPI=[1,4]
Time_MPI=[2202.6, 1002.6]
number_processes_DTF=[1,4]
Time_DTF=[ 2313.75,2081.041]
number_processes_SPARK=[1,4]
Time_SPARK=[2202.646,2029.055]
multip_stat_MPI=dict(zip(number_processes_MPI,Time_MPI))
multip_stat_DTF=dict(zip(number_processes_DTF,Time_DTF))
multip_stat_SPARK=dict(zip(number_processes_SPARK,Time_SPARK))
def plot(multip_stat_DTF,multip_stat_SPARK):  # note: multip_stat_MPI is read from the enclosing (global) scope
keys =sorted(multip_stat_DTF)
keys = np.array(keys)
speedup_MPI = []
speedup_DTF =[]
speedup_SPARK =[]
for number_processes_MPI in keys:
speedup_MPI.append(multip_stat_MPI[1]/multip_stat_MPI[number_processes_MPI])
for number_processes_DTF in keys:
speedup_DTF.append(multip_stat_DTF[1]/multip_stat_DTF[number_processes_DTF])
for number_processes_SPARK in keys:
speedup_SPARK.append(multip_stat_SPARK[1]/multip_stat_SPARK[number_processes_SPARK])
plt.scatter(keys,speedup_MPI)
plt.plot(keys,speedup_MPI)
plt.scatter(keys,speedup_DTF)
plt.plot(keys,speedup_DTF)
plt.scatter(keys,speedup_SPARK)
plt.plot(keys,speedup_SPARK)
plt.grid()
plt.title('Speedup Plot')
plt.legend(['MPI','DTF','Spark'])
plt.ylabel(' Speedup')
plt.xlabel('Number of Workers')
plot(multip_stat_DTF,multip_stat_SPARK)
def PLOT_(X,Y,Y1,Y2):
plt.figure(figsize =(10,5))
plt.plot(X,Y,linewidth=3)
plt.plot(X,Y1,linewidth=3)
plt.plot(X,Y2,linewidth=3)
plt.grid()
plt.title('Perplexity vs Number of Epochs Comparison for 4 Workers ')
plt.xlabel('Number of Epochs')
plt.ylabel('Perplexity Loss ')
plt.legend(['MPI','DTF','SPARK'],prop ={'size' :12})
plt.show()
x = range(1,21)
y = [0.94442898, 0.9438166 , 0.94384346, 0.94383408, 0.94382495, 0.94387387, 0.94380138 , 0.94380109, 0.94310257, 0.9431043 , 0.9431004, 0.943100035, 0.943100021, 0.943100019, 0.9431, 0.943080554, 0.94306021,0.943051,0.9429151,0.9429021]
y2 = [0.21913235, 0.18677527, 0.18061693, 0.17703079, 0.16613667, 0.16010431, 0.14957219, 0.14604075, 0.13332193, 0.11301769, 0.1006305 , 0.09423706, 0.08705594, 0.08310994, 0.07549633, 0.07122252, 0.06739104, 0.06353338, 0.0619188 , 0.060173031 ]
y3 = [0.0035762989807367944, 0.003575624648158023, 0.0035752578524294398, 0.00357409967024672774, 0.0035748131913721707, 0.0035749223268236564, 0.0035687351702780065, 0.0035671440007062826, 0.003564636677295121, 0.003563066717400673, 0.0035629292411101913, 0.00356191898369454955, 0.0035618033387627308, 0.0035611047044410878, 0.0035611027840049598, 0.003561101829779855, 0.003561101540203999, 0.0035611010456287103, 0.0035611010095395401, 0.003561011007362615927]
PLOT_(x,y,y2,y3)
x1 = [[-8.64409063, -8.64218281, -8.64462241, -8.63021601, -8.63866059, -8.63678908,
       -8.64572243, -8.65061677, -8.63487463, -8.64811534, -8.6499226, -8.63682821,
       -8.63969477, -8.65473852, -8.62995692, -8.64544625, -8.6374613, -8.64376042,
       -8.65134574]]
jlist = []
for i in x1[0]:  # the original looped over `x`, but these log-loss values live in x1
    j = -2 * np.exp(i)
    jlist = np.append(jlist, j)
print(jlist)
x2 = [-8.64409063, -8.64218281, -8.64462241, -8.63021601, -8.63866059, -8.63678908,
      -8.64572243, -8.65061677, -8.63487463, -8.64811534, -8.6499226, -8.63682821,
      -8.63969477, -8.65473852, -8.62995692, -8.64544625, -8.6374613, -8.64376042,
      -8.65134574]
x3= np.dot(x2,-1)
np.log10(x3)
x4 = [-8.79891203, -8.78651388, -8.76684772, -8.78889112, -8.78992064, -8.75851465,
      -8.77519737, -8.79657875, -8.78721181, -8.79635135, -8.78474792, -8.79713295,
      -8.7739676 , -8.77968725, -8.78300203, -8.77113758, -8.78797149, -8.78339511,
      -8.78359683]
x5 = np.dot(x4,-1)
print(x5)
np.log10(x5)
def MPILOSS(X,Y,Y1):
plt.figure(figsize =(10,5))
plt.plot(X,Y, linewidth =3)
plt.plot(X,Y1,linewidth =3)
plt.grid()
plt.title(' Perplexity Loss vs No. Epochs for MPI ')
plt.xlabel('<-----Number of Epochs ---->')
plt.ylabel('<----Perplexity Loss----->')
plt.legend(['1 Worker','4 Worker'],prop={'size': 18})
plt.show()
x = range(1,21)
y1 = [0.93635231, 0.93660129, 0.93660107, 0.93600104, 0.936004147 ,0.93604037, 0.9360404037, 0.93604931, 0.936062345, 0.9354603, 0.93542167, 0.935044641, 0.93503904 , 0.935039039, 0.9350800863, 0.9350030611, 0.9350600272, 0.9350367,0.9350257,0.9350041]
y2 =[0.94442898, 0.9438166 , 0.94384346, 0.94383408, 0.94382495, 0.94387387, 0.94380138 , 0.94380109, 0.94310257, 0.9431043 , 0.9431004, 0.943100035, 0.943100021, 0.943100019, 0.9431, 0.943080554, 0.94306021,0.943051,0.9429151,0.9429021]
x2 =[0.93635231, 0.93650129, 0.936050107, 0.936050104, 0.93604147 ,0.93604037, 0.9370404037, 0.93671931, 0.93662345, 0.93654603, 0.93642167, 0.93644641, 0.93603904 , 0.936039039, 0.93600863, 0.93630611, 0.93600272, 0.930067,0.9300057,0.9300041]
print(len(x2))
MPILOSS(x,y1,y2)
def MPILOSSLOSSTIME(X,Y,Y1):
plt.figure(figsize =(12,6))
plt.plot(X,Y,linewidth=3)
plt.plot(X,Y1,linewidth=3)
plt.grid()
plt.title('Time Taken vs Number of Epochs for MPI')
plt.xlabel('Number of Epochs')
plt.ylabel('time taken Per Epoch in seconds ')
plt.legend(['1 Worker','4 Worker'],prop ={'size' :12})
plt.show()
x = [36.71953403, 37.13031434, 36.36141502, 36.7676942 , 36.76758604, 36.09433365,
     35.61501311, 35.64340032, 35.21340679, 35.06940272, 34.27871376, 35.19826049,
     34.82807174, 34.81522003, 34.92422866, 35.30408916, 36.55754593, 36.00738782,
     35.52955621]
np.divide(x, 60)  # the original call was truncated to `np.(x,60)`; divide-by-60 is a guess
def PLOT_(X,Y,Y1,Y2):
plt.figure(figsize =(10,5))
plt.plot(X,Y,linewidth=3)
plt.plot(X,Y1,linewidth=3)
plt.plot(X,Y2,linewidth=3)
plt.grid()
plt.title('RMSE vs Number of Epochs Comparison for 4 Workers ')
plt.xlabel('Number of Epochs')
plt.ylabel('RMSE ')
plt.legend(['MPI','DTF','SPARK'],prop ={'size' :12})
plt.show()
X = range(1,21)
y = [0.94442898, 0.9438166 , 0.94384346, 0.94383408, 0.94382495, 0.94387387, 0.94380138 , 0.94380109, 0.94310257, 0.9431043 , 0.9431004, 0.943100035, 0.943100021, 0.943100019, 0.9431, 0.943080554, 0.94306021, 0.943051, 0.9429151, 0.9429021]
# The DTF and SPARK series for this plot were left blank in the original
# notebook; fill them in before calling PLOT_(X, y, y1, y2).
y1 = []
y2 = []
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Filter/filter_neq.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Filter/filter_neq.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Filter/filter_neq.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Filter/filter_neq.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
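Outside a notebook, `%%capture` is not available; a rough standard-library stand-in (the variable names here are illustrative, not part of IPython) looks like this:

```python
# Sketch of what %%capture does: anything printed inside the `with` block
# is swallowed into a buffer instead of reaching the console.
import io
from contextlib import redirect_stdout

buffer = io.StringIO()
with redirect_stdout(buffer):
    print("noisy installer output")  # hidden from the console

captured = buffer.getvalue().strip()
print(captured)  # the captured text can be replayed later
```

In the notebook itself, simply uncommenting the `%%capture` line at the top of the cell achieves the same effect.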
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
states = ee.FeatureCollection('TIGER/2018/States')
# Select all states except California
selected = states.filter(ee.Filter.neq("NAME", 'California'))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
```
# for data manipulation
import numpy as np
import pandas as pd
# for data visualization
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('data.csv')
print('Shape of the Dataset:', df.shape)
df.head()
#checking if any null values are there or not
df.isnull().sum()
# checking the crops present in the dataset
df['label'].value_counts()
#checking the mean, min & max value for crops
df.describe().transpose()
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['N'],bins=30,kde=False)
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['P'],bins=30,kde=False)
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['K'],bins=30,kde=False)
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['temperature'],bins=30,kde=False)
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['humidity'],bins=30,kde=False)
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['ph'],bins=30,kde=False)
#Distribution for Agricultural Condition
plt.figure(figsize=(5,5))
sns.set_style('darkgrid')
sns.distplot(df['rainfall'],bins=30,kde=False)
print('Crops which require very high content of Nitrogen in soil: ',df[df['N'] > 120]['label'].unique())
print('\n')
print('Crops which require very high content of Phosphorus in soil: ',df[df['P'] > 100]['label'].unique())
print('\n')
print('Crops which require very high content of Potassium in soil: ',df[df['K'] > 200]['label'].unique())
print('\n')
print('Crops which require low Temperature: ',df[df['temperature'] < 10]['label'].unique())
print('\n')
print('Crops which require high Temperature: ',df[df['temperature'] > 40]['label'].unique())
print('\n')
print('Crops which require low Humidity: ',df[df['humidity'] < 20]['label'].unique())
print('\n')
print('Crops which require low ph: ',df[df['ph'] < 4]['label'].unique())
print('\n')
print('Crops which require high ph: ',df[df['ph'] > 9]['label'].unique())
print('\n')
print('Crops which require high rainfall: ',df[df['rainfall'] > 200]['label'].unique())
#which crops can be grown in summer season, rainy season and winter season
print('Summer Crops:')
print(df[(df['temperature'] > 30) & (df['humidity'] > 50)]['label'].unique())
print('\n')
print('Winter Crops:')
print(df[(df['temperature'] < 10) & (df['humidity'] > 30)]['label'].unique())
print('\n')
print('Rainy Crops:')
print(df[(df['rainfall'] > 200) & (df['humidity'] > 30)]['label'].unique())
from sklearn.cluster import KMeans
#removing the label columns
x = df.drop('label',axis=1)
#selecting all the values of the data
x = x.values
#checking the shape
x.shape
#Elbow Method
wcss = []
for i in range(1, 11):
    km = KMeans(n_clusters=i, init='k-means++', n_init=10, max_iter=300, random_state=0)
    km.fit(x)
    wcss.append(km.inertia_)
plt.figure(figsize=(10,4))
plt.plot(range(1,11),wcss)
plt.title('The Elbow Method')
plt.xlabel('No of Clusters')
plt.ylabel('wcss')
plt.show()
km = KMeans(n_clusters=4,init='k-means++',n_init=10,max_iter=300,random_state=0)
ymeans = km.fit_predict(x)
a = df['label']
ymeans = pd.DataFrame(ymeans)
z = pd.concat([ymeans,a],axis=1)
z = z.rename(columns = {0: 'cluster'})
print('Crops in First Cluster: ',z[z['cluster'] == 0]['label'].unique())
print('\n')
print('Crops in Second Cluster: ',z[z['cluster'] == 1]['label'].unique())
print('\n')
print('Crops in Third Cluster: ',z[z['cluster'] == 2]['label'].unique())
print('\n')
print('Crops in Fourth Cluster: ',z[z['cluster'] == 3]['label'].unique())
X = df.drop(['label'],axis=1)
y = df['label']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
from sklearn.linear_model import LogisticRegression
lm = LogisticRegression()
lm.fit(X_train,y_train)
prediction = lm.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,prediction))
cm = (confusion_matrix(y_test,prediction))
plt.figure(figsize=(10,10))
sns.heatmap(data=cm,annot=True,cmap='viridis')
prediction = lm.predict((np.array([[90,
40,
40,
20,
80,
7,
200]])))
print('The Suggested crop for the given Climatic and Soil condition is: ',prediction)
```
```
%matplotlib inline
import sys
import os
import time
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
import matplotlib as mpl
blues = mpl.cm.get_cmap(plt.get_cmap('Blues'))
greens = mpl.cm.get_cmap(plt.get_cmap('Greens'))
reds = mpl.cm.get_cmap(plt.get_cmap('Reds'))
oranges = mpl.cm.get_cmap(plt.get_cmap('Oranges'))
purples = mpl.cm.get_cmap(plt.get_cmap('Purples'))
greys = mpl.cm.get_cmap(plt.get_cmap('Greys'))
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import warnings
warnings.filterwarnings('ignore')
images = pd.read_csv('../Data/Geolocation_Image_pairs.csv')
fig, axis = plt.subplots(nrows=1,ncols=1,figsize=(11,5))
_ = axis.hist((images['SIZE1']+images['SIZE2']).values,bins=60,color=blues(150)) #image sizes are in KB we converted it to MB here
_ = axis.set_xticklabels(axis.get_xticks().astype('int').tolist(),fontsize=16)
_ = axis.set_yticklabels(axis.get_yticks().astype('int').tolist(),fontsize=16)
_ = axis.grid('on')
_ = axis.legend(fontsize=14)
_ = axis.set_xlabel('Image Size in MBs',fontsize=16)
_ = axis.set_title('Histogram of Image Sizes', fontsize=16)
_ = axis.set_ylabel('Number of occurences', fontsize=16)
print('Distribution Characteristics: Median', (images['SIZE1']+images['SIZE2']).median(),
      'Mean', (images['SIZE1']+images['SIZE2']).mean(),
      'STD', (images['SIZE1']+images['SIZE2']).std())
print('Min', (images['SIZE1']+images['SIZE2']).min(), 'Max', (images['SIZE1']+images['SIZE2']).max())
print('Mean', (images['SIZE1']+images['SIZE2']).mean(), 'Sigma', (images['SIZE1']+images['SIZE2']).std())
print('1 Sigma from Mean', [(images['SIZE1']+images['SIZE2']).mean() - (images['SIZE1']+images['SIZE2']).std(), (images['SIZE1']+images['SIZE2']).mean() + (images['SIZE1']+images['SIZE2']).std()])
print('2 Sigma from Mean', [(images['SIZE1']+images['SIZE2']).mean() - 2 * (images['SIZE1']+images['SIZE2']).std(), (images['SIZE1']+images['SIZE2']).mean() + 2 * (images['SIZE1']+images['SIZE2']).std()])
data1 = pd.read_csv('../Scipts/Design2a/node1_images.csv')
data2 = pd.read_csv('../Scipts/Design2a/node2_images.csv')
data3 = pd.read_csv('../Scipts/Design2a/node3_images.csv')
data4 = pd.read_csv('../Scipts/Design2a/node4_images.csv')
fig, axis = plt.subplots(nrows=1,ncols=4,figsize=(20,7.5),sharex='row',sharey='row')
_ = axis[0].hist((data1['SIZE1']+data1['SIZE2']).values,bins=60,color=blues(150)) #image sizes are in KB we converted it to MB here
_ = axis[0].set_xticklabels(axis[0].get_xticks().astype('int').tolist(),fontsize=16)
_ = axis[0].set_yticklabels(axis[0].get_yticks().astype('int').tolist(),fontsize=16)
_ = axis[0].grid('on')
_ = axis[0].legend(fontsize=14)
_ = axis[0].set_xlabel('Node 1 Image Size in MBs',fontsize=16)
_ = axis[0].set_ylabel('Number of occurences', fontsize=16)
_ = axis[1].hist((data2['SIZE1']+data2['SIZE2']).values,bins=60,color=blues(150)) #image sizes are in KB we converted it to MB here
_ = axis[1].set_xticklabels(axis[1].get_xticks().astype('int').tolist(),fontsize=16)
_ = axis[1].set_yticklabels(axis[1].get_yticks().astype('int').tolist(),fontsize=16)
_ = axis[1].grid('on')
_ = axis[1].legend(fontsize=14)
_ = axis[1].set_xlabel('Node 2 Image Size in MBs',fontsize=16)
_ = axis[1].set_ylabel('Number of occurences', fontsize=16)
_ = axis[2].hist((data3['SIZE1']+data3['SIZE2']).values,bins=60,color=blues(150)) #image sizes are in KB we converted it to MB here
_ = axis[2].set_xticklabels(axis[2].get_xticks().astype('int').tolist(),fontsize=16)
_ = axis[2].set_yticklabels(axis[2].get_yticks().astype('int').tolist(),fontsize=16)
_ = axis[2].grid('on')
_ = axis[2].legend(fontsize=14)
_ = axis[2].set_xlabel('Node 3 Image Size in MBs',fontsize=16)
_ = axis[2].set_ylabel('Number of occurences', fontsize=16)
_ = axis[3].hist((data4['SIZE1']+data4['SIZE2']).values,bins=60,color=blues(150)) #image sizes are in KB we converted it to MB here
_ = axis[3].set_xticklabels(axis[3].get_xticks().astype('int').tolist(),fontsize=16)
_ = axis[3].set_yticklabels(axis[3].get_yticks().astype('int').tolist(),fontsize=16)
_ = axis[3].grid('on')
_ = axis[3].legend(fontsize=14)
_ = axis[3].set_xlabel('Node 4 Image Size in MBs',fontsize=16)
```
# Assignment 3: Question Answering
Welcome to this week's assignment of Course 4. In this assignment you will explore question answering by implementing the "Text-To-Text Transfer Transformer" (better known as T5). Since you implemented transformers from scratch last week, you will now be able to put them to use.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/w4_t5_qa.png" width="300px">
## Outline
- [Overview](#0)
- [Part 0: Importing the Packages](#0)
- [Part 1: C4 Dataset](#1)
- [1.1 Pre-Training Objective](#1.1)
- [1.2 Process C4](#1.2)
- [1.2.1 Decode to natural language](#1.2.1)
- [1.3 Tokenizing and Masking](#1.3)
- [Exercise 01](#ex01)
- [1.4 Creating the Pairs](#1.4)
- [Part 2: Transformer](#2)
- [2.1 Transformer Encoder](#2.1)
- [2.1.1 The Feedforward Block](#2.1.1)
- [Exercise 02](#ex02)
- [2.1.2 The Encoder Block](#2.1.2)
- [Exercise 03](#ex03)
- [2.1.3 The Transformer Encoder](#2.1.3)
- [Exercise 04](#ex04)
<a name='0'></a>
### Overview
This assignment will be different from the two previous ones. Due to memory and time constraints of this environment you will not be able to train a model and use it for inference. Instead you will create the necessary building blocks for the transformer encoder model and will use a pretrained version of the same model in two ungraded labs after this assignment.
After completing these 3 (1 graded and 2 ungraded) labs you will:
* Implement the code necessary for Bidirectional Encoder Representations from Transformers (BERT).
* Understand how the C4 dataset is structured.
* Use a pretrained model for inference.
* Understand how the "Text to Text Transfer from Transformers" or T5 model works.
<a name='0'></a>
# Part 0: Importing the Packages
```
%%capture
!pip install trax
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/assets/bpe_data.txt
!wget https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/pre-trained-models/w4_t5_sentencepiece.model
import ast
import string
import textwrap
import itertools
import numpy as np
import trax
from trax import layers as tl
from trax.supervised import decoding
# Will come handy later.
wrapper = textwrap.TextWrapper(width=70)
# Set random seed
np.random.seed(42)
!pip list | grep trax
```
<a name='1'></a>
## Part 1: C4 Dataset
The [C4](https://www.tensorflow.org/datasets/catalog/c4) is a huge data set. For the purpose of this assignment you will use a few examples out of it which are present in `data.txt`. C4 is based on the [common crawl](https://commoncrawl.org/) project. Feel free to read more on their website.
Run the cell below to see what the examples look like.
```
# load example jsons
example_jsons = list(map(ast.literal_eval, open("bpe_data.txt")))
for i in range(5):
    print(f"example number {i+1}: \n\n{example_jsons[i]} \n")
```
Notice the `b` before each string? It means this data comes as `bytes` rather than `str` objects. A byte string is a sequence of bytes, so for the rest of the assignment the name `strings` will be used to describe the data.
To check this run the following cell:
```
type(example_jsons[0].get('text'))
example_jsons[0].get('text')
```
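The distinction can also be seen with plain Python, independent of the dataset (a minimal sketch):

```python
# bytes vs str: the b'...' literal is a sequence of bytes, and an explicit
# decode step is needed to obtain a Unicode string.
raw = b"Beginners BBQ Class"
text = raw.decode("utf-8")

print(type(raw).__name__)   # bytes
print(type(text).__name__)  # str
```

Encoding the decoded string with the same codec round-trips back to the original bytes.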
<a name='1.1'></a>
### 1.1 Pre-Training Objective
**Note:** The word "mask" will be used throughout this assignment in context of hiding/removing word(s)
You will be implementing the BERT loss as shown in the following image.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/w4_t5_loss.png" width="600px" height = "400px">
Assume you have the following text: <span style = "color:blue"> **Thank you <span style = "color:red">for inviting </span> me to your party <span style = "color:red">last</span> week** </span>
Now as input you will mask the words in red in the text:
<span style = "color:blue"> **Input:**</span> Thank you **X** me to your party **Y** week.
<span style = "color:blue">**Output:**</span> The model should predict the words(s) for **X** and **Y**.
**Z** is used to represent the end.
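A word-level toy version of this masking scheme may make the picture concrete. Note this is only an illustration: the real pipeline works on SentencePiece token ids and samples mask positions at random, whereas here the positions and sentinel letters are hard-coded.

```python
# Toy sketch of the masking objective described above: masked spans are
# replaced by one sentinel each in the inputs, and the targets list each
# sentinel followed by the words it hid.
sentence = "Thank you for inviting me to your party last week".split()
mask_positions = {2, 3, 8}           # "for", "inviting", "last"
sentinel_letters = iter(["X", "Y", "Z"])

inputs, targets, prev_masked = [], [], False
for i, word in enumerate(sentence):
    if i in mask_positions:
        if not prev_masked:          # one sentinel per contiguous masked span
            s = next(sentinel_letters)
            inputs.append(s)
            targets.append(s)
        targets.append(word)
        prev_masked = True
    else:
        inputs.append(word)
        prev_masked = False

print(" ".join(inputs))   # Thank you X me to your party Y week
print(" ".join(targets))  # X for inviting Y last
```

The graded `tokenize_and_mask` function later in this notebook follows the same logic, but on token ids and with sentinel ids taken from the top of the vocabulary.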
<a name='1.2'></a>
### 1.2 Process C4
C4 only has the plain string `text` field, so you will tokenize and have `inputs` and `targets` out of it for supervised learning. Given your inputs, the goal is to predict the targets during training.
You will now take the `text` and convert it to `inputs` and `targets`.
```
natural_language_texts = [example_json["text"] for example_json in example_jsons]
natural_language_texts[4]
```
<a name='1.2.1'></a>
#### 1.2.1 Decode to natural language
The following functions will help you `detokenize` and `tokenize` the text data.
The `sentencepiece` vocabulary was used to convert from text to ids. This vocabulary file is loaded and used in this helper functions.
`natural_language_texts` has the text from the examples we gave you.
Run the cells below to see what is going on.
```
PAD, EOS, UNK = 0, 1, 2
def detokenize(np_array):
return trax.data.detokenize(
np_array,
vocab_type="sentencepiece",
vocab_file="w4_t5_sentencepiece.model",
vocab_dir=".",
)
def tokenize(s):
# The trax.data.tokenize function operates on streams,
# that's why we have to create 1-element stream with iter
# and later retrieve the result with next.
return next(
trax.data.tokenize(
iter([s]),
vocab_type="sentencepiece",
vocab_file="w4_t5_sentencepiece.model",
vocab_dir=".",
)
)
# printing the encoding of each word to see how subwords are tokenized
tokenized_text = [(tokenize(word).tolist(), word) for word in natural_language_texts[0].split()]
print(tokenized_text, '\n')
print(f"tokenized: {tokenize('Beginners')}\ndetokenized: {detokenize(tokenize('Beginners'))}")
```
As you can see above, you were able to take a piece of string and tokenize it.
Now you will create `input` and `target` pairs that will allow you to train your model. T5 uses the ids at the end of the vocab file as sentinels. For example, it will replace:
- `vocab_size - 1` by `<Z>`
- `vocab_size - 2` by `<Y>`
- and so forth.
Each sentinel is displayed as a single ASCII character for readability.
The `pretty_decode` function below, which you will use in a bit, helps in handling the type when decoding. Take a look and try to understand what the function is doing.
Notice that:
```python
string.ascii_letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
```
**NOTE:** Targets may have more than the 52 sentinels we replace, but this is just to give you an idea of things.
```
vocab_size = trax.data.vocab_size(
vocab_type="sentencepiece",
vocab_file="w4_t5_sentencepiece.model",
vocab_dir="."
)
vocab_size
def get_sentinels(vocab_size=vocab_size, display=False):
sentinels = {}
for i, char in enumerate(reversed(string.ascii_letters), 1):
decoded_text = detokenize([vocab_size - i])
# sentinels, ex <Z> - <a>
sentinels[decoded_text] = f"<{char}>"
if display:
print(f"The sentinel is <{char}> and the decoded token is", decoded_text)
return sentinels
sentinels = get_sentinels(vocab_size, display=True)
def pretty_decode(encoded_str_list, sentinels=sentinels):
if isinstance(encoded_str_list, (str, bytes)):
for token, char in sentinels.items():
encoded_str_list = encoded_str_list.replace(token, char)
return encoded_str_list
return pretty_decode(detokenize(encoded_str_list))
pretty_decode("I want to dress up as an Intellectual this halloween.")
```
The functions above make your `inputs` and `targets` more readable. For example, you might see something like this once you implement the masking function below.
- <span style="color:red"> Input sentence: </span> Younes and Lukasz were working together in the lab yesterday after lunch.
- <span style="color:red">Input: </span> Younes and Lukasz **Z** together in the **Y** yesterday after lunch.
- <span style="color:red">Target: </span> **Z** were working **Y** lab.
<a name='1.3'></a>
### 1.3 Tokenizing and Masking
You will now implement the `tokenize_and_mask` function. This function will allow you to tokenize and mask input words with a noise probability. We usually mask 15% of the words.
<a name='ex01'></a>
### Exercise 01
```
# UNQ_C1
# GRADED FUNCTION: tokenize_and_mask
def tokenize_and_mask(text, vocab_size=vocab_size, noise=0.15,
randomizer=np.random.uniform, tokenize=tokenize):
"""Tokenizes and masks a given input.
Args:
text (str or bytes): Text input.
vocab_size (int, optional): Size of the vocabulary. Defaults to vocab_size.
noise (float, optional): Probability of masking a token. Defaults to 0.15.
randomizer (function, optional): Function that generates random values. Defaults to np.random.uniform.
tokenize (function, optional): Tokenizer function. Defaults to tokenize.
Returns:
tuple: Tuple of lists of integers associated to inputs and targets.
"""
# current sentinel number (starts at 0)
cur_sentinel_num = 0
# inputs
inps = []
# targets
targs = []
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
# prev_no_mask is True if the previous token was NOT masked, False otherwise
# set prev_no_mask to True
prev_no_mask = True
# loop through tokenized `text`
for token in tokenize(text):
# check if the `noise` is greater than a random value (weighted coin flip)
if randomizer() < noise:
# check to see if the previous token was not masked
if prev_no_mask==True: # add new masked token at end_id
# number of masked tokens increases by 1
cur_sentinel_num += 1
# compute `end_id` by subtracting current sentinel value out of the total vocabulary size
end_id = vocab_size - cur_sentinel_num
# append `end_id` at the end of the targets
targs.append(end_id)
# append `end_id` at the end of the inputs
inps.append(end_id)
# append `token` at the end of the targets
targs.append(token)
# set prev_no_mask accordingly
prev_no_mask = False
else: # don't have two masked tokens in a row
# append `token ` at the end of the inputs
inps.append(token)
# set prev_no_mask accordingly
prev_no_mask = True
### END CODE HERE ###
return inps, targs
# Some logic to mock a np.random value generator
# Needs to be in the same cell for it to always generate same output
def testing_rnd():
def dummy_generator():
vals = np.linspace(0, 1, 10)
cyclic_vals = itertools.cycle(vals)
for _ in range(100):
yield next(cyclic_vals)
dumr = itertools.cycle(dummy_generator())
def dummy_randomizer():
return next(dumr)
return dummy_randomizer
input_str = natural_language_texts[0]
print(f"input string:\n\n{input_str}\n")
inps, targs = tokenize_and_mask(input_str, randomizer=testing_rnd())
print(f"tokenized inputs:\n\n{inps}\n")
print(f"targets:\n\n{targs}")
```
#### **Expected Output:**
```CPP
b'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'
tokenized inputs:
[31999, 15068, 4501, 3, 12297, 3399, 16, 5964, 7115, 31998, 531, 25, 241, 12, 129, 394, 44, 492, 31997, 58, 148, 56, 43, 8, 1004, 6, 474, 31996, 39, 4793, 230, 5, 2721, 6, 1600, 1630, 31995, 1150, 4501, 15068, 16127, 6, 9137, 2659, 5595, 31994, 782, 3624, 14627, 15, 12612, 277, 5, 216, 31993, 2119, 3, 9, 19529, 593, 853, 21, 921, 31992, 12, 129, 394, 28, 70, 17712, 1098, 5, 31991, 3884, 25, 762, 25, 174, 12, 214, 12, 31990, 3, 9, 3, 23405, 4547, 15068, 2259, 6, 31989, 6, 5459, 6, 13618, 7, 6, 3604, 1801, 31988, 6, 303, 24190, 11, 1472, 251, 5, 37, 31987, 36, 16, 8, 853, 19, 25264, 399, 568, 31986, 21, 21380, 7, 34, 19, 339, 5, 15746, 31985, 8, 583, 56, 36, 893, 3, 9, 3, 31984, 9486, 42, 3, 9, 1409, 29, 11, 25, 31983, 12246, 5977, 13, 284, 3604, 24, 19, 2657, 31982]
targets:
[31999, 12847, 277, 31998, 9, 55, 31997, 3326, 15068, 31996, 48, 30, 31995, 727, 1715, 31994, 45, 301, 31993, 56, 36, 31992, 113, 2746, 31991, 216, 56, 31990, 5978, 16, 31989, 379, 2097, 31988, 11, 27856, 31987, 583, 12, 31986, 6, 11, 31985, 26, 16, 31984, 17, 18, 31983, 56, 36, 31982, 5]
```
You will now use the inputs and the targets from the `tokenize_and_mask` function you implemented above. Take a look at the masked sentence using your `inps` and `targs` from the sentence above.
```
print('Inputs: \n\n', pretty_decode(inps))
print('\nTargets: \n\n', pretty_decode(targs))
```
<a name='1.4'></a>
### 1.4 Creating the Pairs
You will now create pairs using your dataset. You will iterate over your data and create (inp, targ) pairs using the functions that we have given you.
```
inputs_targets_pairs = [tokenize_and_mask(text) for text in natural_language_texts]
def display_input_target_pairs(inputs_targets_pairs):
for i, inp_tgt_pair in enumerate(inputs_targets_pairs, 1):
inps, tgts = inp_tgt_pair
inps, tgts = pretty_decode(inps), pretty_decode(tgts)
print(f'[{i}]\n\n'
f'inputs:\n{wrapper.fill(text=inps)}\n\n'
f'targets:\n{wrapper.fill(text=tgts)}\n\n\n\n')
display_input_target_pairs(inputs_targets_pairs)
```
<a name='2'></a>
# Part 2: Transformer
We now load a Transformer model checkpoint that has been pre-trained using the C4 dataset above and decode from it. This will save you a lot of time compared to training the model yourself. Later in this notebook, we will show you how to fine-tune your model.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/w4_t5_fulltransformer.png" width="300px" height="600px">
Start by loading in the model. We copy the checkpoint to a local directory for speed; otherwise initialization takes a very long time. Last week you implemented the decoder part of the transformer. Now you will implement the encoder part. Concretely, you will implement the following.
<img src="https://github.com/martin-fabbri/colab-notebooks/raw/master/deeplearning.ai/nlp/images/w4_t5_encoder.png" width="300px" height="600px">
<a name='2.1'></a>
### 2.1 Transformer Encoder
You will now implement the transformer encoder. Concretely you will implement two functions. The first function is `FeedForwardBlock`.
<a name='2.1.1'></a>
#### 2.1.1 The Feedforward Block
The `FeedForwardBlock` function is an important one so you will start by implementing it. To do so, you need to return a list of the following:
- [`tl.LayerNorm()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.normalization.LayerNorm) = layer normalization.
- [`tl.Dense(d_ff)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) = fully connected layer.
- [`activation`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.activation_fns.Relu) = activation relu, tanh, sigmoid etc.
- `dropout_middle` = we gave you this function (don't worry about its implementation).
- [`tl.Dense(d_model)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) = fully connected layer with same dimension as the model.
- `dropout_final` = we gave you this function (don't worry about its implementation).
You can always take a look at [trax documentation](https://trax-ml.readthedocs.io/en/latest/) if needed.
**Instructions**: Implement the feedforward part of the transformer. You will be returning a list.
<a name='ex02'></a>
### Exercise 02
```
# UNQ_C2
# GRADED FUNCTION: FeedForwardBlock
def FeedForwardBlock(d_model, d_ff, dropout, dropout_shared_axes, mode, activation):
"""Returns a list of layers implementing a feed-forward block.
Args:
d_model: int: depth of embedding
d_ff: int: depth of feed-forward layer
dropout: float: dropout rate (how much to drop out)
dropout_shared_axes: list of integers, axes to share dropout mask
mode: str: 'train' or 'eval'
activation: the non-linearity in feed-forward layer
Returns:
A list of layers which maps vectors to vectors.
"""
dropout_middle = tl.Dropout(rate=dropout,
shared_axes=dropout_shared_axes,
mode=mode)
dropout_final = tl.Dropout(rate=dropout,
shared_axes=dropout_shared_axes,
mode=mode)
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
ff_block = [
# trax Layer normalization
tl.LayerNorm(),
# trax Dense layer using `d_ff`
tl.Dense(d_ff),
# activation() layer - you need to call (use parentheses) this func!
activation(),
# dropout middle layer
dropout_middle,
# trax Dense layer using `d_model`
tl.Dense(d_model),
# dropout final layer
dropout_final,
]
### END CODE HERE ###
return ff_block
# Print the block layout
feed_forward_example = FeedForwardBlock(d_model=512, d_ff=2048, dropout=0.8, dropout_shared_axes=0, mode = 'train', activation = tl.Relu)
print(feed_forward_example)
```
#### **Expected Output:**
```CPP
[LayerNorm, Dense_2048, Relu, Dropout, Dense_512, Dropout]
```
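As a quick sanity check on that layout, the block's trainable parameter count can be tallied by hand (the `Dropout` layers contribute nothing):

```python
d_model, d_ff = 512, 2048

layernorm = 2 * d_model               # scale + bias vectors
dense_ff = d_model * d_ff + d_ff      # weights + bias of Dense_2048
dense_out = d_ff * d_model + d_model  # weights + bias of Dense_512
total = layernorm + dense_ff + dense_out
print(total)  # 2100736
```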
<a name='2.1.2'></a>
#### 2.1.2 The Encoder Block
The encoder block will use the `FeedForwardBlock`.
You will have to build two residual connections. Inside the first residual connection you will have the `tl.LayerNorm()`, `attention`, and `dropout_` layers. The second residual connection will have the `feed_forward`.
You will also need to implement `feed_forward`, `attention` and `dropout_` blocks.
So far you haven't seen the [`tl.Attention()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.Attention) and [`tl.Residual()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Residual) layers so you can check the docs by clicking on them.
```
# UNQ_C3
# GRADED FUNCTION: EncoderBlock
def EncoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation, FeedForwardBlock=FeedForwardBlock):
"""
Returns a list of layers that implements a Transformer encoder block.
The input to the layer is a pair, (activations, mask), where the mask was
created from the original source tokens to prevent attending to the padding
part of the input.
Args:
d_model (int): depth of embedding.
d_ff (int): depth of feed-forward layer.
n_heads (int): number of attention heads.
dropout (float): dropout rate (how much to drop out).
dropout_shared_axes (int): axes on which to share dropout mask.
mode (str): 'train' or 'eval'.
ff_activation (function): the non-linearity in feed-forward layer.
FeedForwardBlock (function): A function that returns the feed forward block.
Returns:
list: A list of layers that maps (activations, mask) to (activations, mask).
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
# Attention block
attention = tl.Attention(
# Use dimension of the model
d_feature=d_model,
# Set it equal to number of attention heads
n_heads=n_heads,
# Set it equal `dropout`
dropout=dropout,
# Set it equal `mode`
mode=mode
)
# Call the function `FeedForwardBlock` (implemented before) and pass in the parameters
feed_forward = FeedForwardBlock(
d_model,
d_ff,
dropout,
dropout_shared_axes,
mode,
ff_activation
)
# Dropout block
dropout_ = tl.Dropout(
# set it equal to `dropout`
rate=dropout,
# set it equal to the axes on which to share dropout mask
shared_axes=dropout_shared_axes,
# set it equal to `mode`
mode=mode
)
encoder_block = [
# add `Residual` layer
tl.Residual(
# add norm layer
tl.LayerNorm(),
# add attention
attention,
# add dropout
dropout_,
),
# add another `Residual` layer
tl.Residual(
# add feed forward
feed_forward,
),
]
### END CODE HERE ###
return encoder_block
# Print the block layout
encoder_example = EncoderBlock(d_model=512, d_ff=2048, n_heads=6, dropout=0.8, dropout_shared_axes=0, mode = 'train', ff_activation=tl.Relu)
print(encoder_example)
```
<a name='2.1.3'></a>
### 2.1.3 The Transformer Encoder
Now that you have implemented the `EncoderBlock`, it is time to build the full encoder. BERT, or Bidirectional Encoder Representations from Transformers, is one such encoder.
You will implement its core code in the function below by using the functions you have coded so far.
The model takes in many hyperparameters, such as the `vocab_size`, the number of classes, the dimension of your model, etc. You want to build a generic function that will take in many parameters, so you can use it later. At the end of the day, anyone can just load in an API and call transformer, but we think it is important to make sure you understand how it is built. Let's get started.
**Instructions:** For this encoder you will need a `positional_encoder` first (which is already provided) followed by `n_layers` encoder blocks, which are the same encoder blocks you previously built. Once you store the `n_layers` `EncoderBlock`s in a list, you are going to create a `Serial` layer with the following sublayers:
- [`tl.Branch`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Branch): helps with the branching and has the following sublayers:
- `positional_encoder`.
- [`tl.PaddingMask()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.PaddingMask): layer that maps integer sequences to padding masks.
- Your list of `EncoderBlock`s
- [`tl.Select([0], n_in=2)`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Select): Copies, reorders, or deletes stack elements according to indices.
- [`tl.LayerNorm()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.normalization.LayerNorm).
- [`tl.Mean()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Mean): Mean along the first axis.
- `tl.Dense()` with n_units set to n_classes.
- `tl.LogSoftmax()`
Please refer to the [trax documentation](https://trax-ml.readthedocs.io/en/latest/) for further information.
<a name='ex04'></a>
### Exercise 04
```
# UNQ_C4
# GRADED FUNCTION: TransformerEncoder
def TransformerEncoder(vocab_size=vocab_size,
n_classes=10,
d_model=512,
d_ff=2048,
n_layers=6,
n_heads=8,
dropout=0.1,
dropout_shared_axes=None,
max_len=2048,
mode='train',
ff_activation=tl.Relu,
EncoderBlock=EncoderBlock):
"""
Returns a Transformer encoder model.
The input to the model is a tensor of tokens.
Args:
vocab_size (int): vocab size. Defaults to vocab_size.
n_classes (int): how many classes on output. Defaults to 10.
d_model (int): depth of embedding. Defaults to 512.
d_ff (int): depth of feed-forward layer. Defaults to 2048.
n_layers (int): number of encoder/decoder layers. Defaults to 6.
n_heads (int): number of attention heads. Defaults to 8.
dropout (float): dropout rate (how much to drop out). Defaults to 0.1.
dropout_shared_axes (int): axes on which to share dropout mask. Defaults to None.
max_len (int): maximum symbol length for positional encoding. Defaults to 2048.
mode (str): 'train' or 'eval'. Defaults to 'train'.
ff_activation (function): the non-linearity in feed-forward layer. Defaults to tl.Relu.
EncoderBlock (function): Returns the encoder block. Defaults to EncoderBlock.
Returns:
trax.layers.combinators.Serial: A Transformer model as a layer that maps
from a tensor of tokens to activations over a set of output classes.
"""
positional_encoder = [
tl.Embedding(vocab_size, d_model),
tl.Dropout(rate=dropout, shared_axes=dropout_shared_axes, mode=mode),
tl.PositionalEncoding(max_len=max_len)
]
### START CODE HERE (REPLACE INSTANCES OF 'None' WITH YOUR CODE) ###
# Use the function `EncoderBlock` (implemented above) and pass in the parameters over `n_layers`
encoder_blocks = [EncoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes, mode, ff_activation) for _ in range(n_layers)]
# Assemble and return the model.
return tl.Serial(
# Encode
tl.Branch(
# Use `positional_encoder`
positional_encoder,
# Use trax padding mask
tl.PaddingMask(),
),
# Use `encoder_blocks`
encoder_blocks,
# Use select layer
tl.Select([0], n_in=2),
# Use trax layer normalization
tl.LayerNorm(),
# Map to output categories.
# Use trax mean. set axis to 1
tl.Mean(axis=1),
# Use trax Dense using `n_classes`
tl.Dense(n_classes),
# Use trax log softmax
tl.LogSoftmax(),
)
### END CODE HERE ###
# Run this cell to see the structure of your model
# Only 1 layer is used to keep the output readable
TransformerEncoder(n_layers=1)
```
<a href="https://colab.research.google.com/github/rlworkgroup/garage/blob/master/examples/jupyter/custom_env.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Demonstrate usage of [garage](https://github.com/rlworkgroup/garage) with a custom `openai/gym` environment in a Jupyter notebook.
## Install pre-requisites
```
%%shell
echo "abcd" > mujoco_fake_key
git clone --depth 1 https://github.com/rlworkgroup/garage/
cd garage
bash scripts/setup_colab.sh --mjkey ../mujoco_fake_key --no-modify-bashrc > /dev/null
raise Exception("Please restart your runtime so that the installed dependencies for 'garage' can be loaded, and then resume running the notebook")
```
---
# custom gym environment
```
# Create a gym env that simulates the current water treatment plant
# Based on https://github.com/openai/gym/blob/master/gym/envs/toy_text/nchain.py
import gym
from gym import spaces
import numpy as np
import random
# Gym env
class MyEnv(gym.Env):
"""Custom gym environment
Observation: Coin flip (Discrete binary: 0/1)
Actions: Guess of coin flip outcome (Discrete binary: 0/1)
Reward: Guess the coin flip correctly
Episode termination: Make 5 correct guesses within 20 attempts
"""
def __init__(self):
# set action/observation spaces
self.action_space = spaces.Discrete(2)
self.observation_space = spaces.Discrete(2)
self.reset()
def step(self, action):
assert self.action_space.contains(action), "action not in action space!"
# flip a coin
self.state = np.random.rand() < 0.5
# increment number of attempts
self.attempt += 1
# calculate reward of this element
reward = (action == self.state)
self.score += reward
# allow a maximum number of attempts or reach max score
done = (self.attempt >= 20) | (self.score >= 5)
return self.state, reward, done, {}
def reset(self):
# accumulate score
self.score = 0
# count number of attempts
self.attempt = 0
return 0
# some smoke testing
env_test = MyEnv()
observation = env_test.reset()
for step in range(40):
action = np.random.rand() < 0.5
observation, reward, done, _ = env_test.step(action)
print("step %i: action=%i, observation=%i => reward = %i, done = %s" % (step, action, observation, reward, done))
if done: break
```
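A back-of-the-envelope check on the termination rule: with a fair coin, even a purely random policy reaches the 5-correct score cap within 20 attempts in almost every episode, so this env is only meant as a smoke test.

```python
from math import comb

# P(at least 5 of 20 fair coin-flip guesses are correct)
p_under_5 = sum(comb(20, k) for k in range(5)) / 2**20
print(f"P(random policy reaches the score cap) = {1 - p_under_5:.4f}")  # 0.9941
```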
# Prepare training
```
# The contents of this cell are mostly copied from garage/examples/...
from garage.np.baselines import LinearFeatureBaseline # <<<<<< requires restarting the runtime in colab after the 1st dependency installation above
from garage.envs import normalize
#from garage.envs.box2d import CartpoleEnv # no need since will use WtpDesignerEnv_v0 defined above
from garage.experiment import run_experiment
from garage.tf.algos import TRPO
from garage.tf.envs import TfEnv
#from garage.tf.policies import GaussianMLPPolicy
from garage.tf.policies import CategoricalMLPPolicy
import gym # already imported before
from garage.experiment import LocalRunner
from garage.logger import logger, StdOutput
# register the env with gym
# https://github.com/openai/gym/tree/master/gym/envs#how-to-create-new-environments-for-gym
from gym.envs.registration import register
register(
id='MyEnv-v0',
entry_point=MyEnv,
)
# test registration was successful
env = gym.make("MyEnv-v0")
# env = TfEnv(normalize(gym.make("MyEnv-v0")))
# env = TfEnv(env_name='MyEnv-v0')
# Wrap the environment to convert the observation to numpy array
# Not sure why this is necessary ATM
# Based on https://github.com/openai/gym/blob/5404b39d06f72012f562ec41f60734bd4b5ceb4b/gym/wrappers/dict.py
# from gym import wrappers
class NpWrapper(gym.ObservationWrapper):
def observation(self, observation):
obs = np.array(observation).astype('int')
return obs
env = NpWrapper(env)
env = TfEnv(normalize(env))
policy = CategoricalMLPPolicy(
name="policy", env_spec=env.spec, hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)
algo = TRPO(
env_spec=env.spec,
policy=policy,
baseline=baseline,
max_path_length=50,
n_itr=50,
discount=0.99,
max_kl_step=0.01
)
```
## Start training
```
# log to stdout
logger.add_output(StdOutput())
# start a tensorflow session so that we can keep it open after training and use the trained network to see it performing
import tensorflow as tf
sess = tf.InteractiveSession()
# initialize all variables before training
sess.run(tf.global_variables_initializer())
# Train the policy (neural network) on the environment
runner = LocalRunner()
runner.setup(algo=algo, env=env)
# use n_epochs = 2 for quick demo
runner.train(n_epochs=2, batch_size=10000, plot=False)
# test results
n_experiments = 10
row_all = []
for i in range(n_experiments):
#print("experiment ", i+1)
# reset
obs_initial = env.reset()
# start
done = False
obs_i = obs_initial
while not done:
row_i = {}
row_i['exp'] = i + 1
row_i['obs'] = obs_i
act_i, _ = policy.get_action(obs_i)
row_i['act'] = act_i
obs_i, rew_i, done, _ = env.step(act_i)
row_i['obs'] = obs_i
row_i['rew'] = rew_i
row_all.append(row_i)
if done: break
#env.close()
import pandas as pd
df = pd.DataFrame(row_all)
pd.DataFrame({
'score': df.groupby('exp')['rew'].sum(),
'nstep': df.groupby('exp')['rew'].count()
})
```
# Hyperparameter tuning with Hyperopt
In this lab, you will learn to tune hyperparameters in Azure Databricks. This lab will cover the following exercises:
- Exercise 2: Using Hyperopt for hyperparameter tuning.
To upload the necessary data, please follow the instructions in the lab guide.
## Attach notebook to your cluster
Before executing any cells in the notebook, you need to attach it to your cluster. Make sure that the cluster is running.
In the notebook's toolbar, select the drop down arrow next to Detached, and then select your cluster under Attach to.
Make sure you run each cell in order.
## Exercise 2: Using Hyperopt for hyperparameter tuning
[Hyperopt](https://github.com/hyperopt/hyperopt) is a Python library for hyperparameter tuning. Databricks Runtime for Machine Learning includes an optimized and enhanced version of Hyperopt, including automated MLflow tracking and the `SparkTrials` class for distributed tuning.
This exercise illustrates how to scale up hyperparameter tuning for a single-machine Python ML algorithm and track the results using MLflow. Run the cell below to import the required packages.
```
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from hyperopt import fmin, tpe, hp, SparkTrials, STATUS_OK, Trials
# If you are running Databricks Runtime for Machine Learning, `mlflow` is already installed and you can skip the following line.
import mlflow
```
### Load the data
In this exercise, you will use a dataset that includes chemical and visual features of wine samples to classify them based on their cultivar (grape variety).
The dataset consists of 13 numeric features and a classification label with the following classes:
- **0** (variety A)
- **1** (variety B)
- **2** (variety C)
Run the following cell to load the table into a Spark dataframe and review it.
```
df = spark.sql("select * from wine")
display(df)
```
Separate the features from the label (WineVariety):
```
import numpy as np
df_features = df.select('Alcohol','Malic_acid','Ash','Alcalinity','Magnesium','Phenols','Flavanoids','Nonflavanoids','Proanthocyanins','Color_intensity','Hue','OD280_315_of_diluted_wines','Proline').collect()
X = np.array(df_features)
df_label = df.select('WineVariety').collect()
y = np.array(df_label)
```
Check the first four wines to see if the data is loaded in correctly:
```
for n in range(0,4):
print("Wine", str(n+1), "\n Features:",list(X[n]), "\n Label:", y[n])
```
## Part 1. Single-machine Hyperopt workflow
Here are the steps in a Hyperopt workflow:
1. Define a function to minimize.
2. Define a search space over hyperparameters.
3. Select a search algorithm.
4. Run the tuning algorithm with Hyperopt `fmin()`.
For more information, see the [Hyperopt documentation](https://github.com/hyperopt/hyperopt/wiki/FMin).
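To make those four steps concrete before wiring in Hyperopt, here is the same loop as plain random search over a made-up one-parameter objective (a stand-in for the model's validation loss):

```python
import numpy as np

# 1. A function to minimize (a stand-in for validation loss).
def objective(c):
    return (np.log(c) - 0.7) ** 2          # minimized near c = exp(0.7) ~ 2.01

# 2. A search space: lognormal prior over a positive parameter.
rng = np.random.default_rng(0)
def sample():
    return float(np.exp(rng.normal(0.0, 1.0)))

# 3. A search algorithm: plain random search (the `rand.suggest` analogue).
# 4. Run the loop for a fixed number of evaluations and keep the argmin.
trials = [(c, objective(c)) for c in (sample() for _ in range(200))]
best_c, best_loss = min(trials, key=lambda t: t[1])
print(f"best C = {best_c:.3f}, loss = {best_loss:.5f}")
```

Hyperopt's `fmin()` packages exactly this loop, with `tpe.suggest` replacing the blind sampling by an adaptive proposal distribution.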
### Define a function to minimize
In this example, we use a support vector machine classifier. The objective is to find the best value for the regularization parameter `C`.
Most of the code for a Hyperopt workflow is in the objective function. This example uses the [support vector classifier from scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).
```
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
def objective(C):
# Create a support vector classifier model
clf = SVC(C)
# Use the cross-validation accuracy to compare the models' performance
accuracy = cross_val_score(clf, X, y).mean()
# Hyperopt tries to minimize the objective function. A higher accuracy value means a better model, so you must return the negative accuracy.
return {'loss': -accuracy, 'status': STATUS_OK}
```
### Define the search space over hyperparameters
See the [Hyperopt docs](https://github.com/hyperopt/hyperopt/wiki/FMin#21-parameter-expressions) for details on defining a search space and parameter expressions.
```
search_space = hp.lognormal('C', 0, 1.0)
```
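`hp.lognormal('C', 0, 1.0)` draws `C = exp(z)` with `z ~ N(0, 1)`: every sample is strictly positive and the median is `exp(0) = 1`, which suits a positive scale parameter like `C`. A quick NumPy check of that shape:

```python
import numpy as np

rng = np.random.default_rng(42)
# same law as hp.lognormal('C', 0, 1.0)
samples = np.exp(rng.normal(0.0, 1.0, size=100_000))

print(f"min={samples.min():.3f}  median={np.median(samples):.3f}  mean={samples.mean():.3f}")
```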
### Select a search algorithm
The two main choices are:
* `hyperopt.tpe.suggest`: Tree of Parzen Estimators, a Bayesian approach which iteratively and adaptively selects new hyperparameter settings to explore based on past results
* `hyperopt.rand.suggest`: Random search, a non-adaptive approach that samples over the search space
```
algo=tpe.suggest
```
Run the tuning algorithm with Hyperopt `fmin()`
Set `max_evals` to the maximum number of points in hyperparameter space to test, that is, the maximum number of models to fit and evaluate.
```
argmin = fmin(
fn=objective,
space=search_space,
algo=algo,
max_evals=16)
# Print the best value found for C
print("Best value found: ", argmin)
```
## Part 2. Distributed tuning using Apache Spark and MLflow
To distribute tuning, add one more argument to `fmin()`: a `Trials` class called `SparkTrials`.
`SparkTrials` takes 2 optional arguments:
* `parallelism`: Number of models to fit and evaluate concurrently. The default is the number of available Spark task slots.
* `timeout`: Maximum time (in seconds) that `fmin()` can run. The default is no maximum time limit.
This example uses the very simple objective function defined above. In this case, the function runs quickly and the overhead of starting the Spark jobs dominates the calculation time, so the calculations for the distributed case take more time. For typical real-world problems, the objective function is more complex, and using `SparkTrials` to distribute the calculations will be faster than single-machine tuning.
Automated MLflow tracking is enabled by default. To use it, call `mlflow.start_run()` before calling `fmin()` as shown in the example.
```
from hyperopt import SparkTrials
# To display the API documentation for the SparkTrials class, uncomment the following line.
# help(SparkTrials)
spark_trials = SparkTrials()
with mlflow.start_run():
argmin = fmin(
fn=objective,
space=search_space,
algo=algo,
max_evals=16,
trials=spark_trials)
# Print the best value found for C
print("Best value found: ", argmin)
```
To view the MLflow experiment associated with the notebook, click the **Experiment** icon in the notebook context bar on the upper right. There, you can view all runs. To view runs in the MLflow UI, click the icon at the far right next to **Experiment Runs**.
To examine the effect of tuning `C`:
1. Select the resulting runs and click **Compare**.
1. In the Scatter Plot, select **C** for X-axis and **loss** for Y-axis.
```
from calibrimbore import sauron
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
%matplotlib inline
from astropy.coordinates import SkyCoord
import astropy.units as u
from copy import deepcopy
```
## load in the filter and make the composite relation with sauron
The filter file must be a text file with two columns, where the first is wavelength in angstroms and the second is the throughput.
```
tess_filter = 'tess.dat'
comp = sauron(band=tess_filter,gr_lims=[-.5,0.8],plot=True,cubic_corr=False)
```
If you want something in the southern hemisphere that isn't covered by PS1, then a relation to SkyMapper can be used
```
comp_sm = sauron(band=tess_filter,gr_lims=[-.5,0.8],plot=True,cubic_corr=False,system='skymapper')
```
Both systems do a pretty good job, although the narrower bands of PS1 make reconstructions a little better. The colour limits are only used to select which stars are used in the creation of the zeropoint. Generally it is best to have $g-r < 0.8$ since there are few Calspec sources past that colour range.
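Concretely, `gr_lims=[-.5, 0.8]` is just a colour cut applied to the calibration stars; with some toy photometry (made-up magnitudes):

```python
import numpy as np

# toy g and r magnitudes for five stars (made-up values)
g = np.array([14.2, 15.1, 16.0, 17.3, 18.5])
r = np.array([14.6, 14.9, 15.1, 16.2, 17.1])
gr_lims = (-0.5, 0.8)

in_range = (g - r > gr_lims[0]) & (g - r < gr_lims[1])
print((g - r).round(2), in_range)  # only the first two stars pass the cut
```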
To save the state of sauron to use it in other instances so the relations don't need to be recalculated use:
```
comp.save_state('tess_ps1')
```
The save can then be easily read in with
```
comp = sauron(load_state='tess_ps1.npy')
comp.print_comp()
```
With the relations calculated, magnitudes can be estimated in the input bandpass using the PS1/SkyMapper source catalogs. If you already have photometry and positions for sources in your images, you can pass them with ra and dec to the `estimate_mag()` function; in this test case, however, we will just query a region to get example sources.
```
from calibrimbore import get_ps1_region
coords = ['08:05:11.1 -11:32:01.9']
c = SkyCoord(coords,unit=(u.hourangle, u.deg))
# get all PS1 sources in a radius 1 deg circle
sources = get_ps1_region(c.ra.deg,c.dec.deg,size=1*60**2)
ind = (np.isfinite(sources.r.values) & np.isfinite(sources.i.values)
& np.isfinite(sources.z.values) & (sources['g'].values < 19) & (sources['g'].values > 13))
sources = sources.iloc[ind]
```
With `estimate_mag()` we use stellar locus regression to calculate the extinction contribution for sources within a 0.2 deg field of view. This choice ensures there are always enough sources for reliable fits, while keeping the field small enough that extinction does not vary significantly across it.
```
tess_estimates = comp.estimate_mag(mags = sources)
print(tess_estimates[:100])
```
By matching the sources with real photometric measurements, the zeropoint can be calculated by simply taking the difference between the measured photometry in the input filter, and the predicted photometry:
$zp = m_{pred} - m_{sys}$.
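In practice a robust statistic is preferable to a plain mean of those differences; the helper below is a hypothetical sketch (not part of calibrimbore) that uses a sigma-clipped median so variables and cross-match failures don't bias the zeropoint:

```python
import numpy as np

def zeropoint(m_pred, m_sys, n_sigma=3.0, n_iter=3):
    """Sigma-clipped median of m_pred - m_sys for matched sources."""
    diff = np.asarray(m_pred) - np.asarray(m_sys)
    for _ in range(n_iter):
        med, std = np.median(diff), np.std(diff)
        diff = diff[np.abs(diff - med) < n_sigma * max(std, 1e-9)]
    return np.median(diff)
```

Simulating a few hundred matched stars with a known zeropoint and a handful of 1 mag outliers recovers the input value to within the photometric scatter.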
Since we are not comparing to real data, here is a quick comparison between the predicted TESS magnitude and the observed PS1 r-band. Clearly the broader TESS band views sources differently to r-band.
```
plt.figure()
plt.plot(sources['g']-sources['r'],tess_estimates-sources['r'],'.')
plt.xlabel('g-r')
plt.ylabel('tess - r')
```
# Building text classifier with Differential Privacy
In this tutorial we will train a text classifier with Differential Privacy by taking a model pre-trained on public text data and fine-tuning it for a different task.
When training a model with differential privacy, we almost always face a trade-off between model size and accuracy on the task. The exact details depend on the problem, but a rule of thumb is that the fewer parameters the model has, the easier it is to get a good performance with DP.
Most state-of-the-art NLP models are quite deep and large (e.g. [BERT-base](https://github.com/google-research/bert) has over 100M parameters), which makes the task of training a text model on a private dataset rather challenging.
One way of addressing this problem is to divide the training process into two stages. First, we pre-train the model on a public dataset, exposing the model to generic text data. Assuming that the generic text data is public, we will not be using differential privacy at this step. Then, we freeze most of the layers, leaving only a few upper layers to be trained on the private dataset using DP-SGD. This way we can get the best of both worlds - we have a deep and powerful text understanding model, while only training a small number of parameters with a differentially private algorithm.
In this tutorial we will take the pre-trained [BERT-base](https://github.com/google-research/bert) model and fine-tune it to recognize textual entailment on the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset.
## Dataset
First, we need to download the dataset (we'll use the Stanford NLP mirror).
```
STANFORD_SNLI_URL = "https://nlp.stanford.edu/projects/snli/snli_1.0.zip"
DATA_DIR = "data"
import os
import urllib.request
import zipfile
def download_and_extract(dataset_url, data_dir):
print("Downloading and extracting ...")
filename = "snli.zip"
urllib.request.urlretrieve(dataset_url, filename)
with zipfile.ZipFile(filename) as zip_ref:
zip_ref.extractall(data_dir)
os.remove(filename)
print("Completed!")
download_and_extract(STANFORD_SNLI_URL, DATA_DIR)
```
The dataset comes in two formats (`tsv` and `json`) and has already been split into train/dev/test. Let’s verify that’s the case.
```
import os
snli_folder = os.path.join(DATA_DIR, "snli_1.0")
os.listdir(snli_folder)
```
Let's now take a look inside. [SNLI dataset](https://nlp.stanford.edu/projects/snli/) provides ample syntactic metadata, but we'll only use raw input text. Therefore, the only fields we're interested in are **sentence1** (premise), **sentence2** (hypothesis) and **gold_label** (label chosen by the majority of annotators).
The label defines the relation between the premise and the hypothesis: *contradiction*, *neutral* or *entailment*.
```
import pandas as pd
train_path = os.path.join(snli_folder, "snli_1.0_train.txt")
dev_path = os.path.join(snli_folder, "snli_1.0_dev.txt")
df_train = pd.read_csv(train_path, sep='\t')
df_test = pd.read_csv(dev_path, sep='\t')
df_train[['sentence1', 'sentence2', 'gold_label']][:5]
```
## Model
BERT (Bidirectional Encoder Representations from Transformers) is a state-of-the-art approach to various NLP tasks. It uses a Transformer architecture and relies heavily on the concept of pre-training.
We'll use a pre-trained BERT-base model, provided in huggingface [transformers](https://github.com/huggingface/transformers) repo.
It gives us a PyTorch implementation of the classic BERT architecture, as well as a tokenizer and weights pre-trained on a public English corpus (Wikipedia).
Please follow these [installation instructions](https://github.com/huggingface/transformers#installation) before proceeding.
```
from transformers import BertConfig, BertTokenizer, BertForSequenceClassification
model_name = "bert-base-cased"
config = BertConfig.from_pretrained(
model_name,
num_labels=3,
)
tokenizer = BertTokenizer.from_pretrained(
"bert-base-cased",
do_lower_case=False,
)
model = BertForSequenceClassification.from_pretrained(
"bert-base-cased",
config=config,
)
```
The model has the following structure. It uses a combination of word, positional and token *embeddings* to create a sequence representation, then passes the data through 12 *transformer encoders* and finally uses a *linear classifier* to produce the final label.
As the model is already pre-trained and we only plan to fine-tune a few upper layers, we want to freeze all layers except for the last encoder and above (`BertPooler` and `Classifier`).

```
trainable_layers = [model.bert.encoder.layer[-1], model.bert.pooler, model.classifier]
total_params = 0
trainable_params = 0
for p in model.parameters():
p.requires_grad = False
total_params += p.numel()
for layer in trainable_layers:
for p in layer.parameters():
p.requires_grad = True
trainable_params += p.numel()
print(f"Total parameters count: {total_params}") # ~108M
print(f"Trainable parameters count: {trainable_params}") # ~7M
```
Thus, by using a pre-trained model we reduce the number of trainable parameters from over 100 million to just above 7.5 million. This will help both performance and convergence with added noise.
## Prepare the data
Before we begin training, we need to preprocess the data and convert it to the format our model expects.
(Note: it'll take 5-10 minutes to run on a laptop)
```
LABEL_LIST = ['contradiction', 'entailment', 'neutral']
MAX_SEQ_LENGHT = 128
import torch
import transformers
from torch.utils.data import TensorDataset
from transformers.data.processors.utils import InputExample
from transformers.data.processors.glue import glue_convert_examples_to_features
def _create_examples(df, set_type):
""" Convert raw dataframe to a list of InputExample. Filter malformed examples
"""
examples = []
for index, row in df.iterrows():
if row['gold_label'] not in LABEL_LIST:
continue
if not isinstance(row['sentence1'], str) or not isinstance(row['sentence2'], str):
continue
guid = f"{index}-{set_type}"
examples.append(
InputExample(guid=guid, text_a=row['sentence1'], text_b=row['sentence2'], label=row['gold_label']))
return examples
def _df_to_features(df, set_type):
""" Pre-process text. This method will:
1) tokenize inputs
2) cut or pad each sequence to MAX_SEQ_LENGHT
3) convert tokens into ids
The output will contain:
`input_ids` - padded token ids sequence
`attention mask` - mask indicating padded tokens
`token_type_ids` - mask indicating the split between premise and hypothesis
`label` - label
"""
examples = _create_examples(df, set_type)
#backward compatibility with older transformers versions
legacy_kwards = {}
from packaging import version
if version.parse(transformers.__version__) < version.parse("2.9.0"):
legacy_kwards = {
"pad_on_left": False,
"pad_token": tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
"pad_token_segment_id": 0,
}
return glue_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
label_list=LABEL_LIST,
max_length=MAX_SEQ_LENGHT,
output_mode="classification",
**legacy_kwards,
)
def _features_to_dataset(features):
""" Convert features from `_df_to_features` into a single dataset
"""
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor(
[f.attention_mask for f in features], dtype=torch.long
)
all_token_type_ids = torch.tensor(
[f.token_type_ids for f in features], dtype=torch.long
)
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
dataset = TensorDataset(
all_input_ids, all_attention_mask, all_token_type_ids, all_labels
)
return dataset
train_features = _df_to_features(df_train, "train")
test_features = _df_to_features(df_test, "test")
train_dataset = _features_to_dataset(train_features)
test_dataset = _features_to_dataset(test_features)
```
## Choosing batch size
Let's talk about batch sizes for a bit.
In addition to all the considerations you normally take into account when choosing batch size, training model with DP adds another one - privacy cost.
Because of the threat model we assume and the way we add noise to the gradients, larger batch sizes (to a certain extent) generally help convergence. We add the same amount of noise to each gradient update (scaled to the norm of one sample in the batch) regardless of the batch size. This means that as the batch size increases, the relative amount of noise added decreases, which helps convergence.
You should, however, keep in mind that increasing the batch size has its price in terms of epsilon, which grows at `O(sqrt(batch_size))` as we train (larger batches therefore make it grow faster). A good strategy is to experiment with multiple combinations of `batch_size` and `noise_multiplier` to find the one that provides the best possible quality at an acceptable privacy guarantee.
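A toy calculation (reusing this tutorial's `noise_multiplier` and `max_grad_norm` values, purely for illustration) shows why larger batches see relatively less noise:

```python
# The injected noise std is fixed regardless of batch size, while the summed
# batch gradient grows with batch size, so the noise-to-signal ratio shrinks
# roughly as 1/batch_size.
NOISE_MULTIPLIER = 0.4
MAX_GRAD_NORM = 0.1
noise_std = NOISE_MULTIPLIER * MAX_GRAD_NORM  # independent of batch size
for batch_size in (8, 32, 128):
    relative_noise = noise_std / batch_size  # noise relative to the summed clipped gradients
    print(batch_size, relative_noise)
```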
There's another side to this - memory. Opacus computes and stores *per-sample* gradients, so for every normal gradient, Opacus will store `n=batch_size` per-sample gradients on each step, increasing the memory footprint by at least `O(batch_size)`. In reality, however, the peak memory requirement is `O(batch_size^2)` compared to a non-private model. This is because some intermediate steps in the per-sample gradient computation involve operations on two matrices, each with `batch_size` as one of the dimensions.
The good news is, we can pick the most appropriate batch size regardless of memory constraints. Opacus has built-in support for *virtual* batches. Using it, we can separate physical steps (gradient computation) from logical steps (noise addition and parameter updates): use larger batches for training while keeping the memory footprint low. Below we will specify two constants:
- `BATCH_SIZE` defines the maximum batch size we can afford from a memory standpoint, and only affects computation speed
- `VIRTUAL_BATCH_SIZE`, on the other hand, is equivalent to normal batch_size in the non-private setting, and will affect convergence and privacy guarantee.
```
BATCH_SIZE = 8
VIRTUAL_BATCH_SIZE = 32
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=BATCH_SIZE)
test_sampler = SequentialSampler(test_dataset)
test_dataloader = DataLoader(test_dataset, sampler=test_sampler, batch_size=BATCH_SIZE)
```
## Training
```
import torch
# Move the model to appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# Define optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
```
Next we will define and attach PrivacyEngine. There are two parameters you need to consider here:
- `noise_multiplier`. It defines the trade-off between privacy and accuracy. Adding more noise will provide stronger privacy guarantees, but will also hurt model quality.
- `max_grad_norm`. Defines the maximum magnitude of L2 norms to which we clip per sample gradients. There is a bit of tug of war with this threshold: on the one hand, a low threshold means that we will clip many gradients, hurting convergence, so we might be tempted to raise it. However, recall that we add noise with `std=noise_multiplier * max_grad_norm` so we will pay for the increased threshold with more noise. In most cases you can rely on the model being quite resilient to clipping (after the first few iterations your model will tend to adjust so that its gradients stay below the clipping threshold), so you can often just keep the default value (`=1.0`) and focus on tuning `batch_size` and `noise_multiplier` instead. That being said, sometimes clipping hurts the model so it may be worth experimenting with different clipping thresholds, like we are doing in this tutorial.
These two parameters define the scale of the noise we add to gradients: the noise will be sampled from a Gaussian distribution with `std=noise_multiplier * max_grad_norm`.
```
from torchdp import PrivacyEngine
ALPHAS = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
NOISE_MULTIPLIER = 0.4
MAX_GRAD_NORM = 0.1
privacy_engine = PrivacyEngine(
module=model,
batch_size=VIRTUAL_BATCH_SIZE,
sample_size=len(train_dataset),
alphas=ALPHAS,
noise_multiplier=NOISE_MULTIPLIER,
max_grad_norm=MAX_GRAD_NORM,
)
privacy_engine.attach(optimizer)
```
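Conceptually, the clipping and noise addition described above can be sketched in a few lines of NumPy (made-up random gradients, not Opacus internals):

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_MULTIPLIER, MAX_GRAD_NORM, batch_size, dim = 0.4, 0.1, 32, 10

# Made-up per-sample gradients standing in for real backprop output
per_sample_grads = rng.normal(size=(batch_size, dim))
# Clip each sample's gradient to L2 norm <= MAX_GRAD_NORM
norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
clipped = per_sample_grads * np.minimum(1.0, MAX_GRAD_NORM / norms)
# Add Gaussian noise with std = noise_multiplier * max_grad_norm, then average
noisy_sum = clipped.sum(axis=0) + rng.normal(
    scale=NOISE_MULTIPLIER * MAX_GRAD_NORM, size=dim)
private_update = noisy_sum / batch_size
print(private_update.shape)
```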
Let’s first define the evaluation cycle.
```
import numpy as np
from tqdm.notebook import tqdm
def accuracy(preds, labels):
return (preds == labels).mean()
# define evaluation cycle
def evaluate(model):
model.eval()
loss_arr = []
accuracy_arr = []
for batch in test_dataloader:
batch = tuple(t.to(device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2],
'labels': batch[3]}
outputs = model(**inputs)
loss, logits = outputs[:2]
preds = np.argmax(logits.detach().cpu().numpy(), axis=1)
labels = inputs['labels'].detach().cpu().numpy()
loss_arr.append(loss.item())
accuracy_arr.append(accuracy(preds, labels))
model.train()
return np.mean(loss_arr), np.mean(accuracy_arr)
```
Now we specify the training parameters and run the training loop for three epochs
```
EPOCHS = 3
LOGGING_INTERVAL = 1000 # how often (in steps) we run the evaluation cycle and report metrics
DELTA = 1 / len(train_dataloader) # Parameter for privacy accounting. Probability of not upholding privacy guarantees
assert VIRTUAL_BATCH_SIZE % BATCH_SIZE == 0 # VIRTUAL_BATCH_SIZE should be divisible by BATCH_SIZE
virtual_batch_rate = VIRTUAL_BATCH_SIZE / BATCH_SIZE
for epoch in range(1, EPOCHS+1):
losses = []
for step, batch in enumerate(tqdm(train_dataloader)):
batch = tuple(t.to(device) for t in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2],
'labels': batch[3]}
outputs = model(**inputs) # output = loss, logits, hidden_states, attentions
loss = outputs[0]
loss.backward()
losses.append(loss.item())
# We process small batches of size BATCH_SIZE,
# until they're accumulated to a batch of size VIRTUAL_BATCH_SIZE.
# Only then we make a real `.step()` and update model weights
if (step + 1) % virtual_batch_rate == 0 or step == len(train_dataloader) - 1:
optimizer.step()
else:
optimizer.virtual_step()
model.zero_grad()
if step > 0 and step % LOGGING_INTERVAL == 0:
train_loss = np.mean(losses)
eps, alpha = optimizer.privacy_engine.get_privacy_spent(DELTA)
eval_loss, eval_accuracy = evaluate(model)
print(
f"Epoch: {epoch} | "
f"Step: {step} | "
f"Train loss: {train_loss:.3f} | "
f"Eval loss: {eval_loss:.3f} | "
f"Eval accuracy: {eval_accuracy:.3f} | "
f"ɛ: {eps:.2f} (α: {alpha})"
)
```
For the test accuracy, after training for three epochs you should expect something close to the results below.
You can see that we can achieve a quite strong privacy guarantee at epsilon=7.5, with a moderate accuracy cost of 11 percentage points compared to a non-private model trained in a similar setting (upper layers only), and 16 points compared to the best results we were able to achieve using the same architecture.
*NB: When not specified, DP-SGD is trained with upper layers only*
| Model | Noise multiplier | Batch size | Accuracy | Epsilon |
| --- | --- | --- | --- | --- |
| no DP, train full model | N/A | 32 | 90.1% | N/A |
| no DP, train upper layers only | N/A | 32 | 85.4% | N/A |
| DP-SGD | 1.0 | 32 | 70.5% | 0.7 |
| **DP-SGD (this tutorial)** | **0.4** | **32** | **74.3%** | **7.5** |
| DP-SGD | 0.3 | 32 | 75.8% | 20.7 |
| DP-SGD | 0.1 | 32 | 78.3% | 2865 |
| DP-SGD | 0.4 | 8 | 67.3% | 5.9 |
```
# Notebooks
import nbimporter
import os
import sys
# Functions from src
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# Defined Functions
from utils import *
# Pandas, matplotlib, pickle, seaborn
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from statistics import mean
from collections import Counter
# global variables/constants
num_trials = 30
test_size_percentage = 0.2
fixed_depth = 10
```
# Load ASHRAE Dataset - reduced labels
```
df_ashrae_train = pd.read_pickle("data/ashrae/ashrae_train_reduced.pkl")
df_ashrae_test = pd.read_pickle("data/ashrae/ashrae_test_reduced.pkl")
dataset_string = "ashrae-reduced"
print(len(df_ashrae_train))
print(len(df_ashrae_test))
df_ashrae_train.head(3)
# Number of training instances: 46477
# Number of testing instances: 19920
```
# Classification models on train data (imbalanced)
```
acc_rdf, rdf_real_model = train_rdf(df_ashrae_train, rdf_depth=fixed_depth, test_size_percentage=test_size_percentage)
print("rdf acc CV: {}".format(acc_rdf))
save_pickle(rdf_real_model, "models/" + dataset_string + "_rdf_reall_full.pkl")
save_pickle(acc_rdf, "metrics/" + dataset_string + "_rdf_reall_full_acc.pkl")
```
# Variability baseline
```
variability_baseline_list = []
for _ in range(0, num_trials):
variability_baseline = evaluation_variability(df_ashrae_train)
variability_baseline_list.append(variability_baseline)
mean_var_baseline = mean(variability_baseline_list)
print(mean_var_baseline)
save_pickle(mean_var_baseline, "metrics/" + dataset_string + "_variability_baseline.pkl")
```
# Diversity baseline
```
diversity_baseline_list = []
for _ in range(0, num_trials):
diversity_baseline = evaluation_diversity(df_ashrae_train, df_ashrae_train, baseline=True)
diversity_baseline_list.append(diversity_baseline)
mean_diversity_baseline = mean(diversity_baseline_list)
print(mean_diversity_baseline)
save_pickle(mean_diversity_baseline, "metrics/" + dataset_string + "_diversity_baseline.pkl")
```
# Quality of the final classification
```
class_acc_test, class_acc_train, class_models, class_report_rdf = evaluation_classification(df_ashrae_train,
df_ashrae_test,
rdf_depth=fixed_depth,
depth_file_name='default',
test_size_percentage=test_size_percentage)
print(class_acc_test)
print(class_report_rdf)
final_classification_rdf = class_acc_test[3] # RDF
save_pickle(final_classification_rdf, "metrics/" + dataset_string + "_rdf_classification_baseline.pkl")
save_pickle(class_report_rdf, "label-metrics/" + dataset_string + "_class_report_baseline_trials.pkl")
```

# Backends
A **backend** represents either a simulator or a real quantum computer and is responsible for running quantum circuits and/or pulse schedules and returning results.
In `qiskit-ibm-runtime`, a backend is represented by an instance of the [IBMBackend](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.IBMBackend.html#qiskit_ibm_runtime.IBMBackend) class. Attributes of this class provide information about the backend. For example:
- `name`: Name of the backend.
- `instructions`: A list of instructions the backend supports.
- `operation_names`: A list of instruction names the backend supports.
- `num_qubits`: The number of qubits the backend has.
- `coupling_map`: Coupling map of the backend.
- `dt`: System time resolution of input signals.
- `dtm`: System time resolution of output signals.
Refer to the API reference for a complete list of attributes and methods `IBMBackend` has.
```
from qiskit_ibm_runtime import QiskitRuntimeService
# Initialize the account first.
service = QiskitRuntimeService()
```
## Listing backends
Use the [backends()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backends) method to list all backends you have access to. This method returns a list of `IBMBackend` instances:
```
service.backends()
```
The [backend()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backend) (singular) method, on the other hand, takes the name of the backend as the input parameter and returns an `IBMBackend` instance representing that particular backend:
```
service.backend("ibmq_qasm_simulator")
```
## Filtering backends
You may also optionally filter the set of backends by passing arguments that query the backend’s configuration, status, or properties. For more general filters, you can pass a lambda function. Refer to the API documentation for more details.
Let's try getting only backends that are
- real quantum devices (`simulator=False`)
- currently operational (`operational=True`)
- have at least 5 qubits (`min_num_qubits=5`)
```
service.backends(simulator=False, operational=True, min_num_qubits=5)
```
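For queries the keyword filters can't express, `backends()` also accepts a `filters` callable, which (as assumed here) takes one backend and returns a bool. A sketch with stand-in objects, since a real call needs an account:

```python
from types import SimpleNamespace

# Predicate we would pass as `service.backends(filters=is_small_real_device)`
is_small_real_device = lambda b: (not b.simulator) and b.num_qubits >= 5

# Stand-in backend objects, purely for illustration
fake_sim = SimpleNamespace(simulator=True, num_qubits=32)
fake_dev = SimpleNamespace(simulator=False, num_qubits=7)
print(is_small_real_device(fake_sim), is_small_real_device(fake_dev))  # False True
```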
A similar method is [least_busy()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.least_busy), which takes the same filters as `backends()` but returns the backend that has the least number of jobs pending in the queue:
```
service.least_busy(operational=True, min_num_qubits=5)
```
Some programs also define the type of backends they need in the `backend_requirements` field of the program metadata.
The `hello-world` program, for example, needs a backend that has at least 5 qubits:
```
ibm_quantum_service = QiskitRuntimeService(channel="ibm_quantum")
program = ibm_quantum_service.program("hello-world")
print(program.backend_requirements)
```
You can use this `backend_requirements` field to find backends that meet the criteria:
```
ibm_quantum_service.backends(min_num_qubits=5)
```
## Backend attributes
As mentioned earlier, the attributes of the [IBMBackend](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.IBMBackend.html#qiskit_ibm_runtime.IBMBackend) class provide information about the backend.
```
backend = service.backend("ibmq_qasm_simulator")
```
Name of the backend.
```
backend.name
```
Version of the backend.
```
backend.backend_version
```
Check if the backend is a simulator or real system.
```
backend.simulator
```
Number of qubits the backend has.
```
backend.num_qubits
```
Maximum number of circuits per job.
```
backend.max_circuits
```
A list of instructions the backend supports.
```
backend.instructions
```
Coupling map of the backend.
```
backend.coupling_map
```
For a complete list of backend attributes refer to the [IBMBackend](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.IBMBackend.html#qiskit_ibm_runtime.IBMBackend) class documentation.
```
from qiskit.tools.jupyter import *
%qiskit_copyright
```
This IPython Notebook introduces the use of the `openmc.mgxs` module to calculate multi-energy-group and multi-delayed-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:
* Creation of multi-delayed-group cross sections for an **infinite homogeneous medium**
* Calculation of delayed neutron precursor concentrations
## Introduction to Multi-Delayed-Group Cross Sections (MDGXS)
Many Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use *multi-group cross sections* defined over discretized energy bins or *energy groups*. Furthermore, kinetics calculations typically separate out parameters that involve delayed neutrons into prompt and delayed components and further subdivide delayed components by delayed groups. An example is the energy spectrum for prompt and delayed neutrons for U-235 and Pu-239 computed for a light water reactor spectrum.
```
from IPython.display import Image
Image(filename='images/mdgxs.png', width=350)
```
A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The `openmc.mgxs` Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations and different delayed group models (e.g. 6, 7, or 8 delayed group models) for fine-mesh heterogeneous deterministic neutron transport applications.
Before proceeding to illustrate how one may use the `openmc.mgxs` module, it is worthwhile to define the general equations used to calculate multi-energy-group and multi-delayed-group cross sections. This is only intended as a brief overview of the methodology used by `openmc.mgxs` - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.
### Introductory Notation
The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. **Note**: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.
### Spatial and Energy Discretization
The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies, from 10$^{-5}$ to 10$^7$ eV. The multi-group approximation divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in \{1, 2, ..., G\}$. The group indices are defined such that the smaller the group index, the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as **energy condensation**.
The delayed neutrons created from fissions are created from > 30 delayed neutron precursors. Modeling each of the delayed neutron precursors is possible, but this approach has not received much attention due to large uncertainties in certain precursors. Therefore, the delayed neutrons are often combined into "delayed groups" that have a set time constant, $\lambda_d$. Some cross section libraries use the same group time constants for all nuclides (e.g. JEFF 3.1) while other libraries use different time constants for all nuclides (e.g. ENDF/B-VII.1). Multi-delayed-group cross sections can either be created with the entire delayed group set, a subset of delayed groups, or integrated over all delayed groups.
Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in \{1, 2, ..., K\}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as **spatial homogenization**.
### General Scalar-Flux Weighted MDGXS
The multi-group cross sections computed by `openmc.mgxs` are defined as a *scalar flux-weighted average* of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section. For instance, the delayed-nu-fission multi-energy-group and multi-delayed-group cross section, $\nu_d \sigma_{f,x,k,g}$, can be computed as follows:
$$\nu_d \sigma_{f,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \nu_d \sigma_{f,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted average microscopic cross section is computed by `openmc.mgxs` for only the delayed-nu-fission and delayed neutron fraction reaction type at the moment. These double integrals are stochastically computed with OpenMC's tally system - in particular, [filters](https://docs.openmc.org/en/stable/usersguide/tallies.html#filters) on the energy range and spatial zone (material, cell, universe, or mesh) define the bounds of integration for both numerator and denominator.
### Multi-Group Prompt and Delayed Fission Spectrum
The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$.
Computing the cumulative energy spectrum of emitted neutrons, $\chi_{n}(\mathbf{r},E)$, has been presented in the `mgxs-part-i.ipynb` notebook. Here, we will present the energy spectrum of prompt and delayed emission neutrons, $\chi_{n,p}(\mathbf{r},E)$ and $\chi_{n,d}(\mathbf{r},E)$, respectively. Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n,p}(\mathbf{r},E)$ and $\nu_{n,d}(\mathbf{r},E)$ for prompt and delayed neutrons, respectively. The multi-group fission spectrum $\chi_{n,k,g,d}$ is then the probability of fission neutrons emitted into energy group $g$ and delayed group $d$. There are no prompt groups, so inserting $p$ in place of $d$ just denotes all prompt neutrons.
Similar to before, spatial homogenization and energy condensation are used to find the multi-energy-group and multi-delayed-group fission spectrum $\chi_{n,k,g,d}$ as follows:
$$\chi_{n,k,g',d} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n,d}(\mathbf{r},E'\rightarrow E'')\nu_{n,d}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n,d}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$
The fission production-weighted multi-energy-group and multi-delayed-group fission spectrum for delayed neutrons is computed using OpenMC tallies with energy in, energy out, and delayed group filters. Alternatively, the delayed group filter can be omitted to compute the fission spectrum integrated over all delayed groups.
This concludes our brief overview on the methodology to compute multi-energy-group and multi-delayed-group cross sections. The following sections detail more concretely how users may employ the `openmc.mgxs` module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.
## Generate Input Files
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
```
First we need to define materials that will be used in the problem. Let's create a material for the homogeneous medium.
```
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide('H1', 0.03)
inf_medium.add_nuclide('O16', 0.015)
inf_medium.add_nuclide('U235', 0.0001)
inf_medium.add_nuclide('U238', 0.007)
inf_medium.add_nuclide('Pu239', 0.00003)
inf_medium.add_nuclide('Zr90', 0.002)
```
With our material, we can now create a `Materials` object that can be exported to an actual XML file.
```
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.export_to_xml()
```
Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
```
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
```
With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
```
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
```
We now must create a geometry and export it to XML.
```
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry([cell])
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
```
Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 5000 particles.
```
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
```
Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in `EnergyGroups` class. We will also create a 6-delayed-group list.
```
# Instantiate a 100-group EnergyGroups object
energy_groups = mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 101)
# Instantiate a 1-group EnergyGroups object
one_group = mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
delayed_groups = list(range(1,7))
```
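As a quick standalone check of the group structure (plain NumPy, mirroring the edges defined above): 101 logarithmically spaced edges define 100 energy groups, and the 1-group structure keeps only the outermost edges.

```python
import numpy as np

# Reproduce the 100-group edges used above: 101 logarithmically spaced
# points from 1e-3 eV to 10^7.3 (~2e7) eV define 100 energy groups.
group_edges = np.logspace(-3, 7.3, 101)
n_groups = len(group_edges) - 1

# The 1-group structure keeps only the outermost edges.
one_group_edges = np.array([group_edges[0], group_edges[-1]])

print(n_groups)            # 100
print(one_group_edges)     # [1.0e-03, ~2.0e+07]
```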
We can now use the `EnergyGroups` object and delayed group list, along with our previously created materials and geometry, to instantiate some `MGXS` objects from the `openmc.mgxs` module. In particular, the following are subclasses of the generic and abstract `MGXS` class:
* `TotalXS`
* `TransportXS`
* `AbsorptionXS`
* `CaptureXS`
* `FissionXS`
* `NuFissionMatrixXS`
* `KappaFissionXS`
* `ScatterXS`
* `ScatterMatrixXS`
* `Chi`
* `InverseVelocity`
A separate abstract `MDGXS` class is used for cross-sections and parameters that involve delayed neutrons. The subclasses of `MDGXS` include:
* `DelayedNuFissionXS`
* `ChiDelayed`
* `Beta`
* `DecayRate`
These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections.
In this case, let's create the multi-group chi-prompt, chi-delayed, and prompt-nu-fission cross sections with our 100-energy-group structure and multi-group delayed-nu-fission and beta cross sections with our 100-energy-group and 6-delayed-group structures.
The prompt chi and nu-fission data can actually be gathered using the `Chi` and `FissionXS` classes, respectively, by passing in a value of `True` for the optional `prompt` parameter upon initialization.
```
# Instantiate a few different cross sections
chi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True)
prompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True)
chi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=True)
delayed_nu_fission = mgxs.DelayedNuFissionXS(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
beta = mgxs.Beta(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
decay_rate = mgxs.DecayRate(domain=cell, energy_groups=one_group, delayed_groups=delayed_groups, by_nuclide=True)
chi_prompt.nuclides = ['U235', 'Pu239']
prompt_nu_fission.nuclides = ['U235', 'Pu239']
chi_delayed.nuclides = ['U235', 'Pu239']
delayed_nu_fission.nuclides = ['U235', 'Pu239']
beta.nuclides = ['U235', 'Pu239']
decay_rate.nuclides = ['U235', 'Pu239']
```
Each multi-group cross section object stores its tallies in a Python dictionary called `tallies`. We can inspect the tallies in the dictionary for our `DecayRate` object as follows.
```
decay_rate.tallies
```
The `Beta` object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each `MGXS` and `MDGXS` object contains the tallies that it needs, we must add these tallies to a `Tallies` object to generate the "tallies.xml" input file for OpenMC.
```
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Add chi-prompt tallies to the tallies file
tallies_file += chi_prompt.tallies.values()
# Add prompt-nu-fission tallies to the tallies file
tallies_file += prompt_nu_fission.tallies.values()
# Add chi-delayed tallies to the tallies file
tallies_file += chi_delayed.tallies.values()
# Add delayed-nu-fission tallies to the tallies file
tallies_file += delayed_nu_fission.tallies.values()
# Add beta tallies to the tallies file
tallies_file += beta.tallies.values()
# Add decay rate tallies to the tallies file
tallies_file += decay_rate.tallies.values()
# Export to "tallies.xml"
tallies_file.export_to_xml()
```
Now we have a complete set of inputs, so we can go ahead and run our simulation.
```
# Run OpenMC
openmc.run()
```
## Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a `StatePoint` object.
```
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
```
In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a `Summary` object is automatically linked when a `StatePoint` is loaded. This is necessary for the `openmc.mgxs` module to properly process the tally data.
The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the `StatePoint` into each object as follows and our `MGXS` objects will compute the cross sections for us under-the-hood.
```
# Load the tallies from the statepoint into each MGXS object
chi_prompt.load_from_statepoint(sp)
prompt_nu_fission.load_from_statepoint(sp)
chi_delayed.load_from_statepoint(sp)
delayed_nu_fission.load_from_statepoint(sp)
beta.load_from_statepoint(sp)
decay_rate.load_from_statepoint(sp)
```
Voila! Our multi-group cross sections are now ready to rock 'n roll!
## Extracting and Storing MGXS Data
Let's first inspect our delayed-nu-fission cross section by printing it to the screen after condensing it down to one group.
```
delayed_nu_fission.get_condensed_xs(one_group).get_xs()
```
Since the `openmc.mgxs` module uses [tally arithmetic](https://mit-crpg.github.io/openmc/pythonapi/examples/tally-arithmetic.html) under-the-hood, the cross section is stored as a "derived" `Tally` object. This means that it can be queried and manipulated using all of the same methods supported for the `Tally` class in the OpenMC Python API. For example, we can construct a [Pandas](http://pandas.pydata.org/) `DataFrame` of the multi-group cross section data.
```
df = delayed_nu_fission.get_pandas_dataframe()
df.head(10)
df = decay_rate.get_pandas_dataframe()
df.head(12)
```
Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
```
beta.export_xs_data(filename='beta', format='excel')
```
The following code snippet shows how to export the chi-prompt and chi-delayed `MGXS` to the same HDF5 binary data store.
```
chi_prompt.build_hdf5_store(filename='mdgxs', append=True)
chi_delayed.build_hdf5_store(filename='mdgxs', append=True)
```
## Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's [tally arithmetic](https://mit-crpg.github.io/openmc/pythonapi/examples/tally-arithmetic.html) data processing feature with `MGXS` objects. The `openmc.mgxs` module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each `MGXS` object includes an `xs_tally` attribute which is a "derived" `Tally` based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to compute the delayed neutron precursor concentrations using the `Beta`, `DelayedNuFissionXS`, and `DecayRate` objects. The delayed neutron precursor concentrations are modeled using the following equations:
$$\frac{\partial}{\partial t} C_{k,d} (t) = \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t) \nu_d \sigma_{f,x}(\mathbf{r},E',t)\Phi(\mathbf{r},E',t) - \lambda_{d} C_{k,d} (t) $$
$$C_{k,d} (t=0) = \frac{1}{\lambda_{d}} \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t=0) \nu_d \sigma_{f,x}(\mathbf{r},E',t=0)\Phi(\mathbf{r},E',t=0) $$
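To make the initial-condition formula concrete, here is a minimal numeric sketch (all values are made up for illustration and are not taken from the simulation): setting the time derivative to zero gives a precursor concentration equal to the delayed production rate divided by the decay constant.

```python
# Steady-state precursor balance: setting dC/dt = 0 gives
#   C_d = beta_d * F / lambda_d,
# where F is the fission neutron production rate integrated over the cell.
beta_d = 0.00055    # delayed fraction for one group (assumed value)
lambda_d = 0.0125   # decay constant in 1/s (assumed value)
F = 1.0e9           # nu-fission rate in neutrons/s (assumed value)

C_d = beta_d * F / lambda_d
print(C_d)  # 44000000.0 precursors
```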
First, let's investigate the decay rates for U235 and Pu239. The fraction of the delayed neutron precursors remaining as a function of time after fission is plotted below for each delayed group and fissioning isotope.
```
# Get the decay rate data
dr_tally = decay_rate.xs_tally
dr_u235 = dr_tally.get_values(nuclides=['U235']).flatten()
dr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten()
# Compute the exponential decay of the precursors
time = np.logspace(-3,3)
dr_u235_points = np.exp(-np.outer(dr_u235, time))
dr_pu239_points = np.exp(-np.outer(dr_pu239, time))
# Create a plot of the fraction of the precursors remaining as a f(time)
colors = ['b', 'g', 'r', 'c', 'm', 'k']
legend = []
fig = plt.figure(figsize=(8,6))
for g,c in enumerate(colors):
plt.semilogx(time, dr_u235_points [g,:], color=c, linestyle='--', linewidth=3)
plt.semilogx(time, dr_pu239_points[g,:], color=c, linestyle=':' , linewidth=3)
legend.append('U-235 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_u235[g]))
legend.append('Pu-239 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_pu239[g]))
plt.title('Delayed Neutron Precursor Decay Rates')
plt.xlabel('Time (s)')
plt.ylabel('Fraction Remaining')
plt.legend(legend, loc=1, bbox_to_anchor=(1.55, 0.95))
```
Now let's compute the initial concentration of the delayed neutron precursors:
```
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \
decay_rate.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
# Get the Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe()
```
We can plot the delayed neutron fractions for each nuclide.
```
energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter]
beta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
beta_u235 = beta_integrated.get_values(nuclides=['U235'])
beta_pu239 = beta_integrated.get_values(nuclides=['Pu239'])
# Reshape the betas
beta_u235.shape = (beta_u235.shape[0])
beta_pu239.shape = (beta_pu239.shape[0])
df = beta_integrated.summation(filter_type=openmc.DelayedGroupFilter, remove_filter=True).get_pandas_dataframe()
print('Beta (U-235) : {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'U235']['mean'][0], df[df['nuclide'] == 'U235']['std. dev.'][0]))
print('Beta (Pu-239): {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'Pu239']['mean'][1], df[df['nuclide'] == 'Pu239']['std. dev.'][1]))
beta_u235 = np.append(beta_u235[0], beta_u235)
beta_pu239 = np.append(beta_pu239[0], beta_pu239)
# Create a step plot for the MGXS
plt.plot(np.arange(0.5, 7.5, 1), beta_u235, drawstyle='steps', color='b', linewidth=3)
plt.plot(np.arange(0.5, 7.5, 1), beta_pu239, drawstyle='steps', color='g', linewidth=3)
plt.title('Delayed Neutron Fraction (beta)')
plt.xlabel('Delayed Group')
plt.ylabel('Beta(fraction total neutrons)')
plt.legend(['U-235', 'Pu-239'])
plt.xlim([0,7])
```
We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.
```
chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_p_pu239 = np.squeeze(chi_prompt.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_d_u235 = np.append(chi_d_u235 , chi_d_u235[0])
chi_d_pu239 = np.append(chi_d_pu239, chi_d_pu239[0])
chi_p_u235 = np.append(chi_p_u235 , chi_p_u235[0])
chi_p_pu239 = np.append(chi_p_pu239, chi_p_pu239[0])
# Create a step plot for the MGXS
plt.semilogx(energy_groups.group_edges, chi_d_u235 , drawstyle='steps', color='b', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_d_pu239, drawstyle='steps', color='g', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_u235 , drawstyle='steps', color='b', linestyle=':', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_pu239, drawstyle='steps', color='g', linestyle=':', linewidth=3)
plt.title('Energy Spectrum for Fission Neutrons')
plt.xlabel('Energy (eV)')
plt.ylabel('Fraction of emitted neutrons')
plt.legend(['U-235 delayed', 'Pu-239 delayed', 'U-235 prompt', 'Pu-239 prompt'],loc=2)
plt.xlim(1.0e3, 20.0e6)
```
# Running NetPyNE in a Jupyter Notebook
## Preliminaries
Hopefully you already completed these preliminaries by following the instructions at https://github.com/Neurosim-lab/netpyne/blob/development/netpyne/tutorials/README.md. We will now walk through how the NetPyNE tutorials were installed.
To avoid affecting your system in any way, we will operate from a virtual environment. These preliminary steps must be completed before going through this tutorial, and they can't be completed from within Jupyter: you can't enter a virtual environment in Jupyter; instead, you switch to a kernel made from your virtual environment.
First we will empty your path of all but essentials. Then we will create and activate a virtual environment, update pip, install some necessary packages in the virtual environment, and finally create a kernel from the virtual environment that can be used by Jupyter.
### Create and activate a virtual environment
First, open a Terminal and switch to the directory where you downloaded this notebook:
cd netpyne_tuts
Next, clear your PATH of all but the essentials. Don't worry, your normal PATH will return the next time you open a Terminal.
export PATH=/bin:/usr/bin
Next, create a virtual environment named "env":
python3 -m venv env
Check to see where you are currently running Python from:
which python3
Enter your new virtual environment:
source env/bin/activate
You should see in your prompt that you are in **env**.
Now see where you are running Python from:
which python3
It should come from inside your new virtual environment. Any changes we make here will only exist in the **env** directory that was created here.
To exit your virtual environment, enter:
deactivate
Your prompt should reflect the change. To get back in, enter:
source env/bin/activate
### Update pip and install packages
We will now update pip and install some necessary packages in the virtual environment. From inside your virtual environment, enter:
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade ipython
python3 -m pip install --upgrade ipykernel
python3 -m pip install --upgrade jupyter
### Make a Jupyter kernel out of this virtual environment
Now we will create a kernel that can be used by Jupyter Notebooks. Enter:
ipython kernel install --user --name=env
### Install NEURON and NetPyNE
python3 -m pip install --upgrade neuron
python3 -m pip install --upgrade netpyne
### Launch this notebook in Jupyter Notebook
Now we will launch Jupyter from within the virtual environment. Enter:
jupyter notebook netpyne_tut1.ipynb
This should open a web browser with Jupyter running this notebook. From the menu bar, click on **Kernel**, hover over **Change kernel** and select **env**. We are now operating in the virtual environment (see **env** in the upper right instead of **Python 3**) and can begin the tutorial.
## Single line command
Entering the following single line command should perform all the previous steps and launch this tutorial in a Jupyter notebook in your web browser:
git clone https://github.com/Neurosim-lab/netpyne.git && cd netpyne/netpyne/tutorials/netpyne_tut1 && export PATH=/bin:/usr/bin && python3 -m venv env && source env/bin/activate && python3 -m pip install --upgrade pip && python3 -m pip install --upgrade ipython && python3 -m pip install --upgrade ipykernel && python3 -m pip install --upgrade jupyter && ipython kernel install --user --name=env && jupyter notebook netpyne_tut1.ipynb
## To run this again in the future
Be sure to enter your virtual environment before running Jupyter!
cd netpyne_tuts
source env/bin/activate
jupyter notebook netpyne_tut1.ipynb
And then make sure you are in the **env** kernel in Jupyter.
# Tutorial 1 -- a simple network with one population
Now we are ready to start NetPyNE Tutorial 1, in which we will build and simulate a simple network model: 40 pyramidal-like, two-compartment neurons with standard Hodgkin-Huxley dynamics in the somas and passive dynamics in the dendrites. We will connect the neurons randomly with a 10% probability of connection using a standard double-exponential synapse model. Finally, we will add a current clamp stimulus to one cell to activate the network, and then explore the model.
## Instantiate network parameters and simulation configuration
You need two things to define a model/simulation in NetPyNE: 1) the parameters of the network and all its components (**netParams**) and 2) the configuration of the simulation (**simConfig**). These requirements exist as objects in NetPyNE. Let's instantiate them now.
```
from netpyne import specs, sim
netParams = specs.NetParams()
simConfig = specs.SimConfig()
```
These NetPyNE objects come with a lot of defaults set which you can explore with tab completion, but we'll focus on that more later.
We are going to plunge ahead and build our model: a simple network of 40 pyramidal-like two-compartment neurons with standard Hodgkin-Huxley dynamics in the soma and passive dynamics in the dendrite.
## Create a cell model
First we will add a cell type to our model by adding a dictionary named **pyr** to the *Cell Parameters* dictionary (**cellParams**) in the *Network Parameters* dictionary (**netParams**). We will then add an empty dictionary named **secs** to hold our compartments.
```
netParams.cellParams['pyr'] = {}
netParams.cellParams['pyr']['secs'] = {}
```
### Specify the soma compartment properties
Now we will define our **soma**, by adding a **geom** dictionary defining the geometry of the soma and a **mechs** dictionary defining the biophysical mechanics being added to the soma.
```
netParams.cellParams['pyr']['secs']['soma'] = {}
netParams.cellParams['pyr']['secs']['soma']['geom'] = {
"diam": 12,
"L": 12,
"Ra": 100.0,
"cm": 1
}
netParams.cellParams['pyr']['secs']['soma']['mechs'] = {"hh": {
"gnabar": 0.12,
"gkbar": 0.036,
"gl": 0.0003,
"el": -54.3
}}
```
The **hh** mechanism is built into NEURON, but you can see its *.mod* file here:
https://github.com/neuronsimulator/nrn/blob/master/src/nrnoc/hh.mod
It is the original Hodgkin-Huxley treatment for the set of sodium, potassium, and leakage channels found in the squid giant axon membrane.
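As a rough illustration of what the **hh** mechanism integrates (textbook Hodgkin-Huxley rate functions in the modern -65 mV resting convention; a sketch, not NEURON's implementation), the sodium activation gate *m* is nearly closed at rest and nearly open near 0 mV:

```python
import math

# Textbook HH rate functions for the sodium activation gate m (per ms).
def alpha_m(v):
    return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))

def beta_m(v):
    return 4.0 * math.exp(-(v + 65.0) / 18.0)

def m_inf(v):
    # Steady-state activation: opening rate over total rate.
    a, b = alpha_m(v), beta_m(v)
    return a / (a + b)

print(round(m_inf(-65.0), 3))  # mostly closed near rest
print(round(m_inf(0.0), 3))    # mostly open near 0 mV
```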
### Specify the dendrite compartment properties
Next we will do the same thing for the dendrite compartment, but we will do it slightly differently. We will first build up a **dend** dictionary and then add it to the cell model dictionary **pyr** when we are done.
```
dend = {}
dend['geom'] = {"diam": 1.0,
"L": 200.0,
"Ra": 100.0,
"cm": 1,
}
dend['mechs'] = {"pas":
{"g": 0.001,
"e": -70}
}
```
The **pas** mechanism is a simple leakage channel built into NEURON. Its *.mod* file is available here:
https://github.com/neuronsimulator/nrn/blob/master/src/nrnoc/passive.mod
In order to connect the dendrite compartment to the soma compartment, we must add a **topol** dictionary to our **dend** dictionary.
```
dend['topol'] = {"parentSec": "soma",
"parentX": 1.0,
"childX": 0,
}
```
With our **dend** section dictionary complete, we must now add it to the **pyr** cell dictionary.
```
netParams.cellParams['pyr']['secs']['dend'] = dend
```
Our two-compartment cell model is now completely specified. Our next step is to create a population of these cells.
## Create a population of cells
NetPyNE uses *populations* of cells to specify connectivity. In this tutorial, we will create just one population which we will call **E** (for excitatory). It will be made of the **pyr** cells we just specified, and we want 40 of them.
```
netParams.popParams['E'] = {
"cellType": "pyr",
"numCells": 40,
}
```
## Create a synaptic model
We need a synaptic mechanism to connect our cells with. We will create one called **exc** by adding a dictionary to the *synaptic mechanism parameters* dictionary (**synMechParams**). The synapse *mod* used (**Exp2Syn**) is a simple double-exponential mechanism built into NEURON. Its *.mod* file is available here:
https://github.com/neuronsimulator/nrn/blob/master/src/nrnoc/exp2syn.mod
```
netParams.synMechParams['exc'] = {
"mod": "Exp2Syn",
"tau1": 0.1,
"tau2": 1.0,
"e": 0
}
```
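For intuition about this mechanism, the sketch below assumes the standard double-exponential conductance form g(t) ∝ exp(-t/tau2) - exp(-t/tau1), normalized so the peak is 1 (an assumption about Exp2Syn's shape, not code from the mod file), and computes when the conductance peaks for the tau values used above:

```python
import math

tau1, tau2 = 0.1, 1.0  # ms, matching the 'exc' mechanism above

# Time of the conductance peak for a double exponential.
t_peak = (tau1 * tau2) / (tau2 - tau1) * math.log(tau2 / tau1)

def g(t):
    # Normalize so that g(t_peak) == 1.
    factor = 1.0 / (math.exp(-t_peak / tau2) - math.exp(-t_peak / tau1))
    return factor * (math.exp(-t / tau2) - math.exp(-t / tau1))

print(round(t_peak, 3))      # ~0.256 ms after synapse activation
print(round(g(t_peak), 3))   # 1.0 by construction
```

So with a fast rise (tau1 = 0.1 ms) and slower decay (tau2 = 1 ms), the conductance peaks about a quarter of a millisecond after activation.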
## Connect the cells
Now we will specify the connectivity in our model by adding an entry to the **connParams** dictionary. We will call our connectivity rule **E->E** as it will define connectivity from our **E** population to our **E** population.
We will use the *synMech* **exc**, which we defined above. For this synaptic mechanism, a *weight* of about **0.005** is appropriate. These cells will have a 10% probability of getting connected, and will be activated five milliseconds after an action potential occurs in the presynaptic cell. Synapses will occur on the **dend** *section* at its very end (*location* **1.0**).
```
netParams.connParams['E->E'] = {
"preConds": {"pop": "E"},
"postConds": {"pop": "E"},
"weight": 0.005,
"probability": 0.1,
"delay": 5.0,
"synMech": "exc",
"sec": "dend",
"loc": 1.0,
}
```
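A quick back-of-the-envelope check of what this rule implies (plain NumPy, not NetPyNE's actual wiring code): with 40 cells and a 10% probability per directed pair, we expect roughly 40 × 39 × 0.1 ≈ 156 connections if self-connections are excluded.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells, prob = 40, 0.1

# Draw each directed pair independently with probability 0.1.
conn = rng.random((n_cells, n_cells)) < prob
np.fill_diagonal(conn, False)  # drop self-connections

print(conn.sum())  # a count near the expected 156
```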
## Set up the simulation configuration
```
simConfig.filename = "netpyne_tut1"
simConfig.duration = 200.0
simConfig.dt = 0.1
```
We will record from the first cell (**0**), and we will record the voltage in the middle of the soma and the end of the dendrite.
```
simConfig.recordCells = [0]
simConfig.recordTraces = {
"V_soma": {
"sec": "soma",
"loc": 0.5,
"var": "v",
},
"V_dend": {
"sec": "dend",
"loc": 1.0,
"var": "v",
}
}
```
Finally we will set up some plots to be automatically generated and saved.
```
simConfig.analysis = {
"plotTraces": {
"include": [0],
"saveFig": True,
},
"plotRaster": {
"saveFig": True,
}
}
```
To see plots in the notebook, we first have to enter the following command.
```
%matplotlib inline
```
## Create, simulate, and analyze the model
Use one simple command to create, simulate, and analyze the model.
```
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```
We can see that there was no spiking in the network, and thus the spike raster was not plotted. But there should be one new file in your directory: **netpyne_tut1_traces.png**. Take a look. Not too interesting: the cell just settles into its resting membrane potential.
Let's overlay the traces.
```
fig, figData = sim.analysis.plotTraces(overlay=True)
```
### Plot the 2D connectivity of the network
Now we can take a look at the physical layout of our network model. You can see all the options available for **plot2Dnet** here:
http://netpyne.org/netpyne.analysis.network.html#netpyne.analysis.network.plot2Dnet
```
fig, figData = sim.analysis.plot2Dnet()
```
### Plot the connectivity matrix
You can see all the options available for **plotConn** here:
http://netpyne.org/netpyne.analysis.network.html#netpyne.analysis.network.plotConn
```
fig, figData = sim.analysis.plotConn()
```
Not very interesting with just one population, but we can also look at the cellular level connectivity.
```
fig, figData = sim.analysis.plotConn(feature='weight', groupBy='cell')
```
## Add a stimulation
We'll need to kickstart this network to see some activity -- let's inject current into one of the cells. First we need to add an entry to the *Stimulation Source Parameters* dictionary (**stimSourceParams**). We'll call our stimulation **IClamp1**, and we'll use the standard NEURON *type*: **IClamp**. The current injection will last for a *duration* of 5 ms, it will start at a *delay* of 20 ms, and it will have an *amplitude* of 0.1 nA.
```
netParams.stimSourceParams['IClamp1'] = {
"type": "IClamp",
"dur": 5,
"del": 20,
"amp": 0.1,
}
```
Now we need to add a target for our stimulation. We do that by adding a dictionary to the *Stimulation Target Parameters* dictionary (**stimTargetParams**). We'll call this connectivity rule **IClamp1->cell0**, because it will go from the source we just created (**IClamp1**) to the first cell in our population. The stimulation (current injection in this case) will occur in our **dend** *section* at the very tip (*location* of **1.0**).
```
netParams.stimTargetParams['IClamp1->cell0'] = {
"source": "IClamp1",
"conds": {"cellList": [0]},
"sec": "dend",
"loc": 1.0,
}
```
### Create, simulate, and analyze the model
```
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```
Now we see spiking in the network, and the raster plot appears. Let's improve the plots a little bit.
```
fig, figData = sim.analysis.plotTraces(overlay=True)
fig, figData = sim.analysis.plotRaster(marker='o', markerSize=50)
```
You can see all of the options available in **plotTraces** here:
http://netpyne.org/netpyne.analysis.traces.html#netpyne.analysis.traces.plotTraces
You can see all of the options available in **plotRaster** here:
http://netpyne.org/netpyne.analysis.spikes.html#netpyne.analysis.spikes.plotRaster
### Plot the connectivity matrix
```
fig, figData = sim.analysis.plotConn()
```
## Record and plot a variety of traces
Now let's explore the model by recording and plotting a variety of traces. First let's clear our **recordTraces** dictionary and turn off the automatic raster plot.
```
simConfig.recordTraces = {}
simConfig.analysis['plotRaster'] = False
```
### Record and plot the somatic conductances
Let's record and plot the somatic conductances. We need to take a look at the **hh** mod file to see what the variables are called. The file is available here: https://github.com/neuronsimulator/nrn/blob/master/src/nrnoc/hh.mod
We can see that the conductances are called *gna*, *gk*, and *gl*. Let's set up recording for these conductances in the middle of the soma.
```
simConfig.recordTraces['gNa'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gna'}
simConfig.recordTraces['gK'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gk'}
simConfig.recordTraces['gL'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gl'}
```
Then we can re-run the simulation.
```
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```
Let's zoom in on one spike and overlay the traces.
```
fig, figData = sim.analysis.plotTraces(timeRange=[90, 110], overlay=True)
```
### Record from synapses
Our synapses are set up to use **Exp2Syn**, which is built into NEURON. Its mod file is available here: https://github.com/neuronsimulator/nrn/blob/master/src/nrnoc/exp2syn.mod
Looking in the file, we can see that its current variable is called **i**. Let's record that and the voltage in the dendrite.
```
simConfig.recordTraces = {}
simConfig.recordTraces['iSyn0'] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i'}
simConfig.recordTraces['V_dend'] = {'sec': 'dend', 'loc': 1.0, 'var': 'v'}
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```
That's the first synapse created in that location, but there are likely multiple synapses. Let's plot all the synaptic currents entering cell 0. First we need to see what they are. The network is defined in **sim.net**. Type in *sim.net.* and then push *Tab* to see what's available.
The data for cell 0 is in **sim.net.allCells[0]**.
```
sim.net.allCells[0].keys()
```
The connections coming onto the cell are in **conns**.
```
sim.net.allCells[0]['conns']
```
So we want to record six synaptic currents. Let's do that in a *for* loop, at the same time creating a dictionary to hold the synaptic trace names as keys (and later the trace arrays as values).
```
simConfig.recordTraces = {}
simConfig.recordTraces['V_soma'] = {'sec': 'soma', 'loc': 0.5, 'var': 'v'}
simConfig.recordTraces['V_dend'] = {'sec': 'dend', 'loc': 1.0, 'var': 'v'}
syn_plots = {}
for index, presyn in enumerate(sim.net.allCells[0]['conns']):
trace_name = 'i_syn_' + str(presyn['preGid'])
syn_plots[trace_name] = None
simConfig.recordTraces[trace_name] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i', 'index': index}
print(simConfig.recordTraces)
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```
## Extracting recorded data
Let's make our synaptic currents plot nicer. We'll make a figure with two plots, the top one will be the somatic and dendritic voltage and the bottom plot will be all of the synaptic currents overlaid.
First we'll have to extract the data. Simulation data gets stored in the dictionary **sim.allSimData**.
```
sim.allSimData.keys()
```
**spkt** is an array of the times of all spikes in the network. **spkid** is an array of the universal index (GID) of the cell spiking. **t** is an array of the time for traces. Our traces appear as we named them, and each is a dictionary with its key being **cell_GID** and its value being the array of the trace.
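As a sketch of working with these arrays (synthetic spike times and GIDs below, not output from the simulation above), per-cell firing rates can be computed like this:

```python
import numpy as np

# Synthetic data shaped like sim.allSimData's 'spkt' and 'spkid' arrays.
spkt = np.array([12.5, 30.0, 55.1, 90.2, 130.7])  # spike times (ms)
spkid = np.array([0, 3, 0, 3, 7])                  # GID of spiking cell
duration = 200.0                                   # ms, as in simConfig

# Count spikes per GID, then convert counts to rates in Hz.
gids, counts = np.unique(spkid, return_counts=True)
rates_hz = counts / (duration / 1000.0)
for gid, rate in zip(gids, rates_hz):
    print(f"cell {gid}: {rate:.1f} Hz")
```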
```
sim.allSimData.V_soma.keys()
```
So let's extract our data.
```
time = sim.allSimData['t']
v_soma = sim.allSimData['V_soma']['cell_0']
v_dend = sim.allSimData['V_dend']['cell_0']
for syn_plot in syn_plots:
syn_plots[syn_plot] = sim.allSimData[syn_plot]['cell_0']
```
And now we can make our custom plot.
```
import matplotlib.pyplot as plt
fig = plt.figure()
plt.subplot(211)
plt.plot(time, v_soma, label='v_soma')
plt.plot(time, v_dend, label='v_dend')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
plt.subplot(212)
for syn_plot in syn_plots:
plt.plot(time, syn_plots[syn_plot], label=syn_plot)
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Synaptic current (nA)')
plt.savefig('syn_currents.jpg', dpi=600)
```
Cleaning up our figure (reducing font size, etc.) will be left as an exercise. See the **matplotlib** users guide here:
https://matplotlib.org/users/index.html
Now we will put all of this together into a single file. But first, let's clear our workspace with the following command.
```
%reset
```
## This tutorial in a single Python file
```
from netpyne import specs, sim
netParams = specs.NetParams()
simConfig = specs.SimConfig()
# Create a cell type
# ------------------
netParams.cellParams['pyr'] = {}
netParams.cellParams['pyr']['secs'] = {}
# Add a soma section
netParams.cellParams['pyr']['secs']['soma'] = {}
netParams.cellParams['pyr']['secs']['soma']['geom'] = {
"diam": 12,
"L": 12,
"Ra": 100.0,
"cm": 1
}
# Add hh mechanism to soma
netParams.cellParams['pyr']['secs']['soma']['mechs'] = {"hh": {
"gnabar": 0.12,
"gkbar": 0.036,
"gl": 0.0003,
"el": -54.3
}}
# Add a dendrite section
dend = {}
dend['geom'] = {"diam": 1.0,
"L": 200.0,
"Ra": 100.0,
"cm": 1,
}
# Add pas mechanism to dendrite
dend['mechs'] = {"pas":
{"g": 0.001,
"e": -70}
}
# Connect the dendrite to the soma
dend['topol'] = {"parentSec": "soma",
"parentX": 1.0,
"childX": 0,
}
# Add the dend dictionary to the cell parameters dictionary
netParams.cellParams['pyr']['secs']['dend'] = dend
# Create a population of these cells
# ----------------------------------
netParams.popParams['E'] = {
"cellType": "pyr",
"numCells": 40,
}
# Add Exp2Syn synaptic mechanism
# ------------------------------
netParams.synMechParams['exc'] = {
"mod": "Exp2Syn",
"tau1": 0.1,
"tau2": 1.0,
"e": 0
}
# Define the connectivity
# -----------------------
netParams.connParams['E->E'] = {
"preConds": {"pop": "E"},
"postConds": {"pop": "E"},
"weight": 0.005,
"probability": 0.1,
"delay": 5.0,
"synMech": "exc",
"sec": "dend",
"loc": 1.0,
}
# Add a stimulation
# -----------------
netParams.stimSourceParams['IClamp1'] = {
"type": "IClamp",
"dur": 5,
"del": 20,
"amp": 0.1,
}
# Connect the stimulation
# -----------------------
netParams.stimTargetParams['IClamp1->cell0'] = {
"source": "IClamp1",
"conds": {"cellList": [0]},
"sec": "dend",
"loc": 1.0,
}
# Set up the simulation configuration
# -----------------------------------
simConfig.filename = "netpyne_tut1"
simConfig.duration = 200.0
simConfig.dt = 0.1
# Record from cell 0
simConfig.recordCells = [0]
# Record the voltage at the soma and the dendrite
simConfig.recordTraces = {
"V_soma": {
"sec": "soma",
"loc": 0.5,
"var": "v",
},
"V_dend": {
"sec": "dend",
"loc": 1.0,
"var": "v",
}
}
# Record somatic conductances
#simConfig.recordTraces['gNa'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gna'}
#simConfig.recordTraces['gK'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gk'}
#simConfig.recordTraces['gL'] = {'sec': 'soma', 'loc': 0.5, 'mech': 'hh', 'var': 'gl'}
# Automatically generate some figures
simConfig.analysis = {
"plotTraces": {
"include": [0],
"saveFig": True,
"overlay": True,
},
"plotRaster": {
"saveFig": True,
"marker": "o",
"markerSize": 50,
},
"plotConn": {
"saveFig": True,
"feature": "weight",
"groupby": "cell",
"markerSize": 50,
},
"plot2Dnet": {
"saveFig": True,
},
}
# Create, simulate, and analyze the model
# ---------------------------------------
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
# Set up the recording for the synaptic current plots
syn_plots = {}
for index, presyn in enumerate(sim.net.allCells[0]['conns']):
trace_name = 'i_syn_' + str(presyn['preGid'])
syn_plots[trace_name] = None
simConfig.recordTraces[trace_name] = {'sec': 'dend', 'loc': 1.0, 'synMech': 'exc', 'var': 'i', 'index': index}
# Create, simulate, and analyze the model
# ---------------------------------------
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
# Extract the data
# ----------------
time = sim.allSimData['t']
v_soma = sim.allSimData['V_soma']['cell_0']
v_dend = sim.allSimData['V_dend']['cell_0']
for syn_plot in syn_plots:
syn_plots[syn_plot] = sim.allSimData[syn_plot]['cell_0']
# Plot our custom figure
# ----------------------
import matplotlib.pyplot as plt
fig = plt.figure()
plt.subplot(211)
plt.plot(time, v_soma, label='v_soma')
plt.plot(time, v_dend, label='v_dend')
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
plt.subplot(212)
for syn_plot in syn_plots:
plt.plot(time, syn_plots[syn_plot], label=syn_plot)
plt.legend()
plt.xlabel('Time (ms)')
plt.ylabel('Synaptic current (nA)')
plt.savefig('syn_currents.jpg', dpi=600)
```
# Building your Recurrent Neural Network - Step by Step
Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
Recurrent Neural Networks (RNNs) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a unidirectional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
**Notation**:
- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
- Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.
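In numpy terms, with the inputs stored in an array of shape $(n_x, m, T_x)$ (the layout used later in this assignment), the notation maps onto indexing as follows (a small illustrative sketch):

```python
import numpy as np

n_x, m, T_x = 3, 10, 4          # input size, batch size, number of time-steps
x = np.random.randn(n_x, m, T_x)

xi  = x[:, 1, :]   # x^{(1)}: all time-steps of example i=1, shape (n_x, T_x)
xt  = x[:, :, 2]   # x^{<2>}: all examples at time-step t=2, shape (n_x, m)
xit = x[:, 1, 2]   # x^{(1)<2>}: example 1 at time-step 2, shape (n_x,)

assert xi.shape == (n_x, T_x)
assert xt.shape == (n_x, m)
assert xit.shape == (n_x,)
```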
We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
Let's first import all the packages that you will need during this assignment.
```
import numpy as np
from rnn_utils import *
```
## 1 - Forward propagation for the basic Recurrent Neural Network
Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
<img src="images/RNN.png" style="width:500;height:300px;">
<caption><center> **Figure 1**: Basic RNN model </center></caption>
Here's how you can implement an RNN:
**Steps**:
1. Implement the calculations needed for one time-step of the RNN.
2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.
Let's go!
## 1.1 - RNN cell
A Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
<img src="images/rnn_step_forward.png" style="width:700px;height:300px;">
<caption><center> **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ </center></caption>
**Exercise**: Implement the RNN-cell described in Figure (2).
**Instructions**:
1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We have provided a `softmax` function for you.
3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache
4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cache
We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
```
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
```
**Expected Output**:
<table>
<tr>
<td>
**a_next[4]**:
</td>
<td>
[ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
</td>
</tr>
<tr>
<td>
**a_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**yt[1]**:
</td>
<td>
[ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
</td>
</tr>
<tr>
<td>
**yt.shape**:
</td>
<td>
(2, 10)
</td>
</tr>
</table>
## 1.2 - RNN forward pass
You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
<img src="images/rnn.png" style="width:800px;height:300px;">
<caption><center> **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. </center></caption>
**Exercise**: Code the forward propagation of the RNN described in Figure (3).
**Instructions**:
1. Create an array of zeros ($a$) that will store all the hidden states computed by the RNN.
2. Initialize the "next" hidden state as $a_0$ (initial hidden state).
3. Start looping over each time step, your incremental index is $t$ :
- Update the "next" hidden state and the cache by running `rnn_cell_forward`
- Store the "next" hidden state in $a$ ($t^{th}$ position)
- Store the prediction in y
- Add the cache to the list of caches
4. Return $a$, $y$ and caches
```
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
```
**Expected Output**:
<table>
<tr>
<td>
**a[4][1]**:
</td>
<td>
[-0.99999375 0.77911235 -0.99861469 -0.99833267]
</td>
</tr>
<tr>
<td>
**a.shape**:
</td>
<td>
(5, 10, 4)
</td>
</tr>
<tr>
<td>
**y[1][3]**:
</td>
<td>
[ 0.79560373 0.86224861 0.11118257 0.81515947]
</td>
</tr>
<tr>
<td>
**y.shape**:
</td>
<td>
(2, 10, 4)
</td>
</tr>
<tr>
<td>
**cache[1][1][3]**:
</td>
<td>
[-1.1425182 -0.34934272 -0.20889423 0.58662319]
</td>
</tr>
<tr>
<td>
**len(cache)**:
</td>
<td>
2
</td>
</tr>
</table>
Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).
In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.
## 2 - Long Short-Term Memory (LSTM) network
This following figure shows the operations of an LSTM-cell.
<img src="images/LSTM.png" style="width:500;height:400px;">
<caption><center> **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. </center></caption>
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.
### About the gates
#### - Forget gate
For the sake of this illustration, let's assume we are reading words in a piece of text and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of the previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:
$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$
Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information.
#### - Update gate
Once we forget that the subject being discussed is singular, we need a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:
$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$
Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
#### - Updating the cell
To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:
$$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$
Finally, the new cell state is:
$$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$
#### - Output gate
To decide which outputs we will use, we will use the following two formulas:
$$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$
Where in equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state $c^{\langle t \rangle}$.
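As a sanity check on equations (1)-(6), the gate computations can be sketched with toy numpy arrays. This is a minimal sketch: `sigmoid` is defined inline here (in the assignment it comes from `rnn_utils`), and a single random matrix `W` with one bias `b` stands in for all four gate parameters, just to check shapes and value ranges:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n_a, n_x, m = 5, 3, 10
np.random.seed(0)
a_prev = np.random.randn(n_a, m)
xt = np.random.randn(n_x, m)
c_prev = np.random.randn(n_a, m)
W = np.random.randn(n_a, n_a + n_x)  # stand-in for Wf, Wu, Wc, Wo (toy example)
b = np.zeros((n_a, 1))

concat = np.concatenate([a_prev, xt], axis=0)   # [a_prev; xt], shape (n_a + n_x, m)
gamma_f = sigmoid(W @ concat + b)               # forget gate, eq. (1)
gamma_u = sigmoid(W @ concat + b)               # update gate, eq. (2)
c_tilde = np.tanh(W @ concat + b)               # candidate value, eq. (3)
c_next = gamma_f * c_prev + gamma_u * c_tilde   # new cell state, eq. (4)
gamma_o = sigmoid(W @ concat + b)               # output gate, eq. (5)
a_next = gamma_o * np.tanh(c_next)              # new hidden state, eq. (6)

# All gate values lie strictly in (0, 1), and the states keep shape (n_a, m)
assert ((0 < gamma_f) & (gamma_f < 1)).all()
assert a_next.shape == (n_a, m) and c_next.shape == (n_a, m)
```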
### 2.1 - LSTM cell
**Exercise**: Implement the LSTM cell described in Figure (4).
**Instructions**:
1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.
3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
```
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
concat = np.zeros((n_a + n_x, m))
concat[: n_a, :] = a_prev
concat[n_a :, :] = xt
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(np.dot(Wf, concat) + bf)
it = sigmoid(np.dot(Wi, concat) + bi)
cct = np.tanh(np.dot(Wc, concat) + bc)
c_next = it * cct + ft * c_prev
ot = sigmoid(np.dot(Wo, concat) + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
```
**Expected Output**:
<table>
<tr>
<td>
**a_next[4]**:
</td>
<td>
[-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
</td>
</tr>
<tr>
<td>
**a_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**c_next[2]**:
</td>
<td>
[ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
</td>
</tr>
<tr>
<td>
**c_next.shape**:
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**yt[1]**:
</td>
<td>
[ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
</td>
</tr>
<tr>
<td>
**yt.shape**:
</td>
<td>
(2, 10)
</td>
</tr>
<tr>
<td>
**cache[1][3]**:
</td>
<td>
[-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
</td>
</tr>
<tr>
<td>
**len(cache)**:
</td>
<td>
10
</td>
</tr>
</table>
### 2.2 - Forward pass for LSTM
Now that you have implemented one step of an LSTM, you can iterate it inside a for-loop to process a sequence of $T_x$ inputs.
<img src="images/LSTM_rnn.png" style="width:500px;height:300px;">
<caption><center> **Figure 4**: LSTM over multiple time-steps. </center></caption>
**Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps.
**Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
```
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((n_a, m))
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
```
**Expected Output**:
<table>
<tr>
<td>
**a[4][3][6]** =
</td>
<td>
0.172117767533
</td>
</tr>
<tr>
<td>
**a.shape** =
</td>
<td>
(5, 10, 7)
</td>
</tr>
<tr>
<td>
**y[1][4][3]** =
</td>
<td>
0.95087346185
</td>
</tr>
<tr>
<td>
**y.shape** =
</td>
<td>
(2, 10, 7)
</td>
</tr>
<tr>
<td>
**caches[1][1][1]** =
</td>
<td>
[ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
</td>
</tr>
<tr>
<td>
**c[1][2][1]** =
</td>
<td>
-0.855544916718
</td>
</tr>
<tr>
<td>
**len(caches)** =
</td>
<td>
2
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.
The rest of this notebook is optional, and will not be graded.
## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives of the cost in order to update the parameters. Similarly, in recurrent neural networks you calculate the derivatives of the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.
### 3.1 - Basic RNN backward pass
We will start by computing the backward pass for the basic RNN-cell.
<img src="images/rnn_cell_backprop.png" style="width:500;height:300px;"> <br>
<caption><center> **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculas. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. </center></caption>
#### Deriving the one step backward functions:
To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand.
The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \text{sech}(x)^2 = 1 - \tanh(x)^2$
Similarly, for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the chain rule applied to $\tanh(u)$ contributes a factor $(1-\tanh(u)^2)$ times the derivative of $u$.
The final two equations follow the same rule and are derived using the $\tanh$ derivative. Note that the terms are arranged so that the matrix dimensions match.
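The $1 - \tanh(x)^2$ identity used for `dtanh` can be verified numerically with a central finite difference (a small sketch):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)
eps = 1e-6

analytic = 1 - np.tanh(x)**2                                 # claimed derivative
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)  # central difference

# The two agree to well within finite-difference error
assert np.allclose(analytic, numeric, atol=1e-8)
```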
```
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
# dtanh = (1 - np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)**2) * da_next
dtanh = (1 - a_next**2) * da_next
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = np.dot(Wax.T, dtanh)
dWax = np.dot(dtanh , xt.T)
# compute the gradient with respect to Waa (≈2 lines)
da_prev = np.dot(Waa.T, dtanh)
dWaa = np.dot(dtanh , a_prev.T)
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, axis=1, keepdims=True)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dxt"][1][2]** =
</td>
<td>
-0.460564103059
</td>
</tr>
<tr>
<td>
**gradients["dxt"].shape** =
</td>
<td>
(3, 10)
</td>
</tr>
<tr>
<td>
**gradients["da_prev"][2][3]** =
</td>
<td>
0.0842968653807
</td>
</tr>
<tr>
<td>
**gradients["da_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]** =
</td>
<td>
0.393081873922
</td>
</tr>
<tr>
<td>
**gradients["dWax"].shape** =
</td>
<td>
(5, 3)
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]** =
</td>
<td>
-0.28483955787
</td>
</tr>
<tr>
<td>
**gradients["dWaa"].shape** =
</td>
<td>
(5, 5)
</td>
</tr>
<tr>
<td>
**gradients["dba"][4]** =
</td>
<td>
[ 0.80517166]
</td>
</tr>
<tr>
<td>
**gradients["dba"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
#### Backward pass through the RNN
Computing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.
**Instructions**:
Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time-step and updating the other variables accordingly.
```
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = None
(a1, a0, x1, parameters) = None
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈6 lines)
dx = None
dWax = None
dWaa = None
dba = None
da0 = None
da_prevt = None
# Loop through all the time steps
for t in reversed(range(None)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = None
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = None
dWax += None
dWaa += None
dba += None
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dx"][1][2]** =
</td>
<td>
[-2.07101689 -0.59255627 0.02466855 0.01483317]
</td>
</tr>
<tr>
<td>
**gradients["dx"].shape** =
</td>
<td>
(3, 10, 4)
</td>
</tr>
<tr>
<td>
**gradients["da0"][2][3]** =
</td>
<td>
-0.314942375127
</td>
</tr>
<tr>
<td>
**gradients["da0"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]** =
</td>
<td>
11.2641044965
</td>
</tr>
<tr>
<td>
**gradients["dWax"].shape** =
</td>
<td>
(5, 3)
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]** =
</td>
<td>
2.30333312658
</td>
</tr>
<tr>
<td>
**gradients["dWaa"].shape** =
</td>
<td>
(5, 5)
</td>
</tr>
<tr>
<td>
**gradients["dba"][4]** =
</td>
<td>
[-0.74747722]
</td>
</tr>
<tr>
<td>
**gradients["dba"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
## 3.2 - LSTM backward pass
### 3.2.1 One Step backward
The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all of the equations for the LSTM backward pass below. (If you enjoy calculus exercises, feel free to try deriving these from scratch yourself.)
### 3.2.2 gate derivatives
$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$
$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$
$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$
$$d\Gamma_f^{\langle t \rangle} = dc_{next}*\tilde c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$
### 3.2.3 parameter derivatives
$$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$
$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$
$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$
$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$
To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis = 1) on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims = True` option so each result keeps the shape $(n_a, 1)$.
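As a quick illustrative sketch (plain NumPy, outside the graded code), summing a stand-in gradient across the horizontal axis with `keepdims=True` preserves the column-vector shape expected for the bias gradients:

```
import numpy as np

d_gamma = np.arange(6.0).reshape(2, 3)        # a stand-in gate gradient of shape (n_a, m)
db = np.sum(d_gamma, axis=1, keepdims=True)   # shape (2, 1), not (2,)
print(db.shape)  # (2, 1)
```

Without `keepdims=True` the result would have shape `(n_a,)`, which breaks later broadcasting against the `(n_a, 1)` bias vectors.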
Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.
$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$
Here, the weights for equation 15 are the first $n_a$ rows (i.e. $W_f = W_f[:n_a,:]$ etc...)
$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$
$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$
where the weights for equation 17 are from $n_a$ to the end (i.e. $W_f = W_f[n_a:,:]$ etc...)
**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
```
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = None
n_a, m = None
# Compute gate-related derivatives; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = None
dcct = None
dit = None
dft = None
# Code equations (7) to (10) (≈4 lines)
dit = None
dft = None
dot = None
dcct = None
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = None
dc_prev = None
dxt = None
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dxt"][1][2]** =
</td>
<td>
3.23055911511
</td>
</tr>
<tr>
<td>
**gradients["dxt"].shape** =
</td>
<td>
(3, 10)
</td>
</tr>
<tr>
<td>
**gradients["da_prev"][2][3]** =
</td>
<td>
-0.0639621419711
</td>
</tr>
<tr>
<td>
**gradients["da_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dc_prev"][2][3]** =
</td>
<td>
0.797522038797
</td>
</tr>
<tr>
<td>
**gradients["dc_prev"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWf"][3][1]** =
</td>
<td>
-0.147954838164
</td>
</tr>
<tr>
<td>
**gradients["dWf"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWi"][1][2]** =
</td>
<td>
1.05749805523
</td>
</tr>
<tr>
<td>
**gradients["dWi"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWc"][3][1]** =
</td>
<td>
2.30456216369
</td>
</tr>
<tr>
<td>
**gradients["dWc"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWo"][1][2]** =
</td>
<td>
0.331311595289
</td>
</tr>
<tr>
<td>
**gradients["dWo"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dbf"][4]** =
</td>
<td>
[ 0.18864637]
</td>
</tr>
<tr>
<td>
**gradients["dbf"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbi"][4]** =
</td>
<td>
[-0.40142491]
</td>
</tr>
<tr>
<td>
**gradients["dbi"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbc"][4]** =
</td>
<td>
[ 0.25587763]
</td>
</tr>
<tr>
<td>
**gradients["dbc"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbo"][4]** =
</td>
<td>
[ 0.13893342]
</td>
</tr>
<tr>
<td>
**gradients["dbo"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
### 3.3 Backward pass through the LSTM RNN
This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimensions as your return variables, then iterate over all the time steps starting from the end, calling the one-step LSTM function you implemented at each iteration. You will then update the parameters by summing the per-step gradients. Finally, return a dictionary with the new gradients.
**Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. At each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.
```
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈12 lines)
dx = None
da0 = None
da_prevt = None
dc_prevt = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# loop back over the whole sequence
for t in reversed(range(None)):
# Compute all gradients using lstm_cell_backward
gradients = None
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
```
**Expected Output**:
<table>
<tr>
<td>
**gradients["dx"][1][2]** =
</td>
<td>
[-0.00173313 0.08287442 -0.30545663 -0.43281115]
</td>
</tr>
<tr>
<td>
**gradients["dx"].shape** =
</td>
<td>
(3, 10, 4)
</td>
</tr>
<tr>
<td>
**gradients["da0"][2][3]** =
</td>
<td>
-0.095911501954
</td>
</tr>
<tr>
<td>
**gradients["da0"].shape** =
</td>
<td>
(5, 10)
</td>
</tr>
<tr>
<td>
**gradients["dWf"][3][1]** =
</td>
<td>
-0.0698198561274
</td>
</tr>
<tr>
<td>
**gradients["dWf"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWi"][1][2]** =
</td>
<td>
0.102371820249
</td>
</tr>
<tr>
<td>
**gradients["dWi"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWc"][3][1]** =
</td>
<td>
-0.0624983794927
</td>
</tr>
<tr>
<td>
**gradients["dWc"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dWo"][1][2]** =
</td>
<td>
0.0484389131444
</td>
</tr>
<tr>
<td>
**gradients["dWo"].shape** =
</td>
<td>
(5, 8)
</td>
</tr>
<tr>
<td>
**gradients["dbf"][4]** =
</td>
<td>
[-0.0565788]
</td>
</tr>
<tr>
<td>
**gradients["dbf"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbi"][4]** =
</td>
<td>
[-0.06997391]
</td>
</tr>
<tr>
<td>
**gradients["dbi"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbc"][4]** =
</td>
<td>
[-0.27441821]
</td>
</tr>
<tr>
<td>
**gradients["dbc"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
<tr>
<td>
**gradients["dbo"][4]** =
</td>
<td>
[ 0.16532821]
</td>
</tr>
<tr>
<td>
**gradients["dbo"].shape** =
</td>
<td>
(5, 1)
</td>
</tr>
</table>
### Congratulations !
Congratulations on completing this assignment. You now understand how recurrent neural networks work!
Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.
```
import numpy as np
import pandas as pd
import seaborn as sns
import math
import matplotlib.pylab as plt
from IPython.display import display, HTML
rs = np.random.RandomState(10)
```
## Links
- https://seaborn.pydata.org/examples/index.html
- https://python-graph-gallery.com/all-charts/
## `DataFrame` Sample Data
`seaborn` provides some sample data from https://github.com/mwaskom/seaborn-data. They are just CSV files, and we can load them for testing using the `sns.load_dataset` function. The return type of `sns.load_dataset` is `pandas.DataFrame`.
By default, if we don't assign the result to a variable, Jupyter will display it. To see the content of a variable explicitly, use `IPython.display.display`.
```
# sns.load_dataset('anscombe')
# sns.load_dataset('attention')
# sns.load_dataset('brain_networks')
# sns.load_dataset('car_crashes')
# sns.load_dataset('diamonds')
# sns.load_dataset('dots')
# sns.load_dataset('exercise')
# sns.load_dataset('flights')
# sns.load_dataset('fmri')
# sns.load_dataset('gammas')
# sns.load_dataset('iris')
sns.load_dataset('mpg') # Miles Per Gallon
# sns.load_dataset('planets')
# sns.load_dataset('tips')
# sns.load_dataset('titanic')
```
### Constructing `DataFrame`
We can also build a `DataFrame` ourselves using a dictionary.
```
data = {'x':[1, 2, 3], 'y':[9, 8, 7]}
pd.DataFrame(data)
```
## Plotting data
### Distribution / Histogram
Using histogram diagrams, we can visually see the value distribution of a particular metric. Take `horsepower` from `mpg` sample dataset as an example.
```
data = sns.load_dataset('mpg')
sns.distplot([-1 if math.isnan(n) else n for n in data['horsepower'].tolist()])
```
Here, most cars have power in the range of 50 to 150 horsepower.
We can also split and show this metric based on its `origin`.
```
data = sns.load_dataset('mpg')
# Split the data based on `origin`.
d = {}
for index, row in data.iterrows():
    nation = row['origin']
    value = -1 if math.isnan(row['horsepower']) else row['horsepower']
    if nation in d:
        d[nation].append(value)
    else:
        d[nation] = [value]  # include the first row's value too
# Then draw each of them on the graph.
for key in d:
    sns.distplot(d[key], label=key)
plt.legend()
```
Another example is `weight` metric.
```
data = sns.load_dataset('mpg')
sns.distplot(data['weight'].tolist())
```
### Relationship
In all the previous graphs, we only looked at one metric at a time. Sometimes it's useful to know the connection/correlation between two parameters.
Let's plot `horsepower` and `acceleration` together and analyze them.
```
data = sns.load_dataset('mpg')
sns.relplot(x='horsepower', y='acceleration', data=data)
data = sns.load_dataset('mpg')
sns.jointplot(x='horsepower', y='acceleration', data=data)
data = sns.load_dataset('mpg')
sns.jointplot(x='horsepower', y='acceleration', data=data, kind='kde')
```
Other metrics like `weight`, `horsepower` and `mpg` are worth looking at too.
```
data = sns.load_dataset('mpg')
sns.relplot(x='weight', y='mpg', data=data)
data = sns.load_dataset('mpg')
sns.relplot(x='weight', y='horsepower', data=data)
```
### Other Graph Types
```
data = sns.load_dataset('mpg')
sns.relplot(x='cylinders', y='horsepower', kind="line", data=data)
sns.relplot(x='cylinders', y='mpg', kind="line", data=data)
```
The plot below clearly shows how power is distributed by number of cylinders.
```
data = sns.load_dataset('mpg')
sns.stripplot(x="cylinders", y="horsepower", data=data, jitter=True)
```
Another graph that can convey the same information is the violin plot.
```
data = sns.load_dataset('mpg')
sns.violinplot(x="cylinders", y="horsepower", data=data)
# sns.swarmplot(x="cylinders", y="horsepower", data=data, color="white")
```
We may simply want to know how many cars are manufactured with a certain number of cylinders.
```
data = sns.load_dataset('mpg')
sns.countplot(x="cylinders", data=data)
data = sns.load_dataset('mpg')
sns.countplot(x="cylinders", hue='origin', data=data)
```
All the charts below show more information in one graph by utilizing colors and symbols.
```
data = sns.load_dataset('mpg')
sns.relplot(x='horsepower', y='mpg', size='cylinders', style='cylinders', data=data)
```
Let's pick a random point from the chart below; we can extract 5 pieces of information:
- `mpg` -> fuel consumption
- `horsepower`
- color -> year of manufacture
- size -> weight
- shape -> number of cylinders
So, some conclusions can be drawn from looking at the whole graph:
- The more cylinders a car has, the lower the fuel efficiency and the higher the horsepower.
- 3- and 5-cylinder setups are rare.
- Most newer cars have 4 cylinders.
- 4-cylinder cars are lighter, have lower horsepower but higher fuel efficiency.
```
data = sns.load_dataset('mpg')
sns.relplot(x='horsepower', y='mpg', size='weight', hue='model_year', style='cylinders',
height=10, sizes=(40, 400),
data=data)
data = sns.load_dataset('mpg')
sns.relplot(x='horsepower', y='mpg', size='weight', hue='origin', style='cylinders',
height=10, sizes=(40, 400), alpha=.5,
data=data)
```
Another useful graph type is heatmap. Let's use `flights` sample dataset to draw a heatmap.
```
data = sns.load_dataset('flights')
display(data)
data2 = data.pivot(index='month', columns='year', values='passengers')
display(data2)
ax = sns.heatmap(data2)
```
```
#The first cell is just to align our markdown tables to the left vs. center
%%html
<style>
table {float:left}
</style>
```
# Manipulating Strings
***
## Learning Objectives
In this lesson you will:
1. Learn the fundamentals of processing text stored in string values
2. Apply various methods to strings
>- Note: This lesson concludes our Python fundamentals section of this course and the material for the Midterm
>- After this, we should have enough of the basic understanding of Python to start working on applied business analytics problems!
## Links to topics and functions:
>- <a id='Lists'></a>[String Literals](#String-Literals)
>- <a id='methods'></a>[String Methods](#String-Methods)
### References:
>- Sweigart(2015, pp. 123-143)
>- w3Schools: https://www.w3schools.com/python/python_strings.asp
#### Don't forget about the Python visualizer tool: http://pythontutor.com/visualize.html#mode=display
## Table of String Methods:
|Methods/Functions |Description |
|:-----------: |:-------------|
|upper() |Returns a new string with all UPPER CASE LETTERS|
|lower() |Returns a new string with all lower case letters|
|isupper() |Checks whether all the letters in a string are UPPER CASE|
|islower() |Checks whether all the letters in a string are lower case|
|isalpha() |Checks whether a string only has letters and is not blank|
|isalnum() |Checks whether only letters and numbers are in the string|
|isdecimal() |Checks whether the string only consists of numeric characters|
|isspace() |Checks whether the string only contains: spaces, tabs, and new lines|
|istitle() |Checks whether the string only contains words that start with upper followed by lower case|
|startswith() |Checks if the string value begins with the string passed to the method|
|endswith() |Checks if the string value ends with the string passed to the method|
|join() |Concatenates a list of strings into one string|
|split() |Basically, "unconcatenates" a string into a list of strings|
|rjust() |Right justifies a string based on an integer value of spaces|
|ljust() |Left justifies a string based on an integer value of spaces|
|center() |Centers a string based on an integer value of spaces|
|strip() |Removes whitespace characters at the beginning and end of string|
|rstrip() |Removes whitespace from the right end of the string|
|lstrip() |Removes whitespace from the left end of the string|
### Narration Videos
- https://youtu.be/7pdDMYhScU4
- https://youtu.be/vkEauaYpNi0
- https://youtu.be/oA-eM79nqhI
- https://youtu.be/F2ZDCROgIOo
# String Literals
>- Basically, this is telling Python where a string begins and ends
>- We have already used single `'` and `"` quotes but what if we want to mix these?
### Using double quotes
>- A wrong way and a correct way to define a string in Python using quotes
#### Another way using escape characters
### Escape characters allow us to put characters in a string that would otherwise be impossible
#### Here are some common escape characters
|Escape Character | Prints as |
:-----------: |:----------: |
|\\' |Single quote |
|\\" |Double quote |
|\t |Tab |
|\n |New line |
|\\\\ |Backslash |
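For reference, a short sketch showing each escape sequence from the table in action (the example strings themselves are made up for illustration):

```
# Each escape sequence below produces the character shown in the table.
single = 'Ralphie\'s herd'            # \' -> single quote
double = "She said, \"Go Buffs!\""    # \" -> double quote
tabbed = 'Name:\tRalphie'             # \t -> tab
lines = 'Line one\nLine two'          # \n -> new line
backslash = 'C:\\Users'               # \\ -> backslash
print(lines)
```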
### Multi-line Strings
>- Use triple quotes
>- All text within triple quotes is considered part of the string
>- This is particularly useful when commenting out your code
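A minimal sketch of a triple-quoted string (the text is a made-up example): everything between the triple quotes, including the line breaks, is part of the string.

```
multi = """Dear Class,
Triple-quoted strings keep
their line breaks."""
print(multi)
print(multi.count('\n'))  # 2
```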
### Indexing and Slicing Strings
>- Recall how we used indexes and slicing with lists: `list[1]`, `list[0:3]`, etc
>- Also recall how we said strings are "list-like"
>- We can think of a string as a list with each character having an index
#### Let's slice up some strings
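A short slicing sketch (the value of `ralphie` here is an assumed example; the lesson's actual variable may differ):

```
ralphie = 'Ralphie the Buffalo'  # assumed example value

print(ralphie[0])     # 'R'
print(ralphie[0:7])   # 'Ralphie'  (the character at index 7 is excluded)
print(ralphie[-7:])   # 'Buffalo'  (negative indexes count from the end)
print(ralphie[::-1])  # the whole string reversed
```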
### How many times does each character appear in `ralphie`?
#### How many times does 'f' appear in our `ralphie` variable?
#### Recall: get a sorted count of characters from `charCount`
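One way to build and sort the character count (again assuming the example `ralphie` value; 'f' appears twice, both in 'Buffalo'):

```
ralphie = 'Ralphie the Buffalo'  # assumed example value

charCount = {}
for ch in ralphie:
    charCount[ch] = charCount.get(ch, 0) + 1

print(charCount.get('f', 0))  # 2
# Sorted count, most frequent characters first:
print(sorted(charCount.items(), key=lambda kv: kv[1], reverse=True))
```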
## String Methods
### upper(), lower(), isupper(), islower()
##### Are all the letters uppercase?
##### Are all the letters lowercase?
#### We can also type strings prior to the method
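A quick sketch of these four methods, including calling a method directly on a string literal:

```
spam = 'Hello world!'
print(spam.upper())       # 'HELLO WORLD!'
print(spam.lower())       # 'hello world!'
print(spam.isupper())     # False -- not every letter is upper case
print('HELLO'.isupper())  # True
print('hello'.islower())  # True
```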
### `isalpha()`, `isalnum()`, `isdecimal()`, `isspace()`, `istitle()`
>- These can be useful for data validation
##### Does the string only contain letters with no space characters?
##### Does the string only contain letters or numbers with no spaces?
##### Does the string only contain numbers?
##### Does the string contain only words that start with a capital followed by lowercase letters?
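The questions above can each be answered with one method call; a quick sketch:

```
print('hello'.isalpha())      # True  -- letters only, no spaces
print('hello 123'.isalnum())  # False -- the space disqualifies it
print('hello123'.isalnum())   # True
print('123'.isdecimal())      # True
print('   '.isspace())        # True
print('This Is Title Case'.istitle())  # True
```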
#### Example showing how the `isX` methods are useful
>- Task: create a program that will ask a user for their age and print their age to the screen
>>- Create data validation for age requiring only numbers for the input
>>- If the user does not enter a number, ask them to enter one.
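A testable sketch of the task: `validate_age` takes a list standing in for successive `input()` responses, so the validation logic can run without a keyboard; the real program would loop on `input()` instead. The function name and structure are illustrative, not the lesson's official solution.

```
def validate_age(entries):
    """Return the first entry that passes isdecimal() validation, as an int."""
    for entry in entries:
        if entry.isdecimal():
            return int(entry)
        print('Please enter a number for your age.')
    return None  # no valid age was entered

print(validate_age(['twenty', '20']))  # 20
```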
### `startswith()` and `endswith()` methods
##### Does the string start/end with a particular string?
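A short sketch; note both methods are case sensitive:

```
print('Hello world!'.startswith('Hello'))  # True
print('Hello world!'.endswith('world!'))   # True
print('Hello world!'.startswith('hello'))  # False -- case sensitive
print('Hello'.startswith('Hello'))         # True  -- the whole string counts too
```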
### `join()` and `split()` methods
#### `join()`
>- Take a list of strings and concatenate them into one string
>- The join method is called on a string value and is usually passed a list value
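A minimal `join()` sketch: the string you call it on becomes the separator between list items.

```
animals = ['cats', 'rats', 'bats']
print(', '.join(animals))                        # 'cats, rats, bats'
print(' '.join(['My', 'name', 'is', 'Simon']))   # 'My name is Simon'
```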
#### `split()`
>- Commonly used to split a multi-line string along the newline characters
>- The split method is called on a string value and returns a list of strings
```
deanLetter = '''
Dear Dean Matusik:
We have been working really hard
to learn Python this semester.
The skills we are learning in
the analytics program will
translate into highly demanded
jobs and higher salaries than
those without analytics skills.
'''
```
#### Split `deanLetter` based on the line breaks
>- Will result in a list of all the string values based on line breaks
##### Splitting on another character
##### The default separator is any white space (new lines, spaces, tabs, etc)
##### We can change the default number of splits if we pass a second parameter
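The variations above can be sketched in a few lines (the example sentence is made up):

```
s = 'My name is Simon'
print(s.split())        # ['My', 'name', 'is', 'Simon'] -- default: any whitespace
print(s.split('m'))     # ['My na', 'e is Si', 'on']    -- split on a character
print(s.split(' ', 1))  # ['My', 'name is Simon']       -- at most one split
print('line1\nline2\nline3'.split('\n'))  # split a multi-line string on line breaks
```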
### Justifying Text with `rjust()`, `ljust()`, and `center()`
>- General syntax: `string.rjust(length, character)` where:
>>- length is required and represents the total length of the string
>>- character is optional and represents a character to fill in missing space
##### We can insert another character for the spaces
##### Insert another character for spaces
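A quick sketch of all three justification methods, with and without a fill character:

```
print('Hello'.rjust(10))        # '     Hello'
print('Hello'.rjust(10, '*'))   # '*****Hello'
print('Hello'.ljust(10, '-'))   # 'Hello-----'
print('Hello'.center(11, '='))  # '===Hello==='
```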
### Justifying Text Example
>- Task: write a function that accepts 3 parameters: itemsDict, leftWidth, rightWidth and prints a table for majors and salaries
>>- itemsDict will be a dictionary variable storing salaries (the values) for majors (the keys)
>>- leftWidth is an integer parameter that will get passed to the ljust() method to define the column width of majors
>>- rightWidth is an integer parameter that will get passed to the ljust() method to define the column width of salaries
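One possible sketch of the function (the salary numbers below are made-up sample data):
```
def formatRow(left, right, leftWidth, rightWidth):
    """Return one table row: left column then right column, each left-justified."""
    return left.ljust(leftWidth) + str(right).ljust(rightWidth)

def printTable(itemsDict, leftWidth, rightWidth):
    """Print a two-column table of majors (keys) and salaries (values)."""
    for major, salary in itemsDict.items():
        print(formatRow(major, salary, leftWidth, rightWidth))

sampleSalaries = {'Analytics': 85000, 'Finance': 72000, 'Marketing': 61000}
printTable(sampleSalaries, 15, 10)
```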
### Some basic analytics on our salary table
>- How many total majors were analyzed? Name the variable `sampSize`
>- What was the average salary of all majors? Name the variable `avgSal`
Hi Boss, here is a summary of the results of the salary study:
>- Total majors: {{sampSize}}
>- Average Salary: ${{avgSal}}
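A sketch of the calculations, reusing made-up sample data (replace with the dictionary from the table exercise):
```
sampleSalaries = {'Analytics': 85000, 'Finance': 72000, 'Marketing': 61000}
sampSize = len(sampleSalaries)
avgSal = round(sum(sampleSalaries.values()) / sampSize, 2)
print(sampSize)  # 3
print(avgSal)    # 72666.67
```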
#### Recall: To print results in a markdown cell you need to do the following:
Install some notebook extensions using the Anaconda shell (new terminal on a Mac)
1. If you have installed Anaconda on your machine then...
2. Search for "Anaconda Powershell prompt"
3. Open up the Anaconda Powershell and type the following commands
>- pip install jupyter_contrib_nbextensions
>- jupyter contrib nbextension install --user
>- jupyter nbextension enable python-markdown/main
4. After everything installs on your machine, you will need to restart Anaconda and Jupyter
<a id='top'></a>[TopPage](#Teaching-Notes)
# SA-CCR analytical app
This notebook provides a reference implementation of interactive analytical app for SA-CCR analytics in python using Jupyter Notebook and [atoti](https://www.atoti.io).
<img src="./app-preview.gif" width="70%">
As a quick reminder, SA-CCR is a regulatory methodology for computing EAD (Exposure At Default) which is part of the consolidated Basel framework. It has already been implemented for financial institutions in Europe by Regulation (EU) 2019/876 (CRR II), applicable from June 2021, and will be applicable from January 2022 in the US. In this post, I will walk you through code snippets implementing the calculations as defined in the BCBS 279 document.
We'll start by creating an atoti app, then we'll load sample data and implement chains of calculations aggregating EAD from trades, collateral and supervisory parameters on-the-fly. This chart is summarizing the calculation chains:
<img src="./SA-CCR.svg" width="50%">
We will be referring to the relevant paragraphs of the Consolidated Basel Framework [CRE52](https://www.bis.org/basel_framework/chapter/CRE/52.htm?inforce=20191215) - see numbers in the square brackets.
```
import pandas as pd
pd.set_option("display.max_rows", 30)
pd.set_option("display.max_columns", 100)
pd.options.display.float_format = "{:,.2f}".format
```
# Launch atoti
```
import atoti as tt
config = tt.config.create_config(metadata_db="./metadata.db")
session = tt.create_session(config=config)
session.url
```
## Input data
In this section we will load the data used by the EAD aggregation into in-memory datastores and start creating a cube. The sample data will be fetched from csv files hosted on s3, and you can replace them with your own sources.
- `trades_store` with information on the trades, books, notionals, market values, classified into asset classes. The time period parameter dates - Mi, Ei, Si and Ti [52.31] - are also provided in this file.
- `nettingSets_store` providing netting set attributes, for instance, the MPOR and collateral
- `supervisoryParameters_store` - values set in section [52.72] "Supervisory specified parameters".
```
trades_store = session.read_csv(
"s3://data.atoti.io/notebooks/sa-ccr/T0/trades.csv",
keys=["AsOfDate", "TradeId"],
types={
"Notional": tt.types.DOUBLE,
"MarketValue": tt.types.DOUBLE,
"AsOfDate": tt.types.LOCAL_DATE,
"Mi_Date": tt.types.LOCAL_DATE,
"Si_Date": tt.types.LOCAL_DATE,
"Ti_Date": tt.types.LOCAL_DATE,
"Ei_Date": tt.types.LOCAL_DATE,
},
store_name="Trades",
)
trades_store.head(3)
```
Let's now load netting set attributes - providing info about collateral and csa details - and link them to trades:
```
nettingSets_store = session.read_csv(
"s3://data.atoti.io/notebooks/sa-ccr/T0/netting-set-attributes.csv",
keys=["AsOfDate", "NettingSetId"],
types={"MPOR": tt.types.DOUBLE, "Collateral": tt.types.DOUBLE},
store_name="NettingSets",
)
trades_store.join(
nettingSets_store, mapping={"NettingSetId": "NettingSetId", "AsOfDate": "AsOfDate"}
)
nettingSets_store.head(3)
```
In the next cell I'm loading parameters into a separate store and using the "join" command to link parameters to trades:
```
supervisoryParameters_store = session.read_csv(
"s3://data.atoti.io/notebooks/sa-ccr/T0/parameters.csv", keys=["AssetClass", "SubClass"], store_name="Parameters",
)
trades_store.join(
supervisoryParameters_store,
mapping={"AssetClass": "AssetClass", "SubClass": "SubClass"},
)
supervisoryParameters_store.head(3)
```
As a next step, I'm creating the cube:
```
# Usually atoti's create_cube command creates .SUM and .MEAN measures automatically from numerical data inputs,
# to prevent that, I'm running the command with mode="no_measures"
cube = session.create_cube(trades_store, "SA-CCR", mode="no_measures")
lvl = cube.levels
m = cube.measures
h = cube.hierarchies
# Setting the date dimension to slicing, so that measures do not aggregate risk coming from different business dates:
h["AsOfDate"].slicing = True
```
Let's display the current data model schema. Later in this notebook we'll extend this data model with additional attributes, but this is the minimum set of fields that we'd need to compute EAD.
```
cube.schema
```
# SA-CCR calculations
In this section we will create aggregation functions (measures) to compute EAD from the input data described above.
We will proceed bottom-up - starting with the supervisory duration metric, then move onto the trade-level adjusted notional, maturity factors, delta adjustments, addons and the necessary EAD components.
We will keep referring to the corresponding paragraphs of the [CRE52](https://www.bis.org/basel_framework/chapter/CRE/52.htm?inforce=20191215) as we progress.
## Supervisory duration - [52.34]
In this section we will compute Supervisory Duration per formula defined in CRE52 para [52.34]:
$$SupervisoryDuration = max(\text{10 business days}, \frac{(e^{-0.05*S_i} - e^{-0.05*E_i})}{0.05})$$ for Credit Spread and Interest Rates. We assume it to be equal to 1 for all other Asset Classes.
In this expression, time period parameters are defined in [52.31] as follows:
- $S_i$ is the period of time from the valuation date until start of the time period referenced by an interest rate or credit contract
- and $E_i$ is the period of time from the valuation date until the end of the time period referenced by an interest rate or credit contract.
We will compute them from the Si and Ei dates provided in the input data.
```
number_of_days_in_year = 365
si_date_diff = (
tt.date_diff(lvl["AsOfDate"], lvl["Si_Date"], unit="days") / number_of_days_in_year
)
ei_date_diff = (
tt.date_diff(lvl["AsOfDate"], lvl["Ei_Date"], unit="days") / number_of_days_in_year
)
# The Si and Ei measures will be computed only for the interest rates and credit spread products.
m["Si"] = tt.filter(si_date_diff, lvl["AssetClass"].isin("IR", "CR_IDX", "CR_SN"),)
m["Ei"] = tt.filter(ei_date_diff, lvl["AssetClass"].isin("IR", "CR_IDX", "CR_SN"),)
# Supervisory duration is floored at 10 business days - see [52.34]
supervisory_duration_floor = 10 / 250
# The following code will create a measure allowing us to visualize the supervisory duration
m["Supervisory_Duration"] = tt.where(
lvl["AssetClass"].isin("IR", "CR_IDX", "CR_SN"),
tt.max(
(tt.exp(-0.05 * m["Si"]) - tt.exp(-0.05 * m["Ei"])) / 0.05,
supervisory_duration_floor,
),
1.0,
)
m["Supervisory_Duration"].formatter = "DOUBLE[#0,0000]"
# Supervisory_Duration can be computed __above__ trade level, as
# the minimum required attributes are: Si_Date, Ei_Date and AssetClass.
cube.query(
m["Supervisory_Duration"],
m["Si"],
m["Ei"],
levels=[lvl["AssetClass"], lvl["Si_Date"], lvl["Ei_Date"]],
).head(3)
```
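As an independent sanity check of the formula, the same supervisory duration can be evaluated in plain Python (the Si/Ei values below are arbitrary, not from the dataset):
```
import math

def supervisory_duration(si, ei, floor=10 / 250):
    """max(10 business days, (exp(-0.05*Si) - exp(-0.05*Ei)) / 0.05) -- [52.34]."""
    return max(floor, (math.exp(-0.05 * si) - math.exp(-0.05 * ei)) / 0.05)

print(round(supervisory_duration(0.0, 5.0), 4))   # a 5-year contract starting today
print(supervisory_duration(0.0, 0.01))            # very short contract: floored at 10/250
```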
## Trade-level adjusted notional (for trade i): di - [52.34]
Having defined the Supervisory Duration measure, we'll apply it to the notionals input data to obtain the adjusted notional. Please review paragraphs [52.33] - [52.34] of the CRE52 for more information on the definitions of these formulae.
$$AdjustedNotional = Notional * SupervisoryDuration$$
```
# This measure will aggregate input data - Notional Field
# We assume that all Notionals are provided in the same currency, otherwise we'd need to apply
# FX rates here.
m["Notional"] = tt.agg.sum(trades_store["Notional"])
# Aggregation is applied per AssetClass, Si_Date and Ei_Dates as these are the levels
# required by the Supervisory_Duration calculation.
m["Adjusted_Notional"] = tt.agg.sum(
m["Notional"] * m["Supervisory_Duration"],
scope=tt.scope.origin(lvl["AssetClass"], lvl["Si_Date"], lvl["Ei_Date"]),
)
cube.query(m["Adjusted_Notional"], levels=[lvl["TradeId"]]).head(3)
```
## Maturity factors - [52.48] - [52.53]
The Maturity Factor takes different values, depending on whether the trade is unmargined or margined:
unmargined [52.48]:
$$MF = \sqrt{\frac{min(M_i, 1year)}{1year}}$$
margined [52.52]:
$$MF = 1.5 * \sqrt{\frac{MPOR}{1 year}}$$
In the above formulae:
- $MPOR$ is the Margin Period Of Risk, provided in the input dataset, let's assume it has been provided in calendar days (not business days)
- $M_i$ is the maturity of trade $i$
```
# Measure for the input MPOR values:
m["MPOR"] = tt.agg.single_value(nettingSets_store["MPOR"])
# This measure will compute date difference depending on the value date and the maturity date:
m["Mi"] = tt.date_diff(lvl["AsOfDate"], lvl["Mi_Date"], unit="days")
# The MF_unmargined formula takes into account the 10 business days floor for Mi:
MF_floor = 10
MF_unmargined = tt.sqrt(
tt.min(tt.max(m["Mi"], MF_floor), number_of_days_in_year) / number_of_days_in_year
)
# In the following calculation of MF_margined we assume that MPOR is provided in calendar days.
# Please note that this formula might need to be adjusted to incorporate floors defined in [52.50].
MF_margined = 1.5 * tt.sqrt(m["MPOR"] / number_of_days_in_year)
# This measure will compute maturity factor, depending on whether the trade is margined or unmargined:
m["Maturity_Factor"] = tt.agg.single_value(
tt.where(lvl["isMargined"] == "Y", MF_margined, MF_unmargined,),
scope=tt.scope.origin(lvl["NettingSetId"]),
)
# Maturity Factor requires NettingSetId and the Mi date:
cube.query(m["Maturity_Factor"], levels=[lvl["NettingSetId"], lvl["Mi_Date"]]).head(3)
```
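The two maturity factor formulas can be cross-checked in plain Python (the maturity and MPOR values below are arbitrary):
```
import math

DAYS_IN_YEAR = 365

def mf_unmargined(mi_days, floor_days=10):
    """sqrt(min(Mi, 1y) / 1y), with Mi floored at 10 business days -- [52.48]."""
    return math.sqrt(min(max(mi_days, floor_days), DAYS_IN_YEAR) / DAYS_IN_YEAR)

def mf_margined(mpor_days):
    """1.5 * sqrt(MPOR / 1y) -- [52.52]; MPOR assumed in calendar days."""
    return 1.5 * math.sqrt(mpor_days / DAYS_IN_YEAR)

print(mf_unmargined(730))         # maturity capped at one year -> 1.0
print(round(mf_margined(10), 4))  # 10-day MPOR
```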
## Supervisory delta adjustments - [52.38] - [52.41]
Supervisory Delta takes different values depending on the nature of the instrument:
| $\delta_i$ | Bought / Long in the primary risk factor | Sold / Short in the primary risk factor |
|-------------------|-----------------------------------------------------------------------------------|---------------------------------------------------------------------------------- |
| Call Options | $+ \Phi(\frac{ln(P_i / K_i) + 0.5 * \sigma_i^2 * T_i}{\sigma_i * \sqrt{T_i}})$ | $ - \Phi(\frac{ln(P_i / K_i) + 0.5 * \sigma_i^2 * T_i}{\sigma_i * \sqrt{T_i}})$ |
| Put Options | $- \Phi( - \frac{ln(P_i / K_i) + 0.5 * \sigma_i^2 * T_i}{\sigma_i * \sqrt{T_i}})$ | $ + \Phi( - \frac{ln(P_i / K_i) + 0.5 * \sigma_i^2 * T_i}{\sigma_i * \sqrt{T_i}})$ |
| CDO tranches | $+ \frac{15}{(1+14*A_i)*(1+14*D_i)}$ | $- \frac{15}{(1+14*A_i)*(1+14*D_i)}$ |
| Other Instruments | $+1$ | $-1$ |
Where $\Phi$ is the standard normal cumulative distribution function,
- $P_i$ is the Underlying Price of trade $i$,
- $K_i$ is the Strike Price of trade $i$,
- $\sigma_i$ is the Supervisory Volatility of trade $i$,
- $T_i$ is the latest contractual exercise date of the option $i$
- $A_i$ is the Attachment Point of the CDO tranche $i$
- and $D_i$ is the Detachment Point of the CDO tranche $i$
The delta adjustment, together with the maturity factor, will be applied to the adjusted notional to obtain the trade effective notional. We'll define the delta adjustment for any trade as the product of:
- a long/short indicator: 1 or -1,
- the delta adjustment for call options, or 1 otherwise (`call_multiplier`),
- the delta adjustment for put options, or 1 otherwise (`put_multiplier`),
- the CDO adjustment, or 1 otherwise (`cdo_multiplier`).
Let's start by adding measures displaying input data:
```
# Measure displaying supervisory option volatility parameter (input data):
m["Supervisory_Option_Volatility"] = tt.agg.single_value(
supervisoryParameters_store["SupervisoryOptionVolatility"]
)
# Measures displaying trade parameters (input data):
m["Underlying Price"] = tt.agg.single_value(trades_store["Underlying Price"])
m["Strike Price"] = tt.agg.single_value(trades_store["Strike Price"])
m["AttachPoint"] = tt.agg.single_value(trades_store["AttachPoint"])
m["DetachPoint"] = tt.agg.single_value(trades_store["DetachPoint"])
```
And this measure will display 1/-1 for long/short positions.
```
m["Direction"] = tt.agg.single_value(trades_store["Direction"])
```
Let's move onto computing options delta adjustment. To compute time to expiry in years from trade's expiry date attribute, we can use the following syntax:
```
# Computing time period until options' expiry Ti:
m["Ti"] = (
tt.date_diff(lvl["AsOfDate"], lvl["Ti_Date"], unit="days") / number_of_days_in_year
)
# Cumulative function of the standard normal distribution using atoti erf.
# See formula https://en.wikipedia.org/wiki/Error_function#Cumulative_distribution_function
def CDF(x):
return (1.0 + tt.math.erf(x / tt.sqrt(2))) / 2.0
# We'll wrap the argument of the CDF function into a measure:
m["d"] = (
tt.log(m["Underlying Price"] / m["Strike Price"])
+ 0.5 * (m["Supervisory_Option_Volatility"] ** 2) * m["Ti"]
) / (m["Supervisory_Option_Volatility"] * tt.sqrt(m["Ti"]))
cube.query(
m["d"],
levels=[lvl["TradeId"], lvl["Ti_Date"]],
condition=lvl["OptionType"].isin("C", "P"),
).head(3)
```
Finally, the call and put multipliers can be defined as follows:
```
call_multiplier = tt.where(lvl["OptionType"] == "C", CDF(m["d"]), 1.0)
put_multiplier = tt.where(lvl["OptionType"] == "P", -CDF(-m["d"]), 1.0)
```
Using the "AttachPoint" and "DetachPoint" values, the delta adjustment for CDO positions can be defined as follows:
```
cdo_multiplier = tt.where(
lvl["IsCDO"] == "Y",
15.0 / ((1.0 + 14.0 * m["AttachPoint"]) * (1.0 + 14.0 * m["DetachPoint"])),
1.0,
)
```
Using the product of the above multipliers, we will define the combined Delta Adjustment measure:
```
m["Delta_Adjustment"] = (
m["Direction"] * call_multiplier * put_multiplier * cdo_multiplier
)
```
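A quick plain-Python cross-check of the option delta formula (the price, strike and volatility inputs below are arbitrary sample values):
```
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0

def option_delta(direction, option_type, price, strike, vol, ti):
    """Supervisory delta for options; direction is +1 (bought) or -1 (sold)."""
    d = (math.log(price / strike) + 0.5 * vol ** 2 * ti) / (vol * math.sqrt(ti))
    if option_type == "C":
        return direction * norm_cdf(d)
    return -direction * norm_cdf(-d)

# An at-the-money call with small sigma^2 * T has a delta close to +0.5:
print(round(option_delta(+1, "C", 100.0, 100.0, 0.2, 0.01), 3))
```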
## Trade Effective Notional - [52.30]
Bringing the trade effective notional components together and calculating on trade level:
```
m["Trade_Effective_Notional"] = tt.agg.sum(
m["Adjusted_Notional"] * m["Delta_Adjustment"] * m["Maturity_Factor"],
scope=tt.scope.origin(
lvl["TradeId"],
lvl["NettingSetId"],
lvl["OptionType"],
lvl["Mi_Date"],
lvl["IsCDO"],
lvl["Ti_Date"],
),
)
cube.query(
m["Trade_Effective_Notional"],
m["Adjusted_Notional"],
m["Delta_Adjustment"],
m["Maturity_Factor"],
levels=[
lvl["TradeId"],
lvl["NettingSetId"],
lvl["OptionType"],
lvl["Mi_Date"],
lvl["IsCDO"],
lvl["Ti_Date"],
],
).head(3)
```
## Asset class level add-ons - [52.55]
Having obtained the trade effective notionals reflecting the necessary supervisory adjustments, we can use them to compute AddOns for each of the five asset classes.
## Add-on for interest rate derivatives - [52.56]
The steps for computing $Addon^{IR}$ are set in [52.57].
- For Interest Rates, each Hedging Set is defined as currency - in our data model it comes from the field Underlying.
- Each hedging set is then separated into maturity buckets depending on $E_i$. If $E_i$ is less than one year, the trade goes into the first maturity bucket; if it is between one and five years, into the second; and if it is more than five years, into the third maturity bucket.
- In each Maturity Bucket and each Hedging Set, the effective notional is the sum of the Trade Effective Notionals of the trades.
The effective notional $EN_{HS}$ of the Hedge Set $HS$ is then determined with the following formula:
$$EN_{HS} = ( D_{B1}^2 + D_{B2}^2 + D_{B3}^2 + 1.4 * D_{B1} * D_{B2} + 1.4 * D_{B2} * D_{B3} + 0.6 * D_{B1} * D_{B3} )^{\frac{1}{2}}$$
```
# Step 1 Filtering the trade effective notionals of the Interest rate risk class only:
m["Effective_notional_IR"] = tt.filter(
m["Trade_Effective_Notional"], lvl["AssetClass"] == "IR"
)
# Step 2 Allocating trades into Hedging Sets - for IR the hedging set is the currency, the "Underlying" field in our data model.
cube.query(m["Effective_notional_IR"])
# Step 3 Allocating trades into Maturity Buckets and
# Step 4 Adding together effective notionals
# We use tt.where condition and filtering for the effective notional buckets - D_B1, D_B2, D_B3
# Ei is computed from the Ei_Date attribute, hence we are summing up by Ei_Date.
m["D_B1"] = tt.agg.sum(
tt.where(m["Ei"] < 1.0, m["Effective_notional_IR"], 0.0),
scope=tt.scope.origin(lvl["Ei_Date"]),
)
m["D_B2"] = tt.agg.sum(
tt.where((m["Ei"] >= 1.0) & (m["Ei"] < 5.0), m["Effective_notional_IR"], 0.0),
scope=tt.scope.origin(lvl["Ei_Date"]),
)
m["D_B3"] = tt.agg.sum(
tt.where(m["Ei"] >= 5.0, m["Effective_notional_IR"], 0.0),
scope=tt.scope.origin(lvl["Ei_Date"]),
)
cube.query(m["Effective_notional_IR"], m["D_B1"], m["D_B2"], m["D_B3"])
cube.query(m["Ei"], levels=[lvl["TradeId"], lvl["Ei_Date"]]).sample(3)
# Step 5 Calculating the effective notional of each hedging set - and summing them up across hedging sets.
# As a reminder, for interest rates a hedging set is a currency - the "Underlying" field in our data model:
m["Effective_notional_IR_Offset_Formula"] = tt.agg.sum(
tt.sqrt(
m["D_B1"] ** 2
+ m["D_B2"] ** 2
+ m["D_B3"] ** 2
+ 1.4 * m["D_B1"] * m["D_B2"]
+ 1.4 * m["D_B2"] * m["D_B3"]
+ 0.6 * m["D_B1"] * m["D_B3"]
),
scope=tt.scope.origin(lvl["Underlying"]),
)
m["Effective_notional_IR_NoOffset_Formula"] = tt.agg.sum(
tt.abs(m["D_B1"]) + tt.abs(m["D_B2"]) + tt.abs(m["D_B3"]),
scope=tt.scope.origin(lvl["Underlying"]),
)
```
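The offset formula can be sanity-checked with plain numbers (the bucket notionals below are arbitrary, not from the dataset):
```
import math

def en_hedging_set(d1, d2, d3):
    """Effective notional of one IR hedging set across the three maturity buckets."""
    return math.sqrt(
        d1 ** 2 + d2 ** 2 + d3 ** 2
        + 1.4 * d1 * d2 + 1.4 * d2 * d3 + 0.6 * d1 * d3
    )

print(en_hedging_set(100.0, 0.0, 0.0))               # single bucket: no offset, EN = 100.0
print(round(en_hedging_set(100.0, -100.0, 0.0), 2))  # adjacent buckets net only partially
```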
The Add-On for each hedge set is then simply:
$$AddOn_{j}^{(IR)} = SF^{(IR)} * EN_j $$
Let's display the Supervisory Factor:
```
m["SupervisoryFactor"] = tt.agg.single_value(
supervisoryParameters_store["SupervisoryFactor"]
)
# You may want to further adjust the Supervisory Factor by checking if the trade is a "basis" trade
# please see [52.46]. In our case, we don't account for that.
cube.query(m["SupervisoryFactor"], condition=lvl["AssetClass"] == "IR")
```
Finally, the AddOn:
```
m["AddOn_IR"] = tt.agg.sum(
m["SupervisoryFactor"] * m["Effective_notional_IR_Offset_Formula"],
scope=tt.scope.origin(lvl["AssetClass"]),
)
# Use No Offset Formula here if the bank chose not to recognize netting
m["AddOn_IR_NoOffset_Formula"] = tt.agg.sum(
m["SupervisoryFactor"] * m["Effective_notional_IR_NoOffset_Formula"],
scope=tt.scope.origin(lvl["AssetClass"]),
)
```
## Add-on for credit derivatives - [52.60] - [52.64]
There is only one Hedge Set for Credit derivatives. In this Hedge Set, the trades are grouped depending on which entity issued them. The Add-On for an entity is the sum of the Trade Effective Notionals of the corresponding trades, multiplied by the Supervisory Factor. The Supervisory Factor depends on the rating of the entity.
```
effective_notional_CR = tt.filter(
m["Trade_Effective_Notional"], lvl["AssetClass"].isin("CR_IDX", "CR_SN")
)
# summing up by AssetClass and SubClass as these are the levels where SupervisoryFactors are defined:
m["AddOn_CR_per_entity"] = tt.agg.sum(
m["SupervisoryFactor"] * effective_notional_CR,
scope=tt.scope.origin(lvl["AssetClass"], lvl["SubClass"]),
)
```
After determining the AddOn per entity, we can deduce the Credit AddOn with the following formula - see para [52.61]:
$$ AddOn^{Credit} = \left[\left(\sum_{entity} \rho_{entity} * AddOn_{entity} \right)^{2} + \sum_{entity} (1-(\rho_{entity})^{2}) * (AddOn_{entity})^{2} \right]^{1/2}$$
where $\rho_{entity}$ is the Correlation. It takes different values for names being Credit Single Name or Credit Index.
```
m["Correlation"] = tt.agg.single_value(supervisoryParameters_store["Correlation"])
cube.query(m["Correlation"], levels=lvl["AssetClass"])
# The first term of the above formula:
AddOn_CR_systematic = (
tt.agg.sum(
m["Correlation"] * m["AddOn_CR_per_entity"],
scope=tt.scope.origin(lvl["Underlying"]),
)
** 2
)
# The second term of the above formula:
AddOn_CR_idiosyncratic = tt.agg.sum(
(1.0 - (m["Correlation"] ** 2)) * (m["AddOn_CR_per_entity"] ** 2),
scope=tt.scope.origin(lvl["Underlying"]),
)
m["AddOn_CR"] = tt.sqrt(AddOn_CR_systematic + AddOn_CR_idiosyncratic)
```
## Add-on for commodity derivatives - [52.69] - [52.71]
Commodities are broken down into 4 Hedging Sets (hierarchy "HS"): Energy, Metals, Agricultural and Others. Inside these Hedging Sets, the trades are grouped by commodity type - in our data model this is represented by the "SubClass" field.
The main formulae and step-by-step instructions are provided in [52.70]:
$$ AddOn_{HS}^{CO} = \left[\left(\sum_{CommType} \rho_{CommType}\cdot AddOn_{CommType}\right)^{2} + \sum_{CommType} (1-\rho_{CommType}^2)* AddOn_{CommType}^{2} \right]^{1/2}$$
$$AddOn^{CO} = \sum_{HS}AddOn_{HS}^{CO}$$
```
# Step 3 Calculate the combined effective notional
effective_notional_CO = tt.filter(
m["Trade_Effective_Notional"], lvl["AssetClass"] == "CO"
)
# Step 4 Calculate the add-on for each commodity type
AddOn_CommType = tt.agg.sum(
m["SupervisoryFactor"] * effective_notional_CO,
scope=tt.scope.origin(lvl["AssetClass"], lvl["SubClass"]),
)
# Step 5
# The first term of the above formula:
AddOn_CO_systematic = (
tt.agg.sum(
m["Correlation"] * AddOn_CommType, scope=tt.scope.origin(lvl["SubClass"]),
)
** 2
)
# The second term of the above formula:
AddOn_CO_idiosyncratic = tt.agg.sum(
(1.0 - (m["Correlation"] ** 2)) * (AddOn_CommType ** 2),
scope=tt.scope.origin(lvl["SubClass"]),
)
# Generic measure computing the product
AddOn_CO_HS = tt.sqrt(AddOn_CO_systematic + AddOn_CO_idiosyncratic)
# Add on as a sum of products over all hedging sets
m["AddOn_CO"] = tt.agg.sum(AddOn_CO_HS, scope=tt.scope.origin(lvl["HS"]))
cube.query(m["AddOn_CO"], levels=lvl["HS"])
```
## Add-on for foreign exchange derivatives - [52.58] - [52.59]
Step-by-step instructions for calculating $AddOn^{FX}$ are provided in [52.59]. The hedging sets are the different currency pairs, found in the "Underlying" field of our data model.
Let's jump to Step 4 and compute $AddOn_{HS}^{FX} = | EN_{HS} | \cdot SF$
```
EN_HS_FX = tt.filter(m["Trade_Effective_Notional"], lvl["AssetClass"] == "FX",)
# Step 5: Sum up across absolute values of EN_HS multiplied by supervisory factor by currency pairs:
m["AddOn_FX"] = tt.agg.sum(
tt.abs(EN_HS_FX) * m["SupervisoryFactor"], scope=tt.scope.origin(lvl["Underlying"]),
)
cube.query(
m["AddOn_FX"],
m["Trade_Effective_Notional"],
m["SupervisoryFactor"],
levels=[lvl["Underlying"]],
condition=lvl["AssetClass"] == "FX",
).sample(3)
```
## Add-on for equity derivatives - [52.65] - [52.68]
Equity Add-On is determined in the same way as Credit Add-On. The trades are grouped under a single Hedge Set, and grouped by entity inside. However, the supervisory factor no longer depends on the rating of the entity but rather on whether the trade's Asset Class is Equity Index or Equity Single Name. The formula for the Equity Add-On is defined in para [52.66]:
$$ AddOn^{Equity} = \left[\left(\sum_{entity} \rho_{entity}\cdot AddOn_{entity}\right)^{2} + \sum_{entity} (1-\rho_{entity}^2)* AddOn_{entity}^{2} \right]^{1/2}$$
where $\rho_{entity}$ is the correlation factor corresponding to the entity
```
AddOn_EQ_entity = tt.filter(
m["SupervisoryFactor"] * m["Trade_Effective_Notional"],
lvl["AssetClass"].isin("EQ_SN", "EQ_IDX"),
)
AddOn_EQ_systematic = tt.agg.sum(
(m["Correlation"] * AddOn_EQ_entity) ** 2, scope=tt.scope.origin(lvl["Underlying"]),
)
AddOn_EQ_idiosyncratic = tt.agg.sum(
(1.0 - (m["Correlation"] ** 2)) * (AddOn_EQ_entity ** 2),
scope=tt.scope.origin(lvl["Underlying"]),
)
m["AddOn_EQ"] = tt.sqrt(AddOn_EQ_systematic + AddOn_EQ_idiosyncratic)
cube.query(m["AddOn_EQ"])
```
## Aggregate add-on - [52.24]
The Add-on aggregate is simply the sum of all five Asset Class Add-Ons:
```
# This measure will be displaying aggregate AddOn
m["AddOn"] = (
m["AddOn_CR"] + m["AddOn_IR"] + m["AddOn_CO"] + m["AddOn_FX"] + m["AddOn_EQ"]
)
cube.query(
m["AddOn"],
m["AddOn_CR"],
m["AddOn_IR"],
m["AddOn_CO"],
m["AddOn_FX"],
m["AddOn_EQ"],
)
```
## Multiplier - [52.23]
In this section we will compute the multiplier, defined per formula:
$$multiplier = min(1, Floor + (1-Floor)*exp(\frac{V-C}{2*(1-Floor)*AddOn^{aggregate}}))$$
(where $Floor = 0.05$ is another fixed parameter)
```
# Let's display V - value of the positions:
m["MarketValue"] = tt.agg.sum(trades_store["MarketValue"])
# Let's also display C - value of collateral (after haircut) - from the input data
m["Collateral"] = tt.agg.sum(nettingSets_store["Collateral"])
floor = 0.05
alpha = 1.4
m["Multiplier"] = tt.agg.single_value(
tt.min(
1.0,
floor
+ (1 - floor)
* tt.exp((m["MarketValue"] - m["Collateral"]) / (2 * (1 - floor) * m["AddOn"])),
),
scope=tt.scope.origin(lvl["NettingSetId"]),
)
cube.query(m["Multiplier"], levels=lvl["NettingSetId"]).sample(3)
```
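The multiplier formula behaves as expected in a quick plain-Python check (the V, C and AddOn values below are arbitrary):
```
import math

def multiplier(v, c, addon, floor=0.05):
    """min(1, Floor + (1 - Floor) * exp((V - C) / (2 * (1 - Floor) * AddOn)))."""
    return min(1.0, floor + (1.0 - floor) * math.exp((v - c) / (2.0 * (1.0 - floor) * addon)))

print(multiplier(v=50.0, c=0.0, addon=100.0))            # positive V - C: capped at 1.0
print(round(multiplier(v=0.0, c=50.0, addon=100.0), 4))  # over-collateralised: below 1.0
```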
## PFE add-on for each netting set - [52.20]
Having defined the aggregate AddOn and the multiplier, the PFE is obtained per formula: $PFE = multiplier * AddOn^{aggregate}$ - which is computed for each netting set and then summed up.
```
m["PFE"] = tt.agg.sum(
m["Multiplier"] * m["AddOn"], scope=tt.scope.origin(lvl["NettingSetId"])
)
cube.query(m["PFE"])
```
## Replacement cost - [52.10] - [52.18]
The Replacement Cost is $RC = max(V-C, 0)$ for unmargined transactions and $RC = max(V-C, TH + MTA - NICA, 0)$ for margined transactions.
In this formula:
- $V$ is the market value of derivative transactions,
- $C$ is the haircut value of net collateral held (if any),
- TH - positive threshold before the counterparty must send the bank collateral,
- MTA - minimum transfer amount applicable to the counterparty,
- NICA - net independent collateral amount.
```
# Let's start by displaying the TH, MTA and NICA parameters of the netting sets:
m["TH"] = tt.agg.single_value(nettingSets_store["TH"])
m["MTA"] = tt.agg.single_value(nettingSets_store["MTA"])
m["NICA"] = tt.agg.single_value(nettingSets_store["NICA"])
cube.query(m["TH"], m["MTA"], m["NICA"], levels=lvl["NettingSetId"])
RC_unmargined = tt.max(m["MarketValue"] - m["Collateral"], 0)
RC_margined = tt.max(
m["MarketValue"] - m["Collateral"], m["TH"] + m["MTA"] - m["NICA"], 0,
)
m["RC"] = tt.agg.sum(
tt.where(lvl["isMargined"] == "Y", RC_margined, RC_unmargined,),
scope=tt.scope.origin(lvl["NettingSetId"]),
)
cube.query(m["RC"], m["MarketValue"], m["Collateral"], levels=lvl["isMargined"])
```
# EAD - [52.1]
$$EAD = \alpha * (RC + PFE)$$
where $\alpha = 1.4$ is a fixed parameter.
```
alpha = 1.4
m["EAD"] = tt.agg.sum(
alpha * (m["RC"] + m["PFE"]), scope=tt.scope.origin(lvl["NettingSetId"])
)
```
# Formatting
The following cell will allocate measures to folders in the cube explorer, and apply formatting.
```
for measure in sorted(list(m)):
if (
measure.startswith("AddOn")
or measure.startswith("D_")
or measure
in [
"Effective_notional_IR_NoOffset_Formula",
"Effective_notional_IR_Offset_Formula",
"Effective_notional_IR_j_k",
"Effective_notional_IR",
]
):
m[measure].formatter = "DOUBLE[#,###]"
m[measure].folder = "AddOn"
elif measure in [
"Correlation",
"SupervisoryFactor",
"Supervisory_Option_Volatility",
]:
m[measure].formatter = "DOUBLE[#,###.0000]"
m[measure].folder = "Supervisory Parameters"
elif measure in [
"Direction",
"DetachPoint",
"AttachPoint",
"MarketValue",
"Notional",
"Underlying Price",
"Strike Price",
]:
m[measure].formatter = "DOUBLE[#,###.0000]"
m[measure].folder = "Trades Input Data"
elif measure in ["Collateral", "TH", "MTA", "NICA", "MPOR"]:
m[measure].formatter = "DOUBLE[#,###]"
m[measure].folder = "Netting Sets Input Data"
elif measure in ["Mi", "Ei", "Si", "Ti"]:
m[measure].formatter = "DOUBLE[#,###.##]"
m[measure].folder = "Time period parameters"
elif measure in [
"Delta_Adjustment",
"Supervisory_Duration",
"Adjusted_Notional",
"Maturity_Factor",
"d",
"Trade_Effective_Notional",
"Adjusted_Notional x Delta_Adjustment",
]:
m[measure].formatter = "DOUBLE[#,###.00]"
m[measure].folder = "Derived Interim Results"
m['EAD'].formatter = "DOUBLE[#,###]"
m['PFE'].formatter = "DOUBLE[#,###]"
m['RC'].formatter = "DOUBLE[#,###]"
```
# Enriching the data
Having implemented the core metrics for computing EAD from trades data and parameters, we can enrich trades and netting sets data with additional attributes. As an example, let's load business units and link them to trades via book:
```
books_store = session.read_csv(
"s3://data.atoti.io/notebooks/sa-ccr/T0/books.csv", keys=["AsOfDate", "BookId"], store_name="BookStructure",
)
trades_store.join(books_store, mapping={"BookId": "BookId", "AsOfDate": "AsOfDate"})
cube.schema
```
Breaking down EAD by organizational structure, for example by book, we obtain a standalone calculation for each book. Since the EAD aggregation is not linear, the sum of the per-book EADs is not equal to the EAD calculated across all books. Continue reading to learn how to create measures for contribution analysis.
```
sum(cube.query(m["EAD"], levels=[lvl["BookId"]])["EAD"])
cube.query(m["EAD"])
```
# Pro-rata allocation
In the previous section we saw that breaking EAD down by book results in "standalone" calculations, and since EAD is a non-linear measure, the sum of EAD by book is not equal to the EAD calculated for the global portfolio. We may want to implement an allocation methodology for EAD. There are multiple methodologies you could choose for evaluating the contribution; let's use "pro-rata" allocation as an illustrative example.
The measure "EAD Pro-Rata" will display the contribution of a business unit into EAD for the global portfolio based on the proportion of EAD for that unit and sum of EADs for all units.
```
# For each Book, the weight is a ratio of EAD and EAD across books.
m["EAD_sum_by_book"] = tt.agg.sum(m["EAD"], scope=tt.scope.siblings(h["BookId"]))
m["Book_weight"] = m["EAD"] / m["EAD_sum_by_book"]
m["EAD_across_books"] = tt.parent_value(
m["EAD"], h["BookId"], degree=1, total_value=m["EAD"], apply_filters=True
)
m["EAD Pro-Rata"] = m["Book_weight"] * m["EAD_across_books"]
session.url
```
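The arithmetic of the pro-rata allocation can be illustrated with toy numbers (these EADs are made up):
```
# Standalone EAD per book and the EAD of the combined portfolio:
ead_by_book = {"BOOK-1": 60.0, "BOOK-2": 40.0, "BOOK-3": 20.0}
ead_portfolio = 90.0  # non-linear aggregation: less than the 120.0 standalone sum

total_standalone = sum(ead_by_book.values())
ead_pro_rata = {
    book: ead * ead_portfolio / total_standalone for book, ead in ead_by_book.items()
}
print(ead_pro_rata)                # {'BOOK-1': 45.0, 'BOOK-2': 30.0, 'BOOK-3': 15.0}
print(sum(ead_pro_rata.values()))  # contributions add back up to the portfolio EAD: 90.0
```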
# Simulations
atoti provides a powerful simulation framework.
Below we will explore a few examples of uploading alternative data into our app to instantly see the impact on the capital charge metrics.
## Simulation on supervisory parameters
As a reminder, the Supervisory Option Volatility is a parameter in the Delta Adjustment calculation. Let's bring up the values using `cube.query`:
```
cube.query(
m["Supervisory_Option_Volatility"], levels=[lvl["AssetClass"], lvl["SubClass"]]
)
```
The "Measure simulations" feature in atoti allows configuring `replace`, `multiply` and `add` actions on measures. In our case, we create a simulation to replace the volatility values for different Asset Classes as follows:
```
param_what_if = cube.setup_simulation(
"Option Volatility What-If",
replace=[m["Supervisory_Option_Volatility"]],
levels=[lvl["AssetClass"], lvl["SubClass"]],
)
```
Having set up the simulation, we apply the changes we want in our experiment.
```
recalibration = param_what_if.scenarios["Volatility Recalibration"]
recalibration += ("IR", "No SubClass", 0.1)
cube.query(
m["Supervisory_Option_Volatility"],
levels=[lvl["AssetClass"], lvl["SubClass"], lvl["Option Volatility What-If"]],
condition=lvl["AssetClass"] == "IR",
)
cube.query(m["EAD"], m["AddOn_IR"], levels=[lvl["Option Volatility What-If"]])
```
# Simulation on input data - trades and CSA simulations
The "source simulation" feature in atoti allows uploading data and viewing the resulting change side-by-side with the base scenario. This is handy for "pre-deal", "trade novation" or "agreement changes" types of simulations, where you can use a csv file to add or update positions and netting sets.
See, for instance, the parameters of the first netting set:
Let's change the parameters of this agreement:
```
agreement_update = pd.DataFrame.from_dict(
[
{
"AsOfDate": "2020-01-13",
"NettingSetId": "EX1",
"CounterpartyId": "cpt1",
"MPOR": 15.0,
"Collateral": 1000000.0,
"NICA": 0,
"TH": 0,
"MTA": 0,
"isMargined": "Y",
}
],
orient="columns",
)
agreement_update
nettingSets_store.scenarios["Updating CSA parameters"].load_pandas(agreement_update)
session.url + "/#/dashboard/195"
```
# Adding more dates
```
trades_store.load_csv("s3://data.atoti.io/notebooks/sa-ccr/T1/trades.csv")
books_store.load_csv("s3://data.atoti.io/notebooks/sa-ccr/T1/books.csv")
nettingSets_store.load_csv("s3://data.atoti.io/notebooks/sa-ccr/T1/netting-set-attributes.csv")
```
| github_jupyter |
# Object Detection and Bounding Boxes
:label:`sec_bbox`
In earlier chapters (e.g., :numref:`sec_alexnet` to :numref:`sec_googlenet`), we introduced various models for image classification.
In image classification tasks, we assume that there is only one major object in the image and focus solely on recognizing its category.
However, there are often multiple objects of interest in an image; we want to know not only their categories, but also their specific positions in the image.
In computer vision, we refer to such tasks as *object detection* (or *object recognition*).
Object detection is widely applied in many fields.
For example, in self-driving, we need to plan routes by detecting the positions of vehicles, pedestrians, roads, and obstacles in captured video frames.
Robots often use this task to detect objects of interest, and the security field needs to detect abnormal targets such as intruders or bombs.
In the next few sections, we will introduce several deep learning methods for object detection.
We will first introduce the *position* of an object.
```
%matplotlib inline
import torch
from d2l import torch as d2l
```
Next, we load the sample image used in this section. We can see that there is a dog on the left side of the image and a cat on the right.
They are the two major objects in this image.
```
d2l.set_figsize()
img = d2l.plt.imread('../img/catdog.jpg')
d2l.plt.imshow(img);
```
## Bounding Boxes
In object detection, we usually use a *bounding box* to describe the spatial location of an object.
The bounding box is rectangular, determined by the $x$ and $y$ coordinates of its upper-left corner together with those of its lower-right corner.
Another commonly used representation is the $(x, y)$ coordinates of the bounding box center, together with the width and height of the box.
Here we [**define functions to convert between these two representations**]: `box_corner_to_center` converts from the two-corner representation to the center-width-height representation, and `box_center_to_corner` does the opposite.
The input argument `boxes` can be either a tensor of length 4, or a two-dimensional tensor of shape ($n$, 4), where $n$ is the number of bounding boxes.
```
#@save
def box_corner_to_center(boxes):
    """Convert from (upper-left, lower-right) to (center, width, height)"""
x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
cx = (x1 + x2) / 2
cy = (y1 + y2) / 2
w = x2 - x1
h = y2 - y1
boxes = torch.stack((cx, cy, w, h), axis=-1)
return boxes
#@save
def box_center_to_corner(boxes):
    """Convert from (center, width, height) to (upper-left, lower-right)"""
cx, cy, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
x1 = cx - 0.5 * w
y1 = cy - 0.5 * h
x2 = cx + 0.5 * w
y2 = cy + 0.5 * h
boxes = torch.stack((x1, y1, x2, y2), axis=-1)
return boxes
```
We will [**define the bounding boxes of the dog and the cat in the image**] based on coordinate information.
The origin of the image coordinates is the upper-left corner of the image, and to the right and down are the positive directions of the $x$ and $y$ axes, respectively.
```
# bbox is the abbreviation for bounding box
dog_bbox, cat_bbox = [60.0, 45.0, 378.0, 516.0], [400.0, 112.0, 655.0, 493.0]
```
We can verify the correctness of the two bounding box conversion functions by converting twice.
```
boxes = torch.tensor((dog_bbox, cat_bbox))
box_center_to_corner(box_corner_to_center(boxes)) == boxes
```
We can [**draw the bounding boxes in the image**] to check whether they are accurate.
Before drawing, we define a helper function `bbox_to_rect`.
It represents a bounding box in the bounding-box format of `matplotlib`.
```
#@save
def bbox_to_rect(bbox, color):
    # Convert the bounding box (upper-left x, upper-left y, lower-right x,
    # lower-right y) format to the matplotlib format:
    # ((upper-left x, upper-left y), width, height)
return d2l.plt.Rectangle(
xy=(bbox[0], bbox[1]), width=bbox[2]-bbox[0], height=bbox[3]-bbox[1],
fill=False, edgecolor=color, linewidth=2)
```
After adding the bounding boxes to the image, we can see that the main outlines of the two objects are essentially inside the two boxes.
```
fig = d2l.plt.imshow(img)
fig.axes.add_patch(bbox_to_rect(dog_bbox, 'blue'))
fig.axes.add_patch(bbox_to_rect(cat_bbox, 'red'));
```
## Summary
* Object detection recognizes not only all the objects of interest in an image, but also their positions, which are usually represented by rectangular bounding boxes.
* We can convert between the two commonly used bounding box representations: (center, width, height) and (upper-left, lower-right).
## Exercises
1. Find another image and try to label a bounding box that contains the object. Compare labeling bounding boxes and labeling categories: which usually takes longer?
1. Why is the innermost dimension of the input argument `boxes` of `box_corner_to_center` and `box_center_to_corner` always 4?
[Discussions](https://discuss.d2l.ai/t/2944)
| github_jupyter |
[Sascha Spors](https://orcid.org/0000-0001-7225-9992),
Professorship Signal Theory and Digital Signal Processing,
[Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
Faculty of Computer Science and Electrical Engineering (IEF),
[University of Rostock, Germany](https://www.uni-rostock.de/en/)
# Tutorial Signals and Systems (Signal- und Systemtheorie)
Summer Semester 2021 (Bachelor Course #24015)
- lecture: https://github.com/spatialaudio/signals-and-systems-lecture
- tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
WIP...
The project is currently under heavy development while new material is added for the summer semester 2021.
Feel free to contact lecturer [frank.schultz@uni-rostock.de](https://orcid.org/0000-0002-3010-0294)
## Übung / Exercise 5 Bode Plot of LTI Systems
### References
* Norbert Fliege (1991): "*Systemtheorie*", Teubner, Stuttgart (GER), cf. chapter 4.3.5
* Alan V. Oppenheim, Alan S. Willsky with S. Hamid Nawab (1997): "*Signals & Systems*", Prentice Hall, Upper Saddle River NJ (USA), 2nd ed., cf. chapter 6
* Bernd Girod, Rudolf Rabenstein, Alexander Stenger (2001): "*Signals and Systems*", Wiley, Chichester (UK), cf. chapter 10
* Bernd Girod, Rudolf Rabenstein, Alexander Stenger (2005/2007): "*Einführung in die Systemtheorie*", Teubner, Wiesbaden (GER), 3rd/4th ed., cf. chapter 10
Let's import required packages and define some helping routines for plotting in the following.
```
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
def lti_bode_plot(sys, txt):
w, mag, phase = sys.bode(np.logspace(-2, 2, 2**8))
fig = plt.figure(figsize=(6, 5), tight_layout=True)
plt.subplot(2, 1, 1)
plt.title(txt)
plt.semilogx(w, mag, 'C0', linewidth=3)
plt.grid(True)
plt.xlabel('$\omega$ / (rad/s)')
plt.ylabel('Level in dB')
plt.xlim(w[0], w[-1])
plt.subplot(2, 1, 2)
plt.semilogx(w, phase, 'C0', linewidth=3)
plt.grid(True)
plt.xlabel('$\omega$ / (rad/s)')
plt.ylabel('Phase in deg')
plt.xlim(w[0], w[-1])
# plot LTI system characteristics for Laplace domain
def plot_LTIs(sys):
z = np.squeeze(sys.zeros)
p = np.squeeze(sys.poles)
k = sys.gain
w = np.logspace(-2, 2, 2**8)
w, H = signal.freqs_zpk(z, p, k, w)
th, h = signal.impulse(sys)
the, he = signal.step(sys)
plt.figure(figsize=(7, 9), tight_layout=True)
# pole / zero plot
plt.subplot(325)
# clear poles / zeros that compensate each other, TBD: numerical robust,
# works for didactical purpose
sz_tmp = z
sp_tmp = p
zp = np.array([1])
while zp.size != 0:
zp, z_ind, p_ind = np.intersect1d(sz_tmp, sp_tmp, return_indices=True)
sz_tmp = np.delete(sz_tmp, z_ind)
sp_tmp = np.delete(sp_tmp, p_ind)
z = sz_tmp
p = sp_tmp
zu, zc = np.unique(z, return_counts=True) # find and count unique zeros
for zui, zci in zip(zu, zc): # plot them individually
plt.plot(np.real(zui), np.imag(zui), ms=10,
color='C0', marker='o', fillstyle='none')
if zci > 1: # if multiple zeros exist then indicate the count
plt.text(np.real(zui), np.imag(zui), zci, color='C0',
fontsize=14, fontweight='bold')
pu, pc = np.unique(p, return_counts=True) # find and count unique poles
for pui, pci in zip(pu, pc): # plot them individually
plt.plot(np.real(pui), np.imag(pui), ms=10,
color='C3', marker='x')
if pci > 1: # if multiple poles exist then indicate the count
plt.text(np.real(pui), np.imag(pui), pci, color='C3',
fontsize=14, fontweight='bold')
plt.axis("equal")
plt.xlabel(r'$\Re\{s\}$')
plt.ylabel(r'$\Im\{s\}$')
plt.grid(True)
plt.title("Pole/Zero/Gain Map, gain=%f" % k)
# Nyquist plot
plt.subplot(326)
plt.plot(H.real, H.imag, "C0", label="$\omega>0$")
plt.plot(H.real, -H.imag, "C1", label="$\omega<0$")
plt.plot(H.real[0], H.imag[0], marker='$w=0$', markersize=25, color="C0")
plt.axis("equal")
plt.legend()
plt.xlabel(r'$\Re\{H(s)\}$')
plt.ylabel(r'$\Im\{H(s)\}$')
plt.grid(True)
plt.title("Nyquist Plot")
# magnitude response
plt.subplot(321)
plt.semilogx(w, 20*np.log10(np.abs(H)))
plt.xlabel('$\omega$ / (rad/s)')
plt.ylabel(r'$A$ / dB')
plt.grid(True)
plt.title("Level")
plt.xlim((w[0], w[-1]))
#plt.ylim((-60, 6))
#plt.yticks(np.arange(-60, +12, 6))
# phase response
plt.subplot(323)
plt.semilogx(w, np.unwrap(np.angle(H))*180/np.pi)
plt.xlabel('$\omega$ / (rad/s)')
plt.ylabel(r'$\phi$ / deg')
plt.grid(True)
plt.title("Phase")
plt.xlim((w[0], w[-1]))
plt.ylim((-180, +180))
plt.yticks(np.arange(-180, +180+45, 45))
# impulse response
plt.subplot(322)
plt.plot(th, h)
plt.xlabel('t / s')
plt.ylabel('h(t)')
plt.grid(True)
plt.title("Impulse Response")
# step response
plt.subplot(324)
plt.plot(the, he)
plt.xlabel('t / s')
plt.ylabel('h$_\epsilon$(t)')
plt.grid(True)
plt.title("Step Response")
# plot LTI system characteristics for z-domain
def plot_LTIz(b, a):
    raise NotImplementedError('TBD')  # z-domain version not yet implemented
```
# Example: Unity Gain
```
sz = []
sp = []
H0 = +1
txt = 'Unity Gain'
sys = signal.lti(sz, sp, H0)
lti_bode_plot(sys, txt)
```
# Example: Gain and Polarity
```
sz = []
sp = []
H0 = -10
txt = '20 dB Gain with Inverted Polarity'
sys = signal.lti(sz, sp, H0)
lti_bode_plot(sys, txt)
```
# Example: Poles / Zeros in Origin
```
sz = 0, 0 # note: more zeros than poles is not a causal system!
sp = 0,
H0 = 1
sys = signal.lti(sz, sp, H0)
txt = str(len(sz)) + ' Zeros / ' + str(len(sp)) + ' Poles in Origin'
txt1 = (': ' + str((len(sz)-len(sp))*20) + ' dB / decade')
lti_bode_plot(sys, txt+txt1)
sz = 0,
sp = 0, 0, 0
H0 = 1
sys = signal.lti(sz, sp, H0)
txt = str(len(sz)) + ' Zeros / ' + str(len(sp)) + ' Poles in Origin'
txt1 = (': ' + str((len(sz)-len(sp))*20) + ' dB / decade')
lti_bode_plot(sys, txt+txt1)
```
# Example: Single Real Pole, PT1
```
sz = []
sp = -1
H0 = 1
sys = signal.lti(sz, sp, H0)
txt = 'Single Real Pole, decreasing slope, -20 dB / decade'
lti_bode_plot(sys, txt)
```
# Example: Single Real Zero
```
sz = -1
sp = []
H0 = 1
sys = signal.lti(sz, sp, H0)
txt = 'Single Real Zero, increasing slope, + 20 dB / decade'
lti_bode_plot(sys, txt)
```
# Example: Complex Conjugate Zero Pair
```
sz = -3/4-1j, -3/4+1j
sp = []
H0 = 16/25
sys = signal.lti(sz, sp, H0)
txt = 'Conjugate Complex Zero'
lti_bode_plot(sys, txt)
```
# Examples: Complex Conjugate Pole Pair, PT2
```
sz = []
sp = -3/4-1j, -3/4+1j
H0 = 25/16
sys = signal.lti(sz, sp, H0)
txt = 'Conjugate Complex Pole, -3/4$\pm$1j'
lti_bode_plot(sys, txt)
sz = []
sp = -1/2-1j, -1/2+1j
H0 = 5/4
sys = signal.lti(sz, sp, H0)
txt = 'Conjugate Complex Pole, -1/2$\pm$1j'
lti_bode_plot(sys, txt)
sz = []
sp = -1/4-1j, -1/4+1j
H0 = 17/16
sys = signal.lti(sz, sp, H0)
txt = 'Conjugate Complex Pole, -1/4$\pm$1j'
lti_bode_plot(sys, txt)
sz = []
sp = -1/8-1j, -1/8+1j
H0 = 65/64
sys = signal.lti(sz, sp, H0)
txt = 'Conjugate Complex Pole, -1/8$\pm$1j'
lti_bode_plot(sys, txt)
```
# Example: Integrator, I
```
sz = []
sp = 0
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_i_element.pdf')
```
# Example: Lowpass 1st Order, PT1
```
sz = []
sp = -1
H0 = np.abs(sp)
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_pt1_element.pdf')
```
# Example: Highpass 1st Order, DT1
```
sz = 0
sp = -1
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_dt1_element.pdf')
```
# Example: Allpass 1st Order
```
sz = +1
sp = -1
H0 = -1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_ap1_element.pdf')
```
# Example: Lowpass 2nd Order, PT2
```
sz = []
sp = 1*np.exp(+1j*3*np.pi/4), 1*np.exp(-1j*3*np.pi/4)
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_pt2_element.pdf')
```
# Example: Highpass 2nd Order, DT2
```
sz = 0, 0
sp = 1*np.exp(+1j*3*np.pi/4), 1*np.exp(-1j*3*np.pi/4)
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_dt2_element.pdf')
```
# Example: Bandpass 2nd Order
```
sz = 0
# sp = 1*np.exp(+1j*2.1*np.pi/4), 1*np.exp(-1j*2.1*np.pi/4) # high Q
sp = 1*np.exp(+1j*3*np.pi/4), 1*np.exp(-1j*3*np.pi/4)
# sp = 1*np.exp(+1j*3.9*np.pi/4), 1*np.exp(-1j*3.9*np.pi/4) # low Q
H0 = np.sqrt(2)
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_bp2_element.pdf')
```
# Example: Bandstop 2nd Order
```
sz = -1j, +1j
# sp = 1*np.exp(+1j*2.1*np.pi/4), 1*np.exp(-1j*2.1*np.pi/4) # high Q
sp = 1*np.exp(+1j*3*np.pi/4), 1*np.exp(-1j*3*np.pi/4)
# sp = 1*np.exp(+1j*3.9*np.pi/4), 1*np.exp(-1j*3.9*np.pi/4) # low Q
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_bs2_element.pdf')
```
# Example: Allpass 2nd Order
```
sz = 1*np.exp(+1j*1*np.pi/4), 1*np.exp(-1j*1*np.pi/4)
sp = 1*np.exp(+1j*3*np.pi/4), 1*np.exp(-1j*3*np.pi/4)
H0 = -1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
plt.savefig('bodeplot_examples_ap2_element.pdf')
```
# Example: PIT1
```
sz = -1
sp = 0, -10
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
```
# Example: PIDT
```
sz = -2, -5/2
sp = 0, -10
H0 = 1
sys = signal.lti(sz, sp, H0)
plot_LTIs(sys)
```
# Example: Lowpass 2nd Order, PT2
The 2nd-order lowpass
\begin{align}
H_\mathrm{Low}(s) = \frac{1}{\frac{16}{25}s^2+\frac{24}{25}s +1} = [\frac{16}{25}s^2+\frac{24}{25}s +1]^{-1}
\end{align}
is to be characterized by the pole/zero map, the Nyquist plot, the Bode plot, the impulse response and the step response.
```
w0 = 5/4
D = 3/5
Q = 1/(2*D)
print("D = %4.3f, Q = %4.3f" % (D, Q))
# these are all the same cases:
A = (1/w0**2, 2*D/w0, 1)
print("A = ", A)
A = (1/w0**2, 1/(Q*w0), 1)
print("A = ", A)
A = (16/25, 24/25, 1)
print("A = ", A)
B = (0, 0, 1)
z, p, k = signal.tf2zpk(B, A)
sys = signal.lti(z, p, k)
plot_LTIs(sys)
```
# Example: Lowpass to Highpass Transform
The Laplace transfer function of a lowpass is transformed to a highpass by exchanging
\begin{align}
s \rightarrow \frac{1}{s}
\end{align}
The 2nd-order lowpass $H_\mathrm{Low}(s) = [\frac{16}{25}s^2+\frac{24}{25}s +1]^{-1}$ yields the
following 2nd-order highpass filter. It is to be characterized by the pole/zero map, the Nyquist plot, the Bode plot, the impulse response and the step response.
```
# highpass 2nd order, from lowpass 2nd order with s -> 1/s
A = (1, 24/25, 16/25)
B = (1, 0, 0)
z, p, k = signal.tf2zpk(B, A)
sys = signal.lti(z, p, k)
plot_LTIs(sys)
```
# Example: Lowpass to Bandpass Transform
The Laplace transfer function of a lowpass is transformed to a bandpass by exchanging
\begin{align}
s \rightarrow s + \frac{1}{s}
\end{align}
The 2nd-order lowpass $H_\mathrm{Low}(s) = [\frac{16}{25}s^2+\frac{24}{25}s +1]^{-1}$ yields the
following 4th-order bandpass filter. It is to be characterized by the pole/zero map, the Nyquist plot, the Bode plot, the impulse response and the step response.
```
# bandpass 4th order, from lowpass 2nd order with s -> s + 1/s
A = (16, 24, 57, 24, 16)
B = (0, 0, 25, 0, 0)
z, p, k = signal.tf2zpk(B, A)
sys = signal.lti(z, p, k)
plot_LTIs(sys)
```
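We can verify the bandpass coefficients symbolically. This is a sketch that assumes `sympy` is available (it is not imported elsewhere in this notebook): substituting $s \rightarrow s + 1/s$ into $H_\mathrm{Low}(s) = 25/(16 s^2 + 24 s + 25)$ and clearing fractions should reproduce `A = (16, 24, 57, 24, 16)` and `B = (0, 0, 25, 0, 0)` from the cell above.

```python
import sympy as sp

s = sp.symbols('s')
H_low = 25 / (16*s**2 + 24*s + 25)        # the 2nd-order lowpass above
H_bp = sp.cancel(H_low.subs(s, s + 1/s))  # lowpass-to-bandpass transform
num, den = sp.fraction(H_bp)
print(sp.Poly(num, s).all_coeffs())       # [25, 0, 0] -> numerator 25 s^2
print(sp.Poly(den, s).all_coeffs())       # [16, 24, 57, 24, 16]
```

The same check works for the highpass ($s \rightarrow 1/s$) and bandstop ($s \rightarrow 1/(s + 1/s)$) substitutions.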
# Example: Lowpass to Bandstop Transform
The Laplace transfer function of a lowpass is transformed to a bandstop filter by exchanging
\begin{align}
s \rightarrow \frac{1}{s + \frac{1}{s}}
\end{align}
The 2nd-order lowpass $H_\mathrm{Low}(s) = [\frac{16}{25}s^2+\frac{24}{25}s +1]^{-1}$ yields the
following 4th-order bandstop filter. It is to be characterized by the pole/zero map, the Nyquist plot, the Bode plot, the impulse response and the step response.
```
# bandstop 4th order, from lowpass 2nd order with s -> 1 / (s + 1/s)
A = (25, 24, 66, 24, 25)
B = (25, 0, 50, 0, 25)
z, p, k = signal.tf2zpk(B, A)
sys = signal.lti(z, p, k)
plot_LTIs(sys)
```
# Example: Bandpass from Real Poles
* zero in origin $s_0=0$
* pole at $s_{\infty,1}=-0.1$
* pole at $s_{\infty,2}=-10$
* $H_0$ = 10
\begin{align}
H(s) = H_0\frac{s-s_{0,1}}{(s-s_{\infty,1})(s-s_{\infty,2})} = 10\frac{(s-0)}{(s+0.1)(s+10)}
=\frac{100 s}{10 s^2 + 101 s + 10}
\end{align}
```
if True:
sz = 0
sp = -0.1, -10
H0 = 10
sys = signal.lti(sz, sp, H0)
else:
B = (0, 100, 0)
A = (10, 101, 10)
sys = signal.lti(B, A)
txt = 'Bandpass'
lti_bode_plot(sys, txt)
```
# Example: Highpass with Slight Resonance
\begin{align}H(s) = \frac{s^2+10 s}{s^2+\sqrt{2} s +1}\end{align}
```
sz = 0, 10
sp = (-1+1j)/np.sqrt(2), (-1-1j)/np.sqrt(2)
H0 = 1
sys = signal.lti(sz, sp, H0)
txt = 'Highpass with Slight Resonance'
lti_bode_plot(sys, txt)
```
## Copyright
This tutorial is provided as Open Educational Resource (OER), to be found at
https://github.com/spatialaudio/signals-and-systems-exercises
accompanying the OER lecture
https://github.com/spatialaudio/signals-and-systems-lecture.
Both are licensed under a) the Creative Commons Attribution 4.0 International
License for text and graphics and b) the MIT License for source code.
Please attribute material from the tutorial as *Frank Schultz,
Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
Computational Examples, University of Rostock* with
``main file, github URL, commit number and/or version tag, year``.
| github_jupyter |
```
from bs4 import BeautifulSoup
from selenium import webdriver
from time import sleep
from time import time
from random import randint
from IPython.core.display import clear_output
import pandas as pd
import numpy as np
import psycopg2
import requests
import os
import re
import ast
headers = {'User-Agent': 'Mozilla/5.0',
'From': 'itchyandscratchy@gmail.com'
}
data_path = '/Users/amanda/Documents/Projects/insight/data/'
url = 'https://noc.esdc.gc.ca/Structure/Hierarchy/1a06c040a8bd4022ac498a80eb089bc1?objectid=%2Fd0IGA6qD8JPRfoj5UCjpg%3D%3D'
joblinks = []
# Main summary page
with requests.get(url, timeout=5, headers=headers) as u:
soup = BeautifulSoup(u.content)
for link in soup.find_all('a'):
joblinks.append(link.get('href'))
linkends = [s for s in joblinks if 'NocProfile' in s]
print(linkends[0])
base_url = 'https://noc.esdc.gc.ca'
overview_df = pd.DataFrame({'noc_link':[],
'noc':[],
'job_group':[],
'description':[],
'alt_titles':[]})
duties_df = pd.DataFrame({'noc':[],
'duties':[],
'requirements':[]})
for link in linkends:
try:
# Main summary page
url = base_url + link
with requests.get(url, timeout=8, headers=headers) as u:
soup = BeautifulSoup(u.content)
#print(soup.prettify())
# Main website content
a = soup.find('main', attrs={'class':'container'})
# Title and overview section
title = a.find('h2',attrs={'style':'margin-top:25px;margin-bottom:0;'})
intro = a.find('p', attrs={'class':'mrgn-tp-md'})
if title is not None and intro is not None:
noc_summary = title.text.split('–') #Note this is an en-dash, not a hyphen
noc_text = noc_summary[0]
noc_code = 'NOC ' + noc_text.lstrip('0').strip()
job_group = noc_summary[-1].strip()
print(noc_code + ': ' + job_group)
job_intro = intro.text.strip()
else:
continue
title_sec = a.find('div',attrs={'id':'IndexTitles'})
title_list = title_sec.findAll('li')
alt_titles = [s.text.strip() for s in title_list]
overview_df = overview_df.append({'noc_link':link,
'noc':noc_code,
'job_group':job_group,
'description':job_intro,
'alt_titles':alt_titles}, ignore_index=True)
# Duties and requirements
job_duties = []
duty_sec = soup.find('h4', text = 'Main duties').findNext('div', attrs={'class':'panel-body'})
duty_list = duty_sec.findAll('li')
if duty_list:
job_duties = [s.text.strip() for s in duty_list]
else:
job_duties = 'NaN'
job_req = []
req_head = soup.find('h4', text = 'Employment requirements').findNext('ul')
req_list = req_head.findAll('li')
if req_list:
job_reqs = [s.text.strip() for s in req_list]
else:
job_reqs = 'NaN'
duties_df = duties_df.append({'noc':noc_code,
'duties':job_duties,
'requirements':job_reqs}, ignore_index=True)
except:
continue
# Save cleaned dataframes to file
overview_df.to_csv(os.path.join(data_path,'processed','noc-overview.csv'), index=False)
# Try urls to find out if they exist
# Note this is a long process and only generally needs to be run once
url_list = []
for i in range(1,30000):
try:
url = 'https://www.jobbank.gc.ca/marketreport/summary-occupation/{}/ca'.format(i)
request = requests.get(url, headers=headers, timeout=5, allow_redirects=False)
if request.status_code == 200:
url_list.append(url)
print(url)
else:
print('SKIP')
except:
continue
clean_job_urls = [x.replace('https://www.jobbank.gc.ca/marketreport/summary-occupation','') for x in url_list]
# Save url_list to file so I can load it later
file_path = (os.path.join(data_path,'processed','job_url_list.txt'))
with open(file_path, 'w') as filehandle:
for listitem in clean_job_urls:
filehandle.write('%s\n' % listitem)
```
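Since the loop above issues tens of thousands of requests, it is worth throttling it. Below is a minimal sketch of a polite request helper with a fixed delay and simple retries; the helper name and parameters are illustrative and not part of the original notebook.

```python
import time

def polite_get(get_fn, url, retries=3, delay=1.0, **kwargs):
    """Call a requests-style get function, pausing before each attempt."""
    for attempt in range(retries):
        try:
            time.sleep(delay)           # be polite: pause before every request
            return get_fn(url, **kwargs)
        except Exception:
            if attempt == retries - 1:  # out of retries: re-raise
                raise
```

It could replace the bare `requests.get` calls above, e.g. `polite_get(requests.get, url, headers=headers, timeout=5)`.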
| github_jupyter |
# The Standard Gate Set
For every possible realization of fault-tolerant quantum computing, there is a set of quantum operations that are most straightforward to realize. Often these consist of multiple so-called Clifford gates, combined with a few single-qubit gates that do not belong to the Clifford group. In this section we'll introduce these concepts, in preparation for showing that they are universal.
### Clifford gates
Some of the most important quantum operations are the so-called Clifford operations. A prominent example is the Hadamard gate:
$$
H = |+\rangle\langle0|~+~ |-\rangle\langle1| = |0\rangle\langle+|~+~ |1\rangle\langle-|.
$$
This gate is expressed above using outer products, as described in the last section. When expressed in this form, its famous effect becomes obvious: it takes $|0\rangle$, and rotates it to $|+\rangle$. More generally, we can say it rotates the basis states of the z measurement, $\{ |0\rangle,|1\rangle \}$, to the basis states of the x measurement, $\{ |+\rangle,|-\rangle \}$, and vice versa.
The effect of the Hadamard is to move information around within a qubit. It swaps any information that would previously be accessed by an x measurement with that accessed by a z measurement. Indeed, one of the most important jobs of the Hadamard is to do exactly this: we use it when we want to make an x measurement but can only physically make z measurements.
```c
// x measurement of qubit 0
h q[0];
measure q[0] -> c[0];
```
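As a quick numerical sanity check (a sketch using plain numpy, not tied to any particular framework): applying H and then measuring in the z basis reproduces the statistics of an x measurement. For the $|+\rangle$ state, an x measurement should give the '+' outcome with certainty, and indeed $H|+\rangle = |0\rangle$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)  # |+>, the +1 eigenstate of X

state = H @ plus                      # H|+> = |0>
probs = np.abs(state) ** 2            # z-measurement outcome probabilities
print(probs)                          # close to [1, 0]
```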
The Hadamard can be combined with other gates to perform different operations, for example:
$$
H X H = Z,\\\\
H Z H = X.
$$
By doing a Hadamard before and after an $X$, we cause the action it previously applied to the z basis states to be transferred to the x basis states instead. The combined effect is then identical to that of a $Z$. Similarly, we can create an $X$ from Hadamards and a $Z$.
Similar behavior can be seen for the $S$ gate and its Hermitian conjugate,
$$
S X S^{\dagger} = Y,\\\\
S Y S^{\dagger} = -X,\\\\
S Z S^{\dagger} = Z.
$$
This has a similar effect to the Hadamard, except that it swaps $X$ and $Y$ instead of $X$ and $Z$. In combination with the Hadamard, we could then make a composite gate that shifts information between y and z. This therefore gives us full control over single-qubit Paulis.
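Both sets of identities above are easy to confirm numerically; here is a small sketch with plain numpy:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

Sdg = S.conj().T                     # S-dagger
assert np.allclose(H @ X @ H, Z)     # H X H = Z
assert np.allclose(H @ Z @ H, X)     # H Z H = X
assert np.allclose(S @ X @ Sdg, Y)   # S X S† = Y
assert np.allclose(S @ Y @ Sdg, -X)  # S Y S† = -X
assert np.allclose(S @ Z @ Sdg, Z)   # S Z S† = Z
```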
The property of transforming Paulis into other Paulis is the defining feature of Clifford gates. Stated explicitly for the single-qubit case: if $U$ is a Clifford and $P$ is a Pauli, $U P U^{\dagger}$ will also be a Pauli. For Hermitian gates, like the Hadamard, we can simply use $U P U$.
Further examples of single-qubit Clifford gates are the Paulis themselves. These do not transform the Pauli they act on. Instead, they simply assign a phase of $-1$ to the two that they anticommute with. For example,
$$
Z X Z = -X,\\\\
Z Y Z = -Y,\\\\
Z Z Z= ~~~~Z.
$$
You may have noticed that a similar phase also arose in the effect of the $S$ gate. By combining this with a Pauli, we could make a composite gate that would cancel this phase, and swap $X$ and $Y$ in a way more similar to the Hadamard's swap of $X$ and $Z$.
For multiple-qubit Clifford gates, the defining property is that they transform tensor products of Paulis to other tensor products of Paulis. For example, the most prominent two-qubit Clifford gate is the CNOT. The property of this that we will make use of in this chapter is
$$
{ CX}_{j,k}~ (X \otimes 1)~{ CX}_{j,k} = X \otimes X.
$$
This effectively 'copies' an $X$ from the control qubit over to the target.
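This copying behavior can also be checked with a short numpy sketch. Note that qubit-ordering conventions differ between texts; in the kron ordering used here, the control qubit is the left factor.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)
# CNOT with the control as the left factor of the kron product
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# Conjugating X (x) 1 by the CNOT 'copies' the X onto the target qubit
assert np.allclose(CX @ np.kron(X, I2) @ CX, np.kron(X, X))
```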
The process of sandwiching a matrix between a unitary and its Hermitian conjugate is known as conjugation by that unitary. This process transforms the eigenstates of the matrix, but leaves the eigenvalues unchanged. The reason why conjugation by Cliffords can transform between Paulis is because all Paulis share the same set of eigenvalues.
### Non-Clifford gates
The Clifford gates are very important, but they are not powerful on their own. In order to do any quantum computation, we need gates that are not Cliffords. Three important examples are arbitrary rotations around the three axes of the qubit, $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$.
Let's focus on $R_x(\theta)$. As we saw in the last section, any unitary can be expressed in an exponential form using a Hermitian matrix. For this gate, we find
$$
R_x(\theta) = e^{i \frac{\theta}{2} X}.
$$
The last section also showed us that the unitary and its corresponding Hermitian matrix have the same eigenstates. In this section, we've seen that conjugation by a unitary transforms eigenstates and leaves eigenvalues unchanged. With this in mind, it can be shown that
$$
U R_x(\theta)U^\dagger = e^{i \frac{\theta}{2} ~U X U^\dagger}.
$$
By conjugating this rotation by a Clifford, we can therefore transform it to the same rotation around another axis. So even if we didn't have a direct way to perform $R_y(\theta)$ and $R_z(\theta)$, we could do it with $R_x(\theta)$ combined with Clifford gates. This technique of boosting the power of non-Clifford gates by combining them with Clifford gates is one that we make great use of in quantum computing.
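A numerical sketch of this axis change, using scipy's matrix exponential: conjugating $R_x(\theta)$ by the Hadamard yields the corresponding rotation around the z axis, because $HXH = Z$.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

theta = 0.7                    # arbitrary test angle
Rx = expm(1j * theta / 2 * X)  # R_x(theta) in the convention above
# H R_x(theta) H = exp(i theta/2 * H X H) = exp(i theta/2 * Z)
assert np.allclose(H @ Rx @ H, expm(1j * theta / 2 * Z))
```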
Certain examples of these rotations have specific names. Rotations by $\theta = \pi$ around the x, y and z axes are X, Y and Z, respectively. Rotations by $\theta = \pm \pi/2$ around the z axis are $S$ and $S^†$, and rotations by $\theta = \pm \pi/4$ around the z axis are $T$ and $T^†$.
### Composite gates
As another example of combining $R_x(\theta)$ with Cliffords, let's conjugate it with a CNOT.
$$
CX_{j,k} ~(R_x(\theta) \otimes 1)~ CX_{j,k} = CX_{j,k} ~ e^{i \frac{\theta}{2} ~ (X\otimes 1)}~ CX_{j,k} = e^{i \frac{\theta}{2} ~CX_{j,k} ~ (X\otimes 1)~ CX_{j,k}} = e^{i \frac{\theta}{2} ~ X\otimes X}
$$
This transforms our simple, single-qubit rotation into a much more powerful two-qubit gate. This is not just equivalent to performing the same rotation independently on both qubits. Instead, it is a gate capable of generating and manipulating entangled states.
We needn't stop there. We can use the same trick to extend the operation to any number of qubits. All that's needed is more conjugates by the CNOT to keep copying the $X$ over to new qubits.
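The same check works for the two-qubit construction. Again a sketch: the control is the left kron factor, and we use the fact that the CNOT is its own inverse.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

theta = 0.7
single = np.kron(expm(1j * theta / 2 * X), I2)  # R_x(theta) on the control qubit
# Sandwiching between CNOTs turns it into the entangling X (x) X rotation
assert np.allclose(CX @ single @ CX, expm(1j * theta / 2 * np.kron(X, X)))
```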
Furthermore, we can use single-qubit Cliffords to transform the Pauli on different qubits. For example, in our two-qubit example we could conjugate by $S$ on the qubit on the left to turn the $X$ there into a $Y$:
$$
S ~e^{i \frac{\theta}{2} ~ X\otimes X}~S^\dagger = e^{i \frac{\theta}{2} ~ X\otimes Y}.
$$
With these techniques, we can make complex entangling operations that act on any arbitrary number of qubits, of the form
$$
U = e^{i\frac{\theta}{2} ~ P_{n-1}\otimes P_{n-2}\otimes...\otimes P_0}, ~~~ P_j \in \{I,X,Y,Z\}.
$$
This all goes to show that combining the single and two-qubit Clifford gates with rotations around the x axis gives us a powerful set of possibilities. What's left to demonstrate is that we can use them to do anything.
| github_jupyter |
# Bite Size Bayes
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Review
[In the previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/05_test.ipynb) ...
So far we have used only Bayes's Theorem; now we move on to Bayesian statistics.
Here is the function we defined there.
## Probability Mass Functions
When we do more than one update, we don't always want to keep the whole Bayes table. In this section we'll replace the Bayes table with a more compact representation, using a Pandas Series.
Here's a function that takes a sequence of values, `xs`, and their corresponding probabilities, `ps`, and returns a Pandas Series:
```
def make_pmf(xs, ps, **options):
"""Make a Series that represents a PMF.
xs: sequence of values
ps: sequence of probabilities
options: keyword arguments passed to Series constructor
returns: Pandas Series
"""
pmf = pd.Series(ps, index=xs, **options)
return pmf
def bayes_update(pmf, likelihood):
"""Do a Bayesian update.
pmf: Series that represents the prior
likelihood: sequence of likelihoods
returns: float probability of the data
"""
pmf *= likelihood
prob_data = pmf.sum()
pmf /= prob_data
return prob_data
xs = np.arange(101)
prior = 1/101
pmf = make_pmf(xs, prior)
likelihood_vanilla = xs / 100
likelihood_chocolate = 1 - likelihood_vanilla
data = 'VVC'
for cookie in data:
if cookie == 'V':
bayes_update(pmf, likelihood_vanilla)
else:
bayes_update(pmf, likelihood_chocolate)
pmf.plot()
plt.xlabel('Bowl #')
plt.ylabel('Probability')
plt.title('Three cookies');
```
## The Euro problem
In this notebook we'll work on a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html), which is the book where I first learned about Bayesian statistics. MacKay writes:
> A statistical statement appeared in The Guardian on
Friday January 4, 2002:
>
> >"When spun on edge 250 times, a Belgian one-euro coin came
up heads 140 times and tails 110. ‘It looks very suspicious
to me’, said Barry Blight, a statistics lecturer at the London
School of Economics. ‘If the coin were unbiased the chance of
getting a result as extreme as that would be less than 7%’."
>
> But [asks MacKay] do these data give evidence that the coin is biased rather than fair?
To answer this question, we have to make some modeling decisions.
First, let's assume that if you spin a coin on edge, there is some probability that it will land heads up. I'll call that probability $x$.
Second, let's assume that $x$ varies from one coin to the next, depending on how the coin is balanced and maybe other factors.
With these assumptions we can formulate MacKay's question as an inference problem: given the data, 140 heads and 110 tails, what do we think $x$ is for this coin?
This formulation is similar to the 101 Bowls problem we saw in the previous notebook; in fact, we will use the same likelihoods.
But in the 101 Bowls problem, we are told that we choose a bowl at random, which implies that all bowls have the same prior probability.
For the Euro problem, we have to think harder. What values of $x$ do you think are reasonable?
It seems likely that many coins are "fair", meaning that the probability of heads is close to 50%. Do you think there are coins where $x$ is 75%? How about 90%?
To be honest, I don't really know. To get started, I will assume that all values of $x$, from 0% to 100%, are equally likely. So that's the same prior we used for the 101 Bowls.
So here's the prior:
```
xs = np.arange(101)
prior = 1/101
pmf = make_pmf(xs, prior)
```
Here are the likelihoods for heads and tails:
```
likelihood_heads = xs / 100
likelihood_tails = 1 - likelihood_heads
```
And here are the updates for 140 heads and 110 tails.
```
for i in range(140):
bayes_update(pmf, likelihood_heads)
for i in range(110):
bayes_update(pmf, likelihood_tails)
```
Here's what the results look like:
```
pmf.plot()
plt.xlabel('Possible values of x')
plt.ylabel('Probability')
plt.title('140 heads, 110 tails');
```
This curve shows the "posterior distribution" of $x$; a "distribution" is a set of possible values and their probabilities.
## Put a function on it
Before we go on, let's put that update in a function, because we are going to need it again.
```
def bayes_update_euro(pmf, data):
"""Do a Bayesian update.
pmf: Series that represents a prior PMF
data: tuple of number of heads, tails
"""
heads, tails = data
xs = pmf.index
likelihood_heads = xs / 100
likelihood_tails = 1 - likelihood_heads
for i in range(heads):
bayes_update(pmf, likelihood_heads)
for i in range(tails):
bayes_update(pmf, likelihood_tails)
```
This function takes a PMF that represents the prior, and a tuple that contains the number of heads and tails.
Here's the uniform prior again.
```
xs = np.arange(101)
prior = 1/101
uniform = make_pmf(xs, prior)
```
Here's the update.
```
data = 140, 110
bayes_update_euro(uniform, data)
```
And here are the results again.
```
uniform.plot()
plt.xlabel('Possible values of x')
plt.ylabel('Probability')
plt.title('140 heads, 110 tails');
```
## A better prior
But remember that this result is based on a uniform prior, which assumes that any value of $x$ from 0 to 100 is equally likely.
Given what we know about coins, that's probably not true. I can believe that if you spin a lopsided coin on edge, it might be somewhat more likely to land on heads or tails.
But unless the coin is heavily weighted on one side, I would be surprised if $x$ were greater than 60% or less than 40%.
Of course, I could be wrong, but in general I would expect to find $x$ closer to 50%, and I would be surprised to find it near 0% or 100%.
I can represent that prior belief with a triangle-shaped prior.
First I'll make an array that ramps up from 0 to 100 and an array that ramps down from 100 to 0.
```
ramp_up = xs
ramp_down = 100 - ramp_up
```
To construct a triangle prior I'll start with a copy of `ramp_up` and replace the second half with `ramp_down`.
```
prior = ramp_up.copy()
high = (xs > 50)
prior[high] = ramp_down[high]
```
Then I'll put the results into a PMF and normalize it so it adds up to 1.
```
triangle = make_pmf(xs, prior)
triangle /= triangle.sum()
```
Here's what the triangle prior looks like.
```
triangle.plot(color='C1')
plt.xlabel('Possible values of x')
plt.ylabel('Probability')
plt.title('Triangle prior');
```
Now let's update it with the data.
```
data = 140, 110
bayes_update_euro(triangle, data)
```
And plot the results, along with the posterior based on a uniform prior.
```
uniform.plot(label='Uniform')
triangle.plot(label='Triangle')
plt.xlabel('Possible values of x')
plt.ylabel('Probability')
plt.title('140 heads, 110 tails')
plt.legend();
```
The posterior distributions are almost identical because, in this case, we have enough data to "swamp the prior"; that is, the posteriors depend strongly on the data and only weakly on the priors.
This is good news, because it suggests that we can use data to resolve arguments. Suppose two people disagree about the correct prior. If neither can persuade the other, they might have to agree to disagree.
But if they get new data, and each of them does a Bayesian update, they will usually find their beliefs converging.
And with enough data, the remaining difference can be so small that it makes no difference in practice.
## Summarizing the posterior distribution
The posterior distribution contains all of the information we have about the value of $x$. But sometimes we want to summarize this information.
We have already seen one way to summarize a posterior distribution, the maximum a posteriori probability, or MAP:
```
uniform.idxmax()
```
`idxmax` returns the value of $x$ with the highest probability.
In this example, we get the same MAP with the triangle prior:
```
triangle.idxmax()
```
Another way to summarize the posterior distribution is the posterior mean.
Given a set of values, $x_i$, and the corresponding probabilities, $p_i$, the mean of the distribution is:
$\sum_i x_i p_i$
Using the posterior PMF, we can compute the mean directly; I'll put the computation in a function, because we are going to need it again:
```
def pmf_mean(pmf):
    """Compute the mean of a PMF.

    pmf: Series representing a PMF
    return: float
    """
    return np.sum(pmf.index * pmf)
```
Here's the posterior mean based on the uniform prior:
```
pmf_mean(uniform)
```
And here's the posterior mean with the triangle prior:
```
pmf_mean(triangle)
```
The posterior means are not identical, but they are close enough that the difference probably doesn't matter.
In this example, the posterior mean is very close to the MAP. That's true when the posterior distribution is symmetric, but it is not always true.
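To see how the MAP and the posterior mean can disagree, consider a small skewed distribution (made-up values for illustration, not the Euro data): the most probable single value is 0, but the long right tail pulls the mean well above it.

```python
import numpy as np
import pandas as pd

# A deliberately skewed PMF (hypothetical values, for illustration only)
skewed = pd.Series([0.4, 0.3, 0.15, 0.1, 0.05], index=np.arange(5))

map_value = skewed.idxmax()                 # most probable value: 0
mean_value = np.sum(skewed.index * skewed)  # weighted average: 1.1
```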
If someone asks what we think $x$ is, the MAP or the posterior mean might be a good answer.
But MacKay asked a different question: do these data give evidence that the coin is biased rather than fair?
We have more work to do before we can really answer this question. But first, I want to rule out an approach that is tempting, but incorrect.
## Posterior probability
If the coin is "fair", that means that $x$ is 50%. So it might be tempting to use the posterior PMF to compute the probability that $x$ is 50%:
```
uniform[50]
```
The result is the posterior probability that $x$ is 50%, but it is not the probability that the coin is fair.
The problem is that $x$ is really a continuous quantity, which means it could have any value between 0 and 1.
For purposes of computation, I broke this interval into 101 discrete values, but that was an arbitrary choice. I could have done the computation with 201 hypotheses, like this:
```
xs2 = np.linspace(0, 100, 201)
prior2 = 1/201
uniform2 = make_pmf(xs2, prior2)
len(uniform2)
```
Here's the update.
```
bayes_update_euro(uniform2, data)
```
And here's what the results look like.
```
uniform2.plot(color='C2')
plt.xlabel('201 possible values of x')
plt.ylabel('Probability')
plt.title('140 heads, 110 tails');
```
The results are visually similar, but you might notice that the curve is a little smoother at the peak.
The MAPs are the same and the posterior means are almost the same:
```
uniform.idxmax(), uniform2.idxmax()
pmf_mean(uniform), pmf_mean(uniform2)
```
But the total probability is spread out over twice as many hypotheses, so the probability of any single hypothesis is smaller.
If we use both posteriors to compute the probability that $x$ is 50%, we get very different results.
```
uniform[50], uniform2[50]
```
Because $x$ is continuous, we divided the interval into discrete values. But the number of values was an arbitrary choice, so the probability of any single value is not meaningful.
However, we can meaningfully compute the probability that $x$ falls in an interval.
## Credible intervals
We can use a Boolean series to select values from the posterior distribution and add up their probabilities.
Here's a function that computes the total probability of all values less than or equal to a given value of $x$.
```
def prob_le(pmf, threshold):
    le = (pmf.index <= threshold)
    total = pmf[le].sum()
    return total
```
For example, here's the probability that $x$ is less than or equal to 60%, based on the uniform prior with 101 values.
```
prob_le(uniform, 60)
```
Here's what we get with 201 values.
```
prob_le(uniform2, 60)
```
And with the triangle prior.
```
prob_le(triangle, 60)
```
The results are not identical, but they are close enough that the differences might not matter.
So let's say that the probability is 92% that $x$ is less than or equal to 60.
I'll also compute the probability that $x$ is less than or equal to 50:
```
prob_le(uniform, 50), prob_le(uniform2, 50), prob_le(triangle, 50)
```
It looks like the probability is about 4% that $x$ is less than 50.
Putting these results together, we can estimate the probability that $x$ is between 50 and 60; it's about 92% - 4% = 88%.
An interval like this is called a "credible interval" because it tells us how credible it is that $x$ falls in the interval.
In this case the interval from 50 to 60 is an 88% credible interval.
## Summary
In this notebook, we used data from a coin-spinning experiment to estimate the probability that a given coin lands on heads.
We tried three different priors: uniform distributions with 101 and 201 values, and a triangle distribution. The results are similar, which indicates that we have enough data to "swamp the priors".
And we summarized the posterior distributions three ways, computing the maximum a posteriori probability (MAP), the posterior mean, and a credible interval.
Although we have made progress, we have not yet answered the question I started with, "Do these data give evidence that the coin is biased rather than fair?"
We'll come back to this in a future notebook, but [in the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/08_soccer.ipynb), we'll take on a different question, which I call the World Cup problem.
```
import keras
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, CuDNNLSTM, CuDNNGRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Softmax, Conv2DTranspose, Embedding, Multiply
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback
from keras import regularizers
from keras import backend as K
from keras.utils.generic_utils import Progbar
from keras.layers.merge import _Merge
import keras.losses
from keras.datasets import mnist
from functools import partial
from collections import defaultdict
import tensorflow as tf
from tensorflow.python.framework import ops
import isolearn.keras as iso
import numpy as np
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import os
import pickle
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    set_session(sess)

contain_tf_gpu_mem_usage()
class EpochVariableCallback(Callback):

    def __init__(self, my_variable, my_func):
        self.my_variable = my_variable
        self.my_func = my_func

    def on_epoch_begin(self, epoch, logs={}):
        K.set_value(self.my_variable, self.my_func(K.get_value(self.my_variable), epoch))
#Load MNIST data
dataset_name = "mnist_3_vs_5"
img_rows, img_cols = 28, 28
num_classes = 10
batch_size = 32
included_classes = { 3, 5 }
(x_train, y_train), (x_test, y_test) = mnist.load_data()
keep_index_train = []
for i in range(y_train.shape[0]):
    if y_train[i] in included_classes:
        keep_index_train.append(i)

keep_index_test = []
for i in range(y_test.shape[0]):
    if y_test[i] in included_classes:
        keep_index_test.append(i)
x_train = x_train[keep_index_train]
x_test = x_test[keep_index_test]
y_train = y_train[keep_index_train]
y_test = y_test[keep_index_test]
n_train = int((x_train.shape[0] // batch_size) * batch_size)
n_test = int((x_test.shape[0] // batch_size) * batch_size)
x_train = x_train[:n_train]
x_test = x_test[:n_test]
y_train = y_train[:n_train]
y_test = y_test[:n_test]
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print("x_train.shape = " + str(x_train.shape))
print("n train samples = " + str(x_train.shape[0]))
print("n test samples = " + str(x_test.shape[0]))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
#Binarize images
def _binarize_images(x, val_thresh=0.5):
    x_bin = np.zeros(x.shape)
    x_bin[x >= val_thresh] = 1.
    return x_bin
x_train = _binarize_images(x_train, val_thresh=0.5)
x_test = _binarize_images(x_test, val_thresh=0.5)
#Visualize background image distribution
pseudo_count = 1.0
x_mean = (np.sum(x_train, axis=(0, 3)) + pseudo_count) / (x_train.shape[0] + pseudo_count)
x_mean_logits = np.log(x_mean / (1. - x_mean))
f = plt.figure(figsize=(4, 4))
plot_ix = 0
plt.imshow(x_mean, cmap="Greys", vmin=0.0, vmax=1.0, aspect='equal')
plt.xticks([], [])
plt.yticks([], [])
plt.tight_layout()
plt.show()
from tensorflow.python.framework import ops
#Stochastic Binarized Neuron helper functions (Tensorflow)
#ST Estimator code adopted from https://r2rt.com/binary-stochastic-neurons-in-tensorflow.html
#See Github https://github.com/spitis/
def bernoulli_sample(x):
    g = tf.get_default_graph()
    with ops.name_scope("BernoulliSample") as name:
        with g.gradient_override_map({"Ceil": "Identity", "Sub": "BernoulliSample_ST"}):
            return tf.ceil(x - tf.random_uniform(tf.shape(x)), name=name)

@ops.RegisterGradient("BernoulliSample_ST")
def bernoulliSample_ST(op, grad):
    return [grad, tf.zeros(tf.shape(op.inputs[1]))]
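#Illustrative aside (not part of the original pipeline): the same ceil trick
#in plain numpy. Forward pass: ceil(p - u) with u ~ Uniform(0, 1) is a
#Bernoulli(p) draw; the gradient override above makes the backward pass treat
#the sample as if it were p itself (the "straight-through" estimator).
#The *_demo names below are illustrative, not from the original code.
import numpy as np
rng_demo = np.random.default_rng(0)
p_demo = np.full(10000, 0.7)
sample_demo = np.ceil(p_demo - rng_demo.uniform(size=p_demo.shape))
#sample_demo contains only 0s and 1s, with mean close to p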
#Masking and Sampling helper functions
def sample_image_st(x):
    p = tf.sigmoid(x)
    return bernoulli_sample(p)
#Generator helper functions
def initialize_templates(generator, background_matrices):
    embedding_backgrounds = []
    for k in range(len(background_matrices)):
        embedding_backgrounds.append(background_matrices[k].reshape(1, -1))
    embedding_backgrounds = np.concatenate(embedding_backgrounds, axis=0)

    generator.get_layer('background_dense').set_weights([embedding_backgrounds])
    generator.get_layer('background_dense').trainable = False
#Generator construction function
def build_sampler(batch_size, n_rows, n_cols, n_classes=1, n_samples=1):

    #Initialize Reshape layer
    reshape_layer = Reshape((n_rows, n_cols, 1))

    #Initialize background matrix
    background_dense = Embedding(n_classes, n_rows * n_cols, embeddings_initializer='zeros', name='background_dense')

    #Initialize Templating and Masking Lambda layer
    background_layer = Lambda(lambda x: x[0] + x[1], name='background_layer')

    #Initialize Sigmoid layer
    image_layer = Lambda(lambda x: K.sigmoid(x), name='image')

    #Initialize Sampling layers
    upsampling_layer = Lambda(lambda x: K.tile(x, [n_samples, 1, 1, 1]), name='upsampling_layer')
    sampling_layer = Lambda(sample_image_st, name='image_sampler')
    permute_layer = Lambda(lambda x: K.permute_dimensions(K.reshape(x, (n_samples, batch_size, n_rows, n_cols, 1)), (1, 0, 2, 3, 4)), name='permute_layer')

    def _sampler_func(class_input, raw_logits):

        #Get Template and Mask
        background = reshape_layer(background_dense(class_input))

        #Add Template and Multiply Mask
        image_logits = background_layer([raw_logits, background])

        #Compute Image (Sigmoids from logits)
        image = image_layer(image_logits)

        #Tile each image to sample from and create sample axis
        image_logits_upsampled = upsampling_layer(image_logits)
        sampled_image = sampling_layer(image_logits_upsampled)
        sampled_image = permute_layer(sampled_image)

        return image_logits, image, sampled_image

    return _sampler_func
#Initialize Encoder and Decoder networks
batch_size = 32
n_rows = 28
n_cols = 28
n_samples = 128
#Load sampler
sampler = build_sampler(batch_size, n_rows, n_cols, n_classes=1, n_samples=n_samples)
#Load Predictor
predictor_path = 'saved_models/mnist_binarized_cnn_10_digits.h5'
predictor = load_model(predictor_path)
predictor.trainable = False
predictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')
#Build scrambler model
dummy_class = Input(shape=(1,), name='dummy_class')
input_logits = Input(shape=(n_rows, n_cols, 1), name='input_logits')
image_logits, image, sampled_image = sampler(dummy_class, input_logits)
scrambler_model = Model([input_logits, dummy_class], [image_logits, image, sampled_image])
#Initialize Templates and Masks
initialize_templates(scrambler_model, [x_mean_logits])
scrambler_model.trainable = False
scrambler_model.compile(
    optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
    loss='mean_squared_error'
)
scrambler_model.summary()
file_names = [
"autoscrambler_dataset_mnist_3_vs_5_n_samples_32_resnet_5_4_32_3_00_n_epochs_50_target_bits_0005_kl_divergence_higher_entropy_penalty_importance_scores_test.npy",
"autoscrambler_dataset_mnist_3_vs_5_inverted_scores_n_samples_32_resnet_5_4_32_3_00_n_epochs_50_target_bits_03_kl_divergence_higher_entropy_penalty_importance_scores_test.npy",
"pytorch_saliency_mnist_3_vs_5_smaller_blur_importance_scores_test.npy",
"extremal_mnist_3_vs_5_mode_delete_perturbation_blur_area_01_importance_scores_test.npy",
"extremal_mnist_3_vs_5_mode_delete_perturbation_fade_area_01_importance_scores_test.npy"
]
model_names =[
"scrambler",
"scrambler_inverted",
"saliency_model",
"torchray_blur",
"torchray_fade",
]
model_importance_scores_test = [np.load(file_name) for file_name in file_names]
feature_quantiles = [0.80, 0.90, 0.95, 0.98]
on_state_logit_val = 50.
dummy_test = np.zeros((x_test.shape[0], 1))
x_test_logits = 2. * x_test - 1.
digit_test = np.argmax(y_test, axis=-1)
y_pred_ref = predictor.predict([x_test], batch_size=32, verbose=True)
model_kl_divergences = []
model_kl_divergences = []
for model_i in range(len(model_names)):
    print("Benchmarking model '" + str(model_names[model_i]) + "'...")

    feature_quantile_kl_divergences = []
    for feature_quantile in feature_quantiles:
        print("Feature quantile = " + str(feature_quantile))

        importance_scores_test = np.abs(model_importance_scores_test[model_i])
        n_to_test = importance_scores_test.shape[0] // batch_size * batch_size
        importance_scores_test = importance_scores_test[:n_to_test]

        quantile_vals = np.quantile(importance_scores_test, axis=(1, 2, 3), q=feature_quantile, keepdims=True)
        quantile_vals = np.tile(quantile_vals, (1, importance_scores_test.shape[1], importance_scores_test.shape[2], importance_scores_test.shape[3]))

        top_logits_test = np.zeros(importance_scores_test.shape)
        top_logits_test[importance_scores_test <= quantile_vals] = on_state_logit_val
        top_logits_test = top_logits_test * x_test_logits[:n_to_test]

        _, _, samples_test = scrambler_model.predict([top_logits_test, dummy_test[:n_to_test]], batch_size=batch_size)

        mean_kl_divs = []
        for data_ix in range(samples_test.shape[0]):
            if data_ix % 100 == 0:
                print("Processing example " + str(data_ix) + "...")

            y_pred_var_samples = predictor.predict([samples_test[data_ix, ...]], batch_size=n_samples)
            y_pred_ref_samples = np.tile(np.expand_dims(y_pred_ref[data_ix, :], axis=0), (n_samples, 1))

            kl_divs = np.sum(y_pred_ref_samples * np.log(y_pred_ref_samples / y_pred_var_samples), axis=-1)
            #kl_divs = (y_pred_ref_samples[:, digit_test[data_ix]] - np.mean(y_pred_ref_samples, axis=-1)) - (y_pred_var_samples[:, digit_test[data_ix]] - np.mean(y_pred_var_samples, axis=-1))

            mean_kl_div = np.mean(kl_divs)
            mean_kl_divs.append(mean_kl_div)

        mean_kl_divs = np.array(mean_kl_divs)
        feature_quantile_kl_divergences.append(mean_kl_divs)

    model_kl_divergences.append(feature_quantile_kl_divergences)
model_names =[
"scrambler",
"scrambler\n(inverted)",
"saliency\nmodel",
"torchray\n(blur)",
"torchray\n(fade)",
]
def lighten_color(color, amount=0.5):
    import matplotlib.colors as mc
    import colorsys
    try:
        c = mc.cnames[color]
    except:
        c = color
    c = colorsys.rgb_to_hls(*mc.to_rgb(c))
    return colorsys.hls_to_rgb(c[0], 1 - amount * (1 - c[1]), c[2])
fig = plt.figure(figsize=(13, 6))
benchmark_name = "benchmark_ablation_mnist_scrambler_like"
save_figs = True
width = 0.2
max_y_val = 4.0
cm = plt.get_cmap('viridis_r')
shades = [0.4, 0.6, 0.8, 1]
quantiles = [0.5, 0.8, 0.9, 0.95]
all_colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] + plt.rcParams['axes.prop_cycle'].by_key()['color']
model_colors = {model_names[i]: all_colors[i] for i in range(len(model_names))}
results = np.zeros((len(quantiles), len(model_names), 1))
for i in range(1, len(feature_quantiles) + 1):

    for j in range(len(model_names)):
        kl_div_samples = model_kl_divergences[j][i-1]
        for l in range(len(quantiles)):
            quantile = quantiles[l]
            results[l, j, 0] = np.quantile(kl_div_samples, q=quantile)

    xs = range(len(model_names))
    xs = [xi + i*width for xi in xs]

    for j in range(len(model_names)):
        for l in range(len(quantiles)):
            model_name = model_names[j]
            c = model_colors[model_name]
            val = results[l, j, 0]

            if i == 1 and j == 0:
                lbl = "$%i^{th}$ Perc." % int(100*quantiles[l])
            else:
                lbl = None

            if l == 0:
                plt.bar(xs[j], val, width=width, color=lighten_color(c, shades[l]), edgecolor='k', linewidth=1, label=lbl, zorder=l+1)
            else:
                prev_val = results[l-1, j].mean(axis=-1)
                plt.bar(xs[j], val - prev_val, width=width, bottom=prev_val, color=lighten_color(c, shades[l]), edgecolor='k', linewidth=1, label=lbl, zorder=l+1)

            if l == len(quantiles) - 1:
                plt.text(xs[j], val, "Top\n" + str(int(100 - 100 * feature_quantiles[i-1])) + "%", horizontalalignment='center', verticalalignment='bottom', fontdict={'family': 'serif', 'color': 'black', 'weight': 'bold', 'size': 10})

    prev_results = results
plt.xticks([i + 2.5*width for i in range(len(model_names))])
all_lbls = [model_names[j].upper() for j in range(len(model_names))]
plt.gca().set_xticklabels(all_lbls, rotation=60)
plt.ylabel("Test Set KL-Divergence")
max_y_val = np.max(results) * 1.1
#plt.ylim([0, max_y_val])
plt.grid(True)
plt.gca().set_axisbelow(True)
plt.gca().grid(color='gray', alpha=0.2)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().yaxis.set_ticks_position('left')
plt.gca().xaxis.set_ticks_position('bottom')
plt.legend(fontsize=12, frameon=True, loc='upper left')
leg = plt.gca().get_legend()
for l in range(len(quantiles)):
    leg.legendHandles[l].set_color(lighten_color(all_colors[7], shades[l]))
    leg.legendHandles[l].set_edgecolor('k')

plt.tight_layout()

if save_figs:
    plt.savefig(benchmark_name + ".png", dpi=300, transparent=True)
    plt.savefig(benchmark_name + ".eps")
plt.show()
#Visualize a few example patterns
save_figs = True
from numpy.ma import masked_array
digit_test = np.argmax(y_test, axis=1)
plot_examples = [3, 4]
feature_quantiles = [0.80, 0.90, 0.95, 0.98]
on_state_logit_val = 50.
dummy_test = np.zeros((x_test.shape[0], 1))
x_test_logits = 2. * x_test - 1.
for data_ix in plot_examples:
    print("Test pattern = " + str(data_ix) + ":")

    y_test_hat_ref = predictor.predict(x=[np.expand_dims(x_test[data_ix], axis=0)], batch_size=1)[0, digit_test[data_ix]]
    print(" - Prediction (original) = " + str(round(y_test_hat_ref, 2))[:4])

    for model_i in range(len(model_names)):
        print("Model = '" + str(model_names[model_i]) + "'...")

        if len(model_importance_scores_test[model_i].shape) >= 5:
            importance_scores_test = np.abs(model_importance_scores_test[model_i][1, ...])
        else:
            importance_scores_test = np.abs(model_importance_scores_test[model_i])

        importance_scores_test = importance_scores_test[:32]

        y_test_hat_mean_qts = []
        for feature_quantile_i, feature_quantile in enumerate(feature_quantiles):
            quantile_vals = np.quantile(importance_scores_test, axis=(1, 2, 3), q=feature_quantile, keepdims=True)
            quantile_vals = np.tile(quantile_vals, (1, importance_scores_test.shape[1], importance_scores_test.shape[2], importance_scores_test.shape[3]))

            top_logits_test = np.zeros(importance_scores_test.shape)
            top_logits_test[importance_scores_test <= quantile_vals] = on_state_logit_val
            top_logits_test = top_logits_test * x_test_logits[:32]

            _, _, samples_test = scrambler_model.predict([top_logits_test, dummy_test[:32]], batch_size=batch_size)

            y_test_hat = predictor.predict([samples_test[data_ix, ...]], batch_size=n_samples)[:, digit_test[data_ix]]
            y_test_hat_mean = np.mean(y_test_hat)
            y_test_hat_mean_qts.append(str(round(y_test_hat_mean, 2))[:4])

        print(" - Prediction (scrambled qts) = " + str(y_test_hat_mean_qts))

        image_test, samples_test = None, None
        if 'scrambler' in model_names[model_i]:
            scrambled_logits_test = (1. / np.maximum(importance_scores_test[:32], 1e-7)) * x_test_logits[:32]
            _, image_test, samples_test = scrambler_model.predict_on_batch([scrambled_logits_test, dummy_test[:32]])

            y_test_hat = predictor.predict([samples_test[data_ix, ...]], batch_size=n_samples)[:, digit_test[data_ix]]
            y_test_hat_mean = np.mean(y_test_hat)
            print(" - Prediction (scrambled natural) = " + str(round(y_test_hat_mean, 2))[:4])
        else:
            image_test = np.zeros(importance_scores_test.shape)
            samples_test = np.zeros(importance_scores_test.shape)

        f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(3 * 4, 3))

        ax1.imshow(x_test[data_ix, :, :, 0], cmap="Greys", vmin=0.0, vmax=1.0, aspect='equal')
        plt.sca(ax1)
        plt.xticks([], [])
        plt.yticks([], [])

        ax2.imshow(image_test[data_ix, :, :, 0], cmap="Greys", vmin=0.0, vmax=1.0, aspect='equal')
        plt.sca(ax2)
        plt.xticks([], [])
        plt.yticks([], [])

        ax3.imshow(importance_scores_test[data_ix, :, :, 0], cmap="hot", vmin=0.0, vmax=np.max(importance_scores_test[data_ix, :, :, 0]), aspect='equal')
        plt.sca(ax3)
        plt.xticks([], [])
        plt.yticks([], [])

        ax4.imshow(x_test[data_ix, :, :, 0], cmap="Greys", vmin=0.0, vmax=1.0, aspect='equal')
        ax4.imshow(importance_scores_test[data_ix, :, :, 0], alpha=0.75, cmap="hot", vmin=0.0, vmax=np.max(importance_scores_test[data_ix, :, :, 0]), aspect='equal')
        plt.sca(ax4)
        plt.xticks([], [])
        plt.yticks([], [])

        plt.tight_layout()

        if save_figs:
            plt.savefig("benchmark_ablation_scrambler_like_" + dataset_name + "_test_ix_" + str(data_ix) + "_" + model_names[model_i].replace("\n", "_").replace("(", "").replace(")", "") + ".png", transparent=True, dpi=300)
            plt.savefig("benchmark_ablation_scrambler_like_" + dataset_name + "_test_ix_" + str(data_ix) + "_" + model_names[model_i].replace("\n", "_").replace("(", "").replace(")", "") + ".eps")

        plt.show()
```