### First steps
The easiest way to run Python on your computer is to install Anaconda:
https://www.anaconda.com/download for your OS (Windows, macOS, Linux).
Then, from Anaconda's launcher, you can run Jupyter Notebook. This tutorial is written in Jupyter notebooks.
### Magic happening here
In Jupyter notebooks, you can run so-called magic commands prefixed with '%' or '%%'. The difference: with '%%' (a cell magic) the command applies to the entire cell, while with '%' (a line magic) it applies to a single line.
```
# run a user-defined function from a *.py file
%run eratosthenes_sieve.py
eratosthenes_sieve(20)
# the %timeit magic measures the execution time of a line of code
%timeit lst = [i**2 for i in range(10000)] # list comprehension
# here is a list of all available magic commands
%lsmagic
```
### Shell commands
Shell commands start with the '!' sign.
```
!echo "Hello World!" # it's like Python's print command
!pwd # path to current folder
!ls # list of contents
```
We can pass shell results into Python variables.
```
lst = !ls
print(lst)
```
But they have a different format than regular Python lists.
```
type(lst)
```
This type (IPython's `SList`) has additional `grep` and `fields` methods.
### Python essentials
Python is an object-oriented programming language. Everything (variables, lists, functions) in Python is an object. Traditionally, the very first program should print "Hello World!".
```
print("Hello World!") # prints out given string inside print
```
Python can be used as a calculator.
```
88 + 12
12 * 6
54 / 6 # returns a floating-point number
2**5
```
### Exercise
Calculate the number of seconds in one year (365 days).
```
365 * 24 * 60 * 60
```
### Variables
Any value can be assigned to a variable. A variable name can be any length, but it cannot start with a number, and Python's keywords cannot be used as variable names.
```
my_name = 'Alibek'
my_name
my_weight = 83
```
Variables are stored in memory, so we can use them in further calculations.
```
my_weight + 10
```
The order of mathematical operations follows the *PEMDAS* convention.
- P - parentheses
- E - exponentiation
- M&D - multiplication & division
- A&S - addition & subtraction
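For example, the convention above can be checked directly in a cell; parentheses override the default precedence:

```python
# multiplication binds tighter than addition
print(2 + 3 * 4)    # -> 14
# parentheses are evaluated first
print((2 + 3) * 4)  # -> 20
# exponentiation is right-associative: 2**(3**2)
print(2**3**2)      # -> 512
```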
"+" and "*" operators can be used to string variables as well. "+" for concatenation of strings, "*" for copying number of times.
```
str1 = 'Marko '
str2 = 'Polo'
str1 + str2
str1 * 3
```
Comments are very useful for programmers.
- \# - one line comment
- """ - multi-line comment
### Functions
Python has a lot of built-in functions. You can change the type of a variable, use mathematical operations, and much more.
```
int(3.9)
str(3.9)
import math # import the math module for mathematical functions
math.sin(3)
math.sqrt(100)
```
You can nest functions inside other functions.
```
math.log(math.cos(25))
```
It's possible to define your own functions. The classic example is converting Celsius to Fahrenheit.
```
def cel_to_fa(celsius=0):
    """Converts Celsius to Fahrenheit"""
    return celsius * 1.8 + 32
cel_to_fa(36.6)
help(cel_to_fa) # docstrings are available via the help function
```
### Exercise
Calculate the volume of a sphere with the formula $$v = \frac{4}{3} \pi r^3$$
```
def sphere_volume(r=1):
    return math.pi * r**3 * 4 / 3
sphere_volume(r = 5)
```
### Conditionals and recursion
Boolean expressions are either true or false. They can be produced with the '==' operator.
```
5 == 6
3 == 3
```
There are also other relational operators:
- != - not equal
- \> - greater
- \>= - greater or equal
- < - less
- <= - less or equal
There are three logical operators: and, or, not. Their meaning is the same as in English.
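For example, combining relational and logical operators:

```python
x = 7
print(x > 0 and x < 10)     # True: both conditions hold
print(x % 2 == 0 or x > 5)  # True: at least one condition holds
print(not x > 0)            # False: 'not' negates the expression
```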
Conditional execution is obtained with the 'if' statement.
```
x = 5
if x > 0:
    print('x is positive')
if x % 2 == 0:
    print('x is even')
else:
    print('x is odd')
```
Nested and chained conditionals are sometimes necessary.
```
x = int(x / 2)
if x % 2 == 0:
    if x == 2:
        print('x is even and equal to 2')
    else:
        print('x is even')
else:
    print('x is odd')
```
Sometimes, functions call themselves. This is called recursion.
```
def start_time(n=5):
    if n == 0:
        print('start')
    else:
        print('{} seconds left'.format(n))
        start_time(n - 1)
start_time()
```
### Exercise
Define a factorial function using recursion.
```
def factorial(n=5):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
factorial()
1*2*3*4*5
```
### Iteration
There are 2 main statements for iteration: while and for loops.
```
n = 5
while n > 0:
    print(n)
    n = n - 1
n = 5
for i in range(n, 0, -1):
    print(i)
```
The 'break' statement can stop iteration at any given step.
```
n = 5
while n > 0:
    print(n)
    n -= 1
    if n == 2:
        break
```
### Exercise
Calculate the square root of a number with Newton's algorithm.
```
def square_root_newton(x=100, x0=1, eps=1e-9):
    while True:
        ans = x0 - (x0**2 - x) / (2 * x0)
        if abs(ans - x0) < eps:
            break
        x0 = ans
    return x0
square_root_newton(9995)
```
### Strings
Strings are sequences of characters. We can get any character from a string by index.
```
my_name[2]
len(my_name) # number of characters (including spaces and other special symbols)
# we can loop over characters
for letter in my_name:
    print(letter)
# string slices
my_name[0:3]
# strings are immutable, so this raises a TypeError
my_name[0] = 'M'
# strings have some built-in methods
my_name.upper()
my_name.find('b')
# the 'in' operator checks whether a character occurs in a string
'e' in my_name
# you can check whether strings are the same
'Alibyk' == my_name
```
### Exercise
Write a function that reverses the order of letters in a string.
```
def reverse_string(string):
    rev_str = ''
    for i in range(1, len(string) + 1):
        rev_str += string[-i]
    return rev_str
reverse_string('koka-kola')
```
### Lists
Lists are the most useful built-in type in Python. A list is a sequence of values, and the values can be of any type.
```
[1, 2, 3, 4, 5]
['ss', 2, True]
```
Lists are mutable
```
lst = [1, 2, 3, 4, 5]
lst[2] = 'foo'
lst
# list concatenation
lst1 = [1, 2, 3]
lst2 = [9, 8, 7]
lst3 = lst1 + lst2
lst3
# 'append' adds an element to the end of a list
lst3.append(10)
lst3
# 'extend' adds the elements of another list one by one
lst3.extend(lst1)
lst3
# in contrast, 'append' adds the whole list as a single element
lst2.append(lst1)
lst2
# you can sort the elements of a list
lst3.sort()
lst3
# sum of all list elements
sum(lst3)
# the 'pop' method returns and deletes the last element of a list
lst3.pop()
lst3
# you can delete a given element of a list by its index
del lst3[3]
lst3
# 'remove' deletes the first occurrence of a value in a list
txt = ['a', 'b', 'c', 'b']
txt.remove('b')
txt
# you can get a list of characters from a string with the 'list' function
txt = 'parrot peter picked a peck of pickled peppers'
lst = list(txt)
lst
# the 'split' method breaks text into list elements by a given separator
lst = txt.split(' ')
lst
# 'join' does the reverse
txt = ' '.join(lst)
txt
```
### Exercise
Find the number of prime numbers below 1000.
```
def primes(n):
    primes = [False, False] + [True] * (n - 2)
    i = 2
    while i < n:
        if not primes[i]:
            i += 1
            continue
        k = i * i
        while k < n:
            primes[k] = False
            k += i
        i += 1
    return [i for i in range(n) if primes[i]]
len(primes(1000))
len(primes(1000))
```
### Dictionaries
Dictionaries are like lists, but the indexes can be of (almost) any immutable type. The collection of indexes is called keys, the elements are called values, and the items of a dictionary are called key-value pairs.
```
d = dict()
d['one'] = 1
d
d['two'] = 2
d['three'] = 3
d
d.values() # values of dictionary can be obtained with .values()
d.keys() # keys can be obtained with .keys()
```
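Beyond `.keys()` and `.values()`, the `.items()` method yields the key-value pairs, which is the usual way to loop over a dictionary:

```python
d = {'one': 1, 'two': 2, 'three': 3}
for key, value in d.items():  # .items() yields the key-value pairs
    print(key, value)
print('two' in d)  # the 'in' operator checks the keys -> True
```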
### Tuples
Tuples are sequences of values. The values can be of any type, and they are indexed just like lists. The main difference is that tuples are immutable.
```
t = 1, 2, 3, 4, 5
t
```
Tuples have the same methods as lists, except for a few differences (none of the mutating methods are available).
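A short sketch: the read-only methods `count` and `index` work as on lists, while item assignment fails because tuples are immutable.

```python
tup = (1, 2, 3, 2)
print(tup.count(2))  # how many times 2 occurs -> 2
print(tup.index(3))  # position of the first 3 -> 2
try:
    tup[0] = 9       # tuples are immutable, so assignment fails
except TypeError as err:
    print('TypeError:', err)
```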
### Function arguments
User-defined functions can take any number of arguments. In particular, \*args gives you the opportunity to pass as many positional arguments as you want.
```
def printall(*args):
    print(args)
printall(1, 2, 3, 's')
```
### Question
Name built-in functions that accept any number of arguments (e.g. print, max, min).
### Zip function
The zip function takes two or more sequences of elements and returns an iterator of tuples.
```
t1 = 1, 2, 3
t2 = 'one', 'two', 'three'
t = zip(t1, t2)
t  # it's a zip object, and we can iterate over its elements
for elems in t:
    print(elems)
```
A zip object is an iterator: you can loop through all its elements once, but you cannot get a value at a given index.
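A sketch of this behavior: once a zip object has been consumed, a second pass yields nothing, so materialize it with list() if you need indexing.

```python
z = zip((1, 2, 3), ('one', 'two', 'three'))
pairs = list(z)  # materialize the iterator into a list of tuples
print(pairs)     # [(1, 'one'), (2, 'two'), (3, 'three')]
print(list(z))   # [] -- the iterator is exhausted after one pass
```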
```
from birdcall.data import *
from birdcall.metrics import *
from birdcall.ops import *
import torch
import torchvision
from torch import nn
import numpy as np
import pandas as pd
from pathlib import Path
import soundfile as sf
BS = 16
MAX_LR = 1e-3
classes = pd.read_pickle('data/classes.pkl')
splits = pd.read_pickle('data/all_splits.pkl')
all_train_items = pd.read_pickle('data/all_train_items.pkl')
train_items = np.array(all_train_items)[splits[0][0]].tolist()
val_items = np.array(all_train_items)[splits[0][1]].tolist()
from collections import defaultdict
class2train_items = defaultdict(list)
for cls_name, path, duration in train_items:
    class2train_items[cls_name].append((path, duration))
train_ds = MelspecPoolDataset(class2train_items, classes, len_mult=50, normalize=False)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BS, num_workers=NUM_WORKERS, pin_memory=True, shuffle=True)
val_items = [(classes.index(item[0]), item[1], item[2]) for item in val_items]
val_items_binned = bin_items_negative_class(val_items)
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(*list(torchvision.models.resnet34(True).children())[:-2])
        self.classifier = nn.Sequential(*[
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),
            nn.Linear(512, len(classes))
        ])

    def forward(self, x):
        x = torch.log10(1 + x)
        max_per_example = x.view(x.shape[0], -1).max(1)[0]  # scaling to between 0 and 1
        x /= max_per_example[:, None, None, None, None]  # per example!
        bs, im_num = x.shape[:2]
        x = x.view(-1, x.shape[2], x.shape[3], x.shape[4])
        x = self.cnn(x)
        x = x.mean((2, 3))
        x = self.classifier(x)
        x = x.view(bs, im_num, -1)
        x = lme_pool(x)
        return x
model = Model().cuda()
import torch.optim as optim
from sklearn.metrics import accuracy_score, f1_score
import time
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), 1e-3)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 5)
sc_ds = SoundscapeMelspecPoolDataset(pd.read_pickle('data/soundscape_items.pkl'), classes)
sc_dl = torch.utils.data.DataLoader(sc_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)
t0 = time.time()
for epoch in range(260):
    running_loss = 0.0
    for i, data in enumerate(train_dl, 0):
        model.train()
        inputs, labels = data[0].cuda(), data[1].cuda()
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        if np.isnan(loss.item()):
            raise Exception(f'!!! nan encountered in loss !!! epoch: {epoch}\n')
        loss.backward()
        optimizer.step()
        scheduler.step()
        running_loss += loss.item()
    if epoch % 5 == 4:
        model.eval()
        preds = []
        targs = []
        for num_specs in val_items_binned.keys():
            valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)
            valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)
            with torch.no_grad():
                for data in valid_dl:
                    inputs, labels = data[0].cuda(), data[1].cuda()
                    outputs = model(inputs)
                    preds.append(outputs.cpu().detach())
                    targs.append(labels.cpu().detach())
        preds = torch.cat(preds)
        targs = torch.cat(targs)
        f1s = []
        ts = []
        for t in np.linspace(0.4, 1, 61):
            f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))
            ts.append(t)
        sc_preds = []
        sc_targs = []
        with torch.no_grad():
            for data in sc_dl:
                inputs, labels = data[0].cuda(), data[1].cuda()
                outputs = model(inputs)
                sc_preds.append(outputs.cpu().detach())
                sc_targs.append(labels.cpu().detach())
        sc_preds = torch.cat(sc_preds)
        sc_targs = torch.cat(sc_targs)
        sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')
        sc_f1s = []
        sc_ts = []
        for t in np.linspace(0.4, 1, 61):
            sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))
            sc_ts.append(t)
        f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')
        print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')
        running_loss = 0.0
        torch.save(model.state_dict(), f'models/{epoch+1}_lmepool_simple_minmax_log_{round(f1, 2)}.pth')
```
# Assignment: (Kaggle) Titanic survival prediction
https://www.kaggle.com/c/titanic
# Assignment 1
* Following the example, transform the Titanic ticket number ('Ticket') column with each of feature hashing / label encoding / target mean encoding,
then estimate the survival probability together with the other numeric columns.
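As a quick illustration of target mean encoding (with hypothetical toy data, not the actual Titanic files), each category is replaced by the mean target value of its group:

```python
import pandas as pd

# toy data standing in for the 'Ticket' column and the target (hypothetical values)
df = pd.DataFrame({'Ticket': ['A', 'B', 'A', 'C', 'B', 'A']})
train_Y = pd.Series([1, 0, 1, 0, 0, 1])

# target mean encoding: map each category to the mean target value of its group
data = pd.concat([df[['Ticket']], train_Y.rename('Survived')], axis=1)
means = data.groupby('Ticket')['Survived'].mean()
df['Ticket_mean'] = df['Ticket'].map(means)
print(df['Ticket_mean'].tolist())  # [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]
```

In practice the group means should be computed on the training rows only, to avoid leaking the target into the test set.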
```
# all preparation before feature engineering (same as the previous example)
import pandas as pd
import numpy as np
import copy, time
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
data_path = 'data/data2/'
df_train = pd.read_csv(data_path + 'titanic_train.csv')
df_test = pd.read_csv(data_path + 'titanic_test.csv')
train_Y = df_train['Survived']
ids = df_test['PassengerId']
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)
df_test = df_test.drop(['PassengerId'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# keep only categorical (object) columns, stored in object_features
object_features = []
for dtype, feature in zip(df.dtypes, df.columns):
    if dtype == 'object':
        object_features.append(feature)
print(f'{len(object_features)} Object Features : {object_features}\n')
# keep only the categorical columns
df = df[object_features]
df = df.fillna('None')
train_num = train_Y.shape[0]
df.head()
```
# Assignment 2
* Following on from above, which of the three transformations works best?
- Answer: In this example all three perform about the same; count encoding is slightly more accurate, but not noticeably so.
```
# baseline: label encoding + logistic regression
df_temp = pd.DataFrame()
for c in df.columns:
    df_temp[c] = LabelEncoder().fit_transform(df[c])
train_X = df_temp[:train_num]
estimator = LogisticRegression()
print(cross_val_score(estimator, train_X, train_Y, cv=5).mean())
df_temp.head()
# add count encoding of the 'Cabin' column
count_cabin = df.groupby('Cabin').size().reset_index()
count_cabin.columns = ['Cabin', 'Cabin_count']
count_df = pd.merge(df, count_cabin, on='Cabin', how='left')
df_temp['Cabin_count'] = count_df['Cabin_count']
train_X = df_temp[:train_Y.shape[0]]
train_X.head()
# 'Cabin' count encoding + logistic regression
cv = 5
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
# 'Cabin' feature hashing + logistic regression
cv = 5
df_temp = pd.DataFrame()
for c in df.columns:
    df_temp[c] = LabelEncoder().fit_transform(df[c])
df_temp['Cabin_hash'] = df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin']
train_X = df_temp[:train_Y.shape[0]]
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
# 'Cabin' count encoding + 'Cabin' feature hashing + logistic regression
cv = 5
df_temp = pd.DataFrame()
for c in df.columns:
    df_temp[c] = LabelEncoder().fit_transform(df[c])
df_temp['Cabin_hash'] = df['Cabin'].apply(lambda x: hash(x) % 10).reset_index()['Cabin']
df_temp['Cabin_count'] = count_df['Cabin_count']
train_X = df_temp[:train_Y.shape[0]]
LR = LogisticRegression()
mean_accuracy = cross_val_score(LR, train_X, train_Y, cv=cv).mean()
print('{}-fold cross validation average accuracy: {}'.format(cv, mean_accuracy))
```
### What if we buy a share every day at the highest price?
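The core of the strategy below is a pair of cumulative sums; a minimal sketch with made-up prices (not real market data):

```python
import pandas as pd

# hypothetical daily high/low prices for three trading days
ticker = pd.DataFrame({'High': [10.0, 11.0, 12.0], 'Low': [9.0, 10.0, 11.0]})
ticker['units'] = 1                               # buy one share per day
ticker['total_units'] = ticker['units'].cumsum()  # shares held so far
ticker['total_investment'] = (ticker['units'] * ticker['High']).cumsum()  # bought at the high
ticker['total_value'] = ticker['total_units'] * ticker['Low']             # valued pessimistically at the low
print(ticker['total_investment'].tolist())  # [10.0, 21.0, 33.0]
print(ticker['total_value'].tolist())       # [9.0, 20.0, 33.0]
```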
```
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
symbols = ['ABBV','AAPL','ADBE','APD','BRK-B','COST','CTL','DRI','IRM','KIM','MA','MCD','NFLX','NVDA','SO','V','VLO']
dates = ['2018-01-01', '2018-12-31']
data_directory = './data/hist/'
plot_directory = './plot/hist/'
def get_ticker_data(symbol, start_date, end_date):
    ticker = pd.read_csv(data_directory + symbol + '.csv')
    ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')
    ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d'))
                    & (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]
    ticker['units'] = 1
    # buy at the highest price
    ticker['investment'] = ticker['units'] * ticker['High']
    ticker['total_units'] = ticker['units'].cumsum()
    ticker['total_investment'] = ticker['investment'].cumsum()
    # value the holdings at the lowest price
    ticker['total_value'] = ticker['total_units'] * ticker['Low']
    ticker['percent'] = ((ticker['total_value'] - ticker['total_investment']) / ticker['total_investment']) * 100.0
    return ticker
def get_ticker_data_adj(symbol, start_date, end_date):
    ticker = pd.read_csv(data_directory + symbol + '.csv')
    ticker['Date'] = pd.to_datetime(ticker['Date'], format='%Y-%m-%d')
    ticker = ticker[(ticker['Date'] >= pd.to_datetime(start_date, format='%Y-%m-%d'))
                    & (ticker['Date'] <= pd.to_datetime(end_date, format='%Y-%m-%d'))]
    ticker['units'] = 1
    ticker['investment'] = ticker['units'] * ticker['Adj Close']
    ticker['total_units'] = ticker['units'].cumsum()
    ticker['total_investment'] = ticker['investment'].cumsum()
    ticker['total_value'] = ticker['total_units'] * ticker['Adj Close']
    ticker['percent'] = ((ticker['total_value'] - ticker['total_investment']) / ticker['total_investment']) * 100.0
    return ticker
for symbol in symbols:
    ticker = get_ticker_data(symbol, *dates)
    fig = plt.figure(figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
    # subplot 1: total investment vs. total value
    plt.subplot(2, 1, 1)
    plt.plot(ticker['Date'], ticker['total_investment'], color='b')
    plt.plot(ticker['Date'], ticker['total_value'], color='r')
    plt.title(symbol + ' Dates: ' + dates[0] + ' to ' + dates[1])
    plt.ylabel('Values')
    # subplot 2: percent gain/loss
    plt.subplot(2, 1, 2)
    plt.plot(ticker['Date'], ticker['percent'], color='b')
    plt.xlabel('Dates')
    plt.ylabel('Percent')
    plt.show()
    #fig.savefig(plot_directory + symbol + '.pdf', bbox_inches='tight')
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
for symbol in symbols:
    ticker = get_ticker_data(symbol, *dates)
    plt.plot(ticker['Date'], ticker['percent'])
plt.xlabel('Dates')
plt.ylabel('Percent')
plt.legend(symbols)
plt.show()
fig, axs = plt.subplots(len(symbols), 1, sharex=True)
# Remove horizontal space between axes
fig.subplots_adjust(hspace=0)
for i in range(0, len(symbols)):
    ticker = get_ticker_data(symbols[i], *dates)
    # Plot each graph, and manually set the y tick values
    axs[i].plot(ticker['Date'], ticker['percent'])
    axs[i].set_ylim(-200, 800)
    axs[i].legend([symbols[i]])
print(type(axs[i]))
```
# 1. Loading and filtering data
```
import pandas as pd
```
## 1.1. Firstly we load the data and filter the columns
```
df = pd.read_csv("/home/alberto/Documentos/MatchingLearning/Practicas/Moriarty2.csv",
usecols=["UUID","ActionType"])
df2 = pd.read_csv("/home/alberto/Documentos/MatchingLearning/Practicas/T4.csv",
usecols=["UUID", "CPU_0", "CPU_1", "CPU_2", "CPU_3", "Traffic_TotalRxBytes",
"Traffic_TotalTxBytes", "MemFree"])
```
## 1.2. Merging two datasheets
The first thing we need to do is convert the column 'UUID' (which is a timestamp in milliseconds) into a datetime.
```
df['UUID'] = pd.to_datetime(df['UUID'], unit="ms")
df['UUID'] = df['UUID'].dt.round('min')
df2['UUID'] = pd.to_datetime(df2['UUID'], unit="ms")
df2['UUID'] = df2['UUID'].dt.round('min')
data = pd.merge(df,df2, on=['UUID'])
```
## 1.3. Replace ActionType
We need numeric values in the columns, so we replace the ActionType labels malicious/benign with 1/0 respectively. Finally, we don't need the column 'UUID' for the prediction model, so we remove it.
```
data['ActionType'] = data['ActionType'].replace(['malicious'], 1)
data['ActionType'] = data['ActionType'].replace(['benign'], 0)
data = data.drop('UUID', axis=1)
data
```
# 2. Naive Bayes.
## 2.1. Preprocessing.
```
x = data[['CPU_0', 'CPU_1', 'CPU_2', 'CPU_3', 'Traffic_TotalRxBytes', 'Traffic_TotalTxBytes', 'MemFree']]
```
## 2.2. Standardization.
```
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(x)
x_scaled = scaler.transform(x)
```
## 2.3. Round.
```
y = data['ActionType']
y_round = [ round(e,0) for e in y ]
```
## 2.4. Sample a training set while holding out 40% of the data for testing (evaluating) our classifier:
```
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_scaled, y_round, test_size=0.4)
```
## 2.5. Create a Gaussian Classifier.
```
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
```
## 2.6. Training the model.
```
model.fit(x_train, y_train)
```
## 2.7. Prediction on the held-out test data.
```
y_pred = model.predict(x_test)
```
## 2.8. Evaluation.
```
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test,y_pred)
print ("Error Measure ", mae)
# x axis for plotting
import numpy as np
import matplotlib.pyplot as plt
xx = np.arange(len(y_test))
plt.scatter(xx, y_test, c='r', label='data')
plt.plot(xx, y_pred, c='g', label='prediction')
plt.axis('tight')
plt.legend()
plt.title("Gaussian NaiveBayes")
plt.show()
```
# 3. CROSS VALIDATION ANALYSIS
## 3.1. Features and labels
```
x = data[['CPU_0', 'CPU_1', 'CPU_2', 'CPU_3', 'Traffic_TotalRxBytes', 'Traffic_TotalTxBytes', 'MemFree']]
y = data['ActionType']
```
## 3.2. x axis for plotting
```
import numpy as np
xx = np.arange(len(y))
```
## 3.3. Analysis
```
import matplotlib.pyplot as plt
from sklearn import neighbors
from sklearn.model_selection import cross_val_score
for i, weights in enumerate(['uniform', 'distance']):
    total_scores = []
    for n_neighbors in range(1, 30):
        knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
        knn.fit(x, y)
        scores = -cross_val_score(knn, x, y,
                                  scoring='neg_mean_absolute_error', cv=10)
        total_scores.append(scores.mean())
    plt.plot(range(0, len(total_scores)), total_scores,
             marker='o', label=weights)
plt.ylabel('cv score')
plt.legend()
plt.show()
```
# PCA.
```
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
datanorm = scaler.fit_transform(data)
from sklearn.decomposition import PCA
n_components = 2
estimator = PCA(n_components)
X_pca = estimator.fit_transform(datanorm)
import numpy
import matplotlib.pyplot as plt
x = X_pca[:,0]
y = X_pca[:,1]
plt.scatter(x,y)
plt.show()
```
# Creating CSV to analyze the results.
```
import os
directory = "../data/processed"
if not os.path.exists(directory):
    os.makedirs(directory)
data.to_csv(directory + "/MoriartyT4.csv")
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see [this forum post](https://carnd-forums.udacity.com/cq/viewquestion.action?spaceKey=CAR&id=29496372&questionTitle=finding-lanes---import-cv2-fails-even-though-python-in-the-terminal-window-has-no-problem-with-import-cv2) for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.

    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)
    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=2):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).

    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.

    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    # Initialize variables
    sum_fit_left = 0
    sum_fit_right = 0
    number_fit_left = 0
    number_fit_right = 0
    for line in lines:
        for x1, y1, x2, y2 in line:
            # find the slope and offset of each line found (y=mx+b)
            fit = np.polyfit((x1, x2), (y1, y2), 1)
            # limit the slope to plausible left lane values and accumulate the slope/offset
            if fit[0] >= min_slope and fit[0] <= max_slope:
                sum_fit_left = fit + sum_fit_left
                number_fit_left = number_fit_left + 1
            # limit the slope to plausible right lane values and accumulate the slope/offset
            if fit[0] >= -max_slope and fit[0] <= -min_slope:
                sum_fit_right = fit + sum_fit_right
                number_fit_right = number_fit_right + 1
    # avoid division by 0
    if number_fit_left > 0:
        # Compute the mean of all fitted lines
        mean_left_fit = sum_fit_left / number_fit_left
        # Given two y points (bottom of image and top of region of interest), compute the x coordinates
        x_top_left = int((roi_top - mean_left_fit[1]) / mean_left_fit[0])
        x_bottom_left = int((roi_bottom - mean_left_fit[1]) / mean_left_fit[0])
        # Draw the line
        cv2.line(img, (x_bottom_left, roi_bottom), (x_top_left, roi_top), [255, 0, 0], 5)
    else:
        mean_left_fit = (0, 0)
    if number_fit_right > 0:
        # Compute the mean of all fitted lines
        mean_right_fit = sum_fit_right / number_fit_right
        # Given two y points (bottom of image and top of region of interest), compute the x coordinates
        x_top_right = int((roi_top - mean_right_fit[1]) / mean_right_fit[0])
        x_bottom_right = int((roi_bottom - mean_right_fit[1]) / mean_right_fit[0])
        # Draw the line
        cv2.line(img, (x_bottom_right, roi_bottom), (x_top_right, roi_top), [255, 0, 0], 5)
    else:
        mean_right_fit = (0, 0)

def hough_lines(img, roi_top, roi_bottom, min_slope, max_slope, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.

    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines, roi_top, roi_bottom, min_slope, max_slope, color=[255, 0, 0], thickness=4)
    return line_img

# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of the hough_lines(), an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.

    `initial_img` should be the image before any processing.

    The result image is computed as follows:

    initial_img * α + img * β + λ

    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
test_images = os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
def process_image1(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
#prepare directory to receive processed images
newpath = 'test_images/processed'
if not os.path.exists(newpath):
os.makedirs(newpath)
for file in test_images:
# skip files starting with processed
if file.startswith('processed'):
continue
image = mpimg.imread('test_images/' + file)
processed_img = process_image1(image)
#Extract file name
base = os.path.splitext(file)[0]
#break
mpimg.imsave('test_images/processed/processed-' + base +'.png', processed_img, format = 'png', cmap = plt.cm.gray)
print("Processed ", file)
```
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(img):
#Apply greyscale
gray_img = grayscale(img)
# Define a kernel size and Apply Gaussian blur
kernel_size = 5
blur_img = gaussian_blur(gray_img, kernel_size)
#Apply the Canny transform
low_threshold = 50
high_threshold = 150
canny_img = canny(blur_img, low_threshold, high_threshold)
#Region of interest (roi) horizontal percentages
roi_hor_perc_top_left = 0.4675
roi_hor_perc_top_right = 0.5375
roi_hor_perc_bottom_left = 0.11
roi_hor_perc_bottom_right = 0.95
#Region of interest vertical percentages
roi_vert_perc = 0.5975
#Apply a region of interest mask of the image
vertices = np.array([[(int(roi_hor_perc_bottom_left*img.shape[1]),img.shape[0]), (int(roi_hor_perc_top_left*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_top_right*img.shape[1]), int(roi_vert_perc*img.shape[0])), (int(roi_hor_perc_bottom_right*img.shape[1]),img.shape[0])]], dtype=np.int32)
croped_img = region_of_interest(canny_img,vertices)
# Define the Hough img parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
min_slope = 0.5 # minimum line slope
max_slope = 0.8 # maximum line slope
# Apply the Hough transform to get an image and the lines
hough_img = hough_lines(croped_img, int(roi_vert_perc*img.shape[0]), img.shape[0], min_slope, max_slope, rho, theta, threshold, min_line_length, max_line_gap)
# Return the image of the lines blended with the original
return weighted_img(img, hough_img, 0.7, 1.0)
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}" >
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
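The averaging-and-extrapolation step in `draw_lines` reduces to inverting $y = mx + b$ at the two target rows (the bottom of the image and the top of the region of interest). A minimal standalone sketch, with our own helper name:

```
def extrapolate(slope, intercept, y_top, y_bottom):
    """Given a mean line fit (slope, intercept), return the endpoints
    where the line crosses the rows y_top and y_bottom."""
    x_top = int((y_top - intercept) / slope)
    x_bottom = int((y_bottom - intercept) / slope)
    return (x_bottom, y_bottom), (x_top, y_top)

# A 45-degree line through the origin crosses row y at column y
print(extrapolate(1.0, 0.0, 330, 540))  # ((540, 540), (330, 330))
```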
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# Modeling and Simulation in Python
Chapter 23
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from the previous chapter
```
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
degree = UNITS.degree
params = Params(x = 0 * m,
y = 1 * m,
g = 9.8 * m/s**2,
mass = 145e-3 * kg,
diameter = 73e-3 * m,
rho = 1.2 * kg/m**3,
C_d = 0.3,
angle = 45 * degree,
velocity = 40 * m / s,
t_end = 20 * s)
def make_system(params):
"""Make a system object.
params: Params object with angle, velocity, x, y,
diameter, duration, g, mass, rho, and C_d
returns: System object
"""
unpack(params)
# convert angle from degrees to radians
theta = np.deg2rad(angle)
# compute x and y components of velocity
vx, vy = pol2cart(theta, velocity)
# make the initial state
init = State(x=x, y=y, vx=vx, vy=vy)
# compute area from diameter
area = np.pi * (diameter/2)**2
return System(params, init=init, area=area)
def drag_force(V, system):
"""Computes drag force in the opposite direction of `V`.
V: velocity
system: System object with rho, C_d, area
returns: Vector drag force
"""
unpack(system)
mag = -rho * V.mag**2 * C_d * area / 2
direction = V.hat()
f_drag = mag * direction
return f_drag
def slope_func(state, t, system):
"""Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
"""
x, y, vx, vy = state
unpack(system)
V = Vector(vx, vy)
a_drag = drag_force(V, system) / mass
a_grav = Vector(0, -g)
a = a_grav + a_drag
return vx, vy, a.x, a.y
def event_func(state, t, system):
"""Stop when the y coordinate is 0.
state: State object
t: time
system: System object
returns: y coordinate
"""
x, y, vx, vy = state
return y
```
### Optimal launch angle
To find the launch angle that maximizes distance from home plate, we need a function that takes launch angle and returns range.
```
def range_func(angle, params):
"""Computes range for a given launch angle.
angle: launch angle in degrees
params: Params object
returns: distance in meters
"""
params = Params(params, angle=angle)
system = make_system(params)
results, details = run_ode_solver(system, slope_func, events=event_func)
x_dist = get_last_value(results.x) * m
return x_dist
```
Let's test `range_func`.
```
%time range_func(45, params)
```
And sweep through a range of angles.
```
angles = linspace(20, 80, 21)
sweep = SweepSeries()
for angle in angles:
x_dist = range_func(angle, params)
print(angle, x_dist)
sweep[angle] = x_dist
```
Plotting the `Sweep` object, it looks like the peak is between 40 and 45 degrees.
```
plot(sweep, color='C2')
decorate(xlabel='Launch angle (degree)',
ylabel='Range (m)',
title='Range as a function of launch angle',
legend=False)
savefig('figs/chap10-fig03.pdf')
```
We can use `max_bounded` to search for the peak efficiently.
```
%time res = max_bounded(range_func, [0, 90], params)
```
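`max_bounded` comes from the `modsim` library; conceptually it maximizes a one-argument function over a closed interval. A self-contained stand-in using golden-section search (our own sketch, not the modsim implementation, assuming the objective is unimodal on the interval):

```
def max_golden(f, lo, hi, tol=1e-6):
    """Maximize a unimodal function on [lo, hi] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):   # maximum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:             # maximum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    x = (a + b) / 2
    return x, f(x)

# Peak of -(t-3)^2 + 5 on [0, 10] is at t=3
x, fx = max_golden(lambda t: -(t - 3) ** 2 + 5, 0, 10)
print(x, fx)  # ~3.0 ~5.0
```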
`res` is a `ModSimSeries` object with detailed results:
```
res
```
`x` is the optimal angle and `fun` the optimal range.
```
optimal_angle = res.x * degree
max_x_dist = res.fun
```
### Under the hood
Read the source code for `max_bounded` and `min_bounded`, below.
Add a print statement to `range_func` that prints `angle`. Then run `max_bounded` again so you can see how many times it calls `range_func` and what the arguments are.
```
%psource max_bounded
%psource min_bounded
```
### The Manny Ramirez problem
Finally, let's solve the Manny Ramirez problem:
*What is the minimum effort required to hit a home run in Fenway Park?*
Fenway Park is a baseball stadium in Boston, Massachusetts. One of its most famous features is the "Green Monster", which is a wall in left field that is unusually close to home plate, only 310 feet along the left field line. To compensate for the short distance, the wall is unusually high, at 37 feet.
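Those dimensions explain the constants used later in this problem: converting to meters (1 ft = 0.3048 m), the wall is about 94.5 m from home plate along the line and about 11.3 m high.

```
FT_TO_M = 0.3048  # exact definition of the international foot

wall_distance_m = 310 * FT_TO_M  # distance along the left field line
wall_height_m = 37 * FT_TO_M     # height of the Green Monster

print(round(wall_distance_m, 1), round(wall_height_m, 1))  # 94.5 11.3
```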
Although the problem asks for a minimum, it is not an optimization problem. Rather, we want to solve for the initial velocity that just barely gets the ball to the top of the wall, given that it is launched at the optimal angle.
And we have to be careful about what we mean by "optimal". For this problem, we don't want the longest range, we want the maximum height at the point where it reaches the wall.
If you are ready to solve the problem on your own, go ahead. Otherwise I will walk you through the process with an outline and some starter code.
As a first step, write a function called `height_func` that takes a launch angle and a `Params` object as parameters, simulates the flight of a baseball, and returns the height of the ball when it reaches a point 94.5 meters (310 feet) from home plate.
```
def event_func1(state, t, system):
"""Stop when the ball reaches the home plate.
state: State object
t: time
system: System object
returns: y coordinate
"""
x, y, vx, vy = state
return x-94.5
def height_func(angle, params):
"""Computes range for a given launch angle.
angle: launch angle in degrees
params: Params object
returns: final height of the ball
"""
params = Params(params, angle=angle)
system = make_system(params)
results, details = run_ode_solver(system, slope_func, events=event_func1)
y_height = get_last_value(results.y) * m
return y_height
```
Always test the slope function with the initial conditions.
```
# Solution goes here
# Solution goes here
```
Test your function with a launch angle of 45 degrees:
```
# Solution goes here
```
Now use `max_bounded` to find the optimal angle. Is it higher or lower than the angle that maximizes range?
```
maxim = max_bounded(height_func, [0,90],params)
optimal_angle = maxim.x
# Solution goes here
```
With initial velocity 40 m/s and an optimal launch angle, the ball clears the Green Monster with a little room to spare.
Which means we can get over the wall with a lower initial velocity.
### Finding the minimum velocity
Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 11 m, given that it's launched at the optimal angle. And that's a job for `fsolve`.
Write an error function that takes a velocity and a `Params` object as parameters. It should use `max_bounded` to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11 meters.
```
def error_func(velocity, params):
params1 = Params(params, velocity=velocity)
answer = max_bounded(height_func, [0, 90], params1)
return 11 - answer.fun
```
Test your error function before you call `fsolve`.
```
error_func(12, params)
```
Then use `fsolve` to find the answer to the problem, the minimum velocity that gets the ball out of the park.
```
# Solution goes here
# Solution goes here
```
And just to check, run `error_func` with the value you found.
```
# Solution goes here
```
```
from IPython.display import Markdown as md
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#from sklearn.linear_model import LogisticRegression
#from sklearn.metrics import auc as sklearn_auc
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer
import shelve
savefile = 'Savefile.sav'
```
- Homework source https://github.com/alexeygrigorev/mlbookcamp-code/blob/master/course-zoomcamp/06-trees/homework.md
- Lecture https://github.com/alexeygrigorev/mlbookcamp-code/blob/master/chapter-06-trees/06-trees.ipynb
## 6.10 Homework
The goal of this homework is to create a tree-based regression model for predicting apartment prices (column `'price'`).
In this homework we'll again use the New York City Airbnb Open Data dataset - the same one we used in homework 2 and 3.
You can take it from [Kaggle](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data?select=AB_NYC_2019.csv)
or download from [here](https://raw.githubusercontent.com/alexeygrigorev/datasets/master/AB_NYC_2019.csv)
if you don't want to sign up to Kaggle.
For this homework, we prepared a [starter notebook](homework-6-starter.ipynb).
## Loading the data
* Use only the following columns:
* `'neighbourhood_group',`
* `'room_type',`
* `'latitude',`
* `'longitude',`
* `'minimum_nights',`
* `'number_of_reviews','reviews_per_month',`
* `'calculated_host_listings_count',`
* `'availability_365',`
* `'price'`
* Fill NAs with 0
* Apply the log transform to `price`
* Do train/validation/test split with 60%/20%/20% distribution.
* Use the `train_test_split` function and set the `random_state` parameter to 1
* Use `DictVectorizer` to turn the dataframe into matrices
```
col = ['neighbourhood_group',
'room_type',
'latitude',
'longitude',
'minimum_nights',
'number_of_reviews','reviews_per_month',
'calculated_host_listings_count',
'availability_365',
'price']
df = (
pd.read_csv('../input/new-york-city-airbnb-open-data/AB_NYC_2019.csv')
[col]
.fillna(0)
)
df['price'] = np.log1p(df['price'])
df.head()
y='price'
test=0.2
val=0.2
seed=1
df_train_full, df_test = train_test_split(df, test_size=test, random_state=seed)
df_train, df_val = train_test_split(df_train_full, test_size=val/(1-test), random_state=seed)
y_test = df_test[y].copy().values
y_val = df_val[y].copy().values
y_train = df_train[y].copy().values
del df_test[y]
del df_val[y]
del df_train[y]
# hot encoding
dict_train = df_train.to_dict(orient='records')
dict_val = df_val.to_dict(orient='records')
dv = DictVectorizer(sparse=False)
X_train = dv.fit_transform(dict_train)
X_val = dv.transform(dict_val)
```
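`DictVectorizer` one-hot encodes string-valued features (as `key=value` indicator columns) and passes numeric features through unchanged. A hand-rolled sketch of that idea (not the scikit-learn implementation):

```
def dict_vectorize(records):
    """Turn a list of dicts into (feature_names, dense rows):
    string values become 'key=value' indicator columns, numbers pass through."""
    names = sorted({f"{k}={v}" if isinstance(v, str) else k
                    for r in records for k, v in r.items()})
    index = {n: i for i, n in enumerate(names)}
    rows = []
    for r in records:
        row = [0.0] * len(names)
        for k, v in r.items():
            if isinstance(v, str):
                row[index[f"{k}={v}"]] = 1.0
            else:
                row[index[k]] = float(v)
        rows.append(row)
    return names, rows

names, rows = dict_vectorize([{'room_type': 'Private', 'price': 2.0},
                              {'room_type': 'Shared', 'price': 3.0}])
print(names)  # ['price', 'room_type=Private', 'room_type=Shared']
```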
## Question 1
Let's train a decision tree regressor to predict the price variable.
* Train a model with `max_depth=1`
Which feature is used for splitting the data?
* `room_type`
* `neighbourhood_group`
* `number_of_reviews`
* `reviews_per_month`
```
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html
from sklearn.tree import DecisionTreeRegressor
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X_train, y_train)
from sklearn.tree import export_text
print(export_text(dt, feature_names=dv.get_feature_names()))
from sklearn.tree import plot_tree
plot_tree(dt, feature_names=dv.get_feature_names())
plt.show()
# first node
feature_id = dt.tree_.feature[0] # [12, -2, -2]
feature_name = dv.get_feature_names()[feature_id] # 'room_type=Entire home/apt'
md(f'### Which feature is used for splitting the data?: **{feature_name.split("=")[0]}**')
```
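What the depth-1 tree does under the hood: for each candidate feature it compares the sum of squared errors after splitting and keeps the best. A brute-force sketch over binary (one-hot) features, with our own function names:

```
def best_binary_split(X_cols, y):
    """X_cols: dict of feature name -> list of 0/1 values.
    Returns the feature whose split minimizes total squared error."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    best = None
    for name, col in X_cols.items():
        left = [yv for xv, yv in zip(col, y) if xv == 0]
        right = [yv for xv, yv in zip(col, y) if xv == 1]
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, name)
    return best[1]

# Splitting on 'a' separates y perfectly (SSE 0), so it wins
print(best_binary_split({'a': [0, 0, 1, 1], 'b': [0, 1, 0, 1]},
                        [1.0, 1.0, 5.0, 5.0]))  # a
```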
## Question 2
Train a random forest model with these parameters:
* `n_estimators=10`
* `random_state=1`
* `n_jobs=-1` (optional - to make training faster)
What's the RMSE of this model on validation?
* 0.059
* 0.259
* 0.459
* 0.659
```
# https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html?highlight=randomforest#sklearn.ensemble.RandomForestRegressor
from sklearn.ensemble import RandomForestRegressor
def get_rmse(y_pred, y_true):
mse = ((y_pred - y_true) ** 2).mean()
return np.sqrt(mse)
#enddef
rf = RandomForestRegressor(n_estimators=10, random_state=1) # n_jobs=-1
rf.fit(X_train, y_train)
md(f"### What's the RMSE of this model on validation? : **{get_rmse(rf.predict(X_val), y_val):.4f}**")
# 0.4599
```
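`get_rmse` above is the standard root-mean-squared error on NumPy arrays; the same computation without any dependencies, for a quick sanity check:

```
def rmse(pred, true):
    """Root of the mean squared residual between two equal-length sequences."""
    n = len(pred)
    return (sum((p - t) ** 2 for p, t in zip(pred, true)) / n) ** 0.5

# Residuals (0, 0, 2) -> MSE 4/3 -> RMSE ~1.1547
print(rmse([1, 2, 3], [1, 2, 5]))
```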
## Question 3
Now let's experiment with the `n_estimators` parameter
* Try different values of this parameter from 10 to 200 with step 10
* Set `random_state` to `1`
* Evaluate the model on the validation dataset
After which value of `n_estimators` does RMSE stop improving?
- 10
- 50
- 70
- 120
```
with shelve.open(savefile, 'c') as save:
k = 'rmse01'
if k not in save:
rmse_list = {}
for n in np.arange(10, 201, 10):
rf = RandomForestRegressor(n_estimators=n, random_state=1,n_jobs=-1)
rf.fit(X_train, y_train)
rmse_list[n] = get_rmse(rf.predict(X_val), y_val)
print(n,rmse_list[n])
#endfor
save[k] = rmse_list
else:
rmse_list = save[k]
#endif
#endwith
pd.Series(rmse_list).plot()
plt.grid()
plt.show()
```
### After which value of n_estimators does RMSE stop improving? **120**
## Question 4
Let's select the best `max_depth`:
* Try different values of `max_depth`: `[10, 15, 20, 25]`
* For each of these values, try different values of `n_estimators` from 10 till 200 (with step 10)
* Fix the random seed: `random_state=1`
What's the best `max_depth`:
* 10
* 15
* 20
* 25
```
rmse_list02 = None
with shelve.open(savefile, 'c') as save:
k = 'rmse02'
if k not in save:
rmse_list02 = {}
for d in [10, 15, 20, 25]:
rmse_list02[d] = rmse_list02.get(d,{}) # create empty Dictionary if key doesn't exist yet
for n in np.arange(10, 201, 10):
if n not in rmse_list02[d]:
rf = RandomForestRegressor(n_estimators=n
,max_depth=d
,random_state=1
,n_jobs=-1) #
rf.fit(X_train, y_train)
rmse_list02[d][n] = get_rmse(rf.predict(X_val), y_val)
#endif
print(d,n,rmse_list02[d][n])
#endfor
#endfor
save[k]= rmse_list02
else:
rmse_list02 = save[k]
#endif
#endwith
plt.figure(figsize=(6, 4))
for d in [10, 15, 20, 25]:
x = rmse_list02[d].keys()
y = [rmse_list02[d][n] for n in x]
plt.plot(x, y, label=f'depth={d}')
#endfor
plt.xticks(range(0, 201, 10))
plt.grid()
plt.legend()
plt.xlabel('n_estimators')
plt.ylabel('rmse')
plt.show()
res = { min([rmse_list02[d][n] for n in rmse_list02[d]]) : d for d in rmse_list02 }
md(f"### What's the best `max_depth`? : **{res[sorted(res)[0]]}**") # 15
```
#### **Bonus question (not graded):**
Will the answer be different if we change the seed for the model?
**Answer**: it should *not*, since `n_estimators` is sufficiently high (>100).
## Question 5
We can extract feature importance information from tree-based models.
At each step of the decision tree learning algorithm, it finds the best split.
When doing so, we can calculate the "gain" - the reduction in impurity before and after the split.
This gain is quite useful for understanding which features are important
for tree-based models.
In Scikit-Learn, tree-based models contain this information in the
[`feature_importances_`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html#sklearn.ensemble.RandomForestRegressor.feature_importances_)
field.
For this homework question, we'll find the most important feature:
* Train the model with these parameters:
* `n_estimators=10`,
* `max_depth=20`,
* `random_state=1`,
* `n_jobs=-1` (optional)
* Get the feature importance information from this model
What's the most important feature?
* `neighbourhood_group=Manhattan`
* `room_type=Entire home/apt`
* `longitude`
* `latitude`
```
rf = RandomForestRegressor(n_estimators=10
,max_depth=20
,random_state=1
,n_jobs=-1) #
rf.fit(X_train, y_train)
importances = list(zip(dv.feature_names_, rf.feature_importances_))
df_importance = (
pd.DataFrame(importances, columns=['feature', 'gain'])
# [lambda x : x['gain'] > 0]
.sort_values(by='gain', ascending=False)
)
md(f"### What's the most important feature? : **{df_importance['feature'].iloc[0]}**")
# room_type=Entire home/apt
```
## Question 6
Now let's train an XGBoost model! For this question, we'll tune the `eta` parameter
* Install XGBoost
* Create DMatrix for train and validation
* Create a watchlist
* Train a model with these parameters for 100 rounds:
```
xgb_params = {
'eta': 0.3,
'max_depth': 6,
'min_child_weight': 1,
'objective': 'reg:squarederror',
'nthread': 8,
'seed': 1,
'verbosity': 1,
}
```
Now change `eta` first to `0.1` and then to `0.01`
What's the best eta?
* 0.3
* 0.1
* 0.01
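Before running the experiment, here is the intuition for `eta`: it shrinks each boosting step, so smaller values learn more slowly and need more rounds. A toy one-dimensional illustration (our own sketch, not XGBoost):

```
def boost_constant(y, eta, rounds):
    """Gradient boosting reduced to its simplest case: each round fits
    the mean residual and adds it scaled by the learning rate eta."""
    pred = 0.0
    for _ in range(rounds):
        residual_mean = sum(yi - pred for yi in y) / len(y)
        pred += eta * residual_mean
    return pred

# Target mean is 10. In 100 rounds, eta=0.3 converges; eta=0.01 falls short.
print(boost_constant([10.0] * 4, 0.3, 100))   # ~10.0
print(boost_constant([10.0] * 4, 0.01, 100))  # ~6.34
```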
```
import xgboost as xgb # Install XGBoost
def parse_xgb_output(output):
tree = []
p_train = []
p_val = []
for line in output.stdout.strip().split('\n'):
it_line, train_line, val_line = line.split('\t')
it = int(it_line.strip('[]'))
train = float(train_line.split(':')[1])
val = float(val_line.split(':')[1])
tree.append(it)
p_train.append(train)
p_val.append(val)
return tree, p_train, p_val
#enddef
# Create DMatrix for train and validation
dtrain = xgb.DMatrix(X_train, label=y_train, feature_names=dv.feature_names_)
dval = xgb.DMatrix(X_val, label=y_val, feature_names=dv.feature_names_)
# Create a watchlist
watchlist = [(dtrain, 'train'), (dval, 'val')]
# Train a model with these parameters for 100 rounds:
xgb_params = {
'eta': 0.3,
'max_depth': 6,
'min_child_weight': 1,
'objective': 'reg:squarederror',
'nthread': 8,
'seed': 1,
'verbosity': 1,
}
%%capture output
# capture instruction that saves the result to output
xgb_params['eta'] = 0.3
model = xgb.train(xgb_params, dtrain,
num_boost_round=100,
evals=watchlist, verbose_eval=10)
tree, p_train, p_val = parse_xgb_output(output)
print(f'Eta={xgb_params["eta"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))
plt.figure(figsize=(6, 4))
plt.plot(tree, p_train, color='black', linestyle='dashed', label='Train Loss')
plt.plot(tree, p_val, color='black', linestyle='solid', label='Validation Loss')
# plt.xticks(range(0, 101, 25))
plt.legend()
plt.title('XGBoost: number of trees vs "squarederror"')
plt.xlabel('Number of trees')
plt.ylabel('squarederror')
plt.yscale('log')
plt.show()
%%capture output_010
# capture instruction that saves the result to output
xgb_params['eta'] = 0.1
model = xgb.train(xgb_params, dtrain,
num_boost_round=100,
evals=watchlist, verbose_eval=10)
tree, _, p_val = parse_xgb_output(output_010)
print(f'Eta={xgb_params["eta"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))
%%capture output_001
# capture instruction that saves the result to output
xgb_params['eta'] = 0.01
model = xgb.train(xgb_params, dtrain,
num_boost_round=100,
evals=watchlist, verbose_eval=10)
tree, _, p_val = parse_xgb_output(output_001)
print(f'Eta={xgb_params["eta"]} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))
plt.figure(figsize=(6, 4))
for eta, out in zip([0.3,0.1,0.01],[output,output_010,output_001]):
tree, _, p_val = parse_xgb_output(out)
#plt.plot(tree, p_train, color='black', linestyle='dashed', label='eta=eta, Train Loss')
plt.plot(tree, p_val, linestyle='solid', label=f'eta={eta}')
print(f'Eta={eta} : Best performance (squarederror, number of trees) ', min(zip(p_val, tree)))
# plt.xticks(range(0, 101, 25))
plt.legend()
plt.title('XGBoost: number of trees vs Validation "squarederror"')
plt.xlabel('Number of trees')
plt.ylabel('Validation "squarederror"')
# plt.yscale('log')
plt.ylim(0.43,0.46)
plt.grid()
plt.show()
md("### What's the best eta? **0.1**")
```
# Logistic regression with $\ell_1$ regularization
In this example, we use CVXPY to train a logistic regression classifier with $\ell_1$ regularization. We are given data $(x_i,y_i)$, $i=1,\ldots, m$. The $x_i \in {\bf R}^n$ are feature vectors, while the $y_i \in \{0, 1\}$ are associated boolean classes; we assume the first component of each $x_i$ is $1$.
Our goal is to construct a linear classifier $\hat y = \mathbb{1}[\beta^T x > 0]$, which is $1$ when $\beta^T x$ is positive and $0$ otherwise. We model the posterior probabilities of the classes given the data linearly, with
$$
\log \frac{\mathrm{Pr} (Y=1 \mid X = x)}{\mathrm{Pr} (Y=0 \mid X = x)} = \beta^T x.
$$
This implies that
$$
\mathrm{Pr} (Y=1 \mid X = x) = \frac{\exp(\beta^T x)}{1 + \exp(\beta^T x)}, \quad
\mathrm{Pr} (Y=0 \mid X = x) = \frac{1}{1 + \exp(\beta^T x)}.
$$
We fit $\beta$ by maximizing the log-likelihood of the data, plus a regularization term $\lambda \|{\beta_{1:}}\|_1$ with $\lambda > 0$:
$$
\ell(\beta) = \sum_{i=1}^{m} y_i \beta^T x_i - \log(1 + \exp (\beta^T x_i)) - \lambda \|{\beta_{1:}}\|_1.
$$
Because $\ell$ is a concave function of $\beta$, this is a convex optimization problem.
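The `log_sum_exp` trick used in the CVXPY model encodes $\log(1+\exp(\beta^T x))$ in a DCP-compliant way; numerically the same quantity is `np.logaddexp(0, z)`. A small check of the objective outside CVXPY (our own sketch):

```
import numpy as np

def log_likelihood(beta, X, y, lam):
    """l1-regularized logistic log-likelihood from the formula above
    (the intercept beta[0] is excluded from the penalty)."""
    z = X @ beta
    ll = np.sum(y * z - np.logaddexp(0.0, z))  # log(1 + exp(z)), stably
    return ll - lam * np.sum(np.abs(beta[1:]))

# logaddexp(0, z) equals log(1 + exp(z))
z = 2.0
print(np.isclose(np.logaddexp(0.0, z), np.log(1.0 + np.exp(z))))  # True
```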
```
from __future__ import division
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
```
In the following code we generate data with $n=20$ features by randomly choosing $x_i$ and a sparse $\beta_{\mathrm{true}} \in {\bf R}^n$.
We then set $y_i = \mathbb{1}[\beta_{\mathrm{true}}^T x_i - z_i > 0]$, where the $z_i$ are i.i.d. normal random variables.
We divide the data into training and test sets with $m=1000$ examples each.
```
np.random.seed(1)
n = 20
m = 1000
density = 0.2
beta_true = np.random.randn(n,1)
idxs = np.random.choice(range(n), int((1-density)*n), replace=False)
for idx in idxs:
beta_true[idx] = 0
sigma = 45
X = np.random.normal(0, 5, size=(m,n))
X[:, 0] = 1.0
Y = X @ beta_true + np.random.normal(0, sigma, size=(m,1))
Y[Y > 0] = 1
Y[Y <= 0] = 0
X_test = np.random.normal(0, 5, size=(m, n))
X_test[:, 0] = 1.0
Y_test = X_test @ beta_true + np.random.normal(0, sigma, size=(m,1))
Y_test[Y_test > 0] = 1
Y_test[Y_test <= 0] = 0
```
We next formulate the optimization problem using CVXPY.
```
beta = cp.Variable((n,1))
lambd = cp.Parameter(nonneg=True)
log_likelihood = cp.sum(
cp.reshape(cp.multiply(Y, X @ beta), (m,)) -
cp.log_sum_exp(cp.hstack([np.zeros((m,1)), X @ beta]), axis=1) -
lambd * cp.norm(beta[1:], 1)
)
problem = cp.Problem(cp.Maximize(log_likelihood))
```
We solve the optimization problem for a range of $\lambda$ to compute a trade-off curve.
We then plot the train and test error over the trade-off curve.
A reasonable choice of $\lambda$ is the value that minimizes the test error.
```
def error(scores, labels):
scores[scores > 0] = 1
scores[scores <= 0] = 0
return np.sum(np.abs(scores - labels)) / float(np.size(labels))
trials = 100
train_error = np.zeros(trials)
test_error = np.zeros(trials)
lambda_vals = np.logspace(-2, 0, trials)
beta_vals = []
for i in range(trials):
lambd.value = lambda_vals[i]
problem.solve()
train_error[i] = error(X @ beta.value, Y)
test_error[i] = error(X_test @ beta.value, Y_test)
beta_vals.append(beta.value)
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.plot(lambda_vals, train_error, label="Train error")
plt.plot(lambda_vals, test_error, label="Test error")
plt.xscale('log')
plt.legend(loc='upper left')
plt.xlabel(r"$\lambda$", fontsize=16)
plt.show()
```
We also plot the regularization path, or the $\beta_i$ versus $\lambda$. Notice that a few features remain non-zero for larger $\lambda$ than the rest, which suggests that these features are the most important.
```
for i in range(n):
plt.plot(lambda_vals, [wi[i,0] for wi in beta_vals])
plt.xlabel(r"$\lambda$", fontsize=16)
plt.xscale("log")
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Regression: Predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/ru/beta/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In a *regression* problem, we want to predict the output of a continuous value, such as a price or a probability. Contrast this with a *classification* problem, where we want to pick a specific category from a limited list (for example, given a picture of a fruit, recognize whether it is an apple or an orange).
This tutorial uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model that predicts the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we will provide the model with descriptions of many automobiles from that period. These descriptions include attributes such as the number of cylinders, horsepower, displacement, and weight.
This example uses the tf.keras API; see [this guide](https://www.tensorflow.org/guide/keras) for details.
```
# Install the seaborn library for the pairwise plots
!pip install seaborn
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!pip install tensorflow==2.0.0-beta1
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First, download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import it using the pandas library:
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Prepare the data
The dataset contains a few unknown values.
```
dataset.isna().sum()
```
To keep this tutorial simple, drop those rows.
```
dataset = dataset.dropna()
```
The `"Origin"` column is really categorical, not numeric. So convert it to one-hot encoding:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into train and test sets
Now split the dataset into a training set and a test set.
We will use the test set for the final evaluation of our model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Have a quick look at the joint distribution of a few pairs of columns from the training set:
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Split features from labels
Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the ranges of each feature are.
It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, normalization makes training easier, and without it the resulting model depends on the choice of units used in the inputs.
Note: we intentionally generate these statistics from the training set only; they will also be used to normalize the test set. We have to do this so that the test set comes from the same distribution that the model was trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
We will use this normalized data to train the model.
Caution: the statistics used to normalize the inputs here (the mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production.
## The model
### Build the model
Let's build our model. We'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single continuous value. The model-building steps are wrapped in a function, `build_model`, because we'll create a second model later on.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on them.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working as expected: the model produces a result of the expected shape and type.
### Train the model
Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Visualize the model's training progress using the stats stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
This graph shows little improvement, or even degradation, in the validation error after about 100 epochs. Let's update the `model.fit` call to automatically stop training when the validation error stops improving. We'll use an *EarlyStopping callback* that tests a training condition after every epoch. If a set number of epochs elapses without showing improvement, it automatically stops the training.
You can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that the average error on the validation data is about ±2 MPG. Is this good? We'll leave that decision up to you.
Let's see how well the model generalizes by using the **test** set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, predict MPG values using data from the test set:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It's not quite Gaussian, but we might expect that because the number of samples is very small.
## Conclusion
This notebook introduced a few techniques for handling a regression problem.
* Mean squared error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems).
* Similarly, evaluation metrics used for regression differ from those used for classification. A common regression metric is mean absolute error (MAE).
* When numeric input features have values in different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a useful technique for preventing overfitting.
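As a quick recap of the loss and metric summarized above, here is a minimal NumPy sketch; the true and predicted MPG values are invented for illustration:

```python
import numpy as np

# Hypothetical true and predicted MPG values, invented for illustration
y_true = np.array([21.0, 18.5, 30.0, 26.0])
y_pred = np.array([20.0, 20.0, 28.5, 27.0])

mse = np.mean((y_true - y_pred) ** 2)   # the loss optimized during training
mae = np.mean(np.abs(y_true - y_pred))  # the metric reported, in MPG

print(mse, mae)  # -> 1.625 1.25
```

MSE penalizes large errors quadratically, which is why it is the preferred training loss, while MAE stays in the original units and is easier to interpret.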
<img src="imagenes/rn3.png" width="200">
<img src="http://www.identidadbuho.uson.mx/assets/letragrama-rgb-150.jpg" width="200">
# [Neural Networks Course](https://rn-unison.github.io)
# Multilayer neural networks and the *b-prop* algorithm
[**Julio Waissman Vilanova**](http://mat.uson.mx/~juliowaissman/), February 27, 2019.
In this notebook we are going to practice with the different variations of the gradient descent method used in training deep neural networks. This is not a tutorial notebook (for now; a second version might be). So we will point to tutorials and the original papers for the algorithms. Sebastian Ruder wrote [this tutorial, which I find very good](http://ruder.io/optimizing-gradient-descent/index.html). It is clear, concise, and well referenced in case you want more detail. We will base this notebook on that tutorial.
Likewise, we will take advantage of this same notebook to build and examine how *autoencoders* work. Autoencoders are very important because they provide the intuition needed to introduce convolutional networks, and because they show the power of sharing parameters across different parts of a distributed architecture.
Let's start by importing the modules we will need.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16,8)
plt.style.use('ggplot')
```
## 1. Defining a neural network with a fixed architecture
Since defining a neural network, *f-prop*, and *b-prop* were already covered in another notebook, we are going to initialize a simple neural network, which has:
1. An autoencoder stage (used for two words)
2. A hidden layer with ReLU activation
3. An output layer with a single logistic neuron (a binary classification problem)
We will denote the number of outputs of the autoencoder as $n_a$, and the number of ReLU units in the hidden layer as $n_h$.
Below we add code cells for:
1. Weight initialization
2. Prediction (feed forward)
3. The *b-prop* algorithm (computation of $\delta^{(1)}$ and $\delta^{(2)}$)
While this is fairly standard, a few design decisions were made, which are highlighted further below.
```
def inicializa_red(n_v, n_a, n_h):
"""
Inicializa la red neuronal
Parámetros
----------
n_v : int con el número de palabras en el vocabulario
n_a : int con el número de características del autocodificador
n_h : int con el número de unidades ReLU en la capa oculta
Devuelve
--------
W = [W_a, W_h, W_o] Lista con las matrices de pesos
B = [b_h, b_o] Lista con los sesgos
"""
np.random.seed(0) # For reproducibility only
W_ac = np.random.randn(n_v, n_a)
W_h = np.random.randn(n_h, 2 * n_a)
W_o = np.random.randn(1, n_h)
b_h = np.random.randn(n_h,1)
b_o = np.sqrt(n_h) * (2 * np.random.rand() - 1.0)
return [W_ac, W_h, W_o], [b_h, b_o]
def relu(A):
"""
Compute the ReLU value of a matrix of activations A
"""
return np.maximum(A, 0)
def logistica(a):
"""
Compute the logistic function of a
"""
return 1. / (1. + np.exp(-a))
def feedforward(X, vocab, W, b):
"""
Compute the activations of the units of the neural network
Parameters
----------
X: an ndarray [-1, 2], dtype='str', with two words per example
vocab: a list with the ordered words of the vocabulary to use
W: list with the weight matrices (see inicializa_red for more info)
b: list with the bias vectors (see inicializa_red for more info)
"""
W_a, W_h, W_o = W
b_h, b_o = b
one_hot_1 = [vocab.index(x_i) if x_i in vocab else -1 for x_i in X[:,0]]
one_hot_2 = [vocab.index(x_i) if x_i in vocab else -1 for x_i in X[:,1]]
activacion_z = np.array([one_hot_1, one_hot_2])
activacion_a = np.r_[W_a[one_hot_1, :].T,
W_a[one_hot_2, :].T]
activacion_h = relu(W_h @ activacion_a + b_h)
activacion_o = logistica(W_o @ activacion_h + b_o)
return [activacion_z, activacion_a, activacion_h, activacion_o]
```
#### Exercise: Work through a small example by hand, print the activations, and check them against the results you obtained manually
An example is provided here without the manual computation. It might be better to set W and b manually to values that simplify the calculations, and to use fewer examples.
```
vocab = ['a', 'e', 'ei', 'ti', 'tu', 'ya', 'ye', 'toto', 'tur', 'er', 'OOV']
X = np.array([
['a', 'a'],
['e', 'tu'],
['ti', 'ya'],
['er', 'ye'],
['a', 'a'],
['e', 'tu']
])
n_v, n_a, n_h = len(vocab), 5, 7
W, b = inicializa_red(n_v, n_a, n_h)
A = feedforward(X, vocab, W, b)
print("Codificación 'one hot': \n", A[0])
print("Autocodificador: \n", A[1])
print("Activacion capa oculta:\n", A[2])
print("Salidas:\n", A[3])
assert np.all(A[0][:, 1] == A[0][:, -1]) and np.all(A[0][:, 0] == A[0][:, -2])
assert np.all(A[1][:, 1] == A[1][:, -1]) and np.all(A[1][:, 0] == A[1][:, -2])
assert np.all(A[2][:, 1] == A[2][:, -1]) and np.all(A[2][:, 0] == A[2][:, -2])
def deriv_relu(a):
"""
Compute the derivative of the ReLU activation at a
"""
return np.where(a > 0.0, 1.0, 0.0)
def b_prop(A, Y, W):
W_a, W_h, W_o = W
activacion_z, activacion_a, activacion_h, activacion_o = A
delta_o = Y.reshape(1, -1) - activacion_o
delta_h = deriv_relu(activacion_h) * (W_o.T @ delta_o)
delta_a = W_h.T @ delta_h
# Weight gradients: the matrix products implicitly sum over the examples
gradiente_W_o = delta_o @ activacion_h.T
gradiente_W_h = delta_h @ activacion_a.T
# Bias gradients: sum over the examples, matching the weight gradients
gradiente_b_o = delta_o.sum()
gradiente_b_h = delta_h.sum(axis=1).reshape(-1, 1)
# Autoencoder gradient: accumulate each column of delta_a into the rows
# of W_a selected by the word indices stored in activacion_z
n_a = W_a.shape[1]
gradiente_W_a = np.zeros_like(W_a)
np.add.at(gradiente_W_a, activacion_z[0], delta_a[:n_a, :].T)
np.add.at(gradiente_W_a, activacion_z[1], delta_a[n_a:, :].T)
return [gradiente_W_a, gradiente_W_h, gradiente_W_o], [gradiente_b_h, gradiente_b_o]
```
<a href="https://colab.research.google.com/github/arunk-vnk-chn/insaid-interview-questions/blob/master/20%20April%20-%20Introduction%20to%20Machine%20Learning%20(part%202).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 20 April – Introduction to Machine Learning (part 2)
**1. Top 100 data science interview questions link (could cover 60-70% of questions asked in interviews)**
* http://nitin-panwar.github.io/Top-100-Data-science-interview-questions/
* https://www.edureka.co/blog/interview-questions/data-science-interview-questions/
**2. Typical interview process**
* Round 1: Programming test using hackerrank or codility (may be skipped for general roles like data scientist or decision scientist, but very frequent for ML engineer, data science developer etc.)
* Round 2: Case study on a business problem (very typical for a data scientist role)
* Round 3: Detailed interview with senior data scientist
**3. What is the data science workflow, or life cycle of ML projects, or how do we execute a ML project end to end?**
* It is important to adhere to the below flow strictly while doing any ML project.
* It is a cyclic process, in which the project iterates through the different phases until the business requirement is satisfied.

**4. Good question to ask interviewer: Does the company/department work majorly on POC (proof of concept) projects or actual production projects? What are the projects or products being developed?**
* For good career progression, it is suggested to join a company where production projects are happening, since POC projects typically run for only 3 to 6 months and don't proceed further. There is more learning in end-to-end projects. Most end-to-end projects happen at internet companies and start-ups; however, these could pose a challenge for work-life balance.
**5. What is a data lake?**
* A data lake is a singular representation of data from different sources which enables to carry on any analytical cycle or data science cycle over it. These involve data engineers.
**6. What is hypothesis testing?**
* Noting down the rules that the business domain suggests or establishes for making a prediction, and then checking whether the data supports them or not.
**7. What is EDA (exploratory data analysis)?**
* Trying to obtain different insights from the data, generally done by trying different graphs to identify relationships between variables. This may be part of the hypothesis given by the business or something new figured by analysing the data. EDA has no limited scope and is open-ended.
**8. Is it preferable to do a prediction with lower number of features or higher?**
* The fewer the better, because fewer features mean less dependency on different inputs. The number has to be optimal, so that no important information is ignored and not too much noise is added.
**9. What is feature engineering?**
* Creating relevant features or deriving KPIs by a mathematical combination of multiple columns. This helps optimise the number of features required for prediction.
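For illustration, here is a minimal pandas sketch of deriving one KPI from two raw columns; the column names and values are invented:

```python
import pandas as pd

df = pd.DataFrame({
    "weight_kg": [83.0, 60.0],
    "height_m": [1.80, 1.65],
})

# Combine two raw columns into one derived feature, then drop the inputs
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
df = df.drop(columns=["weight_kg", "height_m"])
print(df.round(2))
```

The model now depends on a single engineered feature instead of two correlated raw inputs.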
**10. Can feature selection be automated?**
* Yes and no. It is a process that requires human intervention; however, there are tools that can automate some parts of it.
**11. What is the difference between regression and classification?**
* In supervised learning, regression is when we are predicting a continuous number (uncountable) while classification is when we are trying to assign a discrete category (countable).
**12. What is regression?**
* A predictive modelling technique which investigates the relationship between a dependent variable and one or more independent variables.
**13. What is linear regression?**
* A regression in which the relationship between the dependent variable and all the independent variables is of the first degree.
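A minimal sketch of fitting such a first-degree relationship with NumPy's least-squares solver; the data points are invented:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])  # roughly y = 2x + 1 with noise

A = np.c_[x, np.ones_like(x)]                # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]  # slope and intercept
print(round(w, 2), round(b, 2))  # -> 1.94 1.15
```

The fitted slope and intercept recover the underlying first-degree relationship between the independent and dependent variables.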
**14. When is a model said to be robust or flexible?**
* A model is robust or flexible if it can adapt well to new data. It therefore has to be optimal and must not over-fit the current data. Flexibility also supports easy interpretability for the business.
**15. What is the difference between a model and an algorithm?**
* An algorithm is a set of steps for learning from data; a model is the result of applying an algorithm to data. In loose usage, "model" can also refer to the entire process of an ML project.
# Image Captioning with LSTMs
In the previous exercise you implemented a vanilla RNN and applied it to image captioning. In this notebook you will implement the LSTM update rule and use it for image captioning.
```
# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```
# Load MS-COCO data
As in the previous notebook, we will use the Microsoft COCO dataset for captioning.
```
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)
# Print out all the keys and values from the data dictionary
for k, v in data.items():
if type(v) == np.ndarray:
print(k, type(v), v.shape, v.dtype)
else:
print(k, type(v), len(v))
```
# LSTM
If you read recent papers, you'll see that many people use a variant on the vanilla RNN called Long-Short Term Memory (LSTM) RNNs. Vanilla RNNs can be tough to train on long sequences due to vanishing and exploding gradients caused by repeated matrix multiplication. LSTMs solve this problem by replacing the simple update rule of the vanilla RNN with a gating mechanism as follows.
Similar to the vanilla RNN, at each timestep we receive an input $x_t\in\mathbb{R}^D$ and the previous hidden state $h_{t-1}\in\mathbb{R}^H$; the LSTM also maintains an $H$-dimensional *cell state*, so we also receive the previous cell state $c_{t-1}\in\mathbb{R}^H$. The learnable parameters of the LSTM are an *input-to-hidden* matrix $W_x\in\mathbb{R}^{4H\times D}$, a *hidden-to-hidden* matrix $W_h\in\mathbb{R}^{4H\times H}$ and a *bias vector* $b\in\mathbb{R}^{4H}$.
At each timestep we first compute an *activation vector* $a\in\mathbb{R}^{4H}$ as $a=W_xx_t + W_hh_{t-1}+b$. We then divide this into four vectors $a_i,a_f,a_o,a_g\in\mathbb{R}^H$ where $a_i$ consists of the first $H$ elements of $a$, $a_f$ is the next $H$ elements of $a$, etc. We then compute the *input gate* $i\in\mathbb{R}^H$, *forget gate* $f\in\mathbb{R}^H$, *output gate* $o\in\mathbb{R}^H$ and *block input* $g\in\mathbb{R}^H$ as
$$
\begin{align*}
i = \sigma(a_i) \hspace{2pc}
f = \sigma(a_f) \hspace{2pc}
o = \sigma(a_o) \hspace{2pc}
g = \tanh(a_g)
\end{align*}
$$
where $\sigma$ is the sigmoid function and $\tanh$ is the hyperbolic tangent, both applied elementwise.
Finally we compute the next cell state $c_t$ and next hidden state $h_t$ as
$$
c_{t} = f\odot c_{t-1} + i\odot g \hspace{4pc}
h_t = o\odot\tanh(c_t)
$$
where $\odot$ is the elementwise product of vectors.
In the rest of the notebook we will implement the LSTM update rule and apply it to the image captioning task.
In the code, we assume that data is stored in batches so that $X_t \in \mathbb{R}^{N\times D}$, and will work with *transposed* versions of the parameters: $W_x \in \mathbb{R}^{D \times 4H}$, $W_h \in \mathbb{R}^{H\times 4H}$ so that activations $A \in \mathbb{R}^{N\times 4H}$ can be computed efficiently as $A = X_t W_x + H_{t-1} W_h$
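As a reference for the equations above, a single timestep in the batched, transposed convention can be sketched directly in NumPy (an illustrative sketch, not the `cs231n/rnn_layers.py` solution you are asked to write):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, prev_h, prev_c, Wx, Wh, b):
    """One LSTM timestep for a batch of N examples."""
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b      # (N, 4H) activation vector
    i = sigmoid(a[:, :H])             # input gate
    f = sigmoid(a[:, H:2 * H])        # forget gate
    o = sigmoid(a[:, 2 * H:3 * H])    # output gate
    g = np.tanh(a[:, 3 * H:])         # block input
    next_c = f * prev_c + i * g
    next_h = o * np.tanh(next_c)
    return next_h, next_c

# Shape check with small random data
N, D, H = 3, 4, 5
rng = np.random.default_rng(0)
h, c = lstm_step(rng.standard_normal((N, D)), rng.standard_normal((N, H)),
                 rng.standard_normal((N, H)), rng.standard_normal((D, 4 * H)),
                 rng.standard_normal((H, 4 * H)), rng.standard_normal(4 * H))
print(h.shape, c.shape)  # (3, 5) (3, 5)
```

Note how the single matrix multiply produces all four gate pre-activations at once, which is why the weight matrices have $4H$ columns.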
# LSTM: step forward
Implement the forward pass for a single timestep of an LSTM in the `lstm_step_forward` function in the file `cs231n/rnn_layers.py`. This should be similar to the `rnn_step_forward` function that you implemented above, but using the LSTM update rule instead.
Once you are done, run the following to perform a simple test of your implementation. You should see errors on the order of `e-8` or less.
```
N, D, H = 3, 4, 5
x = np.linspace(-0.4, 1.2, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.3, 0.7, num=N*H).reshape(N, H)
prev_c = np.linspace(-0.4, 0.9, num=N*H).reshape(N, H)
Wx = np.linspace(-2.1, 1.3, num=4*D*H).reshape(D, 4 * H)
Wh = np.linspace(-0.7, 2.2, num=4*H*H).reshape(H, 4 * H)
b = np.linspace(0.3, 0.7, num=4*H)
next_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)
expected_next_h = np.asarray([
[ 0.24635157, 0.28610883, 0.32240467, 0.35525807, 0.38474904],
[ 0.49223563, 0.55611431, 0.61507696, 0.66844003, 0.7159181 ],
[ 0.56735664, 0.66310127, 0.74419266, 0.80889665, 0.858299 ]])
expected_next_c = np.asarray([
[ 0.32986176, 0.39145139, 0.451556, 0.51014116, 0.56717407],
[ 0.66382255, 0.76674007, 0.87195994, 0.97902709, 1.08751345],
[ 0.74192008, 0.90592151, 1.07717006, 1.25120233, 1.42395676]])
print('next_h error: ', rel_error(expected_next_h, next_h))
print('next_c error: ', rel_error(expected_next_c, next_c))
```
# LSTM: step backward
Implement the backward pass for a single LSTM timestep in the function `lstm_step_backward` in the file `cs231n/rnn_layers.py`. Once you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-7` or less.
```
np.random.seed(231)
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
prev_h = np.random.randn(N, H)
prev_c = np.random.randn(N, H)
Wx = np.random.randn(D, 4 * H)
Wh = np.random.randn(H, 4 * H)
b = np.random.randn(4 * H)
next_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)
dnext_h = np.random.randn(*next_h.shape)
dnext_c = np.random.randn(*next_c.shape)
fx_h = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fh_h = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fc_h = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fWx_h = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fWh_h = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fb_h = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[0]
fx_c = lambda x: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fh_c = lambda h: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fc_c = lambda c: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fWx_c = lambda Wx: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fWh_c = lambda Wh: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
fb_c = lambda b: lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)[1]
num_grad = eval_numerical_gradient_array
dx_num = num_grad(fx_h, x, dnext_h) + num_grad(fx_c, x, dnext_c)
dh_num = num_grad(fh_h, prev_h, dnext_h) + num_grad(fh_c, prev_h, dnext_c)
dc_num = num_grad(fc_h, prev_c, dnext_h) + num_grad(fc_c, prev_c, dnext_c)
dWx_num = num_grad(fWx_h, Wx, dnext_h) + num_grad(fWx_c, Wx, dnext_c)
dWh_num = num_grad(fWh_h, Wh, dnext_h) + num_grad(fWh_c, Wh, dnext_c)
db_num = num_grad(fb_h, b, dnext_h) + num_grad(fb_c, b, dnext_c)
dx, dh, dc, dWx, dWh, db = lstm_step_backward(dnext_h, dnext_c, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dh error: ', rel_error(dh_num, dh))
print('dc error: ', rel_error(dc_num, dc))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
```
# LSTM: forward
In the function `lstm_forward` in the file `cs231n/rnn_layers.py`, implement the `lstm_forward` function to run an LSTM forward on an entire timeseries of data.
When you are done, run the following to check your implementation. You should see an error on the order of `e-7` or less.
```
N, D, H, T = 2, 5, 4, 3
x = np.linspace(-0.4, 0.6, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.4, 0.8, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.9, num=4*D*H).reshape(D, 4 * H)
Wh = np.linspace(-0.3, 0.6, num=4*H*H).reshape(H, 4 * H)
b = np.linspace(0.2, 0.7, num=4*H)
h, cache = lstm_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
[[ 0.01764008, 0.01823233, 0.01882671, 0.0194232 ],
[ 0.11287491, 0.12146228, 0.13018446, 0.13902939],
[ 0.31358768, 0.33338627, 0.35304453, 0.37250975]],
[[ 0.45767879, 0.4761092, 0.4936887, 0.51041945],
[ 0.6704845, 0.69350089, 0.71486014, 0.7346449 ],
[ 0.81733511, 0.83677871, 0.85403753, 0.86935314]]])
print('h error: ', rel_error(expected_h, h))
```
# LSTM: backward
Implement the backward pass for an LSTM over an entire timeseries of data in the function `lstm_backward` in the file `cs231n/rnn_layers.py`. When you are done, run the following to perform numeric gradient checking on your implementation. You should see errors on the order of `e-8` or less. (For `dWh`, it's fine if your error is on the order of `e-6` or less).
```
from cs231n.rnn_layers import lstm_forward, lstm_backward
np.random.seed(231)
N, D, T, H = 2, 3, 10, 6
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, 4 * H)
Wh = np.random.randn(H, 4 * H)
b = np.random.randn(4 * H)
out, cache = lstm_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = lstm_backward(dout, cache)
fx = lambda x: lstm_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: lstm_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: lstm_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: lstm_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: lstm_forward(x, h0, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
print('dx error: ', rel_error(dx_num, dx))
print('dh0 error: ', rel_error(dh0_num, dh0))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
```
# INLINE QUESTION
Recall that in an LSTM the input gate $i$, forget gate $f$, and output gate $o$ are all outputs of a sigmoid function. Why don't we use the ReLU activation function instead of sigmoid to compute these values? Explain.
# LSTM captioning model
Now that you have implemented an LSTM, update the implementation of the `loss` method of the `CaptioningRNN` class in the file `cs231n/classifiers/rnn.py` to handle the case where `self.cell_type` is `lstm`. This should require adding less than 10 lines of code.
Once you have done so, run the following to check your implementation. You should see a difference on the order of `e-10` or less.
```
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13
model = CaptioningRNN(word_to_idx,
input_dim=D,
wordvec_dim=W,
hidden_dim=H,
cell_type='lstm',
dtype=np.float64)
# Set all model parameters to fixed values
for k, v in model.params.items():
model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)
features = np.linspace(-0.5, 1.7, num=N*D).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)
loss, grads = model.loss(features, captions)
expected_loss = 9.82445935443
print('loss: ', loss)
print('expected loss: ', expected_loss)
print('difference: ', abs(loss - expected_loss))
```
# Overfit LSTM captioning model
Run the following to overfit an LSTM captioning model on the same small dataset as we used for the RNN previously. You should see a final loss less than 0.5.
```
np.random.seed(231)
small_data = load_coco_data(max_train=50)
small_lstm_model = CaptioningRNN(
cell_type='lstm',
word_to_idx=data['word_to_idx'],
input_dim=data['train_features'].shape[1],
hidden_dim=512,
wordvec_dim=256,
dtype=np.float32,
)
small_lstm_solver = CaptioningSolver(small_lstm_model, small_data,
update_rule='adam',
num_epochs=50,
batch_size=25,
optim_config={
'learning_rate': 5e-3,
},
lr_decay=0.995,
verbose=True, print_every=10,
)
small_lstm_solver.train()
# Plot the training losses
plt.plot(small_lstm_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
```
# LSTM test-time sampling
Modify the `sample` method of the `CaptioningRNN` class to handle the case where `self.cell_type` is `lstm`. This should take fewer than 10 lines of code.
When you are done, run the following to sample from your overfit LSTM model on some training and validation set samples. As with the RNN, training results should be very good, and validation results probably won't make much sense (because we're overfitting).
```
for split in ['train', 'val']:
    minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
    gt_captions, features, urls = minibatch
    gt_captions = decode_captions(gt_captions, data['idx_to_word'])

    sample_captions = small_lstm_model.sample(features)
    sample_captions = decode_captions(sample_captions, data['idx_to_word'])

    for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
        plt.imshow(image_from_url(url))
        plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
        plt.axis('off')
        plt.show()
```
# Part 3: Advanced Remote Execution Tools
In the last section we trained a toy model using Federated Learning. We did this by calling .send() and .get() on our model, sending it to the location of the training data, updating it, and then bringing it back. However, at the end of the example we realized that we needed to go a bit further to protect people's privacy. Namely, we want to average the gradients **before** calling .get(). That way, we won't ever see anyone's exact gradient (thus better protecting their privacy!!!)
But, in order to do this, we need a few more pieces:
- use a pointer to send a Tensor directly to another worker
And in addition, while we're here, we're going to learn about a few more advanced tensor operations as well which will help us both with this example and a few in the future!
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
```
# Section 3.1 - Pointers to Pointers
As you know, PointerTensor objects feel just like normal tensors. In fact, they are _so much like tensors_ that we can even have pointers **to** the pointers. Check it out!
```
bob = sy.VirtualWorker(hook, id='bob')
alice = sy.VirtualWorker(hook, id='alice')
# making sure that bob/alice know about each other
bob.add_worker(alice)
alice.add_worker(bob)
# this is a local tensor
x = torch.tensor([1,2,3,4])
x
# this sends the local tensor to Bob
x_ptr = x.send(bob)
# this is now a pointer
x_ptr
# now we can SEND THE POINTER to alice!!!
pointer_to_x_ptr = x_ptr.send(alice)
pointer_to_x_ptr
```
### What happened?
So, in the previous example, we created a tensor called `x` and sent it to Bob, creating a pointer on our local machine (`x_ptr`).
Then, we called `x_ptr.send(alice)` which **sent the pointer** to Alice.
Note, this did NOT move the data! Instead, it moved the pointer to the data!!
```
# As you can see above, Bob still has the actual data (data is always stored in a LocalTensor type).
bob._objects
# Alice, on the other hand, has x_ptr!! (notice how it points at bob)
alice._objects
```
```
# and we can use .get() to get x_ptr back from Alice
x_ptr = pointer_to_x_ptr.get()
x_ptr
# and then we can use x_ptr to get x back from Bob!
x = x_ptr.get()
x
```
### Arithmetic on Pointer -> Pointer -> Data Object
And just like with normal pointers, we can perform arbitrary PyTorch operations across these tensors
```
bob._objects
alice._objects
p2p2x = torch.tensor([1,2,3,4,5]).send(bob).send(alice)
y = p2p2x + p2p2x
bob._objects
alice._objects
y.get().get()
bob._objects
alice._objects
p2p2x.get().get()
bob._objects
alice._objects
```
# Section 3.2 - Pointer Chain Operations
So in the last section whenever we called a .send() or a .get() operation, it called that operation directly on the tensor on our local machine. However, if you have a chain of pointers, sometimes you want to call operations like .get() or .send() on the **last** pointer in the chain (such as sending data directly from one worker to another). To accomplish this, you want to use functions which are especially designed for this privacy preserving operation.
These operations are:
- `my_pointer2pointer.move(another_worker)`
```
# x is now a pointer to the data which lives on Bob's machine
x = torch.tensor([1,2,3,4,5]).send(bob)
print(' bob:', bob._objects)
print('alice:',alice._objects)
x = x.move(alice)
print(' bob:', bob._objects)
print('alice:',alice._objects)
x
```
Excellent! Now we're equipped with the tools to perform remote **gradient averaging** using a trusted aggregator!
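The aggregation step itself is just an elementwise mean. Leaving the PySyft pointer machinery aside, here is a plain NumPy sketch of what a trusted aggregator would compute (the worker gradients below are made-up values):

```python
import numpy as np

# Gradients as they might arrive from two workers. In the real protocol the
# model owner never sees these individually -- only their average.
bob_grad = np.array([0.2, -0.4, 0.6])
alice_grad = np.array([0.4, 0.0, 0.2])

# The trusted aggregator averages the gradients BEFORE anyone calls .get()
avg_grad = (bob_grad + alice_grad) / 2
```

Only `avg_grad` would then be retrieved, so neither worker's exact gradient is ever exposed.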
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
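The hint above can be made concrete with a small sketch (the function names here are illustrative, not the ones `my_answers.py` requires):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the (0, 1) range
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(output):
    # Given output = sigmoid(x), the derivative is output * (1 - output)
    return output * (1 - output)

# The output activation is the identity f(x) = x, whose derivative is 1,
# so the output-layer error term needs no extra scaling factor.
h = sigmoid(0.0)           # 0.5: the sigmoid is centered at 0
g = sigmoid_derivative(h)  # 0.25: the derivative peaks at x = 0
```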
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328],
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996],
                                              [0.39775194, 0.50074398],
                                              [-0.29887597, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
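To see why dividing by `n_records` shifts the natural scale of the learning rate, here is a small arithmetic sketch with made-up values:

```python
import numpy as np

# Per-record gradients for a single weight in one training pass
grads = np.array([0.2, 0.4, 0.6])
n_records = len(grads)

# Update that sums the gradients and divides by n_records: a learning rate
# of 1 then takes a step equal to the mean per-record gradient.
learning_rate = 1.0
step = learning_rate * grads.sum() / n_records  # mean gradient = 0.4

# Without the division, reaching the same step size would require a much
# smaller learning rate (smaller still as the batch grows).
equivalent_rate = step / grads.sum()
```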
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
print("features: ", N_i)
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    # .loc replaces the deprecated .ix indexer (removed in pandas 1.0)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc replaces deprecated .ix
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
#### Your answer below
##### How well does the model predict the data?
The model predicts the data with fairly high accuracy for most of the test period, as seen in the graph above. Up to Dec 21st the predictions track the observations closely, and they do so again from Dec 26th onward.
##### Where does it fail?
The model fails for the period from Dec 22nd to Dec 25th. As we can see in the graph above, the predicted values are much larger than the observed counts on those days.
##### Why does it fail where it does?
It is clear that the model fits the days before and after Dec 22nd–25th very well, and the predicted values are in line with the remaining days of the year. My hypothesis is that the lower demand for bikes during Dec 22nd–25th is due to the Christmas holidays; because the last 21 days were held out as a test set, the model has never seen the effect of major holidays on demand. If the model were trained on uniformly sampled data, it would likely learn the correlation between this special week and lower demand, and would then predict the demand for these test points much more accurately.
# Tutorial 3 of 3: Advanced Topics and Usage
**Learning Outcomes**
* Use different methods to add boundary pores to a network
* Manipulate network topology by adding and removing pores and throats
* Explore the ModelsDict design, including copying models between objects, and changing model parameters
* Write a custom pore-scale model and a custom Phase
* Access and manipulate objects associated with the network
* Combine multiple algorithms to predict relative permeability
## Build and Manipulate Network Topology
For the present tutorial, we'll keep the topology simple to help keep the focus on other aspects of OpenPNM.
```
import warnings
import numpy as np
import scipy as sp
import openpnm as op
%matplotlib inline
np.random.seed(10)
ws = op.Workspace()
ws.settings['loglevel'] = 40
np.set_printoptions(precision=4)
pn = op.network.Cubic(shape=[10, 10, 10], spacing=0.00006, name='net')
```
## Adding Boundary Pores
When performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the **Cubic** class, two methods are available for doing this: ``add_boundaries``, which is specific to the **Cubic** class, and ``add_boundary_pores``, which is a generic method that can also be used on other network types and is inherited from **GenericNetwork**. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network ``spacing``. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify to which pores the boundary pores should be attached. Let's explore these two options:
```
pn.add_boundary_pores(labels=['top', 'bottom'])
```
Let's quickly visualize this network with the added boundaries:
```
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
```
### Adding and Removing Pores and Throats
OpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the ``'throat.conns'`` array must be treated carefully when trimming pores, so OpenPNM provides the ``extend`` and ``trim`` functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure:
```
Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True
op.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim
```
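To see why ``'throat.conns'`` needs careful treatment, here is a hedged NumPy sketch (not OpenPNM's actual internals) of what trimming a single pore implies for the connection array:

```python
import numpy as np

# A tiny network: 4 pores in a row, 3 throats connecting neighbours
conns = np.array([[0, 1], [1, 2], [2, 3]])

# Trim pore 1: every throat touching it must also go...
keep = ~np.any(conns == 1, axis=1)  # throats not touching pore 1
conns = conns[keep]                 # only [2, 3] survives

# ...and the surviving pore indices must be renumbered (2 -> 1, 3 -> 2)
remap = np.array([0, -1, 1, 2])     # old index -> new index (-1 = removed)
conns = remap[conns]                # the throat now reads [1, 2]
```

This renumbering is exactly the bookkeeping that ``op.topotools.trim`` handles for you.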
When the ``trim`` function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the **Network**'s ``check_network_health`` method which returns a **HealthDict** containing the results of the checks:
```
a = pn.check_network_health()
print(a)
```
The **HealthDict** contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the **HealthDict** has a ``health`` attribute that is ``False`` if any checks fail.
```
op.topotools.trim(network=pn, pores=a['trim_pores'])
```
Let's take another look at the network to see the trimmed pores and throats:
```
#NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10])
```
## Define Geometry Objects
The boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate **Geometry** objects, one for internal pores and one for the boundaries:
```
Ps = pn.pores('*boundary', mode='not')
Ts = pn.throats('*boundary', mode='not')
geom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')
Ps = pn.pores('*boundary')
Ts = pn.throats('*boundary')
boun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')
```
The **StickAndBall** class is preloaded with the pore-scale models to calculate all the necessary size information (pore diameter, pore volume, throat length, throat diameter, etc.). The **Boundary** class is special and is used only for the boundary pores. In this class, geometrical properties are set to small fixed values so that they don't affect the simulation results.
## Define Multiple Phase Objects
In order to simulate relative permeability of air through a partially water-filled network, we need to create each **Phase** object. OpenPNM includes pre-defined classes for each of these common fluids:
```
air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
water['throat.contact_angle'] = 110
water['throat.surface_tension'] = 0.072
```
### Aside: Creating a Custom Phase Class
In many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom **Phase** class as follows:
```
from openpnm.phases import GenericPhase

class Oil(GenericPhase):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_model(propname='pore.viscosity',
                       model=op.models.misc.polynomial,
                       prop='pore.temperature',
                       a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])
        self['pore.molecular_weight'] = 116  # g/mol
```
* Creating a **Phase** class basically involves placing a series of ``self.add_model`` commands within the ``__init__`` section of the class definition. This means that when the class is instantiated, all the models are added to *itself* (i.e. ``self``).
* ``**kwargs`` is a Python trick that captures all arguments in a *dict* called ``kwargs`` and passes them to another function that may need them. In this case they are passed to the ``__init__`` method of **Oil**'s parent by the ``super`` function. Specifically, things like ``name`` and ``network`` are expected.
* The above code block also stores the molecular weight of the oil as a constant value
* Adding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project.
```
oil = Oil(network=pn)
print(oil)
```
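The ``**kwargs``/``super`` pattern itself is plain Python; here is a minimal sketch using a stand-in base class (not the real **GenericPhase**) to show how the keyword arguments flow through:

```python
class BasePhase:
    # Stand-in for openpnm.phases.GenericPhase; only the signature matters here
    def __init__(self, network=None, name=None):
        self.network = network
        self.name = name

class Oil(BasePhase):
    def __init__(self, **kwargs):
        # All keyword arguments (e.g. name=..., network=...) are captured
        # into kwargs and forwarded untouched to the parent constructor
        super().__init__(**kwargs)
        self.molecular_weight = 116  # constant set at instantiation

oil = Oil(network='pn', name='my_oil')
# oil.name is 'my_oil' and oil.molecular_weight is 116
```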
## Define Physics Objects for Each Geometry and Each Phase
In the tutorial #2 we created two **Physics** object, one for each of the two **Geometry** objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own **Geometry**, but there are two **Phases**, which also each require a unique **Physics**:
```
phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
phys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)
phys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)
phys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun)
```
> To reiterate, *one* **Physics** object is required for each **Geometry** *AND* each **Phase**, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.
### Create a Custom Pore-Scale Physics Model
Perhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM *models* libraries. For demonstration, let's implement the capillary pressure model proposed by [Mason and Morrow in 1994](http://dx.doi.org/10.1006/jcis.1994.1402). They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \sigma \cos(\tfrac{2}{3} \theta) / R_t$
Pore-scale models are written as basic function definitions:
```
def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle',
                sigma='throat.surface_tension', f=0.6667):
    proj = target.project
    network = proj.network
    phase = proj.find_phase(target)
    Dt = network[diameter]
    theta = phase[theta]
    sigma = phase[sigma]
    Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt
    return Pc[phase.throats(target.name)]
```
Let's examine the components of above code:
* The function receives a ``target`` object as an argument. This indicates which object the results will be returned to.
* The ``f`` value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.
* Note that ``throat.diameter`` is actually a **Geometry** property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.
* All of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a ``target`` like a **Physics** object, that is a subset of the full domain. As such, the last line extracts values from the ``Pc`` array for the location of ``target`` and returns just the subset.
* The actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.
* It is good practice to include the dictionary keys as arguments, such as ``theta='throat.contact_angle'``. This way the user can control where the contact angle is stored on the ``target`` object.
### Copy Models Between Physics Objects
As mentioned above, the need to specify a separate **Physics** object for each **Geometry** and **Phase** can become tedious. It is possible to *copy* the pore-scale models assigned to one object onto another object. First, let's assign the models we need to ``phys_water_internal``:
```
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_water_internal.add_model(propname='throat.hydraulic_conductance',
                              model=mod)
phys_water_internal.add_model(propname='throat.entry_pressure',
                              model=mason_model)
```
Now make a copy of the ``models`` on ``phys_water_internal`` and apply it to all the other water **Physics** objects:
```
phys_water_boundary.models = phys_water_internal.models
```
The only 'gotcha' with this approach is that each of the **Physics** objects must be *regenerated* in order to place numerical values for all the properties into the data arrays:
```
phys_water_boundary.regenerate_models()
phys_air_internal.regenerate_models()
phys_air_boundary.regenerate_models()
```
### Adjust Pore-Scale Model Parameters
The pore-scale models are stored in a **ModelsDict** object that is itself stored under the ``models`` attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the objects to which they apply. The models on an object can be inspected with ``print(phys_water_internal)``, which shows a list of all the pore-scale properties that are computed by a model, and some information about the model's *regeneration* mode.
Each model in the **ModelsDict** can be inspected individually by accessing it with the dictionary key of the *pore property* it calculates, i.e. ``print(phys_water_internal.models['throat.entry_pressure'])``. This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly:
```
phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value
phys_water_internal.regenerate_models() # Regenerate model with new 'f' value
```
More details about the **ModelsDict** and **ModelWrapper** classes can be found in :ref:`models`.
## Perform Multiphase Transport Simulations
### Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration
```
inv = op.algorithms.Porosimetry(network=pn)
inv.setup(phase=water)
inv.set_inlets(pores=pn.pores(['top', 'bottom']))
inv.run()
```
* The inlet pores were set to both ``'top'`` and ``'bottom'`` using the ``pn.pores`` method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.
* The ``run`` method automatically generates a list of 25 capillary pressure points to test, but you can also specify more points, or exactly which points to test. See the method's documentation for the details.
* Once the algorithm has been run, the resulting capillary pressure curve can be viewed with ``plot_drainage_curve``. If you'd prefer a table of data for plotting in your software of choice you can use ``get_drainage_data`` which prints a table in the console.
### Set Pores and Throats to Invaded
After running, the ``inv`` object possesses arrays containing the pressure at which each pore and throat was invaded, stored as ``'pore.invasion_pressure'`` and ``'throat.invasion_pressure'``. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic:
```
Pi = inv['pore.invasion_pressure'] < 5000
Ti = inv['throat.invasion_pressure'] < 5000
```
The resulting Boolean masks can be used to manually adjust the hydraulic conductivity of pores and throats based on their phase occupancy. The following lines set the water filled throats to near-zero conductivity for air flow:
```
Ts = phys_water_internal.map_throats(~Ti, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20
```
* The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if *either* or *both* of the pores are filled as well.
* The above approach can get complicated if there are several **Geometry** objects, and it is also a bit laborious. There is a pore-scale model for this under **Physics.models.multiphase** called ``conduit_conductance``. The term *conduit* refers to the path between two pores that includes 1/2 of each pore plus the connecting throat.
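As a rough sketch of the conduit idea (a simplified reading of what ``conduit_conductance`` does, not its actual implementation), the conduit conductance is three conductances in series, with any invaded element throttled by a small factor:

```python
def conduit_conductance(g_pore1, g_throat, g_pore2, invaded, factor=1e-6):
    """Series conductance of a pore-throat-pore conduit.

    `invaded` is a (pore1, throat, pore2) tuple of booleans; any invaded
    element has its conductance scaled down by `factor` (a simplification
    of the real model's occupancy handling).
    """
    g = [g_pore1, g_throat, g_pore2]
    g = [gi * factor if inv else gi for gi, inv in zip(g, invaded)]
    return 1.0 / (1.0 / g[0] + 1.0 / g[1] + 1.0 / g[2])

g_open = conduit_conductance(2.0, 1.0, 2.0, (False, False, False))    # 0.5
g_blocked = conduit_conductance(2.0, 1.0, 2.0, (False, True, False))  # ~1e-6
```

A single invaded element dominates the series sum, so the whole conduit is effectively closed, which matches the behaviour we produced manually above.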
### Calculate Relative Permeability of Each Phase
We are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate a **StokesFlow** object:
```
water_flow = op.algorithms.StokesFlow(network=pn, phase=water)
water_flow.set_value_BC(pores=pn.pores('left'), values=200000)
water_flow.set_value_BC(pores=pn.pores('right'), values=100000)
water_flow.run()
Q_partial, = water_flow.rate(pores=pn.pores('right'))
```
The *relative* permeability is the ratio of the water flow through the partially water saturated media versus through fully water saturated media; hence we need to find the absolute permeability of water. This can be accomplished by *regenerating* the ``phys_water_internal`` object, which will recalculate the ``'throat.hydraulic_conductance'`` values and overwrite our manually entered near-zero values from the ``inv`` simulation using ``phys_water_internal.models.regenerate()``. We can then re-use the ``water_flow`` algorithm:
```
phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right'))
```
And finally, the relative permeability can be found from:
```
K_rel = Q_partial/Q_full
print(f"Relative permeability: {K_rel:.5f}")
```
* The ratio of the flow rates gives the normalized relative permeability since all the domain size, viscosity and pressure differential terms cancel each other.
* To generate a full relative permeability curve the above logic would be placed inside a for loop, with each loop increasing the pressure threshold used to obtain the list of invaded throats (``Ti``).
* The saturation at each capillary pressure can be found by summing the pore and throat volumes of all the invaded pores and throats using ``Vp = geom['pore.volume'][Pi]`` and ``Vt = geom['throat.volume'][Ti]``.
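The relative-permeability loop could be sketched as follows. Everything here is a toy stand-in: ``simulate_flow`` is a hypothetical proxy for the StokesFlow run, and the invasion pressures and conductances are random numbers rather than network data.

```python
import numpy as np

rng = np.random.default_rng(0)
inv_Pc = rng.uniform(1000, 9000, size=50)    # toy throat invasion pressures [Pa]
g_full = rng.uniform(1e-12, 5e-12, size=50)  # toy hydraulic conductances

def simulate_flow(g):
    # Hypothetical proxy for the StokesFlow run: here the "flow rate" is just
    # the total open conductance, NOT a real network solve.
    return g.sum()

Q_full = simulate_flow(g_full)
pressures = np.linspace(0, 10000, 11)
k_rel = []
for Pc in pressures:
    Ti = inv_Pc < Pc                    # throats invaded by water at this Pc
    g = np.where(~Ti, g_full, 1e-20)    # block water-filled throats for air flow
    k_rel.append(simulate_flow(g) / Q_full)
# k_rel decreases from 1 toward 0 as the pressure threshold rises
```

In the real workflow each iteration would regenerate the models, apply the mask, and re-run the algorithm as shown above.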
## What is convolution and how does it work?
[Convolution][1] is the process of adding each element of the image to its local neighbors, weighted by the [kernel][2]. A kernel (also called a convolution matrix, filter, or mask) is a small matrix used for blurring, sharpening, embossing, edge detection, and more; all of these are accomplished by convolving a kernel with an image. Let's see how to do this.
[1]: https://en.wikipedia.org/wiki/Kernel_(image_processing)#Convolution
[2]: https://en.wikipedia.org/wiki/Kernel_(image_processing)
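Each output pixel is simply the sum of element-wise products between the kernel and the image patch beneath it. A quick worked example (using an averaging kernel, for which the kernel flip performed by a true convolution makes no difference):

```python
import numpy as np
from scipy import signal

patch = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
kernel = np.ones((3, 3)) / 9.0   # 3x3 averaging kernel (symmetric)

# Output pixel centred on the patch: sum of element-wise products = 45/9 = 5
value = np.sum(patch * kernel)

# scipy computes the same thing for every pixel (mode='valid': no padding)
out = signal.convolve2d(patch, kernel, mode='valid')
```

With `mode='valid'` the 3x3 input and 3x3 kernel produce a single output pixel, equal to the hand-computed value.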
```
import numpy as np
from scipy import signal
import skimage
import skimage.io as sio
from skimage import filters
import matplotlib.pyplot as plt
```
#### Load the Image and show it.
```
img = sio.imread('images/lines.jpg')
img = skimage.color.rgb2gray(img)
print('Image Shape is:',img.shape)
plt.figure(figsize = (8,8))
plt.imshow(img,cmap='gray',aspect='auto'),plt.show()
```
#### Generally, a convolution filter (kernel) is an odd-sized square matrix. Here is an illustration of convolution.
<img src='images/3D_Convolution_Animation.gif'>

<table style="width:100%; table-layout:fixed;">
<tr>
<td><img width="150px" src="images/no_padding_no_strides.gif"></td>
<td><img width="150px" src="images/arbitrary_padding_no_strides.gif"></td>
<td><img width="150px" src="images/same_padding_no_strides.gif"></td>
<td><img width="150px" src="images/full_padding_no_strides.gif"></td>
</tr>
<tr>
<td>No padding, no strides</td>
<td>Arbitrary padding, no strides</td>
<td>Half padding, no strides</td>
<td>Full padding, no strides</td>
</tr>
<tr>
<td><img width="150px" src="images/no_padding_strides.gif"></td>
<td><img width="150px" src="images/padding_strides.gif"></td>
<td><img width="150px" src="images/padding_strides_odd.gif"></td>
<td></td>
</tr>
<tr>
<td>No padding, strides</td>
<td>Padding, strides</td>
<td>Padding, strides (odd)</td>
<td></td>
</tr>
</table>
#### Implementation of Convolution operation
```
def convolution2d(img, kernel, stride=1, padding=True):
kernel_size = kernel.shape[0]
img_row,img_col = img.shape
if padding:
pad_value = kernel_size//2
img = np.pad(img,(pad_value,pad_value),mode='edge')
else:
pad_value = 0
    filter_half = kernel_size//2
    pad_row, pad_col = img.shape # shape after the optional padding
    img_new_row = (img_row-kernel_size+2*pad_value)//stride + 1
    img_new_col = (img_col-kernel_size+2*pad_value)//stride + 1
    img_new = np.zeros((img_new_row,img_new_col))
    ii=0
    for i in range(filter_half,pad_row-filter_half,stride): # iterate over the (padded) image
        jj=0
        for j in range(filter_half,pad_col-filter_half,stride):
curr_img = img[i-filter_half:i+filter_half+1,j-filter_half:j+filter_half+1]
sum_value = np.sum(np.multiply(curr_img,kernel))
img_new[ii,jj] = sum_value
jj += 1
ii += 1
return img_new
kernel_size = (7,7) #Defining kernel size
kernel = np.ones(kernel_size) #Initializing a uniform (box) kernel
kernel = kernel/np.sum(kernel) #Normalizing so the kernel computes an average
img_conv = convolution2d(img,kernel,padding=True)#Applying the convolution operation
print(img_conv.shape)
plt.figure(figsize = (8,8))
plt.imshow(img_conv,cmap='gray'),plt.show()
```
Convolving an image with a kernel can extract image features. As we can see here, the averaging kernel blurs the image. There are also predefined kernels, such as [Sobel](https://www.researchgate.net/profile/Irwin_Sobel/publication/239398674_An_Isotropic_3x3_Image_Gradient_Operator/links/557e06f508aeea18b777c389/An-Isotropic-3x3-Image-Gradient-Operator.pdf?origin=publication_detail) or Prewitt, which are used to extract the edges of an image.
```
kernel_x = np.array([[ 1, 2, 1],
[ 0, 0, 0],
[-1,-2,-1]]) / 4.0 #Sobel kernel
kernel_y = np.transpose(kernel_x)
output_x = convolution2d(img, kernel_x)
output_y = convolution2d(img, kernel_y)
output = np.sqrt(output_x**2 + output_y**2)
output /= np.sqrt(2)
fig, (ax1, ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20, 20))
ax1.set_title("Original Image",fontweight='bold')
ax1.imshow(img, cmap=plt.cm.Greys_r)
ax2.set_title("Horizontal Edges",fontweight='bold')
ax2.imshow(output_x, cmap=plt.cm.Greys_r)
ax3.set_title("Vertical Edges",fontweight='bold')
ax3.imshow(output_y, cmap=plt.cm.Greys_r)
ax4.set_title("All Edges",fontweight='bold')
ax4.imshow(output, cmap=plt.cm.Greys_r)
fig.tight_layout()
plt.show()
```
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-cshc/exp-cshc_cshc_1w_ale_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Experiment Description
> This notebook is for experiment \<exp-cshc\> and data sample \<cshc\>.
### Initialization
```
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data (if you are using colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-cshc/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
```
### Loading data
```
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cshc'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
```
### ALE Plots
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
        axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 20])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
```
# SAX circuit simulator
[SAX](https://flaport.github.io/sax/) is a circuit solver written in JAX. Writing your component models in SAX gives you not only the function values but also their gradients, which is useful for circuit optimization.
This tutorial has been adapted from the SAX tutorial.
Note that SAX does not work on Windows, so Windows users will need to run it from [WSL](https://docs.microsoft.com/en-us/windows/wsl/) or inside Docker.
You can install sax with pip
```
! pip install sax
```
```
import gdsfactory as gf
import gdsfactory.simulation.sax as gs
import gdsfactory.simulation.modes as gm
import sax
```
## Scatter *dictionaries*
The core data structure for specifying scatter parameters in SAX is a dictionary... more specifically, a dictionary which maps a port combination (2-tuple) to a scatter parameter (or an array of scatter parameters when considering multiple wavelengths, for example). Such a dictionary is called an `SDict` in SAX (`SDict ≈ Dict[Tuple[str,str], float]`).
Dictionaries are in fact much better suited for characterizing S-parameters than, say, (jax-)numpy arrays, due to the inherently sparse nature of scatter parameters. Moreover, dictionaries allow for string indexing, which makes them much more pleasant to use in this context.
```
o2 o3
\ /
========
/ \
o1 o4
```
```
coupling = 0.5
kappa = coupling ** 0.5
tau = (1 - coupling) ** 0.5
coupler_dict = {
("o1", "o4"): tau,
("o4", "o1"): tau,
("o1", "o3"): 1j * kappa,
("o3", "o1"): 1j * kappa,
("o2", "o4"): 1j * kappa,
("o4", "o2"): 1j * kappa,
("o2", "o3"): tau,
("o3", "o2"): tau,
}
coupler_dict
```
It can still be tedious to specify every port in the circuit manually. SAX therefore offers the `reciprocal` function, which auto-fills the reverse connection if the forward connection exists. For example:
```
coupler_dict = sax.reciprocal(
{
("o1", "o4"): tau,
("o1", "o3"): 1j * kappa,
("o2", "o4"): 1j * kappa,
("o2", "o3"): tau,
}
)
coupler_dict
```
## Parametrized Models
Constructing such an `SDict` is easy, however, usually we're more interested in having parametrized models for our components. To parametrize the coupler `SDict`, just wrap it in a function to obtain a SAX `Model`, which is a keyword-only function mapping to an `SDict`:
```
def coupler(coupling=0.5) -> sax.SDict:
kappa = coupling ** 0.5
tau = (1 - coupling) ** 0.5
coupler_dict = sax.reciprocal(
{
("o1", "o4"): tau,
("o1", "o3"): 1j * kappa,
("o2", "o4"): 1j * kappa,
("o2", "o3"): tau,
}
)
return coupler_dict
coupler(coupling=0.3)
import jax.numpy as jnp  # needed by the waveguide model below

def waveguide(wl=1.55, wl0=1.55, neff=2.34, ng=3.4, length=10.0, loss=0.0) -> sax.SDict:
dwl = wl - wl0
dneff_dwl = (ng - neff) / wl0
neff = neff - dwl * dneff_dwl
phase = 2 * jnp.pi * neff * length / wl
transmission = 10 ** (-loss * length / 20) * jnp.exp(1j * phase)
sdict = sax.reciprocal(
{
("o1", "o2"): transmission,
}
)
return sdict
```
## Component Models
### Waveguide model
You can create a dispersive waveguide model in SAX.
Let's compute the effective index `neff` and group index `ng` for a 500 nm wide straight waveguide at a wavelength of 1550 nm.
```
m = gm.find_mode_dispersion(wavelength=1.55)
print(m.neff, m.ng)
straight_sc = gf.partial(gs.models.straight, neff=m.neff, ng=m.ng)
gs.plot_model(straight_sc)
gs.plot_model(straight_sc, phase=True)
```
### Coupler model
```
gm.find_coupling_vs_gap?
df = gm.find_coupling_vs_gap()
df
```
For a 200 nm gap the effective index difference `dn` is `0.02`, which means that there is 100% power coupling over 38.2 um.
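As a quick sanity check, this follows from the usual two-mode beat-length relation `Lc = wl / (2 * dn)`; with the rounded `dn = 0.02` it gives roughly 38.75 um, close to the 38.2 um quoted (which presumably comes from the exact `dn` computed above rather than the rounded value).

```python
wavelength = 1.55            # um
dn = 0.02                    # effective index difference (even - odd supermode)
Lc = wavelength / (2 * dn)   # length for full power transfer, ~38.75 um
```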
```
coupler_sc = gf.partial(gs.models.coupler, dn=0.02, length=0, coupling0=0)
gs.plot_model(coupler_sc)
```
If we ignore the coupling from the bend (`coupling0 = 0`), we know that for 3 dB coupling we need half of the length `lc`, which is the length needed to couple `100%` of the power.
```
coupler_sc = gf.partial(gs.models.coupler, dn=0.02, length=38.2/2, coupling0=0)
gs.plot_model(coupler_sc)
```
### FDTD Sparameters model
You can also fit a model from Sparameter FDTD simulation data.
```
from gdsfactory.simulation.get_sparameters_path import get_sparameters_path_lumerical
filepath = get_sparameters_path_lumerical(gf.c.mmi1x2)
mmi1x2 = gf.partial(gs.read.sdict_from_csv, filepath=filepath)
gs.plot_model(mmi1x2)
```
## Circuit Models
You can combine component models into a circuit using `sax.circuit`, which basically creates a new `Model` function:
Let's define a [Mach-Zehnder interferometer (MZI)](https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer)
```
_________
| top |
| |
lft===| |===rgt
| |
|_________|
bot
o1 top o2
----------
o2 o3 o2 o3
\ / \ /
======== ========
/ \ / \
o1 lft 04 o1 rgt 04
----------
o1 bot o2
```
```
waveguide = straight_sc
coupler = coupler_sc
mzi = sax.circuit(
instances={
"lft": coupler,
"top": waveguide,
"bot": waveguide,
"rgt": coupler,
},
connections={
"lft,o4": "bot,o1",
"bot,o2": "rgt,o1",
"lft,o3": "top,o1",
"top,o2": "rgt,o2",
},
ports={
"o1": "lft,o1",
"o2": "lft,o2",
"o4": "rgt,o4",
"o3": "rgt,o3",
},
)
```
The `circuit` function just creates a similar function as we created for the waveguide and the coupler, but instead of taking parameters directly it takes parameter *dictionaries* for each of the instances in the circuit. The keys in these parameter dictionaries should correspond to the keyword arguments of each individual subcomponent.
Let's now do a simulation for the MZI we just constructed:
```
%time mzi()
import jax
import jax.example_libraries.optimizers as opt
import jax.numpy as jnp
import matplotlib.pyplot as plt # plotting
mzi2 = jax.jit(mzi)
%time mzi2()
mzi(top={"length": 25.0}, bot={"length": 15.0})
wl = jnp.linspace(1.51, 1.59, 1000)
%time S = mzi(wl=wl, top={"length": 25.0}, bot={"length": 15.0})
plt.plot(wl * 1e3, abs(S["o1", "o3"]) ** 2, label='o3')
plt.plot(wl * 1e3, abs(S["o1", "o4"]) ** 2, label='o4')
plt.ylim(-0.05, 1.05)
plt.xlabel("λ [nm]")
plt.ylabel("T")
plt.ylim(-0.05, 1.05)
plt.legend()
plt.show()
```
## Optimization
You can optimize an MZI to get T=0 at 1550nm.
To do this, you need to define a loss function for the circuit at 1550nm.
This function should take the parameters that you want to optimize as positional arguments:
```
@jax.jit
def loss(delta_length):
    S = mzi(wl=1.55, top={"length": 15.0 + delta_length}, bot={"length": 15.0})
return (abs(S["o1", "o4"]) ** 2).mean()
%time loss(10.0)
```
You can use this loss function to define a grad function which works on the parameters of the loss function:
```
grad = jax.jit(
jax.grad(
loss,
argnums=0, # JAX gradient function for the first positional argument, jitted
)
)
```
Next, you need to define a JAX optimizer, which on its own is nothing more than three more functions:
1. an initialization function with which to initialize the optimizer state
2. an update function which will update the optimizer state (and with it the model parameters).
3. a function with the model parameters given the optimizer state.
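The same three-function interface can be written by hand; here is a plain gradient-descent sketch that mirrors the shape of the `opt.adam` triple (but not its actual update rule):

```python
def sgd(step_size=0.1):
    # Same (init, update, get_params) triple as jax.example_libraries.optimizers,
    # but with a plain gradient-descent update instead of Adam.
    def init(params):
        return params                      # the state is just the parameters here

    def update(step, grads, state):
        return state - step_size * grads   # one gradient-descent step

    def get_params(state):
        return state

    return init, update, get_params

# Minimise f(x) = (x - 3)^2 using its analytic gradient 2 * (x - 3).
opt_init, opt_update, opt_params = sgd(step_size=0.1)
state = opt_init(10.0)
for step in range(100):
    x = opt_params(state)
    state = opt_update(step, 2 * (x - 3.0), state)
x_final = opt_params(state)   # converges toward 3.0
```

Adam keeps extra moment estimates in its state, which is why the state/params split exists at all: the optimizer state can be richer than the parameters themselves.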
```
initial_delta_length = 10.0
optim_init, optim_update, optim_params = opt.adam(step_size=0.1)
optim_state = optim_init(initial_delta_length)
def train_step(step, optim_state):
settings = optim_params(optim_state)
lossvalue = loss(settings)
gradvalue = grad(settings)
optim_state = optim_update(step, gradvalue, optim_state)
return lossvalue, optim_state
import tqdm
range_ = tqdm.trange(300)
for step in range_:
lossvalue, optim_state = train_step(step, optim_state)
range_.set_postfix(loss=f"{lossvalue:.6f}")
delta_length = optim_params(optim_state)
delta_length
S = mzi(wl=wl, top={"length": 15.0 + delta_length}, bot={"length": 15.0})
plt.plot(wl * 1e3, abs(S["o1", "o4"]) ** 2)
plt.xlabel("λ [nm]")
plt.ylabel("T")
plt.ylim(-0.05, 1.05)
plt.plot([1550, 1550], [0, 1])
plt.show()
```
The minimum of the MZI is perfectly located at 1550nm.
## Model fit
You can fit a sax model to Sparameter FDTD simulation data.
```
import tqdm
import jax
import jax.numpy as jnp
import jax.example_libraries.optimizers as opt
import matplotlib.pyplot as plt
import gdsfactory as gf
import gdsfactory.simulation.modes as gm
import gdsfactory.simulation.sax as gs
gf.config.sparameters_path
sd = gs.read.sdict_from_csv(gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)
coupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)
gs.plot_model(coupler_fdtd)
gs.plot_model(coupler_fdtd, ports2=('o3', 'o4'))
modes = gm.find_modes_coupler(gap=0.224)
modes
dn = modes[1].neff - modes[2].neff
dn
coupler = gf.partial(gf.simulation.sax.models.coupler, dn=dn, length=20, coupling0=0.3)
gs.plot_model(coupler)
coupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3)
S = coupler_fdtd
T_fdtd = abs(S['o1', 'o3'])**2
K_fdtd = abs(S['o1', 'o4'])**2
@jax.jit
def loss(coupling0, dn, dn1, dn2, dk1, dk2):
"""Returns fit least squares error from a coupler model spectrum
to the FDTD Sparameter spectrum that we want to fit.
Args:
        coupling0: coupling from the bend region
dn: effective index difference between even and odd mode solver simulations.
dn1: first derivative of effective index difference vs wavelength.
dn2: second derivative of effective index difference vs wavelength.
dk1: first derivative of coupling0 vs wavelength.
dk2: second derivative of coupling vs wavelength.
.. code::
coupling0/2 coupling coupling0/2
<-------------><--------------------><---------->
o2 ________ _______o3
\ /
\ length /
======================= gap
/ \
________/ \________
o1 o4
------------------------> K (coupled power)
/
/ K
-----------------------------------> T = 1 - K (transmitted power)
T: o1 -> o4
K: o1 -> o3
"""
S = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2)
T_model = abs(S['o1', 'o4'])**2
K_model = abs(S['o1', 'o3'])**2
return jnp.abs(T_fdtd-T_model).mean() + jnp.abs(K_fdtd-K_model).mean()
loss(coupling0=0.3, dn=0.016, dk1 = 1.2435, dk2 = 5.3022, dn1 = 0.1169, dn2 = 0.4821)
grad = jax.jit(
jax.grad(
loss,
argnums=0, # JAX gradient function for the first positional argument, jitted
)
)
def train_step(step, optim_state, dn, dn1, dn2, dk1, dk2):
settings = optim_params(optim_state)
lossvalue = loss(settings, dn, dn1, dn2, dk1, dk2)
gradvalue = grad(settings, dn, dn1, dn2, dk1, dk2)
optim_state = optim_update(step, gradvalue, optim_state)
return lossvalue, optim_state
coupling0 = 0.3
optim_init, optim_update, optim_params = opt.adam(step_size=0.1)
optim_state = optim_init(coupling0)
dn = 0.0166
dn1 = 0.11
dn2 = 0.48
dk1 = 1.2
dk2 = 5
range_ = tqdm.trange(300)
for step in range_:
lossvalue, optim_state = train_step(step, optim_state, dn, dn1, dn2, dk1, dk2)
range_.set_postfix(loss=f"{lossvalue:.6f}")
coupling0_fit = optim_params(optim_state)
coupling0_fit
coupler = gf.partial(gf.simulation.sax.models.coupler, dn=dn, length=20, coupling0=coupling0_fit)
gs.plot_model(coupler)
wl = jnp.linspace(1.50, 1.60, 1000)
S = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0_fit, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2, wl=wl)
T_model = abs(S['o1', 'o4'])**2
K_model = abs(S['o1', 'o3'])**2
coupler_fdtd = S = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3, wl=wl)
T_fdtd = abs(S['o1', 'o3'])**2
K_fdtd = abs(S['o1', 'o4'])**2
plt.plot(wl, T_fdtd, label='fdtd', c='b')
plt.plot(wl, T_model, label='fit', c='b', ls='-.')
plt.plot(wl, K_fdtd, label='fdtd', c='r')
plt.plot(wl, K_model, label='fit', c='r', ls='-.')
plt.legend()
```
### Multi-variable optimization
As you can see, we need to fit more than one variable (`coupling0`) to get a good fit.
```
grad = jax.jit(
jax.grad(
loss,
#argnums=0, # JAX gradient function for the first positional argument, jitted
argnums=[0, 1, 2, 3, 4, 5], # JAX gradient function for all positional arguments, jitted
)
)
def train_step(step, optim_state):
coupling0, dn, dn1, dn2, dk1, dk2 = optim_params(optim_state)
lossvalue = loss(coupling0, dn, dn1, dn2, dk1, dk2)
gradvalue = grad(coupling0, dn, dn1, dn2, dk1, dk2)
optim_state = optim_update(step, gradvalue, optim_state)
return lossvalue, optim_state
coupling0 = 0.3
dn = 0.0166
dn1 = 0.11
dn2 = 0.48
dk1 = 1.2
dk2 = 5.0
optim_init, optim_update, optim_params = opt.adam(step_size=0.01)
optim_state = optim_init((coupling0, dn, dn1, dn2, dk1, dk2))
range_ = tqdm.trange(1000)
for step in range_:
lossvalue, optim_state = train_step(step, optim_state)
range_.set_postfix(loss=f"{lossvalue:.6f}")
coupling0_fit, dn_fit, dn1_fit, dn2_fit, dk1_fit, dk2_fit = optim_params(optim_state)
coupling0_fit, dn_fit, dn1_fit, dn2_fit, dk1_fit, dk2_fit
wl = jnp.linspace(1.5, 1.60, 1000)
coupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv',wl=wl, xkey='wavelength_nm', prefix='S', xunits=1e-3)
S = coupler_fdtd
T_fdtd = abs(S['o1', 'o3'])**2
S = gf.simulation.sax.models.coupler(dn=dn_fit,
length=20,
coupling0=coupling0_fit,
dn1=dn1_fit,
dn2=dn2_fit,
dk1=dk1_fit,
dk2=dk2_fit,
wl=wl)
T_model = abs(S['o1', 'o4'])**2
K_model = abs(S['o1', 'o3'])**2
plt.plot(wl, T_fdtd, label='fdtd', c='b')
plt.plot(wl, T_model, label='fit', c='b', ls='-.')
plt.plot(wl, K_fdtd, label='fdtd', c='r')
plt.plot(wl, K_model, label='fit', c='r', ls='-.')
plt.legend()
```
As you can see, fitting many parameters does not necessarily give you a better fit;
you have to make sure you fit the right parameters, in this case `dn1`.
```
wl = jnp.linspace(1.50, 1.60, 1000)
S = gf.simulation.sax.models.coupler(dn=dn_fit,
length=20,
coupling0=coupling0_fit,
dn1=dn1_fit-0.045,
dn2=dn2_fit,
dk1=dk1_fit,
dk2=dk2_fit,
wl=wl)
T_model = abs(S['o1', 'o4'])**2
K_model = abs(S['o1', 'o3'])**2
plt.plot(wl, T_fdtd, label='fdtd', c='b')
plt.plot(wl, T_model, label='fit', c='b', ls='-.')
plt.plot(wl, K_fdtd, label='fdtd', c='r')
plt.plot(wl, K_model, label='fit', c='r', ls='-.')
plt.legend()
dn = dn_fit
dn2 = dn2_fit
dk1 = dk1_fit
dk2 = dk2_fit
@jax.jit
def loss(dn1):
"""Returns fit least squares error from a coupler model spectrum
to the FDTD Sparameter spectrum that we want to fit.
"""
S = gf.simulation.sax.models.coupler(dn=dn, length=20, coupling0=coupling0, dn1=dn1, dn2=dn2, dk1=dk1, dk2=dk2)
T_model = jnp.abs(S['o1', 'o4'])**2
K_model = jnp.abs(S['o1', 'o3'])**2
return jnp.abs(T_fdtd-T_model).mean() + jnp.abs(K_fdtd-K_model).mean()
grad = jax.jit(
jax.grad(
loss,
argnums=0, # JAX gradient function for the first positional argument, jitted
)
)
dn1 = 0.11
optim_init, optim_update, optim_params = opt.adam(step_size=0.001)
optim_state = optim_init(dn1)
def train_step(step, optim_state):
settings = optim_params(optim_state)
lossvalue = loss(settings)
gradvalue = grad(settings)
optim_state = optim_update(step, gradvalue, optim_state)
return lossvalue, optim_state
range_ = tqdm.trange(300)
for step in range_:
lossvalue, optim_state = train_step(step, optim_state)
range_.set_postfix(loss=f"{lossvalue:.6f}")
dn1_fit = optim_params(optim_state)
dn1_fit
wl = jnp.linspace(1.50, 1.60, 1000)
S = gf.simulation.sax.models.coupler(dn=dn,
length=20,
coupling0=coupling0,
dn1=dn1_fit,
dn2=dn2,
dk1=dk1,
dk2=dk2,
wl=wl)
T_model = abs(S['o1', 'o4'])**2
K_model = abs(S['o1', 'o3'])**2
coupler_fdtd = gs.read.sdict_from_csv(filepath=gf.config.sparameters_path / 'coupler' / 'coupler_G224n_L20_S220.csv', xkey='wavelength_nm', prefix='S', xunits=1e-3, wl=wl)
S = coupler_fdtd
T_fdtd = abs(S['o1', 'o3'])**2
K_fdtd = abs(S['o1', 'o4'])**2
plt.plot(wl, T_fdtd, label='fdtd', c='b')
plt.plot(wl, T_model, label='fit', c='b', ls='-.')
plt.plot(wl, K_fdtd, label='fdtd', c='r')
plt.plot(wl, K_model, label='fit', c='r', ls='-.')
plt.legend()
```
## Model fit (linear regression)
For a better fit, we can build a linear regression model of the coupler with `sklearn`.
```
import sax
import gdsfactory as gf
import gdsfactory.simulation.sax as gs
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
from scipy.constants import c
from sklearn.linear_model import LinearRegression
f = jnp.linspace(c / 1.0e-6, c / 2.0e-6, 500) * 1e-12 # THz
wl = c / (f * 1e12) * 1e6 # um
filepath = gf.config.sparameters_path / "coupler" / "coupler_G224n_L20_S220.csv"
coupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath, xkey="wavelength_nm", prefix="S", xunits=1e-3)
sd = coupler_fdtd(wl=wl)
k = sd["o1", "o3"]
t = sd["o1", "o4"]
s = t + k
a = t - k
```
Let's fit the symmetric (t+k) and anti-symmetric (t-k) transmission.
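Fitting `s = t + k` and `a = t - k` loses nothing, because the original amplitudes are recovered exactly as `t = (s + a) / 2` and `k = (s - a) / 2`. A quick check with toy complex amplitudes:

```python
import numpy as np

t = 0.70 * np.exp(0.3j)    # toy "through" amplitude
k = 0.71j * np.exp(0.3j)   # toy "coupled" amplitude

s = t + k   # symmetric combination
a = t - k   # anti-symmetric combination

t_rec = 0.5 * (s + a)   # recovers t
k_rec = 0.5 * (s - a)   # recovers k
```

The symmetric and anti-symmetric combinations tend to have smoother magnitude and phase than `t` and `k` individually, which makes them easier to fit with low-order polynomials.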
### Symmetric
```
plt.plot(wl, jnp.abs(s))
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Transmission")
plt.title('symmetric (transmission + coupling)')
plt.legend()
plt.show()
plt.plot(wl, jnp.abs(a))
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Transmission")
plt.title('anti-symmetric (transmission - coupling)')
plt.legend()
plt.show()
r = LinearRegression()
fX = lambda x, _order=8: x[:, None]**(jnp.arange(_order)[None, :])  # artificially create more 'features' (wl**2, wl**3, wl**4, ...)
X = fX(wl)
r.fit(X, jnp.abs(s))
asm, bsm = r.coef_, r.intercept_
fsm = lambda x: fX(x) @ asm + bsm  # fit of the symmetric modulus
plt.plot(wl, jnp.abs(s), label="data")
plt.plot(wl, fsm(wl), label="fit")
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Transmission")
plt.legend()
plt.show()
r = LinearRegression()
r.fit(X, jnp.unwrap(jnp.angle(s)))
asp, bsp = r.coef_, r.intercept_
fsp = lambda x: fX(x) @ asp + bsp  # fit of the symmetric phase
plt.plot(wl, jnp.unwrap(jnp.angle(s)), label="data")
plt.plot(wl, fsp(wl), label="fit")
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Angle [rad]")
plt.legend()
plt.show()
fs = lambda x: fsm(x)*jnp.exp(1j*fsp(x))
```
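The `fX` helper above simply builds a Vandermonde-style feature matrix (one column per power of the input) so that `LinearRegression` can fit a polynomial. A minimal NumPy sketch of the same idea:

```python
import numpy as np

# Same trick as fX: turn a 1D input into polynomial "features"
# x**0, x**1, ..., x**(order-1), one column per power.
def poly_features(x, order=4):
    x = np.asarray(x, dtype=float)
    return x[:, None] ** np.arange(order)[None, :]

x = np.array([1.0, 2.0, 3.0])
X = poly_features(x)
print(X.shape)  # (3, 4)
print(X[1])     # [1. 2. 4. 8.]
```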
### Anti-Symmetric
```
r = LinearRegression()
r.fit(X, jnp.abs(a))
aam, bam = r.coef_, r.intercept_
fam = lambda x: fX(x) @ aam + bam  # fit of the anti-symmetric modulus
plt.plot(wl, jnp.abs(a), label="data")
plt.plot(wl, fam(wl), label="fit")
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Transmission")
plt.legend()
plt.show()
r = LinearRegression()
r.fit(X, jnp.unwrap(jnp.angle(a)))
aap, bap = r.coef_, r.intercept_
fap = lambda x: fX(x) @ aap + bap  # fit of the anti-symmetric phase
plt.plot(wl, jnp.unwrap(jnp.angle(a)), label="data")
plt.plot(wl, fap(wl), label="fit")
plt.grid(True)
plt.xlabel("Wavelength [um]")
plt.ylabel("Angle [rad]")
plt.legend()
plt.show()
fa = lambda x: fam(x)*jnp.exp(1j*fap(x))
```
### Total
```
t_ = 0.5 * (fs(wl) + fa(wl))
plt.plot(wl, jnp.abs(t))
plt.plot(wl, jnp.abs(t_))
plt.xlabel("Wavelength [um]")
plt.ylabel("Transmission")
k_ = 0.5 * (fs(wl) - fa(wl))
plt.plot(wl, jnp.abs(k))
plt.plot(wl, jnp.abs(k_))
plt.xlabel("Wavelength [um]")
plt.ylabel("Coupling")
@jax.jit
def coupler(wl=1.5):
    wl = jnp.asarray(wl)
    wl_shape = wl.shape
    wl = wl.ravel()
    t = (0.5 * (fs(wl) + fa(wl))).reshape(*wl_shape)
    k = (0.5 * (fs(wl) - fa(wl))).reshape(*wl_shape)
    sdict = {
        ("o1", "o4"): t,
        ("o1", "o3"): k,
        ("o2", "o3"): k,
        ("o2", "o4"): t,
    }
    return sax.reciprocal(sdict)
f = jnp.linspace(c / 1.0e-6, c / 2.0e-6, 500) * 1e-12 # THz
wl = c / (f * 1e12) * 1e6 # um
filepath = gf.config.sparameters_path / "coupler" / "coupler_G224n_L20_S220.csv"
coupler_fdtd = gf.partial(gs.read.sdict_from_csv, filepath, xkey="wavelength_nm", prefix="S", xunits=1e-3)
sd = coupler_fdtd(wl=wl)
sd_ = coupler(wl=wl)
T = jnp.abs(sd["o1", "o4"]) ** 2
K = jnp.abs(sd["o1", "o3"]) ** 2
T_ = jnp.abs(sd_["o1", "o4"]) ** 2
K_ = jnp.abs(sd_["o1", "o3"]) ** 2
dP = jnp.unwrap(jnp.angle(sd["o1", "o3"]) - jnp.angle(sd["o1", "o4"]))
dP_ = jnp.unwrap(jnp.angle(sd_["o1", "o3"]) - jnp.angle(sd_["o1", "o4"]))
plt.figure(figsize=(12,3))
plt.plot(wl, T, label="T (fdtd)", c="C0", ls=":", lw="6")
plt.plot(wl, T_, label="T (model)", c="C0")
plt.plot(wl, K, label="K (fdtd)", c="C1", ls=":", lw="6")
plt.plot(wl, K_, label="K (model)", c="C1")
plt.ylim(-0.05, 1.05)
plt.grid(True)
plt.twinx()
plt.plot(wl, dP, label="ΔΦ (fdtd)", color="C2", ls=":", lw="6")
plt.plot(wl, dP_, label="ΔΦ (model)", color="C2")
plt.xlabel("Wavelength [um]")
plt.ylabel("ΔΦ [rad]")
plt.figlegend(bbox_to_anchor=(1.08, 0.9))
plt.savefig("fdtd_vs_model.png", bbox_inches="tight")
plt.show()
```
```
from torchvision.models import *
import wandb
from sklearn.model_selection import train_test_split
import os,cv2
import numpy as np
import matplotlib.pyplot as plt
from torch.optim import *
from torch.nn import *
import torch,torchvision
from tqdm import tqdm
device = 'cuda'
PROJECT_NAME = 'Musical-Instruments-Image-Classification'
def load_data():
    data = []
    labels = {}
    labels_r = {}
    idx = 0
    for label in os.listdir('./data/'):
        idx += 1
        labels[label] = idx
        labels_r[idx] = label
    for folder in os.listdir('./data/'):
        for file in os.listdir(f'./data/{folder}/'):
            img = cv2.imread(f'./data/{folder}/{file}')
            img = cv2.resize(img,(56,56))
            img = img / 255.0
            data.append([
                img,
                np.eye(len(labels))[labels[folder] - 1]  # one-hot label vector
            ])
    X = []
    y = []
    for d in data:
        X.append(d[0])
        y.append(d[1])
    X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False)
    X_train = torch.from_numpy(np.array(X_train)).to(device).view(-1,3,56,56).float()
    y_train = torch.from_numpy(np.array(y_train)).to(device).float()
    X_test = torch.from_numpy(np.array(X_test)).to(device).view(-1,3,56,56).float()
    y_test = torch.from_numpy(np.array(y_test)).to(device).float()
    return X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data
X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data = load_data()
# torch.save(labels_r,'labels_r.pt')
# torch.save(labels,'labels.pt')
# torch.save(X_train,'X_train.pth')
# torch.save(y_train,'y_train.pth')
# torch.save(X_test,'X_test.pth')
# torch.save(y_test,'y_test.pth')
# torch.save(labels_r,'labels_r.pth')
# torch.save(labels,'labels.pth')
def get_accuracy(model,X,y):
    preds = model(X)
    correct = 0
    total = 0
    for pred,yb in zip(preds,y):
        pred = int(torch.argmax(pred))
        yb = int(torch.argmax(yb))
        if pred == yb:
            correct += 1
        total += 1
    acc = round(correct/total,3)*100
    return acc

def get_loss(model,X,y,criterion):
    preds = model(X)
    loss = criterion(preds,y)
    return loss.item()
model = resnet18().to(device)
model.fc = Linear(512,len(labels))
criterion = MSELoss()
optimizer = Adam(model.parameters(),lr=0.001)
epochs = 100
batch_size = 32
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(epochs)):
    for i in range(0,len(X_train),batch_size):
        X_batch = X_train[i:i+batch_size]
        y_batch = y_train[i:i+batch_size]
        model.to(device)
        preds = model(X_batch)
        loss = criterion(preds,y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # log epoch-level metrics
    model.eval()
    torch.cuda.empty_cache()
    wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})
    torch.cuda.empty_cache()
    wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})
    torch.cuda.empty_cache()
    wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})
    torch.cuda.empty_cache()
    wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})
    torch.cuda.empty_cache()
    model.train()
wandb.finish()
class Model(Module):
    def __init__(self):
        super().__init__()
        self.max_pool2d = MaxPool2d((2,2),(2,2))
        self.activation = ReLU()
        self.conv1 = Conv2d(3,7,(5,5))
        self.conv2 = Conv2d(7,14,(5,5))
        self.conv2bn = BatchNorm2d(14)
        self.conv3 = Conv2d(14,21,(5,5))
        self.linear1 = Linear(21*3*3,256)
        self.linear2 = Linear(256,512)
        self.linear2bn = BatchNorm1d(512)
        self.linear3 = Linear(512,256)
        self.output = Linear(256,len(labels))

    def forward(self,X):
        preds = self.max_pool2d(self.activation(self.conv1(X)))
        preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds))))
        preds = self.max_pool2d(self.activation(self.conv3(preds)))
        # print(preds.shape)  # debug: check the flattened feature size (21*3*3 for 56x56 inputs)
        preds = preds.view(-1,21*3*3)
        preds = self.activation(self.linear1(preds))
        preds = self.activation(self.linear2bn(self.linear2(preds)))
        preds = self.activation(self.linear3(preds))
        preds = self.output(preds)
        return preds
model = Model().to(device)
criterion = MSELoss()
optimizer = Adam(model.parameters(),lr=0.001)
epochs = 100
batch_size = 32
wandb.init(project=PROJECT_NAME,name='custom-cnn')
for _ in tqdm(range(epochs)):
    for i in range(0,len(X_train),batch_size):
        X_batch = X_train[i:i+batch_size]
        y_batch = y_train[i:i+batch_size]
        model.to(device)
        preds = model(X_batch)
        loss = criterion(preds,y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # log epoch-level metrics
    model.eval()
    torch.cuda.empty_cache()
    wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})
    torch.cuda.empty_cache()
    wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})
    torch.cuda.empty_cache()
    wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})
    torch.cuda.empty_cache()
    wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})
    torch.cuda.empty_cache()
    model.train()
wandb.finish()
```
# (Optional) Testing the Function Endpoint with your Own Audio Clips
Instead of using pre-recorded clips we show you in this notebook how to invoke the deployed Function
with your **own** audio clips.
In the cells below, we will use the [PyAudio library](https://pypi.org/project/PyAudio/) to record a short 1-second clip. We will then submit
that short clip to the Function endpoint on Oracle Functions. **Make sure PyAudio is installed on your laptop** before running this notebook.
The helper function defined below will record a 1-second audio clip when executed. Speak into the microphone
of your computer and say one of the words `cat`, `eight`, or `right`.
I'd recommend double-checking that you are not muted and that you are using the internal computer mic, not a
headset.
```
# we will use pyaudio and wave in the
# bottom half of this notebook.
import pyaudio
import wave
import IPython.display as ipd  # used below to play back the recording
print(pyaudio.__version__)
def record_wave(duration=1.0, output_wave='./output.wav'):
    """Using the pyaudio library, this function records an audio clip of a given duration.

    Args:
    - duration (float): duration of the recording in seconds
    - output_wave (str) : filename of the wav file that contains your recording

    Returns:
    - frames : a list containing the recorded waveform
    """
    # number of frames per buffer
    frames_perbuff = 2048
    # 16 bit int
    format = pyaudio.paInt16
    # mono sound
    channels = 1
    # Sampling rate -- CD quality (44.1 kHz). Standard
    # for most recording devices.
    sampling_rate = 44100
    # frames contain the waveform data:
    frames = []
    # number of buffer chunks:
    nchunks = int(duration * sampling_rate / frames_perbuff)
    p = pyaudio.PyAudio()
    stream = p.open(format=format,
                    channels=channels,
                    rate=sampling_rate,
                    input=True,
                    frames_per_buffer=frames_perbuff)
    print("RECORDING STARTED")
    for i in range(0, nchunks):
        data = stream.read(frames_perbuff)
        frames.append(data)
    print("RECORDING ENDED")
    stream.stop_stream()
    stream.close()
    p.terminate()
    # Write the audio clip to disk as a .wav file:
    wf = wave.open(output_wave, 'wb')
    wf.setnchannels(channels)
    wf.setsampwidth(p.get_sample_size(format))
    wf.setframerate(sampling_rate)
    wf.writeframes(b''.join(frames))
    wf.close()
    # Return the frames so the caller can inspect the recording:
    return frames
# let's record your own, 1-sec clip
my_own_clip = "./my_clip.wav"
frames = record_wave(output_wave=my_own_clip)
# Playback
ipd.Audio("./my_clip.wav")
```
Looks good? Now let's try to send that clip to our model API endpoint. We will repeat the same process we adopted when we submitted pre-recorded clips.
```
# oci:
import oci
from oci.config import from_file
from oci import pagination
import oci.functions as functions
from oci.functions import FunctionsManagementClient, FunctionsInvokeClient
# used below to load the clip and build the request payload:
import json
import librosa
# Lets specify the location of our OCI configuration file:
oci_config = from_file("/home/datascience/block_storage/.oci/config")
# Lets specify the compartment OCID, and the application + function names:
compartment_id = 'ocid1.compartment.oc1..aaaaaaaafl3avkal72rrwuy4m5rumpwh7r4axejjwq5hvwjy4h4uoyi7kzyq'
app_name = 'machine-learning-models'
fn_name = 'speech-commands'
fn_management_client = FunctionsManagementClient(oci_config)
app_result = pagination.list_call_get_all_results(
fn_management_client.list_applications,
compartment_id,
display_name=app_name
)
fn_result = pagination.list_call_get_all_results(
fn_management_client.list_functions,
app_result.data[0].id,
display_name=fn_name
)
invoke_client = FunctionsInvokeClient(oci_config, service_endpoint=fn_result.data[0].invoke_endpoint)
# here we need to be careful. `my_own_clip` was recorded at a 44.1 kHz sampling rate.
# Yet the training sample has data at a 16 kHz rate. To ensure that we feed data of the same
# size, we will downsample the data to a 16 kHz rate (sr=16000)
waveform, _ = librosa.load(my_own_clip, mono=True, sr=16000)
```
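Note the effect of the `sr` argument: a 1-second clip recorded at 44.1 kHz (44100 samples) comes back with 16000 samples when loaded with `sr=16000`. A plain-NumPy sketch of downsampling by linear interpolation (an illustration of the idea, not librosa's actual resampler):

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive sample-rate conversion by linear interpolation."""
    n_out = int(round(len(x) * sr_out / sr_in))
    t_in = np.arange(len(x)) / sr_in    # timestamps of the input samples
    t_out = np.arange(n_out) / sr_out   # timestamps of the output samples
    return np.interp(t_out, t_in, x)

one_second = np.random.randn(44100)     # pretend: 1 s recorded at 44.1 kHz
down = resample_linear(one_second, 44100, 16000)
print(len(down))  # 16000
```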
Below we call the deployed Function. Note that the first call can take 60 seconds or more; this is due to Functions' cold-start problem. Subsequent calls are much faster, typically under 1 second.
```
%%time
resp = invoke_client.invoke_function(fn_result.data[0].id,
invoke_function_body=json.dumps({"input": waveform.tolist()}))
print(resp.data.text)
```

<font size=3 color="midnightblue" face="arial">
<h1 align="center">Escuela de Ciencias Básicas, Tecnología e Ingeniería</h1>
</font>
<font size=3 color="navy" face="arial">
<h1 align="center">ECBTI</h1>
</font>
<font size=2 color="darkorange" face="arial">
<h1 align="center">Course:</h1>
</font>
<font size=2 color="navy" face="arial">
<h1 align="center">Introduction to the Python Programming Language</h1>
</font>
<font size=1 color="darkorange" face="arial">
<h1 align="center">February 2020</h1>
</font>
<h2 align="center">Session 08 - Working with JSON Files</h2>
## Introduction
`JSON` (*JavaScript Object Notation*) is a lightweight data-interchange format that humans can easily read and write. It is also easy for computers to parse and generate. `JSON` is based on the [JavaScript](https://www.javascript.com/ 'JavaScript') programming language. It is a language-independent text format that can be used from `Python`, `Perl`, and other languages. It is mainly used to transmit data between a server and web applications. `JSON` is built on two structures:
- A collection of name/value pairs. This is realized as an object, record, dictionary, hash table, keyed list, or associative array.
- An ordered list of values. This is realized as an array, vector, list, or sequence.
## JSON in Python
There are a number of packages that support `JSON` in `Python`, such as [metamagic.json](https://pypi.org/project/metamagic.json/ 'metamagic.json'), [jyson](http://opensource.xhaus.com/projects/jyson/wiki 'jyson'), [simplejson](https://simplejson.readthedocs.io/en/latest/ 'simplejson'), [Yajl-Py](http://pykler.github.io/yajl-py/ 'Yajl-Py'), [ultrajson](https://github.com/esnme/ultrajson 'ultrajson') and [json](https://docs.python.org/3.6/library/json.html 'json'). In this course we will use [json](https://docs.python.org/3.6/library/json.html 'json'), which is natively supported by `Python`. We can use [this site](https://jsonlint.com/ 'jsonlint'), which provides a `JSON` interface, to validate our `JSON` data.
Below is an example of `JSON` data.
```
{
    "nombre": "Jaime",
    "apellido": "Perez",
    "aficiones": ["correr", "ciclismo", "caminar"],
    "edad": 35,
    "hijos": [
        {
            "nombre": "Pedro",
            "edad": 6
        },
        {
            "nombre": "Alicia",
            "edad": 8
        }
    ]
}
As can be seen, `JSON` supports primitive types, strings and numbers, as well as nested lists and objects.
Note that this data representation is very similar to `Python` dictionaries.
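In fact, for simple data the two notations often coincide: parsing a JSON object yields an ordinary Python `dict` (a small sketch):

```python
import json

# A JSON object as a string...
text = '{"nombre": "Jaime", "edad": 35, "aficiones": ["correr", "ciclismo"]}'
# ...parses into a plain Python dictionary.
parsed = json.loads(text)

print(type(parsed))            # <class 'dict'>
print(parsed["aficiones"][0])  # correr
```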
```
{
    "articulo": [
        {
            "id": "01",
            "lenguaje": "JSON",
            "edicion": "primera",
            "autor": "Derrick Mwiti"
        },
        {
            "id": "02",
            "lenguaje": "Python",
            "edicion": "segunda",
            "autor": "Derrick Mwiti"
        }
    ],
    "blog": [
        {
            "nombre": "Datacamp",
            "URL": "datacamp.com"
        }
    ]
}
Let's rewrite it in a more familiar, compact form
```
{"articulo":[{"id":"01","lenguaje": "JSON","edicion": "primera","autor": "Derrick Mwiti"},
{"id":"02","lenguaje": "Python","edicion": "segunda","autor": "Derrick Mwiti"}],
"blog":[{"nombre": "Datacamp","URL":"datacamp.com"}]}
```
## Native `JSON` in `Python`
`Python` ships with a built-in package called `json` for encoding and decoding `JSON` data.
```
import json
```
## A bit of vocabulary
The process of encoding `JSON` is usually called serialization. This term refers to the transformation of data into a series of bytes (hence, serial) to be stored or transmitted across a network. You may also hear the term marshalling, but that is another discussion. Naturally, deserialization is the reciprocal process of decoding data that has been stored or delivered in the `JSON` standard.
What we are talking about here is reading and writing. Think of it like this: encoding is for writing data to disk, while decoding is for reading data into memory.
### Serializing to `JSON`
What happens after a computer processes lots of information? It needs to take a data dump. Accordingly, the `json` library exposes the `dump()` method for writing data to files. There is also a `dumps()` method (pronounced "*dump-s*") for writing to a `Python` string.
Simple `Python` objects are translated to `JSON` according to a fairly intuitive conversion.
Let's compare the data types in `Python` and `JSON`.
|**Python** | **JSON** |
|:---------:|:----------------:|
|dict |object |
|list|array |
|tuple| array|
|str| string|
|int| number|
|float| number|
|True| true|
|False| false|
|None| null|
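The table above can be checked directly with `dumps()` (note how a `tuple` comes out as a JSON array):

```python
import json

print(json.dumps({"a": 1}))    # {"a": 1}
print(json.dumps([1, 2, 3]))   # [1, 2, 3]
print(json.dumps((1, 2, 3)))   # [1, 2, 3]  <- tuple becomes an array
print(json.dumps(True))        # true
print(json.dumps(None))        # null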
### Serialization, an example
Suppose we have a `Python` object in memory that looks something like this:
```
data = {
    "president": {
        "name": "Zaphod Beeblebrox",
        "species": "Betelgeusian"
    }
}
print(type(data))
```
It is critical that this information be saved to disk, so the task is to write it to a file.
Using `Python`'s context manager, you can create a file called `data_file.json` and open it in write mode. (`JSON` files conveniently end in a `.json` extension.)
```
with open("data_file.json", "w") as write_file:
    json.dump(data, write_file)
```
Note that `dump()` takes two positional arguments:
1. the data object to be serialized, and
2. the file-like object the bytes will be written to.
Or, if you were so inclined to keep using this serialized `JSON` data in your program, you could write it to a native `Python` `str` object.
```
json_string = json.dumps(data)
print(type(json_string))
```
Note that the file-like object is absent, since you aren't actually writing to disk. Other than that, `dumps()` is just like `dump()`.
A `JSON` object has now been created and is ready to be worked with.
### Some useful keyword arguments
Remember, `JSON` is meant to be easily readable by humans, but readable syntax isn't enough if it's all squished together. Besides, you probably have a programming style different from the one shown here, and you may find it easier to read code when it is formatted to your taste.
***NOTE:*** The `dump()` and `dumps()` methods use the same keyword arguments.
The first option most people want to change is whitespace. You can use the `indent` keyword argument to specify the indentation size for nested structures. Check out the difference for yourself by using the data we defined above and running the following commands in a console:
```
json.dumps(data)
json.dumps(data, indent=4)
```
Another formatting option is the `separators` keyword argument. By default, this is a 2-tuple of separator strings (`", "`, `": "`), but a common alternative for compact `JSON` is (`","`, `":"`). Look at the example `JSON` again to see where these separators come into play.
There are others, such as `sort_keys`. You can find a complete list in the official [documentation](https://docs.python.org/3/library/json.html#basic-usage).
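A quick comparison of these keyword arguments (outputs shown assume Python 3.7+, where dictionaries preserve insertion order):

```python
import json

data = {"b": 2, "a": 1}

print(json.dumps(data))                         # {"b": 2, "a": 1}
print(json.dumps(data, separators=(",", ":")))  # {"b":2,"a":1}  <- compact
print(json.dumps(data, sort_keys=True))         # {"a": 1, "b": 2}
```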
### Deserializing JSON
We have done some very basic `JSON` so far; now it's time to whip it into shape. In the `json` library you will find `load()` and `loads()` for turning `JSON`-encoded data into `Python` objects.
Just like serialization, there is a simple conversion table for deserialization, though you can probably already guess what it looks like.
|**JSON** | **Python** |
|:---------:|:----------------:|
|object |dict |
|array |list|
|array|tuple |
|string|str |
|number|int |
|number|float |
|true|True |
|false|False |
|null|None |
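The reverse direction can also be checked quickly:

```python
import json

print(json.loads("true"))          # True
print(json.loads("null"))          # None
print(json.loads("3.5"))           # 3.5
print(type(json.loads("[1, 2]")))  # <class 'list'>
```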
Technically, this conversion isn't a perfect inverse of the serialization table. Basically, that means if you encode an object now and then decode it again later, you may not get back exactly the same object. I imagine it's a bit like teleportation: break my molecules down over here and put them back together over there. Am I still the same person?
In reality, it's probably more like having one friend translate something into Japanese and another friend translate it back into English. Anyway, the simplest example would be encoding a tuple and getting back a list after decoding, like so:
```
blackjack_hand = (8, "Q")
encoded_hand = json.dumps(blackjack_hand)
decoded_hand = json.loads(encoded_hand)
blackjack_hand == decoded_hand
type(blackjack_hand)
type(decoded_hand)
blackjack_hand == tuple(decoded_hand)
```
### Deserialization, an example
This time, imagine you have some data stored on disk that you would like to manipulate in memory. You will still use the context manager, but this time you will open the existing data file `data_file.json` in read mode.
```
with open("data_file.json", "r") as read_file:
    data = json.load(read_file)
```
So far things are pretty straightforward, but keep in mind that the result of this method could be any of the allowed data types from the conversion table. This only matters if you are loading data you haven't seen before. In most cases the root object will be a dictionary or a list.
If you have pulled `JSON` data from another program, or otherwise obtained a string of `JSON`-formatted data in `Python`, you can easily deserialize it with `loads()`, which naturally loads from a string:
```
my_json_string = """{
    "article": [
        {
            "id": "01",
            "language": "JSON",
            "edition": "first",
            "author": "Derrick Mwiti"
        },
        {
            "id": "02",
            "language": "Python",
            "edition": "second",
            "author": "Derrick Mwiti"
        }
    ],
    "blog": [
        {
            "name": "Datacamp",
            "URL": "datacamp.com"
        }
    ]
}
"""
to_python = json.loads(my_json_string)
print(type(to_python))
```
Now we are working with pure `JSON`. What you do from here on is up to you, so pay close attention to what you want to do, what you actually do, and the result you get.
## A real-world example
For this introductory example, we will use [JSONPlaceholder](https://jsonplaceholder.typicode.com/ "JSONPlaceholder"), a great source of fake `JSON` data for practice purposes.
First create a script file called `scratch.py`, or whatever you want to call it.
You will need to make an `API` request to the `JSONPlaceholder` service, so just use the `requests` package to do the heavy lifting. Add these imports at the top of your file:
```
import json
import requests
```
Now we'll make a request to the `JSONPlaceholder` `API`. If you are not familiar with `requests`, there is a handy `json()` method that will do all the work, but you can practice using the `json` library to deserialize the `text` attribute of the response object. It should look something like this:
```
response = requests.get("https://jsonplaceholder.typicode.com/todos")
todos = json.loads(response.text)
```
To check that the above worked (at least that it raised no errors), verify the type of `todos` and then query the first 10 items of the list.
```
todos == response.json()
type(todos)
todos[:10]
len(todos)
```
You can see the structure of the data by viewing the file in a browser, but here is a sample of part of it:
```
# part of the TODO JSON file
{
    "userId": 1,
    "id": 1,
    "title": "delectus aut autem",
    "completed": false
}
```
There are multiple users, each with a unique userId, and each task has a Boolean completed property. Can you determine which users have completed the most tasks?
```
# Map each userId to the number of completed TODOs for that user
todos_by_user = {}
# Increment the completed-TODO count for each user.
for todo in todos:
    if todo["completed"]:
        try:
            # Increment the existing user's count.
            todos_by_user[todo["userId"]] += 1
        except KeyError:
            # This user has not been seen; start their count at 1.
            todos_by_user[todo["userId"]] = 1
# Create a sorted list of (userId, num_complete) pairs.
top_users = sorted(todos_by_user.items(),
                   key=lambda x: x[1], reverse=True)
# Get the maximum number of completed TODOs.
max_complete = top_users[0][1]
# Create a list of all users who have completed the maximum number of TODOs.
users = []
for user, num_complete in top_users:
    if num_complete < max_complete:
        break
    users.append(str(user))
max_users = " and ".join(users)
```
Now you can manipulate the `JSON` data like a normal `Python` object.
Running the script produces the following results:
```
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
```
Next, we will create a `JSON` file that contains the completed TODOs for each of the users who completed the maximum number of TODOs.
All you need to do is filter `todos` and write the resulting list to a file. We will call the output file `filtered_data_file.json`. There are many ways to do this, but here is one:
```
# Define a function to filter out completed TODOs of users with the maximum number of completed TODOs.
def keep(todo):
    is_complete = todo["completed"]
    has_max_count = str(todo["userId"]) in users
    return is_complete and has_max_count

# Write the filtered TODOs to a file.
with open("filtered_data_file.json", "w") as data_file:
    filtered_todos = list(filter(keep, todos))
    json.dump(filtered_todos, data_file, indent=2)
```
You have filtered out all the data you don't need and saved what you do need to a brand-new file! Run the script again and check `filtered_data_file.json` to verify that everything worked. It will be in the same directory as `scratch.py` when you run it.
```
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
```
So far we have covered the basics of manipulating `JSON` data. Now let's try to go a little deeper.
## Encoding and decoding custom `Python` objects
Let's look at an example class from a very famous game (Dungeons & Dragons). What happens when we try to serialize that application's `Elf` class?
```
class Elf:
    def __init__(self, level, ability_scores=None):
        self.level = level
        self.ability_scores = {
            "str": 11, "dex": 12, "con": 10,
            "int": 16, "wis": 14, "cha": 13
        } if ability_scores is None else ability_scores
        self.hp = 10 + self.ability_scores["con"]

elf = Elf(level=4)
json.dumps(elf)
```
`Python` tells us that `Elf` is not serializable.
Although the `json` module can handle most built-in `Python` types, it doesn't understand how to encode custom data types by default. It's like trying to fit a square peg into a round hole: you need a buzzsaw and parental supervision.
## Simplifying data structures
How do we deal with more complex data structures? You could try to encode and decode the `JSON` by "*hand*", but there is a slightly cleverer solution that will save you some work. Instead of going straight from the custom data type to `JSON`, you can throw in an intermediate step.
All you need to do is represent your data in terms of the built-in types that `json` already understands. Essentially, you translate the more complex object into a simpler representation, which the `json` module then translates into `JSON`. It's like the transitive property in mathematics: if `A = B` and `B = C`, then `A = C`.
To get the hang of this, you'll need a complex object to play with. You could use any custom class you like, but `Python` has a built-in type called `complex` for representing complex numbers, and it isn't serializable by default.
```
z = 3 + 8j
type(z)
json.dumps(z)
```
A good question to ask yourself when working with custom types is: What is the minimum amount of information necessary to recreate this object? In the case of complex numbers, you only need to know the real and imaginary parts, both of which you can access as attributes on the `complex` object:
```
z.real
z.imag
```
Passing those same numbers to a `complex` constructor is enough to satisfy the `__eq__` comparison operator:
```
complex(3, 8) == z
```
Breaking custom data types down into their essential components is critical to both the serialization and deserialization processes.
## Encoding custom types
To translate a custom object into `JSON`, all you need to do is provide an encoding function to the `dump()` method's `default` parameter. The `json` module will call this function on any object that isn't natively serializable. Here is a simple encoding function you can use for practice ([here](https://www.programiz.com/python-programming/methods/built-in/isinstance "isinstance") you will find information about the `isinstance` function):
```
def encode_complex(z):
    if isinstance(z, complex):
        return (z.real, z.imag)
    else:
        type_name = z.__class__.__name__
        raise TypeError(f"Object of type '{type_name}' is not JSON serializable")
```
Note that the function is expected to raise a `TypeError` if it doesn't get the kind of object it expects. This way you avoid accidentally serializing any `Elves`. Now we can try encoding complex objects.
```
json.dumps(9 + 5j, default=encode_complex)
json.dumps(elf, default=encode_complex)
```
Why did we encode the complex number as a tuple? Is that the only option? Is it the best one? And what would happen if we needed to decode the object later?
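One possible answer (a sketch; `encode_complex_tagged` is a hypothetical name, not part of the tutorial's code): instead of a bare tuple, you could encode the number as a dictionary carrying its own type marker, which makes later decoding unambiguous.

```python
import json

# Hypothetical alternative encoder: a dict with a type marker
# instead of a bare (real, imag) tuple.
def encode_complex_tagged(z):
    if isinstance(z, complex):
        return {"__complex__": True, "real": z.real, "imag": z.imag}
    raise TypeError(f"Object of type '{z.__class__.__name__}' is not JSON serializable")

encoded = json.dumps(9 + 5j, default=encode_complex_tagged)
print(encoded)  # {"__complex__": true, "real": 9.0, "imag": 5.0}
```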
The other common approach is to subclass the standard `JSONEncoder` and override its `default()` method:
```
class ComplexEncoder(json.JSONEncoder):
    def default(self, z):
        if isinstance(z, complex):
            return (z.real, z.imag)
        else:
            return super().default(z)
```
Instead of raising the `TypeError` yourself, you can simply let the base class handle it. You can use this directly in the `dump()` method via the `cls` parameter, or by creating an instance of the encoder and calling its `encode()` method:
```
json.dumps(2 + 5j, cls=ComplexEncoder)
encoder = ComplexEncoder()
encoder.encode(3 + 6j)
```
## Decoding custom types
While the real and imaginary parts of a complex number are absolutely necessary, they are actually not quite sufficient to recreate the object. This is what happens when you try to encode a complex number with `ComplexEncoder` and then decode the result:
```
complex_json = json.dumps(4 + 17j, cls=ComplexEncoder)
json.loads(complex_json)
```
All you get back is a list, and you would have to pass the values into a `complex` constructor if you wanted that complex object again. Recall the comment about *teleportation*. What's missing is metadata, or information about the type of data you are encoding.
The question you really should be asking yourself is: What is the minimum amount of information that is both necessary and sufficient to recreate this object?
The `json` module expects all custom types to be expressed as objects in the `JSON` standard. For a change of pace, you can create a `JSON` file this time, called `complex_data.json`, and add the following object representing a complex number:
```
# JSON
{
    "__complex__": true,
    "real": 42,
    "imag": 36
}
```
See the clever bit? That "`__complex__`" key is the metadata we just talked about. It doesn't really matter what the associated value is. To get this little trick to work, all you need to do is verify that the key exists:
```
def decode_complex(dct):
    if "__complex__" in dct:
        return complex(dct["real"], dct["imag"])
    return dct
```
If "`__complex__`" isn't in the dictionary, you can just return the object and let the default decoder deal with it.
Every time the `load()` method attempts to parse an object, you are given the opportunity to intercede before the default decoder has its way with the data. You can do so by passing your decoding function to the `object_hook` parameter.
Now let's get back to business:
```
with open("complex_data.json") as complex_data:
    data = complex_data.read()
z = json.loads(data, object_hook=decode_complex)
type(z)
```
While `object_hook` might feel like the counterpart to the `dump()` method's `default` parameter, the analogy really begins and ends there.
This does not work with just one object, either. Try putting this list of complex numbers into `complex_data.json` and running the script again:
```
# JSON
[
    {
        "__complex__": true,
        "real": 42,
        "imag": 36
    },
    {
        "__complex__": true,
        "real": 64,
        "imag": 11
    }
]
```
```
with open("complex_data.json") as complex_data:
data = complex_data.read()
numbers = json.loads(data, object_hook=decode_complex)
```
If all goes well, you will get a list of complex objects:
```
type(numbers)
numbers
```
## Wrapping up
You can now wield the power of `JSON` for any and all of your `Python` needs.
While the examples you have worked with here are certainly simplistic, they illustrate a workflow you can apply to more general tasks:
- Import the `json` package.
- Read the data with `load()` or `loads()`.
- Process the data.
- Write the altered data with `dump()` or `dumps()`.
What you do with the data once it has been loaded into memory will depend on your use case. Generally, your goal will be to gather data from a source, extract useful information, and pass that information along or keep a record of it.
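The four-step workflow above can be sketched end to end with the standard library alone (the field names below are made up purely for illustration):

```python
import json

raw = '[{"name": "spam", "count": 3}, {"name": "eggs", "count": 5}]'

# 1. read the data
items = json.loads(raw)

# 2. process it
for item in items:
    item["count"] += 1

# 3. write the altered data back out
altered = json.dumps(items, indent=2)
print(altered)
```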
# Interdisciplinary Health Data Competition - Data Cleaning
## Import necessary libraries
```
import pandas as pd
import numpy as np
import warnings
```
## Agenda
Step 1 - Read in Data Files
- Read in drug and prescription files
- Inspect their initial format
- Inspect their initial data types
- Inspect data distribution
Step 2 - Check for Nulls
- Check for nulls by count
- Check for nulls by percentage
- Replace any known nulls
- Drop columns with ~20% or more missing values
- Drop rows with ~10% or less missing values
Step 3 - Try to Impute Nulls in Percent Change
- analyze head of dataframe
- analyze the cost per script rank
- remove original percent change features
Step 4 - Properly Join the Two Files
- join prescription 2012 to prescription 2016
- create new percent change features
- fill in calculations for the percent change columns in prescription
- join drug 2012 to drug 2016
Step 5 - Create Lists Where Applicable
Step 6 - Write cleaned data to new files
## Step 1 - Read in Data Files
```
# read in the first file and inspect
drug = pd.read_excel("DrugDetailMerged.xlsx")
drug.head()
# inspect data types
drug.dtypes
# Inspect data distribution
drug.describe()
# read in the first file and inspect
prescription = pd.read_excel("SummaryMerged.xlsx")
prescription.head()
# inspect data types
prescription.dtypes
# Inspect data distribution
prescription.describe()
```
## Step 2 - Check for Nulls
```
# count of nulls
drug.isnull().sum()
# Percentage of null values for easier analysis
drug.isnull().sum()* 100 / len(drug)
# count of nulls
prescription.isnull().sum()
# Percentage of null values for easier analysis
prescription.isnull().sum()* 100 / len(prescription)
# Replace any nulls with known values
# 'PROPNAME' NA means generic
drug['PROPNAME'] = drug['PROPNAME'].fillna('GENERIC')
drug.isnull().sum()* 100 / len(drug)
# Drop columns with ~20% or more missing values
#prescription = prescription.drop(['PCT_SCRIPTS_0_18','PCT_SCRIPTS_19_44', 'PCT_SCRIPTS_45_64',
# 'PCT_SCRIPTS_65_PLUS', 'PCT_URBAN_CORE', 'PCT_SUBURBAN',
# 'PCT_MICROPOLITAN', 'PCT_RURAL_SMALLTOWN'], axis = 1)
# Confirm columns were dropped
prescription.isnull().sum()* 100 / len(prescription)
# Drop rows with ~10% or less missing values and confirm rows were dropped
#prescription = prescription.dropna(axis=0, subset=['PCT_SCRIPTS_FEMALE', 'PCT_SCRIPTS_MALE'])
#prescription.isnull().sum()* 100 / len(prescription)
# Drop rows with ~10% or less missing values and confirm rows were dropped
drug = drug.dropna(axis=0, subset=['DOSAGE_FORM', 'ACTIVE_STRENGTH', 'ACTIVE_STRENGTH_UNIT', 'LABELERNAME', 'LAUNCH_YEAR', 'PRODUCT_NDC'])
drug.isnull().sum()* 100 / len(drug)
```
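The two drop rules above can also be expressed against a missing-value threshold instead of a hard-coded column list; a small sketch with an invented toy DataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [1, np.nan, 3, 4, 5],           # 20% missing
    "c": [np.nan, np.nan, 3, np.nan, 5]  # 60% missing
})

# drop columns with ~20% or more missing values
keep = df.columns[df.isnull().mean() < 0.20]
df = df[keep]

# then drop any rows that still contain a missing value
df = df.dropna(axis=0)
print(df.shape)
```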
## Step 3 - Try to Impute Nulls in Percent Change
```
# analyze head of dataframe
prescription.head()
# analyze cost per script rank
scriptRank = prescription[['YEAR','COST_PER_USER_RANK', 'COST_PER_SCRIPT_RANK', 'COST_PER_DAYS_SUPPLY_RANK',
'COST_PER_UNIT_DISPENSED_RANK', 'TOTAL_SCRIPTS_FILLED_RANK',
'PCT_CHANGE_COST_PER_SCRIPT_RANK']]
scriptRank.head()
```
It appears that 'PCT_CHANGE_COST_PER_SCRIPT_RANK' and 'PCT_CHANGE_COST_PER_SCRIPT' are computed year over year. Since I do not have the prior years (2011 and 2015), I cannot calculate the missing values.
Instead, I will make a new feature to compare the growth between 2012 and 2016. These will replace the 'PCT_CHANGE_COST_PER_SCRIPT_RANK' and 'PCT_CHANGE_COST_PER_SCRIPT' features. If the year is 2012, the percent change will be 0. If the year is 2016, I will calculate the percent change as follows:
$PCT\_CHANGE\_COST\_PER\_SCRIPT\_RANK = \frac{(COST\_PER\_SCRIPT\_RANK\_2016 - COST\_PER\_SCRIPT\_RANK\_2012)}{COST\_PER\_SCRIPT\_RANK\_2012} \times 100\%$
$PCT\_CHANGE\_COST\_PER\_SCRIPT = \frac{(COST\_PER\_SCRIPT\_2016 - COST\_PER\_SCRIPT\_2012)}{COST\_PER\_SCRIPT\_2012} \times 100\%$
These formulas will be applied later.
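As a small, self-contained illustration of that formula (the cost figures here are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "COST_PER_SCRIPT_2012": [50.0, 80.0],
    "COST_PER_SCRIPT_2016": [75.0, 60.0],
})

# percent change = (2016 - 2012) / 2012 * 100
df["PCT_CHANGE_COST_PER_SCRIPT"] = (
    (df["COST_PER_SCRIPT_2016"] - df["COST_PER_SCRIPT_2012"])
    / df["COST_PER_SCRIPT_2012"] * 100
)
print(df["PCT_CHANGE_COST_PER_SCRIPT"].tolist())  # [50.0, -25.0]
```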
```
# remove original percent change features and confirm removal
prescription = prescription.drop(['PCT_CHANGE_COST_PER_SCRIPT_RANK','PCT_CHANGE_COST_PER_SCRIPT'], axis = 1)
prescription.isnull().sum()* 100 / len(prescription)
```
## Step 4 - Properly Join the Two Files
```
# divide the prescription file into their two years
prescription_2012 = prescription.loc[prescription['YEAR'] == 2012]
prescription_2012 = prescription_2012.add_suffix('_2012')
prescription_2016 = prescription.loc[prescription['YEAR'] == 2016]
prescription_2016 = prescription_2016.add_suffix('_2016')
# join prescription file on ['RECORD_TYPE','NPROPNAME', 'THER_CLASS','PAYER']
prescription_merged = prescription_2012.merge(prescription_2016, how = "outer", left_on = ['RECORD_TYPE_2012','NPROPNAME_2012', 'THER_CLASS_2012','PAYER_2012'],
right_on = ['RECORD_TYPE_2016','NPROPNAME_2016', 'THER_CLASS_2016','PAYER_2016'])
prescription_merged.head()
# lost a significant portion of data with this merge (lost almost 40%)
# additionally, multiple NaN were introduced, but there is a business interpretation to this
# if the NaN is in 2012, then a new drug could have been created and then prescribed in 2016
# if the NaN is in 2016, then a drug is no longer being prescribed that was once available
prescription_merged.shape
```
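One way to quantify how many rows an outer merge fails to match on each side is pandas' `indicator` flag; a sketch with invented toy keys:

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "x": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "c", "d"], "y": [20, 30, 40]})

# indicator=True adds a '_merge' column labelling each row as
# 'left_only', 'right_only', or 'both'
merged = left.merge(right, how="outer", on="key", indicator=True)
print(merged["_merge"].value_counts())
```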
Upon inspection, it is possible to join the two files, but it will take a lot of work. For example, in 'NPROPNAME' the same item has been listed in two different ways: CALCIUM PANTOTHEN and CALCIUM P. I believe these are the same but will need outside information to confirm. Until then, these tables will not be joined.
Confirmed this fact on my own using https://www.drugbank.ca/salts/DBSALT000034 and https://www.drugs.com/international/calcium-p.html
Would love if someone else could confirm
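A lightweight way to flag likely duplicates such as CALCIUM PANTOTHEN vs. CALCIUM P, before asking a domain expert, is string similarity from the standard library (the 0.6 cutoff below is an arbitrary assumption, not a validated threshold):

```python
from difflib import SequenceMatcher
from itertools import combinations

names = ["CALCIUM PANTOTHEN", "CALCIUM P", "IBUPROFEN", "IBUPROFIN"]

# report pairs whose similarity ratio exceeds an arbitrary cutoff
for a, b in combinations(names, 2):
    ratio = SequenceMatcher(None, a, b).ratio()
    if ratio > 0.6:
        print(f"{a!r} ~ {b!r} (ratio {ratio:.2f})")
```

Any pair flagged this way would still need manual confirmation before merging.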
```
# compute the new percent change columns and check
prescription_merged['PCT_CHANGE_COST_PER_SCRIPT_RANK'] = (prescription_merged['COST_PER_SCRIPT_RANK_2016'] - prescription_merged['COST_PER_SCRIPT_RANK_2012'] ) / prescription_merged['COST_PER_SCRIPT_RANK_2012'] * 100
prescription_merged['PCT_CHANGE_COST_PER_SCRIPT'] = (prescription_merged['COST_PER_SCRIPT_2016'] - prescription_merged['COST_PER_SCRIPT_2012'] ) / prescription_merged['COST_PER_SCRIPT_2012'] * 100
prescription_merged.head()
# divide the drug file into their two years
drug_2012 = drug.loc[drug['YEAR'] == 2012]
drug_2012 = drug_2012.add_suffix('_2012')
drug_2016 = drug.loc[drug['YEAR'] == 2016]
drug_2016 = drug_2016.add_suffix('_2016')
# join drug file on ['NDC9','PRODUCT_NDC']
drug_merged = drug_2012.merge(drug_2016, how = "outer", left_on = ['NDC9_2012','PRODUCT_NDC_2012', 'RECORD_TYPE_2012','NPROPNAME_2012', 'THER_CLASS_2012','PAYER_2012'],
right_on = ['NDC9_2016','PRODUCT_NDC_2016', 'RECORD_TYPE_2016','NPROPNAME_2016', 'THER_CLASS_2016','PAYER_2016'])
drug_merged.head()
# lost a significant portion of data with this merge (lost almost 25%)
# additionally, multiple NaN were introduced, but there is a business interpretation to this
# if the NaN is in 2012, then a new drug could have been created and then prescribed in 2016
# if the NaN is in 2016, then a drug is no longer being prescribed that was once available
drug_merged.shape
```
## Step 5 - Convert to lists where applicable
```
# NPROPNAME in drug and prescription
prescription_merged['NPROPNAMES_2012'] = prescription_merged['NPROPNAME_2012'].str.split(",")
drug_merged['NPROPNAMES_2012'] = drug_merged['NPROPNAME_2012'].str.split(",")
prescription_merged['NPROPNAMES_2016'] = prescription_merged['NPROPNAME_2016'].str.split(",")
drug_merged['NPROPNAMES_2016'] = drug_merged['NPROPNAME_2016'].str.split(",")
# DOSAGE_FORM in drug
drug_merged['DOSAGE_FORMS_2012'] = drug_merged['DOSAGE_FORM_2012'].str.split(",")
drug_merged['DOSAGE_FORMS_2016'] = drug_merged['DOSAGE_FORM_2016'].str.split(",")
# ACTIVE_STRENGTH in drug
drug_merged['ACTIVE_STRENGTHS_2012'] = drug_merged['ACTIVE_STRENGTH_2012'].str.split(";")
drug_merged['ACTIVE_STRENGTHS_2016'] = drug_merged['ACTIVE_STRENGTH_2016'].str.split(";")
# ACTIVE_STRENGTH_UNIT in drug
drug_merged['ACTIVE_STRENGTH_UNITS_2012'] = drug_merged['ACTIVE_STRENGTH_UNIT_2012'].str.split(";")
drug_merged['ACTIVE_STRENGTH_UNITS_2016'] = drug_merged['ACTIVE_STRENGTH_UNIT_2016'].str.split(";")
# drop original columns
prescription_merged = prescription_merged.drop(['NPROPNAME_2012', 'NPROPNAME_2016'], axis = 1)
drug_merged = drug_merged.drop(['NPROPNAME_2012', 'NPROPNAME_2016', 'DOSAGE_FORM_2012', 'DOSAGE_FORM_2016',
'ACTIVE_STRENGTH_2012', 'ACTIVE_STRENGTH_2016', 'ACTIVE_STRENGTH_UNIT_2012',
'ACTIVE_STRENGTH_UNIT_2016'], axis = 1)
```
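Once a column holds lists, `DataFrame.explode` (pandas >= 0.25) can expand each list element onto its own row, which is often the next step for analysis; a toy sketch with an invented `drug_id` column:

```python
import pandas as pd

df = pd.DataFrame({
    "drug_id": [1, 2],
    "DOSAGE_FORMS": [["TABLET", "CAPSULE"], ["SYRUP"]],
})

# each list element gets its own row; the other columns are repeated
exploded = df.explode("DOSAGE_FORMS")
print(exploded)
```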
## Step 6 - Write cleaned data to new files
```
drug_merged.to_excel("C:/Users/LMoor/Downloads/Drug_Clean_v2.xlsx")
prescription_merged.to_excel("C:/Users/LMoor/Downloads/Prescription_Clean_v2.xlsx")
```
# Import Libraries
```
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
```
# Sentences
```
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
sentence2 = "Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal"
sentence3 = "Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth."
```
# Regex Function
```
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(sentence)
print(result)
result.draw()
```
# Preprocessing - I
```
# Sentence read by Keerthivasan S M :D
stop_words = set(stopwords.words('english'))
l = list(sentence2.split(" "))
print(l)
# Geeks for Geeks
tokenized = sent_tokenize(sentence2)
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
```
# Regex Function - I
```
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
result.draw()
```
# Assignment
```
stop_words = set(stopwords.words('english'))
l = list(sentence3.split(". "))
print(l)
for i in range(len(l)):
l1 = l[i].split(" ")
tokenized = sent_tokenize(l[i])
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
grammar = "NP: {<DT>?<JJ>*<NN>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
print()
for i in range(len(l)):
l1 = l[i].split(" ")
tokenized = sent_tokenize(l[i])
for i in tokenized:
wordsList = nltk.word_tokenize(i)
wordsList = [w for w in wordsList if not w in stop_words]
tagged = nltk.pos_tag(wordsList)
grammar = "VP: {<MD>?<VB.*><NP|PP>}"
c = nltk.RegexpParser(grammar)
result = c.parse(tagged)
print(result)
print()
```
<a href="https://colab.research.google.com/github/shivangisachan20/ML-DL-Projects/blob/master/Copy_of_pytorch_quick_start.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# PyTorch 1.2 Quickstart with Google Colab
In this code tutorial we will learn how to quickly train a model to understand some of PyTorch's basic building blocks to train a deep learning model. This notebook is inspired by the ["Tensorflow 2.0 Quickstart for experts"](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb#scrollTo=DUNzJc4jTj6G) notebook.
After completion of this tutorial, you should be able to import data, transform it, and efficiently feed the data in batches to a convolution neural network (CNN) model for image classification.
**Author:** [Elvis Saravia](https://twitter.com/omarsar0)
**Complete Code Walkthrough:** [Blog post](https://medium.com/dair-ai/pytorch-1-2-quickstart-with-google-colab-6690a30c38d)
```
!pip3 install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
```
Note: We will be using the latest stable version of PyTorch, so be sure to run the command above to install it; at the time of writing this tutorial, that was 1.2.0. We import PyTorch below using the `torch` module.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
print(torch.__version__)
```
## Import The Data
The first step before training the model is to import the data. We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) which is like the Hello World dataset of machine learning.
Besides importing the data, we will also do a few more things:
- We will transform the data into tensors using the `transforms` module
- We will use `DataLoader` to build convenient data loaders or what are referred to as iterators, which makes it easy to efficiently feed data in batches to deep learning models.
- As hinted above, we will also create batches of the data by setting the `batch_size` parameter inside the data loader. Notice we use batches of `32` in this tutorial, but you can change it to `64` if you like. I encourage you to experiment with different batch sizes.
```
BATCH_SIZE = 32
## transformations
transform = transforms.Compose(
[transforms.ToTensor()])
## download and load training dataset
trainset = torchvision.datasets.MNIST(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=2)
## download and load testing dataset
testset = torchvision.datasets.MNIST(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE,
shuffle=False, num_workers=2)
```
## Exploring the Data
As a practitioner and researcher, I always spend a bit of time and effort exploring and understanding the dataset. It's fun, and it is good practice to ensure that everything is in order.
Let's check what the train and test dataset contains. I will use `matplotlib` to print out some of the images from our dataset.
```
import matplotlib.pyplot as plt
import numpy as np
## functions to show an image
def imshow(img):
#img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
## get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
## show images
imshow(torchvision.utils.make_grid(images))
```
**EXERCISE:** Try to understand what the code above is doing. This will help you to better understand your dataset before moving forward.
Let's check the dimensions of a batch.
```
for images, labels in trainloader:
print("Image batch dimensions:", images.shape)
print("Image label dimensions:", labels.shape)
break
```
## The Model
Now, using the classical deep learning framework pipeline, let's build a model with a single convolutional layer.
Here are a few notes for those who are beginning with PyTorch:
- The model below consists of an `__init__()` portion which is where you include the layers and components of the neural network. In our model, we have a convolutional layer denoted by `nn.Conv2d(...)`. We are dealing with an image dataset that is in a grayscale so we only need one channel going in, hence `in_channels=1`. We hope to get a nice representation of this layer, so we use `out_channels=32`. Kernel size is 3, and for the rest of parameters we use the default values which you can find [here](https://pytorch.org/docs/stable/nn.html?highlight=conv2d#conv2d).
- We use 2 back to back dense layers or what we refer to as linear transformations to the incoming data. Notice for `d1` I have a dimension which looks like it came out of nowhere. 128 represents the size we want as output and the (`26*26*32`) represents the dimension of the incoming data. If you would like to find out how to calculate those numbers refer to the [PyTorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=linear#conv2d). In short, the convolutional layer transforms the input data into a specific dimension that has to be considered in the linear layer. The same applies for the second linear transformation (`d2`) where the dimension of the output of the previous linear layer was added as `in_features=128`, and `10` is just the size of the output which also corresponds to the number of classes.
- After each one of those layers, we also apply an activation function such as `ReLU`. For prediction purposes, we then apply a `softmax` layer to the last transformation and return the output of that. Note that `nn.CrossEntropyLoss`, which we use for training below, applies `log_softmax` internally, so models more commonly return the raw logits; the explicit softmax here keeps the outputs directly interpretable as probabilities.
```
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
# 28x28x1 => 26x26x32
self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)
self.d1 = nn.Linear(26 * 26 * 32, 128)
self.d2 = nn.Linear(128, 10)
def forward(self, x):
# 32x1x28x28 => 32x32x26x26
x = self.conv1(x)
x = F.relu(x)
# flatten => 32 x (32*26*26)
x = x.flatten(start_dim = 1)
# 32 x (32*26*26) => 32x128
x = self.d1(x)
x = F.relu(x)
# logits => 32x10
logits = self.d2(x)
out = F.softmax(logits, dim=1)
return out
```
As I have done in my previous tutorials, I always encourage to test the model with 1 batch to ensure that the output dimensions are what we expect.
```
## test the model with 1 batch
model = MyModel()
for images, labels in trainloader:
print("batch size:", images.shape)
out = model(images)
print(out.shape)
break
```
## Training the Model
Now we are ready to train the model but before that we are going to setup a loss function, an optimizer and a function to compute accuracy of the model.
```
learning_rate = 0.001
num_epochs = 5
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = MyModel()
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
## compute accuracy
def get_accuracy(logit, target, batch_size):
''' Obtain accuracy for training round '''
corrects = (torch.max(logit, 1)[1].view(target.size()).data == target.data).sum()
accuracy = 100.0 * corrects/batch_size
return accuracy.item()
```
Now it's time for training.
```
for epoch in range(num_epochs):
train_running_loss = 0.0
train_acc = 0.0
model = model.train()
## training step
for i, (images, labels) in enumerate(trainloader):
images = images.to(device)
labels = labels.to(device)
## forward + backprop + loss
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
## update model params
optimizer.step()
train_running_loss += loss.detach().item()
train_acc += get_accuracy(logits, labels, BATCH_SIZE)
model.eval()
print('Epoch: %d | Loss: %.4f | Train Accuracy: %.2f' \
%(epoch, train_running_loss / i, train_acc/i))
```
We can also compute accuracy on the testing dataset to see how well the model performs on the image classification task. As you can see below, our basic CNN model is performing very well on the MNIST classification task.
```
test_acc = 0.0
for i, (images, labels) in enumerate(testloader, 0):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
test_acc += get_accuracy(outputs, labels, BATCH_SIZE)
print('Test Accuracy: %.2f'%( test_acc/i))
```
**EXERCISE:** As a way to practice, try to include the testing part inside the code where I was outputting the training accuracy, so that you can also keep testing the model on the testing data as you proceed with the training steps. This is useful because sometimes you don't want to wait until your model has completed training to actually test the model with the testing data.
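One way to structure that exercise is to factor evaluation into a helper you call at the end of every epoch. The sketch below is deliberately framework-agnostic (plain-Python stand-ins for the model and loaders) so it only shows the control flow; in the notebook you would pass `model`, `testloader`, and `get_accuracy` instead of these toy substitutes:

```python
def evaluate(predict, batches, accuracy_fn):
    """Average an accuracy metric over a sequence of (inputs, labels) batches."""
    total, n = 0.0, 0
    for inputs, labels in batches:
        total += accuracy_fn(predict(inputs), labels)
        n += 1
    return total / n if n else 0.0

# toy stand-ins: the "model" echoes its input, accuracy is the exact-match rate
predict = lambda xs: xs
accuracy = lambda preds, labels: 100.0 * sum(
    p == l for p, l in zip(preds, labels)) / len(labels)

test_batches = [([0, 1, 1], [0, 1, 0]), ([1, 1], [1, 1])]
for epoch in range(2):
    # ... training step would go here ...
    print(f"Epoch {epoch} | Test Accuracy: "
          f"{evaluate(predict, test_batches, accuracy):.2f}")
```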
## Final Words
That's it for this tutorial! Congratulations! You are now able to implement a basic CNN model in PyTorch for image classification. If you would like, you can further extend the CNN model by adding more convolution layers and max pooling, but as you saw, you don't really need it here as results look good. If you are interested in implementing a similar image classification model using RNNs see the references below.
## References
- [Building RNNs is Fun with PyTorch and Google Colab](https://colab.research.google.com/drive/1NVuWLZ0cuXPAtwV4Fs2KZ2MNla0dBUas)
- [CNN Basics with PyTorch by Sebastian Raschka](https://github.com/rasbt/deeplearning-models/blob/master/pytorch_ipynb/cnn/cnn-basic.ipynb)
- [Tensorflow 2.0 Quickstart for experts](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb#scrollTo=DUNzJc4jTj6G)
# Talking Head Anime from a Single Image 2: More Expressive (Manual Poser Tool)
**Instruction**
1. Run the four cells below, one by one, in order, by clicking the "Play" button to the left of each. Wait for each cell to finish before going to the next one.
2. Scroll down to the end of the last cell, and play with the GUI.
**Constraints on Images**
1. Must be an image of a single humanoid anime character.
2. The head must be roughly contained in the middle box.
3. Must have an alpha channel.
**Links**
* Github repository: http://github.com/pkhungurn/talking-head-anime-2-demo
* Project writeup: http://pkhungurn.github.io/talking-head-anime-2/
```
# Clone the repository
%cd /content
!git clone https://github.com/pkhungurn/talking-head-anime-2-demo.git
# CD into the repository directory.
%cd /content/talking-head-anime-2-demo
# Download model files
!wget -O data/combiner.pt https://www.dropbox.com/s/at2r3v22xgyoxtk/combiner.pt?dl=0
!wget -O data/eyebrow_decomposer.pt https://www.dropbox.com/s/pbomb5vgens03rk/eyebrow_decomposer.pt?dl=0
!wget -O data/eyebrow_morphing_combiner.pt https://www.dropbox.com/s/yk9m5ok03e0ub1f/eyebrow_morphing_combiner.pt?dl=0
!wget -O data/face_morpher.pt https://www.dropbox.com/s/77sza8qkiwd4qq5/face_morpher.pt?dl=0
!wget -O data/two_algo_face_rotator.pt https://www.dropbox.com/s/ek261g9sspf0cqi/two_algo_face_rotator.pt?dl=0
import PIL.Image
import io
from io import StringIO, BytesIO
import IPython.display
import numpy
import ipywidgets
from tha2.util import extract_PIL_image_from_filelike, resize_PIL_image, extract_pytorch_image_from_PIL_image, convert_output_image_from_torch_to_numpy
import tha2.poser.modes.mode_20
import time
import threading
import torch
FRAME_RATE = 30.0
DEVICE_NAME = 'cuda'
device = torch.device(DEVICE_NAME)
last_torch_input_image = None
torch_input_image = None
def show_pytorch_image(pytorch_image, output_widget=None):
output_image = pytorch_image.detach().cpu()
numpy_image = numpy.uint8(numpy.rint(convert_output_image_from_torch_to_numpy(output_image) * 255.0))
pil_image = PIL.Image.fromarray(numpy_image, mode='RGBA')
IPython.display.display(pil_image)
input_image_widget = ipywidgets.Output(
layout={
'border': '1px solid black',
'width': '256px',
'height': '256px'
})
upload_input_image_button = ipywidgets.FileUpload(
accept='.png',
multiple=False,
layout={
'width': '256px'
}
)
output_image_widget = ipywidgets.Output(
layout={
'border': '1px solid black',
'width': '256px',
'height': '256px'
}
)
eyebrow_dropdown = ipywidgets.Dropdown(
options=["troubled", "angry", "lowered", "raised", "happy", "serious"],
value="troubled",
description="Eyebrow:",
)
eyebrow_left_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Left:",
readout=True,
readout_format=".2f"
)
eyebrow_right_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Right:",
readout=True,
readout_format=".2f"
)
eye_dropdown = ipywidgets.Dropdown(
options=["wink", "happy_wink", "surprised", "relaxed", "unimpressed", "raised_lower_eyelid"],
value="wink",
description="Eye:",
)
eye_left_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Left:",
readout=True,
readout_format=".2f"
)
eye_right_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Right:",
readout=True,
readout_format=".2f"
)
mouth_dropdown = ipywidgets.Dropdown(
options=["aaa", "iii", "uuu", "eee", "ooo", "delta", "lowered_corner", "raised_corner", "smirk"],
value="aaa",
description="Mouth:",
)
mouth_left_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Value:",
readout=True,
readout_format=".2f"
)
mouth_right_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description=" ",
readout=True,
readout_format=".2f",
disabled=True,
)
def update_mouth_sliders(change):
if mouth_dropdown.value == "lowered_corner" or mouth_dropdown.value == "raised_corner":
mouth_left_slider.description = "Left:"
mouth_right_slider.description = "Right:"
mouth_right_slider.disabled = False
else:
mouth_left_slider.description = "Value:"
mouth_right_slider.description = " "
mouth_right_slider.disabled = True
mouth_dropdown.observe(update_mouth_sliders, names='value')
iris_small_left_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Left:",
readout=True,
readout_format=".2f"
)
iris_small_right_slider = ipywidgets.FloatSlider(
value=0.0,
min=0.0,
max=1.0,
step=0.01,
description="Right:",
readout=True,
readout_format=".2f",
)
iris_rotation_x_slider = ipywidgets.FloatSlider(
value=0.0,
min=-1.0,
max=1.0,
step=0.01,
description="X-axis:",
readout=True,
readout_format=".2f"
)
iris_rotation_y_slider = ipywidgets.FloatSlider(
value=0.0,
min=-1.0,
max=1.0,
step=0.01,
description="Y-axis:",
readout=True,
readout_format=".2f",
)
head_x_slider = ipywidgets.FloatSlider(
value=0.0,
min=-1.0,
max=1.0,
step=0.01,
description="X-axis:",
readout=True,
readout_format=".2f"
)
head_y_slider = ipywidgets.FloatSlider(
value=0.0,
min=-1.0,
max=1.0,
step=0.01,
description="Y-axis:",
readout=True,
readout_format=".2f",
)
neck_z_slider = ipywidgets.FloatSlider(
value=0.0,
min=-1.0,
max=1.0,
step=0.01,
description="Z-axis:",
readout=True,
readout_format=".2f",
)
control_panel = ipywidgets.VBox([
eyebrow_dropdown,
eyebrow_left_slider,
eyebrow_right_slider,
ipywidgets.HTML(value="<hr>"),
eye_dropdown,
eye_left_slider,
eye_right_slider,
ipywidgets.HTML(value="<hr>"),
mouth_dropdown,
mouth_left_slider,
mouth_right_slider,
ipywidgets.HTML(value="<hr>"),
ipywidgets.HTML(value="<center><b>Iris Shrinkage</b></center>"),
iris_small_left_slider,
iris_small_right_slider,
ipywidgets.HTML(value="<center><b>Iris Rotation</b></center>"),
iris_rotation_x_slider,
iris_rotation_y_slider,
ipywidgets.HTML(value="<hr>"),
ipywidgets.HTML(value="<center><b>Head Rotation</b></center>"),
head_x_slider,
head_y_slider,
neck_z_slider,
])
controls = ipywidgets.HBox([
ipywidgets.VBox([
input_image_widget,
upload_input_image_button
]),
control_panel,
ipywidgets.HTML(value=" "),
output_image_widget,
])
poser = tha2.poser.modes.mode_20.create_poser(device)
pose_parameters = tha2.poser.modes.mode_20.get_pose_parameters()
pose_size = poser.get_num_parameters()
last_pose = torch.zeros(1, pose_size).to(device)
iris_small_left_index = pose_parameters.get_parameter_index("iris_small_left")
iris_small_right_index = pose_parameters.get_parameter_index("iris_small_right")
iris_rotation_x_index = pose_parameters.get_parameter_index("iris_rotation_x")
iris_rotation_y_index = pose_parameters.get_parameter_index("iris_rotation_y")
head_x_index = pose_parameters.get_parameter_index("head_x")
head_y_index = pose_parameters.get_parameter_index("head_y")
neck_z_index = pose_parameters.get_parameter_index("neck_z")
def get_pose():
pose = torch.zeros(1, pose_size)
eyebrow_name = f"eyebrow_{eyebrow_dropdown.value}"
eyebrow_left_index = pose_parameters.get_parameter_index(f"{eyebrow_name}_left")
eyebrow_right_index = pose_parameters.get_parameter_index(f"{eyebrow_name}_right")
pose[0, eyebrow_left_index] = eyebrow_left_slider.value
pose[0, eyebrow_right_index] = eyebrow_right_slider.value
eye_name = f"eye_{eye_dropdown.value}"
eye_left_index = pose_parameters.get_parameter_index(f"{eye_name}_left")
eye_right_index = pose_parameters.get_parameter_index(f"{eye_name}_right")
pose[0, eye_left_index] = eye_left_slider.value
pose[0, eye_right_index] = eye_right_slider.value
mouth_name = f"mouth_{mouth_dropdown.value}"
if mouth_name == "mouth_lowered_corner" or mouth_name == "mouth_raised_corner":
mouth_left_index = pose_parameters.get_parameter_index(f"{mouth_name}_left")
mouth_right_index = pose_parameters.get_parameter_index(f"{mouth_name}_right")
pose[0, mouth_left_index] = mouth_left_slider.value
pose[0, mouth_right_index] = mouth_right_slider.value
else:
mouth_index = pose_parameters.get_parameter_index(mouth_name)
pose[0, mouth_index] = mouth_left_slider.value
pose[0, iris_small_left_index] = iris_small_left_slider.value
pose[0, iris_small_right_index] = iris_small_right_slider.value
pose[0, iris_rotation_x_index] = iris_rotation_x_slider.value
pose[0, iris_rotation_y_index] = iris_rotation_y_slider.value
pose[0, head_x_index] = head_x_slider.value
pose[0, head_y_index] = head_y_slider.value
pose[0, neck_z_index] = neck_z_slider.value
return pose.to(device)
display(controls)
def update(change):
global last_pose
global last_torch_input_image
if torch_input_image is None:
return
needs_update = False
if last_torch_input_image is None:
needs_update = True
else:
if (torch_input_image - last_torch_input_image).abs().max().item() > 0:
needs_update = True
pose = get_pose()
if (pose - last_pose).abs().max().item() > 0:
needs_update = True
if not needs_update:
return
output_image = poser.pose(torch_input_image, pose)[0]
with output_image_widget:
output_image_widget.clear_output(wait=True)
show_pytorch_image(output_image, output_image_widget)
last_torch_input_image = torch_input_image
last_pose = pose
def upload_image(change):
global torch_input_image
for name, file_info in upload_input_image_button.value.items():
content = io.BytesIO(file_info['content'])
if content is not None:
pil_image = resize_PIL_image(extract_PIL_image_from_filelike(content))
w, h = pil_image.size
if pil_image.mode != 'RGBA':
with input_image_widget:
input_image_widget.clear_output(wait=True)
display(ipywidgets.HTML("Image must have an alpha channel!!!"))
else:
torch_input_image = extract_pytorch_image_from_PIL_image(pil_image).to(device)
with input_image_widget:
input_image_widget.clear_output(wait=True)
show_pytorch_image(torch_input_image, input_image_widget)
update(None)
upload_input_image_button.observe(upload_image, names='value')
eyebrow_dropdown.observe(update, 'value')
eyebrow_left_slider.observe(update, 'value')
eyebrow_right_slider.observe(update, 'value')
eye_dropdown.observe(update, 'value')
eye_left_slider.observe(update, 'value')
eye_right_slider.observe(update, 'value')
mouth_dropdown.observe(update, 'value')
mouth_left_slider.observe(update, 'value')
mouth_right_slider.observe(update, 'value')
iris_small_left_slider.observe(update, 'value')
iris_small_right_slider.observe(update, 'value')
iris_rotation_x_slider.observe(update, 'value')
iris_rotation_y_slider.observe(update, 'value')
head_x_slider.observe(update, 'value')
head_y_slider.observe(update, 'value')
neck_z_slider.observe(update, 'value')
```
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
from __future__ import unicode_literals
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr
from scipy.special import logsumexp  # scipy.misc.logsumexp was removed in SciPy 1.3
sns.set(color_codes=True)
def annotate_upper_left(ax, text, annotation_offset=(-50, 30)):
ax.annotate(text, xy=(0, 1), xycoords='axes fraction', fontsize=18,
xytext=annotation_offset, textcoords='offset points',
ha='left', va='top')
f = np.load('../output/hinge_results.npz')
temps = f['temps']
indices_to_remove = f['indices_to_remove']
actual_loss_diffs = f['actual_loss_diffs']
predicted_loss_diffs = f['predicted_loss_diffs']
influences = f['influences']
sns.set_style('white')
fontsize=14
fig, axs = plt.subplots(1, 4, sharex=False, sharey=False, figsize=(13, 3))
# Graph of approximations
x = np.arange(-5, 15, 0.01)
ts = [0.001, 0.01, 0.1]
y = 1 - x
y[y < 0] = 0
axs[0].plot(x, y, label='t=0 (Hinge)')
for t in ts:
# y = t * np.log(1 + np.exp(-(x-1)/t))
y = t * logsumexp(
np.vstack((np.zeros_like(x), -(x-1)/t)),
axis=0)
axs[0].plot(x, y, label='t=%s' % t)
axs[0].set_xlim((0.8, 1.2))
axs[0].set_xticks((0.8, 0.9, 1.0, 1.1, 1.2))
axs[0].set_ylim((0, 0.3))
axs[0].legend(fontsize=fontsize-4)
axs[0].set_xlabel('s', fontsize=fontsize)
axs[0].set_ylabel('SmoothHinge(s)', fontsize=fontsize)
# Hinge loss
ax_idx = 1
temp_idx = 0
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim(-max_value,max_value)
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].set_ylabel('Predicted diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0 (Hinge)', fontsize=fontsize)
# t = 0.001
ax_idx = 2
temp_idx = 1
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim((-0.025, 0.025))
axs[ax_idx].set_aspect('equal')
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0.001', fontsize=fontsize)
# t = 0.1
ax_idx = 3
temp_idx = 2
smooth_influence_preds = influences[temp_idx, indices_to_remove[0, :]]
print(temps[temp_idx], pearsonr(actual_loss_diffs[0, :], smooth_influence_preds)[0])
axs[ax_idx].scatter(actual_loss_diffs[0, :], smooth_influence_preds)
max_value = 1.1 * np.max([np.max(np.abs(actual_loss_diffs[0, :])), np.max(np.abs(smooth_influence_preds))])
axs[ax_idx].set_xlim((-0.025, 0.025))
axs[ax_idx].set_ylim((-0.025, 0.025))
axs[ax_idx].set_aspect('equal')
axs[ax_idx].set_xlabel('Actual diff in loss', fontsize=fontsize)
axs[ax_idx].plot([-0.025, 0.025], [-0.025, 0.025], 'k-', alpha=0.2, zorder=1)
axs[ax_idx].set_title('t=0.1', fontsize=fontsize)
# plt.setp(axs[ax_idx].get_yticklabels(), visible=False)
def move_ax_right(ax, dist):
bbox = ax.get_position()
bbox.x0 += dist
bbox.x1 += dist
ax.set_position(bbox)
move_ax_right(axs[1], 0.05)
move_ax_right(axs[2], 0.06)
move_ax_right(axs[3], 0.07)
annotate_upper_left(axs[0], '(a)', (-50, 15))
annotate_upper_left(axs[1], '(b)', (-50, 15))
# plt.savefig(
# '../figs/fig-hinge.png',
# dpi=600, bbox_inches='tight')
```
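As a side note, the `logsumexp` form of SmoothHinge used above is a numerically stable rewrite of the naive formula kept in the commented-out line. A minimal sketch checking the equivalence on a safe input range (assuming a modern SciPy, where `logsumexp` lives in `scipy.special`):

```python
import numpy as np
from scipy.special import logsumexp  # scipy.misc.logsumexp was removed in SciPy 1.3

def smooth_hinge_naive(s, t):
    # Direct evaluation: t * log(1 + exp(-(s - 1) / t)); can overflow for small t.
    return t * np.log(1 + np.exp(-(s - 1) / t))

def smooth_hinge_lse(s, t):
    # Stable evaluation via logsumexp over [0, -(s - 1) / t], as in the plot above.
    return t * logsumexp(np.vstack((np.zeros_like(s), -(s - 1) / t)), axis=0)

s = np.array([0.9, 1.0, 1.1])
print(np.allclose(smooth_hinge_naive(s, 0.1), smooth_hinge_lse(s, 0.1)))  # True
```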
# Notebook example
Installing some necessary packages:
```
!pip install ipywidgets
!jupyter nbextension enable --py widgetsnbextension
!jupyter labextension install @jupyter-widgets/jupyterlab-manager
!pip install xgboost
```
**It is necessary to add the project root to `sys.path` so the project structure works properly:**
```
import sys
sys.path.append("../../")
```
From this point, it's on you!
---
```
import pandas as pd
from ml.data_source.spreadsheet import Spreadsheet
from ml.preprocessing.preprocessing import Preprocessing
from ml.model.trainer import TrainerSklearn
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
df = Spreadsheet().get_data('../../../data/raw/train.csv')
df.columns
p = Preprocessing()
df = p.clean_data(df)
df = p.categ_encoding(df)
df.head()
X = df.drop(columns=["Survived"])
y = df["Survived"]
# Use the same random state as the one passed to TrainerSklearn().train()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
rf = TrainerSklearn().train(X, y, classification=True,
algorithm=RandomForestClassifier,
preprocessing=p,
data_split=('train_test', {'test_size':.3}),
random_state=123)
rf.get_metrics()
rf.get_columns()
rf.predict_proba(X_test, binary=True)
# Predicting new data
def predict_new(X, model, probs=True):
X = p.clean_data(X)
X = p.categ_encoding(X)
columns = model.get_columns()
for col in columns:
if col not in X.columns:
X[col] = 0
print(X)
if probs:
return model.predict_proba(X)
else:
return model.predict(X)
new_data = pd.DataFrame({
'Pclass':3,
'Sex': 'male',
'Age':4
}, index=[0])
new_data
predict_new(new_data, rf)
```
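The column-alignment step inside `predict_new` (adding any training-time column absent from the new data as zero, e.g. unseen dummy levels) can be sketched with plain pandas, independent of the project's helper classes; the column names here are purely illustrative:

```python
import pandas as pd

def align_columns(X, train_columns):
    # Add any column seen at training time but absent from X as 0,
    # then reorder the columns to match the training layout.
    X = X.copy()
    for col in train_columns:
        if col not in X.columns:
            X[col] = 0
    return X[train_columns]

new = pd.DataFrame({"Sex_male": [1], "Age": [4]})
aligned = align_columns(new, ["Age", "Sex_male", "Sex_female"])
print(list(aligned.columns))  # ['Age', 'Sex_male', 'Sex_female']
```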
**Data Quality:**
```
from ml.preprocessing.dataquality import DataQuality
import great_expectations as ge
import warnings
warnings.filterwarnings('ignore')
df = Spreadsheet().get_data('../../../data/raw/train.csv')
X_train, X_test = train_test_split(df, test_size=0.3, random_state=123)
X_train.shape, X_test.shape
dq = DataQuality(discrete_cat_cols=['Sex', 'Pclass'])
df_ge = dq.perform(X_train, target='Survived')
#df_ge.save_expectation_suite('/opt/ml/processing/output/expectations/expectations.json')
df_ge.save_expectation_suite('../../../output/expectations.json')
X_test.drop(columns=['Survived'], inplace=True)
df_ge = ge.dataset.PandasDataset(X_test)
ge_val = df_ge.validate(expectation_suite='../../../output/expectations.json', only_return_failures=False)
ge_val
```
**Get local explainer for each instance:**
```
# Get local explainer
res = rf.local_interpret(X_test, len(X_test.columns))
res
```
# Segregation Index Decomposition
## Table of Contents
* [Decomposition framework of the PySAL *segregation* module](#Decomposition-framework-of-the-PySAL-*segregation*-module)
* [Map of the composition of the Metropolitan area of Los Angeles](#Map-of-the-composition-of-the-Metropolitan-area-of-Los-Angeles)
* [Map of the composition of the Metropolitan area of New York](#Map-of-the-composition-of-the-Metropolitan-area-of-New-York)
* [Composition Approach (default)](#Composition-Approach-%28default%29)
* [Share Approach](#Share-Approach)
* [Dual Composition Approach](#Dual-Composition-Approach)
* [Inspecting a different index: Relative Concentration](#Inspecting-a-different-index:-Relative-Concentration)
This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.
First, let's import all the needed libraries.
```
import pandas as pd
import pickle
import numpy as np
import matplotlib.pyplot as plt
import segregation
from segregation.decomposition import DecomposeSegregation
```
In this example, we are going to use census data that you must download yourself, following the guidelines explained in https://github.com/spatialucr/geosnap/blob/master/examples/01_getting_started.ipynb (download the full count file for 2010). The zipped download will have a name that looks like `LTDB_Std_All_fullcount.zip`. After extracting the zipped content, the filepath of the data should look like this:
```
#filepath = '~/LTDB_Std_2010_fullcount.csv'
```
Then, we read the data:
```
df = pd.read_csv(filepath, encoding = "ISO-8859-1", sep = ",")
```
We are going to work with the variable for nonhispanic black people (`nhblk10`) and the total population of each unit (`pop10`). So, let's read the map of all census tracts of the US and select some specific columns for the analysis:
```
# This file can be download here: https://drive.google.com/open?id=1gWF0OCn6xuR_WrEj7Ot2jY6KI2t6taIm
with open('data/tracts_US.pkl', 'rb') as input:
map_gpd = pickle.load(input)
map_gpd['INTGEOID10'] = pd.to_numeric(map_gpd["GEOID10"])
gdf_pre = map_gpd.merge(df, left_on = 'INTGEOID10', right_on = 'tractid')
gdf = gdf_pre[['GEOID10', 'geometry', 'pop10', 'nhblk10']]
```
In this notebook, we use the Metropolitan Statistical Areas (MSA) of the US (we also use the word 'cities' here to refer to them). So, let's read the correspondence table that relates each tract id to its Metropolitan area...
```
# You can download this file here: https://drive.google.com/open?id=10HUUJSy9dkZS6m4vCVZ-8GiwH0EXqIau
with open('data/tract_metro_corresp.pkl', 'rb') as input:
tract_metro_corresp = pickle.load(input).drop_duplicates()
```
..and merge them with the previous data.
```
merged_gdf = gdf.merge(tract_metro_corresp, left_on = 'GEOID10', right_on = 'geoid10')
```
We now build the composition variable (`compo`), which is the frequency of the chosen group divided by the total population of each unit. Let's inspect the first rows of the data.
```
merged_gdf['compo'] = np.where(merged_gdf['pop10'] == 0, 0, merged_gdf['nhblk10'] / merged_gdf['pop10'])
merged_gdf.head()
```
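As an aside, the `np.where` call above is a common guard against division by zero in units with no population; a minimal standalone illustration with made-up counts:

```python
import numpy as np

pop = np.array([100., 0., 50.])   # total population per unit (illustrative)
group = np.array([20., 0., 25.])  # focus-group counts per unit (illustrative)
with np.errstate(divide="ignore", invalid="ignore"):
    compo = np.where(pop == 0, 0, group / pop)  # empty units get composition 0
print(compo)  # [0.2 0.  0.5]
```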
Now, we choose two different metropolitan areas to compare their degrees of segregation.
## Map of the composition of the Metropolitan area of Los Angeles
```
la_2010 = merged_gdf.loc[(merged_gdf.name == "Los Angeles-Long Beach-Anaheim, CA")]
la_2010.plot(column = 'compo', figsize = (10, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
```
## Map of the composition of the Metropolitan area of New York
```
ny_2010 = merged_gdf.loc[(merged_gdf.name == 'New York-Newark-Jersey City, NY-NJ-PA')]
ny_2010.plot(column = 'compo', figsize = (20, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
```
We first compare the Gini index of both cities. Let's import the `GiniSeg` class from `segregation`, fit both indices, and check the difference in the point estimates.
```
from segregation.aspatial import GiniSeg
G_la = GiniSeg(la_2010, 'nhblk10', 'pop10')
G_ny = GiniSeg(ny_2010, 'nhblk10', 'pop10')
G_la.statistic - G_ny.statistic
```
Let's decompose this difference according to *Rey, S. et al "Comparative Spatial Segregation Analytics". Forthcoming*. You can check the options available in this decomposition below:
```
help(DecomposeSegregation)
```
## Composition Approach (default)
The difference of -0.10653 fitted previously can be decomposed into two components: the spatial component and the attribute component. Let's estimate both, respectively.
```
DS_composition = DecomposeSegregation(G_la, G_ny)
DS_composition.c_s
DS_composition.c_a
```
So, the first thing to notice is that the attribute component, i.e., the part given by a difference in the population structure (in this case, the composition), plays a more important role in the difference, since it has a higher absolute value.
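As a sanity check, the two components should add up (to within numerical error) to the overall difference in the index fitted above; a tiny sketch with illustrative component values (the real ones come from `DS_composition.c_s` and `DS_composition.c_a`):

```python
# Hypothetical component values, chosen so they sum to the fitted difference.
c_s = -0.0284  # spatial component (illustrative)
c_a = -0.0781  # attribute component (illustrative)
print(round(c_s + c_a, 5))  # -0.1065, approximately G_la - G_ny
```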
The difference in the composition can be inspected in the plotting method with the type `cdfs`:
```
DS_composition.plot(plot_type = 'cdfs')
```
If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument `plot_type = 'maps'`
The first and second contexts are Los Angeles and New York, respectively.
```
DS_composition.plot(plot_type = 'maps')
```
*Note that in all plotting methods, the title presents each component of the decomposition performed.*
## Share Approach
The share approach takes into consideration the share of each group in each city. Since it uses both the focus group's and the complementary group's shares to build the "counterfactual" total population of each unit, it is of interest to inspect all four of these cdf's.
*ps.: The share is the population frequency of each group in each unit over the total population of that respective group.*
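A tiny numeric illustration of the share, with made-up counts for the focus group across three units:

```python
import numpy as np

group = np.array([20., 30., 50.])  # focus-group counts per unit (illustrative)
share = group / group.sum()        # each unit's share of the group's total population
print(share)  # [0.2 0.3 0.5]
```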
```
DS_share = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'share')
DS_share.plot(plot_type = 'cdfs')
```
We can see that the curves of the two contexts are closer to each other, which represents a drop in the importance of the population structure (attribute component) to -0.062. However, this component still outweighs the spatial component (-0.045) in terms of absolute magnitude.
```
DS_share.plot(plot_type = 'maps')
```
We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.
## Dual Composition Approach
The `dual_composition` approach is similar to the composition approach. However, it uses also the counterfactual composition of the cdf of the complementary group.
```
DS_dual = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'dual_composition')
DS_dual.plot(plot_type = 'cdfs')
```
It is possible to see that the component values are very similar with slight changes from the `composition` approach.
```
DS_dual.plot(plot_type = 'maps')
```
The counterfactual distributions are virtually the same (though not identical) as those from the `composition` approach.
## Inspecting a different index: Relative Concentration
```
from segregation.spatial import RelativeConcentration
RCO_la = RelativeConcentration(la_2010, 'nhblk10', 'pop10')
RCO_ny = RelativeConcentration(ny_2010, 'nhblk10', 'pop10')
RCO_la.statistic - RCO_ny.statistic
RCO_DS_composition = DecomposeSegregation(RCO_la, RCO_ny)
RCO_DS_composition.c_s
RCO_DS_composition.c_a
```
It is possible to note that, in this case, the spatial component is playing a much more relevant role in the decomposition.
# Visualising PAG neurons in CCF space
In this notebook we will load the .csv file containing the metadata from our PAG_scRNAseq project and use the CCF coordinates obtained after registration with Sharp-Track to visualise our sequenced cells with Brainrender. We will also write some code to generate some figures for the thesis.
### 1 | Import packages and set defaults
```
import brainrender
from brainrender import Scene, Animation
from brainrender.actors import Points
from vedo import settings as vsettings
from brainrender.video import VideoMaker
import pandas as pd # used to load the csv
import numpy as np # used to set beginning and end of a custom slice
from vedo import embedWindow, Plotter, show # <- this will be used to render an embedded scene
# // DEFAULT SETTINGS //
# You can see all the default settings here: https://github.com/brainglobe/brainrender/blob/19c63b97a34336898871d66fb24484e8a55d4fa7/brainrender/settings.py
# --------------------------- brainrender settings --------------------------- #
# Change some of the default settings
brainrender.settings.BACKGROUND_COLOR = "white" # color of the background window (defaults to "white", try "blackboard")
brainrender.settings.DEFAULT_ATLAS = "allen_mouse_25um" # default atlas
brainrender.settings.DEFAULT_CAMERA = "three_quarters" # Default camera settings (orientation etc. see brainrender.camera.py)
brainrender.settings.INTERACTIVE = False # rendering interactive ?
brainrender.settings.LW = 2 # e.g. for silhouettes
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
brainrender.settings.SCREENSHOT_SCALE = 1 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "cartoon" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.SHOW_AXES = False
brainrender.settings.WHOLE_SCREEN = True # If true render window is full screen
brainrender.settings.OFFSCREEN = False
# ------------------------------- vedo settings ------------------------------ #
# For transparent background with screenshots
vsettings.screenshotTransparentBackground = True # vedo for transparent bg
vsettings.useFXAA = False # This needs to be false for transparent bg
# // SET PARAMETERS //
# Save folder
save_folder = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output"
```
#### 1.1 | Check atlas availability
Brainrender integrates several atlases that can be used to visualise and explore brain anatomy. We can check which atlases are available, take a look at the ones we have already downloaded, and render a basic scene with the axes to get an idea of the overall picture.
```
# Run this to get a list of the available atlases:
from bg_atlasapi import show_atlases
show_atlases()
# Find the dimensions of an atlas:
from bg_atlasapi.bg_atlas import BrainGlobeAtlas
atlas = BrainGlobeAtlas("allen_mouse_25um")
reference_image = atlas.reference
print(reference_image.shape)
```
Currently, among the atlases we can use for mouse data:
* allen_mouse_10um_v1.2 - Volume dimension of \[AP, SI, LR\] equivalent to \[1320, 800, 1140\]
* allen_mouse_25um_v1.2 - Volume dimension of \[AP, SI, LR\] equivalent to \[528, 320, 456\] (default atlas)
* kim_mouse_10um_v1.0 - Volume dimension of \[AP, SI, LR\] equivalent to \[1320, 800, 1140\]
* kim_unified_25um_v1.0 - Volume dimension of \[AP, SI, LR\] equivalent to \[528, 320, 456\]
* kim_unified_50um_v1.0 - Volume dimension of \[AP, SI, LR\] equivalent to \[264, 160, 228\]
* osten_mouse_10um_v1.1 - Volume dimension of \[AP, SI, LR\] equivalent to \[1320, 800, 1140\]
The CCF coordinates we obtained from Sharp-Track are at 10um resolution, with a Volume dimension of \[AP, SI, LR\] equivalent to \[1320, 800, 1140\]
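A quick sanity check on the dimensions listed above: each atlas should cover the same physical volume, so voxel counts multiplied by the voxel resolution should agree across resolutions:

```python
# Voxel dimensions [AP, SI, LR] taken from the list above.
dims_10um = (1320, 800, 1140)
dims_25um = (528, 320, 456)

extent_10 = tuple(d * 10 for d in dims_10um)  # physical extent in um
extent_25 = tuple(d * 25 for d in dims_25um)
print(extent_10 == extent_25)  # True: both span 13200 x 8000 x 11400 um
```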
```
embedWindow(None) # <- this will make your scene popup
# Create a scene with the with the preferred atlas and check the dimensions
brainrender.settings.SHOW_AXES = True
scene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = 'allen_mouse_25um', screenshots_folder = save_folder, plotter = None)
scene.render(interactive = True, camera = "sagittal", zoom = 1) # make sure to press 'Esc' to close not 'q' or kernel dies
```
### 2 | Load metadata including CCF coordinates
Other options to load registered points include `add_cells_from_file` and `get_probe_points_from_sharptrack`.
```
# Load data
pag_data = pd.read_csv("D:\\Dropbox (UCL - SWC)\\Project_transcriptomics\\analysis\\PAG_scRNAseq_brainrender\\PAG_scRNAseq_metadata_211018.csv")
# Look at the first 5 rows of the metadata
pag_data.head()
# List all column names in data
pag_data.columns
```
#### 2.1 | Scaling up coordinates
The CCF coordinates were obtained by registering images to the Allen Brain Atlas using Sharp-Track (see Shamash et al. bioRxiv 2018 and https://github.com/cortex-lab/allenCCF), which yields coordinates at a resolution of 10 micrometers. In the Allen Brain Atlas, a point at coordinates \[1, 0, 0\] is at 10um from the origin (in other words, 1 unit of the atlas space equals 10um). However, BrainRender's space is at 1um resolution, so the first thing we need to do is to scale up the coordinate values by 10 to get them to match correctly.
```
# Inspect the CCF coordinates for each cell
pag_data[["cell.id", "cell.type", "PAG.area", "CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"]]
# Scale up data
sharptrack_to_brainrender_scale_factor = 10 # Sharp-Track uses a 10um reference atlas so the coordinates need to be scaled to match brainrender's
pag_data["CCF.AllenAP"] *= sharptrack_to_brainrender_scale_factor
pag_data["CCF.AllenDV"] *= sharptrack_to_brainrender_scale_factor
pag_data["CCF.AllenML"] *= sharptrack_to_brainrender_scale_factor
pag_data[["cell.id", "cell.type", "CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"]]
```
### 3 | Rendering cells with Brainrender
Now that we have the coordinates at the right scale, we can render our cells in Brainrender and colour them according to any metadata we want. We will prepare different renderings in each of the following code chunks.
#### 3.1 | Selecting a subset of cells
We can first take a look at how to use the metadata to select subsets of cells. This will be useful to either render just some of the cells, or to color cells based on some metadata info.
```
# We can subset cells using any criteria we want. For instance, let's keep cells coming from each hemisphere in separate variables:
cells_hemisphere_right = pag_data.loc[pag_data["PAG.hemisphere"] == "right"]
cells_hemisphere_left = pag_data.loc[pag_data["PAG.hemisphere"] == "left"]
cells_hemisphere_right.head()
# We can also use multiple criteria at the same time, such as hemisphere and cell type:
vgat_cells_hemisphere_left = pag_data.loc[(pag_data["PAG.hemisphere"] == "left")&(pag_data["cell.type"] == "VGAT")]
vgat_cells_hemisphere_left.head()
```
We can now render a scene adding each subset independently, so we can tweak variables such as color or size separately. However, brainrender requires an array of 3 values as coordinates, so we need to get the values out instead of subsetting the dataframe and providing it as input when creating a scene:
```
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
cells_hemisphere_right[column_names].head().values # brainrender needs a numpy array as coordinates. Without the .values we get a dataframe.
```
Now we can render a scene and color the cells on the right and left hemisphere differently:
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = False # Set this back to False
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# // CREATE SCENE //
scene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = 'Aspirated Cells', screenshots_folder = save_folder, plotter = None)
# // ADD REGIONS AND CELLS//
scene.add_brain_region("PAG", alpha = 0.1, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
scene.add(Points(cells_hemisphere_right[column_names].values, name = "right hemisphere", colors = "salmon", alpha = 1, radius = 20, res = 16))
scene.add(Points(cells_hemisphere_left[column_names].values, name = "left hemisphere", colors = "skyblue", alpha = 1, radius = 20, res = 16))
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "sagittal", zoom = 1) # make sure to press 'Esc' to close not 'q' or kernel dies
```
### 4 | Rendering VGAT and VGluT2 cells for Chapter 2
We can follow the strategy above and assign each cell type to a different variable and render them separately.
#### 4.1 | Plot brain, PAG, and aspirated cells
We have increased the screenshot resolution, changed the save folder, and increased the radius of the cells so they are visible.
We will use "plastic" as the shader style in this overview figure, show the axes so we can draw a scale bar, and render using the top camera at 1.1x zoom.
Once rendered, save a screenshot by pressing "s".
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = True
brainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "plastic" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
save_folder_thesis = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output\figures_thesis_chapter_2"
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# Color cells according to whether they are excitatory or inhibitory:
vgat_cells = pag_data.loc[pag_data["cell.type"] == "VGAT"]
vglut2_cells = pag_data.loc[pag_data["cell.type"] == "VGluT2"]
# // CREATE SCENE //
scene = Scene(root = True, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)
# // ADD REGIONS AND CELLS//
scene.add_brain_region("PAG", alpha = 0.2, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
#scene.add_brain_region("SCm", alpha = 0.1, color = "olivedrab", silhouette = None, hemisphere = "both")
scene.add(Points(vgat_cells[column_names].values, name = "vgat", colors = "#FF8080", alpha = 1, radius = 30, res = 16)) # colour is #FF8080 in figures, but salmon is the same
scene.add(Points(vglut2_cells[column_names].values, name = "vglut2", colors = "#0F99B2", alpha = 1, radius = 30, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "top", zoom = 1.1) # cameras can be "sagittal", "sagittal2", "frontal", "top", "top_side", "three_quarters"
```
#### 4.2 | Plot PAG and aspirated cells at different angles
Now we want to render and save some images in which we zoom into the PAG to visualise the registered cells. We can color by cell type or by PAG subdivision.
Colouring cells by cell_type (VGAT or VGluT2), we will save the following screenshots:
* Axes True&False, Zoom 4, Top Camera
* Axes True&False, Zoom 4, Sagittal Camera
* Axes True&False, Zoom 4, Three quarters Camera
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = True
brainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "cartoon" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
save_folder_thesis = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output\figures_thesis_chapter_2"
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# Color cells according to whether they are excitatory or inhibitory:
vgat_cells = pag_data.loc[pag_data["cell.type"] == "VGAT"]
vglut2_cells = pag_data.loc[pag_data["cell.type"] == "VGluT2"]
# // CREATE SCENE //
scene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)
# // ADD REGIONS AND CELLS//
scene.add_brain_region("PAG", alpha = 0.2, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
#scene.add_brain_region("SCm", alpha = 0.1, color = "olivedrab", silhouette = None, hemisphere = "both")
scene.add(Points(vgat_cells[column_names].values, name = "vgat", colors = "#FF8080", alpha = 1, radius = 20, res = 16)) # colour is #FF8080 in figures, but salmon is the same
scene.add(Points(vglut2_cells[column_names].values, name = "vglut2", colors = "#0F99B2", alpha = 1, radius = 20, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "top", zoom = 4) # cameras can be "sagittal", "sagittal2", "frontal", "top", "top_side", "three_quarters"
```
Colouring cells by cell_type (VGAT or VGluT2), we will save the following screenshots:
* Axes True&False, Zoom 4, Frontal Camera, sliced
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = True
brainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "cartoon" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
save_folder_thesis = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output\figures_thesis_chapter_2"
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# Color cells according to whether they are excitatory or inhibitory:
vgat_cells = pag_data.loc[pag_data["cell.type"] == "VGAT"]
vglut2_cells = pag_data.loc[pag_data["cell.type"] == "VGluT2"]
# // CREATE SCENE //
scene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)
# // ADD REGIONS AND CELLS//
pag = scene.add_brain_region("PAG", alpha = 0.2, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
#scene.add_brain_region("SCm", alpha = 0.1, color = "olivedrab", silhouette = None, hemisphere = "both")
scene.add(Points(vgat_cells[column_names].values, name = "vgat", colors = "#FF8080", alpha = 1, radius = 20, res = 16)) # colour is #FF8080 in figures, but salmon is the same
scene.add(Points(vglut2_cells[column_names].values, name = "vglut2", colors = "#0F99B2", alpha = 1, radius = 20, res = 16)) # colour is #0F99B2 in figures, but skyblue looks good here
# // SLICE SCENE //
slice_start = pag.centerOfMass() + np.array([-150, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)
slice_end = pag.centerOfMass() + np.array([+800, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)
for p, n in zip((slice_start, slice_end), (1, -1)):
plane = scene.atlas.get_plane(pos = p, norm = (n, 0, 0))
scene.slice(plane, actors = pag, close_actors = True)
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "frontal", zoom = 4) # cameras can be "sagittal", "sagittal2", "frontal", "top", "top_side", "three_quarters"
```
Colouring cells by PAG subdivision, we will save the following screenshots:
* Axes False, Zoom 4, Top Camera
* Axes False, Zoom 4, Sagittal Camera
* Axes False, Zoom 4, Three quarters Camera
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = False # Set this back to False
brainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "cartoon" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
save_folder_thesis = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output\figures_thesis_chapter_2"
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# Color cells according to their subdivision:
dmpag_cells = pag_data.loc[pag_data["PAG.area"] == "dmpag"]
dlpag_cells = pag_data.loc[pag_data["PAG.area"] == "dlpag"]
lpag_cells = pag_data.loc[pag_data["PAG.area"] == "lpag"]
vlpag_cells = pag_data.loc[pag_data["PAG.area"] == "vlpag"]
# // CREATE SCENE //
scene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)
# // ADD REGIONS AND CELLS//
scene.add_brain_region("PAG", alpha = 0.2, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
#scene.add_brain_region("SCm", alpha = 0.1, color = "olivedrab", silhouette = None, hemisphere = "both")
scene.add(Points(dmpag_cells[column_names].values, name = "dmpag", colors = "cornflowerblue", alpha = 1, radius = 20, res = 16))
scene.add(Points(dlpag_cells[column_names].values, name = "dlpag", colors = "darkorange", alpha = 1, radius = 20, res = 16))
scene.add(Points(lpag_cells[column_names].values, name = "lpag", colors = "forestgreen", alpha = 1, radius = 20, res = 16))
scene.add(Points(vlpag_cells[column_names].values, name = "vlpag", colors = "firebrick", alpha = 1, radius = 20, res = 16))
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "top", zoom = 4) # choose one of the cameras: sagittal, sagittal2, frontal, top, top_side, three_quarters
```
Colouring cells by PAG subdivision, we will save the following screenshots:
* Axes False, Zoom 4, Frontal Camera, sliced
```
embedWindow(None) # <- this will make your scene popup
brainrender.settings.SHOW_AXES = False # Set this back to False
brainrender.settings.SCREENSHOT_SCALE = 2 # values >1 yield higher resolution screenshots
brainrender.settings.SHADER_STYLE = "cartoon" # affects the look of rendered brain regions, values can be: ["metallic", "plastic", "shiny", "glossy", "cartoon"] and can be changed in interactive mode
brainrender.settings.ROOT_COLOR = [0.4, 0.4, 0.4] # color of the overall brain model's actor (defaults to [0.8, 0.8, 0.8])
brainrender.settings.ROOT_ALPHA = 0.2 # transparency of the overall brain model's actor (defaults to 0.2)
save_folder_thesis = r"D:\Dropbox (UCL)\Project_transcriptomics\analysis\PAG_scRNAseq_brainrender\output\figures_thesis_chapter_2"
# Create a variable containing the XYZ coordinates of the cells.
column_names = ["CCF.AllenAP", "CCF.AllenDV", "CCF.AllenML"] # name of the columns containing the CCF coordinates
# Color cells according to their subdivision:
dmpag_cells = pag_data.loc[pag_data["PAG.area"] == "dmpag"]
dlpag_cells = pag_data.loc[pag_data["PAG.area"] == "dlpag"]
lpag_cells = pag_data.loc[pag_data["PAG.area"] == "lpag"]
vlpag_cells = pag_data.loc[pag_data["PAG.area"] == "vlpag"]
# // CREATE SCENE //
scene = Scene(root = False, atlas_name = 'allen_mouse_25um', inset = False, title = None, screenshots_folder = save_folder_thesis, plotter = None)
# // ADD REGIONS AND CELLS//
pag = scene.add_brain_region("PAG", alpha = 0.2, color = "darkgoldenrod", silhouette = None, hemisphere = "both")
#scene.add_brain_region("SCm", alpha = 0.1, color = "olivedrab", silhouette = None, hemisphere = "both")
scene.add(Points(dmpag_cells[column_names].values, name = "dmpag", colors = "cornflowerblue", alpha = 1, radius = 20, res = 16))
scene.add(Points(dlpag_cells[column_names].values, name = "dlpag", colors = "darkorange", alpha = 1, radius = 20, res = 16))
scene.add(Points(lpag_cells[column_names].values, name = "lpag", colors = "forestgreen", alpha = 1, radius = 20, res = 16))
scene.add(Points(vlpag_cells[column_names].values, name = "vlpag", colors = "firebrick", alpha = 1, radius = 20, res = 16))
# // SLICE SCENE //
slice_start = pag.centerOfMass() + np.array([-150, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)
slice_end = pag.centerOfMass() + np.array([+800, 0, 0]) # X microns from center of mass towards the nose (if positive) or cerebellum (if negative)
for p, n in zip((slice_start, slice_end), (1, -1)):
plane = scene.atlas.get_plane(pos = p, norm = (n, 0, 0))
scene.slice(plane, actors = pag, close_actors = True)
# // RENDER INTERACTIVELY //
# Render interactively. You can press "s" to take a screenshot
scene.render(interactive = True, camera = "frontal", zoom = 4) # choose one of the cameras: sagittal, sagittal2, frontal, top, top_side, three_quarters
# Get the hex codes for colors
import matplotlib
print(matplotlib.colors.to_hex("cornflowerblue")) # #6495ed
print(matplotlib.colors.to_hex("darkorange")) # #ff8c00
print(matplotlib.colors.to_hex("forestgreen")) # #228b22
print(matplotlib.colors.to_hex("firebrick")) # #b22222
```
```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib
# if you are plotting at the rig computer and want to plot the last debugging
# run images, set this to True.
plot_at_rig = True
processed_is_CDS_subtracted = True # whether to halve the processed_img size
# to help explore possible settings
reshape_factor = 2 # width is divided by this, height is multiplied by this
# set to False if the image was already remapped in PhotoZ
remap_quadrants = False
############ COMMON TO CHANGE: ###################
prog = 7 # sizes from 0-smallest to 7-largest
# which images to plot
plot_all = False
# Used if plot_all is False
image_no_selected = [355] # [0, 150, 350]
path = "C:\\Users\\RedShirtImaging\\Desktop\\PhotoZ_Jan2021\\PhotoZ_upgrades\\PhotoZ\\"
if not plot_at_rig:
path = ".\\data\\"
plot_all = True
date = "6-29-21"
path += date + "\\prog" + str(prog) + "\\"
# (internal) dimensions of the quadrants
'''
(= 7 - m_program in PhotoZ)
"DM2K_2048x512.cfg", 7 "200 Hz 2048x1024"
"DM2K_2048x50.cfg", 6 "2000 Hz 2048x100"
"DM2K_1024x160.cfg", 5 "1000 Hz 1024x320"
"DM2K_1024x80.cfg", 4 "2000 Hz 1024x160"
"DM2K_1024x80.cfg", 3 "2000 Hz 512x160"
"DM2K_1024x40.cfg", 2 "4000 Hz 512x80"
"DM2K_1024x30.cfg", 1 "5000 Hz 256x60"
"DM2K_1024x20.cfg" 0 "7500 Hz 256x40"
'''
dimensions = {
7 : {'height': 512,
'width': 2048 },
6 : {'height': 50,
'width': 2048 },
5 : {'height': 160,
'width': 1024 },
4 : {'height': 80,
'width': 1024 },
3 : {'height': 80,
'width': 1024},
2 : {'height': 40,
'width': 1024},
1 : {'height': 30,
'width': 1024},
0 : {'height': 20,
'width': 1024}
}
height = int(dimensions[prog]['height'] * reshape_factor)
width = int(dimensions[prog]['width'] // reshape_factor)
quadrantSize = height * width * 4
def load_image(image_version, quadrantSize, height, delim=' ', load_raw=False):
height *= 2
final = np.genfromtxt(path + image_version + ".txt", delimiter = delim) # delimiter is configurable via delim (defaults to space)
raw_img = None
if load_raw:
raw = np.genfromtxt(path + "raw-" + image_version + ".txt", delimiter = delim)
raw_img = raw[:quadrantSize,1].reshape(height,-1)
final_size = quadrantSize
if processed_is_CDS_subtracted:
quadrantSize = quadrantSize // 2
final_height = height
if not remap_quadrants: # if PhotoZ already moved quadrants
height //= 2
final_img = final[:final_size,1].reshape(height,-1) # CDS-corrected image is half the size; reset rows were removed
print("Shaping final image to:",final_size, quadrantSize // height)
return raw_img, final_img
def plot_image(raw_img, final_img, image_version, image_no, plot_raw=False):
fig = plt.figure()
n_plots = int(plot_raw) + 1
ax1 = fig.add_subplot(n_plots, 1, 1)
if plot_raw:
ax2 = fig.add_subplot(n_plots, 1, 2)
ax2.set_title("Raw, RLI frame " + str(image_no))
ax2.imshow(raw_img[1:,:], aspect='auto', cmap='jet')
fig.subplots_adjust(hspace = 0.6)
ax1.set_title("Processed, RLI frame " + str(image_no))
ax1.imshow(final_img, aspect='auto', cmap='jet')
plt.show()
# save to date-specific directory
plt.savefig(path + 'readout-RLI-' + image_version + ".png")
def remapQuadrants(img):
# Place second half to the right of the first half
h,w = img.shape
q0 = img[:h//4,:]
q1 = img[h//4:h//2,:]
q2 = img[h//2:3*h//4,:]
q3 = img[3*h//4:,:]
img = np.zeros((h//2, w*2))
img[:h//4,:w] = q0 # upper left
img[h//4:,:w] = q1 # lower left
img[:h//4,w:] = q2 # upper right
img[h//4:,w:] = q3 # lower right
return img
def normalizeQuadrants(img):
h,w = img.shape
img[:h//2,:w//2] = (img[:h//2,:w//2] - np.min(img[:h//2,:w//2])) / np.max(img[:h//2,:w//2])
img[:h//2,w//2:] = (img[:h//2,w//2:] - np.min(img[:h//2,w//2:])) / np.max(img[:h//2,w//2:])
img[h//2:,:w//2] = (img[h//2:,:w//2] - np.min(img[h//2:,:w//2])) / np.max(img[h//2:,:w//2])
img[h//2:,w//2:] = (img[h//2:,w//2:] - np.min(img[h//2:,w//2:])) / np.max(img[h//2:,w//2:])
return img
def plot_remapped(raw_img, final_img, image_version, image_no, plot_raw=False):
raw_img_2 = remapQuadrants(raw_img)
final_img_2 = remapQuadrants(final_img)
plot_image(raw_img_2,
final_img_2,
image_version,
image_no,
plot_raw=plot_raw)
for image_no in [0, 150, 350, 355, 450]:
if plot_all or (image_no in image_no_selected):
try:
image_version = "full-out" + str(image_no)
raw_img, final_img = load_image(image_version, quadrantSize, height)
plot_image(raw_img, final_img, image_version, image_no)
if remap_quadrants:
plot_remapped(raw_img, final_img, image_version, image_no)
print("Displayed frame " + str(image_no))
except Exception as exc:
print("Can't plot frame " + str(image_no) + ": " + str(exc))
```
# Introduction to Digital Earth Australia <img align="right" src="../Supplementary_data/dea_logo.jpg">
* **Acknowledgement**: This notebook was originally created by [Digital Earth Australia (DEA)](https://www.ga.gov.au/about/projects/geographic/digital-earth-australia) and has been modified for use in the EY Data Science Program
* **Prerequisites**: Users of this notebook should have a basic understanding of:
* How to run a [Jupyter notebook](01_Jupyter_notebooks.ipynb)
## Background
[Digital Earth Australia](https://www.ga.gov.au/dea) (DEA) is a digital platform that catalogues large amounts of Earth observation data covering continental Australia.
It is underpinned by the [Open Data Cube](https://www.opendatacube.org/) (ODC), an open source software package that has an ever growing number of users, contributors and implementations.
The ODC and DEA platforms are designed to:
* Catalogue large amounts of Earth observation data
* Provide a Python-based API for high-performance querying and data access
* Enable users to easily perform exploratory data analysis
* Allow scalable continent-scale processing of the stored data
* Track the provenance of data to allow for quality control and updates
The DEA program catalogues data from a range of satellite sensors and has adopted processes and terminology that users should be aware of to enable efficient querying and use of the datasets stored within.
This notebook introduces these important concepts and forms the basis of understanding for the remainder of the notebooks in this beginner's guide.
Resources to further explore these concepts are recommended at the end of the notebook.
## Description
This introduction to DEA will briefly introduce the ODC and review the types of data catalogued in the DEA platform.
It will also cover commonly-used terminology for measurements within product datasets.
Topics covered include:
* A brief introduction to the ODC
* A review of the satellite sensors that provide data to DEA
* An introduction to analysis ready data and the processes to make it
* DEA's data naming conventions
* Coordinate reference scheme
* Derived products
***
## Open Data Cube

The [Open Data Cube](https://www.opendatacube.org/) (ODC) is an open-source software package for organising and analysing large quantities of Earth observation data.
At its core, the Open Data Cube consists of a database where data is stored, along with commands to load, view and analyse that data.
This functionality is delivered by the [datacube-core](https://github.com/opendatacube/datacube-core) open-source Python library.
The library is designed to enable and support:
* Large-scale workflows on high performance computing infrastructures
* Exploratory data analysis
* Cloud-based services
* Standalone applications
There are a number of existing implementations of the ODC, including DEA and [Digital Earth Africa](https://www.digitalearthafrica.org/).
More information can be found in the [Open Data Cube Manual](https://datacube-core.readthedocs.io/en/latest/index.html).
## Satellite datasets in DEA
Digital Earth Australia catalogues data from a range of satellite sensors.
The earliest datasets of optical satellite imagery in DEA date from 1986.
DEA includes data from:
* [Landsat 5 TM](https://www.usgs.gov/land-resources/nli/landsat/landsat-5?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS5 TM), operational between March 1984 and January 2013
* [Landsat 7 ETM+](https://www.usgs.gov/land-resources/nli/landsat/landsat-7?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS7 ETM+), operational since April 1999
* [Landsat 8 OLI](https://www.usgs.gov/land-resources/nli/landsat/landsat-8?qt-science_support_page_related_con=0#qt-science_support_page_related_con) (LS8 OLI), operational since February 2013
* [Sentinel 2A MSI](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) (S2A MSI), operational since June 2015
* [Sentinel 2B MSI](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) (S2B MSI), operational since March 2017
Landsat missions are jointly operated by the United States Geological Survey (USGS) and National Aeronautics and Space Administration (NASA).
Sentinel missions are operated by the European Space Agency (ESA).
One major difference between the two programs is the spatial resolution: each Landsat pixel represents 30 x 30 m on the ground while each Sentinel-2 pixel represents 10 x 10 m to 60 x 60 m depending on the spectral band.
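Because the resolution difference compounds quadratically in area, a quick back-of-the-envelope calculation (illustrative only) shows how many pixels each sensor needs to cover the same ground:

```python
# Pixels required to cover one square kilometre at a given pixel size.
def pixels_per_km2(pixel_size_m):
    return (1000 / pixel_size_m) ** 2

print(int(pixels_per_km2(10)))    # Sentinel-2 at 10 m -> 10000
print(round(pixels_per_km2(30)))  # Landsat at 30 m -> 1111
```

So a 10 m Sentinel-2 band carries roughly nine times as many pixels per unit area as a 30 m Landsat band.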
### Spectral bands
All of the datasets listed above are captured by multispectral satellites.
This means that the satellites measure primarily light that is reflected from the Earth's surface in discrete sections of the electromagnetic spectrum, known as *spectral bands*.
Figure 1 shows the spectral bands for recent Landsat and Sentinel-2 sensors, allowing a direct comparison of how each sensor samples the overall electromagnetic spectrum.
Landsat 5 TM is not displayed in this image; for reference, it measured light in seven bands that covered the same regions as bands 1 to 7 on Landsat 7 ETM+.

> **Figure 1:** The bands that are detected by each of the satellites are shown in the numbered boxes and the width of each box represents the spectral range that band detects.
The bands are overlaid on the percentage transmission of each wavelength returned to the atmosphere from the Earth relative to the amount of incoming solar radiation.
The y-axis has no bearing on the comparison of the satellite sensors [[source]](https://directory.eoportal.org/web/eoportal/satellite-missions/l/landsat-9).
Figure 1 highlights that the numbering of the bands relative to the detected wavelengths is inconsistent between sensors.
As an example, in the green region of the electromagnetic spectrum (around 560 nm), Landsat 5 TM and Landsat 7 ETM+ detect a wide green region called band 2, whereas Landsat 8 OLI detects a slightly narrower region and calls it band 3.
Finally, Sentinel-2 MSI (A and B) detects a narrow green region but also calls it band 3.
Consequently, when working with different sensors, it is important to understand the differences in their bands, and any impact this could have on an analysis.
To promote awareness of these differences, DEA band naming is based on both the spectral band name and sample region.
The naming convention will be covered in more detail in the [DEA band naming conventions section](#DEA-band-naming-conventions).
## Analysis Ready Data
Digital Earth Australia produces Analysis Ready Data (ARD) for each of the sensors listed above.
The [ARD standard](http://ceos.org/ard/) for satellite data requires that data have undergone a number of processing steps, along with the creation of additional attributes for the data.
DEA's ARD datasets include the following characteristics:
* **Geometric correction:** This includes establishing ground position, accounting for terrain (orthorectification) and ground control points, and assessing absolute position accuracy.
Geometric calibration means that imagery is positioned accurately on the Earth's surface and stacked consistently so that sequential observations can be used to track meaningful change over time.
Adjustments for ground variability typically use a Digital Elevation Model (DEM).
* **Surface reflectance correction:** This includes adjustments for sensor/instrument gains, biases and offsets, including adjustments for terrain illumination and sensor viewing angle with respect to the pixel position on the surface.
Once satellite data is processed to surface reflectance, pixel values from the same sensor can be compared consistently both spatially and over time.
* **Observation attributes:** Per-pixel metadata such as quality flags and content attribution that enable users to make informed decisions about the suitability of the products for their use. For example, clouds, cloud shadows, missing data, saturation and water are common pixel level attributes.
* **Metadata:** Dataset metadata including the satellite, instrument, acquisition date and time, spatial boundaries, pixel locations, mode, processing details, spectral or frequency response and grid projection.
### Surface reflectance
Optical sensors, such as those on the Landsat and Sentinel-2 satellites, measure light that has come from the sun and been reflected by the Earth's surface.
The sensor measures the intensity of light in each of its spectral bands (known as "radiance").
The intensity of this light is affected by many factors including the angle of the sun relative to the ground, the angle of the sensor relative to the ground, and how the light interacts with the Earth's atmosphere on its way to the sensor.
Because radiance can be affected by so many factors, it is typically more valuable to determine how much light was originally reflected at the ground level.
This is known as bottom-of-atmosphere **surface reflectance**.
Surface reflectance can be calculated by using robust physical models to correct the observed radiance values based on atmospheric conditions, the angle of the sun, sensor geometry and local topography or terrain.
There are many approaches to satellite surface reflectance correction and DEA opts to use two: NBAR and NBART.
**Users will choose which of these measurements to load when querying the DEA datacube and so it is important to understand their major similarities and differences.**
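As a simplified illustration of the kind of normalisation involved, the classic top-of-atmosphere reflectance formula converts at-sensor radiance to a reflectance estimate by accounting for solar irradiance and sun angle. This is a sketch with illustrative input values, not DEA's actual NBAR/NBART processing, which is far more involved:

```python
import math

def toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist=1.0):
    """Classic top-of-atmosphere reflectance: normalise at-sensor radiance by
    exo-atmospheric solar irradiance (esun) and the cosine of the solar zenith
    angle. Simplified sketch only; NBAR/NBART corrections also model the
    atmosphere, BRDF and (for NBART) terrain."""
    sun_zenith = math.radians(90.0 - sun_elevation_deg)
    return (math.pi * radiance * earth_sun_dist ** 2) / (esun * math.cos(sun_zenith))

# A lower sun elevation (larger zenith angle) yields a higher reflectance
# estimate for the same measured radiance.
print(round(toa_reflectance(100.0, 1536.0, 60.0), 4))
```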
#### NBAR
NBAR stands for *Nadir-corrected BRDF Adjusted Reflectance*, where BRDF stands for *Bidirectional reflectance distribution function*.
The approach involves atmospheric correction to compute bottom-of-atmosphere radiance, and bi-directional reflectance modelling to remove the effects of topography and angular variation in reflectance.
NBAR can be useful for analyses in extremely flat areas not affected by terrain shadow, and for producing attractive data visualisations that are not affected by NBART's nodata gaps (see below).
#### NBART
NBART has the same features as NBAR but includes an additional *terrain illumination* reflectance correction, and as such is considered to represent actual surface reflectance, as it takes the surface topography into account.
Terrain affects optical satellite images in a number of ways; for example, slopes facing the sun receive more sunlight and appear brighter compared to those facing away from the sun.
To obtain comparable surface reflectance from satellite images covering hilly areas, it is therefore necessary to process the images to reduce or remove the topographic effect.
This correction is performed with a Digital Surface Model (DSM) that has been resampled to the same resolution as the satellite data being corrected.
NBART is typically the default choice for most analyses as removing terrain illumination and shadows allows changes in the landscape to be compared more consistently across time.
However, it can be prone to distortions in extremely flat areas if noisy elevation values exist in the DSM.

> **Figure 2:** The animation above demonstrates how the NBART correction results in a significantly more two-dimensional looking image that is less affected by terrain illumination and shadow.
Black pixels in the NBART image represent areas of deep terrain shadow that can't be corrected as they're determined not to be viewable by either the sun or the satellite.
These are represented by -999 `nodata` values in the data.
### Observation Attributes
The *Observation Attributes (OA)* are a suite of measurements included in DEA's analysis ready datasets.
They are an assessment of each image pixel to determine if it is an unobscured, unsaturated observation of the Earth's surface, along with whether the pixel is represented in each spectral band.
The OA product allows users to exclude pixels that do not meet the quality criteria for their analysis.
The capacity to automatically exclude such pixels is essential for analysing any change over time, since poor-quality pixels can significantly alter the perceived change over time.
The most common use of OA is for cloud masking, where users can choose to remove images that have too much cloud, or ignore the clouds within each satellite image.
A demonstration of how to use cloud masking can be found in the [masking data](../Frequently_used_code/Masking_data.ipynb) notebook.
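The masking idea can be sketched with plain NumPy. Note this is a toy stand-in: real OA cloud flags are loaded from the datacube as their own measurement, not derived by thresholding reflectance as done here:

```python
import numpy as np

# Hypothetical 4x4 scene of surface reflectance values, plus a boolean
# cloud attribute like those in the OA suite (True = cloudy pixel).
reflectance = np.array([[0.10, 0.12, 0.80, 0.85],
                        [0.11, 0.13, 0.82, 0.84],
                        [0.09, 0.14, 0.15, 0.12],
                        [0.10, 0.11, 0.13, 0.14]])
cloud = reflectance > 0.5  # toy stand-in for a real OA cloud flag

# Replace cloudy pixels with NaN so nan-aware statistics ignore them.
clear = np.where(cloud, np.nan, reflectance)
print(round(float(np.nanmean(clear)), 3))  # mean over clear pixels only
```

The same pattern scales to time series: masking cloudy observations before averaging prevents bright cloud pixels from biasing the statistics.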
The OA suite of measurements include the following observation pixel-based attributes:
* Null pixels
* Clear pixels
* Cloud pixels
* Cloud shadow pixels
* Snow pixels
* Water pixels
* Terrain shaded pixels
* Spectrally contiguous pixels (i.e. whether a pixel contains data in every spectral band)
Also included is a range of pixel-based attributes related to the satellite, solar and sensing geometries:
* Solar zenith
* Solar azimuth
* Satellite view
* Incident angle
* Exiting angle
* Azimuthal incident
* Azimuthal exiting
* Relative azimuth
* Timedelta
## Data format
### DEA band naming conventions
To account for the various available satellite datasets, DEA uses a band naming convention to help distinguish datasets that come from the different sensors.
The band names are comprised of the applied surface reflectance correction (NBAR or NBART) and the spectral region detected by the satellite.
This removes all reference to the sensor band numbering scheme (e.g. [Figure 1](#Spectral-Bands)) and assumes that users understand that the spectral region described by the DEA band name is only approximately the same between sensors, not identical.
**Table 1** summarises the DEA band naming terminology for the spectral regions common to both Landsat and Sentinel, coupled with the corresponding NBAR and NBART band names for the available sensors:
|Spectral region|DEA measurement name (NBAR)|DEA measurement name (NBART)|Landsat 5<br>TM|Landsat 7<br>ETM+|Landsat 8<br>OLI|Sentinel-2A,B<br>MSI|
|----|----|----|----|----|----|----|
|Coastal aerosol|nbar_coastal_aerosol|nbart_coastal_aerosol|||1|1|
|Blue|nbar_blue|nbart_blue|1|1|2|2|
|Green|nbar_green|nbart_green|2|2|3|3|
|Red|nbar_red|nbart_red|3|3|4|4|
|NIR (Near infra-red)|nbar_nir (Landsat)<br>nbar_nir_1 (Sentinel-2)|nbart_nir (Landsat) <br>nbart_nir_1 (Sentinel-2)|4|4|5|8|
|SWIR 1 (Short wave infra-red 1)|nbar_swir_1 (Landsat) <br>nbar_swir_2 (Sentinel-2) |nbart_swir_1 (Landsat) <br>nbart_swir_2 (Sentinel-2)|5|5|6|11|
|SWIR 2 (Short wave infra-red 2)|nbar_swir_2 (Landsat) <br>nbar_swir_3 (Sentinel-2) |nbart_swir_2 (Landsat) <br>nbart_swir_3 (Sentinel-2)|7|7|7|12|
> **Note:** Be aware that NIR and SWIR band names differ between Landsat and Sentinel-2 due to the different number of these bands available in Sentinel-2. The `nbar_nir` Landsat band corresponds to the spectral region covered by Sentinel-2's `nbar_nir_1` band, the `nbar_swir_1` Landsat band corresponds to Sentinel-2's `nbar_swir_2` band, and the `nbar_swir_2` Landsat band corresponds to Sentinel-2's `nbar_swir_3` band.
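The relationships in Table 1 can also be captured as a small lookup table. The sketch below transcribes a subset of the table's rows; `native_band` is an illustrative helper for this notebook, not a DEA API:

```python
# Illustrative mapping from DEA NBART measurement names to each sensor's
# native band number, transcribed from a subset of Table 1 (None = band
# not present on that sensor).
DEA_BAND_MAP = {
    "nbart_coastal_aerosol": {"LS5": None, "LS7": None, "LS8": 1, "S2": 1},
    "nbart_blue":            {"LS5": 1,    "LS7": 1,    "LS8": 2, "S2": 2},
    "nbart_green":           {"LS5": 2,    "LS7": 2,    "LS8": 3, "S2": 3},
    "nbart_red":             {"LS5": 3,    "LS7": 3,    "LS8": 4, "S2": 4},
}

def native_band(measurement, sensor):
    """Return the sensor's native band number for a DEA measurement name."""
    return DEA_BAND_MAP[measurement][sensor]

print(native_band("nbart_green", "LS8"))  # 3
```

A lookup like this makes the band-numbering inconsistencies between sensors explicit, which is exactly why DEA names measurements by spectral region rather than band number.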
### DEA satellite data projection and holdings
Keeping with the practices of the Landsat and Sentinel satellite programs, all DEA satellite datasets are projected using the **Universal Transverse Mercator (UTM)** coordinate reference system.
The World Geodetic System 84 (WGS84) ellipsoid is used to model the UTM projection. All data queries default to the WGS84 datum's coordinate reference system unless specified otherwise.
By default, the spatial extent of the DEA data holdings is approximately the Australian coastal shelf.
The actual extent varies based on the sensor and product.
The current extents of each DEA product can be viewed using the interactive [DEA Datacube Explorer](http://explorer.sandbox.dea.ga.gov.au/ga_ls8c_ard_3).
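As a side note on UTM, the zone covering a given longitude follows directly from its six-degree band structure. A minimal sketch (ignoring the Norway/Svalbard and polar exceptions):

```python
import math

def utm_zone(longitude_deg):
    """Standard UTM zone from longitude: sixty 6-degree bands numbered
    1-60, starting at 180 degrees west."""
    return int(math.floor((longitude_deg + 180.0) / 6.0)) % 60 + 1

# Continental Australia (roughly 113E to 154E) spans zones 49 to 56.
print(utm_zone(149.13))  # Canberra -> zone 55
```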
## Derived products

In addition to ARD satellite data, DEA generates a range of products that are derived from Landsat or Sentinel-2 surface reflectance data.
These products have been developed to characterise and monitor different aspects of Australia's natural and built environment, such as mapping the distribution of water and vegetation across the landscape through time.
Derived DEA products include:
* **Water Observations from Space (WOfS):** WOfS is the world's first continent-scale map of surface water and provides images and data showing where water has been seen in Australia from 1987 to the present. This map can be used to better understand where water usually occurs across the continent and to plan water management strategies.
* **Fractional Cover (FC):** Fractional Cover (FC) is a measurement that splits the landscape into three parts, or fractions: green (leaves, grass, and growing crops), brown (branches, dry grass or hay, and dead leaf litter), and bare ground (soil or rock). DEA uses Fractional Cover to characterise every 25 m square of Australia for any point in time from 1987 to today. This measurement can inform a broad range of natural resource management issues.
* **High and Low Tide Composites (HLTC):** The High and Low Tide Composites (HLTC) are imagery mosaics developed to visualise Australia's coasts, estuaries and reefs at low and high tide, whilst removing the influence of noise features such as clouds, breaking water and sun-glint. These products are highly interpretable, and provide a valuable snapshot of the coastline at different biophysical states.
* **Intertidal Extents Model (ITEM):** The Intertidal Extents Model (ITEM) product utilises 30 years of Earth observation data from the Landsat archive to map the extents and topography of Australia's intertidal mudflats, beaches and reefs; the area exposed between high and low tide.
* **National Intertidal Digital Elevation Model (NIDEM):** The National Intertidal Digital Elevation Model (NIDEM) is a national dataset that maps the three-dimensional structure of Australia’s intertidal zone. NIDEM provides a first-of-its-kind source of intertidal elevation data for Australia’s entire coastline.
Each of the products above have dataset-specific naming conventions, measurements, resolutions, data types and coordinate reference systems.
For more information about DEA's derived products, refer to the [DEA website](http://www.ga.gov.au/dea/products), the [Content Management Interface](https://cmi.ga.gov.au/) (CMI) containing detailed product metadata, or the "DEA datasets" notebooks in this repository.
## Recommended next steps
For more detailed information on the concepts introduced in this notebook, please see the [DEA User Guide](https://docs.dea.ga.gov.au/index.html#) and [Open Data Cube Manual](https://datacube-core.readthedocs.io/en/latest/).
For more information on the development of the DEA platform, please see [Dhu et al. 2017](https://doi.org/10.1080/20964471.2017.1402490).
To continue with the beginner's guide, the following notebooks are designed to be worked through in the following order:
1. [Jupyter Notebooks](01_Jupyter_notebooks.ipynb)
2. **Digital Earth Australia (this notebook)**
3. [Products and Measurements](03_Products_and_measurements.ipynb)
4. [Loading data](04_Loading_data.ipynb)
5. [Plotting](05_Plotting.ipynb)
6. [Performing a basic analysis](06_Basic_analysis.ipynb)
7. [Introduction to Numpy](07_Intro_to_numpy.ipynb)
8. [Introduction to Xarray](08_Intro_to_xarray.ipynb)
9. [Parallel processing with Dask](09_Parallel_processing_with_dask.ipynb)
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question in the [Troubleshooting EY Data Science Program MS Teams Channel](https://teams.microsoft.com/l/channel/19%3a90804a73cb5a4159a60693c41a8820d2%40thread.tacv2/Troubleshooting?groupId=f6acd945-fed9-4db4-bed8-414988473a36&tenantId=5b973f99-77df-4beb-b27d-aa0c70b8482c) or on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).
**Last modified:** October 2020
# Given a budget of 30 million dollars (or less) and a genre, can I predict domestic gross profit using linear regression?
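A note on the model fitted in this notebook: regressing log(gross) on budget makes the slope act multiplicatively, so one extra unit of budget multiplies predicted gross by exp(slope); this is why the plotting code reports `np.exp(params[1])`. A minimal sketch on synthetic data:

```python
import numpy as np

# Synthetic data with an exact log-linear relationship:
# gross = exp(a + b * budget), so log(gross) is linear in budget.
a, b = 15.0, 0.05
budget = np.linspace(1, 30, 50)  # budget in millions of dollars
log_gross = a + b * budget

# polyfit returns coefficients highest degree first: [slope, intercept]
slope, intercept = np.polyfit(budget, log_gross, 1)

# exp(slope) is the multiplicative effect on gross per extra million of budget
print(round(float(np.exp(slope)), 4))
```

On this noiseless data the fit recovers b exactly, so exp(slope) is about 1.0513: each extra million of budget multiplies predicted gross by roughly 5 percent.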
```
%matplotlib inline
import pickle
from pprint import pprint
import pandas as pd
import numpy as np
from dateutil.parser import parse
import math
# For plotting
import seaborn as sb
import matplotlib.pyplot as plt
# For linear regression
from patsy import dmatrices
from patsy import dmatrix
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn
def perform_linear_regression(df, axes, title):
plot_data = df.sort_values('budget', ascending = True)  # DataFrame.sort was removed; use sort_values
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
#Plotting
axes.plot(X_train[columns], y_train, 'go')
#axes.plot(X_test[columns], y_test, 'yo')
#axes.plot(X_test[columns], fitted.predict(X_test), 'ro')
axes.plot(X[columns], fitted.predict(X), '-')
axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f} m = {3:.2f}'.format(title, r_squared, pval[1], np.exp(params[1])))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def perform_linear_regression1(df, axes, title):
    plot_data = df.sort_values('budget', ascending = True)
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
#Plotting
#axes.plot(X_train[columns], y_train, 'go')
axes.plot(X_test[columns], y_test, 'yo')
#axes.plot(X_test[columns], fitted.predict(X_test), 'ro')
axes.plot(X[columns], fitted.predict(X), '-')
axes.set_title('{0} (Rsquared = {1:.2f}) p = {2:.2f}'.format(title, r_squared, pval[1]))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def perform_linear_regression_all(df, axes, title):
    plot_data = df.sort_values('budget', ascending = True)
y, X = dmatrices('log_gross ~ budget', data = plot_data, return_type = 'dataframe')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
columns = ['budget']
#Patsy
model = sm.OLS(y_train,X_train)
fitted = model.fit()
r_squared = fitted.rsquared
pval = fitted.pvalues
params = fitted.params
#Plotting
axes.plot(X_train[columns], y_train, 'go')
axes.plot(X_test[columns], y_test, 'yo')
axes.set_title('{0}'.format(title))
axes.set_xlabel('Budget')
axes.set_ylabel('ln(Gross)')
axes.set_ylim(0, 25)
return None
def create_genre_column(df, genre):
return df['genre'].apply(lambda x: 1 if genre in x else 0)
def get_genre_dataframes(df, genre):
columns = ['log_gross', 'gross', 'log_budget', 'budget', 'runtime']
df_out = df.copy()[df[genre] == 1][columns]
df_out['genre'] = genre
return df_out
```
### Load the movie dictionary
```
d = pickle.load(open('movie_dictionary.p', 'rb'))  # pickle files must be opened in binary mode
#Create a dataframe
df = pd.DataFrame.from_dict(d, orient = 'index')
```
### Clean the data and remove N/A's
Keep only movies with a positive runtime
```
df2 = df.copy()
df2 = df2[['gross', 'date', 'budget', 'genre', 'runtime']]
# Use .replace instead of chained assignment, which silently fails on a copy in modern pandas
df2['gross'] = df2['gross'].replace('N/A', np.nan)
df2['budget'] = df2['budget'].replace('N/A', np.nan)
df2['date'] = df2['date'].replace('N/A', np.nan)
df2['genre'] = df2['genre'].replace(['N/A', 'Unknown'], np.nan)
df2['runtime'] = df2['runtime'].replace('N/A', np.nan)
df2 = df2[df2.date > parse('01-01-2005').date()]
df2 = df2[df2.runtime >= 0]
#df2 = df2[df2.budget <30]
df2 = df2.dropna()
```
For budget and gross, if data is missing, populate them with the mean of all the movies
```
#df2['budget'][df2['budget'].isnull()] = df2['budget'].mean()
#df2['gross'][df2['gross'].isnull()] = df2['gross'].mean()
df2['date'] = pd.to_datetime(df2['date'])
df2['year'] = df2['date'].apply(lambda x: x.year)
df2['gross'] = df2['gross'].astype(float)
df2['budget'] = df2['budget'].astype(float)
df2['runtime'] = df2['runtime'].astype(float)
df2['genre'] = df2['genre'].astype(str)
```
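The mean-imputation idea mentioned above (left commented out in the cell) could be sketched like this; the `df2` here is a tiny hypothetical stand-in frame, not the real movie data:

```python
import numpy as np
import pandas as pd

# Toy stand-in for df2: NaN marks a missing budget/gross value.
df2 = pd.DataFrame({'budget': [10.0, np.nan, 30.0],
                    'gross': [np.nan, 50.0, 70.0]})

# Fill each column's missing entries with that column's mean.
df2['budget'] = df2['budget'].fillna(df2['budget'].mean())
df2['gross'] = df2['gross'].fillna(df2['gross'].mean())
print(df2['budget'].tolist())  # [10.0, 20.0, 30.0]
```

This keeps the row count up at the cost of shrinking the variance of the imputed columns, which is one reason simply dropping the incomplete rows (as this notebook does) can be the safer choice for regression.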
### Create some log columns
```
df2['log_runtime'] = df2['runtime'].apply(lambda x: np.log(x))
df2['log_budget'] = df2['budget'].apply(lambda x: np.log(x))
df2['log_gross'] = df2['gross'].apply(lambda x: np.log(x))
```
### How does the gross and budget data look? (Not normally distributed)
```
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))
df2['gross'].plot(ax = axes[0], kind = 'hist', title = 'Gross Histogram')
df2['budget'].plot(ax = axes[1], kind = 'hist', title = 'Budget Histogram')
```
### Looks more normally distributed now!
```
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))
df2['log_gross'].plot(ax = axes[0], kind = 'hist', title = 'log(Gross)')
df2['log_budget'].plot(ax = axes[1], kind = 'hist', title = 'log(Budget)')
```
### Check top-grossing genres
```
df2.groupby('genre')[['gross']].mean().sort_values('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Mean Domestic Gross by Genre')
test = df2.groupby('genre')[['gross']].mean()
test['Count'] = df2.groupby('genre')['gross'].count()
test.sort_values('gross', ascending = False)
```
### Check top genres by count
```
df2.groupby('genre')[['gross']].count().sort_values('gross', ascending = True).plot(figsize = (10,10), kind = 'barh', legend = False, title = 'Count of Movies by Genre')
```
### Create categories for top unique grossing genres
```
genre_list = ['Comedy', 'Drama', 'Horror', 'Romance', 'Thriller', 'Sci-Fi', 'Music', 'Action', 'Adventure', 'Historical', \
'Family', 'War', 'Sports', 'Crime', 'Animation']
for genre in genre_list:
df2[genre] = create_genre_column(df2, genre)
```
### Create a new column for genres that concatenates all the individual columns
```
df_comedy = get_genre_dataframes(df2, 'Comedy')
df_drama = get_genre_dataframes(df2, 'Drama')
df_horror = get_genre_dataframes(df2, 'Horror')
df_romance = get_genre_dataframes(df2, 'Romance')
df_thriller = get_genre_dataframes(df2, 'Thriller')
df_scifi = get_genre_dataframes(df2, 'Sci-Fi')
df_music = get_genre_dataframes(df2, 'Music')
df_action = get_genre_dataframes(df2, 'Action')
df_adventure = get_genre_dataframes(df2, 'Adventure')
df_historical = get_genre_dataframes(df2, 'Historical')
df_family = get_genre_dataframes(df2, 'Family')
df_war = get_genre_dataframes(df2, 'War')
df_sports = get_genre_dataframes(df2, 'Sports')
df_crime = get_genre_dataframes(df2, 'Crime')
df_animation = get_genre_dataframes(df2, 'Animation')
# DataFrame.append was removed in pandas 2.0; concatenate the per-genre frames instead
final_df = pd.concat([df_comedy, df_drama, df_horror, df_romance, df_thriller,
                      df_scifi, df_music, df_action, df_adventure, df_historical,
                      df_family, df_war, df_sports, df_crime, df_animation])
final_df[['genre', 'budget', 'log_gross']].head()
final_df[['log_gross', 'genre']].groupby('genre').count().sort_values('log_gross', ascending = False).plot(kind = 'bar', legend = False, title = 'Counts of Movies by Genre')
temp = final_df[['gross', 'genre']].groupby('genre').mean()
temp['Count'] = final_df.groupby('genre')['gross'].count()
temp.sort_values('gross', ascending = False)
temp = temp.rename(columns={'gross': 'Average Gross'})
temp.sort_values('Average Gross', ascending = False)
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression_all(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression_all(df_horror, axes[0,1], 'Horror')
perform_linear_regression_all(df_romance, axes[0,2], 'Romance')
perform_linear_regression_all(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression_all(df_scifi, axes[1,0], 'Sci_Fi')
perform_linear_regression_all(df_music, axes[1,1], 'Music')
perform_linear_regression_all(df_action, axes[1,2], 'Action')
perform_linear_regression_all(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression_all(df_historical, axes[2,0], 'Historical')
perform_linear_regression_all(df_family, axes[2,1], 'Family')
perform_linear_regression_all(df_war, axes[2,2], 'War')
perform_linear_regression_all(df_sports, axes[2,3], 'Sports')
perform_linear_regression_all(df_crime, axes[3,0], 'Crime')
perform_linear_regression_all(df_animation, axes[3,1], 'Animation')
perform_linear_regression_all(df_drama, axes[3,2], 'Drama')
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression(df_horror, axes[0,1], 'Horror')
perform_linear_regression(df_romance, axes[0,2], 'Romance')
perform_linear_regression(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression(df_scifi, axes[1,0], 'Sci_Fi')
perform_linear_regression(df_music, axes[1,1], 'Music')
perform_linear_regression(df_action, axes[1,2], 'Action')
perform_linear_regression(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression(df_historical, axes[2,0], 'Historical')
perform_linear_regression(df_family, axes[2,1], 'Family')
perform_linear_regression(df_war, axes[2,2], 'War')
perform_linear_regression(df_sports, axes[2,3], 'Sports')
perform_linear_regression(df_crime, axes[3,0], 'Crime')
perform_linear_regression(df_animation, axes[3,1], 'Animation')
perform_linear_regression(df_drama, axes[3,2], 'Drama')
```
### Linear Regression
```
fig, axes = plt.subplots(nrows=4, ncols=4,figsize=(25,25))
perform_linear_regression1(df_comedy, axes[0,0], 'Comedy')
perform_linear_regression1(df_horror, axes[0,1], 'Horror')
perform_linear_regression1(df_romance, axes[0,2], 'Romance')
perform_linear_regression1(df_thriller, axes[0,3], 'Thriller')
perform_linear_regression1(df_scifi, axes[1,0], 'Sci-Fi')
perform_linear_regression1(df_music, axes[1,1], 'Music')
perform_linear_regression1(df_action, axes[1,2], 'Action')
perform_linear_regression1(df_adventure, axes[1,3], 'Adventure')
perform_linear_regression1(df_historical, axes[2,0], 'Historical')
perform_linear_regression1(df_family, axes[2,1], 'Family')
perform_linear_regression1(df_war, axes[2,2], 'War')
perform_linear_regression1(df_sports, axes[2,3], 'Sports')
perform_linear_regression1(df_crime, axes[3,0], 'Crime')
perform_linear_regression1(df_animation, axes[3,1], 'Animation')
perform_linear_regression1(df_drama, axes[3,2], 'Drama')
```
| github_jupyter |
## Import a model from ONNX and run using PyTorch
We demonstrate how to import a model from ONNX and convert it to PyTorch.
#### Imports
```
import os
import operator as op
import warnings; warnings.simplefilter(action='ignore', category=FutureWarning)
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torch.autograd import Variable
import onnx
import gamma
from gamma import convert, protobuf, utils
```
#### 1: Download the model
```
fpath = utils.get_file('https://s3.amazonaws.com/download.onnx/models/squeezenet.tar.gz')
onnx_model = onnx.load(os.path.join(fpath, 'squeezenet/model.onnx'))
inputs = [i.name for i in onnx_model.graph.input if
i.name not in {x.name for x in onnx_model.graph.initializer}]
outputs = [o.name for o in onnx_model.graph.output]
```
#### 2: Import into Gamma
```
graph = convert.from_onnx(onnx_model)
constants = {k for k, (v, i) in graph.items() if v['type'] == 'Constant'}
utils.draw(gamma.strip(graph, constants))
```
#### 3: Convert to PyTorch
```
make_node = gamma.make_node_attr
def torch_padding(params):
padding = params.get('pads', [0,0,0,0])
assert (padding[0] == padding[2]) and (padding[1] == padding[3])
return (padding[0], padding[1])
torch_ops = {
'Add': lambda params: op.add,
'Concat': lambda params: (lambda *xs: torch.cat(xs, dim=params['axis'])),
'Constant': lambda params: nn.Parameter(torch.FloatTensor(params['value'])),
'Dropout': lambda params: nn.Dropout(params.get('ratio', 0.5)).eval(), #.eval() sets to inference mode. where should this logic live?
'GlobalAveragePool': lambda params: nn.AdaptiveAvgPool2d(1),
'MaxPool': lambda params: nn.MaxPool2d(params['kernel_shape'], stride=params.get('strides', [1,1]),
padding=torch_padding(params),
dilation=params.get('dilations', [1,1])),
'Mul': lambda params: op.mul,
'Relu': lambda params: nn.ReLU(),
'Softmax': lambda params: nn.Softmax(dim=params.get('axis', 1)),
}
def torch_op(node, inputs):
if node['type'] in torch_ops:
op = torch_ops[node['type']](node['params'])
return make_node('Torch_op', {'op': op}, inputs)
return (node, inputs)
def torch_conv_node(params, x, w, b):
ko, ki, kh, kw = w.shape
group = params.get('group', 1)
ki *= group
conv = nn.Conv2d(ki, ko, (kh,kw),
stride=tuple(params.get('strides', [1,1])),
padding=torch_padding(params),
dilation=tuple(params.get('dilations', [1,1])),
groups=group)
conv.weight = nn.Parameter(torch.FloatTensor(w))
conv.bias = nn.Parameter(torch.FloatTensor(b))
return make_node('Torch_op', {'op': conv}, [x])
def convert_to_torch(graph):
v, _ = gamma.var, gamma.Wildcard
conv_pattern = {
v('conv'): make_node('Conv', v('params'), [v('x'), v('w'), v('b')]),
v('w'): make_node('Constant', {'value': v('w_val')}, []),
v('b'): make_node('Constant', {'value': v('b_val')}, [])
}
matches = gamma.search(conv_pattern, graph)
g = gamma.union(graph, {m[v('conv')]:
torch_conv_node(m[v('params')], m[v('x')], m[v('w_val')], m[v('b_val')])
for m in matches})
remove = {m[x] for m in matches for x in (v('w'), v('b'))}
g = {k: torch_op(v, i) for k, (v, i) in g.items() if k not in remove}
return g
def torch_graph(graph):
return gamma.FuncCache(lambda k: graph[k][0]['params']['op'](*[tg[x] for x in graph[k][1]]))
g = convert_to_torch(graph)
utils.draw(g)
```
#### 4: Load test example and check PyTorch output
```
def load_onnx_tensor(fname):
tensor = onnx.TensorProto()
with open(fname, 'rb') as f:
tensor.ParseFromString(f.read())
return protobuf.unwrap(tensor)
input_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/input_0.pb'))
output_0 = load_onnx_tensor(os.path.join(fpath, 'squeezenet/test_data_set_0/output_0.pb'))
tg = torch_graph(g)
tg[inputs[0]] = Variable(torch.Tensor(input_0))
torch_outputs = tg[outputs[0]]
np.testing.assert_almost_equal(output_0, torch_outputs.data.numpy(), decimal=5)
print('Success!')
```
| github_jupyter |
# Credit Risk Classification
Credit risk poses a classification problem that’s inherently imbalanced. This is because healthy loans easily outnumber risky loans. In this Challenge, you’ll use various techniques to train and evaluate models with imbalanced classes. You’ll use a dataset of historical lending activity from a peer-to-peer lending services company to build a model that can identify the creditworthiness of borrowers.
## Instructions:
This challenge consists of the following subsections:
* Split the Data into Training and Testing Sets
* Create a Logistic Regression Model with the Original Data
* Predict a Logistic Regression Model with Resampled Training Data
### Split the Data into Training and Testing Sets
Open the starter code notebook and then use it to complete the following steps.
1. Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.
2. Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.
> **Note** A value of `0` in the “loan_status” column means that the loan is healthy. A value of `1` means that the loan has a high risk of defaulting.
3. Check the balance of the labels variable (`y`) by using the `value_counts` function.
4. Split the data into training and testing datasets by using `train_test_split`.
### Create a Logistic Regression Model with the Original Data
Employ your knowledge of logistic regression to complete the following steps:
1. Fit a logistic regression model by using the training data (`X_train` and `y_train`).
2. Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.
3. Evaluate the model’s performance by doing the following:
* Calculate the accuracy score of the model.
* Generate a confusion matrix.
* Print the classification report.
4. Answer the following question: How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
### Predict a Logistic Regression Model with Resampled Training Data
Did you notice the small number of high-risk loan labels? Perhaps, a model that uses resampled data will perform better. You’ll thus resample the training data and then reevaluate the model. Specifically, you’ll use `RandomOverSampler`.
To do so, complete the following steps:
1. Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points.
2. Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.
3. Evaluate the model’s performance by doing the following:
* Calculate the accuracy score of the model.
* Generate a confusion matrix.
* Print the classification report.
4. Answer the following question: How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
### Write a Credit Risk Analysis Report
For this section, you’ll write a brief report that includes a summary and an analysis of the performance of both machine learning models that you used in this challenge. You should write this report as the `README.md` file included in your GitHub repository.
Structure your report by using the report template that `Starter_Code.zip` includes, and make sure that it contains the following:
1. An overview of the analysis: Explain the purpose of this analysis.
2. The results: Using bulleted lists, describe the balanced accuracy scores and the precision and recall scores of both machine learning models.
3. A summary: Summarize the results from the machine learning models. Compare the two versions of the dataset predictions. Include your recommendation for the model to use, if any, on the original vs. the resampled data. If you don’t recommend either model, justify your reasoning.
```
# Import the modules
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import confusion_matrix
from imblearn.metrics import classification_report_imbalanced
import warnings
warnings.filterwarnings('ignore')
```
---
## Split the Data into Training and Testing Sets
### Step 1: Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.
```
# Read the CSV file from the Resources folder into a Pandas DataFrame
lending_data_df = pd.read_csv(Path("Resources/lending_data.csv"))
# Review the DataFrame
lending_data_df.head()
```
### Step 2: Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.
```
# Separate the data into labels and features
# Separate the y variable, the labels
y = lending_data_df["loan_status"]
# Separate the X variable, the features
X = lending_data_df.drop(columns="loan_status")
# Review the y variable Series
y
# Review the X variable DataFrame
X
```
### Step 3: Check the balance of the labels variable (`y`) by using the `value_counts` function.
```
# Check the balance of our target values
y.value_counts()
```
### Step 4: Split the data into training and testing datasets by using `train_test_split`.
```
# Import the train_test_split function
from sklearn.model_selection import train_test_split
# Split the data using train_test_split
# Assign a random_state of 1 to the function
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
---
## Create a Logistic Regression Model with the Original Data
### Step 1: Fit a logistic regression model by using the training data (`X_train` and `y_train`).
```
# Import the LogisticRegression module from SKLearn
from sklearn.linear_model import LogisticRegression
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
logistical_regression = LogisticRegression(random_state=1)
# Fit the model using training data
logistical_regression.fit(X_train, y_train)
```
### Step 2: Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.
```
# Make a prediction using the testing data
y_prediction = logistical_regression.predict(X_test)
```
### Step 3: Evaluate the model’s performance by doing the following:
* Calculate the accuracy score of the model.
* Generate a confusion matrix.
* Print the classification report.
```
# Print the balanced_accuracy score of the model
balanced_accuracy_score_set = balanced_accuracy_score(y_test, y_prediction)*100
print(f"The balanced accuracy score is {(balanced_accuracy_score_set):.2f}%")
# Generate a confusion matrix for the model
confusion_matrix_set = confusion_matrix(y_test, y_prediction)
print(confusion_matrix_set)
# Print the classification report for the model
classification_report = classification_report_imbalanced(y_test, y_prediction)
print(classification_report)
```
### Step 4: Answer the following question.
**Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
**Answer:** It appears that prediction of healthy (`0`) loans is essentially 100% accurate, while a high-risk (`1`) loan is correctly predicted only about 84% of the time.
---
## Predict a Logistic Regression Model with Resampled Training Data
### Step 1: Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points.
```
# Import the RandomOverSampler module form imbalanced-learn
from imblearn.over_sampling import RandomOverSampler
# Instantiate the random oversampler model
# Assign a random_state parameter of 1 to the model
random_over_sampler = RandomOverSampler(random_state=1)
# Fit the original training data to the random_oversampler model
X_train_ros, y_train_ros = random_over_sampler.fit_resample(X_train, y_train)
# Count the distinct values of the resampled labels data
y_train_ros.value_counts()
```
### Step 2: Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.
```
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
log_ros = LogisticRegression(random_state=1)
# Fit the model using the resampled training data
log_ros.fit(X_train_ros, y_train_ros)
# Make a prediction using the testing data
y_prediction_ros = log_ros.predict(X_test)
```
### Step 3: Evaluate the model’s performance by doing the following:
* Calculate the accuracy score of the model.
* Generate a confusion matrix.
* Print the classification report.
```
# Print the balanced_accuracy score of the model
balanced_accuracy_score_ros = balanced_accuracy_score(y_test, y_prediction_ros)*100
print(f"The balanced accuracy score for this is {(balanced_accuracy_score_ros):.2f}%")
# Generate a confusion matrix for the model
confusion_matrix_ros = confusion_matrix(y_test, y_prediction_ros)
print(confusion_matrix_ros)
# Print the classification report for the model
classification_report_ros = classification_report_imbalanced(y_test, y_prediction_ros)
print(classification_report_ros)
```
### Step 4: Answer the following question
**Question:** How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
**Answer:** It appears that the model still predicts `0` (healthy loans) with essentially 100% accuracy, while there is again about an 84% likelihood of correctly predicting `1` (high-risk loans).
| github_jupyter |
<a href="https://colab.research.google.com/github/LeonVillanueva/CoLab/blob/master/Google_CoLab_DL_Recommender.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Loading Libraries
```
!pip install -q tensorflow==2.0.0-beta1
%%capture
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, Concatenate, GlobalMaxPooling2D, MaxPooling1D, GaussianNoise, BatchNormalization, MaxPooling2D, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D, Embedding
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from scipy import stats
import math
import seaborn as sns
import re
from nltk.stem import WordNetLemmatizer
```
### Data
```
!wget -nc http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
!unzip ml-latest-small.zip
df = pd.read_csv ('ml-latest-small/ratings.csv')
df.sort_values (by='timestamp', inplace=True, ascending=True)
df.head(3)
cutoff = int(len(df)*.90)
df['user_id'] = pd.Categorical (df['userId'])
df['user_id'] = df['user_id'].cat.codes
df['movie_id'] = pd.Categorical (df['movieId'])
df['movie_id'] = df['movie_id'].cat.codes
train, test = df.iloc[:cutoff], df.iloc[cutoff:]
df.head(3)
U = len(set(df['user_id']))
M = len(set(df['movie_id']))
K = 12 # embedding dimensions
user_ids = df['user_id'].values
movie_ids = df['movie_id'].values
rating = df['rating'].values
len(user_ids) == len(movie_ids), len(movie_ids) == len(rating)
p = np.random.permutation (len(user_ids))
user_ids = user_ids[p]
movie_ids = movie_ids[p]
rating = rating[p]
train_user = user_ids[:cutoff]
train_movie = movie_ids[:cutoff]
train_rating = rating[:cutoff]
test_user = user_ids[cutoff:]
test_movie = movie_ids[cutoff:]
test_rating = rating[cutoff:]
rating_mean = train_rating.mean()
train_rating = train_rating - rating_mean
test_rating = test_rating - rating_mean
u = Input ((1,))
m = Input ((1,))
u_emb = Embedding (U,K) (u) # samples, 1, K
m_emb = Embedding (M,K) (m)
u_emb = Flatten () (u_emb) # samples, K
m_emb = Flatten () (m_emb)
x = Concatenate () ([u_emb, m_emb])
x = Dense (400, activation='relu') (x)
x = Dropout (0.5) (x)
x = Dense (400, activation='relu') (x)
x = Dense (1, activation='relu') (x)
model = Model(inputs=[u,m], outputs=x)
adam = tf.keras.optimizers.Adam (learning_rate=0.005, decay=5e-6)
model.compile (optimizer=adam,  # pass the configured optimizer, not the string 'adam'
loss='mse')
epochs = 20
r = model.fit ([train_user, train_movie], train_rating, validation_data=([test_user, test_movie], test_rating), verbose=False, epochs=epochs, batch_size=1024)
plt.plot (r.history['loss'], label='loss', color='#840000')
plt.plot (r.history['val_loss'], label='validation loss', color='#00035b')
plt.legend ()
mse = model.evaluate ([test_user, test_movie], test_rating)  # avoid shadowing the `re` module
np.sqrt(mse)  # root-mean-squared error on the mean-centered test ratings
model.summary()
```
| github_jupyter |
# Building Simple Neural Networks
In this section you will:
* Import the MNIST dataset from Keras.
* Format the data so it can be used by a Sequential model with Dense layers.
* Split the dataset into training and test sections.
* Build a simple neural network using Keras Sequential model and Dense layers.
* Train that model.
* Evaluate the performance of that model.
While we are accomplishing these tasks, we will also stop to discuss important concepts:
* Splitting data into test and training sets.
* Training rounds, batch size, and epochs.
* Validation data vs test data.
* Examining results.
## Importing and Formatting the Data
Keras has several built-in datasets that are already well formatted and properly cleaned. These datasets are an invaluable learning resource. Collecting and processing datasets is a serious undertaking, and deep learning tactics perform poorly without large high quality datasets. We will be leveraging the [Keras built in datasets](https://keras.io/datasets/) extensively, and you may wish to explore them further on your own.
In this exercise, we will be focused on the MNIST dataset, which is a set of 70,000 images of handwritten digits each labeled with the value of the written digit. Additionally, the images have been split into training and test sets.
```
# For drawing the MNIST digits as well as plots to help us evaluate performance we
# will make extensive use of matplotlib
from matplotlib import pyplot as plt
# All of the Keras datasets are in keras.datasets
from tensorflow.keras.datasets import mnist
# Keras has already split the data into training and test data
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Training images is a list of 60,000 2D lists.
# Each 2D list is 28 by 28—the size of the MNIST pixel data.
# Each item in the 2D array is an integer from 0 to 255 representing its grayscale
# intensity where 0 means white, 255 means black.
print(len(training_images), training_images[0].shape)
# training_labels are a value between 0 and 9 indicating which digit is represented.
# The first item in the training data is a 5
print(len(training_labels), training_labels[0])
# Lets visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
```
## Problems With This Data
There are (at least) two problems with this data as it is currently formatted, what do you think they are?
1. The input data is formatted as a 2D array, but our deep neural network needs the data as a 1D vector.
* This is because of how deep neural networks are constructed; it is simply not possible to send anything but a vector as input.
* These vectors can be/represent anything, but from the computer's perspective they must be a 1D vector.
2. Our labels are numbers, but we're not performing regression. We need to use a 1-hot vector encoding for our labels.
* This is important because if we use the number values we would be training our network to think of these values as continuous.
* If the digit is supposed to be a 2, guessing 1 and guessing 9 are both equally wrong.
* Training the network with numbers would imply that a prediction of 1 would be "less wrong" than a prediction of 9, when in fact both are equally wrong.
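To make the 1-hot idea concrete, here is a plain-NumPy sketch of encoding a single label (the notebook's actual conversion uses `keras.utils.to_categorical`):

```python
import numpy as np

# Encode the label 5 (the first MNIST training label) as a 1-hot
# vector over the 10 digit classes.
num_classes = 10
label = 5
one_hot = np.zeros(num_classes)
one_hot[label] = 1.0
print(one_hot)  # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```

With this encoding, a prediction of 1 and a prediction of 9 are equally distant from the correct vector, which is exactly the property the text describes.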
### Fixing the data format
Luckily, this is a common problem and we can use two methods to fix the data: `numpy.reshape` and `keras.utils.to_categorical`. This is necessary because of how deep neural networks process data; there is no way to send 2D data to a `Sequential` model made of `Dense` layers.
```
from tensorflow.keras.utils import to_categorical
# Preparing the dataset
# Setup train and test splits
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# 28 x 28 = 784, because that's the dimensions of the MNIST data.
image_size = 784
# Reshaping the training_images and test_images to lists of vectors with length 784
# instead of lists of 2D arrays. Same for the test_images
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
# [
# [1,2,3]
# [4,5,6]
# ]
# => [1,2,3,4,5,6]
# Just showing the changes...
print("training data: ", training_images.shape, " ==> ", training_data.shape)
print("test data: ", test_images.shape, " ==> ", test_data.shape)
# Create 1-hot encoded vectors using to_categorical
num_classes = 10 # Because it's how many digits we have (0-9)
# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
# Recall that before this transformation, training_labels[0] was the value 5. Look now:
print(training_labels[0])
```
## Building a Deep Neural Network
Now that we've prepared our data, it's time to build a simple neural network. To start we'll make a deep network with 3 layers—the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1 dimensional. The input layer has to be the shape of our input data, meaning it must have 784 nodes. Similarly, the output layer must match our labels, meaning it must have 10 nodes. We can choose the number of nodes in our hidden layer; I've chosen 32 arbitrarily.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Sequential models are a series of layers applied linearly.
model = Sequential()
# The first layer must specify its input_shape.
# This is how the first two layers are added, the input layer and the hidden layer.
model.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))
# This is how the output layer gets added, the 'softmax' activation function ensures
# that the sum of the values in the output nodes is 1. Softmax is very
# common in classification networks.
model.add(Dense(units=num_classes, activation='softmax'))
# This function provides useful text data for our network
model.summary()
```
## Compiling and Training a Model
Our model must be compiled and trained before it can make useful predictions. Models are trained with the training data and training labels. During this process Keras will use an optimizer, loss function, and metrics of our choosing to repeatedly make predictions and receive corrections. The loss function is used to train the model; the metrics are only used for human evaluation of the model during and after training.
Training happens in a series of epochs, which are divided into a series of rounds. Each round the network will receive `batch_size` samples from the training data, make predictions, and receive one correction based on the errors in those predictions. In a single epoch, the model will look at every item in the training set __exactly once__, which means individual data points are sampled from the training data without replacement during each round of each epoch.
During training, the training data itself will be broken into two parts according to the `validation_split` parameter. The proportion you specify will be left out of the training process and used to evaluate the accuracy of the model. This is done to preserve the test data while still having a held-out set to test against, and hopefully prevent, overfitting. At the end of each epoch, predictions will be made for all the items in the validation set, but those predictions won't adjust the weights in the model. If validation accuracy stops improving while training accuracy keeps climbing, that is a sign of overfitting, and training can be halted early (for example with an early-stopping callback).
```
# sgd stands for stochastic gradient descent.
# categorical_crossentropy is a common loss function used for categorical classification.
# accuracy is the percent of predictions that were correct.
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
# The network will make predictions for 128 flattened images per correction.
# It will make a prediction on each item in the training set 5 times (5 epochs)
# And 10% of the data will be used as validation data.
history = model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
```
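As a sanity check of the round arithmetic described above, we can compute how many corrections the model receives per epoch. This is a sketch assuming MNIST's 60,000 training images; the notebook itself does not print these numbers.

```python
import math

# MNIST ships 60,000 training images; validation_split=.1 holds out 10%
train_samples = int(60000 * (1 - 0.1))   # 54,000 samples used for weight updates
batch_size = 128
# Each batch yields one correction, so this is the number of rounds per epoch
updates_per_epoch = math.ceil(train_samples / batch_size)
print(updates_per_epoch)  # 422
```

So over 5 epochs the model receives roughly 2,110 corrections in total.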
## Evaluating Our Model
Now that we've trained our model, we want to evaluate its performance. We're using the "test data" here, although in a serious experiment we would likely not have done nearly enough work to warrant the application of the test data. Instead, we would rely on the validation metrics as a proxy for our test results until we had models that we believed would perform well.
Once we evaluate our model on the test data, any subsequent changes we make would be based on what we learned from the test data. In other words, we would have functionally incorporated information from the test set into our training procedure, which could bias and even invalidate the results of our research. In a non-research setting, the real test might be more like putting this feature into production.
Nevertheless, it is always wise to create a test set that is not used as an evaluative measure until the very end of an experimental lifecycle. That is, once you have a model that you believe __should__ generalize well to unseen data, you should test it on the test data to test that hypothesis. If your model performs poorly on the test data, you'll have to reevaluate your model, training data, and procedure.
```
loss, accuracy = model.evaluate(test_data, test_labels, verbose=True)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
```
## How Did Our Network Do?
* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?
* Our model was more accurate on the validation data than it was on the training data.
* Is this okay? Why or why not?
* What if our model had been more accurate on the training data than the validation data?
* Did our model get better during each epoch?
* If not: why might that be the case?
* If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?
### Answers:
* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?
* __Because we only evaluate the test data once at the very end, but we evaluate training and validation scores once per epoch.__
* Our model was more accurate on the validation data than it was on the training data.
* Is this okay? Why or why not?
* __Yes, this is okay, and even good. When our validation scores are better than our training scores, it's a sign that we are probably not overfitting__
* What if our model had been more accurate on the training data than the validation data?
* __This would concern us, because it would suggest we are probably overfitting.__
* Did our model get better during each epoch?
* If not: why might that be the case?
* __Optimizers rely on the gradient to update our weights, but the 'function' we are optimizing (our neural network) is not a ground truth. A single batch, and even a complete epoch, may very well result in an adjustment that hurts overall performance.__
* If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?
* __Not at all, see the above answer.__
## Look at Specific Results
Often, it can be illuminating to view specific results, both when the model is correct and when it is wrong. Let's look at the images and our model's predictions for the first 16 samples in the test set.
```
from numpy import argmax
# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.
predictions = model.predict(test_data)
# For pagination & style in second cell
page = 0
fontdict = {'color': 'black'}
# Repeatedly running this cell will page through the predictions
for i in range(16):
ax = plt.subplot(4, 4, i+1)
ax.axis('off')
plt.imshow(test_images[i + page], cmap='Greys')
prediction = argmax(predictions[i + page])
true_value = argmax(test_labels[i + page])
fontdict['color'] = 'black' if prediction == true_value else 'red'
plt.title("{}, {}".format(prediction, true_value), fontdict=fontdict)
page += 16
plt.tight_layout()
plt.show()
```
## Will A Different Network Perform Better?
Given what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for training and validation data over time, as well as test accuracy and loss.
```
# Your code here...
```
## Bonus questions: Go Further
Here are some questions to help you further explore the concepts in this lab.
* Does the original model, or your model, fail more often on a particular digit?
* Write some code that charts the accuracy of our model's predictions on the test data by digit.
* Is there a clear pattern? If so, speculate about why that could be...
* Training for longer typically improves performance, up to a point.
* For a simple model, try training it for 20 epochs, and 50 epochs.
* Look at the charts of accuracy and loss over time, have you reached diminishing returns after 20 epochs? after 50?
* More complex networks require more training time, but can outperform simpler networks.
* Build a more complex model, with at least 3 hidden layers.
* Like before, train it for 5, 20, and 50 epochs.
* Evaluate the performance of the model against the simple model, and compare the total amount of time it took to train.
* Was the extra complexity worth the additional training time?
* Do you think your complex model would get even better with more time?
* A little perspective on this last point: Some models train for [__weeks to months__](https://openai.com/blog/ai-and-compute/).
| github_jupyter |
```
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import math
%matplotlib inline
```
# Volunteer 1
## 3M Littmann Data
```
image = Image.open('3Ms.bmp')
image
x = image.size[0]
y = image.size[1]
print(x)
print(y)
matrix = []
points = []
integrated_density = 0
for i in range(x):
matrix.append([])
for j in range(y):
matrix[i].append(image.getpixel((i,j)))
#integrated_density += image.getpixel((i,j))[1]
#points.append(image.getpixel((i,j))[1])
```
### Extract Red Line Position
```
redMax = 0
xStore = 0
yStore = 0
for xAxis in range(x):
for yAxis in range(y):
currentPoint = matrix[xAxis][yAxis]
if currentPoint[0] ==255 and currentPoint[1] < 10 and currentPoint[2] < 10:
redMax = currentPoint[0]
xStore = xAxis
yStore = yAxis
print(xStore, yStore)
```
- The red line position is located at y = 252.
### Extract Blue Points
```
redline_pos = 51
absMax = 0
littmannArr = []
points_vertical = []
theOne = 0
for xAxis in range(x):
for yAxis in range(y):
currentPoint = matrix[xAxis][yAxis]
# Pickup Blue points
if currentPoint[2] == 255 and currentPoint[0] < 220 and currentPoint[1] < 220:
points_vertical.append(yAxis)
#print(points_vertical)
# Choose the largest amplitude
for item in points_vertical:
if abs(item-redline_pos) > absMax:
absMax = abs(item-redline_pos)
theOne = item
littmannArr.append((theOne-redline_pos)*800)
absMax = 0
theOne = 0
points_vertical = []
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
```
# Ascul Pi Data
```
pathBase = 'C://Users//triti//OneDrive//Dowrun//Text//Manuscripts//Data//YangChuan//AusculPi//'
filename = 'Numpy_Array_File_2020-06-21_07_54_16.npy'
line = pathBase + filename
arr = np.load(line)
arr
arr.shape
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[0], linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[:,100], linewidth=1.0, color='black')
start = 1830
end = 2350
start_adj = int(start * 2583 / 3000)
end_adj = int(end * 2583 / 3000)
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[start_adj:end_adj,460], linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
asculArr = arr[start_adj:end_adj,460]
```
## Preprocess the two arrays
```
asculArr_processed = []
littmannArr_processed = []
for item in asculArr:
asculArr_processed.append(abs(item))
for item in littmannArr:
littmannArr_processed.append(abs(item))
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed, linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed, linewidth=0.6, color='blue')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed[175:375], linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed[:200], linewidth=1.0, color='blue')
len(littmannArr)
len(asculArr)
```
### Coefficient
```
stats.pearsonr(asculArr_processed, littmannArr_processed)
stats.pearsonr(asculArr_processed[176:336], littmannArr_processed[:160])
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[start_adj:end_adj,460][176:336], linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr[:160], linewidth=0.6, color='blue')
```
### Fitness
```
stats.chisquare(asculArr_processed[174:334], littmannArr_processed[:160])
def cosCalculate(a, b):
    # Cosine similarity: sum(x*y) / (||x|| * ||y||)
    l = len(a)
    sumXY = 0
    sumXSquare = 0
    sumYSquare = 0
    for i in range(l):
        sumXY = sumXY + a[i]*b[i]
        sumXSquare = sumXSquare + a[i]**2
        sumYSquare = sumYSquare + b[i]**2
    cosValue = sumXY / (math.sqrt(sumXSquare) * math.sqrt(sumYSquare))
    return cosValue
cosCalculate(asculArr_processed[175:335], littmannArr_processed[:160])
```
```
#Set working directory
import os
path="/Users/sarakohnke/Desktop/data_type_you/processed-final/"
os.chdir(path)
os.getcwd()
#Import required packages
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#Import cleaned dataframe
dataframe=pd.read_csv('dataframe240620.csv',index_col=0)
dataframe.shape
#dataframe=pd.read_csv('dataframe240620.csv',index_col=0)
#from pycaret.classification import *
#exp1=setup(dataframe, target='A1C (%)')
#compare_models()
from sklearn.tree import DecisionTreeRegressor
from pprint import pprint
from sklearn.model_selection import train_test_split
X_rf = dataframe.drop(['A1C (%)'], axis=1)
y_rf = dataframe['A1C (%)']
X_train_rf, X_test_rf, y_train_rf, y_test_rf = train_test_split(X_rf, y_rf, random_state = 0)
clf_rf = DecisionTreeRegressor(random_state = 0).fit(X_train_rf, y_train_rf)
print('Parameters currently in use:\n')
pprint(clf_rf.get_params())
X_train_rf.to_csv('X_train.csv')
y_train_rf.to_csv('y_train.csv')
from sklearn.model_selection import RandomizedSearchCV
# Number of features to consider at every split - for regressor, none is good
max_features = None
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = True
# Create the random grid
random_grid = {
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf
}
pprint(random_grid)
# Use the random grid to search for best hyperparameters
# First create the base model to tune
clf_rf = DecisionTreeRegressor(random_state=0)
# Random search of parameters, using 3 fold cross validation,
# search across 10 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator = clf_rf, param_distributions = random_grid, n_iter=50,cv = 3, verbose=2, random_state=0, n_jobs = -1)
# Fit the random search model
rf_random.fit(X_train_rf, y_train_rf)
rf_random.best_params_
# Make features and target objects
X_rf2 = dataframe.drop(['A1C (%)'], axis=1)
y_rf2 = dataframe['A1C (%)']
X_train_rf2, X_test_rf2, y_train_rf2, y_test_rf2 = train_test_split(X_rf2, y_rf2, random_state = 0)
# Train model
clf_rf2 = DecisionTreeRegressor(max_depth=10,min_samples_leaf=4,
min_samples_split=10,random_state = 0).fit(X_train_rf2, y_train_rf2)
# Print r2 score
print('R-squared score (training): {:.3f}'
.format(clf_rf2.score(X_train_rf2, y_train_rf2)))
print('R-squared score (test): {:.3f}'
.format(clf_rf2.score(X_test_rf2, y_test_rf2)))
#Find feature importances in model
import pandas as pd
feat_importances = pd.Series(clf_rf2.feature_importances_, index=X_rf2.columns)
feat_importances.to_csv('feat_importances.csv')
feat_importances.nlargest(10).plot(kind='barh')
#plt.savefig('importance.png',dpi=300)
X_test_rf2.head()
X_test_rf2.to_csv('xtestjun22.csv')
y_test_rf2.head()
y_test_rf2.to_csv('ytestjun22.csv')
#Make patient lists from test data for app
patient1_list=X_test_rf2.iloc[115,:].values.tolist()
patient2_list=X_test_rf2.iloc[253,:].values.tolist()
patient3_list=X_test_rf2.iloc[603,:].values.tolist()
#Find current predicted A1C score
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your current score: '+str(A1C_prediction_drug1_2[0]))
#insulin - 1-3?->just1
patient1_list[6]=1
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with insulin: '+str(A1C_prediction_drug1_2[0]))
#bmi
patient1_list[18]=24
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with ideal bmi: '+str(A1C_prediction_drug1_2[0]))
#bp
patient1_list[3]=120
patient1_list[4]=80
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with ideal blood pressure: '+str(A1C_prediction_drug1_2[0]))
#triglycerides
patient1_list[32]=150
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with ideal triglycerides: '+str(A1C_prediction_drug1_2[0]))
#healthy diet
patient1_list[43]=1
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with healthy diet: '+str(A1C_prediction_drug1_2[0]))
#cholesterol
patient1_list[19]=170
A1C_prediction_drug1_2 = clf_rf2.predict([patient1_list])
print('your predicted score with cholesterol: '+str(A1C_prediction_drug1_2[0]))
```
Do the same for the other patients.
Test my model against a dummy regressor.
```
import numpy as np
import matplotlib.pyplot as plt
predicted_rf2 = clf_rf2.predict(X_test_rf2)
plt.scatter(y_test_rf2,predicted_rf2)
plt.xlabel('Actual values')
plt.ylabel('Predicted values')
plt.plot(np.unique(y_test_rf2), np.poly1d(np.polyfit(y_test_rf2, predicted_rf2, 1))(np.unique(y_test_rf2)))
#plt.savefig('r2.png',dpi=300)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.dummy import DummyRegressor
dummy_mean=DummyRegressor(strategy='mean')
# The dummy regressor must be fit before predicting (it only learns the training mean)
dummy_mean.fit(X_train_rf2, y_train_rf2)
predicted_rf2 = clf_rf2.predict(X_test_rf2)
predicted_dummy=dummy_mean.predict(X_test_rf2)
plt.scatter(y_test_rf2,predicted_dummy)
plt.xlabel('Actual values')
plt.ylabel('Predicted values')
plt.plot(np.unique(y_test_rf2), np.poly1d(np.polyfit(y_test_rf2, predicted_dummy, 1))(np.unique(y_test_rf2)))
plt.show()
#pickle the model so can upload trained model to app
import pickle
#import bz2
#import _pickle as cPickle
with open('model_pkl.pickle','wb') as output_file:
pickle.dump(clf_rf2,output_file)
# Pickle a file and then compress it into a file with extension
#def compressed_pickle(model, clf_rf2):
# with bz2.BZ2File(model + '.bz2', 'w') as f:
# cPickle.dump(clf_rf2, f)
##file=bz2.BZ2File('c.pkl','w')
#pickle.dump(clf_rf2,sfile)
#compressed_pickle('model',clf_rf2)
# to open compressed pickle
#def decompress_pickle(file):
# model=bz2.BZ2File(file,'rb')
# model=cPickle.load(model)
# return model
#model=decompress_pickle('model.pbz2')
#to open normal pickle
with open('model_pkl.pickle','rb') as input_file:
model=pickle.load(input_file)
```
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Volumetric Analysis
===================
Calculate mass properties such as the volume or area of datasets
```
# sphinx_gallery_thumbnail_number = 4
import numpy as np
from pyvista import examples
```
Computing mass properties such as the volume or area of datasets in
PyVista is quite easy using the
`pyvista.DataSetFilters.compute_cell_sizes`{.interpreted-text
role="func"} filter and the `pyvista.DataSet.volume`{.interpreted-text
role="attr"} property on all PyVista meshes.
Let's get started with a simple gridded mesh:
```
# Load a simple example mesh
dataset = examples.load_uniform()
dataset.set_active_scalars("Spatial Cell Data")
```
We can then calculate the volume of every cell in the array using the
`.compute_cell_sizes` filter which will add arrays to the cell data of
the mesh for the volume and area by default.
```
# Compute volumes and areas
sized = dataset.compute_cell_sizes()
# Grab volumes for all cells in the mesh
cell_volumes = sized.cell_arrays["Volume"]
```
We can also compute the total volume of the mesh using the `.volume`
property:
```
# Compute the total volume of the mesh
volume = dataset.volume
```
Okay, awesome! But what if we have a dataset that we threshold, leaving two
volumetric bodies in one dataset? Take this for example:
```
threshed = dataset.threshold_percent([0.15, 0.50], invert=True)
threshed.plot(show_grid=True, cpos=[-2, 5, 3])
```
We could then assign a classification array for the two bodies, compute
the cell sizes, then extract the volumes of each body. Note that there
is a simpler implementation of this below in
`split_vol_ref`{.interpreted-text role="ref"}.
```
# Create a classifying array to ID each body
rng = dataset.get_data_range()
cval = ((rng[1] - rng[0]) * 0.20) + rng[0]
classifier = threshed.cell_arrays["Spatial Cell Data"] > cval
# Compute cell volumes
sizes = threshed.compute_cell_sizes()
volumes = sizes.cell_arrays["Volume"]
# Split volumes based on classifier and get volumes!
idx = np.argwhere(classifier)
hvol = np.sum(volumes[idx])
idx = np.argwhere(~classifier)
lvol = np.sum(volumes[idx])
print(f"Low grade volume: {lvol}")
print(f"High grade volume: {hvol}")
print(f"Original volume: {dataset.volume}")
```
Or better yet, you could simply extract the largest volume from your
thresholded dataset by passing `largest=True` to the `connectivity`
filter or by using `extract_largest` filter (both are equivalent).
```
# Grab the largest connected volume present
largest = threshed.connectivity(largest=True)
# or: largest = threshed.extract_largest()
# Get volume as numeric value
large_volume = largest.volume
# Display it!
largest.plot(show_grid=True, cpos=[-2, 5, 3])
```
------------------------------------------------------------------------
Splitting Volumes {#split_vol_ref}
=================
What if instead, we wanted to split all the different connected bodies /
volumes in a dataset like the one above? We could use the
`pyvista.DataSetFilters.split_bodies`{.interpreted-text role="func"}
filter to extract all the different connected volumes in a dataset into
blocks in a `pyvista.MultiBlock`{.interpreted-text role="class"}
dataset. For example, let's split the thresholded volume in the example
above:
```
# Load a simple example mesh
dataset = examples.load_uniform()
dataset.set_active_scalars("Spatial Cell Data")
threshed = dataset.threshold_percent([0.15, 0.50], invert=True)
bodies = threshed.split_bodies()
for i, body in enumerate(bodies):
print(f"Body {i} volume: {body.volume:.3f}")
bodies.plot(show_grid=True, multi_colors=True, cpos=[-2, 5, 3])
```
------------------------------------------------------------------------
A Real Dataset
==============
Here is a realistic training dataset of fluvial channels in the
subsurface. This will threshold the channels from the dataset then
separate each significantly large body and compute the volumes for each!
Load up the data and threshold the channels:
```
data = examples.load_channels()
channels = data.threshold([0.9, 1.1])
```
Now extract all the different bodies and compute their volumes:
```
bodies = channels.split_bodies()
# Now remove all bodies with a small volume
for key in bodies.keys():
b = bodies[key]
vol = b.volume
if vol < 1000.0:
del bodies[key]
continue
# Now lets add a volume array to all blocks
b.cell_arrays["TOTAL VOLUME"] = np.full(b.n_cells, vol)
```
Print out the volumes for each body:
```
for i, body in enumerate(bodies):
print(f"Body {i:02d} volume: {body.volume:.3f}")
```
And visualize all the different volumes:
```
bodies.plot(scalars="TOTAL VOLUME", cmap="viridis", show_grid=True)
```
# Cross-Correlation Example
The cross-correlation is defined by
\begin{equation}
R_{xy}(\tau)=\int_{-\infty}^{\infty}x(t)y(t+\tau)\mathrm{d} t
\tag{1}
\end{equation}
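A minimal discrete sketch of definition (1) with NumPy, using a toy impulse signal that is purely illustrative (not part of the sonar data below): a copy of a signal delayed by two samples makes the cross-correlation peak at lag 2.

```python
import numpy as np

# Toy impulse and a copy delayed by 2 samples
x = np.array([0., 1., 0., 0., 0.])
y = np.roll(x, 2)                       # y(t) = x(t - 2)
R = np.correlate(y, x, mode='full')     # discrete R_xy over all lags
lag = np.argmax(R) - (len(x) - 1)       # convert array index to a lag in samples
print(lag)  # 2
```

This is exactly the mechanism used later in the notebook to read off the echo delay.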
Consider a ship sailing through poorly charted waters. To navigate safely, the ship needs some notion of the depth of the water column beneath it. Inspecting the water column visually is difficult, since light does not propagate well in water. However, we can use sound waves for this.

The ship is therefore equipped with a sound source and a hydrophone. The source emits a signal into the water, $s(t)$, which propagates to the bottom and is then reflected. The hydrophone, close to the sound source, picks up the direct sound, $s(t)$, and the reflection, a delayed and attenuated version of the emitted signal, $r_c s(t-\Delta)$. However, both signals are corrupted by noise, especially the reflection. Thus, the measured signals are:
\begin{equation}
x(t)=s(t) + n_x(t)
\end{equation}
\begin{equation}
y(t)=s(t) + r_c s(t-\Delta) + n_y(t)
\end{equation}
Let's start by looking at these signals.
```
# import the required libraries
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
from scipy.stats import norm
from scipy import signal
plt.rcParams.update({'font.size': 14})
import IPython.display as ipd # to play signals
import sounddevice as sd
# Sampling frequency and time vector
fs = 1000
time = np.arange(0, 2, 1/fs)
Delta = 0.25
r_c = 0.5
# initialize the random number generator
#np.random.seed(0)
# signal s(t)
st = np.random.normal(loc = 0, scale = 1, size = len(time))
# Background noise
n_x = np.random.normal(loc = 0, scale = 0.1, size = len(time))
n_y = np.random.normal(loc = 0, scale = 1, size = len(time))
# Signals x(t) and y(t)
xt = st + n_x # The signal is fully contaminated by noise
yt = np.zeros(len(time)) + st + n_y # Initialize - the signal is fully contaminated by noise
yt[int(Delta*fs):] = yt[int(Delta*fs):] + r_c * st[:len(time)-int(Delta*fs)] # From a certain instant on we have the reflection
# plot signal
plt.figure(figsize = (10, 6))
plt.subplot(2,1,1)
plt.plot(time, xt, linewidth = 1, color='b', alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.title('Emitted signal contaminated by noise')
plt.ylabel(r'$x(t)$')
plt.xlabel('Time [s]')
plt.xlim((0, time[-1]))
plt.ylim((-5, 5))
plt.subplot(2,1,2)
plt.plot(time, yt, linewidth = 1, color='b', alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.title('Recorded signal contaminated by noise')
plt.ylabel(r'$y(t)$')
plt.xlabel('Time [s]')
plt.xlim((0, time[-1]))
plt.ylim((-5, 5))
plt.tight_layout()
```
# How can we estimate the distance to the bottom?
Let's think about measuring the auto-correlation of $y(t)$ and the cross-correlation between $x(t)$ and $y(t)$. Try using the concept of the expectation operator ($E[\cdot]$) to build some intuition. With it, you can show that
\begin{equation}
R_{yy}(\tau)=(1+r_{c}^{2})R_{ss}(\tau) + R_{n_y n_y}(\tau) + r_c R_{ss}(\tau-\Delta) + r_c R_{ss}(\tau+\Delta)
\end{equation}
\begin{equation}
R_{xy}(\tau)=R_{ss}(\tau) + r_c R_{ss}(\tau-\Delta)
\end{equation}
```
# Let's compute the auto-correlation
Ryy = np.correlate(yt, yt, mode = 'same')
Rxy = np.correlate(xt, yt, mode = 'same')
tau = np.linspace(-0.5*len(Rxy)/fs, 0.5*len(Rxy)/fs, len(Rxy))
#tau = np.linspace(0, len(Rxy)/fs, len(Rxy))
# plot the auto-correlation
plt.figure(figsize = (10, 3))
plt.plot(tau, Ryy/len(Ryy), linewidth = 1, color='b')
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{yy}(\tau)$')
#plt.xlim((tau[0], tau[-1]))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
# plot the cross-correlation
plt.figure(figsize = (10, 3))
plt.plot(-tau,Rxy/len(Ryy), linewidth = 1, color='b')
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{xy}(\tau)$')
#plt.xlim((tau[0], tau[-1]))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
```
# Knowing the speed of sound in water...
We can compute the distance: $c_{a} = 1522$ [m/s].
```
find_peak = np.where(np.logical_and(Rxy/len(Ryy) >= 0.2, Rxy/len(Ryy) <= 0.5))
lag = -tau[find_peak[0][0]]
distance = 0.5*1522*lag
print('The detected delay is: {:.2f} [s]'.format(lag))
print('The distance to the bottom is: {:.2f} [m]'.format(distance))
```
## Vehicle Detection
### Import
Import the packages used.
```
import numpy as np
import os
import cv2
import pickle
import glob
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from skimage.feature import hog
import time
from sklearn.svm import LinearSVC
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import linear_model
import Augmentor
%matplotlib inline
```
### Load training data
The following code loads the raw data of car and non-car images that are stored in folders as .PNG images. It prints their lengths and the ratio of car to non-car images. The ratio is close to 100%, meaning the two classes have approximately the same size, which is valuable for training the classifier. I have added the possibility to enlarge the dataset using an Augmentor pipeline.
However, the accuracy of the classifier didn't improve much, meaning it tends to overfit.
```
augment_dataCar = False
if augment_dataCar is True :
p = Augmentor.Pipeline('train_data/vehicles/KITTI_extracted',save_format='PNG')
p.rotate(probability=0.8, max_left_rotation=2, max_right_rotation=2)
p.zoom(probability=0.8, min_factor=1.1, max_factor=1.4)
p.flip_left_right(probability=0.5)
p.random_distortion(probability=0.6, magnitude = 1, grid_width = 8, grid_height = 8)
p.sample(8000)
p.process()
augment_dataNonCar = False
if augment_dataNonCar is True :
p = Augmentor.Pipeline('train_data/non-vehicles')
p.rotate(probability=0.8, max_left_rotation=2, max_right_rotation=2)
p.zoom(probability=0.8, min_factor=1, max_factor=1.2)
p.flip_left_right(probability=0.5)
p.random_distortion(probability=0.6, magnitude = 1, grid_width = 8, grid_height = 8)
p.sample(5000)
p.process()
def renamedir() :
dirname = "train_data/vehicles/KITTI_extracted/output"
for i, filename in enumerate(os.listdir(dirname)):
os.rename(dirname + "/" + filename, dirname +"/"+ str(i) + ".png")
if (augment_dataCar and augment_dataNonCar) :
renamedir()
car_images = glob.glob('train_data/vehicles/KITTI_extracted/output/*.png')
noncar_images = glob.glob('train_data/non-vehicles/output/**/*.png')
ratio = (len(car_images)/ len(noncar_images))*100
print(len(car_images), len(noncar_images), round(ratio))
def show_image_compare(image1,image2) :
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
#f.tight_layout()
ax1.imshow(image1)
ax1.set_title('Dataset car image', fontsize=20)
ax2.imshow(image2)
ax2.set_title('Data noncar image', fontsize=20)
rand = np.random.randint(0,len(car_images))
show_image_compare(mpimg.imread(car_images[rand]), mpimg.imread(noncar_images[rand]))
def show_image_compare_feature_extraction(image1,image2,image3,original,colorspace) :
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 8))
#f.tight_layout()
ax1.imshow(image1, cmap='gray')
ax1.set_title('channel1', fontsize=20)
ax2.imshow(image2, cmap='gray')
ax2.set_title('channel2', fontsize=20)
ax3.imshow(image3, cmap='gray')
ax3.set_title('channel3', fontsize=20)
ax4.imshow(original, cmap='gray')
ax4.set_title('original', fontsize=20)
ax1.set_xlabel(colorspace)
```
### Convert image to Histogram of Oriented Gradient (HOG)
HOG stands for histogram of oriented gradients. A built-in function is provided by the skimage library. The parameters of the hog function are listed below.
* <b>img </b>: input image
* <b>orient </b>: number of possible orientation of the gradient
* <b>pix_per_cell </b>: size (in pixel) of a cell
* <b>cell_per_block </b>: Number of cells in each block
* <b>vis </b>: Allow returning an image of the gradient
* <b>feature_vec </b>: Allow returning the data as a feature vector
The ``get_hog_features`` returns the extracted features and/or an example image within a ``numpy.ndarray`` depending on the value of the ``vis`` (``True`` or ``False``) parameter.
<i>The code is copy-pasted from the lesson material.</i>
```
def get_hog_features(img, orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True):
# Call with two outputs if vis==True
if vis == True:
features, hog_image = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=True,  # made consistent with the feature-extraction branch below
visualise=vis, feature_vector=feature_vec)
return features, hog_image
# Otherwise call with one output
else:
features = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features
```
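As a quick sanity check on these parameters, the length of the returned feature vector can be computed by hand from ``orient``, ``pix_per_cell`` and ``cell_per_block`` (a back-of-the-envelope sketch, assuming 64×64 training patches and skimage's block layout):

```python
# Expected HOG feature-vector length for a square image.
# blocks_per_dim follows the sliding-block layout used by skimage's hog.
def hog_feature_length(img_size=64, orient=12, pix_per_cell=8, cell_per_block=2):
    blocks_per_dim = img_size // pix_per_cell - cell_per_block + 1
    return blocks_per_dim ** 2 * cell_per_block ** 2 * orient

print(hog_feature_length())      # 2352 features per channel
print(3 * hog_feature_length())  # 7056 when hog_channel='ALL'
```

This makes it easy to predict how a parameter change (e.g. more orientation bins) grows the feature vector before re-running the extraction.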
### Method to Extract HOG Features from an Array of Car and Non-Car Images
The ``extract_features`` function extracts features from a list of images and returns them as a ``list``.
<u>Note</u>: This function could also call bin_spatial() and color_hist() (as in the lessons) to extract flattened spatial color features and color-histogram features, and combine them all for classification.
[Jeremy Shannon](https://github.com/jeremy-shannon/CarND-Vehicle-Detection/blob/master/vehicle_detection_project.ipynb) provides on his GitHub an insightful study of the influence of the parameters of the ``get_hog_features`` function. He chose the YUV color space, 11 orientations, 16 pixels per cell, 2 cells per block, and ALL color channels. This gave him 98.17% accuracy and a 55.22 s extraction time for the entire dataset. The full comparison can be found on his GitHub.
As concluded by this [document](https://www.researchgate.net/publication/224200365_Color_exploitation_in_hog-based_traffic_sign_detection) (which analyzed the effect of color spaces on traffic-sign classification), the YCrCb and CIELab color spaces perform well. YUV shares similarities with YCrCb ([Tushar Chugh](https://github.com/TusharChugh/Vehicle-Detection-HOG/blob/master/src/vehicle-detection.ipynb)), making YUV a good candidate, as shown by various Medium articles.
From [Tushar Chugh](https://github.com/TusharChugh/Vehicle-Detection-HOG/blob/master/src/vehicle-detection.ipynb) I concluded that color-histogram and spatial-histogram features do not provide a strong accuracy improvement, and omitting them reduces the extraction time. Their small improvement is therefore not a strong asset in this case.
When extracting features for autonomous driving, accuracy is as important as extraction time. Hence Jeremy Shannon's choice was logical, but I thought it would be interesting to investigate another parameter set with a good trade-off between extraction time and accuracy, even if neither is the best. Jeremy Shannon did not apply dataset augmentation, which could provide an accuracy improvement. The parameter set used here brings a 1.52% accuracy decrease and a 13.19 s extraction-time improvement, meaning the data augmentation would have to compensate the accuracy drop to be worthwhile (this assumption turned out to be false due to overfitting).
After some experimentation, the LAB color space did not work as well as YUV or YCrCb, so I continued with YUV. The number of orientations affects the computation; I found that 12 gradient orientation bins work well.
<i>The code is copied from the lesson material. Color-histogram and spatial-binning features have been omitted.</i>
```
def extract_features(imgs, cspace='RGB', orient=9,
pix_per_cell=8, cell_per_block=2, hog_channel=0):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
# Read in each one by one
image = mpimg.imread(file)
#image = np.copy(np.sqrt(np.mean(np.square(image))))
# apply color conversion if other than 'RGB'
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
elif cspace == 'LAB':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LAB)
else: feature_image = np.copy(image)
# Call get_hog_features() with vis=False, feature_vec=True
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.append(get_hog_features(feature_image[:,:,channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
hog_features = np.ravel(hog_features)
else:
hog_features = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append the new feature vector to the features list
features.append(hog_features)
# Return list of feature vectors
return features
```
### Preparing the data
```
# Feature extraction parameters
colorspace = 'YUV'
orient = 12
pix_per_cell = 8
cell_per_block = 2
hog_channel = 'ALL'
# Extracting feature from car and noncar images
car_features = extract_features(car_images, cspace=colorspace, orient=orient,
pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
hog_channel=hog_channel)
notcar_features = extract_features(noncar_images, cspace=colorspace, orient=orient,
pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
hog_channel=hog_channel)
img = mpimg.imread('./test_images/1.bmp')
img2 = cv2.cvtColor(img,cv2.COLOR_RGB2LAB)
t=time.time()
temp, im1 = get_hog_features(img2[:,:,0], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)
temp, im2 = get_hog_features(img2[:,:,1], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)
temp, im3 = get_hog_features(img2[:,:,2], orient, pix_per_cell, cell_per_block, vis=True, feature_vec=True)
t2 = time.time()
show_image_compare_feature_extraction(im1,im2,im3,img,'LAB')
print(round(t2-t,2), 's of extraction time per image (ALL channel)')
```
In the code below, ``car_features`` and ``notcar_features`` are stacked vertically. The resulting matrix is then scaled using ``StandardScaler``, which removes the mean of each column and scales it to unit variance. The label vector is built by stacking ones (cars) and zeros (non-cars) horizontally. The training and test sets are then created with a random, shuffled 80/20 split.
```
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
#print(X[0])
#StandardScaler decrease classifying performances (?)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
pickle.dump(X_scaler, open('X_scaler', 'wb'))
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
pickle.dump(X_scaler, open('scaled_X', 'wb'))
#print(scaled_X[0])
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split X and y (data) into training and testing set
#rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
scaled_X, y, test_size=0.2, random_state=42)
```
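For intuition, the per-column standardization that ``StandardScaler`` performs can be sketched in plain NumPy (illustrative only, not the sklearn implementation):

```python
import numpy as np

# Toy feature matrix: two columns with very different ranges,
# like raw HOG blocks vs. other feature types.
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

# Standardize each column: subtract its mean, divide by its std.
mean = X.mean(axis=0)
std = X.std(axis=0)
scaled = (X - mean) / std

print(scaled.mean(axis=0))  # approximately [0. 0.]
print(scaled.std(axis=0))   # approximately [1. 1.]
```

Without this step, a feature with a large numeric range would dominate the classifier's decision simply because of its scale.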
### Train a classifier
The SGDClassifier used here is a variant of a linear support vector machine. I tested Naive Bayes and decision tree models, which showed similar accuracy. The SGDClassifier, however, provides a much better training time at a small accuracy cost, so I preferred it over the others.
```
# Use a linear classifier trained with stochastic gradient descent
SGDC = linear_model.SGDClassifier()
# Check the training time
t = time.time()
SGDC.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(SGDC.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SGDC predicts: ', SGDC.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
# save the model to disk
filename = 'model'+str(colorspace)+str(orient)+str(pix_per_cell)+str(cell_per_block)+str(hog_channel)+'.sav'
pickle.dump(SGDC, open(filename, 'wb'))
# load the model from disk
filename = 'model'+str(colorspace)+str(orient)+str(pix_per_cell)+str(cell_per_block)+hog_channel+'.sav'
SGDC = pickle.load(open(filename, 'rb'))
X_scaler = pickle.load(open('X_scaler', 'rb'))
scaled_X = pickle.load(open('scaled_X', 'rb'))
```
### Sliding Windows
``find_cars`` is a function which extracts the HOG features of an entire image and then applies a sliding-window technique to the HOG representation. Each window is passed to the SGDC classifier to predict whether it contains a vehicle. If it does, the box coordinates of the predicted vehicle are computed and returned by the function, together with an image with the boxes drawn on it.
The function ``apply_sliding_window`` applies the ``find_cars`` sliding-window method with multiple window sizes, which makes the procedure robust to scale. The y start and stop positions define a band where the sliding-window technique is applied, which allows focusing on a region of interest (ROI). The scale of each sliding window is chosen so that an entire car fits in the window.
I am facing a computation-time problem: the sliding-window search takes about 15 s per image, which implies a tremendous amount of time to process an entire video. This should be solved, but I have not found the cause yet.
Even though the classifier provides good accuracy, the number of false positives and false negatives is still high. Enlarging the ROI and adding more sliding-window searches was not viable due to the very high computation time. Color-histogram features might reduce those false positives, so this possibility should be tested.
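A lesson-style color-histogram feature, mentioned above as a possible false-positive filter, could be sketched as follows (the function name and defaults are assumptions mirroring the lesson's ``color_hist`` helper; it is not wired into this pipeline):

```python
import numpy as np

# Sketch of a per-channel color histogram feature for a 3-channel patch.
def color_hist(img, nbins=32, bins_range=(0, 256)):
    channel_hists = [np.histogram(img[:, :, ch], bins=nbins, range=bins_range)[0]
                     for ch in range(3)]
    # Concatenate the three histograms into one feature vector.
    return np.concatenate(channel_hists)

demo = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in 64x64 patch
features = color_hist(demo)
print(features.shape)  # (96,) -> 32 bins x 3 channels
```

The resulting 96 extra features would be concatenated with the HOG vector before scaling, at the cost of slightly longer extraction time.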
<i>The code is inspired by the lesson material.</i>
```
# convert the given image into the chosen color space
def convert_color(img, conv='RGB2YCrCb'):
if conv == 'RGB2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
if conv == 'BGR2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
if conv == 'RGB2LUV':
return cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
if conv == 'RGB2YUV':
return cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
# Here is your draw_boxes function from the previous exercise (from lecture)
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
random_color = False
# Iterate through the bounding boxes
for bbox in bboxes:
if color == 'random' or random_color:
color = (np.random.randint(0,255), np.random.randint(0,255), np.random.randint(0,255))
random_color = True
# Draw a rectangle given bbox coordinates
cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
# Return the image copy with boxes drawn
return imcopy
def find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block):
draw_img = np.copy(img)
img = img.astype(np.float32)/255
img_tosearch = img[ystart:ystop,:,:] # sub-sampling
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YUV')
if scale != 1:
imshape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(imshape[1]/scale), np.int(imshape[0]/scale)))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Define blocks and steps as above
nxblocks = (ch1.shape[1] // pix_per_cell) - cell_per_block + 1
nyblocks = (ch1.shape[0] // pix_per_cell) - cell_per_block + 1
nfeat_per_block = orient*cell_per_block**2
# 64 was the original sampling rate, with 8 cells and 8 pix per cell
window = 64
nblocks_per_window = (window // pix_per_cell) - cell_per_block + 1
#nblocks_per_window = (window // pix_per_cell)-1
cells_per_step = 2 # Instead of overlap, define how many cells to step
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
# Compute individual channel HOG features for the entire image
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=False)
bboxes = []
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb*cells_per_step
xpos = xb*cells_per_step
# Extract HOG for this patch
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
xleft = xpos*pix_per_cell
ytop = ypos*pix_per_cell
# Extract the image patch
subimg = cv2.resize(ctrans_tosearch[ytop:ytop+window, xleft:xleft+window], (64,64))
# Get color features
#spatial_features = bin_spatial(subimg, size=spatial_size)
#hist_features = color_hist(subimg, nbins=hist_bins)
# Scale features and make a prediction
test_stacked = np.hstack(hog_features).reshape(1, -1)
test_features = X_scaler.transform(test_stacked)
#test_features = scaler.transform(np.array(features).reshape(1, -1))
#test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1))
test_prediction = SGDC.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(xleft*scale)
ytop_draw = np.int(ytop*scale)
win_draw = np.int(window*scale)
cv2.rectangle(draw_img,(xbox_left, ytop_draw+ystart),(xbox_left+win_draw,ytop_draw+win_draw+ystart),(0,0,255),6)
bboxes.append(((int(xbox_left), int(ytop_draw+ystart)),(int(xbox_left+win_draw),int(ytop_draw+win_draw+ystart))))
return draw_img, bboxes
def apply_sliding_window(image, SGDC, X_scaler, orient, pix_per_cell, cell_per_block):
    t = time.time()
    bboxes = []
    # (ystart, ystop, scale) search bands covering the road at several window sizes
    search_bands = [(400, 500, 1.0), (400, 500, 1.3), (410, 500, 1.4),
                    (420, 556, 1.6), (430, 556, 1.8), (430, 556, 2.0),
                    (440, 556, 1.9), (400, 556, 1.3), (400, 556, 2.2),
                    (500, 656, 3.0)]
    for ystart, ystop, scale in search_bands:
        # always search the ORIGINAL image so previously drawn boxes
        # cannot disturb the HOG features of later bands
        out_img, band_boxes = find_cars(image, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)
        bboxes.extend(band_boxes)
    out_img = draw_boxes(image, bboxes)
    t2 = time.time()
    print(round(t2-t, 2), 'apply sliding window')
    return out_img, bboxes
img = mpimg.imread('./test_images/3.bmp')
ystart = 400
ystop = 596
scale = 1.2
#plt.imshow(out_img)
t=time.time()
out, bo = apply_sliding_window(img, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)
t2 = time.time()
print(round(t2-t,2), 's of execution per frame')
temp=draw_boxes(img,bo)
plt.imshow(temp)
```
### Heatmap
To filter false positives, a heat map is used. The principle is the following: first a map is created, a zero array the size of the analyzed image. Each time a car is detected and a box is found, 1 is added to the corresponding region of the array. A zero in the heat map thus means that no car was ever found there during the process; a value of one means a vehicle was found once, and a value greater than one means a vehicle was found multiple times. The heat map is therefore a measure of the confidence of the prediction. Applying a threshold of 1 (meaning we consider a single positive prediction in an area not confident enough) allows the false positives to be filtered out.
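The principle can be illustrated on a tiny array (toy numbers, not pipeline output):

```python
import numpy as np

# Two overlapping detections on a 4x6 "image".
heat = np.zeros((4, 6))
boxes = [((0, 0), (4, 3)),   # ((x1, y1), (x2, y2))
         ((2, 1), (6, 4))]   # overlaps the first box

for (x1, y1), (x2, y2) in boxes:
    heat[y1:y2, x1:x2] += 1   # add heat inside each box

heat[heat <= 1] = 0           # threshold of 1: single detections are dropped
print(heat)                   # only the doubly-detected overlap remains
```

Only the region covered by both boxes survives the threshold, which is exactly how isolated false positives get removed.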
```
from scipy.ndimage.measurements import label
def add_heat(heatmap, bbox_list):
# Iterate through list of bboxes
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap
def apply_threshold(heatmap, threshold):
# Zero out pixels below the threshold
heatmap[heatmap <= threshold] = 0
# Return thresholded map
return heatmap
def draw_labeled_bboxes(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1]+1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
heatmap = np.zeros_like(out[:,:,0]).astype(np.float)
heatmap = add_heat(heatmap,bo)
heatmap = apply_threshold(heatmap, 1)
labels = label(heatmap)
print(labels[1], 'cars found')
plt.imshow(labels[0], cmap='gray')
# Read in the last image above
#image = mpimg.imread('img105.jpg')
# Draw bounding boxes on a copy of the image
draw_img = draw_labeled_bboxes(np.copy(img), labels)
# Display the image
plt.imshow(draw_img)
def show_image_compare_heatmap(img, heatmap, temp, boxen):
    f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 8))
    #f.tight_layout()
    ax1.imshow(img, cmap='gray')
    ax1.set_title('Original', fontsize=20)
    ax2.imshow(temp, cmap='gray')
    ax2.set_title('Boxed cars', fontsize=20)
    ax3.imshow(heatmap, cmap='gray')
    ax3.set_title('Heatmap (thresholded)', fontsize=20)
    ax4.imshow(boxen, cmap='gray')
    ax4.set_title('Filtered boxed image', fontsize=20)
# the function must be defined before it is called
show_image_compare_heatmap(img, heatmap, temp, draw_img)
```
### Vehicle detection pipeline
```
def vehicle_detection_piepline(image) :
# Find cars in image using multiple sliding window method
# Filter false positive using tresholded heatmap
# label the vehicles
# draw the boxes on the image
detected_image, boxes = apply_sliding_window(image, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)
heatmap = np.zeros_like(detected_image[:,:,0]).astype(np.float)
heatmap = add_heat(heatmap,boxes)
heatmap = apply_threshold(heatmap, 2)
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(image), labels)
return draw_img
image_output_vehicle = vehicle_detection_piepline(mpimg.imread('./test_images/3.bmp'))
plt.imshow(image_output_vehicle)
from collections import deque
import imageio
imageio.plugins.ffmpeg.download()
from moviepy.editor import VideoFileClip
from IPython.display import HTML
output = 'test_result.mp4'
clip = VideoFileClip("test_video.mp4")
video_clip = clip.fl_image(vehicle_detection_piepline)
%time video_clip.write_videofile(output, audio=False)
history = deque(maxlen = 8)
output = 'result.mp4'
clip = VideoFileClip("project_video.mp4")
video_clip = clip.fl_image(vehicle_detection_piepline)
%time video_clip.write_videofile(output, audio=False)
```
The output video can be found [here](https://youtu.be/Gt2ZO6IfRfo)
### temp
```
def apply_sliding_window(img, SGDC, X_scaler, orient, pix_per_cell, cell_per_block):
    t = time.time()
    rectangles = []
    # (ystart, ystop, scale) search bands
    search_bands = [(400, 464, 1.0), (400, 464, 1.3), (416, 480, 1.0),
                    (400, 496, 1.5), (432, 528, 1.5), (400, 528, 2.0),
                    (432, 560, 2.0), (400, 596, 1.2), (464, 660, 3.5)]
    # two further bands, (400, 556, 1.3) and (400, 556, 2.2), were tried
    # and left disabled here
    for ystart, ystop, scale in search_bands:
        boxen_image, bbobx = find_cars(img, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)
        rectangles.extend(bbobx)
    t2 = time.time()
    print(round(t2-t, 2), 'apply sliding window')
    return boxen_image, rectangles
def vehicle_detection_piepline(image):
    # Find cars in the image using a multi-band sliding-window search,
    # filter false positives with a thresholded heatmap,
    # then label the vehicles and draw the boxes
    rectangles = []
    # (ystart, ystop, scale) search bands
    search_bands = [(400, 464, 1.0), (400, 464, 1.3), (416, 480, 1.0),
                    (400, 496, 1.5), (432, 528, 1.5), (400, 528, 2.0),
                    (432, 560, 2.0), (400, 596, 1.2), (464, 660, 3.5),
                    (400, 556, 1.3), (400, 556, 2.2)]
    for ystart, ystop, scale in search_bands:
        # search the function argument `image` (the original cell searched a
        # global `img` by mistake)
        boxen_image, bbobx = find_cars(image, ystart, ystop, scale, SGDC, X_scaler, orient, pix_per_cell, cell_per_block)
        rectangles.extend(bbobx)
    heatmap = np.zeros_like(boxen_image[:,:,0]).astype(np.float)
    heatmap = add_heat(heatmap, rectangles)
    heatmap = apply_threshold(heatmap, 2)
    labels = label(heatmap)
    draw_img = draw_labeled_bboxes(np.copy(image), labels)
    return draw_img
image_output_vehicle = vehicle_detection_piepline(mpimg.imread('./test_images/3.bmp'))
plt.imshow(image_output_vehicle)
```
```
#Python Basics
#Functions in Python
#Functions take some inputs, then produce some outputs
#A function is just a piece of code that you can reuse
#You can implement your own functions, but in many cases people reuse other people's functions
#In that case, it is important to know how the function works and how to import it
#Python has many built-in functions
#For example, the function "len" computes the length of a list
list1=[1,7,7.8, 9,3.9, 2, 8, 5.01, 6,2, 9, 11, 46, 91, 58, 2]
n=len(list1)
print("The length of list1 is ",n, "elements")
list2=[2, 8, 5.01, 6,2, 9]
m=len(list2)
print("The length of list2 is ",m, "elements")
#For example, the function "sum" returns the total of all of the elements
list3=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
n=sum(list3)
print("The total of list3 is ", n)
list4=[72, 98, 15.01, 16,2, 69.78]
m=sum(list4)
print("The total of list4 is ", m)
#METHODS are similar to the functions
#sort vs sorted
#for example we have two ways to sort a list through "sort method" and "sorted function"
#Assume we have a list, namely Num
Num=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
#we can sort this variable using sorted function as follows
#sort vs sorted
Num_rating=sorted(Num)
print(Num_rating)
print(Num)
#So Num is not changed, but in the case of sort method the list itself is changed
#in this case no new list is created
Num=[10,74,798, 19,3.9, 12, 8, 5.01, 6,2, 19, 11, 246, 91, 58, 2.2]
print("Before applying the sort method, Num has these values:", Num)
Num.sort()
print("After applying the sort method, Num has these values:", Num)
#Making our own functions in Python
#For making a function, def FunctionName(input):
def Add1(InputFunc):
OUT=InputFunc+15
return OUT
#We can reuse function Add1 among our program
print(Add1(3))
print(Add1(15))
Add1(3.144)
#Example
#a is input of the function
#y is the output of the function
#Whenever function Time1 is called the output is calculated
def Time1(a):
y=a*15
return y
c=Time1(2)
print(c)
d=Time1(30)
print(d)
#Documentation function using """ Documentation"""
def Add1(InputFunc):
"""
ADD Function
"""
OUT=InputFunc+15
return OUT
#functions with multiple parameters
def MULTIPARA(a, b, c):
W1=a*b+ c
W2=(a+b)*c
return (W1,W2)
print(MULTIPARA(2,3,7))
#functions with multiple parameters
def Mu2(a, b, c):
    # c was undefined in the original cell; it is now a third parameter
    W1 = a*b + c
    W2 = (a+b)*c
    W3 = 15/a
    W4 = 65/b
    return (W1, W2, W3, W4)
print(Mu2(11, 3, 7))
def Mu3(a1, a2, a3, a4):
c1=a1*a2+ a4+23
c2=(a3+a1)*a2
c3=15/a3
c4=65/a4  # removed the term 8976*d, which used an undefined variable d
return (c1,c2,c3,c4)
print(Mu3(0.008,0.0454,0.0323, 0.00232))
#repeating a string for n times
def mu4(St, REPEAT):
OUT=St*REPEAT
return OUT
print(mu4("Michael Jackson", 2))
print(mu4("Michael Jackson", 3))
print(mu4("Michael Jackson", 4))
print(mu4("Michael Jackson", 5))
#In many cases a function does not have a return statement
#In those cases, Python returns the special value "None"
#Assume MBJ() function with no inputs
def MBJ():
print("M: Mohammad")
print("B:Behdad")
print("J:Jamshdi")
#Calling functins with no parameters
MBJ()
def ERROR():
print("There is something wrong in codes")
#Calling function with no parameters
ERROR()
#Function which does not do anything
def NOWORK():
pass
#Calling function NOWORK
print(NOWORK())
#this function return "None"
#LOOPS in FUNCTIONS
#we can use loops in functions
#example
#Force [N] to mass [kg] converter
def Force2Mass(F):
    for S, Val in enumerate(F):
        print("The mass of item", S, "is", Val/9.8, "kg")
Fl=[344, 46783, 5623, 6357]
Force2Mass(Fl)
#Mass [kg] to force [N] converter
def Mass2Force(M):
    for S, Val in enumerate(M):
        we = Val*9.8
        print("The force of item", S, "is", we, "N")
        if we > 200:
            print("The above item is overweight")
M1=[54, 71, 59, 34, 21, 16, 15]
Mass2Force(M1)
#Collecting arguments
def AI_Methods(*names):
#The star * collects all positional arguments into a tuple
for name in names:
print(name)
#calling the function
AI_Methods("Deep Learning", "Machine Learning", "ANNs", "LSTM")
#or
AI1=["Deep Learning", "Machine Learning", "ANNs", "LSTM"]
AI_Methods(AI1)
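Note that the call `AI_Methods(AI1)` above passes the whole list as a single argument, so the function prints the list itself. To pass each element separately, unpack the list with `*`. One way to see the difference is to count the arguments received (a small illustrative sketch):

```python
# Count how many positional arguments a *args function receives.
def count_args(*names):
    return len(names)

ai1 = ["Deep Learning", "Machine Learning", "ANNs", "LSTM"]
print(count_args(ai1))    # 1  -> the list is a single argument
print(count_args(*ai1))   # 4  -> unpacking passes four arguments
```

So `AI_Methods(*AI1)` would print the four method names on separate lines, matching the first call in the cell above.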
#Local scope and global scope in function
#Every variable defined within a function has local scope
#Every variable defined outside a function has global scope
#A local variable only exists inside the function, while a global variable is visible in the whole program
#Assume we have Date as both global scope and local scope
def LOCAL(a):
Date=a+15
return(Date)
Date=1986
y=LOCAL(Date)
print(y)
#The difference is here, look at the output of the function
#Global Scope
print("Global Scope (BODY): ",Date)
#Local Scope
print("Local Scopes (Function): ", LOCAL(Date))
#If a variable is not defined in the function, the function uses its value from the body (global scope)
#Let's look at variable a
a=1
def add(b):
return a+b
c=add(10)
print(c)
def f(c):
return sum(c)
f([11, 67])
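Reading a global variable inside a function works, as shown above. *Assigning* to one, however, requires the `global` keyword (a small sketch):

```python
counter = 0

def increment():
    global counter   # without this line, counter += 1 would raise UnboundLocalError
    counter += 1

increment()
increment()
print(counter)  # 2
```

Without `global`, Python treats any name assigned inside the function as a new local variable.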
#Using if/else Statements and Loops in Functions
# Function example
def type_of_album(artist, album, year_released):
print(artist, album, year_released)
if year_released > 1980:
return "Modern"
else:
return "Oldie"
x = type_of_album("Michael Jackson", "Thriller", 1980)
print(x)
```
# Navigation
---
Congratulations on completing the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893)! In this notebook, you will learn how to control an agent in a more challenging environment, where it can learn directly from raw pixels! **Note that this exercise is optional!**
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/VisualBanana.app"`
- **Windows** (x86): `"path/to/VisualBanana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/VisualBanana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/VisualBanana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/VisualBanana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/VisualBanana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/VisualBanana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `VisualBanana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="VisualBanana.app")
```
```
env = UnityEnvironment(file_name="...")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The environment state is an array of raw pixels with shape `(1, 84, 84, 3)`. *Note that this code differs from the notebook for the project, where we are grabbing **`visual_observations`** (the raw pixels) instead of **`vector_observations`**.* A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.visual_observations[0]
print('States look like:')
plt.imshow(np.squeeze(state))
plt.show()
state_size = state.shape
print('States have shape:', state.shape)
```
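Since the state is a `(1, 84, 84, 3)` array of raw pixels, agents that learn from it typically start by collapsing each frame to grayscale (and often stacking several frames). A minimal sketch of such a preprocessing step (the luminance weights and the `preprocess_frame` name are assumptions here, not part of the environment API):

```python
import numpy as np

def preprocess_frame(state):
    """Collapse a (1, 84, 84, 3) RGB frame to a (1, 1, 84, 84) grayscale tensor."""
    # standard luminance weights (an assumption; any grayscale conversion works)
    gray = np.dot(state[..., :3], [0.299, 0.587, 0.114])   # -> (1, 84, 84)
    return gray[:, np.newaxis, :, :].astype(np.float32)    # -> (1, 1, 84, 84)

frame = np.random.rand(1, 84, 84, 3)   # stand-in for env_info.visual_observations[0]
print(preprocess_frame(frame).shape)   # (1, 1, 84, 84)
```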
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects actions uniformly at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.visual_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.visual_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
| github_jupyter |
```
# Imports
from biocrnpyler import *
from genelet import *
from subsbml import System, createSubsystem, combineSystems, createNewSubsystem, createBasicSubsystem, SimpleModel, SimpleReaction
import numpy as np
import pylab as plt
from bokeh.layouts import row
from bokeh.io import export_png
import warnings
import libsbml
import bokeh.io
import bokeh.plotting
```
## Cell 1 simple rR1 transport w/ rR1 external reservoir
```
ss1 = createSubsystem ("SBML_Models/BiSwitch_CRN.xml")
ss2 = createSubsystem ("SBML_Models/rR1_external_reservoir.xml")
# Create a simple rR1 membrane
mb1 = createSubsystem("SBML_Models/simplemembrane_rR1.xml", membrane = True)
cell_1 = System("cell_1")
cell_1.setInternal([ss1])
cell_1.setExternal([ss2])
cell_1.setMembrane(mb1)
cell_1_model = cell_1.getModel()
#cell_1 = System ("cell_1", ListOfInternalSubsystems = [ss1],
# ListOfExternalSubsystems = [ss2],
# ListOfMembraneSubsystems = [mb1])
##### Set Species Amount #####
cell_1_model.setSpeciesAmount('rna_rR1_e', 0, compartment = 'cell_1_external')
#cell_1_model.setSpeciesAmount('rna_rR1', 100, compartment = 'cell_1_external')
cell_1_model.setSpeciesAmount('Core1_OFF', 6e3, compartment = 'cell_1_internal')
cell_1_model.setSpeciesAmount('Core2_OFF', 4e3, compartment = 'cell_1_internal')
cell_1_model.setSpeciesAmount('dna_dA2', 6e3, compartment = 'cell_1_internal')
cell_1_model.setSpeciesAmount('dna_dA1', 6e3, compartment = 'cell_1_internal')
cell_1_model.setSpeciesAmount('protein_RNAseH', 40, compartment = 'cell_1_internal')
cell_1_model.setSpeciesAmount('protein_RNAP', 300, compartment = 'cell_1_internal')
#cell_1_model.getSBMLDocument().getModel().getCompartment(1).setSize(1e-4)
cell_1_model.writeSBML('cell_1_model.xml')
# Calling Names
X_id1 = cell_1_model.getSpeciesByName('complex_Core1_ON').getId()
X_id2 = cell_1_model.getSpeciesByName('Core1_OFF').getId()
X_id3 = cell_1_model.getSpeciesByName('complex_Core2_ON').getId()
X_id4 = cell_1_model.getSpeciesByName('Core2_OFF').getId()
X_id5 = cell_1_model.getSpeciesByName("rna_rR2").getId()
X_id6 = cell_1_model.getSpeciesByName("rna_rR1", compartment = 'cell_1_internal').getId()
X_id7 = cell_1_model.getSpeciesByName("rna_rR1_e", compartment = 'cell_1_external').getId()
X_id8 = cell_1_model.getSpeciesByName("complex_Core2_AI").getId()
X_id9 = cell_1_model.getSpeciesByName("complex_Core1_AI").getId()
X_id10 = cell_1_model.getSpeciesByName("dna_dA2").getId()
X_id11 = cell_1_model.getSpeciesByName("dna_dA1").getId()
print (X_id6)
# Simulate with BioScrape
timepoints = np.linspace(0, 28800, 1000)
results_1,_ = cell_1_model.simulateWithBioscrape(timepoints)
timepoints = timepoints/3600
# For label convenience
x = 'Time (hours)'
y = 'Concentration (uM)'
bokeh.io.output_notebook()
a = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
a.circle(timepoints, results_1[X_id1], legend_label = "Core1 ON" , color = "blue")
a.circle(timepoints, results_1[X_id3], legend_label = "Core2 ON", color = "red")
a.legend.click_policy="hide"
a.legend.location="bottom_right"
b = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
#b.circle(timepoints, results_1[X_id5], legend_label = "rR2", color = "red")
b.circle(timepoints, results_1[X_id6], legend_label = "rR1_internal", color = "blue")
b.legend.click_policy="hide"
b.legend.location="center_right"
c = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
c.circle(timepoints, results_1[X_id7], legend_label = "rR1_external", color = "orange")
c.legend.click_policy="hide"
c.legend.location="center_right"
bokeh.io.show(row(a, b, c))
warnings.filterwarnings("ignore")
d = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
d.circle(timepoints, results_1[X_id8], legend_label = "rR1_i:dA1", color = "orange")
d.circle(timepoints, results_1[X_id9], legend_label = "rR2:dA2", color = "magenta")
d.legend.click_policy="hide"
d.legend.location="bottom_right"
f = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
f.circle(timepoints, results_1[X_id10], legend_label = "dA2", color = "magenta")
f.circle(timepoints, results_1[X_id11], legend_label = "dA1", color = "orange")
f.legend.click_policy="hide"
f.legend.location="top_right"
X_id12 = cell_1_model.getSpeciesByName("complex_Core1_ON_protein_RNAP").getId()
X_id13 = cell_1_model.getSpeciesByName("complex_Core2_ON_protein_RNAP").getId()
g = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
g.circle(timepoints, results_1[X_id12], legend_label = "Core1_ON_RNAP", color = "blue")
#g.circle(timepoints, results_1[X_id13], legend_label = "Core2_ON_RNAP", color = "red")
g.legend.click_policy="hide"
g.legend.location="center_right"
bokeh.io.show(row(d, f, g))
warnings.filterwarnings("ignore")
```
## Cell 2 with transport protein ##
```
ss1 = createSubsystem ("SBML_Models/BiSwitch_CRN.xml")
ss2 = createSubsystem ("SBML_Models/rR1_external_reservoir.xml")
# Create a simple rR1 membrane
mb2 = createSubsystem("SBML_Models/rR1_membrane1_detailed.xml", membrane = True)
cell_2 = System("cell_2")
cell_2.setInternal([ss1])
cell_2.setExternal([ss2])
cell_2.setMembrane(mb2)
cell_2_model = cell_2.getModel()
##### Set Species Amount #####
#cell_2_model.setSpeciesAmount('rna_rR1_e', 0, compartment = 'cell_2_external')
cell_2_model.setSpeciesAmount('rna_rR1_e', 100, compartment = 'cell_2_external')
cell_2_model.setSpeciesAmount('Core1_OFF', 6e3, compartment = 'cell_2_internal')
cell_2_model.setSpeciesAmount('Core2_OFF', 5e3, compartment = 'cell_2_internal')
cell_2_model.setSpeciesAmount('dna_dA2', 6e3, compartment = 'cell_2_internal')
cell_2_model.setSpeciesAmount('dna_dA1', 6e3, compartment = 'cell_2_internal')
cell_2_model.setSpeciesAmount('protein_RNAseH', 20, compartment = 'cell_2_internal')
#cell_2_model.setSpeciesAmount('protein_RNAseH', 40, compartment = 'cell_2_internal')
cell_2_model.setSpeciesAmount('protein_RNAP', 150, compartment = 'cell_2_internal')
#cell_2_model.getSBMLDocument().getModel().getCompartment(1).setSize(1e-4)
cell_2_model.writeSBML('SBML_Models/cell_2_model.xml')
# Calling Names
X_id1 = cell_2_model.getSpeciesByName('complex_Core1_ON').getId()
X_id2 = cell_2_model.getSpeciesByName('Core1_OFF').getId()
X_id3 = cell_2_model.getSpeciesByName('complex_Core2_ON').getId()
X_id4 = cell_2_model.getSpeciesByName('Core2_OFF').getId()
X_id5 = cell_2_model.getSpeciesByName("rna_rR2").getId()
X_id6 = cell_2_model.getSpeciesByName("rna_rR1", compartment = 'cell_2_internal').getId()
X_id7 = cell_2_model.getSpeciesByName("rna_rR1_e", compartment = 'cell_2_external').getId()
X_id8 = cell_2_model.getSpeciesByName("complex_Core2_AI").getId()
X_id9 = cell_2_model.getSpeciesByName("complex_Core1_AI").getId()
X_id10 = cell_2_model.getSpeciesByName("dna_dA2").getId()
X_id11 = cell_2_model.getSpeciesByName("dna_dA1").getId()
# Simulate with BioScrape
timepoints = np.linspace(0, 28800, 1000)
results_2,_ = cell_2_model.simulateWithBioscrape(timepoints)
timepoints = timepoints/3600
# For label convenience
x = 'Time (hours)'
y = 'Concentration (uM)'
bokeh.io.output_notebook()
a = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
a.circle(timepoints, results_2[X_id1], legend_label = "Core1 ON" , color = "blue")
a.circle(timepoints, results_2[X_id3], legend_label = "Core2 ON", color = "red")
a.legend.click_policy="hide"
a.legend.location="bottom_right"
b = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
b.circle(timepoints, results_2[X_id5], legend_label = "rR2" , color = "red")
b.circle(timepoints, results_2[X_id6], legend_label = "rR1_i", color = "blue")
b.legend.click_policy="hide"
b.legend.location="center_right"
c = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
c.circle(timepoints, results_2[X_id7], legend_label = "rR1_e", color = "orange")
c.legend.click_policy="hide"
c.legend.location="center_right"
bokeh.io.show(row(a, c, b))
d = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
d.circle(timepoints, results_2[X_id8], legend_label = "rR1_i:dA1", color = "orange")
d.circle(timepoints, results_2[X_id9], legend_label = "rR2:dA2", color = "magenta")
d.legend.click_policy="hide"
d.legend.location="bottom_right"
f = bokeh.plotting.figure(plot_width=400, plot_height=300, x_axis_label = x, y_axis_label=y)
f.circle(timepoints, results_2[X_id10], legend_label = "dA2", color = "magenta")
f.circle(timepoints, results_2[X_id11], legend_label = "dA1", color = "orange")
f.legend.click_policy="hide"
f.legend.location="top_right"
bokeh.io.show(row(d, f))
warnings.filterwarnings("ignore")
```
| github_jupyter |
## 3. Exploring data tables with Pandas
1. Use Pandas to read the house prices data. How many columns and rows are there in this dataset?
2. The first step I usually take is to use commands like `DataFrame.head()` to print a few rows of the data. Look at what kinds of features are available and read `data_description.txt` for more info. Try to understand as much as you can. Pick three features you think will be good predictors of house prices and explain what they are.
3. How many unique conditions are there in SaleCondition? Use Pandas to find out how many samples are labeled with each condition. What do you learn from doing this?
4. Select one variable you picked in 2. Do you want to know something more about that variable? Use Pandas to answer your own question and describe briefly what you did here.
```
import pandas as pd
import numpy as np
import sklearn
data1 =pd.read_csv('train.csv')
print (data1)
#3.1
rows = 1460
columns = 81
#3.2
#MSSubClass, LotArea, and LotFrontage are features with a fairly strong effect on house prices
#3.3
for i in data1['SaleCondition'].unique():
print (i)
#3.3
#6 unique 1.Normal 2.Abnormal 3.Partial 4.AdjLand 5.Alloca 6.Family
data1['LotArea'].mean()
#3.4
# Picked 'LotArea': I want the mean lot size in square feet to see roughly what size most buyers go for
```
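For question 3.3, pandas' `value_counts()` gives both the unique conditions and their counts in one call; a sketch on a toy column standing in for `data1['SaleCondition']`:

```python
import pandas as pd

# toy stand-in for data1['SaleCondition']
sale_condition = pd.Series(['Normal', 'Normal', 'Partial', 'Abnorml', 'Normal'])

counts = sale_condition.value_counts()   # count of rows per condition
print(counts['Normal'])                  # 3
print(sale_condition.nunique())          # 3 unique conditions in the toy data
```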
## 4. Learning to explore data with Seaborn
1. Let us first look at the variable we want to predict, SalePrice. Use Seaborn to plot the histogram of sale prices. What do you notice in the histogram?
2. Plot the histogram of the LotArea variable. What do you notice in the histogram?
3. Use Seaborn to plot LotArea in the x-axis and SalePrice on the y-axis. Try plotting log(LotArea) versus log(SalePrice) and see if the plot looks better.
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(data1['SalePrice'])
#4.1 We can see that most houses sell in roughly the 100k-200k price range
sns.distplot(data1['LotArea'])
#4.2 The histogram shows which LotArea sizes are most common: values lie in roughly the 0-500k range, and the density around 100k-200k is fairly high
sns.jointplot(x= 'LotArea',y ='SalePrice',data = data1)
data1['LotArea'] =np.log(data1['LotArea'])
data1['SalePrice'] =np.log(data1['SalePrice'])
sns.jointplot(x= 'LotArea',y ='SalePrice',data = data1)
#4.3 Compared with the plot above, the axis scales of the two plots differ: after the log transform the LotArea values look smaller, so nearby values cluster together more tightly toward the middle of the graph
```
## 5. Dealing with missing values
1. Suppose we want to start the first step of house price modeling by exploring the relationship between four variables: MSSubClass, LotArea, LotFrontage and SalePrice. I have done some exploring and found out that LotFrontage has a lot of missing values, so you need to fix it.
2. LotFrontage is the width of the front side of the property. Use Pandas to find out how many of the houses in our database are missing the LotFrontage value.
3. Use Pandas to replace NaN values with another number. Since we are just exploring and not modeling yet, you can simply replace NaN with zeros for now.
```
data1[['MSSubClass','LotArea','LotFrontage','SalePrice']]
#5.1 LotFrontage has many NaN values
#data1['LotFrontage'].mean()
#data1['LotFrontage'].fillna(70)
data1['LotFrontage'].isnull().sum()
#5.2 Using .isnull().sum() sums all the missing values, which equals 259
data1['LotFrontage']=data1['LotFrontage'].fillna(0)
print (data1['LotFrontage'])
#5.3 As the exercise asks, I replaced the NaN values with 0 using .fillna(0)
```
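Since filling with zeros distorts the LotFrontage distribution, the median fill hinted at by the commented-out lines above is a common alternative once modeling starts; a toy sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([60.0, np.nan, 80.0, 70.0, np.nan])
filled = s.fillna(s.median())   # median of [60, 80, 70] is 70
print(filled.tolist())          # [60.0, 70.0, 80.0, 70.0, 70.0]
```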
## 6. Correlations between multiple variables
One incredible feature of Seaborn is the ability to create a correlation grid with the pairplot function. We want to create one single plot that shows us how all variables are correlated.
1. First, you need to create a data table with four columns: MSSubClass, LotArea (with log function applied), LotFrontage (missing values replaced) and SalePrice (with log function applied).
2. Then, use pairplot to create a grid of correlation plots. What do you observe from this plot?
```
DATA_Create =data1[['MSSubClass','LotArea','LotFrontage','SalePrice']]
print (DATA_Create)
#6.1 From the earlier exercises, LotArea and SalePrice are already log-transformed, and the NaN values in LotFrontage have already been filled with 0
sns.pairplot(DATA_Create)
#6.2 After running pairplot we can see the relationship in each subplot: when the same column is on both the x- and y-axis we get a histogram, and the rest are scatter plots
```
## 7. Data Preparation
Let's prepare train.csv for model training
1. Pick the columns that are numeric data and plot the distributions of those data (with Seaborn). If you find a column with a skewed distribution, write a script to transform that column with a log function. Then standardize them.
2. For categorical variables, we will simply transform categorical data into numeric data by using the function `pandas.get_dummies()`.
3. Split data into x and y. The variable x contains all the house features except the SalePrice. y contains only the SalePrice.
```
#data1.skew()
Numeric_data = data1.select_dtypes(include = ['int8','int16','int32','int64','float16','float32','float64'])
Cat_dat = ['Id','MSSubClass','MoSold','YrSold','OverallQual','OverallCond','YearBuilt','YearRemodAdd','GarageYrBlt']
Column_new = Numeric_data.drop(columns=Cat_dat)
for i in Column_new.columns:
Column_new[i] = Column_new[i].fillna(Column_new[i].mean())
Column_new[i] =np.log(Column_new[i]+1)
sns.distplot(Column_new[i])
Column_new
#7.1 Drop the columns that are not truly numeric data, fill missing values with each column's mean, and then apply a log transform to all of them
from sklearn import preprocessing
Cat= data1.select_dtypes(exclude = ['int8','int16','int32','int64','float16','float32','float64'])
Columndown = data1[Cat_dat]
ConCad_dat = pd.concat([Columndown,Cat],axis =1)
Dummie = pd.get_dummies(ConCad_dat)
Dummie
#7.2 Convert categorical variables to numeric with pd.get_dummies()
x = Column_new.drop(columns= ['SalePrice'],axis=1)
y = Column_new['SalePrice']
#7.3 X except SalePrice , Y Only SalePrice
print (x,y)
```
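Step 7.1 also asks to standardize the columns, which the cell above stops short of; a sketch of the missing step on a toy frame (`toy` is a stand-in for `Column_new`):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

toy = pd.DataFrame({'LotArea': [1.0, 2.0, 3.0], 'GrLivArea': [4.0, 5.0, 6.0]})

scaler = StandardScaler()
toy[toy.columns] = scaler.fit_transform(toy)   # zero mean, unit variance per column
print(toy['LotArea'].tolist())                 # roughly [-1.22, 0.0, 1.22]
```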
## 8. Let us first fit a very simple linear regression model, just to see what we get.
1. Use import LinearRegression from sklearn.linear model and use function `fit()` to fit the model.
2. Use function `predict()` to get house price predictions from the model (let’s call the predicted house prices yhat).
3. Plot `y` against `yhat` to see how good your predictions are.
```
from sklearn import linear_model
Data_LineReg = linear_model.LinearRegression()
Data_Linefit = Data_LineReg.fit(x,y)
print (Data_Linefit)
#8.1 Use fit() to fit the LinearRegression model
yhat = Data_Linefit.predict(x)
yhat2 = Data_LineReg.predict(x)
print (yhat)
#8.2 Predicted house prices (yhat) from the LinearRegression model: [2.58400315 2.57481799 2.5877421 ... 2.57729059 2.54419389 2.5616654 ]
sns.regplot(x = y,y=yhat,data=data1)
#8.3 plot data with y against yhat
```
## 9. Assessing Your Model
According to Kaggle's official rule for this problem, root mean square error (rmse) is used to judge the accuracy of our model. This error computes the difference between the log of actual house prices and the log of predicted house prices; take the mean of the squared differences and then the square root.
We want to see how we compare to other machine learning contestants on Kaggle, so let us compute our rmse. Luckily, sklearn has done most of the work for you by providing a mean squared error function, which you can import from `sklearn.metrics`. Then you can compute the mean squared error and take a square root to get the rmse.
What's the rmse of your current model? Check out the Kaggle Leaderboard for this problem to see how your number measures up against the other contestants.
```
from sklearn.metrics import mean_squared_error
root_mean = np.sqrt(mean_squared_error(y,yhat))
print (root_mean)
#9 The root mean square error = 0.013526154149753134 for y against yhat
```
## 10. Cross Validation
As we discussed earlier, don’t brag about your model’s accuracy until you have performed cross validation. Let us check cross-validated performance to avoid embarrassment.
Luckily, scikit learn has done most of the work for us once again. You can use the function `cross_val_predict()` to train the model with cross validation method and output the predictions.
What’s the rmse of your cross-validated model? Discuss what you observe in your results here. You may try plotting this new yhat with y to get better insights about this question.
```
from sklearn import datasets,linear_model
from sklearn.model_selection import cross_val_predict
Predict_cross = cross_val_predict(Data_Linefit,x,y)
sns.distplot(Predict_cross)
Predict_cross
sns.regplot( x= y , y=Predict_cross,data= data1)
#10 Predict_cross holds the cross-validated predictions: [2.5827847 , 2.57504129, 2.58578173, ..., 2.57913513, 2.54371586, 2.56422615]
#and we plot y against this new yhat, where Predict_cross comes from the cross_val_predict function
```
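The question asks for the rmse of the cross-validated model, which the cell above never actually computes; with the notebook's variables it would be `np.sqrt(mean_squared_error(y, Predict_cross))`. A self-contained sketch on toy data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)

# predictions made only on held-out folds, then scored like before
yhat_cv = cross_val_predict(LinearRegression(), X, y, cv=5)
rmse = np.sqrt(mean_squared_error(y, yhat_cv))
print(round(rmse, 2))   # close to the 0.1 noise level
```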
## 11 (Optional) Fit Better Models
There are other models you can fit that will perform better than linear regression. For example, you can fit linear regression with L2 regularization. This class of models goes by the street name 'Ridge Regression' and sklearn simply calls them `Ridge`. As we learned last time, this model fights the overfitting problem. Furthermore, you can try linear regression with L1 regularization (street name Lasso Regression, or `Lasso` in sklearn). Try these models and see how you compare with other Kagglers now. You can write about your findings below.
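A minimal sketch of the regularized models mentioned above, on toy data (the `alpha` values are placeholders; in practice they are tuned, e.g. with `RidgeCV`/`LassoCV`):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = X @ np.array([1.0, 0.0, 2.0, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

# compare cross-validated rmse of the two regularized models
for model in (Ridge(alpha=1.0), Lasso(alpha=0.001)):
    yhat = cross_val_predict(model, X, y, cv=5)
    rmse = np.sqrt(mean_squared_error(y, yhat))
    print(type(model).__name__, round(rmse, 3))
```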
| github_jupyter |
```
#Import section
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
%matplotlib inline
# Loading camera calibration coefficients(matrix and camera coefficients) from pickle file
def getCameraCalibrationCoefficientsFromPickleFile(filePath):
cameraCalibration = pickle.load( open(filePath, 'rb' ) )
mtx, dist = map(cameraCalibration.get, ('mtx', 'dist'))
return mtx, dist
def getTestImages(filePath):
# Load test images.
testImages = list(map(lambda imageFileName: (imageFileName, cv2.imread(imageFileName)), glob.glob(filePath)))
return testImages
def undistortImageAndGetHLS(image, mtx, dist):
# hlsOriginal = undistortAndHLS(originalImage, mtx, dist)
"""
Undistort the image with `mtx`, `dist` and convert it to HLS.
"""
undist = cv2.undistort(image, mtx, dist, None, mtx)
hls = cv2.cvtColor(undist, cv2.COLOR_BGR2HLS)  # cv2.imread loads images as BGR
#extract HLS from the image
H = hls[:,:,0] #channels
L = hls[:,:,1]
S = hls[:,:,2]
return H, L, S
def thresh(yourChannel, threshMin = 0, threshMax = 255):
# Apply a threshold to the S channel
# thresh = (0, 160)
binary_output = np.zeros_like(yourChannel)
binary_output[(yourChannel >= threshMin) & (yourChannel <= threshMax)] = 1
# Return a binary image of threshold result
return binary_output
def applySobel(img, orient='x', sobel_kernel=3, thresh_min = 0, thresh_max = 255):
# Apply the following steps to img
# 1) Take the derivative in x or y given orient = 'x' or 'y'
sobel = 0
if orient == 'x':
sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
else:
sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 2) Take the absolute value of the derivative or gradient
abs_sobel = np.absolute(sobel)
# 3) Scale to 8-bit (0 - 255) then convert to type = np.uint8
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
# 4) Create a mask of 1's where the scaled gradient magnitude is > thresh_min and < thresh_max
binary_output = thresh(scaled_sobel,thresh_min,thresh_max)
# 5) Return this mask as your binary_output image
return binary_output
def applyActionToImages(images, action):
return list(map(lambda img: (img[0], action(img[1])), images))
# Method to plot images on cols / rows
def showImages(images, cols = 4, rows = 5, figsize=(15,10), cmap = None):
imgLength = len(images)
fig, axes = plt.subplots(rows, cols, figsize=figsize)
indexes = range(cols * rows)
for ax, index in zip(axes.flat, indexes):
if index < imgLength:
imagePathName, image = images[index]
if cmap == None:
ax.imshow(image)
else:
ax.imshow(image, cmap=cmap)
ax.set_title(imagePathName)
ax.axis('off')
# Get camera matrix and distortion coefficient
mtx, dist = getCameraCalibrationCoefficientsFromPickleFile('./pickled_data/camera_calibration.p')
# Lambda action applied on all images
useSChannel = lambda img: undistortImageAndGetHLS(img, mtx, dist)[2]
# Get Test images
testImages = getTestImages('./test_images/*.jpg')
# Get all 'S' channels from all Test images
resultSChannel = applyActionToImages(testImages, useSChannel)
# Show our result
#showImages(resultSChannel, 2, 3, (15, 13), cmap='gray')
# Apply Sobel in 'x' direction and plot images
applySobelX = lambda img: applySobel(useSChannel(img), orient='x', thresh_min=10, thresh_max=160)
# Get all 'S' channels from all Test images
resultApplySobelX = applyActionToImages(testImages, applySobelX)
# Show our result
#showImages(resultApplySobelX, 2, 3, (15, 13), cmap='gray')
# Apply Sobel in 'y' direction and plot images
applySobelY = lambda img: applySobel(useSChannel(img), orient='y', thresh_min=10, thresh_max=160)
# Get all 'S' channels from all Test images
resultApplySobelY = applyActionToImages(testImages, applySobelY)
# Show our result
#showImages(resultApplySobelY, 2, 3, (15, 13), cmap='gray')
def mag_thresh(img, sobel_kernel=3, thresh_min = 0, thresh_max = 255):
# Apply the following steps to img
# 1) Take the gradient in x and y separately
sobelX = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobelY = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 2) Calculate the magnitude
gradmag = np.sqrt(sobelX**2 + sobelY**2)
# 3) Scale to 8-bit (0 - 255) and convert to type = np.uint8
scale_factor = np.max(gradmag)/255
gradmag = (gradmag/scale_factor).astype(np.uint8)
# 4) Create a binary mask where mag thresholds are met
binary_output = thresh(gradmag,thresh_min, thresh_max)
# 5) Return this mask as your binary_output image
return binary_output
# Apply Sobel in 'x' and 'y' directions in order to calculate the gradient magnitude of pixels and plot images
applyMagnitude = lambda img: mag_thresh(useSChannel(img), thresh_min=5, thresh_max=160)
# Apply the lambda function to all test images
resultMagnitudes = applyActionToImages(testImages, applyMagnitude)
# Show our result
#showImages(resultMagnitudes, 2, 3, (15, 13), cmap='gray')
# Define a function that applies Sobel x and y,
# then computes the direction of the gradient
# and applies a threshold.
def dir_threshold(img, sobel_kernel=3, thresh_min = 0, thresh_max = np.pi/2):
# 1) Take the gradient in x and y separately and
# Take the absolute value of the x and y gradients
sobelX = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))
sobelY = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))
# 2) Use np.arctan2(abs_sobely, abs_sobelx) to calculate the direction of the gradient
# sobelY / sobelX
gradientDirection = np.arctan2(sobelY, sobelX)
# 3) Create a binary mask where direction thresholds are met
binary_output = thresh(gradientDirection, thresh_min, thresh_max)
# 4) Return this mask as your binary_output image
return binary_output
# Apply direction of the gradient
applyDirection = lambda img: dir_threshold(useSChannel(img), thresh_min=0.79, thresh_max=1.20)
# Apply the lambda function to all test images
resultDirection = applyActionToImages(testImages, applyDirection)
# Show our result
#showImages(resultDirection, 2, 3, (15, 13), cmap='gray')
def combineGradients(img):
sobelX = applySobelX(img)
sobelY = applySobelY(img)
magnitude = applyMagnitude(img)
direction = applyDirection(img)
combined = np.zeros_like(sobelX)
combined[((sobelX == 1) & (sobelY == 1)) | ((magnitude == 1) & (direction == 1))] = 1
return combined
resultCombined = applyActionToImages(testImages, combineGradients)
# Show our result
#showImages(resultCombined, 2, 3, (15, 13), cmap='gray')
def show_compared_results():
titles = ['Apply Sobel X', 'Apply Sobel Y', 'Apply Magnitude', 'Apply Direction', 'Combined']
results = list(zip(resultApplySobelX, resultApplySobelY, resultMagnitudes, resultDirection, resultCombined))
# only 5 images
resultsAndTitle = list(map(lambda images: list(zip(titles, images)), results))[3:6]
flattenResults = [item for sublist in resultsAndTitle for item in sublist]
fig, axes = plt.subplots(ncols=5, nrows=len(resultsAndTitle), figsize=(25,10))
for ax, imageTuple in zip(axes.flat, flattenResults):
title, images = imageTuple
imagePath, img = images
ax.imshow(img, cmap='gray')
ax.set_title(imagePath + '\n' + title, fontsize=8)
ax.axis('off')
fig.subplots_adjust(hspace=0, wspace=0.05, bottom=0)
```
| github_jupyter |
```
import numpy as np
import re
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN
from sklearn.neighbors import NearestNeighbors
from requests import get
import unicodedata
from bs4 import BeautifulSoup
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
import xgboost as xgb
from sklearn.metrics import accuracy_score
%matplotlib inline
```
# Reading in the data
```
df = pd.read_csv('movie_metadata.csv')
df.head()
df.shape
def classify(col):
if col['imdb_score'] >= 0 and col['imdb_score'] < 4:
return 0
elif col['imdb_score'] >= 4 and col['imdb_score'] < 6:
return 1
elif col['imdb_score'] >= 6 and col['imdb_score'] < 7:
return 2
elif col['imdb_score'] >= 7 and col['imdb_score'] < 8:
return 3
elif col['imdb_score'] >= 8 and col['imdb_score'] <= 10:
return 4
df['Success'] = df.apply(classify, axis=1)
df.describe()
df.head()
```
# Filling NaNs with the median
```
def fill_nan(col):
df[col] = df[col].fillna(df[col].median())
cols = list(df.columns)
fill_nan(cols)
```
# Cleaning
```
def clean_backward_title(col):
# strip the two trailing junk characters, then drop accents down to plain ASCII
string = col.rstrip()[:-2]
return unicodedata.normalize('NFD', string).encode('ascii', 'ignore').decode('ascii')
df['movie_title'] = df['movie_title'].astype(str)
df['movie_title'] = df['movie_title'].apply(clean_backward_title)
df['movie_title']
```
# IMDB Revenue scraping script. Redundant right now, but can be useful in other projects
```
# def revenue_parse(url, revenue_per_movie):
# url = url + 'business'
# response = get(url)
# html_soup = BeautifulSoup(response.text, 'html.parser')
# movie_containers = html_soup.find('div', {"id": "tn15content"})
# text_spend = movie_containers.text.split('\n')
# if 'Gross' in text_spend:
# gross_index = text_spend.index('Gross')
# rev = [int(i[1:].replace(',', '')) if i[1:].replace(',', '').isdigit() else -1 for i in re.findall(r'[$]\S*', text_spend[gross_index+1])]
# if len(rev) == 0:
# revenue_per_movie.append(-1)
# else:
# revenue_per_movie.append(max(rev))
# else:
# revenue_per_movie.append(-1)
# revenue_per_movie = []
# for i in df['url']:
# revenue_parse(i, revenue_per_movie)
```
# Describing the data to find the Missing values
```
df.describe()
```
# Normalizing or standardizing the data; comment/uncomment the scaler as per your needs
```
col = list(df.describe().columns)
col.remove('Success')
sc = StandardScaler()
# sc = MinMaxScaler()
temp = sc.fit_transform(df[col])
df[col] = temp
df.head()
```
# PCA
```
pca = PCA(n_components=3)
df_pca = pca.fit_transform(df[col])
df_pca
pca.explained_variance_ratio_
df['pca_one'] = df_pca[:, 0]
df['pca_two'] = df_pca[:, 1]
df['pca_three'] = df_pca[:, 2]
plt.figure(figsize=(12,12))
plt.scatter(df['pca_one'][:50], df['pca_two'][:50], color=['orange', 'cyan', 'brown'], cmap='viridis')
for m, p1, p2 in zip(df['movie_title'][:50], df['pca_one'][:50], df['pca_two'][:50]):
plt.text(p1, p2, s=m, color=np.random.rand(3)*0.7)
```
# KMeans
```
km = KMeans(n_clusters = 5)
#P_fit = km.fit(df[['gross','imdb_score','num_critic_for_reviews','director_facebook_likes','actor_1_facebook_likes','movie_facebook_likes','actor_3_facebook_likes','actor_2_facebook_likes']])
P_fit = km.fit(df[['gross','imdb_score']])
P_fit.labels_
# colormap = {0:'red',1:'green',2:'blue'}
# lc = [colormap[c] for c in colormap]
# plt.scatter(df['pca_one'],df['pca_two'],c = lc)
df['cluster'] = P_fit.labels_
np.unique(P_fit.labels_)
for i in np.unique(P_fit.labels_):
temp = df[df['cluster'] == i]
plt.scatter(temp['gross'], temp['imdb_score'], color=np.random.rand(3)*0.7)
```
# DBSCAN
```
cols3 = ['director_facebook_likes','imdb_score']
#min_samples is taken as >= D+1 (D = number of features) and the eps value is estimated from the elbow of the k-distance graph
db = DBSCAN(eps = .5, min_samples=3).fit(df[cols3])
len(db.core_sample_indices_)
df['cluster'] = db.labels_
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(np.unique(db.labels_)))]
plt.figure(figsize= (12,12))
for i in np.unique(db.labels_):
temp = df[df['cluster'] == i]
plt.scatter(temp['director_facebook_likes'], temp['imdb_score'], color = np.random.rand(3)*0.7)
```
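The `eps` elbow mentioned in the comment above can be read off a k-distance plot built with the already-imported `NearestNeighbors`; a sketch on toy data (with the real frame you would pass `df[cols3]`):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(200, 2)

k = 3                                       # matches min_samples above
nn = NearestNeighbors(n_neighbors=k).fit(X)
distances, _ = nn.kneighbors(X)             # distances[:, 0] is each point's distance to itself
k_dist = np.sort(distances[:, k - 1])       # sorted distance to the k-th nearest neighbour
# plt.plot(k_dist) shows the curve; eps is read off at the elbow
print(k_dist.shape)                         # (200,)
```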
# Random Forest
```
features = col
features.remove('imdb_score')
features
X_train, X_test, y_train, y_test = train_test_split(df[features], df['Success'], test_size=0.2)
# rf = RandomForestClassifier(random_state=1, n_estimators=250, min_samples_split=8, min_samples_leaf=4)
# rf = GradientBoostingClassifier(random_state=0, n_estimators=250, min_samples_split=8,
# min_samples_leaf=4, learning_rate=0.1)
rf = xgb.XGBClassifier(n_estimators=250)
rf.fit(X_train, y_train)
predictions = rf.predict(X_test)
predictions = predictions.astype(int)
np.unique(predictions)
accuracy_score(y_test, predictions)
features.insert(0, 'imdb_score')
sns.heatmap(df[features].corr())
```
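Beyond accuracy, the fitted tree ensemble exposes feature importances, which is often the next thing to inspect. A minimal sketch with scikit-learn's `RandomForestClassifier` on synthetic data (the cell above uses `xgb.XGBClassifier`, which exposes the same `feature_importances_` attribute):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# only feature 0 carries signal
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)
print(clf.feature_importances_)  # feature 0 should dominate
```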
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
import numpy as np
from glob import glob
from itertools import cycle
mels = glob('universal-mel/*.npy')
file_cycle = cycle(mels)
f = next(file_cycle)
path = 'hifigan-512-combined'
ckpt_path = tf.train.latest_checkpoint(path)
ckpt_path
def generate(batch_max_steps = 8192, hop_size = 256):
while True:
f = next(file_cycle)
mel = np.load(f)
audio = np.load(f.replace('mels', 'audios'))
yield {'mel': mel, 'audio': audio}
dataset = tf.data.Dataset.from_generator(
generate,
{'mel': tf.float32, 'audio': tf.float32},
output_shapes = {
'mel': tf.TensorShape([None, 80]),
'audio': tf.TensorShape([None]),
},
)
features = dataset.make_one_shot_iterator().get_next()
features
import malaya_speech
import malaya_speech.train
from malaya_speech.train.model import hifigan
from malaya_speech.train.model import stft
import malaya_speech.config
hifigan_config = malaya_speech.config.hifigan_config_v2
hifigan_config['hifigan_generator_params']['filters'] = 512
generator = hifigan.MultiGenerator(
hifigan.GeneratorConfig(**hifigan_config['hifigan_generator_params']),
name = 'hifigan_generator',
)
y_hat = generator([features['mel']], training = False)
y_hat
x = tf.placeholder(tf.float32, [None, None, 80])
y_hat_ = generator(x, training = False)
y_hat_
y_hat_ = tf.identity(y_hat_, name = 'logits')
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_list)
saver.restore(sess, ckpt_path)
import IPython.display as ipd
# %%time
# f, y_hat_ = sess.run([features, y_hat])
# ipd.Audio(f['audio'], rate = 22050)
# ipd.Audio(y_hat_[0,:,0], rate = 22050)
y, _ = malaya_speech.load('shafiqah-idayu.wav', sr = 22050)
m = malaya_speech.featurization.universal_mel(y)
%%time
y_ = sess.run(y_hat_, feed_dict = {x: [m]})
ipd.Audio(y_[0,:,0], rate = 22050)
%%time
y_ = sess.run(y_hat_, feed_dict = {x: [np.load(mels[-1000])]})
ipd.Audio(y_[0,:,0], rate = 22050)
saver = tf.train.Saver()
saver.save(sess, 'universal-hifigan-512-output/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'Placeholder' in n.name
or 'logits' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('universal-hifigan-512-output', strings)
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-1024, fallback_max=1024)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'universal-hifigan-512-output/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['Placeholder'],
['logits'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')
file = 'universal-hifigan-512-output/frozen_model.pb'
outPutname = 'vocoder-hifigan/universal-512/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = 'universal-hifigan-512-output/frozen_model.pb.quantized'
outPutname = 'vocoder-hifigan/universal-512-quantized/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
!tar -zcvf universal-hifigan-512-output.tar.gz universal-hifigan-512-output
file = 'universal-hifigan-512-output.tar.gz'
outPutname = 'pretrained/universal-hifigan-512-output.tar.gz'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
```
# Import and convert Neo23x0 Sigma scripts
ianhelle@microsoft.com
This notebook is a quick and dirty Sigma to Log Analytics converter.
It uses the modules from sigmac package to do the conversion.
Only a subset of the Sigma rules is currently convertible. Failure to convert
can be due to one or more of the following:
- known limitations of the converter
- mismatch between the syntax expressible in Sigma and KQL
- data sources referenced in Sigma rules do not yet exist in Azure Sentinel
The sigmac tool is downloadable as a package from PyPI, but since we are downloading
the rules from the repo, we also copy and import the package from the repo source.
After conversion you can use an interactive browser to step through the rules and
view (and copy/save) the KQL equivalents. You can also take the conversion results and
use them in another way (e.g. bulk save to files).
The notebook is somewhat experimental and is offered as-is, without any guarantees.
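For orientation, this is the general shape of a Sigma rule that sigmac consumes (a hypothetical minimal example; the field names follow the Sigma rule format, but the values here are made up):

```yaml
title: Suspicious Whoami Execution
status: experimental
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\whoami.exe'
    condition: selection
falsepositives:
    - Administrator activity
level: low
```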
## Download and unzip the Sigma repo
```
import requests
# Download the repo ZIP
sigma_git_url = 'https://github.com/Neo23x0/sigma/archive/master.zip'
r = requests.get(sigma_git_url)
from ipywidgets import widgets, Layout
import os
from pathlib import Path
def_path = Path.joinpath(Path(os.getcwd()), "sigma")
path_wgt = widgets.Text(value=str(def_path),
description='Path to extract to zipped repo files: ',
layout=Layout(width='50%'),
style={'description_width': 'initial'})
path_wgt
import zipfile
import io
repo_zip = io.BytesIO(r.content)
zip_archive = zipfile.ZipFile(repo_zip, mode='r')
zip_archive.extractall(path=path_wgt.value)
RULES_REL_PATH = 'sigma-master/rules'
rules_root = Path(path_wgt.value) / RULES_REL_PATH
```
### Check that we have the files
You should see a folder with folders such as application, apt, windows...
```
%ls {rules_root}
```
## Convert Sigma Files to Log Analytics Kql queries
```
# Read the Sigma YAML file paths into a dict and make a
# a copy for the target Kql queries
from pathlib import Path
from collections import defaultdict
import copy
def get_rule_files(rules_root):
file_dict = defaultdict(dict)
for file in Path(rules_root).resolve().rglob("*.yml"):
rel_path = Path(file).relative_to(rules_root)
path_key = '.'.join(rel_path.parent.parts)
file_dict[path_key][rel_path.name] = file
return file_dict
sigma_dict = get_rule_files(rules_root)
kql_dict = copy.deepcopy(sigma_dict)
# Add downloaded sigmac tool to sys.path and import Sigmac functions
import os
import sys
module_path = os.path.abspath(os.path.join('sigma/sigma-master/tools'))
if module_path not in sys.path:
sys.path.append(module_path)
from sigma.parser.collection import SigmaCollectionParser
from sigma.parser.exceptions import SigmaCollectionParseError, SigmaParseError
from sigma.configuration import SigmaConfiguration, SigmaConfigurationChain
from sigma.config.exceptions import SigmaConfigParseError, SigmaRuleFilterParseException
from sigma.filter import SigmaRuleFilter
import sigma.backends.discovery as backends
from sigma.backends.base import BackendOptions
from sigma.backends.exceptions import BackendError, NotSupportedError, PartialMatchError, FullMatchError
# Sigma to Log Analytics Conversion
import yaml
_LA_MAPPINGS = '''
fieldmappings:
Image: NewProcessName
ParentImage: ProcessName
ParentCommandLine: NO_MAPPING
'''
NOT_CONVERTIBLE = 'Not convertible'
def sigma_to_la(file_path):
with open(file_path, 'r') as input_file:
try:
sigmaconfigs = SigmaConfigurationChain()
sigmaconfig = SigmaConfiguration(_LA_MAPPINGS)
sigmaconfigs.append(sigmaconfig)
backend_options = BackendOptions(None, None)
backend = backends.getBackend('ala')(sigmaconfigs, backend_options)
parser = SigmaCollectionParser(input_file, sigmaconfigs, None)
results = parser.generate(backend)
kql_result = ''
for result in results:
kql_result += result
except (NotImplementedError, NotSupportedError):
kql_result = NOT_CONVERTIBLE
input_file.seek(0,0)
sigma_txt = input_file.read()
if not kql_result == NOT_CONVERTIBLE:
try:
kql_header = "\n".join(get_sigma_properties(sigma_txt))
kql_result = kql_header + "\n" + kql_result
except Exception as e:
print("exception reading sigma YAML: ", e)
print(sigma_txt, kql_result, sep='\n')
return sigma_txt, kql_result
sigma_keys = ['title', 'description', 'tags', 'status',
'author', 'logsource', 'falsepositives', 'level']
def get_sigma_properties(sigma_rule):
sigma_docs = yaml.load_all(sigma_rule, Loader=yaml.SafeLoader)
sigma_rule_dict = next(sigma_docs)
for prop in sigma_keys:
yield get_property(prop, sigma_rule_dict)
def get_property(name, sigma_rule_dict):
sig_prop = sigma_rule_dict.get(name, 'na')
if isinstance(sig_prop, dict):
sig_prop = ' '.join([f"{k}: {v}" for k, v in sig_prop.items()])
return f"// {name}: {sig_prop}"
_KQL_FILTERS = {
'date': ' | where TimeGenerated >= datetime({start}) and TimeGenerated <= datetime({end}) ',
'host': ' | where Computer has {host_name} '
}
def insert_at(source, insert, find_sub):
pos = source.find(find_sub)
if pos != -1:
return source[:pos] + insert + source[pos:]
else:
return source + insert
def add_filter_clauses(source, **kwargs):
if "{" in source or "}" in source:
source = ("// Warning: embedded braces in source. Please edit if necessary.\n"
+ source)
source = source.replace('{', '{{').replace('}', '}}')
if kwargs.get('host', False):
source = insert_at(source, _KQL_FILTERS['host'], '|')
if kwargs.get('date', False):
source = insert_at(source, _KQL_FILTERS['date'], '|')
return source
# Run the conversion
conv_counter = {}
for categ, sources in sigma_dict.items():
src_converted = 0
for file_name, file_path in sources.items():
sigma, kql = sigma_to_la(file_path)
kql_dict[categ][file_name] = (sigma, kql)
if not kql == NOT_CONVERTIBLE:
src_converted += 1
conv_counter[categ] = (len(sources), src_converted)
print("Conversion statistics")
print("-" * len("Conversion statistics"))
print('\n'.join([f'{categ}: rules: {counter[0]}, converted: {counter[1]}'
for categ, counter in conv_counter.items()]))
```
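The `insert_at` helper above splices a filter clause just before the first pipe of a query; a quick self-contained check of that behaviour (the function is repeated here so the snippet runs on its own, and the query text is a made-up example):

```python
def insert_at(source, insert, find_sub):
    # insert `insert` immediately before the first occurrence of `find_sub`,
    # or append it if `find_sub` is absent
    pos = source.find(find_sub)
    if pos != -1:
        return source[:pos] + insert + source[pos:]
    return source + insert

query = "SecurityEvent | where EventID == 4688"
print(insert_at(query, "| where Computer has 'HOST01' ", "|"))
# SecurityEvent | where Computer has 'HOST01' | where EventID == 4688
```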
## Display the results in an interactive browser
```
from ipywidgets import widgets, Layout
# Browser Functions
def on_cat_value_change(change):
queries_w.options = kql_dict[change['new']].keys()
queries_w.value = queries_w.options[0]
def on_query_value_change(change):
if view_qry_check.value:
qry_text = kql_dict[sub_cats_w.value][queries_w.value][1]
if "Not convertible" not in qry_text:
qry_text = add_filter_clauses(qry_text,
date=add_date_filter_check.value,
host=add_host_filter_check.value)
query_text_w.value = qry_text.replace('|', '\n|')
orig_text_w.value = kql_dict[sub_cats_w.value][queries_w.value][0]
def on_view_query_value_change(change):
vis = 'visible' if view_qry_check.value else 'hidden'
on_query_value_change(None)
query_text_w.layout.visibility = vis
orig_text_w.layout.visibility = vis
# Function defs for ExecuteQuery cell below
def click_exec_hqry(b):
global qry_results
query_name = queries_w.value
query_cat = sub_cats_w.value
query_text = query_text_w.value
query_text = query_text.format(**qry_wgt.query_params)
disp_results(query_text)
def disp_results(query_text):
out_wgt.clear_output()
with out_wgt:
print("Running query...", end=' ')
qry_results = execute_kql_query(query_text)
print(f'done. {len(qry_results)} rows returned.')
display(qry_results)
exec_hqry_button = widgets.Button(description="Execute query..")
out_wgt = widgets.Output() #layout=Layout(width='100%', height='200px', visiblity='visible'))
exec_hqry_button.on_click(click_exec_hqry)
# Browser widget setup
categories = list(sorted(kql_dict.keys()))
sub_cats_w = widgets.Select(options=categories,
description='Category : ',
layout=Layout(width='30%', height='120px'),
style = {'description_width': 'initial'})
queries_w = widgets.Select(options = kql_dict[categories[0]].keys(),
description='Query : ',
layout=Layout(width='30%', height='120px'),
style = {'description_width': 'initial'})
query_text_w = widgets.Textarea(
value='',
description='Kql Query:',
    layout=Layout(width='100%', height='300px', visibility='hidden'),
disabled=False)
orig_text_w = widgets.Textarea(
value='',
description='Sigma Query:',
    layout=Layout(width='100%', height='250px', visibility='hidden'),
disabled=False)
query_text_w.layout.visibility = 'hidden'
orig_text_w.layout.visibility = 'hidden'
sub_cats_w.observe(on_cat_value_change, names='value')
queries_w.observe(on_query_value_change, names='value')
view_qry_check = widgets.Checkbox(description="View query", value=True)
add_date_filter_check = widgets.Checkbox(description="Add date filter", value=False)
add_host_filter_check = widgets.Checkbox(description="Add host filter", value=False)
view_qry_check.observe(on_view_query_value_change, names='value')
add_date_filter_check.observe(on_view_query_value_change, names='value')
add_host_filter_check.observe(on_view_query_value_change, names='value')
# view_qry_button.on_click(click_exec_hqry)
# display(exec_hqry_button);
vbox_opts = widgets.VBox([view_qry_check, add_date_filter_check, add_host_filter_check])
hbox = widgets.HBox([sub_cats_w, queries_w, vbox_opts])
vbox = widgets.VBox([hbox, orig_text_w, query_text_w])
on_view_query_value_change(None)
display(vbox)
```
## Click the `Execute query` button to run the currently displayed query
**Notes:**
- To run the queries, first authenticate to Log Analytics (scroll down and execute remaining cells in the notebook)
- If you added a date filter to the query set the date range below
```
from msticpy.nbtools.nbwidgets import QueryTime
qry_wgt = QueryTime(units='days', before=5, after=0, max_before=30, max_after=10)
vbox = widgets.VBox([exec_hqry_button, out_wgt])
display(vbox)
```
### Set Query Time bounds
```
qry_wgt.display()
```
### Authenticate to Azure Sentinel
```
def clean_kql_comments(query_string):
    """Strip // line comments and newlines from a KQL query string."""
    import re
    # flags must be passed by keyword: the 4th positional argument of re.sub is count, not flags
    return re.sub(r'//[^\n]+', '', query_string, flags=re.MULTILINE).replace('\n', '').strip()
def execute_kql_query(query_string):
if not query_string or len(query_string.strip()) == 0:
print('No query supplied')
return None
src_query = clean_kql_comments(query_string)
result = get_ipython().run_cell_magic('kql', line='', cell=src_query)
if result is not None and result.completion_query_info['StatusCode'] == 0:
results_frame = result.to_dataframe()
return results_frame
return []
import os
from msticpy.nbtools.wsconfig import WorkspaceConfig
from msticpy.nbtools import kql, GetEnvironmentKey
ws_config_file = 'config.json'
try:
ws_config = WorkspaceConfig(ws_config_file)
print('Found config file')
for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:
print(cf_item, ws_config[cf_item])
except Exception:
ws_config = None
ws_id = GetEnvironmentKey(env_var='WORKSPACE_ID',
prompt='Log Analytics Workspace Id:')
if ws_config:
ws_id.value = ws_config['workspace_id']
ws_id.display()
try:
WORKSPACE_ID = select_ws.value
except NameError:
try:
WORKSPACE_ID = ws_id.value
except NameError:
WORKSPACE_ID = None
if not WORKSPACE_ID:
raise ValueError('No workspace selected.')
kql.load_kql_magic()
%kql loganalytics://code().workspace(WORKSPACE_ID)
```
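A quick sanity check of the comment-stripping helper (redefined here so the snippet is self-contained; note the `flags=` keyword, since the fourth positional argument of `re.sub` is `count`, not `flags`):

```python
import re

def clean_kql_comments(query_string):
    # strip // line comments, then drop the newlines
    return re.sub(r'//[^\n]+', '', query_string, flags=re.MULTILINE).replace('\n', '').strip()

q = "// title: test rule\nSecurityEvent\n| where EventID == 4688"
print(clean_kql_comments(q))
# SecurityEvent| where EventID == 4688
```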
## Save All Converted Files
```
path_save_wgt = widgets.Text(value=str(def_path) + "_kql_out",
description='Path to save KQL files: ',
layout=Layout(width='50%'),
style={'description_width': 'initial'})
path_save_wgt
root = Path(path_save_wgt.value)
root.mkdir(exist_ok=True)
for categ, kql_files in kql_dict.items():
sub_dir = root.joinpath(categ)
for file_name, contents in kql_files.items():
kql_txt = contents[1]
if not kql_txt == NOT_CONVERTIBLE:
sub_dir.mkdir(exist_ok=True)
file_path = sub_dir.joinpath(file_name.replace('.yml', '.kql'))
with open(file_path, 'w') as output_file:
output_file.write(kql_txt)
print(f"Saved {file_path}")
```
## This notebook shows how to run evaluation on our models straight from Colab environment
```
# mount GD
from google.colab import drive
drive.mount('/content/drive')
# your GD path to clone the repo
project_path="/content/drive/MyDrive/UofT_MEng/MIE1517/Project/FINDER_github/"
# Clone repo
%cd {project_path}
!git clone https://github.com/faraz2023/FINDER-pytorch.git
%cd FINDER-pytorch
%ls -a
# if already cloned
%cd {project_path}/FINDER-pytorch/
!pwd
# install dependencies; you need to restart the kernel after installation
# torch_sparse and torch_scatter are slow to install (this is normal, don't abort)
# the whole step can take ~16 min
!pip install cython==0.29.13
!pip install networkx==2.3
!pip install numpy==1.17.3
!pip install pandas==0.25.2
!pip install scipy==1.3.1
!pip install tqdm==4.36.1
!pip install torchvision
!pip install torch_sparse
!pip install torch_scatter
!pip install tensorflow-gpu==1.14.0
# cd to the model whose modules you want to build
# Old tf ND_cost model
%cd {project_path}/FINDER-pytorch/code/old_FINDER_ND_cost_tf/
# New torch ND_cost model
#%cd {project_path}/FINDER-pytorch/code/FINDER_ND_cost/
# build modules
!python setup.py build_ext -i
# If you encounter - cannot import name 'export_saved_model' from 'tensorflow.python.keras.saving.saved_model'
# try reinstalling tensorflow and restarting the kernel
!pip uninstall -y tensorflow-gpu
!pip install tensorflow-gpu
import time
import sys,os
import networkx as nx
import numpy as np
import random
import os
from shutil import copyfile
from tqdm import tqdm
# use old module functions
sys.path.append(f'{project_path}/FINDER-pytorch/code/old_FINDER_ND_cost_tf/')
from FINDER import FINDER
old_finder = FINDER()
# HXA with maxcc
def HXA(g, method):
# 'HDA', 'HBA', 'HPRA', 'HCA'
sol = []
G = g.copy()
while (nx.number_of_edges(G)>0):
if method == 'HDA':
dc = nx.degree_centrality(G)
elif method == 'HBA':
dc = nx.betweenness_centrality(G)
elif method == 'HCA':
dc = nx.closeness_centrality(G)
elif method == 'HPRA':
dc = nx.pagerank(G)
keys = list(dc.keys())
values = list(dc.values())
maxTag = np.argmax(values)
node = keys[maxTag]
sol.append(node)
G.remove_node(node)
solution = sol + list(set(g.nodes())^set(sol))
solutions = [int(i) for i in solution]
Robustness = old_finder.utils.getRobustness(old_finder.GenNetwork(g), solutions)
MaxCCList = old_finder.utils.MaxWccSzList
return Robustness,MaxCCList,solutions
# modified from original EvaluateSol
def EvaluateSol(g, sol_file, strategyID=0, reInsertStep=20):
    # evaluate the robustness given the solution; strategyID: 0=count, 2=rank, 3=multiply
#sys.stdout.flush()
# g = nx.read_weighted_edgelist(data_test)
#g = nx.read_gml(data_test)
g_inner = old_finder.GenNetwork(g)
print('Evaluating FINDER model')
print('number of nodes:%d'%nx.number_of_nodes(g))
print('number of edges:%d'%nx.number_of_edges(g))
nodes = list(range(nx.number_of_nodes(g)))
sol = []
for line in open(sol_file):
sol.append(int(line))
sol_left = list(set(nodes)^set(sol))
if strategyID > 0:
start = time.time()
sol_reinsert = old_finder.utils.reInsert(g_inner, sol, sol_left, strategyID, reInsertStep)
end = time.time()
print ('reInsert time:%.6f'%(end-start))
else:
sol_reinsert = sol
solution = sol_reinsert + sol_left
print('number of solution nodes:%d'%len(solution))
Robustness = old_finder.utils.getRobustness(g_inner, solution)
MaxCCList = old_finder.utils.MaxWccSzList
return Robustness, MaxCCList, solution
# load graph from ready to use gml (converted from datasets)
# Network names are: "Digg", "HI-II-14"
# Weight types are: 001, degree, random, zero
def build_graph_path(network_name,weight_type="001"):
return f"{project_path}/FINDER-pytorch/data/real/cost/{network_name}_{weight_type}.gml"
# load solution files generated by model
# Network names are: "Digg", "HI-II-14"
# Model names are: FINDER_ND_cost, old_FINDER_ND_cost_tf etc.
# step_ratio are: 0.0100, etc.
# Weight types are: 001, degree, random, zero
def build_solution_path(network_name,model_name="FINDER_CN_cost",step_ratio="0.0100",weight_type="001"):
data_folder=""
if(weight_type!=""):
weight_type=f"_{weight_type}"
data_folder=f"Data{weight_type}/"
return f"{project_path}/FINDER-pytorch/code/results/{model_name}/real/{data_folder}StepRatio_{step_ratio}/{network_name}{weight_type}.txt"
def get_node_weights(g):
sum=0.0
for i,v in g.nodes(data=True):
sum+=v["weight"]
return sum
# compute the ratio of cost of removed nodes / total cost
# TODO: add step (or not, since this is the test dataset; step is just a trick used at the training stage)
def get_frac_cost_of_removed_nodes(g,solutions,ND_cost=False,verbose=0):
num_nodes = nx.number_of_nodes(g)
if(ND_cost):
total_weight = get_node_weights(g)
else:
total_weight = g.size()
g_mod = g.copy()
if(verbose>0):
print("\nOriginal # of nodes: ",num_nodes)
print("Original total weight: ",total_weight)
print("Solution: ", len(solutions), " = ", solutions , "\n")
frac_cost_list=[]
for rm_node in tqdm(solutions):
#for rm_node in reversed(solutions):
g_mod.remove_node(rm_node)
if(ND_cost):
left_weight = get_node_weights(g_mod)
else:
left_weight = g_mod.size()
frac_cost = (total_weight - left_weight) / total_weight
frac_cost_list.append(frac_cost)
if(verbose>1):
print("Removed node: ", rm_node)
print("left_weight: ", left_weight)
print("Frac cost of removed nodes: ", frac_cost)
return frac_cost_list
```
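The HDA branch of `HXA` above can be illustrated on a tiny graph (a self-contained sketch: repeatedly remove the node with the highest degree centrality until no edges remain):

```python
import networkx as nx

def hda_order(g):
    # greedy attack: peel off the highest-degree-centrality node each step
    G = g.copy()
    order = []
    while G.number_of_edges() > 0:
        dc = nx.degree_centrality(G)
        order.append(max(dc, key=dc.get))
        G.remove_node(order[-1])
    return order

g = nx.star_graph(4)   # hub 0 connected to leaves 1..4
print(hda_order(g))    # removing the hub disconnects everything
```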
## ND_COST
```
# ND, cost
# load network
network_file_path = build_graph_path("HI-II-14","degree")
g = nx.read_gml(network_file_path, destringizer=int)
# get HDA solution
HDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, "HDA")
print("From HDA:",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])
# get our torch FINDER solution
FINDER_torch_solution_file_path = build_solution_path("HI-II-14",model_name="FINDER_ND_cost",weight_type="degree")
print("\nSolution file:", FINDER_torch_solution_file_path)
FINDER_robustness, FINDER_maxcclist,FINDER_solutions = EvaluateSol(g, FINDER_torch_solution_file_path)
print("From FINDER:",FINDER_robustness, FINDER_maxcclist[0:5],FINDER_solutions[0:5])
# get old FINDER solution
OFINDER_torch_solution_file_path = build_solution_path("HI-II-14",model_name="old_FINDER_ND_cost_tf",weight_type="degree")
print("\nSolution file:", OFINDER_torch_solution_file_path)
OFINDER_robustness, OFINDER_maxcclist,OFINDER_solutions = EvaluateSol(g, OFINDER_torch_solution_file_path)
print("From Old FINDER:",OFINDER_robustness, OFINDER_maxcclist[0:5],OFINDER_solutions[0:5])
# get ratio of cost of removed nodes to total cost, per removed node
FINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,FINDER_solutions,ND_cost=True,verbose=1)
OFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,OFINDER_solutions,ND_cost=True,verbose=1)
HDA_frac_cost_list = get_frac_cost_of_removed_nodes(g,HDA_solutions,ND_cost=True)
# plot
from matplotlib import pyplot as plt
plt.figure(figsize=(12,12))
plt.plot(FINDER_frac_cost_list, FINDER_maxcclist, label="FINDER torch")
plt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist, label="FINDER tf")
plt.plot(HDA_frac_cost_list, HDA_maxcclist, label="HDA")
plt.legend()
plt.xlabel("fraction of cost")
plt.ylabel("Residual gcc size")
```
## ND
```
# ND, without cost
# load network
network_file_path = build_graph_path("HI-II-14","zero")
print("Graph Network:", network_file_path)
g = nx.read_gml(network_file_path, destringizer=int)
# get HDA solution
HDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, "HDA")
print("From HDA:",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])
# get our torch FINDER solution
FINDER_torch_solution_file_path = build_solution_path("HI-II-14",model_name="FINDER_ND",weight_type="")
print("\nSolution file:", FINDER_torch_solution_file_path)
FINDER_robustness, FINDER_maxcclist,FINDER_solutions = EvaluateSol(g, FINDER_torch_solution_file_path)
print("From FINDER:",FINDER_robustness, FINDER_maxcclist[0:5],FINDER_solutions[0:5])
# get old FINDER solution
OFINDER_torch_solution_file_path = build_solution_path("HI-II-14",model_name="old_FINDER_ND_tf",weight_type="")
print("\nSolution file:", OFINDER_torch_solution_file_path)
OFINDER_robustness, OFINDER_maxcclist,OFINDER_solutions = EvaluateSol(g, OFINDER_torch_solution_file_path)
print("From Old FINDER:",OFINDER_robustness, OFINDER_maxcclist[0:5],OFINDER_solutions[0:5])
# get ratio of cost of removed nodes to total cost, per removed node
FINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,FINDER_solutions)
OFINDER_frac_cost_list = get_frac_cost_of_removed_nodes(g,OFINDER_solutions,verbose=1)
HDA_frac_cost_list = get_frac_cost_of_removed_nodes(g,HDA_solutions,verbose=1)
#g.size(weight='weight')
g.size()
# plot
from matplotlib import pyplot as plt
plt.figure(figsize=(12,12))
plt.plot(FINDER_frac_cost_list, FINDER_maxcclist, label="FINDER torch")
plt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist, label="FINDER tf")
plt.plot(HDA_frac_cost_list, HDA_maxcclist, label="HDA")
plt.legend()
plt.xlabel("fraction of cost")
plt.ylabel("Residual gcc size")
def load_maxcc_file(f):
FINDER_f = open(f, "r")
scores = []
for score in FINDER_f:
scores.append(float(score))
return scores
# maxcclist directly from scorefiles
FINDER_maxcclist2=load_maxcc_file(f"{project_path}/FINDER-pytorch/code/results/FINDER_ND/real/StepRatio_0.0100/MaxCCList_Strategy_HI-II-14.txt")
OFINDER_maxcclist2=load_maxcc_file(f"{project_path}/FINDER-pytorch/code/results/old_FINDER_ND_tf/real/StepRatio_0.0100/MaxCCList_Strategy_HI-II-14.txt")
# plot
from matplotlib import pyplot as plt
plt.figure(figsize=(12,12))
plt.plot(FINDER_frac_cost_list, FINDER_maxcclist2, label="FINDER torch")
plt.plot(OFINDER_frac_cost_list, OFINDER_maxcclist2, label="FINDER tf")
plt.plot(HDA_frac_cost_list, HDA_maxcclist, label="HDA")
plt.legend()
plt.xlabel("fraction of cost")
plt.ylabel("Residual gcc size")
```
# Data pre-processing
```
# Load raw HI-II-14 data from .tsv format
import pandas as pd
# The data and web portal are made available to the public under the CC BY 4.0 license.
# Users of the web portal or its data should cite the web portal and the HuRI publication.
# Data source: http://www.interactome-atlas.org/
# HuRI publication: https://pubmed.ncbi.nlm.nih.gov/25416956/
data_url = "http://www.interactome-atlas.org/data/HI-II-14.tsv"
raw_edge_list = pd.read_csv(data_url, sep='\t', names=['node_from','node_to'])
raw_edge_list
# There are several self-referencing edges (self-loops), so we need to remove those first
uni_edge_list = raw_edge_list[raw_edge_list.node_from != raw_edge_list.node_to]
# Now we need to mask all protein labels [GENCODE (v27)] such as ENSG00000204889 to index numbers
edge_list = uni_edge_list.stack().rank(method='dense').unstack().astype(int)
print(edge_list.sort_values(by=['node_from']))
# Then shift all indices down by 1 so they start at 0
edge_list['node_from']-=1
edge_list['node_to']-=1
print(edge_list)
# Now we use networkx lib to convert it into a graph
G = nx.from_pandas_edgelist(edge_list, source='node_from', target='node_to')
# We add weights to the nodes (note: these are node weights, not edge weights)
nx.set_node_attributes(G, 0.0, "weight")
nx.write_gml(G, f"{project_path}/FINDER-pytorch/data/raw_HI-II-14.gml")
print(G.nodes[0]['weight'])
# ND, without cost
# load network
network_file_path = f"{project_path}/FINDER-pytorch/data/raw_HI-II-14.gml"
print("Graph Network:", network_file_path)
g = nx.read_gml(network_file_path, destringizer=int)
# get HDA solution
HDA_robustness, HDA_maxcclist,HDA_solutions = HXA(g, "HDA")
print("From HDA:",HDA_robustness, HDA_maxcclist[0:5],HDA_solutions[0:5])
%cd {project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/
# build modules
!python setup.py build_ext -i
import sys,os
# use old module functions for now
sys.path.append(f'{project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/')
from FINDER import FINDER
import numpy as np
from tqdm import tqdm
import time
import networkx as nx
import pandas as pd
import pickle as cp
import random
def mkdir(path):
if not os.path.exists(path):
os.mkdir(path)
# Predict
model_file = f"{project_path}/FINDER-pytorch/code/old_FINDER_ND_tf/models/nrange_30_50_iter_78000.ckpt"
print('The chosen model is: %s' % model_file)
dqn = FINDER()
dqn.LoadModel(model_file)
# Directory to save results
save_dir = f"{project_path}/FINDER-pytorch/code/results/raw_HI-II-14"
if not os.path.exists(save_dir):
os.makedirs(save_dir, exist_ok=True)
# input data
data_test = network_file_path
stepRatio=0.01
solution, elapsed = dqn.EvaluateRealData(model_file, data_test, save_dir, stepRatio)  # 'elapsed' avoids shadowing the time module
```
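The stack / dense-rank / unstack trick used above to remap string labels to contiguous integer ids can be checked on a toy edge list (a hedged sketch; the gene names below are made up):

```python
import pandas as pd

edges = pd.DataFrame({'node_from': ['ENSG01', 'ENSG03', 'ENSG01'],
                      'node_to':   ['ENSG03', 'ENSG07', 'ENSG07']})

# dense-rank every label across both columns, then shift so ids start at 0
idx = edges.stack().rank(method='dense').unstack().astype(int) - 1
print(idx)
# node_from: 0, 1, 0   node_to: 1, 2, 2
```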
```
%matplotlib inline
import re
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import nan
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.wait import WebDriverWait
## create a pandas dataframe to store the scraped data
df = pd.DataFrame(
columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price'])
## launch the browser; point my_path at the location of your driver executable (chromedriver here)
#Use Chromedriver for launching chrome
#Use Edgedriver for launching edge
# headless mode
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--window-size=1920x1080') # required in headless mode
my_path = r"chromedriver.exe" # choose your own path
browser = webdriver.Chrome(chrome_options=chrome_options, executable_path=my_path) #webdriver.Chrome for chromedriver
browser.maximize_window()
def get_elements(xpath, attr = 'text', pattern = ''):
elements = browser.find_elements_by_xpath(xpath) # find the elements according to conditions stated in xpath
if (attr == 'text'):
res = list(map(lambda x: x.text, elements))
else:
res = list(map(lambda x: x.get_attribute(attr),elements))
return res
columns=['hotel', 'rating', 'distance', 'score', 'recommendation_ratio', 'review_number', 'lowest_price'];
df = pd.DataFrame(columns=columns)
place = '旺角'  # choose a place in HK ('旺角' = Mong Kok)
url = r"http://hotels.ctrip.com/hotel/hong%20kong58/k1"+place;
try:
browser.get(url)
star3 = browser.find_element_by_id("star-3")
star4 = browser.find_element_by_id("star-4")
star5 = browser.find_element_by_id("star-5")
# choose hotels that >= 3 stars
ActionChains(browser).click(star3).perform()
ActionChains(browser).click(star4).perform()
ActionChains(browser).click(star5).perform()
time.sleep(4) # better way: WebDriverWait
from selenium.webdriver.support.wait import WebDriverWait
tst = WebDriverWait(browser, 5).until(
lambda x: x.find_element_by_link_text("下一页"))
clo =browser.find_element_by_id('appd_wrap_close')
ActionChains(browser).move_to_element(clo).click(clo).perform()
page = 0
while (tst.get_attribute('class') != 'c_down_nocurrent'): # until the last page
page += 1
# hotel brand
hotel_xpath = "//h2[@class='hotel_name']/a"
hotel = get_elements(hotel_xpath,'title')
hnum = len(hotel) # hotel numbers in current page
# hotel rating
rating_xpath = "//span[@class='hotel_ico']/span[starts-with(@class,'hotel_diamond')]"
rating = get_elements(rating_xpath,'class')
rating = [rating[i][-1:] for i in range(hnum)]
# distance
distance_xpath = "//p[@class='hotel_item_htladdress']/span[@class='dest_distance']"
distance = get_elements(distance_xpath)
distance_pattern = re.compile(r"\D+(\d+\.\d+)\D+")
distance = list(map(lambda x: distance_pattern.match(x).group(1), distance))
# score
score_xpath = "//div[@class='hotelitem_judge_box']/a | //div[@class='hotelitem_judge_box']/span[@class='no_grade']"
score = get_elements(score_xpath,'title')
score_pattern = re.compile(r"\D+(\d+\.\d+)\D+?")
score = list(map(lambda x: '/' if x=='暂无评分' or x=='' else score_pattern.match(x).group(1), score))
# recommendation
ratio_xpath = "//div[@class='hotelitem_judge_box']/a/span[@class='total_judgement_score']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade']"
ratio = get_elements(ratio_xpath)
# review
review_xpath = "//div[@class='hotelitem_judge_box']/a/span[@class='hotel_judgement']/span | //div[@class='hotelitem_judge_box']/span[@class='no_grade'] "
review = get_elements(review_xpath)
# lowest price
lowest_price_xpath = "//span[@class='J_price_lowList']"
price = get_elements(lowest_price_xpath)
rows = np.array([hotel, rating, distance, score, ratio, review, price]).T
dfrows = pd.DataFrame(rows,columns=columns)
df = df.append(dfrows,ignore_index=True)
ActionChains(browser).click(tst).perform() # next page
tst = WebDriverWait(browser, 10).until(
lambda x: x.find_element_by_link_text("下一页"))
print(tst.get_attribute('class'), page)
except Exception as e:
print(e.__doc__)
print(e.message)
finally:
browser.quit()
## create a csv file in our working directory with our scraped data
df.to_csv(place+"_hotel.csv", index=False,encoding='utf_8_sig')
print('Scraping is done!')
df.score = pd.to_numeric(df.score, errors='coerce')
df.rating = pd.to_numeric(df.rating, errors='coerce')
#df.recommendation_ratio = pd.to_numeric(df.recommendation_ratio,errors='coerce')
df['distance']=pd.to_numeric(df['distance'], errors='coerce')
df.review_number = pd.to_numeric(df.review_number, errors='coerce')
df.lowest_price = pd.to_numeric(df.lowest_price,errors='coerce')
df=df.sort_values(by='distance')
df
def piepic():
plt.figure(num='Rpie',dpi=100)
labels = ['3 stars', '4 stars', '5 stars']
sizes = [df.rating[df.rating==k].count() for k in [3,4,5] ]
colors = ['gold', 'lightcoral', 'lightskyblue']
explode = (0.01, 0.01, 0.01) # slightly separate every slice
def atxt(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.1f}%\n({:d})".format(pct, absolute)
# Plot
plt.pie(sizes, labels=labels, colors=colors, explode=explode, autopct=lambda pct: atxt(pct, sizes),
shadow=True, startangle=140)
plt.legend(labels,
title="hotel rating")
plt.axis('equal')
plt.savefig('Rpie.jpg')
plt.show()
plt.close()
def DvPpic(): # distance vs price
plt.figure(num='DvP',dpi=100)
plt.plot(df.distance[df.rating==3],df.lowest_price[df.rating==3],'x-',label='3 stars')
plt.plot(df.distance[df.rating==4],df.lowest_price[df.rating==4],'*-',label='4 stars')
plt.plot(df.distance[df.rating==5],df.lowest_price[df.rating==5],'rD-',label='5 stars')
plt.legend()
plt.xlabel('Distance (km)')
plt.ylabel('Price (Yuan)')
plt.grid()
plt.title('Distance vs. Price')
plt.savefig('DvP.jpg')
plt.show()
plt.close()
def Pdensity():
plt.figure(num='Pdensity',dpi=100)
df.lowest_price[df.rating==3].plot(kind='density',label='3 stars')
df.lowest_price[df.rating==4].plot(kind='density',label='4 stars')
df.lowest_price[df.rating==5].plot(kind='density',label='5 stars')
plt.grid()
plt.legend()
plt.xlabel('Price (Yuan)')
plt.title('Distribution of Price')
plt.savefig('Pdensity.jpg')
plt.show()
def Sbox():
plt.figure(num='Sbox',dpi=200)
data = pd.concat([df.score[df.rating==3].rename('3 stars'),
df.score[df.rating==4].rename('4 stars'),
df.score[df.rating==5].rename('5 stars')],
axis=1)
data.plot.box()
plt.minorticks_on()
plt.ylabel('score')
# plt.grid(b=True, which='minor', color='r', linestyle='--')
plt.title('Boxplot of Scores')
plt.savefig('Sbox.jpg')
#data.plot.box()
piepic()
DvPpic()
Pdensity()
Sbox()
```
| github_jupyter |
```
%pushd ../../
%env CUDA_VISIBLE_DEVICES=3
import json
import logging
import os
import sys
import tempfile
from tqdm.auto import tqdm
import torch
import torchvision
from torchvision import transforms
from PIL import Image
import numpy as np
torch.cuda.set_device(0)
from netdissect import setting
segopts = 'netpqc'
segmodel, seglabels, _ = setting.load_segmenter(segopts)
class UnsupervisedImageFolder(torchvision.datasets.ImageFolder):
def __init__(self, root, transform=None, max_size=None, get_path=False):
self.temp_dir = tempfile.TemporaryDirectory()
os.symlink(root, os.path.join(self.temp_dir.name, 'dummy'))
root = self.temp_dir.name
super().__init__(root, transform=transform)
self.get_path = get_path
self.perm = None
if max_size is not None:
actual_size = super().__len__()
if actual_size > max_size:
self.perm = torch.randperm(actual_size)[:max_size].clone()
logging.info(f"{root} has {actual_size} images, downsample to {max_size}")
else:
logging.info(f"{root} has {actual_size} images <= max_size={max_size}")
def _find_classes(self, dir):
return ['./dummy'], {'./dummy': 0}
def __getitem__(self, key):
if self.perm is not None:
key = self.perm[key].item()
if isinstance(key, str):
path = key
else:
path, target = self.samples[key]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.get_path:
return sample, path
else:
return sample
def __len__(self):
if self.perm is not None:
return self.perm.size(0)
else:
return super().__len__()
len(seglabels)
class Sampler(torch.utils.data.Sampler):
def __init__(self, dataset, seg_path):
self.todos = []
for path, _ in dataset.samples:
k = os.path.splitext(os.path.basename(path))[0]
if not os.path.exists(os.path.join(seg_path, k + '.pth')):
self.todos.append(path)
def __len__(self):
return len(self.todos)
def __iter__(self):
yield from self.todos
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
def process(img_path, seg_path, device='cuda', batch_size=128, **kwargs):
os.makedirs(seg_path, exist_ok=True)
dataset = UnsupervisedImageFolder(img_path, transform=transform, get_path=True)
sampler = Sampler(dataset, seg_path)
loader = torch.utils.data.DataLoader(dataset, num_workers=24, batch_size=batch_size, pin_memory=True, sampler=sampler)
with torch.no_grad():
for x, paths in tqdm(loader):
segs = segmodel.segment_batch(x.to(device), **kwargs).detach().cpu()
for path, seg in zip(paths, segs):
k = os.path.splitext(os.path.basename(path))[0]
torch.save(seg, os.path.join(seg_path, k + '.pth'))
del segs
import glob
torch.backends.cudnn.benchmark=True
!ls churches/dome2tree
!ls notebooks/stats/churches/
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/naive',
'notebooks/stats/churches/dome2spire_all/naive',
batch_size=8,
)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/poisson',
'notebooks/stats/churches/dome2spire_all/poisson',
batch_size=8,
)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/baselines/pyflow/dome2spire_all_256/laplace',
'notebooks/stats/churches/dome2spire_all/laplace',
batch_size=8,
)
process(
'/data/vision/torralba/distillation/gan_rewriting/results/ablations/stylegan-church-dome2tree-8-1-2001-0.0001-overfitdomes_filtered/images',
'churches/dome2tree/overfit',
batch_size=8)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/domes',
'churches/domes',
batch_size=12)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2tree',
'churches/dome2tree/ours',
batch_size=8)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2spire',
'churches/dome2spire/ours',
batch_size=8)
```
| github_jupyter |
# How do distributions transform under a change of variables ?
Kyle Cranmer, March 2016
```
%pylab inline --no-import-all
```
We are interested in understanding how distributions transform under a change of variables.
Let's start with a simple example. Think of a spinner like on a game of twister.
<!--<img src="http://cdn.krrb.com/post_images/photos/000/273/858/DSCN3718_large.jpg?1393271975" width=300 />-->
We flick the spinner and it stops. Let's call the angle of the pointer $x$. It seems a safe assumption that the distribution of $x$ is uniform between $[0,2\pi)$... so $p_x(x) = 1/(2\pi)$
Now let's say that we change variables to $y=\cos(x)$ (sorry if the names are confusing here, don't think about x- and y-coordinates, these are just names for generic variables). The question is this:
**what is the distribution of y?** Let's call it $p_y(y)$
Well it's easy to do with a simulation, let's try it out
```
# generate samples for x, evaluate y=cos(x)
n_samples = 100000
x = np.random.uniform(0,2*np.pi,n_samples)
y = np.cos(x)
# make a histogram of x
n_bins = 50
counts, bins, patches = plt.hist(x, bins=50, density=True, alpha=0.3)
plt.plot([0,2*np.pi], (1./2/np.pi, 1./2/np.pi), lw=2, c='r')
plt.xlim(0,2*np.pi)
plt.xlabel('x')
plt.ylabel('$p_x(x)$')
```
Ok, now let's make a histogram for $y=\cos(x)$
```
counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
```
It's not uniform! Why is that? Let's look at the $x-y$ relationship
```
# make a scatter of x,y
plt.scatter(x[:300],y[:300]) #just the first 300 points
xtest = .2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = np.pi/2
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')
xtest = 2*np.pi-xtest
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
xtest = xtest+.1
plt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')
plt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')
plt.ylim(-1.5,1.5)
plt.xlim(-1,7)
```
The two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability of $P( cos(b) < y < cos(a) )$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. For now, let's just focus on the first part of the curve with $x<\pi$.
So we can write (this is the important equation):
\begin{equation}
\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy
\end{equation}
where $y_a = \cos(a)$ and $y_b = \cos(b)$.
and we can re-write the integral on the right by using a change of variables (pure calculus)
\begin{equation}
\int_a^b p_x(x) dx = \int_{y_b}^{y_a} p_y(y) dy = \int_a^b p_y(y(x)) \left| \frac{dy}{dx}\right| dx
\end{equation}
notice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:
\begin{equation}
p_x(x) = p_y(y) \left| \frac{dy}{dx}\right|
\end{equation}
and equivalently
\begin{equation}
p_y(y) = p_x(x) \,/ \,\left| \, {dy}/{dx}\, \right |
\end{equation}
The factor $\left|\frac{dy}{dx} \right|$ is called a Jacobian. When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.
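As a sanity check (a NumPy-only sketch, not part of the original derivation), we can evaluate this formula numerically and compare it against a histogram of the simulated $y$ values, including a factor of 2 for the two $x$ branches noted earlier:

```python
import numpy as np

# sample x uniformly and transform, as in the cells above
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 200_000)
y = np.cos(x)

# predicted density from the change-of-variables formula:
# p_y(y) = p_x(x) / |dy/dx|, with p_x = 1/(2*pi) and |dy/dx| = |sin(x)| = sqrt(1-y^2),
# times 2 because two values of x map to each y
y_grid = np.linspace(-0.9, 0.9, 19)
pred = 2 * (1 / (2 * np.pi)) / np.sqrt(1 - y_grid**2)

# empirical density from a histogram, interpolated onto the same grid
counts, edges = np.histogram(y, bins=np.linspace(-1, 1, 41), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
emp = np.interp(y_grid, centers, counts)

max_err = np.max(np.abs(emp - pred))
print(max_err)  # small, up to Monte Carlo noise
```

The grid stays away from $y=\pm 1$, where the predicted density diverges and a finite histogram cannot follow it.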
```
plt.plot((0.,1), (0,.3))
plt.plot((0.,1), (0,0), lw=2)
plt.plot((1.,1), (0,.3))
plt.ylim(-.1,.4)
plt.xlim(-.1,1.6)
plt.text(0.5,0.2, '1', color='b')
plt.text(0.2,0.03, 'x', color='black')
plt.text(0.5,-0.05, 'y=cos(x)', color='g')
plt.text(1.02,0.1, '$\sin(x)=\sqrt{1-y^2}$', color='r')
```
In our case:
\begin{equation}
\left|\frac{dy}{dx} \right| = \sin(x)
\end{equation}
Looking at the right-triangle above you can see $\sin(x)=\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\pi$. So we arrive at
\begin{equation}
p_y(y) = 2 \times \frac{1}{2 \pi} \frac{1}{\sin(x)} = \frac{1}{\pi} \frac{1}{\sin(\arccos(y))} = \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}}
\end{equation}
Notice that when $y=\pm 1$ the pdf is diverging. This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows!
| | |
|---|---|
| <img src="http://www.nanowerk.com/spotlight/id19915_1.jpg" size=200 /> | <img src="http://www.ams.org/featurecolumn/images/february2009/caustic.gif" size=200> |
**Let's check our prediction**
```
counts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)
pdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)
plt.plot(y_bins, pdf_y, c='r', lw=2)
plt.ylim(0,5)
plt.xlabel('y')
plt.ylabel('$p_y(y)$')
```
Perfect!
## A trick using the cumulative distribution function (cdf) to generate random numbers
Let's consider a different variable transformation now -- it is a special one that we can use to our advantage.
\begin{equation}
y(x) = \textrm{cdf}(x) = \int_{-\infty}^x p_x(x') dx'
\end{equation}
Here's a plot of a distribution and cdf for a Gaussian.
(Note: the axes are different for the pdf and the cdf; see http://matplotlib.org/examples/api/two_scales.html)
```
from scipy.stats import norm
x_for_plot = np.linspace(-3,3, 30)
fig, ax1 = plt.subplots()
ax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')
ax1.set_ylabel('p(x)', color='b')
for tl in ax1.get_yticklabels():
tl.set_color('b')
ax2 = ax1.twinx()
ax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')
ax2.set_ylabel('cdf(x)', color='r')
for tl in ax2.get_yticklabels():
tl.set_color('r')
```
Ok, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate
\begin{equation}
\frac{dy}{dx} = \frac{d}{dx} \int_{-\infty}^x p_x(x') dx'
\end{equation}
Just like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise.. the slope of the cdf is the pdf.
So putting these together we find the distribution for $y$ is:
\begin{equation}
p_y(y) = p_x(x) \, / \, \frac{dy}{dx} = p_x(x) /p_x(x) = 1
\end{equation}
So it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.
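We can verify this claim numerically (a sketch of my own, using an exponential distribution because its cdf $1-e^{-x}$ has a simple closed form): pushing samples through their own cdf should give a flat histogram on $[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(1)
# exponential samples; their cdf is 1 - exp(-x) in closed form
x = rng.exponential(1.0, 100_000)
u = 1 - np.exp(-x)  # y = cdf(x)

# if the claim holds, u is uniform on [0, 1]: every density bin is close to 1
counts, _ = np.histogram(u, bins=10, range=(0, 1), density=True)
print(counts.round(2))
```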
We can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.
Let's try it for a Gaussian. The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)
```
norm.ppf.__doc__
#check it out
norm.cdf(0), norm.ppf(0.5)
```
Ok, let's use CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers
```
rand_cdf = np.random.uniform(0,1,10000)
rand_norm = norm.ppf(rand_cdf)
_ = plt.hist(rand_norm, bins=30, density=True, alpha=0.3)
plt.xlabel('x')
```
**Pros**: The great thing about this technique is it is very efficient. You only generate one random number per random $x$.
**Cons**: the downside is you need to know how to compute the inverse cdf for $p_x(x)$ and that can be difficult. It works for a distribution like a Gaussian, but for some random distribution this might be even more computationally expensive than the accept/reject approach. This approach also doesn't really work if your distribution is for more than one variable.
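For contrast, here is a minimal accept/reject sketch for the same Gaussian (the uniform proposal box, its range $[-4,4]$, and the bound $0.4$ on the pdf are my choices, not from the text). Note that it discards many proposals, whereas the cdf trick uses exactly one uniform draw per sample:

```python
import numpy as np

rng = np.random.default_rng(2)

def accept_reject(pdf, lo, hi, pdf_max, n, rng):
    """Draw n samples from pdf on [lo, hi] by rejection from a uniform box."""
    out = []
    tried = 0
    while len(out) < n:
        x = rng.uniform(lo, hi, n)        # propose uniformly in [lo, hi]
        u = rng.uniform(0, pdf_max, n)    # uniform height in the box
        out.extend(x[u < pdf(x)])         # keep points under the curve
        tried += n
    return np.array(out[:n]), tried

gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
samples, tried = accept_reject(gauss, -4, 4, 0.4, 10_000, rng)
print(len(samples), tried)  # far more proposals than accepted samples
```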
## Going full circle
Ok, let's try it for our distribution of $y=\cos(x)$ above. We found
\begin{equation}
p_y(y) = \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}}
\end{equation}
So the CDF is (see Wolfram alpha for [integral](http://www.wolframalpha.com/input/?i=integrate%5B1%2Fsqrt%5B1-x%5E2%5D%2FPi%5D) )
\begin{equation}
cdf(y') = \int_{-1}^{y'} \frac{1}{\pi} \frac{1}{\sqrt{1-y^2}} = \frac{1}{\pi}\arcsin(y') + C
\end{equation}
and we know that for $y=-1$ the CDF must be 0, so the constant is $1/2$. Using $\arcsin(y') = \pi/2 - \arccos(y')$, you can also write it as $cdf(y') = 1 - \frac{1}{\pi}\arccos(y')$.
So to apply the trick, we need to generate uniformly random variables $z$ between 0 and 1, and then take the inverse of the cdf to get $y$. Ok, so what would that be:
\begin{equation}
y = \textrm{cdf}^{-1}(z) = \cos(\pi (1-z))
\end{equation}
and since $1-z$ is uniform on $[0,1]$ whenever $z$ is, we can just as well use $y = \cos(\pi z)$.
**Of course!** that's how we started in the first place, we started with a uniform $x$ in $[0,2\pi]$ and then defined $y=\cos(x)$. So we just worked backwards to get where we started. The only difference here is that we only evaluate the first half: $\cos(x < \pi)$
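A quick numerical check of that round trip (my own sketch): sampling $y=\cos(\pi z)$ with uniform $z \in [0,1]$ should reproduce the distribution of $y=\cos(x)$ with uniform $x \in [0,2\pi)$. We compare sorted quantiles of the two sample sets:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# original construction: x uniform on [0, 2*pi), y = cos(x)
y_direct = np.cos(rng.uniform(0, 2 * np.pi, n))
# inverse-cdf construction: z uniform on [0, 1], y = cos(pi * z)
y_cdf = np.cos(np.pi * rng.uniform(0, 1, n))

# the two should agree in distribution: compare a range of quantiles
qs = np.linspace(0.05, 0.95, 19)
max_diff = np.max(np.abs(np.quantile(y_direct, qs) - np.quantile(y_cdf, qs)))
print(max_diff)  # small, up to Monte Carlo noise
```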
| github_jupyter |
# Introduction to Docker
**Learning Objectives**
* Build and run Docker containers
* Pull Docker images from Docker Hub and Google Container Registry
* Push Docker images to Google Container Registry
## Overview
Docker is an open platform for developing, shipping, and running applications. With Docker, you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code.
Docker does this by combining kernel containerization features with workflows and tooling that helps you manage and deploy your applications.
Docker containers can be directly used in Kubernetes, which allows them to be run in the Kubernetes Engine with ease. After learning the essentials of Docker, you will have the skillset to start developing Kubernetes and containerized applications.
## Basic Docker commands
See what docker images you have.
```
!docker images
```
If this is the first time working with docker you won't have any repositories listed.
**Note**. If you are running this in an AI Notebook, then you should see a single image `gcr.io/inverting-proxy/agent`. This is the container that is currently running the AI Notebook.
Let's use `docker run` to pull a docker image called `hello-world` from the public registry. The docker daemon will search for the `hello-world` image, if it doesn't find the image locally, it pulls the image from a public registry called Docker Hub, creates a container from that image, and runs the container for you.
```
!docker run hello-world
```
Now when we look at our docker images we should see `hello-world` there as well.
```
!docker images
```
This is the image pulled from the Docker Hub public registry. The Image ID is in `SHA256` hash format—this field specifies the Docker image that's been provisioned. When the docker daemon can't find an image locally, it will by default search the public registry for the image.
If we run `docker run hello-world` again, the daemon will find the image locally and won't have to download it from the container registry.
To see all docker containers running, use `docker ps`.
```
!docker ps
```
There are no running containers. **Note: if you are running this in an AI Notebook, you'll see one container running.**
The `hello-world` containers you ran previously already exited. In order to see all containers, including ones that have finished executing, run docker `ps -a`:
```
!docker ps -a
```
This shows you the Container ID, a UUID generated by Docker to identify the container, and more metadata about the run. The container Names are also randomly generated but can be specified with `docker run --name [container-name] hello-world`.
## Build a Docker container
Let's build a Docker image that's based on a simple node application.
**Exercise**
Open the text file called `intro.docker` in the `dockerfiles` folder and complete the TODO there.
Your dockerfile should have the following steps
1. use `FROM` to inherit an official Node runtime as the parent image; e.g. node:6
2. use `WORKDIR` to set the working directory to /app
3. use `ADD` to copy the current directory to the container at /app
4. use `EXPOSE` to make the container's port 80 available to the outside world
5. use `CMD` to run the command `node ./src/app.js`
This file instructs the Docker daemon on how to build your image.
The initial line specifies the base parent image, which in this case is the official Docker image for node version 6.
In the second, we set the working (current) directory of the container.
In the third, we add the current directory's contents (indicated by the "." ) into the container.
Then we expose the container's port so it can accept connections on that port and finally run the node command to start the application.
Check out the other [Docker command references](https://docs.docker.com/engine/reference/builder/#known-issues-run) to understand what each line does.
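Putting those five steps together, one possible completion of `intro.docker` might look like this (a sketch only — check the base-image tag and paths against your own repository before using it, and try writing it yourself first):

```dockerfile
# Use an official Node runtime as the parent image
FROM node:6

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Make the container's port 80 available to the outside world
EXPOSE 80

# Run the app when the container launches
CMD ["node", "./src/app.js"]
```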
We're going to use this Docker container to run a simple node.js app. Have a look at `app.js`. This is a simple HTTP server that listens on port 80 and returns "Hello World."
Now let's build the image. Note again the "`.`", which means the current directory, so you need to run this command from within the directory that has the Dockerfile.
The `-t` is to name and tag an image with the `name:tag` syntax. The name of the image is `node-app` and the tag is `0.1`. The tag is highly recommended when building Docker images. If you don't specify a tag, the tag will default to latest and it becomes more difficult to distinguish newer images from older ones. Also notice how each line in the Dockerfile above results in intermediate container layers as the image is built.
**Exercise**
Use `docker build` to build the docker image at `dockerfiles/intro.docker`. Tag the image `node-app:0.1`.
```
!docker build -t node-app:0.1 -f dockerfiles/intro.docker .
```
Let's check that the image has been created correctly.
```
!docker images
```
You should see a `node-app` repository that was created only seconds ago.
Notice `node` is the base image and `node-app` is the image you built. You can't remove `node` without removing `node-app` first. The size of the image is relatively small compared to VMs. Other versions of the node image such as `node:slim` and `node:alpine` can give you even smaller images for easier portability. The topic of slimming down container sizes is further explored in Advanced Topics. You can view all versions in the official repository here.
Note, you can remove an image from your docker images using `docker rmi [repository]:[tag]`.
## Run a Docker container
Now we'll run the container based on the image you built above using the `docker run` command. The `--name` flag allows you to name the container if you like. And `-p` instructs Docker to map the host's port 4000 to the container's port 80. This allows you to reach the server at http://localhost:4000. Without port mapping, you would not be able to reach the container at localhost.
```
!docker ps -a
```
**Exercise**
Use `docker run` to run the container you just build called `node-app:0.1`. Assign the host port `4000` to port `80` and assign it the name `my-app`.
```
%%bash
docker run -p 4000:80 --name my-app node-app:0.1
```
To test out the server, open a terminal window and type the following command:
```bash
curl http://localhost:4000
```
You should see the server respond with `Hello World`
The container will run as long as the initial terminal is running. If you want to stop the container, run the following command in the terminal to stop and remove the container:
```bash
docker stop my-app && docker rm my-app
```
After a few moments the container will stop. You should notice the cell above will complete execution.
#### Running the container in the background
If you want the container to run in the background (not tied to the terminal's session), you need to specify the `-d` flag.
Now run the following command to start the container in the background
**Exercise**
Modify your command above with `-d` flag to run `my-app` in the background.
```
%%bash
docker run -d -p 4000:80 --name my-app node-app:0.1
```
Your container is now running in the background. You can check the status of your running container using `docker ps`
```
!docker ps
```
Notice the container is running in the output of docker ps. You can look at the logs by executing `docker logs [container_id]`.
```
# Note, your container id will be different
!docker logs b9d5fd6b8e33
```
You should see
```bash
Server running at http://0.0.0.0:80/
```
If you want to follow the log's output as the container is running, use the `-f` option.
## Modify & Publish
Let's modify the application and push it to your Google Cloud Repository (gcr). After that you'll remove all local containers and images to simulate a fresh environment, and then pull and run your containers from gcr. This will demonstrate the portability of Docker containers.
### Edit `app.js`
Open the file `./src/app.js` with the text editor and replace "Hello World" with another string. Then build this new image.
**Exercise**
After modifying the `app.js` file, use `docker build` to build a new container called `node-app:0.2` from the same docker file.
```
%%bash
docker build -t node-app:0.2 -f dockerfiles/intro.docker .
```
Notice in `Step 2` of the output we are using an existing cache layer. From `Step 3` and on, the layers are modified because we made a change in `app.js`.
Run another container with the new image version. Notice how we map the host's port 8000 instead of 80. We can't use host port 4000 because it's already in use.
**Exercise**
Run this new container in the background using a different port and with the name `my-app-2`.
```
!docker run -d -p 8000:80 --name my-app-2 node-app:0.2
```
You can check that both container are running using `docker ps`.
```
!docker ps
```
And let's test both containers using `curl` as before:
```
!curl http://localhost:8000
!curl http://localhost:4000
```
Recall, to stop a container running, you can execute the following command either in a terminal or (because they are running in the background) in a cell in this notebook.
### Publish to gcr
Now you're going to push your image to the Google Container Registry (gcr). To push images to your private registry hosted by gcr, you need to tag the images with a registry name. The format is `[hostname]/[project-id]/[image]:[tag]`.
For gcr:
* `[hostname]`= gcr.io
* `[project-id]`= your project's ID
* `[image]`= your image name
* `[tag]`= any string tag of your choice. If unspecified, it defaults to "latest".
```
import os
PROJECT_ID = "qwiklabs-gcp-00-eeb852ce8ccb" # REPLACE WITH YOUR PROJECT NAME
os.environ["PROJECT_ID"] = PROJECT_ID
```
Let's tag `node-app:0.2`.
```
!docker images
```
**Exercise**
Tag the `node-app:0.2` image with a new image name conforming to the naming convention `gcr.io/[project-id]/[image]:[tag]`. Keep the image and tag names the same.
```
%%bash
docker tag node-app:0.2 gcr.io/${PROJECT_ID}/node-app:0.2
```
Now when we list our docker images we should see this newly tagged repository.
```
!docker images
```
Next, let's push this image to gcr.
**Exercise**
Push this new image to the gcr.
```
%%bash
docker push gcr.io/${PROJECT_ID}/node-app:0.2
```
Check that the image exists in `gcr` by visiting the image registry Cloud Console. You can navigate via the console to `Navigation menu > Container Registry` or visit the url from the cell below:
```
%%bash
echo "http://gcr.io/${PROJECT_ID}/node-app"
```
### Test the published gcr image
Let's test this image. You could start a new VM, ssh into that VM, and install gcloud. For simplicity, we'll just remove all containers and images to simulate a fresh environment.
First, stop and remove all containers using `docker stop` and `docker rm`. **Be careful not to stop the container running this AI Notebook!**.
```
!docker stop my-app && docker rm my-app
!docker stop my-app-2 && docker rm my-app-2
```
Now remove the docker images you've created above using `docker rmi`.
```
!docker images
%%bash
docker rmi node-app:0.2
docker rmi gcr.io/${PROJECT_ID}/node-app:0.2
docker rmi node-app:0.1
docker rmi node:6
docker rmi -f hello-world:latest
```
Confirm all images are removed with `docker images`.
```
!docker images
```
At this point you should have a pseudo-fresh environment. Now, pull the image and run it.
```
%%bash
docker pull gcr.io/${PROJECT_ID}/node-app:0.2
docker run -p 4000:80 -d gcr.io/${PROJECT_ID}/node-app:0.2
```
You can check that it's running as expected using `curl` as before:
```
!curl http://localhost:4000
```
Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from skimage.io import imread,imshow
from skimage.transform import resize
from sklearn.utils import shuffle
from tqdm import tqdm
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer,Conv2D,MaxPool2D,BatchNormalization,Dropout,Flatten,Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
%matplotlib inline
# my Contribution: Reduce Dataset
train_dataset_0_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/all/*.bmp')
train_dataset_0_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_0/hem/*.bmp')
train_dataset_1_all=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/all/*.bmp')
train_dataset_1_hem=glob.glob('../input/leukemia-classification/C-NMC_Leukemia/training_data/fold_1/hem/*.bmp')
valid_data=pd.read_csv('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data_labels.csv')
Test_dataset = glob.glob('../input/leukemia-classification/C-NMC_Leukemia/testing_data/C-NMC_test_final_phase_data')
a,b=len(train_dataset_0_all),len(train_dataset_1_all)
d=a+b
print('count:',d)
valid_data.head()
A=[]
H=[]
A.extend(train_dataset_0_all)
A.extend(train_dataset_1_all)
H.extend(train_dataset_0_hem)
H.extend(train_dataset_1_hem)
print(len(A))
print(len(H))
A=np.array(A)
H=np.array(H)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
    rand=np.random.randint(len(A))
    img=imread(A[rand])
    img=resize(img,(128,128))
    ax[i].imshow(img)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
    rand=np.random.randint(len(H))
    img=imread(H[rand])
    img=resize(img,(128,128))
    ax[i].imshow(img)
image=[]
label=[]
for i in tqdm(range(len(A))):
    img=imread(A[i])
    img=resize(img,(128,128))
    image.append(img)
    label.append(1)
for i in tqdm(range(len(H))):
    img=imread(H[i])  # bug fix: this was 'img_=imread(H[i])', so the previous loop's stale image got resized and appended
    img=resize(img,(128,128))
    image.append(img)
    label.append(0)
image=np.array(image)
label=np.array(label)
del A
del H
image, label = shuffle(image, label, random_state = 42)
fig, ax = plt.subplots(nrows = 1, ncols = 5, figsize = (20,20))
for i in tqdm(range(0,5)):
    rand=np.random.randint(len(image))
    ax[i].imshow(image[rand])
    a=label[rand]
    if a==1:
        ax[i].set_title('diseased')
    else:
        ax[i].set_title('fine')
X=image
y=label
del image
del label
X_val = []
for image_name in valid_data.new_names:
    # Loading images
    img = imread('../input/leukemia-classification/C-NMC_Leukemia/validation_data/C-NMC_test_prelim_phase_data/' + image_name)
    # Resizing
    img = resize(img, (128,128))
    # Appending them into list
    X_val.append(img)
# Converting into array
X_val = np.array(X_val)
# Storing target values as well
y_val = valid_data.labels.values
train_datagen = ImageDataGenerator(horizontal_flip=True,
vertical_flip=True,
zoom_range = 0.2)
train_datagen.fit(X)
#Contribution: Made some changes in layers and add some hidden layers
model=Sequential()
model.add(InputLayer(input_shape=(128,128,3)))
model.add(Conv2D(filters=32,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(4))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(2))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=128,kernel_size=(3,3),padding='valid',activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1))
model.add(MaxPool2D(pool_size=(2,2),padding='valid'))
model.add(Dropout(.2))
model.add(Conv2D(filters=256,kernel_size=(3,3),padding='valid',activation='relu')) #contribution
model.add(BatchNormalization())#contribution
model.add(Flatten())
model.add(Dense(units = 128, activation = 'relu')) #contribution
model.add(Dropout(0.3))
model.add(Dense(units = 64, activation = 'relu')) #contribution
model.add(Dropout(0.3))
model.add(Dense(units=1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
filepath = './best_weights.hdf5'
earlystopping = EarlyStopping(monitor = 'val_accuracy',
mode = 'max' ,
patience = 15)
checkpoint = ModelCheckpoint(filepath,
monitor = 'val_accuracy',
mode='max',
save_best_only=True,
verbose = 1)
callback_list = [earlystopping, checkpoint]
len(X),len(X_val)
#contribution
lr_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=10,
verbose=2,
factor=.75)
model_checkpoint= ModelCheckpoint("/best_result_checkpoint", monitor='val_loss', save_best_only=True, verbose=0)
#history1 = model.fit(train_datagen.flow(X, y, batch_size = 512),
#validation_data = (X_val, y_val),
# epochs = 2,
#verbose = 1,
# callbacks =[earlystopping])
#history2 = model.fit(train_datagen.flow(X, y, batch_size = 512),
# validation_data = (X_val, y_val),
# epochs = 4,
#verbose = 1,
#callbacks =[earlystopping])
#contribution: reduced the number of epochs and the batch size; note the final layer actually uses a sigmoid activation (see the model definition above), not softmax
history = model.fit(train_datagen.flow(X, y, batch_size = 212),
validation_data = (X_val, y_val),
epochs = 6,
verbose = 1,
callbacks =[earlystopping])
import tensorflow as tf  # tf was not imported earlier in this notebook
tf.keras.models.save_model(model,'my_model.hdf5')
model.summary()
#my Contribution
print(history.history['accuracy'])
plt.plot(history.history['accuracy'],'--', label='accuracy on training set')
plt.plot(history.history['val_accuracy'], label='accuracy on validation set')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
#my Contribution
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
# creating the dataset
data = {'Reference Accuracy':65, 'My Accuracy':97}
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.4)
plt.xlabel("Comparison Between My Accuracy and Reference Accuracy ")
plt.ylabel("Accuracy")
plt.show()
```
Reference:
https://www.kaggle.com/dimaorizz/kravtsov-lab7
https://towardsdatascience.com/cnn-architectures-a-deep-dive-a99441d18049
https://ieeexplore.ieee.org/document/9071471
| github_jupyter |
[<img src="https://deepnote.com/buttons/launch-in-deepnote-small.svg">](https://deepnote.com/launch?url=https%3A%2F%2Fgithub.com%2Fgordicaleksa%2Fget-started-with-JAX%2Fblob%2Fmain%2FTutorial_4_Flax_Zero2Hero_Colab.ipynb)
<a href="https://colab.research.google.com/github/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_4_Flax_Zero2Hero_Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Flax: From Zero to Hero!
This notebook heavily relies on the [official Flax docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/blob/main/examples/) + some additional code/modifications, comments/notes, etc.
### Enter Flax - the basics ❤️
Before you jump into the Flax world I strongly recommend you check out my JAX tutorials, as I won't be covering the details of JAX here.
* (Tutorial 1) ML with JAX: From Zero to Hero ([video](https://www.youtube.com/watch?v=SstuvS-tVc0), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_1_JAX_Zero2Hero_Colab.ipynb))
* (Tutorial 2) ML with JAX: from Hero to Hero Pro+ ([video](https://www.youtube.com/watch?v=CQQaifxuFcs), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_2_JAX_HeroPro%2B_Colab.ipynb))
* (Tutorial 3) ML with JAX: Coding a Neural Network from Scratch in Pure JAX ([video](https://www.youtube.com/watch?v=6_PqUPxRmjY), [notebook](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb))
That out of the way - let's start with the basics!
```
# Install Flax and JAX
!pip install --upgrade -q "jax[cuda11_cudnn805]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install --upgrade -q git+https://github.com/google/flax.git
!pip install --upgrade -q git+https://github.com/deepmind/dm-haiku # Haiku is here just for comparison purposes
import jax
from jax import lax, random, numpy as jnp
# NN lib built on top of JAX developed by Google Research (Brain team)
# Flax was "designed for flexibility" hence the name (Flexibility + JAX -> Flax)
import flax
from flax.core import freeze, unfreeze
from flax import linen as nn # nn notation also used in PyTorch and in Flax's older API
from flax.training import train_state # a useful dataclass to keep train state
# DeepMind's NN JAX lib - just for comparison purposes, we're not learning Haiku here
import haiku as hk
# JAX optimizers - a separate lib developed by DeepMind
import optax
# Flax doesn't have its own data loading functions - we'll be using PyTorch dataloaders
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
# Python libs
import functools # useful utilities for functional programs
from typing import Any, Callable, Sequence, Optional
# Other important 3rd party libs
import numpy as np
import matplotlib.pyplot as plt
```
The goal of this notebook is to get you started with Flax!
I'll only cover the most essential parts of Flax (and Optax) - just as much as needed to get you started with training NNs!
```
# Let's start with the simplest model possible: a single feed-forward layer (linear regression)
model = nn.Dense(features=5)
# All of the Flax NN layers inherit from the Module class (similarly to PyTorch)
print(nn.Dense.__bases__)
```
So how can we do inference with this simple model? 2 steps: init and apply!
```
# Step 1: init
seed = 23
key1, key2 = random.split(random.PRNGKey(seed))
x = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector
# Initialization call - this gives us the actual model weights
# (remember JAX handles state externally!)
y, params = model.init_with_output(key2, x)
print(y)
print(jax.tree_map(lambda x: x.shape, params))
# Note1: automatic shape inference
# Note2: immutable structure (hence FrozenDict)
# Note3: init_with_output if you care, for whatever reason, about the output here
# Step 2: apply
y = model.apply(params, x) # this is how you run prediction in Flax, state is external!
print(y)
try:
y = model(x) # this doesn't work anymore (bye bye PyTorch syntax)
except Exception as e:
print(e)
# todo: a small coding exercise - let's contrast Flax with Haiku
#@title Haiku vs Flax solution
model = hk.transform(lambda x: hk.Linear(output_size=5)(x))
seed = 23
key1, key2 = random.split(random.PRNGKey(seed))
x = random.normal(key1, (10,)) # create a dummy input, a 10-dimensional random vector
params = model.init(key2, x)
out = model.apply(params, None, x)
print(out)
print(hk.Linear.__bases__)
```
All of this might (initially!) be overwhelming if you're used to a stateful, object-oriented paradigm.
What Flax offers is high performance and flexibility (similarly to JAX).
Here are some [benchmark numbers](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) from the HuggingFace team.

Now that we have an answer to "why should I learn Flax?" - let's start our descent into Flaxlandia!
### A toy example 🚚 - training a linear regression model
We'll first implement a pure-JAX approach and then we'll do it the Flax way.
```
# Defining a toy dataset
n_samples = 150
x_dim = 2 # putting small numbers here so that we can visualize the data easily
y_dim = 1
noise_amplitude = 0.1
# Generate (random) ground truth W and b
# Note: we could get W, b from a randomly initialized nn.Dense here; we're staying closer to raw JAX for now
key, w_key, b_key = random.split(random.PRNGKey(seed), num=3)
W = random.normal(w_key, (x_dim, y_dim)) # weight
b = random.normal(b_key, (y_dim,)) # bias
# This is the structure that Flax expects (recall from the previous section!)
true_params = freeze({'params': {'bias': b, 'kernel': W}})
# Generate samples with additional noise
key, x_key, noise_key = random.split(key, num=3)
xs = random.normal(x_key, (n_samples, x_dim))
ys = jnp.dot(xs, W) + b
ys += noise_amplitude * random.normal(noise_key, (n_samples, y_dim))
print(f'xs shape = {xs.shape} ; ys shape = {ys.shape}')
# Let's visualize our data (becoming one with the data paradigm <3)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
assert xs.shape[-1] == 2 and ys.shape[-1] == 1 # low dimensional data so that we can plot it
ax.scatter(xs[:, 0], xs[:, 1], zs=ys)
# todo: exercise - let's show that our data lies on the 2D plane embedded in 3D
# option 1: analytic approach
# option 2: data-driven approach
def make_mse_loss(xs, ys):
    def mse_loss(params):
        """Gives the value of the loss on the (xs, ys) dataset for the given model (params)."""
        # Define the squared loss for a single pair (x, y)
        def squared_error(x, y):
            pred = model.apply(params, x)
            # Inner product because 'y' could in general have more than 1 dimension
            return jnp.inner(y - pred, y - pred) / 2.0
        # Batched version via vmap
        return jnp.mean(jax.vmap(squared_error)(xs, ys), axis=0)
    return jax.jit(mse_loss)  # and finally we jit the result (mse_loss is a pure function)
mse_loss = make_mse_loss(xs, ys)
value_and_grad_fn = jax.value_and_grad(mse_loss)
# Let's reuse the simple feed-forward layer since it trivially implements linear regression
model = nn.Dense(features=y_dim)
params = model.init(key, xs)
print(f'Initial params = {params}')
# Let's set some reasonable hyperparams
lr = 0.3
epochs = 20
log_period_epoch = 5
print('-' * 50)
for epoch in range(epochs):
    loss, grads = value_and_grad_fn(params)
    # SGD (closer to JAX again, but we'll progressively go towards how stuff is done in Flax)
    params = jax.tree_multimap(lambda p, g: p - lr * g, params, grads)
    if epoch % log_period_epoch == 0:
        print(f'epoch {epoch}, loss = {loss}')
print('-' * 50)
print(f'Learned params = {params}')
print(f'Gt params = {true_params}')
```
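As a sanity check on the loop above, the same gradient-descent-on-MSE recipe can be replicated in plain NumPy and compared against the closed-form least-squares solution (a sketch with made-up synthetic data; the sizes and learning rate only loosely mirror the ones used above):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=(150, 2))
true_W = np.array([[0.7], [-1.3]])
true_b = np.array([0.4])
ys = xs @ true_W + true_b  # noise-free targets for simplicity

W, b, lr = np.zeros((2, 1)), np.zeros(1), 0.3
for _ in range(200):
    err = xs @ W + b - ys              # dL/dpred for the 0.5 * squared-error loss
    W -= lr * xs.T @ err / len(xs)     # gradient w.r.t. the weights
    b -= lr * err.mean(axis=0)         # gradient w.r.t. the bias

# Closed-form solution via least squares on the augmented design matrix [xs | 1]
A = np.hstack([xs, np.ones((len(xs), 1))])
sol, *_ = np.linalg.lstsq(A, ys, rcond=None)
assert np.allclose(np.vstack([W, b.reshape(1, 1)]), sol, atol=1e-3)
```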
Now let's do the same thing but this time with dedicated optimizers!
Enter DeepMind's optax! ❤️🔥
```
opt_sgd = optax.sgd(learning_rate=lr)
opt_state = opt_sgd.init(params) # always the same pattern - handling state externally
print(opt_state)
# todo: exercise - compare Adam's and SGD's states
params = model.init(key, xs) # let's start with fresh params again
for epoch in range(epochs):
    loss, grads = value_and_grad_fn(params)
    updates, opt_state = opt_sgd.update(grads, opt_state)  # arbitrary optim logic!
    params = optax.apply_updates(params, updates)
    if epoch % log_period_epoch == 0:
        print(f'epoch {epoch}, loss = {loss}')
# Note 1: as expected we get the same loss values
# Note 2: we'll later see more concise ways to handle all of these state components (hint: TrainState)
```
In this toy SGD example Optax may not seem that useful but it's very powerful.
You can build arbitrary optimizers with arbitrary hyperparam schedules, chaining, param freezing, etc. You can check the [official docs here](https://optax.readthedocs.io/en/latest/).
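To make the schedule-chaining idea concrete, here is a plain-NumPy sketch of the warmup-then-cosine pattern that the optax example in the next cell builds with `linear_schedule`, `cosine_decay_schedule` and `join_schedules` (all the numbers below are made up for illustration):

```python
import numpy as np

def make_warmup_cosine_schedule(base_lr, warmup_steps, decay_steps):
    """Linear warmup from 0 to base_lr, then cosine decay down to 0."""
    def schedule(step):
        if step < warmup_steps:
            return base_lr * step / warmup_steps
        t = min(step - warmup_steps, decay_steps) / decay_steps  # in [0, 1]
        return base_lr * 0.5 * (1.0 + np.cos(np.pi * t))
    return schedule

lr_fn = make_warmup_cosine_schedule(base_lr=0.1, warmup_steps=5, decay_steps=20)
print(lr_fn(0))   # 0.0 - warmup starts at zero
print(lr_fn(5))   # 0.1 - warmup done, cosine decay starts at base_lr
print(lr_fn(25))  # ~0.0 - fully decayed
```

In optax such a per-step function can be passed directly as the `learning_rate` argument, which is what the cell below does.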
```
#@title Optax Advanced Examples
# This cell won't "compile" (no ml_collections package) and serves just as an example
# Example from Flax (ImageNet example)
# https://github.com/google/flax/blob/main/examples/imagenet/train.py#L88
def create_learning_rate_fn(
        config: ml_collections.ConfigDict,
        base_learning_rate: float,
        steps_per_epoch: int):
    """Create learning rate schedule."""
    warmup_fn = optax.linear_schedule(
        init_value=0., end_value=base_learning_rate,
        transition_steps=config.warmup_epochs * steps_per_epoch)
    cosine_epochs = max(config.num_epochs - config.warmup_epochs, 1)
    cosine_fn = optax.cosine_decay_schedule(
        init_value=base_learning_rate,
        decay_steps=cosine_epochs * steps_per_epoch)
    schedule_fn = optax.join_schedules(
        schedules=[warmup_fn, cosine_fn],
        boundaries=[config.warmup_epochs * steps_per_epoch])
    return schedule_fn

tx = optax.sgd(
    learning_rate=learning_rate_fn,
    momentum=config.momentum,
    nesterov=True,
)
# Example from Haiku (ImageNet example)
# https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet/train.py#L116
def make_optimizer() -> optax.GradientTransformation:
    """SGD with nesterov momentum and a custom lr schedule."""
    return optax.chain(
        optax.trace(
            decay=FLAGS.optimizer_momentum,
            nesterov=FLAGS.optimizer_use_nesterov),
        optax.scale_by_schedule(lr_schedule), optax.scale(-1))
```
Now let's go beyond these extremely simple models!
### Creating custom models ⭐
```
class MLP(nn.Module):
    num_neurons_per_layer: Sequence[int]  # data field (nn.Module is Python's dataclass)

    def setup(self):  # because dataclass is implicitly using the __init__ function... :')
        self.layers = [nn.Dense(n) for n in self.num_neurons_per_layer]

    def __call__(self, x):
        activation = x
        for i, layer in enumerate(self.layers):
            activation = layer(activation)
            if i != len(self.layers) - 1:
                activation = nn.relu(activation)
        return activation
x_key, init_key = random.split(random.PRNGKey(seed))
model = MLP(num_neurons_per_layer=[16, 8, 1]) # define an MLP model
x = random.uniform(x_key, (4,4)) # dummy input
params = model.init(init_key, x) # initialize via init
y = model.apply(params, x) # do a forward pass via apply
print(jax.tree_map(jnp.shape, params))
print(f'Output: {y}')
# todo: exercise - use @nn.compact pattern instead
# todo: check out https://realpython.com/python-data-classes/
```
Great!
Now that we know how to build more complex models let's dive deeper and understand how the 'nn.Dense' module is designed itself.
#### Introducing "param"
```
class MyDenseImp(nn.Module):
    num_neurons: int
    weight_init: Callable = nn.initializers.lecun_normal()
    bias_init: Callable = nn.initializers.zeros

    @nn.compact
    def __call__(self, x):
        weight = self.param('weight',  # parameter name (as it will appear in the FrozenDict)
                            self.weight_init,  # initialization function, RNG passed implicitly through init fn
                            (x.shape[-1], self.num_neurons))  # shape info
        bias = self.param('bias', self.bias_init, (self.num_neurons,))
        return jnp.dot(x, weight) + bias
x_key, init_key = random.split(random.PRNGKey(seed))
model = MyDenseImp(num_neurons=3) # initialize the model
x = random.uniform(x_key, (4,4)) # dummy input
params = model.init(init_key, x) # initialize via init
y = model.apply(params, x) # do a forward pass via apply
print(jax.tree_map(jnp.shape, params))
print(f'Output: {y}')
# todo: exercise - check out the source code:
# https://github.com/google/flax/blob/main/flax/linen/linear.py
# https://github.com/google/jax/blob/main/jax/_src/nn/initializers.py#L150 <- to see why lecun_normal() vs zeros (no brackets)
from inspect import signature
# You can see it expects a PRNG key and it is passed implicitly through the init fn (same for zeros)
print(signature(nn.initializers.lecun_normal()))
```
So far we've only seen **trainable** params.
ML models often have variables that are part of the state but are not optimized via gradient descent.
Let's see how we can handle them using a simple (and contrived) example!
#### Introducing "variable"
*Note on terminology: variable is a broader term and it includes both params (trainable variables) as well as non-trainable vars.*
```
class BiasAdderWithRunningMean(nn.Module):
    decay: float = 0.99

    @nn.compact
    def __call__(self, x):
        is_initialized = self.has_variable('batch_stats', 'ema')
        # 'batch_stats' is not an arbitrary name!
        # Flax uses that name in its implementation of BatchNorm (hard-coded, probably not the best of designs?)
        ema = self.variable('batch_stats', 'ema', lambda shape: jnp.zeros(shape), x.shape[1:])
        # self.param will by default add this variable to the 'params' collection (vs 'batch_stats' above)
        # Another idiosyncrasy: we need to pass a key even though we don't actually use it...
        bias = self.param('bias', lambda key, shape: jnp.zeros(shape), x.shape[1:])
        if is_initialized:
            # self.variable returns a reference, hence .value
            ema.value = self.decay * ema.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
        return x - ema.value + bias
x_key, init_key = random.split(random.PRNGKey(seed))
model = BiasAdderWithRunningMean()
x = random.uniform(x_key, (10,4)) # dummy input
variables = model.init(init_key, x)
print(f'Multiple collections = {variables}') # we can now see a new collection 'batch_stats'
# We have to use mutable since regular params are not modified during the forward
# pass, but these variables are. We can't keep state internally (because JAX) so we have to return it.
y, updated_non_trainable_params = model.apply(variables, x, mutable=['batch_stats'])
print(updated_non_trainable_params)
# Let's see how we could train such model!
def update_step(opt, apply_fn, x, opt_state, params, non_trainable_params):
    def loss_fn(params):
        y, updated_non_trainable_params = apply_fn(
            {'params': params, **non_trainable_params},
            x, mutable=list(non_trainable_params.keys()))
        loss = ((x - y) ** 2).sum()  # not doing anything really, just for the demo purpose
        return loss, updated_non_trainable_params

    (loss, non_trainable_params), grads = jax.value_and_grad(loss_fn, has_aux=True)(params)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return opt_state, params, non_trainable_params  # all of these represent the state - ugly, for now
model = BiasAdderWithRunningMean()
x = jnp.ones((10,4)) # dummy input, using ones because it's easier to see what's going on
variables = model.init(random.PRNGKey(seed), x)
non_trainable_params, params = variables.pop('params')
del variables # delete variables to avoid wasting resources (this pattern is used in the official code)
sgd_opt = optax.sgd(learning_rate=0.1) # originally you'll see them use the 'tx' naming (from opTaX)
opt_state = sgd_opt.init(params)
for _ in range(3):
    # We'll later see how the TrainState abstraction will make this step much more elegant!
    opt_state, params, non_trainable_params = update_step(sgd_opt, model.apply, x, opt_state, params, non_trainable_params)
print(non_trainable_params)
```
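The running-mean update in `BiasAdderWithRunningMean` is a plain exponential moving average; in isolation (NumPy only, toy numbers) you can watch it crawl towards the batch statistic it tracks:

```python
import numpy as np

decay = 0.99
ema = 0.0
batch_means = np.full(500, 5.0)  # pretend every batch has mean 5.0

for m in batch_means:
    ema = decay * ema + (1.0 - decay) * m  # same update rule as in the module above

# Starting from 0 with a constant target, after n steps ema = 5 * (1 - decay**n),
# so it approaches 5.0 as n grows
print(ema)
assert abs(ema - 5.0 * (1 - decay ** 500)) < 1e-9
```

With `decay=0.99` the EMA's "memory" is roughly the last `1 / (1 - decay) = 100` batches.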
Let's go a level up in abstraction again now that we understand params and variables!
Certain layers like BatchNorm will use variables in the background.
Let's see a last example that is conceptually as complicated as it gets when it comes to Flax's idiosyncrasies, and high-level at the same time.
```
class DDNBlock(nn.Module):
    """Dense, dropout + batchnorm combo.

    Contains trainable variables (params), non-trainable variables (batch stats),
    and stochasticity in the forward pass (because of dropout).
    """
    num_neurons: int
    training: bool

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.num_neurons)(x)
        x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)
        x = nn.BatchNorm(use_running_average=not self.training)(x)
        return x
key1, key2, key3, key4 = random.split(random.PRNGKey(seed), 4)
model = DDNBlock(num_neurons=3, training=True)
x = random.uniform(key1, (3,4,4))
# New: because of Dropout we now have to include its unique key - kinda weird, but you get used to it
variables = model.init({'params': key2, 'dropout': key3}, x)
print(variables)
# And same here, everything else remains the same as the previous example
y, non_trainable_params = model.apply(variables, x, rngs={'dropout': key4}, mutable=['batch_stats'])
# Let's now run the model with these variables in "evaluation" mode:
eval_model = DDNBlock(num_neurons=3, training=False)
# Because training=False there is no stochasticity in the forward pass, nor do we update the batch stats
y = eval_model.apply(variables, x)
```
### A fully-fledged CNN on MNIST example in Flax! 💥
Modified the official MNIST example here: https://github.com/google/flax/tree/main/examples/mnist
We'll be using PyTorch dataloading instead of TFDS.
Let's start by defining a model:
```
class CNN(nn.Module):  # lots of hardcoding, but it serves a purpose for a simple demo
    @nn.compact
    def __call__(self, x):
        x = nn.Conv(features=32, kernel_size=(3, 3))(x)
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = nn.Conv(features=64, kernel_size=(3, 3))(x)
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = x.reshape((x.shape[0], -1))  # flatten
        x = nn.Dense(features=256)(x)
        x = nn.relu(x)
        x = nn.Dense(features=10)(x)
        x = nn.log_softmax(x)
        return x
```
Let's add the data loading support in PyTorch!
I'll be reusing code from [tutorial #3](https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_3_JAX_Neural_Network_from_Scratch_Colab.ipynb):
```
def custom_transform(x):
    # A couple of modifications here compared to tutorial #3 since we're using a CNN
    # Input: (28, 28) uint8 [0, 255] torch.Tensor, Output: (28, 28, 1) float32 [0, 1] np array
    return np.expand_dims(np.array(x, dtype=np.float32), axis=2) / 255.

def custom_collate_fn(batch):
    """Provides us with batches of numpy arrays and not PyTorch's tensors."""
    transposed_data = list(zip(*batch))
    labels = np.array(transposed_data[1])
    imgs = np.stack(transposed_data[0])
    return imgs, labels
mnist_img_size = (28, 28, 1)
batch_size = 128
train_dataset = MNIST(root='train_mnist', train=True, download=True, transform=custom_transform)
test_dataset = MNIST(root='test_mnist', train=False, download=True, transform=custom_transform)
train_loader = DataLoader(train_dataset, batch_size, shuffle=True, collate_fn=custom_collate_fn, drop_last=True)
test_loader = DataLoader(test_dataset, batch_size, shuffle=False, collate_fn=custom_collate_fn, drop_last=True)
# optimization - loading the whole dataset into memory
train_images = jnp.array(train_dataset.data)
train_lbls = jnp.array(train_dataset.targets)
# np.expand_dims is to convert shape from (10000, 28, 28) -> (10000, 28, 28, 1)
# We don't have to do this for training images because custom_transform does it for us.
test_images = np.expand_dims(jnp.array(test_dataset.data), axis=3)
test_lbls = jnp.array(test_dataset.targets)
# Visualize a single image
imgs, lbls = next(iter(test_loader))
img = imgs[0].reshape(mnist_img_size)[:, :, 0]
gt_lbl = lbls[0]
print(gt_lbl)
plt.imshow(img); plt.show()
```
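As a quick aside, `custom_collate_fn` is easy to sanity-check without downloading MNIST - feed it a synthetic batch of (image, label) tuples whose shapes match what `custom_transform` produces (the function is repeated here so the sketch is self-contained):

```python
import numpy as np

def custom_collate_fn(batch):
    """Same collate function as above: numpy batches instead of torch tensors."""
    transposed_data = list(zip(*batch))
    labels = np.array(transposed_data[1])
    imgs = np.stack(transposed_data[0])
    return imgs, labels

fake_batch = [(np.zeros((28, 28, 1), dtype=np.float32), i % 10) for i in range(4)]
imgs, labels = custom_collate_fn(fake_batch)
print(imgs.shape, imgs.dtype)  # (4, 28, 28, 1) float32
print(labels)                  # [0 1 2 3]
```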
Great - we have our data pipeline ready and the model architecture defined.
Now let's define core training functions:
```
@jax.jit
def train_step(state, imgs, gt_labels):
    def loss_fn(params):
        logits = CNN().apply({'params': params}, imgs)
        one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)
        loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))
        return loss, logits

    (_, logits), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params)
    state = state.apply_gradients(grads=grads)  # this is the whole update now! concise!
    metrics = compute_metrics(logits=logits, gt_labels=gt_labels)  # duplicating loss calculation but it's a bit cleaner
    return state, metrics

@jax.jit
def eval_step(state, imgs, gt_labels):
    logits = CNN().apply({'params': state.params}, imgs)
    return compute_metrics(logits=logits, gt_labels=gt_labels)

def train_one_epoch(state, dataloader, epoch):
    """Train for 1 epoch on the training set."""
    batch_metrics = []
    for cnt, (imgs, labels) in enumerate(dataloader):
        state, metrics = train_step(state, imgs, labels)
        batch_metrics.append(metrics)

    # Aggregate the metrics
    batch_metrics_np = jax.device_get(batch_metrics)  # pull from the accelerator onto host (CPU)
    epoch_metrics_np = {
        k: np.mean([metrics[k] for metrics in batch_metrics_np])
        for k in batch_metrics_np[0]
    }
    return state, epoch_metrics_np

def evaluate_model(state, test_imgs, test_lbls):
    """Evaluate on the validation set."""
    metrics = eval_step(state, test_imgs, test_lbls)
    metrics = jax.device_get(metrics)  # pull from the accelerator onto host (CPU)
    metrics = jax.tree_map(lambda x: x.item(), metrics)  # np.ndarray -> scalar
    return metrics

# This one will keep things nice and tidy compared to our previous examples
def create_train_state(key, learning_rate, momentum):
    cnn = CNN()
    params = cnn.init(key, jnp.ones([1, *mnist_img_size]))['params']
    sgd_opt = optax.sgd(learning_rate, momentum)
    # TrainState is a simple built-in wrapper class that makes things a bit cleaner
    return train_state.TrainState.create(apply_fn=cnn.apply, params=params, tx=sgd_opt)

def compute_metrics(*, logits, gt_labels):
    one_hot_gt_labels = jax.nn.one_hot(gt_labels, num_classes=10)
    loss = -jnp.mean(jnp.sum(one_hot_gt_labels * logits, axis=-1))
    accuracy = jnp.mean(jnp.argmax(logits, -1) == gt_labels)
    metrics = {
        'loss': loss,
        'accuracy': accuracy,
    }
    return metrics
# Finally let's define the high-level training/val loops
seed = 0  # needless to say, these should live in a config or be defined as flags
learning_rate = 0.1
momentum = 0.9
num_epochs = 2
batch_size = 32
train_state = create_train_state(jax.random.PRNGKey(seed), learning_rate, momentum)
for epoch in range(1, num_epochs + 1):
    train_state, train_metrics = train_one_epoch(train_state, train_loader, epoch)
    print(f"Train epoch: {epoch}, loss: {train_metrics['loss']}, accuracy: {train_metrics['accuracy'] * 100}")

    test_metrics = evaluate_model(train_state, test_images, test_lbls)
    print(f"Test epoch: {epoch}, loss: {test_metrics['loss']}, accuracy: {test_metrics['accuracy'] * 100}")
# todo: exercise - how would we go about adding dropout? What about BatchNorm? What would have to change?
```
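The loss inside `compute_metrics` (and `train_step`) is ordinary cross-entropy written against log-softmax outputs. Stripped of JAX, the same arithmetic looks like this (the logits are made up):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

logits = log_softmax(np.array([[2.0, 0.5, 0.1],
                               [0.2, 3.0, 0.4]]))
gt_labels = np.array([0, 1])
one_hot = np.eye(3)[gt_labels]

# Same two formulas as in compute_metrics:
loss = -np.mean(np.sum(one_hot * logits, axis=-1))
accuracy = np.mean(np.argmax(logits, -1) == gt_labels)

print(loss)      # the mean negative log-likelihood of the correct classes
print(accuracy)  # 1.0 - both rows put the most mass on the ground-truth class
```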
Bonus point: a walk-through the "non-toy", distributed ImageNet CNN training example.
Head over to https://github.com/google/flax/tree/main/examples/imagenet
You'll keep seeing the same pattern/structure in all official Flax examples.
### Further learning resources 📚
Aside from the [official docs](https://flax.readthedocs.io/en/latest/) and [examples](https://github.com/google/flax/tree/main/examples) I found [HuggingFace's Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax) and the resources from their ["community week"](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects) useful as well.
Finally, [source code](https://github.com/google/flax) is also your friend, as the library is still evolving.
### Connect with me ❤️
Last but not least, I regularly post AI-related stuff (paper summaries, AI news, etc.) on my Twitter/LinkedIn. We also have an ever-growing Discord community (1600+ members at the time of writing). If you care about any of these, I encourage you to connect!
Social: <br/>
💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ <br/>
🐦 Twitter - https://twitter.com/gordic_aleksa <br/>
👨👩👧👦 Discord - https://discord.gg/peBrCpheKE <br/>
🙏 Patreon - https://www.patreon.com/theaiepiphany <br/>
Content: <br/>
📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ <br/>
📚 Medium - https://gordicaleksa.medium.com/ <br/>
💻 GitHub - https://github.com/gordicaleksa <br/>
📢 AI Newsletter - https://aiepiphany.substack.com/ <br/>
| github_jupyter |
```
import numpy as np
import scipy
from scipy.linalg import expm
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
import pandas as pd
def SSDP_DDoS(training_size, test_size, n, PLOT_DATA=True):
    class_labels = [r'BENIGN', r'DrDoS_SSDP']
    data = pd.read_csv('DrDoS_SSDP_features_removed.csv', skiprows=[i for i in range(1,141550)], skipfooter=141547, engine="python")
    x = StandardScaler().fit_transform(np.array(data.drop(columns=['Label'])))
    y = np.array(data['Label'].astype('category').cat.codes.astype(int))
    X_train, X_test, Y_train, Y_test = train_test_split(x, y, stratify=y, test_size=0.3, random_state=109)

    pca = PCA(n_components=n).fit(X_train)
    X_train = pca.transform(X_train)
    X_test = pca.transform(X_test)

    samples = np.append(X_train, X_test, axis=0)
    minmax_scale = MinMaxScaler((-1, 1)).fit(samples)
    X_train = minmax_scale.transform(X_train)
    X_test = minmax_scale.transform(X_test)

    training_input = {key: (X_train[Y_train == k, :])[:training_size] for k, key in enumerate(class_labels)}
    test_input = {key: (X_test[Y_test == k, :])[:test_size] for k, key in enumerate(class_labels)}

    if PLOT_DATA:
        for k in range(0, 2):
            x_axis_data = X_train[Y_train == k, 0][:training_size]
            y_axis_data = X_train[Y_train == k, 1][:training_size]
            label = 'DDoS' if k == 1 else 'Benign'
            plt.scatter(x_axis_data, y_axis_data, label=label)
        plt.title("DDoS_SSDP Dataset (Dimensionality Reduced With PCA)")
        plt.legend()
        plt.show()

    return X_train, training_input, test_input, class_labels
from qiskit.aqua.utils import split_dataset_to_data_and_labels
n = 2 # How many features to use (dimensionality)
training_dataset_size = 1033
testing_dataset_size = 443
sample_Total, training_input, test_input, class_labels = SSDP_DDoS(training_dataset_size, testing_dataset_size, n)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
%load_ext memory_profiler
from qiskit import BasicAer
from qiskit.ml.datasets import *
from qiskit.circuit.library import ZZFeatureMap
from qiskit.aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import QSVM
seed = 10598
feature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement='linear')
qsvm = QSVM(feature_map, training_input, test_input, datapoints[0])
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
%%time
%memit result2 = qsvm.run(quantum_instance)
print("ground truth: {}".format(datapoints[1]))
print("prediction: {}".format(result2['predicted_labels']))
print("predicted class: {}".format(result2['predicted_classes']))
print("accuracy: {}".format(result2['testing_accuracy']))
from sklearn.metrics import classification_report, confusion_matrix, recall_score
from sklearn.metrics import f1_score, accuracy_score, precision_score, make_scorer
#Metrics
classification = classification_report(datapoints[1], result2['predicted_labels'])
confusion = confusion_matrix(datapoints[1], result2['predicted_labels'])
# Accuracy
accuracy = round(accuracy_score(datapoints[1], result2['predicted_labels']),5)
# Recall
recall = round(recall_score(datapoints[1], result2['predicted_labels'], average='macro')*100,5)
# Precision
precision = round(precision_score(datapoints[1], result2['predicted_labels'], average='weighted')*100,5)
# F1
f1 = round(f1_score(datapoints[1], result2['predicted_labels'], average='weighted')*100,5)
print(accuracy)
print(recall)
print(precision)
print(f1)
print(1-accuracy)
```
| github_jupyter |
```
# Load dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from scipy import stats
from statsmodels.graphics.gofplots import qqplot
# Load the prepared dataset
dataset = pd.read_csv('prepared_data.csv')
dataset.head(10)
X = dataset.drop(columns=['age']).values
Y = dataset['age'].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)
# Set the parameters of the neural network architecture.
input_layer_size = 12
first_hidden_layer_size = 10
second_hidden_layer_size = 15
output_layer_size = 1
epochs_number = 200
batch_size = 16
# Create a feedforward neural network; for now it is empty, i.e. it contains no layers or neurons.
model = Sequential()
# Input layer and first hidden layer, activation function: ReLU
model.add(Dense(first_hidden_layer_size, input_dim=input_layer_size, activation='relu'))
model.add(Dense(second_hidden_layer_size, activation='relu'))
model.add(Dense(output_layer_size, activation='linear'))
model.summary()
# Configure the neural network.
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_absolute_error', 'mean_squared_error'])
# Train the neural network.
history = model.fit(X_train, Y_train, validation_data = (X_test,Y_test), epochs=epochs_number, batch_size=batch_size)
# Plot the mean absolute error against the training epoch number.
plt.plot(history.history['mean_absolute_error'])
plt.plot(history.history['val_mean_absolute_error'])
plt.title('Model MAE')
plt.ylabel('MAE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper right')
plt.show()
# Plot the mean squared error, i.e. the loss value, against the training epoch number.
plt.plot(history.history['mean_squared_error'])
plt.plot(history.history['val_mean_squared_error'])
plt.title('Model MSE')
plt.ylabel('MSE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper right')
plt.show()
# Predictions of the trained network on the training set:
Y_pred_train = model.predict(X_train).flatten()
# Compare the reference values Y_train with the trained network's output Y_pred_train for the training set.
plt.plot(Y_train, Y_pred_train, 'bo')
plt.plot([-2,2], [-2,2], 'r-')
plt.title('Train vs Pred_train')
plt.ylabel('Pred_train')
plt.xlabel('Train')
plt.show()
# Plot the raw Y_train and Y_pred_train values.
plt.plot(Y_train)
plt.plot(Y_pred_train)
plt.show()
# Predictions of the trained network on the test set:
Y_pred_test = model.predict(X_test).flatten()
# Compare the reference values Y_test with the trained network's output Y_pred_test for the test set.
plt.plot(Y_test, Y_pred_test, 'bo')
plt.plot([-2,2], [-2,2], 'r-')
plt.title('Test vs Pred_test')
plt.ylabel('Pred_test')
plt.xlabel('Test')
plt.show()
# Plot the raw Y_test and Y_pred_test values.
plt.plot(Y_test)
plt.plot(Y_pred_test)
plt.show()
# Compare the root mean squared errors (loss values) for the training and test sets.
print(np.sqrt(mean_squared_error(Y_train, Y_pred_train)))
print(np.sqrt(mean_squared_error(Y_test, Y_pred_test)))
# Check whether the residuals (Y_train - Y_pred_train) and (Y_test - Y_pred_test) are normally distributed
k_train, p_train = stats.shapiro(Y_train - Y_pred_train)
print('Train k = {0}, p = {1}'.format(k_train, p_train))
k_test, p_test = stats.shapiro(Y_test - Y_pred_test)
print('Test k = {0}, p = {1}'.format(k_test, p_test))
# For the full dataset (Y, Y_pred), apply two statistical tests: shapiro and normaltest.
Y_pred = model.predict(X).flatten()
k_s, p_s = stats.shapiro(Y - Y_pred)
print('k_s = {0}, p_s = {1}'.format(k_s, p_s))
k_n, p_n = stats.normaltest(Y - Y_pred)
print('k_n = {0}, p_n = {1}'.format(k_n, p_n))
# And the same check visually, using quantile-quantile plots.
# Training set
qqplot(Y_train - Y_pred_train)
plt.show()
# Test set
qqplot(Y_test - Y_pred_test)
plt.show()
# Full dataset
qqplot(Y - Y_pred)
plt.show()
plt.hist(Y - Y_pred, bins=50)
plt.show()
model.save('SimpleNeuralNetwork.h5')
```
| github_jupyter |
```
#Importing libraries
import tensorflow as tf
from tensorflow import keras
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
#Setting the visualization
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
mpl.style.use( 'ggplot' )
plt.style.use('fivethirtyeight')
sns.set(context="notebook", palette="dark", style = 'whitegrid' , color_codes=True)
#Load Fashion MNIST data
(X_train_orig, y_train_orig), (X_test_orig, y_test_orig) = keras.datasets.fashion_mnist.load_data()
#Labels
class_names = ['T-shirt/top', 'Trouser', 'Pullover',
'Dress', 'Coat', 'Sandal', 'Shirt',
'Sneaker', 'Bag', 'Ankle boot']
#See the dimensionality of the arrays
print('Dimensionality of the arrays:')
print('X_train_orig:', X_train_orig.shape)
print('y_train_orig:', y_train_orig.shape)
print('X_test_orig:', X_test_orig.shape)
print('y_test_orig:', y_test_orig.shape)
#See a 5x5 slice of the first image
print('\n\nImage converted to array:\n', X_train_orig[0][:5, :5])
#Check unique values by class (training)
print('y_train_orig:')
np.unique(y_train_orig, return_counts=True)
#Check unique values by classes (test)
print('y_test_orig:')
np.unique(y_test_orig, return_counts=True)
#See some sample images
plt.figure(figsize=(6, 6))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(X_train_orig[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[y_train_orig[i]])
plt.tight_layout()
#Create a lambda function that converts pixels to float32 and normalizes them
f = lambda x: (x / 255.0).astype('float32')
#Apply the lambda function to the X_train and X_test datasets
X_train = f(X_train_orig)
X_test = f(X_test_orig)
#Reshape images to add a channel dimension
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
print('X_train:{}'.format(X_train.shape))
print('X_test:\t{}'.format(X_test.shape))
#One-Hot Encoding
example = np.array([1, 3, 4, 2, 0])
print('Example before Encoding:')
print(example)
example_encoded = keras.utils.to_categorical(example)
print('\nExample after Encoding')
print(example_encoded)
y_train = keras.utils.to_categorical(y_train_orig)
y_test = keras.utils.to_categorical(y_test_orig)
#First CONV => RELU => CONV => RELU => POOL layer set
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(32, (3, 3), padding="same", activation='relu'))
model.add(keras.layers.BatchNormalization(axis=1))
model.add(keras.layers.Conv2D(32, (3, 3), padding="same", activation='relu'))
model.add(keras.layers.BatchNormalization(axis=1))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.25))
#Second CONV => RELU => CONV => RELU => POOL layer set
model.add(keras.layers.Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(keras.layers.BatchNormalization(axis=1))
model.add(keras.layers.Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(keras.layers.BatchNormalization(axis=1))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.25))
#First (and only) set of FC => RELU layers
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(512, activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dropout(0.5))
# Softmax classifier
model.add(keras.layers.Dense(10, activation='softmax'))
#Model.compile(optimizer='adam', loss="sparse_categorical_crossentropy", metrics=['accuracy'])
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
#Train the model and save the information in history
history = model.fit(X_train, y_train, epochs=10, validation_split=0.3)
#Evaluate the model on the test set
y_hat = model.predict(X_test)
y_hat_classes = np.argmax(y_hat, axis=1)
print(classification_report(y_test_orig, y_hat_classes, target_names=class_names))
#Plot optimization history
pd.DataFrame(history.history).plot()
plt.show()
score = model.evaluate(X_test, y_test)
#Check model performance
print('Loss: {:.4f}'.format(score[0]))
print('Accuracy: {:.4f}'.format(score[1]))
```
| github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
!pip3 install glove_python
import os
os.chdir('/content/drive/MyDrive/sharif/DeepLearning/ipython(guide)')
import numpy as np
import codecs
import os
import random
import pandas
from keras import backend as K
from keras.models import Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Dense, Lambda, Permute, Dropout
from keras.layers import Conv2D, MaxPooling1D
from keras.optimizers import SGD
import ast
from glove import Glove,Corpus
import re
from sklearn.preprocessing import MultiLabelBinarizer
limit_number = 750
data = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv',index_col=0,converters={'body': eval})
data = data.dropna().reset_index(drop=True)
X = data["body"].values.tolist()
y = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv')
labels = []
tag=[]
for item in y['tag']:
    cleaned = [i for i in re.sub('\"|\[|\]|\'| |=', '', item.lower()).split(",") if i != '' and i != ' ']
    labels += cleaned
    tag.append(cleaned)
labels = list(set(labels))
mlb = MultiLabelBinarizer()
Y=mlb.fit_transform(tag)
len(labels)
data.head()
sentence_maxlen = max(map(len, X))
print('sentence maxlen', sentence_maxlen)
# vocab = []
# for d in X:
# for w in d:
# if w not in vocab: vocab.append(w)
# vocab = sorted(vocab)
# vocab_size = len(vocab)
# print('vocab examples:', vocab[:10])
# pandas.DataFrame(vocab).to_csv("vocab.csv",header=None)
import re
freq_dist = pandas.read_csv('../Data/FreqDist_sorted_'+str(limit_number)+'.csv',index_col=False)
vocab = []
for item in freq_dist["word"]:
    try:
        # strip spaces and zero-width non-joiner (U+200C) characters
        word = item.replace(" ", "").replace("\u200c", "")
        if word != ' ' and word is not None and word != '':
            vocab.append(word)
    except AttributeError:
        # non-string entries (e.g. NaN) are skipped
        pass
len(vocab) , len(freq_dist)
vocab = sorted(vocab)
vocab_size = len(vocab)
print('vocab size', len(vocab))
w2i = {w:i for i,w in enumerate(vocab)}
# i2w = {i:w for i,w in enumerate(vocab)}
def vectorize(data, sentence_maxlen, w2i):
    vec_data = []
    for d in data:
        try:
            vec = [w2i[w] for w in d if w in w2i]
            pad_len = max(0, sentence_maxlen - len(vec))
            vec += [0] * pad_len
            vec_data.append(vec)
            # print('-----------------------')
        except:
            print(type(d), d == None)
    vec_data = np.array(vec_data)
    return vec_data
vecX = vectorize(X, sentence_maxlen, w2i)
vecY=Y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(vecX, vecY, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25)
print('train: ', X_train.shape , '\ntest: ', X_test.shape , '\nval: ', X_val.shape)
corpus = Corpus()
#Training the corpus to generate the co occurence matrix which is used in GloVe
corpus.fit(X, window=10)
glove = Glove(no_components=5, learning_rate=0.05)
glove.fit(corpus.matrix, epochs=30, no_threads=4, verbose=True)
glove.add_dictionary(corpus.dictionary)
glove.save('glove.model')
from gensim.scripts.glove2word2vec import glove2word2vec
glove2word2vec(glove_input_file='glove.model', word2vec_output_file="gensim_glove_vectors.txt")
from gensim.models.keyedvectors import KeyedVectors
glove_embd_w = KeyedVectors.load_word2vec_format("gensim_glove_vectors.txt", binary=False)
# def load_glove_weights(glove_dir, embd_dim, vocab_size, word_index):
# embeddings_index = {}
# f = open(os.path.join(glove_dir, 'glove.6B.' + str(embd_dim) + 'd.txt'))
# for line in f:
# values = line.split()
# word = values[0]
# coefs = np.asarray(values[1:], dtype='float32')
# embeddings_index[word] = coefs
# f.close()
# print('Found %s word vectors.' % len(embeddings_index))
# embedding_matrix = np.zeros((vocab_size, embd_dim))
# print('embed_matrix.shape', embedding_matrix.shape)
# for word, i in word_index.items():
# embedding_vector = embeddings_index.get(word)
# if embedding_vector is not None:
# # words not found in embedding index will be all-zeros.
# embedding_matrix[i] = embedding_vector
# return embedding_matrix
# embd_dim = 300
# glove_embd_w = load_glove_weights('./dataset', embd_dim, vocab_size, w2i)
###########################
# import gensim
# embd_dim = 300
# embed_model = gensim.models.Word2Vec(X, size=embd_dim, window=5, min_count=5)
# embed_model.save('word2vec_model')
# embed_model=gensim.models.Word2Vec.load('word2vec_model')
############################
embd_dim = glove.no_components  # must match the dimensionality the GloVe model was trained with
glove_embd_w = np.zeros((vocab_size, embd_dim))
for word, i in w2i.items():
    # words not found in the GloVe dictionary stay all-zeros
    if word in glove.dictionary:
        glove_embd_w[i] = glove.word_vectors[glove.dictionary[word]]
def Net(vocab_size, embd_size, sentence_maxlen, glove_embd_w):
    sentence = Input((sentence_maxlen,), name='SentenceInput')
    # embedding
    embd_layer = Embedding(input_dim=vocab_size,
                           output_dim=embd_size,
                           weights=[glove_embd_w],
                           trainable=False,
                           name='shared_embd')
    embd_sentence = embd_layer(sentence)
    embd_sentence = Permute((2, 1))(embd_sentence)
    embd_sentence = Lambda(lambda x: K.expand_dims(x, -1))(embd_sentence)
    # cnn
    cnn = Conv2D(1,
                 kernel_size=(3, sentence_maxlen),
                 activation='relu')(embd_sentence)
    cnn = Lambda(lambda x: K.sum(x, axis=3))(cnn)
    cnn = MaxPooling1D(3)(cnn)
    cnn = Lambda(lambda x: K.sum(x, axis=2))(cnn)
    out = Dense(len(labels), activation='sigmoid')(cnn)
    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    model = Model(inputs=sentence, outputs=out, name='sentence_classification')
    model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=["accuracy", "binary_accuracy",
                                                                      "categorical_accuracy"])
    return model
model = Net(vocab_size, embd_dim, sentence_maxlen, glove_embd_w)
print(model.summary())
model.fit(X_train, y_train,
          batch_size=32,
          epochs=5,
          validation_data=(X_val, y_val)
          )
model.save('cnn_model_'+str(limit_number)+'_accuracy_categoricalaccuracy_.h5')
# from keras.models import load_model
# model = load_model('cnn_model.h5')
```
Evaluation
```
pred=model.predict(X_test)
# For evaluation: if the probability >= 0.5, the sample is assigned to that class.
print(pred[0])  # example
y_pred = (pred >= 0.5).astype(int)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
```
| github_jupyter |
(code-advcd-best-practice)=
# Tools for Better Coding
## Introduction
This chapter covers the tools that will help you to write better code. This includes practical topics such as debugging code, logging, linting, and the magic of auto-formatting.
As ever, you may need to `conda install packagename` or `pip install packagename` on the terminal before being able to use some of the packages that are featured.
## Auto-magically improving your code
In the previous chapter, we met the idea of *code style*: even for code that runs, *how* you write it matters for readability. (And it goes without saying that you don't want bugs in your code that stop it running at all.) It is possible to catch some errors, to flag style issues, and even to re-format code to comply with a code style automatically. In this section, we'll see how to use tools to perform these functions automatically.
### Linting
Linters are tools that analyse code for programmatic and stylistic errors, assuming you have declared a style. A linting tool flags any potential errors and deviations from style before you even run it. When you run a linter, you get a report of what line the issue is on and why it has been raised. They are supposedly named after the lint trap in a clothes dryer because of the way they catch small errors that could have big effects.
Some of the most popular linters in Python are [**flake8**](https://flake8.pycqa.org/en/latest/), [**pycodestyle**](https://pycodestyle.pycqa.org/en/latest/intro.html), and [**pylint**](https://www.pylint.org/).
Let's see an example of running a linter. VS Code has direct integration with a range of linters. To get going, use `⇧⌘P` (Mac) and then type 'Python Select Linter'. In the example below, we'll use **flake8** (and **pylance**, another VS Code extension). Let's pretend we have a script, `test.py`, containing
```python
list_defn = [1,5, 6,
7]

def this_is_a_func():
   print('hello')

print(X)

import numpy as np
```
To see the linting report, press <kbd>^</kbd> + <kbd>\`</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>\`</kbd> (otherwise) and navigate to the 'Problems' tab. We get a whole load of error messages about this script; here are a few:
- ⓧ missing whitespace after ',' flake8(E231) 1, 15
- ⓧ continuation line under-indented for visual indent flake8(E128) 2, 1
- ⓧ expected 2 blank lines, found 1 flake8(E302) 4, 1
- ⓧ indentation is not a multiple of 4 flake8(E111) 5, 4
- ⓧ undefined name 'X' flake8(F821) 7, 7
- ⓧ module level import not at top of file flake8(E402) 9, 1
- ⓧ 'numpy as np' imported but unused flake8(F401) 9, 1
- ⚠ "X" is not defined Pylance(reportUndefinedVariable) 7, 7
- ⚠ no newline at end of file flake8(W292) 78, 338
Each message is a warning or error that says what the problem is (for example, missing whitespace after ','), which tool is reporting it (mostly flake8 here), the name of the rule that has been broken (E231), and the line and column position (1, 15). Very helpfully, we get an undefined name message for variable `X`; this is especially handy because it would cause an error on execution otherwise. The same goes for the indentation message (indentation matters!). You can customise your [linting settings](https://code.visualstudio.com/docs/python/linting) in VS Code too.
Although the automatic linting offered by an IDE is very convenient, it's not the only way to use linting tools. You can also run them from the command line. For example, for **flake8**, the command is `flake8 test.py`.
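For reference, here is one version of `test.py` with those messages addressed (a sketch: `X` was undefined in the original, so we give it an illustrative definition here):

```python
import numpy as np

list_defn = [1, 5, 6,
             7]


def this_is_a_func():
    print('hello')


X = np.array(list_defn)
print(X)
```

Running `flake8` on this version should produce no output, which is how the linter signals a clean bill of health.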
### Formatting
It's great to find out all the ways in which you are failing with respect to code style from a linter but wouldn't it be *even* better if you could fix those style issues automatically? The answer is clearly yes! This is where formatters come in; they can take valid code and forcibly apply a code style to them. This is really handy in practice for all kinds of reasons.
The most popular code formatters in Python are probably: [**yapf**](https://github.com/google/yapf), 'yet another Python formatter', from Google; [**autopep8**](https://github.com/hhatto/autopep8), which applies PEP8 to your code; and [**black**](https://black.readthedocs.io/en/stable/), the 'uncompromising formatter' that is very opinionated ("any colour, as long as it's black").
There are two ways to use formatters, line-by-line (though **black** doesn't work in this mode) or on an entire script at once. VS Code offers an integration with formatters. To select a formatter in VS Code, bring up the settings using <kbd>⌘</kbd> + <kbd>,</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>,</kbd> (otherwise) and type 'python formatting provider' and you can choose from autopep8, black, and yapf.
If you choose **autopep8** and then open a script you can format a *selection* of code by pressing <kbd>⌘</kbd> + <kbd>k</kbd>, <kbd>⌘</kbd> + <kbd>f</kbd> (Mac) or <kbd>ctrl</kbd> + <kbd>k</kbd>, <kbd>ctrl</kbd> + <kbd>f</kbd> (otherwise). They can also (and only, in the case of **black**) be used from the command line. For instance, to use **black**, the command is `black test.py`, assuming you have it installed.
Let's see an example of a poorly styled script and see what happens when we select all lines and use <kbd>ctrl</kbd> + <kbd>k</kbd>, <kbd>ctrl</kbd> + <kbd>f</kbd> to auto format with **autopep8**. The contents of `test.py` before formatting are:
```python
def very_important_function(y_val,debug = False, keyword_arg=0, another_arg =2):
    X = np.linspace(0,10,5)
    return X+ y_val +keyword_arg
very_important_function(2)
list_defn = [1,
2,
3,
5,
6,
7]
import numpy as np
```
and, after running the auto-formatting command,
```python
import numpy as np


def very_important_function(y_val, debug=False, keyword_arg=0, another_arg=2):
    X = np.linspace(0, 10, 5)
    return X + y_val + keyword_arg


very_important_function(2)

list_defn = [1,
             2,
             3,
             5,
             6,
             7]
```
So what did the formatter do? Many things. It moved the import to the top, put two blank lines after the function definition, removed whitespace around keyword arguments, added a new line at the end, and fixed some of the indentation. The different formatters have different strengths and weaknesses; for example, **black** is not so good at putting imports in the right place but excels at splitting up troublesome wide lines. If you need a formatter that deals specifically with module imports, check out [**isort**](https://pycqa.github.io/isort/).
Apart from taking the pressure off you to always be thinking about code style, formatters can be useful when working collaboratively too. For some open source packages, maintainers ask that new code or changed code be run through a particular formatter if it is to be incorporated into the main branch. This helps ensure the code style is consistent regardless of who is writing it. Running the code formatter can even be automated to happen every time someone *commits* some code to a shared code repository too, using something called a *pre-commit hook*.
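Running a formatter on each commit is typically configured with the [**pre-commit**](https://pre-commit.com/) framework. As a sketch (the `rev` tag below is illustrative; pin it to whichever released version you want), a minimal `.pre-commit-config.yaml` that runs **black** might look like:

```yaml
# .pre-commit-config.yaml (illustrative sketch)
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2  # pin to a released tag
    hooks:
      - id: black
```

After `pip install pre-commit` and `pre-commit install`, the hook runs **black** on the staged files every time you commit.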
There is a package that can run **Black** on Jupyter Notebooks too: [**black-nb**](https://pypi.org/project/black-nb/).
## Debugging code
Computers are *very* literal, so literal that unless you're perfectly precise about what you want, they will end up doing something different. When that happens, one of the most difficult issues in programming is to understand *why* the code isn't doing what you expected. When the code doesn't do what we expect, it's called a bug.
Bugs could be fundamental issues with the code you're using (in fact, the term originated because a moth caused a problem in an early computer) and, if you find one of these, you should file an issue with the maintainers of the code. However, what's much more likely is that the instructions you gave aren't quite what is needed to produce the outcome that you want. And, in this case, you might need to *debug* the code: to find out which part of it isn't doing what you expect.
Even with a small code base, it can be tricky to track down where the bug is: but don't fear, there are tools on hand to help you find where the bug is.
### Print statements
The simplest, and I'm afraid to say the most common, way to debug code is to plonk `print` statements in the code. Let's take a common example in which we perform some simple array operations, here multiplying an array and then summing it with another array:
```python
import numpy as np


def array_operations(in_arr_one, in_arr_two):
    out_arr = in_arr_one*1.5
    out_arr = out_arr + in_arr_two
    return out_arr


in_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])
in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])

result = array_operations(in_vals_one, in_vals_two)
result
```
```python
---------------------------------------------------------------------------
UFuncTypeError Traceback (most recent call last)
<ipython-input-1-166160824d19> in <module>
11 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])
12
---> 13 result = array_operations(in_vals_one, in_vals_two)
14 result
<ipython-input-1-166160824d19> in array_operations(in_arr_one, in_arr_two)
3
4 def array_operations(in_arr_one, in_arr_two):
----> 5 out_arr = in_arr_one*1.5
6 out_arr = out_arr + in_arr_two
7 return out_arr
UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
```
Oh no! We've got a `UFuncTypeError` here, perhaps not the most illuminating error message we've ever seen. We'd like to know what's going wrong here. The `Traceback` did give us a hint about where the issue occurred though; it happens in the multiplication line of the function we wrote.
To debug the error with print statements, we might re-run the code like this:
```python
def array_operations(in_arr_one, in_arr_two):
    print(f'in_arr_one is {in_arr_one}')
    out_arr = in_arr_one*1.5
    out_arr = out_arr + in_arr_two
    return out_arr


in_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])
in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])

result = array_operations(in_vals_one, in_vals_two)
result
```
```
in_arr_one is ['3' '2' '5' '16' '7' '8' '9' '22']
```
```python
---------------------------------------------------------------------------
UFuncTypeError Traceback (most recent call last)
<ipython-input-2-6a04719bc0ff> in <module>
9 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])
10
---> 11 result = array_operations(in_vals_one, in_vals_two)
12 result
<ipython-input-2-6a04719bc0ff> in array_operations(in_arr_one, in_arr_two)
1 def array_operations(in_arr_one, in_arr_two):
2 print(f'in_arr_one is {in_arr_one}')
----> 3 out_arr = in_arr_one*1.5
4 out_arr = out_arr + in_arr_two
5 return out_arr
UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
```
What can we tell from the values of `in_arr_one` that are now being printed? Well, they seem to have quote marks around them and what that means is that they're strings, *not* floating point numbers or integers! Multiplying a string by 1.5 doesn't make sense here, so that's our error. If we did this, we might then trace the origin of that array back to find out where it was defined and see that instead of `np.array([3, 2, 5, 16, 7, 8, 9, 22])` being declared, we have `np.array([3, 2, 5, 16, '7', 8, 9, 22])` instead and `numpy` decides to cast the whole array as a string to ensure consistency.
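As a quick check of this casting behaviour, we can inspect the `dtype` that **numpy** infers (a minimal sketch; the exact width in the string dtype, e.g. `<U21`, can vary):

```python
import numpy as np

# One string element is enough to make numpy store the whole array as strings
mixed = np.array([3, 2, 5, 16, '7', 8, 9, 22])
print(mixed.dtype.kind)  # 'U', i.e. a Unicode string dtype such as <U21

# With the quote marks removed, numpy infers a numeric dtype instead
clean = np.array([3, 2, 5, 16, 7, 8, 9, 22])
print(clean.dtype.kind)  # 'i', i.e. an integer dtype
```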
Let's fix that problem by turning `'7'` into `7` and run it again:
```python
def array_operations(in_arr_one, in_arr_two):
    out_arr = in_arr_one*1.5
    out_arr = out_arr + in_arr_two
    return out_arr


in_vals_one = np.array([3, 2, 5, 16, 7, 8, 9, 22])
in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])

result = array_operations(in_vals_one, in_vals_two)
result
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-ebd3efde9b3e> in <module>
8 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])
9
---> 10 result = array_operations(in_vals_one, in_vals_two)
11 result
<ipython-input-3-ebd3efde9b3e> in array_operations(in_arr_one, in_arr_two)
1 def array_operations(in_arr_one, in_arr_two):
2 out_arr = in_arr_one*1.5
----> 3 out_arr = out_arr + in_arr_two
4 return out_arr
5
ValueError: operands could not be broadcast together with shapes (8,) (7,)
```
Still not working! But we've moved on to a different error now. We can still use a print statement to debug this one, which seems to be related to the shapes of variables passed into the function:
```python
def array_operations(in_arr_one, in_arr_two):
    print(f'in_arr_one shape is {in_arr_one.shape}')
    out_arr = in_arr_one*1.5
    print(f'intermediate out_arr shape is {out_arr.shape}')
    print(f'in_arr_two shape is {in_arr_two.shape}')
    out_arr = out_arr + in_arr_two
    return out_arr


in_vals_one = np.array([3, 2, 5, 16, 7, 8, 9, 22])
in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])

result = array_operations(in_vals_one, in_vals_two)
result
```
```
in_arr_one shape is (8,)
intermediate out_arr shape is (8,)
in_arr_two shape is (7,)
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-4961f476c7eb> in <module>
11 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])
12
---> 13 result = array_operations(in_vals_one, in_vals_two)
14 result
<ipython-input-4-4961f476c7eb> in array_operations(in_arr_one, in_arr_two)
4 print(f'intermediate out_arr shape is {out_arr.shape}')
5 print(f'in_arr_two shape is {in_arr_two.shape}')
----> 6 out_arr = out_arr + in_arr_two
7 return out_arr
8
ValueError: operands could not be broadcast together with shapes (8,) (7,)
```
The print statement now tells us the shapes of the arrays as we go through the function. We can see that in the line before the `return` statement the two arrays that are being combined using the `+` operator don't have the same shape, so we're effectively adding two vectors from two differently dimensioned vector spaces and, understandably, we are being called out on our nonsense. To fix this problem, we would have to ensure that the input arrays are the same shape (it looks like we may have just missed a value from `in_vals_two`).
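To make the shape requirement concrete, here is a small sketch (the `1` appended below is just an illustrative filler, not the 'true' missing value):

```python
import numpy as np

a = np.array([3, 2, 5, 16, 7, 8, 9, 22]) * 1.5  # shape (8,)
b = np.array([4, 7, 3, 23, 6, 8, 0])            # shape (7,)

try:
    a + b
except ValueError:
    print('shapes', a.shape, 'and', b.shape, 'cannot be broadcast together')

# Once both arrays have the same length, elementwise addition works
b_fixed = np.append(b, 1)  # shape (8,); the 1 is only a placeholder
result = a + b_fixed
print(result.shape)  # (8,)
```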
`print` statements are great for a quick bit of debugging and you are likely to want to use them more frequently than any other debugging tool. However, for complex, nested code debugging, they aren't always very efficient and you will sometimes feel like you are playing battleships, continually refining where the print statements should go until you have pinpointed the actual problem, so they're far from perfect. Fortunately, there are other tools in the debugging toolbox...
### Icecream and better print statements
Typing `print` statements with arguments that help you debug code can become tedious. There are better ways to work, which we'll come to, but we must also recognise that `print` is used widely in practice. So what if we had a function that was as easy to use as `print` but better geared toward debugging? Well, there is one: it's called [**icecream**](https://github.com/gruns/icecream), and it's available in most major languages, including Python, Dart, Rust, JavaScript, C++, PHP, Go, Ruby, and Java.
Let's take an example from earlier in this chapter, where we used a `print` statement to display the contents of `in_arr_one` in advance of the line that caused an error being run. All we will do now is switch out `print(f'in_arr_one is {in_arr_one}')` for `ic(in_arr_one)`.
```python
from icecream import ic

def array_operations(in_arr_one, in_arr_two):
    # Old debug line using `print`
    # print(f'in_arr_one is {in_arr_one}')
    # new debug line:
    ic(in_arr_one)
    out_arr = in_arr_one*1.5
    out_arr = out_arr + in_arr_two
    return out_arr


in_vals_one = np.array([3, 2, 5, 16, '7', 8, 9, 22])
in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])

array_operations(in_vals_one, in_vals_two)
```
```
ic| in_arr_one: array(['3', '2', '5', '16', '7', '8', '9', '22'], dtype='<U21')
---------------------------------------------------------------------------
UFuncTypeError Traceback (most recent call last)
<ipython-input-6-9efd5fc1a1fe> in <module>
14 in_vals_two = np.array([4, 7, 3, 23, 6, 8, 0])
15
---> 16 array_operations(in_vals_one, in_vals_two)
<ipython-input-6-9efd5fc1a1fe> in array_operations(in_arr_one, in_arr_two)
6 # new debug line:
7 ic(in_arr_one)
----> 8 out_arr = in_arr_one*1.5
9 out_arr = out_arr + in_arr_two
10 return out_arr
UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
```
What we get in terms of debugging output is `ic| in_arr_one: array(['3', '2', '5', '16', '7', '8', '9', '22'], dtype='<U21')`, which is quite similar to before apart from three important differences, all of which are advantages:
1. it is easier and quicker to write `ic(in_arr_one)` than `print(f'in_arr_one is {in_arr_one}')`
2. **icecream** automatically picks up the name of the variable, `in_arr_one`, and clearly displays its contents
3. **icecream** shows us that `in_arr_one` is of `type` array and that it has the `dtype` of `U`, which stands for Unicode (i.e. a string). `<U21` just means that all strings in the array are at most 21 characters long.
**icecream** has some other advantages relative to print statements too; for instance, it can tell you which lines were executed in which scripts if you call it without arguments:
```python
def foo():
ic()
print('first')
if 10 < 20:
ic()
print('second')
else:
ic()
print('Never executed')
foo()
```
```
ic| <ipython-input-7-8ced0f8fcf82>:2 in foo() at 00:58:19.962
ic| <ipython-input-7-8ced0f8fcf82>:6 in foo() at 00:58:19.979
first
second
```
And it can wrap assignments rather than living on its own lines:
```python
def half(i):
return ic(i) / 2
a = 6
b = ic(half(a))
```
```
ic| i: 6
ic| half(a): 3.0
```
All in all, if you find yourself using `print` to debug, you might find a one-time import of **icecream** followed by use of `ic` instead both more convenient and more effective.
### **rich** for beautiful debugging
[**rich**](https://github.com/willmcgugan/rich) is much more than just a tool for debugging; it's a way to create beautiful renderings of all objects, both in the terminal and in interactive Python windows. You can use it to build fantastic-looking command line interfaces. Here, we'll see how it can help us debug what value a variable takes *and* what methods can be used on a variable via its `inspect` function.
```
from rich import inspect
my_list = ["foo", "bar"]
inspect(my_list, methods=True)
```
Check out all of the many options for `inspect` using `help(inspect)`. We ran it here with `methods=True`, but there are plenty of other options.
```{admonition} Exercise
Create a dictionary (it doesn't matter what's in it, but you could map the integer 1 to the letter "a"). Then use `inspect` with `methods=True` to find out about all the methods you can call on an object of type dict.
```
### Debugging with the IDE
In this section, we'll learn about how your Integrated Development Environment, or IDE, can aid you with debugging. While we'll talk through the use of Visual Studio Code, which is free, directly supports Python, R, and other languages, and is especially feature-rich, many of the features will be present in other IDEs too and the ideas are fairly general.
To begin debugging using Visual Studio Code, get a script ready, for example `script.py`, that you'd like to debug. If your script has an error in it, a debug run will automatically run into it and stop on the error; alternatively, you can click to the left of the line number in your script to create a *breakpoint* that your code will stop at when in debug mode.
To begin a debug session, click on the play button partially covered by a bug on the left-hand ribbon of the VS Code window. It will bring up a menu. Click 'Run and debug' and select 'Python file'. The debugger will now start running the script you had open. When it reaches an error or a breakpoint, it will stop.
Why is this useful? Once the code stops, you can hover over any variables and see what's 'inside' them, which is useful for working out what's going on. Remember, in the examples above, we only saw variables that we asked for. Using the debugger, we can hover over any variable we're interested in without having to decide ahead of time! We can also see other useful bits of info such as the *call stack* of functions that have been called, what local (within the current scope) and global (available everywhere) variables have been defined, and we can nominate variables to watch too.
Perhaps you now want to progress the code on from a breakpoint; you can do this too. You'll see that a menu has appeared with stop, restart, play, and other buttons on it. To skip over the next line of code, use the curved arrow over the dot. To dig into the next line of code, for example if it's a function, use the arrow pointing toward a dot. To carry on running the code, use the play button.
This is only really scratching the surface of what you can do with IDE based debugging, but even that surface layer provides lots of really useful tools for finding out what's going on when your code executes.
You can find a short 'hello world!' debugging tutorial in the [official VS Code documentation](https://code.visualstudio.com/docs/python/python-tutorial#_configure-and-run-the-debugger).
## Logging
Logging is a means of tracking events that happen when software runs. An event is described by a descriptive message that can optionally contain data about variables that are defined as the code is executing.
Logging has two main purposes: to record events of interest, such as an error, and to act as an auditable account of what happened after the fact.
Although Python has a built-in logger, we will see an example of logging using [**loguru**](https://github.com/Delgan/loguru), a package that makes logging a little easier and has some nice settings.
Let's see how to log a debug message:
```
from loguru import logger
logger.debug("Simple logging!")
```
The default message includes the time, the type of log message it is, what bit of code it happened in (including a line number), and the message itself (basically all the info we need). **loguru**'s log levels, from most to least severe, are:
- CRITICAL
- ERROR
- WARNING
- SUCCESS
- INFO
- DEBUG
- TRACE
You can find advice on what level to use for what message [here](https://reflectoring.io/logging-levels/), but it will depend a bit on what you're using your logs for.
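Python's built-in `logging` module implements the same idea of a severity threshold, though `SUCCESS` and `TRACE` are **loguru**-specific and have no built-in equivalent. A minimal sketch of how a threshold filters messages:

```python
import io
import logging

# Send records to an in-memory stream so we can inspect what gets through
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

logger = logging.getLogger("threshold_demo")
logger.addHandler(handler)
logger.propagate = False          # keep the demo self-contained
logger.setLevel(logging.WARNING)  # only WARNING and above get through

logger.debug("ignored")       # below the threshold
logger.info("also ignored")   # below the threshold
logger.warning("recorded")    # at the threshold
logger.error("recorded too")  # above it

print(stream.getvalue())
# WARNING: recorded
# ERROR: recorded too
```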
What we've just seen are logging messages written out to the console, which doesn't persist. This is clearly no good for auditing what happened long after the fact (and it may not be that good for debugging either) so we also need a way to write a log to a file. This snippet of code
```python
logger.add("file_{time}.log")
```
tells **loguru** to send your logging messages to a *log file* in addition to the console. This is really handy for auditing what happened when your code executed long after it ran. You can choose any name for your log file; using `"{time}"` as part of the string tells **loguru** to name the file with the current datetime.
Log files can become quite numerous and quite large, which you might not want. Those logs from 6 months ago may just be taking up space without being all that useful, for example. So you can also tell **loguru** when to rotate to a new log file and when to clean old ones up. Some examples: `logger.add("file_1.log", rotation="500 MB")` starts a new file once the current one reaches 500 MB in size, `rotation="12:00"` starts a new file each day at noon, and `retention="10 days"` keeps old files for 10 days before removing them.
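The built-in `logging` module has a similar size-based rotation via `RotatingFileHandler`; a minimal sketch, assuming a throwaway `demo.log` path in a temporary directory:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "demo.log")

# Roll over once the file exceeds ~200 bytes, keeping at most 2 old copies
handler = logging.handlers.RotatingFileHandler(log_path, maxBytes=200, backupCount=2)
logger = logging.getLogger("rotation_demo")
logger.addHandler(handler)
logger.propagate = False
logger.setLevel(logging.INFO)

for i in range(50):
    logger.info("message number %d", i)

# Rotated files are renamed demo.log.1, demo.log.2, ...
print(sorted(os.listdir(log_dir)))
# ['demo.log', 'demo.log.1', 'demo.log.2']
```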
One further feature that is worth being aware of is the capability to trace what caused errors, including the trace back through functions and modules, and report them in the log. Of course, you can debug these using the console, but sometimes having such complex errors written to a file (in full) can be handy. This example of a full traceback comes from the **loguru** documentation. The script would have:
```python
logger.add("out.log", backtrace=True, diagnose=True) # Caution, may leak sensitive data if used in production
def func(a, b):
return a / b
def nested(c):
try:
func(5, c)
except ZeroDivisionError:
logger.exception("What?!")
nested(0)
```
while the log file would record:
```
2018-07-17 01:38:43.975 | ERROR | __main__:nested:10 - What?!
Traceback (most recent call last):
File "test.py", line 12, in <module>
nested(0)
└ <function nested at 0x7f5c755322f0>
> File "test.py", line 8, in nested
func(5, c)
│ └ 0
└ <function func at 0x7f5c79fc2e18>
File "test.py", line 4, in func
return a / b
│ └ 0
└ 5
ZeroDivisionError: division by zero
```
| github_jupyter |
```
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import torch
print(torch.__version__)
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data_utils
from torch.utils.data import DataLoader, Dataset, Sampler
from torch.utils.data.dataloader import default_collate
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.metrics import Accuracy
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
INPUT_SIZE = 36
HIDDEN_SIZE = 25
OUTPUT_SIZE = 5
LEARNING_RATE = 1e-2
EPOCHS = 400
BATCH_SIZE = 256
EMBEDDING_SIZE = 5
class CustomDataset(Dataset):
# Constructor that reads in the dataset
def __init__(self):
X = pd.read_csv('./data/X_cat.csv', sep='\t', index_col=0)
target = pd.read_csv('./data/y_cat.csv', sep='\t', index_col=0, names=['status']) # header=-1,
weekday_columns = ['Weekday_0', 'Weekday_1', 'Weekday_2',
'Weekday_3', 'Weekday_4', 'Weekday_5', 'Weekday_6']
weekdays = np.argmax(X[weekday_columns].values, axis=1)
X.drop(weekday_columns, axis=1, inplace=True)
X['Weekday_cos'] = np.cos(2 * np.pi / 7.) * weekdays
X['Weekday_sin'] = np.sin(2 * np.pi / 7.) * weekdays
X['Hour_cos'] = np.cos(2 * np.pi / 24.) * X['Hour'].values
X['Hour_sin'] = np.sin(2 * np.pi / 24.) * X['Hour'].values
X['Month_cos'] = np.cos(2 * np.pi / 12.) * X['Month'].values
X['Month_sin'] = np.sin(2 * np.pi / 12.) * X['Month'].values
X['Gender'] = np.argmax(X[['Sex_Female', 'Sex_Male', 'Sex_Unknown']].values, axis=1)
X.drop(['Sex_Female', 'Sex_Male', 'Sex_Unknown'], axis=1, inplace=True)
print(X.shape)
print(X.head())
target = target.iloc[:, :].values
target[target == 'Died'] = 'Euthanasia'
le = LabelEncoder()
self.y = le.fit_transform(target)
self.X = X.values
self.columns = X.columns.values
self.embedding_column = 'Gender'
self.nrof_emb_categories = 3
self.numeric_columns = ['IsDog', 'Age', 'HasName', 'NameLength', 'NameFreq', 'MixColor', 'ColorFreqAsIs',
'ColorFreqBase', 'TabbyColor', 'MixBreed', 'Domestic', 'Shorthair', 'Longhair',
'Year', 'Day', 'Breed_Chihuahua Shorthair Mix', 'Breed_Domestic Medium Hair Mix',
'Breed_Domestic Shorthair Mix', 'Breed_German Shepherd Mix', 'Breed_Labrador Retriever Mix',
'Breed_Pit Bull Mix', 'Breed_Rare',
'SexStatus_Flawed', 'SexStatus_Intact', 'SexStatus_Unknown',
'Weekday_cos', 'Weekday_sin', 'Hour_cos', 'Hour_sin',
'Month_cos', 'Month_sin']
return
def __len__(self):
return len(self.X)
# Override the method that retrieves
# a single observation from the dataset by index
def __getitem__(self, idx):
row = self.X[idx, :]
row = {col: torch.tensor(row[i]) for i, col in enumerate(self.columns)}
return row, self.y[idx]
class MLPNet(nn.Module):
def __init__(self, input_size, hidden_size, output_size, nrof_cat, emb_dim,
emb_columns, numeric_columns):
super(MLPNet, self).__init__()
self.emb_columns = emb_columns
self.numeric_columns = numeric_columns
self.emb_layer = torch.nn.Embedding(nrof_cat, emb_dim)
self.feature_bn = torch.nn.BatchNorm1d(input_size)
self.linear1 = torch.nn.Linear(input_size, hidden_size)
self.linear1.apply(self.init_weights)
self.bn1 = torch.nn.BatchNorm1d(hidden_size)
self.linear2 = torch.nn.Linear(hidden_size, hidden_size)
self.linear2.apply(self.init_weights)
self.bn2 = torch.nn.BatchNorm1d(hidden_size)
self.linear3 = torch.nn.Linear(hidden_size, output_size)
def init_weights(self, m):
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)
# m.bias.data.fill_(0.001)
def forward(self, x):
emb_output = self.emb_layer(torch.tensor(x[self.emb_columns], dtype=torch.int64))
numeric_feats = torch.tensor(pd.DataFrame(x)[self.numeric_columns].values, dtype=torch.float32)
concat_input = torch.cat([numeric_feats, emb_output], dim=1)
output = self.feature_bn(concat_input)
output = self.linear1(output)
output = self.bn1(output)
output = torch.relu(output)
output = self.linear2(output)
output = self.bn2(output)
output = torch.relu(output)
output = self.linear3(output)
predictions = torch.softmax(output, dim=1)
return predictions
def run_train(model, train_loader):
step = 0
for epoch in range(EPOCHS):
model.train()
for features, label in train_loader:
# Reset gradients
optimizer.zero_grad()
output = model(features)
# Calculate error and backpropagate
loss = criterion(output, label)
loss.backward()
acc = accuracy(output, label).item()
# Update weights with gradients
optimizer.step()
step += 1
if step % 100 == 0:
print('EPOCH %d STEP %d : train_loss: %f train_acc: %f' %
(epoch, step, loss.item(), acc))
return step
animal_dataset = CustomDataset()
train_loader = data_utils.DataLoader(dataset=animal_dataset,
batch_size=BATCH_SIZE, shuffle=True)
model = MLPNet(INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE, animal_dataset.nrof_emb_categories,
EMBEDDING_SIZE,
animal_dataset.embedding_column, animal_dataset.numeric_columns)
criterion = nn.CrossEntropyLoss()
accuracy = Accuracy()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
step = run_train(model, train_loader)
```
| github_jupyter |
```
# Initialize Otter Grader
import otter
grader = otter.Notebook()
```

# In-class Assignment (Feb 9)
Run the following two cells to load the required modules and read the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
df = pd.read_csv("Video_Games_Sales_cleaned_sampled.csv")
df.head(5)
```
## Exploring Data with Pandas
### Q1:
How many data points (rows) are there in this dataset? Store it in ```num_rows```.
<!--
BEGIN QUESTION
name: q1
manual: false
-->
```
# your code here
num_rows = df.shape[0] # SOLUTION
print(num_rows)
```
### Q2
What are the max and min values in Global Sales? What about the quartiles (25%, 50%, and 75%)? Can you answer this question with one line of code?
<!--
BEGIN QUESTION
name: q2
manual: false
-->
```
# your code here
df["Global_Sales"].describe() # SOLUTION
```
### Q3
What are the unique genres and consoles that the dataset contains? Store them in ```genre_unique``` and ```console_unique```.
<!--
BEGIN QUESTION
name: q3
manual: false
-->
```
# your code here
genre_unique = df["Genre"].unique() # SOLUTION
console_unique = df["Console"].unique() # SOLUTION
print("All genres:", genre_unique)
print("All consoles:", console_unique)
```
### Q4
What are the top five games with the most global sales?
<!--
BEGIN QUESTION
name: q4
manual: false
-->
```
# your code here
df.sort_values(by="Global_Sales",ascending=False).head(5) # SOLUTION
```
### Q5 (Optional: Do it if you have enough time)
How many games in the dataset are developed by Nintendo? What are their names?
<!--
BEGIN QUESTION
name: q5
manual: false
-->
```
# your code here
# BEGIN SOLUTION
arr_name_by_nintendo = df.loc[df["Developer"] == "Nintendo","Name"]
print (arr_name_by_nintendo.nunique())
print (arr_name_by_nintendo.unique())
# END SOLUTION
```
## Linear Regression
Suppose that you want to regress the global sales on four features: Critic_Score, Critic_Count, User_Score, and User_Count.
The input matrix $X$ and the output $y$ are given to you below.
```
## No need for modification, just run this cell
X = df[['Critic_Score', 'Critic_Count', 'User_Score', 'User_Count']].values
y = df[['Global_Sales']].values
```
### Q6
Use train_test_split function in sklearn to split the dataset into training and test sets. Set 80% of the dataset aside for training and use the rest for testing. (set random_state=0)
<!--
BEGIN QUESTION
name: q6
manual: false
-->
```
# your code here
# BEGIN SOLUTION
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)
# END SOLUTION
```
### Q7
Train your linear regression model using the training set you obtained above. Then, store the coefficients and the intercept of your model in ```coefs``` and ```intercept```, respectively.
<!--
BEGIN QUESTION
name: q7
manual: false
-->
```
# your code here
# BEGIN SOLUTION NO PROMPT
model = LinearRegression()
model.fit(X_train,y_train)
# END SOLUTION
coefs = model.coef_ # SOLUTION
intercept = model.intercept_ # SOLUTION
print("Coefficients:", coefs)
print("Intercept:", intercept)
```
### Q8 (Optional: Do it if you have enough time.)
Compute the mean-squared-error of your model's prediction on the training and test sets and store them in ```train_error``` and ```test_error```, respectively.
<!--
BEGIN QUESTION
name: q8
manual: false
-->
```
# your code here
# BEGIN SOLUTION NO PROMPT
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)
# END SOLUTION
train_error = mean_squared_error(y_train, y_pred_train) # SOLUTION
test_error = mean_squared_error(y_test, y_pred_test) # SOLUTION
print(train_error)
print(test_error)
```
# Submit
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output.
**Please save before submitting!**
```
# Save your notebook first, then run this cell to create a pdf for your reference.
```
| github_jupyter |
# SLU10 - Classification: Exercise notebook
```
import pandas as pd
import numpy as np
```
In this notebook you will practice the following:
- What classification is for
- Logistic regression
- Cost function
- Binary classification
You thought that you would get away without implementing your own little Logistic Regression? Hah!
# Exercise 1. Implement the Logistic Function
*aka the sigmoid function*
As a very simple warmup, you will implement the logistic function. Let's keep this simple!
Here's a quick reminder of the formula:
$$\hat{p} = \frac{1}{1 + e^{-z}}$$
**Complete here:**
```
def logistic_function(z):
"""
Implementation of the logistic function by hand
Args:
z (np.float64): a float
Returns:
proba (np.float64): the predicted probability for a given observation
"""
# define the numerator and the denominator and obtain the predicted probability
# clue: you can use np.exp()
numerator = None
denominator = None
proba = None
# YOUR CODE HERE
raise NotImplementedError()
return proba
z = 1.2
print('Predicted probability: %.2f' % logistic_function(z))
```
Expected output:
Predicted probability: 0.77
```
z = 3.4
assert np.isclose(np.round(logistic_function(z),2), 0.97)
z = -2.1
assert np.isclose(np.round(logistic_function(z),2), 0.11)
```
# Exercise 2: Make Predictions From Observations
The next step is to implement a function that receives observations and returns predicted probabilities.
For instance, remember that for an observation with two variables we have:
$$z = \beta_0 + \beta_1 x_1 + \beta_2 x_2$$
where $\beta_0$ is the intercept and $\beta_1, \beta_2$ are the coefficients.
**Complete here:**
```
def predict_proba(x, coefficients):
"""
Implementation of a function that returns a predicted probability for a given data observation
Args:
x (np.array): a numpy array of shape (n,)
- n: number of variables
coefficients (np.array): a numpy array of shape (n + 1,)
- coefficients[0]: intercept
- coefficients[1:]: remaining coefficients
Returns:
proba (np.array): the predicted probability for a given data observation
"""
# start by assigning the intercept to z
# clue: the intercept is the first element of the list of coefficients
z = None
# YOUR CODE HERE
raise NotImplementedError()
# sum the remaining variable * coefficient products to z
# clue: the variables and coefficients indices are not exactly aligned, but correctly ordered
for i in range(None): # iterate through the observation variables (clue: you can use len())
z += None # multiply the variable value by its coefficient and add to z
# YOUR CODE HERE
raise NotImplementedError()
# obtain the predicted probability from z
# clue: we already implemented something that can give us that
proba = None
# YOUR CODE HERE
raise NotImplementedError()
return proba
x = np.array([0.2,2.32,1.3,3.2])
coefficients = np.array([2.1,0.22,-2, 0.4, 0.1])
print('Predicted probability: %.3f' % predict_proba(x, coefficients))
```
Expected output:
Predicted probability: 0.160
```
x = np.array([1,0,2,3.2])
coefficients = np.array([-0.2,2,-6, 1.2, -1])
assert np.isclose(np.round(predict_proba(x, coefficients),2), 0.73)
x = np.array([3.2,1.2,-1.2])
coefficients = np.array([-1.,3.1,-3,4])
assert np.isclose(np.round(predict_proba(x, coefficients),2), 0.63)
```
# Exercise 3: Compute the Cross-Entropy Cost Function
As you will implement stochastic gradient descent, you only have to do the following for each prediction:
$$H_{\hat{p}}(y) = - (y \log(\hat{p}) + (1-y) \log (1-\hat{p}))$$
**Complete here:**
```
def cross_entropy(y, proba):
"""
Implementation of a function that returns the Cross-Entropy loss
Args:
y (np.int64): an integer
proba (np.float64): a float
Returns:
loss (np.float): a float with the resulting loss for a given prediction
"""
# compute the inner left side of the loss function (for when y == 1)
# clue: use np.log()
left = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the inner right side of the loss function (for when y == 0)
right = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the total loss
# clue: do not forget the minus sign
loss = None
# YOUR CODE HERE
raise NotImplementedError()
return loss
y = 1
proba = 0.7
print('Computed loss: %.3f' % cross_entropy(y, proba))
```
Expected output:
Computed loss: 0.357
```
y = 1
proba = 0.35
assert np.isclose(np.round(cross_entropy(y, proba),3), 1.050)
y = 1
proba = 0.77
assert np.isclose(np.round(cross_entropy(y, proba),3), 0.261)
```
# Exercise 4: Obtain the Optimized Coefficients
Now that the warmup is done, let's do the most interesting exercise. Here you will implement the optimized coefficients through Stochastic Gradient Descent.
Quick reminders:
$$H_{\hat{p}}(y) = - \frac{1}{N}\sum_{i=1}^{N} \left [{ y_i \ \log(\hat{p}_i) + (1-y_i) \ \log (1-\hat{p}_i)} \right ]$$
and
$$\beta_{0(t+1)} = \beta_{0(t)} - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_{0(t)}}$$
$$\beta_{t+1} = \beta_t - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_t}$$
which can be simplified to
$$\beta_{0(t+1)} = \beta_{0(t)} + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p})\right]$$
$$\beta_{t+1} = \beta_t + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p}) \ x \right]$$
You will have to initialize a numpy array full of zeros for the coefficients. If you have a training set $X$, you can initialize it this way:
```python
coefficients = np.zeros(X.shape[1]+1)
```
where the $+1$ is adding the intercept.
You will also iterate through the training set $X$ alongside their respective labels $Y$. To do so simultaneously you can do it this way:
```python
for x_sample, y_sample in zip(X, Y):
...
```
**Complete here:**
```
def compute_coefficients(x_train, y_train, learning_rate = 0.1, n_epoch = 50, verbose = False):
"""
Implementation of a function that returns the optimized intercept and coefficients
Args:
x_train (np.array): a numpy array of shape (m, n)
m: number of training observations
n: number of variables
y_train (np.array): a numpy array of shape (m,)
learning_rate (np.float64): a float
n_epoch (np.int64): an integer of the number of full training cycles to perform on the training set
Returns:
coefficients (np.array): a numpy array of shape (n+1,)
"""
# initialize the coefficients array with zeros
# clue: use np.zeros()
coefficients = None
# YOUR CODE HERE
raise NotImplementedError()
# run the stochastic gradient descent algorithm n_epoch times and update the coefficients
for epoch in range(None): # iterate n_epoch times
loss = None # initialize the cross entropy loss with an empty list
for x, y in zip(None, None): # iterate through the training set observations and labels
proba = None # compute the predicted probability
loss.append(None) # compute the cross entropy loss and append it to the list
coefficients[0] += None # update the intercept
for i in range(None): # iterate through the observation variables (clue: use len())
coefficients[i + 1] += None # update each coefficient
loss = None # average the obtained cross entropies (clue: use np.mean())
# YOUR CODE HERE
raise NotImplementedError()
if((epoch%10==0) & verbose):
print('>epoch=%d, learning_rate=%.3f, error=%.3f' % (epoch, learning_rate, loss))
return coefficients
x_train = np.array([[1,2,3], [2,5,9], [3,1,4], [8,2,9]])
y_train = np.array([0,1,0,1])
learning_rate = 0.1
n_epoch = 50
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=True)
print('Computed coefficients:')
print(coefficients)
```
Expected output:
>epoch=0, learning_rate=0.100, error=0.811
>epoch=10, learning_rate=0.100, error=0.675
>epoch=20, learning_rate=0.100, error=0.640
>epoch=30, learning_rate=0.100, error=0.606
>epoch=40, learning_rate=0.100, error=0.574
Computed coefficients:
[-0.82964483 0.02698239 -0.04632395 0.27761155]
```
x_train = np.array([[3,1,3], [1,0,9], [3,3,4], [2,-1,10]])
y_train = np.array([0,1,0,1])
learning_rate = 0.3
n_epoch = 100
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)
assert np.allclose(coefficients, np.array([-0.25917811, -1.15128387, -0.85317139, 0.55286134]))
x_train = np.array([[3,-1,-2], [-6,9,3], [3,-1,4], [5,1,6]])
coefficients = compute_coefficients(x_train, y_train, learning_rate=learning_rate, n_epoch=n_epoch, verbose=False)
assert np.allclose(coefficients, np.array([-0.53111811, -0.16120628, 2.20202909, 0.27270437]))
```
# Exercise 5: Normalize Data
Just a quick and easy function to normalize the data. It is crucial that your variables are scaled to $[0, 1]$ (normalized) or standardized so that you can correctly interpret logistic regression coefficients for your possible future employer.
You only have to implement this formula
$$ x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
Don't forget that the `axis` argument is critical when obtaining the maximum and minimum values! As you want to obtain the maximum and minimum values of each individual feature, you have to specify `axis=0`. Thus, if you wanted to obtain the maximum values of each feature of data $X$, you would do the following:
```python
X_max = np.max(X, axis=0)
```
**Complete here:**
```
def normalize_data(data):
"""
Implementation of a function that normalizes your data variables
Args:
data (np.array): a numpy array of shape (m, n)
m: number of observations
n: number of variables
Returns:
normalized_data (np.array): a numpy array of shape (m, n)
"""
# compute the numerator
# clue: use np.min()
numerator = None
# YOUR CODE HERE
raise NotImplementedError()
# compute the denominator
# clue: use np.max() and np.min()
denominator = None
# YOUR CODE HERE
raise NotImplementedError()
# obtain the normalized data
normalized_data = None
# YOUR CODE HERE
raise NotImplementedError()
return normalized_data
data = np.array([[9,5,2], [7,7,3], [2,2,11], [1,5,2], [10,1,3], [0,9,5]])
normalized_data = normalize_data(data)
print('Before normalization:')
print(data)
print('\n-------------------\n')
print('After normalization:')
print(normalized_data)
```
Expected output:
Before normalization:
[[ 9 5 2]
[ 7 7 3]
[ 2 2 11]
[ 1 5 2]
[10 1 3]
[ 0 9 5]]
-------------------
After normalization:
[[0.9 0.5 0. ]
[0.7 0.75 0.11111111]
[0.2 0.125 1. ]
[0.1 0.5 0. ]
[1. 0. 0.11111111]
[0. 1. 0.33333333]]
```
data = np.array([[9,5,2,6], [7,5,1,3], [2,2,11,1]])
normalized_data = normalize_data(data)
assert np.allclose(normalized_data, np.array([[1., 1., 0.1, 1.],[0.71428571, 1., 0., 0.4],[0., 0., 1., 0.]]))
data = np.array([[9,5,3,1], [1,3,1,3], [2,2,4,6]])
normalized_data = normalize_data(data)
assert np.allclose(normalized_data, np.array([[1., 1., 0.66666667, 0.],[0., 0.33333333, 0., 0.4],
[0.125, 0., 1., 1.]]))
```
# Exercise 6: Putting it All Together
The Wisconsin Breast Cancer Diagnostic dataset is another data science classic. It is the result of extracting characteristics of breast cells' nuclei to understand which of them are most relevant for diagnosing breast cancer.
Your quest is to first analyze this dataset using what you've learned in the previous SLUs and then create a logistic regression model that can correctly distinguish cancer cells from healthy ones.
Dataset description:
1. Sample code number: id number
2. Clump Thickness
3. Uniformity of Cell Size
4. Uniformity of Cell Shape
5. Marginal Adhesion
6. Single Epithelial Cell Size
7. Bare Nuclei
8. Bland Chromatin
9. Normal Nucleoli
10. Mitoses
11. Class: (2 for benign, 4 for malignant) > We will modify to (0 for benign, 1 for malignant) for simplicity
The data is loaded for you below.
```
columns = ['Sample code number','Clump Thickness','Uniformity of Cell Size','Uniformity of Cell Shape',
'Marginal Adhesion','Single Epithelial Cell Size','Bare Nuclei','Bland Chromatin','Normal Nucleoli',
'Mitoses','Class']
data = pd.read_csv('data/breast-cancer-wisconsin.csv',names=columns, index_col=0)
data["Bare Nuclei"] = data["Bare Nuclei"].replace(['?'],np.nan)
data = data.dropna()
data["Bare Nuclei"] = data["Bare Nuclei"].map(int)
data.Class = data.Class.map(lambda x: 1 if x == 4 else 0)
X = data.drop('Class', axis=1).values
y_train = data.Class.values
```
You will also have to return several values, such as the number of cancer and healthy cells. To do so, remember that you can do masks in numpy arrays. If you had a numpy array of labels called `labels` and wanted to obtain the ones with label $3$, you would do the following:
```python
filtered_labels = labels[labels==3]
```
You will additionally be asked to obtain the number of correct cancer cell predictions. Imagine that you have a numpy array with the predictions called `predictions` and a numpy array with the correct labels called `labels` and you wanted to obtain the number of correct predictions of a label $4$. You would do the following:
```python
n_correct_predictions = labels[(labels==4) & (predictions==4)].shape[0]
```
Also, don't forget to use these values for your logistic regression!
```
# Hyperparameters
learning_rate = 0.01
n_epoch = 100
# For validation
verbose = True
```
Now let's do this!
**Complete here:**
```
# STEP ONE: Initial analysis and data processing
# How many cells have cancer? (clue: use y_train)
n_cancer = None
# YOUR CODE HERE
raise NotImplementedError()
# How many cells are healthy? (clue: use y_train)
n_healthy = None
# YOUR CODE HERE
raise NotImplementedError()
# Normalize the training data X (clue: we have already implemented this)
x_train = None
# YOUR CODE HERE
raise NotImplementedError()
print("Number of cells with cancer: %i" % n_cancer)
print("\nThe last three normalized rows:")
print(x_train[-3:])
```
Expected output:
Number of cells with cancer: 239
The last three normalized rows:
[[0.44444444 1. 1. 0.22222222 0.66666667 0.22222222
0.77777778 1. 0.11111111 1. ]
[0.33333333 0.77777778 0.55555556 0.33333333 0.22222222 0.33333333
1. 0.55555556 0. 1. ]
[0.33333333 0.77777778 0.77777778 0.44444444 0.33333333 0.44444444
1. 0.33333333 0. 1. ]]
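The normalization helper referenced above was implemented earlier in the course and is not shown here, but the expected output is consistent with per-column min-max scaling. A toy sketch of that idea (an illustration, not necessarily the exact helper):

```python
import numpy as np

# Toy feature matrix: each column is scaled independently to [0, 1].
X = np.array([[1.0, 10.0],
              [2.0, 30.0],
              [3.0, 50.0]])

col_min = X.min(axis=0)          # per-column minimum
col_max = X.max(axis=0)          # per-column maximum
X_norm = (X - col_min) / (col_max - col_min)

print(X_norm)  # every column now spans [0, 1]
```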
```
# STEP TWO: Model training and predictions
# What coefficients can we get? (clue: we have already implemented this)
# note: don't forget to use all the hyperparameters defined above
coefficients = None
# YOUR CODE HERE
raise NotImplementedError()
# Initialize the predicted probabilities list
probas = None
# YOUR CODE HERE
raise NotImplementedError()
# What are the predicted probabilities on the training data?
for x in None: # iterate through the training data x_train
    probas.append(None) # append the predicted probability to the list (clue: we already implemented this)
# YOUR CODE HERE
raise NotImplementedError()
# If we had to say whether a cell has breast cancer, what are the predictions?
# clue 1: Hard assign the predicted probabilities by rounding them to the nearest integer
# clue 2: use np.round()
preds = None
# YOUR CODE HERE
raise NotImplementedError()
print("\nThe last three coefficients:")
print(coefficients[-3:])
print("\nThe last three obtained probas:")
print(probas[-3:])
print("\nThe last three predictions:")
print(preds[-3:])
```
Expected output:
>epoch=0, learning_rate=0.010, error=0.617
>epoch=10, learning_rate=0.010, error=0.209
>epoch=20, learning_rate=0.010, error=0.143
>epoch=30, learning_rate=0.010, error=0.114
>epoch=40, learning_rate=0.010, error=0.097
>epoch=50, learning_rate=0.010, error=0.086
>epoch=60, learning_rate=0.010, error=0.077
>epoch=70, learning_rate=0.010, error=0.071
>epoch=80, learning_rate=0.010, error=0.066
>epoch=90, learning_rate=0.010, error=0.062
The last three coefficients:
[0.70702475 0.33306501 3.27480969]
The last three obtained probas:
[0.9679181578309998, 0.9356364708465178, 0.9482109014966041]
The last three predictions:
[1. 1. 1.]
```
# STEP THREE: Results analysis
# How many cells were predicted to have breast cancer? (clue: use preds and len() or .shape)
n_predicted_cancer = None
# YOUR CODE HERE
raise NotImplementedError()
# How many cells with cancer were correctly detected? (clue: use y_train, preds and len() or .shape)
n_correct_cancer_predictions = None
# YOUR CODE HERE
raise NotImplementedError()
print("Number of correct cancer predictions: %i" % n_correct_cancer_predictions)
```
Expected output:
Number of correct cancer predictions: 239
```
print('You have a dataset with %s cells with cancer and %s healthy cells. \n\n'
'After analysing the data and training your own logistic regression classifier you find out that it correctly '
'identified %s out of %s cancer cells which were all of them. You feel very lucky and happy. However, shortly '
      'after, you grow somewhat suspicious of such amazing results. You feel that they should not be '
      'that good, but you do not know how to be sure of it. This is because you trained and tested on the same '
'dataset, which does not seem right! You say to yourself that you will definitely give your best focus when '
'doing the next Small Learning Unit 11, which will tackle exactly that.' %
(n_cancer, n_healthy, n_predicted_cancer, n_correct_cancer_predictions))
assert np.allclose(probas[:3], np.array([0.05075437808498781, 0.30382227212694596, 0.05238389294132284]))
assert np.isclose(n_predicted_cancer, 239)
assert np.allclose(coefficients[:3], np.array([-3.22309346, 0.40712798, 0.80696792]))
assert np.isclose(n_correct_cancer_predictions, 239)
```
<a href="https://colab.research.google.com/github/adasegroup/ML2022_seminars/blob/master/seminar1/seminar01.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Seminar 1. Machine learning on Titanic data
The notebook provides an intro to the exploratory analysis of the data, data preprocessing and application of machine learning methods.
The notebook is based on kaggle kernel "Titanic: Machine Learning from Disaster" https://www.kaggle.com/omarelgabry/a-journey-through-titanic by Omar El Gabry.
The data comes from an introductory Kaggle competition named Titanic.
The goal of the competition is to predict who survived and who died in the sinking of the RMS Titanic.
### Documentation to go through:
* https://docs.python.org/3/
* https://pandas.pydata.org/docs
* https://matplotlib.org/contents.html
* https://docs.scipy.org/doc/
* http://scikit-learn.org/stable/documentation.html
### Some additional info:
* http://www.scipy-lectures.org/
* https://www.kaggle.com/
* https://pydata.org/
```
# importing data processing tools: pandas and numpy
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
```
# Load data
```
# get titanic files as a DataFrame
titanic_dataframe = pd.read_csv("https://raw.githubusercontent.com/adasegroup/ML2022_seminars/master/seminar1/titanic/train.csv", index_col='PassengerId')
```
# Look through the data
```
# preview the data
titanic_dataframe.head()
# list the features
print(titanic_dataframe.keys())
# column selection by name
titanic_dataframe['Age']
# row selection by id
titanic_dataframe.loc[1]
# column selection by index
titanic_dataframe.iloc[:, 0]
# row selection by index
titanic_dataframe.iloc[0, :]
```
### Hints and tips
You can use ```%time``` or ```tqdm``` to track code timing.
Note that ```pandas``` uses a column-oriented data structure, so row-wise operations can be slower than column-wise ones.
```
%time titanic_dataframe['Fare'].mean()
data_titanic_transpose = titanic_dataframe.T
%time data_titanic_transpose.loc['Fare'].mean()
from tqdm import tqdm
for i in tqdm(range(100000000)):
pass
```
## Data Dictionary
| Variable | Definition | Key |
| ------------- |:-------------|:-----|
| survival | Survival | 0 = No, 1 = Yes |
| pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |
| sex | Sex | |
| Age | Age in years | |
| sibsp | # of siblings / spouses aboard the Titanic | |
| parch | # of parents / children aboard the Titanic | |
| ticket | Ticket number | |
| fare | Passenger fare | |
| cabin | Cabin number | |
| embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |
```
titanic_dataframe.info()
titanic_dataframe.describe()
```
### Hints and tips
Write ```?``` after the function you are interested in, or just press ```Shift + Tab``` for a short reference on the function.
A double ```Shift + Tab``` expands to the full reference.
```
# call information for the function
titanic_dataframe.drop?
# drop unnecessary columns, these columns won't be useful in analysis and prediction
titanic_dataframe.drop(['Name','Ticket'], axis=1, inplace=True)
titanic_dataframe.info()
for column_name in titanic_dataframe.columns:
print(column_name, 'null', titanic_dataframe[column_name].isnull().sum())
# The Cabin column has too many NaN values to be useful, so dropping it won't have a remarkable impact on prediction
titanic_dataframe.drop("Cabin", axis=1, inplace=True)
# Count various embarked values
print(titanic_dataframe["Embarked"].value_counts())
# Fill the two missing values with the most frequent value, which is "S".
titanic_dataframe["Embarked"] = titanic_dataframe["Embarked"].fillna("S")
print(titanic_dataframe["Embarked"].value_counts())
# Groupby
titanic_dataframe.groupby("Survived").count()
```
### Tasks:
1. What are the mean and standard deviation of ages for every passenger class?
2. At which port of embarkation was the absolute difference between the number of men and women the greatest?
3. What is the number of NaN values in every column?
4. Replace NaN values in age with the median value and calculate the std value.
```
titanic_dataframe.groupby("Pclass").Age.mean()
titanic_dataframe.groupby("Pclass").Age.std()
titanic_dataframe.groupby(["Embarked", "Sex"]).count()
titanic_dataframe.isnull().sum()
median_age = titanic_dataframe["Age"].median()
median_age
titanic_dataframe["Age"] = titanic_dataframe["Age"].fillna(median_age)
titanic_dataframe.info()
```
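As a side note, the class-wise mean and std from task 1 can be obtained in a single pass with `.agg`. A small sketch on made-up data (the values below are illustrative, not from the Titanic set):

```python
import pandas as pd

# Toy frame mimicking the Pclass/Age columns used above.
df = pd.DataFrame({
    "Pclass": [1, 1, 2, 2, 3, 3],
    "Age":    [38.0, 42.0, 30.0, 26.0, 22.0, 20.0],
})

# One groupby call gives both statistics per class.
stats = df.groupby("Pclass")["Age"].agg(["mean", "std"])
print(stats)
```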
# Plotting
```
# visualization tools: matplotlib, seaborn
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# Simple plot
x = titanic_dataframe['Age']
y = titanic_dataframe['Fare']
plt.plot(x, y, 'o')
plt.xlabel('Age')
plt.ylabel('Fare')
```

```
# Catplot plot represents share of survived passengers for different embarkment ports
sns.catplot?
sns.catplot(x = 'Embarked', y = 'Survived', data=titanic_dataframe, height=4, aspect=3, kind = 'point')
figure_handle, (axis1, axis2, axis3) = plt.subplots(1, 3, figsize=(15, 5))
sns.countplot(x='Embarked', data=titanic_dataframe, ax=axis1)
sns.countplot(x='Survived', hue="Embarked", data=titanic_dataframe, order=[1, 0], ax=axis2)
# group by embarked, and get the mean for survived passengers for each value in Embarked
sns.barplot(x='Embarked', y='Survived', data=titanic_dataframe[["Embarked", "Survived"]], order=['S','C','Q'], ax=axis3)
```

```
# consider Embarked column in predictions,
# and remove "S" dummy variable,
# and leave "C" & "Q", since they seem to have a good rate for Survival.
# OR, don't create dummy variables for Embarked column, just drop it,
# because logically, Embarked doesn't seem to be useful in prediction.
embark_dummies_titanic = pd.get_dummies(titanic_dataframe['Embarked'])
embark_dummies_titanic
embark_dummies_titanic.drop(['S'], axis=1, inplace=True)
titanic_dataframe = titanic_dataframe.join(embark_dummies_titanic)
titanic_dataframe.drop(['Embarked'], axis=1, inplace=True)
# Examine fare variable
# convert from float to int
titanic_dataframe['Fare'] = titanic_dataframe['Fare'].astype(int)
# get fare for survived & didn't survive passengers
fare_not_survived = titanic_dataframe["Fare"][titanic_dataframe["Survived"] == 0]
fare_survived = titanic_dataframe["Fare"][titanic_dataframe["Survived"] == 1]
# get average and std for fare of survived/not survived passengers
average_fare = DataFrame([fare_not_survived.mean(), fare_survived.mean()])
std_fare = DataFrame([fare_not_survived.std(), fare_survived.std()])
# plot
titanic_dataframe['Fare'].plot(kind='hist', figsize=(15, 3), bins=100, xlim=(0, 50))
std_fare.index.names = ["Survived"]
average_fare.index.names = ["Survived"]
average_fare.plot(yerr=std_fare, kind='bar', legend=False)
# Do the same thing for pclass variable with no confidence interval visible
print(titanic_dataframe[["Pclass", "Survived"]].groupby(['Pclass'], as_index=True).mean())
# adjust the figure size
plt.figure(figsize=[10,5])
sns.barplot(x='Pclass', y='Survived', data=titanic_dataframe[["Pclass", "Survived"]], order=[1, 2, 3])
# Age
fig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))
axis1.set_title('Original Age values - Titanic')
axis2.set_title('New Age values - Titanic')
# get average, std, and number of NaN values in titanic_df
average_age_titanic = titanic_dataframe["Age"].mean()
std_age_titanic = titanic_dataframe["Age"].std()
count_nan_age_titanic = titanic_dataframe["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
random_ages = np.random.randint(average_age_titanic - std_age_titanic,
average_age_titanic + std_age_titanic,
size=count_nan_age_titanic)
# plot original Age values
# NOTE: drop all null values, and convert to int
titanic_dataframe['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
# fill NaN values in Age column with random values generated
titanic_dataframe.loc[np.isnan(titanic_dataframe["Age"]), "Age"] = random_ages
# convert from float to int
titanic_dataframe['Age'] = titanic_dataframe['Age'].astype(int)
# plot new Age Values
titanic_dataframe['Age'].hist(bins=70, ax=axis2)
# .... continue with plotting of Age column
# peaks for survived/not survived passengers by their age
facet = sns.FacetGrid(titanic_dataframe, hue="Survived", aspect=3)
facet.map(sns.kdeplot, 'Age', shade=True)
facet.set(xlim=(0, titanic_dataframe['Age'].max()))
facet.add_legend()
# average survived passengers by age
figure_handle, axis1 = plt.subplots(1, 1, figsize=(18, 4))
average_age = titanic_dataframe[["Age", "Survived"]].groupby(['Age'], as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
# Instead of having two columns Parch & SibSp,
# we can have one column that represents whether a passenger had any family member aboard,
# i.e. whether having any family member (parent, sibling, spouse, child) increases the chance of survival.
titanic_dataframe['Family'] = titanic_dataframe["Parch"] + titanic_dataframe["SibSp"]
titanic_dataframe.loc[titanic_dataframe['Family'] > 0, 'Family'] = 1
titanic_dataframe.loc[titanic_dataframe['Family'] == 0, 'Family'] = 0
# drop Parch & SibSp
titanic_dataframe.drop(['SibSp','Parch'], axis=1, inplace=True)
# plot Family
figure_handle, (axis1, axis2) = plt.subplots(1, 2, sharex=True, figsize=(10, 5))
sns.countplot(x='Family', data=titanic_dataframe, order=[1, 0], ax=axis1)
axis1.set_xticklabels(["With Family", "Alone"], rotation=0)
# average of survived for those who had/didn't have any family member
sns.barplot(x='Family', y='Survived', data=titanic_dataframe[["Family", "Survived"]], order=[1, 0], ax=axis2)
# Sex variable
# As we see, children (age < ~16) aboard seem to have a higher chance of survival.
# So, we can classify passengers as male, female, and child
def get_person(passenger):
age, sex = passenger
return 'child' if age < 16 else sex
titanic_dataframe['Person'] = titanic_dataframe[['Age','Sex']].apply(get_person, axis=1)
# No need to use Sex column since we created Person column
#titanic_dataframe.drop(['Sex'], axis=1, inplace=True)
# create dummy variables for Person column, & drop Male as it has the lowest average of survived passengers
person_dummies_titanic = pd.get_dummies(titanic_dataframe['Person'])
person_dummies_titanic.columns = ['Child', 'Female', 'Male']
person_dummies_titanic.drop(['Male'], axis=1, inplace=True)
titanic_dataframe = titanic_dataframe.join(person_dummies_titanic)
figure_handle, (axis1, axis2) = plt.subplots(1, 2, figsize=(10, 5))
sns.countplot(x='Person', data=titanic_dataframe, ax=axis1)
# average of survived for each Person(male, female, or child)
sns.barplot(x='Person', y='Survived', data=titanic_dataframe[["Person", "Survived"]],
ax=axis2, order=['male', 'female', 'child'])
# we don't need person variable after introduction of the corresponding dummy variables
titanic_dataframe.drop(['Person'], axis=1, inplace=True)
# Pclass
sns.catplot(x='Pclass', y='Survived', order=[1, 2, 3], data=titanic_dataframe, height=5, kind='point')
# The goal is to create dummy variables for class and joint it to the initial dataframe
# create dummy variables for Pclass column, & drop 3rd class as it has the lowest average of survived passengers
pclass_dummies_titanic = pd.get_dummies(titanic_dataframe['Pclass'])
pclass_dummies_titanic.columns = ['Class_1', 'Class_2', 'Class_3']
pclass_dummies_titanic.drop(['Class_3'], axis=1, inplace=True)
titanic_dataframe = titanic_dataframe.join(pclass_dummies_titanic)
```
### Task
1. Is distribution of age similar for men and women?
2. Compare Age distribution for all three classes.
```
gender_titanic1 = titanic_dataframe[titanic_dataframe.Sex == "male"]
gender_titanic2 = titanic_dataframe[titanic_dataframe.Sex == "female"]
fig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))
axis1.set_title('Age histogram - Titanic, male')
axis2.set_title('Age histogram - Titanic, female')
gender_titanic1["Age"].hist(ax = axis1, bins = 50)
gender_titanic2["Age"].hist(ax = axis2, bins = 50)
cls_titanic1 = titanic_dataframe.loc[(titanic_dataframe.Class_1 == 1) & (titanic_dataframe.Class_2 == 0)]
cls_titanic2 = titanic_dataframe.loc[(titanic_dataframe.Class_2 == 1) & (titanic_dataframe.Class_1 == 0)]
cls_titanic3 = titanic_dataframe.loc[(titanic_dataframe.Class_1 == 0) & (titanic_dataframe.Class_2 == 0)]
fig, (axis1, axis2, axis3) = plt.subplots(1, 3, figsize=(15, 4))
axis1.set_title('Age histogram - Titanic, Class1')
axis2.set_title('Age histogram - Titanic, Class2')
axis3.set_title('Age histogram - Titanic, Class3')
cls_titanic1["Age"].hist(ax = axis1, bins = 50)
cls_titanic2["Age"].hist(ax = axis2, bins = 50)
cls_titanic3["Age"].hist(ax = axis3, bins = 50)
```
## It's time for Machine learning!

```
# machine learning tools: various methods from scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
titanic_dataframe.head()
# drop the raw Sex column: the Person dummies created above already encode this information
# (the Person column itself was already dropped earlier)
titanic_dataframe.drop('Sex', axis=1, inplace=True)
titanic_dataframe.head()
train, test = train_test_split(titanic_dataframe, train_size=0.5, test_size=0.5)
train_x = train.drop(['Survived'], axis=1)
train_y = train['Survived']
test_x = test.drop(['Survived'], axis=1)
test_y = test['Survived']
# Logistic Regression
logistic_regression_model = LogisticRegression(solver='liblinear')
logistic_regression_model.fit(train_x, train_y)
train_prediction = logistic_regression_model.predict(train_x)
test_prediction = logistic_regression_model.predict(test_x)
train_accuracy = accuracy_score(train_y, train_prediction)
test_accuracy = accuracy_score(test_y, test_prediction)
print('Train Accuracy:', train_accuracy)
print('Test Accuracy:', test_accuracy)
# get Correlation Coefficient for each feature using Logistic Regression
coeff_df = DataFrame(titanic_dataframe.columns.delete(0))
coeff_df.columns = ['Features']
coeff_df["Coefficient Estimate"] = pd.Series(logistic_regression_model.coef_[0])
# preview
coeff_df
# Support Vector Machines
svm_model = SVC(C=1.0, gamma=0.5)
svm_model.fit(train_x, train_y)
train_prediction = svm_model.predict(train_x)
test_prediction = svm_model.predict(test_x)
train_accuracy = accuracy_score(train_y, train_prediction)
test_accuracy = accuracy_score(test_y, test_prediction)
print('Train Accuracy:', train_accuracy)
print('Test Accuracy:', test_accuracy)
# Random Forests
random_forest_model = RandomForestClassifier(n_estimators=10)
random_forest_model.fit(train_x, train_y)
train_prediction = random_forest_model.predict(train_x)
test_prediction = random_forest_model.predict(test_x)
train_accuracy = accuracy_score(train_y, train_prediction)
test_accuracy = accuracy_score(test_y, test_prediction)
print('Train Accuracy:', train_accuracy)
print('Test Accuracy:', test_accuracy)
# K nearest neighbours
knn_model = KNeighborsClassifier(n_neighbors=1)
knn_model.fit(train_x, train_y)
train_prediction = knn_model.predict(train_x)
test_prediction = knn_model.predict(test_x)
train_accuracy = accuracy_score(train_y, train_prediction)
test_accuracy = accuracy_score(test_y, test_prediction)
print('Train Accuracy:', train_accuracy)
print('Test Accuracy:', test_accuracy)
```
### Task 3
Explore **sklearn** and find the best classifier!
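One possible starting point for this task (not the only answer): compare several candidate classifiers with k-fold cross-validation and keep the best mean score. The data below is synthetic so the sketch is self-contained; on the Titanic frame you would pass `train_x` / `train_y` instead, and the classifier choices are just examples:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data; replace with train_x, train_y from the notebook.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

candidates = {
    "logreg": LogisticRegression(solver="liblinear"),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# Mean 5-fold cross-validation accuracy per candidate.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)

print(sorted(scores.items(), key=lambda kv: -kv[1]))
print("best:", best)
```

Cross-validation avoids the trap seen above of evaluating on the training split; from here you could also tune hyperparameters with `GridSearchCV`.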
# PTN Template
This notebook serves as a template for single-dataset PTN experiments.
It can be run on its own by setting STANDALONE to True (search for "STANDALONE" to see where),
but it is intended to be executed as part of a *papermill.py script. See any of the
experiments with a papermill script to get started with that workflow.
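A minimal driver for that workflow might look like the sketch below. Everything specific here is illustrative (the notebook path, output path, and parameter values are made up), and the guards keep the sketch harmless where papermill or the notebook file is absent:

```python
import os

try:
    import papermill as pm
except ImportError:
    pm = None  # papermill not installed; the call below stays a sketch

# Hypothetical parameter set; a real run supplies every key in required_parameters.
params = {
    "experiment_name": "demo_run",
    "lr": 0.0001,
    "seed": 1337,
    # ... plus the remaining required keys
}

if pm is not None and os.path.exists("ptn_template.ipynb"):
    # Executes the template, injecting `params` into the cell tagged "parameters".
    pm.execute_notebook(
        "ptn_template.ipynb",          # input notebook (hypothetical path)
        "out/ptn_template.out.ipynb",  # executed copy with outputs
        parameters=params,
    )
```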
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Required Parameters
These are the allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable cell tags in Jupyter to see what I mean.
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "tuned_1v2:oracle.run2_limited",
"device": "cuda",
"lr": 0.0001,
"labels_source": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"labels_target": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"episode_transforms_source": [],
"episode_transforms_target": [],
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
"num_examples_per_domain_per_label_source": 2000,
"num_examples_per_domain_per_label_target": 2000,
"n_shot": 3,
"n_way": 16,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"x_transforms_source": ["unit_mag"],
"x_transforms_target": ["unit_mag"],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Annotate per_domain_accuracy with whether each domain was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
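# The loop above just wraps each accuracy in a small dict; a self-contained
# sketch with toy values (domains and accuracies here are made up, not the
# real confusion output):
_toy_accuracy = {1: 0.9, 3: 0.4}
_toy_sources = {1}
for _domain, _acc in _toy_accuracy.items():
    _toy_accuracy[_domain] = {"accuracy": _acc, "source?": _domain in _toy_sources}
assert _toy_accuracy[1] == {"accuracy": 0.9, "source?": True}
assert _toy_accuracy[3] == {"accuracy": 0.4, "source?": False}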
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
```
import pandas as pd
from os.path import join
data_path = "../Dataset-1/selfie_dataset.txt"
image_path = "../Dataset-1/images"
headers = [
"image_name", "score", "partial_faces" ,"is_female" ,"baby" ,"child" ,"teenager" ,"youth" ,"middle_age" ,"senior" ,"white" ,"black" ,"asian" ,"oval_face" ,"round_face" ,"heart_face" ,"smiling" ,"mouth_open" ,"frowning" ,"wearing_glasses" ,"wearing_sunglasses" ,"wearing_lipstick" ,"tongue_out" ,"duck_face" ,"black_hair" ,"blond_hair" ,"brown_hair" ,"red_hair" ,"curly_hair" ,"straight_hair" ,"braid_hair" ,"showing_cellphone" ,"using_earphone" ,"using_mirror", "braces" ,"wearing_hat" ,"harsh_lighting", "dim_lighting"
]
df_image_details = pd.read_csv(data_path, names=headers, delimiter=" ")
df_image_details.head(3)
df_image_details = df_image_details[df_image_details.is_female != 0]
df_image_details.replace(to_replace=-1, value=0, inplace=True)
image_names = df_image_details.image_name.values.copy()
image_scores = df_image_details[headers[1]].values.copy()
image_attrs = df_image_details[headers[2:]].values.copy()
image_paths = [join(image_path, iname) + '.jpg' for iname in image_names]
image_paths_train, image_paths_test = image_paths[:-1000], image_paths[-1000:]
image_attrs_train, image_attrs_test = image_attrs[:-1000], image_attrs[-1000:]
image_scores_train, image_scores_test = image_scores[:-1000], image_scores[-1000:]
from keras.utils import Sequence
import numpy as np
import cv2
class ImageGenerator(Sequence):
def __init__(self, x_set, y_set, batch_size):
self.x, self.y = x_set, y_set
self.batch_size = batch_size
def __len__(self):
return int(np.ceil(len(self.x) / float(self.batch_size)))
def __getitem__(self, idx):
batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y1 = self.y[0][idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y2 = self.y[1][idx * self.batch_size:(idx + 1) * self.batch_size]
# load the batch: images from batch_x, targets from batch_y1 / batch_y2
x = [self.read_image(filename) for filename in batch_x]
y1 = [attributes for attributes in batch_y1]
y2 = [attributes for attributes in batch_y2]
return [np.array(x), np.array(x)], [np.array(y1), np.array(y2)] # the same image batch feeds both model inputs
def read_image(self, fname):
im = cv2.imread(fname)
im = cv2.resize(im, (224, 224), interpolation=cv2.INTER_CUBIC)
return im / 255.
```
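The `__len__`/`__getitem__` arithmetic of the generator above is worth sanity-checking in isolation: with N samples and batch size B there are ceil(N / B) batches, and slicing past the end simply yields a short final batch. A minimal sketch (helper names here are illustrative, not from the notebook):

```python
import numpy as np

def num_batches(n_samples, batch_size):
    # mirrors ImageGenerator.__len__
    return int(np.ceil(n_samples / float(batch_size)))

def batch_bounds(idx, batch_size):
    # mirrors the slice in ImageGenerator.__getitem__
    return idx * batch_size, (idx + 1) * batch_size

x = list(range(10))
assert num_batches(len(x), 4) == 3
lo, hi = batch_bounds(2, 4)
assert x[lo:hi] == [8, 9]  # final, short batch
```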
# Training Model
```
from keras.applications import resnet50
from keras.layers import Dense, Conv2D, MaxPool2D, Input, Flatten, concatenate, Dropout
from keras.models import Model, load_model
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
model_rnet = load_model("resnet50.hdf5")
for layer in model_rnet.layers:
layer.trainable = False
model_classification = Dense(1024, activation='relu')(model_rnet.get_layer('avg_pool').output)
model_classification = Dropout(0.5)(model_classification)
model_classification = Dense(512, activation='relu')(model_classification)
model_classification = Dense(36, activation='sigmoid', name='classification')(model_classification)
model_regression_input = Input((224, 224, 3), name='input_regression')
model_regression = Conv2D(16, 3)(model_regression_input)
model_regression = MaxPool2D()(model_regression)
model_regression = Conv2D(24, 5)(model_regression)
model_regression = MaxPool2D()(model_regression)
model_regression = Conv2D(32, 5)(model_regression)
model_regression = MaxPool2D()(model_regression)
model_regression = Flatten()(model_regression)
model_regression = Dense(128)(model_regression)
model_regression = concatenate([model_regression, model_classification])
model_regression = Dense(1, name='regression')(model_regression)
model = Model(inputs=[model_rnet.input, model_regression_input], outputs=[model_classification, model_regression])
model.summary()
model.compile(
optimizer='adam',
loss={
'regression': 'mean_squared_error',
'classification': 'binary_crossentropy'
},
metrics=[
'accuracy'
]
)
train_gen = ImageGenerator(image_paths_train, (image_attrs_train, image_scores_train), batch_size=128)
test_gen = ImageGenerator(image_paths_test, (image_attrs_test, image_scores_test), batch_size=128)
train_len = len(image_paths_train)
test_len = len(image_paths_test)
train_len, test_len
model.fit_generator(train_gen, validation_data=test_gen, epochs=200,
steps_per_epoch=train_len // 128,
validation_steps=10, use_multiprocessing=False,
callbacks=[
ReduceLROnPlateau(patience=2, verbose=1),
ModelCheckpoint('chpt-4_2.hdf5', verbose=1, save_best_only=True)
])
```
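With a loss dict like the one passed to `model.compile` above and default loss weights, Keras minimizes the sum of the per-output losses. A hedged numpy-free sketch of that combined objective (the label and prediction values below are made up for illustration):

```python
import math

def bce(y_true, y_pred):
    # binary cross-entropy, as used for the 'classification' head
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # mean squared error, as used for the 'regression' head
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

cls_loss = bce([1, 0], [0.8, 0.2])   # classification head
reg_loss = mse([3.0], [2.5])         # regression head
total = cls_loss + reg_loss          # default loss_weights are 1.0 each
assert abs(cls_loss - (-math.log(0.8))) < 1e-12
assert abs(reg_loss - 0.25) < 1e-12
```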
# Railroad Diagrams
The code in this notebook helps with drawing syntax diagrams. It is a (slightly customized) copy of the [excellent library from Tab Atkins jr.](https://github.com/tabatkins/railroad-diagrams), which unfortunately is not available as a Python package.
**Prerequisites**
* This notebook needs some understanding of advanced concepts in Python and graphics, notably
* classes
* the Python `with` statement
* Scalable Vector Graphics
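The classes below emit SVG `<path>` elements whose `d` attributes are built from move/line commands. As a quick refresher on that notation (a standalone sketch, independent of the library code): `M` moves the pen, `h` draws a horizontal line, `v` a vertical one — exactly what `Path.h()` and `Path.v()` append.

```python
# Build a tiny SVG path by hand, mimicking how the Path class below
# accumulates commands in the 'd' attribute.
d = "M10 10"             # move to (10, 10)
d += "h{0}".format(40)   # 40px line to the right
d += "v{0}".format(20)   # 20px line down
svg = '<svg xmlns="http://www.w3.org/2000/svg"><path d="{0}"/></svg>'.format(d)
assert d == "M10 10h40v20"
```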
```
import fuzzingbook_utils
import math
import re
import io
class C:
# Display constants
DEBUG = False # if true, writes some debug information into attributes
VS = 8 # minimum vertical separation between things. For a 3px stroke, must be at least 4
AR = 10 # radius of arcs
DIAGRAM_CLASS = 'railroad-diagram' # class to put on the root <svg>
STROKE_ODD_PIXEL_LENGTH = True # is the stroke width an odd (1px, 3px, etc) pixel length?
INTERNAL_ALIGNMENT = 'center' # how to align items when they have extra space. left/right/center
CHAR_WIDTH = 8.5 # width of each monospace character. play until you find the right value for your font
COMMENT_CHAR_WIDTH = 7 # comments are in smaller text by default
DEFAULT_STYLE = '''\
svg.railroad-diagram {
background-color:hsl(100,100%,100%);
}
svg.railroad-diagram path {
stroke-width:3;
stroke:black;
fill:rgba(0,0,0,0);
}
svg.railroad-diagram text {
font:bold 14px monospace;
text-anchor:middle;
}
svg.railroad-diagram text.label{
text-anchor:start;
}
svg.railroad-diagram text.comment{
font:italic 12px monospace;
}
svg.railroad-diagram rect{
stroke-width:3;
stroke:black;
fill:hsl(0,62%,82%);
}
'''
def e(text):
text = re.sub(r"&", '&amp;', str(text))
text = re.sub(r"<", '&lt;', text)
text = re.sub(r">", '&gt;', text)
return str(text)
def determineGaps(outer, inner):
diff = outer - inner
if C.INTERNAL_ALIGNMENT == 'left':
return 0, diff
elif C.INTERNAL_ALIGNMENT == 'right':
return diff, 0
else:
return diff/2, diff/2
def doubleenumerate(seq):
length = len(list(seq))
for i,item in enumerate(seq):
yield i, i-length, item
def addDebug(el):
if not C.DEBUG:
return
el.attrs['data-x'] = "{0} w:{1} h:{2}/{3}/{4}".format(type(el).__name__, el.width, el.up, el.height, el.down)
class DiagramItem(object):
def __init__(self, name, attrs=None, text=None):
self.name = name
# up = distance it projects above the entry line
# height = distance between the entry/exit lines
# down = distance it projects below the exit line
self.height = 0
self.attrs = attrs or {}
self.children = [text] if text else []
self.needsSpace = False
def format(self, x, y, width):
raise NotImplementedError # Virtual
def addTo(self, parent):
parent.children.append(self)
return self
def writeSvg(self, write):
write(u'<{0}'.format(self.name))
for name, value in sorted(self.attrs.items()):
write(u' {0}="{1}"'.format(name, e(value)))
write(u'>')
if self.name in ["g", "svg"]:
write(u'\n')
for child in self.children:
if isinstance(child, DiagramItem):
child.writeSvg(write)
else:
write(e(child))
write(u'</{0}>'.format(self.name))
def __eq__(self, other):
return type(self) == type(other) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class Path(DiagramItem):
def __init__(self, x, y):
self.x = x
self.y = y
DiagramItem.__init__(self, 'path', {'d': 'M%s %s' % (x, y)})
def m(self, x, y):
self.attrs['d'] += 'm{0} {1}'.format(x,y)
return self
def l(self, x, y):
self.attrs['d'] += 'l{0} {1}'.format(x,y)
return self
def h(self, val):
self.attrs['d'] += 'h{0}'.format(val)
return self
def right(self, val):
return self.h(max(0, val))
def left(self, val):
return self.h(-max(0, val))
def v(self, val):
self.attrs['d'] += 'v{0}'.format(val)
return self
def down(self, val):
return self.v(max(0, val))
def up(self, val):
return self.v(-max(0, val))
def arc_8(self, start, dir):
# 1/8 of a circle
arc = C.AR
s2 = 1/math.sqrt(2) * arc
s2inv = (arc - s2)
path = "a {0} {0} 0 0 {1} ".format(arc, "1" if dir == 'cw' else "0")
sd = start+dir
if sd == 'ncw':
offset = [s2, s2inv]
elif sd == 'necw':
offset = [s2inv, s2]
elif sd == 'ecw':
offset = [-s2inv, s2]
elif sd == 'secw':
offset = [-s2, s2inv]
elif sd == 'scw':
offset = [-s2, -s2inv]
elif sd == 'swcw':
offset = [-s2inv, -s2]
elif sd == 'wcw':
offset = [s2inv, -s2]
elif sd == 'nwcw':
offset = [s2, -s2inv]
elif sd == 'nccw':
offset = [-s2, s2inv]
elif sd == 'nwccw':
offset = [-s2inv, s2]
elif sd == 'wccw':
offset = [s2inv, s2]
elif sd == 'swccw':
offset = [s2, s2inv]
elif sd == 'sccw':
offset = [s2, -s2inv]
elif sd == 'seccw':
offset = [s2inv, -s2]
elif sd == 'eccw':
offset = [-s2inv, -s2]
elif sd == 'neccw':
offset = [-s2, -s2inv]
path += " ".join(str(x) for x in offset)
self.attrs['d'] += path
return self
def arc(self, sweep):
x = C.AR
y = C.AR
if sweep[0] == 'e' or sweep[1] == 'w':
x *= -1
if sweep[0] == 's' or sweep[1] == 'n':
y *= -1
cw = 1 if sweep == 'ne' or sweep == 'es' or sweep == 'sw' or sweep == 'wn' else 0
self.attrs['d'] += 'a{0} {0} 0 0 {1} {2} {3}'.format(C.AR, cw, x, y)
return self
def format(self):
self.attrs['d'] += 'h.5'
return self
def __repr__(self):
return 'Path(%r, %r)' % (self.x, self.y)
def wrapString(value):
return value if isinstance(value, DiagramItem) else Terminal(value)
class Style(DiagramItem):
def __init__(self, css):
self.name = 'style'
self.css = css
self.height = 0
self.width = 0
self.needsSpace = False
def __repr__(self):
return 'Style(%r)' % self.css
def format(self, x, y, width):
return self
def writeSvg(self, write):
# Write included stylesheet as CDATA. See https://developer.mozilla.org/en-US/docs/Web/SVG/Element/style
cdata = u'/* <![CDATA[ */\n{css}\n/* ]]> */\n'.format(css=self.css)
write(u'<style>{cdata}</style>'.format(cdata=cdata))
class Diagram(DiagramItem):
def __init__(self, *items, **kwargs):
# Accepts a type=[simple|complex] kwarg
DiagramItem.__init__(self, 'svg', {'class': C.DIAGRAM_CLASS, 'xmlns': "http://www.w3.org/2000/svg"})
self.type = kwargs.get("type", "simple")
self.items = [wrapString(item) for item in items]
if items and not isinstance(items[0], Start):
self.items.insert(0, Start(self.type))
if items and not isinstance(items[-1], End):
self.items.append(End(self.type))
self.css = kwargs.get("css", C.DEFAULT_STYLE)
if self.css:
self.items.insert(0, Style(self.css))
self.up = 0
self.down = 0
self.height = 0
self.width = 0
for item in self.items:
if isinstance(item, Style):
continue
self.width += item.width + (20 if item.needsSpace else 0)
self.up = max(self.up, item.up - self.height)
self.height += item.height
self.down = max(self.down - item.height, item.down)
if self.items[0].needsSpace:
self.width -= 10
if self.items[-1].needsSpace:
self.width -= 10
self.formatted = False
def __repr__(self):
if self.css:
items = ', '.join(map(repr, self.items[2:-1]))
else:
items = ', '.join(map(repr, self.items[1:-1]))
pieces = [] if not items else [items]
if self.css != C.DEFAULT_STYLE:
pieces.append('css=%r' % self.css)
if self.type != 'simple':
pieces.append('type=%r' % self.type)
return 'Diagram(%s)' % ', '.join(pieces)
def format(self, paddingTop=20, paddingRight=None, paddingBottom=None, paddingLeft=None):
if paddingRight is None:
paddingRight = paddingTop
if paddingBottom is None:
paddingBottom = paddingTop
if paddingLeft is None:
paddingLeft = paddingRight
x = paddingLeft
y = paddingTop + self.up
g = DiagramItem('g')
if C.STROKE_ODD_PIXEL_LENGTH:
g.attrs['transform'] = 'translate(.5 .5)'
for item in self.items:
if item.needsSpace:
Path(x, y).h(10).addTo(g)
x += 10
item.format(x, y, item.width).addTo(g)
x += item.width
y += item.height
if item.needsSpace:
Path(x, y).h(10).addTo(g)
x += 10
self.attrs['width'] = self.width + paddingLeft + paddingRight
self.attrs['height'] = self.up + self.height + self.down + paddingTop + paddingBottom
self.attrs['viewBox'] = "0 0 {width} {height}".format(**self.attrs)
g.addTo(self)
self.formatted = True
return self
def writeSvg(self, write):
if not self.formatted:
self.format()
return DiagramItem.writeSvg(self, write)
def parseCSSGrammar(self, text):
token_patterns = {
'keyword': r"[\w-]+\(?",
'type': r"<[\w-]+(\(\))?>",
'char': r"[/,()]",
'literal': r"'(.)'",
'openbracket': r"\[",
'closebracket': r"\]",
'closebracketbang': r"\]!",
'bar': r"\|",
'doublebar': r"\|\|",
'doubleand': r"&&",
'multstar': r"\*",
'multplus': r"\+",
'multhash': r"#",
'multnum1': r"{\s*(\d+)\s*}",
'multnum2': r"{\s*(\d+)\s*,\s*(\d*)\s*}",
'multhashnum1': r"#{\s*(\d+)\s*}",
'multhashnum2': r"#{\s*(\d+)\s*,\s*(\d*)\s*}"
}
class Sequence(DiagramItem):
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = True
self.up = 0
self.down = 0
self.height = 0
self.width = 0
for item in self.items:
self.width += item.width + (20 if item.needsSpace else 0)
self.up = max(self.up, item.up - self.height)
self.height += item.height
self.down = max(self.down - item.height, item.down)
if self.items[0].needsSpace:
self.width -= 10
if self.items[-1].needsSpace:
self.width -= 10
addDebug(self)
def __repr__(self):
items = ', '.join(map(repr, self.items))
return 'Sequence(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
Path(x+leftGap+self.width, y+self.height).h(rightGap).addTo(self)
x += leftGap
for i,item in enumerate(self.items):
if item.needsSpace and i > 0:
Path(x, y).h(10).addTo(self)
x += 10
item.format(x, y, item.width).addTo(self)
x += item.width
y += item.height
if item.needsSpace and i < len(self.items)-1:
Path(x, y).h(10).addTo(self)
x += 10
return self
class Stack(DiagramItem):
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = True
self.width = max(item.width + (20 if item.needsSpace else 0) for item in self.items)
# pretty sure that space calc is totes wrong
if len(self.items) > 1:
self.width += C.AR*2
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = 0
last = len(self.items) - 1
for i,item in enumerate(self.items):
self.height += item.height
if i > 0:
self.height += max(C.AR*2, item.up + C.VS)
if i < last:
self.height += max(C.AR*2, item.down + C.VS)
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'Stack(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
x += leftGap
xInitial = x
if len(self.items) > 1:
Path(x, y).h(C.AR).addTo(self)
x += C.AR
innerWidth = self.width - C.AR*2
else:
innerWidth = self.width
for i,item in enumerate(self.items):
item.format(x, y, innerWidth).addTo(self)
x += innerWidth
y += item.height
if i != len(self.items)-1:
(Path(x,y)
.arc('ne').down(max(0, item.down + C.VS - C.AR*2))
.arc('es').left(innerWidth)
.arc('nw').down(max(0, self.items[i+1].up + C.VS - C.AR*2))
.arc('ws').addTo(self))
y += max(item.down + C.VS, C.AR*2) + max(self.items[i+1].up + C.VS, C.AR*2)
x = xInitial + C.AR
if len(self.items) > 1:
Path(x, y).h(C.AR).addTo(self)
x += C.AR
Path(x, y).h(rightGap).addTo(self)
return self
class OptionalSequence(DiagramItem):
def __new__(cls, *items):
if len(items) <= 1:
return Sequence(*items)
else:
return super(OptionalSequence, cls).__new__(cls)
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = False
self.width = 0
self.up = 0
self.height = sum(item.height for item in self.items)
self.down = self.items[0].down
heightSoFar = 0
for i,item in enumerate(self.items):
self.up = max(self.up, max(C.AR * 2, item.up + C.VS) - heightSoFar)
heightSoFar += item.height
if i > 0:
self.down = max(self.height + self.down, heightSoFar + max(C.AR*2, item.down + C.VS)) - self.height
itemWidth = item.width + (20 if item.needsSpace else 0)
if i == 0:
self.width += C.AR + max(itemWidth, C.AR)
else:
self.width += C.AR*2 + max(itemWidth, C.AR) + C.AR
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'OptionalSequence(%s)' % items
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).right(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).right(rightGap).addTo(self)
x += leftGap
upperLineY = y - self.up
last = len(self.items) - 1
for i,item in enumerate(self.items):
itemSpace = 10 if item.needsSpace else 0
itemWidth = item.width + itemSpace
if i == 0:
# Upper skip
(Path(x,y)
.arc('se')
.up(y - upperLineY - C.AR*2)
.arc('wn')
.right(itemWidth - C.AR)
.arc('ne')
.down(y + item.height - upperLineY - C.AR*2)
.arc('ws')
.addTo(self))
# Straight line
(Path(x, y)
.right(itemSpace + C.AR)
.addTo(self))
item.format(x + itemSpace + C.AR, y, item.width).addTo(self)
x += itemWidth + C.AR
y += item.height
elif i < last:
# Upper skip
(Path(x, upperLineY)
.right(C.AR*2 + max(itemWidth, C.AR) + C.AR)
.arc('ne')
.down(y - upperLineY + item.height - C.AR*2)
.arc('ws')
.addTo(self))
# Straight line
(Path(x,y)
.right(C.AR*2)
.addTo(self))
item.format(x + C.AR*2, y, item.width).addTo(self)
(Path(x + item.width + C.AR*2, y + item.height)
.right(itemSpace + C.AR)
.addTo(self))
# Lower skip
(Path(x,y)
.arc('ne')
.down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)
.arc('ws')
.right(itemWidth - C.AR)
.arc('se')
.up(item.down + C.VS - C.AR*2)
.arc('wn')
.addTo(self))
x += C.AR*2 + max(itemWidth, C.AR) + C.AR
y += item.height
else:
# Straight line
(Path(x, y)
.right(C.AR*2)
.addTo(self))
item.format(x + C.AR*2, y, item.width).addTo(self)
(Path(x + C.AR*2 + item.width, y + item.height)
.right(itemSpace + C.AR)
.addTo(self))
# Lower skip
(Path(x,y)
.arc('ne')
.down(item.height + max(item.down + C.VS, C.AR*2) - C.AR*2)
.arc('ws')
.right(itemWidth - C.AR)
.arc('se')
.up(item.down + C.VS - C.AR*2)
.arc('wn')
.addTo(self))
return self
class AlternatingSequence(DiagramItem):
def __new__(cls, *items):
if len(items) == 2:
return super(AlternatingSequence, cls).__new__(cls)
else:
raise Exception("AlternatingSequence takes exactly two arguments, got %d" % len(items))
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
self.needsSpace = False
arc = C.AR
vert = C.VS
first = self.items[0]
second = self.items[1]
arcX = 1 / math.sqrt(2) * arc * 2
arcY = (1 - 1 / math.sqrt(2)) * arc * 2
crossY = max(arc, vert)
crossX = (crossY - arcY) + arcX
firstOut = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + first.down)
self.up = firstOut + first.height + first.up
secondIn = max(arc + arc, crossY/2 + arc + arc, crossY/2 + vert + second.up)
self.down = secondIn + second.height + second.down
self.height = 0
firstWidth = (20 if first.needsSpace else 0) + first.width
secondWidth = (20 if second.needsSpace else 0) + second.width
self.width = 2*arc + max(firstWidth, crossX, secondWidth) + 2*arc
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'AlternatingSequence(%s)' % items
def format(self, x, y, width):
arc = C.AR
gaps = determineGaps(width, self.width)
Path(x,y).right(gaps[0]).addTo(self)
x += gaps[0]
Path(x+self.width, y).right(gaps[1]).addTo(self)
# bounding box
# Path(x+gaps[0], y).up(self.up).right(self.width).down(self.up+self.down).left(self.width).up(self.down).addTo(self)
first = self.items[0]
second = self.items[1]
# top
firstIn = self.up - first.up
firstOut = self.up - first.up - first.height
Path(x,y).arc('se').up(firstIn-2*arc).arc('wn').addTo(self)
first.format(x + 2*arc, y - firstIn, self.width - 4*arc).addTo(self)
Path(x + self.width - 2*arc, y - firstOut).arc('ne').down(firstOut - 2*arc).arc('ws').addTo(self)
# bottom
secondIn = self.down - second.down - second.height
secondOut = self.down - second.down
Path(x,y).arc('ne').down(secondIn - 2*arc).arc('ws').addTo(self)
second.format(x + 2*arc, y + secondIn, self.width - 4*arc).addTo(self)
Path(x + self.width - 2*arc, y + secondOut).arc('se').up(secondOut - 2*arc).arc('wn').addTo(self)
# crossover
arcX = 1 / math.sqrt(2) * arc * 2
arcY = (1 - 1 / math.sqrt(2)) * arc * 2
crossY = max(arc, C.VS)
crossX = (crossY - arcY) + arcX
crossBar = (self.width - 4*arc - crossX)/2
(Path(x+arc, y - crossY/2 - arc).arc('ws').right(crossBar)
.arc_8('n', 'cw').l(crossX - arcX, crossY - arcY).arc_8('sw', 'ccw')
.right(crossBar).arc('ne').addTo(self))
(Path(x+arc, y + crossY/2 + arc).arc('wn').right(crossBar)
.arc_8('s', 'ccw').l(crossX - arcX, -(crossY - arcY)).arc_8('nw', 'cw')
.right(crossBar).arc('se').addTo(self))
return self
class Choice(DiagramItem):
def __init__(self, default, *items):
DiagramItem.__init__(self, 'g')
assert default < len(items)
self.default = default
self.items = [wrapString(item) for item in items]
self.width = C.AR * 4 + max(item.width for item in self.items)
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = self.items[default].height
for i, item in enumerate(self.items):
if i in [default-1, default+1]:
arcs = C.AR*2
else:
arcs = C.AR
if i < default:
self.up += max(arcs, item.height + item.down + C.VS + self.items[i+1].up)
elif i == default:
continue
else:
self.down += max(arcs, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)
self.down -= self.items[default].height # already counted in self.height
addDebug(self)
def __repr__(self):
items = ', '.join(repr(item) for item in self.items)
return 'Choice(%r, %s)' % (self.default, items)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
innerWidth = self.width - C.AR * 4
default = self.items[self.default]
# Do the elements that curve above
above = self.items[:self.default][::-1]
if above:
distanceFromY = max(
C.AR * 2,
default.up
+ C.VS
+ above[0].down
+ above[0].height)
for i,ni,item in doubleenumerate(above):
Path(x, y).arc('se').up(distanceFromY - C.AR * 2).arc('wn').addTo(self)
item.format(x + C.AR * 2, y - distanceFromY, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y - distanceFromY + item.height).arc('ne') \
.down(distanceFromY - item.height + default.height - C.AR*2).arc('ws').addTo(self)
if ni < -1:
distanceFromY += max(
C.AR,
item.up
+ C.VS
+ above[i+1].down
+ above[i+1].height)
# Do the straight-line path.
Path(x, y).right(C.AR * 2).addTo(self)
self.items[self.default].format(x + C.AR * 2, y, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y+self.height).right(C.AR * 2).addTo(self)
# Do the elements that curve below
below = self.items[self.default + 1:]
if below:
distanceFromY = max(
C.AR * 2,
default.height
+ default.down
+ C.VS
+ below[0].up)
for i, item in enumerate(below):
Path(x, y).arc('ne').down(distanceFromY - C.AR * 2).arc('ws').addTo(self)
item.format(x + C.AR * 2, y + distanceFromY, innerWidth).addTo(self)
Path(x + C.AR * 2 + innerWidth, y + distanceFromY + item.height).arc('se') \
.up(distanceFromY - C.AR * 2 + item.height - default.height).arc('wn').addTo(self)
distanceFromY += max(
C.AR,
item.height
+ item.down
+ C.VS
+ (below[i + 1].up if i+1 < len(below) else 0))
return self
class MultipleChoice(DiagramItem):
def __init__(self, default, type, *items):
DiagramItem.__init__(self, 'g')
assert 0 <= default < len(items)
assert type in ["any", "all"]
self.default = default
self.type = type
self.needsSpace = True
self.items = [wrapString(item) for item in items]
self.innerWidth = max(item.width for item in self.items)
self.width = 30 + C.AR + self.innerWidth + C.AR + 20
self.up = self.items[0].up
self.down = self.items[-1].down
self.height = self.items[default].height
for i, item in enumerate(self.items):
if i in [default-1, default+1]:
minimum = 10 + C.AR
else:
minimum = C.AR
if i < default:
self.up += max(minimum, item.height + item.down + C.VS + self.items[i+1].up)
elif i == default:
continue
else:
self.down += max(minimum, item.up + C.VS + self.items[i-1].down + self.items[i-1].height)
self.down -= self.items[default].height # already counted in self.height
addDebug(self)
def __repr__(self):
items = ', '.join(map(repr, self.items))
return 'MultipleChoice(%r, %r, %s)' % (self.default, self.type, items)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
default = self.items[self.default]
# Do the elements that curve above
above = self.items[:self.default][::-1]
if above:
distanceFromY = max(
10 + C.AR,
default.up
+ C.VS
+ above[0].down
+ above[0].height)
for i,ni,item in doubleenumerate(above):
(Path(x + 30, y)
.up(distanceFromY - C.AR)
.arc('wn')
.addTo(self))
item.format(x + 30 + C.AR, y - distanceFromY, self.innerWidth).addTo(self)
(Path(x + 30 + C.AR + self.innerWidth, y - distanceFromY + item.height)
.arc('ne')
.down(distanceFromY - item.height + default.height - C.AR - 10)
.addTo(self))
if ni < -1:
distanceFromY += max(
C.AR,
item.up
+ C.VS
+ above[i+1].down
+ above[i+1].height)
# Do the straight-line path.
Path(x + 30, y).right(C.AR).addTo(self)
self.items[self.default].format(x + 30 + C.AR, y, self.innerWidth).addTo(self)
Path(x + 30 + C.AR + self.innerWidth, y + self.height).right(C.AR).addTo(self)
# Do the elements that curve below
below = self.items[self.default + 1:]
if below:
distanceFromY = max(
10 + C.AR,
default.height
+ default.down
+ C.VS
+ below[0].up)
for i, item in enumerate(below):
(Path(x+30, y)
.down(distanceFromY - C.AR)
.arc('ws')
.addTo(self))
item.format(x + 30 + C.AR, y + distanceFromY, self.innerWidth).addTo(self)
(Path(x + 30 + C.AR + self.innerWidth, y + distanceFromY + item.height)
.arc('se')
.up(distanceFromY - C.AR + item.height - default.height - 10)
.addTo(self))
distanceFromY += max(
C.AR,
item.height
+ item.down
+ C.VS
+ (below[i + 1].up if i+1 < len(below) else 0))
text = DiagramItem('g', attrs={"class": "diagram-text"}).addTo(self)
DiagramItem('title', text="take one or more branches, once each, in any order" if self.type=="any" else "take all branches, once each, in any order").addTo(text)
DiagramItem('path', attrs={
"d": "M {x} {y} h -26 a 4 4 0 0 0 -4 4 v 12 a 4 4 0 0 0 4 4 h 26 z".format(x=x+30, y=y-10),
"class": "diagram-text"
}).addTo(text)
DiagramItem('text', text="1+" if self.type=="any" else "all", attrs={
"x": x + 15,
"y": y + 4,
"class": "diagram-text"
}).addTo(text)
DiagramItem('path', attrs={
"d": "M {x} {y} h 16 a 4 4 0 0 1 4 4 v 12 a 4 4 0 0 1 -4 4 h -16 z".format(x=x+self.width-20, y=y-10),
"class": "diagram-text"
}).addTo(text)
DiagramItem('text', text=u"↺", attrs={
"x": x + self.width - 10,
"y": y + 4,
"class": "diagram-arrow"
}).addTo(text)
return self
class HorizontalChoice(DiagramItem):
def __new__(cls, *items):
if len(items) <= 1:
return Sequence(*items)
else:
return super(HorizontalChoice, cls).__new__(cls)
def __init__(self, *items):
DiagramItem.__init__(self, 'g')
self.items = [wrapString(item) for item in items]
allButLast = self.items[:-1]
middles = self.items[1:-1]
first = self.items[0]
last = self.items[-1]
self.needsSpace = False
self.width = (C.AR # starting track
+ C.AR*2 * (len(self.items)-1) # inbetween tracks
+ sum(x.width + (20 if x.needsSpace else 0) for x in self.items) #items
+ (C.AR if last.height > 0 else 0) # needs space to curve up
+ C.AR) #ending track
# Always exits at entrance height
self.height = 0
# All but the last have a track running above them
self._upperTrack = max(
C.AR*2,
C.VS,
max(x.up for x in allButLast) + C.VS
)
self.up = max(self._upperTrack, last.up)
# All but the first have a track running below them
# Last either straight-lines or curves up, so has different calculation
self._lowerTrack = max(
C.VS,
max(x.height+max(x.down+C.VS, C.AR*2) for x in middles) if middles else 0,
last.height + last.down + C.VS
)
if first.height < self._lowerTrack:
# Make sure there's at least 2*C.AR room between first exit and lower track
self._lowerTrack = max(self._lowerTrack, first.height + C.AR*2)
self.down = max(self._lowerTrack, first.height + first.down)
addDebug(self)
def format(self, x, y, width):
# Hook up the two sides if self is narrower than its stated width.
leftGap, rightGap = determineGaps(width, self.width)
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y + self.height).h(rightGap).addTo(self)
x += leftGap
first = self.items[0]
last = self.items[-1]
# upper track
upperSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[:-1])
+ (len(self.items) - 2) * C.AR*2
- C.AR)
(Path(x,y)
.arc('se')
.up(self._upperTrack - C.AR*2)
.arc('wn')
.h(upperSpan)
.addTo(self))
# lower track
lowerSpan = (sum(x.width+(20 if x.needsSpace else 0) for x in self.items[1:])
+ (len(self.items) - 2) * C.AR*2
+ (C.AR if last.height > 0 else 0)
- C.AR)
lowerStart = x + C.AR + first.width+(20 if first.needsSpace else 0) + C.AR*2
(Path(lowerStart, y+self._lowerTrack)
.h(lowerSpan)
.arc('se')
.up(self._lowerTrack - C.AR*2)
.arc('wn')
.addTo(self))
# Items
for [i, item] in enumerate(self.items):
# input track
if i == 0:
(Path(x,y)
.h(C.AR)
.addTo(self))
x += C.AR
else:
(Path(x, y - self._upperTrack)
.arc('ne')
.v(self._upperTrack - C.AR*2)
.arc('ws')
.addTo(self))
x += C.AR*2
# item
itemWidth = item.width + (20 if item.needsSpace else 0)
item.format(x, y, itemWidth).addTo(self)
x += itemWidth
# output track
if i == len(self.items)-1:
if item.height == 0:
(Path(x,y)
.h(C.AR)
.addTo(self))
else:
(Path(x,y+item.height)
.arc('se')
.addTo(self))
elif i == 0 and item.height > self._lowerTrack:
# Needs to arc up to meet the lower track, not down.
if item.height - self._lowerTrack >= C.AR*2:
(Path(x, y+item.height)
.arc('se')
.v(self._lowerTrack - item.height + C.AR*2)
.arc('wn')
.addTo(self))
else:
# Not enough space to fit two arcs
# so just bail and draw a straight line for now.
(Path(x, y+item.height)
.l(C.AR*2, self._lowerTrack - item.height)
.addTo(self))
else:
(Path(x, y+item.height)
.arc('ne')
.v(self._lowerTrack - item.height - C.AR*2)
.arc('ws')
.addTo(self))
return self
def Optional(item, skip=False):
return Choice(0 if skip else 1, Skip(), item)
class OneOrMore(DiagramItem):
def __init__(self, item, repeat=None):
DiagramItem.__init__(self, 'g')
repeat = repeat or Skip()
self.item = wrapString(item)
self.rep = wrapString(repeat)
self.width = max(self.item.width, self.rep.width) + C.AR * 2
self.height = self.item.height
self.up = self.item.up
self.down = max(
C.AR * 2,
self.item.down + C.VS + self.rep.up + self.rep.height + self.rep.down)
self.needsSpace = True
addDebug(self)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y +self.height).h(rightGap).addTo(self)
x += leftGap
# Draw item
Path(x, y).right(C.AR).addTo(self)
self.item.format(x + C.AR, y, self.width - C.AR * 2).addTo(self)
Path(x + self.width - C.AR, y + self.height).right(C.AR).addTo(self)
# Draw repeat arc
distanceFromY = max(C.AR*2, self.item.height + self.item.down + C.VS + self.rep.up)
Path(x + C.AR, y).arc('nw').down(distanceFromY - C.AR * 2) \
.arc('ws').addTo(self)
self.rep.format(x + C.AR, y + distanceFromY, self.width - C.AR*2).addTo(self)
Path(x + self.width - C.AR, y + distanceFromY + self.rep.height).arc('se') \
.up(distanceFromY - C.AR * 2 + self.rep.height - self.item.height).arc('en').addTo(self)
return self
def __repr__(self):
return 'OneOrMore(%r, repeat=%r)' % (self.item, self.rep)
def ZeroOrMore(item, repeat=None, skip=False):
result = Optional(OneOrMore(item, repeat), skip)
return result
class Start(DiagramItem):
def __init__(self, type="simple", label=None):
DiagramItem.__init__(self, 'g')
if label:
self.width = max(20, len(label) * C.CHAR_WIDTH + 10)
else:
self.width = 20
self.up = 10
self.down = 10
self.type = type
self.label = label
addDebug(self)
def format(self, x, y, _width):
path = Path(x, y-10)
if self.type == "complex":
path.down(20).m(0, -10).right(self.width).addTo(self)
else:
path.down(20).m(10, -20).down(20).m(-10, -10).right(self.width).addTo(self)
if self.label:
DiagramItem('text', attrs={"x":x, "y":y-15, "style":"text-anchor:start"}, text=self.label).addTo(self)
return self
def __repr__(self):
return 'Start(type=%r, label=%r)' % (self.type, self.label)
class End(DiagramItem):
def __init__(self, type="simple"):
DiagramItem.__init__(self, 'path')
self.width = 20
self.up = 10
self.down = 10
self.type = type
addDebug(self)
def format(self, x, y, _width):
if self.type == "simple":
self.attrs['d'] = 'M {0} {1} h 20 m -10 -10 v 20 m 10 -20 v 20'.format(x, y)
elif self.type == "complex":
self.attrs['d'] = 'M {0} {1} h 20 m 0 -10 v 20'.format(x, y)
return self
def __repr__(self):
return 'End(type=%r)' % self.type
class Terminal(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g', {'class': 'terminal'})
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.CHAR_WIDTH + 20
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'Terminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,
'height': self.up + self.down, 'rx': 10, 'ry': 10}).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class NonTerminal(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g', {'class': 'non-terminal'})
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.CHAR_WIDTH + 20
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'NonTerminal(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
DiagramItem('rect', {'x': x + leftGap, 'y': y - 11, 'width': self.width,
'height': self.up + self.down}).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 4}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class Comment(DiagramItem):
def __init__(self, text, href=None, title=None):
DiagramItem.__init__(self, 'g')
self.text = text
self.href = href
self.title = title
self.width = len(text) * C.COMMENT_CHAR_WIDTH + 10
self.up = 11
self.down = 11
self.needsSpace = True
addDebug(self)
def __repr__(self):
return 'Comment(%r, href=%r, title=%r)' % (self.text, self.href, self.title)
def format(self, x, y, width):
leftGap, rightGap = determineGaps(width, self.width)
# Hook up the two sides if self is narrower than its stated width.
Path(x, y).h(leftGap).addTo(self)
Path(x + leftGap + self.width, y).h(rightGap).addTo(self)
text = DiagramItem('text', {'x': x + width / 2, 'y': y + 5, 'class': 'comment'}, self.text)
if self.href is not None:
a = DiagramItem('a', {'xlink:href':self.href}, text).addTo(self)
text.addTo(a)
else:
text.addTo(self)
if self.title is not None:
DiagramItem('title', {}, self.title).addTo(self)
return self
class Skip(DiagramItem):
def __init__(self):
DiagramItem.__init__(self, 'g')
self.width = 0
self.up = 0
self.down = 0
addDebug(self)
def format(self, x, y, width):
Path(x, y).right(width).addTo(self)
return self
def __repr__(self):
return 'Skip()'
def show_diagram(graph, log=False):
with io.StringIO() as f:
d = Diagram(graph)
if log:
print(d)
d.writeSvg(f.write)
mysvg = f.getvalue()
return mysvg
```
# **Jupyter Notebook to demonstrate (simple) Linear Regression for Advertising/Sales Prediction**
Linear Regression is a simple yet powerful and widely used algorithm in data science, with a plethora of real-world applications.
The purpose of this tutorial/notebook is to give a clear idea of how linear regression can be used to solve a marketing problem, such as selecting the right channels to advertise a product.
Problem Statement and Example Data: Build a model which predicts sales based on the money spent on different platforms for marketing. Using this notebook, we will build a linear regression model to predict Sales using an appropriate predictor variable, based on the advertising dataset.
### Useful resources:
* [Kaggle](https://www.kaggle.com) Multiple Lineare Regression Notebooks and Tutorials
* [Analytics Vidhya Blog](https://www.analyticsvidhya.com/blog/2021/10/everything-you-need-to-know-about-linear-regression/) "Everything you need to Know about Linear Regression"
* [CodeSource](https://codesource.io/building-a-regression-model-to-predict-sales-revenue-using-sci-kit-learn/) "Building a Regression Model to Predict Sales Revenue using Sci-Kit Learn"
---
Author:
* dr.daniel benninger [> Linkedin](https://www.linkedin.com/in/danielbenninger/)
History:
* v1, June 2021, dbe --- adapted version for CAS DA4
## Setup Environment - Load necessary Libraries and Functions
First, we need to import some libraries:
* pandas: data manipulation and analysis
* numpy : library for scientific computing in Python, used for working with arrays and matrices
* matplotlib : plotting library for data visualization
* rcParams: To change the matplotlib properties like figure size
* seaborn: data visualization library based on matplotlib
* sklearn: machine learning library, used here for train/test splitting and evaluation metrics
* statsmodels: Using statsmodels module classes and functions for linear regression
**Note:** Make sure they are installed already before importing them
```
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
# Import the numpy and pandas package
import numpy as np
import pandas as pd
# Import the data visualization package
import matplotlib.pyplot as plt
import seaborn as sns
from pylab import rcParams
# configure plot area
rcParams['figure.figsize'] = 12, 8
```
## Reading and Understanding the Data
```
# Input data files - in colab environment - are available in the "/content/sample_data" directory.
# For example: check the current files in the input directory
from subprocess import check_output
print(check_output(["ls", "/content/sample_data"]).decode("utf8"))
#advertising = pd.DataFrame(pd.read_csv("../input/advertising.csv"))
#load Advertising dataset from local directory
advertising = pd.read_csv("/content/sample_data/DATA_Werbung.csv")
advertising.head()
```
## Data Inspection
```
advertising.shape
advertising.info()
advertising.describe()
```
## Data Cleaning & Analysis
```
# Checking Null values
advertising.isnull().sum()*100/advertising.shape[0]
```
Note: There are no *NULL* values in the dataset, hence it is clean.
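The dataset here happens to be clean, but as a hedged sketch of what the same check plus a simple fix would look like on data that does have gaps (the mini-frame below is invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical mini-frame with one missing value
demo = pd.DataFrame({"TV": [230.1, 44.5, np.nan, 151.5],
                     "Sales": [22.1, 10.4, 9.3, 18.5]})

# Percentage of missing values per column (same formula as in the cell above)
print(demo.isnull().sum() * 100 / demo.shape[0])

# One common strategy if NULLs were present: impute with the column mean
demo["TV"] = demo["TV"].fillna(demo["TV"].mean())
```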
```
# Analysis with Box-Whisker Plots (Lageparameter Analyse)
fig, axs = plt.subplots(3, figsize = (10,5))
plt1 = sns.boxplot(advertising['TV'], ax = axs[0])
plt2 = sns.boxplot(advertising['Newspaper'], ax = axs[1])
plt3 = sns.boxplot(advertising['Radio'], ax = axs[2])
plt.tight_layout()
```
Note: There are no considerable *outliers* present in the data.
## Diagnostic Analytics - Exploratory Data Analysis
### Univariate Analysis
Focus on the **Sales** (Target) Variable
```
# Analysis with Box-Whisker Plots (Lageparameter Analyse)
fig, axs = plt.subplots(1, figsize = (10,5))
sns.boxplot(advertising['Sales'])
plt.show()
```
### Bivariate Analysis
Focus on the Observation and Target Variable combinations
```
# Analysis with Scatterplot (Pairplots)
# Let's see how Sales are related with other variables
sns.pairplot(advertising, x_vars=['TV', 'Newspaper', 'Radio'], y_vars='Sales', height=4, aspect=1, kind='scatter')
plt.title('Pairwise Scatterplots')
plt.show()
# Analysis with Heatmap
# Let's see the correlation between different variables
sns.heatmap(advertising.corr(), cmap="YlOrBr", annot = True)
plt.show()
```
Note: As is visible from the *pairplot* and the *heatmap*, the variable `TV` seems to be most correlated with `Sales`.
So let's go ahead and perform **simple linear regression** using `TV` as our feature variable.
## Model Building - Linear Regression
#### Performing Simple Linear Regression
The equation of linear regression:<br>
$y = c + m_1x_1 + m_2x_2 + ... + m_nx_n$
- $y$ is the response (*target*)
- $c$ is the intercept
- $m_1$ is the coefficient for the first feature (*observation*)
- $m_n$ is the coefficient for the nth feature<br>
In our case:
$y = c + m_1 \times TV$
The $m$ values are called the model **coefficients** or **model parameters**.
---
### Generic Steps in Model Building
1) We first assign
* the feature (**observation**) (`TV`, in this case) to the **variable `X`**
* and the response (**target**) variable (`Sales`) to the **variable `y`**.
```
X = advertising['TV']
y = advertising['Sales']
```
2) Then split Dataset into Train-Test Parts
You now need to split the variables into training and testing sets. You'll do this by importing `train_test_split` from the `sklearn.model_selection` library. It is usually good practice to keep 70% of the data in your train dataset and the remaining 30% in your test dataset.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.7, test_size = 0.3, random_state = 100)
# Let's have a look at the TRAIN dataset
X_train.head()
y_train.head()
```
### Build a Linear Model
You first need to import the `statsmodels.api` library, which you'll use to perform the linear regression.
```
import statsmodels.api as sm
```
By default, the `statsmodels` library fits a line on the dataset which passes through the origin. But in order to have an intercept, you need to manually use the `add_constant` attribute of `statsmodels`.
And once you've added the constant to your `X_train` dataset, you can go ahead and fit a regression line using the **Ordinary Least Squares** (`OLS`) attribute of `statsmodels` as shown below
```
# Add a constant to get an intercept
X_train_sm = sm.add_constant(X_train)
# Fit the Regression Line using 'OLS' (ordinary least square)
lr = sm.OLS(y_train, X_train_sm).fit()
# Print the regression parameters,
# i.e. the intercept and the slope of the fitted regression line
lr.params
# Performing a summary operation lists out all the different parameters of the regression line fitted
print(lr.summary())
```
#### Key Statistics of the Linear Model
Looking at the key values from the model summary above, we are concerned with:
1. The coefficients and significance (**p-values**)
2. **R-squared**
3. **F statistic** and its significance
##### 1. The coefficient for TV is 0.054, with a very low p value
*The coefficient is statistically significant*. So the association is not purely by chance.
##### 2. R - squared is 0.816
Meaning that 81.6% of the variance in `Sales` is explained by `TV`
*This is a decent R-squared value.*
##### 3. The F statistic has a very low p-value (practically zero)
Meaning that *the model fit is statistically significant*, and the explained variance isn't purely by chance.
---
Note: **The fit is significant**
Let's visualize how well the model fit the data.
From the parameters that we get, our linear regression equation becomes:
$ Sales = 6.948 + 0.054 \times TV $
```
# Visualize Orginal Data and Linear Model
plt.scatter(X_train, y_train)
plt.plot(X_train, 6.948 + 0.054*X_train, 'r')
plt.title('Original Data and Linear Model')
# Set x-axis label
plt.xlabel('TV')
# Set y-axis label
plt.ylabel('Sales')
plt.show()
```
## Model Evaluation
**Residual analysis**, to validate assumptions of the model, and hence the reliability for inference
#### Distribution of the Error Terms
We need to check if the error terms are also normally distributed (which is infact, one of the major assumptions of linear regression), let us plot the *histogram of the error terms* and see what it looks like.
```
y_train_pred = lr.predict(X_train_sm)
res = (y_train - y_train_pred)
plt.figure()
sns.distplot(res, bins = 15)
plt.title('Model Evaluation: Distribution of the Error Terms', fontsize = 15)
plt.xlabel('y_train - y_train_pred', fontsize = 15) # X-label
plt.show()
```
Note: The residuals follow a *normal distribution with mean 0*. All good!
#### Looking for Patterns in the Residuals
```
plt.scatter(X_train,res)
plt.title('Model Evaluation: Residual Patterns', fontsize = 15)
plt.ylabel('y_train - y_train_pred', fontsize = 15) # Y-label
plt.show()
```
We are confident that the model fit isn't by chance, and has decent predictive power. The normality of residual terms allows some inference on the coefficients.
**However**, the variance of the residuals increasing with X indicates that there is significant variation that this model is unable to explain.
As you can see, the regression line is a pretty good fit to the data!
### Predictions on the Test Set
Now that you have fitted a regression line on your train dataset, it's time to make some predictions on the test data. For this, you first need to add a constant to the `X_test` data like you did for `X_train` and then you can simply go on and predict the y values corresponding to `X_test` using the `predict` attribute of the fitted regression line.
```
# Add a constant to X_test
X_test_sm = sm.add_constant(X_test)
# Predict the y values corresponding to X_test_sm
y_pred = lr.predict(X_test_sm)
y_pred.head()
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
```
##### Looking at the Root Mean Squared Error (RMSE) on the test set
```
# mean_squared_error returns the MSE; take its square root to get the RMSE
RMSE = np.sqrt(mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error (RMSE): ', RMSE)
```
##### Checking the R-squared on the test set
```
r_squared = r2_score(y_test, y_pred)
print('R-squared: ',r_squared)
```
##### Visualizing the Fit on the Test Set
```
plt.scatter(X_test, y_test)
plt.plot(X_test, 6.948 + 0.054 * X_test, 'r')
plt.title('Model Evaluation: Visualizing the Fit on the Test Set', fontsize = 15)
plt.ylabel('Sales') # Y-label
plt.xlabel('TV') # X-label
plt.show()
```
## Model Deployment
1) Define Linear Model Function
```
def lr_model_prediction (Xarg):
intercept = 6.948
coeff_X = 0.054
result = intercept + coeff_X * Xarg
return result
```
2) Apply Linear Model Function to new observation values
```
TV_ads = 375
print('Sales Prediction:', lr_model_prediction(TV_ads), ' for TV ads volume: ', TV_ads)
```
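Because the deployed model function is just arithmetic, it also works unchanged on a whole vector of budget values at once; a small sketch (the coefficient values are the ones estimated above, the budgets are invented):

```python
import numpy as np

def lr_model_prediction(Xarg):
    # Intercept and slope taken from the fitted model above
    return 6.948 + 0.054 * Xarg

budgets = np.array([100, 250, 375])
preds = lr_model_prediction(budgets)
print(preds)
```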
```
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Cloud - Putting it all together
In this example, we will use all of the features outlined in the [Keras cloud guide](https://www.tensorflow.org/guide/keras/training_keras_models_on_cloud) to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages.
## Setup
```
!pip install tensorflow-cloud
import datetime
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_cloud as tfc
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
### Cloud Configuration
In order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our [authentication key](https://cloud.google.com/docs/authentication/getting-started) and specify our [Cloud storage bucket](https://cloud.google.com/storage/docs/creating-buckets) for image building and publishing.
```
if not tfc.remote():
from google.colab import files
key_upload = files.upload()
key_path = list(key_upload.keys())[0]
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = key_path
os.system(f"gcloud auth activate-service-account --key-file {key_path}")
GCP_BUCKET = "[your-bucket-name]" #@param {type:"string"}
```
## Model Creation
### Dataset preprocessing
We'll be loading our training data from TensorFlow Datasets:
```
(ds_train, ds_test), metadata = tfds.load(
"stanford_dogs",
split=["train", "test"],
shuffle_files=True,
with_info=True,
as_supervised=True,
)
NUM_CLASSES = metadata.features["label"].num_classes
```
Let's visualize this dataset:
```
print("Number of training samples: %d" % tf.data.experimental.cardinality(ds_train))
print("Number of test samples: %d" % tf.data.experimental.cardinality(ds_test))
print("Number of classes: %d" % NUM_CLASSES)
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
```
Here we will resize and rescale our images to fit into our model's input, as well as create batches.
```
IMG_SIZE = 224
BATCH_SIZE = 64
BUFFER_SIZE = 2
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
def input_preprocess(image, label):
image = tf.keras.applications.resnet50.preprocess_input(image)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)
```
### Model Architecture
We're using ResNet50 pretrained on ImageNet, from the Keras Applications module.
```
inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
base_model = tf.keras.applications.ResNet50(
weights="imagenet", include_top=False, input_tensor=inputs
)
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
base_model.trainable = False
```
### Callbacks using Cloud Storage
```
MODEL_PATH = "resnet-dogs"
checkpoint_path = os.path.join("gs://", GCP_BUCKET, MODEL_PATH, "save_at_{epoch}")
tensorboard_path = os.path.join(
"gs://", GCP_BUCKET, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
callbacks = [
# TensorBoard will store logs for each epoch and graph performance for us.
keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
# ModelCheckpoint will save models after each epoch for retrieval later.
keras.callbacks.ModelCheckpoint(checkpoint_path),
# EarlyStopping will terminate training when val_loss ceases to improve.
keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
```
Here, we're using the `tfc.remote()` flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab.
```
if tfc.remote():
epochs = 500
train_data = ds_train
test_data = ds_test
else:
epochs = 1
train_data = ds_train.take(5)
test_data = ds_test.take(5)
callbacks = None
model.fit(
train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2
)
if tfc.remote():
SAVE_PATH = os.path.join("gs://", GCP_BUCKET, MODEL_PATH)
model.save(SAVE_PATH)
```
Our model requires two additional libraries. We'll create a `requirements.txt` which specifies those libraries:
```
requirements = ["tensorflow-datasets", "matplotlib"]
f = open("requirements.txt", 'w')
f.write('\n'.join(requirements))
f.close()
```
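A slightly more idiomatic way to write the same file is with `pathlib` (behavior is identical apart from the trailing newline added here):

```python
from pathlib import Path

requirements = ["tensorflow-datasets", "matplotlib"]
Path("requirements.txt").write_text("\n".join(requirements) + "\n")
print(Path("requirements.txt").read_text())
```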
Let's add a job label so we can document our job logs later:
```
job_labels = {"job":"resnet-dogs"}
```
### Train on Cloud
All that's left to do is run our model on Cloud. To recap, our `run()` call enables:
- A model that will be trained and stored on Cloud, including checkpoints
- Tensorboard callback logs that will be accessible through tensorboard.dev
- Specific python library requirements that will be fulfilled
- Customizable job labels for log documentation
- Real-time streaming logs printed in Colab
- Deeply customizable machine configuration (ours will use two Tesla T4s)
- An automatic resolution of distribution strategy for this configuration
```
tfc.run(
requirements_txt="requirements.txt",
distribution_strategy="auto",
chief_config=tfc.MachineConfig(
cpu_cores=8,
memory=30,
accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,
accelerator_count=2,
),
docker_config=tfc.DockerConfig(
image_build_bucket=GCP_BUCKET,
),
job_labels=job_labels,
stream_logs=True,
)
```
### Evaluate your model
We'll use the cloud storage directories we saved for callbacks in order to load TensorBoard and retrieve the saved model. TensorBoard logs can be used to monitor training performance in real time.
```
!tensorboard dev upload --logdir $tensorboard_path --name "ResNet Dogs"
if tfc.remote():
model = tf.keras.models.load_model(SAVE_PATH)
model.evaluate(test_data)
```
# Project Euler Problems 1 and 2
> Multiples of 3 or 5 and even Fibonacci numbers in Python
- toc: false
- badges: true
- comments: true
- categories: [euler, programming]
In order to stay fresh with general programming skills I am going to attempt various Project Euler problems and walk through my solutions. For those of you that do not know [Project Euler](https://projecteuler.net/about) it "is a series of challenging mathematical/computer programming problems." Simply put, it is a great way to practice your computer programming skills.
In this blog I'm going to work on Project Euler problems [1](https://projecteuler.net/problem=1) and [2](https://projecteuler.net/problem=2).
## Problem 1: Multiples of 3 or 5
*If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.*
*Find the sum of all the multiples of 3 or 5 below 1000.*
#### Breaking down the problem:
It's pretty straightforward. We want all the natural numbers (so positive integers... no decimals) under 1,000 that are multiples of 3 or 5. A multiple is simply "a number that can be divided by another number without a remainder." So, multiples of 3 include 3, 6, 9, 12...
Then we just want to take the sum of all those numbers.
I am going to create a function with 2 arguments:
1. A list of the numbers we want multiples of (so in this case 3 and 5)
2. The max number we want (in this case 1000)
The function will then do the following steps:
1. Initialize a list (`sum_list`). This is where we will store the multiples of the numbers we are interested in. For the case of this problem, I mean all the multiples of 3 and 5.
2. Loop over the numbers from 1 up to (but not including) the `max_number` (1000 for this particular problem). We'll call each number `i`.
3. Loop through each of the dividers (e.g. 3 and 5) and use the modulo operator, `%`, to determine if the number, `i`, is a multiple of that divider. The modulo operator returns the remainder of a division problem (e.g. `4 % 3 = 1`).
4. If the number `i` has no remainder when divided by a divider we will `.append` (or add) it to the `sum_list` we created earlier.
5. One problem this creates is there could be duplicates. For example, 15 would show up twice in `sum_list` as both 3 and 5 go into it. We can solve this by removing duplicates in the `sum_list`. The [`set()`](https://www.w3schools.com/python/ref_func_set.asp) function is an easy way to do this. The function converts the object into a Python [set](https://www.w3schools.com/python/python_sets.asp) (one of the 4 major built-in Python collection types along with lists, tuples, and dictionaries). Sets "do not allow duplicate values", so `list(set(sum_list))` converts `sum_list` into a set, effectively dropping duplicate values, then converts it back into a list.
6. The last step is to use the `sum()` function to calculate the sum of all the multiples stored in `sum_list`.
```
def euler_1(dividers, max_number):
sum_list = []
for i in range(1, max_number):
for div in dividers:
if i % div == 0:
sum_list.append(i)
sum_list = list(set(sum_list))
return(sum(sum_list))
```
Running our function with the arguments from the question (multiples of 3 and 5 up to 1000) we get an answer of 233,168.
```
euler_1([3,5], 1000)
```
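Since `range` plus a generator expression already handles the looping, and the `or` condition avoids double counting, the same answer also comes out of a one-liner:

```python
total = sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0)
print(total)  # → 233168
```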
## Problem 2: Even Fibonacci numbers
*Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:*
*1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...*
*By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.*
#### Breaking down the problem:
This problem will involve 3 parts:
1. Creating a Fibonacci sequence that stops right before 4,000,000
2. Sorting out only the even numbers
3. Finding the sum of all those even numbers
Similar to the first Euler problem, I'm going to create a function that takes two arguments:
1. The maximum number for the Fibonacci sequence (4,000,000 in this case)
2. A boolean argument determining whether we want to sum the even-valued terms
The function will then do the following steps:
1. Create a variable called `modulo_div` that will allow us to toggle whether we want to sum the even-valued terms (such as in this problem) or odd-valued terms
2. Create a list, `fibonacci`, with the first two terms of the Fibonacci sequence (1 and 2)
3. Create a variable, `i`, which will serve as an iterator
4. Put `i` into a while loop. Then, add together the last two numbers of the `fibonacci` list and make that sum the value of `i`. For example, on the first iteration we will sum 1 and 2 making `i` equal to 3.
5. As long as the value of `i` is less than 4,000,000 add it to the end of `fibonacci` using `append()`. If the value of `i` exceeds 4,000,000 do not add it to the list and break the loop.
6. Once the loop is broken we create a new list, `fibonacci_portion`. Use a Python list comprehension to go through the `fibonacci` list and only add even numbers, using the same modulo-operator method we used to find multiples in the previous problem.
7. Finally, return the sum of the fibonacci_portion list (so all even numbers in the Fibonacci sequence up to 4,000,000)
```
def euler_2(max_number, even = True):
    # Remainder (mod 2) of the terms we keep: 0 selects even terms, 1 selects odd terms
    if even == True:
        modulo_div = 0
    else:
        modulo_div = 1
    fibonacci = [1,2]
    i = 1
    while i > 0:
        i = sum(fibonacci[-2:])
        if i > max_number:
            break
        fibonacci.append(i)
    fibonacci_portion = [j for j in fibonacci if j % 2 == modulo_div]
    return(sum(fibonacci_portion))
```
Running our function with the arguments from the question (a maximum of 4,000,000 and looking at even numbers) we get an answer of 4,613,732.
```
euler_2(4000000, even = True)
```
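As a cross-check, every third Fibonacci number is even (odd + odd = even, and the next two sums are odd), so the even terms can also be generated directly without building and filtering the full list — a minimal sketch:

```
def even_fibonacci_sum(max_number):
    """Sum the even Fibonacci terms up to max_number.

    Even terms are every third term (2, 8, 34, ...), and each
    satisfies E(n) = 4*E(n-1) + E(n-2).
    """
    total, a, b = 0, 2, 8  # first two even Fibonacci terms
    while a <= max_number:
        total += a
        a, b = b, 4 * b + a
    return total

print(even_fibonacci_sum(4_000_000))  # → 4613732, matching euler_2 above
```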
| github_jupyter |
<a href="https://colab.research.google.com/github/ginttone/test_visuallization/blob/master/2_T_autompg_xgboost.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Data loading
```
import pandas as pd
df = pd.read_csv('./auto-mpg.csv', header=None)
df.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
df.info()
df[['horsepower','name']].describe(include='all')
```
## replace()
```
df['horsepower'].value_counts()
df['horsepower'].unique()
df_horsepower = df['horsepower'].replace(to_replace='?', value=float('nan'), inplace=False)  # '?' marks missing values; replace with NaN
df_horsepower.unique()
df_horsepower = df_horsepower.astype('float')
df_horsepower.mean()
df['horsepower'] = df_horsepower.fillna(104)
df.info()
df['name'].unique()
df.head()
```
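The replace-then-fill pattern above can be seen end to end on a toy series (the values below are made up for illustration; in the real data the mean horsepower rounds to 104):

```
import numpy as np
import pandas as pd

# Toy horsepower column with the same '?' placeholder used by auto-mpg
s = pd.Series(['130.0', '165.0', '?', '150.0'])

s = s.replace('?', np.nan).astype('float')  # '?' -> NaN, then numeric
s = s.fillna(round(s.mean()))               # impute missing with the rounded mean

print(s.tolist())  # the '?' entry becomes the mean of the other values
```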
## Separating categorical and continuous columns
```
df.head(8)
```
### check columns
- Continuous: displacement, horsepower, weight, acceleration, mpg
- Categorical: model year, name, cylinders, origin
```
df['name'].value_counts()
df['origin'].value_counts()
df['mpg'].describe(include='all')
df['mpg'].value_counts()
```
## Normalization step
```
Y = df['mpg']
X_contiuns = df[['displacement', 'horsepower', 'weight', 'acceleration']]
X_category = df[['model year', 'cylinders', 'origin']]
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
type(scaler)
scaler.fit(X_contiuns)
X = scaler.transform(X_contiuns)
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
type(lr)
lr.fit(X,Y)
lr.score(X,Y)
df.head(1)
```
### X_contiuns = df[['displacement', 'horsepower', 'weight', 'acceleration']]
```
x_cusmter = scaler.transform([[307.0,130.0,3504.0,12.0]])
x_cusmter.shape
lr.predict(x_cusmter)
```
### XGboost
```
import xgboost as xgb
model_xgb = xgb.XGBRegressor()
model_xgb.fit(X, Y)
model_xgb.score(X,Y)
model_xgb.predict(x_cusmter)
```
### LightXGboost
```
from lightgbm import LGBMRegressor
model_lxgb = LGBMRegressor()
model_lxgb.fit(X, Y)
model_lxgb.score(X, Y)
```
## pickle
```
import pickle
pickle.dump(lr, open('./autompg_lr.pkl','wb'))
```
```
!ls -l ./autompg_lr.pkl
pickle.load(open('./autompg_lr.pkl', 'rb'))
pickle.dump(scaler, open('./autompg_standardscaler.pkl','wb'))
```
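The serving side has to reload both files with `pickle.load` before predicting; the round trip itself looks like this (using a stand-in dictionary instead of a fitted model, and a temporary file path):

```
import os
import pickle
import tempfile

obj = {'coef': [1.5, -0.2], 'intercept': 0.7}  # stand-in for a fitted model

path = os.path.join(tempfile.gettempdir(), 'demo_model.pkl')
with open(path, 'wb') as f:   # dump at training time
    pickle.dump(obj, f)
with open(path, 'rb') as f:   # load at serving time
    restored = pickle.load(f)

print(restored == obj)  # → True: the restored object is an equal copy
```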
## One hot encoding
```
X_category.head(3)
X_category['origin'].value_counts()
# 1, 2, 3
#? | ? | ?
# 1 | 0 | 0 -> 1
# 0 | 1 | 0 -> 2
# 0 | 0 | 1 _ 3
# data, prefix=None
df_origin = pd.get_dummies(X_category['origin'], prefix='origin')
df_cylinders = pd.get_dummies(X_category['cylinders'], prefix='cylinders')
df_origin.shape, df_cylinders.shape
X_contiuns.head(3)
# X_contiuns + df_cylinders + df_origin
# objs, axis=0
X = pd.concat([X_contiuns, df_cylinders, df_origin], axis='columns')
X.head(5)
#Apply a new StandardScaler (the earlier one was fit on the 4 continuous columns only; there are more columns now, so we fit a fresh one)
scaler_xgb = preprocessing.StandardScaler()
scaler_xgb.fit(X)
X=scaler_xgb.transform(X)
X
```
At the serving stage (predict) after training XGBoost,
one hot encoding
applies to: origin, cylinders.
The one-hot vectors have to be constructed manually,
e.g. if cylinder == 5:
[0,0,1,0,0]
There is no need to carry this step in a pickle.
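A minimal sketch of that manual encoding, assuming the cylinder levels seen in training were `[3, 4, 5, 6, 8]` (the sorted order `get_dummies` would produce):

```
# Assumed category levels, in the column order produced by get_dummies
cylinder_levels = [3, 4, 5, 6, 8]

def one_hot(value, levels):
    """Build the one-hot vector get_dummies would produce for a single value."""
    return [1 if value == lvl else 0 for lvl in levels]

print(one_hot(5, cylinder_levels))  # → [0, 0, 1, 0, 0]
```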
```
import pickle
pickle.dump(scaler_xgb,open('./scaler_xgb.pkl','wb'))
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
import xgboost
xgb = xgboost.XGBRegressor()
xgb
xgb.fit(x_train, y_train)
pickle.dump(xgb,open('./xgb_model.pkl','wb'))
xgb.score(x_train, y_train)
```
```
xgb.score(x_test, y_test)
X[0]
```
| github_jupyter |
##### Load the data into dataframes
```
from pathlib import Path
import pandas as pd
%run util.ipynb
from termcolor import colored
file_april_2020_parent_address = "../Data/ProlificAcademic/April 2020/Data/CRISIS_Parent_April_2020.csv"
file_april_2020_adult_address = "../Data/ProlificAcademic/April 2020/Data/CRISIS_Adult_April_2020.csv"
file_may_2020_parent_address = "../Data/ProlificAcademic/May 2020/Data/CRISIS_Parent_May_2020.csv"
file_may_2020_adult_address = "../Data/ProlificAcademic/May 2020/Data/CRISIS_Adult_May_2020.csv"
file_april_2021_parent_address = "../Data/ProlificAcademic/April 2021/Data/CRISIS_Parent_April_2021.csv"
file_april_2021_adult_address = "../Data/ProlificAcademic/April 2021/Data/CRISIS_Adult_April_2021.csv"
file_april_2020_parent = Path(file_april_2020_parent_address)
file_april_2020_adult = Path(file_april_2020_adult_address)
file_may_2020_parent = Path(file_may_2020_parent_address)
file_may_2020_adult = Path(file_may_2020_adult_address)
file_april_2021_parent = Path(file_april_2021_parent_address)
file_april_2021_adult = Path(file_april_2021_adult_address)
if file_april_2020_parent.exists():
    print("Reading file {}.".format(file_april_2020_parent))
    df_april_2020_parent = pd.read_csv(file_april_2020_parent)
    #print(df_april_2020_parent.head())
else:
    print("file {} does not exist.".format(file_april_2020_parent))

if file_april_2020_adult.exists():
    print("Reading file {}.".format(file_april_2020_adult))
    df_april_2020_adult = pd.read_csv(file_april_2020_adult)
    #print(df_april_2020_adult.head())
else:
    print("file {} does not exist.".format(file_april_2020_adult))

if file_may_2020_parent.exists():
    print("Reading file {}.".format(file_may_2020_parent))
    df_may_2020_parent = pd.read_csv(file_may_2020_parent)
    #print(df_may_2020_parent.head())
else:
    print("file {} does not exist.".format(file_may_2020_parent))

if file_may_2020_adult.exists():
    print("Reading file {}.".format(file_may_2020_adult))
    df_may_2020_adult = pd.read_csv(file_may_2020_adult)
    #print(df_may_2020_adult.head())
else:
    print("file {} does not exist.".format(file_may_2020_adult))

if file_april_2021_parent.exists():
    print("Reading file {}.".format(file_april_2021_parent))
    df_april_2021_parent = pd.read_csv(file_april_2021_parent)
    #print(df_april_2021_parent.head())
else:
    print("file {} does not exist.".format(file_april_2021_parent))

if file_april_2021_adult.exists():  # was checking the April 2020 file by mistake
    print("Reading file {}.".format(file_april_2021_adult))
    df_april_2021_adult = pd.read_csv(file_april_2021_adult)
    #print(df_april_2021_adult.head())
else:
    print("file {} does not exist.".format(file_april_2021_adult))
pd.set_option('display.max_columns', None)
```
##### Check `polarity` and `subjective` of specifypositive
Check how positive (or negative) each statement is, and whether the statements are personal opinions or factual information.
```
df_specifypositive_april_2020_adult = df_april_2020_adult.filter(['specifypositive'], axis=1)
df_specifypositive_april_2020_adult = clean_data(df_specifypositive_april_2020_adult)
df_specifypositive_may_2020_adult = df_may_2020_adult.filter(['specifypositive'], axis=1)  # was df_may_2020_parent by mistake
df_specifypositive_may_2020_adult = clean_data(df_specifypositive_may_2020_adult)
df_specifypositive_april_2021_adult = df_april_2021_adult.filter(['specifypositive'], axis=1)  # was df_april_2021_parent by mistake
df_specifypositive_april_2021_adult = clean_data(df_specifypositive_april_2021_adult)
from textblob import TextBlob
import matplotlib.pyplot as plt
df_specifypositive_april_2020_adult['polarity'] = df_specifypositive_april_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)
df_specifypositive_may_2020_adult['polarity'] = df_specifypositive_may_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)
df_specifypositive_april_2021_adult['polarity'] = df_specifypositive_april_2021_adult['specifypositive'].apply(lambda x: TextBlob(x).polarity)
#Plot
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(10,4)) # 1 row, 3 columns
df_specifypositive_april_2020_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, April 2020',ax=ax1)
df_specifypositive_may_2020_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, May 2020',ax=ax2)
df_specifypositive_april_2021_adult['polarity'].plot.hist(color='salmon', title='Comments Polarity, April 2021',ax=ax3)
df_specifypositive_april_2020_adult['subjective'] = df_specifypositive_april_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)
df_specifypositive_may_2020_adult['subjective'] = df_specifypositive_may_2020_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)
df_specifypositive_april_2021_adult['subjective'] = df_specifypositive_april_2021_adult['specifypositive'].apply(lambda x: TextBlob(x).subjectivity)
#Plot
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(12,4)) # 1 row, 3 columns
df_specifypositive_april_2020_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, April 2020',ax=ax1)
df_specifypositive_may_2020_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, May 2020',ax=ax2)
df_specifypositive_april_2021_adult['subjective'].plot.hist(color='salmon', title='Comments subjective, April 2021',ax=ax3)
```
- Find the correlation between `positivechange` and selected columns `(social media, isolated, video games, outdoors)`
- Normalize the data
- The correlation is calculated for April 2020, May 2020 and April 2021
- Example: what is the correlation between spending time outdoors and `positivechange`?
- positivechange : Has COVID-19 led to any positive changes in your child's life?
- outdoor: ...how many days per week did your child spend time outdoors?
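The `zscore` helper used below comes from `util.ipynb` and is not shown here; a plausible pandas implementation of per-column z-score standardization would be:

```
import pandas as pd

def zscore(df, columns):
    """Return a copy with each listed column standardized to mean 0, std 1."""
    out = df.copy()
    for col in columns:
        out[col] = (df[col] - df[col].mean()) / df[col].std()
    return out

demo = pd.DataFrame({'x': [1.0, 2.0, 3.0]})
print(zscore(demo, ['x'])['x'].tolist())  # → [-1.0, 0.0, 1.0]
```

This is only a sketch of what such a helper conventionally does, not the actual contents of `util.ipynb`.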
```
# Select columns and clean data (removing null value)
# Three different lists of column names are used since the column headers differ between surveys.
selected_col_names_2021 = ['positivechange','tvmedia','socialmedia','peopletoturnto','peopletotalkto','isolated','leftout','lackcompanionship','videogames','outdoors']
selected_col_names_2020_1 = ['positivechange','priortvmedia','priorsocialmedia','peopletoturnto','peopletotalkto','isolated','priorlonely','lackcompanionship','priorvideogames','outdoorsprior']
selected_col_names_2020_2 = ['positivechange','priortvmedia_2','priorsocialmedia_2','peopletoturnto_2','peopletotalkto','isolated','priorlonely_2','lackcompanionship','priorvideogames_2','outdoorsprior_2']
df_corr_april_2021_adult = df_april_2021_adult.filter(selected_col_names_2021, axis=1)
df_corr_april_2021_adult = clean_data(df_corr_april_2021_adult)
df_corr_april_2020_adult = df_april_2020_adult.filter(selected_col_names_2020_1, axis=1)
df_corr_april_2020_adult = clean_data(df_corr_april_2020_adult)
df_corr_may_2020_adult = df_may_2020_adult.filter(selected_col_names_2020_2, axis=1)
df_corr_may_2020_adult = clean_data(df_corr_may_2020_adult)
from sklearn import preprocessing
# Normalize data to make sure that all of the data looks and reads the same way across all records.
df_normolized__april_2020_adult = zscore(df_corr_april_2020_adult,selected_col_names_2020_1)
df_normolized__may_2020_adult = zscore(df_corr_may_2020_adult,selected_col_names_2020_2)
df_normolized__april_2021_adult = zscore(df_corr_april_2021_adult,selected_col_names_2021)
#Check correlation between positivechange and 3 selected features.
corr_positivechange_socialmedia_april_2020 = df_normolized__april_2020_adult['positivechange'].corr(df_normolized__april_2020_adult['priorsocialmedia'])
corr_positivechange_isolated_april_2020 = df_normolized__april_2020_adult['positivechange'].corr(df_normolized__april_2020_adult['isolated'])
corr_positivechange_videogame_april_2020 = df_normolized__april_2020_adult['positivechange'].corr(df_normolized__april_2020_adult['priorvideogames'])
corr_positivechange_outdoor_april_2020 = df_normolized__april_2020_adult['positivechange'].corr(df_normolized__april_2020_adult['outdoorsprior'])
print(colored('April 2020 - social media:','blue'),colored(corr_positivechange_socialmedia_april_2020,'green'))
print(colored("April 2020 - isolated:",'blue'),colored(corr_positivechange_isolated_april_2020,'green'))
print(colored("April 2020 - video game:",'blue'),colored(corr_positivechange_videogame_april_2020,'green'))
print(colored("April 2020 - outdoor:",'blue'),colored(corr_positivechange_outdoor_april_2020,'green'))
corr_positivechange_socialmedia_may_2020 = df_normolized__may_2020_adult['positivechange'].corr(df_normolized__may_2020_adult['priorsocialmedia_2'])
corr_positivechange_isolated_may_2020 =df_normolized__may_2020_adult['positivechange'].corr(df_normolized__may_2020_adult['isolated'])
corr_positivechange_videogame_may_2020 = df_normolized__may_2020_adult['positivechange'].corr(df_normolized__may_2020_adult['priorvideogames_2'])
corr_positivechange_outdoor_may_2020 = df_normolized__may_2020_adult['positivechange'].corr(df_normolized__may_2020_adult['outdoorsprior_2'])
print()
print(colored("May 2020 - social media: ",'blue'),colored(corr_positivechange_socialmedia_may_2020,'green'))
print(colored("May 2020 - isolated:",'blue'),colored(corr_positivechange_isolated_may_2020,'green'))
print(colored("May 2020 - video game:",'blue'),colored(corr_positivechange_videogame_may_2020,'green'))
print(colored("May 2020 - outdoor:",'blue'),colored(corr_positivechange_videogame_may_2020,'green'))
corr_positivechange_socialmedia_april_2021 = df_normolized__april_2021_adult['positivechange'].corr(df_normolized__april_2021_adult['socialmedia'])
corr_positivechange_isolated_april_2021 =df_normolized__april_2021_adult['positivechange'].corr(df_normolized__april_2021_adult['isolated'])
corr_positivechange_videogame_april_2021 =df_normolized__april_2021_adult['positivechange'].corr(df_normolized__april_2021_adult['videogames'])
corr_positivechange_outdoor_april_2021 = df_normolized__april_2021_adult['positivechange'].corr(df_normolized__april_2021_adult['outdoors'])
print()
print(colored("April 2021 - social media:",'blue'),colored(corr_positivechange_socialmedia_april_2021,'green'))
print(colored("April 2021 - isolated:",'blue'),colored(corr_positivechange_isolated_april_2021,'green'))
print(colored("april 2021 - video game:",'blue'),colored(corr_positivechange_videogame_april_2021,'green'))
print(colored("april 2021 - outdoor:",'blue'),colored(corr_positivechange_outdoor_april_2021,'green'))
import seaborn as sns
fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fig.suptitle('Positive Change - Social Media')
sns.regplot(ax=axes[0],x=df_normolized__april_2020_adult["positivechange"], y=df_normolized__april_2020_adult["priorsocialmedia"]).set(title='april_2020_adult')
sns.regplot(ax=axes[1],x=df_normolized__may_2020_adult["positivechange"], y=df_normolized__may_2020_adult["priorsocialmedia_2"]).set(title='may_2020_adult')
sns.regplot(ax=axes[2],x=df_normolized__april_2021_adult["positivechange"], y=df_normolized__april_2021_adult["socialmedia"]).set(title='april_2021_adult')
```
`Longitudinal study` of `positivechange` and `Social media` shows that positivechange and social media have an <font color='red'>Inverse Correlation</font>.
```
fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fig.suptitle('Positive Change - isolated')
sns.regplot(ax=axes[0],x=df_normolized__april_2020_adult["positivechange"], y=df_normolized__april_2020_adult["isolated"]).set(title='april_2020_adult')
sns.regplot(ax=axes[1],x=df_normolized__may_2020_adult["positivechange"], y=df_normolized__may_2020_adult["isolated"]).set(title='may_2020_adult')
sns.regplot(ax=axes[2],x=df_normolized__april_2021_adult["positivechange"], y=df_normolized__april_2021_adult["isolated"]).set(title='april_2021_adult')
```
`Longitudinal study` of `positivechange` and `isolated` shows that positivechange and isolated have an <font color='red'>Inverse Correlation</font>.
```
fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fig.suptitle('Positive Change - videogame')
sns.regplot(ax=axes[0],x=df_normolized__april_2020_adult["positivechange"], y=df_normolized__april_2020_adult["priorvideogames"]).set(title='april_2020_adult')
sns.regplot(ax=axes[1],x=df_normolized__may_2020_adult["positivechange"], y=df_normolized__may_2020_adult["priorvideogames_2"]).set(title='may_2020_adult')
sns.regplot(ax=axes[2],x=df_normolized__april_2021_adult["positivechange"], y=df_normolized__april_2021_adult["videogames"]).set(title='april_2021_adult')
```
`Longitudinal study` of `positivechange` and `video game` shows that positivechange and video games have a <font color='green'>Direct Correlation</font>.
```
fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
fig.suptitle('Positive Change - outdoor')
sns.regplot(ax=axes[0],x=df_normolized__april_2020_adult["positivechange"], y=df_normolized__april_2020_adult["outdoorsprior"]).set(title='april_2020_adult')
sns.regplot(ax=axes[1],x=df_normolized__may_2020_adult["positivechange"], y=df_normolized__may_2020_adult["outdoorsprior_2"]).set(title='may_2020_adult')
sns.regplot(ax=axes[2],x=df_normolized__april_2021_adult["positivechange"], y=df_normolized__april_2021_adult["outdoors"]).set(title='april_2021_adult')
```
`Longitudinal study` of `positivechange` and `outdoor` shows that positivechange and time outdoors have <font color='blue'>No Relation</font>.
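`Series.corr` defaults to the Pearson coefficient used throughout this section; the same number can be recomputed by hand as covariance over the product of standard deviations — a toy check on made-up data:

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.5, 1.0, 0.5])  # decreases perfectly as x increases

# Pearson r = cov(x, y) / (std(x) * std(y)), using sample (ddof=1) statistics
r = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(round(r, 6))  # → -1.0, a perfect inverse correlation
```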
| github_jupyter |
# Distributed Training with Keras
## Import dependencies
```
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
import os
print(tf.__version__)
```
## Dataset - Fashion MNIST
```
#datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
#mnist_train, mnist_test = datasets['train'], datasets['test']
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
## Define a distribution Strategy
```
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
num_train_examples = len(train_images)#info.splits['train'].num_examples
print(num_train_examples)
num_test_examples = len(test_images) #info.splits['test'].num_examples
print(num_test_examples)
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
#train_dataset = train_images.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
#eval_dataset = test_images.map(scale).batch(BATCH_SIZE)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
with strategy.scope():
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10)
    ])

    model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer='adam',
                  metrics=['accuracy'])
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
    if epoch < 3:
        return 1e-3
    elif epoch >= 3 and epoch < 7:
        return 1e-4
    else:
        return 1e-5

class PrintLR(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
                                                          model.optimizer.lr.numpy()))
from tensorflow.keras.callbacks import ModelCheckpoint
#checkpoint = ModelCheckpoint(ckpt_model,
# monitor='val_accuracy',
# verbose=1,
# save_best_only=True,
# mode='max')
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
#model.fit(train_dataset, epochs=12, callbacks=callbacks)
history = model.fit(train_images, train_labels,validation_data=(test_images, test_labels),
epochs=15,callbacks=callbacks)
history.history.keys()
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
```
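As an aside, the piecewise `decay` schedule passed to `LearningRateScheduler` above can be sanity-checked outside Keras by evaluating it per epoch:

```
def decay(epoch):
    # Same piecewise schedule as the LearningRateScheduler callback above
    if epoch < 3:
        return 1e-3
    elif epoch < 7:
        return 1e-4
    return 1e-5

rates = [decay(e) for e in range(10)]
print(rates)  # 1e-3 for epochs 0-2, 1e-4 for epochs 3-6, 1e-5 afterwards
```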
| github_jupyter |
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
Principal Component Analysis done manually
```
#Reading wine data
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# in the data first column is class label and rest
# 13 columns are different features
X,y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
#Splitting Data into training set and test set
#using scikit-learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y, random_state=0)
#Standardarising all the columns
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
# covariance matrix using numpy
cov_mat = np.cov(X_train_std.T)
# eigen pair
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals[:3])
# only the first three eigenvalues are printed
# they represent the relative importance of the components
tot = eigen_vals.sum()
var_exp = [(i/tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1,14), var_exp, alpha=0.5, align='center',
label='Individual explained variance')
plt.step(range(1,14), cum_var_exp, where='mid',
         label='Cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
# plots the explained variance ratio of the principal components
# Explained variance ratio = eigenvalue of a component / sum of all eigenvalues
# sorting the eigenpairs by decreasing order of the eigenvalues:
# list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k:k[0], reverse=True)
# We take the first two principal components, which account for about 60% of the variance
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
# w is projection matrix
print('Matrix W:\n', w)
# converting 13 feature data to 2 feature data
X_train_pca = X_train_std.dot(w)
# Plotting the samples in the new two-dimensional principal component space
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
    plt.scatter(X_train_pca[y_train==l, 0],
                X_train_pca[y_train==l, 1],
                c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
```
Using Scikit Learn
```
# Class to plot decision region
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    x1_min, x1_max = X[:, 0].min()-1, X[:, 0].max()+1
    x2_min, x2_max = X[:, 1].min()-1, X[:, 1].max()+1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y==cl, 0],
                    y=X[y==cl, 1],
                    alpha=0.6,
                    color=cmap(idx),
                    edgecolor='black',
                    marker=markers[idx],
                    label=cl)
# Plotting decision region of training set after applying PCA
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
lr = LogisticRegression(multi_class='ovr',
random_state=1,
solver = 'lbfgs')
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
# plotting decision regions of test data set after applying PCA
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
# finding explained variance ratio using scikit learn
pca1 = PCA(n_components=None)
X_train_pca1 = pca1.fit_transform(X_train_std)
pca1.explained_variance_ratio_
```
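As the last cell suggests, `explained_variance_ratio_` is just each covariance eigenvalue divided by their sum; a self-contained NumPy check (random data, so only the structural properties are asserted, not specific values):

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize, as StandardScaler does

eigen_vals = np.linalg.eigvalsh(np.cov(X.T))       # eigenvalues of covariance
manual_ratio = np.sort(eigen_vals)[::-1] / eigen_vals.sum()

print(manual_ratio.sum())  # sums to 1 (up to floating point)
```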
| github_jupyter |
```
import os, shutil, csv
original_dataset_dir = '/Users/mithyyin/Documents/GitHub/TeamEve/Classfication_small_datasets_inception_v3/waste_original_dataset' #directory name of your biendata
#original_dataset_dir =r'C:\Users\oscarscaro\Documents\GitHub\TeamEve\Classfication_small_datasets_inception_v3\images_withoutrect'
base_dir = './data_small' #create a directory for the data subset
#os.mkdir(base_dir)
#creating a new folder for each set
train_dir = os.path.join(base_dir, 'train')
#os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
#os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
#os.mkdir(test_dir)
batch_size = 16 #try 32, 128
epoch = 100 # try 50, 100.
#data_augmentation = True
#Pre-process steps 2,
train_datagen = ImageDataGenerator(rescale=1./255)
#changed here, major changes
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,  # train_dir is the path where you store all the training folders; change this
    target_size = (img_rows, img_cols),  # try 1920,1080
    batch_size = batch_size,
    class_mode = 'categorical')
#tr_crops = crop_generator(train_generator)

validation_generator = test_datagen.flow_from_directory(  # debug here
    validation_dir,  # validation_dir is the path where you store all the validation folders; change this
    target_size = (img_rows, img_cols),  # was (img_cols, img_cols), a typo
    batch_size = batch_size,
    class_mode = 'categorical')
#val_crops = crop_generator(val_gen)
import numpy as np
# Sys
import warnings
# Keras Core
from keras.layers.convolutional import MaxPooling2D, Convolution2D, AveragePooling2D
from keras.layers import Input, Dropout, Dense, Flatten, Activation
from keras.layers.normalization import BatchNormalization
from keras.layers.merge import concatenate
from keras import regularizers
from keras import initializers
from keras.models import Model
# Backend
from keras import backend as K
# Utils
from keras.utils.layer_utils import convert_all_kernels_in_model
from keras.utils.data_utils import get_file
#########################################################################################
# Implements the Inception Network v4 (http://arxiv.org/pdf/1602.07261v1.pdf) in Keras. #
#########################################################################################
WEIGHTS_PATH = 'https://github.com/kentsommer/keras-inceptionV4/releases/download/2.1/inception-v4_weights_tf_dim_ordering_tf_kernels.h5'
WEIGHTS_PATH_NO_TOP = 'https://github.com/kentsommer/keras-inceptionV4/releases/download/2.1/inception-v4_weights_tf_dim_ordering_tf_kernels_notop.h5'
def preprocess_input(x):
    x = np.divide(x, 255.0)
    x = np.subtract(x, 0.5)
    x = np.multiply(x, 2.0)
    return x


def conv2d_bn(x, nb_filter, num_row, num_col,
              padding='same', strides=(1, 1), use_bias=False):
    """
    Utility function to apply conv + BN.
    (Slightly modified from https://github.com/fchollet/keras/blob/master/keras/applications/inception_v3.py)
    """
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1
    x = Convolution2D(nb_filter, (num_row, num_col),
                      strides=strides,
                      padding=padding,
                      use_bias=use_bias,
                      kernel_regularizer=regularizers.l2(0.00004),
                      kernel_initializer=initializers.VarianceScaling(scale=2.0, mode='fan_in', distribution='normal', seed=None))(x)
    x = BatchNormalization(axis=channel_axis, momentum=0.9997, scale=False)(x)
    x = Activation('relu')(x)
    return x


def block_inception_a(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 96, 1, 1)

    branch_1 = conv2d_bn(input, 64, 1, 1)
    branch_1 = conv2d_bn(branch_1, 96, 3, 3)

    branch_2 = conv2d_bn(input, 64, 1, 1)
    branch_2 = conv2d_bn(branch_2, 96, 3, 3)
    branch_2 = conv2d_bn(branch_2, 96, 3, 3)

    branch_3 = AveragePooling2D((3,3), strides=(1,1), padding='same')(input)
    branch_3 = conv2d_bn(branch_3, 96, 1, 1)

    x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)
    return x


def block_reduction_a(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 384, 3, 3, strides=(2,2), padding='valid')

    branch_1 = conv2d_bn(input, 192, 1, 1)
    branch_1 = conv2d_bn(branch_1, 224, 3, 3)
    branch_1 = conv2d_bn(branch_1, 256, 3, 3, strides=(2,2), padding='valid')

    branch_2 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(input)

    x = concatenate([branch_0, branch_1, branch_2], axis=channel_axis)
    return x


def block_inception_b(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 384, 1, 1)

    branch_1 = conv2d_bn(input, 192, 1, 1)
    branch_1 = conv2d_bn(branch_1, 224, 1, 7)
    branch_1 = conv2d_bn(branch_1, 256, 7, 1)

    branch_2 = conv2d_bn(input, 192, 1, 1)
    branch_2 = conv2d_bn(branch_2, 192, 7, 1)
    branch_2 = conv2d_bn(branch_2, 224, 1, 7)
    branch_2 = conv2d_bn(branch_2, 224, 7, 1)
    branch_2 = conv2d_bn(branch_2, 256, 1, 7)

    branch_3 = AveragePooling2D((3,3), strides=(1,1), padding='same')(input)
    branch_3 = conv2d_bn(branch_3, 128, 1, 1)

    x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)
    return x


def block_reduction_b(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 192, 1, 1)
    branch_0 = conv2d_bn(branch_0, 192, 3, 3, strides=(2, 2), padding='valid')

    branch_1 = conv2d_bn(input, 256, 1, 1)
    branch_1 = conv2d_bn(branch_1, 256, 1, 7)
    branch_1 = conv2d_bn(branch_1, 320, 7, 1)
    branch_1 = conv2d_bn(branch_1, 320, 3, 3, strides=(2,2), padding='valid')

    branch_2 = MaxPooling2D((3, 3), strides=(2, 2), padding='valid')(input)

    x = concatenate([branch_0, branch_1, branch_2], axis=channel_axis)
    return x


def block_inception_c(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 256, 1, 1)

    branch_1 = conv2d_bn(input, 384, 1, 1)
    branch_10 = conv2d_bn(branch_1, 256, 1, 3)
    branch_11 = conv2d_bn(branch_1, 256, 3, 1)
    branch_1 = concatenate([branch_10, branch_11], axis=channel_axis)

    branch_2 = conv2d_bn(input, 384, 1, 1)
    branch_2 = conv2d_bn(branch_2, 448, 3, 1)
    branch_2 = conv2d_bn(branch_2, 512, 1, 3)
    branch_20 = conv2d_bn(branch_2, 256, 1, 3)
    branch_21 = conv2d_bn(branch_2, 256, 3, 1)
    branch_2 = concatenate([branch_20, branch_21], axis=channel_axis)

    branch_3 = AveragePooling2D((3, 3), strides=(1, 1), padding='same')(input)
    branch_3 = conv2d_bn(branch_3, 256, 1, 1)

    x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=channel_axis)
    return x


def inception_v4_base(input):
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = -1

    # Input Shape is 299 x 299 x 3 (tf) or 3 x 299 x 299 (th)
    net = conv2d_bn(input, 32, 3, 3, strides=(2,2), padding='valid')
    net = conv2d_bn(net, 32, 3, 3, padding='valid')
    net = conv2d_bn(net, 64, 3, 3)

    branch_0 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(net)
    branch_1 = conv2d_bn(net, 96, 3, 3, strides=(2,2), padding='valid')
    net = concatenate([branch_0, branch_1], axis=channel_axis)

    branch_0 = conv2d_bn(net, 64, 1, 1)
    branch_0 = conv2d_bn(branch_0, 96, 3, 3, padding='valid')
    branch_1 = conv2d_bn(net, 64, 1, 1)
    branch_1 = conv2d_bn(branch_1, 64, 1, 7)
    branch_1 = conv2d_bn(branch_1, 64, 7, 1)
    branch_1 = conv2d_bn(branch_1, 96, 3, 3, padding='valid')
    net = concatenate([branch_0, branch_1], axis=channel_axis)

    branch_0 = conv2d_bn(net, 192, 3, 3, strides=(2,2), padding='valid')
    branch_1 = MaxPooling2D((3,3), strides=(2,2), padding='valid')(net)
    net = concatenate([branch_0, branch_1], axis=channel_axis)

    # 35 x 35 x 384
    # 4 x Inception-A blocks
    for idx in range(4):
        net = block_inception_a(net)

    # 35 x 35 x 384
    # Reduction-A block
    net = block_reduction_a(net)

    # 17 x 17 x 1024
    # 7 x Inception-B blocks
    for idx in range(7):
        net = block_inception_b(net)

    # 17 x 17 x 1024
    # Reduction-B block
    net = block_reduction_b(net)

    # 8 x 8 x 1536
    # 3 x Inception-C blocks
    for idx in range(3):
        net = block_inception_c(net)

    return net


def inception_v4(num_classes, dropout_keep_prob, weights, include_top):
    '''
    Creates the inception v4 network

    Args:
        num_classes: number of classes
        dropout_keep_prob: float, the fraction to keep before final layer.

    Returns:
        logits: the logits outputs of the model.
    '''
    # Input Shape is 299 x 299 x 3 (tf) or 3 x 299 x 299 (th)
    if K.image_data_format() == 'channels_first':
        inputs = Input((3, 299, 299))
    else:
        inputs = Input((299, 299, 3))

    # Make inception base
    x = inception_v4_base(inputs)

    # Final pooling and prediction
    if include_top:
        # 1 x 1 x 1536
        x = AveragePooling2D((8,8), padding='valid')(x)
        x = Dropout(dropout_keep_prob)(x)
        x = Flatten()(x)
        # 1536
        x = Dense(units=num_classes, activation='softmax')(x)

    model = Model(inputs, x, name='inception_v4')

    # load weights
    if weights == 'imagenet':
        if K.image_data_format() == 'channels_first':
            if K.backend() == 'tensorflow':
                warnings.warn('You are using the TensorFlow backend, yet you '
                              'are using the Theano '
                              'image data format convention '
                              '(`image_data_format="channels_first"`). '
                              'For best performance, set '
                              '`image_data_format="channels_last"` in '
                              'your Keras config '
                              'at ~/.keras/keras.json.')
        if include_top:
            weights_path = get_file(
                'inception-v4_weights_tf_dim_ordering_tf_kernels.h5',
                WEIGHTS_PATH,
                cache_subdir='models',
                md5_hash='9fe79d77f793fe874470d84ca6ba4a3b')
        else:
            weights_path = get_file(
                'inception-v4_weights_tf_dim_ordering_tf_kernels_notop.h5',
                WEIGHTS_PATH_NO_TOP,
                cache_subdir='models',
                md5_hash='9296b46b5971573064d12e4669110969')
        model.load_weights(weights_path, by_name=True)

    return model


def create_model(num_classes=204, dropout_prob=0.2, weights=None, include_top=True):
    return inception_v4(num_classes, dropout_prob, weights, include_top)
model = create_model()
model.summary()
#compiling the model
model.compile('Adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# model checkpoint
#change the name of the model
filepath="waste_sort_weights_best_updated.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc',
verbose=1, save_best_only=True, mode='max')
# early stopping added in
# should not be a problem if you had the latest version of keras
early = EarlyStopping(monitor="val_loss",
mode="min",
patience=150, restore_best_weights=True)
callbacks_list = [checkpoint, early]
#train the model
history = net.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=epoch,
verbose=1,
validation_data=validation_generator,
validation_steps=50,
callbacks = callbacks_list) #change here
# Training plots
epochs = [i for i in range(1, len(history.history['loss'])+1)]
plt.plot(epochs, history.history['loss'], color='blue', label="training_loss")
plt.plot(epochs, history.history['val_loss'], color='red', label="validation_loss")
plt.legend(loc='best')
plt.title('training')
plt.xlabel('epoch')
plt.savefig(TRAINING_PLOT_FILE, bbox_inches='tight')
plt.show()
plt.plot(epochs, history.history['acc'], color='blue', label="training_accuracy")
plt.plot(epochs, history.history['val_acc'], color='red',label="validation_accuracy")
plt.legend(loc='best')
plt.title('validation')
plt.xlabel('epoch')
plt.savefig(VALIDATION_PLOT_FILE, bbox_inches='tight')
plt.show()
```
| github_jupyter |
# Problem Statement
## About Company
Company deals in all home loans. They have presence across all urban, semi urban and rural areas. Customer first apply for home loan after that company validates the customer eligibility for loan.
## Problem
Company wants to automate the loan eligibility process (real time) based on customer detail provided while filling online application form. These details are Gender, Marital Status, Education, Number of Dependents, Income, Loan Amount, Credit History and others. To automate this process, they have given a problem to identify the customers segments, those are eligible for loan amount so that they can specifically target these customers.
## Data
| Variable | Description|
|---|------|
|Loan_ID|Unique Loan ID|
|Gender| Male/ Female|
|Married| Applicant married (Y/N)
|Dependents| Number of dependents
|Education| Applicant Education (Graduate/ Under Graduate)
|Self_Employed| Self employed (Y/N)
|ApplicantIncome| Applicant income
|CoapplicantIncome| Coapplicant income
|LoanAmount| Loan amount in thousands
|Loan_Amount_Term | Term of loan in months
|Credit_History |credit history meets guidelines
|Property_Area | Urban/ Semi Urban/ Rural
|Loan_Status |Loan approved (Y/N)
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("data/loan_prediction_train.csv")
df.head()
```
## Clear Data
```
df.apply(lambda x: sum(x.isnull()), axis=0)
df = df.dropna()
df.apply(lambda x: sum(x.isnull()), axis=0)
df.describe()
df['Property_Area'].value_counts()
```
## Distribution analysis
```
df['ApplicantIncome'].hist(bins=50)
df.boxplot(column='ApplicantIncome')
df.boxplot(column='ApplicantIncome', by='Education')
df['LoanAmount'].hist(bins=50)
df.boxplot(column='LoanAmount')
df.head()
```
## Building a Preditive Model
```
df.dtypes
df.head()
from sklearn.preprocessing import LabelEncoder
var_mod = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
le = LabelEncoder()
for var in var_mod:
df[var] = le.fit_transform(df[var])
df.head()
df.corr()
plt.scatter(df['Credit_History'], df['Loan_Status'], alpha=0.1)
noise1 = np.random.normal(0, 0.1, len(df))
noise2 = np.random.normal(0, 0.1, len(df))
plt.scatter(df['Credit_History']+noise1, df['Loan_Status'] + noise2, alpha=0.1)
from sklearn.linear_model import LogisticRegression
logit_model = LogisticRegression()
logit_model
predictors = df[['Credit_History']]
logit_model.fit(df[['Credit_History']], df['Loan_Status'])
from sklearn.cross_validation import KFold
df.shape
kf = KFold(len(df), n_folds=5)
error = []
for train, test in kf:
train_predictors = df[['Credit_History']].iloc[train,:]
train_target = df['Loan_Status'].iloc[train]
logit_model.fit(train_predictors, train_target)
error.append(logit_model.score(df[['Credit_History']].iloc[test,:], df['Loan_Status'].iloc[test]))
print("Cross-Validation Score ", np.mean(error))
def fit_model(model, data, predictors, outcome, num_fold=5):
kf =KFold(data.shape[0], n_folds=num_fold)
error = []
for train, test in kf:
train_predictors = data[predictors].iloc[train,:]
train_target = data[outcome].iloc[train]
model.fit(train_predictors, train_target)
error.append(model.score(data[predictors].iloc[test,:], data[outcome].iloc[test]))
print("Cross-Validation Score :", np.mean(error))
model.fit(data[predictors], data[outcome])
accuracy = model.score(data[predictors], data[outcome])
print("Accuracy: ", accuracy)
return model
logit_model = LogisticRegression()
logit_model = fit_model(logit_model, df, ['Credit_History'], 'Loan_Status')
df.columns
predictor_list = ['Gender', 'Married', 'Dependents', 'Education',
'Self_Employed', 'ApplicantIncome', 'CoapplicantIncome',
'LoanAmount','Loan_Amount_Term', 'Credit_History', 'Property_Area']
logit_model = fit_model(logit_model, df, predictor_list, 'Loan_Status')
from sklearn.tree import DecisionTreeClassifier
decision_tree_model = DecisionTreeClassifier()
predictor_var = ['Credit_History', 'Gender', 'Married', 'Education']
outcome_var = 'Loan_Status'
decision_tree_model = fit_model(decision_tree_model, df, predictor_var, outcome_var)
predictor_var = ['Credit_History', 'Loan_Amount_Term', 'LoanAmount']
decision_tree_model = fit_model(decision_tree_model, df, predictor_var, outcome_var)
```
| github_jupyter |
# Run a batch of samples on the HPC cluster
This experiment is part of a series which should help us validate the Kingston, ON model.
## Set-up orchistration and compute environments
To set-up access to the remote compute server:
1. On the local host generate keys:
```
ssh-keygen -t rsa
```
1. Copy those keys to the remote host:
```
cat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys'
```
```
!cat /src/ansible/playbooks/hosts
```
### Install simulator on compute environments
```
!ansible-playbook -i /src/ansible/playbooks/hosts /src/ansible/playbooks/covid19sim.yml --extra-vars 'host=mulab' --user=paredes
```
### Copy Kingston configuration file to compute environments
```
!ansible -i /src/ansible/playbooks/hosts mulab --user=paredes -m copy -a "src=params/kingston_0xdfc056a4fdb804e60e964b2cc5aae6ea.yml dest=~/COVI-AgentSim/src/covid19sim/configs/simulation/region/kingston0xdfc056a4fdb804e60e964b2cc5aae6ea.yaml"
```
### Generate random seeds
```
!/opt/conda/bin/conda run -n covisim conda install numpy -y
cpus = 16
compute = 16
n_samples = cpus * compute
n_samples
import numpy as np
from numpy.random import default_rng
import pandas as pd
rng = default_rng()
seed_list = [rng.integers(low=0, high=1e4) for _ in range(n_samples)]
len(np.unique(seed_list)), pd.DataFrame(seed_list).hist()
```
### Build run commands
```
args_dict = {'region': 'kingston0xdfc056a4fdb804e60e964b2cc5aae6ea',
'n_people': 3000,
'simulation_days': 60,
'init_fraction_sick': 0.002,
'N_BEHAVIOR_LEVELS': 2,
'intervention': 'no_intervention',
'tune': True,
'track': 'light',
'GLOBAL_MOBILITY_SCALING_FACTOR': 0.85,
'APP_UPTAKE': -1,
'USE_INFERENCE_SERVER': False,
'INTERVENTION_DAY': -1}
args_dict
args_str = ' '.join([f'{k}={v}' for k, v in args_dict.items()])
args_str
import random
import subprocess
run_id = hex(random.getrandbits(128))
args_list = [f'~/.conda/envs/covisim/bin/python ~/COVI-AgentSim/src/covid19sim/run.py seed={s} outdir=~/kingston-abm/experiments/validation/results/data/{run_id} {args_str}\n' for s in seed_list]
file_name = f'val-2-{run_id}.cmd'
with open(file_name, 'w') as arg_file:
arg_file.writelines(args_list)
#subprocess.run(f'cat {file_name} | parallel -j4, shell=True, capture_output=True)
file_name
```
## Run simulations
```
!cat val-2-0x57e7aa91fbc783d1f3cd1bf719dd5a35.cmd | parallel --sshloginfile nodefile
```
## Set up the local analysis environment
```
!ansible-playbook -i /src/ansible/playbooks/hosts /src/ansible/playbooks/covid19sim.yml
!ipython kernel install --name covisim
!/opt/conda/bin/conda run -n covisim conda install ipykernel -y
!/opt/conda/bin/conda run -n covisim conda install matplotlib -y
import os
run_id = '0x57e7aa91fbc783d1f3cd1bf719dd5a35'
samples = os.listdir(f'../data/{run_id}')
len(samples)
import pandas as pd
data = [os.listdir(f'../data/{run_id}/{s}') for s in samples]
data_df = pd.DataFrame(data, columns=['params', 'log', 'metrics'])
data_df['sample'] = samples
data_df.head()
import pickle
cases_list = []
i=0
for _, data in data_df.iterrows():
with open(f'/src/experiments/data/{run_id}/{data["sample"]}/{data["metrics"]}', 'rb') as tracker:
tracker_dict = pickle.load(tracker)
cases_list.append(pd.DataFrame(tracker_dict['cases_per_day'], columns=[i]))
i += 1
cases_df = pd.concat(cases_list, axis=1)
cases_df.head()
!ansible -i /src/ansible/playbooks/hosts mulab --user=paredes -a "ls ~/kingston-abm/experiments/validation/results/data/0x57e7aa91fbc783d1f3cd1bf719dd5a35/"
cases_df.transpose().head()
%matplotlib inline
cases_df.transpose().boxplot(figsize=(20,10))
!/opt/conda/bin/conda run -n covisim conda install seaborn -y
```
### Plot compartment model
```
import pickle
s_list = []
e_list = []
i_list = []
r_list = []
i=0
for _, data in data_df.iterrows():
with open(f'/src/experiments/data/{run_id}/{data["sample"]}/{data["metrics"]}', 'rb') as tracker:
tracker_dict = pickle.load(tracker)
s_list.append(pd.DataFrame(tracker_dict['s'], columns=[i]))
e_list.append(pd.DataFrame(tracker_dict['e'], columns=[i]))
i_list.append(pd.DataFrame(tracker_dict['i'], columns=[i]))
r_list.append(pd.DataFrame(tracker_dict['r'], columns=[i]))
i += 1
s_df = pd.concat(s_list, axis=1)
e_df = pd.concat(e_list, axis=1)
i_df = pd.concat(i_list, axis=1)
r_df = pd.concat(r_list, axis=1)
s_df.head(), \
e_df.head(), \
i_df.head(), \
r_df.head()
s_df.transpose().boxplot(figsize=(20,10))
e_df.transpose().boxplot(figsize=(20,10))
i_df.transpose().boxplot(figsize=(20,10))
r_df.transpose().boxplot(figsize=(20,10))
s_df['day'] = s_df.index
e_df['day'] = e_df.index
i_df['day'] = i_df.index
r_df['day'] = r_df.index
s_long_df = pd.melt(s_df, id_vars=['day'], value_vars=[i for i in range(256)])
e_long_df = pd.melt(e_df, id_vars=['day'], value_vars=[i for i in range(256)])
i_long_df = pd.melt(i_df, id_vars=['day'], value_vars=[i for i in range(256)])
r_long_df = pd.melt(r_df, id_vars=['day'], value_vars=[i for i in range(256)])
s_long_df['seir'] = 's'
e_long_df['seir'] = 'e'
i_long_df['seir'] = 'i'
r_long_df['seir'] = 'r'
seir_long_df = pd.concat([s_long_df, e_long_df, i_long_df, r_long_df])
seir_long_df.head()
import seaborn as sns
%matplotlib inline
sns.lineplot(data=seir_long_df,
x='day',
y='value',
hue='seir',
estimator='mean',
ci=68)
```
## All configuration p
```
import glob
import os
import yaml
#os.path.abspath('../data/*dd5a35/*seed-1054*')
with open(glob.glob('../data/*dd5a35/*seed-1054*/*.yaml')[0], 'r') as f:
config = yaml.safe_load(f)
config.keys()
config['n_people'], config['simulation_days']
```
## Other metrics
```
import pickle
file_name = '/src/experiments/validation/results/data/0x57e7aa91fbc783d1f3cd1bf719dd5a35/sim_v2_people-3000_days-60_init-0.002_uptake--1_seed-1054_20210706-135347_580349/tracker_data_n_3000_seed_1054_20210706-140112.pkl'
with open(file_name, 'rb') as results_file:
tracker = pickle.load(results_file)
tracker.keys()
```
| github_jupyter |
# Monte Carlo Methods
In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms.
While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.
### Part 0: Explore BlackjackEnv
We begin by importing the necessary packages.
```
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
import sys
print(sys.version)
```
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
```
env = gym.make('Blackjack-v0')
```
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).
The agent has two potential actions:
```
STICK = 0
HIT = 1
```
Verify this by running the code cell below.
```
print(env.observation_space)
print(env.action_space)
print(env.action_space)
```
Execute the code cell below to play Blackjack with a random policy.
(_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
```
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print(state, action)
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
```
### Part 1: MC Prediction
In this section, you will write your own implementation of MC prediction (for estimating the action-value function).
We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy.
The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.
It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
```
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
```
Execute the code cell below to play Blackjack with the policy.
(*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
```
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
```
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.
Your algorithm has three arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
```
N = defaultdict(lambda: np.zeros(env.action_space.n))
print(N)
import time
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
start=time.time()
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
#discounts = np.array([gamma**i for i in range(len(episode))])
reward = episode[-1][-1]
#print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
returns_sum[state][action] += g
N[state][action]+= 1
Q[state][action]= returns_sum[state][action]/N[state][action]
print("elapsed:", time.time()-start)
return Q
Q = mc_prediction_q(env, 1, generate_episode_from_limit_stochastic)
```
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
```
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
```
### Part 2: MC Control
In this section, you will write your own implementation of constant-$\alpha$ MC control.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.
(_Feel free to define additional functions to help you to organize your code._)
```
def get_prob(Q_state, epsilon):
probs = epsilon*np.ones_like(Q_state)/len(Q_state)
probs[np.argmax(probs)]+=1-epsilon
return probs
#get_prob([40, 2], 0.1)
def generate_episode_epsilon_greedy(env, Q, epsilon):
episode = []
state = env.reset()
nA = env.action_space.n
while True:
# get probability
if state in Q:
probs = get_prob(Q[state], epsilon)
else:
probs = np.ones_like(Q[state])/nA
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, Q, episode, gamma, alpha):
states, actions, rewards = zip(*episode)
#discounts = np.array([gamma**i for i in range(len(episode))])
reward = episode[-1][-1]
#print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
Q[state][action] += alpha*(g-Q[state][action])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.999999, epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon=epsilon_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} Epsilon {}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode using epsilon-greedy
epsilon = max(epsilon*epsilon_decay, epsilon_min)
episode = generate_episode_epsilon_greedy(env, Q, epsilon)
# update Q using constant alpha
Q = update_Q(env, Q, episode, gamma, alpha)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
```
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
```
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.1)
```
Next, we plot the corresponding state-value function.
```
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
```
Finally, we visualize the policy that is estimated to be optimal.
```
# plot the policy
plot_policy(policy)
```
The **true** optimal policy $\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$, change the value of $\alpha$, and/or run the algorithm for more episodes to attain better results.

```
for k, v in policy.items():
if k[2]:
print(k,v)
```
| github_jupyter |
# Narowcast Server service migration to Distribution Services
## 1. Getting data from NC
### 1.1 List of NC Services
```
# Run this SQL code against Narrocast Server database
"""
select
names1.MR_OBJECT_ID AS serviceID,
names1.MR_OBJECT_NAME AS service_name,
parent1.MR_OBJECT_NAME AS foldername,
names2.MR_OBJECT_NAME AS publication_name,
names3.MR_OBJECT_NAME AS document_name,
info3.MR_OBJECT_SUBTYPE AS doc_type,
names4.MR_OBJECT_ID AS info_obj_id,
names4.MR_OBJECT_NAME AS info_obj_name,
info4.MR_OBJECT_SUBTYPE AS info_obj_subtype
from
MSTROBJNAMES names1,
MSTROBJINFO info1,
MSTROBJNAMES parent1,
MSTROBJDEPN dpns,
MSTROBJNames names2,
MSTROBJDEPN dpns2,
MSTROBJNames names3,
MSTROBJINFO info3,
MSTROBJDEPN dpns3,
MSTROBJNames names4,
MSTROBJInfo info4
where names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID
and names1.MR_OBJECT_ID = info1.MR_OBJECT_ID
and info1.MR_PARENT_ID = parent1.MR_OBJECT_ID
and dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID
and names2.MR_OBJECT_ID = dpns2.MR_INDEP_OBJID
and dpns2.MR_DEPN_OBJID = names3.MR_OBJECT_ID
and names3.MR_OBJECT_ID = dpns3.MR_INDEP_OBJID
and names3.MR_OBJECT_ID = info3.MR_OBJECT_ID
and dpns3.MR_DEPN_OBJID = names4.MR_OBJECT_ID
and dpns3.MR_DEPN_OBJID = info4.MR_OBJECT_ID
and names1.MR_Object_Type = 19
and names2.MR_Object_Type = 16
and names3.MR_Object_Type = 14
and names4.MR_Object_Type = 4
and info4.MR_OBJECT_SubType <> 1
"""
```
<img src="Images/NC_services.png">
### 1.2 NC Service details
```
"""
select
names1.MR_OBJECT_ID AS serviceID, --This is Service ID
names1.MR_OBJECT_NAME AS service_name,
names2.MR_OBJECT_NAME AS subset_name,
a11.MR_ADD_DISPLAY AS dispname,
a11.MR_PHYSICAL_ADD AS email,
a13.MR_USER_NAME,
sp.MR_INFOSOURCE_ID,
sp.MR_QUES_OBJ_ID,
po.mr_seq,
sp.MR_USER_PREF,
po.MR_PREF_OBJ
from
MSTROBJNames names1,
MSTROBJINFO info1,
MSTROBJDEPN dpns,
MSTROBJNames names2,
MSTRSUBSCRIPTIONS a12,
MSTRADDRESSES a11,
MSTRUSERS a13,
MSTRSUBPREF sp,
MSTRPREFOBJS po
where names1.MR_Object_Type = 19
and names2.MR_Object_Type = 17
and info1.MR_STATUS =1
and names1.MR_OBJECT_ID = info1.MR_OBJECT_ID
and names1.MR_OBJECT_ID = dpns.MR_INDEP_OBJID
and dpns.MR_DEPN_OBJID = names2.MR_OBJECT_ID
and names2.MR_OBJECT_ID = a12.MR_SUB_SET_ID
and a11.MR_ADDRESS_ID = a12.MR_ADDRESS_ID
and a12.MR_SUB_GUID = sp.MR_SUB_GUID
and sp.MR_PREF_OBJ_ID = po.MR_PREF_OBJ_ID
and a12.MR_USER_ID = a13.MR_USER_ID
and names1.MR_OBJECT_ID = '047886F8A7474F4A929EC6DD135F0A98' --Filter for Service ID
"""
```
<img src="Images/service_details.png">
```
with open('narrowcast_emails.csv', encoding="utf8", newline='') as f:
email_list = [x.strip() for x in f]
```
## Automate tasks in MicroStrategy
```
from mstrio.connection import Connection
from mstrio.distribution_services import EmailSubscription, Content
from mstrio.users_and_groups.user import list_users
from datetime import datetime
#### Parameters ####
api_login, api_password = 'administrator', ''
base_url = 'Insert Env URL'
project_id = 'Insert Project ID'
conn = Connection(base_url,api_login,api_password)
```
### Get users' default addresses
```
users = list_users(connection=conn)
default_addresses=[]
for u in users:
if u.addresses:
user_addresses = [[u.name, u.id, uad['value']] for uad in u.addresses if uad['isDefault']==True]
default_addresses.extend(user_addresses)
```
### Create a list of recipients
```
# From MSTR Metadata
for d in default_addresses:
print(d)
# From Narrowcast
for e in email_list:
print(e)
# Match Metadata with Narrowcast
matched_emails = [d[1] for d in default_addresses if d[2] in email_list]
for m in matched_emails:
print(m)
```
### Create a subscription
```
# create an email subscription
recipient_ids = matched_emails[:]
content_id = 'Insert Content ID'
schedule_id = 'Insert Schedule ID'
subscription_name = 'REST_API_'+datetime.now().strftime("%Y-%m-%d__%H-%M")
subject_txt='Email Subject'
message_txt="Message Text"
EmailSubscription.create(connection=conn,
name=subscription_name,
project_id=project_id,
send_now = True,
contents=[Content(id=content_id, type='report', name='Report 1',
personalization=Content.Properties(format_type='EXCEL'))],
schedules_ids=[schedule_id],
recipients=recipient_ids,
email_subject=subject_txt,
email_message=message_txt,
email_send_content_as="data")
```
| github_jupyter |
# High-level Keras (Theano) Example
```
# Lots of warnings!
# Not sure why Keras creates model with float64?
%%writefile ~/.theanorc
[global]
device = cuda0
force_device= True
floatX = float32
warn_float64 = warn
import os
import sys
import numpy as np
os.environ['KERAS_BACKEND'] = "theano"
import theano
import keras as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from common.params import *
from common.utils import *
# Force one-gpu
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Performance Improvement
# 1. Make sure channels-first (not last)
K.backend.set_image_data_format('channels_first')
# 2. CuDNN auto-tune
theano.config.dnn.conv.algo_fwd = "time_once"
theano.config.dnn.conv.algo_bwd_filter = "time_once"
theano.config.dnn.conv.algo_bwd_data = "time_once"
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Keras: ", K.__version__)
print("Numpy: ", np.__version__)
print("Theano: ", theano.__version__)
print(K.backend.backend())
print(K.backend.image_data_format())
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
def create_symbol(n_classes=N_CLASSES):
model = Sequential()
model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu',
input_shape=(3, 32, 32)))
model.add(Conv2D(50, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(Conv2D(100, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
return model
def init_model(m, lr=LR, momentum=MOMENTUM):
m.compile(
loss = "categorical_crossentropy",
optimizer = K.optimizers.SGD(lr, momentum),
metrics = ['accuracy'])
return m
%%time
# Data into format for library
x_train, x_test, y_train, y_test = cifar_for_library(channel_first=True, one_hot=True)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Load symbol
sym = create_symbol()
%%time
# Initialise model
model = init_model(sym)
model.summary()
%%time
# Main training loop: 1m33s
model.fit(x_train,
y_train,
batch_size=BATCHSIZE,
epochs=EPOCHS,
verbose=1)
%%time
# Main evaluation loop: 2.47s
y_guess = model.predict(x_test, batch_size=BATCHSIZE)
y_guess = np.argmax(y_guess, axis=-1)
y_truth = np.argmax(y_test, axis=-1)
print("Accuracy: ", 1.*sum(y_guess == y_truth)/len(y_guess))
```
| github_jupyter |
```
from sqlalchemy import create_engine
import pandas as pd
import matplotlib.pyplot as plot
import json
import pymysql
import statsmodels.formula.api as sm
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.cross_validation import cross_val_score
from collections import OrderedDict, defaultdict
import pickle
%matplotlib inline
conn_str = "mysql+pymysql://dublinbikesadmin:dublinbikes2018@dublinbikes.cglcinwmtg3w.eu-west-1.rds.amazonaws.com/dublinbikes"
conn = create_engine(conn_str)
query = """
SELECT * from bike_dynamic
"""
df_bike = pd.read_sql_query(con=conn, sql=query)
query = """
SELECT * from weather_info
"""
df_weather = pd.read_sql_query(con=conn, sql=query)
df_weather.to_csv ('weather.csv', index=False)
# Convert csv and json files into dataframes
df_w = pd.read_csv('weather_clean.csv')
df_w.head(30)
df_w.rename(columns={'dt_txt':'last_update'}, inplace=True)
df_w['last_update'] = pd.to_datetime(df_w['last_update'], errors='coerce')
df_bike['last_update'] = pd.to_datetime(df_bike['last_update'])
df_bike['day'] = df_bike['last_update'].dt.weekday_name
df_bike['hour'] = df_bike['last_update'].dt.hour
df_w['day'] = df_w['last_update'].dt.weekday_name
df_w['hour'] = df_w['last_update'].dt.hour
df_w['date'] = df_w['last_update'].dt.day
df_bike['date'] = df_bike['last_update'].dt.day
merged_w= pd.merge(df_bike, df_w, how='right', on=['date', 'hour', 'day'])
merged_weather = pd.get_dummies(merged_w, columns=["day"])
merged_weather.shape
merged_weather.mainDescription.unique()
df_rain = merged_weather.loc[(merged_weather['mainDescription'] == 'Rain') | (merged_weather['mainDescription'] == 'Drizzle') | (merged_weather['mainDescription'] == 'Fog') | (merged_weather['mainDescription'] == 'Mist') | (merged_weather['mainDescription'] == 'Snow')]
df_dry = merged_weather.loc[(merged_weather['mainDescription'] == 'Clear') | (merged_weather['mainDescription'] == 'Clouds')]
df_rain.shape
df_dry.shape
# create one dataframe per station (1-105) for the rain data;
# a loop avoids 105 near-identical assignments
for n in range(1, 106):
    globals()[f'dfRain_{n}'] = df_rain[df_rain['number'] == n]
# the same per-station split for the dry data
for n in range(1, 106):
    globals()[f'dfDry_{n}'] = df_dry[df_dry['number'] == n]
def predictionsRain(x):
    # OLS model for rainy conditions; day_Saturday is omitted as the baseline category
    lm_r = sm.ols(formula="available_bikes ~ humidity + temp_max + hour + day_Monday + day_Tuesday + day_Wednesday + day_Thursday + day_Friday + day_Sunday", data=x).fit()
    return lm_r

def predictionsDry(x):
    # dry-weather model uses wind direction (deg), humidity and max temperature
    lm_d = sm.ols(formula="available_bikes ~ deg + humidity + temp_max", data=x).fit()
    return lm_d

def create_df(dfNumber, predictor):
    # frame of day dummies, hour, and the model's predicted bike availability
    predictions = pd.DataFrame({'Monday': dfNumber.day_Monday, 'Tuesday': dfNumber.day_Tuesday, 'Wednesday': dfNumber.day_Wednesday, 'Thursday': dfNumber.day_Thursday, 'Friday': dfNumber.day_Friday, 'Saturday': dfNumber.day_Saturday, 'Sunday': dfNumber.day_Sunday, 'hour': dfNumber.hour, 'PredictedBikes': predictor.predict(dfNumber)})
    return predictions
```
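The chained `|` conditions and the 105 hand-written `dfRain_n` / `dfDry_n` assignments above can also be expressed with `Series.isin` and a dictionary keyed on station number. A minimal sketch on toy data (column names mirror the real frame, the values are made up):

```python
import pandas as pd

# toy stand-in for the merged bike/weather frame
merged = pd.DataFrame({
    'number': [1, 1, 2, 2, 3],
    'mainDescription': ['Rain', 'Clear', 'Drizzle', 'Clouds', 'Snow'],
    'available_bikes': [5, 8, 2, 9, 0],
})

wet = ['Rain', 'Drizzle', 'Fog', 'Mist', 'Snow']
df_rain = merged[merged['mainDescription'].isin(wet)]
df_dry = merged[merged['mainDescription'].isin(['Clear', 'Clouds'])]

# one frame per station, without a named variable per station
rain_by_station = {n: g for n, g in df_rain.groupby('number')}
print(sorted(rain_by_station))  # stations with any wet-weather rows -> [1, 2, 3]
```

A dictionary also makes the later per-station loops a plain iteration over `rain_by_station.items()` instead of string-built variable names.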
Fitting a model for each station and generating prediction dataframes (rainy weather)
```
# fit a rain-weather model per station; station 20 is skipped
for n in range(1, 106):
    if n == 20:
        continue
    globals()[f'lmRain_{n}'] = predictionsRain(globals()[f'dfRain_{n}'])
df_weather_rain = df_weather.loc[df_weather['mainDescription'] == 'Rain']
df_weather_rain.shape
df_weather_dry = df_weather.loc[df_weather['mainDescription'] == 'Clouds']
df_weather_dry.shape
df = df_bike[['day', 'hour']].copy()
df = df.drop_duplicates()
df = df.reset_index()
df_weather_rain = df_weather_rain[200:368]
df_weather_rain = df_weather_rain.reset_index()
result = pd.merge(df, df_weather_rain, right_index=True, left_index=True)
result = pd.get_dummies(result, columns=["day"])
df_dry = df_bike[['day', 'hour']].copy()
df_dry = df_dry.drop_duplicates()
df_dry = df_dry.reset_index()
df_weather_dry = df_weather_dry[1100:1268]
df_weather_dry = df_weather_dry.reset_index()
result_dry = pd.merge(df_dry, df_weather_dry, right_index=True, left_index=True)
result_dry = pd.get_dummies(result_dry, columns=["day"])
# build a predictions frame per station from its rain model
for n in range(1, 106):
    if n == 20:
        continue
    globals()[f'stationRain{n}'] = create_df(result, globals()[f'lmRain_{n}'])
stationRain90.shape
```
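Under the hood, each `sm.ols(...).fit()` call above solves an ordinary least-squares problem on a design matrix built from the formula. A dependency-light sketch with `numpy.linalg.lstsq` on synthetic, noise-free data (the coefficients are made up) recovers the known coefficients exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
humidity = rng.uniform(40, 100, n)
temp_max = rng.uniform(5, 25, n)
# synthetic target with known coefficients: 30 - 0.1*humidity + 0.5*temp_max
y = 30 - 0.1 * humidity + 0.5 * temp_max

# design matrix with an intercept column, as the formula interface builds internally
X = np.column_stack([np.ones(n), humidity, temp_max])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))  # approximately [30, -0.1, 0.5]
```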
Fitting a model for each station and generating prediction dataframes (dry weather)
```
# fit a dry-weather model per station; station 20 is skipped
for n in range(1, 106):
    if n == 20:
        continue
    globals()[f'lmDry_{n}'] = predictionsDry(globals()[f'dfDry_{n}'])
# build a predictions frame per station from its dry model
for n in range(1, 106):
    if n == 20:
        continue
    globals()[f'stationDry{n}'] = create_df(result_dry, globals()[f'lmDry_{n}'])
def convert(station, strstation):
    # look up the station frame by name, then recover the day name from its dummy columns
    station = globals()[station]
    names = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    day_col = station[names].idxmax(axis=1)
    df = station.join(day_col.to_frame(name='day'))
    station = df[['PredictedBikes', 'day', 'hour']].copy()
    # astype('category', categories=...) was removed in pandas 0.25; build the ordered categorical directly
    station['day'] = pd.Categorical(station['day'], categories=names, ordered=True)
    parse(station, strstation)
def parse(station, strstation):
    # average predictions per (day, hour), clamp negatives to zero, and pickle per station
    d = defaultdict(list)
    y = OrderedDict(station.groupby(['day', 'hour'])['PredictedBikes'].mean())
    for k, v in y.items():
        d[k[1]].append(max(v, 0))
    file = "pickleFiles/" + strstation + ".pickle"
    with open(file, "wb") as pickle_out:
        pickle.dump(d, pickle_out)
for i in range(1, 106):  # stations are numbered 1-105
    if i == 20:
        continue
    convert(f"stationRain{i}", f"stationRain{i}")
for i in range(1, 106):
    if i == 20:
        continue
    convert(f"stationDry{i}", f"stationDry{i}")
day = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
pickle_out = open("pickleFiles/day.pickle","wb")
pickle.dump(day, pickle_out)
pickle_out.close()
```
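The per-station pickles written by `parse` can be read back with `pickle.load`. A round-trip sketch using a temporary directory (so no `pickleFiles/` folder is assumed; the values are made up):

```python
import os
import pickle
import tempfile
from collections import defaultdict

# hourly predictions for one station: hour -> one value per day
d = defaultdict(list)
d[8] = [12.0, 11.5, 13.2]
d[9] = [7.0, 6.8, 7.4]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "stationRain1.pickle")
    with open(path, "wb") as fh:
        pickle.dump(d, fh)
    with open(path, "rb") as fh:
        restored = pickle.load(fh)

print(restored[8])  # [12.0, 11.5, 13.2]
```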
| github_jupyter |
### 6. Python API Training - Continuous Model Training [Solution]
<b>Author:</b> Thodoris Petropoulos <br>
<b>Contributors:</b> Rajiv Shah
This is the 6th exercise to complete in your `Python API Training for DataRobot` course! It teaches you how to deploy a trained model, make predictions (note that there are multiple ways to get predictions out of DataRobot), and monitor drift to decide when to replace a model.
Here are the actual sections of the notebook alongside time to complete:
1. Connect to DataRobot. [3min]<br>
2. Retrieve the first project created in `Exercise 4 - Model Factory`. [5min]
3. Search for the `recommended for deployment` model and deploy it as a rest API. [20min]
4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]
5. Check data drift. Does it look like the data is drifting? [3min]
6. Create a new project using data (2). [5min]
7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]
Each section will have specific instructions so do not worry if things are still blurry!
As always, consult:
- [API Documentation](https://datarobot-public-api-client.readthedocs-hosted.com)
- [Samples](https://github.com/datarobot-community/examples-for-data-scientists)
- [Tutorials](https://github.com/datarobot-community/tutorials-for-data-scientists)
The last two links should provide you with the snippets you need to complete most of these exercises.
<b>Data</b>
(1) The dataset we will be using throughout these exercises is the well-known `readmissions dataset`. You can access it or directly download it through DataRobot's public S3 bucket [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv).
(2) This dataset will be used to retrain the model. It can be accessed [here](https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv) through DataRobot's public S3 bucket.
### Import Libraries
Import libraries here as you start finding out what libraries are needed. The DataRobot package is already included for your convenience.
```
import datarobot as dr
#Proposed Libraries needed
import pandas as pd
```
### 1. Connect to DataRobot [3min]
```
#Possible solution
dr.Client(config_path='../../github/config.yaml')
```
### 2. Retrieve the first project created in `Exercise 4 - Model Factory` . [5min]
This should be the first project created during the exercise. Not one of the projects created using a sample of `readmission_type_id`.
```
#Proposed Solution
project = dr.Project.get('YOUR_PROJECT_ID')
```
### 3. Search for the `recommended for deployment` model and deploy it as a REST API. [10min]
**Hint**: The recommended model can be found using the `dr.ModelRecommendation.get` method.
**Hint 2**: Use the `update_drift_tracking_settings` method on the DataRobot Deployment object to enable data drift tracking.
```
# Proposed Solution
#Find the recommended model
recommended_model = dr.ModelRecommendation.get(project.id).get_model()
#Deploy the model
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(recommended_model.id, label='Readmissions Deployment', default_prediction_server_id=prediction_server.id)
deployment.update_drift_tracking_settings(feature_drift_enabled=True)
```
### 4. Create a scoring procedure using dataset (1) that will force data drift on that deployment. [25min]
**Instructions**
1. Take the first 100 rows of dataset (1) and save them to a Pandas DataFrame
2. Score 5 times using these observations to force drift.
3. Use the deployment you created during `question 3`.
**Hint**: The easiest way to score using a deployed model in DataRobot is to go to the `Deployments` page within DataRobot and navigate to the `Integrations` and `scoring code` tab. There you will find sample code for Python that you can use to score.
**Hint 2**: The only things you have to change for the code to work are the `filename` variable, which must point to the csv file to be scored, and adding a for loop around the scoring call.
```
# Proposed Solution
#Save the dataset that is going to be scored as a csv file
scoring_dataset = pd.read_csv('https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes.csv').head(100)
scoring_dataset.to_csv('scoring_dataset.csv', index=False)
#This has been copied from the `integrations` tab.
#The only thing you actually have to do is change the filename variable in the bottom of the script and
#create the for loop.
"""
Usage:
python datarobot-predict.py <input-file.csv>
This example uses the requests library which you can install with:
pip install requests
We highly recommend that you update SSL certificates with:
pip install -U urllib3[secure] certifi
"""
import sys
import json
import requests
DATAROBOT_KEY = ''
API_KEY = ''
USERNAME = ''
DEPLOYMENT_ID = ''
MAX_PREDICTION_FILE_SIZE_BYTES = 52428800 # 50 MB
class DataRobotPredictionError(Exception):
    """Raised if there are issues getting predictions from DataRobot"""


def make_datarobot_deployment_predictions(data, deployment_id):
    """
    Make predictions on data provided using DataRobot deployment_id provided.
    See docs for details:
    https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html

    Parameters
    ----------
    data : str
        Feature1,Feature2
        numeric_value,string
    deployment_id : str
        The ID of the deployment to make predictions with.

    Returns
    -------
    Response schema:
    https://app.eu.datarobot.com/docs/users-guide/predictions/api/new-prediction-api.html#response-schema

    Raises
    ------
    DataRobotPredictionError if there are issues getting predictions from DataRobot
    """
    # Set HTTP headers. The charset should match the contents of the file.
    headers = {'Content-Type': 'text/plain; charset=UTF-8', 'datarobot-key': DATAROBOT_KEY}
    url = 'https://cfds.orm.eu.datarobot.com/predApi/v1.0/deployments/{deployment_id}/'\
          'predictions'.format(deployment_id=deployment_id)
    # Make API request for predictions
    predictions_response = requests.post(
        url,
        auth=(USERNAME, API_KEY),
        data=data,
        headers=headers,
    )
    _raise_dataroboterror_for_status(predictions_response)
    # Return a Python dict following the schema in the documentation
    return predictions_response.json()


def _raise_dataroboterror_for_status(response):
    """Raise DataRobotPredictionError if the request fails along with the response returned"""
    try:
        response.raise_for_status()
    except requests.exceptions.HTTPError:
        err_msg = '{code} Error: {msg}'.format(
            code=response.status_code, msg=response.text)
        raise DataRobotPredictionError(err_msg)


def main(filename, deployment_id):
    """
    Return an exit code on script completion or error. Codes > 0 are errors to the shell.
    Also useful as a usage demonstration of
    `make_datarobot_deployment_predictions(data, deployment_id)`
    """
    if not filename:
        print(
            'Input file is required argument. '
            'Usage: python datarobot-predict.py <input-file.csv>')
        return 1
    data = open(filename, 'rb').read()
    data_size = sys.getsizeof(data)
    if data_size >= MAX_PREDICTION_FILE_SIZE_BYTES:
        print(
            'Input file is too large: {} bytes. '
            'Max allowed size is: {} bytes.'.format(
                data_size, MAX_PREDICTION_FILE_SIZE_BYTES))
        return 1
    try:
        predictions = make_datarobot_deployment_predictions(data, deployment_id)
    except DataRobotPredictionError as exc:
        print(exc)
        return 1
    print(json.dumps(predictions, indent=4))
    return 0


# Score the same 100 rows five times to force data drift
for i in range(5):
    filename = 'scoring_dataset.csv'
    main(filename, DEPLOYMENT_ID)
```
### 5. Check data drift. Does it look like the data is drifting? [3min]
Check data drift from within the `Deployments` page in the UI. Is data drift marked as red?
### 6. Create a new project using data (2). [5min]
Link to data: https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv
```
#Proposed solution
new_project = dr.Project.create(sourcedata = 'https://s3.amazonaws.com/datarobot_public_datasets/10k_diabetes_scoring.csv',
project_name = '06_New_Project')
new_project.set_target(target = 'readmitted', mode = 'quick', worker_count = -1)
new_project.wait_for_autopilot()
```
### 7. Replace the previously deployed model with the new `recommended for deployment` model from the new project. [10min]
**Hint**: You will have to provide a reason why you are replacing the model. Try: `dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT`.
```
#Proposed Solution
new_recommended_model = dr.ModelRecommendation.get(new_project.id).get_model()
deployment.replace_model(new_recommended_model.id, dr.enums.MODEL_REPLACEMENT_REASON.DATA_DRIFT)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Keras overview
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/guide/keras/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ru/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ru/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ru/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: this section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Since the translation is not official, we cannot guarantee that it is 100% accurate and consistent with the [official documentation in English](https://www.tensorflow.org/?hl=en). If you have a suggestion on how to improve this translation, we would be glad to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help make the TensorFlow documentation better (by translating or reviewing translations), write to us at the [docs-ru@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ru).
This guide gives you the basics to get started with Keras. Reading it takes about 10 minutes.
## Import tf.keras
`tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). It is a high-level API for building and training models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](../eager.ipynb), `tf.data` pipelines, and [Estimators](../estimator.ipynb). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance.
To get started, import `tf.keras` as part of your TensorFlow setup:
```
from __future__ import absolute_import, division, print_function, unicode_literals

try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
from tensorflow import keras
```
`tf.keras` can run any Keras-compatible code, but keep in mind:
* The `tf.keras` version in the latest TensorFlow release may differ from the latest `keras` version on PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](./save_and_serialize.ipynb), `tf.keras` defaults to the [checkpoint format](../checkpoint.ipynb). Pass `save_format='h5'` to use HDF5 (or add the `.h5` extension to the filename).
## Build a simple model
### Sequential model
In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model.
To build a simple fully connected network (i.e. a multi-layer perceptron):
```
from tensorflow.keras import layers

model = tf.keras.Sequential()
# Add a densely connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
```
You can find a short but complete example of how to use `Sequential` models [here](https://www.tensorflow.org/tutorials/quickstart/beginner).
To learn about building models more advanced than `Sequential` ones, see:
- [Guide to the Keras Functional API](./functional.ipynb)
- [Guide to writing layers and models from scratch with subclassing](./custom_layers_and_models.ipynb)
### Configure the layers
Many kinds of `tf.keras.layers` are available. Most of them share some common constructor arguments:
* `activation`: Sets the activation function for the layer. Specify the name of a built-in function or a callable object. By default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. Defaults to the `"Glorot uniform"` initializer.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes applied to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.
The following examples instantiate `tf.keras.layers.Dense` layers using constructor arguments:
```
# Create a layer with a sigmoid activation:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.keras.activations.sigmoid)

# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))

# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))

# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')

# A linear layer with a bias vector initialized to 2.0:
layers.Dense(64, bias_initializer=tf.keras.initializers.Constant(2.0))
```
## Train and evaluate
### Set up training
After the model is constructed, configure its learning process by calling the `compile` method:
```
model = tf.keras.Sequential([
    # Add a densely connected layer with 64 units to the model:
    layers.Dense(64, activation='relu', input_shape=(32,)),
    # Add another:
    layers.Dense(64, activation='relu'),
    # Add a softmax layer with 10 output units:
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```
`tf.keras.Model.compile` takes three important arguments:
* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.keras.optimizers` module, such as `tf.keras.optimizers.Adam` or `tf.keras.optimizers.SGD`. If you just want to use the defaults, you can also specify optimizers by string name, such as `'adam'` or `'sgd'`.
* `loss`: The function to minimize during training. Common choices include mean squared error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.
* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.
* Additionally, to make sure the model trains and evaluates eagerly, pass `run_eagerly=True` as a compile parameter.
Next, let's look at a few examples of configuring a model for training:
```
# Configure a model for mean-squared-error regression.
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='mse',        # mean squared error
              metrics=['mae'])   # mean absolute error

# Configure a model for categorical classification.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.CategoricalAccuracy()])
```
### Train from NumPy data
For small datasets, use in-memory [NumPy](https://www.numpy.org/) arrays to train and evaluate the model. The model is "fit" to the training data using the `fit` method:
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
```
`tf.keras.Model.fit` takes three important arguments:
* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (done in smaller batches).
* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument a tuple of inputs and labels allows the model to display the loss and metrics in inference mode for the passed data at the end of each epoch.
Here's an example using `validation_data`:
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
```
### Train from tf.data datasets
Use the [Datasets API](../data.ipynb) to scale to large datasets or to train on multiple devices. Pass a `tf.data.Dataset` instance to the `fit` method:
```
# Instantiate a training dataset:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)

model.fit(dataset, epochs=10)
```
Since the `Dataset` yields data in batches, this snippet does not need the `batch_size` argument.
Datasets can also be used for validation:
```
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32)
model.fit(dataset, epochs=10,
validation_data=val_dataset)
```
### Evaluate and predict
The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`.
Here's how to *evaluate* the inference-mode loss and metrics for the data provided:
```
# With a NumPy array
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.evaluate(data, labels, batch_size=32)

# With a dataset
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
model.evaluate(dataset)
```
And here's how to *predict* the output of the last layer in inference mode for the data provided, as a NumPy array:
```
result = model.predict(data, batch_size=32)
print(result.shape)
```
For a complete guide to training and evaluation, including how to write custom training loops from scratch, see the [guide to training and evaluation](./train_and_evaluate.ipynb).
## Build advanced models
### The Functional API
The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](./functional.ipynb) to build complex model topologies, such as:
* Multi-input models,
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).
Building a model with the functional API works like this:
1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.
3. This model is trained just like the `Sequential` model.
The following example uses the functional API to build a simple, fully connected network:
```
inputs = tf.keras.Input(shape=(32,))  # Returns an input placeholder

# A layer instance is callable on a tensor and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
```
Instantiate the model given the inputs and outputs.
```
model = tf.keras.Model(inputs=inputs, outputs=predictions)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```
### Model subclassing
Build a fully customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method.
Model subclassing is particularly useful when [eager execution](../eager.ipynb) is enabled, since it lets you write the forward pass imperatively.
Note: if you need your model to *always* run imperatively, you can set `dynamic=True` when calling the `super` constructor.
> Key point: use the right API for the job. While model subclassing offers flexibility, it comes at the cost of greater complexity and more opportunities for user error. If possible, prefer the functional API.
The following example shows a subclassed `tf.keras.Model` using a custom forward pass that does not have to be run imperatively:
```
class MyModel(tf.keras.Model):

    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.num_classes = num_classes
        # Define your layers here.
        self.dense_1 = layers.Dense(32, activation='relu')
        self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

    def call(self, inputs):
        # Define your forward pass here,
        # using the layers previously defined (in `__init__`).
        x = self.dense_1(inputs)
        return self.dense_2(x)
```
Instantiate the new model class:
```
model = MyModel(num_classes=10)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
### Custom layers
Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods:
* `__init__`: Optionally define the sublayers to be used by this layer.
* `build`: Create the layer's weights. Add weights with the `add_weight` method.
* `call`: Define the forward pass.
* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.
Here's an example of a custom layer that implements a matrix multiplication (`matmul`) of its input with a kernel matrix:
```
class MyLayer(layers.Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def get_config(self):
        base_config = super(MyLayer, self).get_config()
        base_config['output_dim'] = self.output_dim
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```
Create a model using your custom layer:
```
model = tf.keras.Sequential([
    MyLayer(10),
    layers.Activation('softmax')])

# The compile step specifies the training configuration
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
Learn more about creating new layers and models from scratch with subclassing in the [guide to writing layers and models from scratch](./custom_layers_and_models.ipynb).
## Callbacks
A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback or use the built-in `tf.keras.callbacks`, which include:
* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of the model at regular intervals.
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior with [TensorBoard](https://tensorflow.org/tensorboard).
To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
```
callbacks = [
    # Interrupt training if `val_loss` stops improving for 2 epochs
    tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
    # Write TensorBoard logs to the `./logs` directory
    tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(val_data, val_labels))
```
<a name='save_and_restore'></a>
## Save and restore
<a name="weights_only"></a>
### Saving weights only
Save and load a model's weights using `tf.keras.Model.save_weights`:
```
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')

# Restore the model's state;
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
```
By default, model weights are saved in the [TensorFlow checkpoint](../checkpoint.ipynb) format. Weights can also be saved in the Keras HDF5 format (the default for the multi-backend implementation of Keras):
```
# Save weights to an HDF5 file
model.save_weights('my_model.h5', save_format='h5')

# Restore the model's state
model.load_weights('my_model.h5')
```
### Saving the configuration only
A model's configuration can be saved - this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports the JSON and YAML serialization formats:
```
# Serialize the model to JSON format
json_string = model.to_json()
json_string

import json
import pprint
pprint.pprint(json.loads(json_string))
```
Recreate the model (freshly initialized) from the JSON:
```
fresh_model = tf.keras.models.model_from_json(json_string)
```
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
```
yaml_string = model.to_yaml()
print(yaml_string)
```
Recreate the model from the YAML:
```
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
```
Caution: subclassed models are not serializable this way, because their architecture is defined by the Python code in the body of the `call` method.
### Saving the entire model in one file
The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later, from exactly the same state, even without access to the original code.
```
# Create a simple model
model = tf.keras.Sequential([
    layers.Dense(10, activation='softmax', input_shape=(32,)),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)

# Save the entire model to an HDF5 file
model.save('my_model.h5')

# Recreate exactly this model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
```
Learn more about saving and serializing Keras models in the [guide to saving and serializing models](./save_and_serialize.ipynb).
<a name="eager_execution"></a>
## Eager execution
[Eager execution](../eager.ipynb) is an imperative programming environment that evaluates operations immediately. It is not required for Keras, but it is supported by `tf.keras` and useful for inspecting your program and for debugging.
All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers* - the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers).
See the [eager execution guide](../eager.ipynb) for examples of using Keras models with custom training loops and `tf.GradientTape`. You can also find a complete, short example [here](https://www.tensorflow.org/tutorials/quickstart/advanced).
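The custom-loop idea can be sketched as follows (a minimal, self-contained example on synthetic data; the layer sizes, learning rate, and step count are arbitrary choices, not taken from this guide):

```python
import numpy as np
import tensorflow as tf

# A small model, optimizer, and loss; the sizes here are arbitrary
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

# Synthetic regression data
x = np.random.random((64, 8)).astype('float32')
y = np.random.random((64, 1)).astype('float32')

for step in range(5):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)  # forward pass, recorded on the tape
        loss = loss_fn(y, pred)
    # Backpropagate and update the weights manually
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print(float(loss))
```

Each iteration records the forward pass on the tape, then `tape.gradient` computes the gradients that `apply_gradients` uses for the update; this is what `fit` does internally, but fully under your control.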
## Distribution
### Multiple GPUs
`tf.keras` models can run on multiple GPUs using `tf.distribute.Strategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code.
Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `distribute.Strategy`, nest the optimizer instantiation and the model construction and compilation in the `Strategy`'s `.scope()`, then train the model.
The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine.
First, define the model inside the distribution strategy scope:
```
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential()
    model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
    model.add(layers.Dense(1, activation='sigmoid'))

    optimizer = tf.keras.optimizers.SGD(0.2)

    model.compile(loss='binary_crossentropy', optimizer=optimizer)

model.summary()
```
Next, train the model on the data as usual:
```
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=1024).batch(32)
model.fit(dataset, epochs=1)
```
For more information, see the [complete guide on distributed training in TensorFlow](../distributed_training.ipynb).
```
import pandas as pd
df = pd.read_csv('X.txt',sep=',')
df.head()
df.收益率 = df.收益率.str.strip(to_strip='%')
df.head()
df.sort_values(by=['收益率','天数','车型','城市车商','众筹金额'],ascending=False).head()
df.to_csv('X.csv',index=False,encoding='utf8')
df.info()
df.城市车商.value_counts()
df[(df.城市车商.str[0]>=u'\u4e00') & (df.城市车商.str[0]<=u'\u9fff')].to_csv('zw_data.csv',index=False)
df.城市车商 = df.城市车商.str.strip()
df[(df.城市车商.str[0]<u'\u4e00') | (df.城市车商.str[0]>u'\u9fff')].to_csv('en_data.csv',index=False)
df = pd.read_csv('en_data.csv')
df.groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False)
df['城市']= df.城市车商.str[:4]
df.head()
df.groupby(by='城市').mean().sort_values(by='收益率',ascending=False).head(10)
df.groupby(by='城市').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')
df[df.城市=='CNVB'].groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')
df[df.城市=='NWVB'].groupby(by='城市车商').mean().sort_values(by='收益率',ascending=False).head(10).收益率.plot(kind='bar')
df[df.城市=='NWVB'].groupby(by='车型').agg(['mean','count']).sort_values(by=[('收益率','count'),('收益率','mean')],ascending=False).head(20)
df.groupby(by='车型').agg(['mean','count'])[df.groupby(by='车型').agg(['mean','count'])[('收益率','count')]>10].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)
df.groupby(by='车型').agg(['mean','count'])[df.groupby(by='车型').agg(['mean','count'])[('收益率','count')]>10].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)
df2 = df[df.城市=='NWVB'].groupby(by='车型').agg(['mean','count']).reset_index()
```
### Ranking of vehicle models in city NWVB by rate of return, for models with more than 5 cars:
```
df2[df2[('收益率','count')]>5].sort_values(by=[('收益率','mean'),('收益率','count')],ascending=False).head(20)
```
### Failure rate?
```
df = pd.read_csv('X.csv')
df.head()
df[df.天数.str.isdigit()].to_csv('X2.csv',encoding='utf8')
df = pd.read_csv('X2.csv')
df[(df.收益率>=7)&(df.收益率<=9)&(df.天数>85)].groupby(by='车型')['收益率'].count().reset_index().sort_values(by='收益率',ascending=False).head(10)
df.groupby(by='车型')['收益率'].count().reset_index().sort_values(by='收益率',ascending=False).head(10)
df.sample(1400).收益率.plot(kind='hist',bins=8)
df.reset_index().sample(2000)[(df.收益率<100)&(df.天数<90)].plot(kind='scatter',x=['天数'],y=['收益率'],s=1)
df['天数乘收益率'] = df.天数*df.收益率
df.groupby(by='车型')['车型','天数乘收益率'].agg(['mean','count']).sort_values(by=('天数乘收益率','mean'),ascending=False).head(10)
```
## Save as UTF-8 with BOM, otherwise the characters are garbled in Excel
```
df.to_csv('X2.csv',encoding='utf-8-sig',index=False)
from scipy.stats import zscore
import seaborn as sns
import numpy as np
df = pd.read_csv('X2.csv')
numeric_cols = df.select_dtypes(include=[np.number]).columns
numeric_cols
df[['天数乘收益率']].apply(zscore).head()
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.font_manager import FontProperties
font=FontProperties(fname='/Users/max/Library/Fonts/msyh.ttf',size=10)
plt.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # use a font that can render Chinese characters
plt.rcParams['axes.unicode_minus'] = False # keep the minus sign '-' from rendering as a box in saved figures
sns.set(font='Microsoft YaHei') # make Seaborn display Chinese characters correctly
sns.pairplot(df[numeric_cols].sample(2000))
```
### As the plots show, the longer the holding period, the higher the payout per 10,000 yuan; cars still unsold at 90 days have the highest payout per 10,000 yuan.
### An alternative strategy is to invest in listings with high annualized returns and keep rolling the capital over to maximize total profit.
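To make the contrast between the two strategies concrete, here is a tiny illustration on made-up numbers (not the crowdfunding data itself): slow-selling listings win on total payout per listing (`天数 * 收益率`), while fast-turning ones win on the return-to-days ratio (`收益率 / 天数`) that matters when rolling capital over.

```python
import pandas as pd

# Synthetic listings (illustrative values only)
df = pd.DataFrame({'车型': ['A', 'A', 'B', 'B'],
                   '收益率': [8.0, 9.0, 12.0, 11.0],   # percent
                   '天数': [90, 85, 20, 25]})          # days until resale

df['天数乘收益率'] = df['天数'] * df['收益率']   # strategy 1: payout per listing
df['收益率除天数'] = df['收益率'] / df['天数']   # strategy 2: capital turnover

summary = df.groupby('车型')[['天数乘收益率', '收益率除天数']].mean()
print(summary)
```

On these numbers, the slow model A ranks first by the first metric and the fast model B by the second, so the choice of metric decides the investment strategy.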
```
sns.jointplot(x='天数',y='每万元分红',data=df.sample(1000),xlim=(0,100),kind="reg",color="m");
df['收益率除天数'] = df.收益率/df.天数
df.head()
df2 = df.groupby(by='车型')['车型','收益率除天数'].agg(['median','count','std'])
```
### Worst case for return rate divided by days: 8% over 90 days, i.e. 0.089
```
df2[df2[('收益率除天数','count')]>3].sort_values(by=('收益率除天数','median'),ascending=False).head(20)
df[df.车型=='现代劳恩斯']
df.to_csv('X3.csv',encoding='utf-8-sig',index=False)
df2.to_csv('X32.csv',encoding='utf-8-sig',index=False)
df = pd.read_csv('X3.csv')
```
### The plot below shows that as days increase, the return-to-days ratio drops quickly; high ratios are concentrated within 20 days
```
sns.jointplot(y='收益率除天数',x='天数',data=df.sample(400),kind="scatter",color="m",ylim=(0,3),xlim=(0,90));
```
### The plot below shows that crowdfunding amounts under 200,000 tend to have a larger return-to-days ratio
```
sns.jointplot(y='收益率除天数',x='众筹金额',data=df.sample(1000),kind="kde",color="m",ylim=(0,3),xlim=(0,400000),s=2,cbar=True);
sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2.5})
sns.kdeplot(df.sample(1000).众筹金额.astype(float),df.sample(1000).收益率除天数,cmap="Reds", shade=True, shade_lowest=False,kernel='epa',cbar=True).set(ylim=(0,3),xlim=(0,500000));
df.收益率除天数.agg(['mean','std'])
plt.hist(df.收益率除天数,range=(0,3));
sns.kdeplot(df.sample(2000).收益率除天数,cumulative=True,shade=True,cbar=True).set(xlim=(0,3))
sns.jointplot(x='天数',y='众筹金额',data=df.sample(1000),kind='hex',bins=10,xlim=(0,90),ylim=(0,500000))
```
### High-quality dealers: fast repayment, high return rate
```
df2 = df.groupby(by='城市车商')[['收益率除天数']].agg(['median','count','std','min','max'])
df2[df2[('收益率除天数','count')]>3].reset_index().sort_values(by=('收益率除天数','median'),ascending=False).head(20)
```
<div>
<img src="figures/svtLogo.png"/>
</div>
<h1><center>Mathematical Optimization for Engineers</center></h1>
<h2><center>Lab 14 - Uncertainty</center></h2>
We want to optimize the total annualized cost of a heating and electric power system. Three different technologies are present:
- a gas boiler
- a combined heat and power plant
- a photovoltaic module
We first solve the nominal case without uncertainties.
Next, we will use a two-stage approach to account for uncertainties in the electricity demand and the power producible via PV.
The uncertain variables are the solar power and the power demand.
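As a rough sketch of the two-stage idea (names and numbers here are illustrative, not the lab's solution): the design capacities are fixed once, while operating decisions adapt per scenario, so the objective is the investment cost plus the probability-weighted operating cost over all scenarios.

```python
# Illustrative two-stage objective: one first-stage (design) cost plus
# the expected second-stage (operating) cost over scenarios.
# Scenario probabilities "p" are assumed to sum to 1.
scenarios = [
    {"p": 0.5, "op_cost": 1000.0},
    {"p": 0.5, "op_cost": 1400.0},
]

def total_annualized_cost(invest_cost, scenarios):
    expected_oper = sum(s["p"] * s["op_cost"] for s in scenarios)
    return invest_cost + expected_oper

print(total_annualized_cost(500.0, scenarios))  # 500 + 0.5*1000 + 0.5*1400 = 1700.0
```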
```
# import cell
from scipy.optimize import minimize, NonlinearConstraint, Bounds
class Boiler():
"""Boiler
Gas in, heat out
"""
def __init__(self):
self.M = 0.75
def invest_cost(self, Qdot_nom):
inv = 100 * Qdot_nom ** self.M
return inv
def oper_cost(self, Qdot_nom, op_load):
cost_gas = 60
cost_gas_oper = Qdot_nom * cost_gas * op_load
return cost_gas_oper
def heat(self, Qdot_nom, op_load):
eta_th = 0.9 - (1 - op_load) * 0.05
return Qdot_nom * op_load * eta_th
class CHP():
"""Combined-heat-and-power (CHP) engine
Gas in, heat and power out
"""
def __init__(self):
self.c_ref = 150
self.M = 0.85 # [-], cost exponent
self.cost_gas = 60
def invest_cost(self, Qdot_nom):
inv = self.c_ref * (Qdot_nom) ** self.M
return inv
def oper_cost(self, Qdot_nom, op_load):
cost_gas_oper = Qdot_nom * op_load * self.cost_gas
return cost_gas_oper
def elec_out(self, Qdot_nom, op_load):
eta_el = 0.3 - (1 - op_load) * 0.1
out_pow = eta_el * Qdot_nom * op_load
return out_pow
def heat(self, Qdot_nom, op_load):
eta_th = 0.6 - (1-op_load) * 0.05
return Qdot_nom * eta_th * op_load
class PV:
"""Photovoltaic modules (PV)
solar
"""
def __init__(self):
self.M = 0.9 # [-], cost exponent
def invest_cost(self, p_nom):
inv = 200 * p_nom ** self.M
return inv
def oper_cost(self, out_nom):
return 0
def elec_out(self, p_nom, op_load, solar):
return p_nom * op_load * solar
def objective_function(x, PV, Boiler, CHP, scenarios):
total_cost = 0
design_PV = x[0]
design_boiler = x[1]
design_CHP = x[2]
# investment costs
# your code here
# expected operating costs
# your code here
return total_cost
def constraint_function(x, PV, Boiler, CHP, scenarios):
heat_demand = 200
design_PV = x[0]
design_boiler = x[1]
design_CHP = x[2]
# loop over all uncertainties (scenarios)
# heat demand
# electricity demand
return c
def print_solution(x):
print('PV design: ', x[0])
print('Boiler design: ', x[1])
print('CHP design: ', x[2])
# nominal case
scenario1 = {"p": 1.0, "solar":1.0, "elec": 100}
scenarios = [scenario1] # base scenario
# now consider different scenarios
myPV = PV()
myBoiler = Boiler()
myCHP = CHP()
cons = lambda x: constraint_function(x, myPV, myBoiler, myCHP, scenarios)
obj = lambda x: objective_function(x, myPV, myBoiler, myCHP, scenarios)
# constraints need bounds
# your code here
# bounds for operation: 0 to 1
x_guess = [200,200,200, 1,1,1 ]
# bounds for decision variables
# your code here
bnds = Bounds(lbs, ubs)
res = minimize(obj, x_guess, method = 'SLSQP', bounds=bnds,
constraints = nonlinear_constraints,
options={"maxiter": 15, 'iprint': 2, 'disp': True})
print_solution(res.x)
# nominal
# uncertainties: power demand and solar power (relative to 1.0)
scenario1 = {"p": 0.40, "solar":1.0, "elec": 100}
scenario2 = {"p": 0.3, "solar":1.0, "elec": 120}
scenario3 = {"p": 0.3, "solar":0.5, "elec": 80}
# put scenarios together
# your code here
myPV = PV()
myBoiler = Boiler()
myCHP = CHP()
cons = lambda x: constraint_function(x, myPV, myBoiler, myCHP, scenarios)
obj = lambda x: objective_function(x, myPV, myBoiler, myCHP, scenarios)
# bounds and constraints
# your code here
res = minimize(obj, x_guess, method = 'SLSQP', bounds=bnds,
constraints = nonlinear_constraints,
options={"maxiter": 15, 'iprint': 2, 'disp': True})
print_solution(res.x)
```
```
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
```
Notes:
- initialize b with zeros
- initialize w with tf, not np
```
# Load the MNIST dataset into ./data/MNIST/ under the current directory; download it there if missing
# each image has 28*28=784 pixels, each a float encoding its brightness
mnist = input_data.read_data_sets("./data/MNIST/", one_hot=True)
# helper that plots the generated images
def plot(samples):
fig = plt.figure(figsize=(4, 4))
gs = gridspec.GridSpec(4, 4)
gs.update(wspace=0.05, hspace=0.05)
for i, sample in enumerate(samples):
ax = plt.subplot(gs[i])
plt.axis('off')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_aspect('equal')
plt.imshow(sample.reshape(28, 28), cmap='Greys_r')
return fig
batch_size = 128
z_dim = 200
def variable_init(size):
# He initialization: sqrt(2./dim of the previous layer)
# np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2./layers_dims[l-1])
in_dim = size[0]
return tf.random_normal(shape=size, stddev=np.sqrt(2./in_dim))
# define and initialize variables
X = tf.placeholder(tf.float32, shape=(None, 784))
Z = tf.placeholder(tf.float32, shape=(None, z_dim))
DW1 = tf.Variable(variable_init([784, 128]))
Db1 = tf.Variable(tf.zeros(shape=[128]))
DW2 = tf.Variable(variable_init([128, 1]))
Db2 = tf.Variable(tf.zeros(shape=[1]))
theta_D = [DW1, DW2, Db1, Db2]
GW1 = tf.Variable(variable_init([z_dim, 128]))
Gb1 = tf.Variable(tf.zeros(shape=[128]))
GW2 = tf.Variable(variable_init([128, 784]))
Gb2 = tf.Variable(tf.zeros(shape=[784]))
theta_G = [GW1, GW2, Gb1, Gb2]
# random-noise generator:
# samples z to feed the placeholder Z
def noise_maker(m, n):
return np.random.uniform(-1.0, 1.0, size=[m, n])
# generator: maps noise z to per-pixel probabilities,
# i.e. its output is an (attempted) image;
# produces an N * 784 result
def generator(z):
# tanh, relu, ... all work here
Gh1 = tf.nn.relu(tf.matmul(z, GW1) + Gb1)
G_logit = tf.matmul(Gh1, GW2) + Gb2
# sigmoid here because the pixel outputs need not sum to 1
G_prob = tf.nn.sigmoid(G_logit)
return G_prob
# discriminator
def discriminator(x):
# tanh, relu, ...
Dh1 = tf.nn.relu(tf.matmul(x, DW1) + Db1)
D_logit = tf.matmul(Dh1, DW2) + Db2
# D_prob = tf.nn.sigmoid(D_logit)
return D_logit # , D_prob
# loss functions
D_real_logit = discriminator(X) # D_real_prob,
D_fake_logit = discriminator(generator(Z)) # D_fake_prob,
D_X = tf.concat([D_real_logit, D_fake_logit], 1)
D_y = tf.concat([tf.ones_like(D_real_logit), tf.zeros_like(D_fake_logit)], 1)
D_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_X, labels=D_y))
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake_logit, labels=tf.ones_like(D_fake_logit)))
D_opt = tf.train.AdamOptimizer().minimize(D_loss, var_list=theta_D)
G_opt = tf.train.AdamOptimizer().minimize(G_loss, var_list=theta_G)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
if not os.path.exists('out_exercise/'):
os.makedirs('out_exercise/')
i = 0
for it in range(20000):
if it % 2000 == 0:
# 16 sample images
samples = sess.run(generator(Z), feed_dict={Z: noise_maker(16, z_dim)})
fig = plot(samples)
plt.savefig('out_exercise/{}.png'.format(str(i).zfill(3)), bbox_inches='tight')
i += 1
plt.close(fig)
X_mb, _ = mnist.train.next_batch(batch_size)
_, D_loss_curr = sess.run([D_opt, D_loss], feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})
_, G_loss_curr = sess.run([G_opt, G_loss], feed_dict={Z: noise_maker(batch_size, z_dim)})
# sam,fakeprob,fakelogit = sess.run([generator(Z), D_fake_prob, D_fake_logit],
# feed_dict={X: X_mb, Z: noise_maker(batch_size, z_dim)})
if it % 2000 == 0:
print('Iter: {} D_loss: {:.4}, G_loss: {:.4}'.format(it, D_loss_curr, G_loss_curr))
samples.shape
plot(samples)
```
```
import pickle
import numpy as np
import collections
import matplotlib.pyplot as plt
import copy
import matplotlib.ticker as ticker
import mpmath as mp
from mpmath import gammainc
def power_law(x,s_min, s_max, alpha):
C = ((1-alpha)/(s_max**(1-alpha)-s_min**(1-alpha)))
return [C*x[i]**(-alpha) for i in range(len(x))]
def stretched_exponential(x,s_min,lamda,beta):
C = beta*np.exp((s_min**beta)/lamda)/lamda
return [C*(x[i]**(beta-1))*np.exp(-(x[i]**beta)/lamda) for i in range(len(x))]
def exponential(x,s_min,lamda):
C = lamda*np.exp(lamda*s_min)
return [C*np.exp(-lamda*x[i]) for i in range(len(x))]
def truncated_power_law(x,s_min,alpha,lamda):
C = float(((lamda)**(1-alpha))/gammainc(1-alpha, s_min*lamda))
return [C*(x[i]**(-alpha))*(np.exp(-lamda*x[i])) for i in range(len(x))]
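# Quick sanity check (illustrative, not part of the original analysis): the
# constant C in power_law() normalizes the density on [s_min, s_max], so the
# closed-form integral of C*x**(-alpha) over that interval is 1.
_s_min, _s_max, _alpha = 5, 1000, 1.6
_C = (1 - _alpha) / (_s_max ** (1 - _alpha) - _s_min ** (1 - _alpha))
_area = _C * (_s_max ** (1 - _alpha) - _s_min ** (1 - _alpha)) / (1 - _alpha)
# _area equals 1.0 up to floating-point rounding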
grid_sizes = [128,256,512,1024]
density = 0.59
replicates = [1,2,3,4,5]
numbers_per_replicate = 2000000
dpt = []
npt = []
dictionary = {}
for grid_size in grid_sizes:
del dpt
del npt
dpt = []
npt = []
for replicate in replicates:
filename_dp = "dp_transformations_" + str(grid_size) + "_" + str(density) + "_" + str(numbers_per_replicate) + "_" + str(replicate) + ".txt"
filename_np = "np_transformations_" + str(grid_size) + "_" + str(density) + "_" + str(numbers_per_replicate) + "_" + str(replicate) + ".txt"
with open(filename_dp) as f:
dpt.extend([tuple(map(int, i.split(' '))) for i in f])
with open(filename_np) as f:
npt.extend([tuple(map(int, i.split(' '))) for i in f])
npscs = [abs(i[1]-i[0]) for i in npt]
npsc_freqs = dict(collections.Counter(npscs))
npsc_freqs = {k: v / (len(npscs)) for k, v in npsc_freqs.items()}
dpscs = [abs(i[1]-i[0]) for i in dpt]
dpsc_freqs = dict(collections.Counter(dpscs))
dpsc_freqs = {k: v / (len(dpscs)) for k, v in dpsc_freqs.items()}
np_lists = sorted(npsc_freqs.items())
dp_lists = sorted(dpsc_freqs.items())
np_x, np_y = zip(*np_lists)
dp_x, dp_y = zip(*dp_lists)
cy_dp = []
cy_np = []
for i in range(len(dp_y)):
cy_dp.append(sum([dp_y[j] for j in range(i,len(dp_y))]))
for i in range(len(np_y)):
cy_np.append(sum([np_y[j] for j in range(i,len(np_y))]))
dictionary[str(grid_size)+'_np_x'] = np_x
dictionary[str(grid_size)+'_dp_x'] = dp_x
dictionary[str(grid_size)+'_np_y'] = np_y
dictionary[str(grid_size)+'_dp_y'] = dp_y
dictionary[str(grid_size)+'_cy_dp'] = cy_dp
dictionary[str(grid_size)+'_cy_np'] = cy_np
# Determined using the Fitting Notebook
x_max = max(np_x)
x_min = 5
alpha = 1.608
lamda = 1/224091.1
x = [i for i in range(x_min,x_max+1)]
y = truncated_power_law(x, x_min, alpha, lamda)
A = (np_y[4]/y[0])
y = [(np_y[4]/y[0])*i for i in y]
fig, ax1 = plt.subplots(figsize=(5,4))
# These are in unitless percentages of the figure size. (0,0 is bottom left)
left, bottom, width, height = [0.58, 0.58, 0.3, 0.27]
ax2 = fig.add_axes([left, bottom, width, height])
ax1.scatter(dictionary['1024_dp_x'], dictionary['1024_dp_y'],s=10,marker='o',label='DP',color='red',facecolors='none')
ax1.scatter(dictionary['1024_np_x'], dictionary['1024_np_y'],s=10,marker='d',label='P',color='black',facecolors='none')
ax1.plot(x,y,color="blue",linewidth=2,label='fit')
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_xlabel('$|\Delta s|$')
ax1.set_ylabel('$P(S=|\Delta s|)$')
ax1.legend(loc="lower left")
ax1.set_ylim(5*10**(-8),10**(0))
ax1.set_xlim(1,300000)
ax1.tick_params(bottom=True,top=True,left=True,right=True)
ax2.scatter(dictionary['1024_np_x'], dictionary['1024_cy_np'],marker='.',s=0.8,color='red',label='1024')
ax2.scatter(dictionary['512_np_x'], dictionary['512_cy_np'],marker='.',s=0.8,color='blue',label='512')
ax2.scatter(dictionary['256_np_x'], dictionary['256_cy_np'],marker='.',s=0.8,color='black',label='256')
ax2.scatter(dictionary['128_np_x'], dictionary['128_cy_np'],marker='.',s=0.8,color='orange',label='128')
ax2.set_xscale('log')
ax2.set_yscale('log')
ax2.set_xlabel('$|\Delta s|$')
ax2.set_ylabel('$P(S<|\Delta s|)$')
#ax2.legend(loc="lower left")
ax2.set_ylim(5*10**(-8),10**(0))
ax2.set_xlim(1,300000)
ax2.text(0.55, 0.85, 'P', transform=ax2.transAxes, ha="right")
ax1.set_title('$P: \\rho = $'+str(density))
#plt.savefig("Figure_5.png",dpi=300)
del dictionary
```
# TensorFlow Tutorial #02
# Convolutional Neural Network
These lessons are adapted from [tutorials](https://github.com/Hvass-Labs/TensorFlow-Tutorials)
by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)
which are published under the [MIT License](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/LICENSE) which allows very broad use for both academic and commercial purposes.
## Introduction
The previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.
In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.
Convolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.
## Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
```
from IPython.display import Image
Image('images/02_network_flowchart.png')
```
The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
## Convolutional Layer

Convolutional Networks [https://youtu.be/jajksuQW4mc](https://youtu.be/jajksuQW4mc)

Introduction to Deep Learning: What Are Convolutional Neural Networks? [https://youtu.be/ixF5WNpTzCA](https://youtu.be/ixF5WNpTzCA)

MIT 6.S191 Lecture 3: Convolutional Neural Networks [https://youtu.be/v5JvvbP0d44](https://youtu.be/v5JvvbP0d44)
The following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.
The red filter-weights means that the filter has a positive reaction to black pixels in the input image, while blue pixels means the filter has a negative reaction to black pixels.
In this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.
```
Image('images/02_convolution.png')
```
The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
In the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.
When the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.
Furthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.
Note that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.
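As a tiny illustrative check in plain Python (not part of the network code): ReLU and max-pooling commute on a single window, which is why the layer code later in this notebook applies ReLU after pooling to save operations.

```python
# One 2x2 pooling window of values: relu(max(window)) == max(relu(v) for v in window).
window = [-1.0, 3.0, 2.0, -4.0]

def relu(v):
    return max(v, 0.0)

print(relu(max(window)), max(relu(v) for v in window))  # 3.0 3.0
```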
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
```
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
```
tf.__version__
```
## Configuration of Neural Network
The configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.
```
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
```
## Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
```
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
```
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
```
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
```
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
```
data.test.cls = np.argmax(data.test.labels, axis=1)
```
## Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
```
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
```
### Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
```
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
```
### Plot a few images to see if data is correct
```
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
```
## TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
* Placeholder variables used for inputting data to the graph.
* Variables that are going to be optimized so as to make the convolutional network perform better.
* The mathematical formulas for the convolutional network.
* A cost measure that can be used to guide the optimization of the variables.
* An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
### TensorFlow graphs
TensorFlow is based on graph-based computation – “what on earth is that?”, you might say. It’s an alternative way of conceptualising mathematical calculations. TensorFlow is the second machine learning framework that Google created and used to design, build, and train deep learning models. You can use the TensorFlow library to do numerical computations, which in itself doesn’t seem all too special, but these computations are done with data flow graphs. In these graphs, nodes represent mathematical operations, while the edges represent the data, usually multidimensional data arrays or tensors, that is communicated between the nodes.
You see? The name “TensorFlow” is derived from the operations which neural networks perform on multidimensional data arrays or tensors! It’s literally a flow of tensors.
Let's work with some basics of TensorFlow by defining two constants and printing their product.
### Helper-functions for creating new variables
Functions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.
```
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
```
### Helper-function for creating a new Convolutional Layer
This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 4-dim tensor with the following dimensions:
1. Image number.
2. Y-axis of each image.
3. X-axis of each image.
4. Channels of each image.
Note that the input channels may either be colour-channels, or it may be filter-channels if the input is produced from a previous convolutional layer.
The output is another 4-dim tensor with the following dimensions:
1. Image number, same as input.
2. Y-axis of each image. If 2x2 pooling is used, then the height and width of the input images is divided by 2.
3. X-axis of each image. Ditto.
4. Channels produced by the convolutional filters.
```
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
```
### Helper-function for flattening a layer
A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
```
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
```
### Helper-function for creating a new Fully-Connected Layer
This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 2-dim tensor of shape `[num_images, num_inputs]`. The output is a 2-dim tensor of shape `[num_images, num_outputs]`.
```
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
```
### Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
```
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
```
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
```
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
```
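The -1 size-inference works the same way as NumPy's `reshape`, which may help build intuition. A quick sketch using a hypothetical batch of 3 flattened 28x28 images:

```
import numpy as np

# A hypothetical batch of 3 flattened 28x28 grayscale images.
flat = np.zeros((3, 28 * 28))

# -1 tells reshape to infer that dimension from the total size,
# here recovering the number of images automatically.
img = flat.reshape(-1, 28, 28, 1)

print(img.shape)  # (3, 28, 28, 1)
```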
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
```
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
```
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
```
y_true_cls = tf.argmax(y_true, dimension=1)
```
### Convolutional Layer 1
Create the first convolutional layer. It takes `x_image` as input and creates `num_filters1` different filters, each having width and height equal to `filter_size1`. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
```
layer_conv1, weights_conv1 = \
    new_conv_layer(input=x_image,
                   num_input_channels=num_channels,
                   filter_size=filter_size1,
                   num_filters=num_filters1,
                   use_pooling=True)
```
Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
```
layer_conv1
```
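The shape arithmetic can be checked in plain Python. This assumes the convolution uses 'SAME' padding (so it preserves width and height) and that each 2x2 max-pooling step halves the spatial size:

```
# Assuming 'SAME' padding and 2x2 max-pooling, each conv-layer
# halves the spatial size: 28 -> 14 -> 7.
size = 28
for layer in range(2):
    size = size // 2  # effect of one 2x2 max-pooling step
    print(size)       # 14, then 7
```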
### Convolutional Layer 2
Create the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.
```
layer_conv2, weights_conv2 = \
    new_conv_layer(input=layer_conv1,
                   num_input_channels=num_filters1,
                   filter_size=filter_size2,
                   num_filters=num_filters2,
                   use_pooling=True)
```
Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
```
layer_conv2
```
### Flatten Layer
The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.
```
layer_flat, num_features = flatten_layer(layer_conv2)
```
Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
```
layer_flat
num_features
```
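As a quick sanity-check of the arithmetic, the same flattening can be done in NumPy on a hypothetical batch of two such tensors:

```
import numpy as np

# The last conv-layer outputs shape (num_images, 7, 7, 36);
# flattening turns each image into a vector of 7*7*36 = 1764 values.
batch = np.zeros((2, 7, 7, 36))
flat = batch.reshape(2, -1)

print(flat.shape)  # (2, 1764)
```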
### Fully-Connected Layer 1
Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is `fc_size`. ReLU is used so we can learn non-linear relations.
```
layer_fc1 = new_fc_layer(input=layer_flat,
                         num_inputs=num_features,
                         num_outputs=fc_size,
                         use_relu=True)
```
Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and `fc_size` == 128.
```
layer_fc1
```
### Fully-Connected Layer 2
Add another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.
```
layer_fc2 = new_fc_layer(input=layer_fc1,
                         num_inputs=fc_size,
                         num_outputs=num_classes,
                         use_relu=False)
layer_fc2
```
### Predicted Class
The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in `y_pred`.
```
y_pred = tf.nn.softmax(layer_fc2)
```
The class-number is the index of the largest element.
```
y_pred_cls = tf.argmax(y_pred, dimension=1)
```
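For intuition, softmax and argmax can be sketched in NumPy with some hypothetical logits:

```
import numpy as np

# Hypothetical raw outputs (logits) of the last layer for one image.
logits = np.array([1.0, 3.0, 0.5])

# Softmax: exponentiate and normalize so the values lie in (0, 1)
# and sum to one. Subtracting the max improves numerical stability.
e = np.exp(logits - logits.max())
probs = e / e.sum()

print(probs)            # probabilities summing to one
print(np.argmax(probs)) # 1, the index of the largest element
```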
### Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model `y_pred` to the desired output `y_true`.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.
TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of `layer_fc2` directly rather than `y_pred` which has already had the softmax applied.
```
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
                                                        labels=y_true)
```
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
```
cost = tf.reduce_mean(cross_entropy)
```
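As a rough illustration of why lower cross-entropy means a better prediction, here is a NumPy sketch using hypothetical probability vectors:

```
import numpy as np

# One-hot true label and two hypothetical predictions.
y_true_vec = np.array([0.0, 1.0, 0.0])
good       = np.array([0.01, 0.98, 0.01])
bad        = np.array([0.60, 0.20, 0.20])

def cross_entropy_np(p, q):
    # -sum over classes of true_prob * log(predicted_prob)
    return -np.sum(p * np.log(q))

# The closer the prediction matches the true label,
# the closer the cross-entropy is to zero.
print(cross_entropy_np(y_true_vec, good) < cross_entropy_np(y_true_vec, bad))  # True
```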
### Optimization Method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the `AdamOptimizer` which is an advanced form of Gradient Descent.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
```
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
```
### Performance Measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
```
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
```
This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
```
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
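The same cast-and-average calculation can be illustrated in NumPy with hypothetical class-numbers:

```
import numpy as np

# Hypothetical predicted and true class-numbers for 5 images.
cls_pred = np.array([3, 1, 4, 1, 5])
cls_true = np.array([3, 1, 4, 0, 5])

correct = (cls_pred == cls_true)              # vector of booleans
acc = correct.astype(np.float32).mean()       # False -> 0.0, True -> 1.0

print(acc)  # 0.8
```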
## TensorFlow Run
### Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
```
session = tf.Session()
```
### Initialize variables
The variables for `weights` and `biases` must be initialized before we start optimizing them.
```
session.run(tf.global_variables_initializer())
```
### Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
```
train_batch_size = 64
```
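A sketch of what `next_batch()` might do internally; this is an assumption for illustration, not MNIST's actual implementation, and the data here is hypothetical:

```
import numpy as np

# Hypothetical training data (shapes match MNIST's 55,000 images).
images = np.zeros((55000, 784))
labels = np.zeros((55000, 10))

# One plausible implementation: sample 64 random rows per batch.
idx = np.random.choice(len(images), size=64, replace=False)
x_batch, y_true_batch = images[idx], labels[idx]

print(x_batch.shape)  # (64, 784)
```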
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
```
# Counter for total number of iterations performed so far.
total_iterations = 0

def optimize(num_iterations):
    # Ensure we update the global variable rather than a local copy.
    global total_iterations

    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(total_iterations,
                   total_iterations + num_iterations):

        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)

        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}

        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)

        # Print status every 100 iterations.
        if i % 100 == 0:
            # Calculate the accuracy on the training-set.
            acc = session.run(accuracy, feed_dict=feed_dict_train)

            # Message for printing.
            msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"

            # Print it.
            print(msg.format(i + 1, acc))

    # Update the total number of iterations performed.
    total_iterations += num_iterations

    # Ending time.
    end_time = time.time()

    # Difference between start and end-times.
    time_dif = end_time - start_time

    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
```
### Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
```
def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.

    # Negate the boolean array.
    incorrect = (correct == False)

    # Get the images from the test-set that have been
    # incorrectly classified.
    images = data.test.images[incorrect]

    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]

    # Get the true classes for those images.
    cls_true = data.test.cls[incorrect]

    # Plot the first 9 images.
    plot_images(images=images[0:9],
                cls_true=cls_true[0:9],
                cls_pred=cls_pred[0:9])
```
### Helper-function to plot confusion matrix
```
def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # Get the true classifications for the test-set.
    cls_true = data.test.cls

    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_true,
                          y_pred=cls_pred)

    # Print the confusion matrix as text.
    print(cm)

    # Plot the confusion matrix as an image.
    plt.matshow(cm)

    # Make various adjustments to the plot.
    plt.colorbar()
    tick_marks = np.arange(num_classes)
    plt.xticks(tick_marks, range(num_classes))
    plt.yticks(tick_marks, range(num_classes))
    plt.xlabel('Predicted')
    plt.ylabel('True')

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the plotting functions above directly from this function, so the classifications don't have to be recalculated for each plot.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
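The batching pattern used in the function below can be sketched with a dummy stand-in for the TensorFlow call: process `num_test` items in chunks and fill a preallocated array.

```
import numpy as np

# Toy sizes so the pattern is easy to follow.
num_test = 10
batch_size = 4
cls_pred = np.zeros(num_test, dtype=int)

i = 0
while i < num_test:
    j = min(i + batch_size, num_test)  # end-index of this batch
    cls_pred[i:j] = i                  # stand-in for session.run(...)
    i = j                              # start-index of the next batch

print(cls_pred)  # [0 0 0 0 4 4 4 4 8 8]
```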
```
# Split the test-set into smaller batches of this size.
test_batch_size = 256

def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):

    # Number of images in the test-set.
    num_test = len(data.test.images)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=np.int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0

    while i < num_test:
        # The ending index for the next batch is denoted j.
        j = min(i + test_batch_size, num_test)

        # Get the images from the test-set between index i and j.
        images = data.test.images[i:j, :]

        # Get the associated labels.
        labels = data.test.labels[i:j, :]

        # Create a feed-dict with these images and labels.
        feed_dict = {x: images,
                     y_true: labels}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    # Convenience variable for the true class-numbers of the test-set.
    cls_true = data.test.cls

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()

    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / num_test

    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))

    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)

    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
```
## Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
```
print_test_accuracy()
```
## Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
```
optimize(num_iterations=1)
print_test_accuracy()
```
## Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
```
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
```
## Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
```
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
```
## Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
```
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
```
## Visualization of Weights and Layers
In trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.
### Helper-function for plotting convolutional weights
```
def plot_conv_weights(weights, input_channel=0):
    # Assume weights are TensorFlow ops for 4-dim variables
    # e.g. weights_conv1 or weights_conv2.

    # Retrieve the values of the weight-variables from TensorFlow.
    # A feed-dict is not necessary because nothing is calculated.
    w = session.run(weights)

    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    w_max = np.max(w)

    # Number of filters used in the conv. layer.
    num_filters = w.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # See new_conv_layer() for details on the format
            # of this 4-dim tensor.
            img = w[:, :, input_channel, i]

            # Plot image.
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Helper-function for plotting the output of a convolutional layer
```
def plot_conv_layer(layer, image):
    # Assume layer is a TensorFlow op that outputs a 4-dim tensor
    # which is the output of a convolutional layer,
    # e.g. layer_conv1 or layer_conv2.

    # Create a feed-dict containing just one image.
    # Note that we don't need to feed y_true because it is
    # not used in this calculation.
    feed_dict = {x: [image]}

    # Calculate and retrieve the output values of the layer
    # when inputting that image.
    values = session.run(layer, feed_dict=feed_dict)

    # Number of filters used in the conv. layer.
    num_filters = values.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot the output images of all the filters.
    for i, ax in enumerate(axes.flat):
        # Only plot the images for valid filters.
        if i < num_filters:
            # Get the output image of using the i'th filter.
            # See new_conv_layer() for details on the format
            # of this 4-dim tensor.
            img = values[0, :, :, i]

            # Plot image.
            ax.imshow(img, interpolation='nearest', cmap='binary')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```
### Input Images
Helper-function for plotting an image.
```
def plot_image(image):
    plt.imshow(image.reshape(img_shape),
               interpolation='nearest',
               cmap='binary')
    plt.show()
```
Plot an image from the test-set which will be used as an example below.
```
image1 = data.test.images[0]
plot_image(image1)
```
Plot another example image from the test-set.
```
image2 = data.test.images[13]
plot_image(image2)
```
### Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
```
plot_conv_weights(weights=weights_conv1)
```
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
```
plot_conv_layer(layer=layer_conv1, image=image1)
```
The following images are the results of applying the convolutional filters to the second image.
```
plot_conv_layer(layer=layer_conv1, image=image2)
```
It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
### Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.
Note again that positive weights are red and negative weights are blue.
```
plot_conv_weights(weights=weights_conv2, input_channel=0)
```
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
```
plot_conv_weights(weights=weights_conv2, input_channel=1)
```
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.
Note that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.
```
plot_conv_layer(layer=layer_conv2, image=image1)
```
And these are the results of applying the filter-weights to the second image.
```
plot_conv_layer(layer=layer_conv2, image=image2)
```
From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
These images are then flattened and input to the fully-connected layer, but that is not shown here.
### Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
```
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
session.close()
```
## Conclusion
We have seen that a Convolutional Neural Network works much better at recognizing hand-written digits than the simple linear model in Tutorial #01. The Convolutional Network gets a classification accuracy of about 99%, or even more if you make some adjustments, compared to only 91% for the simple linear model.
However, the Convolutional Network is also much more complicated to implement, and it is not obvious from looking at the filter-weights why it works and why it sometimes fails.
So we would like an easier way to program Convolutional Neural Networks and we would also like a better way of visualizing their inner workings.
## Exercises
These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
You may want to backup this Notebook before making any changes.
* Do you get the exact same results if you run the Notebook multiple times without changing any parameters? What are the sources of randomness?
* Run another 10,000 optimization iterations. Are the results better?
* Change the learning-rate for the optimizer.
* Change the configuration of the layers, such as the number of convolutional filters, the size of those filters, the number of neurons in the fully-connected layer, etc.
* Add a so-called drop-out layer after the fully-connected layer. Note that the drop-out probability should be zero when calculating the classification accuracy, so you will need a placeholder variable for this probability.
* Change the order of ReLU and max-pooling in the convolutional layer. Does it calculate the same thing? What is the fastest way of computing it? How many calculations are saved? Does it also work for Sigmoid-functions and average-pooling?
* Add one or more convolutional and fully-connected layers. Does it help performance?
* What is the smallest possible configuration that still gives good results?
* Try using ReLU in the last fully-connected layer. Does the performance change? Why?
* Try not using pooling in the convolutional layers. Does it change the classification accuracy and training time?
* Try using a 2x2 stride in the convolution instead of max-pooling. What is the difference?
* Remake the program yourself without looking too much at this source-code.
* Explain to a friend how the program works.
### MIT 6.S191: Convolutional Neural Networks

MIT 6.S191: Convolutional Neural Networks [https://youtu.be/NVH8EYPHi30](https://youtu.be/NVH8EYPHi30)

MIT 6.S191: Issues in Image Classification https://youtu.be/QYwESy6isuc
MIT 6.S191: Introduction to Deep Learning [http://introtodeeplearning.com/](http://introtodeeplearning.com)
## License (MIT)
Copyright (c) 2016 by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## Assessment Requirements
Each group is required to complete the following two tasks:
1. Generate a sparse representation for Paper Bodies (i.e. paper text without Title, Authors, Abstract and References). The sparse representation consists of two files:
   a. Vocabulary index file
   b. Sparse count vectors file
2. Generate a CSV file (stats.csv) containing three columns:
   a. Top 10 most frequent terms appearing in all Titles
   b. Top 10 most frequent Authors
   c. Top 10 most frequent terms appearing in all Abstracts
Note: In case of ties in any of the above fields, settle the tie based on alphabetical ascending order. (example: if the author named John appeared as many times as Mark, then John shall be selected over Mark)
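The tie-breaking rule can be implemented by sorting on descending count and then ascending name. A minimal sketch with hypothetical author counts:

```
from collections import Counter

# Sort by descending count, then alphabetically, so 'john'
# beats 'mark' when their counts are equal.
counts = Counter(['mark', 'john', 'john', 'mark', 'anna'])
top = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:10]

print(top)  # [('john', 2), ('mark', 2), ('anna', 1)]
```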
```
!pip install vocab
import re
import pandas as pd
import requests
import os
#import pdfminer
import tqdm
from tqdm import tqdm_notebook as tqdm
# segmentation first, then find capital words, then loop through and lower each word, then tokenize.
import io
from io import StringIO
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
import os
import sys, getopt
import nltk.data
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from nltk.tokenize import MWETokenizer
from itertools import chain
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.tokenize import RegexpTokenizer
from nltk.probability import *
from nltk.tokenize import word_tokenize
from nltk.stem.porter import *
nltk.download('punkt')
#!pip install Tabula-py #uncomment to install Tabula-py
#!pip install pdfminer.six #uncomment to install Tabula-py
#!pip install pdfminer3k #uncomment to install Tabula-py
#!pip install tqdm
#pdfminer3k is a Python 3 port of pdfminer
import tabula
# reading the PDF file that contains the table data;
# read_pdf saves the pdf table into a Pandas DataFrame
df = tabula.read_pdf("Group102.pdf", pages='all')
# drop repeated header rows
df = df[df['filename'] != 'filename']
if not os.path.exists('data'):
    os.makedirs('data')  # make a directory to store all the pdf files downloaded
for each in tqdm(df.iterrows()):
    response = requests.get(each[1][1])
    with open('data/' + str(each[1][0]), 'wb') as f:
        f.write(response.content)
# converts pdf, returns its text content as a string
def convert(fname, pages=None):
    if not pages:
        pagenums = set()
    else:
        pagenums = set(pages)
    output = io.StringIO()
    manager = PDFResourceManager()
    converter = TextConverter(manager, output, laparams=LAParams())
    interpreter = PDFPageInterpreter(manager, converter)
    infile = open(fname, 'rb')
    for page in PDFPage.get_pages(infile, pagenums):
        interpreter.process_page(page)
    infile.close()
    converter.close()
    text = output.getvalue()
    output.close()
    return text
# converts all pdfs in directory pdfDir, saves all resulting txt files to txtDir
def convertMultiple(pdfDir, txtDir):
    if pdfDir == "": pdfDir = os.getcwd() + "\\"  # if no pdfDir passed in
    for pdf in tqdm(os.listdir(pdfDir)):  # iterate through pdfs in pdf directory
        fileExtension = pdf.split(".")[-1]
        if fileExtension == "pdf":
            pdfFilename = pdfDir + pdf
            text = convert(pdfFilename)  # get string of text content of pdf
            textFilename = txtDir + pdf + ".txt"
            textFile = open(textFilename, "w")  # make text file
            textFile.write(text)  # write text to text file
            textFile.close()  # close the file handle
# set paths accordingly:
pdfDir = "C:/your_path_here/"
txtDir = "C://your_path_here/"
# An empty list to store all the given stopwords
stopwords = []
# Opening the given stopwords file and storing the words in the stopwords list
with open('stopwords_en.txt') as f:
    stopwords = f.read().splitlines()
def selective_lower(sentence):
    aux_sentence = ''
    # Remove 'the'/'The' and repair common pdf-extraction artifacts:
    # ligatures and words hyphenated across line breaks.
    sentence = sentence.replace('the', '')
    sentence = sentence.replace('The', '')
    sentence = sentence.replace('ﬁ', 'fi')
    sentence = sentence.replace('ﬀ', 'ff')
    sentence = sentence.replace('- ', '')
    sentence = sentence.replace('-\n', '')
    sentence = sentence.replace('\n', ' ')
    # Capitalised words not at the start of the sentence are kept
    # as-is; all other words are lower-cased.
    cap_set = re.findall(r'(?!^)\b([A-Z]\w+)', sentence)
    for word in sentence.split(" "):
        if (len(word) > 2) and (word not in stopwords) and (word not in cap_set):
            aux_sentence += word.lower() + str(' ')
        elif (len(word) > 2) and (word not in stopwords):
            aux_sentence += word + str(' ')
    aux_sentence = re.sub('[^A-Za-z]+', ' ', aux_sentence.strip())
    return aux_sentence.strip()
import os
from tqdm import tqdm
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
body_dict={}
def get_data(directory):
    shortword = re.compile(r'\W*\b\w{1,2}\b')
    body_dict = {}
    for filename in tqdm(os.listdir(directory)):
        filepdf = filename.replace('.pdf', '')
        raw_body = convert(str(os.path.join(directory, filename)))
        sentence_list = sent_detector.tokenize(raw_body.strip())
        body = []
        start = 0
        stop = 0
        # Find the start and stop of the Paper Body section.
        for i in range(len(sentence_list)):
            if 'Paper Body' in sentence_list[i]:
                start = i
                sentence_list[i] = sentence_list[i].replace('Paper Body', '')
            if '2 Reference' in sentence_list[i]:
                stop = i
            sentence_list[i] = sentence_list[i].replace('w indows', 'windows')
            sentence_list[i] = sentence_list[i].replace('W indows', 'Windows')
        for i in range(start, stop):
            body.append(sentence_list[i])
        for i in range(len(body)):
            body[i] = body[i].replace('\n', ' ')  # replace all the new lines
            body[i] = selective_lower(body[i]).strip()
            body[i] = shortword.sub('', body[i])  # make sure short words are removed
        body_dict[filepdf] = " ".join(body)
    return body_dict
pathpdf = 'data/'
tokenizer = RegexpTokenizer("[A-Za-z]\w+(?:[-'?]\w+)?")
body_dict = get_data(pathpdf)
data = pd.DataFrame.from_dict(body_dict,orient='index')
data = data.reset_index()
data.columns = ['filename','body']
def tokenize(body):
tokenized_body = tokenizer.tokenize(body_dict[body]) #tokenizing the string
return (body, tokenized_body) # return a tuple of file name and a list of tokens
#calling the tokenize method in a loop for all the elements in the dictionary
tokenized_body = dict(tokenize(body) for body in body_dict.keys())
#data['tokenized'] = data['body'].apply(tokenizer.tokenize)
all_tokens = list(chain.from_iterable(tokenized_body.values())) #maybe this should be done finally
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word")
data_features = vectorizer.fit_transform([' '.join(value) for value in tokenized_body.values()])
print (data_features.shape)
words = list(chain.from_iterable(tokenized_body.values()))
vocab = set(words)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word")
words_per_doc = list(chain.from_iterable([set(value) for value in tokenized_body.values()]))
wpd = FreqDist(words_per_doc)
wpd.most_common(25)
word_to_remove = []
for word, count in wpd.items():
if (count/200 < 0.03) or (count/200 > 0.95): # document frequency over the 200 files: drop very rare or near-ubiquitous words
word_to_remove.append(word)
len(word_to_remove)
all_tokens = [token for token in all_tokens if token not in word_to_remove]
len(set(all_tokens))
for file in list(tokenized_body.keys()):
tokenized_body[file] = [token for token in tokenized_body[file] if token not in word_to_remove]
#Finding the top 200 bigrams
finder=BigramCollocationFinder.from_words(all_tokens)
bigrams=finder.nbest(BigramAssocMeasures.likelihood_ratio, 200)
#Eliminating numbers from bigrams
bigrams_list=[x for x in bigrams if not any(c.isdigit() for c in x)]
bigrams_list
#Preserving these bigrams and putting it back in the dictionary, along with the unigrams
mwetokenizer = MWETokenizer(bigrams_list,separator='__')
#colloc_body is a dictionary that contains both the bigrams as well as the unigrams
colloc_body = dict((body, mwetokenizer.tokenize(data)) for body,data in tokenized_body.items())
#Using the porterstemmer method
ps = PorterStemmer()
#An empty string to store the content of a particular body
strcontent=''
#An empty dictionary to append the stemmed data back
stemmed_dict=dict()
#Looping to stem each value in the dictionary
for key,body in colloc_body.items():
for word in body:
if (word[0].islower()) and ('__' not in word):
#Temporarily storing the data in an empty string
strcontent=strcontent+ ' ' + ps.stem(word)
#Assigning the string to the respective key
stemmed_dict[key]=strcontent
#Again emptying the string to store the next resume
strcontent=''
#Loop to again word tokenize each body in the dictionary and assigning it back to its body number
for key,body in stemmed_dict.items():
stemmed_dict[key]=word_tokenize(body)
for key,body in colloc_body.items():
for word in body:
if (word[0].isupper()) and ('__' in word):
stemmed_dict[key].append(word)
uni_tokens = []
for file in list(stemmed_dict.keys()):
for word in stemmed_dict[file]:
uni_tokens.append(word)
len(set(uni_tokens))
vocab = list(set(uni_tokens))
word2index = {token: token_index for token_index, token in enumerate(vocab)}
word2index.get('hi') # index assigned to the token 'hi', if it is in the vocabulary
vocab
from vocab import Vocab, UnkVocab
v = Vocab()
vocab_index = v.word2index(vocab, train=True)
vocab_serial = dict(zip(vocab,vocab_index))
file = open('Group102_vocab.txt', "w")
for word, idx in vocab_serial.items(): # avoid shadowing the Vocab instance v
file.write(str(word) + ':' + str(idx) + '\n')
file.close()
```
| github_jupyter |
# Python Strings
- Strings are one of the most important data types in Python.
- Let's learn how they are declared, defined, and accessed, and look at the common operations performed on Python strings.
## Strings
- A string is a collection or series of characters.
- An important point to remember is that python strings are **immutable**. Once we create a string object, we cannot change it.
- Also observe that python doesn't have a character data type. It just treats a single character as a string of length one.
- Anything enclosed in quotes (either single or double) is considered a string.
- Indexing in strings starts from 0.
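As a small illustration of the points above (immutability and zero-based indexing):

```python
greeting = "Hello"
print(greeting[0])  # 'H', since indexing starts from 0

try:
    greeting[0] = 'J'  # strings are immutable, so item assignment fails
except TypeError as err:
    print("Cannot modify a string:", err)
```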
## Declaration Of Strings
- The declaration of a string is very simple. It is similar to the declaration of any other basic data type.
- **Syntax**
> *var_name = 'some text within quotes'* or *var_name = "some text within quotes"*
- Lets look at an example!
```
MyString = "Hello World!"
```
## Accessing Strings
- Now that we know how to initialise strings, let's look at how to access them.
- We can use the name of the variable to which the string is assigned, if we want to access the complete string.
- To access only one character of a string, we use indices.
- To access a substring (a subsequence/part of total string), we use slicing.
**Syntax**:
> 1) *var_name* - to access the whole string.\
2) *var_name[index]* - to access a certain character at 'index' position.\
3) *var_name[index1:index2]* - to access a sequence of characters from index1 to index2-1 positions in the string.
- Let's look at a few examples of accessing strings.
```
MyString = "I love GirlScript"
print(MyString)
print(MyString[0])
print(MyString[7:]) #if index2 is left empty it prints upto the end.
```
## Operations On Strings
Now that we have basic knowledge of what strings are and how they are declared and accessed, let's see the operations that can be performed on strings.\
Some of the common operations are
- Length
- Copy
- Concatenation
- Comparison
- Finding if a substring exists in a string.
These are some of the common operations performed on strings. Let's look at how these are performed.
### Finding length of string
- The *len()* function in python gives us the length of a string.
- This can be done as follows:
```
MyString = "I love GirlScript"
print("The length of MyString is "+ str(len(MyString))) # At this point, all you have to notice is that len(MyString) is giving the length of MyString variable.
# I'll get back to the part of str() and '+' later in this tutorial.
```
### Copying Strings
- There are many ways to copy strings in python. However, with some of them no real copy is made: the new and old variables simply point to the same string object. Since strings are immutable, this distinction rarely matters in practice.
- Let's look into the ways strings can be copied.
1. Using **Slice operator**:
- We can create a new variable and then use the actual variable that holds the string along with [ : ].
- Let's look at an example.
```
MyString = "I love GirlScript"
MyString2 = MyString[:]
print(MyString2)
```
2. Using **Assignment Operator**
- We could simply use '=' to make a copy of a string; now both the old and new variables refer to the same string object.
- An example is as shown:
```
MyString = "I love GirlScript"
MyString2 = MyString
print(MyString2)
```
3. Using **str()**
- Now, let's learn about the str() function in python.
- str() converts the value within the parentheses into a string object and returns it.
- Copying via str() can be done as shown:
```
MyString = "I love GirlScript"
MyString2 = str(MyString)
print(MyString2)
```
4. We can even come up with our own program to copy strings. Here is one:
```
MyString = "I love GirlScript"
MyString2 = '' # currently an empty string.
for i in MyString:
MyString2 += i # appending i to the previous MyString2 and assigning the new value back. '+' is a way of concatenation, which we shall learn in detail in the next section.
print(MyString2)
```
## Concatenation
Concatenation is adding two or more strings to form a larger string. This can be done using the '+' operator.\
An example is as shown:
```
MyString = "I love GirlScript"
MyString2 = " Winter of Contribution"
MyString3 = MyString + MyString2
print(MyString3)
```
However, print() has its own special way of treating its parameters.\
Anything written in print() separated by a comma (,) is first converted to a string object, and the values are printed with a space in between. Let's see an example:
```
print(MyString,MyString2)
```
## Comparison
String comparison is one of the most frequently used operations. \
Strings can be compared just like basic data types such as int; comparison can be done using
- ==
- <
- >
- <=
- >=
These comparison operators check the character values (ASCII/Unicode code points) one by one until a mismatched value occurs; if there is no mismatch, the strings are equal. If one string is a prefix of the other, the shorter string is considered smaller.
```
print('a' > 'b') #output is false as ascii value of 'a' 97 is less than ascii value of 'b' 98
print('ab' == 'ab') #returns true as strings are same
print('ac' >= 'abd') #prints true as 'c'(99) is greater than 'b'(98)
print('ac' <= 'abd') #false as 'c' is greater than 'b'
print('ac' < 'abd') #result follows from above discussion
```
## Finding a substring in a given string
find() returns the index at which the substring first occurs in the given string, and -1 if the substring is not found.\
find() takes in 3 parameters
- str -> the substring to search for
- beginning index -> the starting index from where the search should begin (0 by default)
- ending index -> the ending index; by default it is equal to the length of the string.
Let's look at an example of how it's done:
```
MyString = "I love GirlScript"
Substr = "GirlScript"
print(MyString.find(Substr))
```
These are the common operations that are performed on strings. There are several other built-in string methods such as islower(), isupper(), strip(), split(), capitalize(), count() etc.
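A quick, minimal demonstration of a few of these methods (the values are purely illustrative):

```python
MyString = "  I love GirlScript  "
print(MyString.strip())            # 'I love GirlScript': removes surrounding whitespace
print(MyString.split())            # ['I', 'love', 'GirlScript']: splits on whitespace
print("girlscript".islower())      # True
print("GIRLSCRIPT".isupper())      # True
print("girlScript".capitalize())   # 'Girlscript': first letter upper, rest lower
print(MyString.count('love'))      # 1: number of occurrences of the substring
```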
# Conclusion
Python strings are very helpful and easy to handle. However, one must perform manipulations on strings keeping in mind that they are immutable. Performing operations such as:
> MyString[1] = '$'
throws an error. Every programmer needs a firm knowledge of strings.
| github_jupyter |
# Visualize gene expression
This notebook visualizes the gene expression data for the template and simulated experiments in order to:
1. Validate that the structure of the gene expression data and simulated data are consistent
2. Visualize the signal that is in the experiments
```
%load_ext autoreload
%load_ext rpy2.ipython
%autoreload 2
import os
import pandas as pd
import umap
import pickle
import glob
import seaborn as sns
from sklearn.decomposition import PCA
from keras.models import load_model
import plotnine as pn
from ponyo import utils
from generic_expression_patterns_modules import plot
```
## Load config parameters
```
# Read in config variables
base_dir = os.path.abspath(os.path.join(os.getcwd(), "../"))
config_filename = os.path.abspath(
os.path.join(base_dir, "configs", "config_pseudomonas_33245.tsv")
)
params = utils.read_config(config_filename)
# Load config params
local_dir = params['local_dir']
project_id = params['project_id']
num_simulated = params['num_simulated']
pval_name = "adj.P.Val"
logFC_name = "logFC"
run=0
# Manual settings to visualize/troubleshoot volcano plots for other datasets
# Will pull these out to archive later
"""vae_model_dir = params['vae_model_dir']
template_filename = params['mapped_template_filename']
normalized_compendium_filename = params['normalized_compendium_filename']
scaler_filename = params['scaler_filename']"""
# Settings for running visualization using pseudomonas config file
vae_model_dir = os.path.join(base_dir,"pseudomonas_analysis", "models", "NN_2500_30")
template_filename = params['processed_template_filename']
normalized_compendium_filename = params['normalized_compendium_filename']
scaler_filename = params['scaler_filename']
"""# Settings for running visualization using human cancer config file
vae_model_dir = os.path.join(base_dir,"human_cancer_analysis", "models", "NN_2500_30")
template_filename = params['processed_template_filename']
normalized_compendium_filename = params['normalized_compendium_filename']
scaler_filename = params['scaler_filename']"""
"""# Settings for running visualization using human_general config file
vae_model_dir = os.path.join(base_dir,"human_general_analysis", "models", "NN_2500_30")
template_filename = os.path.join(base_dir,"human_general_analysis", params['processed_template_filename'])
normalized_compendium_filename = params['normalized_compendium_filename']
scaler_filename = os.path.join(base_dir, "human_general_analysis", params['scaler_filename'])"""
```
## Volcano plots
```
# Check number of DEGs
template_DE_stats_filename = os.path.join(
local_dir,
"DE_stats",
f"DE_stats_template_data_{project_id}_real.txt"
)
template_DE_stats = pd.read_csv(
template_DE_stats_filename,
sep="\t",
header=0,
index_col=0
)
selected = template_DE_stats[(template_DE_stats[pval_name]<0.01) & (abs(template_DE_stats[logFC_name])>1)]
print(selected.shape)
plot.make_volcano_plot_template(
template_DE_stats_filename,
project_id,
pval_name,
logFC_name
)
simulated_DE_stats_dir = os.path.join(local_dir, "DE_stats")
plot.make_volcano_plot_simulated(
simulated_DE_stats_dir,
project_id,
pval_name,
logFC_name,
num_simulated,
5,
5,
20,
15
)
```
## Plot distribution of DE stats
```
sns.distplot(template_DE_stats[logFC_name], kde=False)
simulated_DE_stats_filename = os.path.join(
simulated_DE_stats_dir,
f"DE_stats_simulated_data_{project_id}_{run}.txt"
)
simulated_DE_stats = pd.read_csv(
simulated_DE_stats_filename,
sep="\t",
header=0,
index_col=0
)
sns.distplot(simulated_DE_stats[logFC_name], kde=False)
```
## UMAP in gene space
```
# Get decoded simulated experiment
simulated_filename = os.path.join(
local_dir,
"pseudo_experiment",
f"selected_simulated_data_{project_id}_{run}.txt"
)
normalized_compendium_data = pd.read_csv(normalized_compendium_filename, sep="\t", index_col=0, header=0)
template_data = pd.read_csv(template_filename, sep="\t", index_col=0, header=0)
simulated_data = pd.read_csv(simulated_filename, sep="\t", index_col=0, header=0)
print(template_data.shape)
template_data
sns.distplot(template_data.iloc[0:2].mean(), kde=False)
sns.distplot(template_data.iloc[2:].mean(), kde=False)
sns.distplot(simulated_data.iloc[0:2].mean(), kde=False)
sns.distplot(simulated_data.iloc[2:].mean(), kde=False)
# Normalize simulated_data
# Load pickled file
with open(scaler_filename, "rb") as scaler_fh:
scaler = pickle.load(scaler_fh)
normalized_simulated_data = scaler.transform(simulated_data)
normalized_simulated_data = pd.DataFrame(
normalized_simulated_data,
columns=simulated_data.columns,
index=simulated_data.index,
)
print(normalized_simulated_data.shape)
normalized_simulated_data.head()
# If template experiment included in training compendium
# Get normalized template data
sample_ids = list(template_data.index)
normalized_template_data = normalized_compendium_data.loc[sample_ids]
print(normalized_template_data.shape)
normalized_template_data.head()
"""# If template experiment NOT included in training compendium
with open(scaler_filename, "rb") as scaler_fh:
scaler = pickle.load(scaler_fh)
normalized_template_data = scaler.transform(template_data)
normalized_template_data = pd.DataFrame(
normalized_template_data,
columns=template_data.columns,
index=template_data.index,
)"""
# Label samples
normalized_compendium_data['sample group'] = "compendium"
normalized_template_data['sample group'] = "template"
normalized_simulated_data['sample group'] = "simulated"
normalized_all_data = pd.concat([normalized_template_data,
normalized_simulated_data,
normalized_compendium_data
])
# Plot
# Drop label column
normalized_all_data_numeric = normalized_all_data.drop(['sample group'], axis=1)
model = umap.UMAP(random_state=1).fit(normalized_all_data_numeric)
normalized_all_data_UMAPencoded = model.transform(normalized_all_data_numeric)
normalized_all_data_UMAPencoded_df = pd.DataFrame(data=normalized_all_data_UMAPencoded,
index=normalized_all_data.index,
columns=['1','2'])
# Add back label column
normalized_all_data_UMAPencoded_df['sample group'] = normalized_all_data['sample group']
# Plot
fig = pn.ggplot(normalized_all_data_UMAPencoded_df, pn.aes(x='1', y='2'))
fig += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)
fig += pn.labs(x ='UMAP 1',
y = 'UMAP 2',
title = 'Gene expression data in gene space')
fig += pn.theme_bw()
fig += pn.theme(
legend_title_align = "center",
plot_background=pn.element_rect(fill='white'),
legend_key=pn.element_rect(fill='white', colour='white'),
legend_title=pn.element_text(family='sans-serif', size=15),
legend_text=pn.element_text(family='sans-serif', size=12),
plot_title=pn.element_text(family='sans-serif', size=15),
axis_text=pn.element_text(family='sans-serif', size=12),
axis_title=pn.element_text(family='sans-serif', size=15)
)
fig += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])
fig += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))
print(fig)
```
## PCA in latent space
```
# Model files
model_encoder_filename = glob.glob(os.path.join(vae_model_dir, "*_encoder_model.h5"))[0]
weights_encoder_filename = glob.glob(os.path.join(vae_model_dir, "*_encoder_weights.h5"))[0]
model_decoder_filename = glob.glob(os.path.join(vae_model_dir, "*_decoder_model.h5"))[0]
weights_decoder_filename = glob.glob(os.path.join(vae_model_dir, "*_decoder_weights.h5"))[0]
# Load saved models
loaded_model = load_model(model_encoder_filename, compile=False)
loaded_decode_model = load_model(model_decoder_filename, compile=False)
loaded_model.load_weights(weights_encoder_filename)
loaded_decode_model.load_weights(weights_decoder_filename)
# PCA model
pca = PCA(n_components=2)
# Encode compendium
normalized_compendium = normalized_compendium_data.drop(['sample group'], axis=1)
compendium_encoded = loaded_model.predict_on_batch(normalized_compendium)
compendium_encoded_df = pd.DataFrame(data=compendium_encoded,
index=normalized_compendium.index)
# Get and save PCA model
model1 = pca.fit(compendium_encoded_df)
compendium_PCAencoded = model1.transform(compendium_encoded_df)
compendium_PCAencoded_df = pd.DataFrame(data=compendium_PCAencoded,
index=compendium_encoded_df.index,
columns=['1','2'])
# Add label
compendium_PCAencoded_df['sample group'] = 'compendium'
# Encode template experiment
normalized_template_data = normalized_template_data.drop(['sample group'], axis=1)
template_encoded = loaded_model.predict_on_batch(normalized_template_data)
template_encoded_df = pd.DataFrame(data=template_encoded,
index=normalized_template_data.index)
template_PCAencoded = model1.transform(template_encoded_df)
template_PCAencoded_df = pd.DataFrame(data=template_PCAencoded,
index=template_encoded_df.index,
columns=['1','2'])
# Add back label column
template_PCAencoded_df['sample group'] = 'template'
# Use stored encoded simulated data
# Note: We cannot encode the decoded simulated experiment since we are not using tied weights
# Re-encoding the decoded simulated experiment would not yield a linear latent space shift
encoded_simulated_filename = os.path.join(
local_dir,
"pseudo_experiment",
f"selected_simulated_encoded_data_{project_id}_{run}.txt"
)
simulated_encoded_df = pd.read_csv(encoded_simulated_filename,header=0, sep='\t', index_col=0)
sample_ids = list(template_data.index)
simulated_encoded_df = simulated_encoded_df.loc[sample_ids]
simulated_PCAencoded = model1.transform(simulated_encoded_df)
simulated_PCAencoded_df = pd.DataFrame(data=simulated_PCAencoded,
index=simulated_encoded_df.index,
columns=['1','2'])
# Add back label column
simulated_PCAencoded_df['sample group'] = 'simulated'
# Concatenate dataframes
combined_PCAencoded_df = pd.concat([compendium_PCAencoded_df,
template_PCAencoded_df,
simulated_PCAencoded_df])
print(combined_PCAencoded_df.shape)
combined_PCAencoded_df.head()
# Plot
fig1 = pn.ggplot(combined_PCAencoded_df, pn.aes(x='1', y='2'))
fig1 += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)
fig1 += pn.labs(x ='PC 1',
y = 'PC 2',
title = 'Gene expression data in latent space')
fig1 += pn.theme_bw()
fig1 += pn.theme(
legend_title_align = "center",
plot_background=pn.element_rect(fill='white'),
legend_key=pn.element_rect(fill='white', colour='white'),
legend_title=pn.element_text(family='sans-serif', size=15),
legend_text=pn.element_text(family='sans-serif', size=12),
plot_title=pn.element_text(family='sans-serif', size=15),
axis_text=pn.element_text(family='sans-serif', size=12),
axis_title=pn.element_text(family='sans-serif', size=15)
)
fig1 += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])
fig1 += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))
print(fig1)
```
## UMAP of latent space
```
# Fit and save UMAP model
model2 = umap.UMAP(random_state=1).fit(compendium_encoded_df)
compendium_UMAPencoded = model2.transform(compendium_encoded_df)
compendium_UMAPencoded_df = pd.DataFrame(data=compendium_UMAPencoded,
index=compendium_encoded_df.index,
columns=['1','2'])
# Add label
compendium_UMAPencoded_df['sample group'] = 'compendium'
template_UMAPencoded = model2.transform(template_encoded_df)
template_UMAPencoded_df = pd.DataFrame(data=template_UMAPencoded,
index=template_encoded_df.index,
columns=['1','2'])
# Add back label column
template_UMAPencoded_df['sample group'] = 'template'
simulated_UMAPencoded = model2.transform(simulated_encoded_df)
simulated_UMAPencoded_df = pd.DataFrame(data=simulated_UMAPencoded,
index=simulated_encoded_df.index,
columns=['1','2'])
# Add back label column
simulated_UMAPencoded_df['sample group'] = 'simulated'
# Concatenate dataframes
combined_UMAPencoded_df = pd.concat([compendium_UMAPencoded_df,
template_UMAPencoded_df,
simulated_UMAPencoded_df])
print(combined_UMAPencoded_df.shape)
combined_UMAPencoded_df.head()
# Plot
fig2 = pn.ggplot(combined_UMAPencoded_df, pn.aes(x='1', y='2'))
fig2 += pn.geom_point(pn.aes(color='sample group'), alpha=0.4)
fig2 += pn.labs(x ='UMAP 1',
y = 'UMAP 2',
title = 'Gene expression data in latent space')
fig2 += pn.theme_bw()
fig2 += pn.theme(
legend_title_align = "center",
plot_background=pn.element_rect(fill='white'),
legend_key=pn.element_rect(fill='white', colour='white'),
legend_title=pn.element_text(family='sans-serif', size=15),
legend_text=pn.element_text(family='sans-serif', size=12),
plot_title=pn.element_text(family='sans-serif', size=15),
axis_text=pn.element_text(family='sans-serif', size=12),
axis_title=pn.element_text(family='sans-serif', size=15)
)
fig2 += pn.scale_color_manual(['#bdbdbd', 'red', 'blue'])
fig2 += pn.guides(colour=pn.guide_legend(override_aes={'alpha': 1}))
print(fig2)
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device.
## Prerequisites
To run this example, we first need to install several required packages, including the Model Maker package from the GitHub [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
```
Import the required packages.
```
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.text_dataloader import TextClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task.model_spec import AverageWordVecModelSpec
from tensorflow_examples.lite.model_maker.core.task.model_spec import BertClassifierModelSpec
from tensorflow_examples.lite.model_maker.core.task import text_classifier
```
## Simple End-to-End Example
### Get the data path
Let's get some texts to play with this simple end-to-end example.
```
data_path = tf.keras.utils.get_file(
fname='aclImdb',
origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',
untar=True)
```
You could replace it with your own text folders. To upload data to Colab, find the upload button in the left sidebar, shown in the image below with the red rectangle. Try uploading a zip file and unzipping it; the root file path is the current path.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you could try running the library locally following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) in github.
### Run the example
The example just consists of 6 lines of code as shown below, representing 5 steps of the overall process.
Step 0. Choose a `model_spec` that represents a model for text classifier.
```
model_spec = AverageWordVecModelSpec()
```
Step 1. Load train and test data specific to an on-device ML app and preprocess the data according to specific `model_spec`.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
```
Step 2. Customize the TensorFlow model.
```
model = text_classifier.create(train_data, model_spec=model_spec)
```
Step 3. Evaluate the model.
```
loss, acc = model.evaluate(test_data)
```
Step 4. Export to TensorFlow Lite model.
You could download it from the left sidebar, the same place as the uploading part, for your own use.
```
model.export(export_dir='.')
```
After these simple 5 steps, we could further use the TensorFlow Lite model file and label file in on-device applications, as in the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.
## Detailed Process
In the above, we tried the simple end-to-end example. The following walks through the example step by step to show more detail.
### Step 0: Choose a model_spec that represents a model for text classifier.
Each `model_spec` object represents a specific model for the text classifier. Currently, we support the averaging word embedding model and the BERT-base model.
```
model_spec = AverageWordVecModelSpec()
```
### Step 1: Load Input Data Specific to an On-device ML App
The IMDB dataset contains 25000 movie reviews for training and 25000 movie reviews for testing from the [Internet Movie Database](https://www.imdb.com/). The dataset has two classes: positive and negative movie reviews.
Download the archive version of the dataset and untar it.
The IMDB dataset has the following directory structure:
<pre>
<b>aclImdb</b>
|__ <b>train</b>
|______ <b>pos</b>: [1962_10.txt, 2499_10.txt, ...]
|______ <b>neg</b>: [104_3.txt, 109_2.txt, ...]
|______ unsup: [12099_0.txt, 1424_0.txt, ...]
|__ <b>test</b>
|______ <b>pos</b>: [1384_9.txt, 191_9.txt, ...]
|______ <b>neg</b>: [1629_1.txt, 21_1.txt]
</pre>
Note that the text data under `train/unsup` folder are unlabeled documents for unsupervised learning and such data should be ignored in this tutorial.
```
data_path = tf.keras.utils.get_file(
fname='aclImdb',
origin='http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz',
untar=True)
```
Use `TextClassifierDataLoader` to load data.
The `from_folder()` method loads data from a folder. It assumes that text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample.
The parameter `class_labels` is used to specify which subfolders should be considered. For the `train` folder, this parameter is used to skip the `unsup` subfolder.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
train_data, validation_data = train_data.split(0.9)
```
### Step 2: Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data. Currently, we support averaging word embedding and BERT-base model.
```
model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data)
```
Have a look at the detailed model structure.
```
model.summary()
```
### Step 3: Evaluate the Customized Model
Evaluate the result of the model, get the loss and accuracy of the model.
Evaluate the loss and accuracy in `test_data`. If no data is given, the results are evaluated on the data that was split off in the `create` method.
```
loss, acc = model.evaluate(test_data)
```
### Step 4: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that can later be used in an on-device ML application. Meanwhile, save the text labels in a label file and the vocabulary in a vocab file. The default TFLite filename is `model.tflite`, the default label filename is `label.txt`, and the default vocab filename is `vocab`.
```
model.export(export_dir='.')
```
The TensorFlow Lite model file and label file could be used in the [text classification](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification) reference app.
In detail, we could add `movie_review_classifier.tflite`, `text_label.txt` and `vocab.txt` to the [assets directory](https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification/android/app/src/main/assets) folder. Meanwhile, change the filenames in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/text_classification/android/app/src/main/java/org/tensorflow/lite/examples/textclassification/TextClassificationClient.java#L43).
Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
```
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('model.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test data and calculate accuracy.
accurate_count = 0
for text, label in test_data.dataset:
# Add batch dimension and convert to float32 to match with the model's input
# data format.
text = tf.expand_dims(text, 0)
# Run inference.
interpreter.set_tensor(input_index, text)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
```
Note that preprocessing for inference should be the same as for training. Currently, preprocessing consists of splitting the text into tokens on '\W', encoding the tokens to ids, then padding the text with `pad_id` to the length of `seq_length`.
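That preprocessing can be sketched in plain Python. This is a simplified illustration, not the library's actual implementation; the `vocab` dictionary and the `unk_id` for out-of-vocabulary tokens are assumptions.

```python
import re

def preprocess(text, vocab, pad_id=0, unk_id=1, seq_length=256):
    # Split the text into tokens on non-word characters ('\W').
    tokens = [t for t in re.split(r'\W+', text.lower()) if t]
    # Encode the tokens to integer ids using the vocabulary.
    ids = [vocab.get(t, unk_id) for t in tokens]
    # Truncate, then pad with pad_id to a fixed length of seq_length.
    ids = ids[:seq_length]
    ids += [pad_id] * (seq_length - len(ids))
    return ids

vocab = {'great': 2, 'movie': 3}
print(preprocess('A great, great movie!', vocab, seq_length=6))  # [1, 2, 2, 3, 0, 0]
```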
## Advanced Usage
The `create` function is the critical part of this library. Its `model_spec` parameter defines the model specification; currently `AverageWordVecModelSpec` and `BertModelSpec` are supported. The `create` function performs the following steps for `AverageWordVecModelSpec`:
1. Tokenize the text and select the top `num_words` most frequent words to generate the vocabulary. The default value of `num_words` in the `AverageWordVecModelSpec` object is `10000`.
2. Encode the text string tokens to int ids.
3. Create the text classifier model. Currently, this library supports one model: average the word embeddings of the text, apply a ReLU activation, then use a softmax dense layer for classification. For the [Embedding layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding), the input dimension is the size of the vocabulary, the output dimension is the `AverageWordVecModelSpec` object's `wordvec_dim` variable (default `16`), and the input length is its `seq_len` variable (default `256`).
4. Train the classifier model. The default number of epochs is `2` and the default batch size is `32`.
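The classifier in step 3 can be sketched as a plain-NumPy forward pass. This is an illustration of the architecture only, with randomly initialized weights, not Model Maker's actual code; the sizes follow the defaults mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
num_words, wordvec_dim, seq_len, num_classes = 10000, 16, 256, 2

# Embedding table and dense-layer weights (randomly initialized here).
embedding = rng.normal(size=(num_words, wordvec_dim))
W1 = rng.normal(size=(wordvec_dim, wordvec_dim)); b1 = np.zeros(wordvec_dim)
W2 = rng.normal(size=(wordvec_dim, num_classes)); b2 = np.zeros(num_classes)

def forward(token_ids):
    # Look up embeddings for the encoded token ids.
    vecs = embedding[token_ids]                      # (seq_len, wordvec_dim)
    # Average the word embeddings, then apply a ReLU dense layer...
    h = np.maximum(vecs.mean(axis=0) @ W1 + b1, 0)
    # ...and a softmax dense layer for classification.
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = forward(rng.integers(0, num_words, size=seq_len))
print(probs)  # two class probabilities that sum to 1
```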
In this section, we describe several advanced topics, including adjusting the model, changing the training hyperparameters etc.
### Adjust the model
We can adjust the model architecture through the `wordvec_dim` and `seq_len` variables of the `AverageWordVecModelSpec` class.
* `wordvec_dim`: dimension of the word embeddings.
* `seq_len`: length of the input sequence.
For example, we could train with a larger `wordvec_dim`. If we change the model, we first need to construct a new `model_spec`.
```
new_model_spec = AverageWordVecModelSpec(wordvec_dim=32)
```
Second, we need to preprocess the data accordingly.
```
new_train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=new_model_spec, class_labels=['pos', 'neg'])
new_train_data, new_validation_data = new_train_data.split(0.9)
```
Finally, we could train the new model.
```
model = text_classifier.create(new_train_data, model_spec=new_model_spec, validation_data=new_validation_data)
```
### Change the training hyperparameters
We could also change the training hyperparameters like `epochs` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy, but may lead to overfitting.
* `batch_size`: number of samples to use in one training step.
For example, we could train with more epochs.
```
model = text_classifier.create(train_data, model_spec=model_spec, validation_data=validation_data, epochs=5)
```
Evaluate the newly retrained model with 5 training epochs.
```
loss, accuracy = model.evaluate(test_data)
```
### Change the Model
We could change the model by changing the `model_spec`. The following shows how to change to the BERT-base model.
First, we could change `model_spec` to `BertModelSpec`.
```
model_spec = BertClassifierModelSpec()
```
The remaining steps remain the same.
Load data and preprocess the data according to `model_spec`.
```
train_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'train'), model_spec=model_spec, class_labels=['pos', 'neg'])
test_data = TextClassifierDataLoader.from_folder(os.path.join(data_path, 'test'), model_spec=model_spec, is_training=False, shuffle=False)
```
Then retrain the model. Note that retraining the BERT model can take a long time, so we set `epochs` to 1 here just to demonstrate it.
```
model = text_classifier.create(train_data, model_spec=model_spec, epochs=1)
```
# Regression, body and brain
## About this page
This is a Jupyter Notebook. It can be run as an interactive demo, or you can
read it as a web page.
You don't need to understand the code on this page, the text will tell you
what the code is doing.
You can also [run this demo
interactively](https://mybinder.org/v2/gh/matthew-brett/bio145/master?filepath=on_correlation.ipynb).
## The example problem
We are going to do regression of body weights and brain weights of some animals, and then look at the correlation.
## Some setup
We first need to get our environment set up to run the code and plots we
need.
```
# Code to get set up. If you are running interactively, you can execute
# this cell by pressing Shift and Enter at the same time.
# Libraries for arrays and plotting
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make plots look a little bit more fancy
plt.style.use('fivethirtyeight')
# Import library for statistical routines
import scipy.stats as sps
# Print array numbers to 4 digits of precision
np.set_printoptions(precision=4, suppress=True)
```
## Starting with a line
Here are the body weights (in kilograms) from the 8 animals:
```
body_weight = np.array([3.3, 465, 27.7, 521, 192, 2.5, 0.3, 55.5])
```
These are the corresponding brain weights (in grams):
```
brain_weight = np.array([25.6, 423, 115, 655, 180, 12.1, 1.9, 175])
```
We believe that there is some relationship between `body_weight` and `brain_weight`.
Plotting them together we get:
```
plt.plot(body_weight, brain_weight, '+')
plt.xlabel('Body')
plt.ylabel('Brain');
```
It looks like there may be some sort of straight line relationship. We could
try to find a good line to fit the data. Here I will do some magic to work
out a good line.
```
slope, intercept, r, p, se = sps.linregress(body_weight, brain_weight)
print(f'Slope: {slope:.4f}')
print(f'Intercept: {intercept:.4f}')
```
We also got the correlation *r* value from this calculation. Here it is, for
future reference. We will come back to this later:
```
# Correlation "r" value
print(f'Correlation r: {r:.4f}')
```
This is the squared *r* value ($r^2$):
```
r ** 2
```
Here is the line drawn on the plot of the data:
```
# Plot data with the prediction
plt.plot(body_weight, brain_weight, 'k+')
mx = max(body_weight)
x_vals = [0, mx]
y_vals = [intercept, intercept + slope * mx]
plt.plot(x_vals, y_vals, 'b')
plt.xlabel('Body')
plt.ylabel('Brain')
plt.title('Body vs Brain with nice line');
```
## How do we choose a good line?
The line gives a *prediction* of what `brain_weight` should be, for any value
of `body_weight`. If we have some value `x` for `body_weight`, then we can
predict the value `y` of `brain_weight`, with `y = intercept + slope * x`.
For example, here are the first values for `body_weight` and `brain_weight`:
```
print(f'First body_weight value {body_weight[0]}')
print(f'First brain_weight value {brain_weight[0]}')
```
The second value is the *actual* value of `brain_weight`. The *predicted*
value of `brain_weight`, for this value of `body_weight` is:
```
predicted = intercept + slope * body_weight[0]
predicted
```
The *error* for our line, is the difference between the actual and predicted
value.
```
actual = brain_weight[0]
error = actual - predicted
error
```
This is the error for the first value. We can get the errors for all the
values in the same way.
This is the calculation of the error for all 8 values. As usual, you don't need
to understand the code in detail:
```
all_predicted = intercept + body_weight * slope
all_errors = brain_weight - all_predicted
all_errors
```
Notice the first value for `all_errors` is the same as the value for `error`
we saw above.
The errors here are the distances between the prediction line and the points
on the plot. Here I show the errors as red lines. Don't worry about the code
below, it's not important to the idea.
```
# Plot data with the prediction and errors
plt.plot(body_weight, brain_weight, 'k+', ms=15)
mx = max(body_weight)
x_vals = [0, mx]
y_vals = [intercept, intercept + slope * mx]
plt.plot(x_vals, y_vals, 'b')
# Draw the error lines
for i in range(len(body_weight)):
    x_vals = [body_weight[i], body_weight[i]]
    y_vals = [all_predicted[i], brain_weight[i]]
    plt.plot(x_vals, y_vals, 'r')
plt.xlabel('Body weight')
plt.ylabel('Brain weight')
plt.title('body_weight vs brain_weight, and errors');
```
A good line will make the errors as small as possible. Therefore, a good line
will make the lengths of the red lines as short as possible.
We need to generate a single number, from the errors, that gives an overall
measure of the size of the errors.
We cannot just add up the errors, because the negative and positive errors
will cancel out. Even if the errors are a mixture of large positive and large
negative, the sum could be very small.
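For example, a mixture of large positive and negative errors can sum to exactly zero; these error values are made up for illustration:

```python
# Errors of +100, -100, +50 and -50 are all large, but they cancel.
errors = [100, -100, 50, -50]
print(sum(errors))                   # 0: positives and negatives cancel out
# Squaring first makes every error count positively.
print(sum(e ** 2 for e in errors))   # 25000
```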
The usual thing to do, is to square all the errors, to make sure they are all
positive. Then we add all the squared errors. This gives the *sum of squared
error* or SSE.
```
# A reminder of the errors we calculated above
all_errors
# Square all the errors
squared_errors = all_errors ** 2
squared_errors
# Calculate the sum of the squared errors
SSE = sum(squared_errors)
SSE
```
The line is a good one when SSE is small. In fact, the usual "best fit" line
chosen by packages such as Excel, is the line that gives the lowest SSE value,
of all possible lines.
It is the line that minimizes the squared error, often called the *least squares* line.
This is the line that I found by sort-of magic, above. If you like, try other
slopes and intercepts. You will find that they always have a higher SSE value
than the slope and intercept I have used here.
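If you want to check that claim yourself, here is a small sketch. It uses `np.polyfit` (which finds the same least-squares line as `linregress`) and compares the SSE of the fitted line against perturbed lines:

```python
import numpy as np

body_weight = np.array([3.3, 465, 27.7, 521, 192, 2.5, 0.3, 55.5])
brain_weight = np.array([25.6, 423, 115, 655, 180, 12.1, 1.9, 175])

# np.polyfit with degree 1 returns the least-squares slope and intercept.
slope, intercept = np.polyfit(body_weight, brain_weight, 1)

def sse(s, i):
    predicted = i + s * body_weight
    return np.sum((brain_weight - predicted) ** 2)

best = sse(slope, intercept)
# Any other slope or intercept gives a larger sum of squared errors.
print(best, sse(slope * 1.1, intercept), sse(slope, intercept + 20))
```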
## Regression and correlation
Above, you have seen regression, using the *least squares* line.
Correlation is a version of the same thing, but where we have *standardized*
the data.
We standardize data by subtracting the mean, and dividing by the standard
deviation.
We do this, to put the x and y values onto the same scale.
For example, here is a histogram of the `body_weight` values, to give you an idea
of their position and spread.
```
plt.hist(body_weight)
plt.title("Body weight values");
```
In correlation, we are interested in whether the *variation* in the `body_weight` values predicts the variation in the `brain_weight` values.
Variation here means variation around the mean. To show the variation, we subtract the mean. We refer to the values with the mean subtracted as *mean centered*.
```
centered_x = body_weight - body_weight.mean()
plt.hist(centered_x)
plt.title('Mean-centered body weight values');
```
Finally, the spread of the values on either side of zero depends on the units of
measurement. We measure the spread with the standard deviation:
```
std_x = np.std(centered_x)
std_x
```
We would like to re-express our data to have a standard spread, that is
comparable for the `x` / `body_weight` values and the `y` / `brain_weight` values.
For example, we might like to ensure the data have a standard deviation of 1.
To do this, we divide the centered values by the standard deviation.
```
standard_x = centered_x / std_x
plt.hist(standard_x)
plt.title('Standardized body weight values');
```
You will see below that the mean of these values is now 0, and the standard deviation is 1.
```
print(f'Mean of standard x: {np.mean(standard_x):.4f}')
print(f'Standard deviation: {np.std(standard_x):.4f}')
```
Our `body_weight` values are now *standardized*.
We do the same for our `y` / `brain_weight` values:
```
# Standardize the y / brain_weight values
centered_y = brain_weight - brain_weight.mean()
standard_y = centered_y / np.std(centered_y)
print(f'Mean of standard y: {np.mean(standard_y):.4f}')
print(f'Standard deviation: {np.std(standard_y):.4f}')
```
The correlation value *r* is just the slope of the regression line relating
our standardized `x` / `body_weight` and standardized `y` / `brain_weight`:
```
std_slope, std_intercept, std_r, p, se = sps.linregress(standard_x, standard_y)
print(f'Standardized slope (=correlation r): {std_slope:.4f}')
print(f'Standardized intercept: {std_intercept:.4f}')
```
It turns out that, when we standardize the x and y values as we did here, the
*intercept* for the least-squares line must be zero. Briefly, this is because
the least-squares line always passes through the point of means, and after
standardizing, both means are zero, so the line passes through the origin.
Notice that the slope above is the same as the `r` value for the original
regression line:
```
print(f'Standardized slope: {std_slope:.4f}')
print(f'Original r for regression: {r:.4f}')
```
Here is the plot of standardized `body_weight` against standardized `brain_weight`,
with the least-squares line:
```
# Plot standard data with the prediction
plt.plot(standard_x, standard_y, '+')
mx = max(standard_x)
mn = min(standard_x)
x_vals = [mn, mx]
y_vals = [std_intercept + std_slope * mn, std_intercept + std_slope * mx]
plt.plot(x_vals, y_vals, 'b')
plt.title('Standardized body weight against standardized brain weight');
```
Notice that the plot has the point (0, 0) at its center, and that the line
goes through the (0, 0) point. The slope of the line is the correlation
value *r*.
It turns out that, if we do this standardization procedure, the slope of the
line can only vary between 1 (where the standardized `x` values are the same as
the standardized `y` values) and -1 (where the standardized `x` values are the
exact negative of the standardized `y` values).
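You can check these limits with `np.corrcoef`; the linear `y` values here are made up for illustration:

```python
import numpy as np

x = np.array([3.3, 465, 27.7, 521, 192, 2.5, 0.3, 55.5])
# When y increases linearly with x, standardized x and y are identical, so r is 1.
y_up = 2 * x + 10
# When y decreases linearly with x, standardized y is -1 times standardized x, so r is -1.
y_down = -2 * x + 10
print(np.corrcoef(x, y_up)[0, 1], np.corrcoef(x, y_down)[0, 1])
```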
# Retraining of top performing FFNN
## Imports
```
# General imports
import sys
import os
sys.path.insert(1, os.path.join(os.pardir, 'src'))
from itertools import product
# Data imports
import cv2
import torch
import mlflow
import numpy as np
from mlflow.tracking.client import MlflowClient
from torchvision import datasets, transforms
# Homebrew imports
import model
from utils import one_hot_encode_index
from optimizers import Adam
from activations import Softmax, ReLU
from layers import Dropout, LinearLayer
from loss import CategoricalCrossEntropyLoss
# pytorch imports
from torch import nn, cuda, optim, no_grad
import torch.nn.functional as F
from torchvision import transforms
## TESTING
import importlib
importlib.reload(model)
##
```
## Finding best runs
```
# querying results to see best 2 performing homebrew models
query = "params.data_split = '90/10' and params.type = 'FFNN' and params.framework = 'homebrew'"
hb_runs = MlflowClient().search_runs(
experiment_ids="8",
filter_string=query,
max_results=1,
order_by=["metrics.validation_accuracy DESC"]
)
query = "params.data_split = '90/10' and params.type = 'FFNN' and params.framework = 'pytorch'"
pt_runs = MlflowClient().search_runs(
experiment_ids="8",
filter_string=query,
max_results=1,
order_by=["metrics.validation_accuracy DESC"]
)
```
## Setup data loaders
```
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(32),
transforms.RandomHorizontalFlip(),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])
])
test_transforms = transforms.Compose([transforms.Resize(33),
transforms.CenterCrop(32),
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])
])
# setting up data loaders
data_dir = os.path.join(os.pardir, 'data', 'Plant_leave_diseases_32')
train_data = datasets.ImageFolder(os.path.join(data_dir, 'train'), transform=train_transforms)
test_data = datasets.ImageFolder(os.path.join(data_dir, 'validation'), transform=test_transforms)
```
## Training 'Homebrew' models
```
# Getting Configs
par = hb_runs[0].data.params
config = {'data_split': par['data_split'],
'decay': np.float64(par['decay']),
'dropout': np.float64(par['dropout']),
'framework': par['framework'],
'learning_rate': np.float64(par['learning_rate']),
'max_epochs': int(par['max_epochs']),
'resolution': int(par['resolution']),
'type': par['type']}
mlflow.set_experiment("Plant Leaf Disease")
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
validation_loader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True)
# initialize model
mdl = model.Model(Adam(learning_rate=config['learning_rate'], decay=config['decay']),
CategoricalCrossEntropyLoss())
# Config early stop
mdl.add_early_stop(25)
# save config
mdl.set_save_config(model_name='FFNN_top_homebrew', save_path=os.path.join('models'))
# Defining architecture
mdl.set_sequence([
LinearLayer(32*32, 1024),
ReLU(),
Dropout(config['dropout']),
LinearLayer(1024, 512),
ReLU(),
Dropout(config['dropout']),
LinearLayer(512, 39),
Softmax()
])
with mlflow.start_run():
    mlflow.log_params(config)
    mdl.train_with_loader(train_loader, epochs=config['max_epochs'], validation_loader=validation_loader, cls_count=39, flatten_input=True)
```
### PyTorch model and training function
```
#### Net and training function
class PlantDiseaseNet(nn.Module):
    def __init__(self, input_size=1024, l1=1024, l2=512, output_size=39, dropout_p=0.5):
        super(PlantDiseaseNet, self).__init__()
        self.fc1 = nn.Linear(input_size, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, output_size)
        self.dropout = nn.Dropout(dropout_p)

    def forward(self, x):
        x = x.view(x.shape[0], -1)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        x = F.log_softmax(self.fc3(x), dim=1)
        return x
def train(model, train_loader, validation_loader, config, n_epochs=10, stopping_treshold=None):
    if torch.cuda.is_available():
        print('CUDA is available! Training on GPU ...')
        model.cuda()
    # Loss and optimizer setup
    criterion = nn.NLLLoss()
    optimizer = optim.Adam(model.parameters(), lr=config['learning_rate'])
    # Setting minimum validation loss to inf
    validation_loss_minimum = np.Inf
    train_loss_history = []
    validation_loss_history = []
    for epoch in range(1, n_epochs + 1):
        training_loss = 0.0
        validation_loss = 0.0
        # Training loop
        training_accuracies = []
        for X, y in train_loader:
            # Moving data to gpu if using
            if torch.cuda.is_available():
                X, y = X.cuda(), y.cuda()
            # clear the gradients of all optimized variables
            optimizer.zero_grad()
            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(X)
            # calculate the batch loss
            loss = criterion(output, y)
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            # perform a single optimization step (parameter update)
            optimizer.step()
            # update training loss
            training_loss += loss.item()*X.size(0)
            # calculating accuracy
            ps = torch.exp(output)
            top_p, top_class = ps.topk(1, dim=1)
            equals = top_class == y.view(*top_class.shape)
            training_accuracies.append(torch.mean(equals.type(torch.FloatTensor)).item())
        # Validation loop
        with torch.no_grad():
            accuracies = []
            for X, y in validation_loader:
                # Moving data to gpu if using
                if torch.cuda.is_available():
                    X, y = X.cuda(), y.cuda()
                # forward pass: compute predicted outputs by passing inputs to the model
                output = model(X)
                # calculate the batch loss
                loss = criterion(output, y)
                # update validation loss
                validation_loss += loss.item()*X.size(0)
                # calculating accuracy
                ps = torch.exp(output)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == y.view(*top_class.shape)
                accuracies.append(torch.mean(equals.type(torch.FloatTensor)).item())
        # Mean loss
        mean_training_loss = training_loss/len(train_loader.sampler)
        mean_validation_loss = validation_loss/len(validation_loader.sampler)
        mean_train_accuracy = sum(training_accuracies)/len(training_accuracies)
        mean_accuracy = sum(accuracies)/len(accuracies)
        train_loss_history.append(mean_training_loss)
        validation_loss_history.append(mean_validation_loss)
        # Printing epoch stats
        print(f'Epoch: {epoch}/{n_epochs}, ' +
              f'Training Loss: {mean_training_loss:.3f}, ' +
              f'Train accuracy {mean_train_accuracy:.3f} ' +
              f'Validation Loss: {mean_validation_loss:.3f}, ' +
              f'Validation accuracy {mean_accuracy:.3f}')
        # logging with mlflow
        if mlflow.active_run():
            mlflow.log_metric('loss', mean_training_loss, step=epoch)
            mlflow.log_metric('accuracy', mean_train_accuracy, step=epoch)
            mlflow.log_metric('validation_accuracy', mean_accuracy, step=epoch)
            mlflow.log_metric('validation_loss', mean_validation_loss, step=epoch)
        # Testing for early stopping
        if stopping_treshold:
            if mean_validation_loss < validation_loss_minimum:
                validation_loss_minimum = mean_validation_loss
                print('New minimum validation loss (saving model)')
                save_pth = os.path.join('models', f'{config["name"]}.pt')
                torch.save(model.state_dict(), save_pth)
            elif len([v for v in validation_loss_history[-stopping_treshold:] if v > validation_loss_minimum]) >= stopping_treshold:
                print(f"Stopping early at epoch: {epoch}/{n_epochs}")
                break
    return train_loss_history, validation_loss_history
```
### Training the PyTorch model
```
# Getting configs
par = pt_runs[0].data.params
config = {'data_split': par['data_split'],
'decay': np.float64(par['decay']),
'dropout': np.float64(par['dropout']),
'framework': par['framework'],
'learning_rate': np.float64(par['learning_rate']),
'max_epochs': int(par['max_epochs']),
'resolution': int(par['resolution']),
'type': par['type'],
'name': 'top_pytorch'}
# Set up data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
validation_loader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True)
# Initializing the model
mdl = PlantDiseaseNet(input_size=config['resolution']**2, dropout_p=config['dropout'])
print("Starting training on network: \n", mdl)
mlflow.set_experiment("Plant Leaf Disease")
with mlflow.start_run():
    mlflow.log_params(config)
    train(mdl, train_loader, validation_loader, config, n_epochs=config['max_epochs'], stopping_treshold=50)
```
## Face and Facial Keypoint detection
After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so to detect any face you'll first have to do some pre-processing.
1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.
---
In the next python cell we load in required libraries for this section of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
#### Select an image
Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in an image
Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.
In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.
An example of face detection on a variety of images is shown below.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
    # draw a rectangle around each detected face
    # you may also need to change the width of the rectangle drawn depending on image resolution
    cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Loading in a trained model
Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.
First, load your best model by its filename.
```
import torch
from models import Net
net = Net()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
```
## Keypoint detection
Now, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.
**Hint**: The sizes of faces detected by a Haar detector and the faces your network has been trained on are of different sizes. If you find that your model is generating keypoints that are too small for a given face, try adding some padding to the detected `roi` before giving it as input to your model.
You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.
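One way to add that padding is a bounds-clamped crop like the following sketch; the helper name, the 50-pixel default pad, and the dummy image are assumptions for illustration:

```python
import numpy as np

def padded_roi(image, x, y, w, h, pad=50):
    # Expand the detection box by `pad` pixels on each side, clamping to the
    # image bounds so the slice indices never go negative or past the edges.
    top = max(y - pad, 0)
    left = max(x - pad, 0)
    bottom = min(y + h + pad, image.shape[0])
    right = min(x + w + pad, image.shape[1])
    return image[top:bottom, left:right]

img = np.zeros((400, 600, 3), dtype=np.uint8)   # dummy 400x600 RGB image
roi = padded_roi(img, x=10, y=20, w=100, h=120, pad=50)
print(roi.shape)  # (190, 160, 3) -- clamped at the top-left corner
```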
### TODO: Detect and display the predicted keypoints
After each face has been appropriately converted into an input Tensor for your network, you can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following, with facial keypoints that closely match the facial features on each individual face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
image_copy = np.copy(image)
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
    # Select the region of interest that is the face in the image
    roi = image_copy[y-50:y+h+50, x-50:x+w+50]
    ## TODO: Convert the face region from RGB to grayscale
    gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)
    ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
    gray = gray/255.0
    ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
    gray = cv2.resize(gray, (224,224))
    ## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
    roi_copy = torch.from_numpy(gray.reshape(1,1,224,224))
    roi_copy = roi_copy.type(torch.FloatTensor)
    ## TODO: Make facial keypoint predictions using your loaded, trained network
    output = net(roi_copy)
    ## TODO: Display each detected face and the corresponding keypoints
    output = output.view(68, -1)
    predicted = output.data.numpy()
    predicted = predicted*50.0+100
    plt.imshow(gray, cmap='gray')
    plt.scatter(predicted[:, 0], predicted[:, 1], s=60, marker='.', c='g')
    plt.show()
```
```
2+2
answer = 2+2
print(answer)
new_variable = 9
new_variable = 6
print(new_variable)
```
This is a markdown cell
# This is a heading
My program is awesome
```
import numpy
data = numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
print(data)
print(type(data[0,0]))
print(data.dtype)
print(data.shape)
print('first value in data:', data[0,0])
print('middle value in data:',data[30,20])
print(data[0:5,0:10])
print(data[4:8,0:10])
print(data[:3, 36:])
print(numpy.mean(data))
maxval,minval,stdval = numpy.max(data),numpy.min(data),numpy.std(data)
print('maximum inflammation: ',maxval)
print('minimum inflammation: ',minval)
print('standard deviation: ',stdval)
patient_0 = data[0,:]
print('Maximum inflammation for patient 0:',numpy.max(patient_0))
print(numpy.mean(data,axis=1).shape)
element = 'oxygen'
print('first three characters:', element[0:3])
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[-2])
print(element[1:-1])
patient3_week1 = data[3,0:7]
print(patient3_week1)
diff_values = numpy.diff(patient3_week1)
print(diff_values)
diff_values = numpy.diff(data,axis=1)
diff_values.shape
import matplotlib.pyplot
image = matplotlib.pyplot.imshow(data)
ave_inflammation = numpy.min(data,axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)
figure = matplotlib.pyplot.figure(figsize=(10.0,3.0))
axes1 = figure.add_subplot(1,3,1)
axes2 = figure.add_subplot(1,3,2)
axes3 = figure.add_subplot(1,3,3)
axes1.set_ylabel('average')
axes2.set_ylabel('max')
axes3.set_ylabel('min')
axes1.plot(numpy.mean(data, axis=0))
axes2.plot(numpy.max(data, axis=0))
axes3.plot(numpy.min(data, axis=0))
figure.tight_layout()
'String'.startswith("Str")
'String'.startswith("str")
import glob
path = 'data/'
filenames = sorted(glob.glob(path+'*'))
large_files = [] #Starts with inflammation
small_files = [] #Starts with small
other_files = []
for filename in filenames:
    if filename.startswith(path+"inflammation"):
        large_files.append(filename)
    elif filename.startswith(path+'small'):
        small_files.append(filename)
    else:
        other_files.append(filename)
print(large_files)
print(small_files)
print(other_files)
ave_inflammation = numpy.min(data,axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)
def fahr_to_celsius(temp):
    return ((temp - 32)*(5/9))
fahr_to_celsius(32)
print('freezing point of water:', fahr_to_celsius(32), 'C')
print('boiling point of water:',fahr_to_celsius(212), 'C')
def celsius_to_kelvin(temp_c):
    return temp_c + 273.15
print('freezing point of water in Kelvin:',celsius_to_kelvin(0))
def fahr_to_kelvin(temp_f,message):
    temp_c = fahr_to_celsius(temp_f)
    print(temp_c)
    temp_k = celsius_to_kelvin(temp_c)
    print(message)
    return temp_c,temp_k
temp_c,temp_k = fahr_to_kelvin(212.0,"Message text")
print(temp_c,temp_k)
def visualize(filename): #Take a filename and make plots
    data = numpy.loadtxt(fname=filename,delimiter=',')
    figure = matplotlib.pyplot.figure(figsize=(10.0,3.0))
    axes1 = figure.add_subplot(1,3,1)
    axes2 = figure.add_subplot(1,3,2)
    axes3 = figure.add_subplot(1,3,3)
    axes1.set_ylabel('average')
    axes2.set_ylabel('max')
    axes3.set_ylabel('min')
    axes1.plot(numpy.mean(data, axis=0))
    axes2.plot(numpy.max(data, axis=0))
    axes3.plot(numpy.min(data, axis=0))
    figure.tight_layout()
    matplotlib.pyplot.show()
def detect_problems(filename): #Take a filename; conditional example
    data = numpy.loadtxt(fname=filename,delimiter=',')
    if numpy.max(data,axis=0)[0] == 0 and numpy.max(data,axis=0)[20] == 20:
        print('Suspicious looking maxima')
    elif numpy.sum(numpy.min(data,axis=0)) == 0:
        print('Minima add up to zero')
    else:
        print('Seems OK!')
filenames = sorted(glob.glob('data/inflammation*.csv'))
for filename in filenames[:3]:
print(filename)
visualize(filename)
detect_problems(filename)
def offset_mean(data,target_mean_value=0,message='Message text'):
'''Returns a new array containing the original data
with its mean offset to match the desired value
target_mean_value defaults to zero
'''
print(message)
return(data-numpy.mean(data)) + target_mean_value
z = numpy.zeros((2,2))
print(offset_mean(z,3,'new_message'))
print(offset_mean(z,message='new text'))
help(offset_mean)
data = numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
print(offset_mean(data,0))
print('original min,mean,max',numpy.min(data),
numpy.mean(data),numpy.max(data))
offset_data= offset_mean(data,0)
print('offset min,mean,max',numpy.min(offset_data),
numpy.mean(offset_data),numpy.max(offset_data))
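# Aside (not part of the original lesson): a quick recap of the default and
# keyword arguments that offset_mean above relies on.
def greet(name, greeting='Hello'):
    return greeting + ', ' + name
print(greet('Alibek')) # Hello, Alibek (the default greeting is used)
print(greet(greeting='Hi', name='Alibek')) # Hi, Alibek (keywords can come in any order)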
def outer(input_string):
return input_string[0] + input_string[-1]
print(outer('hydrogen'))
numbers = [2,1.5,2,-7]
total = 0
for num in numbers:
assert num > 0, 'Data should only contain positive values'
total += num
print('total is:', total)
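# Aside (not part of the original lesson): the loop above halts with an
# AssertionError at -7; wrapping an assert in try/except shows the message
# without stopping the notebook.
caught = None
try:
    assert -7 > 0, 'Data should only contain positive values'
except AssertionError as err:
    caught = str(err)
print('caught:', caught)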
def normalize_rectangle(rect):
'''Normalize a rectangle so that it is at the origin
and is one unit long in its long axis
Input should be of the format (x0,y0,x1,y1)
x0,y0 and x1,y1 define the lower left and upper right corners'''
assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
x0,y0,x1,y1 = rect
assert x0 < x1, 'Invalid X coordinates'
assert y0 < y1, 'Invalid Y coordinates'
dx = x1-x0
dy = y1-y0
if dx > dy:
scaled = float(dx)/dy
upper_x,upper_y = 1.0,scaled
else:
scaled = float(dx)/dy
upper_x,upper_y = scaled, 1.0
assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'
return (0,0,upper_x,upper_y)
print(normalize_rectangle((0.0,1.0,2.0,)))
print(normalize_rectangle([4.0,2.0,1.0,5.0]))
print(normalize_rectangle((0.0,0.0,1.0,5.0)))
print(normalize_rectangle((0.0,0.0,5.0,1.0)))
assert range_overlap([(0,1)]) == (0,1)
assert range_overlap([(2,3),(2,4)]) == (2,3)
assert range_overlap([(0,1),(0,2),(-1,1)]) == (0,1)
assert range_overlap([(0,1),(5,6)]) == None
assert range_overlap([(0,1),(1,2)]) == None
def range_overlap(ranges):
'''Return common overlap among a set of [(left,right)] ranges'''
initial = ranges[0]
max_left = initial[0]
min_right = initial[1]
for (left,right) in ranges:
max_left = max(max_left,left)
min_right = min(min_right,right)
if max_left<min_right:
return(max_left,min_right)
else:
return None
range_overlap([(0,1),(5,6)])
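# Aside (not part of the original lesson): because the comparison above is a
# strict '<', ranges that merely touch, like (0,1) and (1,2), give None rather
# than a zero-width overlap, which is exactly what the earlier asserts expect.
# A compact restatement of the same logic:
def overlap_strict(ranges):
    left = max(low for low, high in ranges)
    right = min(high for low, high in ranges)
    return (left, right) if left < right else None
print(overlap_strict([(0, 1), (1, 2)])) # None
print(overlap_strict([(0, 1), (0, 2), (-1, 1)])) # (0, 1)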
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.linalg import inv
from astropy.table import Table, Column, vstack, hstack, unique, SortedArray,SCEngine
import astropy.units as u
from astropy.io import fits, ascii
import glob
import os
import numpy
from scipy.signal import medfilt
from scipy.interpolate import interp1d
from numba import njit
import tqdm
from pandas import DataFrame
# read in high S/N table
# loop of over the rows
# Night and Date to make spectra file name: PETAL_LOC, NIGHT, TILEID
# load hdul
# cut r camera, "r_wavelength"
# Grab wavelength
# Grab corresponding spectrum
# FIBER #
highSN = Table.read("/Volumes/GoogleDrive/My Drive/HighS_N.fits") # reads in table from previous code
PETAL = (highSN['PETAL_LOC'].data) # load each column as an array
NIGHT = (highSN['NIGHT'].data) # load each column as an array
TILEID = (highSN['TILEID'].data) # load each column as an array
combined = np.vstack((PETAL, TILEID,NIGHT)).T # combines each element of each array together
print(combined)
tileid = [] # creates an empty array
for row in highSN:
file = str(row['PETAL_LOC']) + '-' + str(row['TILEID']) + '-' + str(row['NIGHT']) # goes through each element of the array and grabs the wanted elements of each one. This then combines them in the right format
file_tileid = str(row['TILEID']) + '/' + str(row['NIGHT']) + '/coadd-' + file # this grabs the necessary element of each array and combines them to make part of our path in the next cell
tileid.append(file_tileid) # appends the path fragment created above to the empty list
# print(file)
file = ['/Volumes/GoogleDrive/My Drive/andes (1)/tiles/' + x +'.fits' for x in tileid] # this combines all of the elements grabbed above to make a filepath
# for x,y in zip(list1,list2):
for i in range(2843,3843):
hdul = fits.open(file[i]) # opens the fit data that belongs to the i sub_file and gets the information from that file
r_wave = (hdul['R_WAVELENGTH'].data) # Takes the chosen row of the hdul file
r_flux = (hdul['R_FLUX'].data) # Takes the chosen row of the hdul file
r_ivar = (hdul['R_IVAR'].data) # Takes the chosen row of the hdul file
FIBER = (highSN['FIBER'].data) # loads the FIBER column from the high-S/N table
fibermap = hdul['FIBERMAP'].data # Takes the chosen row of the hdul file
fibers = fibermap['FIBER']
# print(FIBER[i]) # prints each element of FIBER
index = (np.where(np.in1d(fibers, FIBER[i]))) # prints which index is where fibers and FIBER matches
# print(fibers[np.where(np.in1d(fibers, FIBER[i]))]) # plugs in the index to make sure this is where the number matches
index_ = list(index[0]) # converts the first element of the tuple created and converts it to a list.
# print(index_[0]) # prints the first element of the list
rflux = r_flux[index_[0],:] # plugs in the index found above and finds the matching spectrum
rivar = r_ivar[index_[0],:] # plugs in the index found above and finds the matching spectrum
rwave = r_wave
np.savez('/Volumes/GoogleDrive/My Drive/rflux.npz', rflux = rflux) # saves the array to an .npz file (np.savez has no 'overwrite' argument; extra keywords are stored as named arrays)
np.savez('/Volumes/GoogleDrive/My Drive/rivar.npz', rivar = rivar)
np.savez('/Volumes/GoogleDrive/My Drive/rwave.npz', rwave = rwave)
# plt.title('Spectrum', fontsize = 15) # places a title and sets font size
# plt.xlabel('Wavelength', fontsize = 15) # places a label on the x axis and sets font size
# plt.ylabel('$\\mathrm{flux\\,[10^{-17}\\, erg \\, cm^{-2} \\, s^{-1} \\, \\AA^{-1}] }$', fontsize = 15) # places a label on the y axis and sets font size
# plt.plot(r_wave, rflux) # plots the lists we just created using a function from matplotlib.pyplot. This plots both the x and y lists.
# plt.show()
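# Aside (illustrative, separate from the pipeline): np.savez stores each
# keyword argument as a named array in an .npz archive, and np.load exposes
# them by name again; sketch with an in-memory buffer instead of the
# Google Drive paths used above.
import io
import numpy as np
buf = io.BytesIO()
np.savez(buf, rflux=np.arange(3.0))
buf.seek(0)
loaded = np.load(buf)
print(loaded['rflux']) # [0. 1. 2.]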
# Check the IVAR array for 0 or negative values
# Set the flux = `np.nan` for any pixels that have that IVAR <=0
from scipy.signal import medfilt
from scipy.interpolate import interp1d
from numba import njit
import tqdm
import matplotlib.pyplot as plt
flux = np.load('/Volumes/GoogleDrive/My Drive/rflux.npz', allow_pickle=True)# Load the spectra (expecting a list of spectra, length of the list is the number of objects,
waves = np.load('/Volumes/GoogleDrive/My Drive/rwave.npz', allow_pickle=True)# Load the corresponding wavelength grid for each spectrum (also a list)
ivar = np.load('/Volumes/GoogleDrive/My Drive/rivar.npz', allow_pickle=True)
s = 3800
e = 7400
# common_wave = numpy.exp(numpy.linspace(numpy.log(s), numpy.log(e), 4200)) # Define the wavelength grid you would like to work with
flux = flux['rflux']
waves = waves['rwave']
ivar = ivar['rivar']
badpix = ivar <= 0
flux[badpix] = np.nan
print(flux)
nof_objects = len(flux)
@njit
def remove_outliers_and_nans(flux, flux_):
###
# Use nearby pixels to remove 3-sigma outliers and NaNs
##
nof_features = flux.size
d = 5
for f in range(d,nof_features-d):
val_flux = flux[f]
leave_out = numpy.concatenate((flux[f-d:f],flux[f+1:f+d]))
leave_out_mean = numpy.nanmean(leave_out)
leave_out_std = numpy.nanstd(leave_out)
if abs(val_flux - leave_out_mean) > 3*leave_out_std:
flux_[f] = leave_out_mean
d_ = d
while not numpy.isfinite(flux_[f]):
val_flux = flux[f]
d_ = d_ + 1
leave_out = numpy.concatenate((flux[f-d_:f],flux[f+1:f+d_]))
leave_out_mean = numpy.nanmean(leave_out)
flux_[f] = leave_out_mean
return flux_
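# Quick sanity check of the leave-one-out 3-sigma idea above, re-implemented
# in plain numpy (no numba) so it runs standalone; names here are illustrative.
import numpy
def replace_outliers(flux, d=5, nsigma=3):
    out = flux.copy()
    for f in range(d, flux.size - d):
        neighbors = numpy.concatenate((flux[f - d:f], flux[f + 1:f + d]))
        m = numpy.nanmean(neighbors)
        sd = numpy.nanstd(neighbors)
        if not numpy.isfinite(flux[f]) or abs(flux[f] - m) > nsigma * sd:
            out[f] = m
    return out
demo_spec = numpy.ones(20)
demo_spec[10] = 100.0 # inject a single-pixel spike
print(replace_outliers(demo_spec)[10]) # 1.0: the spike is replaced by the local mean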
common_wave = numpy.exp(numpy.linspace(numpy.log(s), numpy.log(e), 4200)) # log-spaced common wavelength grid
specs_same_grid = []
for wave, spec in zip(waves, flux):
    specs_same_grid += [numpy.interp(common_wave, wave, spec)] # resample each spectrum onto the common grid
specs_same_grid = numpy.array(specs_same_grid)
specs_final = numpy.zeros(specs_same_grid.shape)
for i in range(nof_objects):
    s_in = specs_same_grid[i].copy()
    s_out = s_in.copy()
    # remove outliers and nans
    specs_final[i] = remove_outliers_and_nans(s_in, s_out)
    # 5 pixel median filter (to remove some of the noise)
    specs_final[i] = medfilt(specs_final[i], 5)
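# Sketch of what numpy.interp does above: resample a spectrum from its own
# wavelength grid onto the shared grid (toy numbers, not real data).
import numpy
toy_grid = numpy.linspace(1.0, 2.0, 5)
toy_wave = numpy.array([1.0, 2.0])
toy_spec = numpy.array([0.0, 1.0])
print(numpy.interp(toy_grid, toy_wave, toy_spec)) # linear ramp: 0., 0.25, 0.5, 0.75, 1.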
plt.figure(figsize = (15,7))
idx = numpy.random.choice(specs_same_grid.shape[0]) # pick one object at random to display
plt.rcParams['figure.figsize'] = 10, 4
plt.figure()
plt.title('Original')
plt.step(waves[idx], specs_same_grid[idx], "k")
plt.xlabel("observed wavelength")
plt.ylabel("$\\mathrm{flux\\,[10^{-17}\\, erg \\, cm^{-2} \\, s^{-1} \\, \\AA^{-1}] }$")
plt.tight_layout()
plt.figure()
plt.title('Noise removed, common grid')
plt.step(common_wave, specs_final[idx], "k")
plt.xlabel("observed wavelength")
plt.ylabel("$\\mathrm{flux\\,[10^{-17}\\, erg \\, cm^{-2} \\, s^{-1} \\, \\AA^{-1}] }$")
plt.tight_layout()
import umap
fit = umap.UMAP()
em = fit.fit_transform(specs_final)
x = em[:,0]
y = em[:,1]
plt.figure(figsize = (8,7))
plt.scatter(x, y)
plt.show()
```
This notebook is scratch space for some relatively simple tweaks I'm making to ScienceBase Items in the NDC in order to better position the system for new data indexing code to be built against it. It requires authentication, using the sciencebasepy package (on PyPI) to write changes to ScienceBase.
```
import sciencebasepy
from IPython.display import display
sb = sciencebasepy.SbSession()
```
This little function is something I might spruce up and put in a pynggdpp package I'm considering. It uses the ScienceBase Vocab to retrieve a "fully qualified" term for use. Another approach would be to generalize it and contribute it to the sciencebasepy package, but of course, the ScienceBase Vocab kind of sucks in terms of its long-term potential. I could, instead, put some time into developing a more robust vocabulary, express it through the ESIP Community Ontology Repository, and then build code around terms resolvable to that source.
```
import requests
def ndc_collection_type_tag(tag_name):
vocab_search_url = f'https://www.sciencebase.gov/vocab/5bf3f7bce4b00ce5fb627d57/terms?nodeType=term&format=json&name={tag_name}'
r_vocab_search = requests.get(vocab_search_url).json()
if len(r_vocab_search['list']) == 1:
tag = {'type':'theme','name':r_vocab_search['list'][0]['name'],'scheme':r_vocab_search['list'][0]['scheme']}
return tag
else:
return None
username = input("Username: ")
sb.loginc(str(username))
```
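For reference, the tag that `ndc_collection_type_tag` returns is just a small dict built from the Vocab term record. A minimal offline sketch with a mocked term (the scheme URL below is a made-up placeholder, not the real Vocab scheme):

```python
# Mocked Vocab term record; field names follow the response handling above,
# but the scheme value is a placeholder.
def build_theme_tag(term):
    return {'type': 'theme', 'name': term['name'], 'scheme': term['scheme']}

mock_term = {'name': 'ndc_collection', 'scheme': 'https://example.org/vocab/collection_types'}
print(build_theme_tag(mock_term)['name'])  # ndc_collection
```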
# Set item type tags
I opted to use a simple vocabulary that sets items as ndc_organization, ndc_folder, or ndc_collection to help classify the primary items in the catalog as to their function. I did this in batches, being careful to review the items from a given data owner to see whether or not they did anything "out of the ordinary" before applying tags. The parent ID supplied in the first line of this block determined the given batch of items to run through. The main thing I did through this was to flag certain items as "folders," basically extraneous organizational constructs that some data owners decided to employ directly in ScienceBase. We may revisit this as we get into IGSN work, as there may be a desire to set these up as actual collections with subcollections.
```
collection_items = sb.get_child_ids('5ad902ade4b0e2c2dd27a82c')
item_count = 0
for sbid in collection_items:
this_item = sb.get_item(sbid, {'fields':'tags'})
isFolder = None
if 'tags' in this_item.keys():
isFolder = next((t for t in this_item['tags'] if t['name'] == 'ndc_folder'), None)
if isFolder is None:
item = {'id':sbid,'tags':[ndc_collection_type_tag('ndc_collection')]}
print(item)
sb.update_item(item)
item_count = item_count + 1
print('===========', item_count)
```
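The folder check in the loop above leans on Python's `next` with a default value: scan a generator for the first match and fall back to `None` when nothing matches. A standalone sketch of the idiom:

```python
tags = [{'name': 'ndc_collection'}, {'name': 'some_other_tag'}]

# next() returns the first matching element, or the supplied default (None)
# when the generator is exhausted without a match.
is_folder = next((t for t in tags if t['name'] == 'ndc_folder'), None)
is_collection = next((t for t in tags if t['name'] == 'ndc_collection'), None)
print(is_folder)       # None
print(is_collection)   # {'name': 'ndc_collection'}
```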
# Identify and flag metadata.xml files
In the original setup of the NDC from its roots in the Comprehensive Science Catalog, the results of a survey for collections from the State Geological Surveys were pulled from a Filemaker database into "metadata.xml" files that were processed into the Item model. These files are still onboard the ScienceBase Items, which is a reasonable thing to do and keep around in case we want to reprocess them in a different way. It seems reasonable to go ahead and verify these files and flag them with a title so that they can be separated out from files to examine for possible collection item processing.
Just to make sure I don't inadvertently flag something wrong, I'll write this process to open up and look at the individual "metadata.xml" files to ensure they are what I think they are before setting a title property.
```
parentId = '4f4e4760e4b07f02db47dfb4'
queryRoot = 'https://www.sciencebase.gov/catalog/items?format=json&max=1000&'
tag_scheme_collections = ndc_collection_type_tag('ndc_collection')
fields_collections = 'title,files'
sb_query_collections = f'{queryRoot}fields={fields_collections}&folderId={parentId}&filter=tags%3D{tag_scheme_collections}'
r_ndc_collections = requests.get(sb_query_collections).json()
for collection in [c for c in r_ndc_collections['items'] if 'files' in c.keys() and next((f for f in c['files'] if f['name'] == 'metadata.xml'), None) is not None]:
the_files = collection['files']
f_metadata_xml = next(f for f in the_files if f['name'] == 'metadata.xml')
if requests.get(f_metadata_xml['url']).text[39:47] == '<NGGDPP>':
new_files = []
for f in collection['files']:
if f['name'] == 'metadata.xml':
f['title'] = 'Collection Metadata Source File'
new_files.append(f)
new_item = {'id':collection['id'], 'files':new_files}
print(sb.update_item(new_item)['link']['url'])
# Just to make sure
different_purpose_metadata_xml = [c for c in r_ndc_collections['items'] if 'files' in c.keys() and next((f for f in c['files'] if f['name'] == 'metadata.xml' and 'title' in f.keys() and f['title'] != 'Collection Metadata Source File'), None) is not None]
display(different_purpose_metadata_xml)
```
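The `text[39:47] == '<NGGDPP>'` test above works because a standard single-line XML declaration with a UTF-8 encoding attribute is exactly 38 characters, so after the newline the root element starts at offset 39; a quick illustration with a made-up minimal file (assuming that exact declaration):

```python
# Made-up minimal NGGDPP metadata file (illustrative content only).
xml_text = '<?xml version="1.0" encoding="UTF-8"?>\n<NGGDPP></NGGDPP>'
print(xml_text[39:47])  # <NGGDPP>
```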