# Predicting Remaining Useful Life (advanced)
<p style="margin:30px">
<img style="display:inline; margin-right:50px" width=50% src="https://www.featuretools.com/wp-content/uploads/2017/12/FeatureLabs-Logo-Tangerine-800.png" alt="Featuretools" />
<img style="display:inline" width=15% src="https://upload.wikimedia.org/wikipedia/commons/e/e5/NASA_logo.svg" alt="NASA" />
</p>
This notebook has a more advanced workflow than [the other notebook](Simple%20Featuretools%20RUL%20Demo.ipynb) for predicting Remaining Useful Life (RUL). If you are new to either this dataset or Featuretools, I would recommend reading the other notebook first.
## Highlights
* Demonstrate how novel entityset structures improve predictive accuracy
* Use TSFresh Primitives from a featuretools [addon](https://docs.featuretools.com/getting_started/install.html#add-ons)
* Improve Mean Absolute Error by tuning hyperparameters with [BTB](https://github.com/HDI-Project/BTB)
Here is a collection of mean absolute errors from both notebooks. Where possible we report averages (denoted by \*); even so, the randomness in the Random Forest Regressor and in how we sample labels from the train data changes the score from run to run.
| | Train/Validation MAE| Test MAE|
|---------------------------------|---------------------|----------|
| Median Baseline | 72.06* | 50.66* |
| Simple Featuretools | 40.92* | 39.56 |
| Advanced: Custom Primitives | 35.90* | 28.84 |
| Advanced: Hyperparameter Tuning | 34.80* | 27.85 |
# Step 1: Load Data
We load in the train data using the same function we used in the previous notebook:
```
import composeml as cp
import numpy as np
import pandas as pd
import featuretools as ft
import utils
import os
from tqdm import tqdm
from sklearn.cluster import KMeans
data_path = 'data/train_FD004.txt'
data = utils.load_data(data_path)
data.head()
```
We also make cutoff times using [Compose](https://compose.featurelabs.com) to generate labels for engines that reach at least 100 cycles. For each engine, we generate 10 labels spaced 10 cycles apart.
```
def remaining_useful_life(df):
return len(df) - 1
lm = cp.LabelMaker(
target_entity='engine_no',
time_index='time',
labeling_function=remaining_useful_life,
)
label_times = lm.search(
data.sort_values('time'),
num_examples_per_instance=10,
minimum_data=100,
gap=10,
verbose=True,
)
label_times.head()
```
We're going to make 5 sets of cutoff times to use for cross validation by randomly sampling the label times we created previously.
```
splits = 5
cutoff_time_list = []
for i in range(splits):
sample = label_times.sample(n=249, random_state=i)
sample.sort_index(inplace=True)
cutoff_time_list.append(sample)
cutoff_time_list[0].head()
```
We're going to do something fancy for our entityset. The values for `operational_setting` 1-3 are continuous but create an implicit relation between different engines. If two engines have a similar `operational_setting`, it could indicate that we should expect the sensor measurements to mean similar things. We make clusters of those settings using [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) from scikit-learn and make a new entity from the clusters.
```
nclusters = 50
def make_entityset(data, nclusters, kmeans=None):
X = data[[
'operational_setting_1',
'operational_setting_2',
'operational_setting_3',
]]
if kmeans is None:
kmeans = KMeans(n_clusters=nclusters).fit(X)
data['settings_clusters'] = kmeans.predict(X)
es = ft.EntitySet('Dataset')
es.entity_from_dataframe(
dataframe=data,
entity_id='recordings',
index='index',
time_index='time',
)
es.normalize_entity(
base_entity_id='recordings',
new_entity_id='engines',
index='engine_no',
)
es.normalize_entity(
base_entity_id='recordings',
new_entity_id='settings_clusters',
index='settings_clusters',
)
return es, kmeans
es, kmeans = make_entityset(data, nclusters)
es
```
## Visualize EntitySet
```
es.plot()
```
# Step 2: DFS and Creating a Model
In addition to changing our `EntitySet` structure, we're also going to use the [Complexity](http://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_extraction.html#tsfresh.feature_extraction.feature_calculators.cid_ce) time series primitive from the featuretools [addon](https://docs.featuretools.com/getting_started/install.html#add-ons) of ready-to-use TSFresh Primitives.
```
from featuretools.tsfresh import CidCe
fm, features = ft.dfs(
entityset=es,
target_entity='engines',
agg_primitives=['last', 'max', CidCe(normalize=False)],
trans_primitives=[],
chunk_size=.26,
cutoff_time=cutoff_time_list[0],
max_depth=3,
verbose=True,
)
fm.to_csv('advanced_fm.csv')
fm.head()
```
We build 4 more feature matrices with the same feature set but different cutoff times. That lets us test the pipeline multiple times before using it on test data.
```
fm_list = [fm]
for i in tqdm(range(1, splits)):
es = make_entityset(data, nclusters, kmeans=kmeans)[0]
fm = ft.calculate_feature_matrix(
entityset=es,
features=features,
chunk_size=.26,
cutoff_time=cutoff_time_list[i],
)
fm_list.append(fm)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.feature_selection import RFE
def pipeline_for_test(fm_list, hyperparams=None, do_selection=False):
scores = []
regs = []
selectors = []
hyperparams = hyperparams or {
'n_estimators': 100,
'max_feats': 50,
'nfeats': 50,
}
for fm in fm_list:
X = fm.copy().fillna(0)
y = X.pop('remaining_useful_life')
n_estimators = int(hyperparams['n_estimators'])
max_features = int(hyperparams['max_feats'])
max_features = min(max_features, int(hyperparams['nfeats']))
reg = RandomForestRegressor(n_estimators=n_estimators, max_features=max_features)
X_train, X_test, y_train, y_test = train_test_split(X, y)
if do_selection:
reg2 = RandomForestRegressor(n_estimators=10, n_jobs=3)
            selector = RFE(reg2, n_features_to_select=int(hyperparams['nfeats']), step=25)
selector.fit(X_train, y_train)
X_train = selector.transform(X_train)
X_test = selector.transform(X_test)
selectors.append(selector)
reg.fit(X_train, y_train)
regs.append(reg)
preds = reg.predict(X_test)
mae = mean_absolute_error(preds, y_test)
scores.append(mae)
return scores, regs, selectors
scores, regs, selectors = pipeline_for_test(fm_list)
print([float('{:.1f}'.format(score)) for score in scores])
mean, std = np.mean(scores), np.std(scores)
info = 'Average MAE: {:.1f}, Std: {:.2f}\n'
print(info.format(mean, std))
most_imp_feats = utils.feature_importances(fm_list[0], regs[0])
data_test = utils.load_data('data/test_FD004.txt')
es_test, _ = make_entityset(
data_test,
nclusters,
kmeans=kmeans,
)
fm_test = ft.calculate_feature_matrix(
entityset=es_test,
features=features,
verbose=True,
chunk_size=.26,
)
X = fm_test.copy().fillna(0)
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
preds = regs[0].predict(X)
mae = mean_absolute_error(preds, y)
print('Mean Abs Error: {:.2f}'.format(mae))
```
# Step 3: Feature Selection and Scoring
Here, we'll use [Recursive Feature Elimination](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html). In order to set ourselves up for later optimization, we wrote the generic `pipeline_for_test` function above, which takes in a set of hyperparameters and returns a score. The pipeline first runs `RFE` and then splits the remaining data for scoring by a `RandomForestRegressor`. We pass in a list of hyperparameters, which we will tune later.
Lastly, we can use that selector and regressor to score the test values.
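As a standalone illustration of the select-then-score pattern described above, here is a minimal sketch on synthetic data (all names and parameter values below are illustrative, not taken from the notebook):

```
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a feature matrix: 200 rows, 40 features, 10 informative.
X, y = make_regression(n_samples=200, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit RFE on the training split only, eliminating 5 features per step.
selector = RFE(RandomForestRegressor(n_estimators=10, random_state=0),
               n_features_to_select=15, step=5)
selector.fit(X_train, y_train)

# Train the final regressor on the selected features and score on the held-out split.
reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(selector.transform(X_train), y_train)
preds = reg.predict(selector.transform(X_test))
print(mean_absolute_error(y_test, preds))
```

Fitting the selector on the training split only, then transforming both splits, avoids leaking information from the test rows into feature selection.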
# Step 4: Hyperparameter Tuning
Because of the way we set up our pipeline, we can use a Gaussian Process to tune the hyperparameters. We will use [BTB](https://github.com/HDI-Project/BTB) from the [HDI Project](https://github.com/HDI-Project). This will search through the hyperparameters `n_estimators` and `max_feats` for RandomForest, and the number of features for RFE to find the hyperparameter set that has the best average score.
```
from btb import HyperParameter, ParamTypes
from btb.tuning import GP
def run_btb(fm_list, n=30, best=45):
hyperparam_ranges = [
('n_estimators', HyperParameter(ParamTypes.INT, [10, 200])),
('max_feats', HyperParameter(ParamTypes.INT, [5, 50])),
('nfeats', HyperParameter(ParamTypes.INT, [10, 70])),
]
tuner = GP(hyperparam_ranges)
shape = (n, len(hyperparam_ranges))
tested_parameters = np.zeros(shape, dtype=object)
scores = []
print('[n_est, max_feats, nfeats]')
best_hyperparams = None
best_sel = None
best_reg = None
for i in tqdm(range(n)):
hyperparams = tuner.propose()
cvscores, regs, selectors = pipeline_for_test(
fm_list,
hyperparams=hyperparams,
do_selection=True,
)
bound = np.mean(cvscores)
tested_parameters[i, :] = hyperparams
tuner.add(hyperparams, -np.mean(cvscores))
if np.mean(cvscores) + np.std(cvscores) < best:
best = np.mean(cvscores)
best_hyperparams = hyperparams
best_reg = regs[0]
best_sel = selectors[0]
info = '{}. {} -- Average MAE: {:.1f}, Std: {:.2f}'
mean, std = np.mean(cvscores), np.std(cvscores)
print(info.format(i, best_hyperparams, mean, std))
print('Raw: {}'.format([float('{:.1f}'.format(s)) for s in cvscores]))
return best_hyperparams, (best_sel, best_reg)
best_hyperparams, best_pipeline = run_btb(fm_list, n=30)
X = fm_test.copy().fillna(0)
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
preds = best_pipeline[1].predict(best_pipeline[0].transform(X))
score = mean_absolute_error(preds, y)
print('Mean Abs Error on Test: {:.2f}'.format(score))
most_imp_feats = utils.feature_importances(
X.iloc[:, best_pipeline[0].support_],
best_pipeline[1],
)
```
# Appendix: Averaging old scores
To make a fair comparison between the previous notebook and this one, we should average scores where possible. The work in this section is exactly the work in the previous notebook plus some code for taking the average in the validation step.
```
from featuretools.primitives import Min
old_fm, features = ft.dfs(
entityset=es,
target_entity='engines',
agg_primitives=['last', 'max', 'min'],
trans_primitives=[],
cutoff_time=cutoff_time_list[0],
max_depth=3,
verbose=True,
)
old_fm_list = [old_fm]
for i in tqdm(range(1, splits)):
es = make_entityset(data, nclusters, kmeans=kmeans)[0]
old_fm = ft.calculate_feature_matrix(
entityset=es,
features=features,
cutoff_time=cutoff_time_list[i],
)
    old_fm_list.append(old_fm)
old_scores = []
median_scores = []
for fm in old_fm_list:
X = fm.copy().fillna(0)
y = X.pop('remaining_useful_life')
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg = RandomForestRegressor(n_estimators=10)
reg.fit(X_train, y_train)
preds = reg.predict(X_test)
mae = mean_absolute_error(preds, y_test)
old_scores.append(mae)
medianpredict = [np.median(y_train) for _ in y_test]
mae = mean_absolute_error(medianpredict, y_test)
median_scores.append(mae)
print([float('{:.1f}'.format(score)) for score in old_scores])
mean, std = np.mean(old_scores), np.std(old_scores)
info = 'Average MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
print([float('{:.1f}'.format(score)) for score in median_scores])
mean, std = np.mean(median_scores), np.std(median_scores)
info = 'Baseline by Median MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
median_scores_2 = []
for ct in cutoff_time_list:
medianpredict2 = [np.median(ct['remaining_useful_life'].values) for _ in y.values]
mae = mean_absolute_error(medianpredict2, y)
median_scores_2.append(mae)
print([float('{:.1f}'.format(score)) for score in median_scores_2])
mean, std = np.mean(median_scores_2), np.std(median_scores_2)
info = 'Baseline by Median MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
# Save output files
os.makedirs("output", exist_ok=True)
fm.to_csv('output/advanced_train_feature_matrix.csv')
cutoff_time_list[0].to_csv('output/advanced_train_label_times.csv')
fm_test.to_csv('output/advanced_test_feature_matrix.csv')
```
<p>
<img src="https://www.featurelabs.com/wp-content/uploads/2017/12/logo.png" alt="Featuretools" />
</p>
Featuretools was created by the developers at [Feature Labs](https://www.featurelabs.com/). If building impactful data science pipelines is important to you or your business, please [get in touch](https://www.featurelabs.com/contact).
# Tutorial Machine Learning
EXERCISE 1: Iris Flower B
### 1. Libraries used
```
import numpy as np
import pandas as pd
```
### 2. Importing the data
#### We import the dataset to begin the analysis
```
iris=pd.read_csv('IRIS.csv')
```
#### We view the first 5 rows of the dataset
```
print(iris.head())
```
### 3. Analyzing the data
#### We analyze the data we have available
```
print('Dataset information:')
print(iris.info())
print('Distribution of Iris species:')
print(iris.groupby('species').size())
```
### 4. Data visualization
```
import matplotlib.pyplot as plt
```
#### Sepal plot: Length vs. Width
```
fig=iris[iris.species=='Iris-setosa'].plot(kind='scatter', x='sepal_length', y='sepal_width', color='blue', label='Iris-setosa')
fig=iris[iris.species=='Iris-versicolor'].plot(kind='scatter', x='sepal_length', y='sepal_width', color='green', label='Iris-versicolor', ax=fig)
fig=iris[iris.species=='Iris-virginica'].plot(kind='scatter', x='sepal_length', y='sepal_width', color='red', label='Iris-virginica', ax=fig)
fig.set_xlabel('Sepal Length')
fig.set_ylabel('Sepal Width')
fig.set_title('Sepal Length vs. Width')
plt.show()
```
#### Petal plot: Length vs. Width
```
fig=iris[iris.species=='Iris-setosa'].plot(kind='scatter', x='petal_length', y='petal_width', color='blue', label='Iris-setosa')
fig=iris[iris.species=='Iris-versicolor'].plot(kind='scatter', x='petal_length', y='petal_width', color='green', label='Iris-versicolor', ax=fig)
fig=iris[iris.species=='Iris-virginica'].plot(kind='scatter', x='petal_length', y='petal_width', color='red', label='Iris-virginica', ax=fig)
fig.set_xlabel('Petal Length')
fig.set_ylabel('Petal Width')
fig.set_title('Petal Length vs. Width')
plt.show()
```
### 5. Applying Machine Learning algorithms
```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
```
### 6. Model with all the data
#### Split the data into features and labels
```
x=np.array(iris.drop(['species'], axis=1))
y=np.array(iris['species'])
```
#### Split the data into training and test sets for the algorithms
```
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.2)
print('{} samples for training and {} samples for testing'.format(x_train.shape[0], x_test.shape[0]))
```
#### Logistic Regression model
```
algoritmo=LogisticRegression(max_iter=1000)
algoritmo.fit(x_train, y_train)
y_pred=algoritmo.predict(x_test)
print('Logistic Regression accuracy: {}'.format(algoritmo.score(x_test, y_test)))
```
#### Support Vector Machine model
```
algoritmo=SVC()
algoritmo.fit(x_train, y_train)
y_pred=algoritmo.predict(x_test)
print('Support Vector Machine accuracy: {}'.format(algoritmo.score(x_test, y_test)))
```
#### K-Nearest Neighbors model
```
algoritmo=KNeighborsClassifier(n_neighbors=5)
algoritmo.fit(x_train, y_train)
y_pred=algoritmo.predict(x_test)
print('K-Nearest Neighbors accuracy: {}'.format(algoritmo.score(x_test, y_test)))
```
#### Decision Tree classification model
```
algoritmo=DecisionTreeClassifier()
algoritmo.fit(x_train, y_train)
y_pred=algoritmo.predict(x_test)
print('Decision Tree accuracy: {}'.format(algoritmo.score(x_test, y_test)))
```
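To compare the four models above on the same held-out data, accuracy can be computed explicitly with `accuracy_score`. A minimal, self-contained sketch using scikit-learn's bundled Iris data (assumed here in place of the `IRIS.csv` file; the fixed `random_state` is illustrative):

```
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Load the bundled Iris data and make one fixed train/test split for all models.
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

models = {
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'SVM': SVC(),
    'KNN': KNeighborsClassifier(n_neighbors=5),
    'Decision Tree': DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(x_train, y_train)
    # Score each model on the same unseen test rows.
    acc = accuracy_score(y_test, model.predict(x_test))
    print('{}: {:.3f}'.format(name, acc))
```

Evaluating every model on the same split keeps the comparison fair; scoring on the training set instead would reward overfitting.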
```
import source_synphot.passband
import source_synphot.io
import source_synphot.source
import astropy.table as at
from collections import OrderedDict
import pysynphot as S
%matplotlib notebook
%pylab
def myround(x, prec=2, base=.5):
return round(base * round(float(x)/base),prec)
models = at.Table.read('ckmodels.txt',format='ascii')
logZ = 0.
model_sed_names = []
temp = []
for s in models:
teff = max(3500.,s['teff'])
logg = myround(s['logg'])
# the models with logg < 1 are just padded with 0s
if logg >= 1:
temp.append(teff)
modstring = 'ckmod{:.0f}_{:.1f}_{:.2f}'.format(teff,logZ, logg)
model_sed_names.append(modstring)
model_sed = source_synphot.source.load_source(model_sed_names)
passbands = at.Table.read('source_synphot/passbands/pbzptmag.txt',format='ascii')
pbnames = [x['obsmode'] for x in passbands if x['passband'].startswith("WFIRST")]
model_mags = 0.
model = 'AB'
pbnames += ['sdss,g', 'sdss,r', 'sdss,i', 'sdss,z']
pbs = source_synphot.passband.load_pbs(pbnames, model_mags, model)
pbnames = pbs.keys()
print(pbnames)
color1 = 'sdss,g_sdss,r'
color2 = 'sdss,r_sdss,i'
col1 = []
col2 = []
# construct color-color vectors
for modelname in model_sed:
model= model_sed[modelname]
model = S.ArraySpectrum(model.wave, model.flux, name=modelname)
c1, c2 = color1.split('_')
pb1, zp1 = pbs[c1]
pb2, zp2 = pbs[c2]
c3, c4 = color2.split('_')
pb3, zp3 = pbs[c3]
pb4, zp4 = pbs[c4]
thiscol1 = source_synphot.passband.syncolor(model, pb1, pb2, zp1, zp2)
thiscol2 = source_synphot.passband.syncolor(model, pb3, pb4, zp3, zp4)
col1.append(thiscol1)
col2.append(thiscol2)
col1 = array(col1)
col2 = array(col2)
# select only useful objects
good = ~isnan(col1) & ~isnan(col2)
good = array(good)
from astroML.plotting import scatter_contour
from astroML.datasets import fetch_sdss_S82standards
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
data = fetch_sdss_S82standards()
g = data['mmu_g']
r = data['mmu_r']
i = data['mmu_i']
fig, ax = plt.subplots(figsize=(6, 6))
scatter_contour(g - r, r - i, threshold=200, log_counts=True, ax=ax,
histogram2d_args=dict(bins=40),
plot_args=dict(marker=',', linestyle='none', color='black'),
contour_args=dict(cmap=plt.cm.bone))
ax.set_xlabel(r'${\rm g - r}$')
ax.set_ylabel(r'${\rm r - i}$')
ax.set_xlim(-0.6, 2.5)
ax.set_ylim(-0.6, 2.5)
plot(col1[good], col2[good], marker='o', linestyle='None')
z = linspace(1e-8, 0.2, 1000, endpoint=True)
outspec = source_synphot.source.pre_process_source('./sn2000fa-20001215.flm',15.3,pbs['sdss,r'][0],0.021)
figure()
plot(outspec.wave, outspec.flux)
source_synphot.passband.synphot(outspec, pbs['sdss,r'][0], zp=pbs['sdss,r'][1])
# if you believe the observations, this should be kinda like the absolute magnitude of an SNIa at peak.
mag = source_synphot.passband.synphot_over_redshifts(outspec, z, pbs['sdss,r'][0], pbs['sdss,r'][1])
figure()
plot(z, mag)
xscale('log')
ind = np.abs(z-0.021).argmin()
print(mag[ind])
# We should get back what we normalized to for input at the same redshift. Not bad!
```
# In-Class Coding Lab: Web Services and APIs
### Overview
The web has long evolved from user consumption to device consumption. In the early days of the web, when you wanted to check the weather you opened up your browser and visited a website. Nowadays your smart watch / smart phone retrieves the weather for you and displays it on the device. Your device can't predict the weather. It's simply consuming a weather-based service.
The key to making device consumption work is APIs (Application Program Interfaces). Products we use every day like smartphones, Amazon's Alexa, and gaming consoles all rely on APIs. They seem "smart" and "powerful" but in actuality they're only interfacing with smart and powerful services in the cloud.
API consumption is the new reality of programming; it is why we cover it in this course. Once you understand how to consume APIs you can write a program to do almost anything and harness the power of the internet to make your own programs look "smart" and "powerful."
This lab covers how to properly consume web service APIs with Python. Here's what we will cover.
1. Understanding requests and responses
1. Proper error handling
1. Parameter handling
1. Refactoring as a function
```
# Run this to make sure you have the pre-requisites!
!pip install -q requests
```
## Part 1: Understanding Requests and responses
In this part we learn about the Python requests module. http://docs.python-requests.org/en/master/user/quickstart/
This module makes it easy to write code to send HTTP requests over the internet and handle the responses. It will be the cornerstone of our API consumption in this course. While there are other modules which accomplish the same thing, `requests` is the most straightforward and easiest to use.
We'll begin by importing the modules we will need. We do this here so we won't need to include these lines in the other code we write in this lab.
```
# start by importing the modules we will need
import requests
import json
```
### The request
As you learned in class and your assigned readings, the HTTP protocol has **verbs** which constitute the type of request you will send to the remote resource, or **url**. Based on the url and request type, you will get a **response**.
The following line of code makes a **get** request (that's the HTTP verb) to the OpenStreetMap Nominatim geocoding service. This service attempts to convert an address (in this case `Hinds Hall Syracuse University`) into a set of global coordinates (latitude and longitude), so that location can be plotted on a map.
```
url = 'https://nominatim.openstreetmap.org/search?q=Hinds+Hall+Syracuse+University&format=json'
response = requests.get(url)
```
### The response
The `get()` method returns a `Response` object variable. I called it `response` in this example but it could be called anything.
The HTTP response consists of a *status code* and *body*. The status code lets you know if the request worked, while the body of the response contains the actual data.
```
response.ok # did the request work?
response.text # what's in the body of the response, as a raw string
```
### Converting responses into Python object variables
In the case of **website urls**, the response body is **HTML**, which should be rendered in a web browser. But we're dealing with web service APIs so...
In the case of **web API urls**, the response body could be in a variety of formats, from **plain text** to **XML** or **JSON**. In this course we will focus only on the JSON format because, as we've seen, it translates easily into Python object variables.
Let's convert the response to a Python object variable. In this case it will be a Python list of dictionaries.
```
geodata = response.json() # try to decode the response from JSON format
geodata # this is now a Python object variable
```
With our Python object, we can now walk the list of dictionaries to retrieve the latitude and longitude.
```
lat = geodata[0]['lat']
lon =geodata[0]['lon']
print(lat, lon)
```
In the code above we "walked" the Python list of dictionaries to get to the location:
- `geodata` is a list
- `geodata[0]` is the first item in that list, a dictionary
- `geodata[0]['lat']` is the value of the `lat` key in that dictionary: the latitude
- `geodata[0]['lon']` is the value of the `lon` key in that dictionary: the longitude
It should be noted that this process will vary for each API you call, so it's important to get accustomed to performing this task. You'll be doing it quite often.
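Since every API nests its data differently, it helps to access nested responses defensively while exploring. A short sketch (the `payload` structure below is made up for illustration):

```
# A made-up response shaped like a geocoder result: a list of dictionaries.
payload = [{'lat': '43.038', 'lon': '-76.135', 'display_name': 'Hinds Hall'}]

# Guard against an empty list before indexing...
if payload:
    first = payload[0]
    # ...and use dict.get() so a missing key yields a default instead of a KeyError.
    name = first.get('display_name', 'unknown')
    lat = first.get('lat')
    print(name, lat)
```

This way a surprise in the response shape produces a default value you can check for, rather than crashing the program.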
One final thing to address. What is the type of `lat` and `lon`?
```
type(lat), type(lon)
```
Bummer, they are strings. We want them to be floats, so we will need to parse the strings with the `float()` function:
```
lat = float(geodata[0]['lat'])
lon = float(geodata[0]['lon'])
print("Latitude: %f, Longitude: %f" % (lat, lon))
```
### Now You Try It!
Walk the `geodata` object variable and retrieve the value under the key `display_name` and the key `boundingbox`
```
# todo:
# retrieve the display_name and put it in a variable
# retrieve the boundingbox and put it in a variable
# print both of them out
```
## Part 2: Parameter Handling
In the example above we hard-coded "Hinds Hall Syracuse University" into the request:
```
url = 'https://nominatim.openstreetmap.org/search?q=Hinds+Hall+Syracuse+University&format=json'
```
A better way to write this code is to allow for the input of any location and supply that to the service. To make this work we need to send parameters into the request as a dictionary. This way we can geolocate any address!
You'll notice that on the url, we are passing **key-value pairs** the key is `q` and the value is `Hinds+Hall+Syracuse+University`. The other key is `format` and the value is `json`. Hey, Python dictionaries are also key-value pairs so:
```
url = 'https://nominatim.openstreetmap.org/search' # base URL without parameters after the "?"
search = 'Hinds Hall Syracuse University'
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options)
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
print("Search for:", search)
print("Coordinates:", coords)
print("%s is located at (%f,%f)" %(search, coords['lat'], coords['lng']))
```
### Looking up any address
RECALL: For `requests.get(url, params = options)` the part that says `params = options` is called a **named argument**, which is Python's way of specifying an optional function argument.
With our parameter now outside the url, we can easily re-write this code to work for any location! Go ahead and execute the code and input `Queens, NY`. This will retrieve the coordinates `(40.728224,-73.794852)`
```
url = 'https://nominatim.openstreetmap.org/search' # base URL without parameters after the "?"
search = input("Enter a location to geocode: ")
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options)
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
print("Search for:", search)
print("Coordinates:", coords)
print("%s is located at (%f,%f)" %(search, coords['lat'], coords['lng']))
```
### So useful, it should be a function!
One thing you'll come to realize quickly is that your API calls should be wrapped in functions. This promotes **readability** and **code re-use**. For example:
```
def get_coordinates(search):
    url = 'https://nominatim.openstreetmap.org/search' # base URL without parameters after the "?"
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options)
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
return coords
# main program here:
location = input("Enter a location: ")
coords = get_coordinates(location)
print("%s is located at (%f,%f)" %(location, coords['lat'], coords['lng']))
```
### Other request methods
Not every API we call uses the `get()` method. Some use `post()` because the amount of data you provide is too large to place on the url.
An example of this is the **Text-Processing.com** sentiment analysis service. http://text-processing.com/docs/sentiment.html This service will detect the sentiment or mood of text. You give the service some text, and it tells you whether that text is positive, negative or neutral.
```
# 'you suck' == 'negative'
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : 'you suck'}
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
# 'I love cheese' == 'positive'
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : 'I love cheese'}
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
```
In the examples provided we used the `post()` method instead of the `get()` method. The `post()` method has a named argument `data` which takes a dictionary of data. The key required by **text-processing.com** is `text`, which holds the text you would like to process for sentiment.
We use a post in the event the text we wish to process is very long. Case in point:
```
tweet = "Arnold Schwarzenegger isn't voluntarily leaving the Apprentice, he was fired by his bad (pathetic) ratings, not by me. Sad end to a great show"
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : tweet }
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
```
### Now You Try It!
Use the above example to write a program which will input any text and print the sentiment using this API!
```
# todo write code here
```
## Part 3: Proper Error Handling (In 3 Simple Rules)
When you write code that depends on other people's code from around the Internet, there's a lot that can go wrong. Therefore we prescribe the following advice:
```
Assume anything that CAN go wrong WILL go wrong
```
### Rule 1: Don't assume the internet 'always works'
The first rule of programming over a network is to NEVER assume the network is available. You need to assume the worst. No WiFi, user types in a bad url, the remote website is down, etc.
We handle this in the `requests` module by catching the `requests.exceptions.RequestException` Here's an example:
```
url = "http://this is not a website"
try:
response = requests.get(url) # throws an exception when it cannot connect
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 2: Don't assume the response you get back is valid
Assuming the internet is not broken (Rule 1), you should now check for HTTP response 200, which means the url responded successfully. Other responses like 404 or 501 indicate an error occurred, and that means you should not keep processing the response.
Here's one way to do it:
```
url = 'http://www.syr.edu/mikeisawesum' # this should 404
try:
response = requests.get(url)
if response.ok: # same as response.status_code == 200
data = response.text
else: # Some other non 200 response code
print("There was an Error requesting:", url, " HTTP Response Code: ", response.status_code)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 2a: Use exceptions instead of if else in this case
Personally I don't like to use `if ... else` to handle an error. Instead, I prefer to instruct `requests` to throw a `requests.exceptions.HTTPError` exception whenever the response is not ok. This makes the code you write a little cleaner.
Errors are rare occurrences, and so I don't like error handling cluttering up my code.
```
url = 'http://www.syr.edu/mikeisawesum' # this should 404
try:
response = requests.get(url) # throws an exception when it cannot connect
response.raise_for_status() # throws an exception when not 'ok'
data = response.text
# response not ok
except requests.exceptions.HTTPError as e:
print("ERROR: Response from ", url, 'was not ok.')
print("DETAILS:", e)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 3: Don't assume the data you get back is the data you expect.
And finally, do not assume the data arriving in the `response` is the data you expected. Specifically, when you try to decode the `JSON`, don't assume that will go smoothly. Catch the `json.decoder.JSONDecodeError`.
```
url = 'http://www.syr.edu' # this is HTML, not JSON
try:
response = requests.get(url) # throws an exception when it cannot connect
response.raise_for_status() # throws an exception when not 'ok'
data = response.json() # throws an exception when cannot decode json
# cannot decode json
except json.decoder.JSONDecodeError as e:
print("ERROR: Cannot decode the response into json")
print("DETAILS", e)
# response not ok
except requests.exceptions.HTTPError as e:
print("ERROR: Response from ", url, 'was not ok.')
print("DETAILS:", e)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Now You try it!
Using the last example above, write a program to input a location, call the `get_coordinates()` function, then print the coordinates. Make sure to handle all three types of exceptions!!!
```
# todo write code here to input a location, look up coordinates, and print
# it should handle errors!!!
```
|
github_jupyter
|
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import os
import numpy as np
from hangul.read_data import load_data, load_all_labels, load_images
import umap
import pandas as pd
from sklearn.decomposition import PCA, FastICA
from scipy.stats import gaussian_kde
from bokeh.palettes import Greys256, Inferno256, Plasma256, inferno, viridis, grey
import matplotlib.font_manager as mfm
from hangul import style
fonts = ['GothicA1-Regular', 'NanumMyeongjo', 'NanumBrush', 'Stylish-Regular']
fontsfolder = '/data/hangul/h5s'
fontnames = os.listdir(fontsfolder)
for font in [fonts[0]]:
filename = os.path.join(fontsfolder, '{}/{}_24.h5'.format(font, font))
image, label, initial, medial, final = load_data(filename, median_shape=True)
imf, style = load_all_labels(filename)
font_path = "/home/ahyeon96/GothicA1-Regular.ttf"
prop = mfm.FontProperties(fname=font_path)
image = image.reshape(11172,-1)
emb = umap.UMAP(random_state=0, learning_rate=0.01, n_neighbors = 100).fit_transform(image)
np.savez('/home/ahyeon96/hangul_misc/umap_emb.npz', data=emb)
emb = np.load('/home/ahyeon96/hangul_misc/umap_emb.npz')
emb = emb['data']
x = emb[:,0]
y = emb[:,1]
xmin = np.min(x)
xmax = np.max(x)
ymin = np.min(y)
ymax = np.max(y)
X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([X.ravel(), Y.ravel()])
values = np.vstack([x, y])
kernel = gaussian_kde(values)
Z = np.reshape(kernel(positions).T, X.shape)
# appendix figure 9
fig, axes = plt.subplots(1, 3, sharex='col', sharey='row', figsize=(18,6))
title = ['Initial', 'Medial', 'Final']
abc = ['A', 'B', 'C']
ranges = [19,21,28]
init_chars = ['ᆨ', 'ᆩ', 'ᆫ', 'ᆮ', 'ㄸ', 'ㄹ', 'ㅁ', 'ㅂ', 'ㅃ', 'ㅅ', 'ㅆ', 'ㅇ', 'ㅈ', 'ㅉ', 'ㅊ', 'ㅋ', 'ㅌ', 'ㅍ', 'ㅎ']
med_chars = ['ㅏ', 'ㅐ', 'ㅑ', 'ㅒ', 'ㅓ', 'ㅔ', 'ㅕ', 'ㅖ', 'ㅗ', 'ㅘ', 'ㅙ',
'ㅚ', 'ㅛ', 'ㅜ', 'ㅝ', 'ㅞ', 'ㅟ', 'ㅠ', 'ㅡ', 'ㅢ', 'ㅣ']
fin_chars = ['', 'ㄱ', 'ㄲ', 'ᆪ', 'ㄴ', 'ᆬ', 'ᆭ', 'ㄷ', 'ㄹ', 'ᆰ', 'ᆱ', 'ᆲ', 'ᆳ', 'ᆴ', 'ᆵ', 'ᆶ',
'ㅁ', 'ㅂ', 'ᆹ', 'ㅅ', 'ㅆ', 'ㅇ', 'ㅈ', 'ㅊ', 'ㅋ', 'ㅌ', 'ㅍ', 'ㅎ']
small = ['ᆪ', 'ㄴ', 'ᆬ', 'ᆭ','ᆰ', 'ᆱ', 'ᆲ', 'ᆳ', 'ᆴ', 'ᆵ', 'ᆶ','ᆹ']
label_name = [init_chars, med_chars, fin_chars]
sizes = [10,10,8]
ints = [21*28,19*28,19*21]
eps = 0.3
for i in range(3):
lab = label[:,i]
axes[i].imshow(np.rot90(Z), cmap=plt.cm.Reds, extent=[xmin, xmax, ymin, ymax], vmin=np.min(Z), vmax=1.5*np.max(Z), aspect='auto',
rasterized=True)
for k in range(ranges[i]):
idxs = np.nonzero(lab == k)[0]
for j in np.random.permutation(ints[i])[:sizes[i]]:
xj = x[idxs[j]]
yj = y[idxs[j]]
if (abs(xj-xmax) > eps) and (abs(xj-xmin) > eps) and (abs(yj-ymax) > eps) and (abs(yj-ymin) > eps):
if label_name[i][k] in small:
axes[i].text(xj, yj, label_name[i][k], fontproperties=prop, fontsize=28,
horizontalalignment='center', verticalalignment='center')
else:
axes[i].text(xj, yj, label_name[i][k], fontproperties=prop, fontsize=20,
horizontalalignment='center', verticalalignment='center')
axes[i].set_xlim([xmin, xmax])
axes[i].set_ylim([ymin, ymax])
axes[i].set_title('GothicA1-Regular {}'.format(title[i]), fontsize=22)
axes[i].text(-5.5,10, '{}'.format(abc[i]), fontsize=22, fontweight='bold')
plt.tight_layout()
plt.savefig('/home/ahyeon96/hangul_misc/umap_chars.png', dpi=300)
plt.show()
# umap single font
label_type = ['Initial', 'Medial', 'Final']
label_num = [19,21,28]
fig, ax = plt.subplots(1,3, sharey=True, figsize=(25,5))
for i in range(3):
labels = label[:,i]
transformed_umap = umap.UMAP(random_state=0, learning_rate=0.01, n_neighbors = 100).fit_transform(image)
g = ax[i].scatter(transformed_umap[:,0], transformed_umap[:,1], c=labels, cmap=plt.get_cmap("jet", label_num[i]), marker='.', alpha=0.5)
ax[i].set_title('UMAP AppleGothic {}'.format(label_type[i]), fontsize=20)
plt.colorbar(g, ticks=range(label_num[i]), ax=ax[i])
plt.savefig('/Users/ahyeon/Desktop/umap_single.pdf')
plt.show()
# appendix figure 10
label_type = ['Initial', 'Medial', 'Final']
label_num = [2,5,3]
plot_labs = ['A', 'B', 'C']
colorbar_labs = [['single','double'],['below','right-single','right-double','below-right-single','below-right-double'],
['none','single','double']]
ticks = [[0.25, 0.75],[0.4,1.2,2,2.8,3.6],[0.35,1,1.65]]
f, ax = plt.subplots(1,3, sharey=True, figsize=(24,6))
image = image.reshape(11172,-1)
for i in range(3):
labels = style[:,i]
g = ax[i].scatter(emb[:,0], emb[:,1], c=labels, cmap=plt.get_cmap("viridis", label_num[i]), marker='.')
ax[i].set_title('GothicA1-Regular\n {} Geometry'.format(label_type[i]), fontsize=20)
ax[i].text(-7,10, plot_labs[i], fontweight='bold', fontsize=26)
cbar = plt.colorbar(g, ticks=ticks[i], ax=ax[i])
cbar.ax.set_yticklabels(colorbar_labs[i], fontsize=12)
plt.tight_layout()
plt.savefig('/home/ahyeon96/hangul_misc/umap_geometry.pdf', dpi=300, bbox_inches='tight')
plt.show()
```
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from bs4 import BeautifulSoup
import os
import unicodedata
```
## Reviews from Roger Ebert
```
# Reading Roger Ebert review from text files online
import requests
import glob
folder = 'ebert_reviews'
# Create folder if it doesn't already exist
if not os.path.exists(folder):
os.makedirs(folder)
ebert_review_urls = ['https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9900_1-the-wizard-of-oz-1939-film/1-the-wizard-of-oz-1939-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9901_2-citizen-kane/2-citizen-kane.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9901_3-the-third-man/3-the-third-man.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_4-get-out-film/4-get-out-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_5-mad-max-fury-road/5-mad-max-fury-road.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_6-the-cabinet-of-dr.-caligari/6-the-cabinet-of-dr.-caligari.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_7-all-about-eve/7-all-about-eve.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_8-inside-out-2015-film/8-inside-out-2015-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_9-the-godfather/9-the-godfather.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_10-metropolis-1927-film/10-metropolis-1927-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_11-e.t.-the-extra-terrestrial/11-e.t.-the-extra-terrestrial.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_12-modern-times-film/12-modern-times-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_14-singin-in-the-rain/14-singin-in-the-rain.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_15-boyhood-film/15-boyhood-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_16-casablanca-film/16-casablanca-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_17-moonlight-2016-film/17-moonlight-2016-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_18-psycho-1960-film/18-psycho-1960-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_19-laura-1944-film/19-laura-1944-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_20-nosferatu/20-nosferatu.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_21-snow-white-and-the-seven-dwarfs-1937-film/21-snow-white-and-the-seven-dwarfs-1937-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_22-a-hard-day27s-night-film/22-a-hard-day27s-night-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_23-la-grande-illusion/23-la-grande-illusion.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_25-the-battle-of-algiers/25-the-battle-of-algiers.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_26-dunkirk-2017-film/26-dunkirk-2017-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_27-the-maltese-falcon-1941-film/27-the-maltese-falcon-1941-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_29-12-years-a-slave-film/29-12-years-a-slave-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_30-gravity-2013-film/30-gravity-2013-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_31-sunset-boulevard-film/31-sunset-boulevard-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_32-king-kong-1933-film/32-king-kong-1933-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_33-spotlight-film/33-spotlight-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_34-the-adventures-of-robin-hood/34-the-adventures-of-robin-hood.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_35-rashomon/35-rashomon.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_36-rear-window/36-rear-window.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_37-selma-film/37-selma-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_38-taxi-driver/38-taxi-driver.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_39-toy-story-3/39-toy-story-3.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_40-argo-2012-film/40-argo-2012-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_41-toy-story-2/41-toy-story-2.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_42-the-big-sick/42-the-big-sick.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_43-bride-of-frankenstein/43-bride-of-frankenstein.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_44-zootopia/44-zootopia.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_45-m-1931-film/45-m-1931-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_46-wonder-woman-2017-film/46-wonder-woman-2017-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_48-alien-film/48-alien-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_49-bicycle-thieves/49-bicycle-thieves.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_50-seven-samurai/50-seven-samurai.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_51-the-treasure-of-the-sierra-madre-film/51-the-treasure-of-the-sierra-madre-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_52-up-2009-film/52-up-2009-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_53-12-angry-men-1957-film/53-12-angry-men-1957-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_54-the-400-blows/54-the-400-blows.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9911_55-logan-film/55-logan-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9911_57-army-of-shadows/57-army-of-shadows.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9912_58-arrival-film/58-arrival-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9912_59-baby-driver/59-baby-driver.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_60-a-streetcar-named-desire-1951-film/60-a-streetcar-named-desire-1951-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_61-the-night-of-the-hunter-film/61-the-night-of-the-hunter-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_62-star-wars-the-force-awakens/62-star-wars-the-force-awakens.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_63-manchester-by-the-sea-film/63-manchester-by-the-sea-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_64-dr.-strangelove/64-dr.-strangelove.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_66-vertigo-film/66-vertigo-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_67-the-dark-knight-film/67-the-dark-knight-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_68-touch-of-evil/68-touch-of-evil.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_69-the-babadook/69-the-babadook.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_72-rosemary27s-baby-film/72-rosemary27s-baby-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9916_73-finding-nemo/73-finding-nemo.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9916_74-brooklyn-film/74-brooklyn-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9917_75-the-wrestler-2008-film/75-the-wrestler-2008-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9917_77-l.a.-confidential-film/77-l.a.-confidential-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_78-gone-with-the-wind-film/78-gone-with-the-wind-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_79-the-good-the-bad-and-the-ugly/79-the-good-the-bad-and-the-ugly.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_80-skyfall/80-skyfall.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_82-tokyo-story/82-tokyo-story.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_83-hell-or-high-water-film/83-hell-or-high-water-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_84-pinocchio-1940-film/84-pinocchio-1940-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_85-the-jungle-book-2016-film/85-the-jungle-book-2016-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991a_86-la-la-land-film/86-la-la-land-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991b_87-star-trek-film/87-star-trek-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991b_89-apocalypse-now/89-apocalypse-now.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_90-on-the-waterfront/90-on-the-waterfront.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_91-the-wages-of-fear/91-the-wages-of-fear.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_92-the-last-picture-show/92-the-last-picture-show.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_93-harry-potter-and-the-deathly-hallows-part-2/93-harry-potter-and-the-deathly-hallows-part-2.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_94-the-grapes-of-wrath-film/94-the-grapes-of-wrath-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_96-man-on-wire/96-man-on-wire.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_97-jaws-film/97-jaws-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_98-toy-story/98-toy-story.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_99-the-godfather-part-ii/99-the-godfather-part-ii.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_100-battleship-potemkin/100-battleship-potemkin.txt']
# access url, read content and write to local file
for url in ebert_review_urls:
r = requests.get(url)
    with open(os.path.join(folder, url.split('/')[-1]), 'wb') as file:
file.write(r.content)
len(os.listdir(folder))
```
Note that not all 100 movies have been reviewed by Roger Ebert. In fact, we only have reviews for 88 of the 100 movies on the list.
```
# Parsing each Review
review_list = []
# read all txt files in folder
for review in glob.glob(folder+'/*.txt'):
with open(review, encoding='utf-8') as file:
title = file.readline().strip()
review_url = file.readline().strip()
review_text = file.read().strip()
review_dict = {'title': title, 'review_url': review_url, 'review': review_text}
review_list.append(review_dict)
df_reviews = pd.DataFrame(review_list, columns=review_dict.keys())
df_reviews = df_reviews.sort_values('title').reset_index(drop=True)
df_reviews.head()
# Saving it locally
df_reviews.to_csv('movies_review_text.csv', index=False)
```
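The parsing loop above relies on each review file following a fixed layout: the title on the first line, the review URL on the second, and the review body on the remaining lines. A minimal self-contained sketch of that layout (the sample content below is made up for illustration):

```python
from io import StringIO

# Hypothetical review content following the same three-part layout:
# title on line 1, review URL on line 2, review body afterwards.
sample = StringIO(
    "The Wizard of Oz\n"
    "http://www.example.com/reviews/the-wizard-of-oz\n"
    "What an amazing film this is...\nMore review text here."
)

title = sample.readline().strip()
review_url = sample.readline().strip()
review_text = sample.read().strip()

print(title)       # The Wizard of Oz
print(review_url)  # http://www.example.com/reviews/the-wizard-of-oz
```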
```
#export
from local.torch_basics import *
from local.test import *
from local.layers import *
from local.data.all import *
from local.notebook.showdoc import show_doc
from local.optimizer import *
from local.learner import *
#default_exp callback.hook
```
# Model hooks
> Callback and helper function to add hooks in models
```
from local.utils.test import *
```
## What are hooks?
Hooks are functions you can attach to a particular layer in your model that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction to hooks, but you should jump to `HookCallback` if you quickly want to implement one (and read the following example, `ActivationStats`).
Forward hooks are functions that take three arguments, the layer it's applied to, the input of that layer and the output of that layer.
```
tst_model = nn.Linear(5,3)
def example_forward_hook(m,i,o): print(m,i,o)
x = torch.randn(4,5)
hook = tst_model.register_forward_hook(example_forward_hook)
y = tst_model(x)
hook.remove()
```
Backward hooks are functions that take three arguments: the layer it's applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output.
```
def example_backward_hook(m,gi,go): print(m,gi,go)
hook = tst_model.register_backward_hook(example_backward_hook)
x = torch.randn(4,5)
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
hook.remove()
```
Hooks can change the input/output of a layer, or the gradients, and print values or shapes. If you want to store something related to these inputs/outputs, it's best to have your hook associated with a class, so that it can store the result in the state of an instance of that class.
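As a minimal sketch of that pattern (this `OutputRecorder` is an illustration using only plain PyTorch, not the fastai `Hook` class):

```python
import torch
from torch import nn

class OutputRecorder:
    "Toy example of keeping hook state on a class instance."
    def __init__(self, module):
        self.outputs = []
        self.handle = module.register_forward_hook(self.hook_fn)
    def hook_fn(self, module, inp, out):
        # Store a detached copy so we don't keep the autograd graph alive
        self.outputs.append(out.detach())
    def remove(self):
        self.handle.remove()

layer = nn.Linear(5, 3)
rec = OutputRecorder(layer)
y = layer(torch.randn(4, 5))
rec.remove()  # after removal, further forward passes are no longer recorded
```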
## Hook -
```
#export
@docs
class Hook():
"Create a hook on `m` with `hook_func`."
def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False):
self.hook_func,self.detach,self.cpu,self.stored = hook_func,detach,cpu,None
f = m.register_forward_hook if is_forward else m.register_backward_hook
self.hook = f(self.hook_fn)
self.removed = False
def hook_fn(self, module, input, output):
"Applies `hook_func` to `module`, `input`, `output`."
if self.detach: input,output = to_detach(input, cpu=self.cpu),to_detach(output, cpu=self.cpu)
self.stored = self.hook_func(module, input, output)
def remove(self):
"Remove the hook from the model."
if not self.removed:
self.hook.remove()
self.removed=True
def __enter__(self, *args): return self
def __exit__(self, *args): self.remove()
_docs = dict(__enter__="Register the hook",
__exit__="Remove the hook")
```
This will be called during the forward pass if `is_forward=True`, the backward pass otherwise, and will optionally `detach` and put on the `cpu` the (gradient of the) input/output of the model before passing them to `hook_func`. The result of `hook_func` will be stored in the `stored` attribute of the `Hook`.
```
tst_model = nn.Linear(5,3)
hook = Hook(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hook.stored, y)
show_doc(Hook.hook_fn)
show_doc(Hook.remove)
```
> Note: It's important to properly remove your hooks from your model when you're done, to avoid them being called again the next time your model is applied to some inputs, and to free the memory that goes with their state.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
hook = Hook(tst_model, example_forward_hook)
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
hook.remove()
test_stdout(lambda: tst_model(x), "")
```
### Context Manager
Since it's very important to remove your `Hook` even if your code is interrupted by some bug, `Hook` can be used as a context manager.
```
show_doc(Hook.__enter__)
show_doc(Hook.__exit__)
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
with Hook(tst_model, example_forward_hook) as h:
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
test_stdout(lambda: tst_model(x), "")
#export
def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
def hook_output(module, detach=True, cpu=False, grad=False):
"Return a `Hook` that stores activations of `module` in `self.stored`"
return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `module`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
with hook_output(tst_model) as h:
y = tst_model(x)
test_eq(y, h.stored)
assert not h.stored.requires_grad
with hook_output(tst_model, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
test_close(2*y / y.numel(), h.stored[0])
#cuda
with hook_output(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
test_eq(h.stored.device, torch.device('cpu'))
```
## Hooks -
```
#export
@docs
class Hooks():
"Create several hooks on the modules in `ms` with `hook_func`."
def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
@property
def stored(self): return [o.stored for o in self]
def remove(self):
"Remove the hooks from the model."
for h in self.hooks: h.remove()
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
_docs = dict(stored = "The states saved in each hook.",
__enter__="Register the hooks",
__exit__="Remove the hooks")
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
hooks = Hooks(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hooks.stored[0], layers[0](x))
test_eq(hooks.stored[1], F.relu(layers[0](x)))
test_eq(hooks.stored[2], y)
hooks.remove()
show_doc(Hooks.stored, name='Hooks.stored')
show_doc(Hooks.remove)
```
### Context Manager
Like `Hook`, you can use `Hooks` as a context manager.
```
show_doc(Hooks.__enter__)
show_doc(Hooks.__exit__)
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
with Hooks(layers, lambda m,i,o: o) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
#export
def hook_outputs(modules, detach=True, cpu=False, grad=False):
"Return `Hooks` that store activations of all `modules` in `self.stored`"
return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `modules`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
x = torch.randn(4,5)
with hook_outputs(layers) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
for s in h.stored: assert not s.requires_grad
with hook_outputs(layers, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
g = 2*y / y.numel()
test_close(g, h.stored[2][0])
g = g @ layers[2].weight.data
test_close(g, h.stored[1][0])
g = g * (layers[0](x) > 0).float()
test_close(g, h.stored[0][0])
#cuda
with hook_outputs(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
for s in h.stored: test_eq(s.device, torch.device('cpu'))
```
## HookCallback -
To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a `hook` function (plus any element you might need).
```
#export
def has_params(m):
"Check if `m` has at least one parameter"
return len(list(m.parameters())) > 0
assert has_params(nn.Linear(3,4))
assert has_params(nn.LSTM(4,5,2))
assert not has_params(nn.ReLU())
#export
class HookCallback(Callback):
"`Callback` that can be used to register hooks on `modules`"
def __init__(self, hook=None, modules=None, do_remove=True, is_forward=True, detach=True, cpu=False):
self.modules,self.do_remove = modules,do_remove
self.is_forward,self.detach,self.cpu = is_forward,detach,cpu
if hook is not None: setattr(self, 'hook', hook)
def begin_fit(self):
"Register the `Hooks` on `self.modules`."
if not self.modules:
self.modules = [m for m in flatten_model(self.model) if has_params(m)]
self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
def after_fit(self):
"Remove the `Hooks`."
if self.do_remove: self._remove()
def _remove(self):
if getattr(self, 'hooks', None): self.hooks.remove()
def __del__(self): self._remove()
```
You can either subclass and implement a `hook` function (along with any event you want) or pass a `hook` function when initializing. Such a function needs to take three arguments: a layer, its input, and its output (for a backward hook, the input is the gradient with respect to the inputs, and the output is the gradient with respect to the output), and it can either modify them or update some state according to them.
If not provided, `modules` will default to the layers of `self.model` that have a `weight` attribute. Depending on `do_remove`, the hooks will be properly removed at the end of training (or in case of error). `is_forward` , `detach` and `cpu` are passed to `Hooks`.
The function called at each forward (or backward) pass is `self.hook` and must be implemented when subclassing this callback.
```
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def after_batch(self): test_eq(self.hooks.stored[0], self.pred)
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
class TstCallback(HookCallback):
def __init__(self, modules=None, do_remove=True, detach=True, cpu=False):
super().__init__(None, modules, do_remove, False, detach, cpu)
def hook(self, m, i, o): return o
def after_batch(self):
if self.training:
test_eq(self.hooks.stored[0][0], 2*(self.pred-self.yb)/self.pred.shape[0])
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
show_doc(HookCallback.begin_fit)
show_doc(HookCallback.after_fit)
```
An example of such a `HookCallback` is the following, that stores the mean and stds of activations that go through the network.
```
#exports
@docs
class ActivationStats(HookCallback):
"Callback that record the mean and std of activations."
def begin_fit(self):
"Initialize stats."
super().begin_fit()
self.stats = []
def hook(self, m, i, o): return o.mean().item(),o.std().item()
def after_batch(self):
"Take the stored results and puts it in `self.stats`"
if self.training: self.stats.append(self.hooks.stored)
def after_fit(self):
"Polish the final result."
self.stats = tensor(self.stats).permute(2,1,0)
super().after_fit()
_docs = dict(hook="Take the mean and std of the output")
learn = synth_learner(n_trn=5, cbs = ActivationStats())
learn.fit(1)
learn.activation_stats.stats
```
The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
```
#hide
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def begin_fit(self):
super().begin_fit()
self.means,self.stds = [],[]
def after_batch(self):
if self.training:
self.means.append(self.hooks.stored[0].mean().item())
self.stds.append (self.hooks.stored[0].std() .item())
learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])
learn.fit(1)
test_eq(learn.activation_stats.stats[0].squeeze(), tensor(learn.tst.means))
test_eq(learn.activation_stats.stats[1].squeeze(), tensor(learn.tst.stds))
```
## Model summary
```
#export
def total_params(m):
"Give the number of parameters of a module and if it's trainable or not"
params = sum([p.numel() for p in m.parameters()])
trains = [p.requires_grad for p in m.parameters()]
return params, (False if len(trains)==0 else trains[0])
test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))
test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))
test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))
test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))
test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))
test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))
#First ih layer 20--10, all else 10--10. *4 for the four gates
test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))
#export
def layer_info(learn):
def _track(m, i, o):
return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
layers = [m for m in flatten_model(learn.model)]
xb,_ = learn.data.train_dl.one_batch()
with Hooks(layers, _track) as h:
_ = learn.model.eval()(apply(lambda o:o[:1], xb))
return h.stored
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
learn = synth_learner()
learn.model=m
test_eq(layer_info(learn), [
('Linear', 100, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
#export core
class PrettyString(str):
"Little hack to get strings to show properly in Jupyter."
def __repr__(self): return self
#export
def _print_shapes(o, bs):
if isinstance(o, torch.Size): return ' x '.join([str(bs)] + [str(t) for t in o[1:]])
else: return [_print_shapes(x, bs) for x in o]
#export
@patch
def summary(self:Learner):
"Print a summary of the model, optimizer and loss function."
infos = layer_info(self)
xb,_ = self.data.train_dl.one_batch()
n,bs = 64,find_bs(xb)
inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
res = f"{self.model.__class__.__name__} (Input shape: {inp_sz})\n"
res += "=" * n + "\n"
res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
res += "=" * n + "\n"
ps,trn_ps = 0,0
for typ,np,trn,sz in infos:
if sz is None: continue
ps += np
if trn: trn_ps += np
res += f"{typ:<20} {_print_shapes(sz, bs):<20} {np:<10,} {str(trn):<10}\n"
res += "_" * n + "\n"
res += f"\nTotal params: {ps:,}\n"
res += f"Total trainable params: {trn_ps:,}\n"
res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\nCallbacks:\n"
res += '\n'.join(f" - {cb}" for cb in sort_by_run(self.cbs))
return PrettyString(res)
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
for p in m[0].parameters(): p.requires_grad_(False)
learn = synth_learner()
learn.model=m
learn.summary()
```
## Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
## Demo of 1D regression with an Attentive Neural Process with Recurrent Neural Network (ANP-RNN) model
This notebook will provide a simple and straightforward demonstration on how to utilize an Attentive Neural Process with a Recurrent Neural Network (ANP-RNN) to regress context and target points to a sine curve.
First, we need to import all necessary packages and modules for our task:
```
import os
import sys
import torch
from matplotlib import pyplot as plt  # used by the training plots below
# Provide access to modules in repo.
sys.path.insert(0, os.path.abspath('neural_process_models'))
sys.path.insert(0, os.path.abspath('misc'))
from neural_process_models.anp_rnn import ANP_RNN_Model
from misc.test_sin_regression.Sin_Wave_Data import sin_wave_data, plot_functions
```
The `sin_wave_data` class, defined in `misc/test_sin_regression/Sin_Wave_Data.py`, represents the curve that we will try to regress to. From instances of this class, we can sample context and target points from the curve to serve as inputs for our neural process.
The default parameters of this class will produce a "ground truth" curve defined as the sum of the following:
1. A sine curve with amplitude 1, frequency 1, and phase 1.
2. A sine curve with amplitude 2, frequency 2, and phase 1.
3. A measured amount of noise (0.1).
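As a rough sketch (the exact noise model and phase convention are assumptions, not read from the repository), the default ground-truth curve described above can be generated like this:

```python
import numpy as np

def ground_truth(x, noise_scale=0.1, rng=None):
    # Sum of the two sine components described above, plus Gaussian noise:
    # amplitude * sin(frequency * x + phase) with (1, 1, 1) and (2, 2, 1).
    rng = np.random.default_rng(0) if rng is None else rng
    clean = 1.0 * np.sin(1.0 * x + 1.0) + 2.0 * np.sin(2.0 * x + 1.0)
    return clean + noise_scale * rng.standard_normal(x.shape)

x = np.linspace(-6, 6, 200)  # same range the training queries use below
y = ground_truth(x)
```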
Let us create an instance of this class:
```
data = sin_wave_data()
```
Next, we need to instantiate our model. The ANP-RNN model is implemented in the `ANP_RNN_Model` class in the file `neural_process_models/anp_rnn.py`.
We will use the following parameters for our example model:
* 1 for x-dimension and y-dimension (since this is 1D regression)
* 4 hidden layers of dimension 256 for encoders and decoder
* 256 as the latent dimension for encoders and decoder
* We will utilize a self-attention process.
* We will utilize a deterministic path for the encoder.
Let us create an instance of this class, as well as set some hyperparameters for our training:
```
np_model = ANP_RNN_Model(x_dim=1,
y_dim=1,
mlp_hidden_size_list=[256, 256, 256, 256],
latent_dim=256,
use_rnn=True,
use_self_attention=True,
le_self_attention_type="laplace",
de_self_attention_type="laplace",
de_cross_attention_type="laplace",
use_deter_path=True)
optim = torch.optim.Adam(np_model.parameters(), lr=1e-4)
num_epochs = 1000
batch_size = 16
```
Now, let us train our model. For each epoch, we will print the loss at that epoch.
Additionally, every 50 epochs, an image will be generated and displayed, using `pyplot`. This will give you an opportunity to more closely analyze and/or save the images, if you would like.
```
for epoch in range(1, num_epochs + 1):
print("step = " + str(epoch))
np_model.train()
plt.clf()
optim.zero_grad()
ctt_x, ctt_y, tgt_x, tgt_y = data.query(batch_size=batch_size,
context_x_start=-6,
context_x_end=6,
context_x_num=200,
target_x_start=-6,
target_x_end=6,
target_x_num=200)
mu, sigma, log_p, kl, loss = np_model(ctt_x, ctt_y, tgt_x, tgt_y)
print('loss = ', loss)
loss.backward()
optim.step()
np_model.eval()
if epoch % 50 == 0:
plt.ion()
plot_functions(tgt_x.numpy(),
tgt_y.numpy(),
ctt_x.numpy(),
ctt_y.numpy(),
mu.detach().numpy(),
sigma.detach().numpy())
title_str = 'Training at epoch ' + str(epoch)
plt.title(title_str)
plt.pause(0.1)
plt.ioff()
plt.show()
```
|
github_jupyter
|
<img src="images/usm.jpg" width="480" height="240" align="left"/>
# MAT281 - Lab N°02
## Lab objectives
* Reinforce basic classification concepts.
## Contents
* [Problem 01](#p1)
<a id='p1'></a>
## I.- Problem 01
<img src="https://www.xenonstack.com/wp-content/uploads/xenonstack-credit-card-fraud-detection.png" width="360" height="360" align="center"/>
The dataset is called `creditcard.csv` and consists of several columns with information about credit card fraud, where the **Class** column is 0 if the record is not a fraud and 1 if it is a fraud.
In this exercise we will work on the problem of imbalanced classes. Let's look at the first five rows of the dataset:
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix,accuracy_score,recall_score,precision_score,f1_score
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load the data
df = pd.read_csv(os.path.join("data","creditcard.csv"), sep=";")
df.head()
```
Let's look at the total number of fraud cases compared to the cases that are not fraud:
```
# compute proportions
df_count = pd.DataFrame()
df_count["fraude"] = ["no", "si"]
df_count["total"] = df["Class"].value_counts()
df_count["porcentaje"] = 100 * df_count["total"] / df_count["total"].sum()
df_count
```
We can see that less than 1% of the records are fraudulent. The questions that arise are:
* How should the training and test sets be built?
* Which models should we use?
* Which metrics should we use?
For example, let's analyze the logistic regression model and apply the standard procedure:
```
# data
y = df.Class
X = df.drop('Class', axis=1)
# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27)
# create the model
lr = LogisticRegression(solver='liblinear').fit(X_train, y_train)
# predict
lr_pred = lr.predict(X_test)
# compute accuracy
accuracy_score(y_test, lr_pred)
```
Overall, the model has an **accuracy** of 99.9%; one might assume the model predicts almost perfectly, but that is far from true. To see why, we need to follow these steps:
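Note that the `DummyClassifier` imported above already makes the point: a baseline that never predicts fraud still scores near 100% accuracy on data this imbalanced. The snippet below demonstrates this on synthetic labels (not the credit card data):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Synthetic labels with roughly the same 0.2% positive rate as the fraud data
rng = np.random.default_rng(0)
y_toy = (rng.random(10_000) < 0.002).astype(int)
X_toy = np.zeros((10_000, 1))  # the features are irrelevant to this baseline

dummy = DummyClassifier(strategy="most_frequent").fit(X_toy, y_toy)
acc = accuracy_score(y_toy, dummy.predict(X_toy))  # near 0.998, yet zero frauds detected
```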
### 1. Change the performance metric
The first step is to compare different metrics; for that, let's use the 4 classic metrics covered in the course:
* accuracy
* precision
* recall
* f-score
At this point you should compute the corresponding metrics and comment on your results.
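Before plugging in the real data, it can help to see how the four metrics reduce to ratios over the confusion-matrix cells on a tiny hand-made example (unrelated to the credit card data):

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true_toy = [0, 0, 0, 0, 0, 1, 1, 1]   # 5 non-frauds, 3 frauds
y_pred_toy = [0, 0, 0, 0, 1, 1, 1, 0]   # one false positive, one false negative

tn, fp, fn, tp = confusion_matrix(y_true_toy, y_pred_toy).ravel()  # 4, 1, 1, 2
precision = tp / (tp + fp)          # 2/3: predicted frauds that are real
recall    = tp / (tp + fn)          # 2/3: real frauds that were caught
f1        = 2 * precision * recall / (precision + recall)

assert precision == precision_score(y_true_toy, y_pred_toy)
assert recall == recall_score(y_true_toy, y_pred_toy)
```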
```
# metrics
y_true = list(y_test)
y_pred = list(lr.predict(X_test))
print('\nConfusion matrix:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetrics:\n ')
print('accuracy: ',accuracy_score(y_test, lr_pred))
print('recall: ',recall_score(y_test, lr_pred))
print('precision: ',precision_score(y_test, lr_pred))
print('f-score: ',f1_score(y_test, lr_pred))
print("")
```
##### accuracy: the model shows high overall correctness in predicting fraud and non-fraud cards relative to the whole sample.
##### recall: for the fraudulent cards, the predictions are noticeably weaker, probably because the model is trained mostly on non-fraud examples, given how many of them there are. This metric emphasizes the cases that were predicted as non-fraud when they actually were fraud.
##### precision: this metric measures how many of the cards predicted as fraud actually are fraud. It is higher than recall here, implying that predicting fraud when there is none is the less frequent, and less serious, error. In this problem, recall, which penalizes saying a card is not fraudulent when it is, matters more; this follows from comparing the denominators of the two metrics.
##### f-score: the balance between the other two metrics, a weighted combination of precision and recall. In this case it lies between recall and precision.
##### Final comments: using accuracy by itself to analyze this problem would be incomplete. In view of the other metrics, a natural improvement is to balance the example classes.
### 2. Change the algorithm
The second step is to compare different models. Keep in mind that the model you use must solve the supervised classification problem.
At this point you should fit a **random forest** model, compute the metrics, and compare against the logistic regression model.
```
# train model
rfc = RandomForestClassifier(n_estimators=5).fit(X_train, y_train) # random forest algorithm
# metrics
y_true = list(y_test)
y_pred = list(rfc.predict(X_test)) # random forest predictions
print('\nConfusion matrix:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetrics:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred))
print('precision: ',precision_score(y_true,y_pred))
print('f-score: ',f1_score(y_true,y_pred))
print("")
```
##### In this case we can see that recall, precision, and f-score all improve compared to the logistic regression model. However, the ordering of the metrics stays the same: accuracy, precision, f-score, recall (in that order).
##### Changing the prediction model can help improve the metrics to some extent, but it does not solve the class imbalance, and some bias in classifying the classes may remain.
##### Note: the number of estimators for the random forest algorithm was set arbitrarily to 5.
### 3. Resampling techniques: oversampling the minority class
The third step is to use resampling techniques, but on the minority class. This means using resampling to bring the number of minority-class examples up to the size of the majority class.
```
from sklearn.utils import resample
# concatenate the training set
X = pd.concat([X_train, y_train], axis=1)
# separate the classes
not_fraud = X[X.Class==0]
fraud = X[X.Class==1]
# resample the minority class
fraud_upsampled = resample(fraud,
                          replace=True, # sample with replacement
                          n_samples=len(not_fraud), # match number in majority class
                          random_state=27) # reproducible results
# recombine the results
upsampled = pd.concat([not_fraud, fraud_upsampled])
# check the number of elements per class
upsampled.Class.value_counts()
# oversampled training data
y_train = upsampled.Class
X_train = upsampled.drop('Class', axis=1)
```
Using these new training sets, fit the logistic regression model again and compute the corresponding metrics. Also, discuss the advantages and disadvantages of this procedure.
```
lr_upsampled = LogisticRegression(solver='liblinear').fit(X_train, y_train) # logistic regression on the oversampled data
# metrics
y_true = list(y_test)
y_pred = list(lr_upsampled.predict(X_test))
print('\nConfusion matrix:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetrics:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred))
print('precision: ',precision_score(y_true,y_pred))
print('f-score: ',f1_score(y_true,y_pred))
print("")
```
##### In this case, the metrics generally decreased compared to the imbalanced setting. Note that accuracy stands far above the other metrics. The classifier is probably somewhat overfit and would not extrapolate well to other settings. Duplicating the minority class is likely more effective when the imbalance is not as extreme as in this example, and when the minority class has more variability to support better extrapolation.
### 4. Resampling techniques: undersampling the majority class
The fourth step is to use resampling techniques, but on the majority class. This means using resampling to bring the number of majority-class examples down to the size of the minority class.
```
# resample the majority class
not_fraud_downsampled = resample(not_fraud,
                                replace = False, # sample without replacement
                                n_samples = len(fraud), # match minority n
                                random_state = 27) # reproducible results
# recombine the results
downsampled = pd.concat([not_fraud_downsampled, fraud])
# check the number of elements per class
downsampled.Class.value_counts()
# undersampled training data
y_train = downsampled.Class
X_train = downsampled.drop('Class', axis=1)
```
Using these new training sets, fit the logistic regression model again and compute the corresponding metrics. Also, discuss the advantages and disadvantages of this procedure.
```
undersampled = LogisticRegression(solver='liblinear').fit(X_train, y_train) # logistic regression on the undersampled data
# metrics
y_true = list(y_test)
y_pred = list(undersampled.predict(X_test))
print('\nConfusion matrix:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetrics:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred))
print('precision: ',precision_score(y_true,y_pred))
print('f-score: ',f1_score(y_true,y_pred))
print("")
```
##### The metrics drop compared with the imbalanced analysis. Compared with the previous methodology, the results are slightly better for precision, f-score, and accuracy, but not enough to claim this methodology is superior.
##### The drawback I would mention for this methodology is that shrinking the majority class down to the minority class can discard information that is important for classifying one of the classes.
### 5. Conclusions
To finish the lab, perform a comparative analysis of the different results obtained in steps 1-4 and draw your own conclusions about the case.
##### When the examples in a classification problem are disproportionate, the classification algorithm can favor the majority class. To detect this it is necessary to analyze the class proportions and not rely only on accuracy, also checking metrics such as precision, recall, and f-score. This is highly recommended when the cost of a particular kind of mistake matters more than simply classifying correctly, which is what these other metrics take into account.
##### To assess the reliability of the models analyzed, beyond varying the algorithm, which can add some confidence to the predictions, it is important to add some class-balancing methodology. Oversampling the minority class up to the majority class can overfit the classifier, since the duplicated minority examples may add no new information and the biased tendency in the predictions can remain. On the other hand, undersampling the majority class down to the minority class can discard information that is important for classification. It is advisable to analyze case by case.
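One convenient way to carry out the comparison is to collect the four metrics for each fitted model into a single DataFrame. The helper below is a sketch; the commented lines assume the prediction variables computed in steps 1-4 are still in scope:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def metric_row(y_true, y_pred):
    # Collect the four course metrics for one model into a dict
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "recall": recall_score(y_true, y_pred, zero_division=0),
        "f-score": f1_score(y_true, y_pred, zero_division=0),
    }

# Hypothetical usage with the predictions from the earlier steps:
# comparison = pd.DataFrame({
#     "logistic (imbalanced)": metric_row(y_test, lr_pred),
#     "random forest": metric_row(y_test, rfc.predict(X_test)),
# }).T

# Sanity check on a tiny example
row = metric_row([0, 0, 1, 1], [0, 1, 1, 1])
```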
|
github_jupyter
|
# Venture Funding with Deep Learning
You work as a risk management associate at Alphabet Soup, a venture capital firm. Alphabet Soup’s business team receives many funding applications from startups every day. This team has asked you to help them create a model that predicts whether applicants will be successful if funded by Alphabet Soup.
The business team has given you a CSV containing more than 34,000 organizations that have received funding from Alphabet Soup over the years. With your knowledge of machine learning and neural networks, you decide to use the features in the provided dataset to create a binary classifier model that will predict whether an applicant will become a successful business. The CSV file contains a variety of information about these businesses, including whether or not they ultimately became successful.
## Instructions:
The steps for this challenge are broken out into the following sections:
* Prepare the data for use on a neural network model.
* Compile and evaluate a binary classification model using a neural network.
* Optimize the neural network model.
### Prepare the Data for Use on a Neural Network Model
Using your knowledge of Pandas and scikit-learn’s `StandardScaler()`, preprocess the dataset so that you can use it to compile and evaluate the neural network model later.
Open the starter code file, and complete the following data preparation steps:
1. Read the `applicants_data.csv` file into a Pandas DataFrame. Review the DataFrame, looking for categorical variables that will need to be encoded, as well as columns that could eventually define your features and target variables.
2. Drop the “EIN” (Employer Identification Number) and “NAME” columns from the DataFrame, because they are not relevant to the binary classification model.
3. Encode the dataset’s categorical variables using `OneHotEncoder`, and then place the encoded variables into a new DataFrame.
4. Add the original DataFrame’s numerical variables to the DataFrame containing the encoded variables.
> **Note** To complete this step, you will employ the Pandas `concat()` function that was introduced earlier in this course.
5. Using the preprocessed data, create the features (`X`) and target (`y`) datasets. The target dataset should be defined by the preprocessed DataFrame column “IS_SUCCESSFUL”. The remaining columns should define the features dataset.
6. Split the features and target sets into training and testing datasets.
7. Use scikit-learn's `StandardScaler` to scale the features data.
### Compile and Evaluate a Binary Classification Model Using a Neural Network
Use your knowledge of TensorFlow to design a binary classification deep neural network model. This model should use the dataset’s features to predict whether an Alphabet Soup–funded startup will be successful based on the features in the dataset. Consider the number of inputs before determining the number of layers that your model will contain or the number of neurons on each layer. Then, compile and fit your model. Finally, evaluate your binary classification model to calculate the model’s loss and accuracy.
To do so, complete the following steps:
1. Create a deep neural network by assigning the number of input features, the number of layers, and the number of neurons on each layer using Tensorflow’s Keras.
> **Hint** You can start with a two-layer deep neural network model that uses the `relu` activation function for both layers.
2. Compile and fit the model using the `binary_crossentropy` loss function, the `adam` optimizer, and the `accuracy` evaluation metric.
> **Hint** When fitting the model, start with a small number of epochs, such as 20, 50, or 100.
3. Evaluate the model using the test data to determine the model’s loss and accuracy.
4. Save and export your model to an HDF5 file, and name the file `AlphabetSoup.h5`.
### Optimize the Neural Network Model
Using your knowledge of TensorFlow and Keras, optimize your model to improve the model's accuracy. Even if you do not successfully achieve a better accuracy, you'll need to demonstrate at least two attempts to optimize the model. You can include these attempts in your existing notebook. Or, you can make copies of the starter notebook in the same folder, rename them, and code each model optimization in a new notebook.
> **Note** You will not lose points if your model does not achieve a high accuracy, as long as you make at least two attempts to optimize the model.
To do so, complete the following steps:
1. Define at least three deep neural network models (the original plus two optimization attempts). With each, try to improve on your first model’s predictive accuracy.
> **Rewind** Recall that perfect accuracy has a value of 1, so accuracy improves as its value moves closer to 1. To optimize your model for a predictive accuracy as close to 1 as possible, you can use any or all of the following techniques:
>
> * Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
>
> * Add more neurons (nodes) to a hidden layer.
>
> * Add more hidden layers.
>
> * Use different activation functions for the hidden layers.
>
> * Add to or reduce the number of epochs in the training regimen.
2. After finishing your models, display the accuracy scores achieved by each model, and compare the results.
3. Save each of your models as an HDF5 file.
```
# Imports
import pandas as pd
from pathlib import Path
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
```
---
## Prepare the data to be used on a neural network model
### Step 1: Read the `applicants_data.csv` file into a Pandas DataFrame. Review the DataFrame, looking for categorical variables that will need to be encoded, as well as columns that could eventually define your features and target variables.
```
# Read the applicants_data.csv file from the Resources folder into a Pandas DataFrame
applicant_data_df = pd.read_csv('Resources/applicants_data.csv')
# Review the DataFrame
applicant_data_df
# Review the data types associated with the columns
applicant_data_df.dtypes
```
### Step 2: Drop the “EIN” (Employer Identification Number) and “NAME” columns from the DataFrame, because they are not relevant to the binary classification model.
```
# Drop the 'EIN' and 'NAME' columns from the DataFrame
applicant_data_df = applicant_data_df.drop(columns=['EIN', 'NAME'])
# Review the DataFrame
applicant_data_df
```
### Step 3: Encode the dataset’s categorical variables using `OneHotEncoder`, and then place the encoded variables into a new DataFrame.
```
# Create a list of categorical variables
categorical_variables = list(applicant_data_df.dtypes[applicant_data_df.dtypes == "object"].index)
# Display the categorical variables list
display(categorical_variables)
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Encode the categorical variables using OneHotEncoder
encoded_data = enc.fit_transform(applicant_data_df[categorical_variables])
# Create a DataFrame with the encoded variables
encoded_df = pd.DataFrame(encoded_data, columns=enc.get_feature_names(categorical_variables))
# Review the DataFrame
encoded_df
```
### Step 4: Add the original DataFrame’s numerical variables to the DataFrame containing the encoded variables.
> **Note** To complete this step, you will employ the Pandas `concat()` function that was introduced earlier in this course.
```
# Add the numerical variables from the original DataFrame to the one-hot encoding DataFrame
encoded_df = pd.concat([encoded_df, applicant_data_df[['STATUS', 'ASK_AMT', 'IS_SUCCESSFUL']]], axis = 1)
# Review the Dataframe
encoded_df
```
### Step 5: Using the preprocessed data, create the features (`X`) and target (`y`) datasets. The target dataset should be defined by the preprocessed DataFrame column “IS_SUCCESSFUL”. The remaining columns should define the features dataset.
```
# Define the target set y using the IS_SUCCESSFUL column
y = encoded_df['IS_SUCCESSFUL']
# Display a sample of y
y
# Define features set X by selecting all columns but IS_SUCCESSFUL
X = encoded_df.drop(columns=['IS_SUCCESSFUL'])
# Review the features DataFrame
X
```
### Step 6: Split the features and target sets into training and testing datasets.
```
# Split the preprocessed data into a training and testing dataset
# Assign the function a random_state equal to 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
### Step 7: Use scikit-learn's `StandardScaler` to scale the features data.
```
# Create a StandardScaler instance
scaler = StandardScaler()
# Fit the scaler to the features training dataset
X_scaler = scaler.fit(X_train)
# Scale the features training and testing datasets
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
---
## Compile and Evaluate a Binary Classification Model Using a Neural Network
### Step 1: Create a deep neural network by assigning the number of input features, the number of layers, and the number of neurons on each layer using Tensorflow’s Keras.
> **Hint** You can start with a two-layer deep neural network model that uses the `relu` activation function for both layers.
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1 = 58
# Review the number hidden nodes in the first layer
hidden_nodes_layer1
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2 = 28
# Review the number hidden nodes in the second layer
hidden_nodes_layer2
# Create the Sequential model instance
nn = Sequential()
# Add the first hidden layer
nn.add(Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu"))
# Add the second hidden layer
nn.add(Dense(units=hidden_nodes_layer2, activation='relu'))
# Add the output layer to the model specifying the number of output neurons and activation function
nn.add(Dense(units=number_output_neurons, activation='sigmoid'))
# Display the Sequential model summary
nn.summary()
```
### Step 2: Compile and fit the model using the `binary_crossentropy` loss function, the `adam` optimizer, and the `accuracy` evaluation metric.
```
# Compile the Sequential model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Fit the model using 50 epochs and the training data
nn.fit(X_train_scaled, y_train, epochs = 50)
```
### Step 3: Evaluate the model using the test data to determine the model’s loss and accuracy.
```
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test, verbose = 2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
### Step 4: Save and export your model to an HDF5 file, and name the file `AlphabetSoup.h5`.
```
# Set the model's file path
file_path = Path('Resources/AlphabetSoup.h5')
# Export your model to a HDF5 file
nn.save(file_path)
```
---
## Optimize the neural network model
### Step 1: Define at least three deep neural network models (the original plus two optimization attempts). With each, try to improve on your first model’s predictive accuracy.
> **Rewind** Recall that perfect accuracy has a value of 1, so accuracy improves as its value moves closer to 1. To optimize your model for a predictive accuracy as close to 1 as possible, you can use any or all of the following techniques:
>
> * Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
>
> * Add more neurons (nodes) to a hidden layer.
>
> * Add more hidden layers.
>
> * Use different activation functions for the hidden layers.
>
> * Add to or reduce the number of epochs in the training regimen.
### Alternative Model 1
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons_A1 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_A1 = 200
# Review the number of hidden nodes in the first layer
hidden_nodes_layer1_A1
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2_A1 = 100
# Review the number of hidden nodes in the second layer
hidden_nodes_layer2_A1
# Define the number of hidden nodes for the third hidden layer
hidden_nodes_layer3_A1 = 50
# Review the number of hidden nodes in the third layer
hidden_nodes_layer3_A1
# Create the Sequential model instance
nn_A1 = Sequential()
# First hidden layer
nn_A1.add(Dense(units = hidden_nodes_layer1_A1, input_dim = number_input_features, activation = 'relu'))
# Second hidden layer
nn_A1.add(Dense(units = hidden_nodes_layer2_A1, activation = 'relu'))
# Third hidden layer
nn_A1.add(Dense(units = hidden_nodes_layer3_A1, activation = 'relu'))
# Output layer
nn_A1.add(Dense(1, activation = 'sigmoid'))
# Check the structure of the model
nn_A1.summary()
# Compile the Sequential model
nn_A1.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Fit the model using 50 epochs and the training data
fit_model_A1 = nn_A1.fit(X_train_scaled, y_train, epochs=50)
```
### Alternative Model 2
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons_A2 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_A2 = 150
# Review the number of hidden nodes in the first layer
hidden_nodes_layer1_A2
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2_A2 = 75
# Review the number of hidden nodes in the second layer
hidden_nodes_layer2_A2
# Define the number of hidden nodes for the third hidden layer
hidden_nodes_layer3_A2 = 35
# Review the number of hidden nodes in the third layer
hidden_nodes_layer3_A2
# Define the number of hidden nodes for the fourth hidden layer
hidden_nodes_layer4_A2 = 15
# Review the number of hidden nodes in the fourth layer
hidden_nodes_layer4_A2
# Create the Sequential model instance
nn_A2 = Sequential()
# First hidden layer
nn_A2.add(Dense(units = hidden_nodes_layer1_A2, input_dim = number_input_features, activation = 'relu'))
# Second hidden layer
nn_A2.add(Dense(units = hidden_nodes_layer2_A2, activation = 'relu'))
# Third hidden layer
nn_A2.add(Dense(units = hidden_nodes_layer3_A2, activation = 'relu'))
# Fourth hidden layer
nn_A2.add(Dense(units = hidden_nodes_layer4_A2, activation = 'relu'))
# Output layer
nn_A2.add(Dense(1, activation = 'sigmoid'))
# Check the structure of the model
nn_A2.summary()
# Compile the model
nn_A2.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Fit the model
nn_A2.fit(X_train_scaled, y_train, epochs=50)
```
### Step 2: After finishing your models, display the accuracy scores achieved by each model, and compare the results.
```
print("Original Model Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test, verbose = 2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 1 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_A1.evaluate(X_test_scaled, y_test, verbose = 2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 2 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_A2.evaluate(X_test_scaled, y_test, verbose = 2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
### Step 3: Save each of your alternative models as an HDF5 file.
```
# Set the file path for the first alternative model
file_path = Path('Resources/AlphabetSoup_A1.h5')
# Export your model to a HDF5 file
nn_A1.save(file_path)
# Set the file path for the second alternative model
file_path = Path('Resources/AlphabetSoup_A2.h5')
# Export your model to a HDF5 file
nn_A2.save(file_path)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/rawalk/DS-Unit-2-Linear-Models/blob/master/DSPT6_LS_DS_224.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 2, Module 4*
---
# Classification Metrics
- get and interpret the **confusion matrix** for classification models
- use classification metrics: **precision, recall**
- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**
- get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve)
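As a quick preview of the thresholds bullet above: `predict()` hard-codes a 0.5 cutoff on `predict_proba`, and moving that cutoff trades precision against recall. The snippet below uses synthetic data (not the waterpumps set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# One informative feature, roughly 10% positive class
rng = np.random.default_rng(42)
y = (rng.random(2000) < 0.1).astype(int)
X = (y + 0.8 * rng.standard_normal(2000)).reshape(-1, 1)

model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]

results = {}
for threshold in (0.5, 0.2):
    pred = (proba >= threshold).astype(int)
    results[threshold] = (precision_score(y, pred, zero_division=0),
                          recall_score(y, pred))
# Lowering the threshold can only grow the predicted-positive set,
# so recall never decreases; precision usually pays the price.
```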
### Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries
- category_encoders
- ipywidgets
- matplotlib
- numpy
- pandas
- scikit-learn
- seaborn
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
```
# Get and interpret the confusion matrix for classification models
## Overview
First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
```
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
```
## Follow Along
Scikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
```
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val,
values_format='.0f', xticks_rotation='vertical', cmap='Blues')
plot_confusion_matrix(pipeline, X_val, y_val,
normalize='true',
values_format='.2f',
xticks_rotation='vertical', cmap='Blues')
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm
cm.sum(axis=1)
cm.sum(axis=1)[:, np.newaxis]
normalized_cm = cm/cm.sum(axis=1)[:, np.newaxis]
normalized_cm
import seaborn as sns
from sklearn.utils.multiclass import unique_labels
def plot_cm(y_val, y_pred, normalize=False):
cols = unique_labels(y_val)
cm = confusion_matrix(y_val, y_pred)
if normalize:
cm = cm/cm.sum(axis=1)[:, np.newaxis]
fmt = '.2f'
else:
fmt = '.0f'
df_cm = pd.DataFrame(cm, columns = ['Predicted ' + str(col) for col in cols],
index = ['Actual ' + str(col) for col in cols])
plt.figure(figsize=(10,8))
sns.heatmap(df_cm, annot=True, cmap='Blues', fmt=fmt)
plot_cm(y_val, y_pred, normalize=True)
unique_labels(y_val)
```
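The row normalization inside `plot_cm` relies on NumPy broadcasting: dividing the matrix by a `(n, 1)` column vector of row sums scales each row to sum to 1. A minimal sketch with a made-up 2x2 matrix:

```python
import numpy as np

cm = np.array([[8, 2],
               [1, 9]])

# Row sums reshaped to a (2, 1) column vector, so the division
# broadcasts across the columns of each row
row_sums = cm.sum(axis=1)[:, np.newaxis]
normalized = cm / row_sums
print(normalized)
# [[0.8 0.2]
#  [0.1 0.9]]
```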
#### How many correct predictions were made?
```
7005 + 332 + 4351
np.diag(cm).sum()
```
#### How many total predictions were made?
```
len(y_val)
cm.sum()
```
#### What was the classification accuracy?
```
(7005 + 332 + 4351)/len(y_pred)
np.diag(cm).sum()/cm.sum()
from sklearn.metrics import accuracy_score
accuracy_score(y_val, y_pred)
```
# Use classification metrics: precision, recall
## Overview
[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
```
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
```
#### Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
> Both precision and recall are based on an understanding and measure of relevance.
> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.
> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/700px-Precisionrecall.svg.png" width="400">
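The numbers in that quoted scenario can be checked directly. A tiny worked example using the counts from the quote:

```python
# Dog-detection scenario from the quote above:
# 8 photos flagged as dogs, 5 of which really are dogs, out of 12 actual dogs
true_positives = 5
predicted_positives = 8
actual_positives = 12

precision = true_positives / predicted_positives
recall = true_positives / actual_positives

print(f'Precision: {precision:.3f}')  # 0.625
print(f'Recall:    {recall:.3f}')     # 0.417
```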
## Follow Along
#### [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
```
plot_cm(y_val, y_pred)
```
#### How many correct predictions of "non functional"?
```
correct_pred_non_func = 4351
```
#### How many total predictions of "non functional"?
```
total_pred_non_func = 4351 + 156 + 622
total_pred_non_func
```
#### What's the precision for "non functional"?
```
precision_non_func = correct_pred_non_func/total_pred_non_func
precision_non_func
```
#### How many actual "non functional" waterpumps?
```
actual_non_func = 1098 + 68 + 4351
actual_non_func
```
#### What's the recall for "non functional"?
```
recall_non_func = correct_pred_non_func/actual_non_func
recall_non_func
print(classification_report(y_val, y_pred))
f1_score_non_func = 2*(precision_non_func*recall_non_func)/(precision_non_func + recall_non_func)
f1_score_non_func
```
# Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets
## Overview
### Imagine this scenario...
Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
```
len(test)
```
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
```
len(train) + len(val)
```
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
```
y_train.value_counts(normalize=True)
2000 * 0.46
```
**Can you do better than random at prioritizing inspections?**
In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ functional but need repair:
```
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
```
We already made our validation set the same size as our test set.
```
len(val) == len(test)
```
We can refit our model, using the redefined target.
Then make predictions for the validation set.
```
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
```
## Follow Along
#### Look at the confusion matrix:
```
plot_cm(y_val, y_pred)
```
#### How many total predictions of "True" ("non functional" or "functional needs repair") ?
```
y_pred
5032+977
y_pred.sum()
```
### We don't have "budget" to take action on all these predictions
- But we can get predicted probabilities, to rank the predictions.
- Then change the threshold, to change the number of positive predictions, based on our budget.
### Get predicted probabilities and plot the distribution
```
pipeline.predict_proba(X_val)
pipeline.predict(X_val)
pipeline.predict_proba(X_val)[:, 1] > 0.5
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba)
```
### Change the threshold
```
thres = 0.5
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
pd.Series(y_pred).value_counts()
```
### Or, get exactly 2,000 positive predictions
Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
```
def set_thres(y_true, y_pred_proba, thres=0.5):
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
plt.show()
print(classification_report(y_true, y_pred))
plot_cm(y_true, y_pred)
set_thres(y_val, y_pred_proba, thres=0.6)
from ipywidgets import interact, fixed
interact(set_thres,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
thres=(0, 1, 0.05))
```
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.
Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.
Let's look at a random sample of 50 out of these top 2,000:
```
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
top2000.sample(n=50)
```
So how many of our recommendations were relevant? ...
```
n_trips = 2000
print(f'Baseline: {n_trips*0.46} waterpump repairs in {n_trips} trips')
print(f"With model: {top2000['y_val'].sum()} waterpump repairs in {n_trips} trips")
```
What's the precision for this subset of 2,000 predictions?
```
precision_at_K = top2000['y_val'].sum()/n_trips
print(f'Precision @ K=2000: {precision_at_K}')
```
### In this scenario ...
Accuracy _isn't_ the best metric!
Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)
Then, evaluate with the precision for "non functional"/"functional needs repair".
This is conceptually like **Precision@K**, where k=2,000.
Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)
> Precision at k is the proportion of recommended items in the top-k set that are relevant
> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`
> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.
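The Precision@K idea can be written as a small standalone helper (a sketch independent of this notebook's pipeline; the toy labels and probabilities are illustrative):

```python
import numpy as np

def precision_at_k(y_true, y_pred_proba, k):
    """Precision among the k highest-probability predictions."""
    # Indices of the k largest predicted probabilities
    top_k = np.argsort(y_pred_proba)[::-1][:k]
    return np.asarray(y_true)[top_k].mean()

# Toy example: the top 3 probabilities belong to items 2, 0, 4,
# all of which are true positives
y_true = [1, 0, 1, 0, 1, 0]
y_pred_proba = [0.9, 0.2, 0.95, 0.1, 0.8, 0.3]
print(precision_at_k(y_true, y_pred_proba, k=3))  # 1.0
```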
We asked, can you do better than random at prioritizing inspections?
If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)
But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!
So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
We will predict which 2,000 are most likely non-functional or in need of repair.
We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.
So we're confident that our predictive model will help triage and prioritize waterpump inspections.
### But ...
This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.
Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?
Yes — the most common such metric is **ROC AUC.**
## Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)
[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"
ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative."
ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**
ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.**
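The "random positive is ranked before a random negative" interpretation can be verified by brute force over all (positive, negative) pairs. A sketch on a toy example, separate from the lesson's pipeline:

```python
import numpy as np

def auc_by_pairwise_ranking(y_true, y_pred_proba):
    """AUC as the fraction of (positive, negative) pairs where the
    positive is ranked above the negative (ties count half)."""
    y_true = np.asarray(y_true)
    proba = np.asarray(y_pred_proba)
    pos = proba[y_true == 1]
    neg = proba[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y_true = [0, 0, 1, 1]
y_pred_proba = [0.1, 0.4, 0.35, 0.8]
# 3 of the 4 (positive, negative) pairs are ranked correctly
print(auc_by_pairwise_ranking(y_true, y_pred_proba))  # 0.75
```

This matches `sklearn.metrics.roc_auc_score` on the same inputs, which is a handy sanity check when thinking about what AUC measures.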
#### Scikit-Learn docs
- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
#### More links
- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)
- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
```
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
y_pred
roc_auc_score(y_val, y_pred)
```
**Recap:** ROC AUC measures how well a classifier ranks predicted probabilities. So, when you get your classifier’s ROC AUC score, you need to use predicted probabilities, not discrete predictions.
Your code may look something like this:
```python
from sklearn.metrics import roc_auc_score
y_pred_proba = model.predict_proba(X_test_transformed)[:, -1] # Probability for last class
print('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))
```
ROC AUC ranges from 0 to 1. Higher is better. A naive majority class baseline will have an ROC AUC score of 0.5.
# Environmental Covariates
Datasets listed in [Supplementary_Data_File_1._environmental_covariates - Google Sheet](https://docs.google.com/spreadsheets/d/1hPw9G1A34SnlbDJ8sk3LYfwgoLN0Gbail2xdNb7viGc/edit#gid=106509025).
```
#default_exp data.env
#export
import os, sys
import shutil, io, subprocess
from tqdm import tqdm
from pathlib import Path
import requests, wget
from urlpath import URL
import math
import numpy as np
import pandas as pd
import xarray as xr
import rioxarray
from earthshotsoil.core import *
```
# Helper functions
```
def check_id(data_id):
    if data_id in DATA_SRC:
        # Raise a concrete exception instead of a bare `raise`, which
        # fails with RuntimeError outside an active exception context
        raise ValueError(
            f'Data_id "{data_id}" already exists. '
            'Check that it is not re-used.')
def add_download(id, download):
    # ENCOV is a module-level DataFrame; without `global`, assigning to
    # it here would raise UnboundLocalError
    global ENCOV
    ENCOV = ENCOV.append(
        {'id': id, 'download': URL(download)}, ignore_index=True)
def add_filename(id, filename):
    global ENCOV
    ENCOV = ENCOV.append(
        {'id': id, 'filename': filename}, ignore_index=True)
def download_file(url):
local_filename = url.split('/')[-1]
# NOTE the stream=True parameter below
with requests.get(url, stream=True) as r:
r.raise_for_status()
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
# If you have chunk encoded response uncomment if
# and set chunk_size parameter to None.
#if chunk:
f.write(chunk)
return local_filename
def execute_wgt(url, dst='./'):
    '''Start a background wget download of `url` into directory `dst`.'''
    url = URL(url)
    # Convert URL/Path objects to plain strings for subprocess
    cmd = ['wget', str(url), '-O', str(Path(dst) / url.name)]
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
    return proc
# def monitor_wgt_proc(proc):
# while True:
# line = proc.stderr.readline()
# if line=='' and proc.poll() is not None:
# break
# else:
# print(f'\r{line}', end='')
# proc.stderr.close()
# return_code = proc.wait()
# return return_code
def unzip(src, dst='./'):
    '''
    Unpack zip file at `src` to directory `dst`.
    '''
    dst = Path(dst)  # accept str or Path; mkdir requires a Path
    dst.mkdir(exist_ok=True)
    proc = subprocess.Popen(
        ['unzip', str(src), '-d', str(dst)],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc
```
# Load data index
Load Google Sheet for data sources and info
```
#export
DIR_DATA = Path('../data')
pth_csv = DIR_DATA / 'environmental_covariate.csv'
ENCOV = pd.read_csv(pth_csv)
```
## Earthenv Cloud
Global 1-km Cloud Cover: https://www.earthenv.org/cloud
```
var_downloads = [
('EarthEnvCloudCover_MODCF_interannualSD',
'https://data.earthenv.org/cloud/MODCF_interannualSD.tif'),
('EarthEnvCloudCover_MODCF_intraannualSD',
'https://data.earthenv.org/cloud/MODCF_intraannualSD.tif'),
('EarthEnvCloudCover_MODCF_meanannual',
'https://data.earthenv.org/cloud/MODCF_meanannual.tif'),
('EarthEnvCloudCover_MODCF_seasonality_concentration',
'https://data.earthenv.org/cloud/'
'MODCF_seasonality_concentration.tif'),
('EarthEnvCloudCover_MODCF_seasonality_theta',
'https://data.earthenv.org/cloud/MODCF_seasonality_theta.tif'),
]
for variable, download in var_downloads:
ENCOV.loc[ENCOV.Variable==variable, 'Source / Link'] = download
```
Upload to GEE, then record the ID.
```
for variable, _ in var_downloads:
gee_id = f'users/bingosaucer/{variable}'
ENCOV.loc[
ENCOV.Variable==variable, 'GEE ID'] = gee_id
```
## Earthenv Topography
Global 1,5,10,100-km Topography: https://www.earthenv.org/topography.
Couldn't obtain the direct download URLs for these using the browser's developer tools.
```
var_fns = [
('EarthEnvTopoMed_1stOrderPartialDerivEW',
'dx_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_1stOrderPartialDerivNS',
'dy_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_2ndOrderPartialDerivEW',
'dxx_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_2ndOrderPartialDerivNS',
'dyy_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_AspectCosine',
'aspectcosine_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_AspectSine',
'aspectsine_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_Eastness',
'eastness_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_Elevation',
'elevation_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_Northness',
'northness_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_ProfileCurvature',
'pcurv_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_Roughness',
'roughness_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_Slope',
'slope_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_TangentialCurvature',
'tcurv_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_TerrainRuggednessIndex',
'tri_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_TopoPositionIndex',
'tpi_1KMmd_GMTEDmd.tif'),
('EarthEnvTopoMed_VectorRuggednessMeasure',
'vrm_1KMmd_GMTEDmd.tif')
]
for variable, filename in var_fns:
ENCOV.loc[ENCOV.Variable==variable, 'filename'] = filename
```
After uploading to GEE by user `bingosaucer`, say, set `gee_id`:
```
gee_user = 'bingosaucer'
is_topo = ENCOV.Variable.apply(lambda o: str(o).startswith('EarthEnvTopoMed'))
ENCOV.loc[is_topo, 'GEE ID'] = (
ENCOV.loc[is_topo, 'Variable'].apply(
lambda var: f'users/{gee_user}/{var}')
)
```
## FanEtAl_Depth_to_Water_Table_AnnualMean
http://thredds-gfnl.usc.es/thredds/catalog/GLOBALWTDFTP/catalog.html
```
var_wtb = 'FanEtAl_Depth_to_Water_Table_AnnualMean'
var_downloads = [
('NAMERICA_WTD_annualmean',
'http://thredds-gfnl.usc.es/thredds/fileServer/'
'GLOBALWTDFTP/annualmeans/NAMERICA_WTD_annualmean.nc'),
('SAMERICA_WTD_annualmean',
'http://thredds-gfnl.usc.es/thredds/fileServer/'
'GLOBALWTDFTP/annualmeans/SAMERICA_WTD_annualmean.nc'),
('OCEANA_WTD_annualmean',
'http://thredds-gfnl.usc.es/thredds/fileServer/'
'GLOBALWTDFTP/annualmeans/OCEANA_WTD_annualmean.nc'),
('EURASIA_WTD_annualmean',
'http://thredds-gfnl.usc.es/thredds/fileServer/'
'GLOBALWTDFTP/annualmeans/EURASIA_WTD_annualmean.nc'),
('AFRICA_WTD_annualmean',
'http://thredds-gfnl.usc.es/thredds/fileServer/'
'GLOBALWTDFTP/annualmeans/AFRICA_WTD_annualmean.nc'),
]
procs_watertb = []
for variable, download in var_downloads:
print('Downloading', download)
proc = execute_wgt(download, dst=DIR_DATA)
procs_watertb.append(proc)
for proc in procs_watertb:
print(proc.poll())
dict_wtb = ENCOV[ENCOV.Variable==var_wtb].squeeze().to_dict()
for variable, download in var_downloads:
d = dict_wtb.copy()
d.update(
{'Variable': variable,
'Source / Link': download})
print(d, end='\n\n')
%%time
ds = rioxarray.open_rasterio(f'../data/{url.name}')
ds.rio.write_crs('epsg:4326', inplace=True)
ds = ds.squeeze()
ds.rio.crs
%%time
# ds.WTD.rio.to_raster(f'../data/{url.stem}.tiff')
ds.WTD.rio.to_raster(f'../data/NAMERICA_WTD_annualmean.tiff')
xr.open_rasterio('../data/FanEtAl_Depth_to_Water_Table_AnnualMean.tiff')
```
## MODIS_LAI
https://explorer.earthengine.google.com/#detail/MODIS%2F006%2FMCD15A3H
## ISRIC Data
ISRIC World Soil Information.
Data Hub: https://data.isric.org/geonetwork/srv/eng/catalog.search#/home
## WCS_Human_Footprint_2009
> Human Footprint 2009
http://wcshumanfootprint.org/
### Full dataset
How to unpack full dataset download:
```
$ unzip doi_10.5061_dryad.052q5__v2.zip
$ brew install p7zip
$ 7za x HumanFootprintv2.7z
```
```
path = '../data/Dryadv3/Maps'
ns = [f'{path}/{n}' for n in os.listdir(path) if n.endswith('.tif')]
sum([os.path.getsize(n) for n in ns]) / 1e9
%%time
da_fullset = xr.open_rasterio(
'../data/Dryadv3/Maps/HFP2009.tif', chunks={'x':10, 'y':10})
%%time
(da_fullset - da).sum()
```
### Summary 2009
## WorldClim2
https://www.worldclim.org/data/index.html
```
def worldclim2_histdata_src():
'''
WorldClim v2.1 historical climate data found at:
https://www.worldclim.org/data/worldclim21.html
'''
d = {}
d['minimum temperature'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_tmin.zip'),
'units': 'C'}
d['maximum temperature'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_tmax.zip'),
'units': 'C'}
d['average temperature'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_tavg.zip'),
'units': 'C'}
d['precipitation'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_prec.zip'),
'units': 'mm'}
d['solar radiation'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_srad.zip'),
'units': 'kJ m^-2 day^-1'}
d['wind speed'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_wind.zip'),
'units': 'm s^-1'}
d['water vapor pressure'] = {
'download':
URL('http://biogeo.ucdavis.edu/data/worldclim/v2.1/base/'
'wc2.1_30s_vapr.zip'),
'units': 'kPa'}
return d
WORLDCLIM2 = worldclim2_histdata_src()
```
Download a couple of variables.
```
vns = ['wind speed', 'water vapor pressure']
urls = [WORLDCLIM2[vn]['download'] for vn in vns]
procs = [execute_wgt(url, dst=DIR_DATA) for url in urls]
```
Unpack a variable and set it up for GEE.
```
vn = 'water vapor pressure'
collection_id = 'WorldClim2_' + '_'.join(vn.split())
collection_download = WORLDCLIM2[vn]['download']
pth_zip = DIR_DATA / collection_download.name
dir_unzip = DIR_DATA / collection_id
# proc = unzip(pth_zip, dir_unzip)
fns = [n.name for n in dir_unzip.iterdir() if n.name.endswith('.tif')]
months = [n.split('.')[1].split('_')[-1] for n in fns]
asset_ids = [f'{collection_id}_{m}' for m in months]
```
Register the Image Collection.
```
ENCOV = ENCOV.append(
{'id': collection_id, 'download': collection_download},
ignore_index=True)
```
Register Images.
```
ENCOV = ENCOV.append(
pd.DataFrame({'id': asset_ids, 'filename': fns}))
ENCOV.loc[ENCOV.id.isin(asset_ids),
'download'] = collection_download
```
Create the Image Collection in GEE (in the browser).
Set `gee_user` for the collection and individual assets.
```
ENCOV.loc[ENCOV.id.isin(asset_ids), 'gee_user'] = 'bingosaucer'
```
## ConsensusLandCover_Human_Development_Percentage
```
humandev_source = [
('Evergreen/Deciduous Needleleaf Trees', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_1.tif'),
('Evergreen Broadleaf Trees', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_2.tif'),
('Deciduous Broadleaf Trees', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_3.tif'),
('Mixed/Other Trees', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_4.tif'),
('Shrubs', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_5.tif'),
('Herbaceous Vegetation', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_6.tif'),
('Cultivated and Managed Vegetation', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_7.tif'),
('Regularly Flooded Vegetation', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_8.tif'),
('Urban/Built-up', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_9.tif'),
('Snow/Ice', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_10.tif'),
('Barren', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_11.tif'),
('Open Water', 'https://data.earthenv.org/consensus_landcover/with_DISCover/consensus_full_class_12.tif')
]
```
Download the files.
```
for i in range(5, len(humandev_source)):
url = humandev_source[i][1]
! wget {url} -O {DIR_DATA / Path(url).name}
```
Check projection is available.
```
rda = rioxarray.open_rasterio(DIR_DATA / 'consensus_full_class_4.tif')
rda.rio.crs
geeids = [f'users/bingosaucer/{Path(url).stem}'
for _, url in humandev_source]
variables = [f'humandev_{Path(url).stem}'
for _, url in humandev_source]
```
Update data index.
```
doi = ENCOV[
ENCOV.Variable==('ConsensusLandCover_Human'
'_Development_Percentage')].doi.values[0]
for i in range(len(humandev_source)):
desc, download = humandev_source[i]
desc = f'Consensus Land Cover. {desc}'
ENCOV = ENCOV.append(
{'Variable': variables[i],
'Description': desc,
'Source / Link': download,
'doi': doi,
'GEE ID': geeids[i]}, ignore_index=True)
```
Create an Image Collection for these Images, and then update the index.
```
# ENCOV.append(
# {'Variable': 'ConsensusLandCover',
# 'Description': 'Global 1-km Consensus Land Cover',
# 'Source / Link': 'http://www.earthenv.org//landcover',
# 'doi': ,
# 'GEE ID': }, ignore_index=True)
```
# Google Earth Engine Access
Sets all assets under my account to be public.
```
on_mygee = ENCOV['GEE ID'].apply(lambda o: 'bingosaucer' in str(o))
for _, r in tqdm(ENCOV[on_mygee].iterrows()):
try:
subprocess.check_call(
['earthengine', 'acl', 'set', 'public', r['GEE ID']])
except subprocess.CalledProcessError:
continue
ENCOV.dropna(axis=0, how='all', inplace=True)
ENCOV.to_csv(DIR_DATA / 'environmental_covariate.csv', index=False)
```
Available
```
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
ENCOV[ENCOV['GEE ID'].notnull()][['Variable', 'GEE ID', 'Description']]
```
Not yet available
```
ENCOV[ENCOV['GEE ID'].isnull()]
```
# Reference
Reading GeoTIFF:
- http://xarray.pydata.org/en/stable/io.html#rasterio
- http://xarray.pydata.org/en/stable/generated/xarray.open_rasterio.html#xarray-open-rasterio
# T1548.001 - Abuse Elevation Control Mechanism: Setuid and Setgid
An adversary may perform shell escapes or exploit vulnerabilities in an application with the setuid or setgid bits to get code running in a different user’s context. On Linux or macOS, when the setuid or setgid bits are set for an application, the application will run with the privileges of the owning user or group respectively. (Citation: setuid man page). Normally an application is run in the current user’s context, regardless of which user or group owns the application. However, there are instances where programs need to be executed in an elevated context to function properly, but the user running them doesn’t need the elevated privileges.
Instead of creating an entry in the sudoers file, which must be done by root, any user can specify the setuid or setgid flag to be set for their own applications. These bits are indicated with an "s" instead of an "x" when viewing a file's attributes via <code>ls -l</code>. The <code>chmod</code> program can set these bits via bitmasking, <code>chmod 4777 [file]</code>, or via shorthand naming, <code>chmod u+s [file]</code>.
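The presence of these permission bits can also be checked programmatically. A minimal Python sketch using the standard `stat` module (the example path is illustrative):

```python
import os
import stat

def has_setuid_or_setgid(path):
    """Return (setuid, setgid) booleans for the file at `path`."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_ISUID), bool(mode & stat.S_ISGID)

# /usr/bin/passwd is typically setuid root on Linux systems:
# has_setuid_or_setgid('/usr/bin/passwd')
```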
Adversaries can use this mechanism on their own malware to make sure they're able to execute in elevated contexts in the future.(Citation: OSX Keydnap malware).
## Atomic Tests
```
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - Make and modify binary from C source
Make, change owner, and change file attributes on a C source code file
**Supported Platforms:** macos, linux
Elevation Required (e.g. root or admin)
#### Attack Commands: Run with `sh`
```sh
cp PathToAtomicsFolder/T1548.001/src/hello.c /tmp/hello.c
sudo chown root /tmp/hello.c
sudo make /tmp/hello
sudo chown root /tmp/hello
sudo chmod u+s /tmp/hello
/tmp/hello
```
```
Invoke-AtomicTest T1548.001 -TestNumbers 1
```
### Atomic Test #2 - Set a SetUID flag on file
This test sets the SetUID flag on a file in Linux and macOS.
**Supported Platforms:** macos, linux
Elevation Required (e.g. root or admin)
#### Attack Commands: Run with `sh`
```sh
sudo touch /tmp/evilBinary
sudo chown root /tmp/evilBinary
sudo chmod u+s /tmp/evilBinary
```
```
Invoke-AtomicTest T1548.001 -TestNumbers 2
```
### Atomic Test #3 - Set a SetGID flag on file
This test sets the SetGID flag on a file in Linux and macOS.
**Supported Platforms:** macos, linux
Elevation Required (e.g. root or admin)
#### Attack Commands: Run with `sh`
```sh
sudo touch /tmp/evilBinary
sudo chown root /tmp/evilBinary
sudo chmod g+s /tmp/evilBinary
```
```
Invoke-AtomicTest T1548.001 -TestNumbers 3
```
## Detection
Monitor the file system for files that have the setuid or setgid bits set. Monitor for execution of utilities, like <code>chmod</code>, and their command-line arguments to look for setuid or setgid bits being set.
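One lightweight way to sweep the file system for such binaries is a <code>find</code> over the permission bits in question. The sketch below runs against a scratch directory rather than <code>/</code> so it completes quickly and needs no privileges; in practice you would point it at the whole file system:

```shell
# Create a scratch directory with one setuid file, then hunt for
# anything carrying the setuid (-4000) or setgid (-2000) bit
t="$(mktemp -d)"
touch "$t/demo"
chmod u+s "$t/demo"

find "$t" -type f \( -perm -4000 -o -perm -2000 \) -print

rm -rf "$t"
```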
|
github_jupyter
|
# Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
**After this assignment you will be able to:**
- Build and apply a deep neural network to supervised learning.
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
```
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v3 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built then had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform even better!
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
```
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
```
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
```
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
```
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
```
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
```
$12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
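The same reshape-and-scale step can be sketched on a dummy batch; the tiny 4×4 images here stand in for the real 64×64 ones purely for illustration:

```python
import numpy as np

# Two dummy 4x4 RGB "images": shape (m, h, w, c) = (2, 4, 4, 3)
batch = np.arange(2 * 4 * 4 * 3, dtype=np.uint8).reshape(2, 4, 4, 3)

# Flatten each image into a column vector: (m, h, w, c) -> (h*w*c, m)
flat = batch.reshape(batch.shape[0], -1).T
print(flat.shape)                          # (48, 2)

# Standardize pixel values into [0, 1]
flat = flat / 255.
print(flat.min() >= 0, flat.max() <= 1)    # True True
```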
## 3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
### 3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
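The forward pass just described can be sketched with toy sizes (6 input features instead of 12,288); all names here are illustrative stand-ins, not the assignment's helper functions:

```python
import numpy as np

# Toy INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID forward pass
rng = np.random.default_rng(0)
n_x, n_h, m = 6, 4, 3                      # toy layer sizes and batch size
X = rng.standard_normal((n_x, m))
W1, b1 = 0.01 * rng.standard_normal((n_h, n_x)), np.zeros((n_h, 1))
W2, b2 = 0.01 * rng.standard_normal((1, n_h)), np.zeros((1, 1))

A1 = np.maximum(0, W1 @ X + b1)            # hidden ReLU activations
A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))     # sigmoid output in (0, 1)
pred = (A2 > 0.5).astype(int)              # classify as "cat" when > 0.5

print(A2.shape, pred.shape)                # (1, 3) (1, 3)
```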
### 3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID***</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
### 3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
    a. Forward propagation
    b. Compute cost function
    c. Backward propagation
    d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels
Let's now implement those two models!
## 4 - Two-layer neural network
**Question**: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
```python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
```
```
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, activation='relu')
A2, cache2 = linear_activation_forward(A1, W2, b2, activation='sigmoid')
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation='sigmoid')
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation='relu')
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check whether the "Cost after iteration 0" matches the expected output below; if not, click the square (⬛) in the notebook's toolbar to stop the cell and try to find your error.
```
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
```
**Expected Output**:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
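A rough sense of the gap, using a plain dot product as a stand-in (timings vary by machine, so treat the printed speedup as illustrative):

```python
import time
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(100_000), rng.standard_normal(100_000)

# Explicit Python loop over every element
t0 = time.perf_counter()
s_loop = 0.0
for i in range(a.size):
    s_loop += a[i] * b[i]
t_loop = time.perf_counter() - t0

# Single vectorized NumPy call
t0 = time.perf_counter()
s_vec = float(np.dot(a, b))
t_vec = time.perf_counter() - t0

print(f"same result: {abs(s_loop - s_vec) < 1e-6}, speedup: {t_loop / t_vec:.0f}x")
```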
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
```
predictions_train = predict(train_x, train_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
```
predictions_test = predict(test_x, test_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
**Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
## 5 - L-layer Neural Network
**Question**: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
```python
def initialize_parameters_deep(layers_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
```
```
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization. (≈ 1 line of code)
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
You will now train the model as a 4-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check whether the "Cost after iteration 0" matches the expected output below; if not, click the square (⬛) in the notebook's toolbar to stop the cell and try to find your error.
```
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
```
**Expected Output**:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
```
pred_train = predict(train_x, train_y, parameters)
```
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
```
pred_test = predict(test_x, test_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
In the next course, "Improving deep neural networks", you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn about there).
## 6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. The cell below displays a few of these mislabeled images.
```
print_mislabeled_images(classes, test_x, test_y, pred_test)
```
**A few types of images the model tends to do poorly on include:**
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
## 7) Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))  # note: ndimage.imread was removed in SciPy >= 1.2; imageio.imread is a replacement
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))  # scipy.misc.imresize was also removed; PIL's Image.resize can be used instead
my_image = my_image/255.
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
**References**:
- for auto-reloading external module: http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
|
github_jupyter
|
## assignment 04: Decision Tree construction
```
# If working in colab, uncomment the following line
# ! wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/21f_basic/homeworks_basic/assignment0_04_tree/tree.py -nc
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.base import BaseEstimator
from sklearn.datasets import make_classification, make_regression, load_digits, load_boston
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, mean_squared_error
import pandas as pd
%load_ext autoreload
%autoreload 2
```
Let's fix the `random_state` (a.k.a. random seed).
```
RANDOM_STATE = 42
```
__Your ultimate task for today is to implement the `DecisionTree` class and use it to solve classification and regression problems.__
__Specifications:__
- The class inherits from `sklearn.BaseEstimator`;
- Constructor is implemented for you. It has the following parameters:
* `max_depth` - maximum depth of the tree; `np.inf` by default
* `min_samples_split` - the minimal number of samples in a node required to make a split; `2` by default;
* `criterion` - criterion to select the best split; in classification one of `['gini', 'entropy']`, default `gini`; in regression `variance`;
- `fit` method takes `X` (`numpy.array` of type `float` shaped `(n_objects, n_features)`) and `y` (`numpy.array` of type float shaped `(n_objects, 1)` in regression; `numpy.array` of type int shaped `(n_objects, 1)` with class labels in classification). It works inplace and fits the `DecisionTree` class instance to the provided data from scratch.
- `predict` method takes `X` (`numpy.array` of type `float` shaped `(n_objects, n_features)`) and returns the predicted $\hat{y}$ values. In classification it is a class label for every object (the most frequent in the leaf; if several classes meet this requirement select the one with the smallest class index). In regression it is the desired constant (e.g. mean value for `variance` criterion)
- `predict_proba` method (works only for classification, i.e. the `gini` or `entropy` criterion). It takes `X` (`numpy.array` of type `float` shaped `(n_objects, n_features)`) and returns a `numpy.array` of type `float` shaped `(n_objects, n_classes)` with class probabilities for every object from `X`. Class $i$ probability equals the ratio of class $i$ objects that ended up in this node in the training set.
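The tie-breaking rule for `predict` can be sketched with `np.bincount`, whose `argmax` already returns the first (i.e. smallest) index on ties; the function name below is illustrative, not part of the required `tree.py` API:

```python
import numpy as np

def leaf_label(y):
    """Most frequent class label in a leaf; ties broken toward the smallest index."""
    counts = np.bincount(y.ravel().astype(int))
    return int(np.argmax(counts))      # argmax picks the first maximum

print(leaf_label(np.array([2, 2, 0, 1])))   # 2
print(leaf_label(np.array([0, 1, 0, 1])))   # tie between 0 and 1 -> 0
```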
__Small recap:__
To find the optimal split the following functional is evaluated:
$$G(j, t) = H(Q) - \dfrac{|L|}{|Q|} H(L) - \dfrac{|R|}{|Q|} H(R),$$
where $Q$ is the dataset from the current node, $L$ and $R$ are left and right subsets defined by the split $x^{(j)} < t$.
1. Classification. Let $p_i$ be the probability of $i$ class in subset $X$ (ratio of the $i$ class objects in the dataset). The criterions are defined as:
* `gini`: Gini impurity $$H(R) = 1 -\sum_{i = 1}^K p_i^2$$
* `entropy`: Entropy $$H(R) = -\sum_{i = 1}^K p_i \log(p_i)$$ (One might use the natural logarithm).
2. Regression. Let $y_j$ be the target value for object $x_j$, and $\mathbf{y} = (y_1, \dots, y_N)$ be all the targets for the selected dataset $X$.
* `variance`: $$H(R) = \dfrac{1}{|R|} \sum_{y_j \in R}(y_j - \text{mean}(\mathbf{y}))^2$$
* `mad_median`: $$H(R) = \dfrac{1}{|R|} \sum_{y_j \in R}|y_j - \text{median}(\mathbf{y})|$$
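The four criteria above can be sketched directly from the formulas. These are illustrative one-liners, not a drop-in for the `tree.py` interface:

```python
import numpy as np

def gini(y):
    """Gini impurity: 1 - sum_i p_i^2 over class proportions."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

def entropy(y, eps=1e-12):
    """Entropy with a small eps to avoid log(0)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p + eps)))

def variance(y):
    """Mean squared deviation from the mean target."""
    return float(np.mean((y - np.mean(y)) ** 2))

def mad_median(y):
    """Mean absolute deviation from the median target."""
    return float(np.mean(np.abs(y - np.median(y))))

print(gini(np.array([1, 1, 1])))        # 0.0 for a pure node
print(gini(np.array([0, 1])))           # 0.5
print(variance(np.array([1.0, 3.0])))   # 1.0
```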
**Hints and comments**:
* No need to deal with categorical features, they will not be present.
* A simple greedy recursive procedure is enough. However, you can speed it up somehow (e.g. using percentiles).
* Please do not copy implementations available online. You are supposed to build a very simple example of the Decision Tree.
File `tree.py` is waiting for you. Implement all the needed methods in that file.
### Check yourself
```
from tree import entropy, gini, variance, mad_median, DecisionTree
```
#### Simple check
```
X = np.ones((4, 5), dtype=float) * np.arange(4)[:, None]
y = np.arange(4)[:, None] + np.asarray([0.2, -0.3, 0.1, 0.4])[:, None]
class_estimator = DecisionTree(max_depth=10, criterion_name='gini')
(X_l, y_l), (X_r, y_r) = class_estimator.make_split(1, 1., X, y)
assert np.array_equal(X[:1], X_l)
assert np.array_equal(X[1:], X_r)
assert np.array_equal(y[:1], y_l)
assert np.array_equal(y[1:], y_r)
```
#### Classification problem
```
digits_data = load_digits().data
digits_target = load_digits().target[:, None] # to make the targets consistent with our model interfaces
X_train, X_test, y_train, y_test = train_test_split(digits_data, digits_target, test_size=0.2, random_state=RANDOM_STATE)
assert len(y_train.shape) == 2 and y_train.shape[0] == len(X_train)
class_estimator = DecisionTree(max_depth=10, criterion_name='gini')
class_estimator.fit(X_train, y_train)
ans = class_estimator.predict(X_test)
accuracy_gini = accuracy_score(y_test, ans)
print(accuracy_gini)
reference = np.array([0.09027778, 0.09236111, 0.08333333, 0.09583333, 0.11944444,
0.13888889, 0.09930556, 0.09444444, 0.08055556, 0.10555556])
class_estimator = DecisionTree(max_depth=10, criterion_name='entropy')
class_estimator.fit(X_train, y_train)
ans = class_estimator.predict(X_test)
accuracy_entropy = accuracy_score(y_test, ans)
print(accuracy_entropy)
assert 0.84 < accuracy_gini < 0.9
assert 0.86 < accuracy_entropy < 0.9
assert np.sum(np.abs(class_estimator.predict_proba(X_test).mean(axis=0) - reference)) < 1e-4
```
Let's use 5-fold cross validation (`GridSearchCV`) to find optimal values for `max_depth` and `criterion` hyperparameters.
```
param_grid = {'max_depth': range(3,11), 'criterion_name': ['gini', 'entropy']}
gs = GridSearchCV(DecisionTree(), param_grid=param_grid, cv=5, scoring='accuracy', n_jobs=-2)
%%time
gs.fit(X_train, y_train)
gs.best_params_
assert gs.best_params_['criterion_name'] == 'entropy'
assert 6 < gs.best_params_['max_depth'] < 9
plt.figure(figsize=(10, 8))
plt.title("The dependence of quality on the depth of the tree")
plt.plot(np.arange(3,11), gs.cv_results_['mean_test_score'][:8], label='Gini')
plt.plot(np.arange(3,11), gs.cv_results_['mean_test_score'][8:], label='Entropy')
plt.legend(fontsize=11, loc=1)
plt.xlabel("max_depth")
plt.ylabel('accuracy')
plt.show()
```
#### Regression problem
```
regr_data = load_boston().data
regr_target = load_boston().target[:, None] # to make the targets consistent with our model interfaces
RX_train, RX_test, Ry_train, Ry_test = train_test_split(regr_data, regr_target, test_size=0.2, random_state=RANDOM_STATE)
regressor = DecisionTree(max_depth=10, criterion_name='mad_median')
regressor.fit(RX_train, Ry_train)
predictions_mad = regressor.predict(RX_test)
mse_mad = mean_squared_error(Ry_test, predictions_mad)
print(mse_mad)
regressor = DecisionTree(max_depth=10, criterion_name='variance')
regressor.fit(RX_train, Ry_train)
predictions_mad = regressor.predict(RX_test)
mse_var = mean_squared_error(Ry_test, predictions_mad)
print(mse_var)
assert 9 < mse_mad < 20
assert 8 < mse_var < 12
param_grid_R = {'max_depth': range(2,9), 'criterion_name': ['variance', 'mad_median']}
gs_R = GridSearchCV(DecisionTree(), param_grid=param_grid_R, cv=5, scoring='neg_mean_squared_error', n_jobs=-2)
gs_R.fit(RX_train, Ry_train)
gs_R.best_params_
assert gs_R.best_params_['criterion_name'] == 'mad_median'
assert 3 < gs_R.best_params_['max_depth'] < 7
var_scores = gs_R.cv_results_['mean_test_score'][:7]
mad_scores = gs_R.cv_results_['mean_test_score'][7:]
plt.figure(figsize=(10, 8))
plt.title("The dependence of neg_mse on the depth of the tree")
plt.plot(np.arange(2,9), var_scores, label='variance')
plt.plot(np.arange(2,9), mad_scores, label='mad_median')
plt.legend(fontsize=11, loc=1)
plt.xlabel("max_depth")
plt.ylabel('neg_mse')
plt.show()
```
|
github_jupyter
|
# Homework 2
## Due: January 30, 2018, 8 a.m.
Please give a complete, justified solution to each question below. A single-term answer without explanation will receive no credit.
Please complete each question on its own sheet of paper (or more if necessary), and upload to [Gradescope](https://gradescope.com/).
$$
\newcommand{\R}{\mathbb{R}}
\newcommand{\dydx}{\frac{dy}{dx}}
\newcommand{\proj}{\textrm{proj}}
% For boldface vectors:
\renewcommand{\vec}[1]{\mathbf{#1}}
$$
1\. Suppose $\vec a \ne \vec 0$.
- Suppose $\vec a \cdot \vec b = \vec a \cdot \vec c$. Does it follow that $\vec b = \vec c$?
- Suppose $\vec a \times \vec b = \vec a \times \vec c$. Does it follow that $\vec b = \vec c$?
- Suppose $\vec a \cdot \vec b = \vec a \cdot \vec c$ and $\vec a \times \vec b = \vec a \times \vec c$. Does it follow that $\vec b = \vec c$?
2\. Find parametric equations for the following lines:
- the line that goes through the points $(0, -3, 1)$ and $(5, 2, 2)$
- the line that goes through the point $(3, 2, 1)$ and is perpendicular to the plane described by the equation
$$4x + 3y - 5z = 3.$$
Find equations for the following planes:
- the plane that passes through the point $(1, -1, 1)$ and contains the line with symmetric equations
$$x = 2y = 3z.$$
- the plane that contains all points that are equidistant from the points $(3, 2, -1)$ and $(-7, 4, -3)$.
3\. Show that if $\langle a, b, c \rangle$ is a unit vector, then the distance between the parallel planes given by
$$
ax+by+cz = d_1 ~~~\text{ and }~~~ ax+by+cz = d_2
$$ is $\left|d_1-d_2\right|$. _Do not just quote a formula in the text. Use projection._
4\. Show that the planes $$ 2x + y -3 z = 4 ~~~\text{ and } ~~~ 4x + 2y -6 z = -2 $$
are parallel and find the distance between the two parallel planes above.
5\. (1) Sketch the trace at $z=0$ for each relation below and (2) match the equation with the surface it defines.
<table text-align="left">
<tr><td width="25%">(i) $x = y^2 - z^2$</td>
<td width="25%">(iii) $x^2 = y^2 + z^2$</td></tr>
<tr><td> (ii) $9y^2 + z^2 = 16$ </td>
<td> (iv) $z^2 + x^2 - y^2 = 1$</td></tr>
</table>
<table>
<tr><td width="25%">(A) <img src="hw-10-6-fig2.png" width="45%"></td>
<td width="25%">(B) <img src="hw-10-6-fig1.png" width="45%"></td></tr>
<tr><td>(C) <img src="hw-10-6-fig4.png" width="45%"></td>
<td>(D) <img src="hw-10-6-fig3.png" width="45%"></td></tr>
</table>
6\. Find an equation for the surface consisting of all points $P$ for which the distance from $P$ to the $y$-axis is half the distance from $P$ to the $xz$-plane. Sketch or briefly describe the surface.
7\. Find parametric equations for a straight line through the point $(1,1,0)$ that lies entirely on the surface of the hyperboloid $$
x^2+y^2 = z^2 +2.
$$
|
github_jupyter
|
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
`transforms.Compose` - composes several transforms together.

`ToTensor` - converts a PIL Image or `numpy.ndarray` (H x W x C) in the range [0, 255] to a `torch.FloatTensor` of shape (C x H x W) in the range [0.0, 1.0], provided the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or the `numpy.ndarray` has dtype `np.uint8`.

`Normalize` - normalizes a tensor image with mean and standard deviation. Given mean `(M1,...,Mn)` and std `(S1,...,Sn)` for `n` channels, this transform will normalize each channel of the input `torch.*Tensor`, i.e. `input[channel] = (input[channel] - mean[channel]) / std[channel]`.

`Resize` - resizes the input PIL Image to the given size.

`CenterCrop` - crops the given PIL Image at the center.

`Pad` - pads the given PIL Image on all sides with the given "pad" value.

`RandomTransforms` - base class for a list of transformations with randomness.
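What the two active transforms compute can be sketched in plain NumPy, with a dummy mid-gray 28×28 image standing in for an MNIST digit (no torchvision needed for the arithmetic):

```python
import numpy as np

# ToTensor: uint8 H x W x C in [0, 255] -> float C x H x W in [0, 1]
img = np.full((28, 28, 1), 128, dtype=np.uint8)      # dummy mid-gray image
t = img.astype(np.float32).transpose(2, 0, 1) / 255.0

# Normalize: per-channel (x - mean) / std with the MNIST stats used above
mean, std = 0.1307, 0.3081
t = (t - mean) / std

print(t.shape)                        # (1, 28, 28)
print(round(float(t[0, 0, 0]), 3))    # (128/255 - 0.1307) / 0.3081 ≈ 1.205
```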
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - in practice you would fetch these from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
```
# The model
Let's start with the model we first saw
```
#defining the network structure
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15), # output_size = 26
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15), # output_size = 24
nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(20),
nn.ReLU(),
nn.Dropout(0.15),# output_size = 22
nn.MaxPool2d(2, 2)# output_size = 11
)
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15)# output_size = 11
)
self.conv3 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15),# output_size = 9
nn.Conv2d(in_channels=10, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Dropout(0.15)# output_size = 7
)
self.conv4 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15),#output_size = 5
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU(),
nn.Dropout(0.15),# output_size = 5
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(5, 5), padding=0, bias=False)
)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.conv4(x)
x = x.view(-1, 10)
    return F.log_softmax(x, dim=1)
```
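The `# output_size` comments in `Net` above follow the standard convolution arithmetic, `out = (in + 2*padding - kernel) // stride + 1`. A small helper (not part of the notebook itself) makes the 28 → 26 → 24 → 22 → 11 → 9 → 7 trace explicit:

```python
def conv_out(size, kernel, padding=0, stride=1):
    """Spatial output size of a conv (or pooling) layer."""
    return (size + 2 * padding - kernel) // stride + 1

s = 28
for k in (3, 3, 3):            # three unpadded 3x3 convs in conv1
    s = conv_out(s, k)         # 26, 24, 22
assert s == 22
s = conv_out(s, 2, stride=2)   # MaxPool2d(2, 2) -> 11
assert s == 11
# 1x1 convs keep the size; conv3's two 3x3 convs: 11 -> 9 -> 7
s = conv_out(conv_out(s, 3), 3)
assert s == 7
```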
# Model Params
I can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no built-in model visualizer, so we have to take external help
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
# Training and Testing
Looking at raw logs can be boring, so we'll introduce a **tqdm** progress bar to get cooler logs.
Let's write train and test functions
```
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
    # In PyTorch, we need to set the gradients to zero before starting backpropagation
    # because PyTorch accumulates the gradients on subsequent backward passes.
    # Because of this, when you start your training loop, you should zero out the
    # gradients so that the parameter update is done correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
    train_losses.append(loss.item())  # store a plain float, not the graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
```
# Let's Train and test our model
```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.0335, momentum=0.9)
EPOCHS = 15
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
```
```
import matplotlib.pyplot as plt
x = [1, 2.1, 0.4, 8.9, 7.1, 0.1, 3, 5.1, 6.1, 3.4, 2.9, 9]
y = [1, 3.4, 0.7, 1.3, 9, 0.4, 4, 1.9, 9, 0.3, 4.0, 2.9]
plt.scatter(x,y, color='red')
w = [0.1, 0.2, 0.4, 0.8, 1.6, 2.1, 2.5, 4, 6.5, 8, 10]
z = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
plt.plot(z, w, color='lightblue', linewidth=2)
c = [0,1,2,3,4, 5, 6, 7, 8, 9, 10]
plt.plot(c)
plt.ylabel('some numbers')
plt.xlabel('some more numbers')
plt.savefig('plot.png')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
x = np.random.rand(10)
y = np.random.rand(10)
plt.plot(x,y,'--', x**2, y**2,'-.')
plt.savefig('lines.png')
plt.axis('equal')
plt.show()
"""
Demo of custom tick-labels with user-defined rotation.
"""
import matplotlib.pyplot as plt
x = [1, 2, 3, 4]
y = [1, 4, 9, 6]
labels = ['Frogs', 'Hogs', 'Bogs', 'Slogs']
plt.plot(x, y, 'ro')
# You can specify a rotation for the tick labels in degrees or with keywords.
plt.xticks(x, labels, rotation='vertical')
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
plt.savefig('ticks.png')
plt.show()
import matplotlib.pyplot as plt
x = [0.5, 0.6, 0.8, 1.2, 2.0, 3.0]
y = [10, 15, 20, 25, 30, 35]
z = [1, 2, 3, 4]
w = [10, 20, 30, 40]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, color='lightblue', linewidth=3)
ax.scatter([2,3.4,4, 5.5],
[5,10,12, 15],
color='black',
marker='^')
ax.set_xlim(0, 6.5)
ax2 = fig.add_subplot(222)
ax2.plot(z, w, color='lightgreen', linewidth=3)
ax2.scatter([3,5,7],
[5,15,25],
color='red',
marker='*')
ax2.set_xlim(1, 7.5)
plt.savefig('mediumplot.png')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
# First way: the top plot, all in one #
x = np.random.rand(10)
y = np.random.rand(10)
figure1 = plt.plot(x,y)
# Second way: the lower 4 plots#
x1 = np.random.rand(10)
x2 = np.random.rand(10)
x3 = np.random.rand(10)
x4 = np.random.rand(10)
y1 = np.random.rand(10)
y2 = np.random.rand(10)
y3 = np.random.rand(10)
y4 = np.random.rand(10)
figure2, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
ax1.plot(x1,y1)
ax2.plot(x2,y2)
ax3.plot(x3,y3)
ax4.plot(x4,y4)
plt.savefig('axes.png')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 500)
y = np.sin(4 * np.pi * x) * np.exp(-5 * x)
fig, ax = plt.subplots()
ax.fill(x, y, color='lightblue')
#ax.grid(True, zorder=5)
plt.savefig('fill.png')
plt.show()
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
x, y = np.random.randn(2, 100)
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.xcorr(x, y, usevlines=True, maxlags=50, normed=True, lw=2)
#ax1.grid(True)
ax1.axhline(0, color='black', lw=2)
ax2 = fig.add_subplot(212, sharex=ax1)
ax2.acorr(x, usevlines=True, normed=True, maxlags=50, lw=2)
#ax2.grid(True)
ax2.axhline(0, color='black', lw=2)
plt.savefig('advanced.png')
plt.show()
```
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset, IterableDataset, DataLoader
# import tqdm
import numpy as np
import pandas as pd
import math
seed = 7
torch.manual_seed(seed)
np.random.seed(seed)
pfamA_motors = pd.read_csv("../../data/pfamA_motors.csv")
df_dev = pd.read_csv("../../data/df_dev.csv")
motor_toolkit = pd.read_csv("../../data/motor_tookits.csv")
pfamA_motors_balanced = pfamA_motors.groupby('clan').apply(lambda _df: _df.sample(4500,random_state=1))
pfamA_motors_balanced = pfamA_motors_balanced.apply(lambda x: x.reset_index(drop = True))
pfamA_motors_balanced.to_csv("../../data/pfamA_motors_balanced.csv",index = False)
pfamA_target_name = ["PF00349","PF00022","PF03727","PF06723",\
"PF14450","PF03953","PF12327","PF00091","PF10644",\
"PF13809","PF14881","PF00063","PF00225","PF03028"]
pfamA_target = pfamA_motors.loc[pfamA_motors["pfamA_acc"].isin(pfamA_target_name),:]
# shuffle pfamA_target and pfamA_motors_balanced
pfamA_target = pfamA_target.sample(frac = 1)
pfamA_target_ind = pfamA_target.iloc[:,0]
print(pfamA_target_ind[0:5])
print(pfamA_motors_balanced.shape)
pfamA_motors_balanced = pfamA_motors_balanced.sample(frac = 1)
pfamA_motors_balanced_ind = pfamA_motors_balanced.iloc[:,0]
print(pfamA_motors_balanced_ind[0:5])
print(pfamA_target.shape)
pfamA_motors_balanced.head()
pfamA_target.head()
aminoacid_list = [
'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L',
'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'
]
clan_list = ["actin_like","tubulin_c","tubulin_binding","p_loop_gtpase"]
aa_to_ix = dict(zip(aminoacid_list, np.arange(1, 21)))
clan_to_ix = dict(zip(clan_list, np.arange(0, 4)))
def word_to_index(seq,to_ix):
"Returns a list of indices (integers) from a list of words."
return [to_ix.get(word, 0) for word in seq]
ix_to_aa = dict(zip(np.arange(1, 21), aminoacid_list))
ix_to_clan = dict(zip(np.arange(0, 4), clan_list))
def index_to_word(ixs,ix_to):
"Returns a list of words, given a list of their corresponding indices."
return [ix_to.get(ix, 'X') for ix in ixs]
def prepare_sequence(seq):
idxs = word_to_index(seq[0:-1],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
def prepare_labels(seq):
idxs = word_to_index(seq[1:],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
def prepare_eval(seq):
idxs = word_to_index(seq[:],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
prepare_labels('YCHXXXXX')
# set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# Hyperparameters
input_size = len(aminoacid_list) + 1
num_layers = 1
hidden_size = 128
output_size = len(aminoacid_list) + 1
embedding_size= 10
learning_rate = 0.001
# Create Bidirectional LSTM
class BRNN(nn.Module):
def __init__(self,input_size, embedding_size, hidden_size, num_layers, output_size):
super(BRNN,self).__init__()
self.embedding_size = embedding_size
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = num_layers
self.log_softmax = nn.LogSoftmax(dim= 1)
self.aa_embedding = nn.Embedding(input_size, embedding_size)
self.lstm = nn.LSTM(input_size = embedding_size,
hidden_size = hidden_size,
num_layers = num_layers,
bidirectional = True)
#hidden_state: a forward and a backward state for each layer of LSTM
self.fc = nn.Linear(hidden_size*2, output_size)
def aa_encoder(self, input):
"Helper function to map single aminoacids to the embedding space."
        projected = self.aa_embedding(input)  # the embedding layer is named aa_embedding
return projected
def forward(self,seq):
# embed each aa to the embedded space
embedding_tensor = self.aa_embedding(seq)
# initialization could be neglected as the default is 0 for h0 and c0
# initialize hidden state
# h0 = torch.zeros(self.num_layers*2,x.size(0),self.hidden_size).to(device)
# initialize cell_state
# c0 = torch.zeros(self.num_layers*2,x.size(0),self.hidden_size).to(device)
# shape(seq_len = len(sequence), batch_size = 1, input_size = -1)
# (5aa,1 sequence per batch, 10-dimension embedded vector)
#output of shape (seq_len, batch, num_directions * hidden_size):
out, (hn, cn) = self.lstm(embedding_tensor.view(len(seq), 1, -1))
# decoded_space = self.fc(out.view(len(seq), -1))
decoded_space = self.fc(out.view(len(seq), -1))
decoded_scores = F.log_softmax(decoded_space, dim=1)
return decoded_scores, hn
# initialize network
model = BRNN(input_size, embedding_size, hidden_size, num_layers, output_size).to(device)
# model.load_state_dict(torch.load("../../data/bidirectional_lstm_5_201008.pt"))
loss_function = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr = learning_rate)
model.train()
```
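The `prepare_sequence`/`prepare_labels` pair above sets up next-token prediction: the input is the sequence minus its last residue, and the target is the same sequence shifted by one. A standalone sketch of that indexing, re-implemented here outside the notebook's helpers:

```python
# Amino acids map to indices 1..20; unknown characters (e.g. 'X') fall back to 0.
aminoacids = list('ACDEFGHIKLMNPQRSTVWY')
aa_to_ix = dict(zip(aminoacids, range(1, 21)))

def word_to_index(seq, to_ix):
    return [to_ix.get(ch, 0) for ch in seq]

seq = 'YCHXA'
inputs = word_to_index(seq[:-1], aa_to_ix)  # all but the last residue
labels = word_to_index(seq[1:], aa_to_ix)   # shifted by one: next-residue targets
assert inputs == [20, 2, 7, 0]
assert labels == [2, 7, 0, 1]
```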
## Proceed weight updates using motor_balanced
```
#Train Network
# loss_vector = []
running_loss = 0
print_every = 1000
for epoch in np.arange(0, pfamA_motors_balanced.shape[0]):
seq = pfamA_motors_balanced.iloc[epoch, 3]
# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
sentence_in = prepare_sequence(seq)
targets = prepare_labels(seq)
sentence_in = sentence_in.to(device = device)
targets = targets.to(device = device)
# Step 3. Run our forward pass.
model.zero_grad()
aa_scores, hn = model(sentence_in)
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = loss_function(aa_scores, targets)
loss.backward()
optimizer.step()
  if epoch % print_every == 0:
    print(f"At Epoch: {epoch}")
    print(f"Loss: {loss.item():.2f}")
# Print current loss
# loss_vector.append(loss)
torch.save(model.state_dict(), "../../data/mini_lstm_5_balanced.pt")
```
## Proceed weight updates using the entire pfam_motor set
```
#Train Network
# loss_vector = []
running_loss = 0
print_every = 1000
for epoch in np.arange(0, pfamA_target.shape[0]):
seq = pfamA_target.iloc[epoch, 3]
# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
sentence_in = prepare_sequence(seq)
targets = prepare_labels(seq)
sentence_in = sentence_in.to(device = device)
targets = targets.to(device = device)
# Step 3. Run our forward pass.
model.zero_grad()
aa_scores, hn = model(sentence_in)
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = loss_function(aa_scores, targets)
loss.backward()
optimizer.step()
  if epoch % print_every == 0:
    print(f"At Epoch: {epoch}")
    print(f"Loss: {loss.item():.2f}")
# Print current loss
# loss_vector.append(loss)
torch.save(model.state_dict(), "../../data/mini_lstm_5_balanced_target.pt")
print("done")
```
<center><h1> Sorting Algorithms Visualizer.</h1></center>
<center><h3>Made Using Matplotlib Animation.</h3></center>
<center><img src='https://miro.medium.com/max/1400/0*qwkWXc-wzW2D8ggV.jpg'></center>
### Sorting Algorithms
* Quick Sort
* Merge Sort
* Insertion Sort
* Selection Sort
* Bubble Sort
# Importing Libraries to be used.
```
import random
import time
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.animation as animation
sns.set_style('darkgrid')
```
### Helper function to swap elements i and j of list.
```
def swap(A, i, j):
"""Helper function to swap elements i and j of list."""
if i != j:
A[i], A[j] = A[j], A[i]
```
# Bubble Sort Algorithm
```
def bubblesort(A):
"""In-place bubble sort."""
if len(A) == 1:
return
swapped = True
for i in range(len(A) - 1):
if not swapped:
break
swapped = False
for j in range(len(A) - 1 - i):
if A[j] > A[j + 1]:
swap(A, j, j + 1)
swapped = True
yield A
```
# Insertion sort Algorithm
```
def insertionsort(A):
"""In-place insertion sort."""
for i in range(1, len(A)):
j = i
while j > 0 and A[j] < A[j - 1]:
swap(A, j, j - 1)
j -= 1
yield A
```
# Merge sort Algorithm
```
def mergesort(A, start, end):
"""Merge sort."""
if end <= start:
return
mid = start + ((end - start + 1) // 2) - 1
yield from mergesort(A, start, mid)
yield from mergesort(A, mid + 1, end)
yield from merge(A, start, mid, end)
yield A
```
# Algorithm to merge sublists
```
def merge(A, start, mid, end):
"""Helper function for merge sort."""
merged = []
leftIdx = start
rightIdx = mid + 1
while leftIdx <= mid and rightIdx <= end:
if A[leftIdx] < A[rightIdx]:
merged.append(A[leftIdx])
leftIdx += 1
else:
merged.append(A[rightIdx])
rightIdx += 1
while leftIdx <= mid:
merged.append(A[leftIdx])
leftIdx += 1
while rightIdx <= end:
merged.append(A[rightIdx])
rightIdx += 1
for i, sorted_val in enumerate(merged):
A[start + i] = sorted_val
yield A
```
# Quick Sort Algorithm
```
def quicksort(A, start, end):
"""In-place quicksort."""
if start >= end:
return
pivot = A[end]
pivotIdx = start
for i in range(start, end):
if A[i] < pivot:
swap(A, i, pivotIdx)
pivotIdx += 1
yield A
swap(A, end, pivotIdx)
yield A
yield from quicksort(A, start, pivotIdx - 1)
yield from quicksort(A, pivotIdx + 1, end)
```
# Selection Sort Algorithm
```
def selectionsort(A):
"""In-place selection sort."""
if len(A) == 1:
return
for i in range(len(A)):
# Find minimum unsorted value.
minVal = A[i]
minIdx = i
for j in range(i, len(A)):
if A[j] < minVal:
minVal = A[j]
minIdx = j
yield A
swap(A, i, minIdx)
yield A
```
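Each sorter above is a generator that yields the partially sorted list after each mutation; `FuncAnimation` later consumes those yields as animation frames. A minimal standalone sketch of the pattern (a tiny bubble sort, separate from the versions above):

```python
def bubblesort_steps(A):
    """Tiny generator-based bubble sort, yielding a snapshot after each swap."""
    for i in range(len(A) - 1):
        for j in range(len(A) - 1 - i):
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                yield list(A)  # one snapshot = one animation frame

A = [4, 1, 3, 2]
frames = list(bubblesort_steps(A))
assert A == [1, 2, 3, 4]   # sorted in place
assert len(frames) == 4    # one frame per swap
```

Driving the plot is then just a matter of handing the generator to `FuncAnimation` via `frames=`, exactly as the main function below does.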
# Function to give a new color each time
```
def colors(N):
if N%5==0:
colors='skyblue'
elif N%5==1:
colors='green'
elif N%5==2:
colors='orange'
elif N%5==3:
colors='red'
elif N%5==4:
colors='pink'
return colors
```
# Main function for Image Animation and plotting
```
if __name__ == "__main__":
    # Get user input to determine how many integers to sort and the desired
    # sorting method (algorithm).
N = int(input("Enter number of integers: "))
#A=list(map(int,input("Enter the Integers :").split()))
method_msg = "Enter sorting method:\n(b)ubble\n(i)nsertion\n(m)erge \
\n(q)uick\n(s)election\n"
method = input(method_msg)
# Build and randomly shuffle list of integers.
A = [x + 1 for x in range(20,N+20)]
random.seed(time.time())
random.shuffle(A)
yl1=min(A)
yl2=max(A)
# Get appropriate generator to supply to matplotlib FuncAnimation method.
if method == "b":
title = "Bubble sort"
generator = bubblesort(A)
elif method == "i":
title = "Insertion sort"
generator = insertionsort(A)
elif method == "m":
title = "Merge sort"
generator = mergesort(A, 0, N - 1)
elif method == "q":
title = "Quick sort"
generator = quicksort(A, 0, N - 1)
else:
title = "Selection sort"
generator = selectionsort(A)
fig, ax = plt.subplots()
ax.set_title(title)
# Initializing a bar plot.
bar_rects = ax.bar(range(len(A)), A, align="edge",color=colors(N))
# Set axis limits. Set y axis upper limit high enough that the tops of
# the bars won't overlap with the text label.
ax.set_xlim(0,N)
ax.set_ylim(max(yl1-2,0),yl2+2)
# Place a text label in the upper-left corner of the plot to display
# number of operations performed.
text = ax.text(0.02, 0.95, "", transform=ax.transAxes)
# Define function update_fig() for use with matplotlib.pyplot.FuncAnimation().
# To track the number of operations, i.e., iterations
iteration = [0]
def update_fig(A, rects, iteration):
for rect, val in zip(rects, A):
rect.set_height(val)
iteration[0] += 1
text.set_text("# Number of operations: {}".format(iteration[0]))
anim = animation.FuncAnimation(fig, func=update_fig,
fargs=(bar_rects, iteration), frames=generator, interval=2,
repeat=False)
plt.show()
```
<center><h1> Below is a Example.</h1></center>
<center><img src='https://i.ibb.co/LQbCvgN/ezgif-com-crop.gif' width="700"></center>
```
import pandas as pd
import sklearn as sk
import json
import ast
import pickle
import math
import matplotlib.pyplot as plt
df = pd.read_json('/data/accessible_POIs/great-britain-latest.json')
df.loc[:,'id'] = df['Node'].apply(lambda x: dict(x)['id'])
df.loc[:,'access'] = df['Node'].apply(lambda x: dict(x)['tags'].get('access') if 'access' in dict(x)['tags'] else 'NONE')
df.loc[:,'barrier'] = df['Node'].apply(lambda x: dict(x)['tags'].get('barrier'))
df.loc[:,'bicycle'] = df['Node'].apply(lambda x: dict(x)['tags'].get('bicycle'))
df.loc[:,'motor_vehicle'] = df['Node'].apply(lambda x: dict(x)['tags'].get('motor_vehicle'))
df.loc[:,'opening_hours'] = df['Node'].apply(lambda x: dict(x)['tags'].get('opening_hours'))
df.loc[:,'wheelchair'] = df['Node'].apply(lambda x: dict(x)['tags'].get('wheelchair'))
df.loc[:,'amenity'] = df['Node'].apply(lambda x: dict(x)['tags'].get('amenity'))
df.loc[:,'lon'] = df['Node'].apply(lambda x: dict(x)['lonlat'][0])
df.loc[:,'lat'] = df['Node'].apply(lambda x: dict(x)['lonlat'][1])
df.drop(['Node','Way','Relation'], axis=1, inplace=True)
df
df.to_pickle('/shared/accessible_pois.pkl')
from zipfile import ZipFile
with ZipFile('/data/All_POIs_by_country/pois_by_countries.zip', 'r') as z:
z.extract('geojson/great-britain-latest.json', '/shared/great-britain-latest.json')
#z.extract('geojson/great-britain-latest.geojson', '/shared/great-britain-latest.geojson')
with open('/shared/great-britain-latest.json','r') as j:
data = json.load(j)
df = pd.json_normalize(data)
df
with open('/shared/great-britain-latest.json','r') as j:
data = json.load(j)
df = pd.json_normalize(data['features'], max_level=3)
df
def extract_key(x,key):
if type(x) == float:
return None
x_ = x.split(',')
x_ = [y.replace('\'','').replace('"','') for y in x_]
for k in x_:
if key in k:
return k[k.find('>')+1:]
return None
df.loc[:,'shop'] = df['properties.other_tags'].apply(extract_key, args=('shop',))
df.loc[:,'amenity'] = df['properties.other_tags'].apply(extract_key, args=('amenity',))
df.loc[:,'wheelchair'] = df['properties.other_tags'].apply(extract_key, args=('wheelchair',))
df.loc[:,'barrier'] = df['properties.other_tags'].apply(extract_key, args=('barrier',))
df.loc[:,'access'] = df['properties.other_tags'].apply(extract_key, args=('access',))
df.loc[:,'lon'] = df['geometry.coordinates'].apply(lambda x: list(x)[0])
df.loc[:,'lat'] = df['geometry.coordinates'].apply(lambda x: list(x)[1])
df.to_csv('all_pois_wordcloud.csv')
len(df)
df = df[df['wheelchair'].isin(['yes','no','limited','designated'])]
#df.drop(['geometry.coordinates','type','geometry.type'], axis=1, inplace=True)
df.to_csv('accessible_pois_wordcloud.csv')
set(df['wheelchair'])
plt.figure(figsize=(30,10))
df['wheelchair'].value_counts().plot(kind='bar')
len(df)
_ = df
_['wheelchair'].value_counts().plot(kind='bar')
_
df['properties.other_tags'].to_csv('wordcloud.csv')
df
```
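The `other_tags` column holds OSM tags as hstore-style `"key"=>"value"` strings. A standalone copy of `extract_key` (with an illustrative tag string, not data from the actual file) shows how one value is pulled out:

```python
def extract_key(x, key):
    if type(x) == float:  # NaN rows arrive as float
        return None
    parts = [p.replace("'", '').replace('"', '') for p in x.split(',')]
    for k in parts:
        if key in k:
            return k[k.find('>') + 1:]  # everything after the '=>' separator
    return None

tags = '"amenity"=>"cafe","wheelchair"=>"yes"'
assert extract_key(tags, 'wheelchair') == 'yes'
assert extract_key(tags, 'amenity') == 'cafe'
assert extract_key(float('nan'), 'amenity') is None
```

Note this simple split-on-comma approach would mis-handle tag values that themselves contain commas; it works as a rough heuristic for this word-cloud use.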
### Image Captioning
To perform image captioning we are going to apply an approach similar to the work described in references [1],[2], and [3]. The approach applied here uses a recurrent neural network (RNN) to train a network to generate image captions. The input to the RNN is comprised of a high-level representation of an image and a caption describing it. The Microsoft Common Object in Context (MSCOCO) data set is used for this because it has many images and five captions for each one in most cases. In the previous section, we learned how to create and train a simple RNN. For this part, we will learn how to concatenate a feature vector that represents the images with its corresponding sentence and feed this into an RNN.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import inspect
import time
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
#import reader
import collections
import os
import re
import json
import matplotlib.pyplot as plt
from scipy import ndimage
from scipy import misc
import sys
sys.path.insert(0, '/data/models/slim')
slim=tf.contrib.slim
from nets import vgg
from preprocessing import vgg_preprocessing
%matplotlib inline
!nvidia-smi
```
### MSCOCO Captions
We are going to build on our RNN example. First, we will look at the data and evaluate a single image, its captions, and feature vector.
```
TRAIN_IMAGE_PATH='/data/mscoco/train2014/'
## Read Training files
with open("/data/mscoco/captions_train2014.json") as data_file:
data=json.load(data_file)
image_feature_vectors={}
tf.reset_default_graph()
one_image=ndimage.imread(TRAIN_IMAGE_PATH+data["images"][0]['file_name'])
#resize for vgg network
resize_img=misc.imresize(one_image,[224,224])
if len(one_image.shape)!= 3: #Check to see if the image is grayscale if True mirror colorband
resize_img=np.asarray(np.dstack((resize_img, resize_img, resize_img)), dtype=np.uint8)
processed_image = vgg_preprocessing.preprocess_image(resize_img, 224, 224, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
network,endpts= vgg.vgg_16(processed_images, is_training=False)
init_fn = slim.assign_from_checkpoint_fn(os.path.join('/data/mscoco/vgg_16.ckpt'),slim.get_model_variables('vgg_16'))
sess = tf.Session()
init_fn(sess)
NETWORK,ENDPTS=sess.run([network,endpts])
sess.close()
print('fc7 array for a single image')
print(ENDPTS['vgg_16/fc7'][0][0][0])
plt.plot(ENDPTS['vgg_16/fc7'][0][0][0])
plt.xlabel('feature vector index')
plt.ylabel('amplitude')
plt.title('fc7 feature vector')
data["images"][0]['file_name']
```
How can you look at feature maps from the first convolutional layer? Look here if you need a [hint](#answer1 "The output from the convolutional layer is in the form of height, width, and number of feature maps. FEATUREMAPID can be any value between 0 and the number of feature maps minus 1.").
```
print(ENDPTS['vgg_16/conv1/conv1_1'][0].shape)
FEATUREMAPID=0
print('input image and feature map from conv1')
plt.subplot(1,2,1)
plt.imshow(resize_img)
plt.subplot(1,2,2)
plt.imshow(ENDPTS['vgg_16/conv1/conv1_1'][0][:,:,FEATUREMAPID])
```
How can you look at the response of different layers in your network?
Next, we are going to combine the feature maps with their respective captions. Many of the images have five captions. Run the code below to view the captions for one image.
```
CaptionsForOneImage=[]
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==data["images"][0]['id']:
CaptionsForOneImage.append([data['annotations'][k]['caption'].lower()])
plt.imshow(resize_img)
print('MSCOCO captions for a single image')
CaptionsForOneImage
```
A file with feature vectors from 2000 of the MSCOCO images has been created. Next, you will load these and train. Please note this step can take more than 5 minutes to run.
```
example_load=np.load('/data/mscoco/train_vgg_16_fc7_2000.npy').tolist()
image_ids=example_load.keys()
#Create 3 lists image_id, feature maps, and captions.
image_id_key=[]
feature_maps_to_id=[]
caption_to_id=[]
for observed_image in image_ids:
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==observed_image:
image_id_key.append([observed_image])
feature_maps_to_id.append(example_load[observed_image])
caption_to_id.append(re.sub('[^A-Za-z0-9]+',' ',data['annotations'][k]['caption']).lower()) #remove punctuation
print('number of images ',len(image_ids))
print('number of captions ',len(caption_to_id))
```
In the cell above we created three lists: one each for the image_id, feature map, and caption. To verify that the indices of each list are aligned, display the image id and captions for one image.
```
STRING='%012d' % image_id_key[0][0]
exp_image=ndimage.imread(TRAIN_IMAGE_PATH+'COCO_train2014_'+STRING+'.jpg')
plt.imshow(exp_image)
print('image_id ',image_id_key[:5])
print('the captions for this image ')
print(caption_to_id[:5])
num_steps=20
######################################################################
##Create a list of all of the sentences.
DatasetWordList=[]
for dataset_caption in caption_to_id:
DatasetWordList+=str(dataset_caption).split()
#Determine the number of distinct words
distintwords=collections.Counter(DatasetWordList)
#Order words
count_pairs = sorted(distintwords.items(), key=lambda x: (-x[1], x[0])) #descending count, ties broken alphabetically
words, occurence = list(zip(*count_pairs))
#DictionaryLength=occurence.index(4) #index for words that occur 4 times or less
words=['PAD','UNK','EOS']+list(words)#[:DictionaryLength])
word_to_id=dict(zip(words, range(len(words))))
##################### Tokenize Sentence #######################
Tokenized=[]
for full_words in caption_to_id:
EmbeddedSentence=[word_to_id[word] for word in full_words.split() if word in word_to_id]+[word_to_id['EOS']]
#Pad sentences that are shorter than the number of steps
if len(EmbeddedSentence)<num_steps:
b=[word_to_id['PAD']]*num_steps
b[:len(EmbeddedSentence)]=EmbeddedSentence
if len(EmbeddedSentence)>num_steps:
b=EmbeddedSentence[:num_steps]
    if len(EmbeddedSentence)==num_steps:
        b=EmbeddedSentence
#b=[word_to_id['UNK'] if x>=DictionaryLength else x for x in b] #turn all words used 4 times or less to 'UNK'
#print(b)
Tokenized+=[b]
print("Number of words in this dictionary ", len(words))
#Tokenized Sentences
Tokenized[::2000]
```
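The padding logic above fixes every caption at `num_steps` tokens. A compact standalone sketch of the same pad/truncate rule with a toy vocabulary; note that here out-of-vocabulary words map to `UNK`, whereas the cell above simply drops them:

```python
num_steps = 6
word_to_id = {'PAD': 0, 'UNK': 1, 'EOS': 2, 'a': 3, 'dog': 4, 'runs': 5}

def tokenize(caption):
    ids = [word_to_id.get(w, word_to_id['UNK']) for w in caption.split()]
    ids.append(word_to_id['EOS'])
    if len(ids) < num_steps:  # right-pad short captions with PAD
        ids = ids + [word_to_id['PAD']] * (num_steps - len(ids))
    return ids[:num_steps]    # truncate long ones to num_steps

assert tokenize('a dog runs') == [3, 4, 5, 2, 0, 0]
assert len(tokenize('a dog runs a dog runs a dog')) == num_steps
```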
The next cell contains functions for queuing our data and the RNN model. What should the output for each function be? If you need a hint look [here](#answer2 "The data_queue function batches the data for us, this needs to return tokenized_caption, input_feature_map. The RNN model should return prediction before the softmax is applied and is defined as pred.").
```
def data_queue(caption_input,feature_vector,batch_size,):
train_input_queue = tf.train.slice_input_producer(
[caption_input, np.asarray(feature_vector)],num_epochs=10000,
shuffle=True) #False before
##Set our train data and label input shape for the queue
TrainingInputs=train_input_queue[0]
FeatureVectors=train_input_queue[1]
TrainingInputs.set_shape([num_steps])
FeatureVectors.set_shape([len(feature_vector[0])]) #fc7 is 4096
min_after_dequeue=1000000
capacity = min_after_dequeue + 3 * batch_size
#input_x, target_y
tokenized_caption, input_feature_map = tf.train.batch([TrainingInputs, FeatureVectors],
batch_size=batch_size,
capacity=capacity,
num_threads=6)
return tokenized_caption,input_feature_map
def rnn_model(Xconcat,input_keep_prob,output_keep_prob,num_layers,num_hidden):
#Create a multilayer RNN
#reuse=False for training but reuse=True for sharing
layer_cell=[]
for _ in range(num_layers):
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=num_hidden, state_is_tuple=True)
lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell,
input_keep_prob=input_keep_prob,
output_keep_prob=output_keep_prob)
layer_cell.append(lstm_cell)
cell = tf.contrib.rnn.MultiRNNCell(layer_cell, state_is_tuple=True)
outputs, last_states = tf.contrib.rnn.static_rnn(
cell=cell,
dtype=tf.float32,
inputs=tf.unstack(Xconcat))
output_reshape=tf.reshape(outputs, [batch_size*(num_steps),num_hidden]) #[12==batch_size*num_steps,num_hidden==12]
pred=tf.matmul(output_reshape, variables_dict["weights_mscoco"]) +variables_dict["biases_mscoco"]
return pred
tf.reset_default_graph()
#######################################################################################################
# Parameters
num_hidden=2048
num_steps=num_steps
dict_length=len(words)
batch_size=4
num_layers=2
train_lr=0.00001
#######################################################################################################
TrainingInputs=Tokenized
FeatureVectors=feature_maps_to_id
## Variables ##
# Learning rate placeholder
lr = tf.placeholder(tf.float32, shape=[])
#tf.get_variable_scope().reuse_variables()
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
tokenized_caption, input_feature_map=data_queue(TrainingInputs,FeatureVectors,batch_size)
mscoco_dict=words
TrainInput=tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32)
#Pad the beginning of our caption. The first step now only has the image feature vector. Drop the last time step
#to timesteps to 20
TrainInput=tf.concat([tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32),
tokenized_caption],1)[:,:-1]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), TrainInput) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([input_feature_map+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
#the full caption is the target sentence
y_one_hot=tf.unstack(tf.nn.embedding_lookup(np.identity(dict_length), tokenized_caption),num_steps,1) #[batch,num_steps,dictionary_length][2,6,7]
y_target_reshape=tf.reshape(y_one_hot,[batch_size*num_steps,dict_length])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y_target_reshape))
optimizer = tf.train.MomentumOptimizer(lr,0.9)
gvs = optimizer.compute_gradients(cost,aggregation_method = tf.AggregationMethod.EXPERIMENTAL_TREE)
capped_gvs = [(tf.clip_by_value(grad, -10., 10.), var) for grad, var in gvs]
train_op=optimizer.apply_gradients(capped_gvs)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, '/data/mscoco/rnn_layermodel_iter40000')
print('Model restored from file')
for i in range(100):
loss,y_pred,target_caption,_=sess.run([cost,pred,tokenized_caption,train_op],feed_dict={lr:train_lr})
if i% 10==0:
print("iteration: ",i, "loss: ",loss)
MODEL_NAME='rnn_model_iter'+str(i)
saver.save(sess, MODEL_NAME)
print('saved trained network ',MODEL_NAME)
print("Done Training")
coord.request_stop()
coord.join(threads)
sess.close()
```
We can use the function below to estimate how well the network is able to predict the next word in the caption. You can evaluate a single image and its caption from the last batch using the index of the batch. If you need a hint look [here](#answer3 "if the batch_size is 4, batch_id may be any value between 0 and 3.").
##### Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated.
```
def show_next_predicted_word(batch_id,batch_size,id_of_image,target_caption,predicted_caption,words,PATH):
Target=[words[ind] for ind in target_caption[batch_id]]
Prediction_Tokenized=np.argmax(predicted_caption[batch_id::batch_size],1)
Prediction=[words[ind] for ind in Prediction_Tokenized]
STRING2='%012d' % id_of_image
img=ndimage.imread(PATH+STRING2+'.jpg')
return Target,Prediction,img,STRING2
#You can change the batch id to a number between [0 , batch_size-1]
batch_id=0
image_id_for_predicted_caption=[x for x in range(len(Tokenized)) if target_caption[batch_id].tolist()== Tokenized[x]][0]
t,p,input_img,string_out=show_next_predicted_word(batch_id,batch_size,image_id_key[image_id_for_predicted_caption][0]
,target_caption,y_pred,words,TRAIN_IMAGE_PATH+'COCO_train2014_')
print('Caption')
print(t)
print('Predicted Words')
print(p)
plt.imshow(input_img)
```
##### Questions
[1] Can the show_next_predicted_word function be used for deployment?
Probably not. Can you think of any reason why? Each predicted word here is conditioned on the previous ground-truth word, but in a deployment scenario we will only have the feature map from our input image.
[2] Can you load your saved network and use it to generate a caption from a validation image?
The validation images are stored in /data/mscoco/val2014. An npy file of the feature vectors is stored in /data/mscoco/val_vgg_16_fc7_100.npy. For a hint on how to load this, look [here](#answer4 "You can change this parameter to val_load=np.load(/data/mscoco/val_vgg_16_fc7_100.npy).tolist()").
[3] Do you need to calculate the loss or cost when only performing inference?
[4] Do you use dropout when performing inference?
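On question [1]: at deployment time the decoding loop has to feed each predicted word back in as the next input, since there is no ground-truth caption. A minimal sketch of that greedy loop, using a hypothetical `predict_next` stub in place of a `sess.run` over the trained network:

```python
import numpy as np

def predict_next(feature_map, tokens):
    # Hypothetical stand-in for running the trained RNN one step:
    # returns a probability distribution over a toy 5-word vocabulary.
    rng = np.random.default_rng(len(tokens))  # deterministic per step
    logits = rng.normal(size=5)
    return np.exp(logits) / np.exp(logits).sum()

def greedy_caption(feature_map, pad_id=0, num_steps=5):
    tokens = [pad_id]  # start from PAD, exactly as in training
    for _ in range(num_steps - 1):
        probs = predict_next(feature_map, tokens)
        tokens.append(int(np.argmax(probs)))  # feed the prediction back in
    return tokens

caption = greedy_caption(feature_map=None)
print(caption)
```

The real version of this loop appears in the inference cell further down, where `StartCaption` is filled in one word at a time.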
```
##Load the validation set
val_load=np.load('/data/mscoco/val_vgg_16_fc7_100.npy').tolist()
val_ids=val_load.keys()
#Create 3 lists image_id, feature maps, and captions.
val_id_key=[]
val_map_to_id=[]
val_caption_to_id=[]
for observed_image in val_ids:
val_id_key.append([observed_image])
val_map_to_id.append(val_load[observed_image])
print('number of images ',len(val_ids))
print('number of feature maps ',len(val_map_to_id))
```
The cell below will load a feature vector from one of the images in the validation data set and use it with our pretrained network to generate a caption. Use the VALDATA variable to choose an image to propagate through our RNN and generate a caption. You also need to load the network you just created during training. Look here if you need a [hint](#answer5 "Any of the data points in our validation set can be used here. There are 501 captions. Any number between 0 and 501-1 can be used for the VALDATA parameter, such as VALDATA=430. The pretrained network file that you just saved is rnn_model_iter99, insert this string into saver.restore(sess,FILENAME)").
##### Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated.
```
tf.reset_default_graph()
batch_size=1
num_steps=20
print_topn=0 #0 = do not display
printnum0f=3
#Choose a image to caption
VALDATA=54 #ValImage fc7 feature vector
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
StartCaption=np.zeros([batch_size,num_steps],dtype=np.int32).tolist()
CaptionPlaceHolder = tf.placeholder(dtype=tf.int32, shape=(batch_size , num_steps))
ValFeatureMap=val_map_to_id[VALDATA]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), CaptionPlaceHolder) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([ValFeatureMap+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
pred=tf.nn.softmax(pred)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, 'rnn_model_iter99')
print('Model restored from file')
for i in range(num_steps-1):
predict_next_word=sess.run([pred],feed_dict={CaptionPlaceHolder:StartCaption})
INDEX=np.argmax(predict_next_word[0][i])
StartCaption[0][i+1]=INDEX
##Post N most probable next words at each step
if print_topn !=0:
print("Top ",str(printnum0f), "predictions for the", str(i+1), "word in the predicted caption" )
result_args = np.argsort(predict_next_word[0][i])[-printnum0f:][::-1]
NextWord=[words[x] for x in result_args]
print(NextWord)
coord.request_stop()
coord.join(threads)
sess.close()
STRING2='%012d' % val_id_key[VALDATA][0]
img=ndimage.imread('/data/mscoco/val2014/COCO_val2014_'+STRING2+'.jpg')
plt.imshow(img)
plt.title('COCO_val2014_'+STRING2+'.jpg')
PredictedCaption=[words[x] for x in StartCaption[0]]
print("predicted sentence: ",PredictedCaption[1:])
#Free our GPU memory before proceeding to the next part of the lab
import os
os._exit(00)
```
## References
[1] Donahue, J., et al. "Long-term recurrent convolutional networks for visual recognition and description." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[2] Vinyals, Oriol, et al. "Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge." IEEE Transactions on Pattern Analysis and Machine Intelligence 39.4 (2017): 652-663.
[3] TensorFlow Show and Tell: A Neural Image Caption Generator [example](https://github.com/tensorflow/models/tree/master/im2txt)
[4] Karpathy, A. [NeuralTalk2](https://github.com/karpathy/neuraltalk2)
[5] Lin, Tsung-Yi, et al. "Microsoft COCO: Common objects in context." European Conference on Computer Vision. Springer International Publishing, 2014.
```
import numpy as np
import sys
```
# The Basics
```
a = np.array([1,2,3], dtype='int32')
print(a)
b = np.array([[1.0,2.0,3.0],[4.0,5.0,6.0]])
print(b)
#Get Dimension
a.ndim
#Get Shape
a.shape
b.shape
#Get Type
a.dtype
#Get size
a.itemsize
#Get Total Size
a.nbytes
b.itemsize
```
## Accessing / Changing specific elements, row, columns
```
a = np.array([[1,2,3,4,5,6,7],[8,9,10,11,12,13,14]])
print(a)
#Get element
a[1, 5]
#Get specific row
a[0, :]
#Get specific column
a[:, 2]
#Getting fancier[startindex:endindex:stepsize]
a[0, 1:6:2]
a[1,5] = 20
print(a)
a[:,2] = [1,2]
print(a)
#3d example
b = np.array([[[1,2], [3,4]], [[5,6],[7,8]]])
print(b)
#Get specific element(work outside in)
b[0, 1, 1]
b[:,1,:]
```
## Initializing Different types of arrays
```
#All 0s matrix
np.zeros((2,3,3))
#All ones matrix
np.ones((4,2,2), dtype='int32')
#Any number
np.full((2,2), 99, dtype="float32")
# Any other number (full-like)
np.full_like(a, 4)
# Random decimal numbers
np.random.rand(4,2,3)
#Random integers
np.random.randint(-4,7, size=(3,3))
#The identity matrix
np.identity(3)
#Repeat an array
arr = np.array([[1,2,3]])
r1 = np.repeat(arr, 3, axis=0)
print(r1)
output = np.ones((5,5))
print(output)
z = np.zeros((3,3))
z[1,1] = 9
print(z)
output[1:-1, 1:-1] = z
print(output)
```
### Be careful when copying arrays
```
a = np.array([1,2,3])
b = a.copy()
b[0] = 100
print(b)
print(a)
```
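For contrast, here is what happens without `.copy()`: plain assignment just creates a second name for the same underlying buffer, so writes through either name are visible through both (a quick sketch):

```python
import numpy as np

a = np.array([1, 2, 3])
alias = a          # no copy: both names point at the same buffer
alias[0] = 100
print(a)           # the write through alias shows up in a too

b = a.copy()       # an independent buffer
b[0] = -1
print(a[0])        # a is unaffected this time
```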
# Mathematics
```
a = np.array([1,2,3,4])
a
a+2
a*2
a/2
a+=2
a
b = np.array([1,0,1,0])
a + b
a * b
a ** 2
#Take a sin
np.sin(a)
#Look up scipy routines for more mathematics
```
# Linear Algebra
```
a = np.ones((2,3))
print(a)
b = np.full((3,2),2)
print(b)
np.matmul(a,b)
c = np.identity(3)
np.linalg.det(c)
# https://numpy.org/doc/stable/reference/routines.linalg.html
```
## Statistics
```
stats = np.array([[1,2,3],[4,5,6]])
stats
np.min(stats)
np.max(stats, axis=0)
np.sum(stats)
```
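The `axis` argument controls which dimension gets collapsed: `axis=0` reduces down the columns, `axis=1` across the rows. A quick check on the same array:

```python
import numpy as np

stats = np.array([[1, 2, 3],
                  [4, 5, 6]])

col_max = np.max(stats, axis=0)  # per-column maxima
row_sum = np.sum(stats, axis=1)  # per-row sums
print(col_max)  # [4 5 6]
print(row_sum)  # [ 6 15]
```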
# Reorganizing Arrays
```
before = np.array([[1,2,3,4],[5,6,7,8]])
print(before)
after = before.reshape((2,2,2))
print(after)
#Vertically stacking vectors
v1 = np.array([1,2,3,4])
v2 = np.array([5,6,7,8])
np.vstack([v1,v2,v2,v2])
#Horizontal stack
np.hstack((v1,v2))
```
## Load Data from file
```
filedata = np.genfromtxt('data.txt', delimiter=',')
filedata = filedata.astype('int32')
print(filedata)
```
#### Boolean Masking and Advanced Indexing
```
filedata[filedata > 50]
#You can index with a list in numpy
a = np.array([1,2,3,4,5,6,7,8,9])
a[[1,2,8]]
np.any(filedata>50, axis=0)
(filedata > 50) & (filedata < 100)
```
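Since `filedata` above depends on a local `data.txt`, here is the same masking pattern on a small self-contained array:

```python
import numpy as np

data = np.array([[10, 60, 120],
                 [55,  5,  95]])

mask = (data > 50) & (data < 100)  # elementwise boolean mask
print(data[mask])                  # values strictly between 50 and 100
print(np.any(data > 50, axis=0))   # per column: any value over 50?
```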
## Save and Load Arrays
```
x = np.arange(0,10,1)
y = x**2
print(x)
print(y)
np.savez("x_y-squared.npz", x_axis=x, y_axis=y)
```
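The keyword names become the keys inside the `.npz` archive, and `np.load` hands them back as a dict-like object. A round-trip sketch using a temporary directory so it runs anywhere:

```python
import os
import tempfile
import numpy as np

x = np.arange(0, 10, 1)
y = x ** 2

path = os.path.join(tempfile.mkdtemp(), "x_y-squared.npz")
np.savez(path, x_axis=x, y_axis=y)   # keywords become archive keys

with np.load(path) as archive:
    names = sorted(archive.files)
    restored = archive["y_axis"]
print(names)          # ['x_axis', 'y_axis']
print(restored[:4])   # [0 1 4 9]
```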
# Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial for neural networks; you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.
First off, let's load the dataset through torchvision.
```
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
## Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
```
# TODO: Define your network architecture here
import torch
from torch import nn
import torch.nn.functional as F
from torch import optim
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
```
# Train the network
Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).
Then write the training code. Remember the training pass is a fairly straightforward process:
* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
```
# TODO: Create the network, define the criterion and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
# TODO: Train the network here
epochs = 5
for e in range(epochs):
running_loss = 0
for image, label in trainloader:
images = image.view(image.shape[0], -1)
# TODO: Training pass
optimizer.zero_grad()
output = model(images)
loss = criterion(output, label)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
with torch.no_grad():
logps = model(img)
# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(logps)
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
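One subtlety in the cells above: the model ends in `nn.LogSoftmax`, yet the criterion is `nn.CrossEntropyLoss`, which applies log-softmax internally. The pairing still trains correctly because log-softmax is idempotent: applying it to log-probabilities (which already sum to 1 in probability space) changes nothing. Still, `nn.NLLLoss` is the conventional partner for a log-softmax output. A quick NumPy check of the idempotence claim:

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()  # shift for numerical stability
    return z - np.log(np.exp(z).sum())

logits = np.array([2.0, 1.0, 0.1])
log_probs = log_softmax(logits)

# exp(log_probs) sums to 1, so a second log-softmax changes nothing.
again = log_softmax(log_probs)
print(np.allclose(log_probs, again))  # True
```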
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Using a Single Slider to Set the Range
```
import plotly.plotly as py
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
styles = '''<style>.widget-hslider { width: 100%; }
.widget-hbox { width: 100% !important; }
.widget-slider { width: 100% !important; }</style>'''
HTML(styles)
#this widget will display our plotly chart
graph = GraphWidget("https://plotly.com/~jordanpeterson/889")
fig = py.get_figure("https://plotly.com/~jordanpeterson/889")
#find the range of the slider.
xmin, xmax = fig['layout']['xaxis']['range']
# use the interact decorator to tie a widget to the listener function
@interact(y=widgets.FloatRangeSlider(min=xmin, max=xmax, step=(xmax-xmin)/1000.0, continuous_update=False))
def update_plot(y):
graph.relayout({'xaxis.range[0]': y[0], 'xaxis.range[1]': y[1]})
#display the app
graph
%%html
<img src='https://cloud.githubusercontent.com/assets/12302455/16469485/42791e90-3e1f-11e6-8db4-2364bd610ce4.gif'>
```
#### Using Two Sliders to Set Range
```
import plotly.plotly as py
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
from traitlets import link
styles = '''<style>.widget-hslider { width: 100%; }
.widget-hbox { width: 100% !important; }
.widget-slider { width: 100% !important; }</style>'''
HTML(styles)
#this widget will display our plotly chart
graph = GraphWidget("https://plotly.com/~jordanpeterson/889")
fig = py.get_figure("https://plotly.com/~jordanpeterson/889")
#find the range of the slider.
xmin, xmax = fig['layout']['xaxis']['range']
# let's define our listener functions that will respond to changes in the sliders
def on_value_change_left(change):
graph.relayout({'xaxis.range[0]': change['new']})
def on_value_change_right(change):
graph.relayout({'xaxis.range[1]': change['new']})
# define the sliders
left_slider = widgets.FloatSlider(min=xmin, max=xmax, value=xmin, description="Left Slider")
right_slider = widgets.FloatSlider(min=xmin, max=xmax, value=xmax, description="Right Slider")
# put listeners on slider activity
left_slider.observe(on_value_change_left, names='value')
right_slider.observe(on_value_change_right, names='value')
# set a relationship between the left and right slider
link((left_slider, 'max'), (right_slider, 'value'))
link((left_slider, 'value'), (right_slider, 'min'))
# display our app
display(left_slider)
display(right_slider)
display(graph)
%%html
<img src='https://cloud.githubusercontent.com/assets/12302455/16469486/42891d0e-3e1f-11e6-9576-02c5f6c3d3c9.gif'>
```
#### Sliders with 3d Plots
```
import plotly.plotly as py
import ipywidgets as widgets
import numpy as np
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
g = GraphWidget('https://plotly.com/~DemoAccount/10147/')
x = y = np.arange(-5,5,0.1)
yt = x[:,np.newaxis]
# define our listener class
class z_data:
def __init__(self):
self.z = np.cos(x*yt)+np.sin(x*yt)*2
def on_z_change(self, name):
new_value = name['new']
self.z = np.cos(x*yt*(new_value+1)/100)+np.sin(x*yt*(new_value+1/100))
self.replot()
def replot(self):
g.restyle({ 'z': [self.z], 'colorscale': 'Viridis'})
# create sliders
z_slider = widgets.FloatSlider(min=0,max=30,value=1,step=0.05, continuous_update=False)
z_slider.description = 'Frequency'
z_slider.value = 1
# initialize listener class
z_state = z_data()
# activate listener on our slider
z_slider.observe(z_state.on_z_change, 'value')
# display our app
display(z_slider)
display(g)
%%html
<img src="https://cloud.githubusercontent.com/assets/12302455/16569550/bd02e030-4205-11e6-8087-d41c9b5d3681.gif">
```
#### Reference
```
help(GraphWidget)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'slider_example.ipynb', 'python/slider-widget/', 'IPython Widgets | plotly',
'Interacting with Plotly charts using Sliders',
title = 'Slider Widget with Plotly',
name = 'Slider Widget with Plotly',
has_thumbnail='true', thumbnail='thumbnail/ipython_widgets.jpg',
language='python', page_type='example_index',
display_as='chart_events', order=20,
ipynb= '~notebook_demo/91')
```
# Process an interferogram with ASF HyP3
https://hyp3-docs.asf.alaska.edu/using/sdk/
## Search for scenes
Scenes over Grand Mesa, Colorado, using https://asf.alaska.edu/api/
```
import requests
import shapely.geometry
roi = shapely.geometry.box(-108.3,39.2,-107.8,38.8)
polygonWKT = roi.wkt
baseurl = "https://api.daac.asf.alaska.edu/services/search/param"
data = dict(
intersectsWith=polygonWKT,
platform='Sentinel-1',
processingLevel="SLC",
beamMode='IW',
output='json',
start='2020-10-30T11:59:59Z',
end='2020-11-30T11:59:59Z',
#relativeOrbit=None,
#flightDirection=None,
)
r = requests.get(baseurl, params=data, timeout=100)
print(r.url)
# load results into pandas dataframe
import pandas as pd
df = pd.DataFrame(r.json()[0])
df.head()
# Easier to explore the inventory in plots
import hvplot.pandas
from bokeh.models.formatters import DatetimeTickFormatter
formatter = DatetimeTickFormatter(years='%m-%d')
timeseries = df.hvplot.scatter(x='startTime', y='relativeOrbit', c='relativeOrbit',
xformatter=formatter,
title='Acquisition times (UTC)')
import geopandas as gpd
import geoviews as gv
import panel as pn
gf_aoi = gpd.GeoDataFrame(geometry=[roi])
polygons = df.stringFootprint.apply(shapely.wkt.loads)
gf_footprints = gpd.GeoDataFrame(df, crs="EPSG:4326", geometry=polygons)
tiles = gv.tile_sources.StamenTerrainRetina.options(width=600, height=400)
aoi = gf_aoi.hvplot(geo=True, fill_color=None, line_color='m', hover=False)
footprints = gf_footprints.hvplot.polygons(geo=True, legend=False, alpha=0.2, c='relativeOrbit', title='Sentinel-1 Tracks')
mapview = tiles * footprints * aoi
pn.Column(mapview,timeseries)
```
```
df.relativeOrbit.unique()
orbit = '129'
reference = '2020-11-11'
secondary = '2020-10-30'
dfS = df[df.relativeOrbit == orbit]
granule1 = dfS.loc[dfS.sceneDate.str.startswith(reference), 'granuleName'].values[0]
granule2 = dfS.loc[dfS.sceneDate.str.startswith(secondary), 'granuleName'].values[0]
print(f'granule1: {granule1}')
print(f'granule2: {granule2}')
for ref in [reference, secondary]:
print(dfS.loc[dfS.sceneDate.str.startswith(ref), 'downloadUrl'].values[0])
```
## Process an InSAR pair (interferogram)
examples:
- https://nbviewer.jupyter.org/github/ASFHyP3/hyp3-sdk/blob/main/docs/sdk_example.ipynb
- https://hyp3-docs.asf.alaska.edu/using/sdk/
```
import hyp3_sdk
# ~/.netrc file used for credentials
hyp3 = hyp3_sdk.HyP3()
# Processing quota
hyp3.check_quota() #199 (200 scenes per month?)
job = hyp3.submit_insar_job(granule1,
granule2,
name='gm_20201111_20201030',
include_los_displacement=True,
include_inc_map=True)
# All jobs you've submitted
# NOTE: processing w/ defaults uses INSAR_GAMMA
# NOTE: re-run this cell to update results of batch job
batch = hyp3.find_jobs()
job = batch.jobs[0] # most recent job
job
# If you have lists of dictionaries, visualizing with a pandas dataframe is convenient
df = pd.DataFrame([job.to_dict() for job in batch])
df.head()
# Actually no, expiration time is not available for download...
#pd.to_datetime(df.expiration_time[0]) - pd.to_datetime(df.request_time[0])
# ImportError: IProgress not found. Please update jupyter and ipywidgets.
# but I think this still succeeds
job.download_files()
!ls -ltrh
# requires ipywidgets
#hyp3.watch(job)
```
## Process multiple pairs in batch mode
```
# with progress bar
#from tqdm.auto import tqdm
#insar_jobs = sdk.Batch()
#for reference in tqdm(granules):
# neighbors_metadata = asf_search.get_nearest_neighbors(reference, max_neighbors=2)
# for secondary_metadata in neighbors_metadata:
# insar_jobs += hyp3.submit_insar_job(reference, secondary_metadata['granuleName'], name='insar-example')
#print(insar_jobs)
# Can also submit jobs via web interface # Can also visit https://hyp3.asf.alaska.edu/pending_products
# Which then shows logs that can be sorted into 'submitted, failed, etc...'
```
# Activity #1: MarketMap
* another way to visualize mappable data
## 1.a : explore the dataset
```
# our usual stuff
%matplotlib inline
import pandas as pd
import numpy as np
#!pip install xlrd # JPN, might have to run this
# note: this is quering from the web! How neat is that??
df = pd.read_excel('https://query.data.world/s/ivl45pdpubos6jpsii3djsjwm2pcjv', skiprows=5)
# the above might take a while to load all the data
# what is in this dataframe? lets take a look at the top
df.head()
# this dataset is called: "Surgery Charges Across the U.S."
# and it's just showing us how much different procedures
# cost from different hospitals
# what kinds of data are we working with?
df.dtypes
# lets look at some summary data
# recall: this is like R's "summary" function
df.describe()
# so, things like the mean zipcode aren't
# meaningful, same thing with provider ID
# But certainly looking at the average
# total payments, discharges, might
# be useful
# lets look at how many separate types of surgery are
# represented in this dataset:
df["DRG Definition"].unique().size
# what about how many provider (hospital) names?
df["Provider Name"].unique().size
# how many states are represented
df["Provider State"].unique().size
# what are the state codes?
df["Provider State"].unique()
# lets figure out what the most common surgeries are via how
# many folks are discharged after each type of surgery
# (1)
most_common = df.groupby("DRG Definition")["Total Discharges"].sum()
most_common
# (2) but lets sort by the largest on top
most_common = df.groupby("DRG Definition")["Total Discharges"].sum().sort_values(ascending=False)
most_common
# (3) lets look at only the top 5, for fun
most_common[:5]
# (4) or we can only look at the names of the top 5:
most_common[:5].index.values
```
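The groupby-sum-sort pattern above, demonstrated on a tiny stand-in frame (the names and numbers here are made up purely to show the shape of the operation):

```python
import pandas as pd

df_toy = pd.DataFrame({
    "DRG Definition":   ["joint", "sepsis", "joint", "heart"],
    "Total Discharges": [100, 80, 50, 60],
})

most_common_toy = (df_toy.groupby("DRG Definition")["Total Discharges"]
                         .sum()
                         .sort_values(ascending=False))
print(most_common_toy)  # joint 150, sepsis 80, heart 60
```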
## 1.b: formatting data for MarketMap
* here we are going to practice doing some fancy things to clean this data
* this will be good practice for when you run into other datasets "in the wild"
```
# (1) lets create a little table of total discharges for
# each type of surgery & state
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum()
total_discharges
# (2) the above is not intuitive, lets prettify it
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum().unstack()
total_discharges
```
### Aside: lets quick check out what are the most frequent surgeries
```
# for our map, we are going to want to
# normalize the discharges for each surgery
# in each state by the total discharges
# across all states for that type of surgery
# lets add this to our total_discharges DF
total_discharges["Total"] = total_discharges.sum(axis = 1)
total_discharges["Total"].head() # just look at the first few
# finally, lets check out the most often
# performed surgery across all states
# we can do this by sorting our DF by this total we just
# calculated:
total_discharges.sort_values(by = "Total",
ascending=False,
inplace = True)
# now lets just look at the first few of our
# sorted array
total_discharges.head()
# so, from this we see that joint replacement
# or reattachment of a lower extremity is
# the most likely surgery (in number of discharges)
# followed by surgeries for sepsis and then heart failure
# neat. We won't need these for plotting, so we can remove our
# total column we just calculated
del total_discharges["Total"]
total_discharges.head()
# now we see that we are back to just states & surgeries
# *but* our sorting is still by the total that we
# previously calculated.
# spiffy!
```
## 1.c: plot data with bqplot
```
import bqplot
# by default bqplot does not import
# all packages, we have to
# explicitly import market_map
import bqplot.market_map # for access to market_map
# lets do our usual thing, but with a market map
# instead of a heat map
# scales:
x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things
c_sc = bqplot.ColorScale(scheme="Blues")
# just a color axes for now:
c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical')
# lets make the market map:
# (1) what should we plot for our color? lets take a look:
total_discharges.iloc[0].values, total_discharges.columns.values
# this is the total discharges for the most
# popular surgical procedure
# the columns will be states
# (2) lets put this into a map
mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values,
names = total_discharges.columns.values,
scales={'color':c_sc},
axes=[c_ax])
# (3) ok, but just clicking on things doesn't tell us too much
# lets add a little label to print out the total of the selected
import ipywidgets
label = ipywidgets.Label()
# link to market map
def get_data(change):
# (3.1)
#print(change['owner'].selected)
# (3.2) loop
v = 0.0 # to store total value
for s in change['owner'].selected:
v += total_discharges.iloc[0][total_discharges.iloc[0].index == s].values
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
total_discharges.iloc[0].name + \
' = ' + str(v[0]) # note: v is by default an array
label.value = l
mmap.observe(get_data,'selected')
#mmap
# (3)
ipywidgets.VBox([label,mmap])
```
## Discussion:
* think back to the map we had last week: we can certainly plot this information with a more geo-realistic map
* what are the pros & cons of each style of map? What do each highlight? How are each biased?
## IF we have time: Re-do with other mapping system:
```
from us_state_abbrev import us_state_abbrev
sc_geo = bqplot.AlbersUSA()
state_data = bqplot.topo_load('map_data/USStatesMap.json')
#(1)
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo})
#(2)
# library from last time
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
# color maps
import matplotlib.cm as cm
cmap = cm.Blues
# most popular surgery
popSurg = total_discharges.iloc[0]
# here, we will go through the process of getting colors to plot
# each state with its similar color to the marketmap above:
#!pip install webcolors
from webcolors import rgb_to_hex
d = {} # empty dict to store colors
for s in states_map.map_data['objects']['subunits']['geometries']:
if s['properties'] is not None:
#print(s['properties']['name'], s['id'])
# match states to abbreviations
state_abbrev = us_state_abbrev[s['properties']['name']]
#print(state_abbrev)
v = popSurg[popSurg.index == state_abbrev].values[0]
# renorm v to colors and then number of states
v = (v - popSurg.values.min())/(popSurg.values.max()-popSurg.values.min())
#print(v, int(cmap(v)[0]), int(cmap(v)[1]), int(cmap(v)[2]))
# convert to from 0-1 to 0-255 rgbs
c = [int(cmap(v)[i]*255) for i in range(3)]
#d[s['id']] = rgb_to_hex([int(cmap(v)[0]*255), int(cmap(v)[1]*255), int(cmap(v)[2]*255)])
d[s['id']] = rgb_to_hex(c)
def_tt = bqplot.Tooltip(fields=['name'])
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo}, colors = d, tooltip=def_tt)
# add interactions
states_map.interactions = {'click': 'select', 'hover': 'tooltip'}
# (3)
label = ipywidgets.Label()
# link to heat map
def get_data(change):
v = 0.0 # to store total value
if change['owner'].selected is not None:
for s in change['owner'].selected:
#print(s)
sn = state_names[s == ids][0]
state_abbrev = us_state_abbrev[sn]
v += popSurg[popSurg.index == state_abbrev].values[0]
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
popSurg.name + \
' = ' + str(v) # note: v is by default an array
label.value = l
states_map.observe(get_data,'selected')
fig=bqplot.Figure(marks=[states_map],
title='US States Map Example',
fig_margin={'top': 0, 'bottom': 0, 'left': 0, 'right': 0}) # try w/o first and see
#fig
# (3)
ipywidgets.VBox([label,fig])
```
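The color lookup in the cell above relies on min-max normalization to squeeze each state's value into [0, 1] before handing it to the colormap. The same one-liner in isolation (made-up values):

```python
import numpy as np

values = np.array([120.0, 60.0, 300.0, 180.0])
norm = (values - values.min()) / (values.max() - values.min())
print(norm)  # minimum maps to 0, maximum to 1
```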
# Activity #2: Real quick ipyleaflet
* since cartopy wasn't working for folks, we'll quickly look at another option: ipyleaflet
```
#!pip install ipyleaflet
from ipyleaflet import *
# note: you might have to close and reopen you notebook
# to see the map
m = Map(center=(52, 10), zoom=8, basemap=basemaps.Hydda.Full)
#(2) street maps
strata_all = basemap_to_tiles(basemaps.Strava.All)
m.add_layer(strata_all)
m
```
### Note: more examples available here - https://github.com/jupyter-widgets/ipyleaflet/tree/master/examples
# Activity #3: Networked data - Simple example
```
# lets start with some very basic node data
# **copy paste into chat **
node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# we'll use bqplot.Graph to plot these
graph = bqplot.Graph(node_data=node_data,
colors = ["red", "red", "red", "red"])
fig = bqplot.Figure(marks = [graph])
fig
# note that I can pick them up and move them around, but they aren't connected in any way
# lets make some connections
node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# let's link the 0th entry (Luke Skywalker) to both
# Jean-Luc Picard (1st entry) and Pikachu (3rd entry)
link_data = [{'source': 0, 'target': 1}, {'source': 0, 'target': 3}]
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"])
#(2) we can also play with the springiness of our links:
graph.charge = -300 # setting it to positive makes them want to overlap and is, in general, a lot of fun
# -300 is default
# (3) we can also change the link type:
graph.link_type = 'line' # arc = default, line, slant_line
# (4) highlight link direction, or not
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig
# we can do all the same things we've done with
# our previous map plots:
# for example, we can add a tooltip:
#(1)
tooltip = bqplot.Tooltip(fields=["media"])
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"],
tooltip=tooltip)
# we can also do interactive things with labels
label = ipywidgets.Label()
# note here that the calling sequence
# is a little different - instead
# of "change" we have "obj" and
# "element"
def printstuff(obj, element):
# (1.1)
#print(obj)
#print(element)
label.value = 'Media = ' + element['data']['media']
graph.on_element_click(printstuff)
fig = bqplot.Figure(marks = [graph])
ipywidgets.VBox([label,fig])
```
# Activity #4: Network data - subset of facebook friends dataset
* from: https://snap.stanford.edu/data/egonets-Facebook.html
* dataset of friends lists
#### Info about this dataset:
* the original file you can read in has about 80,000 different connections
* it is ordered by the most connected person (person 0) at the top
* because this network would be computationally slow and just a hairball - we're going to be working with downsampled data
* for example, a file tagged "000090_000010" starts with the 10th most connected person, and only includes connections up to the 90th most connected person
* it's worth noting that this dataset (linked here and on the webpage) also includes feature data like gender, last name, school, etc. - however it is too sparse to be of visualization use to us
Check out the other social network links at the SNAP data webpage!
```
# from 10 to 150 connections, a few large nodes
#filename = 'facebook_combined_sm000150_000010.txt'
# this might be too large: one large node, up to 100 connections
#filename='facebook_combined_sm000100.txt'
# start here
filename = 'facebook_combined_sm000090_000010.txt'
# then this one
#filename = 'facebook_combined_sm000030_000000.txt'
# note how different the topologies are
network = pd.read_csv('/Users/jillnaiman1/Downloads/'+filename,
sep=' ', names=['ind1', 'ind2'])
network
# build the network
node_data = []
link_data = []
color_data = [] # all same color
# add nodes
maxNet = max([network['ind1'].max(),network['ind2'].max()])
for i in range(maxNet+1):
node_data.append({"label": str(i), 'shape_attrs': {'r': 8} }) # small circles
# now, make links
for i in range(len(network)):
# we are linking the ith object to another jth object, but we
# have to figure out which jth object it is
source_id = network.iloc[i]['ind1']
target_id = network.iloc[i]['ind2']
link_data.append({'source': source_id, 'target': target_id})
color_data.append('blue')
#link_data,node_data
#color_data
# plot
graph = bqplot.Graph(node_data=node_data,
link_data = link_data,
colors=color_data)
# play with these for different graphs
graph.charge = -100
graph.link_type = 'line'
graph.link_distance=50
# there is no direction to links
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig.layout.min_width='1000px'
fig.layout.min_height='900px'
# note: I think this has to be the layout for this to look right
fig
# in theory, we could color this network by what school folks are in, or some such
# but while the dataset does contain some of these features, the
# answer rate is too sparse for our subset here
```
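The cell above notes that, in principle, the nodes could be colored by an attribute such as school. Below is a minimal sketch of how a per-node color list could be built from such an attribute; the `school_of` mapping and the palette are made-up stand-ins (the real feature files are too sparse to use here), and the resulting list is the kind of thing you could pass as `colors=` to `bqplot.Graph`.

```python
# Build one color per node id from a (hypothetical) attribute mapping.
school_of = {0: "A", 1: "A", 2: "B", 4: "B"}   # made-up attributes
palette = {"A": "red", "B": "blue"}

def node_colors(n_nodes, attrs, palette, default="gray"):
    # nodes with no known attribute fall back to the default color
    return [palette.get(attrs.get(i), default) for i in range(n_nodes)]

print(node_colors(5, school_of, palette))
# ['red', 'red', 'blue', 'gray', 'blue']
```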
# Note: the below is just prep if you want to make your own subset datasets
```
# prep fb data by downsampling
minCon = 0
maxCon = 30
G = pd.read_csv('/Users/jillnaiman1/Downloads/facebook_combined.txt',sep=' ', names=['ind1', 'ind2'])
Gnew = np.zeros([2],dtype='int')
# loop and append
Gnew = G.loc[G['ind1']==minCon].values[0]
for i in range(G.loc[G['ind1']==minCon].index[0],len(G)):  # range, not Python 2's xrange
gl = G.loc[i].values
if (gl[0] <= maxCon) and (gl[1] <= maxCon) and (gl[0] >= minCon) and (gl[1] >= minCon):
Gnew = np.vstack((Gnew,gl))
np.savetxt('/Users/jillnaiman1/spring2019online/week09/data/facebook_combined_sm' + \
str(maxCon).zfill(6) + '_' + str(minCon).zfill(6) + '.txt', Gnew,fmt='%i')
graph.link_distance
```
```
import sys
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0" #for training on gpu
from scipy import signal
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pickle
import time
from random import shuffle
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Conv1D, MaxPooling1D, Dropout, GlobalAveragePooling1D, Reshape
#path to data files
path = "./nine_movs_six_sub_split/"
#path where you want to save trained model and some other files
sec_path = "./"
def create_dataset(file_path, persons):
path = file_path + "{}_{}.txt"
sgn = []
lbl = []
for i in persons:
for j in range(9):
with open(path.format(i, j + 1), "rb") as fp: # Unpickling
data = pickle.load(fp)
for k in range(np.shape(data)[0]):
sgn.append(data[k])
lbl.append(j)
sgn = np.asarray(sgn, dtype=np.float32)
lbl = np.asarray(lbl, dtype=np.int32)
c = list(zip(sgn, lbl))
shuffle(c)
sgn, lbl = zip(*c)
sgn = np.asarray(sgn, dtype=np.float64)
lbl = np.asarray(lbl, dtype=np.int64)
print(sgn.shape)
train_signals = sgn[0:int(0.8 * len(sgn))]
train_labels = lbl[0:int(0.8 * len(lbl))]
val_signals = sgn[int(0.8*len(sgn)):]
val_labels = lbl[int(0.8*len(lbl)):]
#test_signals = sgn[int(0.8*len(sgn)):]
#test_labels = lbl[int(0.8*len(lbl)):]
train_labels = to_categorical(train_labels)
val_labels = to_categorical(val_labels)
#test_labels = to_categorical(test_labels)
return train_signals, train_labels, val_signals, val_labels
def create_dataset2(file_path, persons):
path = file_path + "{}_{}.txt"
sgn = []
lbl = []
i = persons
for j in range(9):
with open(path.format(i, j + 1), "rb") as fp: # Unpickling
data = pickle.load(fp)
for k in range(np.shape(data)[0]):
sgn.append(data[k])
lbl.append(j)
sgn = np.asarray(sgn, dtype=np.float32)
lbl = np.asarray(lbl, dtype=np.int32)
c = list(zip(sgn, lbl))
shuffle(c)
sgn, lbl = zip(*c)
sgn = np.asarray(sgn, dtype=np.float64)
lbl = np.asarray(lbl, dtype=np.int64)
print(sgn.shape)
train_signals = sgn[0:int(0.6 * len(sgn))]
train_labels = lbl[0:int(0.6 * len(lbl))]
val_signals = sgn[int(0.6*len(sgn)):int(0.8*len(sgn))]
val_labels = lbl[int(0.6*len(lbl)):int(0.8*len(lbl))]
test_signals = sgn[int(0.8*len(sgn)):]
test_labels = lbl[int(0.8*len(lbl)):]
train_labels = to_categorical(train_labels)
val_labels = to_categorical(val_labels)
test_labels = to_categorical(test_labels)
return train_signals, train_labels, val_signals, val_labels, test_signals, test_labels
def evaluate_model(model, expected_person_index = 2):
print("evaluate_model, expected_person_index:", expected_person_index)
persons = [1, 2, 3, 4, 5, 6]
persons.remove(expected_person_index)
train_signals, train_labels, val_signals, val_labels = create_dataset(path, persons)
model.evaluate(train_signals, train_labels)
train_signals, train_labels, val_signals, val_labels, test_signals, test_labels = create_dataset2(path, expected_person_index)
model.evaluate(train_signals, train_labels)
def plot_history(history):
plt.figure(figsize=(10,6))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Probability of correct recognition')
#plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.grid(True)
print(history.history['val_accuracy'])
# training the model on 5 of the 6 subjects
a = [1, 3, 4, 5, 6]
train_signals, train_labels, val_signals, val_labels = create_dataset(path, a)
num_classes = 9
num_sensors = 1
input_size = train_signals.shape[1]
model = Sequential()
model.add(Reshape((input_size, num_sensors), input_shape=(input_size, )))
model.add(Conv1D(50, 10, activation='relu', input_shape=(input_size, num_sensors)))
model.add(Conv1D(25, 10, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(100, 10, activation='relu'))
model.add(Conv1D(50, 10, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Dropout(0.5))
#next layers will be retrained
model.add(Conv1D(100, 10, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
#elapsed_time = time.time() - start_time # training time
#loss, accuracy = model.evaluate(val_signals, val_labels) # evaluating model on test data
#loss = float("{0:.3f}".format(loss))
#accuracy = float("{0:.3f}".format(accuracy))
#elapsed_time = float("{0:.3f}".format(elapsed_time))
#saving some data
#f = open(sec_path + "info.txt", 'w')
#f.writelines(["loss: ", str(loss), '\n', "accuracy: ", str(accuracy), '\n', "elapsed_time: ", str(elapsed_time), '\n'])
#saving model
#model.save(sec_path + "pretrained_model.h5")
#saving test data just in case
#cc = list(zip(test_signals, test_labels))
#with open(sec_path + "pretrained_model_test_data.txt", "wb") as fp:
# pickle.dump(cc, fp)
#saving history
#with open(sec_path + "pretrained_model_history.h5", "wb") as fp:
# pickle.dump(history.history, fp)
start_time = time.time()
history = model.fit(train_signals, train_labels,
steps_per_epoch=25,
epochs=100,
batch_size=None,
validation_data=(val_signals, val_labels),
#validation_steps=25
)
train_signals, train_labels, val_signals, val_labels, test_signals, test_labels = create_dataset2(path, 2)
plot_history(history)
evaluate_model(model, 2)
checkpoin_weights = []
for l in model.layers:
checkpoin_weights.append(l.get_weights())
model2 = Sequential()
model2.add(Reshape((input_size, num_sensors), input_shape=(input_size, )))
# new layer (an FIR filter)
# a single convolution with 11 taps
length_of_conv_filter = 11
model2.add(Conv1D(1, length_of_conv_filter, activation='linear', input_shape=(input_size, num_sensors), padding='same', name="Filter"))
model2.add(Dropout(0.5))
model2.add(Conv1D(50, 10, activation='relu', input_shape=(input_size, num_sensors), trainable=False))
model2.add(Conv1D(25, 10, activation='relu', trainable=False))
model2.add(MaxPooling1D(4))
model2.add(Conv1D(100, 10, activation='relu', trainable=False))
model2.add(Conv1D(50, 10, activation='relu', trainable=False))
model2.add(MaxPooling1D(4))
model2.add(Dropout(0.5))
# these layers also keep the pretrained weights frozen
# (note: trainable must be the boolean False, not the string 'False' -- a
# non-empty string is truthy and would leave the layers trainable)
model2.add(Conv1D(100, 10, activation='relu', trainable=False))
model2.add(GlobalAveragePooling1D())
model2.add(Dense(num_classes, activation='softmax', trainable=False))
#for i in range(1, 11):
# model2.layers[i+1].set_weights(checkpoin_weights[i])
w = model2.layers[1].get_weights()
print(w[0].shape)
# select the parameters for the first (FIR filter) layer
w[0] = w[0] * 0
w[0][int(length_of_conv_filter/2),0,0] = 1
w[1] = w[1]*0
plt.plot(w[0].flatten())
w = model2.layers[1].set_weights(w)
# set the weights of the already-trained layers
n_layers = 11
for i in range(1, n_layers):
model2.layers[i+2].set_weights(checkpoin_weights[i])
# model2.layers[i+1].trainable = False
model2.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
model2.evaluate(train_signals, train_labels)
# !tensorboard dev upload --logdir ./ \
# --name "Simple experiment" \
# --description "Training results from https://colab.sandbox.google.com/github/tensorflow/tensorboard/blob/master/docs/tbdev_getting_started.ipynb" \
# --one_shot
# !tensorboard dev list
#keras.utils.plot_model(model2, 'dense_image_classifier.png', show_shapes=True)
keras.utils.plot_model(model2, 'dense_image_classifier2.png', show_shapes=True)
#keras.utils.plot_model(model2, 'dense_image_classifier.png', show_shapes=True)
history2 = model2.fit(train_signals, train_labels, epochs=25,
validation_data=(test_signals, test_labels))
plot_history(history2)
# function to print the coefficients of a convolutional layer
def check_coef_conv_layer(model_name, num_layer, num_filter):
    # store the coefficients of the inspected layer
    layer = model_name.layers[num_layer].get_weights()
    # the 'a' (weight) coefficients of the inspected layer
    weights = layer[0]
    # the 'b' (bias) coefficients of the inspected layer
    biases = layer[1]
    # print the values
for i in range(10):
print("k{} = {:7.4f}".format(i, weights[i][0][num_filter]))
print("\nb = {:7.4f}".format(biases[num_filter]))
# function to print the coefficients of a dense (fully connected) layer
def check_coef_dense_layer(model_name, num_layer, num_filter):
    # store the weights of the inspected layer
    layer = model_name.layers[num_layer].get_weights()
    # the 'a' (weight) coefficients of the inspected layer
    weights = layer[0]
    # the 'b' (bias) coefficients of the inspected layer
    biases = layer[1]
    # print the values
for i in range(10):
print("k{} = {:7.4f}".format(i, weights[i][num_filter]))
print("\nb = {:7.4f}".format(biases[num_filter]))
l = model.layers[10].get_weights()
plot_history(history2)
evaluate_model(model2, 2)
# keras.utils.plot_model(model, 'dense_image_classifier.png', show_shapes=True)
b = model2.layers[1].get_weights()
# b[0] - neurons weights
w, h = signal.freqz(b[0].flatten())
plt.figure(figsize=(7, 5))
plt.plot(w, 20 * np.log10(abs(h)), 'b', label='frequency response, subject 1')
plt.grid(True)
plt.xlabel('Normalized frequency')
plt.ylabel('Amplitude, dB')
plt.legend(loc='lower right')
#print(b[0])
#plt.set_xlabel('Frequency [rad/sample]')
#plt.set_ylabel('Amplitude [dB]', color='b')
plt.figure(figsize=(8,5))
print(b[0].flatten())
print(abs(b[0].flatten().min()))
# plt.plot(np.log10(b[0].flatten()+0.1), label='impulse response')
plt.plot(b[0].flatten(), label='impulse response')
plt.grid(True)
plt.xlabel('coefficient index')
plt.ylabel('value')
plt.legend(loc='upper right')
plt.title('impulse response')
plt.plot(b[0].flatten())
print(b[0].flatten())
# now train on the third subject
train_signals, train_labels, val_signals, val_labels, test_signals, test_labels = create_dataset2(path, 3)
model.evaluate(train_signals, train_labels)
history2 = model2.fit(train_signals, train_labels, epochs=25,
validation_data=(test_signals, test_labels))
evaluate_model(model2, 3)
```
```
import csv
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
priceData = pd.read_csv('SP_500_close_2015.csv',index_col = 0)
priceData.head()
firms = pd.read_csv("SP_500_firms.csv")
firms.head()
percent_change = priceData.pct_change()
percent_change = percent_change.drop(percent_change.index[0])
percent_change.head()
#Or equivalently without using Pandas' built-in
#percent change function.
percent_changeD = {}
for i in percent_change:
percent_changeD[i] = []
for j in range(1,(len(priceData))):
ret = (priceData[i][j]-priceData[i][j-1])/priceData[i][j-1]
percent_changeD[i].append(ret)
percent_change2 = pd.DataFrame(data = percent_changeD,index=priceData.index[1:])
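# Illustrative aside (not part of the original analysis): a minimal
# standalone check of the daily-return formula used above,
# return_t = (price_t - price_{t-1}) / price_{t-1}, on made-up prices.
toy_prices = [100.0, 110.0, 99.0]
toy_returns = [(toy_prices[j] - toy_prices[j - 1]) / toy_prices[j - 1]
               for j in range(1, len(toy_prices))]
print(toy_returns)  # [0.1, -0.1]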
def fullname(ts):
return firms[firms.Symbol == ts].Name.values[0]
currMax = 0
for i in percent_change2:
for j in percent_change2.index:
if percent_change2[i][j] > currMax:
currMax = percent_change2[i][j]
bestCo = i
bestDate = j
print (fullname(bestCo), bestDate, currMax)
currMin = 1
for i in percent_change2:
for j in percent_change2.index:
if percent_change2[i][j] < currMin:
currMin = percent_change2[i][j]
worstCo = i
worstDate = j
print (fullname(worstCo), worstDate, currMin)
AnnualReturn = {}
yearMax = -math.inf
for i in percent_change2:
AnnualReturn[i] = (priceData[i][-1]-priceData[i][0])/priceData[i][0]
if AnnualReturn[i] > yearMax:
yearMax = AnnualReturn[i]
maxCo = i
print (yearMax, maxCo, fullname(maxCo))
AnnualReturn = {}
yearMin = math.inf
for i in percent_change2:
AnnualReturn[i] = (priceData[i][-1]-priceData[i][0])/priceData[i][0]
if AnnualReturn[i] < yearMin:
yearMin = AnnualReturn[i]
minCo = i
print (yearMin, minCo, fullname(minCo))
def mean(x):
return float(sum(x)) / len(x)
def std(x):
stdev = 0.0
for value in x:
difference = value - mean(x)
stdev = stdev + (difference ** 2)
stdev = (stdev / len(x))**(1/2)
return stdev
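# Illustrative aside: std() above is the *population* standard deviation
# (it divides by len(x), not len(x) - 1). The stdlib's statistics.pstdev
# uses the same convention, so the two agree on toy data:
import statistics
check_data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
check_mean = sum(check_data) / len(check_data)
check_sd = (sum((v - check_mean) ** 2 for v in check_data) / len(check_data)) ** 0.5
print(check_sd, statistics.pstdev(check_data))  # 2.0 2.0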
Volatility = {}
volMax = -math.inf
for i in percent_change2:
Volatility[i] = std(percent_change2[i])
if Volatility[i] > volMax:
volMax = Volatility[i]
volMaxC = i
print (volMax, volMaxC, fullname(volMaxC))
Volatility = {}
volMin = math.inf
for i in percent_change2:
Volatility[i] = std(percent_change2[i])
if Volatility[i] < volMin:
volMin = Volatility[i]
volMinC = i
print (volMin, volMinC, fullname(volMinC))
def corr(x,y):
xy = sum([a*b for a,b in zip(x,y)])
x2 = sum([i**2 for i in x])
y2 = sum([i**2 for i in y])
n = len(x)
numer = (n*xy - sum(x)*sum(y))
denom = ((n*x2 - sum(x)**2)**(1/2) * (n*y2 - sum(y)**2)**(1/2))
correlation = numer/denom
return correlation
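# Illustrative aside: corr() above is Pearson's r, so perfectly linear
# toy data should give +1 (up to float rounding). Same formula, inlined
# so this check stands on its own:
cx, cy = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
cn = len(cx)
c_num = cn * sum(a * b for a, b in zip(cx, cy)) - sum(cx) * sum(cy)
c_den = ((cn * sum(a * a for a in cx) - sum(cx) ** 2) ** (1/2) *
         (cn * sum(b * b for b in cy) - sum(cy) ** 2) ** (1/2))
print(c_num / c_den)  # ~1.0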
correlations = {}
for i in percent_change:
correlations[i] = {}
for i in correlations:
for j in percent_change:
correlations[i][j]=[]
for company1 in percent_change:
for company2 in percent_change:
if not correlations[company1][company2]:
x=percent_change[company1]
y=percent_change[company2]
if company1 == company2:
correlations[company1][company2] = 1
correlations[company2][company1] = 1
else:
correlations[company1][company2] = corr(x,y)
correlations[company2][company1] = corr(x,y)
def corr_print(company1, company2):
print ("The correlation coefficient between {} and {} is {}."
.format(fullname(company1), fullname(company2), correlations[company1][company2]))
corr_print("AAPL", "MMM")
ticker_symbols = list(priceData.columns.values)
def top_bottomcorr(ts):
corr_tb = []
for ss in ticker_symbols:
if ss == ts:
continue
corr_co = correlations[ts][ss]
corr_tb.append((corr_co, ss))
corr_tb.sort()
print ("Most Correlated:", fullname(corr_tb[-1][1]), "(", corr_tb[-1][0],")")
print ("Least Correlated:", fullname(corr_tb[0][1]), "(", corr_tb[0][0],")")
top_bottomcorr("GOOG")
correlations = percent_change.corr()
correlations = correlations.where(np.triu(np.ones(correlations.shape)).astype(bool))  # np.bool is removed in modern NumPy
correlations = correlations.stack().reset_index()
correlations.columns = ['Company1', 'Company2', 'Correlation']
correlation_tuples = [tuple(x) for x in correlations.values]
def mergeSort(array):
if len(array) > 1:
mid = len(array) //2
left = array[:mid]
right = array [mid:]
mergeSort(left)
mergeSort(right)
i = 0
j = 0
k = 0
while i < len(left) and j < len(right):
if left[i][2] > right[j][2]:
array[k] = left[i]
i = i + 1
else:
array[k] = right[j]
j = j + 1
k = k+1
while i < len(left):
array[k] = left[i]
i += 1
k += 1
while j < len(right):
array[k] = right[j]
j += 1
k += 1
return(array)
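# Illustrative aside: mergeSort above orders tuples by their third
# element in *descending* order -- the same result as the built-in
# sorted() with a key and reverse=True, which is a handy cross-check:
toy_triples = [("A", "B", 0.2), ("A", "C", 0.9), ("B", "C", 0.5)]
toy_sorted = sorted(toy_triples, key=lambda t: t[2], reverse=True)
print(toy_sorted)  # [('A', 'C', 0.9), ('B', 'C', 0.5), ('A', 'B', 0.2)]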
sortedWeights = mergeSort(correlation_tuples)
class Digraph():
def __init__(self,filename = None):
self.edges = {}
self.numEdges = 0
def addNode(self,node):
self.edges[node] = set()
def add_Edge(self,src,dest,weight):
if not self.hasNode(src):
self.addNode(src)
self.edges[src] = {}
if not self.hasNode(dest):
self.addNode(dest)
self.edges[dest] = {}
if not self.hasEdge(src, dest):
self.numEdges += 1
self.edges[src][dest] = weight
def childrenOf(self, v):
# Returns a node's children
return self.edges[v].items()
def hasNode(self, v):
return v in self.edges
def hasEdge(self, v, w):
return w in self.edges[v]
def listEdges(self):
ll = []
for src,values in self.edges.items():
for dest,weight in values.items():
ll.append([src,dest,weight])
return ll
def __str__(self):
result = ''
for src in self.edges:
for dest,weight in self.edges[src].items():
result = result + src + '->'\
+ dest + ', length ' + str(weight) + '\n'
return result[:-1]
class Graph(Digraph):
    def add_Edge(self, src, dest, weight):
        # an undirected graph stores each edge in both directions
        # (the overridden method must match Digraph's add_Edge name)
        Digraph.add_Edge(self, src, dest, weight)
        Digraph.add_Edge(self, dest, src, weight)
def init_graph(sortedWeights):
graph = Graph()
for x in sortedWeights:
graph.add_Edge(x[0],x[1],weight = x[2])
return graph
def init_nodePointers(graph):
nodePointers = {src:src for src in graph.edges}
return nodePointers
def init_nodeStarting(graph):
nodeStarting = {src:True for src in graph.edges}
return nodeStarting
def init_nodeBottom(graph):
nodeBottom = {src:True for src in graph.edges}
return nodeBottom
def findbottom(node, nodePointers):
source = node
destination = nodePointers[source]
while destination != source:
source = destination
destination = nodePointers[source]
return destination
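# Illustrative aside: findbottom/mergeSets above implement a union-find
# (disjoint-set) structure -- each node points at a parent, and following
# pointers reaches the representative ("bottom") of its cluster. A
# compact standalone sketch of the idea, on toy nodes:
uf_parent = {n: n for n in "ABCD"}

def uf_find(n):
    # follow pointers until a node points to itself (the "bottom")
    while uf_parent[n] != n:
        n = uf_parent[n]
    return n

def uf_union(a, b):
    ra, rb = uf_find(a), uf_find(b)
    if ra != rb:
        uf_parent[rb] = ra  # merge the two clusters

uf_union("A", "B")
uf_union("C", "D")
uf_union("B", "C")
print(uf_find("D"))  # A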
def mergeSets(sortedWeights, k):
sortedWeights = [value for value in sortedWeights
if value[0] != value[1]]
graph = init_graph(sortedWeights)
nodePointers = init_nodePointers(graph)
nodeStarting = init_nodeStarting(graph)
nodeBottom = init_nodeBottom(graph)
counter = 0
for key in sortedWeights:
if counter < k:
bottom1 = findbottom(key[0], nodePointers)
bottom2 = findbottom(key[1], nodePointers)
if bottom1 != bottom2:
nodePointers[bottom2] = bottom1
nodeBottom[bottom2] = False
nodeStarting[bottom1] = False
counter += 1
return (nodePointers, nodeStarting, nodeBottom)
def recoverSets(nodePointers, nodeStarting, nodeBottom):
    clusters = {}  # avoid shadowing the built-in `dict`
    for b_key, b_value in nodeBottom.items():
        if b_value:
            clusters.setdefault(b_key, set())
            for s_key, s_value in nodeStarting.items():
                if s_value and findbottom(s_key, nodePointers) == b_key:
                    # walk the pointer chain from the start node down to the bottom
                    current_node = s_key
                    while current_node != b_key:
                        clusters[b_key].add(current_node)
                        current_node = nodePointers[current_node]
            clusters[b_key].add(b_key)
    return list(clusters.values())
nodePointers, nodeStarting, nodeBottom = mergeSets(sortedWeights, 100000)
print(recoverSets(nodePointers, nodeStarting, nodeBottom))
# print("For k = 100000, " + str(len(cluster_100000)) + " clusters are generated." + '\n')
nodePointers, nodeStarting, nodeBottom = mergeSets(sortedWeights, 2000)
cluster_2000 = recoverSets(nodePointers, nodeStarting, nodeBottom)
print(cluster_2000)
print("For k = 2000, " + str(len(cluster_2000)) + " clusters are generated." + '\n')
percent_change[['DAL', 'AAL', 'LUV', 'UAL', 'ALK']].plot()
pricesScaled = priceData.divide(priceData.iloc[0])
pricesScaled[['MAR', 'HOT', 'WYN']].plot()
```
# Transposed Convolution
:label:`sec_transposed_conv`
The CNN layers we have seen so far, such as convolutional layers ( :numref:`sec_conv_layer`) and pooling layers ( :numref:`sec_pooling`), typically reduce (downsample) the spatial dimensions (height and width) of the input.
However, it is often convenient for the input and output images to have the same spatial dimensions in pixel-level tasks such as semantic segmentation.
For example, the channel dimension at an output pixel can then hold the classification results for the input pixel at the same spatial position.
To achieve this, especially after the spatial dimensions are reduced by CNN layers, we can use another type of CNN layer that can increase (upsample) the spatial dimensions of intermediate feature maps.
In this section, we will introduce
*transposed convolution* :cite:`Dumoulin.Visin.2016`,
which is used to reverse the reduction in spatial size caused by downsampling.
```
import torch
from torch import nn
from d2l import torch as d2l
```
## Basic Operation
Ignoring channels for now, let's begin with the basic transposed convolution operation with stride of 1 and no padding.
Suppose that we are given an $n_h \times n_w$ input tensor and a $k_h \times k_w$ kernel.
Sliding the kernel window with stride of 1 for $n_w$ times in each row and $n_h$ times in each column yields a total of $n_h n_w$ intermediate results.
Each intermediate result is an $(n_h + k_h - 1) \times (n_w + k_w - 1)$ tensor that is initialized as zeros.
To compute each intermediate tensor, each element in the input tensor is multiplied by the kernel so that the resulting $k_h \times k_w$ tensor replaces a portion of each intermediate tensor.
Note that the position of the replaced portion in each intermediate tensor corresponds to the position of the element in the input tensor used for the computation.
In the end, all the intermediate results are summed over to produce the output.
As an example, :numref:`fig_trans_conv` illustrates how a transposed convolution with a $2\times 2$ kernel is computed for a $2\times 2$ input tensor.

:label:`fig_trans_conv`
We can (**implement this basic transposed convolution operation**) `trans_conv` for an input matrix `X` and a kernel matrix `K`.
```
def trans_conv(X, K):
h, w = K.shape
Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Y[i: i + h, j: j + w] += X[i, j] * K
return Y
```
In contrast to the regular convolution (in :numref:`sec_conv_layer`) that *reduces* input elements via the kernel, the transposed convolution *broadcasts* input elements via the kernel, thereby producing an output that is larger than the input.
We can construct the input tensor `X` and the kernel tensor `K` from :numref:`fig_trans_conv` to [**validate the output of the above implementation**]
of the basic two-dimensional transposed convolution operation.
```
X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
trans_conv(X, K)
```
Alternatively, when the input `X` and the kernel `K` are both four-dimensional tensors, we can [**use high-level APIs to obtain the same results**].
```
X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)
tconv.weight.data = K
tconv(X)
```
## [**Padding, Strides, and Multiple Channels**]
Different from regular convolution, where padding is applied to the input, it is applied to the output in transposed convolution.
For example, when specifying the padding number on either side of the height and width as 1, the first and last rows and columns will be removed from the transposed convolution output.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, padding=1, bias=False)
tconv.weight.data = K
tconv(X)
```
In transposed convolution, strides are specified for intermediate results (thus the output), not for the input.
Using the same input and kernel tensors from :numref:`fig_trans_conv`, changing the stride from 1 to 2 increases both the height and width of the intermediate tensors, hence the output tensor in :numref:`fig_trans_conv_stride2`.

:label:`fig_trans_conv_stride2`
The following code snippet can validate the transposed convolution output for stride of 2 in :numref:`fig_trans_conv_stride2`.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
tconv.weight.data = K
tconv(X)
```
For multiple input and output channels, transposed convolution works in the same way as regular convolution.
Suppose that the input has $c_i$ channels, and that transposed convolution assigns a $k_h\times k_w$ kernel tensor to each input channel.
When multiple output channels are specified, we will have a $c_i\times k_h\times k_w$ kernel for each output channel.
As in all, if we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except for the number of output channels being the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
This can be illustrated in the following example.
```
X = torch.rand(size=(1, 10, 16, 16))
conv = nn.Conv2d(10, 20, kernel_size=5, padding=2, stride=3)
tconv = nn.ConvTranspose2d(20, 10, kernel_size=5, padding=2, stride=3)
tconv(conv(X)).shape == X.shape
```
## [**Connection to Matrix Transposition**]
:label:`subsec-connection-to-mat-transposition`
Why is the transposed convolution named after matrix transposition?
To explain, let's first see how to implement convolutions using matrix multiplications.
In the example below, we define a $3\times 3$ input `X` and a $2\times 2$ convolution kernel `K`, and then use the `corr2d` function to compute the convolution output `Y`.
```
X = torch.arange(9.0).reshape(3, 3)
K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
Y = d2l.corr2d(X, K)
Y
```
Next, we rewrite the convolution kernel `K` as a sparse weight matrix `W` containing a lot of zeros.
The shape of the weight matrix is ($4$, $9$), where the nonzero elements come from the convolution kernel `K`.
```
def kernel2matrix(K):
k, W = torch.zeros(5), torch.zeros((4, 9))
k[:2], k[3:5] = K[0, :], K[1, :]
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
return W
W = kernel2matrix(K)
W
```
Concatenate the input `X` row by row to get a vector of length 9.
Then the matrix multiplication of `W` and the vectorized `X` gives a vector of length 4.
After reshaping it, we can obtain the same result `Y` as from the original convolution operation above: we just implemented convolutions using matrix multiplications.
```
Y == torch.matmul(W, X.reshape(-1)).reshape(2, 2)
```
Likewise, we can implement transposed convolutions using matrix multiplications.
In the following example, we take the $2 \times 2$ output `Y` from the above regular convolution as input to the transposed convolution.
To implement this operation by multiplying matrices, we only need to transpose the weight matrix `W` with the new shape $(9, 4)$.
```
Z = trans_conv(Y, K)
Z == torch.matmul(W.T, Y.reshape(-1)).reshape(3, 3)
```
Abstractly, consider convolution with an input vector $\mathbf{x}$ and a weight matrix $\mathbf{W}$: the forward propagation function of the convolution can be implemented by multiplying its input with the weight matrix and outputting a vector $\mathbf{y}=\mathbf{W}\mathbf{x}$.
Since backpropagation follows the chain rule and $\nabla_{\mathbf{x}}\mathbf{y}=\mathbf{W}^\top$, the backpropagation function of the convolution can be implemented by multiplying its input with the transposed weight matrix $\mathbf{W}^\top$.
Therefore, the transposed convolutional layer simply exchanges the forward propagation function and the backpropagation function of the convolutional layer: its forward propagation and backpropagation functions multiply their input vector with $\mathbf{W}^\top$ and $\mathbf{W}$, respectively.
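The same $\mathbf{W}$ / $\mathbf{W}^\top$ relationship can also be reproduced without PyTorch. The sketch below mirrors `kernel2matrix` in plain NumPy (assuming only that NumPy is available); it is an illustration of the idea, not part of the d2l implementation:

```python
import numpy as np

# Regular convolution of a 3x3 input with a 2x2 kernel as a (4, 9)
# matrix multiply; the transposed convolution is then a multiply by W.T.
X = np.arange(9.0).reshape(3, 3)
K = np.array([[1.0, 2.0], [3.0, 4.0]])

k = np.zeros(5)
k[:2], k[3:5] = K[0], K[1]
W = np.zeros((4, 9))
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k

Y = (W @ X.reshape(-1)).reshape(2, 2)    # convolution output, shape (2, 2)
Z = (W.T @ Y.reshape(-1)).reshape(3, 3)  # transposed convolution, shape (3, 3)
print(Y.shape, Z.shape)  # (2, 2) (3, 3)
```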
## Summary
* In contrast to the regular convolution that reduces input elements via the kernel, the transposed convolution broadcasts input elements via the kernel, thereby producing an output that is larger than the input.
* If we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except for the number of output channels being the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
* We can implement convolutions using matrix multiplications. The transposed convolutional layer simply exchanges the forward propagation function and the backpropagation function of the convolutional layer.
## Exercises
1. In :numref:`subsec-connection-to-mat-transposition`, the convolution input `X` and the transposed convolution output `Z` have the same shape. Do they also have the same values? Why?
1. Is it efficient to use matrix multiplications to implement convolutions? Why?
[Discussions](https://discuss.d2l.ai/t/3302)
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Better performance with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/function"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/function.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In TensorFlow 2, [eager execution](eager.ipynb) is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability.
You can use `tf.function` to make graphs out of your programs. It is a transformation tool that creates Python-independent dataflow graphs out of your Python code. This will help you create performant and portable models, and it is required to use `SavedModel`.
This guide will help you conceptualize how `tf.function` works under the hood, so you can use it effectively.
The main takeaways and recommendations are:
- Debug in eager mode, then decorate with `@tf.function`.
- Don't rely on Python side effects like object mutation or list appends.
- `tf.function` works best with TensorFlow ops; NumPy and Python calls are converted to constants.
## Setup
```
import tensorflow as tf
```
Define a helper function to demonstrate the kinds of errors you might encounter:
```
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
```
## Basics
### Usage
A `Function` you define (for example by applying the `@tf.function` decorator) is just like a core TensorFlow operation: You can execute it eagerly; you can compute gradients; and so on.
```
@tf.function # The decorator converts `add` into a `Function`.
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
```
You can use `Function`s inside other `Function`s.
```
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
```
`Function`s can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.
```
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# Warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
```
### Tracing
This section exposes how `Function` works under the hood, including implementation details *which may change in the future*. However, once you understand why and when tracing happens, it's much easier to use `tf.function` effectively!
#### What is "tracing"?
A `Function` runs your program in a [TensorFlow Graph](https://www.tensorflow.org/guide/intro_to_graphs#what_are_graphs). However, a `tf.Graph` cannot represent all the things that you'd write in an eager TensorFlow program. For instance, Python supports polymorphism, but `tf.Graph` requires its inputs to have a specified data type and dimension. Or you may perform side tasks like reading command-line arguments, raising an error, or working with a more complex Python object; none of these things can run in a `tf.Graph`.
`Function` bridges this gap by separating your code in two stages:
1) In the first stage, referred to as "**tracing**", `Function` creates a new `tf.Graph`. Python code runs normally, but all TensorFlow operations (like adding two Tensors) are *deferred*: they are captured by the `tf.Graph` and not run.
2) In the second stage, a `tf.Graph` which contains everything that was deferred in the first stage is run. This stage is much faster than the tracing stage.
Depending on its inputs, `Function` will not always run the first stage when it is called. See ["Rules of tracing"](#rules_of_tracing) below to get a better sense of how it makes that determination. Skipping the first stage and only executing the second stage is what gives you TensorFlow's high performance.
When `Function` does decide to trace, the tracing stage is immediately followed by the second stage, so calling the `Function` both creates and runs the `tf.Graph`. Later you will see how you can run only the tracing stage with [`get_concrete_function`](#obtaining_concrete_functions).
When we pass arguments of different types into a `Function`, both stages are run:
```
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
```
Note that if you repeatedly call a `Function` with the same argument type, TensorFlow will skip the tracing stage and reuse a previously traced graph, as the generated graph would be identical.
```
# This doesn't print 'Tracing with ...'
print(double(tf.constant("b")))
```
You can use `pretty_printed_concrete_signatures()` to see all of the available traces:
```
print(double.pretty_printed_concrete_signatures())
```
So far, you've seen that `tf.function` creates a cached, dynamic dispatch layer over TensorFlow's graph tracing logic. To be more specific about the terminology:
- A `tf.Graph` is the raw, language-agnostic, portable representation of a TensorFlow computation.
- A `ConcreteFunction` wraps a `tf.Graph`.
- A `Function` manages a cache of `ConcreteFunction`s and picks the right one for your inputs.
- `tf.function` wraps a Python function, returning a `Function` object.
- **Tracing** creates a `tf.Graph` and wraps it in a `ConcreteFunction`, also known as a **trace.**
#### Rules of tracing
A `Function` determines whether to reuse a traced `ConcreteFunction` by computing a **cache key** from an input's args and kwargs. A **cache key** is a key that identifies a `ConcreteFunction` based on the input args and kwargs of the `Function` call, according to the following rules (which may change):
- The key generated for a `tf.Tensor` is its shape and dtype.
- The key generated for a `tf.Variable` is a unique variable id.
- The key generated for a Python primitive (like `int`, `float`, `str`) is its value.
- The key generated for nested `dict`s, `list`s, `tuple`s, `namedtuple`s, and [`attr`](https://www.attrs.org/en/stable/)s is the flattened tuple of leaf-keys (see `nest.flatten`). (As a result of this flattening, calling a concrete function with a different nesting structure than the one used during tracing will result in a TypeError).
- For all other Python types the key is unique to the object. This way a function or method is traced independently for each instance it is called with.
Note: Cache keys are based on the `Function` input parameters so changes to global and [free variables](https://docs.python.org/3/reference/executionmodel.html#binding-of-names) alone will not create a new trace. See [this section](#depending_on_python_global_and_free_variables) for recommended practices when dealing with Python global and free variables.
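To make the dispatch concrete, here is a plain-Python sketch (no TensorFlow required) of a cache keyed on a type-level signature, loosely analogous to the shape-and-dtype key of a `tf.Tensor`. The `trace` and `function` helpers below are invented for illustration only — they are not part of any TensorFlow API:
```
import functools

def trace(fn, key):
    # Stand-in for tracing: build a "graph" (here, just the function) for one signature.
    print(f"Tracing {fn.__name__} for key {key}")
    return fn

def function(fn):
    # A toy analogue of tf.function's cache: one "trace" per cache key.
    traces = {}

    @functools.wraps(fn)
    def wrapper(*args):
        # A real tf.Tensor's key is its shape and dtype; here we
        # approximate it with the Python type of each argument.
        key = tuple(type(a).__name__ for a in args)
        if key not in traces:          # cache miss -> "retrace"
            traces[key] = trace(fn, key)
        return traces[key](*args)      # cache hit -> reuse the existing trace

    wrapper.traces = traces
    return wrapper

@function
def double(a):
    return a + a

double(1)                  # traces for ('int',)
double(2)                  # reuses the ('int',) trace
double("a")                # traces for ('str',)
print(len(double.traces))  # 2 distinct traces
```
Just as with a real `Function`, a second call with the same signature skips the expensive first stage entirely.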
#### Controlling retracing
Retracing, which is when your `Function` creates more than one trace, helps ensure that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your `Function` retraces a new graph for every call, you'll find that your code executes more slowly than if you didn't use `tf.function`.
To control the tracing behavior, you can use the following techniques:
- Specify `input_signature` in `tf.function` to limit tracing.
```
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(x % 2 == 0, x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# You specified a 1-D tensor in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
# You specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([1.0, 2.0]))
```
- Specify a \[None\] dimension in `tf.TensorSpec` to allow for flexibility in trace reuse.
Since TensorFlow matches tensors based on their shape, using a `None` dimension as a wildcard will allow `Function`s to reuse traces for variably-sized input. Variably-sized input can occur if you have sequences of different length, or images of different sizes for each batch (See the [Transformer](../tutorials/text/transformer.ipynb) and [Deep Dream](../tutorials/generative/deepdream.ipynb) tutorials for example).
```
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
```
- Cast Python arguments to Tensors to reduce retracing.
Often, Python arguments are used to control hyperparameters and graph constructions - for example, `num_layers=10` or `training=True` or `nonlinearity='relu'`. So, if the Python argument changes, it makes sense that you'd have to retrace the graph.
However, it's possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graph is actually identical, so retracing is unnecessary.
```
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
```
If you need to force retracing, create a new `Function`. Separate `Function` objects are guaranteed not to share traces.
```
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
```
### Obtaining concrete functions
Every time a function is traced, a new concrete function is created. You can directly obtain a concrete function, by using `get_concrete_function`.
```
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
```
Printing a `ConcreteFunction` displays a summary of its input arguments (with types) and its output type.
```
print(double_strings)
```
You can also directly retrieve a concrete function's signature.
```
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
```
Using a concrete trace with incompatible types will throw an error:
```
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
```
You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
```
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
```
### Obtaining graphs
Each concrete function is a callable wrapper around a `tf.Graph`. Although retrieving the actual `tf.Graph` object is not something you'll normally need to do, you can obtain it easily from any concrete function.
```
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
```
### Debugging
In general, debugging code is easier in eager mode than inside `tf.function`. You should ensure that your code executes error-free in eager mode before decorating with `tf.function`. To assist in the debugging process, you can call `tf.config.run_functions_eagerly(True)` to globally disable and reenable `tf.function`.
When tracking down issues that only appear within `tf.function`, here are some tips:
- Plain old Python `print` calls only execute during tracing, helping you track down when your function gets (re)traced.
- `tf.print` calls will execute every time, and can help you track down intermediate values during execution.
- `tf.debugging.enable_check_numerics` is an easy way to track down where NaNs and Inf are created.
- `pdb` (the [Python debugger](https://docs.python.org/3/library/pdb.html)) can help you understand what's going on during tracing. (Caveat: `pdb` will drop you into AutoGraph-transformed source code.)
## AutoGraph transformations
AutoGraph is a library that is on by default in `tf.function`, and transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow like `if`, `for`, `while`.
TensorFlow ops like `tf.cond` and `tf.while_loop` continue to work, but control flow is often easier to write and understand when written in Python.
```
# A simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
```
If you're curious, you can inspect the code AutoGraph generates.
```
print(tf.autograph.to_code(f.python_function))
```
### Conditionals
AutoGraph will convert some `if <condition>` statements into the equivalent `tf.cond` calls. This substitution is made if `<condition>` is a Tensor. Otherwise, the `if` statement is executed as a Python conditional.
A Python conditional executes during tracing, so exactly one branch of the conditional will be added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.
`tf.cond` traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; check out [AutoGraph tracing effects](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#effects-of-the-tracing-process) for more information.
```
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
```
See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#if-statements) for additional restrictions on AutoGraph-converted if statements.
### Loops
AutoGraph will convert some `for` and `while` statements into the equivalent TensorFlow looping ops, like `tf.while_loop`. If not converted, the `for` or `while` loop is executed as a Python loop.
This substitution is made in the following situations:
- `for x in y`: if `y` is a Tensor, convert to `tf.while_loop`. In the special case where `y` is a `tf.data.Dataset`, a combination of `tf.data.Dataset` ops are generated.
- `while <condition>`: if `<condition>` is a Tensor, convert to `tf.while_loop`.
A Python loop executes during tracing, adding additional ops to the `tf.Graph` for every iteration of the loop.
A TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated `tf.Graph`.
See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#while-statements) for additional restrictions on AutoGraph-converted `for` and `while` statements.
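The contrast can be sketched without TensorFlow at all. Below, a hypothetical `record` call stands in for adding an op to a graph, so a trace-time Python loop records one op per iteration while a graph-style loop records its body just once:
```
ops = []

def record(op):
    # Stand-in for capturing an op into a tf.Graph during tracing.
    ops.append(op)

def python_loop(n):
    # A Python loop runs during tracing: the body is recorded n times (unrolled).
    for i in range(n):
        record(f"add_{i}")

def graph_loop():
    # A graph-style loop records the body once; the iteration count
    # is decided later, at execution time.
    record("while_loop(body=add)")

ops.clear(); python_loop(5)
print(len(ops))   # 5 ops: the loop was unrolled into the "graph"

ops.clear(); graph_loop()
print(len(ops))   # 1 op: the body appears once
```
This is why looping over large Python data inside a `tf.function` can blow up graph size, as the next section demonstrates with real graphs.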
#### Looping over Python data
A common pitfall is to loop over Python/NumPy data within a `tf.function`. This loop will execute during the tracing process, adding a copy of your model to the `tf.Graph` for each iteration of the loop.
If you want to wrap the entire training loop in `tf.function`, the safest way to do this is to wrap your data as a `tf.data.Dataset` so that AutoGraph will dynamically unroll the training loop.
```
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
```
When wrapping Python/NumPy data in a Dataset, be mindful of `tf.data.Dataset.from_generator` versus `tf.data.Dataset.from_tensors`. The former will keep the data in Python and fetch it via `tf.py_function` which can have performance implications, whereas the latter will bundle a copy of the data as one large `tf.constant()` node in the graph, which can have memory implications.
Reading data from files via `TFRecordDataset`, `CsvDataset`, etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of data, without having to involve Python. To learn more, see the [`tf.data`: Build TensorFlow input pipelines](../../guide/data) guide.
#### Accumulating values in a loop
A common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use `tf.TensorArray` to accumulate results from a dynamically unrolled loop.
```
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
```
## Limitations
TensorFlow `Function` has a few limitations by design that you should be aware of when converting a Python function to a `Function`.
### Executing Python side effects
Side effects, like printing, appending to lists, and mutating globals, can behave unexpectedly inside a `Function`, sometimes executing twice or not at all. They only happen the first time you call a `Function` with a set of inputs. Afterwards, the traced `tf.Graph` is reexecuted, without executing the Python code.
The general rule of thumb is to avoid relying on Python side effects in your logic and only use them to debug your traces. Otherwise, TensorFlow APIs like `tf.data`, `tf.print`, `tf.summary`, `tf.Variable.assign`, and `tf.TensorArray` are the best way to ensure your code will be executed by the TensorFlow runtime with each call.
```
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
```
If you would like to execute Python code during each invocation of a `Function`, `tf.py_function` is an exit hatch. The drawback of `tf.py_function` is that it's not portable or particularly performant, cannot be saved with SavedModel, and does not work well in distributed (multi-GPU, TPU) setups. Also, since `tf.py_function` has to be wired into the graph, it casts all inputs/outputs to tensors.
#### Changing Python global and free variables
Changing Python global and [free variables](https://docs.python.org/3/reference/executionmodel.html#binding-of-names) counts as a Python side effect, so it only happens during tracing.
```
external_list = []
@tf.function
def side_effect(x):
print('Python side effect')
external_list.append(x)
side_effect(1)
side_effect(1)
side_effect(1)
# The list append only happened once!
assert len(external_list) == 1
```
You should avoid mutating containers like lists, dicts, and other objects that live outside the `Function`. Instead, use arguments and TF objects. For example, the section ["Accumulating values in a loop"](#accumulating_values_in_a_loop) has one example of how list-like operations can be implemented.
You can, in some cases, capture and manipulate state if it is a [`tf.Variable`](https://www.tensorflow.org/guide/variable). This is how the weights of Keras models are updated with repeated calls to the same `ConcreteFunction`.
#### Using Python iterators and generators
Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, they are examples of Python side effects and therefore only happen during tracing.
```
@tf.function
def buggy_consume_next(iterator):
tf.print("Value:", next(iterator))
iterator = iter([1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
```
Just like how TensorFlow has a specialized `tf.TensorArray` for list constructs, it has a specialized `tf.data.Iterator` for iteration constructs. See the section on [AutoGraph transformations](#autograph_transformations) for an overview. Also, the [`tf.data`](https://www.tensorflow.org/guide/data) API can help implement generator patterns:
```
@tf.function
def good_consume_next(iterator):
# This is ok, iterator is a tf.data.Iterator
tf.print("Value:", next(iterator))
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
iterator = iter(ds)
good_consume_next(iterator)
good_consume_next(iterator)
good_consume_next(iterator)
```
### Deleting tf.Variables between `Function` calls
Another error you may encounter is a garbage-collected variable. `ConcreteFunction`s only retain [WeakRefs](https://docs.python.org/3/library/weakref.html) to the variables they close over, so you must keep a strong reference to any such variables yourself.
```
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
# The original variable object gets garbage collected, since there are no more
# references to it.
external_var = tf.Variable(4)
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
```
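The mechanism here is plain Python. A minimal sketch with the stdlib `weakref` module shows the same effect (the `Var` class below is a made-up stand-in for a variable object):
```
import weakref

class Var:
    # Made-up stand-in for a variable object.
    def __init__(self, value):
        self.value = value

v = Var(3)
ref = weakref.ref(v)   # like a ConcreteFunction, keep only a weak reference
print(ref().value)     # 3: the object is still alive

v = Var(4)             # rebinding drops the last strong reference...
print(ref())           # None: ...so CPython's reference counting freed
                       # the original object immediately
```
Holding the original object in another name (a strong reference) would keep `ref()` alive — which is exactly why you must keep references to the variables a `ConcreteFunction` closes over.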
### All outputs of a tf.function must be return values
With the exception of `tf.Variable`s, a tf.function must return all its
outputs. Attempting to directly access any tensors from a function without
going through return values causes "leaks".
For example, the function below "leaks" the tensor `a` through the Python
global `x`:
```
x = None
@tf.function
def leaky_function(a):
global x
x = a + 1 # Bad - leaks local tensor
return a + 2
correct_a = leaky_function(tf.constant(1))
print(correct_a.numpy()) # Good - value obtained from function's returns
with assert_raises(AttributeError):
x.numpy() # Bad - tensor leaked from inside the function, cannot be used here
print(x)
```
This is true even if the leaked value is also returned:
```
@tf.function
def leaky_function(a):
global x
x = a + 1 # Bad - leaks local tensor
return x # Good - uses local tensor
correct_a = leaky_function(tf.constant(1))
print(correct_a.numpy()) # Good - value obtained from function's returns
with assert_raises(AttributeError):
x.numpy() # Bad - tensor leaked from inside the function, cannot be used here
print(x)
@tf.function
def captures_leaked_tensor(b):
b += x # Bad - `x` is leaked from `leaky_function`
return b
with assert_raises(TypeError):
captures_leaked_tensor(tf.constant(2))
```
Usually, leaks such as these occur when you use Python statements or data structures.
In addition to leaking inaccessible tensors, such statements are also likely wrong because they count as Python side effects, and are not guaranteed to execute at every function call.
Common ways to leak local tensors also include mutating an external Python collection, or an object:
```
class MyClass:
def __init__(self):
self.field = None
external_list = []
external_object = MyClass()
def leaky_function():
a = tf.constant(1)
external_list.append(a) # Bad - leaks tensor
external_object.field = a # Bad - leaks tensor
```
## Known Issues
If your `Function` is not evaluating correctly, the error may be explained by these known issues which are planned to be fixed in the future.
### Depending on Python global and free variables
`Function` creates a new `ConcreteFunction` when called with a new value of a Python argument. However, it does not do that for the Python closure, globals, or nonlocals of that `Function`. If their value changes in between calls to the `Function`, the `Function` will still use the values they had when it was traced. This is different from how regular Python functions work.
For that reason, we recommend a functional programming style that uses arguments instead of closing over outer names.
```
@tf.function
def buggy_add():
return 1 + foo
@tf.function
def recommended_add(foo):
return 1 + foo
foo = 1
print("Buggy:", buggy_add())
print("Correct:", recommended_add(foo))
print("Updating the value of `foo` to 100!")
foo = 100
print("Buggy:", buggy_add()) # Did not change!
print("Correct:", recommended_add(foo))
```
You can close over outer names, as long as you don't update their values.
#### Depending on Python objects
The recommendation to pass Python objects as arguments into `tf.function` has a number of known issues that are expected to be fixed in the future. In general, you can rely on consistent tracing if you use a Python primitive or `tf.nest`-compatible structure as an argument or pass in a *different* instance of an object into a `Function`. However, `Function` will *not* create a new trace when you pass **the same object and only change its attributes**.
```
class SimpleModel(tf.Module):
def __init__(self):
# These values are *not* tf.Variables.
self.bias = 0.
self.weight = 2.
@tf.function
def evaluate(model, x):
return model.weight * x + model.bias
simple_model = SimpleModel()
x = tf.constant(10.)
print(evaluate(simple_model, x))
print("Adding bias!")
simple_model.bias += 5.0
print(evaluate(simple_model, x)) # Didn't change :(
```
Using the same `Function` to evaluate the updated instance of the model will be buggy since the updated model has the [same cache key](#rules_of_tracing) as the original model.
For that reason, we recommend that you write your `Function` to avoid depending on mutable object attributes or create new objects.
If that is not possible, one workaround is to make new `Function`s each time you modify your object to force retracing:
```
def evaluate(model, x):
return model.weight * x + model.bias
new_model = SimpleModel()
evaluate_no_bias = tf.function(evaluate).get_concrete_function(new_model, x)
# Don't pass in `new_model`, `Function` already captured its state during tracing.
print(evaluate_no_bias(x))
print("Adding bias!")
new_model.bias += 5.0
# Create new Function and ConcreteFunction since you modified new_model.
evaluate_with_bias = tf.function(evaluate).get_concrete_function(new_model, x)
print(evaluate_with_bias(x)) # Don't pass in `new_model`.
```
As [retracing can be expensive](https://www.tensorflow.org/guide/intro_to_graphs#tracing_and_performance), you can use `tf.Variable`s as object attributes, which can be mutated in place (but not reassigned!) for a similar effect without needing a retrace.
```
class BetterModel:
def __init__(self):
self.bias = tf.Variable(0.)
self.weight = tf.Variable(2.)
@tf.function
def evaluate(model, x):
return model.weight * x + model.bias
better_model = BetterModel()
print(evaluate(better_model, x))
print("Adding bias!")
better_model.bias.assign_add(5.0) # Note: instead of better_model.bias += 5
print(evaluate(better_model, x)) # This works!
```
### Creating tf.Variables
`Function` only supports singleton `tf.Variable`s created once on the first call, and reused across subsequent function calls. The code snippet below would create a new `tf.Variable` in every function call, which results in a `ValueError` exception.
Example:
```
@tf.function
def f(x):
v = tf.Variable(1.0)
return v
with assert_raises(ValueError):
f(1.0)
```
A common pattern used to work around this limitation is to start with a Python None value, then conditionally create the `tf.Variable` if the value is None:
```
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
```
#### Using with multiple Keras optimizers
You may encounter `ValueError: tf.function only supports singleton tf.Variables created on the first call.` when using more than one Keras optimizer with a `tf.function`. This error occurs because optimizers internally create `tf.Variable`s when they apply gradients for the first time.
```
opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)
@tf.function
def train_step(w, x, y, optimizer):
with tf.GradientTape() as tape:
L = tf.reduce_sum(tf.square(w*x - y))
gradients = tape.gradient(L, [w])
optimizer.apply_gradients(zip(gradients, [w]))
w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])
train_step(w, x, y, opt1)
print("Calling `train_step` with different optimizer...")
with assert_raises(ValueError):
train_step(w, x, y, opt2)
```
If you need to change the optimizer during training, a workaround is to create a new `Function` for each optimizer, calling the [`ConcreteFunction`](#obtaining_concrete_functions) directly.
```
opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)
# Not a tf.function.
def train_step(w, x, y, optimizer):
with tf.GradientTape() as tape:
L = tf.reduce_sum(tf.square(w*x - y))
gradients = tape.gradient(L, [w])
optimizer.apply_gradients(zip(gradients, [w]))
w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])
# Make a new Function and ConcreteFunction for each optimizer.
train_step_1 = tf.function(train_step).get_concrete_function(w, x, y, opt1)
train_step_2 = tf.function(train_step).get_concrete_function(w, x, y, opt2)
for i in range(10):
if i % 2 == 0:
train_step_1(w, x, y) # `opt1` is not used as a parameter.
else:
train_step_2(w, x, y) # `opt2` is not used as a parameter.
```
#### Using with multiple Keras models
You may also encounter `ValueError: tf.function only supports singleton tf.Variables created on the first call.` when passing different model instances to the same `Function`.
This error occurs because Keras models (which [do not have their input shape defined](https://www.tensorflow.org/guide/keras/custom_layers_and_models#best_practice_deferring_weight_creation_until_the_shape_of_the_inputs_is_known)) and Keras layers create `tf.Variable`s when they are first called. You may be attempting to initialize those variables inside a `Function`, which has already been called. To avoid this error, try calling `model.build(input_shape)` to initialize all the weights before training the model.
## Further reading
To learn about how to export and load a `Function`, see the [SavedModel guide](../../guide/saved_model). To learn more about graph optimizations that are performed after tracing, see the [Grappler guide](../../guide/graph_optimization). To learn how to optimize your data pipeline and profile your model, see the [Profiler guide](../../guide/profiler.md).
### Strings - Quotation Marks
```
# Quotation marks must be matching. Both of the following work.
good_string = "Hello, how are you?"
another_good_string = 'Hello, how are you?'
# These strings will not work
bad_string = 'Don't do that'
another_bad_string = "Don't do that'
# Notice you enclose the whole sentence in doubles if there is
# a single within the sentence.
solution_to_bad_string = "Don't do that."
# If for some reason you need both, escape with backslash
# 'She said, "Don't do that!"'
my_escape_string = 'She said, "Don\'t do that!"'
print(my_escape_string)
# Multiple line breaks
my_super_long_string = '''This is a
string that spans
multiple lines.'''
print(my_super_long_string)
# Another way for multiple line breaks
my_long_string = ('This is a\n'
'string that spans\n'
'multiple lines.')
print(my_long_string)
```
### String Type
```
# As with numeric, can assign strings to variables
# and can check the type
my_string = 'Hello World'
type(my_string)
```
### String Operators + and *
```
# Use + to add two strings together
one = 'Hello, my name is '
two = 'Erin'
my_name = one + two
print(my_name)
# Use * to repeat a string a number of times
# Notice that I told Python to add space between the strings
repeat_this = 'I will use descriptive variable names. '
repeat_this * 3
```
### String Methods
```
# Notice the space at the end when the string prints
'Repeating string ' * 3
# Let's use a method to get rid of that space
# Use dot notation to call a method
'Repeating string Repeating string Repeating string '.strip()
# Another example
# Notice it removed white space from both start and end
' Repeating string Repeating string Repeating string '.strip()
my_str_variable = 'this IS my STRING to PLAY around WITH.'
# .capitalize()
cap_str = my_str_variable.capitalize()
# .upper()
upp_str = my_str_variable.upper()
# .lower()
low_str = my_str_variable.lower()
# .replace()
new_str = my_str_variable.replace('STR', 'fl')
# .split()
split_str = my_str_variable.split()
print(cap_str)
print(upp_str)
print(low_str)
print(new_str)
print(split_str)
# Want to know all the methods available for strings?
# type your string then dot-tab
my_str_variable
```
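One method worth knowing alongside those above is `.join()`, the inverse of `.split()`. Notice that it is called on the *separator* string:
```
# .split() turns a string into a list of words
words = 'this IS my STRING'.split()
print(words)              # ['this', 'IS', 'my', 'STRING']

# .join() is called on the separator and glues the list back together
rejoined = ' '.join(words)
print(rejoined)           # this IS my STRING

# Any string can be the separator
dashed = '-'.join(['2024', '01', '15'])
print(dashed)             # 2024-01-15
```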
### String Indexing
```
# Grab a specific character
my_str_variable = 'Test String'
second_char = my_str_variable[1]
sixth_char = my_str_variable[5]
last_char = my_str_variable[-1]
third_from_last_char = my_str_variable[-3]
# Notice the zero indexing
print(second_char)
print(sixth_char)
print(last_char)
print(third_from_last_char)
# Grab characters in some subset (range)
my_str_variable = 'Test String'
# This is called 'slicing'
subset1 = my_str_variable[1:3]
subset2 = my_str_variable[5:9]
subset3 = my_str_variable[-6:-1]
subset4 = my_str_variable[1:]
subset5 = my_str_variable[:-1]
# Start at index, print everything up to end index
# Inclusive on left, exclusive on right
print(subset1)
print(subset2)
print(subset3)
print(subset4)
print(subset5)
# Grab characters in steps
my_str_variable = 'Test String'
every_second = my_str_variable[::2]
every_third_between210 = my_str_variable[2:10:3]
print(every_second)
print(every_third_between210)
```
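A quick self-check of the slicing rules above (inclusive on the left, exclusive on the right); this is an extra aside, not part of the original lesson:

```python
my_str_variable = 'Test String'

# [1:3] starts at index 1 and stops *before* index 3 -> 2 characters
assert my_str_variable[1:3] == 'es'

# [5:9] grabs indices 5, 6, 7, 8
assert my_str_variable[5:9] == 'Stri'

# Omitting an endpoint slices to the edge of the string
assert my_str_variable[1:] == 'est String'
assert my_str_variable[:-1] == 'Test Strin'

# A full slice with a step of 2 keeps every second character
assert my_str_variable[::2] == 'Ts tig'
print('all slice checks passed')
```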
### String Looping
```
string_to_loop = 'Denver is better than Colorado Springs'
# find out the length of your string (the number of characters)
length_of_str = len(string_to_loop)
print(length_of_str)
# Loop through string with while loop
# define your variables
string_to_loop = "What's Up?"
length_of_str = len(string_to_loop)
idx = 0
# loop until condition is met
while idx < length_of_str:
print(string_to_loop[idx])
idx += 1
# Loop through string with for loop
# define variables
string_to_loop = "What's Up?"
length_of_str = len(string_to_loop)
# loop until end of string
for index in range(length_of_str):
print(string_to_loop[index])
# Notice the range() constructor: this tells the for
# loop how long to continue. Thus our for loop will
# continue for the length of the string
# The following for loop will do the same as above,
# but it's considered cleaner code
string_to_loop = "What's Up?"
for char in string_to_loop:
print(char)
```
### Zen of Python
```
# The Zen of Python
import this
```
### String Formatting
```
my_name = 'Sean'
print('Hello, my name is {}.'.format(my_name))
# Now you can just update one variable without
# having to retype the entire sentence
my_name = 'Erin'
print('Hello, my name is {}.'.format(my_name))
# .format() told Python to format the string
# the {} are the location to format.
# Multiple values to insert?
name_one = 'Sean'
name_two = 'Erin'
print('{1} is cooler than {0}.'.format(name_two, name_one))
# If you don't give .format() index numbers, it fills
# the braces in the order the arguments are passed.
print('{} is cooler than {}.'.format(name_two, name_one))
# .format() can also accept numbers
# numbers can be formatted
print("To be precise, that's {:.1f} times.".format(2))
# Here, the {:.1f} told Python that you would pass
# it a number which you wanted to be a float
# with only one decimal place.
```
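As an aside (not part of the original lesson): on Python 3.6+, f-strings express the same formatting with the variable written directly inside the braces, and format specs work the same way as with `.format()`:

```python
my_name = 'Erin'
greeting = f'Hello, my name is {my_name}.'
print(greeting)

# The same {:.1f} float formatting works inside an f-string
precise = f"To be precise, that's {2:.1f} times."
print(precise)
```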
### Lists
```
# Create a list by hard coding things into it
# Notice: lists are enclosed with []
my_first_lst = [1, 'hello', 3, 'goodbye']
# Create a list by wrapping list() around
# something you want to split apart
my_second_lst = list('hello')
print(my_first_lst)
print(my_second_lst)
# You can also create lists of lists
list_of_lists = [[1,2,3], ['erin', 'bob']]
print(list_of_lists)
# look what happens when you index into a list of lists
print(list_of_lists[0])
# what about getting into an inside list?
print(list_of_lists[1][0])
```
### List Methods
```
my_list = [1,2,3,'erin']
# .tab-complete to see methods for lists
my_list
my_lst = [1, 2, 3, 4]
# add an element to list
my_lst.append(5)
print(my_lst)
# remove last element and print it
print(my_lst.pop())
print(my_lst)
# remove element from list
my_lst.remove(4)
print(my_lst)
# reverse order of list
my_lst.reverse()
print(my_lst)
# sort the list
my_lst.sort()
print(my_lst)
```
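One trap worth flagging with these methods: `.sort()` changes the list in place and returns `None`, while the built-in `sorted()` returns a new list and leaves the original alone. A small illustrative sketch, added as an aside:

```python
my_lst = [3, 1, 2]

# .sort() modifies the list in place and returns None --
# a classic beginner trap when you write lst = lst.sort()
result = my_lst.sort()
print(result)      # None
print(my_lst)      # [1, 2, 3]

# sorted() returns a new sorted list and keeps the original intact
original = [3, 1, 2]
new_list = sorted(original)
print(original)    # [3, 1, 2]
print(new_list)    # [1, 2, 3]
```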
### List Iteration
```
# define a list
list_of_nums = [1,2,3,4,5,6]
# loop through list and print
for num in list_of_nums:
print(num)
# What if you need the index in a list?
# There's a special method for that called
# enumerate()
list_of_letters = ['a','b','c','d','e']
for index, letter in enumerate(list_of_letters):
print(index, letter)
# Class challenge answer
list_of_nums = [1,2,3,456,32,75]
for idx, num in enumerate(list_of_nums):
if (num % 3 == 0) and (num % 5 == 0):
print(idx, num, 'FizzBuzz')
elif num % 3 == 0:
print(idx, num, 'Fizz')
elif num % 5 == 0:
print(idx, num, 'Buzz')
else:
print(idx, num)
```
### List Comprehensions
```
# Let's transform this for loop into a comprehension loop
my_list = [1,2,3,4,5,6]
# create an empty list that you will populate
my_squares = []
# loop through your number list and append
# the square of each number to the new list
for num in my_list:
my_squares.append(num**2)
print(my_squares)
# Now let's do the same thing with a list comprehension
my_squares_comp = [num**2 for num in my_list]
# Walk through this on white board
print(my_squares)
print(my_squares_comp)
# What about building a list with a conditional?
my_num_list = [1,2,3,4,89,1234]
even_numbers = []
for num in my_num_list:
if num % 2 == 0:
even_numbers.append(num)
# Now with list comprehension
even_numbers_comp = [num for num in my_num_list if num % 2 == 0]
print(even_numbers)
print(even_numbers_comp)
# Class challenge question and answer
class_names = ['bob', 'sally', 'fred']
short_names = [ ]
for name in class_names:
if len(name) <= 3:
short_names.append(name)
short_names_comp = [name for name in class_names if len(name) <= 3]
print(short_names)
print(short_names_comp)
```
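A related pattern worth knowing (an aside beyond the examples above): a trailing `if` filters elements out of the comprehension, while an `if`/`else` *expression* written before the `for` transforms every element instead:

```python
nums = [1, 2, 3, 4]

# A filtering `if` at the end drops elements that fail the test...
evens = [n for n in nums if n % 2 == 0]

# ...but an if/else expression before the `for` keeps every
# element and maps it to one of two values
labels = ['even' if n % 2 == 0 else 'odd' for n in nums]

print(evens)
print(labels)
```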
|
github_jupyter
|
# FloPy Creating Layered Quadtree Grids with GRIDGEN
FloPy has a module that can be used to drive the GRIDGEN program. This notebook shows how it works.
The Flopy GRIDGEN module requires that the gridgen executable can be called using subprocess **(i.e., gridgen is in your path)**.
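As a hedged sanity check (not part of the original notebook), the standard library can tell you up front whether the executable is on your PATH; the name `gridgen` is an assumption here and may differ on your system:

```python
import shutil

def on_path(executable):
    """Return True if `executable` can be found on the system PATH."""
    return shutil.which(executable) is not None

# 'gridgen' is the usual executable name; adjust if yours differs
if not on_path('gridgen'):
    print('gridgen was not found on your PATH -- the Gridgen calls below will fail')
```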
```
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
from flopy.utils.gridgen import Gridgen
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
```
## Setup Base MODFLOW Grid
GRIDGEN works off of a base MODFLOW grid. The following information defines the base grid.
```
Lx = 100.
Ly = 100.
nlay = 2
nrow = 51
ncol = 51
delr = Lx / ncol
delc = Ly / nrow
h0 = 10
h1 = 5
top = h0
botm = np.zeros((nlay, nrow, ncol), dtype=np.float32)
botm[1, :, :] = -10.
ms = flopy.modflow.Modflow(rotation=-20.)
dis = flopy.modflow.ModflowDis(ms, nlay=nlay, nrow=nrow, ncol=ncol, delr=delr,
delc=delc, top=top, botm=botm)
```
## Create the Gridgen Object
```
model_ws = os.path.join('.', 'data')
g = Gridgen(dis, model_ws=model_ws)
```
## Add an Optional Active Domain
Cells outside of the active domain will be clipped and not numbered as part of the final grid. If this step is not performed, then all cells will be included in the final grid.
```
# setup the active domain
adshp = os.path.join(model_ws, 'ad0')
adpoly = [[[(0, 0), (0, 60), (40, 80), (60, 0), (0, 0)]]]
# g.add_active_domain(adpoly, range(nlay))
```
## Refine the Grid
```
x = Lx * np.random.random(10)
y = Ly * np.random.random(10)
wells = list(zip(x, y))
g.add_refinement_features(wells, 'point', 3, range(nlay))
rf0shp = os.path.join(model_ws, 'rf0')
river = [[[(-20, 10), (60, 60)]]]
g.add_refinement_features(river, 'line', 3, range(nlay))
rf1shp = os.path.join(model_ws, 'rf1')
g.add_refinement_features(adpoly, 'polygon', 1, range(nlay))
rf2shp = os.path.join(model_ws, 'rf2')
```
## Plot the Gridgen Input
```
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
mm = flopy.plot.ModelMap(model=ms)
mm.plot_grid()
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none')
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1)
```
## Build the Grid
```
g.build(verbose=False)
```
## Plot the Grid
```
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, linewidth=0.5)
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none', alpha=0.2)
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10, alpha=0.2)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1, alpha=0.2)
```
## Create a Flopy ModflowDisu Object
```
mu = flopy.modflow.Modflow(model_ws=model_ws, modelname='mfusg')
disu = g.get_disu(mu)
disu.write_file()
# print(disu)
```
## Intersect Features with the Grid
```
adpoly_intersect = g.intersect(adpoly, 'polygon', 0)
print(adpoly_intersect.dtype.names)
print(adpoly_intersect)
print(adpoly_intersect.nodenumber)
well_intersect = g.intersect(wells, 'point', 0)
print(well_intersect.dtype.names)
print(well_intersect)
print(well_intersect.nodenumber)
river_intersect = g.intersect(river, 'line', 0)
print(river_intersect.dtype.names)
# print(river_intersect)
# print(river_intersect.nodenumber)
```
## Plot Intersected Features
```
a = np.zeros((g.nodes), dtype=int)  # np.int was removed in NumPy 1.24; use the builtin int
a[adpoly_intersect.nodenumber] = 1
a[well_intersect.nodenumber] = 2
a[river_intersect.nodenumber] = 3
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, a=a, masked_values=[0], edgecolor='none', cmap='jet')
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', alpha=0.25)
```
|
github_jupyter
|
```
run_checks = False
run_sample = False
```
### Overview
This notebook works on the IEEE-CIS Fraud Detection competition. Here I build a simple XGBoost model based on a balanced dataset.
### Lessons:
- Keep the categorical variables as single items
- Use a high `max_depth` for XGBoost (maybe 40)
### Ideas to try:
- Train on the divergence from an expected value (e.g. for TransactionAmt and the distance features), fitting the expectation on the non-fraud subset rather than on all rows as is done now
- Try a temporal approach to CV
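For the temporal-CV idea above, one possible sketch (not used in this notebook) is scikit-learn's `TimeSeriesSplit`, which keeps every validation fold strictly after its training fold; since `TransactionDT` orders the rows, this would avoid training on the future:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Toy stand-in for rows already sorted by TransactionDT
X = np.arange(20).reshape(-1, 1)

tscv = TimeSeriesSplit(n_splits=4)
for train_idx, val_idx in tscv.split(X):
    # Training indices always precede validation indices
    assert train_idx.max() < val_idx.min()
    print(f'train up to row {train_idx.max()}, validate rows {val_idx.min()}-{val_idx.max()}')
```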
```
# all imports necessary for this notebook
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import gc
import copy
import missingno as msno
import xgboost
from xgboost import XGBClassifier, XGBRegressor
from sklearn.model_selection import StratifiedKFold, cross_validate, train_test_split
from sklearn.metrics import roc_auc_score, r2_score
import warnings
warnings.filterwarnings('ignore')
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Helpers
def seed_everything(seed=0):
'''Seed to make all processes deterministic '''
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
def drop_correlated_cols(df, threshold, sample_frac = 1):
'''Drop one of each pair of columns whose pairwise Pearson correlation exceeds the given threshold'''
if sample_frac != 1:
dataset = df.sample(frac = sample_frac).copy()
else:
dataset = df
col_corr = set() # Set of all the names of deleted columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
if corr_matrix.columns[i] in col_corr:
continue
for j in range(i):
if (corr_matrix.iloc[i, j] >= threshold) and (corr_matrix.columns[j] not in col_corr):
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
del dataset
gc.collect()
df.drop(columns = col_corr, inplace = True)
def calc_feature_difference(df, feature_name, indep_features, min_r2 = 0.1, min_r2_improv = 0, frac1 = 0.1,
max_depth_start = 2, max_depth_step = 4):
    from copy import deepcopy
    print("Feature name %s" % feature_name)
    # print("Indep_features %s" % indep_features)
    is_improving = True
    curr_max_depth = max_depth_start
    best_r2 = float("-inf")
    clf_best = np.nan
    while is_improving:
        clf = XGBRegressor(max_depth=curr_max_depth)
        rand_sample_indices = df[df[feature_name].notnull()].sample(frac=frac1).index
        clf.fit(df.loc[rand_sample_indices, indep_features], df.loc[rand_sample_indices, feature_name])
        rand_sample_indices = df[df[feature_name].notnull()].sample(frac=frac1).index
        pred_y = clf.predict(df.loc[rand_sample_indices, indep_features])
        r2Score = r2_score(df.loc[rand_sample_indices, feature_name], pred_y)
        print("%d, R2 score %.4f" % (curr_max_depth, r2Score))
        curr_max_depth = curr_max_depth + max_depth_step
        if r2Score > best_r2:
            best_r2 = r2Score
            clf_best = deepcopy(clf)
        if r2Score < best_r2 + (best_r2 * min_r2_improv) or (curr_max_depth > max_depth_start * max_depth_step and best_r2 < min_r2 / 2):
            is_improving = False
    print("The best R2 score is %.4f" % best_r2)
if best_r2 > min_r2:
pred_feature = clf_best.predict(df.loc[:, indep_features])
return (df[feature_name] - pred_feature)
else:
return df[feature_name]
seed_everything()
pd.set_option('display.max_columns', 500)
#read data
folder_path = '/kaggle/input/ieee-fraud-detection/'
train_identity = pd.read_csv(f'{folder_path}train_identity.csv')
train_transaction = pd.read_csv(f'{folder_path}train_transaction.csv')
test_identity = pd.read_csv(f'{folder_path}test_identity.csv')
test_transaction = pd.read_csv(f'{folder_path}test_transaction.csv')
sample_submission = pd.read_csv(f'{folder_path}sample_submission.csv')
# Merge identity and transaction data
train_df = pd.merge(train_transaction, train_identity, on='TransactionID', how='left')
test_df = pd.merge(test_transaction, test_identity, on='TransactionID', how='left')
del train_identity, train_transaction, test_identity, test_transaction
gc.collect()
print(train_df.shape)
print(test_df.shape)
gc.collect()
df_missing = pd.DataFrame((train_df.isnull().mean() * 100), columns=['missing_perc_train'])
test_missing = (test_df.isnull().mean() * 100)
df_missing = df_missing.join(test_missing.rename('missing_perc_test')).reset_index()
df_missing.rename(columns = {'index' :'Feature'}, inplace=True)
df_missing['missing_percent_avg'] = (df_missing['missing_perc_train'] + df_missing['missing_perc_test']) / 2
df_missing.sort_values(by=['missing_percent_avg', 'missing_perc_train', 'missing_perc_test'], inplace=True)
#df_missing['abs_missing_percent_diff'] = np.abs(df_missing['missing_perc_train'] - df_missing['missing_perc_test'])
#df_missing.sort_values(by=['abs_missing_percent_diff'], ascending=False)
print(df_missing.shape)
df_missing.head()
df_missing[~df_missing.Feature.str.contains('isFraud')].set_index('Feature').plot(figsize=(15,7.5), grid=True)
df_missing[~df_missing.Feature.str.contains('isFraud')].loc[df_missing.missing_perc_train<50].set_index('Feature').plot(figsize=(15,7.5), grid=True)
#df_missing.loc[df_missing.missing_perc_train<50].shape
gc.collect()
train_df['is_train_df'] = 1
test_df['is_train_df'] = 0
print(train_df.shape)
print(test_df.shape)
cols_orig_train = train_df.columns
master_df = pd.concat([train_df, test_df], ignore_index=True, sort =True).reindex(columns=cols_orig_train)
print(master_df.shape)
#master_df.head()
if run_sample:
master_df = master_df.sample(frac = 0.25)
drop_correlated = False
if drop_correlated:
%%time
print(master_df.shape)
temp_df_must_keep = master_df[['TransactionID', 'TransactionDT', 'isFraud', 'is_train_df']].copy()
drop_correlated_cols(master_df, 0.95, sample_frac = 0.5)
cols_to_use = master_df.columns.difference(temp_df_must_keep.columns)
master_df = pd.merge(temp_df_must_keep, master_df[cols_to_use], left_index=True, right_index=True, how='left', validate = 'one_to_one')
master_df.sort_values(by=['TransactionID', 'TransactionDT'], inplace=True)
gc.collect()
print(master_df.shape)
master_df.head()
if drop_correlated:
del temp_df_must_keep
gc.collect()
master_df.sort_values(by=['TransactionID', 'TransactionDT'], inplace=True)
gc.collect()
del test_df, train_df
gc.collect()
cols_all = set(master_df.columns)
cols_target = 'isFraud'
cols_cat = {'id_12', 'id_13', 'id_14', 'id_15', 'id_16', 'id_17', 'id_18', 'id_19', 'id_20', 'id_21', 'id_22',
'id_23', 'id_24', 'id_25', 'id_26', 'id_27', 'id_28', 'id_29', 'id_30', 'id_31', 'id_32', 'id_33',
'id_34', 'id_35', 'id_36', 'id_37', 'id_38', 'DeviceType', 'DeviceInfo', 'ProductCD', 'card4',
'card6', 'M4','P_emaildomain', 'R_emaildomain', 'card1', 'card2', 'card3', 'card5', 'addr1',
'addr2', 'M1', 'M2', 'M3', 'M5', 'M6', 'M7', 'M8', 'M9'}
cols_cont = set([col for col in cols_all if col not in cols_cat and col != cols_target] )
# cols_cont.remove(cols_target)
print(len(cols_cat))
print(len(cols_cont))
print(len(cols_cat) + len(cols_cont))
msno.matrix(master_df[cols_cat].sample(10000))
msno.matrix(master_df[cols_cont].sample(10000))
master_df.loc[:, cols_cat] = master_df.loc[:, cols_cat].astype('category')
# Some FE
master_df[['P_emaildomain_1', 'P_emaildomain_2', 'P_emaildomain_3']] = master_df['P_emaildomain'].str.split('.', expand=True)
master_df[['R_emaildomain_1', 'R_emaildomain_2', 'R_emaildomain_3']] = master_df['R_emaildomain'].str.split('.', expand=True)
master_df['P_emaildomain_4'] = master_df['P_emaildomain'].str.replace(r'^[^.]+\.', '', regex=True)
master_df['R_emaildomain_4'] = master_df['R_emaildomain'].str.replace(r'^[^.]+\.', '', regex=True)
cols_cat.update(['P_emaildomain_1', 'P_emaildomain_2', 'P_emaildomain_3', 'P_emaildomain_4', 'R_emaildomain_1', 'R_emaildomain_2', 'R_emaildomain_3', 'R_emaildomain_4'])
print('P_emaildomain_1', master_df['P_emaildomain_1'].unique())
print(80 * '-')
print('P_emaildomain_2', master_df['P_emaildomain_2'].unique())
print(80 * '-')
print('P_emaildomain_3', master_df['P_emaildomain_3'].unique())
print(80 * '-')
print('P_emaildomain_4', master_df['P_emaildomain_4'].unique())
print(master_df.loc[:, master_df.dtypes == object].shape)
print(len(cols_cat))
temp_missing_cat = master_df.loc[:, cols_cat].isnull().sum()
temp_missing_cat.sort_values(inplace=True)
temp_missing_cat_train = master_df.loc[master_df['is_train_df'] ==1 , cols_cat].isnull().sum()
temp_missing_cat_test = master_df.loc[master_df['is_train_df'] ==0 , cols_cat].isnull().sum()
temp_len = len(master_df)
temp_len_train = len(master_df.loc[master_df['is_train_df'] ==1])
temp_len_test = len(master_df.loc[master_df['is_train_df'] ==0])
for col in temp_missing_cat.index:
temp_missing_percent = temp_missing_cat[col] * 100 / temp_len
temp_missing_percent_train = temp_missing_cat_train[col] * 100 / temp_len_train
temp_missing_percent_test = temp_missing_cat_test[col] * 100 / temp_len_test
print("\n%s, missing is: %.1f%% (train: %.1f%%, test: %.1f%%), n_unique is: %s\n"
%(col, temp_missing_percent, temp_missing_percent_train, temp_missing_percent_test, len(master_df.loc[:, col].unique()) ))
temp_unique_list = master_df.loc[master_df[col].notnull(), col].astype(str).unique()
temp_unique_list.sort()
print(master_df.loc[:, col].value_counts().iloc[0:10])
print(80* '-')
print(80* '-')
```
Further FE
```
#focus on id_31
master_df.id_31.astype(str).value_counts()[0:20]
#lowercase the whole column
master_df['id_31'] = master_df['id_31'].loc[master_df['id_31'].notnull()].str.lower()
temp = list(master_df['id_31'].unique())
temp.remove(np.nan)
#print(temp)
new_temp = []
import re
#DATA = "Hey, you - what are you doing here!?"
#print re.findall(r"[\w']+", DATA)
for item in temp:
#new_temp.extend(item.split())
new_temp.extend(re.findall(r"[\w']+", item))
new_temp
from collections import Counter
most_common_words= [word for word, word_count in Counter(new_temp).most_common(1000)]
#remove digits
most_common_words= [word for word in most_common_words if not word.isdigit()]
#remove single letter words
most_common_words= [word for word in most_common_words if len(word) > 1]
print(most_common_words)
temp_min_n_in_cat_to_keep = 1000
temp_added_cols = set()
for word in most_common_words:
temp_len = len(master_df['id_31'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains(word)])
if temp_len >= temp_min_n_in_cat_to_keep:
print("%s: %d \n" %(word, temp_len))
temp_new_col_name = 'id_31' + '_' + word
master_df[temp_new_col_name] = master_df['id_31'].str.contains(word)
temp_added_cols.add(temp_new_col_name)
print(master_df[temp_new_col_name].describe())
print(80* '-')
cols_cat = cols_cat.union(temp_added_cols)
#cols_cat
corr = master_df[temp_added_cols].astype('float16').corr()
corr.style.background_gradient(cmap='coolwarm')
gc.collect()
master_df['id_31'].loc[master_df['id_31_chrome']== True].loc[master_df['id_31_android']== True].value_counts()
master_df['id_31_chrome_version'] = master_df['id_31'].loc[master_df['id_31_chrome'] &
(master_df['id_31_generic']==False)].str.slice(start=7, stop=9)
master_df['id_31_chrome_version'].loc[master_df['id_31_chrome_version'] ==''] = np.nan
#master_df[['id_31', 'id_31_chrome_version']].loc[master_df['id_31_chrome_version'].notnull()].head(20)
master_df['id_31_chrome_version'].value_counts()
rolling_window = 1000
min_rolling_window = 10
temp_df = master_df[['id_31_chrome_version']].loc[master_df['id_31_chrome_version'].notnull()].astype('float16')
temp_df['id_31_chrome_version_newness'] = temp_df['id_31_chrome_version'] / temp_df['id_31_chrome_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean()
#train_df[new_col_name] = train_df[col] / train_df[col].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean().interpolate()
plt.plot(temp_df['id_31_chrome_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
plt.show()
plt.plot(temp_df['id_31_chrome_version_newness'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
master_df['id_31_chrome_version_newness'] = temp_df['id_31_chrome_version_newness']
master_df.drop(columns=['id_31_chrome_version'], inplace=True)
del temp_df
gc.collect()
cols_cont.add('id_31_chrome_version_newness')
master_df['id_31'].loc[master_df['id_31_safari']== True].value_counts()
master_df['id_31_safari_version'] = np.nan
master_df['id_31_safari_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('safari 8.0')] = 8
master_df['id_31_safari_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('safari 9.0')] = 9
master_df['id_31_safari_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('safari 10.0')] = 10
master_df['id_31_safari_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('safari 11.0')] = 11
master_df['id_31_safari_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('safari 12.0')] = 12
master_df['id_31_safari_version'].plot()
rolling_window = 20
min_rolling_window = 10
temp_df = master_df[['id_31_safari_version']].loc[master_df['id_31_safari_version'].notnull()].astype('float16')
temp_df['id_31_safari_version_newness'] = temp_df['id_31_safari_version'] / temp_df['id_31_safari_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean()
#train_df[new_col_name] = train_df[col] / train_df[col].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean().interpolate()
plt.plot(temp_df['id_31_safari_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
plt.show()
plt.plot(temp_df['id_31_safari_version_newness'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
temp_df['id_31_safari_version_newness'].hist()
master_df['id_31_safari_version_newness'] = temp_df['id_31_safari_version_newness']
master_df.drop(columns=['id_31_safari_version'], inplace=True)
del temp_df
gc.collect()
cols_cont.add('id_31_safari_version_newness')
# id_31 values excluding chrome and safari
master_df['id_31'].loc[(master_df['id_31_chrome']==False) &
(master_df['id_31_safari']== False)].astype(str).value_counts()[0:16]
master_df.id_31.loc[master_df['id_31_edge']==True].value_counts()
master_df['id_31_edge_version'] = np.nan
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 13.0')] = 13
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 14.0')] = 14
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 15.0')] = 15
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 16.0')] = 16
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 17.0')] = 17
master_df['id_31_edge_version'].loc[master_df['id_31'].notnull() & master_df['id_31'].str.contains('edge 18.0')] = 18
master_df['id_31_edge_version'].plot()
rolling_window = 100
min_rolling_window = 10
temp_df = master_df[['id_31_edge_version']].loc[master_df['id_31_edge_version'].notnull()].astype('float16')
temp_df['id_31_edge_version_newness'] = temp_df['id_31_edge_version'] / temp_df['id_31_edge_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean()
#train_df[new_col_name] = train_df[col] / train_df[col].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean().interpolate()
plt.plot(temp_df['id_31_edge_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
plt.show()
plt.plot(temp_df['id_31_edge_version_newness'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
master_df['id_31_edge_version_newness'] = temp_df['id_31_edge_version_newness']
master_df.drop(columns=['id_31_edge_version'], inplace=True)
del temp_df
gc.collect()
cols_cont.add('id_31_edge_version_newness')
master_df.id_31.loc[master_df['id_31_firefox']==True].value_counts()
master_df['id_31_firefox_version'] = master_df['id_31'].loc[master_df['id_31_firefox']==True].str.slice(start=-4, stop=-2)
master_df['id_31_firefox_version'].loc[master_df['id_31_firefox_version'] =='ef'] = np.nan
master_df['id_31_firefox_version'].loc[master_df['id_31_firefox_version'] =='er'] = np.nan
#master_df[['id_31', 'id_31_firefox_version']].loc[master_df['id_31_firefox_version'].notnull()].head(20)
master_df['id_31_firefox_version'].value_counts()
master_df['id_31_firefox_version'].astype('float16').plot()
rolling_window = 1000
min_rolling_window = 100
temp_df = master_df[['id_31_firefox_version']].loc[master_df['id_31_firefox_version'].notnull()].astype('float16')
temp_df['id_31_firefox_version_newness'] = temp_df['id_31_firefox_version'] / temp_df['id_31_firefox_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean()
#train_df[new_col_name] = train_df[col] / train_df[col].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean().interpolate()
plt.plot(temp_df['id_31_firefox_version'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
plt.show()
plt.plot(temp_df['id_31_firefox_version_newness'].rolling(rolling_window, center= True, min_periods=min_rolling_window).mean())
master_df['id_31_firefox_version_newness'] = temp_df['id_31_firefox_version_newness']
master_df.drop(columns=['id_31_firefox_version'], inplace=True)
del temp_df
gc.collect()
cols_cont.add('id_31_firefox_version_newness')
```
That's enough with the id_31 variable (though the same could be done for the Samsung browsers)
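The "newness" trick used above (a version number divided by a centered rolling mean of surrounding rows) can be checked on synthetic data: when everyone runs the same version, the ratio is exactly 1.0. This is a minimal sketch, not part of the original pipeline:

```python
import pandas as pd

# Synthetic browser versions: every row on version 60
versions = pd.Series([60.0] * 50)

# Same recipe as the id_31 features: value / centered rolling mean
rolling_mean = versions.rolling(10, center=True, min_periods=1).mean()
newness = versions / rolling_mean

# A constant series gives a newness of 1.0 everywhere
print(newness.unique())
```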
```
# check 'DeviceInfo' variable
master_df['DeviceInfo'].astype(str).value_counts()[0:20]
```
No clear FE to do here; move on
```
# check 'id_30'
master_df['id_30'].astype(str).value_counts()[0:30]
#lowercase the whole column
master_df['id_30'] = master_df['id_30'].loc[master_df['id_30'].notnull()].str.lower()
temp = list(master_df['id_30'].unique())
temp.remove(np.nan)
#print(temp)
new_temp = []
import re
#DATA = "Hey, you - what are you doing here!?"
#print re.findall(r"[\w']+", DATA)
for item in temp:
#new_temp.extend(item.split())
new_temp.extend(re.findall(r"[\w']+", item))
new_temp
from collections import Counter
most_common_words= [word for word, word_count in Counter(new_temp).most_common(1000)]
#remove digits
most_common_words= [word for word in most_common_words if not word.isdigit()]
#remove single letter words
most_common_words= [word for word in most_common_words if len(word) > 1]
print(most_common_words)
# Hard code most common words
most_common_words = ['mac', 'ios', 'android', 'windows', 'linux']
most_common_words
temp_min_n_in_cat_to_keep = 1000
temp_added_cols = set()
for word in most_common_words:
temp_len = len(master_df['id_30'].loc[master_df['id_30'].notnull() & master_df['id_30'].str.contains(word)])
if temp_len >= temp_min_n_in_cat_to_keep:
print("%s: %d \n" %(word, temp_len))
temp_new_col_name = 'id_30' + '_' + word
master_df[temp_new_col_name] = master_df['id_30'].str.contains(word)
temp_added_cols.add(temp_new_col_name)
print(master_df[temp_new_col_name].describe())
print(80* '-')
cols_cat = cols_cat.union(temp_added_cols)
#cols_cat
corr = master_df[temp_added_cols].astype('float16').corr()
corr.style.background_gradient(cmap='coolwarm')
```
Enough FE with id_30 (though one could compute a newness feature here too, as done with id_31)
```
# FE of id_33
master_df['id_33'].loc[master_df['id_33'].notnull()].astype(str).value_counts()[0:30]
gc.collect()
temp_df = pd.DataFrame()
temp_df[['id_33_1', 'id_33_2']] = master_df['id_33'].loc[master_df['id_33'].notnull()].str.split('x', expand=True)
temp_df = temp_df.astype('float64')
temp_df['id_33_1'].loc[temp_df['id_33_1']==0] = np.nan
temp_df['id_33_2'].loc[temp_df['id_33_2']==0] = np.nan
temp_df['id_33_resolution'] = temp_df['id_33_1'] * temp_df['id_33_2']
temp_df['id_33_resolution'] = np.log(temp_df['id_33_resolution'])
temp_df.describe()
master_df['id_33_resolution'] = temp_df['id_33_resolution']
cols_cont.add('id_33_resolution')
del temp_df
gc.collect()
master_df['id_33_resolution'].hist()
```
Moving now to some FE of continuous variables
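One float pitfall worth knowing before the decimal-part extraction below (a side note, not a change to the notebook's code): binary floats can land just off the integer you expect, so truncating `(x - int(x)) * 1000` with a cast can be off by one, while `round()` snaps to the nearest integer:

```python
amount = 12.345

# 12.345 has no exact binary representation, so the raw product
# may land a hair away from 345; casting with int() truncates
# and can therefore give 344 instead of 345
raw = (amount - int(amount)) * 1000
print(raw)

# round() snaps to the nearest integer and recovers 345
print(round(raw))
```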
```
# Decimal part of the 'TransactionAmt' feature
master_df['TransactionAmt_decimal'] = ((master_df['TransactionAmt'] - master_df['TransactionAmt'].astype(int)) * 1000).astype(int)
# Length of the 'TransactionAmt' feature
master_df['TransactionAmt_decimal_length'] = master_df['TransactionAmt'].astype(str).str.split('.', expand=True)[1].str.len()
cols_cont.update(['TransactionAmt_decimal', 'TransactionAmt_decimal_length'])
master_df['TransactionAmt_decimal_length'].hist()
gc.collect()
## Thanks to FChmiel (https://www.kaggle.com/fchmiel) for these two functions
def make_day_feature(df, offset=0, tname='TransactionDT'):
"""
Creates a day of the week feature, encoded as 0-6.
Parameters:
-----------
df : pd.DataFrame
df to manipulate.
offset : float (default=0)
offset (in days) to shift the start/end of a day.
tname : str
Name of the time column in df.
"""
# found a good offset is 0.58
days = df[tname] / (3600*24)
encoded_days = np.floor(days-1+offset) % 7
return encoded_days
def make_hour_feature(df, tname='TransactionDT'):
"""
Creates an hour of the day feature, encoded as 0-23.
Parameters:
-----------
df : pd.DataFrame
df to manipulate.
tname : str
Name of the time column in df.
"""
hours = df[tname] / (3600)
encoded_hours = np.floor(hours) % 24
return encoded_hours
master_df['weekday'] = make_day_feature(master_df, offset=0.58)
master_df['hours'] = make_hour_feature(master_df)
cols_cat.update(['weekday', 'hours'])
# check all cols in either cols_cat or cols_cont
print(set(master_df.columns).difference(cols_cat.union(cols_cont)))
print(cols_cat.intersection(cols_cont))
master_df.memory_usage().sum()
temp_cols_cat_list = list(cols_cat)
master_df[temp_cols_cat_list] = master_df[temp_cols_cat_list].astype('category')
gc.collect()
master_df[cols_cat].describe()
master_df.memory_usage().sum()
master_df.drop(columns = ['id_30', 'id_31', 'id_33'], inplace = True)
cols_cat.remove('id_30')
cols_cat.remove('id_31')
cols_cat.remove('id_33')
cols_cat_dummified = set()
n_categories_to_keep = 24
for col in cols_cat:
    print("%s, " % col, end="")
    len_categories = len(master_df[col].loc[master_df[col].notnull()].unique())
    temp_col = master_df.loc[:, [col]]
    if n_categories_to_keep < len_categories:
        # collapse everything outside the N most frequent categories
        top_cats = list(temp_col[col].value_counts(ascending=False, normalize=False).iloc[:n_categories_to_keep].index)
        temp_col[col].cat.add_categories(['infrequent_category'], inplace=True)
        top_cats.append('infrequent_category')
        #print(list(top_cats))
        temp_col.loc[temp_col[col].notnull() & ~temp_col[col].isin(top_cats), [col]] = 'infrequent_category'
        temp_col[col].cat.remove_categories([cat for cat in temp_col[col].cat.categories if cat not in top_cats], inplace=True)
    temp_col = pd.get_dummies(temp_col, dummy_na=True)
    cols_cat_dummified.update(list(temp_col.columns))
    master_df[temp_col.columns] = temp_col
    del temp_col
gc.collect()
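# Minimal sketch (toy column) of the collapsing done in the loop above:
# values outside the top-N categories become 'infrequent_category' before
# one-hot encoding, which keeps the dummy matrix narrow.
import pandas as pd
_s = pd.Series(['a', 'a', 'a', 'b', 'b', 'c', 'd'])
_top = list(_s.value_counts().iloc[:2].index)  # keep the 2 most frequent
_collapsed = _s.where(_s.isin(_top), 'infrequent_category')
print(sorted(pd.get_dummies(_collapsed).columns))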
master_df[cols_cat_dummified].astype('category').describe()
master_df.shape
'''
for col in cols_cat:
    master_df[col] = master_df[col].astype('category').cat.codes
'''
'''
plt.plot(master_df['V91'].rolling(1000, center= True, min_periods=100).mean())
temp = calc_feature_difference(master_df, 'V91', indep_features, min_r2=0, max_depths_list=[2, 4, 8, 16])
plt.plot(temp.rolling(1000, center= True, min_periods=100).mean())
'''
'''
%%time
length_ones = len(master_df[master_df['isFraud']==1])
train_balanced = pd.concat([master_df[master_df['isFraud']==1], (master_df[master_df['isFraud']==0]).sample(length_ones)], axis=0)
X_train, X_test, y_train, y_test = train_test_split(
train_balanced.drop(columns=['isFraud', 'TransactionID', 'TransactionDT']), train_balanced['isFraud'],
test_size=1/6, stratify =train_balanced['isFraud'], random_state=0)
print(X_train.shape)
print(X_test.shape)
clf = XGBClassifier(max_depth=40)
clf.fit(X_train, y_train)
pred_prob = clf.predict_proba(X_test)
pred_prob[:, 1]
roc_score = roc_auc_score(y_test, pred_prob[:, 1])
print("roc_auc score %.4f" % roc_score)
xgboost.plot_importance(clf, max_num_features=20, importance_type='gain')
xgboost.plot_importance(clf, max_num_features=20, importance_type='weight')
'''
'''
temp = clf.get_booster().get_score(importance_type='gain')
df_gain = pd.DataFrame(temp.keys(), columns=['Feature'])
df_gain['Feature_importance'] = temp.values()
df_gain = df_gain.sort_values(by=['Feature_importance'], ascending = False)
temp = clf.get_booster().get_score(importance_type='weight')
df_weight = pd.DataFrame(temp.keys(), columns=['Feature'])
df_weight['Feature_importance'] = temp.values()
df_weight = df_weight.sort_values(by=['Feature_importance'], ascending = False)
temp_must_keep = ['isFraud', 'TransactionID', 'TransactionDT', 'is_train_df', 'weekday', 'hours', 'TransactionDT', 'ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6', 'addr1', 'addr2']
temp_list_to_keep_100 = temp_must_keep.copy()
temp_list_to_keep_100.extend(df_gain['Feature'][0:100])
temp_list_to_keep_100.extend(df_weight['Feature'][0:100])
temp_list_to_keep_100 = list(set(temp_list_to_keep_100))
print("Len top 100 features according to gain and weight is %d" % len(temp_list_to_keep_100))
temp_list_to_keep_200 = temp_must_keep.copy()
temp_list_to_keep_200.extend(df_gain['Feature'][0:200])
temp_list_to_keep_200.extend(df_weight['Feature'][0:200])
temp_list_to_keep_200 = list(set(temp_list_to_keep_200))
print("Len top 200 features according to gain and weight is %d" % len(temp_list_to_keep_200))
temp_list_to_keep_300 = temp_must_keep.copy()
temp_list_to_keep_300.extend(df_gain['Feature'][0:300])
temp_list_to_keep_300.extend(df_weight['Feature'][0:300])
temp_list_to_keep_300 = list(set(temp_list_to_keep_300))
print("Len top 300 features according to gain and weight is %d" % len(temp_list_to_keep_300))
temp_list_to_keep_all = temp_must_keep.copy()
temp_list_to_keep_all.extend(df_gain['Feature'][:])
print("Len of all features according to gain is %d" % len(temp_list_to_keep_all))
'''
'''
print(temp_list_to_keep_100)
print("\n\n" + (80 * '-') + "\n\n")
print(temp_list_to_keep_200)
print("\n\n" + (80 * '-') + "\n\n")
print(temp_list_to_keep_300)
print("\n\n" + (80 * '-') + "\n\n")
print(temp_list_to_keep_all)
print("\n\n" + (80 * '-') + "\n\n")
'''
temp_list_to_keep_100 = ['V112', 'M4_M1', 'P_emaildomain', 'R_emaildomain', 'card2_555.0', 'V320', 'V91', 'V314', 'V206', 'card2_225.0', 'id_31_chrome_version_newness', 'card4_american express', 'card1_9500', 'card1_9633', 'weekday_3.0', 'card1', 'D5', 'V145', 'P_emaildomain_1_infrequent_category', 'M7', 'M5_F', 'addr2_87.0', 'V20', 'hours', 'D2', 'addr2', 'P_emaildomain_verizon.net', 'id_01', 'V258', 'card3', 'V329', 'V223', 'card2_553.0', 'card1_17188', 'card2_infrequent_category', 'id_17_166.0', 'TransactionAmt_decimal', 'V162', 'C9', 'id_13', 'V177', 'id_14_-420.0', 'V304', 'dist2', 'V75', 'id_06', 'V254', 'V149', 'V269', 'V188', 'M4', 'V70', 'hours_5.0', 'weekday_2.0', 'id_20_612.0', 'D11', 'ProductCD_H', 'card6', 'addr1_299.0', 'C14', 'card1_infrequent_category', 'V131', 'dist1', 'D10', 'id_31_safari_version_newness', 'V152', 'D15', 'V283', 'V133', 'D14', 'V126', 'addr1_204.0', 'V324', 'V257', 'TransactionAmt', 'M6', 'V36', 'card2', 'V230', 'V209', 'id_02', 'V94', 'card4', 'hours_11.0', 'V173', 'P_emaildomain_comcast.net', 'card6_credit', 'ProductCD', 'V103', 'V310', 'V130', 'addr1_infrequent_category', 'card5_224.0', 'M8_F', 'V306', 'card2_490.0', 'P_emaildomain_hotmail.com', 'P_emaildomain_me.com', 'C10', 'M4_M0', 'card5', 'V244', 'V308', 'P_emaildomain_msn.com', 'id_17', 'P_emaildomain_infrequent_category', 'P_emaildomain_optonline.net', 'TransactionID', 'M9', 'id_20', 'card3_185.0', 'weekday_1.0', 'D3', 'V127', 'weekday_5.0', 'addr1_337.0', 'V285', 'V245', 'addr1_325.0', 'V35', 'addr1_485.0', 'V294', 'D1', 'D4', 'id_30_android', 'R_emaildomain_anonymous.com', 'card2_567.0', 'P_emaildomain_gmail.com', 'id_20_549.0', 'id_32_24.0', 'V12', 'V76', 'V102', 'C13', 'V87', 'V141', 'D9', 'ProductCD_R', 'card5_166.0', 'C1', 'id_05', 'isFraud', 'V201', 'M5', 'P_emaildomain_3', 'card2_321.0', 'V313', 'id_33_resolution', 'V99', 'V307', 'M3_F', 'V189', 'V191', 'addr1', 'V312', 'C8', 'C2', 'id_19', 'id_26', 'card3_150.0', 'M8', 'V53', 'weekday', 'R_emaildomain_2_com', 
'card2_268.0', 'V56', 'C5', 'card2_481.0', 'addr1_472.0', 'id_20_333.0', 'V4', 'weekday_4.0', 'card3_144.0', 'C11', 'C6', 'is_train_df', 'M3', 'V176', 'TransactionDT', 'V208', 'id_14_60.0', 'P_emaildomain_2', 'M7_F', 'V315', 'V62', 'M6_F', 'V317', 'id_31_edge', 'DeviceInfo', 'V271', 'D8', 'V82', 'V128']
temp_list_to_keep_200 = ['M4_M1', 'addr1_315.0', 'card2_225.0', 'card1_9500', 'V288', 'V293', 'weekday_3.0', 'card1', 'D5', 'id_13_49.0', 'M7', 'V298', 'P_emaildomain_1_outlook', 'V77', 'card2_111.0', 'D2', 'addr2', 'V137', 'card1_17188', 'V75', 'V61', 'V79', 'V254', 'V269', 'M4', 'id_20_612.0', 'card3_143.0', 'D15', 'D14', 'V309', 'V230', 'V166', 'card4', 'id_17_100.0', 'hours_1.0', 'card5_229.0', 'DeviceInfo_Windows', 'card5', 'V117', 'id_31_ie', 'addr1_337.0', 'V35', 'addr1_485.0', 'card2_567.0', 'V291', 'V263', 'V12', 'V110', 'ProductCD_R', 'V49', 'id_05', 'isFraud', 'V201', 'P_emaildomain_3', 'V150', 'M3_F', 'V312', 'C8', 'card3_150.0', 'D13', 'addr1_264.0', 'D6', 'V225', 'V4', 'weekday_4.0', 'V302', 'M7_F', 'V97', 'card2_170.0', 'V82', 'V128', 'P_emaildomain', 'V74', 'card1_9633', 'V64', 'V100', 'V86', 'V243', 'V145', 'hours_22.0', 'M5_F', 'addr2_87.0', 'hours', 'R_emaildomain_gmail.com', 'id_19_infrequent_category', 'P_emaildomain_verizon.net', 'id_01', 'V258', 'id_19_193.0', 'V280', 'V10', 'TransactionAmt_decimal', 'V177', 'V311', 'V149', 'V188', 'V202', 'weekday_2.0', 'V78', 'card6', 'D10', 'V152', 'V126', 'id_19_312.0', 'addr1_204.0', 'V324', 'V257', 'M6', 'V209', 'id_02', 'V247', 'V105', 'hours_11.0', 'V173', 'P_emaildomain_comcast.net', 'V323', 'ProductCD', 'V103', 'M8_F', 'id_19_271.0', 'M4_M0', 'V244', 'id_17', 'TransactionID', 'weekday_1.0', 'card3_185.0', 'V301', 'D3', 'hours_15.0', 'V245', 'D4', 'P_emaildomain_anonymous.com', 'id_32_24.0', 'V102', 'D9', 'C1', 'id_20_507.0', 'id_25', 'id_20_401.0', 'V313', 'V99', 'V189', 'V191', 'addr1', 'card2_174.0', 'V48', 'id_26', 'V316', 'C5', 'C6', 'M3', 'V208', 'id_14_60.0', 'P_emaildomain_2', 'DeviceInfo', 'V271', 'V112', 'card2_555.0', 'V320', 'V91', 'V314', 'V206', 'card4_american express', 'V20', 'D7', 'id_18_15.0', 'V333', 'V223', 'card2_553.0', 'card2_infrequent_category', 'M2_F', 'P_emaildomain_yahoo.com', 'card5_195.0', 'card4_mastercard', 'V19', 'card2_361.0', 'dist2', 'id_06', 'P_emaildomain_4', 
'R_emaildomain_hotmail.com', 'V70', 'hours_5.0', 'V115', 'ProductCD_H', 'V303', 'V40', 'C14', 'V54', 'V131', 'dist1', 'id_31_safari_version_newness', 'V283', 'V133', 'card5_236.0', 'addr1_184.0', 'V134', 'V37', 'TransactionAmt', 'V203', 'card2', 'DeviceInfo_Trident/7.0', 'card2_490.0', 'id_13_19.0', 'P_emaildomain_hotmail.com', 'P_emaildomain_infrequent_category', 'P_emaildomain_optonline.net', 'id_20_500.0', 'V285', 'addr1_325.0', 'R_emaildomain_anonymous.com', 'card1_12695', 'id_20_549.0', 'V76', 'card1_15885', 'addr1_231.0', 'V55', 'C13', 'V87', 'TransactionAmt_decimal_length', 'M5', 'card2_321.0', 'V129', 'V136', 'V165', 'P_emaildomain_outlook.com', 'addr1_441.0', 'M8', 'V144', 'id_19_321.0', 'card2_268.0', 'V56', 'addr1_472.0', 'id_20_333.0', 'V5', 'V296', 'V67', 'V176', 'TransactionDT', 'V3', 'V211', 'V315', 'card1_16132', 'V322', 'id_31_edge', 'D8', 'V171', 'R_emaildomain', 'P_emaildomain_bellsouth.net', 'id_31_chrome_version_newness', 'card1_12839', 'V13', 'hours_21.0', 'DeviceType', 'V21', 'P_emaildomain_1_infrequent_category', 'card1_10616', 'id_18', 'card3', 'V7', 'V329', 'id_14', 'card2_514.0', 'hours_17.0', 'id_17_166.0', 'V162', 'V45', 'C9', 'V38', 'id_13', 'V96', 'id_14_-420.0', 'M9_F', 'V304', 'V132', 'M2', 'id_16', 'V282', 'V221', 'D11', 'addr1_299.0', 'card1_infrequent_category', 'card2_360.0', 'hours_19.0', 'DeviceInfo_SM-J700M Build/MMB29K', 'V194', 'V36', 'V94', 'V224', 'addr1_infrequent_category', 'V130', 'card6_credit', 'V249', 'P_emaildomain_live.com', 'V310', 'card5_224.0', 'V306', 'P_emaildomain_me.com', 'C10', 'card4_discover', 'C12', 'V308', 'P_emaildomain_msn.com', 'M9', 'id_20', 'V264', 'V127', 'V122', 'weekday_5.0', 'V294', 'D1', 'V47', 'id_30_android', 'P_emaildomain_gmail.com', 'hours_18.0', 'id_38', 'V141', 'id_36', 'card5_166.0', 'V83', 'V2', 'id_33_resolution', 'V307', 'hours_3.0', 'addr1_330.0', 'hours_16.0', 'hours_20.0', 'C2', 'id_19', 'V281', 'V53', 'P_emaildomain_4_com', 'weekday', 'id_31_tablet_False', 
'R_emaildomain_2_com', 'V44', 'card2_481.0', 'V318', 'card1_6019', 'card3_144.0', 'C11', 'card5_226.0', 'is_train_df', 'addr1_181.0', 'id_20_533.0', 'V62', 'addr1_433.0', 'M6_F', 'V317']
temp_list_to_keep_300 = ['M4_M1', 'addr1_315.0', 'card2_225.0', 'card1_9500', 'V288', 'V293', 'P_emaildomain_icloud.com', 'DeviceInfo_infrequent_category', 'weekday_3.0', 'card1', 'D5', 'id_13_49.0', 'M7', 'V298', 'P_emaildomain_1_outlook', 'V77', 'card2_111.0', 'D2', 'addr2', 'V137', 'card1_17188', 'hours_14.0', 'V75', 'V61', 'id_09', 'V79', 'V254', 'hours_12.0', 'V66', 'V269', 'M4', 'V265', 'id_19_427.0', 'id_20_612.0', 'id_20_325.0', 'V292', 'card3_143.0', 'D15', 'D14', 'V109', 'V69', 'id_19_266.0', 'V309', 'V230', 'V166', 'card4', 'id_17_100.0', 'hours_1.0', 'card5_229.0', 'V34', 'DeviceInfo_Windows', 'card5', 'V117', 'id_31_ie', 'addr1_387.0', 'addr1_337.0', 'V273', 'V35', 'addr1_485.0', 'card2_567.0', 'V291', 'id_19_410.0', 'V263', 'V12', 'id_31_android', 'id_13_52.0', 'id_03', 'V110', 'ProductCD_R', 'V6', 'P_emaildomain_aol.com', 'V49', 'id_05', 'isFraud', 'V201', 'P_emaildomain_3', 'V150', 'M3_F', 'V312', 'id_20_597.0', 'V270', 'C8', 'card3_150.0', 'D13', 'addr1_264.0', 'D6', 'V225', 'addr2_60.0', 'V4', 'weekday_4.0', 'V154', 'V214', 'id_20_infrequent_category', 'V302', 'M7_F', 'V97', 'card2_170.0', 'card1_7508', 'V82', 'V128', 'card2_583.0', 'P_emaildomain', 'V234', 'V74', 'card1_9633', 'P_emaildomain_live.com.mx', 'V64', 'V100', 'V146', 'V86', 'V243', 'V145', 'hours_22.0', 'M5_F', 'addr2_87.0', 'hours', 'R_emaildomain_gmail.com', 'id_19_infrequent_category', 'P_emaildomain_verizon.net', 'id_01', 'V258', 'id_19_193.0', 'V280', 'card1_2884', 'V248', 'card5_102.0', 'V10', 'TransactionAmt_decimal', 'V177', 'V311', 'V149', 'V188', 'V202', 'weekday_2.0', 'V78', 'card6', 'D10', 'V152', 'V218', 'V51', 'V126', 'id_19_312.0', 'V222', 'addr1_204.0', 'V324', 'V257', 'V267', 'V332', 'M6', 'V209', 'id_02', 'V247', 'V105', 'hours_11.0', 'V173', 'P_emaildomain_comcast.net', 'V323', 'ProductCD', 'V103', 'M8_F', 'id_19_271.0', 'M4_M0', 'V244', 'id_17', 'TransactionID', 'weekday_1.0', 'card3_185.0', 'V301', 'D3', 'hours_15.0', 'V245', 'D4', 'id_14_-360.0', 
'P_emaildomain_anonymous.com', 'id_32_24.0', 'V102', 'D9', 'C1', 'id_20_507.0', 'id_25', 'id_20_401.0', 'V287', 'V313', 'V99', 'V189', 'V191', 'addr1', 'card2_174.0', 'V48', 'id_26', 'V316', 'C5', 'V279', 'C6', 'card5_117.0', 'M3', 'V208', 'id_14_60.0', 'P_emaildomain_2', 'V160', 'DeviceInfo', 'V271', 'V319', 'V112', 'id_20_595.0', 'card2_555.0', 'V320', 'V91', 'V314', 'card1_15066', 'V206', 'V25', 'card4_american express', 'V295', 'addr1_126.0', 'V20', 'D7', 'id_18_15.0', 'V333', 'V95', 'V223', 'card2_553.0', 'card2_infrequent_category', 'M2_F', 'P_emaildomain_yahoo.com', 'card5_195.0', 'card4_mastercard', 'V19', 'V272', 'card2_361.0', 'dist2', 'V23', 'id_06', 'P_emaildomain_4', 'V24', 'R_emaildomain_hotmail.com', 'V139', 'V337', 'V70', 'hours_5.0', 'id_20_222.0', 'V115', 'ProductCD_H', 'V135', 'V303', 'V40', 'C14', 'V54', 'V131', 'V161', 'dist1', 'id_31_safari_version_newness', 'V283', 'V133', 'card5_236.0', 'addr1_184.0', 'card5_198.0', 'V253', 'V134', 'V170', 'card3_infrequent_category', 'V37', 'TransactionAmt', 'hours_2.0', 'V203', 'card2', 'DeviceInfo_Trident/7.0', 'P_emaildomain_hotmail.com', 'id_13_19.0', 'card2_490.0', 'id_30_windows', 'P_emaildomain_infrequent_category', 'P_emaildomain_optonline.net', 'id_20_500.0', 'V285', 'addr1_325.0', 'R_emaildomain_anonymous.com', 'card1_12695', 'V192', 'id_20_549.0', 'P_emaildomain_1_hotmail', 'V76', 'card1_15885', 'addr1_231.0', 'V151', 'V55', 'C13', 'V87', 'TransactionAmt_decimal_length', 'V104', 'M5', 'card2_321.0', 'V290', 'V129', 'V136', 'V164', 'V165', 'P_emaildomain_outlook.com', 'addr1_441.0', 'M8', 'V144', 'id_19_321.0', 'card2_268.0', 'V56', 'addr1_472.0', 'id_20_333.0', 'V5', 'V296', 'V67', 'id_31_for', 'V60', 'id_34', 'V176', 'TransactionDT', 'V3', 'card2_215.0', 'V211', 'V315', 'card1_16132', 'V322', 'id_31_edge', 'D8', 'V171', 'V52', 'R_emaildomain', 'V226', 'P_emaildomain_bellsouth.net', 'id_31_chrome_version_newness', 'card5_219.0', 'card1_12839', 'V13', 'hours_21.0', 'V39', 'DeviceType', 
'addr1_310.0', 'V21', 'P_emaildomain_1_infrequent_category', 'card1_10616', 'V116', 'id_31_chrome', 'card1_7919', 'id_18', 'card3', 'V7', 'V329', 'id_14', 'card2_514.0', 'hours_17.0', 'V124', 'id_17_166.0', 'V162', 'V45', 'C9', 'V38', 'id_13', 'V96', 'id_14_-420.0', 'M9_F', 'V304', 'card1_7585', 'V132', 'M2', 'id_16', 'P_emaildomain_cox.net', 'V282', 'V221', 'id_19_100.0', 'D11', 'V90', 'DeviceInfo_MacOS', 'addr1_299.0', 'hours_6.0', 'card1_infrequent_category', 'card2_360.0', 'hours_19.0', 'DeviceInfo_SM-J700M Build/MMB29K', 'V194', 'V143', 'V36', 'V94', 'V224', 'addr1_infrequent_category', 'V130', 'card6_credit', 'V249', 'P_emaildomain_live.com', 'V310', 'V187', 'card5_224.0', 'V306', 'P_emaildomain_me.com', 'C10', 'card4_discover', 'C12', 'V308', 'P_emaildomain_msn.com', 'M9', 'id_20', 'V264', 'V127', 'V122', 'weekday_5.0', 'V81', 'addr1_191.0', 'V294', 'D1', 'V47', 'id_30_android', 'V286', 'P_emaildomain_gmail.com', 'hours_18.0', 'id_38', 'D12', 'C4', 'hours_4.0', 'V141', 'id_36', 'hours_13.0', 'V331', 'card5_166.0', 'V83', 'V2', 'card5_137.0', 'V29', 'id_07', 'card1_12544', 'id_31_safari', 'id_33_resolution', 'V307', 'id_38_F', 'hours_3.0', 'addr1_272.0', 'addr1_330.0', 'hours_16.0', 'hours_20.0', 'C2', 'id_19', 'V281', 'V53', 'P_emaildomain_4_com', 'weekday', 'id_31_tablet_False', 'R_emaildomain_2_com', 'V44', 'card2_481.0', 'V318', 'card1_6019', 'card3_144.0', 'C11', 'V229', 'id_31_generic', 'card5_226.0', 'V236', 'card2_512.0', 'is_train_df', 'id_31_firefox_False', 'addr1_181.0', 'id_20_533.0', 'V62', 'addr1_433.0', 'V266', 'M6_F', 'V317']
temp_list_to_keep_all = ['isFraud', 'TransactionID', 'TransactionDT', 'is_train_df', 'weekday', 'hours', 'TransactionDT', 'ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6', 'addr1', 'addr2', 'V258', 'V91', 'V70', 'V294', 'V201', 'R_emaildomain_2_com', 'id_17', 'C8', 'C14', 'P_emaildomain_optonline.net', 'V162', 'card4_american express', 'addr2_87.0', 'V173', 'V324', 'V141', 'V223', 'P_emaildomain_me.com', 'V189', 'P_emaildomain_verizon.net', 'P_emaildomain_msn.com', 'id_17_166.0', 'V131', 'V320', 'C10', 'V102', 'card6', 'C5', 'C1', 'V94', 'V271', 'card3_150.0', 'D2', 'V112', 'V329', 'card6_credit', 'V312', 'addr1_337.0', 'V191', 'V283', 'ProductCD', 'V177', 'V285', 'ProductCD_R', 'V103', 'card2_567.0', 'V317', 'V208', 'card2_481.0', 'V269', 'ProductCD_H', 'card1_9633', 'id_14_-420.0', 'id_14_60.0', 'V145', 'P_emaildomain_3', 'id_26', 'id_30_android', 'V230', 'V87', 'id_20_612.0', 'card1_9500', 'V56', 'V254', 'V149', 'id_20_549.0', 'card2_268.0', 'id_32_24.0', 'card5_166.0', 'V257', 'card2_553.0', 'card2_225.0', 'P_emaildomain_comcast.net', 'card3_185.0', 'V206', 'addr1_485.0', 'R_emaildomain_anonymous.com', 'V315', 'id_20_333.0', 'V244', 'V99', 'V209', 'P_emaildomain_2', 'V152', 'V245', 'V176', 'card3', 'V133', 'card1_17188', 'card3_144.0', 'id_31_edge', 'hours_11.0', 'P_emaildomain_infrequent_category', 'V188', 'P_emaildomain_1_infrequent_category', 'V304', 'card2_555.0', 'addr1_472.0', 'hours_5.0', 'V20', 'R_emaildomain_gmail.com', 'C2', 'V171', 'V62', 'card1_15885', 'id_36', 'V333', 'V100', 'V263', 'V55', 'id_25', 'id_17_100.0', 'id_20_533.0', 'P_emaildomain_1_outlook', 'R_emaildomain_hotmail.com', 'id_19_312.0', 'V79', 'addr1_231.0', 'C11', 'V40', 'card2_360.0', 'card3_143.0', 'card1_10616', 'id_19_321.0', 'DeviceInfo_Trident/7.0', 'C13', 'V243', 'DeviceInfo_SM-J700M Build/MMB29K', 'V144', 'V309', 'V74', 'id_31_tablet_False', 'V225', 'V247', 'id_19_271.0', 'V298', 'card4_discover', 'P_emaildomain_bellsouth.net', 'card2_361.0', 'id_19_193.0', 
'V249', 'V166', 'V293', 'V288', 'V130', 'card5_236.0', 'V137', 'id_20_401.0', 'V45', 'card1_6019', 'P_emaildomain_live.com', 'DeviceInfo_Windows', 'V21', 'addr1_433.0', 'V224', 'V280', 'V302', 'V281', 'id_13_19.0', 'V194', 'card1_12839', 'id_18_15.0', 'addr1_181.0', 'V117', 'V122', 'P_emaildomain_outlook.com', 'id_31_ie', 'C9', 'V44', 'V310', 'V47', 'V76', 'V323', 'V211', 'P_emaildomain_4', 'V86', 'V150', 'C6', 'V49', 'V54', 'V110', 'V115', 'V67', 'V311', 'card1_12695', 'V97', 'V303', 'V134', 'card1_16132', 'V105', 'V64', 'V301', 'addr1_184.0', 'M5', 'V322', 'card2_514.0', 'card5_229.0', 'card5_195.0', 'id_20_500.0', 'V136', 'C12', 'hours_12.0', 'V226', 'card1_7585', 'V248', 'card2_174.0', 'hours_6.0', 'V290', 'V229', 'id_31_android', 'P_emaildomain_cox.net', 'V160', 'card5_117.0', 'DeviceInfo_MacOS', 'M2_F', 'V154', 'V214', 'V187', 'V308', 'V165', 'M6_F', 'V331', 'V313', 'V61', 'addr1_387.0', 'P_emaildomain_1_hotmail', 'V116', 'D1', 'V104', 'V192', 'V129', 'V161', 'M4', 'id_20_597.0', 'V24', 'P_emaildomain_anonymous.com', 'V109', 'V37', 'D8', 'D14', 'V34', 'id_31_firefox_False', 'card5_226.0', 'V279', 'V286', 'V151', 'V90', 'id_19_100.0', 'P_emaildomain_live.com.mx', 'addr2_60.0', 'V13', 'id_20_595.0', 'V332', 'id_19_infrequent_category', 'card5_198.0', 'card1_2884', 'card1_7919', 'addr1_126.0', 'V266', 'card2_215.0', 'V95', 'D15', 'V52', 'V60', 'V83', 'V272', 'P_emaildomain_icloud.com', 'card2_170.0', 'id_19_410.0', 'V296', 'V273', 'V295', 'D3', 'V337', 'V66', 'id_03', 'V82', 'V146', 'V236', 'V170', 'V318', 'M3_F', 'card1_7508', 'V5', 'addr1_272.0', 'addr1_310.0', 'P_emaildomain_hotmail.com', 'R_emaildomain', 'id_20_222.0', 'card1_12544', 'V287', 'card2_490.0', 'card3_infrequent_category', 'card2_583.0', 'V270', 'V7', 'V253', 'id_13_52.0', 'V12', 'addr1_441.0', 'card5_138.0', 'V155', 'card2_512.0', 'M4_M1', 'V233', 'V38', 'card1_15066', 'V46', 'V202', 'V123', 'hours_13.0', 'TransactionAmt', 'addr1_330.0', 'card1_10112', 'card2_321.0', 'V127', 'V53', 'D10', 
'V307', 'V314', 'V159', 'V250', 'P_emaildomain_aol.com', 'V231', 'V63', 'id_30_mac_False', 'D4', 'V75', 'V277', 'R_emaildomain_2', 'V321', 'V132', 'D7', 'V58', 'P_emaildomain_att.net', 'id_13_49.0', 'addr1_476.0', 'M6', 'id_14_-360.0', 'card2_375.0', 'id_20_325.0', 'V57', 'V114', 'addr1_325.0', 'hours_8.0', 'id_20_391.0', 'R_emaildomain_outlook.es', 'V175', 'id_35_F', 'V306', 'V126', 'V172', 'V2', 'addr1_299.0', 'V181', 'M1', 'V77', 'V221', 'V59', 'V316', 'id_20_563.0', 'DeviceType', 'DeviceInfo_infrequent_category', 'R_emaildomain_2_es', 'card2_111.0', 'dist2', 'id_09', 'V234', 'V71', 'id_31_safari', 'id_01', 'V124', 'V168', 'addr1_315.0', 'V81', 'card4_mastercard', 'V289', 'M3', 'V275', 'card2_476.0', 'V204', 'V85', 'R_emaildomain_cox.net', 'card2_infrequent_category', 'V260', 'V3', 'V128', 'id_20_infrequent_category', 'V125', 'V237', 'card4', 'V207', 'id_20_507.0', 'D13', 'id_14_-480.0', 'V96', 'addr1_512.0', 'id_19_417.0', 'M9_F', 'V205', 'R_emaildomain_aol.com', 'R_emaildomain_4', 'id_13_62.0', 'V267', 'V256', 'V183', 'V19', 'V156', 'DeviceInfo', 'id_33_resolution', 'V217', 'V291', 'card2', 'card5_102.0', 'V274', 'addr1_204.0', 'id_13_27.0', 'D6', 'V232', 'card5', 'V139', 'hours_1.0', 'card1_16075', 'id_02', 'V167', 'V35', 'V164', 'V300', 'V199', 'id_30_windows_False', 'card5_202.0', 'id_31_chrome_version_newness', 'card5_224.0', 'V6', 'M4_M0', 'card1_3154', 'V33', 'dist1', 'V335', 'id_08', 'V292', 'id_06', 'id_14', 'V23', 'hours_7.0', 'hours_17.0', 'id_07', 'card2_194.0', 'V278', 'card1', 'card2_399.0', 'card1_2803', 'card5_126.0', 'hours_20.0', 'V78', 'id_31_edge_False', 'D11', 'V25', 'V242', 'id_31_chrome_False', 'id_20_256.0', 'id_30_windows', 'addr1_143.0', 'id_31_samsung_False', 'id_05', 'V36', 'V180', 'D5', 'addr1_infrequent_category', 'card1_5812', 'D9', 'M5_F', 'V210', 'id_19_548.0', 'V29', 'V101', 'P_emaildomain', 'id_19_153.0', 'V143', 'id_19_176.0', 'hours_21.0', 'weekday_4.0', 'V228', 'P_emaildomain_gmail.com', 'addr1', 'addr1_123.0', 'id_16', 
'id_20', 'V284', 'id_31_firefox', 'V218', 'P_emaildomain_4_net', 'hours_15.0', 'id_31_safari_version_newness', 'V238', 'hours_2.0', 'V174', 'id_38', 'V203', 'id_31_edge_version_newness', 'TransactionAmt_decimal_length', 'card5_146.0', 'id_34_match_status:1', 'id_34', 'card1_15497', 'TransactionAmt_decimal', 'V4', 'C7', 'V39', 'card5_219.0', 'hours_18.0', 'card2_545.0', 'V282', 'addr1_264.0', 'id_31_for', 'id_13_33.0', 'id_31_desktop', 'card1_infrequent_category', 'hours_16.0', 'V73', 'id_31_android_False', 'id_19_542.0', 'V135', 'hours_22.0', 'id_19_290.0', 'V178', 'V297', 'V51', 'id_18', 'C4', 'V319', 'M8', 'P_emaildomain_charter.net', 'V276', 'weekday_3.0', 'M2', 'V15', 'weekday_2.0', 'R_emaildomain_1_outlook', 'D12', 'V336', 'id_19_427.0', 'id_37', 'id_13', 'id_31_chrome', 'weekday_1.0', 'V213', 'V261', 'id_11', 'id_19', 'id_13_20.0', 'V17', 'V43', 'P_emaildomain_mail.com', 'V264', 'R_emaildomain_1_hotmail', 'hours_3.0', 'addr1_327.0', 'V216', 'id_18_13.0', 'hours_10.0', 'P_emaildomain_1_gmail', 'addr1_191.0', 'id_19_266.0', 'id_31_generic', 'hours_19.0', 'card2_408.0', 'R_emaildomain_icloud.com', 'V268', 'R_emaildomain_infrequent_category', 'V222', 'id_30_android_False', 'id_31_browser_False', 'hours', 'M7', 'V9', 'id_32', 'M9', 'V190', 'id_37_F', 'V26', 'card1_2616', 'card2_100.0', 'weekday_5.0', 'id_38_F', 'V84', 'P_emaildomain_yahoo.com', 'V169', 'card5_infrequent_category', 'id_31_firefox_version_newness', 'id_15', 'card5_162.0', 'V80', 'V251', 'weekday', 'id_18_12.0', 'hours_4.0', 'id_30_mac', 'V92', 'id_17_infrequent_category', 'id_19_215.0', 'id_31_safari_False', 'V42', 'V158', 'V8', 'V299', 'id_13_63.0', 'id_20_161.0', 'V98', 'P_emaildomain_4_com', 'hours_9.0', 'M8_F', 'V265', 'card5_137.0', 'R_emaildomain_yahoo.com', 'V184', 'id_20_214.0', 'id_24', 'id_19_352.0', 'V48', 'C3', 'id_20_305.0', 'V220', 'id_31_generic_False', 'id_13_14.0', 'M7_F', 'id_28', 'id_04', 'ProductCD_S', 'P_emaildomain_4_com.mx', 'V179', 'id_28_Found', 'V69', 'id_15_New', 
'P_emaildomain_1_yahoo', 'P_emaildomain_2_com', 'id_31_mobile', 'V106', 'hours_14.0', 'DeviceType_desktop', 'id_19_529.0', 'R_emaildomain_4_com', 'id_31_browser', 'V11', 'id_31_desktop_False', 'id_30_ios', 'V10', 'V262', 'V235', 'V219', 'V215', 'V200', 'V30', 'card1_4461', 'V108', 'id_30_ios_False', 'id_14_-300.0', 'V212', 'V334', 'id_15_Found', 'V138', 'id_31_ie_False', 'id_31_mobile_False', 'id_29', 'id_20_600.0', 'id_20_489.0', 'V50', 'V246', 'id_20_127.0', 'V157', 'R_emaildomain_outlook.com', 'id_23', 'card3_119.0', 'id_31_samsung', 'V120', 'V186', 'V239', 'DeviceInfo_ALE-L23 Build/HuaweiALE-L23', 'addr2', 'id_31_for_False', 'V140', 'P_emaildomain_1_live', 'V259', 'id_19_567.0', 'V227', 'V18', 'V148', 'V111', 'R_emaildomain_1_infrequent_category', 'id_20_535.0', 'V147', 'id_12', 'V252', 'id_13_25.0', 'V326', 'V338', 'card5_118.0', 'id_13_24.0', 'V182', 'P_emaildomain_yahoo.com.mx', 'card5_223.0', 'V121', 'card1_18132', 'V328', 'V339', 'id_19_390.0', 'id_10', 'card5_150.0', 'id_36_F', 'id_30_linux_False', 'V185', 'R_emaildomain_comcast.net', 'id_31_ios_False', 'P_emaildomain_rocketmail.com', 'id_19_633.0', 'id_13_infrequent_category', 'V27', 'DeviceInfo_SM-G532M Build/MMB29T', 'V22']
# Progressively restrict master_df to smaller feature subsets and snapshot
# each stage to CSV (each keep-list is expected to contain the next one).
temp_list_to_drop = [x for x in list(master_df.columns) if x not in temp_list_to_keep_all]
master_df.drop(columns=temp_list_to_drop, inplace=True)
master_df.to_csv('master_df_top_all.csv', index=False)
temp_list_to_drop = [x for x in list(master_df.columns) if x not in temp_list_to_keep_300]
master_df.drop(columns=temp_list_to_drop, inplace=True)
master_df.to_csv('master_df_top_300.csv', index=False)
temp_list_to_drop = [x for x in list(master_df.columns) if x not in temp_list_to_keep_200]
master_df.drop(columns=temp_list_to_drop, inplace=True)
master_df.to_csv('master_df_top_200.csv', index=False)
temp_list_to_drop = [x for x in list(master_df.columns) if x not in temp_list_to_keep_100]
master_df.drop(columns=temp_list_to_drop, inplace=True)
master_df.to_csv('master_df_top_100.csv', index=False)
```
```
# Remember to execute this cell with Shift+Enter
import sys
sys.path.append('../')
import jupman
```
# Tools and scripts
## [Download exercises zip](../_static/generated/tools.zip)
[Browse files online](https://github.com/DavidLeoni/softpython-en/tree/master/tools)
<div class="alert alert-warning">
**REQUISITES:**
* **Having Python 3 and Jupyter installed:** if you haven't already, see [Installation](https://en.softpython.org/installation.html)
</div>
## Python interpreter
In these tutorials we will make extensive use of the Jupyter notebook editor, because it allows us to comfortably execute Python code, display charts and take notes. But if we only want to make calculations, it is not mandatory at all!
The most immediate way (even if not very practical) to execute Python code is by using the _command line_ interpreter in the so-called _interactive mode,_ that is, having Python wait for commands which are inserted manually one by one. This usage _does not_ require Jupyter; you only need to have Python installed. Note that in Mac OS X and many Linux systems like Ubuntu, Python is already installed by default, although sometimes it might not be version 3. Let's try to figure out which version we have on our system.
### Let's open system console
Open a console (in Windows: system menu -> Anaconda Prompt, in Mac OS X: run the Terminal)
The console shows the so-called command _prompt._ In this _prompt_ you can directly insert commands for the operating system.
<div class="alert alert-warning">
**WARNING**: the commands you give in the prompt are commands in the language of the operating system you are using, **NOT** Python language !!!!!
</div>
In Windows you should see something like this:
```
C:\Users\David>
```
In Mac / Linux it could be something like this:
```bash
david@my-computer:~$
```
### Listing files and folders
In system console, try:
**Windows**: type the command `dir` and press Enter
**Mac or Linux**: type the command `ls` and press Enter.
A listing of all the files in the current folder should appear. In my case, a list like this appears:
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` and `ls` are commands of _the operating system,_ **NOT** of Python !!
</div>
Windows:
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
Mac / Linux:
```
david@david-computer:~$ ls
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegistroDocenteStandard(1).pdf
backupsys java1.log RegistroDocenteStandard.pdf
BaseXData java_error_in_IDEA_14362.log
```
### Let's launch the Python interpreter
In the opened system console, simply type the command `python`:
<div class="alert alert-warning">
**WARNING**: If Python does not run, try typing `python3` with the `3` at the end of `python`
</div>
```
C:\Users\David> python
```
You should see something like this appear (most probably it won't be exactly the same). Note that the Python version is shown in the first row. If it begins with `2.`, then you are not using the right version for this book - in that case exit the interpreter ([see how to exit](#Exiting-the-interpreter)) and then type `python3`
```
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on windows
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
<div class="alert alert-warning">
**CAREFUL** about the triple greater-than `>>>` at the beginning!
The triple greater-than `>>>` at the start tells us that, differently from before, the console is now expecting commands _in Python language._ So the system commands we used before (`cd`, `dir`, ...) will NOT work anymore, or will give different results!
</div>
Now the console is expecting Python commands, so try inserting `3 + 5` and press Enter:
<div class="alert alert-warning">
**WARNING** DO NOT type `>>>`, only type the command which appears afterwards!
</div>
```
>>> 3 + 5
```
The number `8` should appear:
```
8
```
Beyond calculations, we can tell Python to print something with the function `print("ciao")`:
```
>>> print("ciao")
ciao
```
### Exiting the interpreter
To exit the Python interpreter and go back to the system prompt (that is, the one which accepts `cd` and `dir`/`ls` commands), type the Python command `exit()`
After you have exited the Python interpreter, the triple `>>>` should be gone (you should no longer see it at the start of the line).
In Windows, you should see something similar:
```
>>> exit()
C:\Users\David>
```
in Mac / Linux it could be like this:
```
>>> exit()
david@my-computer:~$
```
Now you can go back to executing operating system commands like `dir` and `cd`:
**Windows**:
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
**Mac / Linux**:
```
david@david-computer:~$ ls
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegistroDocenteStandard(1).pdf
backupsys java1.log RegistroDocenteStandard.pdf
BaseXData java_error_in_IDEA_14362.log
```
## Modules
Python modules are simply text files with the extension **.py** (for example `my_script.py`). When you write code in an editor, you are in fact implementing the corresponding module.
In Jupyter we use notebook files with the extension `.ipynb`, but to edit them you necessarily need Jupyter.
With `.py` files (also called _scripts_) we can instead use any text editor, and then tell the interpreter to execute that file. Let's see how to do it.
### Simple text editor
1. With a text editor (_Notepad_ in Windows, or _TextEdit_ in Mac OS X) create a text file, and put this code inside:
```python
x = 3
y = 5
print(x + y)
```
2. Let's try to save it - it seems easy, but often it definitely is not, so read carefully!
<div class="alert alert-warning">
**WARNING**: when you save the file, **make sure it has the extension** `.py` **!!**
</div>
Let's suppose we create the file `my_script.py` inside a folder called `MYFOLDER`:
* **WINDOWS**: if you use _Notepad_, in the save window you have to set _Save as type_ to _All files_ (otherwise the file will be wrongly saved as `my_script.py.txt` !)
* **MAC**: if you use _TextEdit,_ before saving click _Format_ and then _Make Plain Text:_ **if you forget this step, TextEdit's save window will not allow you to save in the right format and you will probably end up with an** `.rtf` **file, which we are not interested in**
3. Open a console (in Windows: system menu -> Anaconda Prompt, in Mac OS X: run the Terminal)
the console opens the so-called _command prompt_. In this _prompt_ you can directly enter commands for the operating system (see the [previous paragraph](#Python-interpreter))
<div class="alert alert-warning">
**WARNING**: the commands you give in the prompt are commands in the language of the operating system you are using, **NOT** Python language !!!!!
</div>
In Windows you should see something like this:
```
C:\Users\David>
```
In Mac / Linux it could be something like this:
```bash
david@my-computer:~$
```
Try for example to type the command `dir` (or `ls` for Mac / Linux) which shows all the files in the current folder. In my case a list like this appears:
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` / `ls` are commands of the _operating system,_ **NOT** Python.
</div>
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
Notice that in the list there is the name MYFOLDER, which is where I put `my_script.py`. To _enter_ the folder from the _prompt,_ you must use the operating system command `cd` like this:
4. To enter a folder called MYFOLDER, type `cd MYFOLDER`:
```
C:\Users\David> cd MYFOLDER
C:\Users\David\MYFOLDER>
```
**What if I get into the wrong folder?**
If by chance you enter the wrong folder, like `DUMBTHINGS`, to go back up one folder type `cd ..` (NOTE: `cd` is followed by one space and TWO dots `..` _one after the other_)
```
C:\Users\David\DUMBTHINGS> cd ..
C:\Users\David\>
```
5. Make sure you are in the folder which contains `my_script.py`. If you aren't, use the commands `cd` and `cd ..` as above to navigate the folders.
Let's see what is present in MYFOLDER with the system command `dir` (or `ls` if on Mac/Linux):
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` (or `ls`) is a command of the _operating system,_ **NOT** Python.
</div>
```
C:\Users\David\MYFOLDER> dir
my_script.py
```
`dir` is telling us that inside `MYFOLDER` there is our file `my_script.py`.
6. From within `MYFOLDER`, type `python my_script.py`
```
C:\Users\David\MYFOLDER>python my_script.py
```
<div class="alert alert-warning">
**WARNING**: if Python does not run, try typing `python3 my_script.py` with `3` at the end of `python`
</div>
If everything went fine, you should see
```
8
C:\Users\David\MYFOLDER>
```
<div class="alert alert-warning">
**WARNING**: After executing a script this way, the console is awaiting new _system_ commands, **NOT** Python commands (so, there shouldn't be any triple greater-than `>>>`)
</div>
### IDE
In these tutorials we work on Jupyter notebooks with extension `.ipynb`, but to edit long `.py` files it's more convenient to use more traditional editors, also called IDEs _(Integrated Development Environments)._ For Python we can use [Spyder](https://www.spyder-ide.org/), [Visual Studio Code](https://code.visualstudio.com/Download) or [PyCharm Community Edition](https://www.jetbrains.com/pycharm/download/).
Unlike Jupyter, these editors make code _debugging_ and _testing_ easier.
Let's try Spyder, which is the easiest - if you have Anaconda, you'll find it inside Anaconda Navigator.
<div class="alert alert-info">
**INFO**: Whenever you run Spyder, it might ask you to perform an upgrade, in these cases you can just click No.
</div>
In the upper-left corner of the editor there is the code of the `.py` file you are editing. Such files are also called _scripts._ In the lower-right corner there is the console with the IPython interpreter (the same one at the heart of Jupyter, here in textual form). When you execute the script, it's like inserting its commands into that interpreter.
- To execute the whole script: press `F5`
- To execute only the current line or the selection: press `F9`
- To clear memory: after many executions the variables in the memory of the interpreter might get values you don't expect. To clear the memory, click on the gear to the right of the console, and select _Restart kernel_
**EXERCISE**: do some tests with the file `my_script.py` we created before:
```python
x = 3
y = 5
print(x + y)
```
- once the code is in the script, hit `F5`
- select only `print(x+y)` and hit F9
- select only `x=3` and hit F9
- click on the gear to the right of the console panel and select _Restart kernel,_ then select only `print(x+y)` and hit F9. What happens?
Remember that if the memory of the interpreter has been cleared with _Restart kernel,_ and you then try executing a code row with variables defined in lines which were not executed before, Python will not know which variables you are referring to and will show a `NameError`.

## Jupyter
Jupyter is an editor that allows to work on so called _notebooks,_ which are files ending with the extension `.ipynb`. They are documents divided in cells where in each cell you can insert commands and immediately see the respective output. Let's try opening this.
1. Unzip [exercises zip](../_static/generated/tools.zip) in a folder, you should obtain something like this:
```
tools
tools-sol.ipynb
tools.ipynb
jupman.py
```
<div class="alert alert-warning">
**WARNING: To correctly visualize the notebook, it MUST be in the unzipped folder.**
</div>
2. open Jupyter Notebook. Two things should appear, first a console and then a browser. In the browser navigate the files to reach the unzipped folder, and open the notebook `tools.ipynb`
<div class="alert alert-warning">
**WARNING: DO NOT click the Upload button in Jupyter**
Just navigate until you reach the file.
</div>
<div class="alert alert-warning">
**WARNING: open the notebook WITHOUT the** `-sol` **at the end!**
Seeing now the solutions is too easy ;-)
</div>
3. Go on reading the exercises file; sometimes you will find paragraphs marked **Exercises** which will ask you to write Python commands in the following cells. Exercises are graded by difficulty, from one star ✪ to four ✪✪✪✪
<div class="alert alert-warning">
**WARNING: In this book we use ONLY PYTHON 3** <br/>
If by chance you obtain weird behaviours, check you are using Python 3 and not 2. If typing `python` makes your operating system run Python 2, try executing Python 3 by typing the command `python3`
</div>
<div class="alert alert-info">
**If you don't find Jupyter / something doesn't work:** have a look at [installation](https://en.softpython.org/installation.html#Jupyter-Notebook)
</div>
Useful shortcuts:
* to execute Python code inside a Jupyter cell, press `Control + Enter`
* to execute Python code inside a Jupyter cell AND select next cell, press `Shift + Enter`
* to execute Python code inside a Jupyter cell AND create a new cell afterwards, press `Alt + Enter`
* when something seems wrong in computations, try cleaning the memory by running `Kernel->Restart and Run all`
**EXERCISE**: Let's try inserting a Python command: type `3 + 5` in the cell below, then, while in that cell, press the key combination `Control+Enter`. As a result, the number `8` should appear
**EXERCISE**: with Python we can write comments by starting a row with a hash `#`. As before, type `3 + 5` in the next cell, but this time type it in the row under the text `# write here`:
```
# write here
```
**EXERCISE**: In every cell Jupyter only shows the result of the last executed row. Try inserting this code in the cell below and executing it by pressing `Control+Enter`. Which result do you see?
```python
3 + 5
1 + 1
```
```
# write here
```
**EXERCISE**: Let's now try to create a new cell.
* While the cursor is in this cell, press `Alt+Enter`. A new cell should be created after the current one.
* In the cell just created, insert `2 + 3` and press `Shift+Enter`. What happens to the cursor? Try to see the difference with `Control+Enter`. If you don't understand the difference, try pressing `Shift+Enter` many times and see what happens.
### Printing an expression
Let's try to assign an expression to a variable:
```
coins = 3 + 2
```
Note that the assignment by itself does not produce any output in the Jupyter cell. We can ask Jupyter for the value of the variable by simply typing its name again in a cell:
```
coins
```
The effect is (almost always) the same we would obtain by explicitly calling the function `print`:
```
print(coins)
```
What's the difference? For our convenience Jupyter will directly show the result of the last executed expression in the cell, but only the last one:
```
coins = 4
2 + 5
coins
```
If we want to be sure to print both, we need to use the function `print`:
```
coins = 4
print(2 + 5)
print(coins)
```
Furthermore, the result of the last expression is shown only in Jupyter notebooks: if you are writing a normal `.py` script and you want to see results, you must in any case use `print`.
If we want to print several expressions on one line, we can pass them as separate parameters to `print`, separating them with commas:
```
coins = 4
print(2+5, coins)
```
To `print` we can pass as many expressions as we want:
```
coins = 4
print(2 + 5, coins, coins*3)
```
If we also want to show some text, we can write it by creating so-called _strings_ between double quotes (we will see strings in much more detail in the next chapters):
```
coins = 4
print("We have", coins, "golden coins, but we would like to have double:", coins * 2)
```
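A small extra detail (optional, not needed for the exercises below): by default `print` separates the items with a single space, but the optional `sep` parameter lets you choose a different separator:

```python
coins = 4
print("We have", coins, "coins")           # items separated by a space (the default)
print("We have", coins, "coins", sep="-")  # → We have-4-coins
```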
**QUESTION**: Have a look at the following expressions, and for each of them try to guess the result it produces. Try verifying your guesses both in Jupyter and in a `.py` file editor like Spyder:
1. ```python
x = 1
x
x
```
1. ```python
x = 1
x = 2
print(x)
```
1. ```python
x = 1
x = 2
x
```
1. ```python
x = 1
print(x)
x = 2
print(x)
```
1. ```python
print(zam)
print(zam)
zam = 1
zam = 2
```
1. ```python
x = 5
print(x,x)
```
1. ```python
x = 5
print(x)
print(x)
```
1. ```python
carpets = 8
length = 5
print("If I have", carpets, "carpets in sequence I walk for",
carpets * length, "meters.")
```
1. ```python
carpets = 8
length = 5
print("If", "I","have", carpets, "carpets","in", "sequence",
"I", "walk", "for", carpets * length, "meters.")
```
### Exercise - Castles in the air
Given two variables
```python
castles = 7
dirigibles = 4
```
write some code to print:
```
I've built 7 castles in the air
I have 4 steam dirigibles
I want a dirigible parked at each castle
So I will buy other 3 at the Steam Market
```
- **DO NOT** put numerical constants in your code like `7`, `4` or `3`! Write generic code which only uses the provided variables.
```
#jupman-purge-output
castles = 7
dirigibles = 4
# write here
print("I've built",castles, "castles in the air")
print("I have", dirigibles, "steam dirigibles")
print("I want a dirigible parked at each castle")
print("So I will buy other", castles - dirigibles, "at the Steam Market")
```
## Visualizing the execution with Python Tutor
We have seen some of the main data types. Before going further, let's see the right tools to understand what happens when we execute the code.
[Python tutor](http://pythontutor.com/) is a very good website for visualizing Python code execution online, allowing you to step forward and _back_ in the code flow. Exploit it as much as you can: it should work with many of the examples we shall see in the book. Let's now try an example.
**Python tutor 1/4**
Go to [pythontutor.com](http://pythontutor.com/) and select _Python 3_

**Python tutor 2/4**
Make sure at least Python 3.6 is selected:

**Python tutor 3/4**
**Try inserting:**
```python
x = 5
y = 7
z = x + y
```

**Python tutor 4/4**
**By clicking on Next, you will see the changes in Python memory**

### Debugging code in Jupyter
Python Tutor is fantastic, but when you execute code in Jupyter and it doesn't work, what can you do? To inspect the execution, the editor usually provides a tool called a _debugger,_ which allows you to execute instructions one by one. At present (August 2018), the Jupyter debugger is called [pdb](https://davidhamann.de/2017/04/22/debugging-jupyter-notebooks/) and it is extremely limited. To overcome its limitations, in this book we invented a custom solution which exploits Python Tutor.
If you insert Python code in a cell, and then **at the cell end** you write the instruction `jupman.pytut()`, the preceding code will be visualized inside Jupyter notebook with Python Tutor, as if by magic.
<div class="alert alert-warning">
**WARNING**: `jupman` is a collection of support functions we created just for this book.
Whenever you see commands which start with `jupman`, to make them work you need to first execute the cell at the beginning of the document. For convenience we report that cell here. If you haven't already, execute it now.
</div>
```
# Remember to execute this cell with Control+Enter
# These commands tell Python where to find the file jupman.py
import sys;
sys.path.append('../');
import jupman;
```
Now we are ready to try Python Tutor with the magic function `jupman.pytut()`:
```
x = 5
y = 7
z = x + y
jupman.pytut()
```
#### Python Tutor : Limitation 1
Python Tutor is handy, but there are important limitations:
<div class="alert alert-warning">
**ATTENTION**: Python Tutor only looks inside one cell!
Whenever you use Python Tutor inside Jupyter, the only code Python Tutor considers is the one inside the cell containing the command `jupman.pytut()`
</div>
So, for example, of the two following cells only `print(w)` will appear inside Python Tutor, without the `w = 3`. If you try clicking _Forward_ in Python Tutor, you will be warned that `w` was not defined.
```
w = 3
```
```
print(w)
jupman.pytut()
```
To have it work in Python Tutor you must put ALL the code in the SAME cell:
```
w = 3
print(w)
jupman.pytut()
```
#### Python Tutor : Limitation 2
<div class="alert alert-warning">
**WARNING: Python Tutor only uses functions from the standard Python distribution**
Python Tutor is good for inspecting simple algorithms with basic Python functions; if you use third-party libraries it will not work.
</div>
If you use some library like `numpy`, you can try - **only online** - selecting `Python 3.6 with Anaconda`:

### Exercise - tavern
Given the variables
```python
pirates = 10
each_wants = 5 # mugs of grog
kegs = 4
keg_capacity = 20 # mugs of grog
```
Try writing some code which prints:
```
In the tavern there are 10 pirates, each wants 5 mugs of grog
We have 4 kegs full of grog
From each keg we can take 20 mugs
Tonight the pirates will drink 50 mugs, and 30 will remain for tomorrow
```
- **DO NOT** use numerical constants in your code, instead try using the proposed variables
- To keep track of the remaining mugs, make a variable `remaining_mugs`
- if you are using Jupyter, try using `jupman.pytut()` at the cell end to visualize execution
```
pirates = 10
each_wants = 5 # mugs of grog
kegs = 4
keg_capacity = 20 # mugs of grog
# write here
print("In the tavern there are", pirates, "pirates, each wants", each_wants, "mugs of grog")
print("We have", kegs, "kegs full of grog")
print("From each keg we can take", keg_capacity,"mugs")
remaining_mugs = kegs*keg_capacity - pirates*each_wants
print("Tonight the pirates will drink", pirates * each_wants, "mugs, and", remaining_mugs, "will remain for tomorrow")
#jupman.pytut()
```
## Python Architecture
While not strictly fundamental to understand the book, the following part is useful to understand what happens under the hood when you execute commands.
Let's go back to Jupyter: the notebook editor Jupyter is a very powerful and flexible tool which allows executing not only Python code, but also code written in other programming languages (R, Bash, etc.) and formatting languages (HTML, Markdown, LaTeX, etc.).
We must keep in mind that the Python code we insert in the cells of Jupyter notebooks (the files with extension `.ipynb`) is certainly not magically understood by your computer. Under the hood, a lot of transformations are performed so that your computer's processor can understand the instructions to be executed. We report here the main transformations which happen, from Jupyter down to the processor (CPU):
### Python is a high level language
Let's try to understand well what happens when you execute a cell:
1. **source code**: First Jupyter checks if you wrote some Python _source code_ in the cell (it could also be other programming languages like R, Bash, or formatting like Markdown ...). By default Jupyter assumes your code is Python. Let's suppose there is the following code:
```python
x = 3
y = 5
print(x + y)
```
**EXERCISE**: Without going into code details, try copy/pasting it into the cell below. Making sure the cursor is in the cell, execute it with `Control + Enter`. When you execute it, an `8` should appear as the calculation result. The `# write down here`, like all rows beginning with a hash `#`, is only a comment which will be ignored by Python
```
# write down here
```
If you managed to execute the code, you can congratulate Python! It allowed you to execute a program written in a quite comprehensible language _independently_ from your operating system (Windows, Mac Os X, Linux ...) and from the processor of your computer (x86, ARM, ...)! Not only that, the notebook editor Jupyter also placed the result in your browser.
In detail, what happened? Let's see:
2. **bytecode**: When requesting the execution, Jupyter took the text written in the cell and sent it to the so-called _Python compiler,_ which transformed it into _bytecode_. The _bytecode_ is a longer sequence of instructions which is less intelligible for us humans (**this is only an example, there is no need to understand it!!**):
```
2 0 LOAD_CONST 1 (3)
3 STORE_FAST 0 (x)
3 6 LOAD_CONST 2 (5)
9 STORE_FAST 1 (y)
4 12 LOAD_GLOBAL 0 (print)
15 LOAD_FAST 0 (x)
18 LOAD_FAST 1 (y)
21 BINARY_ADD
22 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
25 POP_TOP
26 LOAD_CONST 0 (None)
29 RETURN_VALUE
```
3. **machine code**: The _Python interpreter_ took the _bytecode_ above one instruction at a time, and converted it into _machine code_ which can actually be understood by the processor (CPU) of your computer. To us the _machine code_ may look even longer and uglier than the _bytecode,_ but the processor is happy and by reading it produces the program results. Example of _machine code_ (**it is just an example, you do not need to understand it!!**):
```
mult:
push rbp
mov rbp, rsp
mov eax, 0
mult_loop:
cmp edi, 0
je mult_end
add eax, esi
sub edi, 1
jmp mult_loop
mult_end:
pop rbp
ret
```
We summarize in a table what we said above. In the table we explicitly write the file extension in which the various code formats can be written:
- The ones interesting for us are Jupyter notebooks `.ipynb` and Python source code files `.py`
- `.pyc` files may be generated by the compiler when reading `.py` files, but they are not interesting to us: we will never need to edit them
- `.asm` machine code also doesn't matter for us
| Tool | Language| File extension | Example|
|-----|-----------|---------|---|
| Jupyter Notebook| Python| .ipynb||
| Python Compiler | Python source code | .py |`x = 3`<br> `y = 5`<br> `print(x + y)`|
| Python Interpreter | Python bytecode | .pyc| `0 LOAD_CONST 1 (3)`<br>`3 STORE_FAST 0 (x)`|
| Processor (CPU) | Machine code| .asm |`cmp edi, 0`<br>`je mult_end`|
Now that we have an idea of what happens, we can perhaps better understand the statement _Python is a high level language,_ that is, it's positioned high in the above table: when we write Python code, we are not interested in the generated _bytecode_ or _machine code,_ we can **just focus on the program logic**.
Besides, the Python code we write is **independent from the pc architecture**: if we have a Python interpreter installed on a computer, it will take care of converting the high-level code into the machine code of that particular architecture, which includes the operating system (Windows / Mac Os X / Linux) and processor (x86, ARM, PowerPC, etc).
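If you're curious, you can peek at the bytecode yourself with the `dis` module from the standard library (the exact instructions you see depend on your Python version, so they may differ from the example above):

```python
import dis

def example():
    x = 3
    y = 5
    print(x + y)

# Show the bytecode the compiler generated for the function above
dis.dis(example)
```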
### Performance
Everything has a price. If we want to write programs focusing only on the _high level logic_ without entering into the details of how they get interpreted by the processor, we typically need to give up some _performance._ Since Python is an _interpreted_ language, it has the downside of being slow. What if we really need efficiency? Luckily, Python can be extended with code written in the _C language,_ which typically is much more performant. Actually, even if you won't notice it, many Python functions under the hood are directly written in the fast C language. If you really need performance (not in this book!) it might be worth writing a prototype in Python first and, once you've established that it works, translating it into _C_ by using the [Cython compiler](http://cython.org/) and manually optimizing the generated code.
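As a rough illustration (a sketch, not a serious benchmark - the actual timings depend on your machine): the built-in `sum`, which under the hood is written in C, is typically faster than the equivalent loop written in pure Python. You can compare them with the standard `timeit` module:

```python
import timeit

def py_sum(n):
    """Sum the numbers 0..n-1 with an explicit Python loop."""
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
loop_time = timeit.timeit(lambda: py_sum(n), number=20)
c_time = timeit.timeit(lambda: sum(range(n)), number=20)
print(f"Python loop: {loop_time:.3f}s   built-in sum: {c_time:.3f}s")
```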
```
import copy
if __name__ == '__main__':
%run Tests.ipynb
%run MoleculeGenerator2.ipynb
%run Discrim.ipynb
%run Rewards.ipynb
%run PPO_WITH_TRICKS.ipynb
%run ChemEnv.ipynb
%run SupervisedPreTraining.ipynb
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# wants: a single class for pretraining and rl training
# also want a single logger for everything
# also should put in cross validation for the supervised portion
# means a logger instance in the init method
class SupervisedToReinforcement():
def __init__(self,run_title, rewards_list, chem_env_kwargs, PPO_kwargs, svw_kwargs):
self.run_title = run_title
self.writer = SummaryWriter(f'./tb_logs/{run_title}/{run_title}_logs')
self.reward_module = FinalRewardModule(self.writer,rewards_list)
chem_env_kwargs['num_chunks'] = train_kwargs['num_chunks']
chem_env_kwargs['RewardModule'] = self.reward_module
chem_env_kwargs['writer'] = self.writer
self.ChemEnv = ChemEnv(**chem_env_kwargs)
input_dim = chem_env_kwargs['num_node_feats']
#self.policy = Spin2(input_dim,300,chem_env_kwargs['num_atom_types']).cuda()
self.policy = BaseLine(input_dim,800,chem_env_kwargs['num_atom_types']+1).cuda()
self.policy.apply(init_weights_recursive)
svw_kwargs['writer'] = self.writer
svw_kwargs['input_dim'] = input_dim
svw_kwargs['num_atom_types'] = chem_env_kwargs['num_atom_types']
print(svw_kwargs)
self.svw = Supervised_Trainer(self.policy, **svw_kwargs)
PPO_kwargs['env'] = self.ChemEnv
PPO_kwargs['actor'] = self.policy
PPO_kwargs['writer'] = self.writer
self.PPO = PPO_MAIN(**PPO_kwargs)
self.PPO.to_device(device)
def Train(self,total_epochs, batch_size, epochs_per_chunk, num_chunks, PPO_steps, cv_path):
self.svw.TrainModel(total_epochs)
# torch.save({
# 'model_state_dict': self.policy.state_dict(),
# 'optimizer_state_dict': self.svw.optim.state_dict()
# }, f'./{self.run_title}/SavedModel')
print("fra")
# self.PPO.learn(PPO_steps)
%run SupervisedPreTraining.ipynb
# rewards_list = [SizeSynth_norm()]
# rewards_list = [Synthesizability(), SizeReward()]
rewards_list = [ Synthesizability()]
chem_env_kwargs = {'max_nodes' : 12,
'num_atom_types' : 17,
'num_node_feats' : 54,
'num_edge_types' : 3,
'bond_padding' : 12,
'mol_featurizer': mol_to_graph_full,
'RewardModule' : None,
'writer' : None}
PPO_kwargs = {'env' : None,
'batch_size' : 32,
'timesteps_per_batch' : 1200,
'clip' : 0.08,
'a_lr' : 1e-4,
'c_lr' : 3e-4,
'n_updates_per_iteration' : 6,
'max_timesteps_per_episode' : 40,
'gamma' : .95,
'actor' : None}
svw_kwargs = {'batch_size' : 128, 'data_set_size' : 507528}
train_kwargs = {'total_epochs' : 15,
'batch_size' : 256,
'epochs_per_chunk' : 1,
'num_chunks' : 0,
'cv_path' : './CrossVal/chunk_11',
'PPO_steps' : 150000}
%run ChemEnv.ipynb
svtr = SupervisedToReinforcement('test_18',rewards_list,chem_env_kwargs,PPO_kwargs,svw_kwargs)
svtr.Train(**train_kwargs)
Chem.MolFromSmiles('CCCN(CC)C(=O)S')
svtr.PPO.inference()
env = svtr.ChemEnv
env.assignMol(Chem.MolFromSmiles('CCC(C)C(=O)O'))
print(env.last_action_node)
env.StateSpace
env.step(0,verbose=True)
env.StateSpace
PPO_kwargs = {'env' : env,
'batch_size' : 32,
'timesteps_per_batch' : 1200,
'clip' : 0.08,
'a_lr' : 1e-4,
'c_lr' : 3e-4,
'n_updates_per_iteration' : 6,
'max_timesteps_per_episode' : 40,
'gamma' : .95,
'actor' : svtr.svw.policy,
'writer': SummaryWriter(f'./tb_logs/3/3_logs')}
ppo_test = PPO_MAIN(**PPO_kwargs)
ppo_test.inference()
Chem.MolFromSmiles('CCC(C(N)=O)N1CC(C)CC1=O')
Chem.MolFromSmiles('CCCNC(=O)n1ccnc1C')
env.assignMol(Chem.MolFromSmiles('C.C'))
env.step(19,verbose=True)
env.StateSpace
chem_env_kwargs = {'max_nodes' : 12,
'num_atom_types' : 17,
'num_node_feats' : 54,
'num_edge_types' : 3,
'bond_padding' : 12,
'mol_featurizer': mol_to_graph_full,
'RewardModule' : rewards_list,
'writer' : SummaryWriter(f'./tb_logs/3/3_logs'),
'num_chunks': 1}
%run ChemEnv.ipynb
env = ChemEnv(**chem_env_kwargs)
env.assignMol(Chem.MolFromSmiles('CCC.N'))
env.step(2, verbose=True)
env.StateSpace
ppo_test = PPO_MAIN(**PPO_kwargs)
svtr.PPO.actor = svtr.policy
Chem.MolFromSmiles('CCC.N')
ppo_test.inference(True)
torch.save({
'model_state_dict': svtr.policy.state_dict(),
'optimizer_state_dict': svtr.svw.optim.state_dict()
}, './test_1/ah')
svtr.policy.state_dict()
model = Spin2(54,300,17)
model.load_state_dict(svtr.policy.state_dict())
%run ChemEnv.ipynb
svtr = SupervisedToReinforcement('test',rewards_list,chem_env_kwargs,PPO_kwargs)
env = svtr.ChemEnv
svtr.PPO.inference()
torch.save(svtr.PPO.actor.state_dict(), './model')
env = svtr.ChemEnv
env.reset()
env.step(14)
env.step(17)
env.step(14)
env.StateSpace
(Chem.MolFromSmiles('NCc1cccc([SH]=O)c1', sanitize = True))
Chem.MolFromSmiles('Nc1cc2ccc1SSC(S)C2O.c1ccnc1', sanitize = False)
env.reset()
#env.StateSpace = Chem.RWMol(Chem.MolFromSmiles('Nc1cc2ccc1SSC(S)C2O.c1ccnc1', sanitize = False))
#env.step(16)
#env.addEdge(1,0)
env.addBenzine()
env.addEdge(1,0)
env.StateSpace
env.addPyrrole()
env.addEdge(1,11)
# env.StateSpace.RemoveAtom(17)
# env.StateSpace.RemoveAtom(16)
# env.StateSpace.RemoveAtom(15)
# env.StateSpace.RemoveAtom(14)
# env.StateSpace.RemoveAtom(13)
#Chem.SanitizeMol(env.StateSpace)
env.StateSpace
for atom in env.StateSpace.GetAtoms():
print(atom.GetDegree(),atom.GetSymbol(),atom.GetIsAromatic())
t_mol = Chem.RWMol(Chem.MolFromSmiles('FC(CBr)c1ccccc1',sanitize = True))
t_mol
env.reset()
env.addBenzine()
env.addEdge(2,0)
env.StateSpace
t_mol = Chem.RWMol(Chem.MolFromSmiles('FC(CBr)c1ccccc1',sanitize = True))
env = svtr.ChemEnv
env.reset()
env.StateSpace = t_mol
# env.StateSpace
env.addEdge(2,7)
env.StateSpace
env = svtr.ChemEnv
env.reset()
# env.addPyrrole()
env.addBenzine()
env.addEdge(1,2)
# env.addNode('C')
# env.addEdge(2,4)
#env.addNode('C')
#env.addEdge(1,3)
env.StateSpace
mol2 = SanitizeNoKEKU(mol2)
mol2
mol2 = Chem.RWMol(Chem.MolFromSmiles('O=CC(=Bc1ccccc1P)P(Br)c1ccccc1.[NaH]', sanitize = True))
mol1 = Chem.RWMol(Chem.MolFromSmiles('CC.c1ccnc1', sanitize = False))
mol2.UpdatePropertyCache()
#mol2.AddAtom(Chem.Atom('C'))
#mol2.AddBond(0,5,Chem.BondType.SINGLE)
# print(mol2.NeedsUpdatePropertyCache())
# mol2.UpdatePropertyCache()
Chem.SanitizeMol(mol2)
mol1.AddBond(0,5,Chem.BondType.SINGLE)
Chem.SanitizeMol(mol1)
mol2
for atom in mol2.GetAtoms():
print(atom.GetSymbol(),atom.GetImplicitValence())
SanitizeNoKEKU(mol2)
cycles = list(mol2.GetRingInfo().AtomRings())
for cycle in cycles:
for atom_idx in cycle:
bonds = mol2.GetAtomWithIdx(atom_idx).GetBonds()
for bond_x in bonds:
if bond_x.GetBondType() == Chem.BondType.DOUBLE:
print("fraraf")
for atom in mol2.GetAtoms():
atom.UpdatePropertyCache()
print(atom.GetExplicitValence())
for bond in atom.GetBonds():
print(bond.GetBondType())
#env.reset()
env.addPyrrole()
env.StateSpace
env.step(17)
mol = Chem.MolFromSmiles('n1cccc1', sanitize = False)
mol.UpdatePropertyCache()
for bond in mol.GetBonds():
print(bond.GetBondType())
mol
Chem.MolFromSmiles('[nH]1cccc1')
def SanitizeNoKEKU(mol):
s_dict = {'SANITIZE_ADJUSTHS': Chem.rdmolops.SanitizeFlags.SANITIZE_ADJUSTHS,
'SANITIZE_ALL': Chem.rdmolops.SanitizeFlags.SANITIZE_ALL,
'SANITIZE_CLEANUP': Chem.rdmolops.SanitizeFlags.SANITIZE_CLEANUP,
'SANITIZE_CLEANUPCHIRALITY': Chem.rdmolops.SanitizeFlags.SANITIZE_CLEANUPCHIRALITY,
'SANITIZE_FINDRADICALS': Chem.rdmolops.SanitizeFlags.SANITIZE_FINDRADICALS,
'SANITIZE_KEKULIZE': Chem.rdmolops.SanitizeFlags.SANITIZE_KEKULIZE,
'SANITIZE_NONE': Chem.rdmolops.SanitizeFlags.SANITIZE_NONE,
'SANITIZE_PROPERTIES': Chem.rdmolops.SanitizeFlags.SANITIZE_PROPERTIES,
'SANITIZE_SETAROMATICITY': Chem.rdmolops.SanitizeFlags.SANITIZE_SETAROMATICITY,
'SANITIZE_SETCONJUGATION': Chem.rdmolops.SanitizeFlags.SANITIZE_SETCONJUGATION,
'SANITIZE_SETHYBRIDIZATION': Chem.rdmolops.SanitizeFlags.SANITIZE_SETHYBRIDIZATION,
'SANITIZE_SYMMRINGS': Chem.rdmolops.SanitizeFlags.SANITIZE_SYMMRINGS}
#mol = Chem.SanitizeMol(mol,s_dict['SANITIZE_KEKULIZE'])
mol = Chem.SanitizeMol(mol, s_dict['SANITIZE_ADJUSTHS'] | s_dict['SANITIZE_SETAROMATICITY'] |
s_dict['SANITIZE_CLEANUP'] | s_dict['SANITIZE_CLEANUPCHIRALITY'] |
s_dict['SANITIZE_FINDRADICALS'] | s_dict['SANITIZE_NONE'] |
s_dict['SANITIZE_PROPERTIES'] | s_dict['SANITIZE_SETCONJUGATION'] |
s_dict['SANITIZE_SETHYBRIDIZATION'] | s_dict['SANITIZE_SYMMRINGS']
)
return mol
True | False
mol = Chem.RWMol(Chem.MolFromSmiles('CC.c1ccnc1', sanitize = False))
#mol.AddBond(8,mol.GetNumAtoms()-1,Chem.BondType.SINGLE)
print(SanitizeNoKEKU(mol))
print(mol.GetAromaticAtoms().__len__())
mol
from rdkit import Chem
m = Chem.MolFromSmiles('CN(C)(C)C', sanitize=False)
problems = Chem.DetectChemistryProblems(m)
print(len(problems))
m
SanitizeNoKEKU(m)
Chem.SanitizeFlags.SANITIZE_ADJUSTHS
print(problems[0].GetType())
#print(problems[0].GetAtomIdx())
print(problems[0].Message())
Chem.MolFromSmiles('CN1C=CC=CC1=O')
Chem.MolFromSmiles('CN(C)(C)C', sanitize=False)
# wants: a single class for pretraining and rl training
# also want a single logger for everything
# also should put in cross validation for the supervised portion
# means a logger instance in the init method
class SupervisedToReinforcement():
def __init__(self, PPO_env, PPO_Train_Steps, policy_model,rewards, run_title):
self.writer = SummaryWriter(f'./{run_title}')
self.reward_module = FinalRewardModule(self.writer, rewards)
self.PPO_env = PPO_env
self.PPO_Train_Steps = PPO_Train_Steps
self.SV_trainer = Supervised_Trainer(policy_model)
self.SV_trainer.writer = self.writer
self.PPO_env.env.RewardModule = self.reward_module
self.PPO_env.actor = policy_model
def Train(self):
self.SV_trainer.Train(20, 16, 1, 24)
self.PPO_env.learn(self.PPO_Train_Steps)
class AdversarialTraining():
def __init__(self, PPO_agent,Disc, epochs, G_steps,
D_steps, K, G_pretrain_steps, D_train_size,
D_batch_size,pre_train_env, smiles_values):
self.PPO_agent = PPO_agent
self.Disc = Disc
self.epochs = epochs
self.G_steps = G_steps
self.D_steps = D_steps
self.K = K
self.G_pretrain_steps = G_pretrain_steps
self.pre_train_env = pre_train_env
self.D_batch_size = D_batch_size
self.D_train_size = D_train_size
self.smiles_values = smiles_values
def mini_batch_reward_train(self, batch_size, num_batch):
for j in range(num_batch):
graphs = self.PPO_agent.generate_graphs(batch_size)
for model in self.reward_models:
model.TrainOnBatch(graphs)
def _preTrain(self):
# leftover parameter list (not valid code): env, batch_size, timesteps_per_batch,
# clip, a_lr, c_lr, n_updates_per_iteration, max_timesteps_per_episode, gamma
t_dict = vars(self.PPO_agent)
PPO_agent_pre = PPO_MAIN(t_dict['env'],t_dict['batch_size'],t_dict['timesteps_per_batch'],
t_dict['clip'],t_dict['a_lr'], t_dict['c_lr'],
t_dict['n_updates_per_iteration'],t_dict['max_timesteps_per_episode'],
t_dict['gamma'])
PPO_agent_pre.learn(self.G_pretrain_steps)
self.PPO_agent.assignActor(PPO_agent_pre.actor)
def pull_real_samples(self, g_number):
graphs = smiles_to_graph([random.choice(self.smiles_values) for _ in range(g_number)])
print(len(graphs), "graph len")
return graphs
def i_hate_python(self):
a = self.PPO_agent.generate_graphs(10)
def train(self, epochs):
self._preTrain()
for epoch in range(epochs):
print('G_train')
self.PPO_agent.learn(self.G_steps)
print('D_train')
for d_step in range(self.D_steps):
x_fake = self.PPO_agent.generate_graphs(self.D_steps)
x_real = self.pull_real_samples(self.D_train_size)
                for k_step in range(self.K):
                    slices = list(range(0, self.D_train_size, self.D_batch_size)) + [self.D_train_size]
                    for idx in range(1, len(slices)):
                        slice_ = slice(slices[idx - 1], slices[idx])
                        x_fake_batch = x_fake[slice_]
                        if len(x_fake_batch) > 0:  # x_fake may be shorter than x_real
                            Y_fake_batch = torch.zeros(len(x_fake_batch), 1)
                            x_real_batch = x_real[slice_]
                            Y_real_batch = torch.ones(len(x_real_batch), 1)
                            self.Disc.train(x_fake_batch, Y_fake_batch)
                            self.Disc.train(x_real_batch, Y_real_batch)
```
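The discriminator loop above slices both sample sets with the same boundary list. Here is a standalone sketch of that slicing logic (with a hypothetical helper `batch_slices`), showing why the empty-batch guard matters when fewer fake samples than real ones are generated:

```python
def batch_slices(n_items, batch_size):
    # Mirrors the boundary list built in AdversarialTraining.train
    bounds = list(range(0, n_items, batch_size)) + [n_items]
    return [slice(bounds[i - 1], bounds[i]) for i in range(1, len(bounds))]

x_real = list(range(10))  # stand-in: 10 real graphs
x_fake = list(range(4))   # stand-in: fewer fake graphs were generated

for s in batch_slices(len(x_real), 4):
    fake_batch, real_batch = x_fake[s], x_real[s]
    # Past the end of x_fake, slicing yields an empty list; this is what
    # the `if len(x_fake_batch) > 0` guard protects against.
    print(s, len(fake_batch), len(real_batch))
```

Note that the final boundary entry keeps the last, possibly shorter, batch (here of size 2) rather than dropping it.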
---
Lambda School Data Science, Unit 2: Predictive Modeling
# Applied Modeling, Module 3
### Objective
- Visualize and interpret partial dependence plots
### Links
- [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
- [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
### Three types of model explanations this unit:
#### 1. Global model explanation: all features in relation to each other _(Last Week)_
- Feature Importances: _Default, fastest, good for first estimates_
- Drop-Column Importances: _The best in theory, but much too slow in practice_
- Permutation Importances: _A good compromise!_
#### 2. Global model explanation: individual feature(s) in relation to target _(Today)_
- Partial Dependence plots
#### 3. Individual prediction explanation _(Tomorrow)_
- Shapley Values
_Note that the coefficients from a linear model give you all three types of explanations!_
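To make that note concrete, here is a minimal pure-numpy sketch (synthetic data, no intercept; every name is illustrative) of how one set of linear coefficients serves all three roles:

```python
import numpy as np

# Toy data: y depends strongly on feature 0, weakly and negatively on feature 1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# 1. Global importance: |coefficient| ranks features (inputs are standardized here)
print('importance ranking:', np.argsort(-np.abs(coef)))

# 2. Feature vs. target: each coefficient is the slope of the partial effect
print('slopes:', coef)

# 3. Individual prediction: per-feature contributions for a single row
row = X[0]
print('contributions:', row * coef, '-> prediction:', row @ coef)
```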
### Setup
#### If you're using [Anaconda](https://www.anaconda.com/distribution/) locally
Install required Python packages, if you haven't already:
- [category_encoders](https://github.com/scikit-learn-contrib/categorical-encoding), version >= 2.0: `conda install -c conda-forge category_encoders` / `pip install category_encoders`
- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`
- [Plotly](https://medium.com/plotly/plotly-py-4-0-is-here-offline-only-express-first-displayable-anywhere-fc444e5659ee), version >= 4.0
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python package:
# category_encoders, version >= 2.0
!pip install --upgrade category_encoders pdpbox plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Applied-Modeling.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
```
## Lending Club: Predict interest rate
```
import pandas as pd
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history_location = '../data/lending-club/lending-club-subset.csv'
history = pd.read_csv(history_location)
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Just use 36 month loans
history = history[history.term==' 36 months']
# Index & sort by issue date
history = history.set_index('issue_d').sort_index()
# Clean data, engineer feature, & select subset of features
history = history.rename(columns=
{'annual_inc': 'Annual Income',
'fico_range_high': 'Credit Score',
'funded_amnt': 'Loan Amount',
'title': 'Loan Purpose'})
history['Interest Rate'] = history['int_rate'].str.strip('%').astype(float)
history['Monthly Debts'] = history['Annual Income'] / 12 * history['dti'] / 100
columns = ['Annual Income',
'Credit Score',
'Loan Amount',
'Loan Purpose',
'Monthly Debts',
'Interest Rate']
history = history[columns]
history = history.dropna()
# Test on the last 10,000 loans,
# Validate on the 10,000 before that,
# Train on the rest
test = history[-10000:]
val = history[-20000:-10000]
train = history[:-20000]
# Assign to X, y
target = 'Interest Rate'
features = history.columns.drop('Interest Rate')
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
# The target has some right skew.
# It's not bad, but we'll log transform anyway
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
# Log transform the target
import numpy as np
y_train_log = np.log1p(y_train)
y_val_log = np.log1p(y_val)
y_test_log = np.log1p(y_test)
# Plot the transformed target's distribution
sns.distplot(y_train_log);
```
### Fit Linear Regression model, with original target
```
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.OrdinalEncoder(), # Not ideal for Linear Regression
StandardScaler(),
LinearRegression()
)
lr.fit(X_train, y_train)
print('Linear Regression R^2', lr.score(X_val, y_val))
```
### Fit Gradient Boosting model, with log transformed target
```
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train_log)
# print('Gradient Boosting R^2', gb.score(X_val, y_val_log))
# Convert back away from log space
```
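The commented-out score above would compare values in log space. One hedged way to evaluate in the original units (`r2_after_expm1` is a hypothetical helper; in the notebook it would be called as `r2_after_expm1(y_val.values, gb.predict(X_val))`):

```python
import numpy as np

def r2_after_expm1(y_true, y_pred_log):
    """R^2 in the original units for a model trained on log1p(y)."""
    y_pred = np.expm1(y_pred_log)  # undo np.log1p
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Toy check: perfect log-space predictions give R^2 of 1.0
y = np.array([5.0, 10.0, 20.0, 15.0])
print(r2_after_expm1(y, np.log1p(y)))
```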
### Explaining Linear Regression
```
example = X_val.iloc[[0]]
example
pred = lr.predict(example)[0]
print(f'Predicted Interest Rate: {pred:.2f}%')
def predict(model, example, log=False):
print('Vary income, hold other features constant', '\n')
example = example.copy()
preds = []
for income in range(20000, 200000, 20000):
example['Annual Income'] = income
pred = model.predict(example)[0]
if log:
pred = np.expm1(pred)
print(f'Predicted Interest Rate: {pred:.3f}%')
print(example.to_string(), '\n')
preds.append(pred)
print('Difference between predictions')
print(np.diff(preds))
predict(lr, example)
example2 = X_val.iloc[[2]]
predict(lr, example2);
```
### Explaining Gradient Boosting???
```
predict(gb, example, log=True)
predict(gb, example2, log=True)
```
## Partial Dependence Plots
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):
>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction.
[Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/pdp.html#examples)
> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.
> 1. Define grid along feature
> 2. Model predictions at grid points
> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve
> 4. Average curves to get a PDP (Partial Dependence Plot)
```
%matplotlib inline
import matplotlib.pyplot as plt
examples = pd.concat([example, example2])
for income in range(20000, 200000, 20000):
examples['Annual Income'] = income
preds_log = gb.predict(examples)
preds = np.expm1(preds_log)
for pred in preds:
plt.scatter(income, pred, color='grey')
plt.scatter(income, np.mean(preds), color='red')
```
## Partial Dependence Plots with 1 feature
#### PDPbox
- [Gallery](https://github.com/SauceCat/PDPbox#gallery)
- [API Reference: pdp_isolate](https://pdpbox.readthedocs.io/en/latest/pdp_isolate.html)
- [API Reference: pdp_plot](https://pdpbox.readthedocs.io/en/latest/pdp_plot.html)
```
# Later, when you save matplotlib images to include in blog posts or web apps,
# increase the dots per inch (double it), so the text isn't so fuzzy
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
```
#### You can customize it
PDPbox
- [API Reference: PDPIsolate](https://pdpbox.readthedocs.io/en/latest/PDPIsolate.html)
```
```
## Partial Dependence Plots with 2 features
See interactions!
PDPbox
- [Gallery](https://github.com/SauceCat/PDPbox#gallery)
- [API Reference: pdp_interact](https://pdpbox.readthedocs.io/en/latest/pdp_interact.html)
- [API Reference: pdp_interact_plot](https://pdpbox.readthedocs.io/en/latest/pdp_interact_plot.html)
Be aware of a bug in PDPBox version <= 0.20:
- With the `pdp_interact_plot` function, `plot_type='contour'` gets an error, but `plot_type='grid'` works
- This will be fixed in the next release of PDPbox: https://github.com/SauceCat/PDPbox/issues/40
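Conceptually, `pdp_interact` averages model predictions over a 2-D grid of the two chosen features. A minimal numpy sketch with a synthetic stand-in model (all names are illustrative):

```python
import numpy as np

# Stand-in "model": the prediction depends on an interaction of f0 and f1
def model_fn(f0, f1, other):
    return f0 * f1 + 0.1 * other

rng = np.random.default_rng(0)
other = rng.normal(size=50)        # remaining feature, held at observed values

grid0 = np.array([0.0, 1.0, 2.0])  # grid for feature 0
grid1 = np.array([0.0, 1.0])       # grid for feature 1

# For each grid cell, overwrite (f0, f1) on every row, then average predictions
pdp_grid = np.array([[model_fn(g0, g1, other).mean() for g0 in grid0]
                     for g1 in grid1])
print(pdp_grid.shape)  # one averaged prediction per (f1, f0) cell
```

The resulting grid is exactly what the contour, grid, or heatmap plots visualize.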
```
```
### 3D with Plotly!
```
```
# Partial Dependence Plots with categorical features
1. I recommend you use Ordinal Encoder, outside of a pipeline, to encode your data first. Then use the encoded data with pdpbox.
2. There's some extra work to get readable category names on your plot, instead of integer category codes.
```
# Fit a model on Titanic data
import category_encoders as ce
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
df = sns.load_dataset('titanic')
df.age = df.age.fillna(df.age.median())
df = df.drop(columns='deck')
df = df.dropna()
target = 'survived'
features = df.columns.drop(['survived', 'alive'])
X = df[features]
y = df[target]
# Use Ordinal
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
# Use Pdpbox
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
# Look at the encoder's mappings
encoder.mapping
pdp.pdp_plot(pdp_dist, feature)
# Manually change the xticks labels
plt.xticks([1, 2], ['male', 'female']);
# Let's automate it
feature = 'sex'
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
# Use Pdpbox
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature)
# Automatically change the xticks labels
plt.xticks(category_codes, category_names);
features = ['sex', 'age']
interaction = pdp.pdp_interact(
    model=model,
    dataset=X_encoded,
    model_features=X_encoded.columns,
    features=features
)
pdp.pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
# Pivot the interaction results for a heatmap
# (use a new name so we don't shadow the pdp module)
pdp_table = interaction.pdp.pivot_table(
    values='preds',
    columns=features[0],  # First feature on x axis
    index=features[1]     # Next feature on y axis
)[::-1]  # Reverse the index order so the y axis is ascending
pdp_table = pdp_table.rename(columns=dict(zip(category_codes, category_names)))
plt.figure(figsize=(10, 8))
sns.heatmap(pdp_table, annot=True, fmt='.2f', cmap='viridis')
plt.title('Partial Dependence of Titanic survival, on sex & age');
```
---
# Obtaining movie data, API-testing
```
# open questions:
# API only allows 1k requests per day..
# initial load (static database) or load on request, maybe another API required then?
# regular updates?
import requests
import pandas as pd
```
# get imdb ids
```
# uses links.csv, a list of random imdbIds from https://grouplens.org/datasets/movielens/ , to obtain imdb ids,
# then loops through some of them and puts them into a list
def get_ids(n):
dtype_dic= {'movieId': str,'imdbId' : str, "tmdbId": str}
IDdf = pd.read_csv("data/temp_links.csv", dtype = dtype_dic)
#IMDB IDs to eventually be used as index
idlist = list(IDdf["imdbId"].head(n))
return idlist
imdbIDs = get_ids(500)
imdbIDs
```
# get data from omdb
```
# http://www.omdbapi.com/
# API-key d3de5220
# max # of requests per day ~ 1k
# Receiving data from API and putting it into df
def get_data_from_omdb(imdbIDs):
    df0 = pd.DataFrame()
    for imdb_id in imdbIDs:  # avoid shadowing the built-in id()
        url = f"http://www.omdbapi.com/?i=tt{imdb_id}&apikey=d3de5220"
        result = requests.get(url)
        j = result.json()
        df_single_movie = pd.DataFrame(j)
        df0 = pd.concat([df0, df_single_movie])
    return df0
def perform_cleaning(df):
# turns date of release into date format
df["Released"] = pd.to_datetime(df["Released"])
#converting "xx mins" into "xx"
def get_mins(x):
y = x.replace(" min", "")
return y
df["Runtime"] = df["Runtime"].apply(get_mins)
df["Runtime"] = pd.to_numeric(df["Runtime"])
    # Drop duplicates: pd.DataFrame(j) expands the "Ratings" list (typically
    # three rating sources) into one row per source, so each movie appears
    # several times.
    df0 = df.drop_duplicates("imdbID", keep = "first", inplace = False)
    return df0
df_raw = get_data_from_omdb(imdbIDs)
# work on a copy: perform_cleaning mutates the frame it receives
df = perform_cleaning(df_raw.copy())
df.to_csv("data/OMDB.csv", index = False)
```
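One caveat with the cleaning above: `pd.to_numeric` will raise if any `Runtime` is the literal string `'N/A'` that OMDb returns for missing fields. A hedged, pure-Python parser (`parse_runtime` is a hypothetical helper, usable via `df["Runtime"].apply(parse_runtime)`):

```python
def parse_runtime(value):
    """Parse OMDb 'Runtime' strings like '120 min'; return None when missing.

    A defensive variant of get_mins above: OMDb uses the string 'N/A'
    for missing fields, which would make pd.to_numeric raise.
    """
    if not isinstance(value, str) or value == 'N/A':
        return None
    digits = value.replace(' min', '').strip()
    return int(digits) if digits.isdigit() else None

print(parse_runtime('120 min'))  # 120
print(parse_runtime('N/A'))      # None
```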
# "parked" code for now
```
#df0 = pd.read_csv("data/OMDB.csv")
#df0.info()
#df = pd.read_csv("data/OMDB.csv", dtype = df_dtypes)
#df.head(3)
# provide a list of datatypes that the columns shall have --> leading zeros?
'''
df_columns = ["Title", 'Year', 'Rated', 'Released', 'Runtime', 'Genre', 'Director','Writer','Actors','Plot','Language', \
'Country','Awards', 'Poster', 'Ratings','Metascore','imdbRating','imdbVotes','imdbID','Type','DVD',\
'BoxOffice','Production','Website','Response']
df_dtypes = {'Title': str,'Year' : int, "Rated": str, "Released" : str, "Runtime": int, "Genre": str, "Director": str, \
"Writer": str, "Actors": str, "Plot": str, "Language": str, "Country": str, "Awards": str, "Poster": str, \
"Ratings": str, "Metascore": int, "imdbRating": str, "imdbVotes": str, "imdbID": str, "Type": str, \
"DVD": str, "BoxOffice": str, "Production": str, "Website": str, "Response": str}
'''
```
---
# Introduction #
Recall from the example in the previous lesson that Keras will keep a history of the training and validation loss over the epochs that it is training the model. In this lesson, we're going to learn how to interpret these learning curves and how we can use them to guide model development. In particular, we'll examine the learning curves for evidence of *underfitting* and *overfitting* and look at a couple of strategies for correcting these problems.
# Interpreting the Learning Curves #
You might think about the information in the training data as being of two kinds: *signal* and *noise*. The signal is the part that generalizes, the part that can help our model make predictions from new data. The noise is the part that is *only* true of the training data; the noise is all of the random fluctuation that comes from data in the real world, and all of the incidental, non-informative patterns that can't actually help the model make predictions. The noise is the part that might look useful but really isn't.
We train a model by choosing weights or parameters that minimize the loss on a training set. You might know, however, that to accurately assess a model's performance, we need to evaluate it on a new set of data, the *validation* data. (You could see our lesson on [model validation](https://www.kaggle.com/dansbecker/model-validation) in *Introduction to Machine Learning* for a review.)
When we train a model, we've been plotting the loss on the training set epoch by epoch. To this we'll add a plot of the loss on the validation data too. These plots we call the **learning curves**. To train deep learning models effectively, we need to be able to interpret them.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/tHiVFnM.png" width="500" alt="A graph of training and validation loss.">
<figcaption style="text-align: center; font-style: italic"><center>The validation loss gives an estimate of the expected error on unseen data.
</center></figcaption>
</figure>
Now, the training loss will go down either when the model learns signal or when it learns noise. But the validation loss will go down only when the model learns signal. (Whatever noise the model learned from the training set won't generalize to new data.) So, when a model learns signal both curves go down, but when it learns noise a *gap* is created in the curves. The size of the gap tells you how much noise the model has learned.
Ideally, we would create models that learn all of the signal and none of the noise. This will practically never happen. Instead we make a trade. We can get the model to learn more signal at the cost of learning more noise. So long as the trade is in our favor, the validation loss will continue to decrease. After a certain point, however, the trade can turn against us, the cost exceeds the benefit, and the validation loss begins to rise.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/eUF6mfo.png" width="600" alt="Two graphs. On the left, a line through a few data points with the true fit a parabola. On the right, a curve running through each datapoint with the true fit a parabola.">
<figcaption style="text-align: center; font-style: italic"><center>Underfitting and overfitting.
</center></figcaption>
</figure>
This trade-off indicates that there can be two problems that occur when training a model: not enough signal or too much noise. **Underfitting** the training set is when the loss is not as low as it could be because the model hasn't learned enough *signal*. **Overfitting** the training set is when the loss is not as low as it could be because the model learned too much *noise*. The trick to training deep learning models is finding the best balance between the two.
We'll look at a couple ways of getting more signal out of the training data while reducing the amount of noise.
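These ideas can be read directly off the curves. A tiny numpy sketch with synthetic losses: the gap between the curves measures learned noise, and the argmin of the validation loss marks where the trade turns against us.

```python
import numpy as np

# Synthetic learning curves: training loss keeps falling, while validation
# loss falls, bottoms out, then rises (overfitting sets in)
epochs = np.arange(50)
train_loss = 1.0 * np.exp(-epochs / 10)
val_loss = 0.5 * np.exp(-epochs / 10) + 0.3 + 0.01 * epochs

gap = val_loss - train_loss            # how much noise the model has learned
best_epoch = int(np.argmin(val_loss))  # where the trade turns against us
print('best epoch:', best_epoch, 'min val loss:', round(val_loss[best_epoch], 3))
```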
# Capacity #
A model's **capacity** refers to the size and complexity of the patterns it is able to learn. For neural networks, this will largely be determined by how many neurons it has and how they are connected together. If it appears that your network is underfitting the data, you should try increasing its capacity.
You can increase the capacity of a network either by making it *wider* (more units to existing layers) or by making it *deeper* (adding more layers). Wider networks have an easier time learning more linear relationships, while deeper networks prefer more nonlinear ones. Which is better just depends on the dataset.
```
model = keras.Sequential([
layers.Dense(16, activation='relu'),
layers.Dense(1),
])
wider = keras.Sequential([
layers.Dense(32, activation='relu'),
layers.Dense(1),
])
deeper = keras.Sequential([
layers.Dense(16, activation='relu'),
layers.Dense(16, activation='relu'),
layers.Dense(1),
])
```
You'll explore how the capacity of a network can affect its performance in the exercise.
# Early Stopping #
We mentioned that when a model is too eagerly learning noise, the validation loss may start to increase during training. To prevent this, we can simply stop the training whenever it seems the validation loss isn't decreasing anymore. Interrupting the training this way is called **early stopping**.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/eP0gppr.png" width=500 alt="A graph of the learning curves with early stopping at the minimum validation loss, underfitting to the left of it and overfitting to the right.">
<figcaption style="text-align: center; font-style: italic"><center>We keep the model where the validation loss is at a minimum.
</center></figcaption>
</figure>
Once we detect that the validation loss is starting to rise again, we can reset the weights back to where the minimum occurred. This ensures that the model won't continue to learn noise and overfit the data.
Training with early stopping also means we're in less danger of stopping the training too early, before the network has finished learning signal. So besides preventing overfitting from training too long, early stopping can also prevent *underfitting* from not training long enough. Just set your training epochs to some large number (more than you'll need), and early stopping will take care of the rest.
## Adding Early Stopping ##
In Keras, we include early stopping in our training through a callback. A **callback** is just a function you want run every so often while the network trains. The early stopping callback will run after every epoch. (Keras has [a variety of useful callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks) pre-defined, but you can [define your own](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LambdaCallback), too.)
```
from tensorflow.keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(
    min_delta=0.001, # minimum amount of change to count as an improvement
patience=20, # how many epochs to wait before stopping
restore_best_weights=True,
)
```
These parameters say: "If there hasn't been at least an improvement of 0.001 in the validation loss over the previous 20 epochs, then stop the training and keep the best model you found." It can sometimes be hard to tell if the validation loss is rising due to overfitting or just due to random batch variation. The parameters allow us to set some allowances around when to stop.
As we'll see in our example, we'll pass this callback to the `fit` method along with the loss and optimizer.
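The rule those parameters encode can be sketched in plain Python (a toy re-implementation for intuition, not Keras's actual code):

```python
def early_stop_epoch(val_losses, min_delta=0.001, patience=20):
    """Return (stop_epoch, best_epoch) under the early-stopping rule above.

    Training halts once `patience` epochs pass without an improvement of
    at least `min_delta` over the best validation loss seen so far;
    restore_best_weights corresponds to keeping best_epoch's weights.
    """
    best, best_epoch, wait = float('inf'), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

losses = [1.0, 0.8, 0.7, 0.69, 0.71, 0.72, 0.73]
print(early_stop_epoch(losses, min_delta=0.001, patience=3))  # (6, 3)
```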
# Example - Train a Model with Early Stopping #
Let's continue developing the model from the example in the last tutorial. We'll increase the capacity of that network but also add an early-stopping callback to prevent overfitting.
Here's the data prep again.
```
import pandas as pd
from IPython.display import display
red_wine = pd.read_csv('../input/dl-course-data/red-wine.csv')
# Create training and validation splits
df_train = red_wine.sample(frac=0.7, random_state=0)
df_valid = red_wine.drop(df_train.index)
display(df_train.head(4))
# Scale to [0, 1]
max_ = df_train.max(axis=0)
min_ = df_train.min(axis=0)
df_train = (df_train - min_) / (max_ - min_)
df_valid = (df_valid - min_) / (max_ - min_)
# Split features and target
X_train = df_train.drop('quality', axis=1)
X_valid = df_valid.drop('quality', axis=1)
y_train = df_train['quality']
y_valid = df_valid['quality']
```
Now let's increase the capacity of the network. We'll go for a fairly large network, but rely on the callback to halt the training once the validation loss shows signs of increasing.
```
from tensorflow import keras
from tensorflow.keras import layers, callbacks
early_stopping = callbacks.EarlyStopping(
    min_delta=0.001, # minimum amount of change to count as an improvement
patience=20, # how many epochs to wait before stopping
restore_best_weights=True,
)
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=[11]),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
model.compile(
optimizer='adam',
loss='mae',
)
```
After defining the callback, add it as an argument in `fit` (you can have several, so put it in a list). Choose a large number of epochs when using early stopping, more than you'll need.
```
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=256,
epochs=500,
callbacks=[early_stopping], # put your callbacks in a list
verbose=0, # turn off training log
)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot();
print("Minimum validation loss: {}".format(history_df['val_loss'].min()))
```
And sure enough, Keras stopped the training well before the full 500 epochs!
# Your Turn #
Now **predict how popular a song is** with the *Spotify* dataset.
**PRACTICES** > 04b_overfitting_underfitting
---
# MLP ORF to GenCode
Use GenCode 38 and length-restricted data.
Use model pre-trained on Simulated ORF.
```
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/DataPrep.py')
with open('DataPrep.py', 'w') as f:
f.write(r.text)
from DataPrep import DataPrep
else:
    print("Not on Google CoLab; use relative paths on a local machine.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
from SimTools.DataPrep import DataPrep
BESTMODELPATH=DATAPATH+"BestModel-304"
LASTMODELPATH=DATAPATH+"LastModel"
```
## Data Load
```
PC_TRAINS=1000
NC_TRAINS=1000
PC_TESTS=40000
NC_TESTS=40000
PC_LENS=(200,4000)
NC_LENS=(200,4000) # Wen used 3500 for hyperparameter, 3000 for train
PC_FILENAME='gencode.v38.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=32
DROP_RATE=0.30
EPOCHS=200
SPLITS=3
FOLDS=3
show_time()
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False) # not ORF-restricted
loader.set_check_size(*PC_LENS) # length-restricted
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
loader.set_check_size(*NC_LENS) # length-restricted
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(pcdf)
nc_all = dataframe_extract_sequence(ncdf)
pcdf=None
ncdf=None
show_time()
print("PC seqs pass filter:",len(pc_all),type(pc_all))
print("NC seqs pass filter:",len(nc_all),type(nc_all))
#PC seqs pass filter: 55381
#NC seqs pass filter: 46919
print("GenCode sequence characteristics:")
oc = ORF_counter()
print("PC seqs")
oc.describe_sequences(pc_all)
print("NC seqs")
oc.describe_sequences(nc_all)
oc=None
show_time()
```
## Data Prep
```
dp = DataPrep()
Xseq,y=dp.combine_pos_and_neg(pc_all,nc_all)
nc_all=None
pc_all=None
print("The first few shuffled labels:")
print(y[:30])
show_time()
Xfrq=KmerTools.seqs_to_kmer_freqs(Xseq,MAX_K)
Xseq = None
y=np.asarray(y)
show_time()
# Assume X and y were shuffled.
train_size=PC_TRAINS+NC_TRAINS
X_train=Xfrq[:train_size]
X_test=Xfrq[train_size:]
y_train=y[:train_size]
y_test=y[train_size:]
print("Training set size=",len(X_train),"=",len(y_train))
print("Reserved test set size=",len(X_test),"=",len(y_test))
Xfrq=None
y=None
show_time()
```
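For intuition about the 84-dimensional input built above, here is a toy sketch of a k-mer frequency vector (`KmerTools.seqs_to_kmer_freqs` may differ in detail; this only shows the idea, with a hypothetical helper `kmer_freqs`):

```python
from itertools import product

def kmer_freqs(seq, max_k=3):
    """Toy k-mer frequency vector for one DNA/RNA sequence.

    Counts every k-mer for k = 1..max_k and normalizes each k's counts,
    giving 4 + 16 + 64 = 84 features for max_k=3, matching INPUT_SHAPE.
    """
    feats = []
    for k in range(1, max_k + 1):
        kmers = [''.join(p) for p in product('ACGT', repeat=k)]
        counts = {km: 0 for km in kmers}
        for i in range(len(seq) - k + 1):
            window = seq[i:i + k]
            if window in counts:
                counts[window] += 1
        total = max(sum(counts.values()), 1)
        feats.extend(counts[km] / total for km in kmers)
    return feats

vec = kmer_freqs('ACGTACGT')
print(len(vec))  # 84
```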
## Load a trained neural network
```
show_time()
model = load_model(BESTMODELPATH)
print(model.summary())
```
## Test the neural network
```
def show_test_AUC(model,X,y):
ns_probs = [0 for _ in range(len(y))]
bm_probs = model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
def show_test_accuracy(model,X,y):
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print("Accuracy on test data.")
show_time()
show_test_AUC(model,X_test,y_test)
show_test_accuracy(model,X_test,y_test)
show_time()
```
---
# Step 1: Data gathering
__Step goal__: Download and store the datasets used in this study.
__Step overview__:
1. London demographic data;
2. London shape files;
3. Counts data;
4. Metro stations and lines.
#### Introduction
All data is __open access__ and can be found on the official websites. Note, that the data sets can be updated by corresponding agencies; therefore, some discrepancies are possible: new variables will become available, or some data set will have fewer attributes.
```
import requests, zipfile, io
from datetime import datetime
import os
import pandas as pd
from bs4 import BeautifulSoup as bs
```
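The four download sections below repeat the same unzip-to-directory steps. A sketch of a small helper that factors this out (`extract_zip_to` is a hypothetical name; it takes raw bytes so the network call stays separate):

```python
import io
import os
import zipfile

def extract_zip_to(content: bytes, directory: str) -> None:
    """Extract zip bytes (e.g. requests.get(url).content) into a directory,
    creating the directory first if it does not exist."""
    if not os.path.exists(directory):
        os.makedirs(directory)
        print(f'Successfully created new directory {directory}')
    with zipfile.ZipFile(io.BytesIO(content)) as z:
        z.extractall(path=directory)
```

Each section below could then reduce to `extract_zip_to(requests.get(url).content, directory)`.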
## 1. London demographic data
```
url = 'https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2fcensusoutputareaestimatesinthelondonregionofengland%2fmid2017/sape20dt10amid2017coaunformattedsyoaestimateslondon.zip'
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
directory = "../data/raw/population/"
if not os.path.exists(directory):
    os.makedirs(directory)
    print(f'Successfully created new directory {directory}')
z.extractall(path=directory)
print(f'Downloading date: {datetime.today().strftime("%d-%m-%Y %H:%M:%S")}')
```
## 2. London shape files
```
url = 'https://data.london.gov.uk/download/statistical-gis-boundary-files-london/9ba8c833-6370-4b11-abdc-314aa020d5e0/statistical-gis-boundaries-london.zip'
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
directory = "../data/raw/geometry/london/"
if not os.path.exists(directory):
    os.makedirs(directory)
    print(f'Successfully created new directory {directory}')
z.extractall(path=directory)
print(f'Downloading date: {datetime.today().strftime("%d-%m-%Y %H:%M:%S")}')
```
## 3. Counts data
```
url = 'http://tfl.gov.uk/tfl/syndication/feeds/counts.zip?app_id=&app_key='
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
directory = "../data/raw/counts/"
if not os.path.exists(directory):
    os.makedirs(directory)
    print(f'Successfully created new directory {directory}')
z.extractall(path=directory)
print(f'Downloading date: {datetime.today().strftime("%d-%m-%Y %H:%M:%S")}')
```
## 4. Station locations and lines
```
url = 'https://commons.wikimedia.org/wiki/London_Underground_geographic_maps/CSV'
r = requests.get(url)
soup = bs(r.content, 'lxml')
pre = soup.select('pre')
file_names = ['stations.csv', 'routes.csv', 'lines.csv']
directory = "../data/raw/geometry/metro_stations/"
if not os.path.exists(directory):
    os.makedirs(directory)
    print(f'Successfully created new directory {directory}')
for i, p in enumerate(pre):
df = pd.DataFrame([x.split(',') for x in p.text.split('\n')])
df.to_csv(directory + file_names[i])
print(f'Downloading date: {datetime.today().strftime("%d-%m-%Y %H:%M:%S")}')
```
## References
1. Office for National Statistics (2019). Census Output Area population estimates – London, England (supporting information). Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/datasets/censusoutputareaestimatesinthelondonregionofengland
2. London Datastore (2019). Statistical GIS Boundary Files for London. Retrieved from https://data.london.gov.uk/dataset/statistical-gis-boundary-files-london
3. Transport for London (2020). Transport for London API. Retrieved from https://api-portal.tfl.gov.uk/docs
4. Wikimedia Commons (2020). London Underground geographic maps/CSV. Retrieved from https://commons.wikimedia.org/wiki/London_Underground_geographic_maps/CSV
<h1 align="center">Exploratory Analysis : Game of Thrones</h1>

One of the most popular television series of all time, Game of Thrones is a fantasy drama set in the fictional continents of Westeros and Essos, filled with multiple plots and a huge number of characters all battling for the Iron Throne! It is an adaptation of _A Song of Ice and Fire_, the novel series by **George R. R. Martin**.
Being a popular series, it has caught the attention of many, and Data Scientists aren't to be excluded. This notebook presents **Exploratory Data Analysis (EDA)** on the _Kaggle_ dataset enhanced by _Myles O'Neill_ (more details: [click here](https://www.kaggle.com/mylesoneill/game-of-thrones)). This dataset is based on a combination of multiple datasets collected and contributed by multiple people. We utilize the ```battles.csv``` in this notebook. The original battles data was presented by _Chris Albon_, more details are on [github](https://github.com/chrisalbon/war_of_the_five_kings_dataset)
---
The image was taken from Game of Thrones, or from websites created and owned by HBO, the copyright of which is held by HBO. All trademarks and registered trademarks present in the image are proprietary to HBO, the inclusion of which implies no affiliation with the Game of Thrones. The use of such images is believed to fall under the fair dealing clause of copyright law.
## Import required packages
```
import cufflinks as cf
import pandas as pd
from collections import Counter
# pandas display data frames as tables
from IPython.display import display, HTML
```
### Set Configurations
```
cf.set_config_file(theme='white')
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
```
## Load Dataset
In this step we load the ```battles.csv``` for analysis
```
# load dataset using cufflinks wrapper for later usage with plot.ly plots
battles_df = cf.pd.read_csv('battles.csv')
# Display sample rows
display(battles_df.head())
```
## Explore raw properties
```
print("Number of attributes available in the dataset = {}".format(battles_df.shape[1]))
# View available columns and their data types
battles_df.dtypes
```
<h3 align="center">Battles for the Iron Throne</h3>

```
# Analyze properties of numerical columns
battles_df.describe()
```
---
## Number of Battles Fought
This data covers events only up to **season 5**
```
print("Number of battles fought={}".format(battles_df.shape[0]))
```
## Battle Distribution Across Years
The plot below shows that maximum bloodshed happened in the year 299 with a total of 20 battles fought!
```
battles_df.year.value_counts().iplot(kind='barh',
xTitle='Number of Battles',
yTitle='Year',
title='Battle Distribution over Years',
showline=True)
```
## Which Regions saw most Battles?
<img src="https://racefortheironthrone.files.wordpress.com/2016/11/riverlands-political-map.jpg?w=580&h=781" alt="RiverLands" style="width: 200px;" align="left"/> The **Riverlands** seem to be the favorite battleground, followed by the famous **North**. Interestingly, up to season 5 there was only 1 battle beyond the Wall. Spoiler Alert: Winter is Coming!
```
battles_df.region.value_counts().iplot(kind='bar',
xTitle='Regions',
yTitle='Number of Battles',
title='Battles by Regions',
showline=True)
```
### Death or Capture of Main Characters by Region
No prizes for guessing that the Riverlands have seen some of the main characters killed or captured. Though _The Reach_ has seen 2 battles, none of the major characters seems to have fallen there.
```
battles_df.groupby('region').agg({'major_death':'sum',
'major_capture':'sum'}).iplot(kind='bar')
```
## Who Attacked the most?
The Baratheon boys love attacking, leading the pack with 38% of battles, while Robb Stark is a close second as attacker in 27.8% of them.
<img src="http://vignette3.wikia.nocookie.net/gameofthrones/images/4/4c/JoffreyBaratheon-Profile.PNG/revision/latest?cb=20160626094917" alt="joffrey" style="width: 200px;" align="left"/> <img src="https://meninblazers.com/.image/t_share/MTMwMDE5NTU4NTI5NDk1MDEw/tumblr_mkzsdafejy1r2xls3o1_400.png" alt="robb" style="width: 200px; height: 200px" align="right"/>
```
king_attacked = battles_df.attacker_king.value_counts().reset_index()
king_attacked.rename(columns={'index':'king','attacker_king':'battle_count'},inplace=True)
king_attacked.iplot(kind='pie',labels='king',values='battle_count')
```
## Who Defended the most?
Robb Stark and the Baratheon boys are again at the top of the pack. Looks like they have been on either side of the war many times.
```
king_defended = battles_df.defender_king.value_counts().reset_index()
king_defended.rename(columns={'index':'king','defender_king':'battle_count'},inplace=True)
king_defended.iplot(kind='pie',labels='king',values='battle_count')
```
## Battle Style Distribution
Plenty of battles all across, yet the men of Westeros and Essos are men of honor.
This is visible in the distribution which shows **pitched battle** as the most common style of battle.
```
battles_df.battle_type.value_counts().iplot(kind='barh')
```
## Attack or Defend?
Defending your place in Westeros isn't easy; this is clearly visible from the fact that 32 out of 37 battles were won by the attackers.
```
battles_df.attacker_outcome.value_counts().iplot(kind='barh')
```
## Winners
Who remembers losers? (except if you love the Starks)
The following plot helps us understand who won how many battles and how, by attacking or defending.
```
attack_winners = battles_df[battles_df.attacker_outcome=='win']['attacker_king'].value_counts().reset_index()
attack_winners.rename(columns={'index':'king','attacker_king':'attack_wins'},inplace=True)
defend_winners = battles_df[battles_df.attacker_outcome=='loss']['defender_king'].value_counts().reset_index()
defend_winners.rename(columns={'index':'king','defender_king':'defend_wins'},inplace=True)
winner_df = pd.merge(attack_winners,defend_winners,how='outer',on='king')
winner_df.fillna(0,inplace=True)
winner_df['total_wins'] = winner_df.apply(lambda row: row['attack_wins']+row['defend_wins'],axis=1)
winner_df[['king','attack_wins','defend_wins']].set_index('king').iplot(kind='bar',barmode='stack',
xTitle='King',
yTitle='Number of Wins',
title='Wins per King',
showline=True)
```
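The `how='outer'` merge plus `fillna(0)` above is what keeps kings who only ever attacked (or only ever defended) in the result. A minimal toy version, with made-up kings and counts:

```python
import pandas as pd

# An outer merge keeps kings that appear on only one side, leaving NaN
# in the missing column; fillna(0) then reads as "zero wins on that side".
attack = pd.DataFrame({'king': ['Robb Stark', 'Joffrey'], 'attack_wins': [5, 3]})
defend = pd.DataFrame({'king': ['Joffrey', 'Balon Greyjoy'], 'defend_wins': [2, 1]})

wins = pd.merge(attack, defend, how='outer', on='king').fillna(0)
wins['total_wins'] = wins['attack_wins'] + wins['defend_wins']
```

With an inner merge, Robb Stark and Balon Greyjoy would silently drop out of the totals.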
## Battle Commanders
A battle requires as much brains as muscle power.
The following is a distribution of the number of commanders involved on attacking and defending sides.
```
# commander fields are comma-separated name lists, so split on ',' (a bare
# split() would split on whitespace and count words, not commanders)
battles_df['attack_commander_count'] = battles_df.dropna(subset=['attacker_commander']).apply(lambda row: len(row['attacker_commander'].split(',')),axis=1)
battles_df['defend_commander_count'] = battles_df.dropna(subset=['defender_commander']).apply(lambda row: len(row['defender_commander'].split(',')),axis=1)
battles_df[['attack_commander_count',
'defend_commander_count']].iplot(kind='box',boxpoints='suspectedoutliers')
```
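The same count can be done without `dropna`/`apply`: the vectorized `.str` accessor handles missing rows by propagating NaN. A sketch on a toy frame of my own (not the battles data):

```python
import pandas as pd

# .str.split(',') turns each comma-separated name list into a Python list;
# .str.len() then counts entries, leaving the None row as NaN.
df = pd.DataFrame({'attacker_commander': ['Jaime Lannister',
                                          'Robb Stark, Brynden Tully',
                                          None]})
counts = df['attacker_commander'].str.split(',').str.len()
```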
## How many houses fought in a battle?
Were the battles evenly balanced? The plots tell the whole story.
<img src="https://c1.staticflickr.com/4/3893/14834104277_54d309b4ca_b.jpg" style="height: 200px;"/>
```
battles_df['attacker_house_count'] = (4 - battles_df[['attacker_1',
'attacker_2',
'attacker_3',
'attacker_4']].isnull().sum(axis = 1))
battles_df['defender_house_count'] = (4 - battles_df[['defender_1',
'defender_2',
'defender_3',
'defender_4']].isnull().sum(axis = 1))
battles_df['total_involved_count'] = battles_df.apply(lambda row: row['attacker_house_count']+row['defender_house_count'],
axis=1)
battles_df['bubble_text'] = battles_df.apply(lambda row: '{} had {} house(s) attacking {} house(s) '.format(row['name'],
row['attacker_house_count'],
row['defender_house_count']),
axis=1)
```
## Unbalanced Battles
Most battles so far have seen more houses forming alliances while attacking.
There are only a few friends when you are under attack!
```
house_balance = battles_df[battles_df.attacker_house_count != battles_df.defender_house_count][['name',
'attacker_house_count',
'defender_house_count']].set_index('name')
house_balance.iplot(kind='bar',tickangle=-25)
```
## Battles and The size of Armies
Attackers don't take any chances; they come in huge numbers, so keep your eyes open
```
battles_df.dropna(subset=['total_involved_count',
'attacker_size',
'defender_size',
'bubble_text']).iplot(kind='bubble',
x='defender_size',
y='attacker_size',
size='total_involved_count',
text='bubble_text',
#color='red',
xTitle='Defender Size',
yTitle='Attacker Size')
```
## Archenemies?
The Stark-Baratheon friendship has taken a complete U-turn with a total of 19 battles and counting. Indeed there is no one to be trusted in this land.
```
temp_df = battles_df.dropna(subset = ["attacker_king",
"defender_king"])[[
"attacker_king",
"defender_king"
]]
archenemy_df = pd.DataFrame(list(Counter([tuple(set(king_pair))
for king_pair in temp_df.values
if len(set(king_pair))>1]).items()),
columns=['king_pair','battle_count'])
archenemy_df['versus_text'] = archenemy_df.apply(lambda row:
'{} Vs {}'.format(
row['king_pair'][0],
row['king_pair'][1]),
axis=1)
archenemy_df.sort_values('battle_count',
inplace=True,
ascending=False)
archenemy_df[['versus_text',
'battle_count']].set_index('versus_text').iplot(
kind='bar')
```
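The pair-counting trick above can be sketched in isolation with made-up rows. Normalizing each (attacker, defender) pair to a `frozenset` makes A-vs-B and B-vs-A count as the same matchup, and `len(set(pair)) > 1` drops rows where the same king appears on both sides:

```python
from collections import Counter

rows = [('Robb Stark', 'Joffrey'),
        ('Joffrey', 'Robb Stark'),
        ('Balon Greyjoy', 'Robb Stark'),
        ('Robb Stark', 'Robb Stark')]
# frozenset is hashable and order-free, so reversed matchups collapse together
pair_counts = Counter(frozenset(pair) for pair in rows if len(set(pair)) > 1)
```

A `frozenset` key also sidesteps the run-to-run element ordering of the `tuple(set(...))` used in the cell above.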
---
Note: A lot more exploration is possible with the remaining attributes and their different combinations. This is just the tip of the iceberg
# Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
```
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
```
## 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
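As a quick numeric sanity check of formula (1), here is a sketch on a function whose derivative is known in closed form; the quadratic is my own toy choice, not part of the assignment:

```python
def J(theta):
    # toy cost with a known derivative: dJ/dtheta = 2 * theta
    return theta ** 2

theta, epsilon = 3.0, 1e-7
# centered difference from formula (1)
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
# the analytic gradient at theta = 3 is 6, and gradapprox agrees to high precision
```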
## 2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **1D linear model**<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
```
**Expected Output**:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
```
**Expected Output**:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
```
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2*epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
```
**Expected Output**:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
## 3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **deep neural network**<br>*LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
```
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
```
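The cost lines at the end of `forward_propagation_n` can be sanity-checked by hand on a tiny batch; the predictions and labels below are my own made-up values:

```python
import numpy as np

# two examples: a confident correct positive (0.9 vs 1) and a
# fairly correct negative (0.2 vs 0)
A3 = np.array([[0.9, 0.2]])
Y = np.array([[1, 0]])
m = Y.shape[1]
# same logistic-cost expression as in the function above
logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1. / m * np.sum(logprobs)
# cost = -(log(0.9) + log(0.8)) / 2, roughly 0.1643
```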
Now, run backward propagation.
```
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
```
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
**How does gradient checking work?**
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 3** </u>: **dictionary_to_vector() and vector_to_dictionary()**<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
**Exercise**: Implement gradient_check_n().
**Instructions**: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute `J_plus[i]`:
1. Set $\theta^{+}$ to `np.copy(parameters_values)`
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`.
- To compute `J_minus[i]`: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
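The per-coordinate loop described above can be sketched end-to-end on a toy multi-parameter cost; the function and values are my own example, independent of the graded `gradient_check_n`:

```python
import numpy as np

def J(theta):
    # toy cost whose exact gradient is 2 * theta
    return np.sum(theta ** 2)

theta = np.array([1.0, -2.0, 3.0])
grad = 2 * theta                    # stands in for the backprop gradient
epsilon = 1e-7
gradapprox = np.zeros_like(theta)
for i in range(theta.size):
    thetaplus, thetaminus = np.copy(theta), np.copy(theta)
    thetaplus[i] += epsilon         # bump only coordinate i
    thetaminus[i] -= epsilon
    gradapprox[i] = (J(thetaplus) - J(thetaminus)) / (2 * epsilon)

# relative difference from formula (3)
difference = np.linalg.norm(grad - gradapprox) / (
    np.linalg.norm(grad) + np.linalg.norm(gradapprox))
```

The key step is copying `theta` before each perturbation so only one coordinate moves at a time.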
```
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because forward_propagation_n outputs two values but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] += epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] -= epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2*epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
```
**Expected output**:
<table>
<tr>
<td> ** There is a mistake in the backward propagation!** </td>
<td> difference = 0.285093156781 </td>
</tr>
</table>
It seems that there were errors in the `backward_propagation_n` code we gave you! Good that you've implemented the gradient check. Go back to `backward_propagation_n` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
**Note**
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<font color='blue'>
**What you should remember from this notebook**:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
<a href="https://colab.research.google.com/github/mkmritunjay/machineLearning/blob/master/ADABOOSTClassifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ADABOOST (Adaptive Boosting)
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as sm
import scipy.stats as stats
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.gray()
from matplotlib.backends.backend_pdf import PdfPages
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from statsmodels.stats.outliers_influence import variance_inflation_factor
from patsy import dmatrices
import sklearn.tree as dt
import sklearn.ensemble as en
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier, export_graphviz, export
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.ensemble import AdaBoostClassifier
import pydotplus as pdot
from IPython.display import Image
url = 'https://raw.githubusercontent.com/mkmritunjay/machineLearning/master/HR_comma_sep.csv'
hr_df = pd.read_csv(url)
# now we need to create dummy variables for categorical variables(dtype=object)
numerical_features = ['satisfaction_level', 'last_evaluation', 'number_project',
'average_montly_hours', 'time_spend_company']
categorical_features = ['Work_accident','promotion_last_5years', 'department', 'salary']
categorical_features
numerical_features
# A utility function to create dummy variable
def create_dummies( df, colname ):
col_dummies = pd.get_dummies(df[colname], prefix=colname)
col_dummies.drop(col_dummies.columns[0], axis=1, inplace=True)
df = pd.concat([df, col_dummies], axis=1)
df.drop( colname, axis = 1, inplace = True )
return df
for c_feature in categorical_features:
hr_df = create_dummies( hr_df, c_feature )
hr_df.head()
#Splitting the data
feature_columns = hr_df.columns.difference( ['left'] )
feature_columns
```
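As an aside, `pd.get_dummies` can reproduce the manual `create_dummies()` helper in a single call: `drop_first=True` plays the role of the explicit `col_dummies.drop(...)` step. The toy frame below is my own, not `hr_df`:

```python
import pandas as pd

df = pd.DataFrame({'salary': ['low', 'medium', 'high', 'low']})
# drop_first=True removes each variable's first (alphabetically first) level,
# leaving one fewer dummy column per categorical, exactly like create_dummies()
encoded = pd.get_dummies(df, columns=['salary'], drop_first=True)
```

Here 'high' is the dropped reference level, so only `salary_low` and `salary_medium` remain.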
### Train Test split
```
train_X, test_X, train_y, test_y = train_test_split( hr_df[feature_columns],
hr_df['left'],
test_size = 0.3,
random_state = 42 )
```
### Building the model
```
# provide estimators and learning rate
pargrid_ada = {'n_estimators': [100, 200, 400, 600, 800],
'learning_rate': [10 ** x for x in range(-3, 3)]}
gscv_ada = GridSearchCV(estimator=AdaBoostClassifier(),
param_grid=pargrid_ada,
cv=5,
verbose=True, n_jobs=-1)
gscv_ada.fit(train_X, train_y)
gscv_ada.best_params_
gscv_ada.best_score_
```
### Building final model
```
# refit with the best hyperparameters found by the grid search
ad = AdaBoostClassifier(learning_rate=0.1, n_estimators=800)
ad.fit(train_X, train_y)
# alternatively, reuse the estimator GridSearchCV already refit on the full training set
clf_ada = gscv_ada.best_estimator_
print (pd.Series(cross_val_score(clf_ada,
train_X, train_y, cv=10)).describe()[['min', 'mean', 'max']])
```
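The notebook stops at cross-validation scores. A hedged sketch of the natural last step, scoring the tuned model on the held-out split, is below; synthetic data stands in for `hr_df` so the cell is self-contained, and the hyperparameter values are illustrative:

```python
import numpy as np
from sklearn import metrics
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in: label is 1 when the first two features sum past 1
rng = np.random.RandomState(42)
X = rng.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
train_X, test_X, train_y, test_y = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = AdaBoostClassifier(learning_rate=0.1, n_estimators=50, random_state=42)
clf.fit(train_X, train_y)
test_pred = clf.predict(test_X)
acc = metrics.accuracy_score(test_y, test_pred)
```

The same `predict`/`accuracy_score` pair applied to `clf_ada` and the notebook's `test_X`, `test_y` would give the final held-out score.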
# HyperEuler on MNIST-trained Neural ODEs
```
import sys ; sys.path.append('..')
from torchdyn.models import *; from torchdyn import *
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.metrics.functional import accuracy
from tqdm import tqdm_notebook as tqdm
from src.custom_fixed_explicit import ButcherTableau, GenericExplicitButcher
from src.hypersolver import *
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# smaller batch_size; only needed for visualization. The classification model
# will not be retrained
batch_size=16
size=28
path_to_data='../../data/mnist_data'
all_transforms = transforms.Compose([
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
test_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
train_data = datasets.MNIST(path_to_data, train=True, download=True,
transform=all_transforms)
test_data = datasets.MNIST(path_to_data, train=False,
transform=test_transforms)
trainloader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
testloader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
```
## Loading the pretrained Neural ODE
```
func = nn.Sequential(nn.Conv2d(32, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 32, 3, padding=1)
).to(device)
ndes = []
for i in range(1):
ndes.append(NeuralDE(func,
solver='dopri5',
sensitivity='adjoint',
atol=1e-4,
rtol=1e-4,
s_span=torch.linspace(0, 1, 2)).to(device))
#ndes.append(nn.Conv2d(32, 32, 3, padding=1)))
model = nn.Sequential(nn.BatchNorm2d(1),
Augmenter(augment_func=nn.Conv2d(1, 31, 3, padding=1)),
*ndes,
nn.AvgPool2d(28),
#nn.Conv2d(32, 1, 3, padding=1),
nn.Flatten(),
nn.Linear(32, 10)).to(device)
state_dict = torch.load('../pretrained_models/nde_mnist')
# remove state_dict keys for `torchdyn`'s Adjoint nn.Module (not used here)
copy_dict = state_dict.copy()
for key in copy_dict.keys():
if 'adjoint' in key: state_dict.pop(key)
model.load_state_dict(state_dict)
```
### Visualizing pretrained flows
```
x, y = next(iter(trainloader)); x = x.to(device)
for layer in model[:2]: x = layer(x)
model[2].nfe = 0
traj = model[2].trajectory(x, torch.linspace(0, 1, 50)).detach().cpu()
model[2].nfe
```
Pixel-flows of the Neural ODE, solved with `dopri5`
```
fig, axes = plt.subplots(nrows=5, ncols=10, figsize=(22, 10))
K = 4
for i in range(5):
for j in range(10):
im = axes[i][j].imshow(traj[i*5+j, K, 0], cmap='inferno')
fig.tight_layout(w_pad=0)
```
### Defining the HyperSolver class (-- HyperEuler version --)
```
tableau = ButcherTableau([[0]], [1], [0], [])
euler_solver = GenericExplicitButcher(tableau)
hypersolv_net = nn.Sequential(
nn.Conv2d(32+32+1, 32, 3, stride=1, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1)).to(device)
#for p in hypersolv_net.parameters(): torch.nn.init.zeros_(p)
hs = HyperEuler(f=model[2].defunc, g=hypersolv_net)
x0 = torch.zeros(12, 32, 6, 6).to(device)
span = torch.linspace(0, 2, 10).to(device)
traj = model[2].trajectory(x0, span)
res_traj = hs.base_residuals(traj, span)
hyp_res_traj = hs.hypersolver_residuals(traj, span)
hyp_traj = hs.odeint(x0, span)
hyp_traj = hs.odeint(x0, span, use_residual=False).detach().cpu()
etraj = odeint(model[2].defunc, x0, span, method='euler').detach().cpu()
(hyp_traj - etraj).max()
```
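Conceptually, the HyperEuler scheme wrapped by `hs` augments a plain Euler step with a learned correction term weighted by the square of the step size. A minimal NumPy sketch of that update rule on a toy scalar ODE — an illustration of the idea, not the `src.hypersolver` implementation:

```python
import math

def hyper_euler_step(f, g, x, ds):
    # one HyperEuler step: Euler term plus a learned correction scaled by ds^2
    return x + ds * f(x) + ds ** 2 * g(x)

# Toy scalar ODE dx/dt = -x from x0 = 1. A "perfect" correction network would
# output the second-order residual, which for this ODE is x / 2.
f = lambda x: -x
g = lambda x: 0.5 * x
x0, ds = 1.0, 0.1
euler = x0 + ds * f(x0)                  # 0.9
hyper = hyper_euler_step(f, g, x0, ds)   # 0.905, closer to exp(-0.1)
```

With the exact correction, one HyperEuler step lands noticeably closer to the true solution `exp(-0.1) ≈ 0.9048` than plain Euler at the same step size; the training loop below learns `g` from reference `dopri5` trajectories instead.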
### Training the Hypersolver
```
PHASE1_ITERS = 10 # num iters without swapping of the ODE initial condition (new sample)
ITERS = 15000
s_span = torch.linspace(0, 1, 10).to(device)
run_loss = 0.
# using test data for hypersolver training does not cause issues
# or task information leakage; the labels are not utilized in any way
it = iter(trainloader)
X0, Y = next(it)
Y = Y.to(device)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span)
etraj = odeint(model[2].defunc, X0, s_span, method='euler')
opt = torch.optim.AdamW(hypersolv_net.parameters(), 1e-3, weight_decay=1e-8)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=ITERS, eta_min=5e-4)
for i in tqdm(range(ITERS)):
ds = s_span[1] - s_span[0]
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj, s_span).detach()
# Let the model generalize to other ICs after PHASE1_ITERS
if i > PHASE1_ITERS:
if i % 10 == 0: # swapping IC
try:
X0, _ = next(it)
            except StopIteration:
it = iter(trainloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj.detach(), s_span).detach()
corrections = hs.hypersolver_residuals(base_traj.detach(), s_span)
loss = torch.norm(corrections - residuals.detach(), p='fro', dim=(3, 4)).mean() * ds**2
loss.backward()
torch.nn.utils.clip_grad_norm_(hypersolv_net.parameters(), 1)
if i % 10 == 0: print(f'\rLoss: {loss}', end='')
opt.step()
sched.step()
opt.zero_grad()
it = iter(testloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
steps = 10
s_span = torch.linspace(0, 1, steps)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(X0, s_span).detach().cpu()
#etraj = hs.odeint(X0, s_span, use_residual=False).detach().cpu()
straj = hs.odeint(X0, s_span, use_residual=True).detach().cpu()
```
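The regression targets in the loop above come from `hs.base_residuals`. A common way to define the Euler residual of a reference trajectory — assumed here, since the actual definition lives in `src.hypersolver` — divides the local truncation error by the squared step size:

```python
import numpy as np

def base_residuals(traj, ts, f):
    # Euler local truncation error of a reference trajectory, normalized by
    # ds^2 -- the quantity a ds^2-weighted correction network should predict
    ds = ts[1] - ts[0]
    return np.array([(traj[k + 1] - traj[k] - ds * f(traj[k])) / ds ** 2
                     for k in range(len(ts) - 1)])

# reference trajectory for dx/dt = -x from x0 = 1 (known exact solution)
ts = np.linspace(0.0, 1.0, 11)
traj = np.exp(-ts)
res = base_residuals(traj, ts, lambda x: -x)
# as ds -> 0 the residual approaches x''/2 = x/2 at each step
```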
Evolution of absolute error: [Above] HyperEuler, [Below] Euler
```
fig, axes = plt.subplots(nrows=2, ncols=steps-1, figsize=(10, 4))
K = 1
vmin = min(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min())
vmax = max(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max())
for i in range(steps-1):
im = axes[0][i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
for i in range(steps-1):
im = axes[1][i].imshow(torch.abs(etraj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')
#tikz.save('MNIST_interpolation_AE_plot.tex')
```
Evolution of absolute error for HyperEuler alone, shown in greater detail
```
fig, axes = plt.subplots(nrows=1, ncols=steps-1, figsize=(10, 4))
for i in range(steps-1):
im = axes[i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno')
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')
```
### Evaluating ODE solution error
```
x = []
# NOTE: high GPU mem usage for generating data below for plot (on GPU)
# consider using less batches (and iterating) or performing everything on CPU
for i in range(5):
x_b, _ = next(it)
x += [model[:2](x_b.to(device))]
x = torch.cat(x); x.shape
STEPS = range(8, 50)
euler_avg_error, euler_std_error = [], []
hyper_avg_error, hyper_std_error = [], []
midpoint_avg_error, midpoint_std_error = [], []
rk4_avg_error, rk4_std_error = [], []
for step in tqdm(STEPS):
s_span = torch.linspace(0, 1, step)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(x, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(x, s_span).detach().cpu()
# hypersolver
s_span = torch.linspace(0, 1, step)
straj = hs.odeint(x, s_span, use_residual=True).detach().cpu()
#midpoint
model[2].solver = 'midpoint'
s_span = torch.linspace(0, 1, step//2)
mtraj = model[2].trajectory(x, s_span).detach().cpu()
    # rk4
model[2].solver = 'rk4'
s_span = torch.linspace(0, 1, step//4)
rtraj = model[2].trajectory(x, s_span).detach().cpu()
# errors
euler_error = torch.abs((etraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
hyper_error = torch.abs((straj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
midpoint_error = torch.abs((mtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
rk4_error = torch.abs((rtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
# mean, stdev
euler_avg_error += [euler_error.mean().item()] ; euler_std_error += [euler_error.mean(dim=1).mean(dim=1).std(0).item()]
hyper_avg_error += [hyper_error.mean().item()] ; hyper_std_error += [hyper_error.mean(dim=1).mean(dim=1).std(0).item()]
midpoint_avg_error += [midpoint_error.mean().item()] ; midpoint_std_error += [midpoint_error.mean(dim=1).mean(dim=1).std(0).item()]
rk4_avg_error += [rk4_error.mean().item()] ; rk4_std_error += [rk4_error.mean(dim=1).mean(dim=1).std(0).item()]
euler_avg_error, euler_std_error = np.array(euler_avg_error), np.array(euler_std_error)
hyper_avg_error, hyper_std_error = np.array(hyper_avg_error), np.array(hyper_std_error)
midpoint_avg_error, midpoint_std_error = np.array(midpoint_avg_error), np.array(midpoint_std_error)
rk4_avg_error, rk4_std_error = np.array(rk4_avg_error), np.array(rk4_std_error)
range_steps = range(8, 50, 1)
fig, ax = plt.subplots(1, 1); fig.set_size_inches(8, 3)
ax.plot(range_steps, euler_avg_error, color='red', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, euler_avg_error-euler_std_error, euler_avg_error+euler_std_error, alpha=0.05, color='red')
ax.plot(range_steps, hyper_avg_error, c='black', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, hyper_avg_error+hyper_std_error, hyper_avg_error-hyper_std_error, alpha=0.05, color='black')
# subsample every 2nd entry so the NFE counts are comparable
mid_range_steps = range(8, 50, 2)
ax.plot(mid_range_steps, midpoint_avg_error[::2], color='green', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, midpoint_avg_error[::2]-midpoint_std_error[::2], midpoint_avg_error[::2]+midpoint_std_error[::2], alpha=0.1, color='green')
# subsample every 4th entry so the NFE counts are comparable
mid_range_steps = range(8, 50, 4)
ax.plot(mid_range_steps, rk4_avg_error[::4], color='gray', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, rk4_avg_error[::4]-rk4_std_error[::4], rk4_avg_error[::4]+rk4_std_error[::4], alpha=0.05, color='gray')
ax.set_ylim(0, 200)
ax.set_xlim(8, 40)
ax.legend(['Euler', 'HyperEuler', 'Midpoint', 'RK4'])
ax.set_xlabel('NFEs')
ax.set_ylabel('Terminal error (MAPE)')
```
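The terminal error plotted above is a mean absolute percentage error against the adaptive-step `dopri5` reference, summed over channels and averaged over the batch. A compact, self-contained restatement of that metric:

```python
import numpy as np

def terminal_mape(approx, ref):
    # absolute percentage error vs. the reference solution, summed over the
    # channel axis and averaged over the remaining entries
    return np.abs((approx - ref) / ref).sum(axis=1).mean()

ref = np.array([[1.0, 2.0], [4.0, 5.0]])
approx = np.array([[1.1, 2.0], [4.0, 4.5]])
err = terminal_mape(approx, ref)  # rows contribute 0.1 and 0.1 -> mean 0.1
```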
|
github_jupyter
|
# Introduction to Linear Regression
Linear regression assumes that the relationship between two variables, $x$ and $y$, can be modeled by a straight line:
$y = \beta_0 + \beta_1x$
We can also see this equation written as **y = mx + b**,
where $\beta_0$ and $\beta_1$ represent the two model parameters.
$x$ is usually called the **predictor** and $y$ the **response**
Some examples of linear relationships :
* size of vocabulary of a child and the amount of education of the parents
* length of hospital stay and severity of an operation
```
import pandas as pd
import numpy as np
import seaborn as sns
df = pd.read_csv('../Dataset/Iris.csv',index_col=0)
df.head()
```
We use the famous Iris dataset to showcase an example of a linear regression plot. This plot shows the relationship between petal length and petal width. (Where am I going to find "Common Brushtail Possum" data?)
Linear regression with seaborn: **regplot()** or **lmplot()**
<br/><br/>
**regplot()**: simple model fit
<br/><br/>
**lmplot()**: model fit + FacetGrid
```
sns.regplot(x="PetalLengthCm",y="PetalWidthCm",data=df)
```
We can see that the data roughly align along a straight line
## 7.1 Line fitting, residuals and correlation
A **linear relationship** means the relationship between two variables can be represented with a line
### 7.1.1 Beginning with straight lines
```
petal = df.sample(1,random_state=123)
petal
```
A straight line should only be used when the data appear to have a linear relationship
### 7.1.2 Fitting a line by eye
Linear regression function
```
def linear_regression(x,y):
#Small control to test the nature of the parameters (Should be dataFrame columns)
    if isinstance(x, pd.Series) and isinstance(y, pd.Series):
print('Series')
df = pd.DataFrame()
df['X'] = x
df['Y'] = y
df['XY'] = df['X']*df['Y']
df['X_squared'] = df['X']*df['X']
df['Y_squared'] = df['Y']*df['Y']
#Let's make this more readable by putting each of the Numerator and Denominator
A = (df['Y'].sum() * df['X_squared'].sum()) - (df['X'].sum() * df['XY'].sum())
B = df.shape[0]*df['X_squared'].sum() -(df['X'].sum())**2
#We calculate the Y-intercept
y_intercept = A/B
#We know calculate the slope
C = df.shape[0]*df['XY'].sum() - df['X'].sum()*df['Y'].sum()
D = df.shape[0]*df['X_squared'].sum() - (df['X'].sum())**2
#separating the Numerator and Denominator makes it more readable
slope = C/D
#Gives the whole table
#print(df)
return (slope,y_intercept)
else:
        print('Type should be pd.Series (a dataframe column)')
d = pd.DataFrame({'A':[1,2,3,4,5,6],'B':[6,5,4,3,2,1]})
linear_regression(d['A'],d['B'])
```
* We compute the **slope** and **Y-intercept**
```
sl,yi = linear_regression(df["PetalLengthCm"],df["PetalWidthCm"])
sl
yi
```
So we can conclude that the equation for the line describing the petal length and width relationship is approximately (the "hat" signifies that we are talking about an estimate):
<h3>$\hat{y} = 0.415x - 0.366$ </h3>
### 7.1.3 Residuals
The data don't usually all fall exactly on the line; they are mostly scattered around it. We call the vertical distance between a data point and the regression line a **residual**.
<br/>
They are positive if they fall above the regression line, and negative if they fall below.
<br/>
If they fit the line, the residual is zero.
We can write them as :
<br/><br/>
    $Residual = Data - Fit$
Or mathematically :
<br/><br/>
    $e = y - \hat{y}$
A good linear model is one for which these residuals are as small as possible
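Plugging numbers in, with a hypothetical observation and the slope and intercept estimated above:

```python
slope, intercept = 0.415, -0.366   # estimates from the fit above
x_obs, y_obs = 4.5, 1.5            # a hypothetical petal length/width pair
y_hat = slope * x_obs + intercept  # fitted value: 1.5015
residual = y_obs - y_hat           # -0.0015: the point sits just below the line
```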
* **<u>Residual plot</u>**
```
sns.residplot(x="PetalLengthCm",y="PetalWidthCm",data=df)
```
Here we plot the residuals; the middle straight line represents the regression line. Data points on the line fit the model perfectly, points above it are the positive residuals, and vice versa. We call this a **residual plot**, which is very useful for evaluating how well a linear model fits a dataset.
### 7.1.4 Describing linear relationships with correlation
**Correlation** describes the strength of the linear relationship between two variables. It measures how closely things are related.
<br/>
The correlation **R** always lies between -1 and 1:
* **0** : No apparent linear relationship
* **1** : perfectly linear and positive relationship (going upward)
* **-1** : Perfectly linear and negative relationship (going downward)
```
from IPython.display import Image
Image(filename="../img/correlation.png")
```
* We find the correlation of the "iris" database
```
""" Correlation Function """
#Zscore function
def zscore(x,mu,sigma):
return (x-mu)/sigma
def correlation(dx,dy):
    if isinstance(dx, pd.Series) and isinstance(dy, pd.Series):
#Compute the means
mx = dx.mean()
my = dy.mean()
#Compute the standard deviations
sx = dx.std()
sy = dy.std()
#Compute the second part of the equation
Q=0
for i,j in zip(dx,dy):
Q = Q + (zscore(i,mx,sx)*zscore(j,my,sy))
#multiply with the first part and return the result
return (1/(dx.shape[0]-1))*Q
else:
        print('Type should be pd.Series (a dataframe column)')
```
Petal Length in relationship to Petal Width
```
correlation(df["PetalLengthCm"],df["PetalWidthCm"])
```
We find that the correlation indicates a fairly strong, positive (upward) linear relationship
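As a sanity check, the hand-rolled `correlation` function should agree with pandas' built-in `Series.corr`. A self-contained comparison on small synthetic series (the function is restated so the snippet runs on its own):

```python
import pandas as pd

def zscore(x, mu, sigma):
    return (x - mu) / sigma

def correlation(dx, dy):
    # sum of z-score products divided by n - 1 (Pearson's R)
    mx, my, sx, sy = dx.mean(), dy.mean(), dx.std(), dy.std()
    Q = sum(zscore(i, mx, sx) * zscore(j, my, sy) for i, j in zip(dx, dy))
    return Q / (dx.shape[0] - 1)

s1 = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
s2 = pd.Series([2.0, 4.0, 5.0, 4.0, 5.0])
assert abs(correlation(s1, s2) - s1.corr(s2)) < 1e-12
```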
## 7.2 Least square regression line
When we find a linear fit for our data, we call this line the **regression line**, and its equation is $\hat{y} = mx + b$.
The **least squares regression line** is a more rigorous approach in which we minimize the squared vertical distances from the data points to the regression line, in other words the sum of the squared **residuals**.
We can add some other definitions, such as the equation for the slope:
<br/>
<h2>$m = \frac{S_y}{S_x}R$</h2>
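We can verify this identity numerically: for a least-squares fit, the slope returned by `np.polyfit` equals $\frac{S_y}{S_x}R$ (synthetic data is used for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.5 * x + rng.normal(size=100)

R = np.corrcoef(x, y)[0, 1]
slope_from_R = (y.std(ddof=1) / x.std(ddof=1)) * R   # m = (S_y / S_x) * R
slope_polyfit = np.polyfit(x, y, 1)[0]               # least-squares slope
assert abs(slope_from_R - slope_polyfit) < 1e-9
```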
|
github_jupyter
|
```
import numpy as np
import tensorflow as tf
import random as rn
import os
import matplotlib.pyplot as plt
%matplotlib inline
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(1)
rn.seed(1)
from keras import backend as K
tf.compat.v1.set_random_seed(1)
#sess = tf.Session(graph=tf.get_default_graph())
#K.set_session(sess)
import sys
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout, Flatten
from keras.layers import Conv1D,MaxPooling1D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras.optimizers import RMSprop
import keras.regularizers
import scipy
import math
import sys
import pandas as pd
from scipy.ndimage.filters import gaussian_filter1d
from sklearn.metrics import mean_squared_error
from scipy.stats import linregress
from scipy import interpolate
from scipy import signal
import pickle
from video_process_utils import *
import collections
target_col = "SEMLS_dev_residual"
#assign train/validation/test ids
alldata_processed =\
pd.read_csv("./data/processed/alldata_processed_with_dev_residual.csv" )
alldata_processed['videoid'] = alldata_processed['videoid'].apply(lambda x: int(x))
alldata_processed = alldata_processed[alldata_processed[target_col].notnull()]
alldata_processed = alldata_processed.groupby(['videoid','side']).head(1)
ids_nonmissing_target = set(alldata_processed['videoid'].unique())
datasplit_df = pd.read_csv('./data/processed/train_test_valid_id_split.csv')
datasplit_df['videoid'] = datasplit_df['videoid'].apply(lambda x: int(x))
all_ids = set(datasplit_df['videoid']).intersection(ids_nonmissing_target)
train_ids = set(datasplit_df[datasplit_df['dataset'] == 'train']['videoid']).intersection(ids_nonmissing_target)
validation_ids = set(datasplit_df[datasplit_df['dataset'] == 'validation']['videoid']).intersection(ids_nonmissing_target)
test_ids = set(datasplit_df[datasplit_df['dataset'] == 'test']['videoid']).intersection(ids_nonmissing_target)
with open('./data/processed/all_processed_video_segments.pickle', 'rb') as handle:
processed_video_segments = pickle.load(handle)
x_columns = [2*LANK,2*LANK+1,2*LKNE,2*LKNE+1,
2*LHIP,2*LHIP+1,2*LBTO,2*LBTO+1,
2*RANK,2*RANK+1,2*RKNE,2*RKNE+1,
2*RHIP,2*RHIP+1,2*RBTO,2*RBTO+1,50,51,52,53,54,55,56]
target_dict = {}
for i in range(len(alldata_processed)):
row = alldata_processed.iloc[i]
target_dict[row['videoid']] = row[target_col]
if target_col == "gmfcs":
processed_video_segments = list(filter(lambda x: target_dict[x[0]] in range(1,6), processed_video_segments))
X = [t[2] for t in processed_video_segments if t[0] in all_ids]
X = np.stack(X)[:,:,x_columns]
y = np.array([target_dict[t[0]] for t in processed_video_segments if t[0] in all_ids])
X_train = [t[2] for t in processed_video_segments if t[0] in train_ids]
X_train = np.stack(X_train)[:,:,x_columns]
X_validation = [t[2] for t in processed_video_segments if t[0] in validation_ids]
X_validation = np.stack(X_validation)[:,:,x_columns]
y_train = np.array([target_dict[t[0]] for t in processed_video_segments if t[0] in train_ids])
y_validation = np.array([target_dict[t[0]] for t in processed_video_segments if t[0] in validation_ids])
videoid_count_dict = collections.Counter(np.array([t[0] for t in processed_video_segments]))
train_videoid_weights = [1./videoid_count_dict[t[0]] for t in processed_video_segments if t[0] in train_ids]
train_videoid_weights = np.array(train_videoid_weights).reshape(-1,1)
validation_videoid_weights = [1./videoid_count_dict[t[0]] for t in processed_video_segments if t[0] in validation_ids]
validation_videoid_weights = np.array(validation_videoid_weights).reshape(-1,1)
target_min = np.min(y_train,axis=0)
target_range = np.max(y_train,axis=0) - np.min(y_train,axis=0)
print(target_min, target_range)
y_train_scaled = ((y_train-target_min)/target_range).reshape(-1,1)
y_validation_scaled = ((y_validation-target_min)/target_range).reshape(-1,1)
y_validation_scaled = np.hstack([y_validation_scaled,validation_videoid_weights])
y_train_scaled = np.hstack([y_train_scaled,train_videoid_weights])
c_i_factor = np.mean(np.vstack([train_videoid_weights,validation_videoid_weights]))
vid_length = 124
def step_decay(initial_lrate,epochs_drop,drop_factor):
def step_decay_fcn(epoch):
return initial_lrate * math.pow(drop_factor, math.floor((1+epoch)/epochs_drop))
return step_decay_fcn
epochs_drop,drop_factor = (10,0.8)
initial_lrate = 0.001
dropout_amount = 0.5
last_layer_dim = 10
filter_length = 8
conv_dim = 32
l2_lambda = 10**(-3.5)
def w_mse(weights):
def loss(y_true, y_pred):
#multiply by len(weights) to make the magnitude invariant to number of components in target
return K.mean(K.sum(K.square(y_true-y_pred)*weights,axis=1)*tf.reshape(y_true[:,-1],(-1,1)))/c_i_factor
return loss
#we don't want to optimize for the column counting video occurrences of course, but
#they are included in the target so we can use that column for the loss function
weights = [1.0,0]
normal_weights = [1.0,0]
#normalize weights to sum to 1 to prevent affecting loss function
weights = weights/np.sum(weights)
normal_weights = normal_weights/np.sum(normal_weights)
#fixed epoch budget of 100 that empirically seems to be sufficient
n_epochs = 100
mse_opt = w_mse(weights)
#monitor our actual objective
mse_metric = w_mse(target_range**2*normal_weights)
hyper_str = "params_"
for param in [initial_lrate,epochs_drop,drop_factor,dropout_amount,conv_dim,last_layer_dim,filter_length,l2_lambda]:
hyper_str = hyper_str + str(param) + "_"
K.clear_session()
#K.set_session(sess)
model = Sequential()
model.add(Conv1D(conv_dim,filter_length, input_dim=X_train.shape[2],input_length=vid_length,padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv1D(conv_dim,filter_length,padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(dropout_amount))
model.add(Conv1D(conv_dim,filter_length,padding='same',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv1D(conv_dim,filter_length,padding='same',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(dropout_amount))
model.add(Conv1D(conv_dim,filter_length,padding='same',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv1D(conv_dim,filter_length,padding='same',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=3))
model.add(Dropout(dropout_amount))
model.add(Flatten())
model.add(Dense(last_layer_dim,activation='relu'))
model.add(Dense(2, activation='linear'))
checkpoint_folder = "./data/checkpoints/cnn_checkpoints_%s" % (target_col)
from keras.callbacks import LearningRateScheduler
from keras.callbacks import ModelCheckpoint
from keras.callbacks import TerminateOnNaN
train_model = True
if not os.path.exists(checkpoint_folder):
os.makedirs(checkpoint_folder)
filepath=checkpoint_folder+"/weights-{epoch:02d}-{val_loss_1:.4f}.hdf5"
if train_model:
opt = RMSprop(lr=0.0,rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss=mse_opt,metrics=[mse_metric],
optimizer=opt)
checkpoint = \
ModelCheckpoint(filepath, monitor='val_loss_2', verbose=1, save_best_only=False, save_weights_only=False, mode='auto', period=1)
lr = LearningRateScheduler(step_decay(initial_lrate,epochs_drop,drop_factor))
history = model.fit(X_train, y_train_scaled,callbacks=[checkpoint,lr,TerminateOnNaN()],
validation_data=(X_validation,y_validation_scaled),
batch_size=32, epochs=n_epochs,shuffle=True)
import statsmodels.api as sm
def undo_scaling(y,target_range,target_min):
return y*target_range+target_min
weight_files = os.listdir(checkpoint_folder)
weight_files_df = pd.DataFrame(weight_files,columns=['filename'])
weight_files_df['num'] = weight_files_df['filename'].apply(lambda x: int(x.split('-')[1]))
weight_files_df.sort_values(by='num',ascending=True,inplace=True)
def predict_and_aggregate_singlevar(y,X,ids,model,target_col):
df = pd.DataFrame(y,columns=[target_col])
target_col_pred = target_col + "_pred"
videoids = [t[0] for t in processed_video_segments if t[0] in ids]
df["videoid"] = np.array(videoids)
preds = model.predict(X)
df[target_col_pred] = undo_scaling(preds[:,0],target_range,target_min)
df["count"] = 1
df = df.groupby(['videoid'],as_index=False).agg({target_col_pred:np.mean,'count':np.sum,target_col:np.mean})
df['ones'] = 1
return df
video_ids = [t[0] for t in processed_video_segments if t[0] in all_ids]
predictions_df = pd.DataFrame(video_ids,columns=['videoid'])
predictions_df[target_col] = y
predictions_df = predictions_df.merge(right=datasplit_df[['videoid','dataset']],on=['videoid'],how='left')
for i in range(0,len(weight_files_df)):
weight_file = weight_files_df['filename'].iloc[i]
print(weight_file)
model.load_weights(checkpoint_folder + "/%s" % (weight_file))
preds = model.predict(X)
predictions_df["%s_pred_%s" % (target_col,i)] = undo_scaling(preds[:,0],target_range,target_min)
predictions_df.groupby(['videoid','dataset'],as_index=False).mean().to_csv("./data/predictions/cnn_%s_singlesided_predictions_all_epochs.csv" % (target_col),index=False)
# Save best models
# This must be run after finding the best model with select_optimal_epoch
maps = {
"gmfcs": "./data/checkpoints/cnn_checkpoints_gmfcs/weights-08-0.5025.hdf5", #
"speed": "./data/checkpoints/cnn_checkpoints_speed/weights-77-0.0336.hdf5", #
"cadence": "./data/checkpoints/cnn_checkpoints_cadence/weights-36-0.0211.hdf5", #
"SEMLS_dev_residual": "./data/checkpoints/cnn_checkpoints_SEMLS_dev_residual/weights-32-0.8929.hdf5", #
# "GDI": "./data/checkpoints/cnn_checkpoints_GDI/weights-88-72.0330.hdf5" #
"GDI": "./data/checkpoints/cnn_checkpoints_GDI/weights-92-90.8354.hdf5" #
}
for col in maps.keys():
model_folder_path = "./data/models/%s_best.pb" % (col)
model.load_weights(maps[col])
model.save(model_folder_path)
```
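The `step_decay` learning-rate schedule defined in the cell above can be verified in isolation (restated here so the snippet runs standalone):

```python
import math

def step_decay(initial_lrate, epochs_drop, drop_factor):
    def step_decay_fcn(epoch):
        # multiply the base rate by drop_factor once every epochs_drop epochs
        return initial_lrate * math.pow(drop_factor, math.floor((1 + epoch) / epochs_drop))
    return step_decay_fcn

sched = step_decay(0.001, 10, 0.8)
assert abs(sched(0) - 0.001) < 1e-12    # no drop yet
assert abs(sched(9) - 0.0008) < 1e-12   # first drop at epoch 9: (1 + 9) / 10 == 1
```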
|
github_jupyter
|
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Modeling" data-toc-modified-id="Modeling-1"><span class="toc-item-num">1 </span>Modeling</a></span><ul class="toc-item"><li><span><a href="#Victims" data-toc-modified-id="Victims-1.1"><span class="toc-item-num">1.1 </span>Victims</a></span></li><li><span><a href="#Perpetrators" data-toc-modified-id="Perpetrators-1.2"><span class="toc-item-num">1.2 </span>Perpetrators</a></span></li><li><span><a href="#ViolenceEvent" data-toc-modified-id="ViolenceEvent-1.3"><span class="toc-item-num">1.3 </span>ViolenceEvent</a></span></li></ul></li></ul></div>
```
import sys
sys.version
from pathlib import Path
import pprint
%load_ext cypher
# https://ipython-cypher.readthedocs.io/en/latest/
# used for cell magic
from py2neo import Graph
NEO4J_URI="bolt://localhost:7687"
graph = Graph(NEO4J_URI)
graph
def clear_graph():
print(graph.run("MATCH (n) DETACH DELETE n").stats())
clear_graph()
graph.run("RETURN apoc.version();").data()
graph.run("call dbms.components() yield name, versions, edition unwind versions as version return name, version, edition;").data()
```
# Modeling
```
import pandas as pd
```
We are modeling data from the Pinochet dataset, available at https://github.com/danilofreire/pinochet
> Freire, D., Meadowcroft, J., Skarbek, D., & Guerrero, E.. (2019). Deaths and Disappearances in the Pinochet Regime: A New Dataset. https://doi.org/10.31235/osf.io/vqnwu.
The dataset has 59 variables with information about the victims, the perpetrators, and geographical
coordinates of each incident.
```
PINOCHET_DATA = "../pinochet/data/pinochet.csv"
pin = pd.read_csv(PINOCHET_DATA)
pin.head()
pin.age.isna().sum()
```
The dataset contains information about perpetrators, victims, violence events, and event locations. We will develop models around these concepts and establish relationships between them later.
## Victims
- victim_id*: this is not the same as in the dataset.
- individual_id
- group_id
- first_name
- last_name
- age
- minor
- male
- number_previous_arrests
- occupation
- occupation_detail
- victim_affiliation
- victim_affiliation_detail
- targeted
```
victim_attributes = [
"individual_id",
"group_id",
"first_name",
"last_name",
"age",
"minor",
"male",
"number_previous_arrests",
"occupation",
"occupation_detail",
"victim_affiliation",
"victim_affiliation_detail",
"targeted",
]
pin_victims = pin[victim_attributes]
pin_victims.head()
# https://neo4j.com/docs/labs/apoc/current/import/load-csv/
PINOCHET_CSV_GITHUB = "https://raw.githubusercontent.com/danilofreire/pinochet/master/data/pinochet.csv"
query = """
WITH $url AS url
CALL apoc.load.csv(url)
YIELD lineNo, map, list
RETURN *
LIMIT 1"""
graph.run(query, url = PINOCHET_CSV_GITHUB).data()
%%cypher
CALL apoc.load.csv('pinochet.csv')
YIELD lineNo, map, list
RETURN *
LIMIT 1
```
## Perpetrators
- perpetrator_affiliation
- perpetrator_affiliation_detail
- war_tribunal
```
perpetrators_attributes = [
"perpetrator_affiliation",
"perpetrator_affiliation_detail",
"war_tribunal",
]
pin_perps = pin[perpetrators_attributes]
pin_perps.head()
```
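Before wiring these attributes into the graph, it helps to see how a parameterized Cypher `MERGE` statement is assembled from a property list. The helper below is a hypothetical illustration; the actual load query lives in `load_csv.cql`:

```python
def merge_statement(label, props):
    # build a parameterized MERGE; py2neo would receive `props` as parameters
    keys = ", ".join("%s: $%s" % (k, k) for k in sorted(props))
    return "MERGE (n:%s {%s})" % (label, keys)

stmt = merge_statement("Perpetrator",
                       {"perpetrator_affiliation": "state agent", "war_tribunal": False})
print(stmt)
# MERGE (n:Perpetrator {perpetrator_affiliation: $perpetrator_affiliation, war_tribunal: $war_tribunal})
```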
## ViolenceEvent
```
clear_graph()
query = Path("../services/graph-api/project/queries/load_csv.cql").read_text()
# pprint.pprint(query)
graph.run(query, url = PINOCHET_CSV_GITHUB).stats()
```
|
github_jupyter
|
```
# evaluate RFE for classification
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold, RepeatedKFold
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
# define dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
#create pipeline
rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)
model= DecisionTreeClassifier() # the model need not be the same as the estimator used inside RFE
pipeline = Pipeline(steps=[('s', rfe),('m', model)]) # 's' and 'm' are arbitrary step names; any labels work
# evaluate model
cv=RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores=cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
# report performance
print('Accuracy: %.3f (%.3f)'% (np.mean(n_scores), np.std(n_scores)))
# make a prediction with an RFE pipeline
# define dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
#create pipeline
rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)
model= DecisionTreeClassifier() # the model need not be the same as the estimator used inside RFE
pipeline = Pipeline(steps=[('s', rfe),('m', model)]) # 's' and 'm' are arbitrary step names; any labels work
# fit the model on all available data
pipeline.fit(X,y)
# make a prediction for one example
data = [[2.56999479, 0.13019997, 3.16075093, -4.35936352, -1.61271951, -1.39352057, -2.48924933, -1.93094078, 3.26130366, 1.05692145]]
yhat = pipeline.predict(data)
print('Predicted Class: %d' % (yhat))
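# The fitted RFE step also records which features it kept -- a quick added
# sanity check (not part of the original tutorial walkthrough):
check_rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)
check_rfe.fit(X, y)
print(check_rfe.support_)   # boolean mask: True for the 5 selected features
print(check_rfe.ranking_)   # selected features are ranked 1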
# test regression dataset
from sklearn.datasets import make_regression
# define dataset
X, y= make_regression(n_samples=1000, n_features=10, n_informative=5, random_state=1)
# summarize the dataset
print(X.shape, y.shape)
# evaluate RFE for regression
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedKFold
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
# define dataset
X, y= make_regression(n_samples=1000, n_features=10, n_informative=5, random_state=1)
# create pipeline
rfe = RFE(estimator=DecisionTreeRegressor(), n_features_to_select=5)
model= DecisionTreeRegressor() # the model need not be the same as the estimator used inside RFE
pipeline = Pipeline(steps=[('s', rfe),('m', model)])
# evaluate model
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(pipeline, X, y, scoring= 'neg_mean_absolute_error', cv=cv, n_jobs=-1, error_score='raise')
# reporting the MAE of the model across all folds; scikit-learn makes the MAE negative so the score is
# maximized instead of minimized. Negative MAE values closer to zero are better, and a perfect MAE is zero.
# report performance
print('MAE: %.3f (%.3f)'% (np.mean(n_scores), np.std(n_scores)))
# make a regression prediction with an RFE pipeline
from numpy import mean
from numpy import std
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
#define dataset
X, y= make_regression(n_samples=1000, n_features=10, n_informative=5, random_state=1)
# create pipeline
rfe = RFE(estimator=DecisionTreeRegressor(), n_features_to_select=5)
model= DecisionTreeRegressor() # the model need not be the same as the estimator used inside RFE
pipeline = Pipeline(steps=[('s', rfe),('m', model)])
# fit the model on all available data
pipeline.fit(X,y)
# make a prediction for one example
data = [[-2.022220122, 0.31563495, 0.8279464, -0.30620401, 0.116003707, -1.44411381, 0.87616892, -0.50446586, 0.23009474, 0.76201118]]
yhat=pipeline.predict(data)
print('Predicted: %.3f' % (yhat))
# explore the number of selected features for RFE
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from matplotlib import pyplot
# get the dataset
def get_dataset():
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
return X,y
# get a list of models to evaluate
def get_models():
models=dict()
for i in range(2, 10):
rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=i)
model = DecisionTreeClassifier()
models[str(i)] = Pipeline(steps=[('s', rfe), ('m', model)])
return models
# evaluate a given model using cross-validation
def evaluate_model(model, X, y):
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
return scores
# define datasets
X, y = get_dataset()
# get the models to evaluate
models = get_models()
# evaluate the models and store results
results, names = list(), list()
for name, model in models.items():
scores = evaluate_model(model, X, y)
results.append(scores)
names.append(name)
print('>%s %.3f (%.3f)' % (name, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=names, showmeans=True)
pyplot.show()
# automatically select the number of features for RFE
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.feature_selection import RFECV # for automatic selection of the number of features, use RFECV instead of RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
# define dataset
def get_dataset():
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
return X,y
X, y = get_dataset()
# create pipeline
rfe = RFECV(estimator=DecisionTreeClassifier())
model = DecisionTreeClassifier()
pipeline = Pipeline(steps = [('s', rfe), ('m', model)])
# evaluate model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
# when using RFE we may want to know which features were selected and which were not
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
# define RFE
rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)
# fit RFE
rfe.fit(X, y)
#Summarize all features
for i in range(X.shape[1]):
    print('Column: %d, Selected %s, Rank: %d' % (i, rfe.support_[i], rfe.ranking_[i]))
# .support_ reports True or False
# .ranking_ reports the importance of each feature
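# Hedged standalone illustration (tiny synthetic dataset): the boolean mask in
# .support_ can subset the feature matrix directly to the selected columns.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
_Xs, _ys = make_classification(n_samples=50, n_features=6, n_informative=3, random_state=0)
_rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=3).fit(_Xs, _ys)
print(_Xs[:, _rfe.support_].shape)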
# explore the algorithm wrapped by RFE -> This will tell us which algorithm is better to be used in RFE
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from matplotlib import pyplot
# get the dataset
def get_dataset():
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=1)
return X,y
# get a list of models to evaluate
def get_models():
models=dict()
#lr
rfe=RFE(estimator=LogisticRegression(), n_features_to_select=5)
model = DecisionTreeClassifier()
models['lr']= Pipeline(steps=[('s', rfe), ('m', model)])
#perceptron
rfe=RFE(estimator=Perceptron(), n_features_to_select=5)
model = DecisionTreeClassifier()
models['per']= Pipeline(steps=[('s', rfe), ('m', model)])
# cart
rfe=RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)
model = DecisionTreeClassifier()
models['cart']= Pipeline(steps=[('s', rfe), ('m', model)])
# rf
rfe=RFE(estimator=RandomForestClassifier(), n_features_to_select=5)
model = DecisionTreeClassifier()
models['rf']= Pipeline(steps=[('s', rfe), ('m', model)])
# gbm
rfe=RFE(estimator=GradientBoostingClassifier(), n_features_to_select=5)
model = DecisionTreeClassifier()
models['gbm']= Pipeline(steps=[('s', rfe), ('m', model)])
return models
# evaluate a given model using cross-validation
def evaluate_model(model, X, y):
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
return scores
# define datasets
X, y = get_dataset()
# get the models to evaluate
models = get_models()
# evaluate the models and store results
results, names = list(), list()
for name, model in models.items():
scores = evaluate_model(model, X, y)
results.append(scores)
names.append(name)
print('>%s %.3f (%.3f)' % (name, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=names, showmeans=True)
pyplot.show()
```
|
github_jupyter
|
# [Day 8](https://www.hackerrank.com/challenges/30-dictionaries-and-maps/problem)
```
{'1':'a'}.update({'2':'c'})
d = {}
for i in range(int(input())):
x = input().split()
d[x[0]] = x[1]
while True:
try:
name = input()
if name in d:
print(name, "=", d[name], sep="")
else:
print("Not found")
except:
break
n = int(input().strip())
print(max(len(length) for length in bin(n)[2:].split('0')))
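# Hedged spot-check of the trick above: bin(13) is '0b1101', splitting '1101'
# on '0' gives ['11', '1'], and the longest run of consecutive 1s has length 2.
_runs = bin(13)[2:].split('0')
print(max(len(r) for r in _runs))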
```
# Day 13
```
class Shape:
def area(): pass
def perimeter(): pass
class square(Shape):
    def __init__(self,side):
self.side=side
s=Shape()
s
from abc import ABC,abstractclassmethod
class Shape(ABC):
@abstractclassmethod
def area(): pass
@abstractclassmethod
def perimeter(): pass
class square(Shape):
def __init__(self,side):
self.__side=side
def area(self):
return self.__side**2
def perimeter(self):
return self.__side*4
s=Shape() # if a class is abstract we can't instantiate it - this line raises a TypeError
r=square(34)
print(r.area())
print(r.perimeter())
from abc import ABCMeta, abstractmethod
class Book(object, metaclass=ABCMeta):
def __init__(self,title,author):
self.title=title
self.author=author
@abstractmethod
def display(): pass
#Write MyBook class
class MyBook(Book):
    price=0
    def __init__(self,t,a,p):
        super().__init__(t,a)
        self.price=p
    def display(self):
        print('Title: {}'.format(self.title))
        print('Author: {}'.format(self.author))
        print('Price: {}'.format(self.price))
title=input()
author=input()
price=int(input())
new_novel=MyBook(title,author,price)
new_novel.display()
```
# Day 17
```
#Write your code here
class Calculator:
    def power(self,n,p):
        if n>=0 and p>=0:
            return n**p
        else:
            raise Exception('n and p should be non-negative')
myCalculator=Calculator()
T=int(input())
for i in range(T):
n,p = map(int, input().split())
try:
ans=myCalculator.power(n,p)
print(ans)
except Exception as e:
print(e)
```
# Day 18
```
s=list(input())
l=len(s)
if l%2!=0:
s.pop(l//2)
for i in range(l//2):
if s.pop(0)!=s.pop(-1):
        print('not a palindrome')
break
else:
    print('palindrome')
[]+[1]
import sys
class Solution:
# Write your code here
def __init__(self):
self.s=[]
self.q=[]
def pushCharacter(self,i):
self.s+=[i]
def enqueueCharacter(self,j):
self.q=[j]+self.q
def popCharacter(self):
return self.s.pop()
def dequeueCharacter(self):
return self.q.pop()
# read the string s
s=input()
#Create the Solution class object
obj=Solution()
l=len(s)
# push/enqueue all the characters of string s to stack
for i in range(l):
obj.pushCharacter(s[i])
obj.enqueueCharacter(s[i])
isPalindrome=True
'''
pop the top character from stack
dequeue the first character from queue
compare both the characters
'''
for i in range(l // 2):
if obj.popCharacter()!=obj.dequeueCharacter():
isPalindrome=False
break
#finally print whether string s is palindrome or not.
if isPalindrome:
print("The word, "+s+", is a palindrome.")
else:
print("The word, "+s+", is not a palindrome.")
```
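The same pop-versus-dequeue comparison can be sketched compactly with `collections.deque` (a standalone aside, not part of the HackerRank template):

```python
from collections import deque

# Compare characters popped from the back (stack) with those taken from the
# front (queue); any mismatch in the first half means the string is not a palindrome.
def is_palindrome(s):
    stack, queue = list(s), deque(s)
    return all(stack.pop() == queue.popleft() for _ in range(len(s) // 2))

print(is_palindrome('racecar'), is_palindrome('hello'))
```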
# Day 20
```
class Swaps:
def __init__(self,n,a):
self.n=n
self.a=a
self.numberOfSwaps=0
def calculate(self):
for i in range(self.n):
#Track number of elements swapped during a single array traversal
for j in range(self.n-1):
# Swap adjacent elements if they are in decreasing order
if self.a[j] > self.a[j + 1]:
self.numberOfSwaps+=1
temp=self.a[j]
self.a[j]=self.a[j + 1]
self.a[j+1]=temp
#If no elements were swapped during a traversal, array is sorted
if self.numberOfSwaps == 0:
                break
def display(self):
self.calculate()
        print('Array is sorted in {0} swaps.\nFirst Element: {1}\nLast Element: {2}'.format(self.numberOfSwaps,self.a[0],self.a[-1]))
n = int(input().strip())
a = list(map(int, input().strip().split(' ')))
s=Swaps(n,a)
s.display()
def isPrime(n) :
if (n <= 1) :
return False
if (n <= 3) :
return True
if (n % 2 == 0 or n % 3 == 0) :
return False
i = 5
while(i * i <= n) :
if (n % i == 0 or n % (i + 2) == 0) :
return False
i = i + 6
return True
for _ in range(int(input())):
if isPrime(int(input())):
print('Prime')
else:
print('Not prime')
rd, rm, ry = [int(x) for x in input().split(' ')]
ed, em, ey = [int(x) for x in input().split(' ')]
if (ry, rm, rd) <= (ey, em, ed):
print(0)
elif (ry, rm) == (ey, em):
print(15 * (rd - ed))
elif ry == ey:
print(500 * (rm - em))
else:
print(10000)
```
|
github_jupyter
|
# Illustra: Multi-text to Image
The part of [Aphantasia](https://github.com/eps696/aphantasia) series.
Based on [CLIP](https://github.com/openai/CLIP) + FFT from [Lucent](https://github.com/greentfrapp/lucent) // made by Vadim Epstein [[eps696](https://github.com/eps696)]
thanks to [Ryan Murdock](https://twitter.com/advadnoun), [Jonathan Fly](https://twitter.com/jonathanfly), [@eduwatch2](https://twitter.com/eduwatch2) for ideas.
## Features
* **continuously processes phrase lists** (e.g. illustrating lyrics)
* generates massive detailed high res imagery, a la deepdream
* directly parameterized with [FFT](https://github.com/greentfrapp/lucent/blob/master/lucent/optvis/param/spatial.py) (no pretrained GANs)
* various CLIP models (including multi-language from [SBERT](https://sbert.net))
* saving/loading FFT snapshots to resume processing
* separate text prompt for image style
**Run the cell below after each session restart**
Mark `resume` and upload `.pt` file, if you're resuming from the saved params.
```
#@title General setup
!pip install ftfy==5.8 transformers==4.6.0
!pip install gputil ffpb
try:
!pip3 install googletrans==3.1.0a0
from googletrans import Translator, constants
translator = Translator()
except: pass
!apt-get -qq install ffmpeg
from google.colab import drive
drive.mount('/G', force_remount=True)
# gdir = !ls /G/
# gdir = '/G/%s/' % str(gdir[0])
gdir = '/G/MyDrive/'
%cd $gdir
work_dir = 'illustra'
import os
work_dir = os.path.join(gdir, work_dir)
os.makedirs(work_dir, exist_ok=True)
%cd $work_dir
import os
import io
import time
import math
import random
import imageio
import numpy as np
import PIL
from base64 import b64encode
import shutil
# import moviepy, moviepy.editor
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.autograd import Variable
from IPython.display import HTML, Image, display, clear_output
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import ipywidgets as ipy
from google.colab import output, files
import warnings
warnings.filterwarnings("ignore")
!pip install git+https://github.com/openai/CLIP.git --no-deps
import clip
!pip install sentence_transformers
from sentence_transformers import SentenceTransformer
!pip install kornia
import kornia
!pip install lpips
import lpips
!pip install PyWavelets==1.1.1
!pip install git+https://github.com/fbcotter/pytorch_wavelets
%cd /content
!rm -rf aphantasia
!git clone https://github.com/eps696/aphantasia
%cd aphantasia/
from clip_fft import to_valid_rgb, fft_image
from utils import slice_imgs, derivat, pad_up_to, basename, file_list, img_list, img_read, txt_clean, plot_text, checkout
import transforms
from progress_bar import ProgressIPy as ProgressBar
clear_output()
resume = False #@param {type:"boolean"}
if resume:
resumed = files.upload()
params_pt = list(resumed.values())[0]
params_pt = torch.load(io.BytesIO(params_pt))
if isinstance(params_pt, list): params_pt = params_pt[0]
def read_pt(file):
return torch.load(file).cuda()
def ema(base, next, step):
scale_ma = 1. / (step + 1)
return next * scale_ma + base * (1.- scale_ma)
def save_img(img, fname=None):
img = np.array(img)[:,:,:]
img = np.transpose(img, (1,2,0))
img = np.clip(img*255, 0, 255).astype(np.uint8)
if fname is not None:
imageio.imsave(fname, np.array(img))
imageio.imsave('result.jpg', np.array(img))
def makevid(seq_dir, size=None):
out_sequence = seq_dir + '/%05d.jpg'
out_video = seq_dir + '.mp4'
!ffpb -y -i $out_sequence -codec nvenc $out_video
# moviepy.editor.ImageSequenceClip(img_list(seq_dir), fps=25).write_videofile(out_video, verbose=False)
data_url = "data:video/mp4;base64," + b64encode(open(out_video,'rb').read()).decode()
wh = '' if size is None else 'width=%d height=%d' % (size, size)
return """<video %s controls><source src="%s" type="video/mp4"></video>""" % (wh, data_url)
# Hardware check
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
import GPUtil as GPU
gpu = GPU.getGPUs()[0] # XXX: Colab provides a single GPU, and its availability isn't guaranteed
!nvidia-smi -L
print("GPU RAM {0:.0f}MB | Free {1:.0f}MB)".format(gpu.memoryTotal, gpu.memoryFree))
print('\nDone!')
#@title Upload text file
#@markdown For non-English languages mark one of:
multilang = False #@param {type:"boolean"}
translate = False #@param {type:"boolean"}
uploaded = files.upload()
#@markdown (`multilang` = multi-language CLIP model, trained with ViT,
#@markdown `translate` = Google translation, compatible with any visual model)
```
### Settings
Set the desired video resolution and `duration` (in sec).
Describe `style`, which you'd like to apply to the imagery.
Select CLIP visual `model` (results do vary!). I prefer ViT for consistency (and it's the only native multi-language option).
`align` option is about composition. `uniform` looks most adequate, `overscan` can make semi-seamless tileable texture.
`aug_transform` applies some augmentations, inhibiting image fragmentation & "graffiti" printing (slower, yet recommended).
Decrease `samples` if you face OOM (it's the main RAM eater).
Increasing `steps` will elaborate details and make tones smoother, but may start throwing texts like graffiti (and will obviously take more time).
`show_freq` controls preview frequency (doesn't affect the results; one can set it higher to speed up the process).
Tune `decay` (compositional softness) and `sharpness`, `colors` (saturation) and `contrast` as needed.
Experimental tricks:
`aug_noise` augmentation, `macro` (from 0 to 1) and `progressive_grow` (read more [here](https://github.com/eps696/aphantasia/issues/2)) may boost bigger forms, making composition less disperse.
`no_text` tries to remove "graffiti" by subtracting plotted text prompt. good start is \~0.1.
`enhance` boosts training consistency (of simultaneous samples) and steps progress. good start is 0.1~0.2.
NB: `keep` parameter controls how well the next line/image generation follows the previous. By default `keep` = 0, and every frame is produced independently (i.e. randomly initiated).
Setting it higher starts each generation closer to the average of previous runs, effectively keeping macro compositions more similar and the transitions smoother. Safe values are < 0.5 (higher numbers may cause the imagery getting stuck). This behaviour depends on the input, so test with your prompts and see what's better in your case.
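The running-average behaviour behind `keep` can be sketched standalone (this mirrors the `ema` helper defined in the setup cell; the numbers are illustrative):

```python
# Minimal sketch of the running average that `keep` blends into each init
# (mirrors the ema() helper from the setup cell; values are illustrative).
def ema(base, next, step):
    scale_ma = 1. / (step + 1)
    return next * scale_ma + base * (1. - scale_ma)

avg = 0.
for step, value in enumerate([10., 20., 30.]):
    avg = ema(avg, value, step)
print(avg)  # running mean of the values seen so far
```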
```
#@title Generate
# from google.colab import drive
# drive.mount('/content/GDrive')
# clipsDir = '/content/GDrive/MyDrive/T2I ' + dtNow.strftime("%Y-%m-%d %H%M")
style = "" #@param {type:"string"}
sideX = 1280 #@param {type:"integer"}
sideY = 720 #@param {type:"integer"}
duration = 60#@param {type:"integer"}
#@markdown > Config
model = 'ViT-B/16' #@param ['ViT-B/16', 'ViT-B/32', 'RN101', 'RN50x16', 'RN50x4', 'RN50']
align = 'uniform' #@param ['central', 'uniform', 'overscan']
aug_transform = True #@param {type:"boolean"}
keep = 0. #@param {type:"number"}
#@markdown > Look
decay = 1.5 #@param {type:"number"}
colors = 1.5 #@param {type:"number"}
contrast = 0.9#@param {type:"number"}
sharpness = 0.3 #@param {type:"number"}
#@markdown > Training
steps = 200 #@param {type:"integer"}
samples = 200 #@param {type:"integer"}
learning_rate = .05 #@param {type:"number"}
show_freq = 10 #@param {type:"integer"}
#@markdown > Tricks
aug_noise = 0.2 #@param {type:"number"}
no_text = 0. #@param {type:"number"}
enhance = 0. #@param {type:"number"}
macro = 0.4 #@param {type:"number"}
progressive_grow = False #@param {type:"boolean"}
diverse = -enhance
expand = abs(enhance)
fps = 25
if multilang: model = 'ViT-B/32' # sbert model is trained with ViT
use_jit = True if float(torch.__version__[:3]) < 1.8 else False
model_clip, _ = clip.load(model, jit=use_jit)
modsize = model_clip.visual.input_resolution
xmem = {'ViT-B/16':0.25, 'RN50':0.5, 'RN50x4':0.16, 'RN50x16':0.06, 'RN101':0.33}
if model in xmem.keys():
samples = int(samples * xmem[model])
def enc_text(txt):
if multilang:
model_lang = SentenceTransformer('clip-ViT-B-32-multilingual-v1').cuda()
emb = model_lang.encode([txt], convert_to_tensor=True, show_progress_bar=False).detach().clone()
del model_lang
else:
emb = model_clip.encode_text(clip.tokenize(txt).cuda())
return emb.detach().clone()
if diverse != 0:
samples = int(samples * 0.5)
if aug_transform is True:
trform_f = transforms.transforms_elastic
samples = int(samples * 0.95)
else:
trform_f = transforms.normalize()
text_file = list(uploaded)[0]
texts = list(uploaded.values())[0].decode().split('\n')
texts = [tt.strip() for tt in texts if len(tt.strip())>0 and tt[0] != '#']
print(' text file:', text_file)
print(' total lines:', len(texts))
if len(style) > 0:
print(' style:', style)
if translate:
translator = Translator()
style = translator.translate(style, dest='en').text
print(' translated to:', style)
txt_enc2 = enc_text(style)
workdir = os.path.join(work_dir, basename(text_file))
workdir += '-%s' % model if 'RN' in model.upper() else ''
!rm -rf $workdir
os.makedirs(workdir, exist_ok=True)
outpic = ipy.Output()
outpic
# make init
global params_start, params_ema
params_shape = [1, 3, sideY, sideX//2+1, 2]
params_start = torch.randn(*params_shape).cuda() # random init
params_ema = 0.
if resume is True:
# print(' resuming from', resumed)
# params, _, _ = fft_image([1, 3, sideY, sideX], resume = params_pt, sd=1.)
params_start = params_pt.cuda()
if keep > 0:
params_ema = params[0].detach()
torch.save(params_pt, os.path.join(workdir, '000-start.pt'))
else:
torch.save(params_start, os.path.join(workdir, '000-start.pt'))
torch.save(params_start, 'init.pt') # final init
prev_enc = 0
def process(txt, num):
global params_start
sd = 0.01
if keep > 0: sd = keep + (1-keep) * sd
params, image_f, _ = fft_image([1, 3, sideY, sideX], resume='init.pt', sd=sd, decay_power=decay)
image_f = to_valid_rgb(image_f, colors = colors)
if progressive_grow is True:
lr1 = learning_rate * 2
lr0 = lr1 * 0.01
else:
lr0 = learning_rate
optimizer = torch.optim.AdamW(params, lr0, weight_decay=0.01, amsgrad=True)
print(' topic: ', txt)
if translate:
translator = Translator()
txt = translator.translate(txt, dest='en').text
print(' translated to:', txt)
txt_enc = enc_text(txt)
if no_text > 0:
txt_plot = torch.from_numpy(plot_text(txt, modsize)/255.).unsqueeze(0).permute(0,3,1,2).cuda()
txt_plot_enc = model_clip.encode_image(txt_plot).detach().clone()
else: txt_plot_enc = None
out_name = '%03d-%s' % (num+1, txt_clean(txt))
tempdir = os.path.join(workdir, out_name)
!rm -rf $tempdir
os.makedirs(tempdir, exist_ok=True)
pbar = ProgressBar(steps) # // save_freq
for i in range(steps):
loss = 0
noise = aug_noise * torch.randn(1, 1, *params[0].shape[2:4], 1).cuda() if aug_noise > 0 else 0.
img_out = image_f(noise)
imgs_sliced = slice_imgs([img_out], samples, modsize, trform_f, align, macro=macro)
out_enc = model_clip.encode_image(imgs_sliced[-1])
loss -= torch.cosine_similarity(txt_enc, out_enc, dim=-1).mean()
if len(style) > 0:
loss -= 0.5 * torch.cosine_similarity(txt_enc2, out_enc, dim=-1).mean()
if no_text > 0:
loss += no_text * torch.cosine_similarity(txt_plot_enc, out_enc, dim=-1).mean()
if sharpness != 0: # mode = scharr|sobel|default
loss -= sharpness * derivat(img_out, mode='sobel')
# loss -= sharpness * derivat(img_sliced, mode='scharr')
if diverse != 0:
imgs_sliced = slice_imgs([image_f(noise)], samples, modsize, trform_f, align, macro=macro)
out_enc2 = model_clip.encode_image(imgs_sliced[-1])
loss += diverse * torch.cosine_similarity(out_enc, out_enc2, dim=-1).mean()
del out_enc2; torch.cuda.empty_cache()
if expand > 0:
global prev_enc
if i > 0:
loss += expand * torch.cosine_similarity(out_enc, prev_enc, dim=-1).mean()
prev_enc = out_enc.detach()
del img_out, imgs_sliced, out_enc; torch.cuda.empty_cache()
if progressive_grow is True:
lr_cur = lr0 + (i / steps) * (lr1 - lr0)
for g in optimizer.param_groups:
g['lr'] = lr_cur
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % show_freq == 0:
with torch.no_grad():
img = image_f(contrast=contrast).cpu().numpy()[0]
if sharpness != 0:
img = img ** (1 + sharpness/2.) # empirical tone mapping
save_img(img, os.path.join(tempdir, '%05d.jpg' % (i // show_freq)))
outpic.clear_output()
with outpic:
display(Image('result.jpg'))
del img
pbar.upd()
if keep > 0:
global params_start, params_ema
params_ema = ema(params_ema, params[0].detach(), num+1)
torch.save((1-keep) * params_start + keep * params_ema, 'init.pt')
torch.save(params[0], '%s.pt' % os.path.join(workdir, out_name))
# shutil.copy(img_list(tempdir)[-1], os.path.join(workdir, '%s-%d.jpg' % (out_name, steps)))
# os.system('ffmpeg -v warning -y -i %s\%%05d.jpg -codec nvenc "%s.mp4"' % (tempdir, os.path.join(workdir, out_name)))
# HTML(makevid(tempdir))
for i, txt in enumerate(texts):
process(txt, i)
vsteps = int(duration * fps / len(texts))
tempdir = os.path.join(workdir, '_final')
!rm -rf $tempdir
os.makedirs(tempdir, exist_ok=True)
print(' rendering complete piece')
ptfiles = file_list(workdir, 'pt')
pbar = ProgressBar(vsteps * len(ptfiles))
for px in range(len(ptfiles)):
params1 = read_pt(ptfiles[px])
params2 = read_pt(ptfiles[(px+1) % len(ptfiles)])
params, image_f, _ = fft_image([1, 3, sideY, sideX], resume=params1, sd=1., decay_power=decay)
image_f = to_valid_rgb(image_f, colors = colors)
for i in range(vsteps):
with torch.no_grad():
img = image_f((params2 - params1) * math.sin(1.5708 * i/vsteps)**2)[0].permute(1,2,0)
img = torch.clip(img*255, 0, 255).cpu().numpy().astype(np.uint8)
imageio.imsave(os.path.join(tempdir, '%05d.jpg' % (px * vsteps + i)), img)
_ = pbar.upd()
HTML(makevid(tempdir))
#@markdown Run this, if you need to make the same video of another length (model must be the same)
duration = 12#@param {type:"integer"}
model = 'ViT-B/32' #@param ['ViT-B/32', 'RN101', 'RN50x4', 'RN50']
colors = 1.3 #@param {type:"number"}
fps = 25
text_file = list(uploaded)[0]
workdir = os.path.join(work_dir, basename(text_file))
workdir += '-%s' % model if 'RN' in model.upper() else ''
tempdir = os.path.join(workdir, '_final')
!rm -rf $tempdir
os.makedirs(tempdir, exist_ok=True)
print(' re-rendering final piece')
ptfiles = file_list(workdir, 'pt')
vsteps = int(duration * fps / (len(ptfiles)-1))
ptest = torch.load(ptfiles[0])
if isinstance(ptest, list): ptest = ptest[0]
shape = [*ptest.shape[:3], (ptest.shape[3]-1)*2]
pbar = ProgressBar(vsteps * len(ptfiles))
for px in range(len(ptfiles)):
params1 = read_pt(ptfiles[px])
params2 = read_pt(ptfiles[(px+1) % len(ptfiles)])
params, image_f, _ = fft_image(shape, resume=params1)
image_f = to_valid_rgb(image_f, colors = colors)
for i in range(vsteps):
with torch.no_grad():
img = image_f((params2 - params1) * math.sin(1.5708 * i/vsteps)**2)[0].permute(1,2,0)
img = torch.clip(img*255, 0, 255).cpu().numpy().astype(np.uint8)
imageio.imsave(os.path.join(tempdir, '%05d.jpg' % (px * vsteps + i)), img)
_ = pbar.upd()
HTML(makevid(tempdir))
```
|
github_jupyter
|
# Introduction to XGBoost Spark with GPU
The goal of this notebook is to show how to train an XGBoost model with the Spark RAPIDS XGBoost library on GPUs. The dataset used with this notebook is derived from Fannie Mae’s Single-Family Loan Performance Data with all rights reserved by Fannie Mae. This processed dataset is redistributed with permission and consent from Fannie Mae. This notebook uses XGBoost to train a 12-month mortgage loan delinquency prediction model.
A few libraries required for this notebook:
1. NumPy
2. cudf jar
3. xgboost4j jar
4. xgboost4j-spark jar
5. rapids-4-spark.jar
This notebook also illustrates the ease of porting sample CPU-based Spark xgboost4j code to GPU. There is only one change required for running Spark XGBoost on GPU: replacing the CPU API `setFeaturesCol(feature)` with the new API `setFeaturesCols(features)`. This also eliminates the need for vectorization (assembling multiple feature columns into one column) since we can read multiple columns.
#### Import All Libraries
```
from ml.dmlc.xgboost4j.scala.spark import XGBoostClassificationModel, XGBoostClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType, IntegerType, StructField, StructType
from time import time
```
In addition, the CPU version requires two extra libraries.
```Python
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import col
```
#### Create Spark Session and Data Reader
```
spark = SparkSession.builder.getOrCreate()
reader = spark.read
```
#### Specify the Data Schema and Load the Data
```
label = 'delinquency_12'
schema = StructType([
StructField('orig_channel', FloatType()),
StructField('first_home_buyer', FloatType()),
StructField('loan_purpose', FloatType()),
StructField('property_type', FloatType()),
StructField('occupancy_status', FloatType()),
StructField('property_state', FloatType()),
StructField('product_type', FloatType()),
StructField('relocation_mortgage_indicator', FloatType()),
StructField('seller_name', FloatType()),
StructField('mod_flag', FloatType()),
StructField('orig_interest_rate', FloatType()),
StructField('orig_upb', IntegerType()),
StructField('orig_loan_term', IntegerType()),
StructField('orig_ltv', FloatType()),
StructField('orig_cltv', FloatType()),
StructField('num_borrowers', FloatType()),
StructField('dti', FloatType()),
StructField('borrower_credit_score', FloatType()),
StructField('num_units', IntegerType()),
StructField('zip', IntegerType()),
StructField('mortgage_insurance_percent', FloatType()),
StructField('current_loan_delinquency_status', IntegerType()),
StructField('current_actual_upb', FloatType()),
StructField('interest_rate', FloatType()),
StructField('loan_age', FloatType()),
StructField('msa', FloatType()),
StructField('non_interest_bearing_upb', FloatType()),
StructField(label, IntegerType()),
])
features = [ x.name for x in schema if x.name != label ]
train_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/train')
trans_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/test')
```
Note that for the CPU version, vectorization is required before fitting data to the classifier, which means you need to assemble all feature columns into one column.
```Python
def vectorize(data_frame):
to_floats = [ col(x.name).cast(FloatType()) for x in data_frame.schema ]
return (VectorAssembler()
.setInputCols(features)
.setOutputCol('features')
.transform(data_frame.select(to_floats))
.select(col('features'), col(label)))
train_data = vectorize(train_data)
trans_data = vectorize(trans_data)
```
#### Create a XGBoostClassifier
```
params = {
'eta': 0.1,
'gamma': 0.1,
'missing': 0.0,
'treeMethod': 'gpu_hist',
'maxDepth': 10,
'maxLeaves': 256,
'objective':'binary:logistic',
'growPolicy': 'depthwise',
'minChildWeight': 30.0,
'lambda_': 1.0,
'scalePosWeight': 2.0,
'subsample': 1.0,
'nthread': 1,
'numRound': 100,
'numWorkers': 1,
}
classifier = XGBoostClassifier(**params).setLabelCol(label).setFeaturesCols(features)
```
The CPU version classifier provides the API `setFeaturesCol` which only accepts a single column name, so vectorization for multiple feature columns is required.
```Python
classifier = XGBoostClassifier(**params).setLabelCol(label).setFeaturesCol('features')
```
For the GPU version, the parameter `num_workers` should be set to the number of GPUs in the Spark cluster, while for the CPU version it is usually equal to the number of CPU cores.
Concerning the tree method, the GPU version currently only supports `gpu_hist`, while `hist` is designed for and used here with CPU training.
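To make that concrete, here is a hedged sketch of the two parameter differences between the GPU and CPU runs (the worker count of 8 is an illustrative CPU core count, not a recommendation):

```python
# Hedged sketch: only the tree method and worker count differ between the
# GPU and CPU configurations (8 is an illustrative CPU core count).
gpu_params = {'treeMethod': 'gpu_hist', 'numWorkers': 1}  # one worker per GPU
cpu_params = {**gpu_params, 'treeMethod': 'hist', 'numWorkers': 8}
print(cpu_params)
```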
#### Train the Data with Benchmark
```
def with_benchmark(phrase, action):
start = time()
result = action()
end = time()
print('{} takes {} seconds'.format(phrase, round(end - start, 2)))
return result
model = with_benchmark('Training', lambda: classifier.fit(train_data))
```
#### Save and Reload the Model
```
model.write().overwrite().save('/data/new-model-path')
loaded_model = XGBoostClassificationModel().load('/data/new-model-path')
```
#### Transformation and Show Result Sample
```
def transform():
result = loaded_model.transform(trans_data).cache()
result.foreachPartition(lambda _: None)
return result
result = with_benchmark('Transformation', transform)
result.select(label, 'rawPrediction', 'probability', 'prediction').show(5)
```
#### Evaluation
```
accuracy = with_benchmark(
'Evaluation',
lambda: MulticlassClassificationEvaluator().setLabelCol(label).evaluate(result))
print('Accuracy is ' + str(accuracy))
spark.stop()
```
|
github_jupyter
|
# Constructing the Set of Equations For the Blocks:
### Newton's second law is applied and each block is considered separately to form the four equations of motion.
### Newton's Second Law:
$\Sigma F=ma$
#### Note:
In this circumstance, as each mass (except for 4) is on the same angled slope, I have made the decision to omit the explicit steps in each equation where I mention that:
$F_N = m_n gcos \theta$
*Where $F_N$ is the Normal force, $m_n$ is the mass of the block and $g$ is acceleration due to gravity.*
and the Friction on each block is:
$F_r=\mu F_N = \mu_n m_n g cos \theta$
*Where $\mu_n$ is the coefficient of friction for each block.*
I have just expressly written them into the equations.
### Block 1
$\Sigma F=m_1 g sin\theta - \mu_1 m_1 g cos\theta - T_1 = m_1 a
\implies m_1 g (sin\theta - \mu_1 cos\theta)=m_1 a + T_1 $
### Block 2
$\Sigma F=m_2 g sin\theta - \mu_2 m_2 g cos\theta + T_1 - T_2 = m_2 a
\implies m_2 g (sin\theta - \mu_2 cos\theta)=m_2 a + T_2 - T_1$
### Block 3
$\Sigma F=m_3 g sin\theta - \mu_3 m_3 g cos\theta + T_2 - T_3 = m_3 a
\implies m_3 g (sin\theta - \mu_3 cos\theta)=m_3 a + T_3 - T_2$
### Block 4
$\Sigma F=T_3 - m_4 g = m_4 a \implies -T_3 + m_4 a=-m_4 g$
### Setting up the problem; the matrix "A" and vector "b" .
The matrix "A" is constructed from the coefficients of the tensions, $T_n$ and the acceleration, $a$.
The vector "b" is constructed from the masses (multiplied by gravity), $sin\theta$ and $\mu_n cos\theta$.
As theta, $\theta$; the masses, *m*; and the coefficients of friction, $\mu$ are given, each element of "b" is a number.
$$\begin{bmatrix}
1 & 0 & 0 & m_1\\
-1 & 1 & 0 & m_2\\
0 & -1 & 1 & m_3\\
0 & 0 & -1 & m_4\\
\end{bmatrix}
\begin{bmatrix}
T_1\\
T_2\\
T_3\\
a\\
\end{bmatrix}
=
\begin{bmatrix}
m_1 g (sin\theta - \mu_1 cos\theta)\\
m_2 g (sin\theta - \mu_2 cos\theta)\\
m_3 g (sin\theta - \mu_3 cos\theta)\\
-m_4 g\\
\end{bmatrix}$$
#### Note:
Above I have placed "A" and "b" in the standard Ax=b layout, hence I have included another vector containing $T_1$ through to $a$.
```
import numpy as np
import gaussElimin as gE
#Problem Sheet 1 Question 2b
#Defining the values and constructing arrays for mass and coefficients of friction.
g=9.82
S=np.sin(np.pi/4)
C=np.cos(np.pi/4)
M=np.array([10,4,5,6])
mu=np.array([0.25,0.3,0.2])
#constructing the array of coefficients and the vector b
a=np.c_[(np.array([[1,0,0],[-1,1,0],[0,-1,1],[0,0,-1]])),M]#I am concatenating the array "M" to an array of the coefficients of the tensions. The values in "M" are the coefficients of the unknown acceleration, a value.
b=g*M*np.array([(S-C*mu[0]),(S-C*mu[1]), (S-C*mu[2]), -1])
gE.gaussElimin(a,b)
print ('T1=',b[0], ' T2=', b[1], ' T3=', b[2],' a=',b[3])
#Problem Sheet 1 Question 2c
#We want a to equal -9.82 (i.e. -g); then m4 will be in freefall.
#All the tensions found in part 2b can be used.
#Let a be set to -g and use Equations 3 and 4 to solve for sin(Theta) and cos(Theta).
#Rearrange the equations so sin and cos are in a matrix and use Gauss elim.
#Create another a and b; a is just the right hand side and b the left. Use -g as acceleration.
a1=g*np.array([[(M[1]),(M[1]*mu[1])], [(M[2]),(M[2]*mu[2])]]).astype(float)
# Using the masses & coefficients of friction for the 2nd & 3rd equations from the "M" & "mu" arrays defined in part 2b.
b=np.array([(-b[0]+b[1]+M[1]*-g),(-b[1]+b[2]+M[2]*-g)]).astype(float)
#Calling back to the values of T for Eqn 2 & Eqn 3: T2=b[1] & T3=b[2]
gE.gaussElimin(a1,b)
print ('sin(Theta)=',b[0], ' cos(Theta)=',b[1])
if abs(b[0])>1 or abs(b[1])>1: #1st logical argument. If either sin(theta) or cos(theta) is above 1 or below -1, there is clearly no solution.
    print ('Neither sin(Theta) nor cos(Theta) can take values above 1 or below -1. Hence there is no solution where Mass 4 is in freefall.')
if abs(b[0])<=1 and abs(b[1])<=1: #2nd logical argument. sin(theta) & cos(theta) are within their limits and depend on the same theta, so inverting both should yield the same result.
    if np.isclose(np.arcsin(b[0]), np.arccos(b[1])):
        print ('When Mass 4 is in freefall; Theta=',np.arcsin(b[0])*180/np.pi, ' degrees.')
    else:
        print('There is no solution where Mass 4 is in freefall.')
```
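As a sanity check (an addition, not part of the original problem sheet), the same linear system can be handed to NumPy's built-in solver; `np.linalg.solve` should reproduce the Gauss-elimination result for part 2b:

```python
import numpy as np

# Same givens as the problem sheet: g, theta = pi/4, masses, coefficients of friction
g = 9.82
S = np.sin(np.pi / 4)
C = np.cos(np.pi / 4)
M = np.array([10, 4, 5, 6])
mu = np.array([0.25, 0.3, 0.2])

# Coefficient matrix A (columns: T1, T2, T3, a) and right-hand side b
A = np.c_[np.array([[1, 0, 0], [-1, 1, 0], [0, -1, 1], [0, 0, -1]]), M].astype(float)
b = g * M * np.array([S - C * mu[0], S - C * mu[1], S - C * mu[2], -1.0])

x = np.linalg.solve(A, b)  # x = [T1, T2, T3, a]
print("T1, T2, T3, a =", x)

# The solution must satisfy the original system
assert np.allclose(A @ x, b)
```

Since the coefficient matrix here is small and well conditioned, both solvers should agree to machine precision.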
---
# Building Permit Data
## Documentation
[United States Census Bureau Building Permits Survey](https://www.census.gov/construction/bps/)
[ASCII files by State, Metropolitan Statistical Area (MSA), County or Place](https://www2.census.gov/econ/bps/)
[MSA Folder](https://www2.census.gov/econ/bps/Metro/)
[ASCII MSA Documentation](https://www2.census.gov/econ/bps/Documentation/msaasc.pdf)
```
import numpy as np
import pandas as pd
import re
import os.path
from os import path
from datetime import datetime
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from sklearn.preprocessing import MinMaxScaler, StandardScaler, PowerTransformer
from sklearn.cluster import KMeans
import wrangle as wr
import preprocessing as pr
import explore as ex
import model as mo
import warnings
warnings.filterwarnings("ignore")
pd.set_option("display.max_columns", None)
def rename_columns(df):
"""
Docstring
"""
# rename columns inplace
df.rename(
columns={
"Date": "survey_date",
"Code": "csa_code",
"Code.1": "cbsa_code",
"Unnamed: 3": "moncov",
"Name": "cbsa_name",
"Bldgs": "one_unit_bldgs_est",
"Units": "one_unit_units_est",
"Value": "one_unit_value_est",
"Bldgs.1": "two_units_bldgs_est",
"Units.1": "two_units_units_est",
"Value.1": "two_units_value_est",
"Bldgs.2": "three_to_four_units_bldgs_est",
"Units.2": "three_to_four_units_units_est",
"Value.2": "three_to_four_units_value_est",
"Bldgs.3": "five_or_more_units_bldgs_est",
"Units.3": "five_or_more_units_units_est",
"Value.3": "five_or_more_units_value_est",
"Bldgs.4": "one_unit_bldgs_rep",
"Units.4": "one_unit_units_rep",
"Value.4": "one_unit_value_rep",
"Bldgs.5": "two_units_bldgs_rep",
"Units.5": "two_units_units_rep",
"Value.5": "two_units_value_rep",
" Bldgs": "three_to_four_units_bldgs_rep",
"Units.6": "three_to_four_units_units_rep",
"Value.6": "three_to_four_units_value_rep",
"Bldgs.6": "five_or_more_units_bldgs_rep",
"Units.7": "five_or_more_units_units_rep",
"Value.7": "five_or_more_units_value_rep",
},
inplace=True,
)
return df
# def sort_values_and_reset_index(df):
# """
# Docstring
# """
# # sort values by survey_date
# df.sort_values(by=["survey_date"], ascending=False, inplace=True)
# # reset index inplace
# df.reset_index(inplace=True)
# # drop former index inplace
# df.drop(columns=["index"], inplace=True)
# return df
def acquire_building_permits():
"""
Docstring
"""
# conditional
if path.exists("building_permits.csv"):
# read csv
df = pd.read_csv("building_permits.csv", index_col=0)
else:
# create original df with 2019 data
df = pd.read_csv("https://www2.census.gov/econ/bps/Metro/ma2019a.txt", sep=",", header=1)
# rename columns
rename_columns(df)
for i in range(1980, 2019):
# read the txt file at url where i is the year in range
year_df = pd.read_csv(
f"https://www2.census.gov/econ/bps/Metro/ma{i}a.txt",
sep=",",
header=1,
names=df.columns.tolist(),
)
            # append this year's data onto the accumulating df (DataFrame.append was removed in pandas 2.0)
            df = pd.concat([df, year_df], ignore_index=True)
# make moncov into bool so that the null observations of this feature are not considered in the dropna below
df["moncov"] = np.where(df.moncov == "C", 1, 0)
# dropna inplace
df.dropna(inplace=True)
    # chop off the two trailing digits after the year for survey_date
    df["survey_date"] = df.survey_date.astype(str).apply(lambda x: re.sub(r"\d\d$", "", x))
    # add a preceding "19" to any years where the length of the observation is 2 (i.e., "80"-"97")
df["survey_date"] = df.survey_date.apply(lambda x: "19" + x if len(x) == 2 else x)
# turn survey_date back into an int
df["survey_date"] = df.survey_date.astype(int)
# turn moncov back into a bool
df["moncov"] = df.moncov.astype(bool)
# sort values by survey_date
df.sort_values(by=["survey_date"], ascending=False, inplace=True)
# reset index inplace
df.reset_index(inplace=True)
# drop former index inplace
df.drop(columns=["index"], inplace=True)
# write df to disk as csv
df.to_csv("building_permits.csv")
return df
df = pd.read_csv("ma2019a.txt", sep=",", header=1)
df
df.rename(columns={
"Date": "survey_date",
"Code": "csa_code",
"Code.1": "cbsa_code",
"Unnamed: 3": "moncov",
"Name": "cbsa_name",
"Bldgs": "one_unit_bldgs_est",
"Units": "one_unit_units_est",
"Value": "one_unit_value_est",
"Bldgs.1": "two_units_bldgs_est",
"Units.1": "two_units_units_est",
"Value.1": "two_units_value_est",
"Bldgs.2": "three_to_four_units_bldgs_est",
"Units.2": "three_to_four_units_units_est",
"Value.2": "three_to_four_units_value_est",
"Bldgs.3": "five_or_more_units_bldgs_est",
"Units.3": "five_or_more_units_units_est",
"Value.3": "five_or_more_units_value_est",
"Bldgs.4": "one_unit_bldgs_rep",
"Units.4": "one_unit_units_rep",
"Value.4": "one_unit_value_rep",
"Bldgs.5": "two_units_bldgs_rep",
"Units.5": "two_units_units_rep",
"Value.5": "two_units_value_rep",
" Bldgs": "three_to_four_units_bldgs_rep",
"Units.6": "three_to_four_units_units_rep",
"Value.6": "three_to_four_units_value_rep",
"Bldgs.6": "five_or_more_units_bldgs_rep",
"Units.7": "five_or_more_units_units_rep",
"Value.7": "five_or_more_units_value_rep",
}, inplace=True)
df
df.info()
len(df.cbsa_name.unique())
df.columns.tolist()
# df_2018 = pd.read_csv("ma2018a.txt", sep=",", header=1, names=df.columns.tolist())
# df_2018
# df = df.append(df_2018, ignore_index=True)
# df.shape
# df_2017 = pd.read_csv("https://www2.census.gov/econ/bps/Metro/ma2017a.txt", sep=",", header=1, names=df.columns.tolist())
# df_2017
# df = df.append(df_2017, ignore_index=True)
# df.shape
for i in range(2017, 2019):
temp_df = pd.read_csv(
f"https://www2.census.gov/econ/bps/Metro/ma{i}a.txt",
sep=",",
header=1,
names=df.columns.tolist(),
)
    df = pd.concat([df, temp_df], ignore_index=True)
df
for i in range(2010, 2017):
# read the txt file at url where i is the year in range
temp_df = pd.read_csv(
f"https://www2.census.gov/econ/bps/Metro/ma{i}a.txt",
sep=",",
header=1,
names=df.columns.tolist(),
)
# append data to global df variable
    df = pd.concat([df, temp_df], ignore_index=True)
df
for i in range(2006, 2010):
# read the txt file at url where i is the year in range
temp_df = pd.read_csv(
f"https://www2.census.gov/econ/bps/Metro/ma{i}a.txt",
sep=",",
header=1,
names=df.columns.tolist(),
)
# append data to global df variable
    df = pd.concat([df, temp_df], ignore_index=True)
df
for i in range(1980, 2006):
# read the txt file at url where i is the year in range
temp_df = pd.read_csv(
f"https://www2.census.gov/econ/bps/Metro/ma{i}a.txt",
sep=",",
header=1,
names=df.columns.tolist(),
)
# append data to global df variable
    df = pd.concat([df, temp_df], ignore_index=True)
df
df.isna().sum()
df[df.csa_code.isna()]
df.moncov.unique().tolist()
df["moncov"] = np.where(df.moncov == "C", 1, 0)
df.isna().sum()
df.dropna(inplace=True)
df.isna().sum()
print(df.shape)
df.head(20)
# df.to_csv("building_permits.csv")
# df = pd.read_csv("building_permits.csv", index_col=0)
# print(f"""Our DataFrame contains {df.shape[0]:,} observations and {df.shape[1]} features.""")
# df.head()
# df["survey_date"] = df.survey_date.astype(str).apply(lambda x: re.sub(r"\d\d$", "", x))
# df
# df.survey_date.unique()
# len(df.survey_date)
# df["survey_date"] = df.survey_date.astype(str).apply(lambda x: "19" + x if len(x) == 2 else x)
# df
# df.survey_date.unique().tolist()
# df.info()
# df = pd.read_csv("building_permits.csv", index_col=0)
# df
# df["survey_date"] = df.survey_date.astype(str).apply(lambda x: re.sub(r"\d\d$", "", x))
# df.survey_date.unique().tolist()
# df.info()
# df["survey_date"] = df.survey_date.apply(lambda x: "19" + x if len(x) == 2 else x)
# df.survey_date.unique().tolist()
# df["survey_date"] = df.survey_date.astype(int)
# df.info()
# df = acquire_building_permits()
# df
# df.reset_index(inplace=True)
# df.drop(columns=["index"], inplace=True)
# df
df = acquire_building_permits()
print(f"""Our DataFrame contains {df.shape[0]:,} observations and {df.shape[1]} features.""")
df
# df.sort_values(by=["survey_date"], ascending=False, inplace=True)
# df
# df.info()
# df.moncov.unique().tolist()
# df["moncov"] = df.moncov.astype(bool)
df.info()
# df
# df = sort_values_and_reset_index(df)
# df
```
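The two-digit-year cleanup inside `acquire_building_permits` can be exercised in isolation. This small sketch uses a hypothetical helper name, `normalize_survey_date` (not part of the module), and made-up raw values of the form year plus two trailing digits:

```python
import re

def normalize_survey_date(raw):
    """Mimic the survey_date cleanup: strip the trailing two digits,
    then restore the century for two-digit years ("80"-"97")."""
    year = re.sub(r"\d\d$", "", str(raw))  # "8001" -> "80", "201901" -> "2019"
    if len(year) == 2:                     # two-digit years are all 19xx in this dataset
        year = "19" + year
    return int(year)

print(normalize_survey_date("8001"))    # 1980
print(normalize_survey_date("201901"))  # 2019
```

This is why the function temporarily casts `survey_date` to `str` before converting it back to `int` at the end.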
---
# Exp 41 analysis
See `./informercial/Makefile` for experimental
details.
```
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
# ls ../data/exp2*
```
# Load and process data
```
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp41"
best_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_best.pkl"))
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
best_params
```
# Performance
of best parameters
```
env_name = 'BanditHardAndSparse2-v0'
num_episodes = 20*10
# Run w/ best params
result = meta_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr=best_params["lr"],
tie_threshold=best_params["tie_threshold"],
seed_value=19,
save="exp41_best_model.pkl"
)
# Plot run
episodes = result["episodes"]
actions =result["actions"]
scores_R = result["scores_R"]
values_R = result["values_R"]
scores_E = result["scores_E"]
values_E = result["values_E"]
# Get some data from the gym...
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Init plot
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(5, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.scatter(episodes, scores_E, color="purple", alpha=0.9, s=2, label="E")
plt.ylabel("log score")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="R")
plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=2, label="E")
plt.ylabel("log Q(s,a)")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# -
plt.savefig("figures/epsilon_bandit.pdf", bbox_inches='tight')
plt.savefig("figures/epsilon_bandit.eps", bbox_inches='tight')
```
# Sensitivity
to parameter choices
```
total_Rs = []
ties = []
lrs = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_R'])
ties.append(sorted_params[t]['tie_threshold'])
lrs.append(sorted_params[t]['lr'])
# Init plot
fig = plt.figure(figsize=(10, 18))
grid = plt.GridSpec(4, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total R")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, ties, color="black", alpha=.3, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("Tie threshold")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(trials, lrs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr")
_ = sns.despine()
```
# Distributions
of parameters
```
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(2, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs, color="black")
plt.xlabel("lr")
plt.ylabel("Count")
_ = sns.despine()
```
of total reward
```
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
plt.xlim(0, 10)
_ = sns.despine()
```
---
```
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
```
# Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits.
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
```
# import libraries
import torch
import numpy as np
```
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.
This cell will create DataLoaders for each of our datasets.
```
from torchvision import datasets
import torchvision.transforms as transforms
num_workers = 0
batch_size = 20
transform = transforms.ToTensor()
train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
# from torchvision import datasets
# import torchvision.transforms as transforms
# # number of subprocesses to use for data loading
# num_workers = 0
# # how many samples per batch to load
# batch_size = 20
# # convert data to torch.FloatTensor
# transform = transforms.ToTensor()
# # choose the training and test datasets
# train_data = datasets.MNIST(root='data', train=True,
# download=True, transform=transform)
# test_data = datasets.MNIST(root='data', train=False,
# download=True, transform=transform)
# # prepare data loaders
# train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
# num_workers=num_workers)
# test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
# num_workers=num_workers)
```
### Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
```
### View an Image in More Detail
```
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
## TODO: Define the NN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
        # linear layers: 784 -> 512 -> 512 -> 10
h1_node = 512
h2_node = 512
self.fc1 = nn.Linear(28 * 28, h1_node)
self.fc2 = nn.Linear(h1_node, h2_node)
self.fc3 = nn.Linear(h2_node, 10)
self.dropout = nn.Dropout(0.2)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = self.fc3(x)
return x
# initialize the NN
model = Net()
print(model)
model.to('cuda');
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross-entropy function applies a softmax function to the output layer *and* then calculates the log loss.
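That statement can be verified numerically without PyTorch; here is a small NumPy sketch (with made-up scores) showing that cross-entropy over raw logits equals the negative log of the softmax probability of the true class:

```python
import numpy as np

# Raw class scores (logits) for a single example, and the true class index.
# These numbers are made up purely for illustration.
logits = np.array([2.0, 0.5, -1.0])
target = 0

# Softmax turns the scores into probabilities...
probs = np.exp(logits) / np.exp(logits).sum()
# ...and the log loss is the negative log-probability of the true class.
loss_two_step = -np.log(probs[target])

# Cross-entropy computed directly from the logits (the log-sum-exp form)
loss_direct = np.log(np.exp(logits).sum()) - logits[target]

print(loss_two_step, loss_direct)
assert np.isclose(loss_two_step, loss_direct)
```

This is also why the network's `forward` above returns raw scores with no final softmax: `nn.CrossEntropyLoss` expects logits.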
```
## TODO: Specify loss and optimization functions
from torch import nn
from torch import optim
# specify loss function
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
---
## Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss
The following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
```
# number of epochs to train the model
n_epochs = 30 # suggest training between 20-50 epochs
model.to('cuda')
model.train() # prep model for training
for epoch in range(n_epochs):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
data, target = data.to('cuda'), target.to('cuda')
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*data.size(0)
# print training statistics
# calculate average loss over an epoch
train_loss = train_loss/len(train_loader.sampler)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch+1,
train_loss
))
```
---
## Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and look at how the model performs on each class, as well as at its overall loss and accuracy.
#### `model.eval()`
`model.eval()` will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation!
```
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # prep model for *evaluation*
for data, target in test_loader:
data, target = data.to('cuda'), target.to('cuda')
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(len(target)):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.sampler)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_cuda = images.to('cuda')
labels = labels.to('cuda')
# get sample outputs
output = model(images_cuda)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
color=("green" if preds[idx]==labels[idx] else "red"))
```
---
# Hello world...
We especially thank PhD Lorena A. Barba [@LorenaABarba](https://twitter.com/LorenaABarba), who developed the material [CFDPython](https://github.com/barbagroup/CFDPython) on which the following pedagogical project is built.
Hello! Welcome to **An introduction to the Navier-Stokes equation in one and two dimensions, seeking numerical solutions via Python**. This is hands-on material used at the start of an interactive course on computational fluid dynamics (CFD), based on the course **12 steps to Navier–Stokes**, taught by [Prof. Lorena Barba](http://lorenabarba.com) at Boston University since the spring of 2009.
The course assumes only basic programming knowledge and, of course, some background in partial differential equations and fluid mechanics, and is taught entirely in Python.
# Linear convection equation
To study some of the ideas behind discretizing the Navier-Stokes equations in one dimension (1-D) and two dimensions (2-D), it is useful to begin with the [linear convection](http://www.unet.edu.ve/~fenomeno/F_DE_T-165.htm) equation
$$\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0,$$
with initial condition $u(x,0)=u_0(x)$, known as the initial wave. The equation describes the propagation of the initial wave with speed $c$; its exact solution is $u(x,t)=u_0(x-ct)$.
To implement the numerical method, we discretize this equation in both time and space, using a forward-difference scheme for the time derivative and a backward-difference scheme for the space derivative.
The spatial coordinate $x$ is discretized into $N$ uniform steps of size $\Delta x$, from $i=0$ to $i=N$. The time variable $t$ is discretized into intervals of size $\Delta t$.
From the definition of the derivative, dropping the limit, we can approximate
$$\frac{\partial u}{\partial x}\approx \frac{u(x+\Delta x)-u(x)}{\Delta x}.$$
Thus, with our discretization, it follows that
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + c \frac{u_i^n - u_{i-1}^n}{\Delta x} = 0, $$
where $n$ and $n+1$ are two consecutive time steps, while $i-1$ and $i$ are two neighboring points of the discretized $x$ coordinate. Given the initial conditions, the only unknown is $u_i^{n+1}$. Solving for $u_i^{n+1}$ yields an update that lets us march forward in time:
$$u_i^{n+1} = u_i^n - c \frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n).$$
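As a minimal, self-contained sketch of this update rule (the values below are illustrative and independent of the variables defined later), here is one explicit time step on a tiny hat-shaped array, with the loop form and the equivalent NumPy-slice form checked against each other:

```python
import numpy as np

c, dt, dx = 1.0, 0.025, 0.05               # wave speed and step sizes (illustrative)
un = np.array([1.0, 1.0, 2.0, 2.0, 1.0])   # u at time step n (a tiny hat function)

# Loop form of  u_i^{n+1} = u_i^n - c*(dt/dx)*(u_i^n - u_{i-1}^n)
u_loop = un.copy()
for i in range(1, len(un)):
    u_loop[i] = un[i] - c * (dt / dx) * (un[i] - un[i - 1])

# Equivalent vectorized form using array slices
u_vec = un.copy()
u_vec[1:] = un[1:] - c * (dt / dx) * (un[1:] - un[:-1])

print(u_loop)  # [1.  1.  1.5 2.  1.5]
assert np.allclose(u_loop, u_vec)
```

Note how the step smears the leading and trailing edges of the hat; this is the numerical diffusion we will observe in the plots below.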
```
import numpy as np
import matplotlib.pyplot as plt
import time, sys
%matplotlib inline
#this makes matplotlib plots appear inside the notebook (instead of in a separate window)
```
## Now let's define some variables:
We want to define a grid of evenly spaced points within a spatial domain that is $2$ length units wide, i.e., $x_i\in(0,2)$. We will define a variable `nx`, the number of grid points we want, and `dx`, the distance between any pair of adjacent grid points.
```
nx = 41
dx = 2 / (nx-1)
nt = 20 #nt is the number of time steps we want to compute
dt = .025 #dt is the amount of time each time step covers (delta t)
c = 1 # assume a wave speed of c = 1
```
We also need to set up our initial conditions.
For this, we have two functions, defined on the interval $(0,2)$.
# Example 1: f1
The initial velocity $u_0$ is given as
$u = 2$ on the interval $0.5 \leq x \leq 1$ and $u = 1$ everywhere else in $(0,2)$ (i.e., a hat function).
# Example 2: f2
Fixing the time, and taking fixed values for the constants, we can create a wide variety of initial velocity functions.
$$u_n(x,t)=C_n\sin\left(\frac{n\pi c}{L}t+\phi_n\right) \sin\left(\frac{n\pi }{L}x\right)$$
```
def f1(nx):
    #dx = 2 / (nx-1)
    u = np.ones(nx)  #numpy ones() function
    u[int(.5 / dx):int(1 / dx + 1)] = 2  #set u = 2 between 0.5 and 1, and u = 1 everywhere else, per our IC
    #print(u)
    return u
def f1_var(x):
    u=np.piecewise(x,[x<0.5,np.abs(x-0.75)<=0.25,x>1],[lambda x:1,lambda x: 2,lambda x:1])
    return u
def f2(nx):
    #dx = 2 / (nx-1)
    L=2
    n=4
    Fi=0.2 #temporal phase angle
    c=10 #wave speed
    A=0.5 #maximum amplitude, related to the intensity of the sound
    t=0.18 #snapshot at time t seconds
    x=np.arange(0.0,L+dx,dx)
    u=A*np.sin(n*np.pi*c*0.005*t/L+Fi)*np.sin(n*np.pi*x/L) #here we define the function
    return u
def mostrar_imagen(u):  # "mostrar_imagen" = show image
    plt.plot(np.linspace(0, 2, nx), u);
```
## Now it's time to implement the discretization of the convection equation using a finite-difference scheme.
For each element of our array `u`, we need to perform the operation $$u_i^{n+1} = u_i^n - c \frac{\Delta t}{\Delta x}(u_i^n-u_{i-1}^n)$$
We will store the result in a new (temporary) array `un`, which will be the solution $u$ for the next time step. We will repeat this operation for as many time steps as we specify, and then we can see how far the wave has convected.
First we initialize our placeholder array `un`, holding the values we compute for time step $n+1$, once again using the NumPy `ones()` function.
So, we can think of this as two iterative operations: one in space and one in time (we will learn to do this differently later), so we will start by nesting one loop inside the other. Note the use of the handy `range()` function. When we write `for i in range(1, nx)` we iterate through the array `u`, but skip the first (zeroth) element. *Why?*
```
def ECL(u):
    for n in range(nt):  # loop over values of n from 0 to nt, so this runs nt times
        un = u.copy()    ## copy the existing values of u into un
        for i in range(1, nx):
        #for i in range(nx):
            u[i] = un[i] - c * (dt / dx) * (un[i] - un[i-1])
    #plt.plot(np.linspace(0, 2, nx), u);
    return u
def ECLspeed(u):
    for n in range(nt):  ## loop over the number of time steps
        un = u.copy()
        u[1:] = un[1:] - c * (dt / dx) * (un[1:] - un[:-1])
    return u
```
Let's use the method for the linear convection equation (ECL) with the function from Example 1, `f1`.
```
u=f1(nx)
mostrar_imagen(u)
```
Its final result, after $t = 0.5$ seconds, is:
```
u=ECL(f2(nx))
mostrar_imagen(u)
x=np.arange(0.0,2.0+dx,dx)
u=ECL(f1_var(x))
mostrar_imagen(u)
u=ECL(f2(nx))
mostrar_imagen(u)
```
# Nonlinear convection equation
Consider the nonlinear convection equation (ECnL) given by
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0$$
We will use the same discretization as in the linear case: forward difference in time and backward difference in space. Here is the discretized equation.
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n-u_{i-1}^n}{\Delta x} = 0.$$
Solving for $u_i^{n+1}$, we obtain:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n)$$
A more detailed treatment of the finite-difference method would require covering topics such as truncation error, order of convergence, and other details that, for reasons of time, we will not address in this project.
Even so, readers who want to dig deeper should watch Professor Barba's video lessons 2 and 3 on YouTube.
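Although we do not develop the theory here, the first-order accuracy of the backward difference is easy to observe numerically: halving $\Delta x$ roughly halves the error. A quick sketch using $u=\sin x$, whose exact derivative is known:

```python
import numpy as np

x0 = 1.0               # point where we approximate the derivative of sin(x)
exact = np.cos(x0)     # exact derivative

errors = []
for dx in [0.1, 0.05, 0.025]:
    approx = (np.sin(x0) - np.sin(x0 - dx)) / dx   # backward difference
    errors.append(abs(approx - exact))

# Each halving of dx roughly halves the error: first-order convergence
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)  # each ratio is close to 2
```

The leading truncation-error term of the backward difference is $(\Delta x/2)\,u''$, which is exactly what this ratio of roughly 2 reflects.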
```
def ECnL(u):
    for n in range(nt):  # loop over values of n from 0 to nt, so this runs nt times
        un = u.copy()    ## copy the existing values of u into un
        for i in range(1, nx):  # start at 1: at i = 0, un[i-1] would wrap around to un[-1]
        #for i in range(nx):
            u[i] = un[i] - un[i] * (dt / dx) * (un[i] - un[i-1])
        #print(u)
    #plt.plot(np.linspace(0, 2, nx), u);
    return u
```
## Let's play a little...
... with the values of `nx` in the **linear equation**
```
nx = 41  # try changing this number from 41 to 81 and Run All... what happens?
dx = 2 / (nx-1)
nt = 20    # nt is the number of time steps we want to compute
dt = .025  # dt is the amount of time each time step covers (delta t)
c = 1      # assume a wave speed of c = 1
x = np.arange(0.0, 2.0+dx, dx)
plt.figure(figsize=(18,12))
# note: an overall figure title could be added here with plt.suptitle
plt.subplot(2, 2, 1)
plt.plot(x, f1(nx), label='Initial velocity for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 2)
plt.plot(x, ECL(f1(nx)), label='Velocity at t=0.5 for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 3)
plt.plot(x, f2(nx), label='Initial velocity for f2')
plt.legend(frameon=False)
plt.subplot(2, 2, 4)
plt.plot(x, ECL(f2(nx)), label='Velocity at t=0.5 for f2')
plt.legend(frameon=False)
plt.show()
```
## And now,
the **nonlinear equation**...
# What happens?
```
nx = 41  # try changing this number from 41 to 81 and Run All... what happens?
dx = 2 / (nx-1)
nt = 20    # nt is the number of time steps we want to compute
dt = .025  # dt is the amount of time each time step covers (delta t)
c = 1      # assume a wave speed of c = 1
x = np.arange(0.0, 2.0+dx, dx)
plt.figure(figsize=(18,12))
plt.subplot(2, 2, 1)
plt.plot(x, f1(nx), label='Initial velocity for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 2)
plt.plot(x, ECnL(f1(nx)), label='Velocity at t=0.5 for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 3)
plt.plot(x, f2(nx), label='Initial velocity for f2')
plt.legend(frameon=False)
plt.subplot(2, 2, 4)
plt.plot(x, ECnL(f2(nx)), label='Velocity at t=0.5 for f2')
plt.legend(frameon=False)
plt.show()
```
To answer that question, we have to think a little about what the code is actually implementing.
In each iteration of the time loop, we use the existing data about our wave to estimate its velocity at the next time step. Initially, increasing the number of grid points gave more accurate answers: there was less numerical diffusion, and the square wave looked much more like a square wave than in our first example.
Each iteration of the time loop covers a time interval of length $\Delta t$, which we have been setting to 0.025.
During this iteration, we evaluate the wave speed at each of the $x$ points we created. In the last plot, something clearly went wrong.
What happened is that, over the time interval $\Delta t$, the wave traveled a distance greater than `dx`. The length `dx` of each grid cell is tied to the total number of points `nx`, so stability can be enforced if the step size $\Delta t$ is computed with respect to the size of `dx`:
$$\sigma = \frac{u \Delta t}{\Delta x} \leq \sigma_{\max}$$
where $u$ is the wave speed; $\sigma$ is called the [**Courant number**](https://n9.cl/86phk) (CFL), and the value of $\sigma_{\max}$ that guarantees stability depends on the discretization used.
In a new version of our code, we will use the CFL number to compute an appropriate time step `dt` based on the size of `dx`.
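As a minimal, self-contained sketch of this idea (the names `c`, `dx`, and `sigma` mirror the parameters used above), we can compute a stable `dt` and check the Courant number before time-stepping:

```python
def courant_number(c, dt, dx):
    # sigma = c * dt / dx, the CFL number for this discretization
    return c * dt / dx

def stable_dt(c, dx, sigma_max=0.5):
    # Largest time step that keeps the Courant number at sigma_max
    return sigma_max * dx / c

c = 1.0
nx = 1001
dx = 2 / (nx - 1)
dt = stable_dt(c, dx)             # dt shrinks automatically as the grid is refined
print(courant_number(c, dt, dx))  # → 0.5, within the stability limit
```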
```
nx = 1001  # with dt tied to dx through the CFL number, even this fine grid stays stable
dx = 2 / (nx-1)
nt = 20    # nt is the number of time steps we want to compute
sigma = 0.5
dt = sigma * dx  # dt is chosen from the CFL condition (wave speed c = 1)
c = 1      # assume a wave speed of c = 1
x = np.arange(0.0, 2.0+dx, dx)
plt.figure(figsize=(18,12))
plt.subplot(2, 2, 1)
plt.plot(x, f1(nx), label='Initial velocity for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 2)
plt.plot(x, ECL(f1(nx)), label='Velocity at t=0.5 for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 3)
plt.plot(x, f2(nx), label='Initial velocity for f2')
plt.legend(frameon=False)
plt.subplot(2, 2, 4)
plt.plot(x, ECL(f2(nx)), label='Velocity at t=0.5 for f2')
plt.legend(frameon=False)
plt.show()
nx = 41    # back to the coarse grid with a fixed dt
dx = 2 / (nx-1)
nt = 20    # nt is the number of time steps we want to compute
dt = .025  # dt is the amount of time each time step covers (delta t)
c = 1      # assume a wave speed of c = 1
x = np.arange(0.0, 2.0+dx, dx)
plt.figure(figsize=(18,12))
plt.subplot(2, 2, 1)
plt.plot(x, f1(nx), label='Initial velocity for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 2)
plt.plot(x, ECnL(f1(nx)), label='Velocity at t=0.5 for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 3)
plt.plot(x, f2(nx), label='Initial velocity for f2')
plt.legend(frameon=False)
plt.subplot(2, 2, 4)
plt.plot(x, ECnL(f2(nx)), label='Velocity at t=0.5 for f2')
plt.legend(frameon=False)
plt.show()
from IPython.display import YouTubeVideo
YouTubeVideo('iz22_37mMkk')
YouTubeVideo('xq9YTcv-fQg')
```
# Extra!!!
For a detailed description of the discretization of the linear convection equation with finite differences (and of the following problems studied here as well), see Prof. Barba's video lesson 4 on YouTube.
```
YouTubeVideo('y2WaK7_iMRI')
```
|
github_jupyter
|
# DOPPELGANGER #
## Ever wondered what your "doppelganger" dog would look like?
# EXPERIMENT LOCALLY
### Prepare Environment
Install and import needed modules.
```
import numpy as np
import pandas as pd
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
Set the image path and explore the environment.
```
images_path = 'code/training/Images'
len(os.listdir(os.path.join(images_path)))
```
Set parameters.
```
batch_size = 200
img_w_size = 299
img_h_size = 299
```
Build Data Generator
```
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
image_generator = datagen.flow_from_directory(
images_path,
target_size=(img_w_size, img_h_size),
batch_size=batch_size,
class_mode=None,
shuffle=False)
images = image_generator.next()
images.shape
```
### Show a sample picture!
```
sample_image_idx = 1
plt.imshow((images[sample_image_idx] + 1) / 2)
```
## Transform Images to Lower Feature Space (Bottleneck) ##
```
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_w_size, img_h_size, 3),
pooling='avg')
bottlenecks = base_model.predict(images)
bottlenecks.shape
```
### Show Bottleneck
```
plt.plot(bottlenecks[0])
plt.show()
from sklearn.neighbors import DistanceMetric
dist = DistanceMetric.get_metric('euclidean')
```
### Calculate pairwise distances
```
bn_dist = dist.pairwise(bottlenecks)
bn_dist.shape
```
## Pre-Process Image Similarities ##
```
plt.imshow(bn_dist, cmap='gray')
```
Set visualization parameters.
```
n_rows = 5
n_cols = 5
n_result_images = n_rows * n_cols
```
# Find Similar Images #
## Define `image_search()`
```
def image_search(img_index, n_rows=n_rows, n_columns=n_cols):
n_images = n_rows * n_cols
# create Pandas Series with distances from image
dist_from_sel = pd.Series(bn_dist[img_index])
# sort Series and get top n_images
retrieved_indexes = dist_from_sel.sort_values().head(n_images)
retrieved_images = []
# create figure, loop over closest images indices
# and display them
plt.figure(figsize=(10, 10))
i = 1
for idx in retrieved_indexes.index:
plt.subplot(n_rows, n_cols, i)
plt.imshow((images[idx] + 1) / 2)
if i == 1:
plt.title('Selected image')
else:
plt.title("Dist: {:0.4f}".format(retrieved_indexes[idx]))
i += 1
retrieved_images += [images[idx]]
plt.tight_layout()
return np.array(retrieved_images)
```
## Perform Image Search
```
similar_to_idx = 0
plt.imshow((images[similar_to_idx] + 1) / 2)
similar_images_sorted = image_search(similar_to_idx)
similar_images_sorted.shape
```
## Convert images to gray-scale ##
```
grayscaled_similar_images_sorted = similar_images_sorted.mean(axis=3)
flattened_grayscale_images = grayscaled_similar_images_sorted.reshape(n_result_images, -1)
flattened_grayscale_images.shape
_, h, w = grayscaled_similar_images_sorted.shape
# Compute a PCA
n_components = 10
pca = PCA(n_components=n_components, whiten=True).fit(flattened_grayscale_images)
# apply PCA transformation to training data
pca_transformed = pca.transform(flattened_grayscale_images)
```
## Visualize Eigenfaces
```
def plot_gallery(images, titles, h, w, rows=n_rows, cols=n_cols):
plt.figure()
for i in range(rows * cols):
plt.subplot(rows, cols, i + 1)
plt.imshow(images[i].reshape(h, w), cmap=plt.cm.gray)
plt.title(titles[i])
plt.xticks(())
plt.yticks(())
eigenfaces = pca.components_.reshape((n_components, h, w))
eigenface_titles = ["eigenface {0}".format(i) for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w, 3, 3)
plt.show()
```
## Show Average Face
```
average_face = pca.mean_.reshape(h, w)  # the PCA mean image is the average face
plt.imshow(average_face, cmap=plt.cm.gray)
```
# BUILD CONTAINER
```
!cat code/training/doppelganger-train.py
!cat code/training/Dockerfile
!cat code/training/doppelganger-train-deploy.yaml
```
# RUN TRAINING POD
Deploy the training job to Kubernetes
```
!kubectl create -f code/training/doppelganger-train-deploy.yaml
!kubectl logs doppelganger-train -c doppelganger-train --namespace deployment
!kubectl delete -f code/training/doppelganger-train-deploy.yaml
```
# RUN INFERENCE POD
Use the previously trained model and run an inference service on Kubernetes
```
!cat code/inference/DoppelgangerModel.py
!cat code/inference/Dockerfile-v1
!cat code/inference/doppelganger-predict-deploy.yaml
```
### Deploy the service
```
!kubectl create -f code/inference/doppelganger-predict-deploy.yaml
```
### Make a prediction
```
plt.imshow((images[0] + 1) / 2)
```
### Run a curl command to get a prediction from the REST API
```
!curl https://community.cloud.pipeline.ai/seldon/deployment/doppelganger-model/api/v0.1/predictions -d '{"data":{"ndarray":[[0]]}}' -H "Content-Type: application/json"
```
### Clean up
```
!kubectl delete -f code/inference/doppelganger-predict-deploy.yaml
```
|
github_jupyter
|
## Exercise 3.10 Taxicab (tramcar) problem
Suppose you arrive in a new city and see a taxi numbered 100. How many taxis are there in this city? Let us assume taxis are numbered sequentially as integers starting from 0, up to some unknown upper bound $\theta$. (We number taxis from 0 for simplicity; we can also count from 1 without changing the analysis.) Hence the likelihood function is $p(x) = U(0,\theta)$, the uniform distribution. The goal is to estimate $\theta$. We will use the Bayesian analysis from Exercise 3.9.
a) Suppose we see one taxi numbered 100, so $D = \{100\}, m = 100, N = 1$. Using an (improper) non-informative prior on $\theta$ of the form $p(\theta) = \mathrm{Pa}(\theta|0, 0) \propto 1/\theta$, what is the posterior $p(\theta|D)$?
**Solution**: Using the result of Exercise 3.9, the posterior is $p(\theta|D) = \mathrm{Pa}(\theta|1, 100)$.
b) Compute the posterior mean, mode and median number of taxis in the city, if such quantities exist.
**Solution**:
The Pareto distribution $\mathrm{Pa}(\theta|1, 100)$ does not have a finite mean, since $\mathbb{E}(\theta|a, b) = \frac{ab}{a-1}$ is only defined for $a > 1$. The mode of the distribution is at 100.
The median of the distribution is given by
$$
\int_{\mathrm{median}}^\infty 100\,\theta^{-2}\,d\theta = 0.5
$$
which gives median = 200.
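We can sanity-check that value with the Pareto tail function: under $\mathrm{Pa}(\theta|K, b)$, $P(\theta > a) = (b/a)^K$ for $a \ge b$. A quick sketch:

```python
def pareto_tail(a, b=100.0, K=1.0):
    # P(theta > a) under Pareto(theta | K, b), valid for a >= b
    return (b / a) ** K

print(pareto_tail(100.0))  # → 1.0: all posterior mass lies above the support lower bound
print(pareto_tail(200.0))  # → 0.5: half the mass lies above 200, so the median is 200
```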
c) Rather than trying to compute a point estimate of the number of taxis, we can compute the predictive density over the next taxicab number using
$$
p(D'|D, \alpha) = \int p(D'|\theta)p(\theta|D, \alpha)d\theta = p(D'|\beta)
$$
where $\alpha = (b, K)$ are the hyper-parameters, $\beta = (c, N + K )$ are the updated hyper-parameters. Now
consider the case $D = \{m\}$, and $D' = \{x\}$. Using Equation 3.95, write down an expression for $p(x|D, \alpha)$.
As above, use a non-informative prior $b = K = 0$.
**Solution**:
Let's compute the predictive density over the next taxi number: First, we need to compute the posterior $p(\theta|D)$:
$$
p(\theta|D) = \mathrm{Pareto}(\theta|N + K, \max(m, b)) = \mathrm{Pareto}(\theta|1 + 0, \max(m, 0)) = \mathrm{Pareto}(\theta|1, m)
$$
Since the posterior is a Pareto distribution like the prior, we can use it as a 'prior' for inference on $D'$ and reuse the expressions for the evidence $p(D)$ and the joint distribution $p(D, \theta)$. So our new 'prior' has the distribution $p(\theta|D) = \mathrm{Pareto}(\theta|K'=1, b'=m)$. The number of samples is $N'=1$ and $m'=\max(D') = x$. Now we can calculate the predictive distribution:
$$
\begin{aligned}
p(x|D, \alpha) & = \frac{K'}{(N'+K')b'^{N'}}\mathbb{I}(x\le m) + \frac{K'b'^{K'}}{(N'+K')m'^{N'+K'}}\mathbb{I}(x > m) \\
& = \frac{1}{2m}\mathbb{I}(x\le m) + \frac{m}{2x^2}\mathbb{I}(x > m)
\end{aligned}
$$
d) Use the predictive density formula to compute the probability that the next taxi you will see (say, the next day) has number 100, 50 or 150, i.e., compute $p(x = 100|D,\alpha)$, $p(x = 50|D,\alpha)$, $p(x = 150|D, \alpha)$.
**Solution**:
With $m = 100$: $p(100|D, \alpha) = \frac{1}{2 \cdot 100} = 0.005$, $p(50|D, \alpha) = \frac{1}{2 \cdot 100} = 0.005$ (the density is flat for $x \le m$), and $p(150|D, \alpha) = \frac{100}{2 \cdot 150^2} \approx 0.0022$.
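A quick numeric check of this piecewise predictive density (note that it is flat at $1/(2m)$ for every $x \le m$, so $x = 50$ and $x = 100$ get the same value):

```python
def predictive(x, m=100.0):
    # Posterior predictive p(x | D) for D = {m} with the non-informative prior
    if x <= m:
        return 1.0 / (2.0 * m)
    return m / (2.0 * x ** 2)

print(predictive(100))            # → 0.005
print(predictive(50))             # → 0.005
print(round(predictive(150), 4))  # → 0.0022
```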
|
github_jupyter
|
<a href="https://colab.research.google.com/github/yahyanh21/Machine-Learning-Homework/blob/main/Week11_Padding_stride_pooling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install d2l
import torch
from torch import nn
# We define a convenience function to calculate the convolutional layer. This
# function initializes the convolutional layer weights and performs
# corresponding dimensionality elevations and reductions on the input and
# output
def comp_conv2d(conv2d, X):
# Here (1, 1) indicates that the batch size and the number of channels
# are both 1
X = X.reshape((1, 1) + X.shape)
Y = conv2d(X)
# Exclude the first two dimensions that do not interest us: examples and
# channels
return Y.reshape(Y.shape[2:])
# Note that here 1 row or column is padded on either side, so a total of 2
# rows or columns are added
conv2d = nn.Conv2d(1, 1, kernel_size=3, padding=1)
X = torch.rand(size=(8, 8))
comp_conv2d(conv2d, X).shape
# Here, we use a convolution kernel with a height of 5 and a width of 3. The
# padding numbers on either side of the height and width are 2 and 1,
# respectively
conv2d = nn.Conv2d(1, 1, kernel_size=(5, 3), padding=(2, 1))
comp_conv2d(conv2d, X).shape
conv2d = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=2)
comp_conv2d(conv2d, X).shape
#Next, we will look at a slightly more complicated example.
conv2d = nn.Conv2d(1, 1, kernel_size=(3, 5), padding=(0, 1), stride=(3, 4))
comp_conv2d(conv2d, X).shape
import torch
from d2l import torch as d2l
def corr2d_multi_in(X, K):
# First, iterate through the 0th dimension (channel dimension) of `X` and
# `K`. Then, add them together
return sum(d2l.corr2d(x, k) for x, k in zip(X, K))
X = torch.tensor([[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]],
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]])
K = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])
corr2d_multi_in(X, K)
def corr2d_multi_in_out(X, K):
# Iterate through the 0th dimension of `K`, and each time, perform
# cross-correlation operations with input `X`. All of the results are
# stacked together
return torch.stack([corr2d_multi_in(X, k) for k in K], 0)
K = torch.stack((K, K + 1, K + 2), 0)
K.shape
corr2d_multi_in_out(X, K)
def corr2d_multi_in_out_1x1(X, K):
c_i, h, w = X.shape
c_o = K.shape[0]
X = X.reshape((c_i, h * w))
K = K.reshape((c_o, c_i))
# Matrix multiplication in the fully-connected layer
Y = torch.matmul(K, X)
return Y.reshape((c_o, h, w))
X = torch.normal(0, 1, (3, 3, 3))
K = torch.normal(0, 1, (2, 3, 1, 1))
Y1 = corr2d_multi_in_out_1x1(X, K)
Y2 = corr2d_multi_in_out(X, K)
assert float(torch.abs(Y1 - Y2).sum()) < 1e-6
import torch
from torch import nn
from d2l import torch as d2l
def pool2d(X, pool_size, mode='max'):
p_h, p_w = pool_size
Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
if mode == 'max':
Y[i, j] = X[i: i + p_h, j: j + p_w].max()
elif mode == 'avg':
Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
return Y
X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
pool2d(X, (2, 2))
pool2d(X, (2, 2), 'avg')
X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))
X
pool2d = nn.MaxPool2d(3)
pool2d(X)
#The stride and padding can be manually specified.
pool2d = nn.MaxPool2d(3, padding=1, stride=2)
pool2d(X)
#Of course, we can specify an arbitrary rectangular pooling window and specify the padding and stride for height and width, respectively.
pool2d = nn.MaxPool2d((2, 3), stride=(2, 3), padding=(0, 1))
pool2d(X)
X = torch.cat((X, X + 1), 1)
X
pool2d = nn.MaxPool2d(3, padding=1, stride=2)
pool2d(X)
```
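All of the shapes printed above follow PyTorch's sizing rule for `Conv2d`: along each dimension the output size is floor((n + 2p - k) / s) + 1, where n is the input size, k the kernel size, p the one-sided padding, and s the stride. A small sketch checking the trickiest case above (kernel (3, 5), padding (0, 1), stride (3, 4) on an 8×8 input):

```python
def conv_out_size(n, k, p, s):
    # PyTorch Conv2d output size along one dimension: floor((n + 2p - k)/s) + 1
    return (n + 2 * p - k) // s + 1

h = conv_out_size(8, 3, 0, 3)  # height with kernel 3, padding 0, stride 3
w = conv_out_size(8, 5, 1, 4)  # width with kernel 5, padding 1, stride 4
print((h, w))  # → (2, 2), matching comp_conv2d(conv2d, X).shape
```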
|
github_jupyter
|
# Basics & Prereqs (run once)
Run this if you don't already have the dependencies downloaded or TheMovieDB data indexed.
```
from ltr.client.elastic_client import ElasticClient
client = ElasticClient()
from ltr import download, index
from ltr.index import rebuild
from ltr.helpers.movies import indexable_movies
corpus='http://es-learn-to-rank.labs.o19s.com/tmdb.json'
download([corpus], dest='data/');
movies=indexable_movies(movies='data/tmdb.json')
rebuild(client, index='tmdb', doc_src=movies)
```
## Create Elastic Client
```
from ltr.client.elastic_client import ElasticClient
client = ElasticClient()
```
# Our Task: Optimizing "Drama" and "Science Fiction" queries
In this example we have two user queries
- Drama
- Science Fiction
And we want to train a model to return the best movies for these queries when a user types them into our search bar.
We learn through analysis that searchers prefer newer science fiction, but older drama. Like a lot of search relevance problems, the two queries need to be optimized in *different* directions.
### Synthetic Judgment List Generation
To set up this example, we'll generate a judgment list that rewards newer science fiction movies as more relevant, and older drama movies as more relevant.
```
from ltr.date_genre_judgments import synthesize
judgments = synthesize(client, judgmentsOutFile='data/genre_by_date_judgments.txt')
```
### Feature selection should be *easy!*
Notice we have 4 proposed features that seem like they should work! This should be a piece of cake...
1. Release Year of a movie `release_year` - feature ID 1
2. Is the movie Science Fiction `is_sci_fi` - feature ID 2
3. Is the movie Drama `is_drama` - feature ID 3
4. Does the search term match the genre field `is_genre_match` - feature ID 4
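Under the hood, each judgment plus its logged feature values ends up as one row in RankLib's plain-text training format: a grade, a query id, then `feature_id:value` pairs. A minimal sketch of that formatting (the grades and values here are hypothetical, just to show the shape of a row):

```python
def ranklib_line(grade, qid, features, comment=""):
    # One RankLib training row: "<grade> qid:<qid> <fid>:<val> ... # <comment>"
    feats = " ".join(f"{fid}:{val}" for fid, val in sorted(features.items()))
    line = f"{grade} qid:{qid} {feats}"
    return line + (f" # {comment}" if comment else "")

# Hypothetical row: a highly relevant (grade 4) recent sci-fi movie for query 1
print(ranklib_line(4, 1, {1: 2014, 2: 1, 3: 0, 4: 1}, "a sci-fi movie"))
# → 4 qid:1 1:2014 2:1 3:0 4:1 # a sci-fi movie
```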
```
client.reset_ltr(index='tmdb')
config = {
"featureset": {
"features": [
{
"name": "release_year",
"params": [],
"template": {
"function_score": {
"field_value_factor": {
"field": "release_year",
"missing": 2000
},
"query": { "match_all": {} }
}
}
},
{
"name": "is_sci_fi",
"params": [],
"template": {
"constant_score": {
"filter": {
"match_phrase": {"genres": "Science Fiction"}
},
"boost": 1.0 }
}
},
{
"name": "is_drama",
"params": [],
"template": {
"constant_score": {
"filter": {
"match_phrase": {"genres": "Drama"}
},
"boost": 1.0 }
}
},
{
"name": "is_genre_match",
"params": ["keywords"],
"template": {
"constant_score": {
"filter": {
"match_phrase": {"genres": "{{keywords}}"}
},
"boost": 1.0
}
}
}
]
},
"validation": {
"params": {
"keywords": "Science Fiction"
},
"index": "tmdb"
}
}
client.create_featureset(index='tmdb', name='genre', ftr_config=config)
```
### Log from search engine -> to training set
Each feature is a query to be scored against the judgment list
```
from ltr.judgments import judgments_open
from ltr.log import FeatureLogger
from itertools import groupby
ftr_logger=FeatureLogger(client, index='tmdb', feature_set='genre')
with judgments_open('data/genre_by_date_judgments.txt') as judgment_list:
for qid, query_judgments in groupby(judgment_list, key=lambda j: j.qid):
ftr_logger.log_for_qid(judgments=query_judgments,
qid=qid,
keywords=judgment_list.keywords(qid))
```
### Training - Guaranteed Perfect Search Results!
We'll train a LambdaMART model against this training data.
```
from ltr.ranklib import train
trainLog = train(client,
training_set=ftr_logger.logged,
metric2t='NDCG@10',
index='tmdb',
featureSet='genre',
modelName='genre')
print()
print("Impact of each feature on the model")
for ftrId, impact in trainLog.impacts.items():
print("{} - {}".format(ftrId, impact))
print("Perfect NDCG! {}".format(trainLog.rounds[-1]))
```
### But this search sucks!
Try searches for "Science Fiction" and "Drama"
```
from ltr.search import search
search(client, keywords="science fiction", modelName="genre")
```
### Why didn't it work!?!? Training data
1. Examine the training data, do we cover every example of a BAD result
2. Examine the feature impacts, do any of the features the model uses even USE the keywords?
### Ranklib only sees the data you give it, and we don't have good enough coverage
You need feature coverage, especially over negative examples. Most documents in the index are negative!
One commonly used trick is to treat other queries' positive results as this query's negative results. What we're missing here are negative examples for "Science Fiction" that are not science fiction movies. That glaring omission is what we'll fix now: with the `autoNegate` flag, we'll add additional negative examples to the judgment list.
```
from ltr import date_genre_judgments
date_genre_judgments.synthesize(client,
judgmentsOutFile='data/genre_by_date_judgments.txt',
autoNegate=True)
from ltr.log import FeatureLogger
from ltr.judgments import judgments_open
from itertools import groupby
ftr_logger=FeatureLogger(client, index='tmdb', feature_set='genre')
with judgments_open('data/genre_by_date_judgments.txt') as judgment_list:
for qid, query_judgments in groupby(judgment_list, key=lambda j: j.qid):
ftr_logger.log_for_qid(judgments=query_judgments,
qid=qid,
keywords=judgment_list.keywords(qid))
from ltr.ranklib import train
trainLog = train(client,
training_set=ftr_logger.logged,
metric2t='NDCG@10',
index='tmdb',
featureSet='genre',
modelName='genre')
print()
print("Impact of each feature on the model")
for ftrId, impact in trainLog.impacts.items():
print("{} - {}".format(ftrId, impact))
print("NDCG {}".format(trainLog.rounds[-1]))
```
### Now try those queries...
Replace keywords below with 'science fiction' or 'drama' and see how it works
```
from ltr.search import search
search(client, keywords="drama", modelName="genre")
from ltr.search import search
search(client, keywords="science fiction", modelName="genre")
```
### The next problem
- Overfit to these two examples
- We need many more queries, covering more use cases
|
github_jupyter
|
## Nearest Neighbor item based Collaborative Filtering

Source: https://towardsdatascience.com
```
##Dataset url: https://grouplens.org/datasets/movielens/latest/
import pandas as pd
import numpy as np
movies_df = pd.read_csv('u.item.csv', names=['movieId','title'], sep='|', usecols=range(2))
rating_df = pd.read_csv('u.data.csv', names=['userId', 'movieId', 'rating'], usecols=range(3))
movies_df.head()
rating_df.head()
df = pd.merge(rating_df,movies_df,on='movieId')
df.head()
combine_movie_rating = df.dropna(axis = 0, subset = ['title'])
# combine_movie_rating.shape
movie_ratingCount = (combine_movie_rating.
groupby(by = ['title'])['rating'].
count().
reset_index().
rename(columns = {'rating': 'totalRatingCount'})
[['title', 'totalRatingCount']]
)
movie_ratingCount.head()
rating_with_totalRatingCount = combine_movie_rating.merge(movie_ratingCount, left_on = 'title', right_on = 'title', how = 'left')
rating_with_totalRatingCount.head()
pd.set_option('display.float_format', lambda x: '%.3f' % x)
print(movie_ratingCount['totalRatingCount'].describe())
popularity_threshold = 50
rating_popular_movie= rating_with_totalRatingCount.query('totalRatingCount >= @popularity_threshold')
rating_popular_movie.head()
rating_popular_movie.shape
## First lets create a Pivot matrix
movie_features_df=rating_popular_movie.pivot_table(index='title',columns='userId',values='rating').fillna(0)
movie_features_df.head()
from scipy.sparse import csr_matrix
movie_features_df_matrix = csr_matrix(movie_features_df.values)
# print(movie_features_df_matrix)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)
movie_features_df.shape
# query_index = np.random.choice(movie_features_df.shape[0])
# print(query_index)
query_index = movie_features_df.index.get_loc('Star Wars (1977)')
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
movie_features_df.head()
distances
indices
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))
```
## Cosine Similarity

```
my_ratings = movie_features_df[0]
my_ratings = my_ratings.loc[my_ratings!=0]
my_ratings
simCandidates = pd.Series(dtype='float64')
for i in range(0,len(my_ratings.index)):
print("Adding sims for ",my_ratings.index[i],"...")
query_index = movie_features_df.index.get_loc(my_ratings.index[i])
# print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
    distances = (1/(1+distances)) * my_ratings.iloc[i]  # convert distance to similarity, weighted by my rating
# print(distances)
sims = pd.Series(distances.flatten(),
name="ratings", index=movie_features_df.index[indices.flatten()])
# sims = distances.map(lambda x: (1/x)*myRatings[i])
print(sims)
    simCandidates = pd.concat([simCandidates, sims])  # Series.append was removed in pandas 2.0
print('\nsorting..\n')
simCandidates.sort_values(inplace=True,ascending=False)
print(simCandidates.head(20))
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace=True,ascending=False)
simCandidates.head(10)
filteredSims = simCandidates.drop(my_ratings.index)
filteredSims.head(10)
```
These are the final movie recommendations, based on movies I liked earlier such as `Empire Strikes Back, The (1980)`, `Gone with the Wind (1939)`, and `Star Wars (1977)`.
|
github_jupyter
|
# Data Preprocessing
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load train and test datasets
df_train = pd.read_csv('train.csv')
df_test = pd.read_csv('test.csv')
print("Training dataset has {} rows and {} columns.".format(*df_train.shape))
print("Test dataset has {} rows and {} columns.".format(*df_test.shape))
# Inspect the datasets
display(df_train.head())
display(df_train.describe())
display(df_test.head())
display(df_test.describe())
```
Since some of the categorical variables can have up to 120+ unique values, creating dummy variables would considerably increase our feature space, requiring lots of memory. To save space, we label encode them instead.
```
from sklearn.preprocessing import LabelEncoder
def label_encode(df):
'''Label encode categorical features and normalize continuous features'''
le = LabelEncoder()
    for col, col_data in df.items():  # iteritems() was removed in pandas 2.0
if col_data.dtype == 'object':
le.fit(col_data)
df[col] = le.transform(col_data)
return df
df_train = label_encode(df_train)
df_test = label_encode(df_test)
display(df_train.head())
display(df_test.head())
# Transform
from scipy.stats import boxcox
from scipy.stats import skew
# Get the list of columns to be normalized, ignoring the target variable
def col_to_norm(df, target):
col_to_norm = []
    for col, col_data in df.items():  # iteritems() was removed in pandas 2.0
if col == target:
continue
elif col_data.dtype == 'float64':
col_to_norm.append(col)
return col_to_norm
# First, we need to convert all negative values to positive
def force_positive(df, col_to_norm):
for col in col_to_norm:
if np.min(df[col]) < 1:
df[col] = df[col] + np.abs(np.min(df[col])) + 2
return df
# Then, we perform the Box-Cox transformation
def transform_box_cox(df, col_to_norm):
'''Perform Box-Cox transformation for easier model learning'''
lambda_vals = []
for col in col_to_norm:
tformed = boxcox(df[col].values)
df[col] = tformed[0] #By default, Box-Cox outputs array and lambda. We only want the array output.
        print("Box-Cox lambda: " + col + " " + str(tformed[1]))
lambda_vals.append(tformed[1])
return df, lambda_vals
def normalize_data(df, col_to_norm):
'''Normalize data with the standard score for normal distributions'''
mean_std = []
for col in col_to_norm:
mean = np.mean(df[col])
std = np.std(df[col])
df[col] = df[col].apply(lambda x: (x - mean) / std)
        print(col + ": mean = " + str(mean) + ", std = " + str(std))
mean_std.append((mean, std))
return df, mean_std
col_to_norm = col_to_norm(df_train, 'loss')
df_train = force_positive(df_train, col_to_norm)
df_train, lambda_vals = transform_box_cox(df_train, col_to_norm)
df_train, mean_std = normalize_data(df_train, col_to_norm)
display(df_train.describe())
# Calculate skewness of transformed data
for col in col_to_norm:
skness = skew(df_train[col])
    print(col + " skewness: {0:.2f}".format(skness))
# Transform test set with parameters done for training set
def transform_test_set(df, col_to_norm, lambda_vals, mean_std):
for col in col_to_norm:
if np.min(df[col]) < 1:
df[col] = df[col] + np.abs(np.min(df[col])) + 2
for col, val in zip(col_to_norm, lambda_vals):
tformed = boxcox(df[col].values, lmbda=val)
df[col] = tformed[0] #By default, Box-Cox outputs array and lambda. We only want the array output.
for col, (mean, std) in zip(col_to_norm, mean_std):
df[col] = df[col].apply(lambda x: (x - mean) / std)
return df
df_test = transform_test_set(df_test, col_to_norm, lambda_vals, mean_std)
display(df_test.describe())
sns_plot = sns.pairplot(df_train)
sns_plot.savefig("output_asc_after_norm.png")
# Save preprocessed datasets to pickle
df_train.to_pickle('train_2.pickle')
df_test.to_pickle('test_2.pickle')
print("Files saved.")
```
|
github_jupyter
|
# PaddleOCR DJL example
In this tutorial, we will be using pretrained PaddlePaddle model from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR) to do Optical character recognition (OCR) from the given image. There are three models involved in this tutorial:
- Word detection model: used to detect word blocks in the image
- Word direction model: used to determine whether the text needs to be rotated
- Word recognition model: used to recognize text from the word blocks
## Import dependencies and classes
PaddlePaddle is one of the deep learning engines that require DJL's hybrid mode to run inference. It does not contain NDArray operations itself and needs a supplemental DL framework to help with them, so we also import the PyTorch engine here to do the pre- and post-processing work.
```
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.14.0
%maven ai.djl.paddlepaddle:paddlepaddle-model-zoo:0.14.0
%maven org.slf4j:slf4j-api:1.7.32
%maven org.slf4j:slf4j-simple:1.7.32
// second engine to do preprocessing and postprocessing
%maven ai.djl.pytorch:pytorch-engine:0.14.0
import ai.djl.*;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.modality.cv.output.*;
import ai.djl.modality.cv.util.NDImageUtils;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.repository.zoo.*;
import ai.djl.paddlepaddle.zoo.cv.objectdetection.PpWordDetectionTranslator;
import ai.djl.paddlepaddle.zoo.cv.imageclassification.PpWordRotateTranslator;
import ai.djl.paddlepaddle.zoo.cv.wordrecognition.PpWordRecognitionTranslator;
import ai.djl.translate.*;
import java.util.concurrent.ConcurrentHashMap;
```
## The Image
Firstly, let's take a look at our sample image, a flight ticket:
```
String url = "https://resources.djl.ai/images/flight_ticket.jpg";
Image img = ImageFactory.getInstance().fromUrl(url);
img.getWrappedImage();
```
## Word detection model
For word detection, we load a model exported from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-detection-model-to-inference-model). After that, we can spawn a DJL `Predictor` from it, called `detector`.
```
var criteria1 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, DetectedObjects.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/det_db.zip")
.optTranslator(new PpWordDetectionTranslator(new ConcurrentHashMap<String, String>()))
.build();
var detectionModel = criteria1.loadModel();
var detector = detectionModel.newPredictor();
```
Then, we can detect the word blocks in the image. The original output from the model is a bitmap that marks all word regions. The `PpWordDetectionTranslator` converts the output bitmap into rectangular bounding boxes that we can use to crop the image.
```
var detectedObj = detector.predict(img);
Image newImage = img.duplicate();
newImage.drawBoundingBoxes(detectedObj);
newImage.getWrappedImage();
```
As you can see above, the word blocks are very narrow and do not include the whole body of the words. Let's extend them a bit for a better result. `extendRect` extends the box height and width by a certain scale. `getSubImage` crops the image and extracts the word block.
```
Image getSubImage(Image img, BoundingBox box) {
Rectangle rect = box.getBounds();
double[] extended = extendRect(rect.getX(), rect.getY(), rect.getWidth(), rect.getHeight());
int width = img.getWidth();
int height = img.getHeight();
int[] recovered = {
(int) (extended[0] * width),
(int) (extended[1] * height),
(int) (extended[2] * width),
(int) (extended[3] * height)
};
return img.getSubImage(recovered[0], recovered[1], recovered[2], recovered[3]);
}
double[] extendRect(double xmin, double ymin, double width, double height) {
double centerx = xmin + width / 2;
double centery = ymin + height / 2;
if (width > height) {
width += height * 2.0;
height *= 3.0;
} else {
height += width * 2.0;
width *= 3.0;
}
double newX = centerx - width / 2 < 0 ? 0 : centerx - width / 2;
double newY = centery - height / 2 < 0 ? 0 : centery - height / 2;
double newWidth = newX + width > 1 ? 1 - newX : width;
double newHeight = newY + height > 1 ? 1 - newY : height;
return new double[] {newX, newY, newWidth, newHeight};
}
```
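To make the clamping arithmetic of `extendRect` concrete, here is a small standalone Java sketch (the box values are invented for illustration) that duplicates the method and prints one extended box: the short side is tripled, the long side grows by twice the short side, and the result is clamped to the normalized [0, 1] image coordinates.

```java
public class ExtendRectDemo {
    // Copy of the notebook's extendRect: grows the box 3x along its short side
    // and clamps the result to normalized [0, 1] image coordinates.
    static double[] extendRect(double xmin, double ymin, double width, double height) {
        double centerx = xmin + width / 2;
        double centery = ymin + height / 2;
        if (width > height) {
            width += height * 2.0;
            height *= 3.0;
        } else {
            height += width * 2.0;
            width *= 3.0;
        }
        double newX = centerx - width / 2 < 0 ? 0 : centerx - width / 2;
        double newY = centery - height / 2 < 0 ? 0 : centery - height / 2;
        double newWidth = newX + width > 1 ? 1 - newX : width;
        double newHeight = newY + height > 1 ? 1 - newY : height;
        return new double[] {newX, newY, newWidth, newHeight};
    }

    public static void main(String[] args) {
        // A wide, thin box in the middle of the image:
        // height 0.04 -> 0.12, width 0.20 -> 0.28, centered at (0.5, 0.5)
        double[] r = extendRect(0.40, 0.48, 0.20, 0.04);
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
    }
}
```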
Let's try to extract one block out:
```
List<DetectedObjects.DetectedObject> boxes = detectedObj.items();
var sample = getSubImage(img, boxes.get(5).getBoundingBox());
sample.getWrappedImage();
```
## Word Direction model
This model, exported from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-angle-classification-model-to-inference-model), helps identify whether the image needs to be rotated. The following code loads this model and creates a `rotateClassifier`.
```
var criteria2 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, Classifications.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/cls.zip")
.optTranslator(new PpWordRotateTranslator())
.build();
var rotateModel = criteria2.loadModel();
var rotateClassifier = rotateModel.newPredictor();
```
## Word Recognition model
The word recognition model, exported from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-recognition-model-to-inference-model), recognizes the text in an image. Let's load this model as well.
```
var criteria3 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, String.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/rec_crnn.zip")
.optTranslator(new PpWordRecognitionTranslator())
.build();
var recognitionModel = criteria3.loadModel();
var recognizer = recognitionModel.newPredictor();
```
Then we can try these two models on the previously cropped image:
```
System.out.println(rotateClassifier.predict(sample));
recognizer.predict(sample);
```
Finally, let's run these models on the whole image and see the outcome. DJL offers a rich image toolkit that lets you draw the recognized text on the image and display it.
```
Image rotateImg(Image image) {
try (NDManager manager = NDManager.newBaseManager()) {
NDArray rotated = NDImageUtils.rotate90(image.toNDArray(manager), 1);
return ImageFactory.getInstance().fromNDArray(rotated);
}
}
List<String> names = new ArrayList<>();
List<Double> prob = new ArrayList<>();
List<BoundingBox> rect = new ArrayList<>();
for (int i = 0; i < boxes.size(); i++) {
Image subImg = getSubImage(img, boxes.get(i).getBoundingBox());
if (subImg.getHeight() * 1.0 / subImg.getWidth() > 1.5) {
subImg = rotateImg(subImg);
}
Classifications.Classification result = rotateClassifier.predict(subImg).best();
if ("Rotate".equals(result.getClassName()) && result.getProbability() > 0.8) {
subImg = rotateImg(subImg);
}
String name = recognizer.predict(subImg);
names.add(name);
prob.add(-1.0);
rect.add(boxes.get(i).getBoundingBox());
}
newImage.drawBoundingBoxes(new DetectedObjects(names, prob, rect));
newImage.getWrappedImage();
```
<a href="https://colab.research.google.com/github/dhsong95/dacon-emnist-competition/blob/master/notebooks/EMNIST_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/gdrive')
%ls /content/gdrive/'My Drive'/'Google Colaboratory'/dacon-emnist-competition/
%cd /content/gdrive/'My Drive'/'Google Colaboratory'/dacon-emnist-competition/
import os
import pandas as pd
datadir = 'data/'
train = pd.read_csv(os.path.join(datadir, 'train.csv'))
test = pd.read_csv(os.path.join(datadir, 'test.csv'))
submission = pd.read_csv(os.path.join(datadir, 'submission.csv'))
train.sample(10)
import matplotlib.pyplot as plt
import numpy as np
digit_columns = train.columns[3:]
digits = np.array(train.loc[:, digit_columns])
digits.shape
index = np.random.randint(digits.shape[0])
plt.figure(figsize=(4, 4))
plt.imshow(digits[index, :].reshape(28, 28))
plt.title(f'digit {train.loc[index, "digit"]} / letter {train.loc[index, "letter"]}')
plt.xticks([])
plt.yticks([])
plt.axis('off')
plt.show()
x_train = train.drop(columns=['id', 'digit', 'letter']).values.reshape(-1, 28, 28, 1)
x_train = x_train / 255.0
x_train.shape
N = len(train['digit'])
C = len(pd.unique(train['digit']))
y_train = np.zeros(shape=(N, C))
for idx, digit in enumerate(train['digit']):
y_train[idx, digit] = 1
y_train.shape
from tensorflow import keras
from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D,\
Dense, Dropout, Flatten, MaxPool2D, LeakyReLU
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
class BasicCNN:
def __init__(self, image_shape):
self.model = self.build_model(image_shape)
adam = Adam(learning_rate=1e-3)
self.model.compile(
loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy']
)
def build_model(self, image_shape):
model = keras.models.Sequential()
model.add(Conv2D(128, 3, padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.01))
model.add(MaxPool2D())
model.add(Dropout(rate=0.5))
model.add(Conv2D(256, 3, padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.01))
model.add(MaxPool2D())
model.add(Dropout(rate=0.5))
model.add(Conv2D(512, 3, padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.01))
model.add(MaxPool2D())
model.add(Dropout(rate=0.5))
model.add(Flatten())
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(10, activation='softmax'))
inputs = keras.Input(image_shape[1:])
outputs = model(inputs)
print(model.summary())
return keras.Model(inputs, outputs)
def train(self, x_train, y_train, epochs):
history = self.model.fit(
x_train, y_train,
epochs=epochs,
validation_split=0.2
)
return history
def predict(self, x_test):
return self.model.predict(x_test)
basic_cnn = BasicCNN(x_train.shape)
history = basic_cnn.train(x_train, y_train, 100)
train_loss = history.history['loss']
val_loss = history.history['val_loss']
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.figure(figsize=(8, 6))
plt.plot(train_loss, label='train')
plt.plot(val_loss, label='validation')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('loss')
plt.title('Loss per Epochs')
plt.show()
plt.figure(figsize=(8, 6))
plt.plot(train_acc, label='train')
plt.plot(val_acc, label='validation')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.title('Accuracy per Epochs')
plt.show()
x_test = np.array(test.loc[:, digit_columns].values)
x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test / 255.0
x_test.shape
prediction = basic_cnn.predict(x_test)
prediction = np.argmax(prediction, axis=-1)
submission.head()
test.head()
submission['digit'] = prediction
submission.head(10)
# submission.to_csv('data/submission_1.csv', index=False) # 0.83
submission.to_csv('data/submissions/submission_2.csv', index=False)
```
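The one-hot loop above can also be written as a single vectorized assignment; a minimal standalone sketch (toy labels, not the competition data):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Vectorized one-hot encoding, equivalent to the loop in the notebook."""
    labels = np.asarray(labels)
    out = np.zeros((labels.shape[0], num_classes))
    # Set a single 1.0 per row at the column given by the label
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

print(one_hot([2, 0, 1], 3))
```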
# Particle physics ... with R and tidyverse
This tutorial uses open data from the CMS experiment at the LHC, [CMS Open Data](http://opendata.cern.ch/about/cms), available on the [CERN Open Data portal](http://opendata.cern.ch).
To run this tutorial offline, see the [README](https://github.com/cms-opendata-education/cms-rmaterial-multiple-languages/blob/master/README.md) file, with instructions in English. I am running the notebook on my local R installation.
You can also copy the lines of code and paste them into the RStudio command console, or into a script and then run it.
**Credits:**
* Adapted from the original by [Edith Villegas Garcia](https://github.com/edithvillegas), [Andrew John Lowe](https://github.com/andrewjohnlowe) and [Achintya Rao](https://github.com/RaoOfPhysics).
* Translated into Portuguese, with the fit added, by [Clemencia Mora Herrera](https://github.com/clemencia).
---
## The data:
This tutorial introduces data analysis with R using data released to the public on the **CMS Open Data** portal.
These data come from proton collisions at the LHC in 2011 (center-of-mass energy of 7 TeV).
They contain measurements of final-state particles: two ***muons*** (a slightly heavier version of the electron, common in cosmic rays).
The first image shows a schematic drawing of the LHC and its 4 main experiments.
<figure>
    <img src="https://github.com/cms-opendata-education/zboson-exercise/blob/master/images/LHC.png?raw=true" alt="image missing" style="height: 350px" />
    <figcaption> Image 1: The LHC and its 4 main experiments. ©
        <a href="https://cds.cern.ch/record/1708847">CERN</a>
    </figcaption>
</figure>
At the LHC, protons are accelerated to very high speeds and made to collide at fixed points (the 4 in the figure above), where each experiment's detectors record and save information about the collision products. The collision energy can be converted into the mass of new particles ($E=mc^2$), which can decay into lighter ones, leaving signals in the measuring instruments of each detector. The signals are translated into momentum ($p=mv$), particle charge, energy, and the direction in which the particle leaves the interaction point.
The following video shows how collisions and measurements happen at the LHC accelerator.
```
library(IRdisplay)
display_html('<iframe width="560" height="315" src="https://www.youtube.com/embed/pQhbhpU9Wrg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
If we combine the energy and momentum information of the two muons for each observation (_event_), we can see that at certain values of the ***invariant mass*** the frequency of observations is higher (the energy of a particle of mass $m$ at rest is $E=mc^2$; in special relativity this is always valid in the particle's own rest frame, so this proper mass is the same across different reference frames). This means that a subatomic particle decayed into a pair of muons, which we call a "resonance". We can infer the presence of these particles indirectly by observing their decay products, the muons, and how often they occur.
<figure>
    <img src="http://github.com/cms-opendata-education/zboson-exercise/blob/master/images/eventdisplay.png?raw=true" alt="image missing" style="height: 350px" />
    <figcaption> Image 2: Display of the detection of two muons in a collision at CMS. </figcaption>
</figure>
<figure>
    <img src="http://github.com/cms-opendata-education/zboson-exercise/blob/master/images/CMS.jpg?raw=true" alt="image missing" style="height: 350px" />
    <figcaption> Image 3: Structure of the CMS experiment, opened up. ©
        <a href="https://cds.cern.ch/record/1433717">CERN</a>
    </figcaption>
</figure>
<figure>
    <img src="http://github.com/cms-opendata-education/zboson-exercise/blob/master/images/CMS2.gif?raw=true" alt="image missing" style="height: 350px" />
    <figcaption>Image 4: Cross section of CMS, showing how particles are detected in it. ©
        <a href="https://cms-docdb.cern.ch/cgi-bin/PublicDocDB/ShowDocument?docid=4172">CERN</a>
    </figcaption>
</figure>
In this tutorial we will create a frequency plot in which it is possible to see the peaks corresponding to some of these particles that prefer to decay into two muons.
## A brief introduction to R
R is a programming language widely used in statistics and data science.
_"R is the lingua franca of statistics"_ (W. Zeviani, UFPR)
[ http://leg.ufpr.br/~walmes/cursoR/data-vis/slides/01-tidyverse.pdf ]
### Data types in R
The basic data types in R are:
- Logical -- booleans: TRUE or FALSE
- Numeric -- numbers in general, real numbers
- Integer -- integers
- Complex -- complex numbers
- Character -- characters or strings of them: letters, numbers as characters, symbols and sentences
In general, without having to specify it, R automatically assigns a type to declared variables.
Any number is of type ```numeric```, but to specify ```integer``` we have to add the letter "L" to the end of the value.
The line below declares the variable ```a``` with integer value $5$.
```
a <- 5L
```
To declare complex variables, the syntax is as follows:
```
b <- 5 + 3i
d <- 8 + 0i
```
Logical variables can take the value ```TRUE``` or ```FALSE```, but they can also be assigned the result of a conditional expression, e.g.:
```
c <- 3 > 5
```
Character variables can be letters, sentences or other characters, including numbers, in quotes.
```
cr <- "3!"
```
To see the value of each variable, I simply call its name:
```
a
b
c
d
cr
```
#### Vectors
Values can be grouped into vector variables like this:
```
a <- c(2, 3, 5)
```
Vectors can be of any type. We can also apply conditions to vectors to create a logical vector:
```
a <- c(2, 5, 8, 3, 9)
b <- a > 3
```
The vector ```b``` is the result of evaluating the condition ```x>3``` for each element ```x``` of the vector ```a```.
```
b
```
To access an element of a vector, we call the name of the vector variable with the index of the desired element. Counting starts at 1 (other languages use 0).
So the first element of ```a``` is accessed like this:
```
a[1]
```
It is also possible to access the elements that satisfy a condition. The following line returns the subset (sub-vector) of the elements of ```a``` whose value is greater than $3$.
```
c<-a[a>3]
c
```
#### Matrices
In R we can create a matrix from vectors. Matrices are 2-dimensional data structures.
We can create a matrix by specifying its values, the number of rows and columns, and whether it is filled by rows or by columns.
In this example we start with a vector from 1 to 9:
```
a <- c(1:9)
a
```
Then we declare ```A```, a matrix of 3x3 components, filled row by row with the 9 elements of ```a```.
```
A <- matrix(a, nrow=3, ncol=3, byrow=TRUE)
A
```
To access the elements of the matrix, we use brackets with the row and column numbers. For example, to access the element in the second row, third column of ```A``` we do:
```
A[2,3]
```
We can access a whole row by specifying only the first number and leaving the column index blank, and vice versa. For example, the call ```A[2,]``` returns the values of the second row of ```A```.
```
A[2,]
```
Matrices can be accessed with conditions, as was the case with vectors.
```
# Create a vector of values 1 to 25
a <- c(1:25)
# Create the matrix from this vector with 5 rows and 5 columns, filling row by row.
A <- matrix(a, nrow=5, ncol=5, byrow=TRUE)
# Access the elements of A that are greater than 12
# by putting the condition "A>12" in the brackets;
# the new variable is a vector.
C<-A[A>12]
print(C)
length(C)
```
#### Arrays
Arrays are similar to matrices, but can have more than two dimensions.
Like matrices, they can be created from a vector by specifying the chosen dimensions.
```
# Create a vector with values 1 to 27
a <- c(1:27)
# Create an array from the vector a
# containing 3 matrices of 3 rows and 3 columns.
A <- array(a, dim=c(3,3,3))
# Print the array.
print(A)
```
#### Lists
Lists are like vectors, but their elements can hold different data types at the same time, including vectors.
```
l <- list(c(1,2,3),'a', 1, 1+5i)
l
```
#### Data Frames
Data frames are like lists of vectors of the same length. They are used to store data in table form.
To create a data frame we can do, for example:
```
data <- data.frame(
Nome = c("Thereza", "Diana"),
Genero = c('F','F'),
Idade = c(20, 23)
)
data
```
(For me, here lies the beauty of R ... that simplicity!)
To access a particular column, simply use ```$``` and the column name.
For example, to see the names:
```
data$Nome
```
If we want to see just one row (an instance or observation of your experiment, a measurement) we call the row number
```
data[1,]
```
And R has several functions to import files (in text format, csv, even xls!) straight into data frames.
How could you not love it?
## Exploring the CMS Open Data
Now to the task at hand: analyzing CMS data.
---
### Importing data from CSV files
The [CERN Open Data](http://opendata.cern.ch) portal has several datasets available. We will use data that have already been reduced to CSV (comma-separated values) format, import them into R and analyze their content.
The data for this tutorial come from the following record: [http://opendata.cern.ch/record/545](http://opendata.cern.ch/record/545)
To import them we use the following command:
```
mumu <- read.csv("http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv")
```
The previous command loaded the data from the file `Dimuon_DoubleMu.csv` into a variable called `mumu`, which is a data frame.
To look at the content of the first 6 rows we can call the `head` function, and to get the number of observations we use the `nrow` function
```
nrow(mumu)
head(mumu)
```
Our dataset has 100 thousand rows (each row is a collision event) and 21 columns (each column is a variable describing the event or the measurements of the event's final products).
At this point we can bring in the *tidyverse*. For a more pleasant view of the data, we can convert the data frame to a *tibble*.
```
require(tidyverse)
tbmumu<- mumu %>% as_tibble()
```
The tidyverse includes the `magrittr` package, which introduces the *pipe* operator (as in plumbing, though also "ceci n'est pas une pipe"), written `%>%`, which passes the object on its left as an argument to the function on its right. With these pipes it is possible to chain several operations concisely.
So the code above applies the `as_tibble` function to the data frame `mumu`, and the result (which is a *tibble*) is stored in the variable `tbmumu`.
Then, printing the first 6 rows of our *tibble* table gives a display that:
* fits on the screen
* gives information about what did not fit
```
print(head(tbmumu))
```
From the output above we can see the first 12 columns, with their types, with colors for negative values, plus the additional information of 9 variables not shown. We can access the columns and rows the same way as with the data frame.
```
# print the first 6 elements of the column named E1
# this operator returns a vector
print(head(tbmumu$E1))
# print the first row of data
# returns a new tibble that is a subset of the original
print(tbmumu[1,])
# This other example returns the subset of the first 10 rows
print(tbmumu[1:10,])
```
### Computing the invariant mass
Our table has observations of collisions with 2 *muons* in the final state.
As we saw in the table, we have values for the energy (E), the linear momentum (px, py, pz), the *pseudorapidity* (eta or η, which is related to the polar angle) and the azimuthal angle (phi or φ).
We can compute the invariant mass, i.e. the equivalent rest energy that produced these muons, with the following equation:
$M = \sqrt{(\sum{E})^2 - ||\sum{p}||^2}$
where $M$ is the invariant mass, $\sum{E}$ is the sum of the (relativistic) energies of the final-state particles, and $\sum{p}$ is the vector sum of their linear momenta.
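Written out explicitly for a two-muon final state in terms of the table's momentum components, this formula becomes:

$M = \sqrt{(E_1+E_2)^2 - (p_{x1}+p_{x2})^2 - (p_{y1}+p_{y2})^2 - (p_{z1}+p_{z2})^2}$

which is exactly the quantity computed column by column in the code that follows.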
In our code, we will compute the invariant mass using the values of `px`, `py` and `pz` and the energies of the two muons. First we need to compute the vector sum of the momenta.
The `mutate` function from the **tidyverse** performs the specified calculation for each observation and _adds new variables_, in this case `ptotal`, `E` and `mass`
```
tbmumu<-tbmumu%>%mutate(ptotal = sqrt((px1+px2)^2 + (py1+py2)^2 + (pz1+pz2)^2),
E = E1+E2,
mass = sqrt(E^2 - ptotal^2))
tbmumu%>% select(Run, Event, ptotal,E, mass)%>%head()
```
It is also possible to define a function to do our calculation:
```
myfunctionname = function(arg1, arg2...)
{
statements
return(a)
}
```
For example, we can define a function for the magnitude of the vector sum of two 3-component vectors, and another function that returns the invariant mass from `ptotal` and `E`
```
sumvecmag = function(x1,x2,y1,y2,z1,z2){
x = x1+x2
y = y1+y2
z = z1+z2
tot = sqrt(x^2+y^2+z^2)
return(tot)
}
invmass = function(ptot, E) {
m = sqrt(E^2 - ptot^2)
return(m)
}
```
Now we can add a new computed column by calling the defined functions:
```
tbmumu<- tbmumu %>% mutate(
ptotal_f = sumvecmag( px1, px2, py1, py2 , pz1, pz2),
E = E1 + E2,
mass_f=invmass(ptotal_f,E))
# View the first 6 rows of the tibble, selecting only the columns of interest
print(head(tbmumu%>% select(ptotal,ptotal_f, E, mass, mass_f)))
```
### Making a histogram
In particle physics we work with frequency distributions, that is, histograms.
In this case, I want to look at only a portion of the data, where the mass variable is between 1.1 and 5 (GeV). For this I can use the tidyverse `filter` function, with `%>%` operators
```
tbsel <- tbmumu%>% filter(mass>1.1 & mass < 5)
```
The histogram can be plotted with R's basic built-in function
```
Sys.setlocale(locale = "en_US.UTF-8") # To get accented characters
library(repr)
options(repr.plot.width=6,repr.plot.height=4 ) # To get plots sized to fit the screen
hist(tbsel$mass, breaks = 200, xlim=c(1,5),
main="Histograma da Massa Invariante",
xlab = "Massa (GeV)",ylab="Frequência ",
lty="blank",
col="purple")
```
We observe a larger peak near the value $3.1$ GeV and another small one near $3.7$ GeV.
These values correspond to the masses of two particles that decay into two muons, or more specifically, into a muon and an anti-muon (a positively charged muon).
Looking in the [Particle Data Group](http://pdg.lbl.gov/) database, we can see that these particles are the **mesons** (**hadronic** particles composed of a quark and an anti-quark) ***J/ψ(1S)*** and ***ψ(2S)***, respectively.
### Plotting with the tidyverse
We can condense the process of importing the data, transforming the variables, and plotting the histogram into very compact code:
```
read_csv("http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv",
col_types = cols()) %>%
mutate(ptotal = sqrt((px1+px2)^2 + (py1+py2)^2 + (pz1+pz2)^2),
E = E1+E2,
mass = sqrt(E^2 - ptotal^2)) %>%
filter(mass >0.1 & mass<120) %>%
ggplot(aes(mass)) +
geom_histogram(bins = 250, fill = "purple", alpha = 0.5) +
xlab("Massa (GeV)") +
ylab("Frequência") +
scale_x_continuous(trans = 'log10') +
scale_y_continuous(trans = 'log10') +
ggtitle("Espectro de di-múons no CMS") +
theme_bw() +
theme(plot.title = element_text(hjust = 0.5))
```
Now it's getting good!
The chain of commands can be read like a sentence, as a sequence of **actions** on the data:
"**Read** the file,
then **mutate** the content, creating the new variables `ptotal`, `E` and `mass`,
then **filter** to keep only the observations in the desired range,
then **plot** with the appropriate parameters"
The tidyverse plotting package is ``ggplot2``, where the different plot options are chained with the `+` symbol. In this case I chose a `log-log` scale, which gives a comprehensive view across several orders of magnitude, so that several resonance peaks can be seen.
The plot options are:
- `ggplot()` the central function of the `ggplot2` package, which works on the principle of *layers*:
    1. `aes(mass)` means we will use the variable `mass`
    1. `geom_histogram()` takes the variable and builds the histogram
    1. `xlab()` and `ylab()` the axis labels
    1. `ggtitle()` the plot title
    1. `theme_bw()` a black-and-white theme
    1. `theme()` allows tweaking specific elements of the plot
## Fitting a function to the $J/\psi$ peak
Going back to the *tibble* where we had already selected the range showing the $J/\psi$ meson peak, we can call the hist function without plotting to get only the bin frequencies.
```
a<-hist(tbsel$mass,breaks=200,plot=FALSE)
mydf<- data.frame(x=a$mids, nobs=a$counts)
print(head(mydf))
library(latex2exp)
mydf %>%
ggplot(aes(x,nobs, ymin=nobs-sqrt(nobs),ymax=nobs+sqrt(nobs))) +
geom_point() +
geom_errorbar() +
xlab("Massa (GeV)")+
ylab("Frequência")+
ggtitle(TeX("Histograma do pico dos mésons J/$\\psi$ e $\\psi$"))+
theme_bw() + theme(plot.title = element_text(hjust = 0.5))
```
### A function to describe the data
A possible function to describe these two peaks is the sum of a Gaussian with mean near $3.1$, another with mean near $3.7$, and a decreasing straight line as the "base" (our *background*).
```
my2gausspluslin <- function(x, mean1, sigma1, norm1,mean2,sigma2,norm2,a,b) {
f <- norm1 * exp(-1*((x-mean1)/sigma1)^2)+ norm2 * exp(-1*((x-mean2)/sigma2)^2) + a*x +b
return(f)
}
```
I will call the `nls` function, which optimizes the function's parameters with non-linear least squares ([documentation](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/nls.html)).
```
res <- nls( nobs ~ my2gausspluslin(x,mean1,sigma1,norm1,mean2,sigma2,norm2,a,b),
data = mydf,
start=list(mean1=3.1, sigma1=0.1, norm1=3000,mean2=3.7,sigma2=0.05,norm2=30,a=-10,b=100))
summary(res)
```
Here the result was saved in the variable `res`, and I can apply it with the `predict` function. Let's add a column with the counts computed from the fitted function to the data frame, saved under the new name `nexp` for *expected*: the expected count according to a model of 2 Gaussians plus a linear background.
```
newdf<- mydf%>% mutate(nexp = predict(res,mydf))
print(head(newdf))
```
We plot the prediction:
```
newdf%>%
ggplot(aes(x,nexp))+
geom_path(color="purple")+
xlab("Massa (GeV)")+
ylab("frequência")+
ggtitle("Predição do ajuste da função gaussiana + reta decrescente")+
theme_bw() + theme(plot.title = element_text(hjust = 0.5))
```
### Result
Finally, we plot the observed frequency points (with *Poisson* errors, $\sigma_n =\sqrt{n}$) together with the prediction line.
```
ggplot(newdf) +
geom_path(aes(x,nexp),color="purple")+
geom_point(aes(x,nobs))+
geom_errorbar(aes(x,nobs, ymin=nobs-sqrt(nobs),ymax=nobs+sqrt(nobs)))+
xlab("Massa (GeV)")+
ylab("frequência")+
ggtitle("Resultado dos dados com a função do ajuste")+
theme_bw() +theme(plot.title = element_text(hjust = 0.5))
```
## Motivation!
Now I'm excited. Shall we take a look at the highest-mass peak in this spectrum?
It is the peak of the Z boson, which is analogous to a photon except that it has a large mass (for a subatomic particle).
```
tbZboson<-tbmumu %>% filter(mass>70 & mass <110)
tbZboson %>%
ggplot(aes(mass)) +
geom_histogram(bins = 80, fill = "purple", alpha = 0.5) +
xlab("Massa (GeV)") +
ylab("Frequência") +
ggtitle("Pico do bóson Z") +
theme_bw() +
theme(plot.title = element_text(hjust = 0.5))
tbZboson %>% filter(abs(eta1)<2.4 & abs(eta2)<2.4, pt1>20 & pt2>20, type1=="G" & type2=="G") %>%
ggplot(aes(mass)) +
geom_histogram(bins = 80, fill = "purple", alpha = 0.5) +
xlab("Massa (GeV)") +
ylab("Frequência") +
ggtitle("Pico do bóson Z") +
theme_bw() +
theme(plot.title = element_text(hjust = 0.5))
zfilt<- tbZboson %>% filter(abs(eta1)<2.4 & abs(eta2)<2.4, pt1>20 & pt2>20, type1=="G" & type2=="G")
zh<- hist(zfilt$mass,breaks=80,plot=FALSE)
zdf<-data.frame(x=zh$mids,n=zh$counts)
print(head(zdf))
breitwpluslin <- function(x,M,gamma,N,a,b){
b<- a*x +b
s<- N*( (2*sqrt(2)*M*gamma*sqrt(M**2*(M**2+gamma**2)))/(pi*sqrt(M**2+sqrt(M**2*(M**2+gamma**2)))) )/((x**2-M**2)**2+M**2*gamma**2)
return(b+s)
}
library(minpack.lm)
resz <-nlsLM( n~ breitwpluslin(x,m,g,norm,a,b),
data = zdf,
start=list(m=90, g=3, norm =100,a=-10,b=100))
summary(resz)
newz<- zdf %>% mutate(nexp=predict(resz,zdf))
print(as_tibble(newz))
newz%>%
ggplot(aes(x,nexp))+
geom_path(color="purple")+
geom_point(aes(x,n))+
geom_errorbar(aes(x,n, ymin=n-sqrt(n),ymax=n+sqrt(n)))+
xlab("Massa (GeV)")+
ylab("frequência")+
ggtitle("Predição do ajuste da função Breit-Wigner + reta decrescente")+
theme_bw() + theme(plot.title = element_text(hjust = 0.5))
```
Ahhh, I loved it!
Purple hearts 4 ever 💜💜💜
# Relationships
> A Summary of lecture "Exploratory Data Analysis in Python", via datacamp
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp]
- image: images/brfss-boxplot.png
## Exploring relationships
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from empiricaldist import Pmf, Cdf
brfss_original = pd.read_hdf('./dataset/brfss.hdf5', 'brfss')
```
### PMF of age
```
# Extract age
age = Pmf.from_seq(brfss_original['AGE'])
# Plot the PMF
age.bar()
# Label the axes
plt.xlabel('Age in years')
plt.ylabel('PMF')
```
### Scatter plot
```
# Select the first 1000 respondents
brfss = brfss_original[:1000]
# Extract age and weight
age = brfss['AGE']
weight = brfss['WTKG3']
# Make a scatter plot
plt.plot(age, weight, 'o', alpha=0.1)
plt.xlabel('Age in years')
plt.ylabel('Weight in kg')
```
### Jittering
```
# Select the first 1000 respondents
brfss = brfss_original[:1000]
# Add jittering to age
age = brfss['AGE'] + np.random.normal(0, 2.5, size=len(brfss))
# Extract weight
weight = brfss['WTKG3']
# Make a scatter plot
plt.plot(age, weight, 'o', markersize=4, alpha=0.2)
plt.xlabel('Age in years')
plt.ylabel('Weight in kg')
```
## Visualizing relationships
### Height and weight
```
# Drop rows with missing data
data = brfss_original.dropna(subset=['_HTMG10', 'WTKG3'])
# Make a box plot
sns.boxplot(x='_HTMG10', y='WTKG3', data=data, whis=10)
# Plot the y-axis on a log scale
plt.yscale('log')
# Remove unneeded lines and label axes
sns.despine(left=True, bottom=True)
plt.xlabel('Height in cm')
plt.ylabel('Weight in kg')
plt.savefig('../images/brfss-boxplot.png')
```
### Distribution of income
```
# Extract income
income = brfss_original['INCOME2']
# Plot the PMF
Pmf.from_seq(income).bar()
# Label the axes
plt.xlabel('Income level')
plt.ylabel('PMF')
```
### Income and height
```
# Drop rows with missing data
data = brfss_original.dropna(subset=['INCOME2', 'HTM4'])
# Make a violin plot
sns.violinplot(x = 'INCOME2', y='HTM4', data=data, inner=None)
# Remove unneeded lines and label axes
sns.despine(left=True, bottom=True)
plt.xlabel('Income level')
plt.ylabel('Height in cm')
```
## Correlation
### Computing correlations
```
# Select columns
columns = ['AGE', 'INCOME2', '_VEGESU1']
subset = brfss_original[columns]
# Compute the correlation matrix
print(subset.corr())
```
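`.corr()` reports Pearson coefficients, which only measure *linear* association. A framework-free sketch showing that a deterministic but nonlinear relationship can still score exactly zero:

```python
def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-2, -1, 0, 1, 2]
ys = [x ** 2 for x in xs]   # perfectly determined by x, but nonlinear
print(pearson(xs, ys))      # 0.0 — Pearson misses the relationship entirely
```

This is why a scatter plot (or a regression fit, as below) is worth checking alongside the correlation matrix.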
## Simple regression
### Income and vegetables
```
from scipy.stats import linregress
# Extract the variables
subset = brfss_original.dropna(subset=['INCOME2', '_VEGESU1'])
xs = subset['INCOME2']
ys = subset['_VEGESU1']
# Compute the linear regression
res = linregress(xs, ys)
print(res)
```
### Fit a line
```
plt.figure(figsize=(10, 10))
# Plot the scatter plot
x_jitter = xs + np.random.normal(0, 0.15, len(xs))
plt.plot(x_jitter, ys, 'o', alpha=0.2)
# Plot the line of best fit
fx = np.array([xs.min(), xs.max()])
fy = res.intercept + res.slope * fx
plt.plot(fx, fy, '-', alpha=0.7)
plt.xlabel('Income code')
plt.ylabel('Vegetable servings per day')
plt.ylim([0, 6])
```
|
github_jupyter
|
```
# Import packages
import os
from matplotlib import pyplot as plt
import pandas as pd
import math
import numpy as np
# Import AuTuMN modules
from autumn.settings import Models, Region
from autumn.settings.folders import OUTPUT_DATA_PATH
from autumn.tools.project import get_project
from autumn.tools import db
from autumn.tools.plots.calibration.plots import calculate_r_hats, get_output_from_run_id
from autumn.tools.plots.uncertainty.plots import _plot_uncertainty, _get_target_values
from autumn.tools.plots.plotter.base_plotter import COLOR_THEME
from autumn.tools.plots.utils import get_plot_text_dict, change_xaxis_to_date, REF_DATE, ALPHAS, COLORS, _apply_transparency, _plot_targets_to_axis
from autumn.models.covid_19.stratifications.agegroup import AGEGROUP_STRATA
import matplotlib.patches as mpatches
from autumn.tools.calibration.utils import get_uncertainty_df
# Specify model details
model = Models.COVID_19
region = Region.MALAYSIA # http://www.autumn-data.com/app/covid_19/region/malaysia/run/1636441221-1a276cd.html
dirname = "2021-11-09"
# get the relevant project and output data
project = get_project(model, region)
project_calib_dir = os.path.join(
OUTPUT_DATA_PATH, "calibrate", project.model_name, project.region_name
)
calib_path = os.path.join(project_calib_dir, dirname)
# Load tables
mcmc_tables = db.load.load_mcmc_tables(calib_path)
mcmc_params = db.load.load_mcmc_params_tables(calib_path)
uncertainty_df = get_uncertainty_df(calib_path, mcmc_tables, project.plots)
scenario_list = uncertainty_df['scenario'].unique()
# make output directories
output_dir = f"{model}_{region}_{dirname}"
base_dir = os.path.join("outputs", output_dir)
os.makedirs(base_dir, exist_ok=True)
dirs_to_make = ["MLE", "median","csv_files"]
for dir_to_make in dirs_to_make:
os.makedirs(os.path.join(base_dir, dir_to_make), exist_ok=True)
titles = {
"notifications": "Daily number of notified Covid-19 cases",
"infection_deaths": "Daily number of Covid-19 deaths",
"accum_deaths": "Cumulative number of Covid-19 deaths",
"incidence": "Daily incidence (incl. asymptomatics and undetected)",
"hospital_occupancy": "Hospital beds occupied by Covid-19 patients",
"icu_occupancy": "ICU beds occupied by Covid-19 patients",
"new_hospital_admissions": "New hospital admissions",
"cdr": "Proportion detected among symptomatics",
"proportion_vaccinated": "Proportion vaccinated",
"prop_incidence_strain_delta": "Proportion of Delta variant in new cases",
"accum_notifications": "Cumulative Covid-19 notifications"
}
def plot_outputs(output_type, output_name, scenario_list, sc_linestyles, sc_colors, show_v_lines=False, x_min=590, x_max=775):
# plot options
title = titles[output_name]
title_fontsize = 18
label_font_size = 15
linewidth = 3
n_xticks = 10
# initialise figure
fig = plt.figure(figsize=(12, 8))
plt.style.use("ggplot")
axis = fig.add_subplot()
# prepare colors for uncertainty
n_scenarios_to_plot = len(scenario_list)
uncertainty_colors = _apply_transparency(COLORS[:n_scenarios_to_plot], ALPHAS[:n_scenarios_to_plot])
if output_type == "MLE":
derived_output_tables = db.load.load_derived_output_tables(calib_path, column=output_name)
for i, scenario in enumerate(scenario_list):
linestyle = sc_linestyles[scenario]
color = sc_colors[scenario]
if output_type == "MLE":
times, values = get_output_from_run_id(output_name, mcmc_tables, derived_output_tables, "MLE", scenario)
axis.plot(times, values, color=color, linestyle=linestyle, linewidth=linewidth)
elif output_type == "median":
_plot_uncertainty(
axis,
uncertainty_df,
output_name,
scenario,
x_max,
x_min,
[None, None, None, color],  # placeholders; only the final (line) color entry is used here
overlay_uncertainty=False,
start_quantile=0,
zorder=scenario + 1,
linestyle=linestyle,
linewidth=linewidth,
)
elif output_type == "uncertainty":
scenario_colors = uncertainty_colors[i]
_plot_uncertainty(
axis,
uncertainty_df,
output_name,
scenario,
x_max,
x_min,
scenario_colors,
overlay_uncertainty=True,
start_quantile=0,
zorder=scenario + 1,
)
else:
print("Please use supported output_type option")
axis.set_xlim((x_min, x_max))
axis.set_title(title, fontsize=title_fontsize)
plt.setp(axis.get_yticklabels(), fontsize=label_font_size)
plt.setp(axis.get_xticklabels(), fontsize=label_font_size)
change_xaxis_to_date(axis, REF_DATE)
plt.locator_params(axis="x", nbins=n_xticks)
if output_name == "accum_notifications":
axis.set_ylabel("million")
if show_v_lines:
release_dates = {}
y_max = plt.gca().get_ylim()[1]
linestyles = ["dashdot", "solid"]
i = 0
for time, date in release_dates.items():
plt.vlines(time, ymin=0, ymax=y_max, linestyle=linestyles[i])
text = f"Lockdown relaxed on {date}"
plt.text(time - 5, .5*y_max, text, rotation=90, fontsize=11)
i += 1
return axis
```
# Scenario plots with single lines
```
output_names = ["notifications", "icu_occupancy","accum_deaths","accum_notifications"]
scenario_x_min, scenario_x_max = 610, 913
sc_to_plot = [0, 2]
legend = ["With vaccine", "Without vaccine"]
lift_time = 731
vaccine_time = 481 # 25 April, 2021
text_font = 14
sc_colors = [COLOR_THEME[i] for i in scenario_list]
sc_linestyles = ["solid"] * (len(scenario_list))
for output_type in ["median"]:
for output_name in output_names:
plot_outputs(output_type, output_name, sc_to_plot, sc_linestyles, sc_colors, False, x_min=scenario_x_min, x_max=scenario_x_max)
path = os.path.join(base_dir, output_type, f"{output_name}.png")
plt.legend(labels=legend, fontsize=text_font, facecolor="white",loc = "lower right")
ymax = plt.gca().get_ylim()[1]
# if "accum" in output_name:
# plt.vlines(x=vaccine_time,ymin=0,ymax=1.05*ymax, linestyle="dashed") # 31 Dec 2021
# plt.text(x=vaccine_time + 3, y=ymax, s="Vaccination starts", fontsize = text_font, rotation=90, va="top")
# else:
plt.vlines(x=lift_time,ymin=0,ymax=1.05*ymax, linestyle="dashed") # 31 Dec 2021
plt.text(x=(scenario_x_min + lift_time) / 2., y=1.* ymax, s="Vaccination phase", ha="center", fontsize = text_font)
plt.text(x=lift_time + 3, y=ymax/2, s="Restrictions lifted", fontsize = text_font, rotation=90, va="top")
plt.savefig(path)
```
# Make Adverse Effects figures
```
params = project.param_set.baseline.to_dict()
ae_risk = {
"AstraZeneca": params["vaccination_risk"]["tts_rate"],
"mRNA": params["vaccination_risk"]["myocarditis_rate"]
}
agg_agegroups = ["10_14","15_19", "20_29", "30_39", "40_49", "50_59", "60_69", "70_plus"]
text_font = 12
vacc_scenarios = {
"mRNA": 2,
"AstraZeneca": 2,
}
adverse_effects = {
"mRNA": "myocarditis",
"AstraZeneca": "thrombosis with thrombocytopenia syndrome",
}
adverse_effects_short= {
"mRNA": "myocarditis",
"AstraZeneca": "tts",
}
left_title = "COVID-19-associated hospitalisations prevented"
def format_age_label(age_bracket):
if age_bracket.startswith("70"):
return "70+"
else:
return age_bracket.replace("_", "-")
def make_ae_figure(vacc_scenario, log_scale=False):
trimmed_df = uncertainty_df[
(uncertainty_df["scenario"]==vacc_scenarios[vacc_scenario]) & (uncertainty_df["time"]==913)
]
right_title = f"Cases of {adverse_effects[vacc_scenario]}"
fig = plt.figure(figsize=(10, 4))
plt.style.use("default")
axis = fig.add_subplot()
h_max = 0
delta_agegroup = 1.2 if log_scale else 4000
barwidth = .7
text_offset = 0.5 if log_scale else 20
unc_color = "black"
unc_lw = 1.
for i, age_bracket in enumerate(agg_agegroups):
y = len(agg_agegroups) - i - .5
plt.text(x=delta_agegroup / 2, y=y, s=format_age_label(age_bracket), ha="center", va="center", fontsize=text_font)
# get outputs
hosp_output_name = f"abs_diff_cumulative_hospital_admissionsXagg_age_{age_bracket}"
ae_output_name = f"abs_diff_cumulative_{adverse_effects_short[vacc_scenario]}_casesXagg_age_{age_bracket}"
prev_hosp_df = trimmed_df[trimmed_df["type"] == hosp_output_name]
prev_hosp_values = [ # median, lower, upper
float(prev_hosp_df['value'][prev_hosp_df["quantile"] == q]) for q in [0.5, 0.025, 0.975]
]
log_prev_hosp_values = [math.log10(v) for v in prev_hosp_values]
ae_df = trimmed_df[trimmed_df["type"] == ae_output_name]
ae_values = [ # median, lower, upper
- float(ae_df['value'][ae_df["quantile"] == q]) for q in [0.5, 0.975, 0.025]
]
log_ae_values = [max(math.log10(v), 0) for v in ae_values]
if log_scale:
plot_h_values = log_prev_hosp_values
plot_ae_values = log_ae_values
else:
plot_h_values = prev_hosp_values
plot_ae_values = ae_values
h_max = max(plot_h_values[2], h_max)
origin = 0
# hospital
rect = mpatches.Rectangle((origin, y - barwidth/2), width=-plot_h_values[0], height=barwidth, facecolor="cornflowerblue")
axis.add_patch(rect)
plt.hlines(y=y, xmin=-plot_h_values[1], xmax=-plot_h_values[2], color=unc_color, linewidth=unc_lw)
disp_val = int(prev_hosp_values[0])
plt.text(x= -plot_h_values[0] - text_offset, y=y + barwidth/2, s=int(disp_val), ha="right", va="center", fontsize=text_font*.7)
min_bar_length = 0
rect = mpatches.Rectangle((delta_agegroup + origin, y - barwidth/2), width=max(plot_ae_values[0], min_bar_length), height=barwidth, facecolor="tab:red")
axis.add_patch(rect)
plt.hlines(y=y, xmin=delta_agegroup + origin + plot_ae_values[1], xmax=delta_agegroup + origin + plot_ae_values[2], color=unc_color, linewidth=unc_lw)
disp_val = int(ae_values[0])
plt.text(x=delta_agegroup + origin + max(plot_ae_values[0], min_bar_length) + text_offset, y=y + barwidth/2, s=int(disp_val), ha="left", va="center", fontsize=text_font*.7)
# main title
axis.set_title(f"Benefit/Risk analysis with {vacc_scenario} vaccine", fontsize = text_font + 2)
# x axis ticks
if log_scale:
max_val_display = math.ceil(h_max)
else:
magnitude = 500
max_val_display = math.ceil(h_max / magnitude) * magnitude
# sub-titles
plt.text(x= - max_val_display / 2, y=len(agg_agegroups) + .3, s=left_title, ha="center", fontsize=text_font)
plt.text(x= max_val_display / 2 + delta_agegroup, y=len(agg_agegroups) + .3, s=right_title, ha="center", fontsize=text_font)
if log_scale:
ticks = range(max_val_display + 1)
rev_ticks = [-t for t in ticks]
rev_ticks.reverse()
x_ticks = rev_ticks + [delta_agegroup + t for t in ticks]
labels = [10**(p) for p in range(max_val_display + 1)]
rev_labels = [l for l in labels]
rev_labels.reverse()
x_labels = rev_labels + labels
x_labels[max_val_display] = x_labels[max_val_display + 1] = 0
else:
n_ticks = 6
x_ticks = [-max_val_display + j * (max_val_display/(n_ticks - 1)) for j in range(n_ticks)] + [delta_agegroup + j * (max_val_display/(n_ticks - 1)) for j in range(n_ticks)]
rev_n_ticks = x_ticks[:n_ticks]
rev_n_ticks.reverse()
x_labels = [int(-v) for v in x_ticks[:n_ticks]] + [int(-v) for v in rev_n_ticks]
plt.xticks(ticks=x_ticks, labels=x_labels)
# x, y lims
axis.set_xlim((-max_val_display, max_val_display + delta_agegroup))
axis.set_ylim((0, len(agg_agegroups) + 1))
# remove axes
axis.set_frame_on(False)
axis.axes.get_yaxis().set_visible(False)
log_ext = "_log_scale" if log_scale else ""
path = os.path.join(base_dir, f"{vacc_scenario}_adverse_effects{log_ext}.png")
plt.tight_layout()
plt.savefig(path, dpi=600)
for vacc_scenario in ["mRNA", "AstraZeneca"]:
for log_scale in [False,True]:
make_ae_figure(vacc_scenario, log_scale)
```
# Counterfactual no vaccine scenario
```
output_type = "uncertainty"
output_names = ["notifications", "icu_occupancy", "accum_deaths"]
sc_to_plot = [0, 1]
x_min, x_max = 400, 670
vacc_start = 426
for output_name in output_names:
axis = plot_outputs(output_type, output_name, sc_to_plot, sc_linestyles, sc_colors, False, x_min=400, x_max=670)
y_max = plt.gca().get_ylim()[1]
plt.vlines(x=vacc_start, ymin=0, ymax=y_max, linestyle="dashdot")
plt.text(x=vacc_start - 5, y=.6 * y_max, s="Vaccination starts", rotation=90, fontsize=12)
path = os.path.join(base_dir, f"{output_name}_counterfactual.png")
plt.tight_layout()
plt.savefig(path, dpi=600)
```
# number of lives saved
```
today = 660 # 21 Oct
df = uncertainty_df[(uncertainty_df["type"] == "accum_deaths") & (uncertainty_df["quantile"] == 0.5) & (uncertainty_df["time"] == today)]
baseline = float(df[df["scenario"] == 0]["value"])
counterfact = float(df[df["scenario"] == 1]["value"])
print(counterfact - baseline)
```
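The pandas filtering above reads as: pick the median (quantile 0.5) cumulative-deaths value for each scenario at the chosen day, then subtract. A framework-free sketch, with hypothetical toy values standing in for `uncertainty_df` rows:

```python
# Toy stand-ins for rows of uncertainty_df (hypothetical values)
rows = [
    {"type": "accum_deaths", "quantile": 0.5, "time": 660, "scenario": 0, "value": 28000.0},
    {"type": "accum_deaths", "quantile": 0.5, "time": 660, "scenario": 1, "value": 43500.0},
]

def median_accum_deaths(rows, scenario, time):
    # Select the median (quantile 0.5) cumulative-deaths value for one scenario
    for r in rows:
        if (r["type"] == "accum_deaths" and r["quantile"] == 0.5
                and r["time"] == time and r["scenario"] == scenario):
            return r["value"]
    raise KeyError("no matching row")

baseline = median_accum_deaths(rows, scenario=0, time=660)       # with vaccination
counterfactual = median_accum_deaths(rows, scenario=1, time=660) # no-vaccine scenario
print(counterfactual - baseline)  # 15500.0
```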
|
github_jupyter
|
# BSSN Quantities in terms of ADM Quantities
## Author: Zach Etienne
[comment]: <> (Abstract: TODO)
**Notebook Status:** <font color='orange'><b> Self-Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**
### NRPy+ Source Code for this module: [BSSN_in_terms_of_ADM.py](../edit/BSSN/BSSN_in_terms_of_ADM.py)
## Introduction:
This module documents the conversion of ADM variables:
$$\left\{\gamma_{ij}, K_{ij}, \alpha, \beta^i\right\}$$
into BSSN variables
$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$
in the desired curvilinear basis (given by `reference_metric::CoordSystem`). Then it rescales the resulting BSSNCurvilinear variables (as defined in [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb)) into the form needed for solving Einstein's equations with the BSSN formulation:
$$\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}.$$
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules; set the desired output BSSN Curvilinear coordinate system to Spherical
1. [Step 2](#adm2bssn): Perform the ADM-to-BSSN conversion for 3-metric, extrinsic curvature, and gauge quantities
1. [Step 2.a](#adm2bssn_gamma): Convert ADM $\gamma_{ij}$ to BSSN $\bar{\gamma}_{ij}$; rescale to get $h_{ij}$
1. [Step 2.b](#admexcurv_convert): Convert the ADM extrinsic curvature $K_{ij}$ to BSSN $\bar{A}_{ij}$ and $K$; rescale to get $a_{ij}$, $K$.
1. [Step 2.c](#lambda): Define $\bar{\Lambda}^i$
1. [Step 2.d](#conformal): Define the conformal factor variable `cf`
1. [Step 3](#code_validation): Code Validation against `BSSN.BSSN_in_terms_of_ADM` NRPy+ module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```
# Step 1: Import needed core NRPy+ modules
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import sys # Standard Python modules for multiplatform OS-level functions
import BSSN.BSSN_quantities as Bq # NRPy+: This module depends on the parameter EvolvedConformalFactor_cf,
# which is defined in BSSN.BSSN_quantities
# Step 1.a: Set DIM=3, as we're using a 3+1 decomposition of Einstein's equations
DIM=3
```
<a id='adm2bssn'></a>
# Step 2: Perform the ADM-to-BSSN conversion for 3-metric, extrinsic curvature, and gauge quantities \[Back to [top](#toc)\]
$$\label{adm2bssn}$$
Here we convert ADM quantities to their BSSN Curvilinear counterparts.
<a id='adm2bssn_gamma'></a>
## Step 2.a: Convert ADM $\gamma_{ij}$ to BSSN $\bar{\gamma}_{ij}$; rescale to get $h_{ij}$ \[Back to [top](#toc)\]
$$\label{adm2bssn_gamma}$$
We have (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
$$
\bar{\gamma}_{i j} = \left(\frac{\bar{\gamma}}{\gamma}\right)^{1/3} \gamma_{ij},
$$
where we always make the choice $\bar{\gamma} = \hat{\gamma}$.
After constructing $\bar{\gamma}_{ij}$, we rescale to get $h_{ij}$ according to the prescription described in the [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
$$
h_{ij} = (\bar{\gamma}_{ij} - \hat{\gamma}_{ij})/\text{ReDD[i][j]}.
$$
```
# Step 2: All ADM quantities were input into this function in the Spherical or Cartesian
# basis, as functions of r,th,ph or x,y,z, respectively. In Steps 1 and 2 above,
# we converted them to the xx0,xx1,xx2 basis, and as functions of xx0,xx1,xx2.
# Here we convert ADM quantities to their BSSN Curvilinear counterparts:
# Step 2.a: Convert ADM $\gamma_{ij}$ to BSSN $\bar{gamma}_{ij}$:
# We have (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
def gammabarDD_hDD(gammaDD):
global gammabarDD,hDD
if rfm.have_already_called_reference_metric_function == False:
print("BSSN.BSSN_in_terms_of_ADM.hDD_given_ADM(): Must call reference_metric() first!")
sys.exit(1)
# \bar{gamma}_{ij} = (\frac{\bar{gamma}}{gamma})^{1/3}*gamma_{ij}.
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
gammabarDD = ixp.zerorank2()
hDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
gammabarDD[i][j] = (rfm.detgammahat/gammaDET)**(sp.Rational(1,3))*gammaDD[i][j]
hDD[i][j] = (gammabarDD[i][j] - rfm.ghatDD[i][j]) / rfm.ReDD[i][j]
```
<a id='admexcurv_convert'></a>
## Step 2.b: Convert the ADM extrinsic curvature $K_{ij}$ to BSSN quantities $\bar{A}_{ij}$ and $K={\rm tr}(K_{ij})$; rescale $\bar{A}_{ij}$ to get $a_{ij}$ \[Back to [top](#toc)\]
$$\label{admexcurv_convert}$$
Convert the ADM extrinsic curvature $K_{ij}$ to the trace-free extrinsic curvature $\bar{A}_{ij}$, plus the trace of the extrinsic curvature $K$, where (Eq. 3 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)):
\begin{align}
K &= \gamma^{ij} K_{ij} \\
\bar{A}_{ij} &= \left(\frac{\bar{\gamma}}{\gamma}\right)^{1/3} \left(K_{ij} - \frac{1}{3} \gamma_{ij} K \right)
\end{align}
After constructing $\bar{A}_{ij}$, we rescale to get $a_{ij}$ according to the prescription described in the [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
$$
a_{ij} = \bar{A}_{ij}/\text{ReDD[i][j]}.
$$
```
# Step 2.b: Convert the extrinsic curvature K_{ij} to the trace-free extrinsic
# curvature \bar{A}_{ij}, plus the trace of the extrinsic curvature K,
# where (Eq. 3 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)):
def trK_AbarDD_aDD(gammaDD,KDD):
global trK,AbarDD,aDD
if rfm.have_already_called_reference_metric_function == False:
print("BSSN.BSSN_in_terms_of_ADM.trK_AbarDD(): Must call reference_metric() first!")
sys.exit(1)
# \bar{gamma}_{ij} = (\frac{\bar{gamma}}{gamma})^{1/3}*gamma_{ij}.
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
# K = gamma^{ij} K_{ij}, and
# \bar{A}_{ij} &= (\frac{\bar{gamma}}{gamma})^{1/3}*(K_{ij} - \frac{1}{3}*gamma_{ij}*K)
trK = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
trK += gammaUU[i][j]*KDD[i][j]
AbarDD = ixp.zerorank2()
aDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
AbarDD[i][j] = (rfm.detgammahat/gammaDET)**(sp.Rational(1,3))*(KDD[i][j] - sp.Rational(1,3)*gammaDD[i][j]*trK)
aDD[i][j] = AbarDD[i][j] / rfm.ReDD[i][j]
```
<a id='lambda'></a>
## Step 2.c: Assuming the ADM 3-metric $\gamma_{ij}$ is given as an explicit function of `(xx0,xx1,xx2)`, convert to BSSN $\bar{\Lambda}^i$; rescale to compute $\lambda^i$ \[Back to [top](#toc)\]
$$\label{lambda}$$
To define $\bar{\Lambda}^i$ we implement Eqs. 4 and 5 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf):
$$
\bar{\Lambda}^i = \bar{\gamma}^{jk}\left(\bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}\right).
$$
The [reference_metric.py](../edit/reference_metric.py) module provides us with exact, closed-form expressions for $\hat{\Gamma}^i_{jk}$, so here we need only compute exact expressions for $\bar{\Gamma}^i_{jk}$, based on $\gamma_{ij}$ given as an explicit function of `(xx0,xx1,xx2)`. This is particularly useful when setting up initial data.
After constructing $\bar{\Lambda}^i$, we rescale to get $\lambda^i$ according to the prescription described in the [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
$$
\lambda^i = \bar{\Lambda}^i/\text{ReU[i]}.
$$
```
# Step 2.c: Define \bar{Lambda}^i (Eqs. 4 and 5 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)):
def LambdabarU_lambdaU__exact_gammaDD(gammaDD):
global LambdabarU,lambdaU
# \bar{Lambda}^i = \bar{gamma}^{jk}(\bar{Gamma}^i_{jk} - \hat{Gamma}^i_{jk}).
gammabarDD_hDD(gammaDD)
gammabarUU, gammabarDET = ixp.symm_matrix_inverter3x3(gammabarDD)
# First compute Christoffel symbols \bar{Gamma}^i_{jk}, with respect to barred metric:
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
GammabarUDD[i][j][k] += sp.Rational(1,2)*gammabarUU[i][l]*( sp.diff(gammabarDD[l][j],rfm.xx[k]) +
sp.diff(gammabarDD[l][k],rfm.xx[j]) -
sp.diff(gammabarDD[j][k],rfm.xx[l]) )
# Next evaluate \bar{Lambda}^i, based on GammabarUDD above and GammahatUDD
# (from the reference metric):
LambdabarU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
LambdabarU[i] += gammabarUU[j][k] * (GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k])
for i in range(DIM):
# We evaluate LambdabarU[i] here to ensure proper cancellations. If these cancellations
# are not applied, certain expressions (e.g., lambdaU[0] in StaticTrumpet) will
# cause SymPy's (v1.5+) CSE algorithm to hang
LambdabarU[i] = LambdabarU[i].doit()
lambdaU = ixp.zerorank1()
for i in range(DIM):
lambdaU[i] = LambdabarU[i] / rfm.ReU[i]
```
<a id='conformal'></a>
## Step 2.d: Define the conformal factor variable `cf` \[Back to [top](#toc)\]
$$\label{conformal}$$
We define the conformal factor variable `cf` based on the setting of the `"BSSN_quantities::EvolvedConformalFactor_cf"` parameter.
For example if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"phi"`, we can use Eq. 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf), which in arbitrary coordinates is written:
$$
\phi = \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right).
$$
Alternatively if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"chi"`, then
$$
\chi = e^{-4 \phi} = \exp\left(-4 \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right)
= \exp\left(-\frac{1}{3} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) = \left(\frac{\gamma}{\bar{\gamma}}\right)^{-1/3}.
$$
Finally if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"W"`, then
$$
W = e^{-2 \phi} = \exp\left(-2 \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) =
\exp\left(-\frac{1}{6} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) =
\left(\frac{\gamma}{\bar{\gamma}}\right)^{-1/6}.
$$
```
# Step 2.d: Set the conformal factor variable cf, which is set
# by the "BSSN_quantities::EvolvedConformalFactor_cf" parameter. For example if
# "EvolvedConformalFactor_cf" is set to "phi", we can use Eq. 3 of
# [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf),
# which in arbitrary coordinates is written:
def cf_from_gammaDD(gammaDD):
global cf
# \bar{Lambda}^i = \bar{gamma}^{jk}(\bar{Gamma}^i_{jk} - \hat{Gamma}^i_{jk}).
gammabarDD_hDD(gammaDD)
gammabarUU, gammabarDET = ixp.symm_matrix_inverter3x3(gammabarDD)
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
cf = sp.sympify(0)
if par.parval_from_str("EvolvedConformalFactor_cf") == "phi":
# phi = \frac{1}{12} log(\frac{gamma}{\bar{gamma}}).
cf = sp.Rational(1,12)*sp.log(gammaDET/gammabarDET)
elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi":
# chi = exp(-4*phi) = exp(-4*\frac{1}{12}*(\frac{gamma}{\bar{gamma}}))
# = exp(-\frac{1}{3}*log(\frac{gamma}{\bar{gamma}})) = (\frac{gamma}{\bar{gamma}})^{-1/3}.
#
cf = (gammaDET/gammabarDET)**(-sp.Rational(1,3))
elif par.parval_from_str("EvolvedConformalFactor_cf") == "W":
# W = exp(-2*phi) = exp(-2*\frac{1}{12}*log(\frac{gamma}{\bar{gamma}}))
# = exp(-\frac{1}{6}*log(\frac{gamma}{\bar{gamma}})) = (\frac{gamma}{bar{gamma}})^{-1/6}.
cf = (gammaDET/gammabarDET)**(-sp.Rational(1,6))
else:
print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.")
sys.exit(1)
```
<a id='betvet'></a>
## Step 2.e: Rescale $\beta^i$ and $B^i$ to compute $\mathcal{V}^i={\rm vet}^i$ and $\mathcal{B}^i={\rm bet}^i$, respectively \[Back to [top](#toc)\]
$$\label{betvet}$$
We rescale $\beta^i$ and $B^i$ according to the prescription described in the [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
\begin{align}
\mathcal{V}^i &= \beta^i/\text{ReU[i]}\\
\mathcal{B}^i &= B^i/\text{ReU[i]}.
\end{align}
```
# Step 2.e: Rescale beta^i and B^i according to the prescription described in
# the [BSSN in curvilinear coordinates tutorial notebook](Tutorial-BSSNCurvilinear.ipynb)
# (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)):
#
# \mathcal{V}^i &= beta^i/(ReU[i])
# \mathcal{B}^i &= B^i/(ReU[i])
def betU_vetU(betaU,BU):
global vetU,betU
if rfm.have_already_called_reference_metric_function == False:
print("BSSN.BSSN_in_terms_of_ADM.bet_vet(): Must call reference_metric() first!")
sys.exit(1)
vetU = ixp.zerorank1()
betU = ixp.zerorank1()
for i in range(DIM):
vetU[i] = betaU[i] / rfm.ReU[i]
betU[i] = BU[i] / rfm.ReU[i]
```
<a id='code_validation'></a>
# Step 3: Code Validation against `BSSN.BSSN_in_terms_of_ADM` module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for [UIUC initial data](Tutorial-ADM_Initial_Data-UIUC_BlackHole.ipynb) between
1. this tutorial and
2. the NRPy+ [BSSN.BSSN_in_terms_of_ADM](../edit/BSSN/BSSN_in_terms_of_ADM.py) module.
As no basis transformation is performed, we analyze these expressions in their native, Spherical coordinates.
```
# Step 3.a: Set the desired *output* coordinate system to Spherical:
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric()
# Step 3.b: Set up initial data; assume UIUC spinning black hole initial data
import BSSN.UIUCBlackHole as uibh
uibh.UIUCBlackHole(ComputeADMGlobalsOnly=True)
# Step 3.c: Call above functions to convert ADM to BSSN curvilinear
gammabarDD_hDD( uibh.gammaSphDD)
trK_AbarDD_aDD( uibh.gammaSphDD,uibh.KSphDD)
LambdabarU_lambdaU__exact_gammaDD(uibh.gammaSphDD)
cf_from_gammaDD( uibh.gammaSphDD)
betU_vetU( uibh.betaSphU,uibh.BSphU)
# Step 3.d: Now load the BSSN_in_terms_of_ADM module and perform the same conversion
import BSSN.BSSN_in_terms_of_ADM as BitoA
BitoA.gammabarDD_hDD( uibh.gammaSphDD)
BitoA.trK_AbarDD_aDD( uibh.gammaSphDD,uibh.KSphDD)
BitoA.LambdabarU_lambdaU__exact_gammaDD(uibh.gammaSphDD)
BitoA.cf_from_gammaDD( uibh.gammaSphDD)
BitoA.betU_vetU( uibh.betaSphU,uibh.BSphU)
# Step 3.e: Perform the consistency check
print("Consistency check between this tutorial notebook and BSSN.BSSN_in_terms_of_ADM NRPy+ module: ALL SHOULD BE ZERO.")
print("cf - BitoA.cf = " + str(cf - BitoA.cf))
print("trK - BitoA.trK = " + str(trK - BitoA.trK))
# alpha is the only variable that remains unchanged:
# print("alpha - BitoA.alpha = " + str(alpha - BitoA.alpha))
for i in range(DIM):
print("vetU["+str(i)+"] - BitoA.vetU["+str(i)+"] = " + str(vetU[i] - BitoA.vetU[i]))
print("betU["+str(i)+"] - BitoA.betU["+str(i)+"] = " + str(betU[i] - BitoA.betU[i]))
print("lambdaU["+str(i)+"] - BitoA.lambdaU["+str(i)+"] = " + str(lambdaU[i] - BitoA.lambdaU[i]))
for j in range(DIM):
print("hDD["+str(i)+"]["+str(j)+"] - BitoA.hDD["+str(i)+"]["+str(j)+"] = "
+ str(hDD[i][j] - BitoA.hDD[i][j]))
print("aDD["+str(i)+"]["+str(j)+"] - BitoA.aDD["+str(i)+"]["+str(j)+"] = "
+ str(aDD[i][j] - BitoA.aDD[i][j]))
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-BSSN_in_terms_of_ADM.pdf](Tutorial-BSSN_in_terms_of_ADM.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_in_terms_of_ADM")
```
|
github_jupyter
|
# Part 4: Federated Learning with Model Averaging
**Recap**: In Part 2 of this tutorial, we trained a model using a very simple version of Federated Learning. This required each data owner to trust the model owner to be able to see their gradients.
**Description:** In this tutorial, we'll show how to use the advanced aggregation tools from Part 3 to allow the weights to be aggregated by a trusted "secure worker" before the final resulting model is sent back to the model owner (us).
In this way, only the secure worker can see whose weights came from whom. We might be able to tell which parts of the model changed, but we do NOT know which worker (bob or alice) made which change, which creates a layer of privacy.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Jason Mancuso - Twitter: [@jvmancuso](https://twitter.com/jvmancuso)
```
import torch
import syft as sy
import copy
hook = sy.TorchHook(torch)
from torch import nn
from syft import optim
```
# Step 1: Create Data Owners
First, we're going to create two data owners (Bob and Alice) each with a small amount of data. We're also going to initialize a secure machine called "secure_worker". In practice this could be secure hardware (such as Intel's SGX) or simply a trusted intermediary.
```
# create a couple workers
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
secure_worker = sy.VirtualWorker(hook, id="secure_worker")
bob.add_workers([alice, secure_worker])
alice.add_workers([bob, secure_worker])
secure_worker.add_workers([alice, bob])
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# get pointers to training data on each worker by
# sending some training data to bob and alice
bobs_data = data[0:2].send(bob)
bobs_target = target[0:2].send(bob)
alices_data = data[2:].send(alice)
alices_target = target[2:].send(alice)
```
# Step 2: Create Our Model
For this example, we're going to train with a simple Linear model. We can initialize it normally using PyTorch's nn.Linear constructor.
```
# Initialize A Toy Model
model = nn.Linear(2,1)
```
# Step 3: Send a Copy of the Model to Alice and Bob
Next, we need to send a copy of the current model to Alice and Bob so that they can perform steps of learning on their own datasets.
```
bobs_model = model.copy().send(bob)
alices_model = model.copy().send(alice)
bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1)
alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1)
```
# Step 4: Train Bob's and Alice's Models (in parallel)
As is conventional with Federated Learning via Secure Averaging, each data owner first trains their model for several iterations locally before the models are averaged together.
```
for i in range(10):
# Train Bob's Model
bobs_opt.zero_grad()
bobs_pred = bobs_model(bobs_data)
bobs_loss = ((bobs_pred - bobs_target)**2).sum()
bobs_loss.backward()
bobs_opt.step(bobs_data.shape[0])
bobs_loss = bobs_loss.get().data
# Train Alice's Model
alices_opt.zero_grad()
alices_pred = alices_model(alices_data)
alices_loss = ((alices_pred - alices_target)**2).sum()
alices_loss.backward()
alices_opt.step(alices_data.shape[0])
alices_loss = alices_loss.get().data
print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss))
```
# Step 5: Send Both Updated Models to a Secure Worker
Now that each data owner has a partially trained model, it's time to average them together in a secure way. We achieve this by instructing Alice and Bob to send their model to the secure (trusted) server.
Note that this use of our API means that each model is sent DIRECTLY to the secure_worker. We never see it.
```
alices_model.move(secure_worker)
bobs_model.move(secure_worker)
```
# Step 6: Average the Models
Finally, the last step for this training epoch is to average Bob and Alice's trained models together and then use this to set the values for our global "model".
```
model.weight.data.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get())
model.bias.data.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get())
```
# Rinse and Repeat
And now we just need to iterate this multiple times!
```
iterations = 10
worker_iters = 5
for a_iter in range(iterations):
bobs_model = model.copy().send(bob)
alices_model = model.copy().send(alice)
bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1)
alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1)
for wi in range(worker_iters):
# Train Bob's Model
bobs_opt.zero_grad()
bobs_pred = bobs_model(bobs_data)
bobs_loss = ((bobs_pred - bobs_target)**2).sum()
bobs_loss.backward()
bobs_opt.step(bobs_data.shape[0])
bobs_loss = bobs_loss.get().data
# Train Alice's Model
alices_opt.zero_grad()
alices_pred = alices_model(alices_data)
alices_loss = ((alices_pred - alices_target)**2).sum()
alices_loss.backward()
alices_opt.step(alices_data.shape[0])
alices_loss = alices_loss.get().data
alices_model.move(secure_worker)
bobs_model.move(secure_worker)
model.weight.data.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get())
model.bias.data.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get())
print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss))
```
Lastly, we want to make sure that our resulting model learned correctly, so we'll evaluate it on a test dataset. In this toy problem, we'll use the original data, but in practice we'll want to use new data to understand how well the model generalizes to unseen examples.
```
preds = model(data)
loss = ((preds - target) ** 2).sum()
print(preds)
print(target)
print(loss.data)
```
In this toy example, the averaged model underfits relative to how a plaintext model trained locally would behave; however, we were able to train it without exposing each worker's training data. We were also able to aggregate the updated models from each worker on a trusted aggregator to prevent data leakage to the model owner.
In a future tutorial, we'll aim to do our trusted aggregation directly with the gradients, so that we can update the model with better gradient estimates and arrive at a stronger model.
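The secure-averaging step above can also be sketched in a framework-agnostic way. This minimal plain-Python illustration (no PySyft or PyTorch, with hypothetical toy parameter values) assumes each worker's model is represented as a dict mapping parameter names to lists of floats:

```python
def federated_average(worker_models):
    """Average several workers' parameters elementwise (the FedAvg step).

    worker_models: list of dicts mapping parameter name -> list of floats.
    Returns a new dict holding the elementwise mean of each parameter.
    """
    n = len(worker_models)
    return {
        name: [sum(m[name][i] for m in worker_models) / n
               for i in range(len(worker_models[0][name]))]
        for name in worker_models[0]
    }

# Two workers' locally trained parameters (toy values)
bob = {"weight": [0.25, 0.5], "bias": [0.125]}
alice = {"weight": [0.75, 0.0], "bias": [0.375]}

global_params = federated_average([bob, alice])
print(global_params)  # {'weight': [0.5, 0.25], 'bias': [0.25]}
```

In the real tutorial, the averaging is performed on `secure_worker`, so neither the model owner nor the other worker sees an individual model's weights.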
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
##### Copyright 2022 The Cirq Developers
```
# @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Parameter Sweeps
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/params"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/params.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/params.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/params.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
```
## Concept of Circuit Parameterization and Sweeps
Suppose you have a quantum circuit and in this circuit there is a gate with some parameter. You might wish to run this circuit for different values of this parameter. An example of this type of circuit is a Rabi flop experiment. This experiment runs a set of quantum computations which 1) starts in $|0\rangle$ state, 2) rotates the state by $\theta$ about the $x$ axis, i.e. applies the gate $\exp(i \theta X)$, and 3) measures the state in the computational basis. Running this experiment for multiple values of $\theta$, and plotting the probability of observing a $|1\rangle$ outcome yields the quintessential $\cos^2$ probability distribution as a function of the parameter $\theta$. To support this type of experiment, Cirq provides the concept of parameterized circuits and parameter sweeps.
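Before moving on, the claimed oscillation can be checked numerically. This is a small NumPy sketch, independent of Cirq, using the identity $\exp(i\theta X)=\cos\theta\, I + i\sin\theta\, X$ (which holds because $X^2=I$):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X matrix
I2 = np.eye(2, dtype=complex)

def p1_after_rotation(theta):
    """Probability of measuring |1> after applying exp(i*theta*X) to |0>."""
    U = np.cos(theta) * I2 + 1j * np.sin(theta) * X
    amp1 = (U @ np.array([1, 0], dtype=complex))[1]  # amplitude on |1>
    return abs(amp1) ** 2

# P(|1>) = sin^2(theta), equivalently P(|0>) = cos^2(theta)
for theta in np.linspace(0, np.pi, 9):
    assert abs(p1_after_rotation(theta) - np.sin(theta) ** 2) < 1e-12
```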
The next cell illustrates parameter sweeps with a simple example. Suppose you want to compare two quantum circuits that are identical except for a single exponentiated `cirq.Z` gate.
```
q0 = cirq.LineQubit(0)
circuit1 = cirq.Circuit([cirq.H(q0), cirq.Z(q0)**0.5, cirq.H(q0), cirq.measure(q0)])
print(f"circuit1:\n{circuit1}")
circuit2 = cirq.Circuit([cirq.H(q0), cirq.Z(q0)**0.25, cirq.H(q0), cirq.measure(q0)])
print(f"circuit2:\n{circuit2}")
```
You could run these circuits separately (either on hardware or in simulation) and collect statistics on the results of each. However, parameter sweeps can do this in a cleaner and more performant manner.
First define a parameter, and construct a circuit that depends on this parameter. Cirq uses [SymPy](https://www.sympy.org/en/index.html){:external}, a symbolic mathematics package, to define parameters. In this example the Sympy parameter is `theta`, which is used to construct a parameterized circuit.
```
import sympy
theta = sympy.Symbol("theta")
circuit = cirq.Circuit([cirq.H(q0), cirq.Z(q0)**theta, cirq.H(q0), cirq.measure(q0)])
print(f"circuit:\n{circuit}")
```
Notice now that the circuit contains a `cirq.Z` gate that is raised to a power, but this power is the parameter `theta`. This is a "parameterized circuit". An equivalent way to construct this circuit, where the parameter is actually a parameter in the gate constructor's arguments, is:
```
circuit = cirq.Circuit(
cirq.H(q0), cirq.ZPowGate(exponent=theta)(q0), cirq.H(q0), cirq.measure(q0)
)
print(f"circuit:\n{circuit}")
```
Note: You can check whether an object in Cirq is parameterized using `cirq.is_parameterized`:
```
cirq.is_parameterized(circuit)
```
Parameterized circuits are just like normal circuits; they just aren't defined in terms of gates that you can actually run on a quantum computer without the additional information about the values of the parameters. Following the example above, you can generate the two circuits (`circuit1` and `circuit2`) by using `cirq.resolve_parameters` and supplying the values that you want the parameter(s) to take:
```
# circuit1 has theta = 0.5
cirq.resolve_parameters(circuit, {"theta": 0.5})
# circuit2 has theta = 0.25
cirq.resolve_parameters(circuit, {"theta": 0.25})
```
More interestingly, you can combine parameterized circuits with a list of parameter assignments when doing things like running circuits or simulating them. These lists of parameter assignments are called "sweeps". For example, you can use a simulator's `run_sweep` method to run simulations for the parameters corresponding to the two circuits defined above.
```
sim = cirq.Simulator()
results = sim.run_sweep(circuit, repetitions=25, params=[{"theta": 0.5}, {"theta": 0.25}])
for result in results:
print(f"param: {result.params}, result: {result}")
```
To recap, you can construct parameterized circuits that depend on parameters that have not yet been assigned a value. These parameterized circuits can then be resolved to circuits with actual values via a dictionary that maps the sympy variable name to the value that parameter should take. You can also construct lists of dictionaries of parameter assignments, called sweeps, and pass this to many functions in Cirq that use circuits to do an action (such as `simulate` or `run`). For each of the elements in the sweep, the function will execute using the parameters as described by the element.
## Constructing Sweeps
The previous example constructed a sweep by simply constructing a list of parameter assignments, `[{"theta": 0.5}, {"theta": 0.25}]`. Cirq also provides other ways to construct sweeps.
One useful method for constructing parameter sweeps is `cirq.Linspace` which creates a sweep over a list of equally spaced elements.
```
# Create a sweep over 5 equally spaced values from 0 to 2.5.
params = cirq.Linspace(key="theta", start=0, stop=2.5, length=5)
for param in params:
print(param)
```
Note: The `Linspace` sweep is composed of `cirq.ParamResolver` instances instead of simple dictionaries. However, you can think of them as effectively the same for most use cases.
If you need to explicitly and individually specify each parameter resolution, you can do it by constructing a list of dictionaries as before. However, you can also use `cirq.Points` to do this more succinctly.
```
params = cirq.Points(key="theta", points=[0, 1, 3])
for param in params:
print(param)
```
If you're working with parameterized circuits, it is very likely you'll need to keep track of multiple parameters. Two common use cases necessitate building a sweep from two constituent sweeps, where the new sweep includes:
- Every possible combination of the elements of each sweep: A cartesian product.
- An element-wise pairing of the two sweeps: A zip.
The following are examples of using the `*` and `+` operators to combine sweeps by cartesian product and zipping, respectively.
```
sweep1 = cirq.Linspace("theta", 0, 1, 5)
sweep2 = cirq.Points("gamma", [0, 3])
# By taking the product of these two sweeps, you can sweep over all possible
# combinations of the parameters.
for param in sweep1 * sweep2:
print(param)
sweep1 = cirq.Points("theta", [1, 2, 3])
sweep2 = cirq.Points("gamma", [0, 3, 4])
# By taking the sum of these two sweeps, you can combine the sweeps
# elementwise (similar to python's zip function):
for param in sweep1 + sweep2:
print(param)
```
`cirq.Linspace` and `cirq.Points` are instances of the `cirq.Sweep` class, which explicitly supports cartesian product with the `*` operation, and zipping with the `+` operation. The `*` operation produces a `cirq.Product` object, and `+` produces a `cirq.Zip` object, both of which are also `Sweep`s. Other mathematical operations will not work in general *between sweeps*.
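Conceptually, `Product` and `Zip` behave like `itertools.product` and Python's built-in `zip` applied to lists of parameter dictionaries. The following plain-Python sketch mimics that behavior; it illustrates the semantics only and is not Cirq's implementation:

```python
import itertools

def sweep_product(sweep1, sweep2):
    """All combinations of two sweeps (what `sweep1 * sweep2` yields in Cirq)."""
    return [{**p1, **p2} for p1, p2 in itertools.product(sweep1, sweep2)]

def sweep_zip(sweep1, sweep2):
    """Elementwise pairing of two sweeps (what `sweep1 + sweep2` yields)."""
    return [{**p1, **p2} for p1, p2 in zip(sweep1, sweep2)]

thetas = [{"theta": t} for t in (0.0, 0.5, 1.0)]
gammas = [{"gamma": g} for g in (0, 3)]

print(len(sweep_product(thetas, gammas)))  # 6
print(sweep_zip(thetas, gammas))  # two paired resolutions (zip stops at the shorter sweep)
```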
## Symbols and Expressions
[SymPy](https://www.sympy.org/en/index.html){:external} is a general symbolic mathematics toolset, and you can leverage this in Cirq to define more complex parameters than have been shown so far. For example, you can define an expression in Sympy and use it to construct circuits that depend on this expression:
```
# Construct an expression for 0.5 * a + 0.25:
expr = 0.5 * sympy.Symbol("a") + 0.25
print(expr)
# Use the expression in the circuit:
circuit = cirq.Circuit(cirq.X(q0)**expr, cirq.measure(q0))
print(f"circuit:\n{circuit}")
```
Both the exponents and parameter arguments of circuit operations can in fact be any general Sympy expression: The previous examples just used single-variable expressions. When you resolve parameters for this circuit, the expressions are evaluated under the given assignments to the variables in the expression.
```
print(cirq.resolve_parameters(circuit, {"a": 0}))
```
Just as before, you can pass a sweep over variable values to `run` or `simulate`, and Cirq will evaluate the expression for each possible value.
```
sim = cirq.Simulator()
results = sim.run_sweep(circuit, repetitions=25, params=cirq.Points('a', [0, 1]))
for result in results:
print(f"param: {result.params}, result: {result}")
```
Sympy supports a large number of numeric functions and methods, which can be used to create fairly sophisticated expressions, like cosine, exponentiation, and more:
```
print(sympy.cos(sympy.Symbol("a"))**sympy.Symbol("b"))
```
Cirq can numerically evaluate all of the expressions Sympy can evaluate. However, if you are running a parameterized circuit on a service (such as a hardware-backed quantum computing service), that service may not support evaluating all expressions. See the documentation for the particular service you're using for details.
As a general workaround, you can instead use Cirq's flattening ability to evaluate the parameters before sending them off to the service.
### Flattening Expressions
Suppose you build a circuit that includes multiple different expressions:
```
a = sympy.Symbol('a')
circuit = cirq.Circuit(cirq.X(q0)**(a / 4), cirq.Y(q0)**(1 - a / 2), cirq.measure(q0))
print(circuit)
```
Flattening replaces every expression in the circuit with a new symbol that is representative of the value of that expression. Additionally, it keeps track of the new symbols and provides a `cirq.ExpressionMap` object to map the old sympy expression objects to the new symbols that replaced them.
```
# Flatten returns two objects, the circuit with new symbols, and the mapping from old to new values.
c_flat, expr_map = cirq.flatten(circuit)
print(c_flat)
print(expr_map)
```
Notice that the new circuit has new symbols, `<a/4>` and `<1-a/2>`, which are explicitly not expressions. You can see this by looking at the value of the exponent in the first gate:
```
first_gate = c_flat[0][q0].gate
print(first_gate.exponent)
# Note this is a symbol, not an expression
print(type(first_gate.exponent))
```
The second object returned by `cirq.flatten` is an object that can be used to map sweeps over the previous symbols to new sweeps over the new expression-symbols. The values assigned to the new expression symbols in the resulting sweep are the old expressions kept track of in the `ExpressionMap`, but resolved with the values provided by the original input sweep.
```
sweep = cirq.Linspace(a, start=0, stop=3, length=4)
print(f"Old {sweep}")
new_sweep = expr_map.transform_sweep(sweep)
print(f"New {new_sweep}")
```
To reinforce: The new sweep is over two new symbols, which each represent the values of the expressions in the original circuit. The values assigned to these new expression symbols are obtained by evaluating the expressions with `a` resolved to a value in `[0, 3]`, according to the old sweep.
You can use these new sweep elements to resolve the parameters of the flattened circuit:
```
for params in new_sweep:
print(c_flat, '=>', end=' ')
print(cirq.resolve_parameters(c_flat, params))
```
Using `cirq.flatten`, you can always take a parameterized circuit with any complicated expressions, plus a sweep, and produce an equivalent circuit with no expressions, only symbols, and a sweep for these new symbols. Because this is a common flow, Cirq provides `cirq.flatten_with_sweep` to do this in one step:
```
c_flat, new_sweep = cirq.flatten_with_sweep(circuit, sweep)
print(c_flat)
print(new_sweep)
```
You can then directly use these objects to run the sweeps. For example, you can use them to perform a simulation:
```
sim = cirq.Simulator()
results = sim.run_sweep(c_flat, repetitions=20, params=new_sweep)
for result in results:
print(result.params, result)
```
You can see that the different flattened parameters have corresponding different results for their simulation.
# Summary
- Cirq circuits can handle arbitrary Sympy expressions in place of exponents and parameter arguments in operations.
- By providing one or a sequence of `ParamResolver`s or dictionaries that resolve the Sympy variables to values, `run`, `simulate`, and other functions can iterate efficiently over different parameter assignments for otherwise identical circuits.
- Sweeps can be created succinctly with `cirq.Points` and `cirq.Linspace`, and composed with each other with `*` and `+`, to create `cirq.Product` and `cirq.Zip` sweeps.
- When the service you're using does not support arbitrary expressions, you can flatten a circuit and sweep into a new circuit that doesn't have complex expressions, and a corresponding new sweep.
# AutoGluon-Tabular in AWS Marketplace
[AutoGluon](https://github.com/awslabs/autogluon) automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data.
This notebook shows how to use AutoGluon-Tabular in AWS Marketplace.
### Contents:
* [Step 1: Subscribe to AutoML algorithm from AWS Marketplace](#Step-1:-Subscribe-to-AutoML-algorithm-from-AWS-Marketplace)
* [Step 2: Set up environment](#Step-2-:-Set-up-environment)
* [Step 3: Prepare and upload data](#Step-3:-Prepare-and-upload-data)
* [Step 4: Train a model](#Step-4:-Train-a-model)
* [Step 5: Deploy the model and perform a real-time inference](#Step-5:-Deploy-the-model-and-perform-a-real-time-inference)
* [Step 6: Use Batch Transform](#Step-6:-Use-Batch-Transform)
* [Step 7: Clean-up](#Step-7:-Clean-up)
### Step 1: Subscribe to AutoML algorithm from AWS Marketplace
1. Open [AutoGluon-Tabular listing from AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism)
2. Read the **Highlights** section and then **product overview** section of the listing.
3. View **usage information** and then **additional resources**.
4. Note the supported instance types and specify the same in the following cell.
5. Next, click on **Continue to subscribe**.
6. Review **End user license agreement**, **support terms**, as well as **pricing information**.
7. Next, click the **Accept Offer** button only if your organization agrees with the EULA, pricing information, and support terms. Once the **Accept offer** button has been clicked, specify the compatible training and inference instance types you wish to use.
**Notes**:
1. If **Continue to configuration** button is active, it means your account already has a subscription to this listing.
2. Once you click on **Continue to configuration** button and then choose region, you will see that a product ARN will appear. This is the algorithm ARN that you need to specify in your training job. However, for this notebook, the algorithm ARN has been specified in **src/algorithm_arns.py** file and you do not need to specify the same explicitly.
### Step 2 : Set up environment
```
#Import necessary libraries.
import os
import boto3
import sagemaker
from time import sleep
from collections import Counter
import numpy as np
import pandas as pd
from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3
from sagemaker.algorithm import AlgorithmEstimator
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import StringDeserializer
from sklearn.metrics import accuracy_score, classification_report
from IPython.core.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell
# Print settings
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 10)
# Account/s3 setup
session = sagemaker.Session()
bucket = session.default_bucket()
prefix = 'sagemaker/autogluon-tabular'
region = session.boto_region_name
role = get_execution_role()
compatible_training_instance_type='ml.m5.4xlarge'
compatible_inference_instance_type='ml.m5.4xlarge'
#Specify algorithm ARN for AutoGluon-Tabular from AWS Marketplace. However, for this notebook, the algorithm ARN
#has been specified in src/algorithm_arns.py file and you do not need to specify the same explicitly.
from src.algorithm_arns import AlgorithmArnProvider
algorithm_arn = AlgorithmArnProvider.get_algorithm_arn(region)
import subprocess
subprocess.run("apt-get update -y", shell=True)
subprocess.run("apt install unzip", shell=True)
```
### Step 3: Prepare and upload data
In this example we'll use the direct-marketing dataset to build a binary classification model that predicts whether customers will accept or decline a marketing offer.
First we'll download the data and split it into train and test sets. AutoGluon does not require a separate validation set (it uses bagged k-fold cross-validation).
```
# Download and unzip the data
subprocess.run(f"aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip .", shell=True)
subprocess.run("unzip -qq -o bank-additional.zip", shell=True)
subprocess.run("rm bank-additional.zip", shell=True)
local_data_path = './bank-additional/bank-additional-full.csv'
data = pd.read_csv(local_data_path)
# Split train/test data
train = data.sample(frac=0.7, random_state=42)
test = data.drop(train.index)
# Split test X/y
label = 'y'
y_test = test[label]
X_test = test.drop(columns=[label])
```
##### Check the data
```
train.head(3)
train.shape
test.head(3)
test.shape
X_test.head(3)
X_test.shape
```
Upload the data to s3
```
train_file = 'train.csv'
train.to_csv(train_file,index=False)
train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix))
test_file = 'test.csv'
test.to_csv(test_file,index=False)
test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix))
X_test_file = 'X_test.csv'
X_test.to_csv(X_test_file,index=False)
X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix))
```
### Step 4: Train a model
Next, let us train a model.
**Note:** Depending on how many underlying models are trained, `train_volume_size` may need to be increased so that they all fit on disk.
```
# Define required label and optional additional parameters
init_args = {
'label': 'y'
}
# Define additional parameters
fit_args = {
# Adding 'best_quality' to presets list will result in better performance (but longer runtime)
'presets': ['optimize_for_deployment'],
}
# Pass fit_args to SageMaker estimator hyperparameters
hyperparameters = {
'init_args': init_args,
'fit_args': fit_args,
'feature_importance': True
}
tags = [{
'Key' : 'AlgorithmName',
'Value' : 'AutoGluon-Tabular'
}]
algo = AlgorithmEstimator(algorithm_arn=algorithm_arn,
role=role,
instance_count=1,
instance_type=compatible_training_instance_type,
sagemaker_session=session,
base_job_name='autogluon',
hyperparameters=hyperparameters,
train_volume_size=100)
inputs = {'training': train_s3_path}
algo.fit(inputs)
```
### Step 5: Deploy the model and perform a real-time inference
##### Deploy a remote endpoint
```
%%time
predictor = algo.deploy(initial_instance_count=1,
instance_type=compatible_inference_instance_type,
serializer=CSVSerializer(),
deserializer=StringDeserializer())
```
##### Predict on unlabeled test data
```
results = predictor.predict(X_test.to_csv(index=False)).splitlines()
# Check output
y_results = np.array([i.split(",")[0] for i in results])
print(Counter(y_results))
```
##### Predict on data that includes label column
Prediction performance metrics will be printed to endpoint logs.
```
results = predictor.predict(test.to_csv(index=False)).splitlines()
# Check output
y_results = np.array([i.split(",")[0] for i in results])
print(Counter(y_results))
```
##### Check that classification performance metrics match evaluation printed to endpoint logs as expected
```
y_results = np.array([i.split(",")[0] for i in results])
print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results)))
print(classification_report(y_true=y_test, y_pred=y_results, digits=6))
```
### Step 6: Use Batch Transform
By including the label column in the test data, you can also evaluate prediction performance (in this case, pass `test_s3_path` instead of `X_test_s3_path`).
```
output_path = f's3://{bucket}/{prefix}/output/'
transformer = algo.transformer(instance_count=1,
instance_type=compatible_inference_instance_type,
strategy='MultiRecord',
max_payload=6,
max_concurrent_transforms=1,
output_path=output_path)
transformer.transform(test_s3_path, content_type='text/csv', split_type='Line')
transformer.wait()
```
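Once the transform job finishes, its output in S3 contains one prediction line per input row. As a sketch of the evaluation step, assuming (as in the real-time inference example above) that the predicted label is the first comma-separated field of each output line, accuracy can be computed like this. The sample lines below are synthetic, not real transform output:

```python
def accuracy_from_lines(prediction_lines, true_labels):
    """Compare the first CSV field of each prediction line against the truth."""
    preds = [line.split(",")[0] for line in prediction_lines]
    assert len(preds) == len(true_labels)
    correct = sum(p == t for p, t in zip(preds, true_labels))
    return correct / len(true_labels)

# Synthetic example lines in the assumed "label,score" format
lines = ["no,0.97", "yes,0.81", "no,0.64", "yes,0.55"]
truth = ["no", "yes", "yes", "yes"]
print(accuracy_from_lines(lines, truth))  # 0.75
```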
### Step 7: Clean-up
Once you have finished performing predictions, you can delete the endpoint to avoid being charged for it.
```
predictor.delete_endpoint()
```
Finally, if the AWS Marketplace subscription was created just for this experiment and you would like to unsubscribe from the product, follow these steps.
Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note: you can find this by looking at the container associated with the model.
Steps to unsubscribe from a product on AWS Marketplace:
1. Navigate to the __Machine Learning__ tab on the [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=lbr_tab_ml).
2. Locate the listing whose subscription you want to cancel, and click __Cancel Subscription__.
Notes by 子实; more machine-learning notes are available [here](https://github.com/zlotus/notes-LSJU-machine-learning).
# Lecture 15: PCA via Singular Value Decomposition; Independent Components Analysis
Recall the PCA algorithm from the previous lecture; it has three main steps:
1. Normalize the data to zero mean and unit variance;
2. Compute the covariance matrix $\displaystyle\varSigma=\frac{1}{m}\sum_{i=1}^mx^{(i)}\left(x^{(i)}\right)^T$;
3. Find the top $k$ eigenvectors of $\varSigma$.
At the end of the previous lecture we also saw PCA applied to face recognition. In that setting $x^{(i)}\in\mathbb R^{10000}$, so $\varSigma\in\mathbb R^{10000\times10000}$, and finding the eigenvectors of a matrix with a hundred million entries is not easy. We will set this problem aside for the moment.
Consider another example. Earlier ([Lecture 5](https://github.com/zlotus/notes-LSJU-machine-learning/blob/master/chapter05.ipynb)) we discussed spam classification: we build a vocabulary vector whose components are set to $1$ if the corresponding word appears in the email and $0$ otherwise. When PCA is applied to this kind of data, the resulting algorithm goes by a different name: **latent semantic indexing (LSI)**. In LSI we usually skip the preprocessing step, because normalizing the data to unit variance would dramatically increase the weight of infrequent words. Suppose we are given the contents of a document, $x^{(i)}$, and want to know which of our existing documents it is most similar to; that is, we want to measure the similarity of two documents represented as high-dimensional input vectors, $\mathrm{sim}\left(x^{(i)},x^{(j)}\right),\ x\in\mathbb R^{50000}$. The usual approach is to measure the angle between the two vectors: the smaller the angle, the more similar we consider them, so $\displaystyle\mathrm{sim}\left(x^{(i)},x^{(j)}\right)=\cos\theta=\frac{\left(x^{(i)}\right)^Tx^{(j)}}{\left\lVert x^{(i)}\right\rVert\left\lVert x^{(j)}\right\rVert}$. Looking at the numerator, $\left(x^{(i)}\right)^Tx^{(j)}=\sum_kx_k^{(i)}x_k^{(j)}=\sum_k1\{\text{documents }i\text{ and }j\text{ both contain word }k\}$, which is $0$ if the two documents share no words. But then, if document $i$ contains the word "study" while document $j$ contains the word "learn", an article about a "study strategy" and an article about a "method of learning" come out unrelated under this measure, even though we would like them to be related. Initially the "learn" vector and the "study" vector are orthogonal, with inner product zero. We find a vector $u$ between them and project both "learn" and "study" onto $u$; the two projections lie close together on $u$, so their inner product is a positive number, indicating that they are related. As a result, when the algorithm later encounters an article about "politics" and another article containing many politicians' names, it will judge the two articles to be related.
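The cosine similarity measure above is easy to compute directly. Here is a small pure-Python sketch over toy word-indicator vectors (hypothetical data for illustration):

```python
import math

def cosine_similarity(x, y):
    """cos(theta) = <x, y> / (||x|| * ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Toy "documents": indicator vectors over a 4-word vocabulary
doc_i = [1, 1, 0, 0]  # contains words 0 and 1
doc_j = [1, 0, 1, 0]  # contains words 0 and 2
doc_k = [0, 0, 1, 1]  # shares no words with doc_i

print(cosine_similarity(doc_i, doc_j))  # ~0.5: one shared word
print(cosine_similarity(doc_i, doc_k))  # 0.0: orthogonal, judged unrelated
```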
## Singular Value Decomposition (SVD)
We introduce the singular value decomposition (see also the [SVD chapter](http://nbviewer.jupyter.org/github/zlotus/notes-linear-algebra/blob/master/chapter30.ipynb) of the linear algebra notes) to solve the large-matrix eigenvector problem raised at the start. For instance, the input in latent semantic indexing is a $50000$-dimensional vector, so the corresponding $\varSigma\in\mathbb R^{50000\times50000}$; a matrix of this size is too large, and we need another way to implement PCA.
For any matrix $A\in\mathbb R^{m\times n}$ we can write $A=UDV^T,U\in\mathbb R^{m\times m},D\in\mathbb R^{m\times n},V^T\in\mathbb R^{n\times n}$, where $D=\begin{bmatrix}\sigma_1&&&\\&\sigma_2&&\\&&\ddots&\\&&&\sigma_n\end{bmatrix}$ is a diagonal matrix and the $\sigma_i$ are called the singular values of the matrix. The shapes of the factors are $\begin{bmatrix}A\\m\times n\end{bmatrix}=\begin{bmatrix}U\\m\times m\end{bmatrix}\begin{bmatrix}D\\m\times n\end{bmatrix}\begin{bmatrix}V^T\\n\times n\end{bmatrix}$.
Now look at the definition of the covariance matrix, $\displaystyle\varSigma=\sum_{i=1}^mx^{(i)}\left(x^{(i)}\right)^T$. In an earlier chapter ([Lecture 2](chapter02.ipynb)) we introduced the "design matrix", built by stacking each sample vector as a row of a matrix $X$: $X=\begin{bmatrix}—\left(x^{(1)}\right)^T—\\—\left(x^{(2)}\right)^T—\\\vdots\\—\left(x^{(m)}\right)^T—\end{bmatrix}$. The covariance matrix can then be written in terms of the design matrix: $\varSigma=\begin{bmatrix}\mid&\mid&&\mid\\x^{(1)}&x^{(2)}&\cdots&x^{(m)}\\\mid&\mid&&\mid\end{bmatrix}\begin{bmatrix}—\left(x^{(1)}\right)^T—\\—\left(x^{(2)}\right)^T—\\\vdots\\—\left(x^{(m)}\right)^T—\end{bmatrix}=X^TX$.
The last step is computing the top $k$ eigenvectors of $\varSigma$. We instead take the singular value decomposition $X=UDV^T$; then $X^TX=VD^TU^TUDV^T=VD^TDV^T=\varSigma$, so we sort the singular values in $D$ in decreasing order and take from $V$ the eigenvectors corresponding to the top $k$ singular values.
It is easy to see that $X\in\mathbb R^{m\times50000}$, and taking the SVD of a matrix of this size is much faster than computing the eigenvectors of $\varSigma\in\mathbb R^{50000\times50000}$ directly. This is the computational procedure for implementing PCA via the SVD.
(Note, however, that different software packages, and even different versions of the same package, may follow different default dimension conventions when computing the SVD: the SVD often yields $U$ and $D$ with many zero entries, and software may drop these by convention. Keep such dimension conventions in mind when using the SVD.)
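The identity $X^TX=VD^TDV^T$ can be checked numerically. The following NumPy sketch (synthetic data) verifies that the top right-singular vectors of the design matrix match the top eigenvectors of the covariance matrix, up to sign:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 10, 3
X = rng.standard_normal((m, n))  # design matrix, one sample per row
X -= X.mean(axis=0)              # zero-mean the data

# Route 1: eigenvectors of the covariance matrix Sigma = X^T X / m
Sigma = X.T @ X / m
eigvals, eigvecs = np.linalg.eigh(Sigma)
top_eig = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k eigenvectors

# Route 2: right-singular vectors of X (rows of V^T); no n x n eigendecomposition
_, _, Vt = np.linalg.svd(X, full_matrices=False)
top_svd = Vt[:k].T

# The two bases agree, up to the sign of each column
for i in range(k):
    assert abs(abs(top_eig[:, i] @ top_svd[:, i]) - 1.0) < 1e-6
print("agreement verified for the top", k, "directions")
```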
## Comparing the unsupervised learning algorithms
When we introduced the factor analysis model, we pointed out that it models each factor $z^{(i)}$ with a Gaussian: it is a density-estimation algorithm that tries to model the probability density of the training set $X$. PCA, introduced afterwards, is different: it is not a probabilistic algorithm, since it fits no probability distribution to the training set but instead directly searches for a subspace. This gives a rough guide for choosing between factor analysis and PCA: if the goal is simply to reduce the dimensionality of the data and find the subspace it lies in, we lean towards PCA; factor analysis, by contrast, assumes the data already lies in some subspace, so if we have high-dimensional data $X$ and want to model $X$ itself, we should use factor analysis (for example, in anomaly detection we can build a model of $P(X)$; given a low-probability event, we can decompose it across the factor distributions and assess how anomalous it is). What the two algorithms have in common is that both assume the data lies in, or close to, some low-dimensional subspace.
Now recall the two unsupervised learning methods introduced at the beginning: the mixture of Gaussians model and the $k$-means algorithm. What they have in common is that both assume the data is gathered into a number of clusters. The difference is that the mixture of Gaussians is a density-estimation algorithm, while $k$-means is not. So, if we need to split the data into clusters and model each cluster, we lean towards the mixture of Gaussians; if we only want to cluster the data and do not need the probability density of each cluster, we lean towards $k$-means.
Combining these observations gives the following table, which is handy for memorization:
$$
\begin{array}
{c|c|c}
&\textbf{Model }P(x)&\textbf{Not probabilistic}\\\hline
\textbf{Subspace}&\textrm{Factor Analysis}&\textrm{PCA}\\\hline
\textbf{Cluster}&\textrm{Mixtures of Gaussians}&k\textrm{-means}
\end{array}
$$
# Part XII: Independent Components Analysis
Next we introduce independent components analysis (ICA). Like PCA, ICA finds a new basis in which to represent the training samples, but the goals of the two algorithms are completely different.
As a concrete motivating example, recall from Lecture 1 the application of separating a speaker's audio from the noisy background at a cocktail party. Suppose $n$ speakers are talking simultaneously at a party, and each microphone in the room captures only an overlapping combination of the $n$ speakers' voices. Now suppose we have $n$ different microphones; because each microphone is at a different distance from each speaker, each one records a different combination of the speakers' voices. Using these microphone recordings, can we separate out the original speech signals of the $n$ speakers?
To formalize the problem, we assume data $x\in\mathbb R^n$ generated by $n$ independent sources; what we observe is:
$$
x=As
$$
Here $A$ is an unknown square matrix called the **mixing matrix**. Through repeated observations we obtain a dataset $\left\{x^{(i)};i=1,\cdots,m\right\}$, and our goal is to recover the sources $s^{(i)}$ that generated the observed data ($x^{(i)}=As^{(i)}$).
In the cocktail party problem, $s^{(i)}$ is an $n$-dimensional vector whose component $s_j^{(i)}$ is the sound uttered by speaker $j$ at the $i$-th sample time. Likewise, $x^{(i)}$ is an $n$-dimensional vector whose component $x_j^{(i)}$ is the audio recorded by microphone $j$ at the $i$-th sample time.
Let $W=A^{-1}$ be the **unmixing matrix**. Our goal is to find $W$, so that we can recover the independent sources from the microphone recordings via $s^{(i)}=Wx^{(i)}$. Following our usual notational convention, we write $w_j^T$ for the $j$-th row of $W$, so $W=\begin{bmatrix}—w_1^T—\\\vdots\\—w_n^T—\end{bmatrix}$. Then, with $w_j\in\mathbb R^n$, the signal of speaker $j$ can be recovered as $s_j^{(i)}=w_j^Tx^{(i)}$.
## 1. ICA ambiguities
To what degree can $W=A^{-1}$ recover the sources? Without prior knowledge of the sources and the mixing matrix, it is not hard to see that $A$ has inherent ambiguities that make it impossible to recover each $s^{(i)}$ uniquely from the $x^{(i)}$ alone.
* Let $P$ be an $n\times n$ permutation matrix, i.e. each row and each column of $P$ contains exactly one $1$, with zeros elsewhere. For example: $P=\begin{bmatrix}0&1&0\\1&0&0\\0&0&1\end{bmatrix},\ P=\begin{bmatrix}0&1\\1&0\end{bmatrix},\ P=\begin{bmatrix}1&0\\0&1\end{bmatrix}$. Its effect is that, for a vector $z$, $Pz$ is the vector $z'$ obtained by permuting the components of $z$. Given only the $x^{(i)}$, we cannot distinguish $W$ from $PW$ (that is, we cannot tell which row of the unmixing matrix corresponds to which speaker). As one would expect, the sources carry the same permutation ambiguity, but for most applications this is not an important problem.
* Moreover, there is no way to determine the scale of the $w_i$. For instance, if $A$ is replaced by $2A$ and each $s^{(i)}$ by $0.5s^{(i)}$, our observations $x^{(i)}=2A\cdot(0.5)s^{(i)}$ are unchanged. More generally, if a column of $A$ is scaled by some factor $\alpha$ and the corresponding source is scaled by $\frac{1}{\alpha}$, there is no way to detect this rescaling from the $x^{(i)}$ alone. Hence we cannot recover the original scale of the sources. However, for the problems we care about (including the cocktail party problem), this ambiguity does not matter either. In particular, scaling a speaker's signal $s_j^{(i)}$ by a positive factor $\alpha$ only affects the loudness of that speaker's voice. Even sign changes do not matter: $s_j^{(i)}$ and $-s_j^{(i)}$ sound identical on a loudspeaker. In summary, if the algorithm's $w_i$ are off by some nonzero scaling factor, the corresponding sources $s_i=w_i^Tx$ are off by the same factor, and this ambiguity is unimportant. (These ICA ambiguities also apply to the brain MEG data mentioned later.)
Are the two cases above all of the possible ambiguities in ICA? As long as the sources $s_i$ are non-Gaussian, the answer is yes.
* Let's see what trouble Gaussian data causes through an example. Consider $n=2$ with $s\sim\mathcal N(0,I)$, where $I$ is the $2\times2$ identity matrix. Note that the density contours of the standard normal $\mathcal N(0,I)$ are circles centered at the origin: the density is rotationally symmetric. Suppose we observe some $x=As$ with mixing matrix $A$; since the sources follow $\mathcal N(0,I)$, the mixed $x$ is also Gaussian, with mean $0$ and covariance $\mathrm E\left[xx^T\right]=\mathrm E\left[Ass^TA^T\right]=AA^T$. Now let $R$ be an arbitrary orthogonal matrix (informally, a rotation or reflection matrix), so that $RR^T=R^TR=I$, and set $A'=AR$. If the sources had been mixed by $A'$ instead of $A$, we would observe $x'=A's$. But $x'$ is also Gaussian, again with mean $0$ and covariance $\mathrm E\left[x'\left(x'\right)^T\right]=\mathrm E\left[A'ss^T\left(A'\right)^T\right]=\mathrm E\left[ARss^T\left(AR\right)^T\right]=ARR^TA^T=AA^T$. So whether the mixing matrix is $A$ or $A'$, the observations follow the same Gaussian distribution $\mathcal N\left(0,AA^T\right)$, and we cannot tell whether the observed random variables came from $A$ or from $A'$. The mixing matrix can therefore contain an arbitrary rotation that leaves no trace in the observed data, and we cannot fully recover the sources.
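A quick NumPy check of the rotation argument (the mixing matrix $A$ and the rotation angle below are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])                       # hypothetical mixing matrix

# Any orthogonal R (here a 30-degree rotation) gives an alternative mixing A' = AR.
theta = np.pi / 6.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A_prime = A @ R

# Gaussian sources s ~ N(0, I) mixed by A or by A' have the same covariance AA^T,
# hence exactly the same distribution N(0, AA^T) -- the rotation is unrecoverable.
assert np.allclose(A @ A.T, A_prime @ A_prime.T)
```

Since the two candidate mixing matrices yield identical observation distributions, no amount of Gaussian data can distinguish them.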
## 2. Densities and linear transformations
Before deriving the ICA algorithm, we briefly discuss the effect of a linear transformation on a probability density.
Suppose a random variable $s$ is drawn from a density $p_s(s)$; for simplicity, let $s\in\mathbb R$ be a real number. Define the random variable $x$ by $x=As$ (with $x\in\mathbb R,A\in\mathbb R$), and let $p_x$ be the density of $x$. What is $p_x$?
Let $W=A^{-1}$. To find the density at a particular $x$, it is tempting to compute $s=Wx$, evaluate $p_s$ at the point $s$, and conclude that $p_x(x)=p_s(Wx)$ — *but this conclusion is wrong*. For example, let $s\sim\mathrm{Uniform}[0,1]$, so its density is $p_s(s)=1\{0\leq s\leq1\}$; let $A=2$, so $x=2s$. Clearly $x$ is uniform on the interval $[0,2]$, with density $p_x(x)=(0.5)1\{0\leq x\leq2\}$. Meanwhile $W=A^{-1}=0.5$, and obviously $p_s(Wx)$ does not equal $p_x(x)$. What actually holds is the formula $p_x(x)=p_s(Wx)\lvert W\rvert$.
More generally, if $s$ is a vector-valued random variable with density $p_s$, and $x=As$ for an invertible matrix $A$ with inverse $W=A^{-1}$, then the density of $x$ is:
$$
p_x(x)=p_s(Wx)\cdot\lvert W\rvert
$$
**Remark:** If you have seen the fact that $A$ maps $[0,1]^n$ to a set of volume $\lvert A\rvert$ (see [Cramer's rule, inverse matrix, volume](http://nbviewer.jupyter.org/github/zlotus/notes-linear-algebra/blob/master/chapter20.ipynb)), here is another way to remember the formula for $p_x$ above, which also generalizes the $1$-dimensional example. Let $A\in\mathbb R^{n\times n}$ with $W=A^{-1}$, let $C_1=[0,1]^n$ be the $n$-dimensional hypercube, and define $C_2=\{As:\ s\in C_1\}\subseteq\mathbb R^n$ (i.e. $C_2$ is the image of $C_1$ under the map $A$). By a standard result of linear algebra (which is also one way of defining the determinant), the volume of $C_2$ is $\lvert A\rvert$. Suppose $s$ is uniform on $[0,1]^n$, with density $p_s(s)=1\{s\in C_1\}$. Then $x$ is uniform on $C_2$, with density $\displaystyle p_x(x)=\frac{1\{x\in C_2\}}{\mathrm{Vol}(C_2)}$ (since it must integrate to $1$ over $C_2$). Using the fact that the determinant of the inverse matrix is the reciprocal of the original determinant, $\displaystyle\frac{1}{\mathrm{Vol}(C_2)}=\frac{1}{\lvert A\rvert}=\left\lvert A^{-1}\right\rvert=\lvert W\rvert$. So again $p_x(x)=1\{x\in C_2\}\lvert W\rvert=1\{Wx\in C_1\}\lvert W\rvert=p_s(Wx)\lvert W\rvert$.
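A Monte Carlo sanity check of the 1-dimensional uniform example above ($s\sim\mathrm{Uniform}[0,1]$, $A=2$, $W=0.5$):

```python
import numpy as np

rng = np.random.default_rng(0)
A, W = 2.0, 0.5                        # x = A*s, W = A^{-1}
s = rng.uniform(0.0, 1.0, size=1_000_000)
x = A * s                              # x ~ Uniform[0, 2]

# Empirical density of x on [0, 2] via a normalized histogram...
hist, _ = np.histogram(x, bins=20, range=(0.0, 2.0), density=True)

# ...matches p_x(x) = p_s(Wx)*|W| = 1 * 0.5, not the naive (wrong) p_s(Wx) = 1.
assert np.allclose(hist, 0.5, atol=0.01)
```

Every histogram bin sits near $0.5$, confirming that the $\lvert W\rvert$ factor is required for the density to integrate to one.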
## 3. The ICA algorithm
Now we can derive the ICA algorithm. The algorithm is due to Bell and Sejnowski; the interpretation we give here views ICA as a maximum likelihood estimation algorithm (this differs from the original derivation, which involved a complicated idea called the infomax principle that is no longer necessary in modern derivations of ICA).
We assume each source $s_i$ is drawn from a distribution with density $p_s$, and that the joint distribution of $s$ is:
$$
p(s)=\prod_{i=1}^np_s(s_i)
$$
Note that we model the joint distribution as the product of the marginals of the individual sources, which amounts to assuming the sources are independent. Using the formula from the previous section, the density of $x=As=W^{-1}s$ is:
$$
p(x)=\prod_{i=1}^np_s\left(w_i^Tx\right)\cdot\lvert W\rvert
$$
All that remains is to choose a density $p_s$ for each independent source.
Recall from probability theory that for a real random variable $z$, the cumulative distribution function (CDF) $F$ is obtained from the probability density function (PDF) by $F(z_0)=P(z\leq z_0)=\int_{-\infty}^{z_0}p_z(z)\mathrm dz$; conversely, the density is the derivative of the CDF: $p_z(z)=F'(z)$.
Therefore, to specify the PDF of each $s_i$ it suffices to specify its CDF, a monotonically increasing function from $0$ to $1$. From the previous section we know that ICA fails on Gaussian data, so we cannot choose the Gaussian CDF. We need a reasonable "default" function that increases slowly and monotonically from $0$ to $1$ — and that is the logistic function (the sigmoid) introduced earlier ([Lecture 3](chapter03.ipynb)): $\displaystyle g(s)=\frac{1}{1+e^{-s}}$. Then $p_s(s)=g'(s)$.
(If we have prior knowledge about the sources and already know the form of their PDF, we can replace the logistic function above with the CDF corresponding to that PDF. When the form of the PDF is unknown, the logistic function is a very reasonable default that performs well on many problems. Also, the observed data $x^{(i)}$ we use here are either preprocessed to have zero mean or naturally zero-mean, as audio signals are. Zero mean is necessary because our assumption $p_s(s)=g'(s)$ implies $\mathrm E[s]=0$ — note that differentiating the logistic function yields a symmetric function, which is the PDF, and a random variable with a PDF symmetric about zero must have zero mean — and then $\mathrm E[x]=\mathrm E[As]=0$. Incidentally, besides the logistic density, another common choice of PDF is the [Laplace distribution](https://en.wikipedia.org/wiki/Laplace_distribution) ([中文](https://zh.wikipedia.org/wiki/%E6%8B%89%E6%99%AE%E6%8B%89%E6%96%AF%E5%88%86%E5%B8%83)) $\displaystyle p(s)=\frac{1}{2}e^{-\lvert s\rvert}$.)
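A small numeric check of the two properties of the logistic density used above — symmetry (hence zero mean) and integration to $1$:

```python
import numpy as np

def g(s):
    return 1.0 / (1.0 + np.exp(-s))

def p_s(s):
    # The source density is the derivative of the sigmoid CDF: g'(s) = g(s)(1 - g(s)).
    return g(s) * (1.0 - g(s))

s = np.linspace(-20.0, 20.0, 400_001)
ds = s[1] - s[0]

assert np.allclose(p_s(s), p_s(-s))            # symmetric about 0, so E[s] = 0
assert abs(np.sum(p_s(s)) * ds - 1.0) < 1e-4   # integrates to 1, as a density must
```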
The square matrix $W$ is the parameter of our model. Given a training set $\left\{x^{(i)};i=1,\cdots,m\right\}$, the log likelihood is:
$$
\mathscr l(W)=\sum_{i=1}^m\left(\sum_{j=1}^n\log g'\left(w_j^Tx^{(i)}\right)+\log\lvert W\rvert\right)
$$
Our goal is to maximize this function in terms of $W$. Taking derivatives using the identity $\nabla_W\lvert W\rvert=\lvert W\rvert\left(W^{-1}\right)^T$ (see [Lecture 2](chapter02.ipynb)) yields a stochastic gradient ascent update rule with learning rate $\alpha$. For a training example $x^{(i)}$ the update is:
$$
W:=W+\alpha\left(\begin{bmatrix}
1-2g\left(w_1^Tx^{(i)}\right)\\
1-2g\left(w_2^Tx^{(i)}\right)\\
\vdots\\
1-2g\left(w_n^Tx^{(i)}\right)
\end{bmatrix}\left(x^{(i)}\right)^T+\left(W^T\right)^{-1}\right)
$$
After the algorithm converges, we recover the sources via $s^{(i)}=Wx^{(i)}$.
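A minimal NumPy sketch of the update rule above on synthetic data (the Laplace sources, the mixing matrix $A$, the learning rate, and the number of passes are illustrative, untuned choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 2000

# Two independent Laplace-distributed (non-Gaussian) sources, mixed by a
# hypothetical mixing matrix A.
S = rng.laplace(0.0, 1.0, size=(m, n))
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = S @ A.T                                    # rows are observations x^(i) = A s^(i)

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(W, X):
    Z = X @ W.T
    return np.sum(np.log(g(Z) * (1.0 - g(Z)))) + len(X) * np.log(abs(np.linalg.det(W)))

W = np.eye(n)
ll_init = log_likelihood(W, X)

alpha = 0.01
for _ in range(5):                             # a few passes of stochastic gradient ascent
    for x in X:
        # The update derived above: W += alpha*((1 - 2 g(W x)) x^T + (W^T)^{-1})
        W += alpha * (np.outer(1.0 - 2.0 * g(W @ x), x) + np.linalg.inv(W.T))

ll_final = log_likelihood(W, X)
assert ll_final > ll_init                      # the likelihood went up

S_hat = X @ W.T                                # recovered sources (up to permutation/scale)
```

Each per-sample step follows the gradient of that sample's log likelihood, so the total log likelihood increases over the run; the recovered `S_hat` matches `S` only up to the permutation and scaling ambiguities discussed in section 1.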
**Remark:** In constructing the likelihood, we implicitly assumed the samples $x^{(i)}$ are independent of one another (note that this does not refer to the components of $x^{(i)}$ being independent), so the likelihood of the training set is $\prod_ip\left(x^{(i)};W\right)$. This assumption is wrong when the $x^{(i)}$ come from speech audio or other time-series data, since consecutive samples are not independent; however, it can be shown that, given enough training data, correlated samples do not hurt the performance of ICA. Note also that with correlated training samples, visiting the samples in a random order during stochastic gradient ascent can sometimes speed up convergence. (In practice, it can help to prepare several shuffled copies of the training set and feed the samples to the model in different orders.)
<h1 align="center">SimpleITK Spatial Transformations</h1>
**Summary:**
1. Points are represented by vector-like data types: Tuple, Numpy array, List.
2. Matrices are represented by vector-like data types in row major order.
3. Default transformation initialization as the identity transform.
4. Angles specified in radians, distances specified in unknown but consistent units (nm,mm,m,km...).
5. All global transformations **except translation** are of the form:
$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
Nomenclature (when printing your transformation):
* Matrix: the matrix $A$
* Center: the point $\mathbf{c}$
* Translation: the vector $\mathbf{t}$
* Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$
6. Bounded transformations, BSplineTransform and DisplacementFieldTransform, behave as the identity transform outside the defined bounds.
7. DisplacementFieldTransform:
* Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64.
* Initializing the DisplacementFieldTransform using an image will "clear out" your image (your alias to the image will point to an empty, zero sized, image).
8. Composite transformations are applied in stack order (last added, first applied).
## Transformation Types
SimpleITK supports the following transformation types.
<table width="100%">
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1TranslationTransform.html">TranslationTransform</a></td><td>2D or 3D, translation</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1VersorTransform.html">VersorTransform</a></td><td>3D, rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html">VersorRigid3DTransform</a></td><td>3D, rigid transformation with rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Euler2DTransform.html">Euler2DTransform</a></td><td>2D, rigid transformation with rotation represented by a Euler angle</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Euler3DTransform.html">Euler3DTransform</a></td><td>3D, rigid transformation with rotation represented by Euler angles</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Similarity2DTransform.html">Similarity2DTransform</a></td><td>2D, composition of isotropic scaling and rigid transformation with rotation represented by a Euler angle</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Similarity3DTransform.html">Similarity3DTransform</a></td><td>3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleTransform.html">ScaleTransform</a></td><td>2D or 3D, anisotropic scaling</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleVersor3DTransform.html">ScaleVersor3DTransform</a></td><td>3D, rigid transformation and anisotropic scale is <b>added</b> to the rotation matrix part (not composed as one would expect)</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleSkewVersor3DTransform.html">ScaleSkewVersor3DTransform</a></td><td>3D, rigid transformation with anisotropic scale and skew matrices <b>added</b> to the rotation matrix part (not composed as one would expect)</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1AffineTransform.html">AffineTransform</a></td><td>2D or 3D, affine transformation.</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1BSplineTransform.html">BSplineTransform</a></td><td>2D or 3D, deformable transformation represented by a sparse regular grid of control points. </td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1DisplacementFieldTransform.html">DisplacementFieldTransform</a></td><td>2D or 3D, deformable transformation represented as a dense regular grid of vectors.</td></tr>
<tr><td><a href="http://www.itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1Transform.html">Transform</a></td>
<td>A generic transformation. Can represent any of the SimpleITK transformations, and a <b>composite transformation</b> (stack of transformations concatenated via composition, last added, first applied). </td></tr>
</table>
```
import SimpleITK as sitk
import utilities as util
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, fixed
OUTPUT_DIR = "output"
```
We will introduce the transformation types, starting with translation and illustrating how to move from a lower to higher parameter space (e.g. translation to rigid).
We start with the global transformations. All of them <b>except translation</b> are of the form:
$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
In ITK speak (when printing your transformation):
<ul>
<li>Matrix: the matrix $A$</li>
<li>Center: the point $\mathbf{c}$</li>
<li>Translation: the vector $\mathbf{t}$</li>
<li>Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$</li>
</ul>
## TranslationTransform
Create a translation and then transform a point and use the inverse transformation to get the original back.
```
dimension = 2
offset = [2]*dimension # use a Python trick to create the offset list based on the dimension
translation = sitk.TranslationTransform(dimension, offset)
print(translation)
point = [10, 11] if dimension==2 else [10, 11, 12] # set point to match dimension
transformed_point = translation.TransformPoint(point)
translation_inverse = translation.GetInverse()
print('original point: ' + util.point2str(point) + '\n'
'transformed point: ' + util.point2str(transformed_point) + '\n'
'back to original: ' + util.point2str(translation_inverse.TransformPoint(transformed_point)))
```
## Euler2DTransform
Rigidly transform a 2D point using an Euler angle parameter specification.
Notice that the dimensionality of the Euler angle based rigid transformation is associated with the class, unlike the translation which is set at construction.
```
point = [10, 11]
rotation2D = sitk.Euler2DTransform()
rotation2D.SetTranslation((7.2, 8.4))
rotation2D.SetAngle(np.pi/2)
print('original point: ' + util.point2str(point) + '\n'
'transformed point: ' + util.point2str(rotation2D.TransformPoint(point)))
```
## VersorTransform (rotation in 3D)
Rotation using a versor (the vector part of a unit quaternion) parameterization. The quaternion for a rotation of $\theta$ radians around axis $n$ is $q = [n\sin(\frac{\theta}{2}), \cos(\frac{\theta}{2})]$.
```
# Use a versor:
rotation1 = sitk.VersorTransform([0,0,1,0])
# Use axis-angle:
rotation2 = sitk.VersorTransform((0,0,1), np.pi)
# Use a matrix:
rotation3 = sitk.VersorTransform()
rotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1]);
point = (10, 100, 1000)
p1 = rotation1.TransformPoint(point)
p2 = rotation2.TransformPoint(point)
p3 = rotation3.TransformPoint(point)
print('Points after transformation:\np1=' + str(p1) +
'\np2='+ str(p2) + '\np3='+ str(p3))
```
## Translation to Rigid [3D]
We only need to copy the translational component.
```
dimension = 3
t =(1,2,3)
translation = sitk.TranslationTransform(dimension, t)
# Copy the translational component.
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetTranslation(translation.GetOffset())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(translation, rigid_euler)
```
## Rotation to Rigid [3D]
Copy the matrix or versor and <b>center of rotation</b>.
```
rotation_center = (10, 10, 10)
rotation = sitk.VersorTransform([0,0,1,0], rotation_center)
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetRotation(rotation.GetVersor())
#rigid_versor.SetCenter(rotation.GetCenter()) #intentional error, not copying center of rotation
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(rotation, rigid_versor)
```
In the cell above, when we don't copy the center of rotation we get a constant error vector, $\mathbf{c} - A\mathbf{c}$.
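A quick NumPy check of this error vector for the rotation above (the versor `[0,0,1,0]` is a 180-degree rotation about z, whose matrix is `diag(-1,-1,1)`; the test point is arbitrary):

```python
import numpy as np

# Matrix of the 180-degree rotation about z used in the cell above.
A = np.diag([-1.0, -1.0, 1.0])
c = np.array([10.0, 10.0, 10.0])      # the center of rotation that was not copied

p = np.array([3.0, 4.0, 5.0])
with_center  = A @ (p - c) + c        # rotation about the center c
about_origin = A @ p                  # rotation about the origin (center not copied)

# The discrepancy is the constant vector c - Ac, independent of the point p.
assert np.allclose(with_center - about_origin, c - A @ c)
print(c - A @ c)                      # [20. 20.  0.]
```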
## Similarity [2D]
When the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\mathbf{x}) = s\mathbf{x}-s\mathbf{c} + \mathbf{c}$. Changing the transformation's center results in scale + translation.
```
def display_center_effect(x, y, tx, point_list, xlim, ylim):
tx.SetCenter((x,y))
transformed_point_list = [ tx.TransformPoint(p) for p in point_list]
plt.scatter(list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
marker='^',
color='red', label='transformed points')
plt.scatter(list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
marker='o',
color='blue', label='original points')
plt.xlim(xlim)
plt.ylim(ylim)
plt.legend(loc=(0.25,1.01))
# 2D square centered on (0,0)
points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]
# Scale by 2
similarity = sitk.Similarity2DTransform();
similarity.SetScale(2)
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(similarity), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
```
## Rigid to Similarity [3D]
Copy the translation, center, and matrix or versor.
```
rotation_center = (100, 100, 100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi/2.0
translation = (1,2,3)
rigid_euler = sitk.Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation)
similarity = sitk.Similarity3DTransform()
similarity.SetMatrix(rigid_euler.GetMatrix())
similarity.SetTranslation(rigid_euler.GetTranslation())
similarity.SetCenter(rigid_euler.GetCenter())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(rigid_euler, similarity)
```
## Similarity to Affine [3D]
Copy the translation, center and matrix.
```
rotation_center = (100, 100, 100)
axis = (0,0,1)
angle = np.pi/2.0
translation = (1,2,3)
scale_factor = 2.0
similarity = sitk.Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center)
affine = sitk.AffineTransform(3)
affine.SetMatrix(similarity.GetMatrix())
affine.SetTranslation(similarity.GetTranslation())
affine.SetCenter(similarity.GetCenter())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(similarity, affine)
```
## Scale Transform
Just as was the case for the similarity transformation above, when the transformation's center is not at the origin, instead of a pure anisotropic scaling we also have a translation ($T(\mathbf{x}) = \mathbf{s}^T\mathbf{x}-\mathbf{s}^T\mathbf{c} + \mathbf{c}$).
```
# 2D square centered on (0,0).
points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]
# Scale by half in x and 2 in y.
scale = sitk.ScaleTransform(2, (0.5,2));
# Interactively change the location of the center.
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(scale), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
```
## Unintentional Misnomers (originally from ITK)
Two transformation types whose names may mislead you are ScaleVersor and ScaleSkewVersor. Basing your choices on expectations without reading the documentation will surprise you.
ScaleVersor - the name suggests a composition of transformations; in practice it is:
$$T(x) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S= \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$
ScaleSkewVersor - the name suggests a composition of transformations; in practice it is:
$$T(x) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$
Note that ScaleSkewVersor is an over-parameterized version of the affine transform: 15 parameters (scale, skew, versor, translation) vs. 12 parameters (matrix, translation).
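A small 2D NumPy analogue of the addition-vs-composition distinction (the rotation angle and scale factors are arbitrary illustrative values; the actual ITK classes are 3D):

```python
import numpy as np

theta = np.pi / 4.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
scales = np.array([2.0, 3.0])
S = np.diag(scales - 1.0)             # the "scale" matrix that ITK *adds*: diag(s_i - 1)

x = np.array([1.0, 0.0])
added    = (R + S) @ x                # what the ScaleVersor-style transforms compute
composed = R @ np.diag(scales) @ x    # what the name suggests: rotate the scaled point

print(added, composed)                # the two results differ
assert not np.allclose(added, composed)
```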
## Bounded Transformations
SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation).
Transforming a point that is outside the bounds will return the original point - identity transform.
## BSpline
Using a sparse set of control points to control a free form deformation. Using the cell below it is clear that the BSplineTransform allows for folding and tearing.
```
# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function
# or its object oriented counterpart BSplineTransformInitializerFilter).
dimension = 2
spline_order = 3
direction_matrix_row_major = [1.0,0.0,0.0,1.0] # identity, mesh is axis aligned
origin = [-1.0,-1.0]
domain_physical_dimensions = [2,2]
bspline = sitk.BSplineTransform(dimension, spline_order)
bspline.SetTransformDomainOrigin(origin)
bspline.SetTransformDomainDirection(direction_matrix_row_major)
bspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)
bspline.SetTransformDomainMeshSize((4,3))
# Random displacement of the control points.
originalControlPointDisplacements = np.random.random(len(bspline.GetParameters()))
bspline.SetParameters(originalControlPointDisplacements)
# Apply the BSpline transformation to a grid of points
# starting the point set exactly at the origin of the BSpline mesh is problematic as
# these points are considered outside the transformation's domain,
# remove epsilon below and see what happens.
numSamplesX = 10
numSamplesY = 20
coordsX = np.linspace(origin[0]+np.finfo(float).eps, origin[0] + domain_physical_dimensions[0], numSamplesX)
coordsY = np.linspace(origin[1]+np.finfo(float).eps, origin[1] + domain_physical_dimensions[1], numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(bspline), original_control_point_displacements = fixed(originalControlPointDisplacements));
```
## DisplacementField
A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation.
```
# Create the displacement field.
# When working with images the safer thing to do is use the image based constructor,
# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement
# field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be
# sitk.sitkVectorFloat64.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
# Set the interpolator, either sitkLinear which is default or nearest neighbor
displacement.SetInterpolator(sitk.sitkNearestNeighbor)
originalDisplacements = np.random.random(len(displacement.GetParameters()))
displacement.SetParameters(originalDisplacements)
coordsX = np.linspace(field_origin[0], field_origin[0]+(field_size[0]-1)*field_spacing[0], field_size[0])
coordsY = np.linspace(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_size[1])
XX, YY = np.meshgrid(coordsX, coordsY)
interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(displacement), original_control_point_displacements = fixed(originalDisplacements));
```
## Composite transform (Transform)
The generic SimpleITK transform class. This class can represent both a single transformation (global, local) and a composite transformation (multiple transformations applied one after the other). This is the output type returned by the SimpleITK registration framework.
The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.
Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform, while other regions are only affected by the global transformation.
The following code illustrates this, where the whole region is translated and subregions have different deformations.
```
# Global transformation.
translation = sitk.TranslationTransform(2,(1.0,0.0))
# Displacement in region 1.
displacement1 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement1.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement1.SetParameters(np.ones(len(displacement1.GetParameters())))
# Displacement in region 2.
displacement2 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [1.0,-3]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement2.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement2.SetParameters(-1.0*np.ones(len(displacement2.GetParameters())))
# Composite transform which applies the global and local transformations.
composite = sitk.Transform(translation)
composite.AddTransform(displacement1)
composite.AddTransform(displacement2)
# Apply the composite transformation to points in ([-1,-3],[3,1]) and
# display the deformation using a quiver plot.
# Generate points.
numSamplesX = 10
numSamplesY = 10
coordsX = np.linspace(-1.0, 3.0, numSamplesX)
coordsY = np.linspace(-3.0, 1.0, numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
# Transform points and compute deformation vectors.
pointsX = np.zeros(XX.shape)
pointsY = np.zeros(XX.shape)
for index, value in np.ndenumerate(XX):
px,py = composite.TransformPoint((value, YY[index]))
pointsX[index]=px - value
pointsY[index]=py - YY[index]
plt.quiver(XX, YY, pointsX, pointsY);
```
## Writing and Reading
SimpleITK.ReadTransform() returns a SimpleITK.Transform. The content of the file can be any of the SimpleITK transformations or a composite (set of transformations).
```
import os
# Create a 2D rigid transformation, write it to disk and read it back.
basic_transform = sitk.Euler2DTransform()
basic_transform.SetTranslation((1.0,2.0))
basic_transform.SetAngle(np.pi/2)
full_file_name = os.path.join(OUTPUT_DIR, 'euler2D.tfm')
sitk.WriteTransform(basic_transform, full_file_name)
# The ReadTransform function returns an sitk.Transform no matter the type of the transform
# found in the file (global, bounded, composite).
read_result = sitk.ReadTransform(full_file_name)
print('Different types: '+ str(type(read_result) != type(basic_transform)))
util.print_transformation_differences(basic_transform, read_result)
# Create a composite transform then write and read.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-10.0,-100.0]
field_spacing = [20.0/(field_size[0]-1),200.0/(field_size[1]-1)]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement.SetParameters(np.random.random(len(displacement.GetParameters())))
composite_transform = sitk.Transform(basic_transform)
composite_transform.AddTransform(displacement)
full_file_name = os.path.join(OUTPUT_DIR, 'composite.tfm')
sitk.WriteTransform(composite_transform, full_file_name)
read_result = sitk.ReadTransform(full_file_name)
util.print_transformation_differences(composite_transform, read_result)
```
<a href="02_images_and_resampling.ipynb"><h2 align=right>Next »</h2></a>
In this notebook, we are going to explore the sales information provided in this dataset.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
```
Let us start by importing some helper libraries and loading the dataset.
```
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
from brewer2mpl import qualitative
df= pd.read_csv("../input/Video_Games_Sales_as_at_22_Dec_2016.csv")
df.shape
```
Now that we know the shape of the dataset, let's have a peek at the data and try to find out if there are any missing values.
```
df.head()
```
The cell below shows the data type of the columns
```
df.info()
```
The cell below shows each column name and its number of missing values.
```
df.isnull().sum()
```
A list of all the columns in the dataset
```
df.columns.tolist()
```
Calculating the % of missing values
```
df_na= ( df.isnull().sum() / len(df) ) * 100
df_na= df_na.drop(df_na[df_na == 0].index).sort_values(ascending= False)
f, ax= plt.subplots(figsize=(12, 8))
plt.xticks(rotation='90')
sns.barplot(x=df_na.index, y=df_na.values)
ax.set(title='Missing Values Plot', ylabel='% Missing')
```
Unique Gaming platforms
```
df.Platform.unique()
#df.Platform.value_counts()
ssc = df.Platform.value_counts()
f, ax= plt.subplots(figsize=(12, 8))
plt.xticks(rotation='90')
sns.barplot(x=ssc.values, y=ssc.index, orient='h')
ax.set(title='Consoles by count', ylabel='Count')
f.tight_layout()
```
Dropping all rows with NA values
```
df_clean= df.dropna(axis=0)
df_clean.shape
ssc = df_clean.Platform.value_counts()
f, ax= plt.subplots(figsize=(12, 8))
plt.xticks(rotation='90')
sns.barplot(x=ssc.values, y=ssc.index, orient='h')
ax.set(title='Consoles by count after dropping NAs', ylabel='Count')
f.tight_layout()
# Convert User_Score from object to float.
# Work on an explicit copy so the assignment doesn't raise a SettingWithCopyWarning.
df_clean = df_clean.copy()
df_clean['User_Score'] = df_clean['User_Score'].astype('float')
```
Plot of the user score vs. the critic score of games. It appears the users and the critics agree on games with scores greater than 8.
```
sns.jointplot(x='User_Score', y='Critic_Score', data=df_clean, kind='hex', cmap='coolwarm', size=7)
```
Critic score vs. critic count. From this plot, we observe that few critics give scores above 80.
```
sns.jointplot(x='Critic_Score', y='Critic_Count', data=df_clean, kind='hex', cmap='plasma', size=7)
```
CORRELATION BETWEEN COLUMNS
```
stats=['Year_of_Release','NA_Sales', 'EU_Sales', 'JP_Sales', 'Other_Sales',
'Global_Sales', 'Critic_Score', 'Critic_Count', 'User_Score', 'User_Count',
'Rating']
corrmat = df_clean[stats].corr()
f, ax = plt.subplots(figsize=(10, 7))
plt.xticks(rotation='90')
plt.title('correlation between columns')
sns.heatmap(corrmat, square=True, linewidths=.5, annot=True)
```
Taking a look at Playstation
```
play= df_clean[(df_clean['Platform']== 'PS2') | (df_clean['Platform']== 'PS3')
| (df_clean['Platform']== 'PS')| (df_clean['Platform']== 'PS4')]
play.shape
```
PlayStation global sales, 1994-2016
```
sales_Play = play.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum()
# Capture the axes returned by plot() so the title and label land on this figure.
ax = sales_Play.unstack().plot(kind='bar', stacked=True, colormap='Oranges', grid=False)
ax.set(title='PlayStation global sales over the years', ylabel='Global sales (millions)')
```
Top selling genre for PlayStation
```
sales_Play= play.groupby(['Genre', 'Platform'])['Global_Sales'].sum()
sales_Play.unstack().plot(kind='bar',stacked=True, colormap= 'Oranges', grid=False)
```
Rating of the games made
```
sales_Play= play.groupby(['Rating', 'Platform'])['Global_Sales'].sum()
sales_Play.unstack().plot(kind='bar',stacked=True, colormap= 'Oranges', grid=False)
```
Taking a closer look at Xbox
```
xb= df_clean[(df_clean['Platform']== 'X360') | (df_clean['Platform']== 'XOne')
| (df_clean['Platform']== 'XB')]
xb.shape
```
Global sales of the Xbox consoles, 1994-2016
```
sales_xb= xb.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum()
sales_xb.unstack().plot(kind='bar', stacked=True, colormap='tab20', grid=False)  # 'Vega20' was renamed 'tab20' in newer matplotlib
```
Top-selling genre per Xbox console. The top seller is the shooter genre, which makes sense given the
Halo franchise.
```
sales_xb= xb.groupby(['Genre', 'Platform'])['Global_Sales'].sum()
sales_xb.unstack().plot(kind='bar', stacked=True, colormap='tab20', grid=False)  # 'Vega20' was renamed 'tab20' in newer matplotlib
```
Rating and global sales
```
sales_xb= xb.groupby(['Rating', 'Platform'])['Global_Sales'].sum()
sales_xb.unstack().plot(kind='bar', stacked=True, colormap='tab20', grid=False)  # 'Vega20' was renamed 'tab20' in newer matplotlib
```
Taking a closer look at Nintendo
```
nintendo= df_clean[(df_clean['Platform']== 'DS') | (df_clean['Platform']== 'Wii')
| (df_clean['Platform']== 'GC')| (df_clean['Platform']== 'GBA')
|(df_clean['Platform']== '3DS') | (df_clean['Platform']== 'WiiU')]
nintendo.shape
```
Platform and total global sales from 1994-2016
```
nintendo_sales= nintendo.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum()
nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False)
```
Genre and total sales per platform. Nintendo sells a lot of sports-oriented games, especially on the
Wii. The Wii U, however, is struggling in sales.
```
nintendo_sales= nintendo.groupby(['Genre', 'Platform'])['Global_Sales'].sum()
nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False)
```
Rating and total global sales. Nintendo's sales come mostly from games rated E (Everyone).
```
nintendo_sales= nintendo.groupby(['Rating', 'Platform'])['Global_Sales'].sum()
nintendo_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set1', grid=False)
current_gen= df_clean[(df_clean['Platform']== 'Wii') | (df_clean['Platform']== 'X360') |
(df_clean['Platform']== 'PS3')]
current_gen.shape
```
Comparing the top-selling platforms of the last generation
```
current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['Global_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False)
current_gen_sales= current_gen.groupby(['Genre', 'Platform'])['Global_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False)
current_gen_sales= current_gen.groupby(['Rating', 'Platform'])['Global_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Set2', grid=False)
```
Last-generation sales in North America
```
current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['NA_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False)
```
Last-generation sales in Japan
```
current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['JP_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False)
```
Last-generation sales in the EU
```
current_gen_sales= current_gen.groupby(['Year_of_Release', 'Platform'])['EU_Sales'].sum()
current_gen_sales.unstack().plot(kind='bar',stacked=True, colormap= 'Blues', grid=False)
```
|
github_jupyter
|
<!--NAVIGATION-->
< [Simple Line Plots](04.01-Simple-Line-Plots.ipynb) | [Contents](Index.ipynb) | [Visualizing Errors](04.03-Errorbars.ipynb) >
<a href="https://colab.research.google.com/github/wangyingsm/Python-Data-Science-Handbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Simple Scatter Plots
Another commonly used plot type is the simple scatter plot, a close cousin of the line plot.
Instead of points being joined by line segments, here the points are represented individually with a dot, circle, or other shape.
We'll start by setting up the notebook for plotting and importing the functions we will use:
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
```
## Scatter Plots with ``plt.plot``
In the previous section we looked at ``plt.plot``/``ax.plot`` to produce line plots.
It turns out that this same function can produce scatter plots as well:
```
x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, 'o', color='black');
```
The third argument in the function call is a character that represents the type of symbol used for the plotting. Just as you can specify options such as ``'-'``, ``'--'`` to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be seen in the documentation of ``plt.plot``, or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive, and we'll show a number of the more common ones here:
```
rng = np.random.RandomState(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
plt.plot(rng.rand(5), rng.rand(5), marker,
label="marker='{0}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
```
For even more possibilities, these character codes can be used together with line and color codes to plot points along with a line connecting them:
```
plt.plot(x, y, '-ok');
```
Additional keyword arguments to ``plt.plot`` specify a wide range of properties of the lines and markers:
```
plt.plot(x, y, '-p', color='gray',
markersize=15, linewidth=4,
markerfacecolor='white',
markeredgecolor='gray',
markeredgewidth=2)
plt.ylim(-1.2, 1.2);
```
This type of flexibility in the ``plt.plot`` function allows for a wide variety of possible visualization options.
For a full description of the options available, refer to the ``plt.plot`` documentation.
## Scatter Plots with ``plt.scatter``
A second, more powerful method of creating scatter plots is the ``plt.scatter`` function, which can be used very similarly to the ``plt.plot`` function:
```
plt.scatter(x, y, marker='o');
```
The primary difference of ``plt.scatter`` from ``plt.plot`` is that it can be used to create scatter plots where the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data.
Let's show this by creating a random scatter plot with points of many colors and sizes.
In order to better see the overlapping results, we'll also use the ``alpha`` keyword to adjust the transparency level:
```
rng = np.random.RandomState(0)
x = rng.randn(100)
y = rng.randn(100)
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar();  # show the color scale
```
Notice that the color argument is automatically mapped to a color scale (shown here by the ``colorbar()`` command), and that the size argument is given in pixels.
In this way, the color and size of points can be used to convey information in the visualization, in order to visualize multidimensional data.
For example, we might use the Iris data from Scikit-Learn, where each sample is one of three types of flowers that has had the size of its petals and sepals carefully measured:
```
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
```
We can see that this scatter plot has given us the ability to simultaneously explore four different dimensions of the data:
the (x, y) location of each point corresponds to the sepal length and width, the size of the point is related to the petal width, and the color is related to the particular species of flower.
Multicolor and multifeature scatter plots like this can be useful for both exploration and presentation of data.
## ``plot`` Versus ``scatter``: A Note on Efficiency
Aside from the different features available in ``plt.plot`` and ``plt.scatter``, why might you choose to use one over the other? While it doesn't matter as much for small amounts of data, as datasets get larger than a few thousand points, ``plt.plot`` can be noticeably more efficient than ``plt.scatter``.
The reason is that ``plt.scatter`` has the capability to render a different size and/or color for each point, so the renderer must do the extra work of constructing each point individually.
In ``plt.plot``, on the other hand, the points are always essentially clones of each other, so the work of determining the appearance of the points is done only once for the entire set of data.
For large datasets, the difference between these two can lead to vastly different performance, and for this reason, ``plt.plot`` should be preferred over ``plt.scatter`` for large datasets.
|
github_jupyter
|
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sqlalchemy import create_engine
import warnings
warnings.filterwarnings('ignore')
sns.set(style="whitegrid")
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'useducation'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
education_df = pd.read_sql_query('select * from useducation',con=engine)
engine.dispose()
fill_list = ["ENROLL", "TOTAL_REVENUE", "FEDERAL_REVENUE",
"STATE_REVENUE", "LOCAL_REVENUE", "TOTAL_EXPENDITURE",
"INSTRUCTION_EXPENDITURE", "SUPPORT_SERVICES_EXPENDITURE",
"OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE", "GRADES_PK_G",
"GRADES_KG_G", "GRADES_4_G", "GRADES_8_G", "GRADES_12_G", "GRADES_1_8_G",
"GRADES_9_12_G", "GRADES_ALL_G"]
states = education_df["STATE"].unique()
for state in states:
education_df.loc[education_df["STATE"] == state, fill_list] = education_df.loc[education_df["STATE"] == state, fill_list].interpolate()
education_df.dropna(inplace=True)
education_df["overall_score"] = (education_df["GRADES_4_G"]*((education_df["AVG_MATH_4_SCORE"] + education_df["AVG_READING_4_SCORE"])*0.5) + education_df["GRADES_8_G"]
* ((education_df["AVG_MATH_8_SCORE"] + education_df["AVG_READING_8_SCORE"])*0.5))/(education_df["GRADES_4_G"] + education_df["GRADES_8_G"])
education_df[["overall_score", "TOTAL_EXPENDITURE", "INSTRUCTION_EXPENDITURE",
"SUPPORT_SERVICES_EXPENDITURE", "OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE"]].corr()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
X = education_df[["INSTRUCTION_EXPENDITURE", "SUPPORT_SERVICES_EXPENDITURE",
"OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE"]]
X = StandardScaler().fit_transform(X)
sklearn_pca = PCA(n_components=1)
education_df["pca_1"] = sklearn_pca.fit_transform(X)
print(
'The percentage of total variance in the dataset explained by each',
'component from Sklearn PCA.\n',
sklearn_pca.explained_variance_ratio_)
education_df[["overall_score", "pca_1", "TOTAL_EXPENDITURE", "INSTRUCTION_EXPENDITURE",
"SUPPORT_SERVICES_EXPENDITURE", "OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE"]].corr()
```
The instruction expenditure variable is more strongly correlated with the overall score than the first principal component is, so using instruction expenditure directly makes more sense. PCA works best when the correlations between the variables are at most about 0.8. In our case, all of the expenditure variables are highly correlated with each other, which can result in unstable principal component estimates.
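To see why strong correlations crowd the variance into the first component, consider just two standardized variables with correlation r: the 2x2 correlation matrix [[1, r], [r, 1]] has eigenvalues 1+r and 1-r, so PC1 alone explains (1+r)/2 of the total variance. A minimal pure-Python sketch (the helper name is ours, not from scikit-learn):

```python
def pc1_variance_share(r):
    """Fraction of total variance explained by the first principal
    component of two standardized variables with correlation r.
    Eigenvalues of [[1, r], [r, 1]] are 1+r and 1-r; the trace is 2."""
    return (1 + abs(r)) / 2

for r in (0.5, 0.8, 0.95):
    print(r, pc1_variance_share(r))
# At r = 0.95, PC1 carries 97.5% of the variance and PC2 almost none,
# which is what makes the component loadings numerically unstable.
```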
|
github_jupyter
|
```
##### derived from https://github.com/bozhu/AES-Python
import copy
Sbox = (
0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5, 0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
0xCA, 0x82, 0xC9, 0x7D, 0xFA, 0x59, 0x47, 0xF0, 0xAD, 0xD4, 0xA2, 0xAF, 0x9C, 0xA4, 0x72, 0xC0,
0xB7, 0xFD, 0x93, 0x26, 0x36, 0x3F, 0xF7, 0xCC, 0x34, 0xA5, 0xE5, 0xF1, 0x71, 0xD8, 0x31, 0x15,
0x04, 0xC7, 0x23, 0xC3, 0x18, 0x96, 0x05, 0x9A, 0x07, 0x12, 0x80, 0xE2, 0xEB, 0x27, 0xB2, 0x75,
0x09, 0x83, 0x2C, 0x1A, 0x1B, 0x6E, 0x5A, 0xA0, 0x52, 0x3B, 0xD6, 0xB3, 0x29, 0xE3, 0x2F, 0x84,
0x53, 0xD1, 0x00, 0xED, 0x20, 0xFC, 0xB1, 0x5B, 0x6A, 0xCB, 0xBE, 0x39, 0x4A, 0x4C, 0x58, 0xCF,
0xD0, 0xEF, 0xAA, 0xFB, 0x43, 0x4D, 0x33, 0x85, 0x45, 0xF9, 0x02, 0x7F, 0x50, 0x3C, 0x9F, 0xA8,
0x51, 0xA3, 0x40, 0x8F, 0x92, 0x9D, 0x38, 0xF5, 0xBC, 0xB6, 0xDA, 0x21, 0x10, 0xFF, 0xF3, 0xD2,
0xCD, 0x0C, 0x13, 0xEC, 0x5F, 0x97, 0x44, 0x17, 0xC4, 0xA7, 0x7E, 0x3D, 0x64, 0x5D, 0x19, 0x73,
0x60, 0x81, 0x4F, 0xDC, 0x22, 0x2A, 0x90, 0x88, 0x46, 0xEE, 0xB8, 0x14, 0xDE, 0x5E, 0x0B, 0xDB,
0xE0, 0x32, 0x3A, 0x0A, 0x49, 0x06, 0x24, 0x5C, 0xC2, 0xD3, 0xAC, 0x62, 0x91, 0x95, 0xE4, 0x79,
0xE7, 0xC8, 0x37, 0x6D, 0x8D, 0xD5, 0x4E, 0xA9, 0x6C, 0x56, 0xF4, 0xEA, 0x65, 0x7A, 0xAE, 0x08,
0xBA, 0x78, 0x25, 0x2E, 0x1C, 0xA6, 0xB4, 0xC6, 0xE8, 0xDD, 0x74, 0x1F, 0x4B, 0xBD, 0x8B, 0x8A,
0x70, 0x3E, 0xB5, 0x66, 0x48, 0x03, 0xF6, 0x0E, 0x61, 0x35, 0x57, 0xB9, 0x86, 0xC1, 0x1D, 0x9E,
0xE1, 0xF8, 0x98, 0x11, 0x69, 0xD9, 0x8E, 0x94, 0x9B, 0x1E, 0x87, 0xE9, 0xCE, 0x55, 0x28, 0xDF,
0x8C, 0xA1, 0x89, 0x0D, 0xBF, 0xE6, 0x42, 0x68, 0x41, 0x99, 0x2D, 0x0F, 0xB0, 0x54, 0xBB, 0x16,
)
InvSbox = (
0x52, 0x09, 0x6A, 0xD5, 0x30, 0x36, 0xA5, 0x38, 0xBF, 0x40, 0xA3, 0x9E, 0x81, 0xF3, 0xD7, 0xFB,
0x7C, 0xE3, 0x39, 0x82, 0x9B, 0x2F, 0xFF, 0x87, 0x34, 0x8E, 0x43, 0x44, 0xC4, 0xDE, 0xE9, 0xCB,
0x54, 0x7B, 0x94, 0x32, 0xA6, 0xC2, 0x23, 0x3D, 0xEE, 0x4C, 0x95, 0x0B, 0x42, 0xFA, 0xC3, 0x4E,
0x08, 0x2E, 0xA1, 0x66, 0x28, 0xD9, 0x24, 0xB2, 0x76, 0x5B, 0xA2, 0x49, 0x6D, 0x8B, 0xD1, 0x25,
0x72, 0xF8, 0xF6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xD4, 0xA4, 0x5C, 0xCC, 0x5D, 0x65, 0xB6, 0x92,
0x6C, 0x70, 0x48, 0x50, 0xFD, 0xED, 0xB9, 0xDA, 0x5E, 0x15, 0x46, 0x57, 0xA7, 0x8D, 0x9D, 0x84,
0x90, 0xD8, 0xAB, 0x00, 0x8C, 0xBC, 0xD3, 0x0A, 0xF7, 0xE4, 0x58, 0x05, 0xB8, 0xB3, 0x45, 0x06,
0xD0, 0x2C, 0x1E, 0x8F, 0xCA, 0x3F, 0x0F, 0x02, 0xC1, 0xAF, 0xBD, 0x03, 0x01, 0x13, 0x8A, 0x6B,
0x3A, 0x91, 0x11, 0x41, 0x4F, 0x67, 0xDC, 0xEA, 0x97, 0xF2, 0xCF, 0xCE, 0xF0, 0xB4, 0xE6, 0x73,
0x96, 0xAC, 0x74, 0x22, 0xE7, 0xAD, 0x35, 0x85, 0xE2, 0xF9, 0x37, 0xE8, 0x1C, 0x75, 0xDF, 0x6E,
0x47, 0xF1, 0x1A, 0x71, 0x1D, 0x29, 0xC5, 0x89, 0x6F, 0xB7, 0x62, 0x0E, 0xAA, 0x18, 0xBE, 0x1B,
0xFC, 0x56, 0x3E, 0x4B, 0xC6, 0xD2, 0x79, 0x20, 0x9A, 0xDB, 0xC0, 0xFE, 0x78, 0xCD, 0x5A, 0xF4,
0x1F, 0xDD, 0xA8, 0x33, 0x88, 0x07, 0xC7, 0x31, 0xB1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xEC, 0x5F,
0x60, 0x51, 0x7F, 0xA9, 0x19, 0xB5, 0x4A, 0x0D, 0x2D, 0xE5, 0x7A, 0x9F, 0x93, 0xC9, 0x9C, 0xEF,
0xA0, 0xE0, 0x3B, 0x4D, 0xAE, 0x2A, 0xF5, 0xB0, 0xC8, 0xEB, 0xBB, 0x3C, 0x83, 0x53, 0x99, 0x61,
0x17, 0x2B, 0x04, 0x7E, 0xBA, 0x77, 0xD6, 0x26, 0xE1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0C, 0x7D,
)
Rcon = (
0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40,
0x80, 0x1B, 0x36, 0x6C, 0xD8, 0xAB, 0x4D, 0x9A,
0x2F, 0x5E, 0xBC, 0x63, 0xC6, 0x97, 0x35, 0x6A,
0xD4, 0xB3, 0x7D, 0xFA, 0xEF, 0xC5, 0x91, 0x39,
)
def text2matrix(text): ##
matrix = []
for i in range(16):
byte = (text >> (8 * (15 - i))) & 0xFF
if i % 4 == 0:
matrix.append([byte])
else:
matrix[i // 4].append(byte)
#print("{:32x}".format(text))
#print([[hex(a) for a in m] for m in matrix])
"""
A B C D E F G H I J K L M N O P
A B C D
E F G H
I J K L
M N O P
"""
return matrix
def matrix2text(matrix): ##
text = 0
for i in range(4):
for j in range(4):
text |= (matrix[i][j] << (120 - 8 * (4 * i + j)))
return text
class AES:
def __init__(self, master_key, iv=None, aes256=True):
self.num_rounds = 14 if aes256 else 10
self.change_key(master_key)
self.iv = iv
def matrix_xor_elementwise(self, s, k): ##
for i in range(4):
for j in range(4):
s[i][j] ^= k[i][j]
# shifts / movements only
def matrix_shift_rows(self, s): ##
s[0][1], s[1][1], s[2][1], s[3][1] = s[1][1], s[2][1], s[3][1], s[0][1]
s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
s[0][3], s[1][3], s[2][3], s[3][3] = s[3][3], s[0][3], s[1][3], s[2][3]
def matrix_unshift_rows(self, s): ##
s[0][1], s[1][1], s[2][1], s[3][1] = s[3][1], s[0][1], s[1][1], s[2][1]
s[0][2], s[1][2], s[2][2], s[3][2] = s[2][2], s[3][2], s[0][2], s[1][2]
s[0][3], s[1][3], s[2][3], s[3][3] = s[1][3], s[2][3], s[3][3], s[0][3]
def matrix_sbox_lookup(self, s): ##
for i in range(4):
for j in range(4):
s[i][j] = Sbox[s[i][j]]
def matrix_invsbox_lookup(self, s): ##
for i in range(4):
for j in range(4):
s[i][j] = InvSbox[s[i][j]]
def mix_columns(self, s):
xtime = lambda a: (((a << 1) ^ 0x1B) & 0xFF) if (a & 0x80) else (a << 1)
for i in range(4):
t = s[i][0] ^ s[i][1] ^ s[i][2] ^ s[i][3]
u = s[i][0]
s[i][0] ^= t ^ xtime(s[i][0] ^ s[i][1])
s[i][1] ^= t ^ xtime(s[i][1] ^ s[i][2])
s[i][2] ^= t ^ xtime(s[i][2] ^ s[i][3])
s[i][3] ^= t ^ xtime(s[i][3] ^ u)
def unmix_columns(self, s):
xtime = lambda a: (((a << 1) ^ 0x1B) & 0xFF) if (a & 0x80) else (a << 1)
for i in range(4):
u = xtime(xtime(s[i][0] ^ s[i][2]))
v = xtime(xtime(s[i][1] ^ s[i][3]))
s[i][0] ^= u
s[i][1] ^= v
s[i][2] ^= u
s[i][3] ^= v
self.mix_columns(s)
def encrypt(self, plaintext):
if self.iv is not None:
self.plain_state = text2matrix(plaintext ^ self.iv)
else:
self.plain_state = text2matrix(plaintext)
self.matrix_xor_elementwise(self.plain_state, self.round_keys[0])
#print([hex(x) for x in self.plain_state[0]])
for i in range(1, self.num_rounds+1):
## CYCLE 1
self.matrix_sbox_lookup(self.plain_state)
self.matrix_shift_rows(self.plain_state)
#print([hex(x) for x in self.plain_state[0]])
## CYCLE 2
if i != self.num_rounds: self.mix_columns(self.plain_state)
self.matrix_xor_elementwise(self.plain_state, self.round_keys[i])
#print([hex(x) for x in self.plain_state[0]])
if self.iv is not None:
self.iv = matrix2text(self.plain_state)
return matrix2text(self.plain_state)
def decrypt(self, ciphertext):
self.cipher_state = text2matrix(ciphertext)
#print(hex(self.cipher_state[3][3]))
for i in range(self.num_rounds, 0, -1):
## CYCLE 1
self.matrix_xor_elementwise(self.cipher_state, self.round_keys[i])
if i != self.num_rounds: self.unmix_columns(self.cipher_state)
#print(hex(self.cipher_state[3][3]))
## CYCLE 2
self.matrix_unshift_rows(self.cipher_state)
self.matrix_invsbox_lookup(self.cipher_state)
#print(hex(self.cipher_state[0][3]))
self.matrix_xor_elementwise(self.cipher_state, self.round_keys[0])
out = matrix2text(self.cipher_state)
if self.iv is not None:
out = out ^ self.iv
self.iv = ciphertext
return out
def change_key(self, master_key):
if (self.num_rounds == 14):
self.round_keys = [text2matrix(master_key >> 128), text2matrix(master_key & ((1 << 128) - 1))]
else:
self.round_keys = [text2matrix(master_key)]
last_key2 = self.round_keys[0]
last_key = self.round_keys[1] if (self.num_rounds == 14) else self.round_keys[0]
#print([hex(x) for x in last_key[0]])
for i in range(1, self.num_rounds - len(self.round_keys) + 2):
key = []
aes256_alt = (i % 2 == 0) and (self.num_rounds == 14)
# row 0
s0 = Sbox[last_key[3][0]]
s1 = Sbox[last_key[3][1]]
s2 = Sbox[last_key[3][2]]
s3 = Sbox[last_key[3][3]]
last2 = last_key2 if (self.num_rounds == 14) else last_key
round_const = Rcon[i // 2 + 1] if (self.num_rounds == 14) else Rcon[i]
r0b0 = last2[0][0] ^ (s1 if not aes256_alt else s0) ^ (round_const if not aes256_alt else 0)
r0b1 = last2[0][1] ^ (s2 if not aes256_alt else s1)
r0b2 = last2[0][2] ^ (s3 if not aes256_alt else s2)
r0b3 = last2[0][3] ^ (s0 if not aes256_alt else s3)
key.append([r0b0, r0b1, r0b2, r0b3])
# row 1
r1b0 = last2[1][0] ^ r0b0
r1b1 = last2[1][1] ^ r0b1
r1b2 = last2[1][2] ^ r0b2
r1b3 = last2[1][3] ^ r0b3
key.append([r1b0, r1b1, r1b2, r1b3])
# row 2
r2b0 = last2[2][0] ^ r1b0
r2b1 = last2[2][1] ^ r1b1
r2b2 = last2[2][2] ^ r1b2
r2b3 = last2[2][3] ^ r1b3
key.append([r2b0, r2b1, r2b2, r2b3])
# row 3
r3b0 = last2[3][0] ^ r2b0
r3b1 = last2[3][1] ^ r2b1
r3b2 = last2[3][2] ^ r2b2
r3b3 = last2[3][3] ^ r2b3
key.append([r3b0, r3b1, r3b2, r3b3])
self.round_keys.append(key)
last_key2 = last_key
last_key = key
#print([hex(x) for x in key[0]])
# 2d1541c695f88a16f8bfb5dbe3a95022
def packtext(s):
o = 0
while len(s) > 0:
o = (o << 8) | ord(s[0])
s = s[1:]
return o
def unpacktext(s):
o = ""
while s > 0:
o = chr(s & 0xFF) + o
s = s >> 8
return o
key = "abcd1234ABCD!@#$zyxwZYXW*1*2*3*4"
assert len(key) == 32
iv = "54123892jsdkjsdj"
assert len(iv) == 16
string1 = "helloworld123456"
string2 = "test string 1234"
print(key)
#print(ek(key.encode("ascii")))
a = AES(packtext(key), aes256=True)
enc = a.encrypt(packtext(string1))
dec = unpacktext(a.decrypt(enc))
print(packtext(key) >> 128, packtext(key) & ((1 << 128) - 1))
print(packtext(string1))
print(enc)
print(a.decrypt(enc))
print(unpacktext(a.decrypt(enc)))
print(hex(enc), dec)
assert enc == 0x2d1541c695f88a16f8bfb5dbe3a95022
##
#print(ek(key.encode("ascii")))
a = AES(packtext(key[:16]), aes256=False)
enc = a.encrypt(packtext(string1))
dec = unpacktext(a.decrypt(enc))
print(hex(enc), dec)
assert enc == 0x1708271a0a18bb2e15bd658805297b8d
(packtext(string1) >> 128),(packtext(string1) & ((1 << 128) - 1))
a = AES(packtext(key))
e1 = hex(a.encrypt(packtext(string1)))
#assert e1 == "0x1708271a0a18bb2e15bd658805297b8d"
e2 = hex(a.encrypt(packtext(string2)))
#assert e2 == "0x482ac205196a804865262a0044915738"
print(e1)
print(e2)
print(packtext(key), packtext(string1), int(e1, 0))
a = AES(packtext(key))
print(unpacktext(a.decrypt(int(e1, 0))))
#assert(unpacktext(a.decrypt(int(e1, 0))) == string1)
print(unpacktext(a.decrypt(int(e2, 0))))
#assert(unpacktext(a.decrypt(int(e2, 0))) == string2)
hex(30614575354952859734368363414031006605)
a = AES(packtext(key), packtext(iv))
e1 = hex(a.encrypt(packtext(string1)))
#assert e1 == "0x6cbaa5d41d87fc1cb2cde5f49c592554"
e2 = hex(a.encrypt(packtext(string2)))
#assert e2 == "0xb2b95376972f97140a84deda840144a2"
print(e1)
print(e2)
a = AES(packtext(key), packtext(iv))
dec1 = (unpacktext(a.decrypt(int(e1, 0))))
#assert(dec1 == string1)
print(dec1)
dec2 = (unpacktext(a.decrypt(int(e2, 0))))
#assert(dec2 == string2)
print(dec2)
from Crypto.Cipher import AES as AE
print(key, len(key.encode()))
cipher = AE.new(key.encode(), AE.MODE_ECB)
ciphertext = cipher.encrypt(string1 + string2)
print(ciphertext.hex()[:32])
print(ciphertext.hex()[32:])
plaintext = cipher.decrypt(ciphertext)
print(plaintext)
cipher = AE.new(key.encode(), AE.MODE_CBC, iv)
ciphertext = cipher.encrypt(string1+string2)
print(ciphertext.hex()[:32])
print(ciphertext.hex()[32:])
cipher = AE.new(key.encode(), AE.MODE_CBC, iv)
plaintext = cipher.decrypt(ciphertext)
print(plaintext)
import random
# Generate 256-bit encrypt test-cases
for _1 in range(10):
key = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(32)]) # AES256 key
print("setTopKey(BigInt(\"{}\"))".format(packtext(key) >> 128))
print("setKey(BigInt(\"{}\"))".format(packtext(key) & ((1 << 128) - 1)))
for _2 in range(10):
plaintext = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(16)])
iv = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(16)])
c1 = AES(packtext(key))
ct1 = c1.encrypt(packtext(plaintext))
print("runSingleEncryptTest(BigInt(\"{}\"), BigInt(\"{}\"))"
.format(packtext(plaintext), ct1))
c2 = AES(packtext(key), iv=packtext(iv))
ct2 = c2.encrypt(packtext(plaintext))
print("runSingleEncryptTest(BigInt(\"{}\"), BigInt(\"{}\"), iv=BigInt(\"{}\"))"
.format(packtext(plaintext), ct2, packtext(iv)))
import random
# Generate 256-bit decrypt test-cases
for _1 in range(10):
key = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(32)]) # AES256 key
print("setTopKey(BigInt(\"{}\"))".format(packtext(key) >> 128))
print("setKey(BigInt(\"{}\"))".format(packtext(key) & ((1 << 128) - 1)))
for _2 in range(10):
plaintext = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(16)])
iv = "".join([chr(random.randint(0x20, 0x7E)) for _ in range(16)])
c1 = AES(packtext(key))
ct1 = c1.encrypt(packtext(plaintext))
print("runSingleDecryptTest(BigInt(\"{}\"), BigInt(\"{}\"))"
.format(ct1, packtext(plaintext)))
c2 = AES(packtext(key), iv=packtext(iv))
ct2 = c2.encrypt(packtext(plaintext))
print("runSingleDecryptTest(BigInt(\"{}\"), BigInt(\"{}\"), iv=BigInt(\"{}\"))"
.format(ct2, packtext(plaintext), packtext(iv)))
```
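The `xtime` lambda inside `mix_columns` above is multiplication by 2 in AES's finite field GF(2^8), reducing by the polynomial 0x11B whenever the high bit overflows. Repeated doubling plus XOR gives multiplication by any constant; here is a self-contained sketch that reproduces the worked example {57}·{13} = {fe} from the FIPS-197 specification:

```python
def xtime(a):
    # multiply by 2 in GF(2^8), reducing by 0x1B when the high bit overflows
    return ((a << 1) ^ 0x1B) & 0xFF if a & 0x80 else a << 1

def gf_mul(a, b):
    # shift-and-add multiplication in GF(2^8): for each set bit of b,
    # XOR in the appropriately-doubled copy of a
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

print(hex(gf_mul(0x57, 0x13)))  # → 0xfe (the worked example in FIPS-197)
```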
|
github_jupyter
|
# CME 193 - Lecture 8
Here's what you've seen over the past 7 lectures:
* Python Language Basics
* NumPy - Arrays/Linear Algebra
* SciPy - Sparse Linear Algebra/Optimization
* DataFrames - Reading & Manipulating tabular data
* Scikit-learn - Machine Learning Models & use with data
* Ortools - More Optimization
You've now seen some tools for scientific computing in Python. How you add to them and what you do with them is up to you!

(Maybe you've also had a bit of [this](https://xkcd.com/1987/) experience)
## Today
1. We'll revisit object oriented programming in Python
2. We'll look at PyTorch (deep learning package)
# Object Oriented Programming - II
Recall some of the basic terminology of [object oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming)
* **Classes** are templates for objects (e.g., "the Integers" is a class)
* **Objects** are specific instances of a class (e.g., "2 is an integer")
* **Methods** are functions associated with objects of a class
* the "square of 2" may be expressed as `2.square()` (returns 4)
* the "addition of 1 to 2" may be expressed as `2.add(1)` (returns 3)
* the "name of 2" may be expressed as `2.name()` (returns "two")
Today we'll use an extended example of univariate functions
$$f:\mathbb{R} \to \mathbb{R}$$
to see how you might use object oriented programming for something like automatic differentiation, classical machine learning, or deep learning. Yes - you can maybe use a library like [Tensorflow](https://www.tensorflow.org/), [Keras](https://keras.io/), or [PyTorch](https://pytorch.org/), but it's more fun to understand how to do it yourself (and then maybe use someone else's fancy/high quality implementation).
First thing to remember is that everything in Python is an object, even functions.
```
def f(x):
return x
isinstance(f, object)
isinstance(isinstance, object)
isinstance(object, object)
```
Once you create an object, it lives somewhere on your computer:
```
id(f) # memory address on your computer
x = 1000
id(x)
```
You can check if two variables are referring to the same address using `is`
```
z = x
print("equality: {}".format(z == x))
print("same address: {}".format(z is x))
y = 1000
print("equality: {}".format(y == x))
print("same address: {}".format(y is x))
```
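One caveat worth knowing: CPython caches small integers (roughly -5 to 256) and may or may not reuse larger objects, so `is` on numbers or strings can give surprising, implementation-dependent answers. Identity checks are only reliable for true singletons such as `None`:

```python
a = 10 ** 4
b = 10 ** 4
print(a == b)   # always True: same value
# Whether `a is b` holds depends on interpreter caching;
# never use `is` to compare numbers or strings.

x = None
y = None
print(x is y)   # True: None is a singleton, so `is` is the idiomatic test
```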
## Univariate functions
Let's consider functions that send a real number to a real number
$$f:\mathbb{R} \to \mathbb{R}$$
Perhaps these functions have some parameters $\theta$, such as
$$f(x; \theta) = \theta x$$
(a linear function with slope $\theta$), or
$$g(x;\theta) = \theta_1 x + \theta_0$$
(linear function with slope $\theta_1$ and intercept $\theta_0$), or
$$h(x;\theta) = \theta_0 \exp(-\theta_1 x^2)$$
and so on. The point is that we can parameterize functions that have a similar form, and that there may be different numbers of parameters depending on the function.
What might we want to be able to do with a function?
1. Evaluate it (`y = f(x)`)
2. Print it as a string `f(x) = "3x + 2"`
3. Calculate a gradient
4. add/multiply/exponentiate...
We could think of doing the above with methods like `f.evaluate(x)` and `f.name()`, but we'll use the special methods `__call__` and `__str__` to be able to call `f(x)` and `format(f)` just as we would with built-in objects. You can see the different special methods available to overload [here](https://docs.python.org/3/reference/datamodel.html).
We're going to create an abstract function class that all the other classes we create will inherit from. If you haven't seen object oriented programming before, think of this as a way to promise all our functions will be able to do certain things (or throw an error). We'll provide default implementations for some methods (these will get filled in later), and have some methods that will need to be implemented differently for each sub-class.
For more on classes and inheritance, see [here](https://thepythonguru.com/python-inheritance-and-polymorphism/). The idea of giving objects methods with the same name is one form of [polymorphism](https://stackoverflow.com/questions/1031273/what-is-polymorphism-what-is-it-for-and-how-is-it-used) - we'll see how this is actually quite useful and allows you to do things that would be difficult without object-oriented programming.
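Before the abstract class, here is the smallest possible illustration of the two special methods we are about to rely on. (`Doubler` is a stand-alone toy example, not part of the function hierarchy below.)

```python
class Doubler:
    """Minimal demo of operator overloading via special methods."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        # lets instances be used like functions: d(4) instead of d.evaluate(4)
        return self.factor * x

    def __str__(self):
        # controls str(d), format(d), and print(d)
        return "{}*x".format(self.factor)

d = Doubler(3)
print(d(4))       # → 12
print(format(d))  # → 3*x
```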
```
class AbstractUnivariate:
def __init__(self):
raise NotImplementedError
def __call__(self, x):
raise NotImplementedError
def fmtstr(self, x="x"):
raise NotImplementedError
def __str__(self):
return self.fmtstr("x")
def gradient(self):
raise NotImplementedError
# the rest of these methods will be implemented when we write the appropriate functions
def __add__(self, other):
return SumFunction(self, other)
def __mul__(self, other):
return ProdFunction(self, other)
def __rmul__(self, other):
return ScaleFunction(other, self)
def __pow__(self, n):
return ComposeFunction(PowerFunction(1, n), self)
```
Now, to create a class that inherits from our abstract class, we just use the following syntax:
```
class ConstantFunction(AbstractUnivariate): # AbstractUnivariate indicates class to use for inheritance
def __init__(self, c):
self.c = c
f = ConstantFunction(3)
```
We can see there's a class hierarchy now:
```
print(isinstance(f, ConstantFunction))
print(isinstance(f, AbstractUnivariate))
print(isinstance(f, object))
```
If we haven't implemented the methods we promised we would, we'll get errors
```
f(1)
```
Let's go ahead and implement the promised methods:
```
class ConstantFunction(AbstractUnivariate):
def __init__(self, c):
self.c = c
def __call__(self, x):
return self.c
def fmtstr(self, x="x"):
return "{}".format(self.c)
# __str__(self) uses default from abstract class
def gradient(self):
return ConstantFunction(0)
# we inherit the other functions from the AbstractUnivariate class
f = ConstantFunction(3)
print(f)
print(f(1))
print(f(2))
print(f.gradient())
```
What does this object do? It represents the constant function
$$f: x \mapsto c$$
Let's do something a little less trivial. Now we'll implement
$$f: x \mapsto ax + b$$
```
class AffineFunction(AbstractUnivariate):
def __init__(self, a, b):
self.a = a
self.b = b
def __call__(self, x):
return self.a * x + self.b
def fmtstr(self, x="x"):
s = "{}".format(x)
if self.a != 1:
s = "{}*".format(self.a) + s
if self.b != 0:
s = s + " + {}".format(self.b)
return s
def gradient(self):
return ConstantFunction(self.a)
f = AffineFunction(1, 1)
print(f)
print(f(2))
print(f.gradient())
print(isinstance(f, AbstractUnivariate))
```
## Discussion
Let's take ourselves back to calculus. At some point you learned that you can take any function
$$y = ax + b$$
and if you know the values of $a$ and $b$, and someone gives you a value for $x$, you can calculate the value of $y$. At some later point you learned the rule
$$ \frac{d}{dx}(ax + b) = a$$
regardless of what values $a$ and $b$ take. The class `AffineFunction` defines the rules that you learned in math class.
When you write something like
```python
f = AffineFunction(1,1)
```
You are just choosing the values of $a$ and $b$. Now just like you would be able to use the rules of arithmetic and calculus to compute $y$ given $x$ or the gradient of the function, your computer can as well.
**Summary**
* Class definition gives mathematical rules for an equation of a certain form
* Instance of class is choice of constants for a function of that type
# Exercise 1
Implement classes for the following univariate function templates:
1. `QuadraticFunction` -- $f: x \mapsto a x^2 + bx + c$
2. `ExponentialFunction` -- $f: x \mapsto a e^{bx}$
3. `PowerFunction` -- $f: x \mapsto ax^n$
Make sure to return derivatives that are also `AbstractUnivariate` sub-classes. Which class can I use to represent $f: x \mapsto x^{-1}$?
```
# your code here
from math import * # for exp
```
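For reference, here is one possible sketch of these classes (not the official solution). So that the snippet runs on its own, a minimal stand-in for `AbstractUnivariate` and `AffineFunction` is stubbed at the top; in the notebook you would inherit from the full classes defined above instead.

```python
from math import exp

# Minimal stand-ins so this cell runs on its own; in the notebook these
# come from the AbstractUnivariate hierarchy defined above.
class AbstractUnivariate:
    def __str__(self):
        return self.fmtstr("x")

class AffineFunction(AbstractUnivariate):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a * x + self.b
    def fmtstr(self, x="x"):
        return "{}*{} + {}".format(self.a, x, self.b)
    def gradient(self):
        return AffineFunction(0, self.a)  # the constant a, written as 0*x + a

class QuadraticFunction(AbstractUnivariate):
    # f: x -> a*x^2 + b*x + c
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
    def __call__(self, x):
        return self.a * x**2 + self.b * x + self.c
    def fmtstr(self, x="x"):
        return "{}*{}^2 + {}*{} + {}".format(self.a, x, self.b, x, self.c)
    def gradient(self):
        return AffineFunction(2 * self.a, self.b)  # d/dx = 2a*x + b

class ExponentialFunction(AbstractUnivariate):
    # f: x -> a*e^(b*x)
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a * exp(self.b * x)
    def fmtstr(self, x="x"):
        return "{}*exp({}*{})".format(self.a, self.b, x)
    def gradient(self):
        return ExponentialFunction(self.a * self.b, self.b)  # d/dx = a*b*e^(b*x)

class PowerFunction(AbstractUnivariate):
    # f: x -> a*x^n
    def __init__(self, a, n):
        self.a, self.n = a, n
    def __call__(self, x):
        return self.a * x**self.n
    def fmtstr(self, x="x"):
        return "{}*{}^{}".format(self.a, x, self.n)
    def gradient(self):
        return PowerFunction(self.a * self.n, self.n - 1)  # d/dx = a*n*x^(n-1)

print(QuadraticFunction(1, 2, 3))
print(PowerFunction(1, -1)(2))
```

`PowerFunction(1, -1)` answers the last question: it represents $f: x \mapsto x^{-1}$.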
# More functions
We can do more than just encode standard functions - we can scale, add, multiply, and compose functions.
Scaling a function:
$$ g(x) = a \cdot f(x) $$
```
class ScaleFunction(AbstractUnivariate):
def __init__(self, a, f):
self.a = a
if isinstance(f, AbstractUnivariate):
self.f = f
else:
raise AssertionError("must input an AbstractUnivariate function")
def __call__(self, x):
return self.a * self.f(x)
def fmtstr(self, x="x"):
if self.a == 1:
return self.f.fmtstr(x)
else:
return "{}*({})".format(self.a, self.f.fmtstr(x))
def gradient(self):
return ScaleFunction(self.a, self.f.gradient())
f = ExponentialFunction(1, 2)
print(f)
g = ScaleFunction(2, f)
print(g)
print(g.gradient())
print(g(1))
```
Sum and product of two functions
$$ h(x) = f(x) + g(x)$$
$$ h(x) = f(x) * g(x)$$
```
class SumFunction(AbstractUnivariate):
def __init__(self, f, g):
if isinstance(f, AbstractUnivariate) and isinstance(g, AbstractUnivariate):
self.f = f
self.g = g
else:
raise AssertionError("must input AbstractUnivariate functions")
def __call__(self, x):
return self.f(x) + self.g(x)
def fmtstr(self, x="x"):
return "{} + {}".format(self.f.fmtstr(x), self.g.fmtstr(x))
def gradient(self):
return SumFunction(self.f.gradient(), self.g.gradient())
f = ExponentialFunction(1, 2)
g = AffineFunction(2, 1)
h = SumFunction(f, g)
print(h.fmtstr(x="y"))
print(h(-1))
print(h.gradient())
class ProdFunction(AbstractUnivariate):
def __init__(self, f, g):
if isinstance(f, AbstractUnivariate) and isinstance(g, AbstractUnivariate):
self.f = f
self.g = g
else:
raise AssertionError("must input AbstractUnivariate functions")
def __call__(self, x):
return self.f(x) * self.g(x)
def fmtstr(self, x="x"):
return "({}) * ({})".format(self.f.fmtstr(x=x), self.g.fmtstr(x=x))
# product rule (f*g)' = f'*g + f*g'
def gradient(self):
return SumFunction(ProdFunction(self.f.gradient(),self.g), ProdFunction(self.f, self.g.gradient()))
f = ExponentialFunction(1, 2)
g = AffineFunction(2, 1)
h = ProdFunction(f, g)
print(h)
print(h(-1))
print(h.gradient())
```
Compose Functions:
$$h(x) = (g \circ f)(x) = g(f(x))$$
```
class ComposeFunction(AbstractUnivariate):
def __init__(self, g, f):
if isinstance(f, AbstractUnivariate) and isinstance(g, AbstractUnivariate):
self.f = f
self.g = g
else:
raise AssertionError("must input AbstractUnivariate functions")
def __call__(self, x):
return self.g(self.f(x))
def fmtstr(self, x="x"):
return self.g.fmtstr(x="({})".format(self.f.fmtstr(x)))
# chain rule : g(f(x))' = g'(f(x))*f'(x)
def gradient(self):
return ProdFunction(ComposeFunction(self.g.gradient(), self.f), self.f.gradient())
f = PowerFunction(1,2)
print(f.fmtstr("x"))
g = ComposeFunction(f,f)
print(g)
h = ComposeFunction(g, f)
print(h)
print(h(2)) # 2^(2*2*2) = 2^8 = 256
f = PowerFunction(1,2)
g = ExponentialFunction(0.5, -1)
h = ComposeFunction(g, f)
print(h)
print(h.gradient())
```
## Operator overloading makes everything better
Recall how when we wrote the AbstractUnivariate class, we included some default methods
```python
class AbstractUnivariate:
# ...
# the rest of these methods will be implemented when we write the appropriate functions
def __add__(self, other):
return SumFunction(self, other)
def __mul__(self, other):
return ProdFunction(self, other)
def __rmul__(self, other):
return ScaleFunction(other, self)
def __pow__(self, n):
return ComposeFunction(PowerFunction(1, n), self)
```
If you think it is clunky to keep writing `SumFunction` or `ProdFunction` everywhere, you're not alone. Again, you can use the special methods above to [overload operators](https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types)
```
f = ExponentialFunction(1, 2)
g = AffineFunction(2, 1)
print("f = {}".format(f))
print("g = {}".format(g))
print("f + g = {}".format(f+g))
print("f * g = {}".format(f*g))
print("f^2 = {}".format(f**2))
print("2*g = {}".format(2*g))
f = ExponentialFunction(1, 2)
g = AffineFunction(2, 1)
h = f*g
print(h.gradient())
```
## What's going on?
Because we thought ahead to define addition, multiplication, scaling, and powers in our `AbstractUnivariate` class, every sub-class will implement those methods by default **without needing to write any extra code**.
If we hadn't done this, we would have had to copy and paste the same thing into every class definition to get the same behavior, **but we don't need to**. In fact, if we write a new basic univariate function class, e.g. `LogFunction`, we get addition, multiplication, etc., for free!
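For instance, a hypothetical `LogFunction` might look like the sketch below. A minimal base-class stub and `PowerFunction` are included so it runs standalone; in the notebook, inheriting from the full `AbstractUnivariate` would also give it `+`, `*`, and `**` for free.

```python
from math import log

# Minimal stand-ins so this cell runs on its own; in the notebook these
# come from the AbstractUnivariate hierarchy defined above.
class AbstractUnivariate:
    def __str__(self):
        return self.fmtstr("x")

class PowerFunction(AbstractUnivariate):
    def __init__(self, a, n):
        self.a, self.n = a, n
    def __call__(self, x):
        return self.a * x**self.n
    def fmtstr(self, x="x"):
        return "{}*{}^{}".format(self.a, x, self.n)
    def gradient(self):
        return PowerFunction(self.a * self.n, self.n - 1)

class LogFunction(AbstractUnivariate):
    # f: x -> a*log(x)  (natural logarithm)
    def __init__(self, a=1):
        self.a = a
    def __call__(self, x):
        return self.a * log(x)
    def fmtstr(self, x="x"):
        return "{}*log({})".format(self.a, x)
    def gradient(self):
        return PowerFunction(self.a, -1)  # d/dx a*log(x) = a/x

f = LogFunction(2)
print(f)             # 2*log(x)
print(f.gradient())  # 2*x^-1
```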
## Symbolic Functions
Just for fun, let's create an `AbstractUnivariate` sub-class, which just holds a placeholder symbolic function
```
class SymbolicFunction(AbstractUnivariate):
def __init__(self, name):
if isinstance(name, str):
self.name=name
else:
raise AssertionError("name must be string")
def __call__(self, x):
return "{}({})".format(self.name, x)
def fmtstr(self, x="x"):
return self.name + "({})".format(x)
    # the gradient of a symbolic f is just the symbol f'
def gradient(self):
return SymbolicFunction(self.name + "'")
f = SymbolicFunction("f")
print(f)
print(f.gradient())
g = SymbolicFunction("g")
print(g + f)
```
Now we can remind ourselves of the product rule and the chain rule (which we encoded in the `ProdFunction` and `ComposeFunction` classes)
```
f = SymbolicFunction("f")
g = SymbolicFunction("g")
print((f*g).gradient())
h = ComposeFunction(g, f)
print(h.gradient())
```
And we can derive the quotient rule
```
f = SymbolicFunction("f")
g = SymbolicFunction("g")
h = f * g**-1
print(h)
print(h.gradient())
```
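Expanding $h = f \cdot g^{-1}$ by hand with the product and chain rules gives the familiar form, matching what the code prints symbolically:

$$ h' = f' g^{-1} + f \cdot (-1) g^{-2} g' = \frac{f' g - f g'}{g^2} $$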
You can also add symbolic functions to non-symbolic ones:
```
f = SymbolicFunction("f")
g = AffineFunction(1, 2)
h = f + g
print(h)
print(h.gradient())
```
## Summary
You're now on your way to having your own automatic differentiation library! Or your own symbolic computation library! You can probably see lots of ways to extend and improve what you've seen here:
* Support Multivariate Functions
* Add more "basic functions" such as trig functions, etc.
* Reduce expressions when you are able to
* ...
Yes, there are many libraries that do this very thing. Keywords are "automatic differentiation" and "symbolic math". This sort of thing is used extensively in deep learning libraries, as well as optimization libraries.
* [Sympy](https://www.sympy.org/en/index.html) for symbolic computation
* [SciPy linear operators](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.LinearOperator.html) do something similar to HW1
* [Sage](https://www.sagemath.org/) does a lot of symbolic math using Python
* [Autodiff tools for Python](http://www.autodiff.org/?module=Tools&language=python)
* [Autograd](https://github.com/HIPS/autograd) package
* Most Deep learning libraries (see below) do some form of automatic differentiation
### How was Object Oriented Programming Useful?
**Class inheritance** allowed you to get operations like addition and multiplication for free once you defined the base class everything inherited from
**Polymorphism** enabled you to use any combination of `AbstractUnivariate` functions and still evaluate them, calculate derivatives, and format equations. Everyone played by the same rules.
**Encapsulation** let you interact with functions without worrying about how they are implemented under the hood.
If you think back to HW1, we implicitly used polymorphism in the power method function (e.g., matrix-vector multiply always uses `dot()` no matter which class we're using)
# Exercise 2
Ignoring our `SymbolicFunction` class, any sub-class of `AbstractUnivariate` is a real function $f:\mathbb{R} \to \mathbb{R}$ that we can evaluate using `f(x)` syntax. One thing that you may wish to do is find roots of your function: $\{x \mid f(x) = 0\}$.
One very classical algorithm for doing this is called [Newton's Method](https://en.wikipedia.org/wiki/Newton%27s_method), and has the basic pseudocode:
```
initialize x_0
while not converged:
x_{k+1} = x_k - f(x_k)/f'(x_k)
```
Write a function that implements Newton's method on any `AbstractUnivariate` function.
Hint: use the `gradient()` method to get a function for derivatives.
```
def find_root(f, x0=0.0, tol=1e-8):
if isinstance(f, SymbolicFunction):
raise AssertionError("can't handle symbolic input")
elif not isinstance(f, AbstractUnivariate):
raise AssertionError("Input must be AbstractUnivariate")
x = x0
# your code here
return x
```
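One way the iteration itself might look, sketched here with plain callables rather than `AbstractUnivariate` objects (in the exercise, `fprime` would come from `f.gradient()`):

```python
def newton(f, fprime, x0=0.0, tol=1e-8, maxiter=100):
    """Find a root of f by Newton's method (assumes fprime(x) != 0 near the root)."""
    x = x0
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:       # converged: f(x) is numerically zero
            break
        x = x - fx / fprime(x)  # x_{k+1} = x_k - f(x_k)/f'(x_k)
    return x

# root of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```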
# Deep Learning
After the first part of this lecture, you now have a pretty good idea of how to get started implementing a deep learning library. Recall that above we considered functions of the form
$$f(x; \theta): \mathbb{R} \to \mathbb{R}$$
To get to machine learning, you need to handle multivariate input and output
$$f(x; \theta):\mathbb{R}^p \to \mathbb{R}^k$$
You also need to be able to take the gradient of $f$ with respect to the parameters $\theta$ (which we didn't do in our `AbstractUnivariate` class, but is straightforward), and then you can do things like optimize a loss function using your favorite optimization algorithm.
In deep learning, we have the exact same setup
$$f(x; \theta):\mathbb{R}^p \to \mathbb{R}^k$$
What makes deep learning a "special case" of machine learning is that the function $f$ is the composition of several/many functions
$$f = f_n \circ f_{n-1} \circ \dots \circ f_1$$
This is what we mean by "layers", and you use chain rule to "backpropagate" gradients with respect to the parameters.
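That backpropagation step is just the chain rule applied repeatedly. A toy illustration with a two-function composition in plain Python (differentiating with respect to the input here for simplicity; gradients with respect to parameters work the same way, and this is not how any deep learning library actually implements it):

```python
import math

# f = f2 ∘ f1, with hand-written derivatives for each piece
def f1(x):  return x * x        # f1(x) = x^2
def df1(x): return 2 * x
def f2(h):  return math.sin(h)  # f2(h) = sin(h)
def df2(h): return math.cos(h)

x = 0.5
h = f1(x)                # forward pass: compute intermediate values
y = f2(h)
dy_dx = df2(h) * df1(x)  # backward pass: (f2 ∘ f1)'(x) = f2'(f1(x)) * f1'(x)
print(y, dy_dx)
```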
**Disclaimer** If you want to learn to use a deep learning library, you really should go through several tutorials and learn about the different functions that are used (and *why* they are used). This is beyond the scope of this course, but there are several courses at Stanford devoted to this.
## Deep Learning Libraries
Some popular libraries for deep learning are [Tensorflow](https://www.tensorflow.org/), [Keras](https://keras.io/), and [PyTorch](https://pytorch.org/). Each has its strengths and weaknesses. All of them do essentially the same thing: you define a function through composition using objects that are in many ways similar to what you just implemented. Then you choose a loss function and start optimizing the parameters in these functions using something like stochastic gradient descent.
We'll do an example in PyTorch, since it is higher-level than Tensorflow, and perhaps the most "Pythonic" of the libraries.
```bash
conda install pytorch pillow
```
## PyTorch
What's a tensor? Conceptually, it's identical to a NumPy array.
We'll consider the following network
$$ x \xrightarrow{w_1} h \to ReLU(h) \xrightarrow{w_2} y$$
where $x$ is a 500-dimensional vector, $h$ is a 100-dimensional "hidden layer", and $y$ is a 10-dimensional vector. $w_1$ and $w_2$ are linear transformations (matrices), and ReLU refers to the function
$$ReLU(x) = \begin{cases}
x & x > 0\\
0 & x \le 0
\end{cases}$$
```
import torch
from torch.autograd import Variable
dtype = torch.FloatTensor
# N - batch size
# D_in - x dimension
# H - h dimension
# D_out - y dimension
N, D_in, H, D_out = 64, 500, 100, 10
# Setting requires_grad=False indicates that we do not need to compute gradients w.r.t var
# during the backward pass.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad = False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad = False)
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Variables during the backward pass.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(10000):
# Forward pass: compute predicted y using operations on Variables;
y_pred = x.mm(w1).clamp(min=0).mm(w2) # clamp=ReLU
# Compute and print loss using operations on Variables.
# Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape (1,)
loss = (y_pred - y).pow(2).sum()
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Variables with requires_grad=True.
loss.backward()
# Update weights using gradient descent; w1.data and w2.data are Tensors,
# w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
# Tensors.
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after running the backward pass
w1.grad.data.zero_()
w2.grad.data.zero_()
print("Loss is: {}".format(loss.data.numpy()), end='\r')
print()
print("Final loss is {}".format(loss.item()))  # .item() extracts the Python scalar
```
## That's still fairly cumbersome
- When building neural networks, we arrange the computation into layers, some of which have learnable parameters that will be optimized during learning.
- Use the `torch.nn` package to define your layers
- Create custom networks by subclassing `nn.Module`
- Really clean code!
- Just create a class subclassing `nn.Module`
- specify layers in the `__init__` method
- define a forward pass with a `forward(self, x)` method
This is analogous to how we created specific sub-classes of `AbstractUnivariate`, and got a lot for free through class inheritance, polymorphism, abstraction, etc.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
class TwoLayerNet(nn.Module):
def __init__(self, D_in, H, D_out): # this defines the parameters, and stores them
        super(TwoLayerNet, self).__init__() # call the parent class (nn.Module) constructor
self.layer1 = nn.Linear(D_in, H) # initializes weights
self.layer2 = nn.Linear(H, D_out)
def forward(self, x): # this defines the composition of functions
out = F.relu(self.layer1(x))
out = self.layer2(out)
return out
# N is batch size; D_in is input dimension; H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out) # we create our function f:x \to y
# Construct our loss function and an Optimizer.
loss_fn = torch.nn.MSELoss(reduction='sum')  # sum of squared errors (size_average=False is deprecated)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(1000):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x) # evaluate the f(x)
# Compute and print loss
loss = loss_fn(y_pred, y) # evaluate the loss
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Final Loss is {}".format(loss.item()))
```
## Training a CNN for Image Classification
The following example is ported from [PyTorch's Documentation](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py)
The basic task of the network is to classify images in the [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), which has 10 classes:
```'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'```

```
import torch
import torchvision
import torchvision.transforms as transforms
# normalizes images to have pixel values between [-1,1]
# turns image into "tensor" to be fed to network
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# get data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
# Classes in the CIFAR10 dataset
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
To visualize images:
```
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
$$ x \xrightarrow{p_1 \circ r_1 \circ c_1} h_1 \xrightarrow{p_2 \circ r_2 \circ c_2} h_2 \xrightarrow{r_3 \circ f_1} h_3 \xrightarrow{r_4 \circ f_2} h_4 \xrightarrow{f_3} y$$
where $c$ refers to a convolution (a type of linear transformation), $r$ a ReLU, $p$ a pool, and $f$ a (fully connected) linear transformation. $x$ is an input image, and $y$ is a vector of length 10 which you can think of as "class probabilities".
You might also write the above expression as the following composition of functions:
$$y = f_3(r_4(f_2(r_3(f_1(p_2(r_2(c_2(p_1(r_1(c_1(x)))))))))))$$
How would you like to write out that chain rule by hand?
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
# composition of functions
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5) # flattens tensor
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
```
Now, we define a loss function and choose an optimization algorithm
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
Now, we can train the network
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward() # calculate gradient w.r.t. parameters
optimizer.step() # update parameters
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
To test the classifier, we'll load a few images from our test set
```
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Now we'll make predictions
```
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
To get accuracy over the whole test set (keep in mind, we expect 10% accuracy if we randomly guess a class):
```
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
## For more examples...
Check out the [PyTorch docs](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html).
## To add your own function to PyTorch's autograd library
If you want to add your own functions to PyTorch's autograd library, see [here](https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html).
You would write a class that inherits from `torch.autograd.Function`, and just need to implement `forward` and `backward` methods (conceptually similar to `eval` and `gradient`).
# Reminders
* This is the last class
* HW 2 is due - this is the last homework
* After today, office hours will be by appointment
# Course Conclusion
You've now seen the basics of Python along with some of the standard libraries for scientific computing and data science. Hopefully you now have some ideas of how you can use Python for whatever problems interest you, and some templates to get you started.
To continue on your Python journey, the best way to improve your skills and knowledge is to just try using it for whatever it is you're doing.
If you'd like to use Python for a specific task, and don't know how to get started, feel free to send me an email and I'll try to point you in a reasonable direction.
# Additional Resources
## Object Oriented Programming
* Beginner's guide to Object Oriented Programming in Python [here](https://stackabuse.com/object-oriented-programming-in-python/)
## Image Processing
In this class, we've worked a lot with tabular data. Another important type of data to be able to work with is image data.
Some options are
* [scikit-image](https://scikit-image.org/)
* [scipy](http://www.scipy-lectures.org/advanced/image_processing/index.html)
* [Pillow](https://pillow.readthedocs.io)
* [OpenCV](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_tutorials.html)
For many examples, see the [Scikit-image gallery](http://scikit-image.org/docs/stable/auto_examples/). Other libraries also have examples.
```
from utils import *
import gensim
from sklearn.mixture import BayesianGaussianMixture
import json
df = pd.read_csv("assets/finalproduct/finalproductDf")
df.drop(["Unnamed: 0"],axis=1, inplace=True)
id_to_auth = pickle_o.load("assets/dictionaries/id_to_all_auths_2004")
auth_to_id = pickle_o.load("assets/dictionaries/auths_to_all_id_2004")
Name = list(df.Author.values)
kth_id = [auth_to_id[a] for a in Name]
df_only_auth = pd.DataFrame(data={"Name":Name, "ID":kth_id})
df_only_auth.to_csv("assets/finalproduct/onlyAuthors.csv")
df_abs = pd.read_csv("assets/dataframes/all_authors_df_2004")
df_abs.drop(["Unnamed: 0"],axis=1, inplace=True)
df_abs.head()
df.head()
list_of_dict = list()
for a, d in zip(df.Author.values, df.Doc_id.values):
new_d = dict()
new_d["name"] = str(a)
abstracts = list()
all_d = d.split(":")
new_d["docid"] = all_d
for ad in all_d:
abst = df_abs[df_abs.Doc_id == int(ad)].Abstracts.values[0]
abstracts.append(abst)
new_d["abstracts"] = abstracts
list_of_dict.append(new_d)
list_of_dict[0]
with open('assets/finalproduct/auth_to_abs.json', 'w') as fp:
json.dump(list_of_dict, fp)
y = json.dumps(list_of_dict)  # `auth_to_abs` was undefined; serialize the list built above
# the result is a JSON string:
print(y)
author = df.Author.values
list_of_author= list()
for i, a in enumerate(author):
a_dict = dict()
a_dict["id"]= i
a_dict["name"]= a
list_of_author.append(a_dict)
with open('assets/finalproduct/list_of_author.json', 'w') as fp:
json.dump(list_of_author, fp)
nan_ix = [isinstance(i,float) for i in df.Department.values]
df.loc[nan_ix, "Department"] = "NaN"  # .loc avoids chained-assignment warnings
department = list(set(df.Department.values))
department = [make_name_noAscii(d) for d in department]
department_to_auth= list()
for i, d in enumerate(department):
author = list(df[df.Department == d].Author.values)
a_dict = dict()
a_dict["department"]= d
a_dict["name"]= author
department_to_auth.append(a_dict)
with open('assets/finalproduct/department_to_auth.json', 'w') as fp:
json.dump(department_to_auth, fp)
dep_list = list()
for i, d in enumerate(department):
a_dict = dict()
a_dict["id"]= i
a_dict["department"]= d
dep_list.append(a_dict)
with open('assets/finalproduct/departments.json', 'w') as fp:
json.dump(dep_list, fp)
kth_school_s = pd.Series(np.array(df.Department)).value_counts().sort_values(ascending=False)
plt.figure(figsize=(35,23))
ax = sns.barplot(kth_school_s.index,kth_school_s.values)
ax.set_xticklabels(ax.get_xticklabels(), rotation=50, ha="right",fontsize=30)
ax.set_title("KTH authors distribution(departments)",fontsize=50)
ax.set_ylabel("Counts",fontsize=30)
sns.set(font_scale=3)
plt.gcf().subplots_adjust(bottom=0.40)
#plt.show()
plt.savefig("assets/figures/articleDepartmentFinal")
len(kth_school_s.index)
39 - 5
kth_school_s.values.sum()
1744 - 884
```
```
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seawater as sw
import cartopy.crs as ccrs # import projections
import cartopy.feature as cf # import features
fig_dir='C:/Users/gentemann/Google Drive/f_drive/docs/projects/misst-arctic/Saildrone/'
icefile='C:/Users/gentemann/Google Drive/f_drive/docs/projects/misst-arctic/Ice Present.xlsx'
data_dir = 'F:/data/cruise_data/saildrone/2019_arctic/post_mission/'
adir_sbe='F:/data/cruise_data/saildrone/2019_arctic/sbe56/sd-'
data_dir_sbe_combined = 'F:/data/cruise_data/saildrone/2019_arctic/post_mission_combined_fluxes/'
ds = xr.open_mfdataset(data_dir_sbe_combined+'*.nc',combine='nested',concat_dim='trajectory').load()
ds
# calculate density at different depth
#import seawater as sw
# tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.TEMP_SBE37_MEAN)
# ds['density_MEAN']=xr.DataArray(tem,dims=('time'),coords={'time':ds.time})
#make diruanl plot
ds2=ds#.isel(trajectory=0)
xlon=ds2.lon
tdif=ds2.TEMP_CTD_RBR_MEAN-ds2.TEMP_SBE37_MEAN
time_offset_to_lmt=(xlon/360.)*24.*60
ds2['tlmt']=ds2.lon
for i in range(2):
ds2['tlmt'][i,:]= ds2.time.data+time_offset_to_lmt[i,:]*np.timedelta64(1,'m')# dt.timedelta(seconds=1)
tdif=ds2.TEMP_CTD_RBR_MEAN-ds2.TEMP_SBE37_MEAN
fig,(ax1,ax2) =plt.subplots(1,2)
for i in range(2):
cs=ax1.scatter(ds2.wspd_MEAN[i,:],tdif[i,:],c=ds2.time.dt.hour,s=.5)
ax1.set(xlabel='Wind Speed (ms$^{-1}$)', ylabel='RBR - SBE4 SST (K)')
ax1.set_xlim(0,15)
cbar = fig.colorbar(cs,orientation='horizontal',ax=ax1)
cbar.set_label('GMT Time (hrs)')
for i in range(2):
cs2=ax2.scatter(ds2.time.dt.hour,tdif[i,:],c=ds2.wspd_MEAN[i,:],s=.5)
ax2.set(xlabel='GMT (hr)')
cbar = fig.colorbar(cs2,orientation='horizontal',ax=ax2)
cbar.set_label('Wind Speed (ms$^{-1}$)')
fig.savefig(fig_dir+'figs/temp_buld_dw_data.png')
tdif=ds2.TEMP_CTD_RBR_MEAN-ds2.TEMP_SBE37_MEAN
fig,(ax1,ax2) =plt.subplots(1,2)
cs=ax1.scatter(ds2.wspd_MEAN[0,:],tdif[0,:],c=ds2.time.dt.hour,s=.5)  # trajectory 0 (`i` was a stale loop index)
ax1.set(xlabel='Wind Speed (ms$^{-1}$)', ylabel='RBR - SBE4 SST (K)')
ax1.set_xlim(0,15)
cbar = fig.colorbar(cs,orientation='horizontal',ax=ax1)
cbar.set_label('GMT Time (hrs)')
cs2=ax2.scatter(ds2.time.dt.hour,tdif[0,:],c=ds2.wspd_MEAN[0,:],s=.5)
ax2.set(xlabel='GMT (hr)')
cbar = fig.colorbar(cs2,orientation='horizontal',ax=ax2)
cbar.set_label('Wind Speed (ms$^{-1}$)')
fig.savefig(fig_dir+'figs/temp_buld_dw_data36.png')
tdif=ds2.TEMP_CTD_RBR_MEAN-ds2.TEMP_SBE37_MEAN
fig,(ax1,ax2) =plt.subplots(1,2)
cs=ax1.scatter(ds2.wspd_MEAN[1,:],tdif[1,:],c=ds2.time.dt.hour,s=.5)  # trajectory 1 (`i` was a stale loop index)
ax1.set(xlabel='Wind Speed (ms$^{-1}$)', ylabel='RBR - SBE4 SST (K)')
ax1.set_xlim(0,15)
cbar = fig.colorbar(cs,orientation='horizontal',ax=ax1)
cbar.set_label('GMT Time (hrs)')
cs2=ax2.scatter(ds2.time.dt.hour,tdif[1,:],c=ds2.wspd_MEAN[1,:],s=.5)
ax2.set(xlabel='GMT (hr)')
cbar = fig.colorbar(cs2,orientation='horizontal',ax=ax2)
cbar.set_label('Wind Speed (ms$^{-1}$)')
fig.savefig(fig_dir+'figs/temp_buld_dw_data37.png')
tdif=ds2.TEMP_CTD_RBR_MEAN-ds2.sea_water_temperature_01_mean
fig,(ax1,ax2) =plt.subplots(1,2)
for i in range(2):
cs=ax1.scatter(ds2.wspd_MEAN[i,:],tdif[i,:],c=ds2.time.dt.hour,s=.5)
ax1.set(xlabel='Wind Speed (ms$^{-1}$)', ylabel='RBR - SBE4 SST (K)')
ax1.set_xlim(0,15)
cbar = fig.colorbar(cs,orientation='horizontal',ax=ax1)
cbar.set_label('GMT Time (hrs)')
for i in range(2):
cs2=ax2.scatter(ds2.time.dt.hour,tdif[i,:],c=ds2.wspd_MEAN[i,:],s=.5)
ax2.set(xlabel='GMT (hr)')
cbar = fig.colorbar(cs2,orientation='horizontal',ax=ax2)
cbar.set_label('Wind Speed (ms$^{-1}$)')
fig.savefig(fig_dir+'figs/temp_rbr-sbe-buld_dw_data36.png')
tdif=ds2.TEMP_SBE37_MEAN-ds2.sea_water_temperature_01_mean
fig,(ax1,ax2) =plt.subplots(1,2)
for i in range(2):
cs=ax1.scatter(ds2.wspd_MEAN[i,:],tdif[i,:],c=ds2.time.dt.hour,s=.5)
ax1.set(xlabel='Wind Speed (ms$^{-1}$)', ylabel='SBE37 - SBE1 SST (K)')
ax1.set_xlim(0,15)
cbar = fig.colorbar(cs,orientation='horizontal',ax=ax1)
cbar.set_label('GMT Time (hrs)')
for i in range(2):
cs2=ax2.scatter(ds2.time.dt.hour,tdif[i,:],c=ds2.wspd_MEAN[i,:],s=.5)
ax2.set(xlabel='GMT (hr)')
cbar = fig.colorbar(cs2,orientation='horizontal',ax=ax2)
cbar.set_label('Wind Speed (ms$^{-1}$)')
fig.savefig(fig_dir+'figs/temp_sbe-sbe-buld_dw_data.png')
plt.scatter(ds2.wspd_MEAN,ds2.sea_water_temperature_01_std)
plt.scatter(ds2.wspd_MEAN,ds2.TEMP_SBE37_STDDEV)
#ICE VERIFIED FROM CAMERA
t1='2019-06-22T14'
t2='2019-06-23T00'
#(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
#(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_01_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
#(ds2.wspd_MEAN[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
plt.legend()
#Differences due to strong gradients in area and maybe shallow fresh layer
#surface is COOLER than at depth
#salinity drops significantly
#deeper temperatures warmer from sbe56 05 as compared to sbe01
t1='2019-07-17T00'
t2='2019-07-18T00'
#(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
#(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe00')
(ds2.sea_water_temperature_05_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe05')
(ds2.SAL_SBE37_MEAN[0,:]-28).sel(time=slice(t1,t2)).plot(label='salinity')
#(ds2.wspd_MEAN[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
plt.legend()
#Differences due to strong gradients in area and maybe shallow fresh layer
#surface is COOLER than at depth
#salinity drops significantly
#deeper temperatures warmer from sbe56 05 as compared to sbe01
import seawater as sw
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_00_mean)
ds['density_MEAN_00']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_01_mean)
ds['density_MEAN_01']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_02_mean)
ds['density_MEAN_02']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_04_mean)
ds['density_MEAN_04']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_05_mean)
ds['density_MEAN_05']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
tem=sw.dens0(ds.SAL_SBE37_MEAN,ds.sea_water_temperature_06_mean)
ds['density_MEAN_06']=xr.DataArray(tem,dims=('trajectory','time'),coords={'trajectory':ds.trajectory,'time':ds.time})
t1='2019-07-17T00'
t2='2019-07-18T00'
#(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
#(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds.density_MEAN[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='sbe37')
(ds.density_MEAN_00[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='00')
(ds.density_MEAN_02[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='02')
(ds.density_MEAN_04[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='04')
(ds.density_MEAN_05[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='05')
#(ds2.wspd_MEAN[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
plt.legend()
t1='2019-10-01'
t2='2019-10-11'
(ds.density_MEAN[0,:]-ds.density_MEAN_06[0,:]).sel(time=slice(t1,t2)).plot(label='den')
t1='2019-07-04T18'
t2='2019-07-05'
(ds.sea_water_temperature_00_mean[0,:]-ds.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00 - s06')
#(ds.sea_water_temperature_05_mean[0,:]-ds.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s05 - s06')
(ds.SAL_SBE37_MEAN[0,:]).sel(time=slice(t1,t2)).plot(label='salinity')
#(ds.sea_water_temperature_05_mean[0,:]-ds.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='den')
#tdif=ds2.TEMP_SBE37_MEAN-ds2.sea_water_temperature_01_mean
t1='2019-07-10T00'
t2='2019-07-12T00'
#(ds2.TEMP_AIR_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='air')
(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
(ds2.sea_water_temperature_02_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s02')
(ds2.sea_water_temperature_04_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s04')
plt.legend()
#tdif=ds2.TEMP_SBE37_MEAN-ds2.sea_water_temperature_01_mean
t1='2019-07-08T18'
t2='2019-07-10T00'
(ds2.TEMP_AIR_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='air')
(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
(ds2.sea_water_temperature_02_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s02')
(ds2.sea_water_temperature_04_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s04')
plt.legend()
#tdif=ds2.TEMP_SBE37_MEAN-ds2.sea_water_temperature_01_mean
t1='2019-06-28T12'
t2='2019-06-29T12'
(ds2.TEMP_AIR_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='air')
(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
(ds2.sea_water_temperature_02_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s02')
(ds2.sea_water_temperature_04_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s04')
plt.legend()
#tdif=ds2.TEMP_SBE37_MEAN-ds2.sea_water_temperature_01_mean
t1='2019-06-05T18'
t2='2019-06-06T05'
(ds2.TEMP_AIR_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='air')
(ds2.sea_water_temperature_00_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s00')
(ds2.sea_water_temperature_01_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s01')
(ds2.TEMP_SBE37_MEAN[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='sbe')
(ds2.sea_water_temperature_02_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s02')
(ds2.sea_water_temperature_04_mean[0,:]-ds2.sea_water_temperature_06_mean[0,:]).sel(time=slice(t1,t2)).plot(label='s04')
plt.legend()
# mean, std, and valid-sample count of the difference between each
# flow-through sensor and each SBE56 level
for lev in ['00', '01', '02', '04', '05', '06']:
    for sensor in ['TEMP_SBE37_MEAN', 'TEMP_CTD_RBR_MEAN', 'TEMP_O2_RBR_MEAN']:
        tdif = ds[sensor] - ds['sea_water_temperature_' + lev + '_mean']
        print(sensor, lev, tdif.mean('time').data, tdif.std('time').data,
              np.isfinite(tdif).sum('time').data)
```
# PLOT DIURNAL WARMING
```
ds10=ds.isel(trajectory=0).resample(time='10min').mean()
plt.figure(figsize=(12,6))
subset=ds10.sel(time=slice('2019-06-15T08','2019-06-16'))
for i in range(2):
var='sea_water_temperature_'+str(i).zfill(2)+'_mean'
lvar=str(i).zfill(2)
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
var='TEMP_SBE37_MEAN'
lvar='SBE37'
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
for i in range(2,7):
var='sea_water_temperature_'+str(i).zfill(2)+'_mean'
lvar=str(i).zfill(2)
if i==3:
continue
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
plt.legend()
plt.ylabel('$\Delta$ T (K)')
plt.xlabel('Time (GMT)')
plt.savefig(fig_dir+'figs/diurnal36_06-15.png')
plt.figure(figsize=(12,6))
plt.plot(subset.time,subset.TEMP_AIR_MEAN-subset.sea_water_temperature_00_mean,label='air - s00')
plt.figure(figsize=(12,6))
subset=ds10.sel(time=slice('2019-07-08T12','2019-07-10T12'))
for i in range(2):
var='sea_water_temperature_'+str(i).zfill(2)+'_mean'
lvar=str(i).zfill(2)
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
var='TEMP_SBE37_MEAN'
lvar='SBE37'
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
for i in range(2,7):
var='sea_water_temperature_'+str(i).zfill(2)+'_mean'
lvar=str(i).zfill(2)
if i==3:
continue
plt.plot(subset.time,subset[var]-subset.sea_water_temperature_06_mean,label=lvar,lw=3)
plt.legend()
plt.ylabel('$\Delta$ T (K)')
plt.xlabel('Time (GMT)')
plt.savefig(fig_dir+'figs/diurnal36_07-08.png')
plt.figure(figsize=(12,6))
plt.plot(subset.time,subset.TEMP_AIR_MEAN-subset.sea_water_temperature_00_mean,label='air - s00')
plt.figure(figsize=(12,6))
subset=ds10.sel(time=slice('2019-05-15T12','2019-09-10T12'))
plt.plot(subset.time,subset.TEMP_AIR_MEAN-subset.sea_water_temperature_00_mean,label='$\Delta$T$_{air-sea}$')
plt.plot(subset.time,subset.sea_water_temperature_00_mean-subset.sea_water_temperature_06_mean,label='$\Delta$T$_{dw}$')
plt.legend()
plt.ylabel('$\Delta$ T (K)')
plt.xlabel('Time (GMT)')
plt.savefig(fig_dir+'figs/diurnal36_airseatemp.png')
subset=ds.sel(time=slice('2019-07-07','2019-07-11'))
tdif=subset.sea_water_temperature_00_mean-subset.sea_water_temperature_06_mean
tdif[0,:].plot()
fig = plt.figure(figsize=(8,15))
ax = plt.axes(projection = ccrs.NorthPolarStereo(central_longitude=180.0)) # create a set of axes with North Polar Stereographic projection
for i in range(1):
ds2 = ds.isel(trajectory=i).sel(time=slice('2019-05-01','2019-09-15'))
im2=ax.quiver(ds2.lon[::200].data,
ds2.lat[::200].data,
ds2.UWND_MEAN[::200].data,
ds2.VWND_MEAN[::200].data,
scale=140,transform=ccrs.PlateCarree())
im=ax.scatter(ds2.lon,ds2.lat,
c=ds2.TEMP_AIR_MEAN-ds2.sea_water_temperature_00_mean,
s=.15,transform=ccrs.PlateCarree(),label=ds.trajectory[i].data,
cmap='seismic',vmin=-2,vmax=2)
ax.coastlines(resolution='10m')
ax.set_extent([-180,-158,68,77])
ax.legend()
cax = fig.add_axes([0.45, 0.17, 0.3, 0.02])
cbar = fig.colorbar(im,cax=cax, orientation='horizontal')
cbar.set_label('T$_{air}$ - SST ($^\circ$C)')
fig.savefig(fig_dir+'figs/map_nasa_data_air-sbe5600.png')
fig = plt.figure(figsize=(8,15))
ax = plt.axes(projection = ccrs.NorthPolarStereo(central_longitude=180.0)) # create a set of axes with North Polar Stereographic projection
for i in range(1):
ds2 = ds.isel(trajectory=i).sel(time=slice('2019-06-15','2019-06-16'))
im2=ax.quiver(ds2.lon[::100].data,
ds2.lat[::100].data,
ds2.UWND_MEAN[::100].data,
ds2.VWND_MEAN[::100].data,
scale=20,transform=ccrs.PlateCarree())
im=ax.scatter(ds2.lon,ds2.lat,
c=ds2.TEMP_AIR_MEAN-ds2.sea_water_temperature_00_mean,
s=.15,transform=ccrs.PlateCarree(),label=ds.trajectory[i].data,
cmap='seismic',vmin=-2,vmax=2)
ax.coastlines(resolution='10m')
ax.set_extent([-175,-158,68,72])
ax.legend()
cax = fig.add_axes([0.45, 0.17, 0.3, 0.02])
cbar = fig.colorbar(im,cax=cax, orientation='horizontal')
cbar.set_label('T$_{air}$ - SST ($^\circ$C)')
fig.savefig(fig_dir+'figs/map_nasa_data_air-sbe5600-06-15.png')
fig = plt.figure(figsize=(8,15))
ax = plt.axes(projection = ccrs.NorthPolarStereo(central_longitude=180.0)) # create a set of axes with North Polar Stereographic projection
for i in range(1):
ds2 = ds.isel(trajectory=i).sel(time=slice('2019-07-08','2019-07-10'))
im2=ax.quiver(ds2.lon[::100].data,
ds2.lat[::100].data,
ds2.UWND_MEAN[::100].data,
ds2.VWND_MEAN[::100].data,
scale=100,transform=ccrs.PlateCarree())
im=ax.scatter(ds2.lon,ds2.lat,
c=ds2.sea_water_temperature_00_mean-ds2.sea_water_temperature_06_mean,
s=.15,transform=ccrs.PlateCarree(),label=ds.trajectory[i].data,
cmap='seismic',vmin=-2,vmax=2)
ax.coastlines(resolution='10m')
ax.set_extent([-173,-160,70,71])
ax.legend()
cax = fig.add_axes([0.45, 0.17, 0.3, 0.02])
cbar = fig.colorbar(im,cax=cax, orientation='horizontal')
cbar.set_label('$\Delta$SST$_{00-06}$ ($^\circ$C)')
fig.savefig(fig_dir+'figs/map_nasa_data_air-sbe5600-07-10.png')
plt.quiver(ds2.lon[::100].data,
ds2.lat[::100].data,
ds2.UWND_MEAN[::100].data,
ds2.VWND_MEAN[::100].data,
scale=50)
plt.scatter(ds2.lon,ds2.lat,
c=ds2.sea_water_temperature_00_mean-ds2.sea_water_temperature_06_mean,
s=.15,
cmap='seismic',vmin=-2,vmax=2)
%matplotlib inline
import sys
sys.path.append('./../../flux/')
from coare3 import coare3
coare3
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
np.random.seed(42)
from sklearn.metrics import mean_squared_error
import time
names = ['user_id', 'movie_id', 'rating', 'timestamp']
df = pd.read_csv('./ml-100k/u.data', sep='\t', names=names)
print(df.head())
print(df.shape)
n_users = df["user_id"].unique().shape[0]
n_movies = df["movie_id"].unique().shape[0]
ratings = np.zeros((n_users, n_movies))
for row in df.itertuples():
ratings[row[1] - 1, row[2] - 1] = row[3]
print(f"ratings = {ratings}")
W = ratings.copy()
W[W > 0] = 1
print(f"W = {W}")
print(W.shape)
def train_test_split(ratings, s=100, r=1000):
test = np.zeros(ratings.shape)
train = ratings.copy()
found = False
l = []
for movie in range(ratings.shape[1]):
l.append(np.sum(ratings[:, movie] != 0))
l = np.array(l)
top_r = np.argsort(-l)[:r]
while not found:
    # reset the split before each attempt so a failed attempt
    # doesn't accumulate extra zeroed-out entries in train
    train = ratings.copy()
    test = np.zeros(ratings.shape)
    for user in range(ratings.shape[0]):
        # sample s test movies per user from the r most-rated movies
        test_ratings = np.random.choice(top_r, size=s, replace=False)
        train[user, test_ratings] = 0.
        test[user, test_ratings] = ratings[user, test_ratings]
    found = True
for movie in range(ratings.shape[1]):
if np.all(train[:, movie] == 0):
found = False
break
mean_imputed_ratings = train.copy()
for col in range(train.shape[1]):
non_zero_cols = train[train[:, col] != 0, col]
mean_imputed_ratings[:, col][test[:, col] != 0] = non_zero_cols.mean()
# Check if train and test sets are disjoint
assert(np.all((train * test) == 0))
return train, test, mean_imputed_ratings
def als_step(ratings, W, latent, fixed, lmd, cat="user", basic=False):
n, k, d = latent.shape[0], latent.shape[1], fixed.shape[0]
lambdaI = lmd * np.eye(k)
if not basic:
for i in range(latent.shape[0]):
if cat == "user":
W_i, x_i = W[i, :].reshape(1, W[i, :].shape[0]), ratings[i, :]
elif cat == "movie":
W_i, x_i = W[:, i].reshape(1, W[:, i].shape[0]), ratings[:, i]
latent[i, :] = np.linalg.solve(((fixed.T * W_i).dot(fixed) + lambdaI), (fixed.T * W_i).dot(x_i))
else:
fTf = fixed.T.dot(fixed)
for i in range(latent.shape[0]):
if cat == "user":
x_i = ratings[i, :]
elif cat == "movie":
x_i = ratings[:, i]
latent[i, :] = np.linalg.solve((fTf + lambdaI), fixed.T.dot(x_i))
def update_u_v(ratings, W, users, movies, epochs=10, n_factors=5, lmd=10, basic=False, debug=False):
epoch = 0
print(f"Incremental epochs = {epochs}")
while epoch < epochs:
als_step(ratings, W, users, movies, lmd, "user", basic)
als_step(ratings, W, movies, users, lmd, "movie", basic)
epoch += 1
def get_predictions(users, movies):
predictions = np.zeros((users.shape[0], movies.shape[0]))
for i in range(users.shape[0]):
for j in range(movies.shape[0]):
predictions[i, j] = users[i, :].dot(movies[j, :])
return predictions
def get_mse(predictions, truths, tests=None):
if tests is not None:
non_zero_predictions = predictions[tests.nonzero()].flatten()
non_zero_truths = truths[tests.nonzero()].flatten()
else:
non_zero_predictions = predictions[truths.nonzero()].flatten()
non_zero_truths = truths[truths.nonzero()].flatten()
return mean_squared_error(non_zero_predictions, non_zero_truths)
def get_best_hyperparameters(train, test, mean_imputed_ratings, epochs_list, k_list=[40], lmd_list=[0.1], basic=False, debug=False):
start_time = time.time()
epochs_list.sort()
n, d = train.shape
best_hyper_and_error = {}
best_hyper_and_error["k"] = k_list[0]
best_hyper_and_error["lambda"] = lmd_list[0]
best_hyper_and_error["epochs"] = 0
best_hyper_and_error["train_error"] = np.inf
best_hyper_and_error["test_error"] = np.inf
best_hyper_and_error["mean_error"] = np.inf
W = train.copy()
W[W > 0] = 1
for k in k_list:
for lmd in lmd_list:
print(f"k = {k} lambda = {lmd}")
train_error = []
test_error = []
mean_error = []
users = np.random.random((n, k))
movies = np.random.random((d, k))
prev = 0
for (i, epochs) in enumerate(epochs_list):
if debug:
print(f"Total epochs = {epochs}")
update_u_v(train, W, users, movies, epochs - prev, k, lmd, basic, debug)
predictions = get_predictions(users, movies)
train_error.append(get_mse(predictions, train))
test_error.append(get_mse(predictions, test))
mean_error.append(get_mse(predictions, mean_imputed_ratings, test))
if debug:
print(f"Train error = {train_error[-1]}")
print(f"Test error = {test_error[-1]}")
print(f"Mean error = {mean_error[-1]}")
prev = epochs
min_test_error_index = np.argmin(test_error)
if test_error[min_test_error_index] < best_hyper_and_error["test_error"]:
best_hyper_and_error["k"] = k
best_hyper_and_error["lambda"] = lmd
best_hyper_and_error["epochs"] = epochs_list[min_test_error_index]
best_hyper_and_error["train_error"] = train_error[min_test_error_index]
best_hyper_and_error["test_error"] = test_error[min_test_error_index]
best_hyper_and_error["mean_error"] = mean_error[min_test_error_index]
if debug:
print("Current optimal hyperparameters are")
print(pd.Series(best_hyper_and_error))
if debug:
print(f"Time elapsed = {time.strftime('%Hh %Mm %Ss', time.gmtime(time.time() - start_time))}")
print()
return best_hyper_and_error
epochs_list = [1, 2, 5, 10]
k_list = [5, 10, 20, 40, 80]
lmd_list = [0.1, 2, 5, 10, 25, 50, 100]
train, test, mean_imputed_ratings = train_test_split(ratings, 100, 1000)
best_hyper_and_error = get_best_hyperparameters(train, test, mean_imputed_ratings, epochs_list, k_list, lmd_list, False, True)
print(best_hyper_and_error)
```
# Processor temperature
We have a temperature sensor in the processor of our company's server. We want to analyze the data provided to determine whether we should replace the cooling system with a better one. The replacement is expensive, and as data analysts we cannot make that decision without evidence.
We provide the temperatures measured throughout the 24 hours of a day in a list-type data structure composed of 24 integers:
```
temperatures_C = [33,66,65,0,59,60,62,64,70,76,80,69,80,83,68,79,61,53,50,49,53,48,45,39]
```
## Goals
1. Treatment of lists
2. Use of loops or list comprehensions
3. Calculation of the mean, minimum and maximum.
4. Filtering of lists.
5. Interpolate an outlier.
6. Logical operators.
7. Print
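The list operations named in these goals can be sketched on a tiny example (illustrative values, not the exercise data):

```
# a small warm-up list, not the server readings themselves
sample = [33, 66, 65, 70, 76]

# minimum, maximum and mean of a list
print(min(sample), max(sample), sum(sample) / len(sample))

# filtering with a list comprehension keeps only the hot readings
hot = [t for t in sample if t >= 70]
print(hot)  # [70, 76]
```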
## Temperature graph
To facilitate understanding, the temperature graph is shown below. You do not have to do anything in this section. The test starts in **Problem**.
```
# import
import matplotlib.pyplot as plt
%matplotlib inline
# axis x, axis y
y = [33,66,65,0,59,60,62,64,70,76,80,81,80,83,90,79,61,53,50,49,53,48,45,39]
x = list(range(len(y)))
# plot
plt.plot(x, y)
plt.axhline(y=70, linewidth=1, color='r')
plt.xlabel('hours')
plt.ylabel('Temperature ºC')
plt.title('Temperatures of our server throughout the day')
```
## Problem
If the sensor detects more than 4 hours with temperatures greater than or equal to 70ºC or any temperature above 80ºC or the average exceeds 65ºC throughout the day, we must give the order to change the cooling system to avoid damaging the processor.
We will guide you step by step so you can make the decision by calculating some intermediate steps:
1. Minimum temperature
2. Maximum temperature
3. Temperatures equal to or greater than 70ºC
4. Average temperatures throughout the day.
5. If there was a sensor failure at 03:00 and we did not capture the data, how would you estimate the value that we lack? Correct that value in the list of temperatures.
6. Bonus: Our maintenance staff is from the United States and does not understand the international metric system. Convert the temperatures to degrees Fahrenheit.
Formula: F = 1.8 * C + 32
web: https://en.wikipedia.org/wiki/Conversion_of_units_of_temperature
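The formula above can be wrapped in a small helper (a sketch; the function name is our own):

```
def celsius_to_fahrenheit(c):
    """Convert a temperature from ºC to ºF using F = 1.8 * C + 32."""
    return 1.8 * c + 32

print(celsius_to_fahrenheit(0))    # freezing point of water: 32.0
print(celsius_to_fahrenheit(100))  # boiling point of water: 212.0
```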
```
# assign a variable to the list of temperatures
# 1. Calculate the minimum of the list and print the value using print()
# 2. Calculate the maximum of the list and print the value using print()
# 3. Items in the list that are greater than 70ºC and print the result
# 4. Calculate the mean temperature throughout the day and print the result
# 5.1 Solve the fault in the sensor by estimating a value
# 5.2 Update of the estimated value at 03:00 on the list
# Bonus: convert the list of ºC to ºFarenheit
temperatures_list = [33,66,65,0,59,60,62,64,70,76,80,81,80,83,90,79,61,53,50,49,53,48,45,39]
min_temperature = 0
max_temperature = 0
max_temperature = max(temperatures_list)
min_temperature = min(temperatures_list)
print ("The minimum temperature of the list is", min_temperature)
print ("The maximum temperature of the list is", max_temperature)
greater_than_seventy = []
for i in temperatures_list:
if i > 70:
greater_than_seventy.append(i)
print ("The items in the list that are greater than 70ºC are", greater_than_seventy)
mean_temperature_24 = 0
mean_temperature_24 = sum (temperatures_list) / len (temperatures_list)
print("The mean temperature is", mean_temperature_24)
y = [33,66,65,0,59,60,62,64,70,76,80,81,80,83,90,79,61,53,50,49,53,48,45,39]
temperatures_list = list(y)
# 5. The 0 ºC reading at 03:00 is a sensor failure; estimate it by
# interpolating between the neighbouring hours (02:00 and 04:00)
estimated_value = (temperatures_list[2] + temperatures_list[4]) / 2
temperatures_list[3] = estimated_value
print("Estimated value at 03:00 is", estimated_value)
mean_temperature_24 = sum(temperatures_list) / len(temperatures_list)
print("The corrected mean temperature is", mean_temperature_24)
# Bonus: convert the list of ºC to ºF with a single list comprehension,
# rounding each converted value to one decimal place
Farenheit_list = [round(1.8 * c + 32, 1) for c in temperatures_list]
print("The temperatures list in Farenheit is the following:", Farenheit_list)
```
## Take the decision
Remember that if the sensor detects more than 4 hours with temperatures greater than or equal to 70ºC or any temperature higher than 80ºC or the average was higher than 65ºC throughout the day, we must give the order to change the cooling system to avoid the danger of damaging the equipment:
* more than 4 hours with temperatures greater than or equal to 70ºC
* some temperature higher than 80ºC
* average was higher than 65ºC throughout the day
If any of these three is met, the cooling system must be changed.
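The three conditions can be combined directly with `any()` and generator expressions; a compact sketch (the function and variable names are our own):

```
def must_change_cooling(temps):
    """Return True if any of the three replacement conditions holds."""
    hours_at_or_over_70 = sum(1 for t in temps if t >= 70)
    any_over_80 = any(t > 80 for t in temps)
    average = sum(temps) / len(temps)
    return hours_at_or_over_70 > 4 or any_over_80 or average > 65

print(must_change_cooling([60] * 24))  # False: no condition is met
```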
```
# Print True or False depending on whether you would change the cooling system or not
temperatures_list = [33,66,65,0,59,60,62,64,70,76,80,81,80,83,90,79,61,53,50,49,53,48,45,39]
avg_list = sum(temperatures_list) / len(temperatures_list)
hours = 0
higher_80 = 0
for i in temperatures_list:
    if i >= 70:
        hours += 1
    if i > 80:
        higher_80 += 1
# change the cooling system if any of the three conditions is met
print(hours > 4 or higher_80 > 0 or avg_list > 65)
```
## Future improvements
1. We want the hours (not the temperatures) whose temperature exceeds 70ºC
2. Condition that those hours are more than 4 and consecutive, not simply the sum of the whole set. Is this condition met?
3. Average of each of the lists (ºC and ºF). How do they relate?
4. Standard deviation of each of the lists. How do they relate?
```
# 1. We want the hours (not the temperatures) whose temperature exceeds 70ºC
hours_list = [hour for hour, temp in enumerate(temperatures_list) if temp > 70]
print(hours_list)
# 2. Condition that those hours are more than 4 consecutive and consecutive, not simply the sum of the whole set. Is this condition met?
# 3. Average of each of the lists (ºC and ºF). How they relate?
Farenheit_list = [round(1.8 * c + 32, 1) for c in temperatures_list]
mean_celsius = 0
mean_farenheit = 0
mean_celsius = sum (temperatures_list) / len (temperatures_list)
mean_farenheit = sum (Farenheit_list) / len (Farenheit_list)
print(mean_celsius)
print(mean_farenheit)
means_ratio = mean_celsius / mean_farenheit
print("The ratio of the Celsius mean to the Fahrenheit mean is", means_ratio)
# 4. Standard deviation of each of the lists. How do they relate?
import statistics
sd_celsius = statistics.stdev(temperatures_list)
sd_farenheit = statistics.stdev(Farenheit_list)
print(sd_celsius, sd_farenheit)  # the ºF spread is roughly 1.8 times the ºC spread
```
# Understanding Data Actions
blocktorch streamlines the creation and implementation of machine learning models for tabular data. One of the many features it offers is [data checks](https://blocktorch.alteryx.com/en/stable/user_guide/data_checks.html), which are geared towards determining the health of the data before we train a model on it. These data checks have associated actions, which will be shown in this notebook. In our default data checks, we have the following checks:
- `HighlyNullDataCheck`: Checks whether the rows or columns are highly null
- `IDColumnsDataCheck`: Checks for columns that could be ID columns
- `TargetLeakageDataCheck`: Checks if any of the input features have high association with the targets
- `InvalidTargetDataCheck`: Checks if there are null or other invalid values in the target
- `NoVarianceDataCheck`: Checks if either the target or any features have no variance
- `NaturalLanguageNaNDataCheck`: Checks if any natural language columns have missing data
- `DateTimeNaNDataCheck`: Checks if any datetime columns have missing data
blocktorch has additional data checks that can be seen [here](https://blocktorch.alteryx.com/en/stable/api_index.html#data-checks), with usage examples [here](https://blocktorch.alteryx.com/en/stable/user_guide/data_checks.html). Below, we will walk through usage of blocktorch's default data checks and actions.
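For intuition, the kind of computation behind a null-coverage check can be sketched in plain pandas (this is an illustration, not blocktorch's implementation; the function name and threshold are our own choices):

```
import pandas as pd

def highly_null_columns(df, threshold=0.9):
    """Return the columns whose fraction of null values exceeds `threshold`."""
    null_fraction = df.isnull().mean()
    return list(null_fraction[null_fraction > threshold].index)

df = pd.DataFrame({'mostly_nulls': [None] * 19 + [1], 'ok': range(20)})
print(highly_null_columns(df))  # ['mostly_nulls']
```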
First, we import the necessary requirements to demonstrate these checks.
```
import woodwork as ww
import pandas as pd
from blocktorch import AutoMLSearch
from blocktorch.demos import load_fraud
from blocktorch.preprocessing import split_data
```
Let's look at the input feature data. blocktorch uses the [Woodwork](https://woodwork.alteryx.com/en/stable/) library to represent this data. The demo data that blocktorch returns is a Woodwork DataTable and DataColumn.
```
X, y = load_fraud(n_rows=1500)
X
```
## Adding noise and unclean data
This data is already clean and compatible with blocktorch's ``AutoMLSearch``. In order to demonstrate blocktorch default data checks, we will add the following:
- A column of mostly null values (<0.5% non-null)
- A column with low/no variance
- A row of null values
- A missing target value
We will add the first two columns to the whole dataset and we will only add the last two to the training data. Note: these only represent some of the scenarios that blocktorch default data checks can catch.
```
# add a column with no variance in the data
X['no_variance'] = [1 for _ in range(X.shape[0])]
# add a column with >99.5% null values
X['mostly_nulls'] = [None] * (X.shape[0] - 5) + [i for i in range(5)]
# since we changed the data, let's reinitialize the woodwork datatable
X.ww.init()
# let's split some training and validation data
X_train, X_valid, y_train, y_valid = split_data(X, y, problem_type='binary')
# let's copy the datetime at row 1 for future use
date = X_train.iloc[1]['datetime']
# make row 1 all nan values
X_train.iloc[1] = [None] * X_train.shape[1]
# make one of the target values null
y_train[990] = None
X_train.ww.init()
y_train = ww.init_series(y_train)
# Let's take another look at the new X_train data
X_train
```
If we call `AutoMLSearch.search()` on this data, the search will fail due to the columns and issues we've added above. Note: we use a try/except here to catch the resulting ValueError that AutoMLSearch raises.
```
automl = AutoMLSearch(X_train=X_train, y_train=y_train, problem_type='binary')
try:
automl.search()
except ValueError as e:
# to make the error message more distinct
print("=" * 80, "\n")
print("Search errored out! Message received is: {}".format(e))
print("=" * 80, "\n")
```
We can use the `search_iterative()` function provided in blocktorch to determine what potential health issues our data has. We can see that this [search_iterative](https://blocktorch.alteryx.com/en/latest/autoapi/blocktorch/automl/index.html#blocktorch.automl.search_iterative) function is a public method available through `blocktorch.automl` and is different from the [search](https://blocktorch.alteryx.com/en/stable/autoapi/blocktorch/automl/index.html#blocktorch.automl.AutoMLSearch) function of the `AutoMLSearch` class in blocktorch. This `search_iterative()` function allows us to run the default data checks on the data, and, if there are no errors, automatically runs `AutoMLSearch.search()`.
```
from blocktorch.automl import search_iterative
results = search_iterative(X_train, y_train, problem_type='binary')
results
```
The return value of the `search_iterative` function above is a tuple. The first element is the `AutoMLSearch` object if it runs (and `None` otherwise), and the second element is a dictionary of potential warnings and errors that the default data checks find on the passed-in `X` and `y` data. In this dictionary, warnings are suggestions from the data checks that are useful to address to improve the search, but that will not break `AutoMLSearch`. Errors, on the other hand, will break `AutoMLSearch` and need to be addressed by the user.
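That tuple contract can be handled with a small guard; this sketch uses a mocked tuple rather than a real `search_iterative` result, and the function name is our own:

```
def handle_search_results(results):
    """Unpack the (automl, messages) tuple and branch on whether search ran."""
    automl, messages = results
    if automl is None:
        # the search never ran; surface the blocking error codes first
        return sorted(messages.get('errors', {}))
    return automl

# mocked failure case: no AutoMLSearch object, one blocking error
print(handle_search_results((None, {'errors': {'TARGET_HAS_NULL': {}}, 'warnings': {}})))
```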
## Addressing DataCheck errors
We will show that we can address errors to allow AutoMLSearch to run. However, ignoring warnings will come at the expense of performance.
We can print out the errors first to make it easier to read, and then we'll create new features and targets from the original training data.
```
results[1]['errors']
# copy the DataTables to new variables
X_train_no_errors = X_train.copy()
y_train_no_errors = y_train.copy()
# We address the errors by looking at the resulting dictionary errors listed
# first, let's address the `TARGET_HAS_NULL` error
y_train_no_errors.fillna(False, inplace=True)
# here, we address the `NO_VARIANCE` error
X_train_no_errors.drop("no_variance", axis=1, inplace=True)
# lastly, we address the `DATETIME_HAS_NAN` error with the date we had saved earlier
X_train_no_errors.iloc[1, 2] = date
# let's reinitialize the Woodwork DataTable
X_train_no_errors.ww.init()
X_train_no_errors.head()
```
We can now run search on `X_train_no_errors` and `y_train_no_errors`. Note that the search here doesn't fail since we addressed the errors, but there will still exist warnings in the returned tuple. This search allows the `mostly_nulls` column to remain in the features during search.
```
results_no_errors = search_iterative(X_train_no_errors, y_train_no_errors, problem_type='binary')
results_no_errors
```
## Addressing all warnings and errors
We can look at the `actions` key of the dictionary in order to see how we can fix and clean all of the data. This will help us clean both the warnings and errors from the data and provide us with a better model.
```
results[1]['actions']
```
We note that there are four action tasks that we can take to clean the data. Three of the tasks ask us to drop a row or column in the features, while one task asks us to impute the target value.
```
# The first action states to drop the row given by the action code
X_train.drop(1477, axis=0, inplace=True)
# we must also drop this for y since we are removing its associated feature input
y_train.drop(index=1477, inplace=True)
print("The new length of X_train is {} and y_train is {}".format(len(X_train),len(y_train)))
# Remove the 'mostly_nulls' column from X_train, which is the second action item
X_train.drop('mostly_nulls', axis=1, inplace=True)
X_train.head()
# Address the null in targets, which is the third action item
y_train.fillna(False, inplace=True)
y_train.isna().any()
# Finally, we can drop the 'no_variance' column, which is the final action item
X_train.drop('no_variance', axis=1, inplace=True)
X_train.head()
# let's reinitialize the dataframe using Woodwork and try the search again
X_train.ww.init()
results_cleaned = search_iterative(X_train, y_train, problem_type='binary')
```
Note that this time, we do get an `AutoMLSearch` object returned to us, as well as an empty dictionary of warnings and errors. We can use the `AutoMLSearch` object as needed, and we can see that the resulting warning dictionary is empty.
```
aml = results_cleaned[0]
aml.rankings
data_check_results = results_cleaned[1]
data_check_results
```
## Comparing removing only errors versus removing both warnings and errors
Let's see the differences in model performance when we remove only errors versus remove both warnings and errors. To do this, we compare the performance of the best pipelines on the validation data. Remember that in the search where we only address errors, we still have the `mostly_nulls` column present in the data, so we leave that column in the validation data for its respective search. We drop the other `no_variance` column from both searches.
Additionally, we do some logical type setting since we had added additional noise to just the training data. This allows the data to be of the same types in both training and validation.
```
# drop the no_variance column
X_valid.drop("no_variance", axis=1, inplace=True)
# logical type management
X_valid.ww.init(logical_types={"customer_present": "Categorical"})
y_valid = ww.init_series(y_valid, logical_type="Categorical")
best_pipeline_no_errors = results_no_errors[0].best_pipeline
print("Only dropping errors:", best_pipeline_no_errors.score(X_valid, y_valid, ["Log Loss Binary"]), "\n")
# drop the mostly_nulls column and reinitialize the DataTable
X_valid.drop("mostly_nulls", axis=1, inplace=True)
X_valid.ww.init()
best_pipeline_clean = results_cleaned[0].best_pipeline
print("Addressing all actions:", best_pipeline_clean.score(X_valid, y_valid, ["Log Loss Binary"]), "\n")
```
We can compare the differences in model performance when we address all action items (warnings and errors) in comparison to when we only address errors. While it isn't guaranteed that addressing all actions will always have better performance, we do recommend doing so since we only raise these issues when we believe the features have problems that could negatively impact or not benefit the search.
In the future, we aim to provide a helper function to allow users to quickly clean the data by taking in the list of actions and creating an appropriate pipeline of transformers to alter the data.
```
import numpy as np
from bokeh.plotting import figure, show, output_notebook
from bokeh.layouts import gridplot
output_notebook()
N = 9
x = np.linspace(-2, 2, N)
y = x**2
sizes = np.linspace(10, 20, N)
xpts = np.array([-.09, -.12, .0, .12, .09])
ypts = np.array([-.1, .02, .1, .02, -.1])
figures = []
p = figure(title="annular_wedge")
p.annular_wedge(x, y, 10, 20, 0.6, 4.1, color="#8888ee",
inner_radius_units="screen", outer_radius_units="screen")
figures.append(p)
p = figure(title="annulus")
p.annulus(x, y, 10, 20, color="#7FC97F",
inner_radius_units="screen", outer_radius_units = "screen")
figures.append(p)
p = figure(title="arc")
p.arc(x, y, 20, 0.6, 4.1,
radius_units="screen", color="#BEAED4", line_width=3)
figures.append(p)
p = figure(title="bezier")
p.bezier(x, y, x+0.2, y, x+0.1, y+0.1, x-0.1, y-0.1,
color="#D95F02", line_width=2)
figures.append(p)
p = figure(title="circle")
p.circle(x, y, radius=0.1, color="#3288BD")
figures.append(p)
p = figure(title="ellipse")
p.ellipse(x, y, 15, 25, angle=-0.7, color="#1D91C0",
width_units="screen", height_units="screen")
figures.append(p)
p = figure(title="line")
p.line(x, y, color="#F46D43")
figures.append(p)
p = figure(title="multi_line")
p.multi_line([xpts+xx for xx in x], [ypts+yy for yy in y], color="#8073AC", line_width=2)
figures.append(p)
p = figure(title="multi_polygons")
p.multi_polygons([[[xpts*2+xx, xpts+xx]] for xx in x], [[[ypts*3+yy, ypts+yy]] for yy in y], color="#FB9A99")
figures.append(p)
p = figure(title="oval")
p.oval(x, y, 15, 25, angle=-0.7, color="#1D91C0",
width_units="screen", height_units="screen")
figures.append(p)
p = figure(title="patch")
p.patch(x, y, color="#A6CEE3")
figures.append(p)
p = figure(title="patches")
p.patches([xpts+xx for xx in x], [ypts+yy for yy in y], color="#FB9A99")
figures.append(p)
p = figure(title="quad")
p.quad(x, x-0.1, y, y-0.1, color="#B3DE69")
figures.append(p)
p = figure(title="quadratic")
p.quadratic(x, y, x+0.2, y, x+0.1, y+0.1, color="#4DAF4A", line_width=3)
figures.append(p)
p = figure(title="ray")
p.ray(x, y, 45, -0.7, color="#FB8072", line_width=2)
figures.append(p)
p = figure(title="rect")
p.rect(x, y, 10, 20, color="#CAB2D6", width_units="screen", height_units="screen")
figures.append(p)
p = figure(title="segment")
p.segment(x, y, x-0.1, y-0.1, color="#F4A582", line_width=3)
figures.append(p)
p = figure(title="square")
p.square(x, y, size=sizes, color="#74ADD1")
figures.append(p)
p = figure(title="wedge")
p.wedge(x, y, 15, 0.6, 4.1, radius_units="screen", color="#B3DE69")
figures.append(p)
p = figure(title="circle_x")
p.scatter(x, y, marker="circle_x", size=sizes, color="#DD1C77", fill_color=None)
figures.append(p)
p = figure(title="triangle")
p.scatter(x, y, marker="triangle", size=sizes, color="#99D594", line_width=2)
figures.append(p)
p = figure(title="circle")
p.scatter(x, y, marker="o", size=sizes, color="#80B1D3", line_width=3)
figures.append(p)
p = figure(title="cross")
p.scatter(x, y, marker="cross", size=sizes, color="#E6550D", line_width=2)
figures.append(p)
p = figure(title="diamond")
p.scatter(x, y, marker="diamond", size=sizes, color="#1C9099", line_width=2)
figures.append(p)
p = figure(title="inverted_triangle")
p.scatter(x, y, marker="inverted_triangle", size=sizes, color="#DE2D26")
figures.append(p)
p = figure(title="square_x")
p.scatter(x, y, marker="square_x", size=sizes, color="#FDAE6B",
fill_color=None, line_width=2)
figures.append(p)
p = figure(title="asterisk")
p.scatter(x, y, marker="asterisk", size=sizes, color="#F0027F",
line_width=2)
figures.append(p)
p = figure(title="square_cross")
p.scatter(x, y, marker="square_cross", size=sizes, color="#7FC97F",
fill_color=None, line_width=2)
figures.append(p)
p = figure(title="diamond_cross")
p.scatter(x, y, marker="diamond_cross", size=sizes, color="#386CB0",
fill_color=None, line_width=2)
figures.append(p)
p = figure(title="circle_cross")
p.scatter(x, y, marker="circle_cross", size=sizes, color="#FB8072",
fill_color=None, line_width=2)
figures.append(p)
show(gridplot(figures, ncols=3, plot_width=200, plot_height=200))
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This document introduces `tf.estimator`—a high-level TensorFlow
API. Estimators encapsulate the following actions:
* training
* evaluation
* prediction
* export for serving
TensorFlow implements several pre-made Estimators. Custom Estimators are still supported, but mainly as a backwards-compatibility measure. **Custom Estimators should not be used for new code**. All Estimators, whether pre-made or custom, are classes based on the `tf.estimator.Estimator` class.
For a quick example, try the [Estimator tutorials](../tutorials/estimator/linear.ipynb). For an overview of the API design, see the [white paper](https://arxiv.org/abs/1708.02637).
## Setup
```
! pip install -U tensorflow_datasets
import tempfile
import os
import tensorflow as tf
import tensorflow_datasets as tfds
```
## Advantages
Similar to a `tf.keras.Model`, an Estimator is a model-level abstraction. `tf.estimator` provides some capabilities currently still under development for `tf.keras`, namely:
* Parameter server based training
* Full [TFX](http://tensorflow.org/tfx) integration.
## Estimator capabilities
Estimators provide the following benefits:
* You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.
* Estimators provide a safe distributed training loop that controls how and when to:
* load data
* handle exceptions
* create checkpoint files and recover from failures
* save summaries for TensorBoard
When writing an application with Estimators, you must separate the data input
pipeline from the model. This separation simplifies experiments with
different data sets.
## Using pre-made Estimators
Pre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. `tf.estimator.DNNClassifier`, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.
A TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:
### 1. Write the input functions.
For example, you might create one function to import the training set and another function to import the test set. Estimators expect their inputs to be formatted as a pair of objects:
* A dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data
* A Tensor containing one or more labels
The `input_fn` should return a `tf.data.Dataset` that yields pairs in that format.
For example, the following code builds a `tf.data.Dataset` from the Titanic dataset's `train.csv` file:
```
def train_input_fn():
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=32,
label_name="survived")
titanic_batches = (
titanic.cache().repeat().shuffle(500)
.prefetch(tf.data.experimental.AUTOTUNE))
return titanic_batches
```
The `input_fn` is executed in a `tf.Graph`. It can also directly return a `(features_dict, labels)` pair containing graph tensors, but this is error-prone outside of simple cases like returning constants.
### 2. Define the feature columns.
Each `tf.feature_column` identifies a feature name, its type, and any input pre-processing.
For example, the following snippet creates three feature columns.
- The first uses the `age` feature directly as a floating-point input.
- The second uses the `class` feature as a categorical input.
- The third uses `embark_town` as a categorical input, but applies the *hashing trick* to avoid enumerating the options up front and to fix the number of buckets.
For further information, see the [feature columns tutorial](https://www.tensorflow.org/tutorials/keras/feature_columns).
```
age = tf.feature_column.numeric_column('age')
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
```
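The hashing trick used for `embark_town` can be sketched in plain Python: each category string is hashed and reduced modulo a fixed bucket count, so unseen categories never need to be enumerated. TensorFlow uses its own hash function internally; `zlib.crc32` here is only an illustrative stand-in:

```python
# Illustrative sketch of the hashing trick: map arbitrary string categories
# into a fixed number of buckets without enumerating them up front.
# (TensorFlow uses a different hash internally; crc32 is a stand-in.)
import zlib

NUM_BUCKETS = 32  # same bucket count as the feature column above

def hash_bucket(category, num_buckets=NUM_BUCKETS):
    return zlib.crc32(category.encode("utf-8")) % num_buckets

towns = ["Southampton", "Cherbourg", "Queenstown"]
buckets = {town: hash_bucket(town) for town in towns}
```

Note that distinct categories may collide in the same bucket; a larger `NUM_BUCKETS` trades memory for fewer collisions.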
### 3. Instantiate the relevant pre-made Estimator.
For example, here's a sample instantiation of a pre-made Estimator named `LinearClassifier`:
```
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
model_dir=model_dir,
feature_columns=[embark, cls, age],
n_classes=2
)
```
For further information, see the [linear classifier tutorial](https://www.tensorflow.org/tutorials/estimator/linear).
### 4. Call a training, evaluation, or inference method.
All Estimators provide `train`, `evaluate`, and `predict` methods.
```
model = model.train(input_fn=train_input_fn, steps=100)
result = model.evaluate(train_input_fn, steps=10)
for key, value in result.items():
print(key, ":", value)
for pred in model.predict(train_input_fn):
for key, value in pred.items():
print(key, ":", value)
break
```
### Benefits of pre-made Estimators
Pre-made Estimators encode best practices, providing the following benefits:
* Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a
cluster.
* Best practices for event (summary) writing and universally useful
summaries.
If you don't use pre-made Estimators, you must implement the preceding features yourself.
## Custom Estimators
The heart of every Estimator—whether pre-made or custom—is its *model function*, `model_fn`, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself.
> Note: A custom `model_fn` will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies. You should plan to migrate away from `tf.estimator` with custom `model_fn`. The alternative APIs are `tf.keras` and `tf.distribute`. If you still need an `Estimator` for some part of your training you can use the `tf.keras.estimator.model_to_estimator` converter to create an `Estimator` from a `keras.Model`.
## Create an Estimator from a Keras model
You can convert existing Keras models to Estimators with `tf.keras.estimator.model_to_estimator`. This is helpful if you want to modernize your model code, but your training pipeline still requires Estimators.
Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
```
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
input_shape=(160, 160, 3), include_top=False)
keras_mobilenet_v2.trainable = False
estimator_model = tf.keras.Sequential([
keras_mobilenet_v2,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(1)
])
# Compile the model
estimator_model.compile(
optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Create an `Estimator` from the compiled Keras model. The initial model state of the Keras model is preserved in the created `Estimator`:
```
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
```
Treat the derived `Estimator` as you would any other `Estimator`.
```
IMG_SIZE = 160 # All images will be resized to 160x160
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
def train_input_fn(batch_size):
data = tfds.load('cats_vs_dogs', as_supervised=True)
train_data = data['train']
train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
return train_data
```
To train, call Estimator's train function:
```
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=50)
```
Similarly, to evaluate, call the Estimator's evaluate function:
```
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
```
For more details, please refer to the documentation for `tf.keras.estimator.model_to_estimator`.
## Saving object-based checkpoints with Estimator
Estimators by default save checkpoints with variable names rather than the object graph described in the [Checkpoint guide](checkpoint.ipynb). `tf.train.Checkpoint` will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's `model_fn`. For forward compatibility, saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
```
import tensorflow.compat.v1 as tf_compat
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
class Net(tf.keras.Model):
"""A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
def model_fn(features, labels, mode):
net = Net()
opt = tf.keras.optimizers.Adam(0.1)
ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),
optimizer=opt, net=net)
with tf.GradientTape() as tape:
output = net(features['x'])
loss = tf.reduce_mean(tf.abs(output - features['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
return tf.estimator.EstimatorSpec(
mode,
loss=loss,
train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),
ckpt.step.assign_add(1)),
# Tell the Estimator to save "ckpt" in an object-based format.
scaffold=tf_compat.train.Scaffold(saver=ckpt))
tf.keras.backend.clear_session()
est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')
est.train(toy_dataset, steps=10)
```
`tf.train.Checkpoint` can then load the Estimator's checkpoints from its `model_dir`.
```
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(
step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)
ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))
ckpt.step.numpy() # From est.train(..., steps=10)
```
## SavedModels from Estimators
Estimators export SavedModels through [`tf.estimator.Estimator.export_saved_model`](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#export_saved_model).
```
input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])
def input_fn():
return tf.data.Dataset.from_tensor_slices(
({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)
```
To save an `Estimator` you need to create a `serving_input_receiver`. This function builds a part of a `tf.Graph` that parses the raw data received by the SavedModel.
The `tf.estimator.export` module contains functions to help build these `receivers`.
The following code builds a receiver, based on the `feature_columns`, that accepts serialized `tf.Example` protocol buffers, which are often used with [tf-serving](https://tensorflow.org/serving).
```
tmpdir = tempfile.mkdtemp()
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
tf.feature_column.make_parse_example_spec([input_column]))
estimator_base_path = os.path.join(tmpdir, 'from_estimator')
estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn)
```
You can also load and run that model from Python:
```
imported = tf.saved_model.load(estimator_path)
def predict(x):
example = tf.train.Example()
example.features.feature["x"].float_list.value.extend([x])
return imported.signatures["predict"](
examples=tf.constant([example.SerializeToString()]))
print(predict(1.5))
print(predict(3.5))
```
`tf.estimator.export.build_raw_serving_input_receiver_fn` allows you to create input functions which take raw tensors rather than `tf.train.Example`s.
## Using `tf.distribute.Strategy` with Estimator (Limited support)
See the [Distributed training guide](guide/distributed_training.ipynb) for more info.
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. `tf.estimator` now supports `tf.distribute.Strategy`. If you're using `tf.estimator`, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. See [What's supported now](#estimator_support) section below for more details.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade Estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, each batch of the dataset is split automatically across the multiple replicas. In Estimator, however, batches are not split automatically, nor is the data automatically sharded across different workers. You have full control over how you want your data to be distributed across workers and devices, and you must provide an `input_fn` to specify how to distribute your data.
Your `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. And the global batch size for a step can be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`.
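The batch-size arithmetic above can be made concrete with a short worked example (the sizes are illustrative, not prescribed):

```python
# Worked example of the batch-size relationship described above.
PER_REPLICA_BATCH_SIZE = 32
num_replicas_in_sync = 4  # e.g. a MirroredStrategy over 4 GPUs on one worker

# Each replica consumes one PER_REPLICA_BATCH_SIZE batch per step,
# so a single step processes the global batch:
GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * num_replicas_in_sync
```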
When doing multi-worker training, you should either split your data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) tutorial.
And similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set `TF_CONFIG` environment variables for each binary running in your cluster.
<a name="estimator_support"></a>
### What's supported now?
There is limited support for training with Estimator using all strategies except `TPUStrategy`. Basic training and evaluation should work, but a number of advanced features such as `v1.train.Scaffold` do not. There may also be a number of bugs in this integration. At this time, we do not plan to actively improve this support, and instead are focused on Keras and custom training loop support. If at all possible, you should prefer to use `tf.distribute` with those APIs instead.
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:--------------- |:------------------ |:------------- |:----------------------------- |:------------------------ |:------------------------- |
| Estimator API | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) to train MNIST with multiple workers using `MultiWorkerMirroredStrategy`.
2. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
3. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/resnet_imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
# LightGBM Test Run
### summary
- 'gender' and 'device' are discrete strings -> dummies (keep both M and F to capture nan info)
- 'drivers', 'vehicles', 'age', 'launch', and 'tenure' are all discrete numeric -> lgbm can handle
- target variable 'outcome' is imbalanced at approximately 1:9 -> use lgbm imbalanced setting
- multicollinearity with 'age', 'income', and 'tenure' -> lgbm (trees) is robust to collinearity
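The ~1:9 imbalance noted above can be quantified quickly; here is a minimal sketch on an illustrative outcome vector (the real check would use `data['outcome']`):

```python
import numpy as np

# Illustrative outcome vector reproducing the ~1:9 imbalance described above.
outcome = np.array([0] * 90 + [1] * 10)

pos_rate = outcome.mean()                    # share of positive outcomes
imbalance_ratio = (1 - pos_rate) / pos_rate  # negatives per positive
```

A ratio this skewed is what motivates passing `is_unbalance=True` to the LightGBM classifier later on.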
---
# EDA
```
# mac install for lightgbm
# !brew install lightgbm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
import lightgbm
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn import metrics
pd.set_option('display.max_rows', 50)
pd.set_option('display.max_columns', 50)
import warnings
warnings.filterwarnings('ignore')
data = pd.read_csv('train.csv')
print(data.shape)
list(data.columns)
data.head()
def summary(df):
    '''exploratory data analysis of a dataframe'''
    stats = {}
    stats['count'] = df.count()
    stats['dtypes'] = df.dtypes
    stats['null_sum'] = df.isnull().sum()
    stats['null_pct'] = df.isnull().mean()
    stats['mean'] = df.mean(numeric_only=True)
    stats['median'] = df.median(numeric_only=True)
    stats['min'] = df.min()
    stats['max'] = df.max()
    return pd.DataFrame(stats)
summary(data)
def numeric_hists(df):
    '''quick histograms for numeric features'''
    for col in df.columns:
        if df[col].dtype in ['int64', 'float64']:
            plt.hist(df[col])
            plt.title(col)
            plt.show()
numeric_hists(data)
data.corr()
for idx, i in enumerate(data.columns):
if data[i].dtype in ['int64', 'float64']:
for j in data.columns[idx+1:]:
if data[j].dtype in ['int64', 'float64']:
r = data[[i, j]].corr().values[0][1]
if abs(r) > 0.2:
print((i, j, r))
# missing gender has ~1/3 frequency of positive outcomes as overall data
round(data[data['gender'].isna()]['outcome'].mean(), 2), round(data['outcome'].mean(), 2)
```
---
# Transformations
```
# z_scores
def z_score(col):
    '''standardize a pandas Series to zero mean, unit variance'''
    return (col - col.mean()) / col.std()
for col in ['cost_of_ad', 'income']:
data[col] = z_score(data[col])
# dummies
device = pd.get_dummies(data['device_type'], drop_first=True)
device.columns = [f'device_{i}' for i in device.columns]
gender = pd.get_dummies(data['gender'], drop_first=False)
data = data.drop(labels=['gender', 'device_type'], axis=1)
data = pd.concat([data, device, gender], axis=1)
data.head()
# feature correlations with outcome
correlations = []
for col in data.columns:
if col == 'outcome':
pass
else:
if data[col].dtype in ['int64', 'float64', 'uint8']:
correlations.append((col, np.corrcoef(data[col], data['outcome'])[0][1])) # extract r
else:
dummies = pd.get_dummies(data[col])
            for dummy_col in dummies.columns:
                correlations.append((dummy_col, np.corrcoef(dummies[dummy_col], data['outcome'])[0][1]))
correlations = sorted(correlations, key = lambda x: x[1])
plt.figure(figsize=(10, 10))
plt.barh([i[0] for i in correlations],
[i[1] for i in correlations])
plt.title('Feature Correlations with Outcome', fontsize=18);
rfc = RandomForestClassifier(random_state=42)
X_train = data.drop(labels=['outcome'], axis=1)
y_train = data['outcome']
rfc = rfc.fit(X_train, y_train)
feats = sorted(zip(rfc.feature_importances_, X_train.columns), key=lambda x: x[0])
plt.figure(figsize=(10, 10))
plt.barh([i[1] for i in feats],
[i[0] for i in feats])
plt.title('Feature Importances via Random Forest', fontsize=18);
```
---
# Modeling
```
def run_kfold(X_train, y_train, model, kfold):
idx = 1
in_ = []
out_ = []
for train_index, test_index in kfold.split(X_train, y_train):
        if not isinstance(y_train, pd.Series):
            y_train = pd.Series(y_train)
        model.fit(X_train.iloc[train_index], y_train.iloc[train_index])
train_fp = model.predict(X_train.iloc[train_index])
test_fp = model.predict(X_train.iloc[test_index])
in_.append(metrics.roc_auc_score(y_train.iloc[train_index], train_fp))
out_.append(metrics.roc_auc_score(y_train.iloc[test_index], test_fp))
idx += 1
return np.mean(in_), np.mean(out_)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# default lgbm
lgb = lightgbm.LGBMClassifier(random_state=42,
is_unbalance=True)
in_, out_ = run_kfold(X_train, y_train, model=lgb, kfold=skf)
print(f'Average in auc: {in_}')
print(f'Average oos auc: {out_}')
# some manual tuning for faster, smaller trees
lgb1 = lightgbm.LGBMClassifier(random_state=42,
is_unbalance=True,
learning_rate = 0.5,
num_leaves = 2**3-1,
num_trees = 50,
min_data_in_leaf = 200,
max_bin=2**3-1)
in_, out_ = run_kfold(X_train, y_train, model=lgb1, kfold=skf)
print(f'Average in auc: {in_}')
print(f'Average oos auc: {out_}')
# model for slower, smaller trees
lgb2 = lightgbm.LGBMClassifier(random_state=42,
is_unbalance=True,
learning_rate = 0.1,
num_leaves = 2**3-1,
num_trees = 50,
min_data_in_leaf = 10,
max_bin=2**3-1)
in_, out_ = run_kfold(X_train, y_train, model=lgb2, kfold=skf)
print(f'Average in auc: {in_}')
print(f'Average oos auc: {out_}')
# model for slower, bigger trees
lgb3 = lightgbm.LGBMClassifier(random_state=42,
is_unbalance=True,
learning_rate = 0.005,
num_leaves = 2**5-1,
num_trees = 500,
min_data_in_leaf = 100,
max_bin=2**5-1)
in_, out_ = run_kfold(X_train, y_train, model=lgb3, kfold=skf)
print(f'Average in auc: {in_}')
print(f'Average oos auc: {out_}')
# ensembling
in_all = []
out_all = []
for train_index, test_index in skf.split(X_train, y_train):
    in_ = []
    out_ = []
    for model in [lgb1, lgb2, lgb3]:
        model.fit(X_train.iloc[train_index], y_train.iloc[train_index])
        in_.append(model.predict_proba(X_train.iloc[train_index])[:, 1])
        out_.append(model.predict_proba(X_train.iloc[test_index])[:, 1])
    in_pred = [1 if (i * in_[1][idx] * in_[2][idx])**(1/3) > 0.5
               else 0 for idx, i in enumerate(in_[0])]
    out_pred = [1 if (i * out_[1][idx] * out_[2][idx])**(1/3) > 0.5
                else 0 for idx, i in enumerate(out_[0])]
    in_all.append(metrics.roc_auc_score(y_train.iloc[train_index], in_pred))
    out_all.append(metrics.roc_auc_score(y_train.iloc[test_index], out_pred))
print(f'Average in auc: {np.mean(in_all)}')
print(f'Average oos auc: {np.mean(out_all)}')
```
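The geometric-mean ensembling in the loop above can be illustrated on its own; a minimal numpy sketch with made-up probabilities from three hypothetical models over four samples:

```python
import numpy as np

# hypothetical predicted probabilities from three models for four samples
p1 = np.array([0.9, 0.2, 0.6, 0.4])
p2 = np.array([0.8, 0.3, 0.7, 0.5])
p3 = np.array([0.7, 0.1, 0.8, 0.6])

# geometric mean of the three probabilities, as in the loop above
geo = (p1 * p2 * p3) ** (1 / 3)

# hard labels at the 0.5 threshold
pred = (geo > 0.5).astype(int)
```

Compared to the arithmetic mean, the geometric mean penalizes disagreement: a single model assigning a low probability pulls the combined score down sharply.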
<div class="alert alert-block alert-info">
<center><font size="5"><b>Section 5</b></font></center>
<br>
<center><font size="5"><b>Recurrent Neural Network in PyTorch with an Introduction to Natural Language Processing</b></font></center>
</div>
Credit: This example is obtained from the following book:
Subramanian, Vishnu. 2018. "*Deep Learning with PyTorch: A Practical Approach to Building Neural Network Models Using PyTorch.*" Birmingham, U.K., Packt Publishing.
# Simple Text Processing
## Typical Data Preprocessing Steps before Model Training for NLP Applications
* Read the data from disk
* Tokenize the text
* Create a mapping from word to a unique integer
* Convert the text into lists of integers
* Load the data in whatever format your deep learning framework requires
* Pad the text so that all the sequences are the same length, so you can process them in batch
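These steps can be sketched in a few lines of plain Python (the toy corpus, vocabulary, and padding length are made up for illustration):

```python
# toy corpus
docs = ["the movie was great", "the plot was thin"]

# tokenize
tokens = [d.split() for d in docs]

# word -> integer mapping; 0 is reserved for padding
vocab = {'<pad>': 0}
for doc in tokens:
    for word in doc:
        vocab.setdefault(word, len(vocab))

# convert the text into lists of integers
encoded = [[vocab[w] for w in doc] for doc in tokens]

# pad to a fixed length so sequences can be processed in batch
def pad(seq, length, pad_idx=0):
    return (seq + [pad_idx] * length)[:length]

batch = [pad(seq, 6) for seq in encoded]
```

Real pipelines delegate these steps to libraries such as torchtext, as shown below.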
## Word Embedding
Word embedding is a very popular way of representing text data in problems solved by deep learning algorithms. It provides a dense representation of each word, filled with floating-point numbers, and drastically reduces the dimensionality compared to a one-hot encoding over the dictionary.
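To make the dimensionality reduction concrete: with a 10,000-word dictionary, a one-hot vector needs 10,000 entries per word, while an embedding might use only 300 floats. A numpy sketch (sizes and values are illustrative; `nn.Embedding` holds such a matrix as a learnable weight):

```python
import numpy as np

vocab_size, embed_dim = 10_000, 300

# a random embedding matrix, standing in for a learned one
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embed_dim))

# looking up a word id is just a row selection
word_id = 42
dense = embedding[word_id]

# versus the sparse one-hot representation of the same word
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

# multiplying the one-hot vector by the matrix yields the same row
assert np.allclose(one_hot @ embedding, dense)
```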
### `Torchtext` and training word embeddings by building a sentiment classifier
Torchtext takes a declarative approach to loading its data:
* you tell torchtext what you want the data to look like, and torchtext handles it for you
* Declaring a Field: The Field specifies how you want a certain field to be processed
The `Field` class is a fundamental component of torchtext and is what makes preprocessing very easy
### Load `torchtext.datasets`
# Use LSTM for Sentiment Classification
1. Preparing the data
2. Creating the batches
3. Creating the network
4. Training the model
```
from torchtext import data, datasets
from torchtext.vocab import GloVe,FastText,CharNGram
TEXT = data.Field(lower=True, fix_length=100, batch_first=False)
LABEL = data.Field(sequential=False)
train, test = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(train, vectors=GloVe(name='6B', dim=300), max_size=10000, min_freq=10)
LABEL.build_vocab(train)
len(TEXT.vocab.vectors)
train_iter, test_iter = data.BucketIterator.splits((train, test), batch_size=32, device=-1)
train_iter.repeat = False
test_iter.repeat = False
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
class IMDBRnn(nn.Module):

    def __init__(self, n_vocab, hidden_size, n_cat, bs=1, nl=2):
        super().__init__()
        self.hidden_size = hidden_size
        self.bs = bs
        self.nl = nl
        self.e = nn.Embedding(n_vocab, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, nl)
        self.fc2 = nn.Linear(hidden_size, n_cat)
        self.softmax = nn.LogSoftmax(dim=-1)

    def forward(self, inp):
        bs = inp.size()[1]
        if bs != self.bs:
            self.bs = bs
        e_out = self.e(inp)
        h0 = c0 = Variable(e_out.data.new(*(self.nl, self.bs, self.hidden_size)).zero_())
        rnn_o, _ = self.rnn(e_out, (h0, c0))
        rnn_o = rnn_o[-1]
        fc = F.dropout(self.fc2(rnn_o), p=0.8)
        return self.softmax(fc)
n_vocab = len(TEXT.vocab)
n_hidden = 100
model = IMDBRnn(n_vocab,n_hidden,n_cat=3,bs=32)
#model = model.cuda()
optimizer = optim.Adam(model.parameters(),lr=1e-3)
def fit(epoch, model, data_loader, phase='training', volatile=False):
    if phase == 'training':
        model.train()
    if phase == 'validation':
        model.eval()
        volatile = True
    running_loss = 0.0
    running_correct = 0
    for batch_idx, batch in enumerate(data_loader):
        text, target = batch.text, batch.label
        # if is_cuda:
        #     text, target = text.cuda(), target.cuda()
        if phase == 'training':
            optimizer.zero_grad()
        output = model(text)
        loss = F.nll_loss(output, target)
        #running_loss += F.nll_loss(output, target, size_average=False).data[0]
        running_loss += F.nll_loss(output, target, size_average=False).data
        preds = output.data.max(dim=1, keepdim=True)[1]
        running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
        if phase == 'training':
            loss.backward()
            optimizer.step()
    loss = running_loss / len(data_loader.dataset)
    accuracy = 100. * running_correct / len(data_loader.dataset)
    print("epoch: ", epoch, "loss: ", loss, "accuracy: ", accuracy)
    #print(f'{phase} loss is {loss:{5}.{2}} and {phase} accuracy is {running_correct}/{len(data_loader.dataset)}{accuracy:{10}.{4}}')
    return loss, accuracy
import time
start = time.time()
train_losses , train_accuracy = [],[]
val_losses , val_accuracy = [],[]
for epoch in range(1, 20):
    epoch_loss, epoch_accuracy = fit(epoch, model, train_iter, phase='training')
    val_epoch_loss, val_epoch_accuracy = fit(epoch, model, test_iter, phase='validation')
    train_losses.append(epoch_loss)
    train_accuracy.append(epoch_accuracy)
    val_losses.append(val_epoch_loss)
    val_accuracy.append(val_epoch_accuracy)
end = time.time()
print("Execution Time: ", round((end - start) / 60, 1), "minutes")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(range(1,len(train_losses)+1),train_losses,'bo',label = 'training loss')
plt.plot(range(1,len(val_losses)+1),val_losses,'r',label = 'validation loss')
plt.legend()
plt.figure()  # new figure so the accuracy curves don't draw over the loss plot
plt.plot(range(1, len(train_accuracy)+1), train_accuracy, 'bo', label='train accuracy')
plt.plot(range(1, len(val_accuracy)+1), val_accuracy, 'r', label='val accuracy')
plt.legend()
```
# Intraday Strategy, Part 2: Model Training & Signal Evaluation
In this notebook, we load the high-quality NASDAQ100 minute-bar trade-and-quote data generously provided by [Algoseek](https://www.algoseek.com/) (available [here](https://www.algoseek.com/ml4t-book-data.html)) and use the features engineered in the last notebook to train a gradient boosting model that predicts returns for the NASDAQ100 stocks over the next 1-minute bar.
> Note that we will assume throughout that we can always buy (sell) at the first (last) trade price for a given bar at no cost and without market impact. This certainly does not reflect market reality; it is rather a concession to the challenges of realistically simulating a trading strategy at this much higher intraday frequency using open-source tools.
Note also that this section has slightly changed from the version published in the book to permit replication using the Algoseek data sample.
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import sys, os
from pathlib import Path
from time import time
from tqdm import tqdm
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import lightgbm as lgb
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import seaborn as sns
```
Ensuring we can import `utils.py` in the repo's root directory:
```
sys.path.insert(1, os.path.join(sys.path[0], '..'))
from utils import format_time
sns.set_style('whitegrid')
idx = pd.IndexSlice
deciles = np.arange(.1, 1, .1)
# where we stored the features engineered in the previous notebook
data_store = 'data/algoseek.h5'
# where we'll store the model results
result_store = 'data/intra_day.h5'
# here we save the trained models
model_path = Path('models/intraday')
if not model_path.exists():
    model_path.mkdir(parents=True)
```
## Load Model Data
```
data = pd.read_hdf(data_store, 'model_data2')
data.info(null_counts=True)
data.sample(frac=.1).describe(percentiles=np.arange(.1, 1, .1))
```
## Model Training
### Helper functions
```
class MultipleTimeSeriesCV:
    """Generates tuples of train_idx, test_idx pairs
    Assumes the MultiIndex contains levels 'symbol' and 'date'
    purges overlapping outcomes"""

    def __init__(self,
                 n_splits=3,
                 train_period_length=126,
                 test_period_length=21,
                 lookahead=None,
                 date_idx='date',
                 shuffle=False):
        self.n_splits = n_splits
        self.lookahead = lookahead
        self.test_length = test_period_length
        self.train_length = train_period_length
        self.shuffle = shuffle
        self.date_idx = date_idx

    def split(self, X, y=None, groups=None):
        unique_dates = X.index.get_level_values(self.date_idx).unique()
        days = sorted(unique_dates, reverse=True)
        split_idx = []
        for i in range(self.n_splits):
            test_end_idx = i * self.test_length
            test_start_idx = test_end_idx + self.test_length
            train_end_idx = test_start_idx + self.lookahead - 1
            train_start_idx = train_end_idx + self.train_length + self.lookahead - 1
            split_idx.append([train_start_idx, train_end_idx,
                              test_start_idx, test_end_idx])
        dates = X.reset_index()[[self.date_idx]]
        for train_start, train_end, test_start, test_end in split_idx:
            train_idx = dates[(dates[self.date_idx] > days[train_start])
                              & (dates[self.date_idx] <= days[train_end])].index
            test_idx = dates[(dates[self.date_idx] > days[test_start])
                             & (dates[self.date_idx] <= days[test_end])].index
            if self.shuffle:
                np.random.shuffle(list(train_idx))
            yield train_idx.to_numpy(), test_idx.to_numpy()

    def get_n_splits(self, X, y, groups=None):
        return self.n_splits


def get_fi(model):
    fi = model.feature_importance(importance_type='gain')
    return (pd.Series(fi / fi.sum(),
                      index=model.feature_name()))
```
### Categorical Variables
```
data['stock_id'] = pd.factorize(data.index.get_level_values('ticker'), sort=True)[0]
categoricals = ['stock_id']
```
### Custom Metric
```
def ic_lgbm(preds, train_data):
    """Custom IC eval metric for lightgbm"""
    is_higher_better = True
    return 'ic', spearmanr(preds, train_data.get_label())[0], is_higher_better
```
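The IC returned here is just the Spearman rank correlation between predictions and realized returns, i.e., the Pearson correlation of their ranks. A numpy-only sketch of what `spearmanr` computes (tie handling is ignored for simplicity; the numbers are toy values):

```python
import numpy as np

def rank_ic(preds, actuals):
    """Spearman rank correlation: Pearson correlation of the ranks (no tie handling)."""
    pred_ranks = np.argsort(np.argsort(preds))
    actual_ranks = np.argsort(np.argsort(actuals))
    return np.corrcoef(pred_ranks, actual_ranks)[0, 1]

# a perfectly monotone relationship has IC = 1, regardless of scale
assert abs(rank_ic(np.array([0.1, 0.5, 0.3]), np.array([1.0, 9.0, 4.0])) - 1.0) < 1e-9
```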
### Cross-validation setup
```
DAY = 390 # number of minute bars in a trading day of 6.5 hrs (9:30 - 15:59)
MONTH = 21 # trading days
def get_cv(n_splits=23):
    return MultipleTimeSeriesCV(n_splits=n_splits,
                                lookahead=1,
                                test_period_length=MONTH * DAY,  # test for 1 month
                                train_period_length=12 * MONTH * DAY,  # train for 1 year
                                date_idx='date_time')
```
Show train/validation periods:
```
for i, (train_idx, test_idx) in enumerate(get_cv().split(X=data)):
    train_dates = data.iloc[train_idx].index.unique('date_time')
    test_dates = data.iloc[test_idx].index.unique('date_time')
    print(train_dates.min(), train_dates.max(), test_dates.min(), test_dates.max())
```
### Train model
```
label = sorted(data.filter(like='fwd').columns)
features = data.columns.difference(label).tolist()
label = label[0]
params = dict(objective='regression',
              metric=['rmse'],
              device='gpu',
              max_bin=63,
              gpu_use_dp=False,
              num_leaves=16,
              min_data_in_leaf=500,
              feature_fraction=.8,
              verbose=-1)
num_boost_round = 250
cv = get_cv(n_splits=23) # we have enough data for 23 different test periods
def get_scores(result):
    return pd.DataFrame({'train': result['training']['ic'],
                         'valid': result['valid_1']['ic']})
```
The following model-training loop will take more than 10 hours to run and also consumes substantial memory. If you run into resource constraints, you can modify the code, e.g., by:
1. Only loading data required for one iteration.
2. Shortening the training period to require less than one year.
You can also speed up the process by using fewer `n_splits`, which implies longer test periods.
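The backward-walking arithmetic of `MultipleTimeSeriesCV` can be checked on a toy calendar; the sketch below mirrors the index logic of the class with made-up window lengths (10 days, train 4, test 2):

```python
def walk_forward_splits(n_days, train_length, test_length, n_splits, lookahead=1):
    """Yield (train_days, test_days), most recent test period first,
    mirroring the backward-walking index logic of MultipleTimeSeriesCV."""
    days = list(range(n_days))[::-1]  # newest day first
    for i in range(n_splits):
        test_end = i * test_length
        test_start = test_end + test_length
        train_end = test_start + lookahead - 1
        train_start = train_end + train_length + lookahead - 1
        yield sorted(days[train_end:train_start]), sorted(days[test_end:test_start])

splits = list(walk_forward_splits(n_days=10, train_length=4, test_length=2, n_splits=2))
# each training window ends right before its test window starts
```

With `lookahead > 1`, a gap of `lookahead - 1` days is purged between train and test windows to avoid leakage from overlapping forward returns.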
```
start = time()
for fold, (train_idx, test_idx) in enumerate(cv.split(X=data), 1):
    # create lgb train set
    train_set = data.iloc[train_idx, :]
    lgb_train = lgb.Dataset(data=train_set.drop(label, axis=1),
                            label=train_set[label],
                            categorical_feature=categoricals)

    # create lgb test set
    test_set = data.iloc[test_idx, :]
    lgb_test = lgb.Dataset(data=test_set.drop(label, axis=1),
                           label=test_set[label],
                           categorical_feature=categoricals,
                           reference=lgb_train)

    # train model
    evals_result = {}
    model = lgb.train(params=params,
                      train_set=lgb_train,
                      valid_sets=[lgb_train, lgb_test],
                      feval=ic_lgbm,
                      num_boost_round=num_boost_round,
                      evals_result=evals_result,
                      verbose_eval=50)
    model.save_model((model_path / f'{fold:02}.txt').as_posix())

    # get train/valid ic scores
    scores = get_scores(evals_result)
    scores.to_hdf(result_store, f'ic/{fold:02}')

    # get feature importance
    fi = get_fi(model)
    fi.to_hdf(result_store, f'fi/{fold:02}')

    # generate validation predictions
    X_test = test_set.loc[:, model.feature_name()]
    y_test = test_set.loc[:, [label]]
    y_test['pred'] = model.predict(X_test)
    y_test.to_hdf(result_store, f'predictions/{fold:02}')

    # compute average IC per minute
    by_minute = y_test.groupby(test_set.index.get_level_values('date_time'))
    daily_ic = by_minute.apply(lambda x: spearmanr(x[label], x.pred)[0]).mean()
    print(f'\nFold: {fold:02} | {format_time(time()-start)} | IC per minute: {daily_ic:.2%}\n')
```
## Signal Evaluation
```
with pd.HDFStore(result_store) as store:
    pred_keys = [k[1:] for k in store.keys() if k[1:].startswith('pred')]
    cv_predictions = pd.concat([store[k] for k in pred_keys]).sort_index()
cv_predictions.info(null_counts=True)
time_stamp = cv_predictions.index.get_level_values('date_time')
dates = sorted(np.unique(time_stamp.date))
```
We have out-of-sample predictions for 484 days from February 2016 through December 2017:
```
print(f'# Days: {len(dates)} | First: {dates[0]} | Last: {dates[-1]}')
```
We only use minutes with at least 100 predictions:
```
n = cv_predictions.groupby('date_time').size()
```
There are ~700 periods, equivalent to a bit over a single trading day (0.67% of all periods in the sample), with fewer than 100 predictions over the 23 test months:
```
incomplete_minutes = n[n<100].index
print(f'{len(incomplete_minutes)} ({len(incomplete_minutes)/len(n):.2%})')
cv_predictions = cv_predictions[~time_stamp.isin(incomplete_minutes)]
cv_predictions.info(null_counts=True)
```
### Information Coefficient
#### Across all periods
```
ic = spearmanr(cv_predictions.fwd1min, cv_predictions.pred)[0]
```
#### By minute
We are making new predictions every minute, so it makes sense to look at the average performance across all short-term forecasts:
```
minutes = cv_predictions.index.get_level_values('date_time')
by_minute = cv_predictions.groupby(minutes)
ic_by_minute = by_minute.apply(lambda x: spearmanr(x.fwd1min, x.pred)[0])
minute_ic_mean = ic_by_minute.mean()
minute_ic_median = ic_by_minute.median()
print(f'\nAll periods: {ic:6.2%} | By Minute: {minute_ic_mean: 6.2%} (Median: {minute_ic_median: 6.2%})')
```
Plotted as a five-day rolling average, we see that the IC was mostly below the out-of-sample period mean, and increased during the last quarter of 2017 (as reflected in the validation results we observed while training the model).
```
ax = ic_by_minute.rolling(5*650).mean().plot(figsize=(14, 5), title='IC (5-day MA)', rot=0)
ax.axhline(minute_ic_mean, ls='--', lw=1, c='k')
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '{:.0%}'.format(y)))
ax.set_ylabel('Information Coefficient')
ax.set_xlabel('')
sns.despine()
plt.tight_layout()
```
### Vectorized backtest of a naive strategy: financial performance by signal quantile
Alphalens does not work with minute data, so we need to compute our own signal performance measures.
Unfortunately, Zipline's Pipeline also doesn't work for minute data, and Backtrader takes a very long time with such a large dataset. Hence, instead of an event-driven backtest of entry/exit rules as in previous examples, we can only create a rough sketch of the financial performance of a naive trading strategy driven by the model's predictions using vectorized backtesting (see Chapter 8 on the [ML4T workflow](../08_ml4t_workflow)). As we will see below, this does not produce particularly helpful results.
This naive strategy invests in equal-weighted portfolios of the stocks in each decile under the following assumptions (mentioned at the beginning of this notebook):
1. Based on predictions using inputs from the current and previous bars, we can enter positions at the first trade price of the following minute bar.
2. We exit all positions at the last trade price of that following minute bar.
3. There are no trading costs or market impact (slippage) from our trades (but we can check how sensitive the results are to them).
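Under these assumptions, the return of one minute bar is simply the last trade price over the first; a toy numpy sketch for an equal-weighted basket (the prices are made up):

```python
import numpy as np

# hypothetical first and last trade prices for five stocks in one decile
first = np.array([100.00, 50.00, 20.00, 80.00, 10.00])
last = np.array([100.03, 50.01, 19.99, 80.04, 10.00])

# per-stock bar returns: enter at the first trade, exit at the last, no costs
bar_returns = last / first - 1

# the equal-weighted portfolio earns the simple average of the stock returns
portfolio_return = bar_returns.mean()
```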
#### Average returns by minute bar and signal quantile
To this end, we compute the quintiles and deciles of the model's `fwd1min` predictions for each minute:
```
by_minute = cv_predictions.groupby(minutes, group_keys=False)
labels = list(range(1, 6))
cv_predictions['quintile'] = by_minute.apply(lambda x: pd.qcut(x.pred, q=5, labels=labels).astype(int))
labels = list(range(1, 11))
cv_predictions['decile'] = by_minute.apply(lambda x: pd.qcut(x.pred, q=10, labels=labels).astype(int))
cv_predictions.info(show_counts=True)
```
#### Descriptive statistics of intraday returns by quintile and decile of model predictions
Next, we compute the average one-minute returns for each quintile / decile and minute.
```
def compute_intraday_returns_by_quantile(predictions, quantile='quintile'):
    by_quantile = predictions.reset_index().groupby(['date_time', quantile])
    return by_quantile.fwd1min.mean().unstack(quantile).sort_index()
intraday_returns = {'quintile': compute_intraday_returns_by_quantile(cv_predictions),
                    'decile': compute_intraday_returns_by_quantile(cv_predictions, quantile='decile')}


def summarize_intraday_returns(returns):
    summary = returns.describe(deciles)
    return pd.concat([summary.iloc[:1].applymap(lambda x: f'{x:,.0f}'),
                      summary.iloc[1:].applymap(lambda x: f'{x:.4%}')])
```
The returns per minute, averaged over the 23-month period, increase by quintile/decile and range from -.3 (-.4) to .27 (.37) basis points for the bottom and top quintile (decile), respectively. While this aligns with the finding of a weakly positive rank correlation coefficient, it also suggests that such small gains are unlikely to survive the impact of trading costs.
```
summary = summarize_intraday_returns(intraday_returns['quintile'])
summary
summary = summarize_intraday_returns(intraday_returns['decile'])
summary
```
#### Cumulative Performance by Quantile
To simulate the performance of our naive strategy that trades all available stocks every minute, we simply assume that we can reinvest (including potential gains/losses) every minute. To check the sensitivity with respect to trading costs, we can assume a constant cost expressed as a number (or fraction) of basis points and subtract it from the minute-bar returns.
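The effect of even tiny per-minute costs compounds dramatically. A rough numpy illustration with made-up numbers: a hypothetical 0.3bp average gross return per minute and a 0.2bp cost, over one 21-day month of 390-minute sessions:

```python
import numpy as np

minutes = 390 * 21                  # one month of minute bars
gross_r = np.full(minutes, 3e-5)    # hypothetical 0.3bp gross return per minute
cost = 2e-5                         # hypothetical 0.2bp cost per minute

# reinvesting every minute compounds the per-bar returns
gross = np.prod(1 + gross_r) - 1
net = np.prod(1 + gross_r - cost) - 1
```

Here the gross return compounds to roughly 28% over the month, while the net return collapses to under 9%; a cost equal to the average gross return would wipe it out entirely.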
```
def plot_cumulative_performance(returns, quantile='quintile', trading_costs_bp=0):
    """Plot average return by quantile (in bp) as well as cumulative return,
    both net of trading costs (provided as basis points; 1bp = 0.01%)
    """
    fig, axes = plt.subplots(figsize=(14, 4), ncols=2)
    sns.barplot(y='fwd1min', x=quantile,
                data=returns[quantile].mul(10000).sub(trading_costs_bp).stack().to_frame(
                    'fwd1min').reset_index(),
                ax=axes[0])
    axes[0].set_title(f'Avg. 1-min Return by Signal {quantile.capitalize()}')
    axes[0].set_ylabel('Return (bps)')
    axes[0].set_xlabel(quantile.capitalize())

    title = f'Cumulative Return by Signal {quantile.capitalize()}'
    (returns[quantile].sort_index().add(1).sub(trading_costs_bp / 10000).cumprod().sub(1)
     .plot(ax=axes[1], title=title))
    axes[1].yaxis.set_major_formatter(
        FuncFormatter(lambda y, _: '{:.0%}'.format(y)))
    axes[1].set_xlabel('')
    axes[1].set_ylabel('Return')

    fig.suptitle(f'Average and Cumulative Performance (Net of Trading Cost: {trading_costs_bp:.2f}bp)')
    sns.despine()
    fig.tight_layout()
```
Without trading costs, the compounding of even fairly small gains leads to extremely large cumulative profits for the top quantile. However, these disappear as soon as we allow for minuscule trading costs that reduce the average quantile return close to zero.
##### Without trading costs
```
plot_cumulative_performance(intraday_returns, 'quintile', trading_costs_bp=0)
plot_cumulative_performance(intraday_returns, 'decile', trading_costs_bp=0)
```
##### With extremely low trading costs
```
# assuming costs of a fraction of a basis point, close to the average return of the top quantile
plot_cumulative_performance(intraday_returns, 'quintile', trading_costs_bp=.2)
plot_cumulative_performance(intraday_returns, 'decile', trading_costs_bp=.3)
```
### Feature Importance
We'll take a quick look at the features that most contributed to improving the IC across the 23 folds:
```
with pd.HDFStore(result_store) as store:
    fi_keys = [k[1:] for k in store.keys() if k[1:].startswith('fi')]
    fi = pd.concat([store[k].to_frame(i) for i, k in enumerate(fi_keys, 1)], axis=1)
```
The top features from a conventional feature importance perspective are the ticker, followed by NATR, minute of the day, latest 1m return and the CCI:
```
fi.mean(1).nlargest(25).plot.barh(figsize=(12, 8), title='LightGBM Feature Importance (gain)')
sns.despine()
plt.tight_layout();
```
You can explore with greater accuracy and in more detail how feature values affect predictions using SHAP values, as demonstrated in various other notebooks in this chapter and the appendix.
## Conclusion
We have seen that a relatively simple gradient boosting model is able to achieve fairly consistent predictive performance that is significantly better than a random guess even on a very short horizon.
However, the resulting economic gains of our naive strategy of frequently buying/(short-)selling the top/bottom quantiles are too small to overcome the inevitable transaction costs. On the one hand, this demonstrates the challenges of extracting value from a predictive signal. On the other hand, it shows that we need a more sophisticated backtesting platform before we can even begin to design and evaluate a strategy that requires far fewer trades to exploit the signal in our ML predictions.
In addition, we would also want to improve the model by adding more informative features, e.g., based on the quote/trade information contained in the Algoseek data, or by fine-tuning our model architecture and hyperparameter settings.
```
import numpy as np
import pandas as pd
import holoviews as hv
import networkx as nx
from holoviews import opts
hv.extension('bokeh')
defaults = dict(width=400, height=400)
hv.opts.defaults(
    opts.EdgePaths(**defaults), opts.Graph(**defaults), opts.Nodes(**defaults))
```
Visualizing and working with network graphs is a common problem in many different disciplines. HoloViews provides the ability to represent and visualize graphs very simply and easily with facilities for interactively exploring the nodes and edges of the graph, especially using the bokeh plotting interface.
The ``Graph`` ``Element`` differs from other elements in HoloViews in that it consists of multiple sub-elements. The data of the ``Graph`` element itself are the abstract edges between the nodes. By default the element will automatically compute concrete ``x`` and ``y`` positions for the nodes and represent them using a ``Nodes`` element, which is stored on the Graph. The abstract edges and concrete node positions are sufficient to render the ``Graph`` by drawing straight-line edges between the nodes. In order to supply explicit edge paths we can also declare ``EdgePaths``, providing explicit coordinates for each edge to follow.
To summarize a ``Graph`` consists of three different components:
* The ``Graph`` itself holds the abstract edges stored as a table of node indices.
* The ``Nodes`` hold the concrete ``x`` and ``y`` positions of each node along with a node ``index``. The ``Nodes`` may also define any number of value dimensions, which can be revealed when hovering over the nodes or to color the nodes by.
* The ``EdgePaths`` can optionally be supplied to declare explicit node paths.
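Under the hood, these components are just tables of numbers. A small numpy sketch of the kind of data involved: an edge table plus node positions on a circle, similar to the circular layout that ``layout_nodes`` computes by default (names and sizes are illustrative):

```python
import numpy as np

n_nodes = 8
node_index = np.arange(n_nodes)

# edge table: abstract connectivity (node 0 connected to every node)
src = np.zeros(n_nodes, dtype=int)
tgt = node_index

# concrete node positions: a circular layout on the unit circle
angle = 2 * np.pi * node_index / n_nodes
xs, ys = np.cos(angle), np.sin(angle)
```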
#### A simple Graph
Let's start by declaring a very simple graph connecting one node to all others. If we simply supply the abstract connectivity of the ``Graph``, it will automatically compute a layout for the nodes using the ``layout_nodes`` operation, which defaults to a circular layout:
```
# Declare abstract edges
N = 8
node_indices = np.arange(N, dtype=np.int32)
source = np.zeros(N, dtype=np.int32)
target = node_indices
simple_graph = hv.Graph(((source, target),))
simple_graph
```
#### Accessing the nodes and edges
We can easily access the ``Nodes`` and ``EdgePaths`` on the ``Graph`` element using the corresponding properties:
```
simple_graph.nodes + simple_graph.edgepaths
```
#### Displaying directed graphs
When specifying the graph edges, the source and target node are listed in order; if the graph is actually a directed graph, this ordering may be used to indicate directionality. By setting ``directed=True`` as a plot option, it is possible to indicate the direction of each edge using an arrow:
```
simple_graph.relabel('Directed Graph').opts(directed=True, node_size=5, arrowhead_length=0.05)
```
The length of the arrows can be set as a fraction of the overall graph extent using the ``arrowhead_length`` option.
#### Supplying explicit paths
Next we will extend this example by supplying explicit edges:
```
def bezier(start, end, control, steps=np.linspace(0, 1, 100)):
    return (1-steps)**2*start + 2*(1-steps)*steps*control + steps**2*end

x, y = simple_graph.nodes.array([0, 1]).T

paths = []
for node_index in node_indices:
    ex, ey = x[node_index], y[node_index]
    paths.append(np.column_stack([bezier(x[0], ex, 0), bezier(y[0], ey, 0)]))

bezier_graph = hv.Graph(((source, target), (x, y, node_indices), paths))
bezier_graph
```
## Interactive features
#### Hover and selection policies
Thanks to Bokeh, we can reveal more about the graph by hovering over the nodes and edges. The ``Graph`` element provides an ``inspection_policy`` and a ``selection_policy``, which define whether hovering and selection highlight edges associated with the selected node or nodes associated with the selected edge. These policies can be toggled by setting the policy to ``'nodes'`` (the default) or ``'edges'``.
```
bezier_graph.relabel('Edge Inspection').opts(inspection_policy='edges')
```
In addition to changing the policy we can also change the colors used when hovering and selecting nodes:
```
bezier_graph.opts(
    opts.Graph(inspection_policy='nodes', tools=['hover', 'box_select'],
               edge_hover_line_color='green', node_hover_fill_color='red'))
```
#### Additional information
We can also associate additional information with the nodes and edges of a graph. By constructing the ``Nodes`` explicitly we can declare additional value dimensions, which are revealed when hovering and/or can be mapped to the color by setting the ``color`` to the dimension name ('Weight'). We can also associate additional information with each edge by supplying a value dimension to the ``Graph`` itself, which we can map to various style options, e.g. by setting the ``edge_color`` and ``edge_line_width``.
```
node_labels = ['Output']+['Input']*(N-1)
np.random.seed(7)
edge_labels = np.random.rand(8)
nodes = hv.Nodes((x, y, node_indices, node_labels), vdims='Type')
graph = hv.Graph(((source, target, edge_labels), nodes, paths), vdims='Weight')
(graph + graph.opts(inspection_policy='edges', clone=True)).opts(
    opts.Graph(node_color='Type', edge_color='Weight', cmap='Set1',
               edge_cmap='viridis', edge_line_width=hv.dim('Weight')*10))
```
If you want to supply additional node information without specifying explicit node positions, you may pass in a ``Dataset`` object consisting of various value dimensions.
```
node_info = hv.Dataset(node_labels, vdims='Label')
hv.Graph(((source, target), node_info)).opts(node_color='Label', cmap='Set1')
```
## Working with NetworkX
NetworkX is a very useful library when working with network graphs and the Graph Element provides ways of importing a NetworkX Graph directly. Here we will load the Karate Club graph and use the ``circular_layout`` function provided by NetworkX to lay it out:
```
G = nx.karate_club_graph()
hv.Graph.from_networkx(G, nx.layout.circular_layout).opts(tools=['hover'])
```
It is also possible to pass arguments to the NetworkX layout function as keywords to ``hv.Graph.from_networkx``, e.g., we can override the ``k``-value of the Fruchterman-Reingold layout:
```
hv.Graph.from_networkx(G, nx.layout.fruchterman_reingold_layout, k=1)
```
Finally, if we want to lay out a ``Graph`` after it has already been constructed, the ``layout_nodes`` operation may be used, which also allows applying the ``weight`` argument to graphs that have not been constructed with NetworkX:
```
from holoviews.element.graphs import layout_nodes
graph = hv.Graph([
    ('a', 'b', 3),
    ('a', 'c', 0.2),
    ('c', 'd', 0.1),
    ('c', 'e', 0.7),
    ('c', 'f', 5),
    ('a', 'd', 0.3)
], vdims='weight')
layout_nodes(graph, layout=nx.layout.fruchterman_reingold_layout, kwargs={'weight': 'weight'})
```
## Adding labels
If the ``Graph`` we have constructed has additional metadata, we can easily use it for labels: we simply get a handle on the nodes, cast them to ``hv.Labels``, and then overlay them:
```
graph = hv.Graph.from_networkx(G, nx.layout.fruchterman_reingold_layout)
labels = hv.Labels(graph.nodes, ['x', 'y'], 'club')
(graph * labels.opts(text_font_size='8pt', text_color='white', bgcolor='gray'))
```
## Animating graphs
Like all other elements ``Graph`` can be updated in a ``HoloMap`` or ``DynamicMap``. Here we animate how the Fruchterman-Reingold force-directed algorithm lays out the nodes in real time.
```
hv.HoloMap({i: hv.Graph.from_networkx(G, nx.spring_layout, iterations=i, seed=10) for i in range(5, 30, 5)},
           kdims='Iterations')
```
## Real world graphs
As a final example let's look at a slightly larger graph. We will load a dataset of a Facebook network consisting of a number of friendship groups identified by their ``'circle'``. We will load the edge and node data using pandas and then color each node by its friendship group using many of the things we learned above.
```
kwargs = dict(width=800, height=800, xaxis=None, yaxis=None)
opts.defaults(opts.Nodes(**kwargs), opts.Graph(**kwargs))
colors = ['#000000']+hv.Cycle('Category20').values
edges_df = pd.read_csv('../assets/fb_edges.csv')
fb_nodes = hv.Nodes(pd.read_csv('../assets/fb_nodes.csv')).sort()
fb_graph = hv.Graph((edges_df, fb_nodes), label='Facebook Circles')
fb_graph.opts(cmap=colors, node_size=10, edge_line_width=1,
              node_line_color='gray', node_color='circle')
```
## Bundling graphs
The datashader library provides algorithms for bundling the edges of a graph, and HoloViews provides convenient wrappers around them. Note that these operations need ``scikit-image``, which you can install using:
```
conda install scikit-image
```
or
```
pip install scikit-image
```
```
from holoviews.operation.datashader import datashade, bundle_graph
bundled = bundle_graph(fb_graph)
bundled
```
## Datashading graphs
For graphs with a large number of edges we can datashade the paths and display the nodes separately. This loses some of the interactive features but will let you visualize quite large graphs:
```
(datashade(bundled, normalization='linear', width=800, height=800) * bundled.nodes).opts(
    opts.Nodes(color='circle', size=10, width=1000, cmap=colors, legend_position='right'))
```
### Applying selections
Alternatively we can select the nodes and edges by an attribute that resides on either. In this case we will select the nodes and edges for a particular circle and then overlay just the selected part of the graph on the datashaded plot. Note that selections on the ``Graph`` itself will select all nodes that connect to one of the selected nodes. In this way a smaller subgraph can be highlighted and the larger graph can be datashaded.
```
datashade(bundle_graph(fb_graph), normalization='linear', width=800, height=800) *\
bundled.select(circle='circle15').opts(node_fill_color='white')
```
To select just nodes that are in 'circle15' set the ``selection_mode='nodes'`` overriding the default of 'edges':
```
bundled.select(circle='circle15', selection_mode='nodes')
```
# Data Analysis of Bitcoin and Where it is Heading
# Graphing the whole Graph
```
#### Importing Pandas and others and Reading csv file
import os
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import plotly.express as px
##Remodified .CSV data to make managing data easier.
##Some data cleaning.
Bitcoin = pd.read_csv('HistoricalData4.csv')
##Created A daily Average for each day to work around with the data.
Bitcoin['Daily Average'] = Bitcoin.iloc[:, 2:4].sum(axis=1)/2
##Just to let the viewer see the data that is coming out, truncated
##because there are 2843 rows to show... too much data to print at once
print(Bitcoin[['Date', 'Low', 'High', 'Daily Average']])
print(Bitcoin[['Date', 'Open', 'Close']])
print(Bitcoin[['Date', 'Volume', 'Market Cap']])
#Line graph plot to show, low, high, and Average price
Bitcoin.plot(x="Date", y=["Low", "High", "Daily Average"], figsize=(15, 20), title ="Bitcoin Low, High, and Daily Average Prices.", ylabel="Price in $")
plt.show()
#Line graph to show traditional Open and Close (although the selling and buying never sleeps for cryptocurrency)
Bitcoin.plot(x="Date", y=["Open", "Close"], figsize=(15, 30), title ="Open and Close Prices.", ylabel="Price in $")
plt.show()
#Line graph to Show Volume and Market Cap
#These two indicators are important to understand how healthy or not healthy a particular stock or cryptocurrency is.
#High Volume but a decrease in price could mean people are selling because they see the price falling, or are cashing out
#High Volume and higher price means that people are still buying the currency because there is value.
Bitcoin.plot(x="Date", y=["Volume", "Market Cap"], figsize=(15, 30), title ="Volume and Market Cap.", ylabel="Price in $")
plt.show()
```
## Thoughts on the Line Graph
This graph covers the entire period from when Bitcoin started being sold to the day I stopped collecting data. As you can see, it isn't very useful for extracting specific information: there is simply so much data, and the extremes (prices near 0 early on, and Bitcoin recently passing 50k) make it hard to read specific values unless the figure were perhaps ten times bigger. You can't even see the minute differences between the 3 data sets.
## Cryptocurrencies are said to be volatile, but are they though?
Or is it only considered volatile if:
1. You invest in the wrong ones
2. You continue to invest in a project that isn't ongoing (like Dogecoin), has an extremely high supply cap, or is known to have problems (the people and company aren't trustworthy, or the company or currency was hacked)
3. You invest in unknown cryptocurrencies that are just derivatives of Bitcoin
4. There is very little popular news around the cryptocurrency.
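One way to put a number on "volatile" is the rolling standard deviation of daily returns. Here is a sketch of that calculation, using a synthetic price series as a stand-in for the `Daily Average` column computed above:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Bitcoin 'Daily Average' series loaded above.
rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 365)))

returns = prices.pct_change()                   # day-over-day percentage change
rolling_vol = returns.rolling(window=30).std()  # 30-day rolling volatility
print(rolling_vol.dropna().head())
```

Plotting `rolling_vol` over time makes it easy to compare calm and turbulent stretches of the price history.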
## There are 3 points in the graph that prove to be interesting because those are 3 points where Bitcoin skyrocketed
The last one needs no introduction: Elon Musk decided to invest more than $1 billion in Bitcoin, and that alone caused the cryptocurrency to skyrocket. [CNBC Link to Article About it!](https://www.cnbc.com/2021/02/08/tesla-buys-1point5-billion-in-bitcoin.html) While this graph doesn't reflect the current day (the price is at 50k right now), it's still fairly accurate.
Other than that, let's see if we can grab some data around the two other points: what caused the price to climb close to 20k and then start dropping again to around ~16k in the line graph? Let's create a closer line graph for that.
# Bulls and Bear #1
The first point of the graph is interesting: the price went from about 1000 to close to 20000 in half a year. Since I sold my Bitcoin for a measly 700 back in the day and stopped reading news about it, I am not too sure what caused it to jump so significantly, other than knowing that Bitcoin gets increasingly harder to mine as the network solves ever-harder math problems that take up a lot of energy. Among miners and non-miners alike, the perception is that cryptocurrency mining is energy intensive, and it is, for first/second-generation cryptocurrencies that use mining to issue currency.
[Bitcoin uses as much energy as Argentina](https://www.iflscience.com/technology/bitcoin-mining-now-uses-more-electricity-than-argentina/)
```
#### Importing Pandas and others and Reading csv file
import os
import matplotlib.pyplot as plt
import numpy as np
import math
import seaborn as sns
import pandas as pd
import plotly.express as px
Bitcoin = pd.read_csv('HistoricalData4.csv')
##Created A daily Average for each day to work around with the data.
Bitcoin['Daily Average'] = Bitcoin.iloc[:, 2:4].sum(axis=1)/2
#Condensed line graph plot to show, low, high, and Average price
Bitcoin.plot(x="Date", y=["Low", "High", "Daily Average"], figsize=(15, 20), title ="Bitcoin Low, High, and Daily Average Prices.", ylabel="Price in $")
plt.ylim([1000, 20000])
plt.xlim([1500, 2000])
plt.xticks(visible = True)
plt.show()
```
Sorting around June 16, 2017 and August 21, 2018, [we get some various articles while searching through Google](https://www.google.com/search?q=bitcoin&client=firefox-b-1-d&tbs=cdr:1,cd_min:6/8/2017,cd_max:10/21/2018,sbd:1&tbm=nws&ei=ys0tYMT8EIXU-gSIh7WICA&start=0&sa=N&ved=0ahUKEwjEie_KrvLuAhUFqp4KHYhDDYE4ChDy0wMIhQE&biw=1280&bih=818&dpr=1)
[Bitcoin Hits a New Record High, But Stops Short of $20,000](https://fortune.com/2017/12/17/bitcoin-record-high-short-of-20000/)
[Why is bitcoin’s price so high?](https://techcrunch.com/2017/12/08/why-is-bitcoins-price-so-high/)
[Bitcoin tops $16,000, and it's $271B market value passes Home Depot's](https://www.usatoday.com/story/money/2017/12/07/bitcoin-tops-15-000-and-its-259-b-market-value-tops-home-depot/929962001/)
## Bear Market of Bitcoin, and the Rise of Bitcoin Cash Derivative
Later articles in 2018 point to [Bitcoin Falling Off its all time high of getting close to 20k and dropping down close to 8k](https://www.reuters.com/article/us-global-markets-bitcoin-idUSKBN1FM11M) due to possible regulatory clampdown similar to what is happening to another cryptocurrency called [Ripple](https://www.sec.gov/news/press-release/2020-338).
In 2017, [Bitcoin Cash](https://www.marketwatch.com/story/meet-bitcoin-cashthe-new-digital-currency-that-surged-122-in-less-than-a-day-2017-08-02) also made its debut. In technical terms, it is known as a fork/derivative: it followed the example of the original Bitcoin and made changes to it. Even so, [Bitcoin Cash as of this article is 700+](https://www.coindesk.com/price/bitcoin-cash).
# What is the future of Cryptocurrency?
With the stock market and cryptocurrencies, we get times of lows and highs. For most cryptocurrencies, in typical stock-market terms, we are in quite bullish territory right now.
But the old adage applies here: buy low, sell high. Right now might not be a good time to buy BTC. If news is an indicator, it could go either way right now. A few things to look at:
Elon Musk spent $1B+ on Bitcoin
[SEC is looking into regulating cryptocurrency](https://www.financemagnates.com/cryptocurrency/regulation/sec-commissioner-demands-clear-cryptocurrency-regulations/)
[Visa is planning to include cryptocurrency in its list of currencies allowed to be transacted.](https://www.forbes.com/sites/billybambrough/2021/02/03/visa-reveals-bitcoin-and-crypto-banking-roadmap-amid-race-to-reach-network-of-70-million/?sh=39b269b401cd)
The original types of cryptocurrency are energy intensive; Bitcoin and Ethereum are the worst offenders when it comes to energy consumption. Newer cryptos, like Cardano, do away with mining and instead reward people who invest with more shares, rather than having thousands of computers taking up the world's energy.
**A linear regression** shows that the data has exceeded its predicted range. From day 1 of the collected data to recently, the current price of Bitcoin is an extreme outlier relative to the fit: the regression expects Bitcoin's price to be ~10k, but it has pushed up to 50k recently.
If the pattern continues, a refit regression line will eventually move up closer to 50k to reflect what could be its potential price; as of right now, the prediction isn't accurate.
```
import os
import matplotlib.pyplot as plt
import math
import numpy as np
import seaborn as sns
import pandas as pd
import plotly.express as px
from sklearn.linear_model import LinearRegression
##Remodified .CSV data to make managing data easier.
##Some data cleaning.
Bitcoin = pd.read_csv('HistoricalData4.csv')
##Created A daily Average for each day to work around with the data.
Bitcoin['Daily Average'] = Bitcoin.iloc[:, 2:4].sum(axis=1)/2
Bitcoin['Date'] = pd.to_datetime(Bitcoin['Date']).apply(lambda date: date.toordinal())
X = Bitcoin[["Date"]]
y = Bitcoin[["Daily Average"]]
regressor = LinearRegression()
regressor.fit(X, y)
y_pred = regressor.predict(X)
plt.scatter(X, y, color = 'red')
plt.plot(X, regressor.predict(X), color='blue')
plt.title('Simple Bitcoin Regression')
plt.xlabel('Ordinal Date')
plt.ylabel('Daily Average Price in $')
plt.figsize=(15, 30)
plt.show()
```
## In the next installment, I'll be doing a few different cryptocurrencies to see if it follows a similar pattern.
My thought is to do one on Cardano/ADA, as that is where I see the future of cryptocurrency going beyond just currency itself (like Bitcoin).
<a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/athermal_linear_elasticity.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Imports and utility code
!pip install jax-md
import numpy as onp
import jax.numpy as jnp
from jax.config import config
config.update('jax_enable_x64', True)
from jax import random
from jax import jit, lax, grad, vmap
import jax.scipy as jsp
from jax_md import space, energy, smap, minimize, util, elasticity, quantity
from jax_md.colab_tools import renderer
f32 = jnp.float32
f64 = jnp.float64
from functools import partial
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 0.7)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
def run_minimization_while(energy_fn, R_init, shift, max_grad_thresh = 1e-12, max_num_steps=1000000, **kwargs):
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def cond_fn(val):
state, i = val
return jnp.logical_and(get_maxgrad(state) > max_grad_thresh, i<max_num_steps)
@jit
def body_fn(val):
state, i = val
return apply(state), i+1
state = init(R_init)
state, num_iterations = lax.while_loop(cond_fn, body_fn, (state, 0))
return state.position, get_maxgrad(state), num_iterations
def run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift,
max_grad_thresh = 1e-12, max_num_steps = 1000000,
step_inc = 1000, verbose = False, **kwargs):
nbrs = neighbor_fn.allocate(R_init)
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def body_fn(state_nbrs, t):
state, nbrs = state_nbrs
nbrs = neighbor_fn.update(state.position, nbrs)
state = apply(state, neighbor=nbrs)
return (state, nbrs), 0
state = init(R_init, neighbor=nbrs)
step = 0
while step < max_num_steps:
if verbose:
print('minimization step {}'.format(step))
rtn_state, _ = lax.scan(body_fn, (state, nbrs), step + jnp.arange(step_inc))
new_state, nbrs = rtn_state
# If the neighbor list overflowed, rebuild it and repeat part of
# the simulation.
if nbrs.did_buffer_overflow:
print('Buffer overflow.')
nbrs = neighbor_fn.allocate(state.position)
else:
state = new_state
step += step_inc
if get_maxgrad(state) <= max_grad_thresh:
break
if verbose:
print('successfully finished {} steps.'.format(step*step_inc))
return state.position, get_maxgrad(state), nbrs, step
def run_minimization_scan(energy_fn, R_init, shift, num_steps=5000, **kwargs):
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def scan_fn(state, i):
return apply(state), 0.
state = init(R_init)
state, _ = lax.scan(scan_fn,state,jnp.arange(num_steps))
return state.position, jnp.amax(jnp.abs(state.force))
key = random.PRNGKey(0)
```
#Linear elasticity in athermal systems
## The elastic modulus tensor
A global affine deformation is given to lowest order by a symmetric strain tensor $\epsilon$, which transforms any vector $r$ according to
\begin{equation}
r \rightarrow (1 + \epsilon) \cdot r.
\end{equation}
Note that in $d$ dimensions, the strain tensor has $d(d + 1)/2$ independent elements. Now, when a mechanically stable system (i.e. a system at a local energy minimum where there is zero net force on every particle) is subject to an affine deformation, it usually does not remain in mechanical equilibrium. Therefore, there is a secondary, nonaffine response that returns the system to mechanical equilibrium, though usually at a different energy than the undeformed state.
The change of energy can be written to quadratic order as
\begin{equation}
\frac{ \Delta U}{V^0} = \sigma^0_{ij}\epsilon_{ji} + \frac 12 C_{ijkl} \epsilon_{ij} \epsilon_{kl} + O\left( \epsilon^3 \right)
\end{equation}
where $C_{ijkl}$ is the $d × d × d × d$ elastic modulus tensor, $\sigma^0$ is the $d × d$ symmetric stress tensor describing residual stresses in the initial state, and $V^0$ is the volume of the initial state. The symmetries of $\epsilon_{ij}$ imply the following:
\begin{equation}
C_{ijkl} = C_{jikl} = C_{ijlk} = C_{klij}
\end{equation}
When no further symmetries are assumed, the number of independent elastic constants becomes $\frac 18 d(d + 1)(d^2 + d + 2)$, which is 6 in two dimensions and 21 in three dimensions.
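The count of independent constants can be checked directly; a quick sketch:

```python
def n_independent_constants(d):
    # Independent elastic constants given only the symmetries
    # C_ijkl = C_jikl = C_ijlk = C_klij: d(d+1)(d^2+d+2)/8
    return d * (d + 1) * (d**2 + d + 2) // 8

print(n_independent_constants(2))  # 6 in two dimensions
print(n_independent_constants(3))  # 21 in three dimensions
```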
##Linear response to an external force
Consider a set of $N$ particles in $d$ dimensions with positions $R_0$. Using $u \equiv R - R_0$ and assuming fixed boundary conditions, we can expand the energy about $R_0$:
\begin{equation}
U = U^0 - F^0 u + \frac 12 u H^0 u + O(u^3),
\end{equation}
where $U^0$ is the energy at $R_0$, $F^0$ is the force, $F^0_\mu \equiv \left. \frac {\partial U}{\partial u_\mu} \right |_{u=0}$, and $H^0$ is the Hessian, $H^0 \equiv \left. \frac{ \partial^2 U}{\partial u_\mu \partial u_\nu}\right|_{u=0}$.
Note that here we are expanding in terms of the particle positions, whereas above we were expanding in the global strain degrees of freedom.
If we assume that $R_0$ corresponds to a local energy minimum, then $F^0=0$. Dropping higher order terms, we have a system of coupled harmonic oscillators given by
\begin{equation}
\Delta U \equiv U - U^0 = \frac 12 u H^0 u.
\end{equation}
This is independent of the form or details of $U$.
Hooke's law for this system gives the net force $f$ as a result of displacing the particles by $u$:
\begin{equation}
f = -H^0 u.
\end{equation}
Thus, if an *external* force $f_\mathrm{ext}$ is applied, the particles will respond so that the total force is zero, i.e. $f = -f_\mathrm{ext}$. This response is obtained by solving for $u$:
\begin{equation}
u = (H^0)^{-1} f_\mathrm{ext}.
\end{equation}
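A minimal numerical sketch of this linear response, using a hypothetical two-particle chain in 1d with a hand-built Hessian (not part of this notebook's setup):

```python
import numpy as np

# Hypothetical example: two particles in 1d, each tied to a wall and to
# each other by unit-stiffness springs. The Hessian of the harmonic energy:
H0 = np.array([[2.0, -1.0],
               [-1.0, 2.0]])
f_ext = np.array([1.0, 0.0])       # external force applied to particle 0
u = np.linalg.solve(H0, f_ext)     # response u = (H^0)^{-1} f_ext
assert np.allclose(H0 @ u, f_ext)  # particles rebalance so f = -f_ext
print(u)                           # approximately [0.667, 0.333]
```

Note that both particles displace even though the force acts only on the first one: the off-diagonal Hessian term couples their responses.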
## Response to an affine strain
Now consider a strain tensor $\epsilon = \tilde \epsilon \gamma$, where $\gamma$ is a scalar and will be used to explicitly take the limit of small strain for fixed $\tilde \epsilon$. Importantly, the strain tensor represents a deformation of the underlying space that the particles live in and thus is a degree of freedom that is independent of the $Nd$ particle degrees of freedom. Therefore, knowing the particle positions $R$ is not sufficient to describe the energy; we also need to know $\gamma$ to specify the correct boundary conditions:
\begin{equation}
U = U(R, \gamma).
\end{equation}
We now have a system with $Nd+1$ variables $\{R, \gamma\}$ that, like before, form a set of coupled harmonic oscillators. We can describe this using the so-called "generalized Hessian" matrix of second derivatives of the energy with respect to both $R$ and $\gamma$. Specifically, Hooke's law reads
\begin{equation}
\left( \begin{array}{ ccccc|c}
&&&&&\\
&&H^0 &&& -\Xi \\
&&&&& \\ \hline
&&-\Xi^T &&&\frac{\partial ^2U}{\partial \gamma^2}
\end{array}\right)
\left( \begin{array}{ c}
\\
u \\
\\ \hline
\gamma
\end{array}\right)
=
\left( \begin{array}{ c}
\\
0 \\
\\ \hline
\tilde \sigma
\end{array}\right),
\end{equation}
where $u = R - R_0$ is the displacement of every particle, $\Xi = -\frac{ \partial^2 U}{\partial R \partial \gamma}$, and $\tilde \sigma$ is the induced stress caused by the deformation. (If there is prestress in the system, i.e. $\sigma^0 = \frac{\partial U}{\partial \gamma} \neq 0$, the total stress is $\sigma = \sigma^0 + \tilde \sigma$.) In this equation, $\gamma$ is held fixed and the zero in the top of the right-hand-side imposes force balance after the deformation and resulting non-affine displacement of every particle. The non-affine displacement itself, $u$, and the induced stress $\sigma$, are both unknown but can be solved for. First, the non-affine response is
\begin{equation}
u = (H^0)^{-1} \Xi \; \gamma,
\end{equation}
where we note that in the limit of small $\gamma$, the force induced on every particle due to the affine deformation is $\Xi \; \gamma$. Second, the induced stress is
\begin{equation}
\tilde \sigma = \frac{\partial ^2U}{\partial \gamma^2} \gamma - \Xi^T u = \left(\frac{\partial ^2U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right) \gamma.
\end{equation}
Similarly, the change in energy is
\begin{equation}
\frac{\Delta U}{V^0} = \sigma^0 \gamma + \frac 1{2V^0} \left(\frac{\partial ^2U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right) \gamma^2,
\end{equation}
where $\sigma^0$ is the prestress in the system per unit volume. Comparing this to the above definition of the elastic modulus tensor, we see that the elastic constant associated with the deformation $\tilde \epsilon$ is
\begin{equation}
C(\tilde \epsilon) = \frac 1{V^0} \left( \frac{\partial^2 U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right).
\end{equation}
$C(\tilde \epsilon)$ is related to $C_{ijkl}$ by summing $C(\tilde \epsilon) = C_{ijkl}\tilde \epsilon_{ij} \tilde \epsilon_{kl}$. So, if $\tilde \epsilon_{ij} = \delta_{0i}\delta_{0j}$, then $C_{0000} = C(\tilde \epsilon)$.
The internal code in `jax_md.elasticity` repeats this calculation for different $\tilde \epsilon$ to back out the different independent elastic constants.
#First example
As a first example, let's consider a 3d system of 128 soft spheres. The elastic modulus tensor is only defined for systems that are at a local energy minimum, so we start by minimizing the energy.
```
N = 128
dimension = 3
box_size = quantity.box_size_at_number_density(N, 1.4, dimension)
displacement, shift = space.periodic(box_size)
energy_fn = energy.soft_sphere_pair(displacement)
key, split = random.split(key)
R_init = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64)
R, max_grad, niters = run_minimization_while(energy_fn, R_init, shift)
print('Minimized the energy in {} minimization steps and reached a final \
maximum gradient of {}'.format(niters, max_grad))
```
We can now calculate the elastic modulus tensor
```
emt_fn = jit(elasticity.athermal_moduli(energy_fn, check_convergence=True))
C, converged = emt_fn(R,box_size)
print(converged)
```
The elastic modulus tensor gives a quantitative prediction for how the energy should change if we deform the system according to a strain tensor
\begin{equation}
\frac{ \Delta U}{V^0} = \sigma^0\epsilon + \frac 12 \epsilon C \epsilon + O\left(\epsilon^3\right)
\end{equation}
To test this, we define $\epsilon = \tilde \epsilon \gamma$ for a randomly chosen strain tensor $\tilde \epsilon$ and for $\gamma \ll 1$. Ignoring terms of order $\gamma^3$ and higher, we have
\begin{equation}
\frac{ \Delta U}{V^0} - \sigma^0\epsilon = \left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2
\end{equation}
Thus, we can test our calculation of $C$ by plotting $\frac{ \Delta U}{V^0} - \sigma^0\epsilon$ as a function of $\gamma$ for our randomly chosen $\tilde \epsilon$ and comparing it to the line $\left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2$.
First, generate a random $\tilde \epsilon$ and calculate $U$ for different $\gamma$.
```
key, split = random.split(key)
#Pick a random (symmetric) strain tensor
strain_tensor = random.uniform(split, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
strain_tensor = (strain_tensor + strain_tensor.T) / 2.0
#Define a function to calculate the energy at a given strain
def get_energy_at_strain(gamma, strain_tensor, R_init, box):
R_init = space.transform(space.inverse(box),R_init)
new_box = jnp.matmul(jnp.eye(strain_tensor.shape[0]) + gamma * strain_tensor, box)
displacement, shift = space.periodic_general(new_box, fractional_coordinates=True)
energy_fn = energy.soft_sphere_pair(displacement, sigma=1.0)
R_final, _, _ = run_minimization_while(energy_fn, R_init, shift)
return energy_fn(R_final)
gammas = jnp.logspace(-7,-4,50)
Us = vmap(get_energy_at_strain, in_axes=(0,None,None,None))(gammas, strain_tensor, R, box_size * jnp.eye(dimension))
```
Plot $\frac{ \Delta U}{V^0} - \sigma^0\epsilon$ and $\left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2$ as functions of $\gamma$. While there may be disagreements for very small $\gamma$ due to numerical precision or at large $\gamma$ due to higher-order terms becoming relevant, there should be a region of quantitative agreement.
```
U_0 = energy_fn(R)
stress_0 = -quantity.stress(energy_fn, R, box_size)
V_0 = quantity.volume(dimension, box_size)
#Plot \Delta E/V - sigma*epsilon
y1 = (Us - U_0)/V_0 - gammas * jnp.einsum('ij,ji->',stress_0,strain_tensor)
plt.plot(jnp.abs(gammas), y1, lw=3, label=r'$\Delta U/V^0 - \sigma^0 \epsilon$')
#Plot 0.5 * epsilon*C*epsilon
y2 = 0.5 * jnp.einsum('ij,ijkl,kl->',strain_tensor, C, strain_tensor) * gammas**2
plt.plot(jnp.abs(gammas), y2, ls='--', lw=3, label=r'$(1/2) \epsilon C \epsilon$')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
To test the accuracy of this agreement, we first define:
\begin{equation}
T(\gamma) = \frac{ \Delta U}{V^0} - \sigma^0\epsilon - \frac 12 \epsilon C \epsilon \sim O\left(\gamma^3\right)
\end{equation}
which should be proportional to $\gamma^3$ for small $\gamma$ (note that this expected scaling should break down when the y-axis approaches machine precision). This is a prediction of scaling only, so we plot a line proportional to $\gamma^3$ to compare the slopes.
```
#Plot the difference, which should scale as gamma**3
plt.plot(jnp.abs(gammas), jnp.abs(y1-y2), label=r'$T(\gamma)$')
#Plot gamma**3 for reference
plt.plot(jnp.abs(gammas), jnp.abs(gammas**3), 'black', label=r'slope = $\gamma^3$ (for reference)')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
Save `C` for later testing.
```
C_3d = C
```
#Example with neighbor lists
As a second example, consider a much larger system that is implemented using neighbor lists.
```
N = 5000
dimension = 2
box_size = quantity.box_size_at_number_density(N, 1.3, dimension)
box = box_size * jnp.eye(dimension)
displacement, shift = space.periodic_general(box, fractional_coordinates=True)
sigma = jnp.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = jnp.where(jnp.arange(N) < N_2, 0, 1)
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box_size, species=species, sigma=sigma, dr_threshold = 0.1,
fractional_coordinates = True)
key, split = random.split(key)
R_init = random.uniform(split, (N,dimension), minval=0.0, maxval=1.0, dtype=f64)
R, max_grad, nbrs, niters = run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift)
print('Minimized the energy in {} minimization steps and reached a final \
maximum gradient of {}'.format(niters, max_grad))
```
We have to pass the neighbor list to `emt_fn`.
```
emt_fn = jit(elasticity.athermal_moduli(energy_fn, check_convergence=True))
C, converged = emt_fn(R,box,neighbor=nbrs)
print(converged)
```
We can time the calculation of the compiled function.
```
%timeit emt_fn(R,box,neighbor=nbrs)
```
Repeat the same tests as above. NOTE: this may take a few minutes.
```
key, split = random.split(key)
#Pick a random (symmetric) strain tensor
strain_tensor = random.uniform(split, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
strain_tensor = (strain_tensor + strain_tensor.T) / 2.0
def get_energy_at_strain(gamma, strain_tensor, R_init, box):
new_box = jnp.matmul(jnp.eye(strain_tensor.shape[0]) + gamma * strain_tensor, box)
displacement, shift = space.periodic_general(new_box, fractional_coordinates=True)
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box_size, species=species, sigma=sigma, dr_threshold = 0.1,
fractional_coordinates = True, capacity_multiplier = 1.5)
R_final, _, nbrs, _ = run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift)
return energy_fn(R_final, neighbor=nbrs)
gammas = jnp.logspace(-7,-3,20)
Us = jnp.array([ get_energy_at_strain(gamma, strain_tensor, R, box) for gamma in gammas])
U_0 = energy_fn(R, neighbor=nbrs)
stress_0 = -quantity.stress(energy_fn, R, box, neighbor=nbrs)
V_0 = quantity.volume(dimension, box)
#Plot \Delta E/V - sigma*epsilon
y1 = (Us - U_0)/V_0 - gammas * jnp.einsum('ij,ji->',stress_0,strain_tensor)
plt.plot(jnp.abs(gammas), y1, lw=3, label=r'$\Delta U/V^0 - \sigma^0 \epsilon$')
#Plot 0.5 * epsilon*C*epsilon
y2 = 0.5 * jnp.einsum('ij,ijkl,kl->',strain_tensor, C, strain_tensor) * gammas**2
plt.plot(jnp.abs(gammas), y2, ls='--', lw=3, label=r'$(1/2) \epsilon C \epsilon$')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
#Plot the difference, which should scale as gamma**3
plt.plot(jnp.abs(gammas), jnp.abs(y1-y2), label=r'$T(\gamma)$')
#Plot gamma**3 for reference
plt.plot(jnp.abs(gammas), jnp.abs(gammas**3), 'black', label=r'slope = $\gamma^3$ (for reference)')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
Save `C` for later testing.
```
C_2d = C
```
#Mandel notation
Mandel notation is a way to represent symmetric second-rank tensors and fourth-rank tensors with so-called "minor symmetries", i.e. $T_{ijkl} = T_{ijlk} = T_{jilk}$. The idea is to map pairs of indices so that $(i,i) \rightarrow i$ and $(i,j) \rightarrow K - i - j$ for $i\neq j$, where $K = d(d+1)/2$ is the number of independent pairs $(i,j)$ for tensors with $d$ elements along each axis. Thus, second-rank tensors become first-rank tensors, and fourth-rank tensors become second-rank tensors, according to:
\begin{align}
M_{m(i,j)} &= T_{ij} w(i,j) \\
M_{m(i,j),m(k,l)} &= T_{ijkl} w(i,j) w(k,l).
\end{align}
Here, $m(i,j)$ is the mapping function described above, and $w(i,j)$ is a weight that preserves summation rules and is given by
\begin{align}
w(i,j) = \delta_{ij} + \sqrt{2}\, (1-\delta_{ij}).
\end{align}
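For concreteness, here is a minimal NumPy sketch of the mapping in $d=2$ (a hand-rolled stand-in for `elasticity.tensor_to_mandel`, assuming the weight convention above with $\sqrt 2$ on off-diagonal components):

```python
import numpy as np

def tensor_to_mandel_2d(T):
    # d = 2: (0,0) -> 0, (1,1) -> 1, (0,1) -> K - 0 - 1 = 2 with K = 3;
    # off-diagonal components carry a weight of sqrt(2).
    return np.array([T[0, 0], T[1, 1], np.sqrt(2.0) * T[0, 1]])

T = np.array([[1.0, 3.0],
              [3.0, 2.0]])
m = tensor_to_mandel_2d(T)
# The weights preserve the full double contraction T_ij T_ij:
assert np.isclose(np.einsum('ij,ij->', T, T), m @ m)
```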
We can convert strain tensors, stress tensors, and elastic modulus tensors to and from Mandel notation using the functions `elasticity.tensor_to_mandel` and `elasticity.mandel_to_tensor`.
First, let's copy one of the previously calculated elastic modulus tensors and define a random strain tensor.
```
#This can be 2 or 3 depending on which of the above solutions has been calculated
dimension = 3
if dimension == 2:
C = C_2d
else:
C = C_3d
key, split = random.split(key)
e = random.uniform(key, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
e = (e + e.T)/2.
```
Convert `e` and `C` to Mandel notation:
```
e_m = jit(elasticity.tensor_to_mandel)(e)
C_m = jit(elasticity.tensor_to_mandel)(C)
print(e_m)
print(C_m)
```
Using "bar" notation to represent Mandel vectors and matrices, we have
\begin{equation}
\frac{ \Delta U}{V^0} = \bar \sigma_i^0 \bar\epsilon_i + \frac 12 \bar \epsilon_i \bar C_{ij} \bar\epsilon_j + O\left(\bar \epsilon^3\right)
\end{equation}
We can explicitly test that the sums are equivalent to the sums involving the original tensors
```
sum_m = jnp.einsum('i,ij,j->',e_m, C_m, e_m)
sum_t = jnp.einsum('ij,ijkl,kl->',e, C, e)
print('Relative error is {}, which should be very close to 0'.format((sum_t-sum_m)/sum_t))
```
Finally, we can convert back to the full tensors and check that they are unchanged.
```
C_new = jit(elasticity.mandel_to_tensor)(C_m)
print('Max error in C is {}, which should be very close to 0.'.format(jnp.max(jnp.abs(C-C_new))))
e_new = jit(elasticity.mandel_to_tensor)(e_m)
print('Max error in e is {}, which should be very close to 0.'.format(jnp.max(jnp.abs(e-e_new))))
```
# Isotropic elastic constants
The calculation of the elastic modulus tensor does not make any assumptions about the underlying symmetries in the material. However, for isotropic systems, only two constants are needed to completely describe the elastic behavior. These are often taken to be the bulk modulus, $B$, and the shear modulus, $G$, or the Young's modulus, $E$, and the Poisson's ratio, $\nu$. The function `elasticity.extract_isotropic_moduli` extracts these values, as well as the longitudinal modulus, $M$, from an elastic modulus tensor.
Importantly, since there is no guarantee that `C` is calculated from a truly isotropic system, these are "orientation-averaged" values. For example, there are many directions in which you can shear a system, and the shear modulus that is returned represents an average over all these orientations. This can be an effective way to average over small fluctuations in an "almost isotropic" system, but the values lose their typical meaning when the system is highly anisotropic.
```
elasticity.extract_isotropic_moduli(C)
```
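For reference, in three dimensions the Young's modulus and Poisson's ratio follow from $B$ and $G$ through the standard isotropic relations; a small sketch:

```python
def young_poisson_from_BG(B, G):
    # Standard 3d isotropic relations:
    # E = 9BG / (3B + G),  nu = (3B - 2G) / (2(3B + G))
    E = 9.0 * B * G / (3.0 * B + G)
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))
    return E, nu

print(young_poisson_from_BG(1.0, 1.0))  # (2.25, 0.125)
```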
# Gradients
The calculation of the elastic modulus tensor is fully differentiable:
```
def setup(N,dimension,key):
box_size = quantity.box_size_at_number_density(N, 1.4, dimension)
box = box_size * jnp.eye(dimension)
displacement, shift = space.periodic_general(box, fractional_coordinates=True)
R_init = random.uniform(key, (N,dimension), minval=0.0, maxval=1.0, dtype=f64)
def run(sigma):
energy_fn = energy.soft_sphere_pair(displacement, sigma=sigma)
R, max_grad = run_minimization_scan(energy_fn, R_init, shift, num_steps=1000)
emt_fn = jit(elasticity.athermal_moduli(energy_fn))
C = emt_fn(R,box)
return elasticity.extract_isotropic_moduli(C)['G']
return run
key, split = random.split(key)
N = 50
dimension = 2
run = setup(N, dimension, split)
sigma = jnp.linspace(1.0,1.4,N)
print(run(sigma))
print(grad(run)(sigma))
```
## This assignment
In this assignment, you'll learn (or review):
* How to set up Jupyter on your own computer.
* Python basics, like defining functions.
* How to use the `numpy` library to compute with arrays of numbers.
# 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to
start.
```
2 + 2
# This is a comment.
# In Python, the ** operator performs exponentiation.
import math
math.e**(-2)
print("Hello" + ",", "world!")
"Hello, cell output!"
def add2(x):
"""This docstring explains what this function does: it adds 2 to a number."""
return x + 2
def makeAdder(amount):
"""Make a function that adds the given amount to a number."""
def addAmount(x):
return x + amount
return addAmount
add3 = makeAdder(3)
add3(4)
# add4 is very similar to add2, but it's been created using a lambda expression.
add4 = lambda x: x + 4
add4(5)
sameAsMakeAdder = lambda amount: lambda x: x + amount
add5 = sameAsMakeAdder(5)
add5(6)
def fib(n):
if n <= 1:
return 1
# Functions can call themselves recursively.
return fib(n-1) + fib(n-2)
fib(4)
# A for loop repeats a block of code once for each
# element in a given collection.
for i in range(5):
if i % 2 == 0:
print(2**i)
else:
print("Odd power of 2")
# A list comprehension is a convenient way to apply a function
# to each element in a given collection.
# The String method join appends together all its arguments
# separated by the given string. So we append each element produced
# by the list comprehension, each separated by a newline ("\n").
print("\n".join([str(2**i) if i % 2 == 0 else "Odd power of 2" for i in range(5)]))
```
#### Question 1
##### Question 1a
Write a function nums_reversed that takes in an integer `n` and returns a string
containing the numbers 1 through `n` including `n` in reverse order, separated
by spaces. For example:
>>> nums_reversed(5)
'5 4 3 2 1'
***Note:*** The ellipsis (`...`) indicates something you should fill in. It *doesn't* necessarily imply you should replace it with only one line of code.
```
def nums_reversed(n):
    # Build the numbers n, n-1, ..., 1 and join them with spaces.
    return ' '.join(str(i) for i in range(n, 0, -1))
nums_reversed(5)
_ = ok.grade('q01a')
_ = ok.backup()
```
##### Question 1b
Write a function `string_splosion` that takes in a non-empty string like
`"Code"` and returns a long string containing every prefix of the input.
For example:
>>> string_splosion('Code')
'CCoCodCode'
>>> string_splosion('data!')
'ddadatdatadata!'
>>> string_splosion('hi')
'hhi'
```
def string_splosion(string):
ans = ''
for i in range(len(string)):
ans = ans + string[:(i+1)]
return ans
string_splosion('data!')
_ = ok.grade('q01b')
_ = ok.backup()
```
##### Question 1c
Write a function `double100` that takes in a list of integers
and returns `True` only if the list has two `100`s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
```
def double100(nums):
    # Scan adjacent pairs; this safely handles lists with no 100
    # and lists where 100 is the last element.
    for i in range(len(nums) - 1):
        if nums[i] == 100 and nums[i + 1] == 100:
            return True
    return False
double100([2, 3, 100, 100, 5])
_ = ok.grade('q01c')
_ = ok.backup()
```
##### Question 1d
Write a function `median` that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
```
def median(number_list):
    # Sort first: the median is defined on the ordered values.
    ordered = sorted(number_list)
    half = len(ordered) // 2
    if len(ordered) % 2 == 0:
        return (ordered[half - 1] + ordered[half]) / 2
    else:
        return ordered[half]
median([ 40, 30, 10, 20 ])
_ = ok.grade('q01d')
_ = ok.backup()
```
# 3. `NumPy`
The `NumPy` library lets us do fast, simple computing with numbers in Python.
## 3.1. Arrays
The basic `NumPy` data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays:
```
import numpy as np
array1 = np.array([2, 3, 4, 5])
array2 = np.arange(4)
array1, array2
```
Math operations on arrays happen *element-wise*. Here's what we mean:
```
array1 * 2
array1 * array2
array1 ** array2
```
This is not only very convenient (fewer `for` loops!) but also fast. `NumPy` is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
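As a rough illustration of that speed difference (exact timings will vary by machine), compare summing doubled values with a pure-Python loop versus a single vectorized `NumPy` operation:

```python
import time
import numpy as np

n = 1_000_000
data = list(range(n))
arr = np.arange(n)

start = time.perf_counter()
total_loop = sum(x * 2 for x in data)   # pure-Python loop over a list
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_np = int((arr * 2).sum())         # one vectorized NumPy operation
np_time = time.perf_counter() - start

print(total_loop == total_np, loop_time, np_time)
```

Both compute the same sum; the `NumPy` version typically finishes far faster.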
**Jupyter pro-tip**: Pull up the docs for any function in Jupyter by running a cell with
the function name and a `?` at the end:
```
np.arange?
```
**Another Jupyter pro-tip**: Pull up the docs for any function in Jupyter by typing the function
name, then `<Shift>-<Tab>` on your keyboard. Super convenient when you forget the order
of the arguments to a function. You can press `<Tab>` multiple times to expand the docs.
Try it on the function below:
```
np.linspace
```
#### Question 2
Using the `np.linspace` function, create an array called `xs` that contains
100 evenly spaced points between `0` and `2 * np.pi`. Then, create an array called `ys` that
contains the value of $ \sin{x} $ at each of those 100 points.
*Hint:* Use the `np.sin` function. You should be able to define each variable with one line of code.
```
xs = np.linspace(0, 2 * np.pi, 100)
ys = np.sin(xs)
_ = ok.grade('q02')
_ = ok.backup()
```
The `plt.plot` function from another library called `matplotlib` lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question:
```
import matplotlib.pyplot as plt
plt.plot(xs, ys);
```
This is a useful recipe for plotting any function:
1. Use `linspace` or `arange` to make a range of x-values.
2. Apply the function to each point to produce y-values.
3. Plot the points.
You might remember from calculus that the derivative of the `sin` function is the `cos` function. That means that the slope of the curve you plotted above at any point `xs[i]` is given by `cos(xs[i])`. You can try verifying this by plotting `cos` in the next cell.
```
yc = np.cos(xs)
plt.plot(xs, yc);
```
Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called *numerical differentiation*.
Consider the `i`th point `(xs[i], ys[i])`. The slope of `sin` at `xs[i]` is roughly the slope of the line connecting `(xs[i], ys[i])` to the nearby point `(xs[i+1], ys[i+1])`. That slope is:
(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
If the difference between `xs[i+1]` and `xs[i]` were infinitesimal, we'd have exactly the derivative. In numerical differentiation we take advantage of the fact that it's often good enough to use "really small" differences instead.
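For example, the slope formula above can be written directly with array slicing; here is a toy check on $y = x^2$ (the sample points are chosen arbitrarily):

```python
import numpy as np

xs_demo = np.array([0.0, 0.1, 0.2])
ys_demo = xs_demo ** 2
# Slope of the line connecting each point to the next one.
slopes_demo = (ys_demo[1:] - ys_demo[:-1]) / (xs_demo[1:] - xs_demo[:-1])
print(slopes_demo)
```

Each slope approximates the true derivative $2x$ somewhere between the two points it connects.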
#### Question 3
Define a function called `derivative` that takes in an array of x-values and their
corresponding y-values and computes the slope of the line connecting each point to the next point.
>>> derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
np.array([2., 2.])
>>> derivative(np.arange(5), np.arange(5) ** 2)
np.array([1., 3., 5., 7.])
Notice that the output array has one less element than the inputs since we can't
find the slope for the last point.
It's possible to do this in one short line using [slicing](http://pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python/), but feel free to use whatever method you know.
**Then**, use your `derivative` function to compute the slopes for each point in `xs`, `ys`.
Store the slopes in an array called `slopes`.
```
def derivative(xvals, yvals):
y_dim = np.diff(yvals)
x_dim = np.diff(xvals)
slopes = y_dim / x_dim
return slopes
slopes = derivative(xs, ys)
derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
_ = ok.grade('q03')
_ = ok.backup()
```
#### Question 4
Plot the slopes you computed. Then plot `cos` on top of your plot, calling `plt.plot` again in the same cell. Did numerical differentiation work?
*Note:* Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error.
```
plt.plot(xs[:-1], slopes);
plt.plot(xs, yc);
```
In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
```
plt.plot(xs[:-1], slopes, label="Numerical derivative")
plt.plot(xs[:-1], np.cos(xs[:-1]), label="True derivative")
# You can just call plt.legend(), but the legend will cover up
# some of the graph. Use bbox_to_anchor=(x,y) to set the x-
# and y-coordinates of the center-left point of the legend,
# where, for example, (0, 0) is the bottom-left of the graph
# and (1, .5) is all the way to the right and halfway up.
plt.legend(bbox_to_anchor=(1, .5), loc="center left");
```
## 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with *matrices* of numbers.
```
# The zeros function creates an array with the given shape.
# For a 2-dimensional array like this one, the first
# coordinate says how far the array goes *down*, and the
# second says how far it goes *right*.
array3 = np.zeros((4, 5))
array3
# The shape attribute returns the dimensions of the array.
array3.shape
# You can think of array3 as an array containing 4 arrays, each
# containing 5 zeros. Accordingly, we can set or get the third
# element of the second array in array 3 using standard Python
# array indexing syntax twice:
array3[1][2] = 7
array3
# This comes up so often that there is special syntax provided
# for it. The comma syntax is equivalent to using multiple
# brackets:
array3[1, 2] = 8
array3
```
Arrays allow you to assign to multiple places at once. The special character `:` means "everything."
```
array4 = np.zeros((3, 5))
array4[:, 2] = 5
array4
```
In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
```
array5 = np.zeros((3, 5))
rows = np.array([1, 0, 2])
cols = np.array([3, 1, 4])
# Indices (1,3), (0,1), and (2,4) will be set.
array5[rows, cols] = 3
array5
```
#### Question 5
Create a 50x50 array called `twice_identity` that contains all zeros except on the
diagonal, where it contains the value `2`.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a `for` loop! (Don't use `np.eye` either, though you might find that function useful later.)
```
twice_identity = np.zeros((50, 50))
twice_identity[np.arange(50), np.arange(50)] = 2
twice_identity
_ = ok.grade('q05')
_ = ok.backup()
```
# 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, `1.txt` through `10.txt`. Peek at the files in a text
editor of your choice.
#### Question 6
How do you think the contents of the file are structured? Take your best guess.
**Much like the MNIST dataset, these files are probably structured as pixels where the data represents a greyscale value or the level to which each pixel is dark or has color.**
#### Question 7
Create a function called `read_file_lines` that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if `1.txt` contains:
```
1 2 3
3 4 5
7 8 9
```
the return value should be: `['1 2 3\n', '3 4 5\n', '7 8 9\n']`.
**Then**, use the `read_file_lines` function on the file `1.txt`, reading the contents
into a variable called `file1`.
*Hint:* Check out [this Stack Overflow page](http://stackoverflow.com/questions/3277503/how-to-read-a-file-line-by-line-into-a-list-with-python) on reading lines of files.
```
def read_file_lines(filename):
    # readlines keeps each trailing newline, matching the expected output above.
    with open(filename) as f:
        return f.readlines()
file1 = read_file_lines('C:\\Users\\davei\\Documents\\EMSEDataAnalytics\\EMSE6992_Assignments\\data\\HW1\\1.txt')
_ = ok.grade('q07')
_ = ok.backup()
```
Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a *channel*
(disregarding line 1). So there are 3 channels: red, green, and blue.
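As a quick sanity check on that format, a single pixel line can be split into its three channel values like this (the line shown is hypothetical):

```python
# One hypothetical pixel line from a file: red, green, blue values.
line = "10 10 10\n"
pixel = [int(value) for value in line.split()]
print(pixel)
```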
#### Question 8
Define a function called `lines_to_image` that takes in the contents of a
file as a list (such as `file1`). It should return an array containing integers of
shape `(n_rows, n_cols, 3)`. That is, it contains the pixel triplets organized in the
correct number of rows and columns.
For example, if the file originally contained:
```
4 2
0 0 0
10 10 10
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
```
The resulting array should be a *3-dimensional* array that looks like this:
```
array([
[ [0,0,0], [10,10,10] ],
[ [2,2,2], [3,3,3] ],
[ [4,4,4], [5,5,5] ],
[ [6,6,6], [7,7,7] ]
])
```
The string method `split` and the function `np.reshape` might be useful.
**Important note:** You must call `.astype(np.uint8)` on the final array before
returning so that `numpy` will recognize the array represents an image.
Once you've defined the function, set `image1` to the result of calling
`lines_to_image` on `file1`.
```
import numpy as np
def lines_to_image(file_lines):
    # The first line holds the image dimensions: height (rows), then width (cols).
    n_rows, n_cols = (int(v) for v in file_lines[0].split())
    # Each remaining line is one pixel: three integers for R, G, B.
    pixels = [[int(v) for v in line.split()] for line in file_lines[1:]]
    array = np.array(pixels).reshape((n_rows, n_cols, 3))
    return array.astype(np.uint8)
image1 = lines_to_image(file1)
image1.shape
_ = ok.grade('q08')
_ = ok.backup()
```
#### Question 9
Images in `numpy` are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided `show_images` function to display `image1`. You may call it
like `show_images(image1)`. If you later have multiple images to display, you
can call `show_images([image1, image2])` to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
```
def show_images(images, ncols=2, figsize=(10, 7), **kwargs):
"""
Shows one or more color images.
images: Image or list of images. Each image is a 3-dimensional
array, where dimension 1 indexes height and dimension 2
        the width. Dimension 3 indexes the 3 color values red,
        green, and blue (so it always has length 3).
"""
def show_image(image, axis=plt):
plt.imshow(image, **kwargs)
if not (isinstance(images, list) or isinstance(images, tuple)):
images = [images]
images = [image.astype(np.uint8) for image in images]
nrows = math.ceil(len(images) / ncols)
ncols = min(len(images), ncols)
plt.figure(figsize=figsize)
for i, image in enumerate(images):
axis = plt.subplot2grid(
(nrows, ncols),
(i // ncols, i % ncols),
)
        axis.tick_params(bottom=False, left=False, top=False, right=False,
                         labelleft=False, labelbottom=False)
axis.grid(False)
show_image(image, axis)
show_images(image1)
```
#### Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In `NumPy`, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function `expand_image_range` that takes in an image. It returns a
**new copy** of the image with the following transformation:
| old value | new value |
|-----------|-----------|
| 0         | 12        |
| 1         | 37        |
| 2         | 65        |
| 3         | 89        |
| 4         | 114       |
| 5         | 137       |
| 6         | 162       |
| 7         | 187       |
| 8         | 214       |
| 9         | 240       |
| 10        | 250       |
This expands the color range of the image. For example, a pixel that previously
had the value `[5 5 5]` (almost-black) will now have the value `[137 137 137]`
(gray).
Set `expanded1` to the expanded `image1`, then display it with `show_images`.
[This page](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing)
from the numpy docs has some useful information that will allow you
to use indexing instead of `for` loops.
However, the slickest implementation uses one very short line of code.
*Hint:* If you index an array with another array or list as in question 5, your
array (or list) of indices can contain repeats, as in `array1[[0, 1, 0]]`.
Investigate what happens in that case.
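To see what happens, try a small lookup table: each entry of the index array is replaced by the corresponding element of the indexed array, and repeated (even multidimensional) indices are allowed.

```python
import numpy as np

lookup = np.array([10, 20, 30])
idx = np.array([[0, 1, 0],
                [2, 2, 1]])
# Every entry of idx is replaced by lookup[entry]; the output
# has the same shape as idx.
result = lookup[idx]
print(result)
```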
```
# This array is provided for your convenience.
transformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])
def expand_image_range(image):
...
expanded1 = ...
show_images(expanded1)
_ = ok.grade('q10')
_ = ok.backup()
```
#### Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called `reveal_file` that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set `expanded_images` to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use `show_images` to display the `expanded_images`.
```
def reveal_file(filename):
...
filenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',
'6.txt', '7.txt', '8.txt', '9.txt', '10.txt']
expanded_images = ...
show_images(expanded_images, ncols=5)
```
Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
#### Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a `[2 4 0]` pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function `proportion_by_channel`. It takes in an image. It assigns
each pixel to its greatest-intensity channel: red, green, or blue. Then
the function returns an array of length three containing the proportion of
pixels categorized as red, the proportion categorized as green, and the
proportion categorized as blue (respectively). (Again, don't count pixels
that are tied between 2 or 3 colors as any category, but do count them
in the denominator when you're computing proportions.)
For example:
```
>>> test_im = np.array([
[ [5, 2, 2], [2, 5, 10] ]
])
>>> proportion_by_channel(test_im)
array([ 0.5, 0, 0.5 ])
# If tied, count neither as the highest
>>> test_im = np.array([
[ [5, 2, 5], [2, 50, 50] ]
])
>>> proportion_by_channel(test_im)
array([ 0, 0, 0 ])
```
Then, set `image_proportions` to the result of `proportion_by_channel` called
on each image in `expanded_images` as a 2d array.
*Hint:* It's fine to use a `for` loop, but for a difficult challenge, try
avoiding it. (As a side benefit, your code will be much faster.) Our solution
uses the `NumPy` functions `np.reshape`, `np.sort`, `np.argmax`, and `np.bincount`.
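If you try the vectorized route, the hinted functions behave like this on a toy set of pixels (this only sketches the building blocks, not a complete solution):

```python
import numpy as np

# Three toy pixels: red-dominant, green-dominant, and one with a tie.
pixels = np.array([[200, 10, 10],
                   [5, 90, 30],
                   [40, 40, 7]])
winners = np.argmax(pixels, axis=1)                # index of the largest channel
ordered = np.sort(pixels, axis=1)
tied = ordered[:, -1] == ordered[:, -2]            # max equals the runner-up
counts = np.bincount(winners[~tied], minlength=3)  # winners per channel, ties excluded
proportions = counts / len(pixels)                 # ties still count in the denominator
print(proportions)
```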
```
def proportion_by_channel(image):
...
image_proportions = ...
image_proportions
_ = ok.grade('q12')
_ = ok.backup()
```
Let's plot the proportions you computed above on a bar chart:
```
# You'll learn about Pandas and DataFrames soon.
import pandas as pd
pd.DataFrame({
'red': image_proportions[:, 0],
'green': image_proportions[:, 1],
'blue': image_proportions[:, 2]
}, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\
.iloc[::-1]\
.plot.barh();
```
#### Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function `summer_or_winter`. It takes in an image and
returns `True` if the image is a summer image and `False` if the image is a
winter image.
**Do not hard-code the function to the 10 images you currently have (eg.
`if image1, return False`).** We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
```
def summer_or_winter(image):
...
_ = ok.grade('q13')
_ = ok.backup()
```
Congrats! You've created your very first classifier for this class.
#### Question 14
1. How do you think your classification function will perform
in general?
2. Why do you think it will perform that way?
3. What do you think would most likely give you false positives?
4. False negatives?
*Write your answer here, replacing this text.*
**Final note:** While our approach here is simplistic, skin color segmentation
-- figuring out which parts of the image belong to a human body -- is a
key step in many algorithms such as face detection.
# Optional: Our code to encode images
Here are the functions we used to generate the text files for this assignment.
Feel free to send not-so-secret messages to your friends if you'd like.
```
import skimage as sk
import skimage.io as skio
def read_image(filename):
'''Reads in an image from a filename'''
return skio.imread(filename)
def compress_image(im):
'''Takes an image as an array and compresses it to look black.'''
res = im / 25
return res.astype(np.uint8)
def to_text_file(im, filename):
'''
Takes in an image array and a filename for the resulting text file.
Creates the encoded text file for later decoding.
'''
h, w, c = im.shape
to_rgb = ' '.join
to_row = '\n'.join
to_lines = '\n'.join
rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]
lines = to_lines([to_row(row) for row in rgb])
with open(filename, 'w') as f:
f.write('{} {}\n'.format(h, w))
f.write(lines)
f.write('\n')
summers = skio.imread_collection('orig/summer/*.jpg')
winters = skio.imread_collection('orig/winter/*.jpg')
len(summers)
sum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])
win_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])
for im, n in zip(summers, sum_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
for im, n in zip(winters, win_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
```
# 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-check your work.
```
_ = ok.grade_all()
```
Now, run this code in your terminal to make a
[git commit](https://www.atlassian.com/git/tutorials/saving-changes/git-commit)
that saves a snapshot of your changes in `git`. The last line of the cell
runs [git push](http://stackoverflow.com/questions/2745076/what-are-the-differences-between-git-commit-and-git-push), which will send your work to your personal Github repo.
```
# Tell git to commit all the changes so far
git add -A
# Tell git to make the commit
git commit -m "hw1 finished"
# Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/.
```
# Now, we'll submit to okpy
_ = ok.submit()
```
Congrats! You are done with homework 1.
|
github_jupyter
|
# COVID-19 Analysis across countries and weeks
In this study, the focus is on country-level cases. The analysis examines case growth, case proportions, and weekly growth.
# This kernel will be updated frequently to keep it up to date
# Library and dataset imports
```
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
```
The data used for this study is taken from the Johns Hopkins CSSE data repository. The following files were used for the analysis:
1. time_series_covid19_confirmed_global.csv-> https://rb.gy/uktxf3
2. time_series_covid19_deaths_global.csv -> https://rb.gy/qnjgsj
3. time_series_covid19_recovered_global.csv-> https://rb.gy/dxfjfl
```
#Importing the datasets
url='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
confirmed=pd.read_csv(url,error_bad_lines=False)
death=pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv',
error_bad_lines=False)
recovered=pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv',
error_bad_lines=False)
```
# Case proportions of countries
The function `country_with_cases` returns a dataframe with just the country name and the number of cases. It also sums each country's province-level rows into a single row, so the data is expressed purely in terms of `Country/Region`. This is done mainly because the `Province/State` column contains many null values.
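As a design note, the province-summing step can also be expressed more directly with pandas' `groupby`; here is a minimal sketch on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({'Province/State': ['X', 'Y', None],
                   'Country/Region': ['A', 'A', 'B'],
                   'Cases': [1, 2, 5]})
# Sum all rows (provinces) belonging to the same country.
totals = df.groupby('Country/Region', as_index=False)['Cases'].sum()
print(totals)
```

The hand-rolled version below gives the same result while keeping countries without provinces on their original rows.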
```
def country_with_cases(dataset):
is_province=dataset.loc[dataset['Province/State'].isna()==False]
is_province=is_province['Country/Region'].unique()
df=dataset.copy()
Country=df['Country/Region'].values
temp=df.drop(columns=['Province/State','Country/Region','Lat','Long'])
values=temp.values
cases=[]
for i in range(0,len(values)):
cases.append(values[i][values.shape[1]-1])
new_df=pd.concat([pd.DataFrame(Country),pd.DataFrame(cases)],axis=1)
new_df.columns=['Country','Cases']
index=[]
is_province_sums=[]
for i in is_province:
temp=new_df.loc[new_df['Country']==i]
index.append(temp.index)
s=np.sum(temp['Cases'])
is_province_sums.append(s)
for i in index:
new_df.drop(i,axis=0,inplace=True)
countries_with_province=pd.concat([pd.DataFrame(is_province),pd.DataFrame(is_province_sums)],axis=1)
countries_with_province.columns=['Country','Cases']
All_country_cases=pd.concat([new_df,countries_with_province],axis=0)
All_country_cases.reset_index(inplace=True)
return All_country_cases
confirmed_tree=country_with_cases(confirmed)
def tree_map(df,color_scale,title):
#The df is the one which only has country and cases
fig = px.treemap(df,path=['Country'], values='Cases',color='Cases',
color_continuous_scale=color_scale,
title=title)
fig.show()
tree_map(confirmed_tree,'amp','Confirmed cases across countries')
recovered_tree=country_with_cases(recovered)
tree_map(recovered_tree,'Greens','Recovery across countries')
death_tree=country_with_cases(death)
tree_map(death_tree,'Reds','Deaths across countries')
```
# Visualising the Country growth
```
def top_10(df):
    # Works on a dataframe with 'Country' and 'Cases' columns,
    # such as the output of country_with_cases.
    df_descending = df.sort_values(by='Cases', ascending=False)
    df_descending = df_descending.reset_index()
    top = df_descending.iloc[:10]
    return top['Country'].values
```
The `rate` function changes the overall structure of the dataframe. In the dataset the dates are columns, which is not convenient for visualising the data. This function melts all the dates into a single column, which makes plotting much easier.
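On a tiny frame, the reshaping that `rate` relies on (`pandas.melt`) looks like this (the dates shown are hypothetical):

```python
import pandas as pd

wide = pd.DataFrame({'Country/Region': ['A', 'B'],
                     '1/22/20': [1, 3],
                     '1/23/20': [2, 5]})
# Each date column becomes a row, labelled by a new 'Date' column.
long = pd.melt(wide, id_vars=['Country/Region'],
               var_name='Date', value_name='Value')
print(long)
```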
```
def rate(df):
    is_province=df.loc[df['Province/State'].isna()==False]
is_province=is_province['Country/Region'].unique()
copy=df.copy()
final=[]
index=[]
for i in is_province:
temp=copy.loc[copy['Country/Region']==i]
index=copy.loc[copy['Country/Region']==i].index
temp=temp.sum(axis=0)
final.append(temp)
copy.drop(index,inplace=True)
new_df=pd.DataFrame(final)
new_df['Country/Region']=is_province
total=pd.concat([copy,new_df],axis=0)
total.reset_index(inplace=True)
total.drop(columns=['Province/State'],inplace=True)
t=pd.melt(total,id_vars=['Country/Region','index','Lat','Long'],var_name="Date", value_name="Value")
return t
c=rate(confirmed)
fig = px.line(c, x="Date", y="Value", color="Country/Region",
title='Confirmed cases across countries')
fig.update_layout(showlegend=False)
fig.show()
r=rate(recovered)
fig = px.line(r, x="Date", y="Value", color="Country/Region",
              title='Recovered cases across countries')
fig.update_layout(showlegend=False)
fig.show()
d=rate(death)
fig = px.line(d, x="Date", y="Value", color="Country/Region",
title='Deaths across countries')
fig.update_layout(showlegend=False)
fig.show()
```
# Analysing the top 10 countries
We will analyse the 10 countries with the highest confirmed case counts.
```
top_ten=top_10(confirmed_tree)
def stacked_line_subplots(confirmed,death,recovered,countries):
subplot_title=[]
for i in countries:
subplot_title.append('Cases in {}'.format(i))
subplot_title=tuple(subplot_title)
fig = make_subplots(rows=len(countries), cols=1,subplot_titles=subplot_title)
#countries=['India','US','Yemen','Angola']
dates=confirmed.columns
    dates=np.delete(dates,[0,1,2,3])
dfs=[confirmed,death,recovered]
row=1
for i in range(len(countries)):
value=[]
for j in dfs:
temp=j.loc[j['Country/Region']==countries[i]].values
temp=np.delete(temp,[0,1,2,3])
value.append(temp)
if(i==0):
fig.append_trace(go.Scatter(x=dates,y=value[1],mode='lines',name='Death',
line_color='red',stackgroup='covid',legendgroup="group1"),row=row,col=1)
fig.append_trace(go.Scatter(x=dates,y=value[2],mode='lines',name='Recovered',
line_color='green',stackgroup='covid',legendgroup="group2"),row=row,col=1)
fig.append_trace(go.Scatter(x=dates,y=value[0],mode='lines',name='Confirmed',
line_color='blue',stackgroup='covid',legendgroup="group3"),row=row,col=1)
else:
fig.append_trace(go.Scatter(x=dates,y=value[1],mode='lines',name='Death',
line_color='red',stackgroup='covid', showlegend=False,
legendgroup="group1"),row=row,col=1)
fig.append_trace(go.Scatter(x=dates,y=value[2],mode='lines',name='Recovered',
line_color='green',stackgroup='covid', showlegend=False,
legendgroup="group2"),row=row,col=1)
fig.append_trace(go.Scatter(x=dates,y=value[0],mode='lines',name='Confirmed',
line_color='blue',stackgroup='covid', showlegend=False,
legendgroup="group3"),row=row,col=1)
row+=1
fig.update_layout(height=2000, width=800,
title_text="Cases across the top 10 countries")
fig.show()
```
This stacked line graph shows, in effect, the proportions of case types within a country: the blue area indicates active cases (when both the red and green areas are present), green indicates recoveries, and red indicates deaths.
```
stacked_line_subplots(confirmed,death,recovered,top_ten)
```
# Visuals on the World Map
```
def world_map(df,title,color):
transformed=pd.melt(df,id_vars=['Province/State','Country/Region','Lat','Long'],
var_name="Date", value_name="Value")
fig = px.scatter_geo(transformed, lat="Lat",lon="Long", color="Value",
hover_name="Country/Region", size=transformed["Value"],
animation_frame="Date",
projection="natural earth",title=title,
color_continuous_scale=color)
fig.show()
world_map(confirmed,'Confirmed cases with time series','Jet')
world_map(recovered,'Recovered cases with time series','YlGn')
world_map(death,'Death Cases with time series','Burg')
```
# Weekly analysis on case growth
The function `weekly_trend()` mainly creates a new column named `Weeks`, which indicates the week number of a particular day. For this, the dates are converted to days since the first reported day. The function returns a dataframe for one particular week.
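The day-to-week bucketing works like this (the later date is chosen purely for illustration; the JHU time series starts on 1/22/20):

```python
from datetime import date

# Days elapsed since the first reported day, then integer
# division by 7 gives the week number.
days = (date(2020, 3, 1) - date(2020, 1, 22)).days
week = days // 7
print(days, week)
```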
```
from datetime import date
def weekly_trend(df,number):
c=rate(df)
date_values=c['Date'].values
first_date=date_values[0].split('/')
first=list(map(int, first_date))
for i in range(len(date_values)):
random=date_values[i].split('/')
random=list(map(int, random))
delta=date(2020,random[0],random[1])-date(2020,first[0],first[1])
date_values[i]=delta.days
c['Date']=date_values
c['Weeks']=c['Date']//7
confirmed_week=c.loc[c['Weeks']==number]
confirmed_week=confirmed_week.sort_values(by='Value',ascending=False)
return confirmed_week
```
The function `top_3_trends` returns the three countries with the highest case growth during the given week.
```
def top_3_trends(week,week_number):
    last_day=week.loc[week['Date']==((week_number*7)+6)].copy()
first_day=week.loc[week['Date']==(week_number*7)]
first_day_values=first_day['Value'].values
last_day['Value']=last_day['Value']-first_day_values
last_day=last_day.sort_values(by='Value',ascending=False)
countries=last_day['Country/Region'].unique()[:3]
return countries
```
This function plots the top 3 countries in a subplot
```
def top_3(week,week_number,subject):
top_3=top_3_trends(week,week_number)
week=week.sort_values(by='Date',ascending=True)
subplot_title=[]
for i in top_3:
subplot_title.append( '{}'.format(i))
subplot_title=tuple(subplot_title)
fig = make_subplots(rows=1, cols=3,subplot_titles=subplot_title)
col=1
for i in range(0,3):
temp=week.loc[week['Country/Region']==top_3[i]]
temp['Value']-=temp.iloc[0,5]
fig.append_trace(go.Scatter(x=temp['Date'],y=temp['Value'],mode='lines',showlegend=False
),col=col,row=1)
col+=1
fig.update_layout(title_text="Week {} trends of {} in top 3 countries".format(week_number,subject))
fig.show()
for i in range(1,22):
week=weekly_trend(confirmed,i)
top_3(week,i,'Confirmed Cases')
# The same can be done for recovered and death cases, but the kernel becomes too long and redundant
```
# 🔪 JAX - The Sharp Bits 🔪
[](https://colab.research.google.com/github/google/jax/blob/master/docs/notebooks/Common_Gotchas_in_JAX.ipynb)
*levskaya@ mattjj@*
When walking about the countryside of [Italy](https://iaml.it/blog/jax-intro), the people will not hesitate to tell you that __JAX__ has _"una anima di pura programmazione funzionale"_ ("a soul of pure functional programming").
__JAX__ is a language for __expressing__ and __composing__ __transformations__ of numerical programs. __JAX__ is also able to __compile__ numerical programs for CPU or accelerators (GPU/TPU).
JAX works great for many numerical and scientific programs, but __only if they are written with certain constraints__ that we describe below.
```
import numpy as np
from jax import grad, jit
from jax import lax
from jax import random
import jax
import jax.numpy as jnp
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import rcParams
rcParams['image.interpolation'] = 'nearest'
rcParams['image.cmap'] = 'viridis'
rcParams['axes.grid'] = False
```
## 🔪 Pure functions
JAX transformation and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, all the results are output through the function results. A pure function will always return the same result if invoked with the same inputs.
Here are some examples of functions that are not functionally pure for which JAX behaves differently than the Python interpreter. Note that these behaviors are not guaranteed by the JAX system; the proper way to use JAX is to use it only on functionally pure Python functions.
```
def impure_print_side_effect(x):
print("Executing function") # This is a side-effect
return x
# The side-effects appear during the first run
print ("First call: ", jit(impure_print_side_effect)(4.))
# Subsequent runs with parameters of same type and shape may not show the side-effect
# This is because JAX now invokes a cached compilation of the function
print ("Second call: ", jit(impure_print_side_effect)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
print ("Third call, different type: ", jit(impure_print_side_effect)(jnp.array([5.])))
g = 0.
def impure_uses_globals(x):
return x + g
# JAX captures the value of the global during the first run
print ("First call: ", jit(impure_uses_globals)(4.))
g = 10. # Update the global
# Subsequent runs may silently use the cached value of the globals
print ("Second call: ", jit(impure_uses_globals)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
# This will end up reading the latest value of the global
print ("Third call, different type: ", jit(impure_uses_globals)(jnp.array([4.])))
g = 0.
def impure_saves_global(x):
global g
g = x
return x
# JAX runs the transformed function once with special Traced values for arguments
print ("First call: ", jit(impure_saves_global)(4.))
print ("Saved global: ", g) # Saved global has an internal JAX value
```
A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state:
```
def pure_uses_internal_state(x):
state = dict(even=0, odd=0)
for i in range(10):
state['even' if i % 2 == 0 else 'odd'] += x
return state['even'] + state['odd']
print(jit(pure_uses_internal_state)(5.))
```
It is not recommended to use iterators in any JAX function you want to `jit` or in any control-flow primitive. The reason is that an iterator is a Python object which introduces state to retrieve the next element, and is therefore incompatible with JAX's functional programming model. The code below shows some incorrect attempts to use iterators with JAX. Most of them raise an error, but some give unexpected results.
```
import jax.numpy as jnp
import jax.lax as lax
from jax import make_jaxpr
# lax.fori_loop
array = jnp.arange(10)
print(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45
iterator = iter(range(10))
print(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0
# lax.scan
def func11(arr, extra):
ones = jnp.ones(arr.shape)
def body(carry, aelems):
ae1, ae2 = aelems
return (carry + ae1 * ae2 + extra, carry)
return lax.scan(body, 0., (arr, ones))
make_jaxpr(func11)(jnp.arange(16), 5.)
# make_jaxpr(func11)(iter(range(16)), 5.) # throws error
# lax.cond
array_operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, array_operand)
iter_operand = iter(range(10))
# lax.cond(True, lambda x: next(x)+1, lambda x: next(x)-1, iter_operand) # throws error
```
## 🔪 In-Place Updates
In Numpy you're used to doing this:
```
numpy_array = np.zeros((3,3), dtype=np.float32)
print("original array:")
print(numpy_array)
# In place, mutating update
numpy_array[1, :] = 1.0
print("updated array:")
print(numpy_array)
```
If we try to update a JAX device array in-place, however, we get an __error__! (☉_☉)
```
jax_array = jnp.zeros((3,3), dtype=jnp.float32)
# In place update of JAX's array will yield an error!
try:
jax_array[1, :] = 1.0
except Exception as e:
print("Exception {}".format(e))
```
__What gives?!__
Allowing mutation of variables in-place makes program analysis and transformation very difficult. JAX requires a pure functional expression of a numerical program.
Instead, JAX offers the _functional_ update functions: [__index_update__](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_update.html#jax.ops.index_update), [__index_add__](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_add.html#jax.ops.index_add), [__index_min__](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_min.html#jax.ops.index_min), [__index_max__](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_max.html#jax.ops.index_max), and the [__index__](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index.html#jax.ops.index) helper.
⚠️ Inside `jit`'d code and `lax.while_loop` or `lax.fori_loop`, the __size__ of slices can't be a function of argument _values_, only of argument _shapes_ -- the slice start indices have no such restriction. See the __Control Flow__ section below for more information on this limitation.
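This restriction can be illustrated with `lax.dynamic_slice` (a minimal sketch, assuming jax is installed): the start index may be a traced argument value, but the slice size must be a static Python constant.

```python
import jax.numpy as jnp
from jax import jit, lax

@jit
def take_window(x, start):
    # `start` can depend on an argument value; the window size (3) cannot
    return lax.dynamic_slice(x, (start,), (3,))

print(take_window(jnp.arange(10), 4))
```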
```
from jax.ops import index, index_add, index_update
```
### index_update
If the __input values__ of __index_update__ aren't reused, __jit__-compiled code will perform these operations _in-place_.
```
jax_array = jnp.zeros((3, 3))
print("original array:")
print(jax_array)
new_jax_array = index_update(jax_array, index[1, :], 1.)
print("old array unchanged:")
print(jax_array)
print("new array:")
print(new_jax_array)
```
### index_add
If the __input values__ of __index_add__ aren't reused, __jit__-compiled code will perform these operations _in-place_.
```
print("original array:")
jax_array = jnp.ones((5, 6))
print(jax_array)
new_jax_array = index_add(jax_array, index[::2, 3:], 7.)
print("new array post-addition:")
print(new_jax_array)
```
## 🔪 Out-of-Bounds Indexing
In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this:
```
try:
np.arange(10)[11]
except Exception as e:
print("Exception {}".format(e))
```
However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in `NaN`). When the indexing operation is an array index update (e.g. `index_add` or `scatter`-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or `gather`-like primitives) the index is clamped to the bounds of the array since __something__ must be returned. For example, the last value of the array will be returned from this indexing operation:
```
jnp.arange(10)[11]
```
Note that due to this behavior for index retrieval, functions like `jnp.nanargmin` and `jnp.nanargmax` return -1 for slices consisting of NaNs whereas Numpy would throw an error.
Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) [will not preserve the semantics of out of bounds indexing](https://github.com/google/jax/issues/5760). Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior).
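Both halves of this behavior can be checked directly. The update example below uses the newer `.at[]` syntax rather than `jax.ops` (an assumption about the installed JAX version):

```python
import jax.numpy as jnp

# retrieval: the out-of-bounds index is clamped, returning the last element
print(jnp.arange(10)[11])

# update: the out-of-bounds update is dropped, leaving the array unchanged
print(jnp.zeros(3).at[5].add(1.))
```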
## 🔪 Non-array inputs: NumPy vs. JAX
NumPy is generally happy accepting Python lists or tuples as inputs to its API functions:
```
np.sum([1, 2, 3])
```
JAX departs from this, generally returning a helpful error:
```
try:
jnp.sum([1, 2, 3])
except TypeError as e:
print(f"TypeError: {e}")
```
This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.
For example, consider the following permissive version of `jnp.sum` that allows list inputs:
```
def permissive_sum(x):
return jnp.sum(jnp.array(x))
x = list(range(10))
permissive_sum(x)
```
The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the ``permissive_sum`` function above:
```
make_jaxpr(permissive_sum)(x)
```
Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.
If you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array:
```
jnp.sum(jnp.array(x))
```
## 🔪 Random Numbers
> _If all scientific papers whose results are in doubt because of bad
> `rand()`s were to disappear from library shelves, there would be a
> gap on each shelf about as big as your fist._ - Numerical Recipes
### RNGs and State
You're used to _stateful_ pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:
```
print(np.random.random())
print(np.random.random())
print(np.random.random())
```
Underneath the hood, numpy uses the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by __624 32bit unsigned ints__ and a __position__ indicating how much of this "entropy" has been used up.
```
np.random.seed(0)
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044,
# 2481403966, 4042607538, 337614300, ... 614 more numbers...,
# 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0)
```
This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, "consuming" 2 of the uint32s in the Mersenne twister state vector:
```
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0)
# Let's exhaust the entropy in this PRNG statevector
for i in range(311):
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 624, 0, 0.0)
# Next call iterates the RNG state for a new batch of fake "entropy".
_ = np.random.uniform()
rng_state = np.random.get_state()
# print(rng_state)
# --> ('MT19937', array([1499117434, 2949980591, 2242547484,
# 4162027047, 3277342478], dtype=uint32), 2, 0, 0.0)
```
The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's _very easy_ to screw up when the details of entropy production and consumption are hidden from the end user.
The Mersenne Twister PRNG is also known to have a [number](https://cs.stackexchange.com/a/53475) of problems: it has a large 2.5Kb state size, which leads to problematic [initialization issues](https://dl.acm.org/citation.cfm?id=1276928); it [fails](http://www.pcg-random.org/pdf/toms-oneill-pcg-family-v1.02.pdf) modern BigCrush tests; and it is generally slow.
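The 2.5Kb figure is easy to verify from the state tuple itself: 624 32-bit words is 2496 bytes.

```python
import numpy as np

np.random.seed(0)
kind, state_vec, pos, _, _ = np.random.get_state()
print(kind)                                       # the generator name, 'MT19937'
print(state_vec.size, state_vec.dtype)            # 624 words of uint32
print(state_vec.size * state_vec.dtype.itemsize)  # 2496 bytes ~ 2.5Kb
```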
### JAX PRNG
JAX instead implements an _explicit_ PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern [Threefry counter-based PRNG](https://github.com/google/jax/blob/master/design_notes/prng.md) that's __splittable__. That is, its design allows us to __fork__ the PRNG state into new PRNGs for use with parallel stochastic generation.
The random state is described by two unsigned-int32s that we call a __key__:
```
from jax import random
key = random.PRNGKey(0)
key
```
JAX's random functions produce pseudorandom numbers from the PRNG state, but __do not__ change the state!
Reusing the same state will cause __sadness__ and __monotony__, depriving the end user of __lifegiving chaos__:
```
print(random.normal(key, shape=(1,)))
print(key)
# No no no!
print(random.normal(key, shape=(1,)))
print(key)
```
Instead, we __split__ the PRNG to get usable __subkeys__ every time we need a new pseudorandom number:
```
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
```
We propagate the __key__ and make new __subkeys__ whenever we need a new random number:
```
print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom)
```
We can generate more than one __subkey__ at a time:
```
key, *subkeys = random.split(key, 4)
for subkey in subkeys:
print(random.normal(subkey, shape=(1,)))
```
## 🔪 Control Flow
### ✔ python control_flow + autodiff ✔
If you just want to apply `grad` to your python functions, you can use regular python control-flow constructs with no problems, as if you were using [Autograd](https://github.com/hips/autograd) (or Pytorch or TF Eager).
```
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
print(grad(f)(2.)) # ok!
print(grad(f)(4.)) # ok!
```
### python control flow + JIT
Using control flow with `jit` is more complicated, and by default it has more constraints.
This works:
```
@jit
def f(x):
for i in range(3):
x = 2 * x
return x
print(f(3))
```
So does this:
```
@jit
def g(x):
y = 0.
for i in range(x.shape[0]):
y = y + x[i]
return y
print(g(jnp.array([1., 2., 3.])))
```
But this doesn't, at least by default:
```
@jit
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
# This will fail!
try:
f(2)
except Exception as e:
print("Exception {}".format(e))
```
__What gives!?__
When we `jit`-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation.
For example, if we evaluate an `@jit` function on the array `jnp.array([1., 2., 3.], jnp.float32)`, we might want to compile code that we can reuse to evaluate the function on `jnp.array([4., 5., 6.], jnp.float32)` to save on compile time.
To get a view of your Python code that is valid for many different argument values, JAX traces it on _abstract values_ that represent sets of possible inputs. There are [multiple different levels of abstraction](https://github.com/google/jax/blob/master/jax/abstract_arrays.py), and different transformations use different abstraction levels.
By default, `jit` traces your code on the `ShapedArray` abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value `ShapedArray((3,), jnp.float32)`, we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.
But there's a tradeoff here: if we trace a Python function on a `ShapedArray((), jnp.float32)` that isn't committed to a specific concrete value, when we hit a line like `if x < 3`, the expression `x < 3` evaluates to an abstract `ShapedArray((), jnp.bool_)` that represents the set `{True, False}`. When Python attempts to coerce that to a concrete `True` or `False`, we get an error: we don't know which branch to take, and can't continue tracing! The tradeoff is that with higher levels of abstraction we gain a more general view of the Python code (and thus save on re-compilations), but we require more constraints on the Python code to complete the trace.
The good news is that you can control this tradeoff yourself. By having `jit` trace on more refined abstract values, you can relax the traceability constraints. For example, using the `static_argnums` argument to `jit`, we can specify to trace on concrete values of some arguments. Here's that example function again:
```
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
f = jit(f, static_argnums=(0,))
print(f(2.))
```
Here's another example, this time involving a loop:
```
def f(x, n):
y = 0.
for i in range(n):
y = y + x[i]
return y
f = jit(f, static_argnums=(1,))
f(jnp.array([2., 3., 4.]), 2)
```
In effect, the loop gets statically unrolled. JAX can also trace at _higher_ levels of abstraction, like `Unshaped`, but that's not currently the default for any transformation.
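The unrolling is visible in the jaxpr: with `n` fixed at 2, the trace contains two separate index-and-add steps rather than a loop construct (a sketch, assuming jax is installed):

```python
import jax.numpy as jnp
from jax import make_jaxpr

def f(x, n):
    y = 0.
    for i in range(n):
        y = y + x[i]
    return y

# n is marked static, so the Python loop runs at trace time and unrolls
print(make_jaxpr(f, static_argnums=(1,))(jnp.array([2., 3., 4.]), 2))
```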
️⚠️ **functions with argument-__value__ dependent shapes**
These control-flow issues also come up in a more subtle way: numerical functions we want to __jit__ can't specialize the shapes of internal arrays on argument _values_ (specializing on argument __shapes__ is ok). As a trivial example, let's make a function whose output happens to depend on the input variable `length`.
```
def example_fun(length, val):
return jnp.ones((length,)) * val
# un-jit'd works fine
print(example_fun(5, 4))
bad_example_jit = jit(example_fun)
# this will fail:
try:
print(bad_example_jit(10, 4))
except Exception as e:
print("Exception {}".format(e))
# static_argnums tells JAX to recompile on changes at these argument positions:
good_example_jit = jit(example_fun, static_argnums=(0,))
# first compile
print(good_example_jit(10, 4))
# recompiles
print(good_example_jit(5, 4))
```
`static_argnums` can be handy if `length` in our example rarely changes, but it would be disastrous if it changed a lot!
Lastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside __jit__'d functions:
```
@jit
def f(x):
print(x)
y = 2 * x
print(y)
return y
f(2)
```
### Structured control flow primitives
There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:
- `lax.cond` _differentiable_
- `lax.while_loop` __fwd-mode-differentiable__
- `lax.fori_loop` __fwd-mode-differentiable__
- `lax.scan` _differentiable_
#### cond
python equivalent:
```python
def cond(pred, true_fun, false_fun, operand):
if pred:
return true_fun(operand)
else:
return false_fun(operand)
```
```
from jax import lax
operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, operand)
# --> array([1.], dtype=float32)
lax.cond(False, lambda x: x+1, lambda x: x-1, operand)
# --> array([-1.], dtype=float32)
```
#### while_loop
python equivalent:
```
def while_loop(cond_fun, body_fun, init_val):
val = init_val
while cond_fun(val):
val = body_fun(val)
return val
```
```
init_val = 0
cond_fun = lambda x: x<10
body_fun = lambda x: x+1
lax.while_loop(cond_fun, body_fun, init_val)
# --> array(10, dtype=int32)
```
#### fori_loop
python equivalent:
```
def fori_loop(start, stop, body_fun, init_val):
val = init_val
for i in range(start, stop):
val = body_fun(i, val)
return val
```
```
init_val = 0
start = 0
stop = 10
body_fun = lambda i,x: x+i
lax.fori_loop(start, stop, body_fun, init_val)
# --> array(45, dtype=int32)
```
#### Summary
$$
\begin{array} {r|rr}
\hline \
\textrm{construct}
& \textrm{jit}
& \textrm{grad} \\
\hline \
\textrm{if} & ❌ & ✔ \\
\textrm{for} & ✔* & ✔\\
\textrm{while} & ✔* & ✔\\
\textrm{lax.cond} & ✔ & ✔\\
\textrm{lax.while_loop} & ✔ & \textrm{fwd}\\
\textrm{lax.fori_loop} & ✔ & \textrm{fwd}\\
\textrm{lax.scan} & ✔ & ✔\\
\hline
\end{array}
$$
<center>$\ast$ = argument-__value__-independent loop condition - unrolls the loop </center>
## 🔪 NaNs
### Debugging NaNs
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:
* setting the `JAX_DEBUG_NANS=True` environment variable;
* adding `from jax.config import config` and `config.update("jax_debug_nans", True)` near the top of your main file;
* adding `from jax.config import config` and `config.parse_flags_with_absl()` to your main file, then setting the option using a command-line flag like `--jax_debug_nans=True`;
This will cause computations to error-out immediately on production of a NaN. Switching this option on adds a nan check to every floating point type value produced by XLA. That means values are pulled back to the host and checked as ndarrays for every primitive operation not under an `@jit`. For code under an `@jit`, the output of every `@jit` function is checked and if a nan is present it will re-run the function in de-optimized op-by-op mode, effectively removing one level of `@jit` at a time.
There could be tricky situations that arise, like nans that only occur under a `@jit` but don't get produced in de-optimized mode. In that case you'll see a warning message print out but your code will continue to execute.
If the nans are being produced in the backward pass of a gradient evaluation, then when an exception is raised, several frames up in the stack trace you will be in the `backward_pass` function, which is essentially a simple jaxpr interpreter that walks the sequence of primitive operations in reverse. In the example below, we started an ipython repl with the command line `env JAX_DEBUG_NANS=True ipython`, then ran this:
```
In [1]: import jax.numpy as jnp
In [2]: jnp.divide(0., 0.)
---------------------------------------------------------------------------
FloatingPointError Traceback (most recent call last)
<ipython-input-2-f2e2c413b437> in <module>()
----> 1 jnp.divide(0., 0.)
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:`x \over y`."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
.../jax/jax/interpreters/xla.pyc in handle_result(device_buffer)
103 py_val = device_buffer.to_py()
104 if np.any(np.isnan(py_val)):
--> 105 raise FloatingPointError("invalid value")
106 else:
107 return DeviceArray(device_buffer, *result_shape)
FloatingPointError: invalid value
```
The nan generated was caught. By running `%debug`, we can get a post-mortem debugger. This also works with functions under `@jit`, as the example below shows.
```
In [4]: from jax import jit
In [5]: @jit
...: def f(x, y):
...: a = x * y
...: b = (x + y) / (x - y)
...: c = a + 2
...: return a + b * c
...:
In [6]: x = jnp.array([2., 0.])
In [7]: y = jnp.array([3., 0.])
In [8]: f(x, y)
Invalid value encountered in the output of a jit function. Calling the de-optimized version.
---------------------------------------------------------------------------
FloatingPointError Traceback (most recent call last)
<ipython-input-8-811b7ddb3300> in <module>()
----> 1 f(x, y)
... stack trace ...
<ipython-input-5-619b39acbaac> in f(x, y)
2 def f(x, y):
3 a = x * y
----> 4 b = (x + y) / (x - y)
5 c = a + 2
6 return a + b * c
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:`x \over y`."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
```
When this code sees a nan in the output of an `@jit` function, it calls into the de-optimized code, so we still get a clear stack trace. And we can run a post-mortem debugger with `%debug` to inspect all the values to figure out the error.
⚠️ You shouldn't have the NaN-checker on if you're not debugging, as it can introduce lots of device-host round-trips and performance regressions!
## Double (64bit) precision
At the moment, JAX by default enforces single-precision numbers to mitigate the Numpy API's tendency to aggressively promote operands to `double`. This is the desired behavior for many machine-learning applications, but it may catch you by surprise!
```
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype
```
To use double-precision numbers, you need to set the `jax_enable_x64` configuration variable __at startup__.
There are a few ways to do this:
1. You can enable 64bit mode by setting the environment variable `JAX_ENABLE_X64=True`.
2. You can manually set the `jax_enable_x64` configuration flag at startup:
```python
# again, this only works on startup!
from jax.config import config
config.update("jax_enable_x64", True)
```
3. You can parse command-line flags with `absl.app.run(main)`
```python
from jax.config import config
config.config_with_absl()
```
4. If you want JAX to run absl parsing for you, i.e. you don't want to do `absl.app.run(main)`, you can instead use
```python
from jax.config import config
if __name__ == '__main__':
# calls config.config_with_absl() *and* runs absl parsing
config.parse_flags_with_absl()
```
Note that #2-#4 work for _any_ of JAX's configuration options.
We can then confirm that `x64` mode is enabled:
```
import jax.numpy as jnp
from jax import random
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype # --> dtype('float64')
```
### Caveats
⚠️ XLA doesn't support 64-bit convolutions on all backends!
## Fin.
If something's not covered here that has caused you weeping and gnashing of teeth, please let us know and we'll extend these introductory _advisos_!
```
# noexport
import os
os.system('export_notebook identify_domain_training_data.ipynb')
from tmilib import *
import csv
import sys
num_prev_enabled = int(sys.argv[1])
num_labels_enabled = 2 + num_prev_enabled
data_version = 4 + num_prev_enabled
print 'num_prev_enabled', num_prev_enabled
print 'data_version', data_version
twenty_letters = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t"]
#domain_to_letter = {x:twenty_letters[i] for i,x in enumerate(top_domains)}
domain_id_to_letter = {domain_to_id(x):twenty_letters[i] for i,x in enumerate(top_n_domains_by_visits(20))}
#print domain_id_to_letter
#print domain_to_letter
productivity_letters = {-2: 'v', -1: 'w', 0: 'x', 1: 'y', 2: 'z'}
domain_id_to_productivity_letter = [productivity_letters[x] for x in get_domain_id_to_productivity()]
#print domain_id_to_productivity[:10]
#print domain_id_to_productivity_letter[:10]
def get_row_names():
output_row_names = [
'label',
'spanlen',
'since_cur',
'cur_domain_letter',
'cur_domain_productivity',
'to_next',
'next_domain_letter',
'next_domain_productivity',
'n_eq_c',
]
for idx_p_zeroidx in range(num_prev_enabled):
sp = str(idx_p_zeroidx + 1)
new_feature_names_for_p = [
'since_prev' + sp,
'prev' + sp +'_domain_letter',
'prev' + sp + '_domain_productivity',
'n_eq_p' + sp,
]
output_row_names.extend(new_feature_names_for_p)
return tuple(output_row_names)
row_names = get_row_names()
print row_names
def get_rows_for_user(user):
output = []
#ordered_visits = get_history_ordered_visits_corrected_for_user(user)
ordered_visits = get_history_ordered_visits_corrected_for_user(user)
ordered_visits = exclude_bad_visits(ordered_visits)
#active_domain_at_time = get_active_domain_at_time_for_user(user)
active_seconds_set = set(get_active_insession_seconds_for_user(user))
active_second_to_domain_id = {int(k):v for k,v in get_active_second_to_domain_id_for_user(user).viewitems()}
prev_domain_ids = [-1]*8
domain_id_to_most_recent_visit = {}
total_items = 0
skipped_items = 0
for idx,visit in enumerate(ordered_visits):
if idx+1 >= len(ordered_visits):
break
next_visit = ordered_visits[idx+1]
cur_domain = url_to_domain(visit['url'])
cur_domain_id = domain_to_id(cur_domain)
next_domain = url_to_domain(next_visit['url'])
next_domain_id = domain_to_id(next_domain)
cur_time_sec = int(round(visit['visitTime'] / 1000.0))
next_time_sec = int(round(next_visit['visitTime'] / 1000.0))
domain_id_to_most_recent_visit[cur_domain_id] = cur_time_sec
if prev_domain_ids[0] != cur_domain_id:
#prev_domain_ids = ([cur_domain_id] + [x for x in prev_domain_ids if x != cur_domain_id])[:4]
if cur_domain_id in prev_domain_ids:
prev_domain_ids.remove(cur_domain_id)
prev_domain_ids.insert(0, cur_domain_id)
while len(prev_domain_ids) > 8:
prev_domain_ids.pop()
# prev_domain_ids includes the current one
if cur_time_sec > next_time_sec:
continue
prev1_domain_id = prev_domain_ids[1]
prev2_domain_id = prev_domain_ids[2]
prev3_domain_id = prev_domain_ids[3]
prev4_domain_id = prev_domain_ids[4]
prev5_domain_id = prev_domain_ids[5]
prev6_domain_id = prev_domain_ids[6]
prev7_domain_id = prev_domain_ids[7]
n_eq_c = 'T' if (next_domain_id == cur_domain_id) else 'F'
n_eq_p1 = 'T' if (next_domain_id == prev1_domain_id) else 'F'
n_eq_p2 = 'T' if (next_domain_id == prev2_domain_id) else 'F'
n_eq_p3 = 'T' if (next_domain_id == prev3_domain_id) else 'F'
n_eq_p4 = 'T' if (next_domain_id == prev4_domain_id) else 'F'
n_eq_p5 = 'T' if (next_domain_id == prev5_domain_id) else 'F'
n_eq_p6 = 'T' if (next_domain_id == prev6_domain_id) else 'F'
n_eq_p7 = 'T' if (next_domain_id == prev7_domain_id) else 'F'
for time_sec in xrange(cur_time_sec+1, next_time_sec):
if time_sec not in active_seconds_set:
continue
ref_domain_id = active_second_to_domain_id[time_sec]
total_items += 1
label = None
available_labels = (
(cur_domain_id, 'c'),
(next_domain_id, 'n'),
(prev1_domain_id, 'p1'),
(prev2_domain_id, 'p2'),
(prev3_domain_id, 'p3'),
(prev4_domain_id, 'p4'),
(prev5_domain_id, 'p5'),
(prev6_domain_id, 'p6'),
(prev7_domain_id, 'p7'),
)[:num_labels_enabled]
# c p n p q r s t
for label_value,label_name in available_labels:
if ref_domain_id == label_value:
label = label_name
break
if label == None:
skipped_items += 1
continue
next_domain_letter = domain_id_to_letter.get(next_domain_id, 'u')
cur_domain_letter = domain_id_to_letter.get(cur_domain_id, 'u')
prev1_domain_letter = domain_id_to_letter.get(prev1_domain_id, 'u')
prev2_domain_letter = domain_id_to_letter.get(prev2_domain_id, 'u')
prev3_domain_letter = domain_id_to_letter.get(prev3_domain_id, 'u')
prev4_domain_letter = domain_id_to_letter.get(prev4_domain_id, 'u')
prev5_domain_letter = domain_id_to_letter.get(prev5_domain_id, 'u')
prev6_domain_letter = domain_id_to_letter.get(prev6_domain_id, 'u')
prev7_domain_letter = domain_id_to_letter.get(prev7_domain_id, 'u')
next_domain_productivity = domain_id_to_productivity_letter[next_domain_id]
cur_domain_productivity = domain_id_to_productivity_letter[cur_domain_id]
prev1_domain_productivity = domain_id_to_productivity_letter[prev1_domain_id]
prev2_domain_productivity = domain_id_to_productivity_letter[prev2_domain_id]
prev3_domain_productivity = domain_id_to_productivity_letter[prev3_domain_id]
prev4_domain_productivity = domain_id_to_productivity_letter[prev4_domain_id]
prev5_domain_productivity = domain_id_to_productivity_letter[prev5_domain_id]
prev6_domain_productivity = domain_id_to_productivity_letter[prev6_domain_id]
prev7_domain_productivity = domain_id_to_productivity_letter[prev7_domain_id]
since_cur = time_sec - cur_time_sec
to_next = next_time_sec - time_sec
spanlen = since_cur + to_next
prev1_domain_last_visit = domain_id_to_most_recent_visit.get(prev1_domain_id, 0)
prev2_domain_last_visit = domain_id_to_most_recent_visit.get(prev2_domain_id, 0)
prev3_domain_last_visit = domain_id_to_most_recent_visit.get(prev3_domain_id, 0)
prev4_domain_last_visit = domain_id_to_most_recent_visit.get(prev4_domain_id, 0)
prev5_domain_last_visit = domain_id_to_most_recent_visit.get(prev5_domain_id, 0)
prev6_domain_last_visit = domain_id_to_most_recent_visit.get(prev6_domain_id, 0)
prev7_domain_last_visit = domain_id_to_most_recent_visit.get(prev7_domain_id, 0)
since_prev1 = time_sec - prev1_domain_last_visit
since_prev2 = time_sec - prev2_domain_last_visit
since_prev3 = time_sec - prev3_domain_last_visit
since_prev4 = time_sec - prev4_domain_last_visit
since_prev5 = time_sec - prev5_domain_last_visit
since_prev6 = time_sec - prev6_domain_last_visit
since_prev7 = time_sec - prev7_domain_last_visit
since_cur = log(since_cur)
to_next = log(to_next)
spanlen = log(spanlen)
since_prev1 = log(since_prev1)
since_prev2 = log(since_prev2)
since_prev3 = log(since_prev3)
since_prev4 = log(since_prev4)
since_prev5 = log(since_prev5)
since_prev6 = log(since_prev6)
since_prev7 = log(since_prev7)
cached_locals = locals()
output.append([cached_locals[row_name] for row_name in row_names])
#print 'user', user, 'guaranteed error', float(skipped_items)/total_items, 'skipped', skipped_items, 'total', total_items
return {
'rows': output,
'skipped_items': skipped_items,
'total_items': total_items,
}
def create_domainclass_data_for_users(users, filename):
if sdir_exists(filename):
print 'already exists', filename
return
outfile = csv.writer(open(sdir_path(filename), 'w'))
outfile.writerow(row_names)
total_items = 0
skipped_items = 0
for user in users:
data = get_rows_for_user(user)
total_items += data['total_items']
if data['total_items'] == 0:
print user, 'no items'
continue
skipped_items += data['skipped_items']
print user, 'guaranteed error', float(data['skipped_items'])/data['total_items'], 'skipped', data['skipped_items'], 'total', data['total_items']
outfile.writerows(data['rows'])
print 'guaranteed error', float(skipped_items) / total_items, 'skipped', skipped_items, 'total', total_items
create_domainclass_data_for_users(get_training_users(), 'domainclass_cpn_train_v' + str(data_version) +'.csv')
create_domainclass_data_for_users(get_test_users(), 'domainclass_cpn_test_v' + str(data_version) + '.csv')
```
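The `prev_domain_ids` bookkeeping in the loop above is a bounded move-to-front list: the current domain is promoted to the front, duplicates are dropped, and the list is trimmed to 8 entries. Extracted as a standalone helper (the name `update_mru` is ours, not from the original script):

```python
def update_mru(prev_ids, cur_id, cap=8):
    """Move cur_id to the front of prev_ids, dropping any stale
    occurrence and trimming the list to at most `cap` entries."""
    if prev_ids and prev_ids[0] == cur_id:
        return prev_ids          # already most recent; nothing to do
    if cur_id in prev_ids:
        prev_ids.remove(cur_id)  # drop the stale occurrence
    prev_ids.insert(0, cur_id)   # current domain becomes most recent
    del prev_ids[cap:]           # keep only the `cap` most recent ids
    return prev_ids
```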
|
github_jupyter
|
```
from matplotlib import pyplot as plt
import pandas as pd
titanic_df = pd.read_csv('train.csv')
titanic_df.head()
titanic_df.columns
male_df = titanic_df[titanic_df['Sex'] == 'male']
female_df = titanic_df[titanic_df['Sex'] == 'female']
male_df.head()
female_df.head()
plt.subplot(1,2,1)
plt.bar([0,1],[sum(male_df['Survived'] == 0), sum(male_df['Survived'] == 1)])
plt.ylim(top=500)
plt.xticks([0,1],['Not-survived','Survived'])
plt.legend(['Male'])
plt.subplot(1,2,2)
plt.bar([0,1],[sum(female_df['Survived'] == 0), sum(female_df['Survived'] == 1)],color = 'orange')
plt.ylim(top=500)
plt.xticks([0,1],['Not-survived','Survived'])
plt.legend(['Female'])
plt.show()
print(len(male_df))
print(len(female_df))
survivor_df = titanic_df[titanic_df['Survived'] == 1]
non_survivor_df = titanic_df[titanic_df['Survived'] == 0]
plt.subplot(1,2,1)
plt.bar([0,1],[sum(survivor_df['Sex'] == 'male'), sum(survivor_df['Sex'] == 'female')])
plt.ylim(top=500)
plt.xticks([0,1],['Male','Female'])
plt.legend(['Survived'])
plt.subplot(1,2,2)
plt.bar([0,1],[sum(non_survivor_df['Sex'] == 'male'), sum(non_survivor_df['Sex'] == 'female')],color = 'orange')
plt.ylim(top=500)
plt.xticks([0,1],['Male','Female'])
plt.legend(['Not-survived'])
plt.show()
plt.subplot(1,2,1)
plt.bar([0,1,2],[sum(survivor_df['Pclass'] == 1), sum(survivor_df['Pclass'] == 2), sum(survivor_df['Pclass'] == 3)])
plt.ylim(top=400)
plt.xticks([0,1,2],['Class 1','Class 2', 'Class 3'])
plt.legend(['Survived'])
plt.subplot(1,2,2)
plt.bar([0,1,2],[sum(non_survivor_df['Pclass'] == 1), sum(non_survivor_df['Pclass'] == 2), sum(non_survivor_df['Pclass'] == 3)], color = 'orange')
plt.ylim(top=400)
plt.xticks([0,1,2],['Class 1','Class 2', 'Class 3'])
plt.legend(['Not-survived'])
plt.show()
plt.subplot(2,1,1)
plt.hist(survivor_df[survivor_df.Age > 1].Age,40,edgecolor = 'black')
plt.xlabel('Age')
plt.ylim(top=50)
plt.xlim(right=85)
plt.legend(['Survivors'])
plt.subplot(2,1,2)
plt.hist(non_survivor_df[non_survivor_df.Age > 1].Age,40,edgecolor = 'black', color = 'orange')
plt.xlabel('Age')
plt.ylim(top=50)
plt.xlim(right=85)
plt.legend(['Non-Survivors'])
plt.show()
plt.subplot(2,1,1)
plt.hist([len(name) for name in survivor_df['Name']], 20,edgecolor='black')
plt.ylim(top=85)
plt.xlim(right=85)
plt.legend(['Survivors'])
plt.subplot(2,1,2)
plt.hist([len(name) for name in non_survivor_df['Name']], 20,edgecolor='black',color = 'orange')
plt.ylim(top=85)
plt.xlim(right=85)
plt.xlabel('Name Length')
plt.legend(['Non-survivors'])
plt.show()
plt.subplot(2,1,1)
plt.hist([len(name) for name in female_df['Name']], 20,edgecolor='black')
plt.ylim(top=90)
plt.xlim(right=85)
plt.legend(['Female'])
plt.subplot(2,1,2)
plt.hist([len(name) for name in male_df['Name']], 20,edgecolor='black',color = 'orange')
plt.ylim(top=90)
plt.xlim(right=85)
plt.xlabel('Name Length')
plt.legend(['Male'])
plt.show()
# Embarked
set(titanic_df.Embarked)
plt.subplot(1,2,1)
plt.bar([0,1,2],[sum(survivor_df['Embarked'] == 'Q'), sum(survivor_df['Embarked'] == 'C'), sum(survivor_df['Embarked'] == 'S')])
plt.ylim(top=450)
plt.xticks([0,1,2],['Queenstown','Cherbourg', 'Southampton'], rotation = 'vertical')
plt.legend(['Survived'])
plt.subplot(1,2,2)
plt.bar([0,1,2],[sum(non_survivor_df['Embarked'] == 'Q'), sum(non_survivor_df['Embarked'] == 'C'), sum(non_survivor_df['Embarked'] == 'S')], color = 'orange')
plt.ylim(top=450)
plt.xticks([0,1,2],['Queenstown','Cherbourg', 'Southampton'], rotation='vertical')
plt.legend(['Not-survived'])
plt.show()
import numpy as np
x = np.arange(3)
fig, ax1 = plt.subplots()
width = 0.3
plt.xticks(x + width/2, ['Queenstown','Cherbourg', 'Southampton'])
survivors = ax1.bar(x, [sum(survivor_df['Embarked'] == 'Q'), sum(survivor_df['Embarked'] == 'C'), sum(survivor_df['Embarked'] == 'S')], width)
plt.ylabel('No. of Survivors')
plt.ylim(top=450)
ax2 = ax1.twinx()
non_survivors = ax2.bar(x + width, [sum(non_survivor_df['Embarked'] == 'Q'), sum(non_survivor_df['Embarked'] == 'C'), sum(non_survivor_df['Embarked'] == 'S')], width, color='orange')
plt.ylabel('No. of non-survivors')
plt.ylim(top=450)
plt.legend([survivors, non_survivors],['Survivors','Non-survivors'])
figure = plt.gcf()
plt.show()
x = np.arange(2)
fig, ax1 = plt.subplots()
width = 0.2
plt.xticks(x + width/2, ['Male','Female'])
survivors = ax1.bar(x, [sum(survivor_df['Sex'] == 'male'), sum(survivor_df['Sex'] == 'female')], width)
plt.ylabel('No. of People')
plt.ylim(top=480)
ax2 = ax1.twinx()
non_survivors = ax2.bar(x + width, [sum(non_survivor_df['Sex'] == 'male'), sum(non_survivor_df['Sex'] == 'female')], width, color='orange')
plt.ylim(top=480)
plt.legend([survivors, non_survivors],['Survivors','Non-survivors'],loc=9)
figure = plt.gcf()
plt.show()
sum(survivor_df['Age'] > 65)
titanic_df.describe()
survivor_df.describe()
non_survivor_df.describe()
sum(survivor_df.SibSp > 1)
sum(non_survivor_df.SibSp > 1)
sum(survivor_df.SibSp == 0)
sum(non_survivor_df.SibSp == 0)
sum(survivor_df.SibSp == 1)
sum(non_survivor_df.SibSp == 1)
sum(titanic_df.Age > 65)
np.arange(3)
class1 = titanic_df.loc[titanic_df.Pclass == 1]
plt.bar([1,2,3], [sum(class1.Embarked == 'Q'), sum(class1.Embarked == 'C'), sum(class1.Embarked == 'S')])
plt.xticks([1,2,3],['Queenstown','Cherbourg','Southampton'])
plt.show()
class2 = titanic_df.loc[titanic_df.Pclass == 2]
plt.bar([1,2,3], [sum(class2.Embarked == 'Q'), sum(class2.Embarked == 'C'), sum(class2.Embarked == 'S')])
plt.xticks([1,2,3],['Queenstown','Cherbourg','Southampton'])
plt.show()
plt.subplot(2,1,1)
plt.hist(survivor_df.Fare, 35,edgecolor='black')
plt.ylim(top=340)
plt.xlim(right=300)
plt.legend(['Survivors'])
plt.subplot(2,1,2)
plt.hist(non_survivor_df.Fare, 20,edgecolor='black',color = 'orange')
plt.ylim(top=340)
plt.xlim(right=300)
plt.xlabel('Ticket Fare')
plt.legend(['Non-survivors'])
plt.show()
sum(titanic_df.Fare > 300)
```
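The bar charts above compare raw counts; the same story is often clearer as survival *rates*. A brief sketch using pandas `groupby` (a tiny hand-made frame stands in for `train.csv` here, so the numbers are illustrative only):

```python
import pandas as pd

# Tiny stand-in for the Titanic train.csv used above
df = pd.DataFrame({
    'Sex': ['male', 'male', 'female', 'female', 'male', 'female'],
    'Pclass': [3, 1, 1, 3, 2, 2],
    'Survived': [0, 1, 1, 1, 0, 0],
})

# Fraction of passengers that survived, by sex and by class
rate_by_sex = df.groupby('Sex')['Survived'].mean()
rate_by_class = df.groupby('Pclass')['Survived'].mean()
print(rate_by_sex)
```

Rates normalize away the different group sizes (577 men vs 314 women in the real data), which the count-based bar charts leave implicit.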
|
github_jupyter
|
# Module 3 Required Coding Activity
Introduction to Python (Unit 2) Fundamentals
All course .ipynb Jupyter Notebooks are available from the project files download topic in Module 1, Section 1.
This is an activity from the Jupyter Notebook **`Practice_MOD03_IntroPy.ipynb`** which you may have already completed.
| Assignment Requirements |
|:-------------------------------|
| **NOTE:** This program requires **`print`** output and using code syntax used in module 3: **`if`**, **`input`**, **`def`**, **`return`**, **`for`**/**`in`** keywords, **`.lower()`** and **`.upper()`** method, **`.append`**, **`.pop`**, **`.split`** methods, **`range`** and **`len`** functions |
## Program: poem mixer
This program takes string input and then prints out a mixed order version of the string
**Program Parts**
- **program flow** gathers the word list, modifies the case and order, and prints
- get string input, input like a poem, verse or saying
- split the string into a list of individual words
- determine the length of the list
- Loop the length of the list by index number and for each list index:
- if a word is short (3 letters or less) make the word in the list lowercase
- if a word is long (7 letters or more) make the word in the list uppercase
- **call the word_mixer** function with the modified list
- print the return value from the word_mixer function
- **word_mixer** function has 1 argument: an original list of string words containing more than 5 words; the function returns a new list.
- sort the original list
- create a new list
- Loop while the list is longer than 5 words:
- *in each loop pop a word from the sorted original list and append to the new list*
- pop the word 5th from the end of the list and append to the new list
- pop the first word in the list and append to the new list
- pop the last word in the list and append to the new list
- **return** the new list on exiting the loop

**input example** *(beginning of William Blake poem, "The Fly")*
>enter a saying or poem: `Little fly, Thy summer’s play My thoughtless hand Has brushed away. Am not I A fly like thee? Or art not thou A man like me?`
**output example**
>`or BRUSHED thy not Little thou me? SUMMER’S thee? like THOUGHTLESS play i a not hand a my fly am man`
**alternative output**: in each loop of the function that creates the new list, add a `"\n"` to the list
```
or BRUSHED thy
not Little thou
me? SUMMER’S thee?
like THOUGHTLESS play
i a not
hand a my
fly am man
```
```
# [] create poem mixer
# [] copy and paste in edX assignment page
```
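One possible sketch of the program described above — a starting point, not the official solution (the exact output depends on Python's default string ordering, which sorts uppercase before lowercase):

```python
def word_mixer(words):
    """Sort the list, then build a new mixed-order list per the spec."""
    words = sorted(words)
    mixed = []
    while len(words) > 5:
        mixed.append(words.pop(-5))  # pop the 5th word from the end
        mixed.append(words.pop(0))   # pop the first word
        mixed.append(words.pop())    # pop the last word
    return mixed

def poem_mixer(poem):
    """Lowercase short words, uppercase long words, then mix."""
    words = poem.split()
    for i in range(len(words)):
        if len(words[i]) <= 3:       # short word (3 letters or less)
            words[i] = words[i].lower()
        elif len(words[i]) >= 7:     # long word (7 letters or more)
            words[i] = words[i].upper()
    return " ".join(word_mixer(words))

# saying = input("enter a saying or poem: ")
# print(poem_mixer(saying))
```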
Submit this by creating a Python file (.py) and submitting it in D2L. Be sure to test that it works.
|
github_jupyter
|
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/2_elemental_features_of_monk/5)%20Feature%20-%20Switch%20modes%20without%20reloading%20experiment%20-%20train%2C%20eval%2C%20infer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### 1. Understand how continuously and manually switching between training and external validation helps improve training
### 2. Experiment and learn how switching between train and val modes can help choose the right hyper-parameters
### 3. Steps
- You will use mxnet gluon backend for this example
- You will first train a classifier using default params
- You will switch mode from train to val for the first time here
- You will then validate to check accuracy
- You will switch mode from val to train here
    - You will reduce the learning rate (No need to focus on how it is done for now)
- You will retrain again using this new lr
- You will switch mode from train to val for the second time here
- You will then validate to check accuracy
- You will again switch mode from val to train here
    - You will further change the learning rate (No need to focus on how it is done for now)
- You will retrain again using this newest lr
- You will switch mode from train to val for the final time here
- You will then validate to check accuracy
# Table of Contents
## [0. Install](#0)
## [1. Train a classifier using default settings](#1)
## [2. Switch mode from train to eval and validate](#2)
## [3. Switch back mode, reduce lr, retrain](#3)
## [4. Switch mode from train to eval and re-validate](#4)
## [5. Switch back mode, change lr further, retrain](#5)
## [6. Switch mode from train to eval and re-validate](#6)
<a id='0'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
# If using Colab install using the commands below
!cd monk_v1/installation/Misc && pip install -r requirements_colab.txt
# If using Kaggle uncomment the following command
#!cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt
# Select the requirements file as per OS and CUDA version when using a local system or cloud
#!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
```
## Dataset - Malarial cell images
- Credits: https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1mMEtGIK8UZNCrErXRJR-kutNTaN1zxjC" -O malaria_cell.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell.zip
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1WHpd7M-E_EiXmdjOr48BfvlUtMRPV6PM" -O malaria_cell_val.zip && rm -rf /tmp/cookies.txt
! unzip -qq malaria_cell_val.zip
```
# Imports
- Using the mxnet-gluon backend for this tutorial
```
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using mxnet-gluon backend
from gluon_prototype import prototype
```
<a id='1'></a>
# Train a classifier using default settings
### Creating and managing experiments
- Provide project name
- Provide experiment name
```
gtf = prototype(verbose=1);
gtf.Prototype("Malaria-Cell", "exp-switch-modes");
```
### This creates files and directories as per the following structure
workspace
|
|--------Malaria-Cell
|
|
|-----exp-switch-modes
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
### Load Dataset
```
gtf.Default(dataset_path="malaria_cell",
model_name="resnet18_v1",
num_epochs=5);
#Read the summary generated once you run this cell.
```
### From summary current Learning rate: 0.01
```
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='2'></a>
# Switch mode from train to eval and validate
```
gtf.Switch_Mode(eval_infer=True)
```
### Load the validation dataset
```
gtf.Dataset_Params(dataset_path="malaria_cell_val");
gtf.Dataset();
```
### Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
### Accuracy now is - 65.08% when learning rate is 0.01
(Can change when you run the exp)
<a id='3'></a>
# Switch back mode, reduce lr, retrain
```
gtf.Switch_Mode(train=True)
```
## Reduce learning rate from 0.01 to 0.001
```
# This part of code will be taken up again in upcoming sections
gtf.update_learning_rate(0.001);
gtf.Reload();
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='4'></a>
# Switch mode from train to eval and re-validate
```
gtf.Switch_Mode(eval_infer=True)
```
### Load the validation dataset
```
gtf.Dataset_Params(dataset_path="malaria_cell_val");
gtf.Dataset();
```
### Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
### Accuracy now is - 58.85% when learning rate is 0.001
(Can change when you run the exp)
- Thus, reducing the learning rate didn't help in our case
<a id='5'></a>
# Switch back mode, change lr, retrain
```
gtf.Switch_Mode(train=True)
```
## Update the learning rate again
```
# This part of code will be taken up again in upcoming sections
gtf.update_learning_rate(0.1);
gtf.Reload();
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='6'></a>
# Switch mode from train to eval and re-validate
```
gtf.Switch_Mode(eval_infer=True)
```
### Load the validation dataset
```
gtf.Dataset_Params(dataset_path="malaria_cell_val");
gtf.Dataset();
```
### Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
### Accuracy now is - 49.85% when learning rate is 0.1, even lower
(Can change when you run the exp)
- Thus, increasing the learning rate didn't help in our case either
### LR 0.01 worked best for us
- That's how manual hyper-parameter tuning can be done using switch modes
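Monk's `Switch_Mode` workflow aside, the train → validate → adjust-lr loop is a generic pattern. A framework-free toy sketch (gradient descent on f(x) = x², all values hypothetical):

```python
def final_loss(lr, steps=20):
    """Gradient descent on f(x) = x**2 from x = 1.0; the returned
    final loss stands in for 'train then validate' in the workflow."""
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x          # gradient of x**2 is 2x
    return x * x

# Manually 'switch modes': train with one lr, validate, adjust, repeat
results = {lr: final_loss(lr) for lr in (0.001, 0.01, 0.1)}
best_lr = min(results, key=results.get)
```

On this toy objective the largest step happens to win because the problem is well conditioned; on the malaria experiment above, 0.01 turned out best — which is exactly why the validation step is needed rather than assumed.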
|
github_jupyter
|
<a href="https://colab.research.google.com/github/towardsai/tutorials/blob/master/random-number-generator/random_number_generator_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Random Number Generator Tutorial with Python
* Tutorial: https://towardsai.net/p/data-science/random-number-generator-tutorial-with-python-3b35986132c7
* Github: https://github.com/towardsai/tutorials/tree/master/random-number-generator
## Generating pseudorandom numbers with Python's standard library
Python has a built-in module called `random` for generating a variety of pseudorandom numbers. Although it is recommended that this module not be used for security purposes such as cryptography, it will do for machine learning and data science. This module uses a PRNG called the Mersenne Twister.
### Importing module: random
```
import random
```
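Because the generator is deterministic, seeding is what makes results reproducible — re-seeding with the same value restarts the exact same stream:

```python
import random

random.seed(25)
first = [random.randint(0, 100) for _ in range(3)]

random.seed(25)              # re-seeding restarts the stream
second = [random.randint(0, 100) for _ in range(3)]

print(first == second)       # True: identical sequences
```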
### Random numbers within a range
```
#initialize the seed to 25
random.seed(25)
#generating random number between 10 and 20 (10 included, 20 excluded)
print(random.randrange(10, 20))
#generating random number between 10 and 20(both included)
print(random.randint(10, 20))
```
### Random element from a sequence
```
#initialize the seed to 2
random.seed(2)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing an element from the sequence
random.choice(myseq)
```
### Multiple random selections with different possibilities
```
#initialize the seed to 25
random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#random selection of length 15
#10 time higher possibility of selecting 'Towards'
#5 time higher possibility of selecting 'AI'
#2 time higher possibility of selecting 'is'
#2 time higher possibility of selecting 1
random.choices(myseq, weights=[10, 5, 2, 2], k = 15)
```
### Random element from a sequence without replacement
```
#initialize the seed to 25
random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing 2 elements from the sequence
random.sample(myseq, 2)
#initialize the seed to 25
random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing elements from the sequence
#you are trying to choose 35 random elements from a sequence of length 4
#since the selection is without replacement it is not possible and hence the error
random.sample(myseq, 35)
```
### Rearrange the sequence
```
#initialize the seed to 25
random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#rearranging the order of elements of the list
random.shuffle(myseq)
myseq
```
### Floating-point random number
```
#initialize the seed to 25
random.seed(25)
#random float number between 0 and 1
random.random()
```
### Real-valued distributions
```
#initialize the seed to 25
random.seed(25)
#random float number between 10 and 20 (both included)
print(random.uniform(10, 20))
#random float number mean 10 standard deviation 4
print(random.gauss(10, 4))
```
## Generating pseudorandom numbers with Numpy
```
#importing numpy
import numpy as np
```
### Uniform distributed floating values
```
#initialize the seed to 25
np.random.seed(25)
#single uniformly distributed random number
np.random.rand()
#initialize the seed to 25
np.random.seed(25)
#uniformly distributed random numbers of length 10: 1-D array
np.random.rand(10)
#initialize the seed to 25
np.random.seed(25)
#uniformly distributed random numbers of 2 rows and 3 columns: 2-D array
np.random.rand(2, 3)
```
### Normal distributed floating values
```
#initialize the seed to 25
np.random.seed(25)
#single normally distributed random number
np.random.randn()
#initialize the seed to 25
np.random.seed(25)
#normally distributed random numbers of length 10: 1-D array
np.random.randn(10)
#initialize the seed to 25
np.random.seed(25)
#normally distributed random numbers of 2 rows and 3 columns: 2-D array
np.random.randn(2, 3)
```
### Uniformly distributed integers in a given range
```
#initialize the seed to 25
np.random.seed(25)
#single uniformly distributed random integer between 10 and 20
np.random.randint(10, 20)
#initialize the seed to 25
np.random.seed(25)
#uniformly distributed random integer between 0 to 100 of length 10: 1-D array
np.random.randint(100, size=(10))
#initialize the seed to 25
np.random.seed(25)
#uniformly distributed random integer between 0 to 100 of 2 rows and 3 columns: 2-D array
np.random.randint(100, size=(2, 3))
```
### Random elements from a defined list
```
#initialize the seed to 25
np.random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing an element from the sequence
np.random.choice(myseq)
#initialize the seed to 25
np.random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing elements from the sequence: 2-D array
np.random.choice(myseq, size=(2, 3))
#initialize the seed to 25
np.random.seed(25)
#setting up the sequence
myseq = ['Towards', 'AI', 'is', 1]
#randomly choosing elements from the sequence with defined probabilities
#The probability for the value to be 'Towards' is set to be 0.1
#The probability for the value to be 'AI' is set to be 0.6
#The probability for the value to be 'is' is set to be 0.05
#The probability for the value to be 1 is set to be 0.25
#0.1 + 0.6 + 0.05 + 0.25 = 1
np.random.choice(myseq, p=[0.1, 0.6, 0.05, 0.25], size=(2, 3))
```
### Binomial distributed values
```
#initialize the seed to 25
np.random.seed(25)
#10 number of trials with probability of 0.5 each
np.random.binomial(n=10, p=0.5, size=10)
```
### Poisson Distribution values
```
#initialize the seed to 25
np.random.seed(25)
#rate 2 and size 10
np.random.poisson(lam=2, size=10)
```
### Chi Square distribution
```
#initialize the seed to 25
np.random.seed(25)
#degree of freedom 2 and size (2, 3)
np.random.chisquare(df=2, size=(2, 3))
```
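For completeness: NumPy 1.17+ recommends the `Generator` API over the legacy global `np.random.*` functions used above. A brief sketch:

```python
import numpy as np

# Each Generator carries its own state, so seeding stays local,
# unlike the global np.random.seed() used in the cells above
rng = np.random.default_rng(25)

sample = rng.integers(10, 20, size=5)   # uniform ints in [10, 20)
floats = rng.random(3)                  # uniform floats in [0, 1)
normals = rng.normal(10, 4, size=3)     # mean 10, standard deviation 4
```

Note one API difference: `Generator.integers` excludes the upper bound by default (pass `endpoint=True` to include it), whereas the legacy `np.random.randint` always excludes it.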
|
github_jupyter
|