# Using starry process as a prior
Most of the tutorials here focus on doing inference on the statistical properties of star spots from large ensemble analyses. But what if we know (or think we know) the properties of the spots of a given star? Then we can use the GP to constrain the actual surface map of the body. This tutorial shows how to compute the mean and covariance of the GP in both spherical harmonic space and pixel space; these can be used as informative priors when mapping individual stars.
```
try:
    from IPython import get_ipython
    get_ipython().run_line_magic("run", "notebook_config.py")
except:
    import warnings
    warnings.warn("Can't execute `notebook_config.py`.")
from IPython.display import display, Markdown
from starry_process.defaults import defaults
```
## Setup
```
from starry_process import StarryProcess
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import theano
import theano.tensor as tt
```
Let's instantiate a `StarryProcess` with all parameters set to their default values.
```
sp = StarryProcess()
```
## Prior in spherical harmonic space
Computing the GP prior in spherical harmonic space is easy. The GP mean is given by
```
mean = sp.mean_ylm.eval()
mean.shape
```
where its length is just the number of spherical harmonic coefficients at the default maximum degree of the expansion,
$$
N = (l_\mathrm{max} + 1)^2 = (15 + 1)^2 = 256
$$
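As a quick aside, the "flattened spherical harmonic index" on the x-axis below follows the standard ordering $n = l^2 + l + m$. A small self-contained sketch of that convention (the helper name here is illustrative, not part of the `starry_process` API):

```python
# Sketch of the usual flattened spherical-harmonic indexing, n = l**2 + l + m,
# enumerating degrees l = 0 .. lmax and orders m = -l .. l.

def ylm_index(l, m):
    """Flattened index of the (l, m) spherical harmonic coefficient."""
    return l * (l + 1) + m

lmax = 15
N = (lmax + 1) ** 2
print(N)  # 256 coefficients at the default maximum degree

# The indices enumerate every (l, m) pair exactly once, in order:
indices = [ylm_index(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
assert indices == list(range(N))
```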
We can plot this as a function of coefficient index:
```
plt.plot(mean)
plt.ylim(-0.02, 0.045)
plt.xlabel("flattened spherical harmonic index")
plt.ylabel("GP mean")
plt.show()
```
This very regular pattern corresponds to the 2-band structure of the process: a band of spots at $\pm 30^\circ$ latitude. We'll see in the next section what this actually looks like in pixel space.
The GP covariance may be computed from
```
cov = sp.cov_ylm.eval()
cov.shape
```
It's a matrix, which we can also visualize. We'll limit the plot to spherical harmonic degrees up to $l = 8$ (the first 81 coefficients), since it's a pretty big matrix:
```
fig, ax = plt.subplots(1, 2)
im = ax[0].imshow(cov[:81, :81])
plt.colorbar(im, ax=ax[0])
ax[0].set_title("covariance")
im = ax[1].imshow(np.log10(np.abs(cov[:81, :81])), vmin=-15)
plt.colorbar(im, ax=ax[1])
ax[1].set_title(r"$\log_{10}|\mathrm{covariance}|$")
plt.show()
```
The structure certainly isn't trivial: it encodes everything about the size, location, contrast, and number of spots.
Now that we have the GP mean vector ``mean`` and the GP covariance matrix ``cov``, we effectively have a prior for doing inference. This is useful when mapping stellar surfaces with the ``starry`` code, which accepts a spherical harmonic mean vector and covariance matrix as a prior (see [here](https://luger.dev/starry/v1.0.0/notebooks/EclipsingBinary_Linear.html#Linear-solve)).
## Prior in pixel space
For some applications (particularly those not using ``starry``), it may be useful to compute the prior in pixel space. This is helpful if one is attempting to map the stellar surface directly in the pixel basis (i.e., the model is computed on a gridded stellar surface, and the model parameters are the actual pixel intensities). Since there is a linear relationship between spherical harmonic coefficients and pixels, it is very easy to convert between the two.
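Because the map is linear, the pixel-space mean and covariance follow from the spherical-harmonic ones by the usual rules for linear transformations: if `A` is the design matrix that evaluates the spherical harmonics at each grid point, then the mean maps as `A @ mean_ylm` and the covariance as `A @ cov_ylm @ A.T`. A minimal numpy sketch of that algebra, with a random stand-in for `A` (in practice `StarryProcess` constructs the real design matrix internally in `mean_pix` / `cov_pix`):

```python
import numpy as np

# Placeholder sizes: N spherical harmonic coefficients, P grid points.
# A is a random stand-in for the matrix of spherical harmonics evaluated
# at each (lat, lon) grid point -- only the algebra is the point here.
N, P = 16, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((P, N))

mean_ylm = rng.standard_normal(N)
L = rng.standard_normal((N, N))
cov_ylm = L @ L.T  # any symmetric positive semi-definite matrix will do

# A linear map y = A x sends the mean to A mu and the covariance to A C A^T
mean_pix = A @ mean_ylm
cov_pix = A @ cov_ylm @ A.T

print(mean_pix.shape, cov_pix.shape)  # (50,) (50, 50)
```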
To visualize the GP mean in pixel space, let's create a grid of latitude-longitude points in degrees:
```
lat = np.linspace(-90, 90, 50)
lon = np.linspace(-180, 180, 100)
```
Let's turn this into a vector of ``(lat, lon)`` tuples...
```
latlon = np.transpose(np.meshgrid(lat, lon))
```
and feed it into ``sp.mean_pix`` to compute the process mean:
```
mean = sp.mean_pix(latlon).eval()
mean.shape
```
The mean computed by ``StarryProcess`` is flattened, so we can unravel it back into the dimensions of our grid to visualize it:
```
plt.imshow(mean.reshape(50, 100), origin="lower", extent=(-180, 180, -90, 90))
plt.colorbar()
plt.xlabel("longitude [degrees]")
plt.ylabel("latitude [degrees]")
plt.show()
```
The prior mean corresponds to dark bands at mid-latitudes. Even though ``StarryProcess`` models circular spots, it is a longitudinally isotropic process, so there's no preferred longitude at which to place the spots. The prior mean is therefore just a spot that's been "smeared out" longitudinally. All of the information about how spots emerge from this pattern is encoded in the covariance matrix (see below).
You can experiment with passing different values for the spot latitude parameters when instantiating the ``StarryProcess`` to see how that affects the mean.
The covariance may be computed from
```
cov = sp.cov_pix(latlon).eval()
cov.shape
```
Again, this is flattened. Let's attempt to visualize it (again restricting to the first few hundred pixels):
```
plt.imshow(cov[:500, :500])
plt.colorbar()
plt.show()
```
That looks pretty wonky! In general, it's much harder to visualize covariances in pixel space, since the covariance is inherently 4-dimensional (it relates every pair of grid points, each with a latitude and a longitude). We can settle instead for visualizing the *variance*, which is 2-dimensional and tells us how much scatter there is at every point on the grid when we sample from the prior:
```
plt.imshow(np.diag(cov).reshape(50, 100))
plt.colorbar()
plt.show()
```
We see the same banded structure as before, but now we have *positive* values in the bands and values close to zero outside of the bands. This is exactly what we'd expect: the variance is high within the bands (that's where all the spots live, and where we expect the samples to differ from each other) and zero outside (where the surface should be close to the unspotted mean level).
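Given the mean vector and covariance matrix, drawing prior samples is just a multivariate-normal draw. Below is the generic numpy recipe with small *synthetic* stand-ins for the real `mean` and `cov` (this is not the library's own sampler, just the textbook operation it corresponds to):

```python
import numpy as np

rng = np.random.default_rng(42)

# Small synthetic stand-ins for the flattened pixel-space mean and covariance
n_pix = 100
mean = np.zeros(n_pix)
L = 0.1 * rng.standard_normal((n_pix, n_pix))
cov = L @ L.T + 1e-12 * np.eye(n_pix)  # jitter for numerical stability

# Draw 5 samples from the prior; each row is one flattened surface realization
samples = rng.multivariate_normal(mean, cov, size=5)
print(samples.shape)  # (5, 100)
```

Each row could then be reshaped back onto the latitude-longitude grid for plotting, exactly as we did for the mean above.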
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('/Users/palmer/Documents/python_codebase/')
from pyMS.centroid_detection import gradient
from pyImagingMSpec.hdf5.inMemoryIMS_hdf5 import inMemoryIMS_hdf5
from pyImagingMSpec.image_measures import level_sets_measure
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import bokeh as bk
from bokeh.plotting import output_notebook
output_notebook()
filename_in = '/Volumes/alexandr/shared/Luca/20150824_ADP_LR_Finger_Print_FullScan_DHBsub_80x80_15um.hdf5' #using a temporary hdf5 based format
save_dir='/Volumes/alexandr/shared/Luca/20150824_ADP_LR_Finger_Print_FullScan_DHBsub_80x80_15um_images'
# Parse data
IMS_dataset=inMemoryIMS_hdf5(filename_in)
ppm = 0.75
# Generate mean spectrum
#hist_axis,mean_spec =IMS_dataset.generate_summary_spectrum(summary_type='mean')
hist_axis,freq_spec =IMS_dataset.generate_summary_spectrum(summary_type='freq',ppm=ppm/2)
#p1 = bk.plotting.figure()
#p1.line(hist_axis,mean_spec/np.max(mean_spec),color='red')
#p1.line(hist_axis,freq_spec/np.max(freq_spec),color='orange')
#bk.plotting.show(p1)
print(len(hist_axis))
plt.figure(figsize=(20,10))
plt.plot(hist_axis,freq_spec)
plt.show()
# Centroid detection of frequency spectrum
mz_list,count_list,idx_list = gradient(np.asarray(hist_axis),np.asarray(freq_spec),weighted_bins=2)
c_thresh=0.05
moc_thresh=0.99
print(np.sum(count_list>c_thresh))
# Calculate MoC for images of all peaks
nlevels=30
im_list={}
for ii, c in enumerate(count_list):
    if c>c_thresh:
        ion_image = IMS_dataset.get_ion_image(np.asarray([mz_list[ii],]),ppm)
        im=ion_image.xic_to_image(0)
        m,im_moc,levels,nobjs = level_sets_measure.measure_of_chaos(im,nlevels,interp='median') #just output measure value
        m=1-m
        im_list[mz_list[ii]]={'image':im,'moc':m,'freq':c}
from pySpatialMetabolomics.tools import colourmaps
c_map = colourmaps.get_colormap('viridis') #if black images: open->save->rerun
c_pal=[[int(255*cc) for cc in c_map(c)] for c in range(0,254)]
# Export all images
import png as pypng
for mz in im_list:
    if im_list[mz]['moc']>moc_thresh:
        with open('{}/{}_{}.png'.format(save_dir,mz,im_list[mz]['moc']),'wb') as f_out:
            im_out = im_list[mz]['image']
            im_out = 254*im_out/np.max(im_out)
            n_rows, n_cols = np.shape(im_out)
            writer = pypng.Writer(n_cols, n_rows, palette=c_pal, bitdepth=8)  # Writer takes (width, height)
            writer.write(f_out, im_out.astype(np.uint8))
mz=333.334188269
ion_image = IMS_dataset.get_ion_image(np.asarray([mz,]),ppm)
im_out=ion_image.xic_to_image(0)
m,im_moc,levels,nobjs = level_sets_measure.measure_of_chaos(im_out,nlevels,interp='') #just output measure value
print(1-m)
im_out = 254.*im_out/np.max(im_out)
print(mz)
#print im_list[mz]['moc']
plt.figure()
plt.imshow(im_moc)
plt.show()
```
<a href="https://colab.research.google.com/github/lionelsamrat10/machine-learning-a-to-z/blob/main/Classification/Naive%20Bayes%20Classification/naive_bayes_samrat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Naive Bayes (Non Linear Classifier)
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train)
print(y_train)
print(X_test)
print(y_test)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
```
## Training the Naive Bayes model on the Training set
```
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
```
## Predicting a new result
```
print(classifier.predict(sc.transform([[30, 87000]])))
```
## Predicting the Test set results
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
```
## Making the Confusion Matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred) # The accuracy we are getting is 90% for our model
```
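As a sanity check on the numbers above, accuracy can be read straight off the confusion matrix: the correct predictions sit on its diagonal. A small sketch with a made-up matrix (the counts are illustrative, not this model's actual results):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows are true classes, columns predictions
cm = np.array([[65, 3],
               [7, 25]])

# Accuracy is the fraction of all predictions that land on the diagonal
accuracy = np.trace(cm) / np.sum(cm)
print(accuracy)  # 0.9
```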
## Visualising the Training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
## Visualising the Test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
# Importing Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.offline as py
import plotly.graph_objects as go
%matplotlib inline
import nltk
import re
import warnings
train_df= pd.read_csv("https://raw.githubusercontent.com/sharmaroshan/Twitter-Sentiment-Analysis/master/train_tweet.csv")
test_df= pd.read_csv("https://raw.githubusercontent.com/sharmaroshan/Twitter-Sentiment-Analysis/master/train_tweet.csv")
train_df.head()
test_df.head()
train_df.isnull().any()
test_df.isnull().any()
# checking out the negative comments from the train set
train_df[train_df["label"]== 0].head()
# checking out the postive comments from the train set
train_df[train_df["label"]== 1].head()
train_df["label"].value_counts().plot.bar(color= "purple", figsize= (6, 4))
# checking the distribution of tweets in the data
length_train = train_df['tweet'].str.len().plot.hist(color = 'pink', figsize = (6, 4))
length_test = test_df['tweet'].str.len().plot.hist(color = 'orange', figsize = (6, 4))
# adding a column to represent the length of the tweet
train_df["len"]= train_df["tweet"].str.len()
test_df["len"]= test_df["tweet"].str.len()
train_df.head()
train_df.groupby("label").describe()
train_df.groupby('len').mean()['label'].plot.hist(color = 'black', figsize = (6, 4),)
plt.title('variation of length')
plt.xlabel('Length')
plt.show()
from sklearn.feature_extraction.text import CountVectorizer
cv= CountVectorizer(stop_words= "english")
words= cv.fit_transform(train_df.tweet)
sum_words= words.sum(axis= 0)
words_freq= [(word,sum_words[0,i]) for word, i in cv.vocabulary_.items()]
words_freq= sorted(words_freq, key=lambda x: x[1], reverse= True)
frequency= pd.DataFrame(words_freq, columns= ["word", "freq"])
frequency.head(30).plot(x= "word", y="freq", kind="bar", figsize= (15,7), color="b")
plt.title("Most Frequently Occurring Words - Top 30")
plt.show()
from wordcloud import WordCloud
wordcloud= WordCloud(background_color= "white",
width= 1000,
height= 1000).generate_from_frequencies(dict(words_freq))
plt.figure(figsize=(15,7))
plt.imshow(wordcloud)
plt.title("WordCloud - Vocabulary from Reviews", fontsize = 22)
plt.show()
normal_words= " ".join([text for text in train_df["tweet"][train_df["label"]== 0]])
wordcloud= WordCloud(random_state= 0,
background_color= "black",
height= 1000,
width= 1000,
max_font_size= 110).generate(normal_words)
plt.figure(figsize=(15, 7))
plt.imshow(wordcloud, interpolation= "bilinear")
plt.axis("off")
plt.title("The Neutral Words")
plt.show()
normal_words= " ".join([text for text in train_df["tweet"][train_df["label"]== 1]])
wordcloud= WordCloud(random_state= 0,
background_color= "cyan",
height= 1000,
width= 1000,
max_font_size= 110).generate(normal_words)
plt.figure(figsize=(15, 7))
plt.imshow(wordcloud, interpolation= "bilinear")
plt.axis("off")
plt.title("The Negative Words")
plt.show()
#Collecting the hashtags
def hashtag_extract(x):
    hashtags= []
    # loop over the tweets and collect every "#word" pattern
    for i in x:
        ht= re.findall(r"#(\w+)", i)
        hashtags.append(ht)
    return hashtags
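# Quick sanity check of the regex above on a made-up tweet (illustrative only):
# "#(\w+)" captures the word characters following each '#'.
assert re.findall(r"#(\w+)", "loving #python and #nlp today") == ["python", "nlp"]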
# extracting hashtags from non racist/sexist tweets
HT_regular= hashtag_extract(train_df["tweet"][train_df["label"]==0])
# extracting hashtags from racist/sexist tweets
HT_negative= hashtag_extract(train_df["tweet"][train_df["label"]==1])
# unnesting list
HT_regular= sum(HT_regular, [])
HT_negative= sum(HT_negative, [])
a = nltk.FreqDist(HT_regular)
d = pd.DataFrame({'Hashtag': list(a.keys()),
'Count': list(a.values())})
# selecting top 20 most frequent hashtags
d = d.nlargest(columns="Count", n = 20)
plt.figure(figsize=(16,5))
ax = sns.barplot(data=d, x= "Hashtag", y = "Count")
ax.set(ylabel = 'Count')
plt.show()
a = nltk.FreqDist(HT_negative)
d = pd.DataFrame({'Hashtag': list(a.keys()),
'Count': list(a.values())})
# selecting top 20 most frequent hashtags
d = d.nlargest(columns="Count", n = 20)
plt.figure(figsize=(16,5))
ax = sns.barplot(data=d, x= "Hashtag", y = "Count")
ax.set(ylabel = 'Count')
plt.show()
# tokenizing the words present in the training set
tokenized_tweet= train_df["tweet"].apply(lambda x: x.split())
# importing gensim
import gensim
# creating a word to vector model
model_w2v= gensim.models.Word2Vec(
tokenized_tweet,
vector_size= 200, # desired no. of features/independent variables
window= 5, # context window size
min_count= 2,
sg= 1, # 1 for skip-gram model
hs= 0,
negative= 10, # for negative sampling
workers= 2, # no.of cores
seed= 34
)
model_w2v.train(tokenized_tweet, total_examples= len(train_df["tweet"]), epochs= 20)
print(tokenized_tweet)
model_w2v.wv.most_similar(positive= "dinner")
model_w2v.wv.most_similar(positive="cancer")
model_w2v.wv.most_similar(positive= "apple")
model_w2v.wv.most_similar(negative= "hate")
from tqdm import tqdm
tqdm.pandas(desc= "progress-bar")
from gensim.models.doc2vec import TaggedDocument
def add_label(twt):
    output= []
    for i, s in zip(twt.index, twt):
        output.append(TaggedDocument(s, ["tweet_" + str(i)]))
    return output
# label all the tweets
labeled_tweets= add_label(tokenized_tweet)
labeled_tweets[:6]
# removing unwanted patterns from the data
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
train_corpus= []
for i in range(0, 31962):
    review= re.sub("[^a-zA-Z]", " ", train_df["tweet"][i])
    review= review.lower()
    review= review.split()
    ps= PorterStemmer()
    # stemming
    review= [ps.stem(word) for word in review if not word in set(stopwords.words("english"))]
    # joining them back with space
    review= " ".join(review)
    train_corpus.append(review)
test_corpus= []
for i in range(0, 17197):
    review= re.sub("[^a-zA-Z]", " ",test_df["tweet"][i])
    review= review.lower()
    review= review.split()
    ps= PorterStemmer()
    # stemming
    review= [ps.stem(word) for word in review if not word in set(stopwords.words("english"))]
    # joining them back with space
    review= " ".join(review)
    test_corpus.append(review)
#Create a bag of words
from sklearn.feature_extraction.text import CountVectorizer
cv= CountVectorizer(max_features= 2500)
x= cv.fit_transform(train_corpus).toarray()
y= train_df["label"] # the sentiment label; note the last column is now "len", so iloc[:, -1] would grab the wrong one
print(x.shape)
print(y.shape)
# transforming the test corpus with the vectorizer fitted on the training corpus,
# so the test features align with the training vocabulary
x_test= cv.transform(test_corpus).toarray()
print(x_test.shape)
# splitting the training data into train and valid sets
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid= train_test_split(x, y, test_size= 0.25, random_state= 42)
print(x_train.shape)
print(x_valid.shape)
print(y_train.shape)
print(y_valid.shape)
# standardization
from sklearn.preprocessing import StandardScaler
sc= StandardScaler()
x_train= sc.fit_transform(x_train)
x_valid= sc.transform(x_valid)
x_test= sc.transform(x_test)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
model = RandomForestClassifier()
model.fit(x_train, y_train)
y_pred = model.predict(x_valid)
print("Training Accuracy :", model.score(x_train, y_train))
print("Validation Accuracy :", model.score(x_valid, y_valid))
# calculating the f1 score for the validation set
print("F1 score :", f1_score(y_valid, y_pred, average='micro'))
# confusion matrix
cm = confusion_matrix(y_valid, y_pred)
print(cm)
from sklearn.linear_model import LogisticRegression
model= LogisticRegression()
model.fit(x_train, y_train)
y_pred= model.predict(x_valid)
print("Training accuracy: ",model.score(x_train, y_train))
print("Validation accuracy: ",model.score(x_valid, y_valid))
# calculating the f1 score for the validation set
print("F1 score: ",f1_score(y_valid, y_pred, average='micro'))
# confusion matrix
cm= confusion_matrix(y_valid, y_pred)
print(cm)
from sklearn.svm import SVC
model= SVC()
model.fit(x_train, y_train)
y_pred= model.predict(x_valid)
print("Training accuracy: ",model.score(x_train, y_train))
print("Validation accuracy: ",model.score(x_valid, y_valid))
# calculating the f1 score for the validation set
print("F1 score: ",f1_score(y_valid, y_pred, average='micro'))
# confusion matrix
cm= confusion_matrix(y_valid, y_pred)
print(cm)
from xgboost import XGBClassifier
model= XGBClassifier()
model.fit(x_train, y_train)
y_pred= model.predict(x_valid)
print("Training accuracy: ",model.score(x_train, y_train))
print("Validation accuracy: ",model.score(x_valid, y_valid))
# calculating the f1 score for the validation set
print("F1 score: ",f1_score(y_valid, y_pred, average='micro'))
# confusion matrix
cm= confusion_matrix(y_valid, y_pred)
print(cm)
```
<a href="https://colab.research.google.com/github/ahmedhisham73/deep_learningtuts/blob/master/DataAugmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
```
we will start testing on cats vs dogs dataset
```
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```
creating training and validation image data generator with subdirectories
```
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
```
creating the Deep neural network
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
```
creating the optimizer
```
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['acc'])
```
data validation and training
```
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit(  # fit_generator is deprecated in TF 2; fit accepts generators
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
```
plotting accuracy vs loss
```
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training_accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation_accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training_Loss')
plt.plot(epochs, val_loss, 'b', label='Validation_Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Why does overfitting occur?
Overfitting refers to a model that models the training data too well.
Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts its performance on new data. The noise and random fluctuations in the training data are picked up and learned as concepts by the model, but these concepts do not apply to new data and hurt the model's ability to generalize.
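A tiny self-contained illustration of the same effect, independent of the cats-vs-dogs model: fitting polynomials of increasing degree to noisy samples of a linear trend drives the training error down while (typically) worsening the error on held-out points, because the high-degree fit chases the noise. All names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying linear trend
x_tr = np.linspace(0, 1, 20)
y_tr = 2 * x_tr + rng.normal(0, 0.3, x_tr.size)
x_te = np.linspace(0.025, 0.975, 20)  # held-out points in the same range
y_te = 2 * x_te + rng.normal(0, 0.3, x_te.size)

results = {}
for degree in (1, 15):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    train_mse = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# The degree-15 fit lowers the training error by memorizing the noise --
# exactly the failure mode that data augmentation below is meant to combat.
```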
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['acc'])
# This code has changed. Now instead of the ImageGenerator just rescaling
# the image, we also rotate and do other operations
# Updated to do image augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit(  # fit_generator is deprecated in TF 2; fit accepts generators
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training_accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation_accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training_Loss')
plt.plot(epochs, val_loss, 'b', label='Validation_Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
## Setup
This section installs required packages, and initializes some imports and helper functions to keep the notebook code below neater.
```
#!pip uninstall tensorflow -yq
#!pip install tensorflow-gpu>=2.0 gpustat -Uq
# GPU selection
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
from IPython.display import display, HTML
def export_html(result, max_activation):
    output = ""
    max_activation += 1e-8
    for line in result:
        word, activation = line
        if activation > 0:
            activation = activation / max_activation
            colour = str(int(255 - activation * 255))
            tag_open = "<span style='background-color: rgb(255," + colour + "," + colour + ");'>"
        else:
            activation = -1 * activation / max_activation
            colour = str(int(255 - activation * 255))
            tag_open = "<span style='background-color: rgb(" + colour + "," + colour + ",255);'>"
        tag_close = "</span>"
        tag = " ".join([tag_open, word, tag_close])
        output = output + tag
    return output
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = "retina"
import tensorflow.compat.v2 as tf
from tensorflow.keras import layers
def train_simple_lm(model, x_train, y_train, verbose=2, test=False):
    start_time = time.time()

    print("[Phase 1/3] Warming up...")
    opt = tf.keras.optimizers.Adam(learning_rate=0.001)
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt,
                  metrics=["acc"])
    history_1 = model.fit(x_train, y_train, epochs=3,
                          batch_size=128, shuffle=False,
                          callbacks=[], verbose=verbose)
    scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
    print(" - Loss:", scores[0])
    print(" - Acc: ", scores[1])

    if not test:
        print("[Phase 2/3] Fast training...")
        opt = tf.keras.optimizers.Adam(learning_rate=0.01)
        model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=opt,
                      metrics=["acc"])
        early_stop = tf.keras.callbacks.EarlyStopping(monitor='acc',
                                                      restore_best_weights=True,
                                                      patience=3)
        history_2 = model.fit(x_train, y_train, epochs=100,
                              batch_size=256, shuffle=False,
                              callbacks=[early_stop], verbose=verbose)
        scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
        print(" - Loss:", scores[0])
        print(" - Acc: ", scores[1])

        print("[Phase 3/3] Train to convergence...")
        opt = tf.keras.optimizers.Adam(learning_rate=0.001)
        model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=opt,
                      metrics=["acc"])
        early_stop = tf.keras.callbacks.EarlyStopping(monitor='acc',
                                                      restore_best_weights=True,
                                                      patience=3)
        history_3 = model.fit(x_train, y_train, epochs=200,
                              batch_size=256, shuffle=True,
                              callbacks=[early_stop], verbose=verbose)
        scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
        print(" - Loss:", scores[0])
        print(" - Acc: ", scores[1])

        opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
        model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=opt,
                      metrics=["acc"])
        early_stop = tf.keras.callbacks.EarlyStopping(monitor='acc',
                                                      restore_best_weights=True,
                                                      patience=10)
        history_4 = model.fit(x_train, y_train, epochs=200,
                              batch_size=128, shuffle=True,
                              callbacks=[early_stop], verbose=verbose)
        scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
        print(" - Loss:", scores[0])
        print(" - Acc: ", scores[1])

        log_x = history_1.history['loss'] + history_2.history['loss'] + history_3.history['loss'] + history_4.history['loss']
        plt.plot(log_x)
        plt.ylabel('Loss')
        plt.xlabel('Epoch')
        plt.show()
    else:
        log_x = history_1.history['loss']
        plt.plot(log_x)
        plt.ylabel('Loss')
        plt.xlabel('Epoch')
        plt.show()

    end_time = time.time()
    print("Done! Training took", int(end_time - start_time), "seconds")
    return model
```
# Exploring RNNs
In this notebook, we will train an **LSTM** and a vanilla **RNN** (Keras `SimpleRNN`) on a small language modelling task and visualize how an LSTM or RNN works when learning how to model sequences.
We will visualize the **activations**, **hidden states** and **information dependency** inside these models.
```
!gpustat
seq_len = 64
model_dim = 128
```
## Load Text Data
We will load a short paragraph from Wikipedia about NVIDIA.
The goal here is to train an LSTM and RNN to autocomplete the passage.
```
#text = "Nvidia Corporation is more commonly referred to as Nvidia. It was formerly stylized as nVidia on products from the mid 90s to early 2000s. Nvidia is an American technology company incorporated in Delaware and based in Santa Clara, California. Nvidia designs graphics processing units for the gaming and professional markets, as well as system on a chip units for the mobile computing and automotive market. Nvidia primary GPU product line, labeled GeForce, is in direct competition with Advanced Micro Devices Radeon products. Nvidia expanded its presence in the gaming industry with its handheld Shield Portable, Shield Tablet, and Shield Android TV. Since 2014, Nvidia has diversified its business focusing on four markets: gaming, professional visualization, data centers, and auto. Nvidia is also now focused on artificial intelligence. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists that allow them to efficiently run high performance applications. They are deployed in supercomputing sites around the world. "
text_url = "https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt"
text_path = tf.keras.utils.get_file("shakespeare.txt", text_url)
text_lines = [line.rstrip('\n').strip() for line in open(text_path)]
text = " ".join(text_lines)
text = text.lower().replace("  ", " ").replace(" ", "_")  # collapse double spaces, mark word breaks with "_"
text_len = len(text)
print("Text length:", text_len)
vocab = sorted(set(text))
vocab_size = len(vocab) + 1
print("Vocab size:", vocab_size)
tokenizer = tf.keras.preprocessing.text.Tokenizer(lower=True, char_level=True)
tokenizer.fit_on_texts([text])
tokens = tokenizer.texts_to_sequences([text])[0]
x_train = []
y_train = []
for i in range(0, text_len-seq_len, int(seq_len/2)):
    x_train.append(tokens[i:i+seq_len])
    y_train.append(tokens[i+seq_len])
print("Training examples:", len(y_train))
x_train, y_train = np.asarray(x_train), np.asarray(y_train)
```
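The windowing scheme above can be sketched in isolation: overlapping windows of `seq_len` characters step forward by half a window, and the label for each window is the very next character. The helper `make_windows` below is illustrative only (it is not part of the notebook):

```python
# Minimal sketch of the sliding-window labelling used above, on a toy string.
# Windows of length seq_len step by seq_len // 2; the label is the next character.
def make_windows(tokens, seq_len):
    xs, ys = [], []
    for i in range(0, len(tokens) - seq_len, seq_len // 2):
        xs.append(tokens[i:i + seq_len])
        ys.append(tokens[i + seq_len])
    return xs, ys

toy = list("hello_world")
xs, ys = make_windows(toy, 4)
print(xs[0], ys[0])  # ['h', 'e', 'l', 'l'] o
```

With `seq_len=4`, the first training pair is the window `hell` with label `o`, and the next window starts two characters later at `llo_`.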
## Build LSTM model
```
TRAIN = False
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False)
l_input = layers.Input(shape=(seq_len,))
l_embed = layers.Embedding(vocab_size, model_dim)(l_input)
l_rnn_1, state_h, state_c = layers.LSTM(model_dim,
                                        return_state=True,
                                        return_sequences=False)(l_embed)
preds = layers.Dense(vocab_size,
                     activation="softmax")(l_rnn_1)
model = tf.keras.models.Model(inputs=l_input, outputs=preds)
model.summary()
if TRAIN:
    model = train_simple_lm(model, x_train, y_train, verbose=1)
    model.save_weights("lstm.h5")
else:
    print("Loading pretrained LSTM model:")
    model_url = "https://github.com/OpenSUTD/machine-learning-workshop/releases/download/v0.0.03/lstm.h5"
    model_path = tf.keras.utils.get_file("lstm_large.h5", model_url)
    model.load_weights(model_path)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["acc"])
scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
print(" - Loss:", scores[0])
print(" - Acc: ", scores[1])
def plot_dependency(n):
    input_text = text[n:n+seq_len]
    input_tokens = tokenizer.texts_to_sequences([input_text])
    label = [text[n+seq_len]]
    label = tokenizer.texts_to_sequences([label])
    loss = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(input_tokens, dtype=tf.float32)
    y_true = tf.convert_to_tensor(label, dtype=tf.float32)
    with tf.GradientTape() as g:
        g.watch(x)
        y = model(x)
        loss_value = loss(y_true, y)
    grads = g.gradient(loss_value, model.trainable_weights)
    # gradients w.r.t. the embedding matrix come back as IndexedSlices,
    # hence the .values; sum over the embedding axis for a per-character score
    input_grads = grads[0].values.numpy()
    input_grads = np.sum(np.abs(input_grads), axis=-1)
    result = zip(input_text, input_grads)
    output = export_html(result, max(input_grads))
    output = output + " -> " + text[n+seq_len]
    output = "<tt>" + output + "</tt>"
    display(HTML(output))

s = 200000
for i in range(s, s+20):
    plot_dependency(i)
```
## Build RNN model
```
TRAIN = False
# improve vanilla RNN training speed
# LSTM doesn't need this since it has a cuDNN implementation
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True)
unroll = True
l_input = layers.Input(shape=(seq_len,))
l_embed = layers.Embedding(vocab_size, model_dim)(l_input)
l_rnn_1, h = layers.SimpleRNN(model_dim,
                              unroll=unroll,
                              return_state=True,
                              return_sequences=False)(l_embed)
preds = layers.Dense(vocab_size,
                     activation="softmax")(l_rnn_1)
model = tf.keras.models.Model(inputs=l_input, outputs=preds)
model.summary()
if TRAIN:
    model = train_simple_lm(model, x_train, y_train, verbose=2)
    model.save_weights("rnn.h5")
else:
    print("Loading pretrained RNN model:")
    model_url = "https://github.com/OpenSUTD/machine-learning-workshop/releases/download/v0.0.03/rnn.h5"
    model_path = tf.keras.utils.get_file("rnn_large.h5", model_url)
    model.load_weights(model_path)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["acc"])
scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
print(" - Loss:", scores[0])
print(" - Acc: ", scores[1])
for i in range(s, s+20):
    plot_dependency(i)
```
## Bi-LSTM
```
TRAIN = False
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False)
l_input = layers.Input(shape=(seq_len,))
l_embed = layers.Embedding(vocab_size, model_dim)(l_input)
l_rnn_1 = layers.Bidirectional(layers.LSTM(model_dim,
                                           return_sequences=False))(l_embed)
preds = layers.Dense(vocab_size,
                     activation="softmax")(l_rnn_1)
model = tf.keras.models.Model(inputs=l_input, outputs=preds)
model.summary()
if TRAIN:
    model = train_simple_lm(model, x_train, y_train, verbose=2)
    model.save_weights("bilstm.h5")
else:
    print("Loading pretrained Bi-LSTM model:")
    model_url = "https://github.com/OpenSUTD/machine-learning-workshop/releases/download/v0.0.03/bilstm.h5"
    model_path = tf.keras.utils.get_file("bilstm_large.h5", model_url)
    model.load_weights(model_path)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["acc"])
scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
print(" - Loss:", scores[0])
print(" - Acc: ", scores[1])
for i in range(s, s+20):
    plot_dependency(i)
```
## Stacked LSTM
```
TRAIN = False
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False)
l_input = layers.Input(shape=(seq_len,))
l_embed = layers.Embedding(vocab_size, model_dim)(l_input)
l_embed = layers.LSTM(model_dim,
                      return_sequences=True)(l_embed)
l_embed = layers.LSTM(model_dim,
                      return_sequences=True)(l_embed)
l_rnn_1 = layers.LSTM(model_dim,
                      return_sequences=False)(l_embed)
preds = layers.Dense(vocab_size,
                     activation="softmax")(l_rnn_1)
model = tf.keras.models.Model(inputs=l_input, outputs=preds)
model.summary()
if TRAIN:
    model = train_simple_lm(model, x_train, y_train, verbose=2)
    model.save_weights("deeplstm.h5")
else:
    print("Loading pretrained stacked LSTM model:")
    model_url = "https://github.com/OpenSUTD/machine-learning-workshop/releases/download/v0.0.03/deeplstm.h5"
    model_path = tf.keras.utils.get_file("deeplstm_large.h5", model_url)
    model.load_weights(model_path)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["acc"])
scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
print(" - Loss:", scores[0])
print(" - Acc: ", scores[1])
for i in range(s, s+20):
    plot_dependency(i)
from xfmers import layers, utils, ops
TRAIN = False
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True)
inputs = tf.keras.Input(shape=(None, ))
padding_mask = layers.PaddingMaskGenerator()(inputs)
embeddings = layers.TokenPosEmbedding(d_vocab=vocab_size, d_model=model_dim, pos_length=seq_len)(inputs)
decoder_block = layers.TransformerStack(layers=3,
                                        ff_units=model_dim*4,
                                        d_model=model_dim,
                                        num_heads=4,
                                        dropout=0.01,
                                        causal=True,
                                        layer_norm="double",
                                        activation=ops.gelu,
                                        weight_sharing=False,
                                        name="DecoderBlock")
dec_outputs = decoder_block({"token_inputs": embeddings,
                             "mask_inputs": padding_mask})
dec_outputs = dec_outputs[:, -1:]
preds = layers.LMHead(vocab_size=vocab_size)(dec_outputs)
model = tf.keras.Model(inputs=inputs, outputs=preds)
model.summary()
if TRAIN:
    verbose = 1
    opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt,
                  metrics=["acc"])
    history_1 = model.fit(x_train, y_train, epochs=3,
                          batch_size=128, shuffle=False,
                          callbacks=[], verbose=verbose)
    scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
    print(" - Loss:", scores[0])
    print(" - Acc: ", scores[1])
    opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt,
                  metrics=["acc"])
    early_stop = tf.keras.callbacks.EarlyStopping(monitor='acc',
                                                  restore_best_weights=True,
                                                  patience=3)
    history_3 = model.fit(x_train, y_train, epochs=200,
                          batch_size=256, shuffle=True,
                          callbacks=[early_stop], verbose=verbose)
    scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
    print(" - Loss:", scores[0])
    print(" - Acc: ", scores[1])
    log_x = history_1.history['loss'] + history_3.history['loss']
    plt.plot(log_x)
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.show()
    model.save_weights("transformer_xs.h5")
else:
    print("Loading pretrained Transformer model:")
    model_url = "https://github.com/OpenSUTD/machine-learning-workshop/releases/download/v0.0.03/transformer_xs.h5"
    model_path = tf.keras.utils.get_file("transformer_xs_large.h5", model_url)
    model.load_weights(model_path)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["acc"])
scores = model.evaluate(x_train, y_train, batch_size=512, verbose=2)
print(" - Loss:", scores[0])
print(" - Acc: ", scores[1])
for i in range(s, s+20):
    plot_dependency(i)
```
### 1.A Basic HTML
### 1.A.1 Tags
The pieces of HTML documents that carry the commands for the browser are referred to as "tags". Tags are set off from the surrounding text by angle brackets ("<", ">"). Also, most commands come in pairs consisting of an opening and an end tag. For example, when the browser encounters the opening tag "``<b>``", the browser displays all text in bold type until it encounters the end tag "``</b>``".
The HTML sequence "``<b>This is bold.</b>``" thus yields:
<center><b>This is bold.</b></center>
Because tags usually come in pairs, HTML documents have a nested structure when multiple tags are combined. For instance, if one wants to underline part of the above example in HTML one would write "``<b>This <u>is</u> bold.</b>``". In this example, the tag for underline (``<u>``) is nested inside the tag for bold and your browser displays:<br>
<center><b>This <u>is</u> bold.</b></center>
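This paired, nested structure is exactly what an HTML parser sees. As a small sketch, Python's standard-library ``HTMLParser`` reports each opening and closing tag as it scans the text, making the nesting visible (the ``TagLogger`` class is ours, purely for illustration):

```python
# Log the opening/closing tag events the parser encounters in a snippet.
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))

    def handle_endtag(self, tag):
        self.events.append(("close", tag))

p = TagLogger()
p.feed("<b>This <u>is</u> bold.</b>")
print(p.events)  # [('open', 'b'), ('open', 'u'), ('close', 'u'), ('close', 'b')]
```

Note how the ``<u>`` pair opens and closes entirely inside the ``<b>`` pair, mirroring the nesting described above.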
Before looking at a sample HTML document in more detail, it is useful to recognize a few commonly used tags. You will encounter some of them again later on in our tutorial.
<table>
<thead>
<tr>
<th>Tag name</th>
<th>Function</th>
</tr>
</thead>
<tbody>
<tr>
<td><code><head></code></td>
<td>The head of a page includes general information such as its authors, language etc.</td>
</tr>
<tr>
<td><code><body></code></td>
<td>All the content you see in the browser is inside the body tags.</td>
</tr>
<tr>
<td><code><p></code></td>
<td>A paragraph of text.</td>
</tr>
<tr>
<td><code><br></code></td>
<td>A break between two lines of text.</td>
</tr>
<tr>
<td><code><b></code>, <code><i></code>, <code><u></code></td>
<td>Display text in bold, italics or underlined.</td>
</tr>
<tr>
<td><code><a href="gpo.html"></code></td>
<td>A link to the page gpo.html.</td>
</tr>
<tr>
<tr>
<td><code><img src="gpo.jpg"></code></td>
<td>Displays the picture gpo.jpg.</td>
</tr>
<tr>
<td><code><table></code>, <code><tr></code>, <code><td></code></td>
<td>A table including its rows (tr) and cells (td).</td>
</tr>
<tr>
<td><code><div></code></td>
<td>Divides an HTML document into different parts. Usually used for layout purposes.</td>
</tr>
<tr>
<td><code><span></code></td>
<td>Similar to <code><div></code> tag, but only for small parts of the code.</td>
</tr>
</tbody></table>
### 1.A.2 Attributes
Some tags include additional information besides the command. See for example the ``<a>`` or the ``<img>`` tags in the table above. This additional information is referred to as an "attribute" of the tag. The hyperlink tag ``<a>`` always contains the URL of the destination page as an attribute. Similarly, the tag ``<img>`` includes the location of the image to display.
Another common role of an attribute is to assign specific IDs to a tag. A specific ID helps the author locate and manipulate special sections of the code more quickly. For example, ID tags are often used to assign special formatting to selected sections which all share a common ID for that purpose.
### 1.A.3 HTML document structure
Now that the basic components of HTML documents are introduced, let's look at how they come together in an HTML document. The example we are going to study is the following excerpt from the above GPO screenshot:
<table class="browse-node-table">
<tr>
<td colspan="2">
<span>
S. Hrg. 113 - DEPARTMENT OF DEFENSE APPROPRIATIONS FOR FISCAL YEAR 2014
</span>
</td>
</tr>
<tr>
<td>
<span class="results-line2">
Appropriation. Wednesday, April 24, 2013.
</span>
</td>
<td>
<a href="https://www.gpo.gov/fdsys/pkg/CHRG-113shrg39104553/pdf/CHRG-113shrg39104553.pdf" target="_blank">
PDF
</a>
|
<a href="https://www.gpo.gov/fdsys/pkg/CHRG-113shrg39104553/html/CHRG-113shrg39104553.htm" target="_blank">
Text
</a>
|
<a href="https://www.gpo.gov/fdsys/search/pagedetails.action?collectionCode=CHRG&browsePath=113%2FSENATE%2FCommittee+on+Appropriations&granuleId=CHRG-113shrg39104553&packageId=CHRG-113shrg39104553&fromBrowse=true" target="_blank">
More
</a>
</td>
</tr>
<tr>
<td colspan="2"> </td>
</tr>
</table>
This excerpt contains the basic information for the first Hearing (as displayed above). It is organized as a table with three rows. The first row consists of only one cell which contains the Hearing title. The second row contains two cells: the left cell includes more information on the Hearing (e.g. the date) and the right cell includes links to the different formats of the Hearing transcript (e.g. the PDF). The third row again consists of only one cell and this cell is left blank.
Now remember that we want to give the computer simple directions such as "Please click on the second link in the right-hand cell of the above table's second row ("Text")." This would be enough for you to locate the link to this Hearing's transcript. Eventually, we are going to write very similar instructions for the program; just using HTML tags rather than the words "table", "cell" or "row".
However, to formulate these directions it does not suffice to give the name of the tag that contains the desired URL. Most HTML tags appear several times inside the same page. There will be an ``<a>`` for every link, a ``<p>`` for every new paragraph, and so on. Using just the tag would be as precise as finding the address of "John Smith" by looking up his name in New York's phone book. To provide precise information, we also need to understand the internal structure of an HTML code, much like the way we would use the street address or area to locate someone with a common name.
For HTML documents, the nested structure can be visualized using a horizontal hierarchy. In this visualization, tags from the same nest are displayed with the same horizontal indentation. For an intuitive introduction to HTML document structure, let's revisit the Hearing excerpt above and look at the tags behind that table with three rows through the following visualization. Before looking at the code for the entire table, let's first understand the top row:
<br>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px;background-color:#ddd"><tr id="this-is-the-1st-row">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td colspan="2" id="this-is-the-only-cell">
<div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">S. Hrg. 113 - DEPARTMENT OF DEFENSE APPROPRIATIONS FOR FISCAL YEAR 2014</div>
</td>
</div>
</tr>
</div>
Note that besides indentation, tags from the same nest are surrounded by the same dashed line. The excerpt starts with the opening tag for a table row (``<tr>``) and ends with the respective closing tag (``</tr>``).
The row includes one cell (``<td>`` → ``</td>``) and in that cell we find the text that we see in our browser.
There are two noteworthy things we can learn from the cell tags. First, the opening tag includes two attributes. The first attribute ``colspan`` tells the browser across how many columns this cell spans. The second attribute is one of the ID assignments alluded to above. The author could use this attribute to standardise the display of such cells throughout the entire page. For example, one could use this ID attribute to define that all cells with this ID should have a light blue background. When using ID attributes, the HTML author only has to define this once at the start of the HTML code and the browser will apply it whenever it encounters the ID inside the code.
The second noteworthy thing is a further bit of HTML jargon. We have already seen how "tag" and "attribute" are commonly used in the context of HTML. The next piece is the difference between "the value of an attribute" and "the value of a tag". The value of a tag is what is contained inside its nest, i.e. between the opening and the end tag. So in the above excerpt, the Hearing title ("S. Hrg. 113 - DEPARTMENT OF DEFENSE APPROPRIATIONS FOR FISCAL YEAR 2014") is the value of the ``<td>`` tag. However, the value of the ID attribute of the ``<td>`` tag is "this-is-the-only-cell".
This piece of jargon will be useful when sending our computer out to collect information for us. Sometimes we are interested in the value of an attribute and sometimes in the value of a tag. Say we want to collect a link. In that case, the computer should collect the attribute of a specific ``<a>`` tag for us. In other applications, we may want to collect the displayed text itself. In that case, we will ask the computer to come back with the value of, say, the ``<td>`` tag.
Note here that the value of a tag is not restricted to include text only. It also includes the tags contained in its nest. So in the above example, the value of the ``<tr>`` tag is <br>
"``<td colspan="2" id="this-is-the-only-cell"> S. Hrg. 113 - DEPARTMENT OF DEFENSE APPROPRIATIONS FOR FISCAL YEAR 2014 </td>``".
<br>
<br>
Now that we have seen the first row of the above table, let's look at the code for the entire table. It reads:
<div style="width:99%;border:1px solid grey;padding:3px;margin:10px;background-color:#ddd"><table class="browse-node-table">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-1st-row">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td colspan="2" id="this-is-the-only-cell">
<div style="width:90%;border:0px dashed grey;padding:5px;margin:0px">S. Hrg. 113 - DEPARTMENT OF DEFENSE APPROPRIATIONS FOR FISCAL YEAR 2014</div>
</td>
</div>
</tr>
</div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-2nd-row">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td id="this-is-one-cell">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:5px"><span class="results-line2"> <div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">Appropriation. Wednesday, April 24, 2013.</div>
</span></div>
</td>
</div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td id="this-is-another-cell">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><a href="https://www.gpo.gov/link.to.PDF" target="_blank"> <div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">PDF</div>
</a></div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><a href="https://www.gpo.gov/link.to.text" target="_blank"> <div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">Text</div>
</a></div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><a href="https://www.gpo.gov/link.to.more" target="_blank"> <div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">More</div>
</a></div>
</td>
</div>
</tr>
</div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-3rd-row">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td colspan="2" id="this-is-the-only-cell">
<div style="width:90%;border:0px dashed grey;padding:5px;margin:5px"> </div>
</td>
</div>
</tr>
</div>
</table>
</div>
In this display, the original table's structure is clear to see. The entire excerpt is one large nest built from a ``<table>`` tag. In that nest, there are three row nests (``<tr>``) and each row nest includes at least one cell nest (``<td>``). Of those cells, the one containing the links takes up the most space, with each link having its own nest (``<a>``).
There is one last HTML convention we need to be aware of before we can go on to write directions: How does the browser know how to arrange the layout of values within the same tag? For example, why is the cell containing the links to the right of the other cell in that row? Same for the order of the links. Why does the browser display "PDF | Text | More" and not "More | Text | PDF" or some other order?
The reason is simple: As it goes through the code, our browser arranges everything it finds from left to right. To avoid a display in a single long line, some tags force the browser to start a new one. In that new one, the browser continues to go from left to right until it is interrupted again. Tags that cause such breaks in text are ``<br>`` or ``<p>``, but also design elements such as tables or images.
This left-to-right convention will help us when writing the directions. What it does is allow us to pinpoint a tag that is otherwise identical to those in the same nest. In our example, we are only interested in the "Text" link from the table above. The URL we are looking for is in an otherwise indistinguishable ``<a>`` tag. In this case, we are still lucky because we could use the word "Text" to make our directions precise.
However, we would be out of luck in a case like:<br>
<center>"Download different versions <a href="https://www.gpo.gov/fdsys/pkg/CHRG-113shrg39104553/pdf/CHRG-113shrg39104553.pdf" target="_blank">here</a>, <a href="https://www.gpo.gov/fdsys/pkg/CHRG-113shrg39104553/html/CHRG-113shrg39104553.htm" target="_blank">here</a> or <a href="https://www.gpo.gov/fdsys/search/pagedetails.action?collectionCode=CHRG&browsePath=113%2FSENATE%2FCommittee+on+Appropriations&granuleId=CHRG-113shrg39104553&packageId=CHRG-113shrg39104553&fromBrowse=true" target="_blank">here</a>." </center><br>
On occasions like this, we can exploit the "left-to-right" convention and simply ask the computer to extract the attribute value of the second ``<a>`` tag. More generally, it is often useful to give directions by the number of the tag rather than the value it contains. Often, we do not know the value of the relevant tag, or the value of that tag is precisely what we want to collect.
With the basics of HTML in mind, we can now turn to the problem of how to translate "Please click on the second link in the right-hand cell of the above table's second row ("Text")." into a command the computer understands.
### 1.B XPath: The directory of HTML
While your computer may not understand verbal directions (for now), it understands XPath. XPath, the "XML Path Language", allows you to identify a particular tag, attribute or sets of them inside an HTML code. As the name implies, we will communicate our directions by giving the computer the path it should follow inside the HTML document in order to get to the location of what we want to collect. There are various ways to formulate our directions in XPath. Please consult <a href="https://www.w3schools.com/xml/xpath_intro.asp">this tutorial</a> for a detailed exposition of XPath and its many useful properties. For the purpose of this tutorial, we will start with the most specific version of using an XPath and then introduce two shortcuts.
#### 1.B.1 The complete path
The most specific way to ask the programme to "go to the second link in the right-hand cell of the above table's second row" is to literally spell out the entire path through the document from start to finish.
The rules are simple:<br>
1. We only include the opening tags. The program will just ignore closing tags it sees inside the code along the way.<br>
2. The path only includes tags where the programme has to move into a new nest (or towards the right in the above visualization).<br>
3. We use numbers in squared brackets to identify a specific tag in cases where there are multiple ones of the same kind within the same nest.<br>
4. We connect the different pieces of our path with forward slashes in-between each tag.
Voilà!
Let's apply this to our example. We spell it out alongside the XPath again to see where the different parts come from:
<table style="width:90%;border:0px;fill:0px;padding:5px;margin:0px">
<tr style="width:30%;border:0px;fill:0px">
<td style="width:30%;border:0px;fill:0px">
<span style="margin-left:27px">``table``</span><br>
<span style="margin-left:82px">``tr[2]``</span><br>
<span style="margin-left:130px">``td[2]``</span><br>
<span style="margin-left:179px">``a[2]``</span><br>
🢂 ``/table/tr[2]/td[2]/a[2]``
</td>
<td style="width:70%;border:0px;fill:0px">
"Start at the beginning of our code (``table``),<br>
use the second row tag (``tr[2]``).<br>
Within that row tag, go into the second cell tag you can find (``td[2]``), and <br>
stop in that cell's second link tag (``a[2]``)."<br>
<br>
</td>
</tr>
</table>
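We can check this path mechanically. The sketch below runs the full XPath against a stripped-down stand-in for the Hearing table, using Python's standard-library ``ElementTree``, which understands this small positional subset of XPath (the tag contents and ``href`` values in the toy snippet are placeholders, not the real GPO markup):

```python
# Run the complete path /table/tr[2]/td[2]/a[2] on a toy version of the table.
import xml.etree.ElementTree as ET

html = """<table>
  <tr><td colspan="2">S. Hrg. 113 - Title</td></tr>
  <tr>
    <td>Appropriation. Wednesday, April 24, 2013.</td>
    <td>
      <a href="link.to.PDF">PDF</a>
      <a href="link.to.text">Text</a>
      <a href="link.to.more">More</a>
    </td>
  </tr>
  <tr><td colspan="2"></td></tr>
</table>"""

root = ET.fromstring(html)            # the root element is <table> itself
link = root.find("tr[2]/td[2]/a[2]")  # second row, second cell, second link
print(link.text, link.get("href"))    # Text link.to.text
```

The query lands exactly on the "Text" link, and the tag's ``href`` attribute holds the URL we set out to collect.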
If you take another look at the visualized HTML code above, you can recognize the XPath logic:
<div style="width:99%;border:1px solid grey;padding:3px;margin:10px;background-color:#ddd"><table class="browse-node-table">
<font color="#6e6e6e">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-1st-row"> ...
</tr>
</div>
</font>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-2nd-row">
<font color="#6e6e6e">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td id="this-is-one-cell"> ... </td>
</div>
</font>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><td id="this-is-another-cell">
<font color="#6e6e6e">
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><a ... > PDF </a></div></font>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><a href="https://www.gpo.gov/link.to.text" target="_blank"> <div style="width:90%;border:0px dashed grey;padding:5px;margin:5px">Text</div>
</a></div>
<div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><font color="#6e6e6e"><a ... > More </a></font></div>
</td>
</div>
</tr>
</div>
<font color="#6e6e6e"><div style="width:90%;border:1px dashed grey;padding:5px;margin:10px"><tr id="this-is-the-3rd-row"> ...
</tr>
</div></font>
</table>
</div>
Writing the entire path is simple enough for short HTML code. However, in more convoluted code this may no longer be practical. There are two types of shortcuts to help you locate the relevant part of the code more efficiently.
#### 1.B.2 Shortcuts
Landmarks are often useful when giving directions to a friend. Rather than spelling out the entire path from start to finish, you can use a known landmark near the destination and describe the rest of the way from there. XPath also allows for such shortcuts.
##### Unique attributes
A unique attribute is the equivalent of a landmark in XPath. Recall that an attribute is additional information contained in a tag. Now observe that the cell containing our desired link is opened with a uniquely labelled ``<td>`` tag. Its ``id`` attribute has the value "this-is-another-cell". This value is unique throughout the code. Rather than spelling out the entire XPath from the top of our code, we can use this unique attribute as our landmark and start our directions from there.
Inserting an attribute into an XPath works much the same way as using their order. We again use square brackets to describe a feature of the tag we are looking for. Rather than using its order number (e.g. ``[2]``), we specify the type and value of the relevant attribute. In our case, the attribute type is "``id``" and the value is "``this-is-another-cell``". As another XPath convention, we use the "@" symbol to alert our programme that what follows is an attribute of the tag we are looking for. Our XPath-landmark thus translates into "``[@id='this-is-another-cell']``".
Using this changes our XPath from <br>
``/table/tr[2]/td[2]/a[2]`` to <br>
``.//td[@id='this-is-another-cell']/a[2]``
Looking at the differences between the two, notice that the start of our changed XPath also differs. The "``.//``" at the beginning of it tells the computer to allow all possible tag combinations before the unique ending we provide.
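The landmark query can be tried out the same way. The toy snippet below reuses the IDs from the visualization above; the standard-library ``ElementTree`` accepts this attribute-predicate form, and ``lxml`` would accept the identical expression:

```python
# Use the unique id attribute as a landmark instead of the full path.
import xml.etree.ElementTree as ET

html = """<table>
  <tr id="this-is-the-2nd-row">
    <td id="this-is-one-cell">Appropriation.</td>
    <td id="this-is-another-cell">
      <a href="link.to.PDF">PDF</a>
      <a href="link.to.text">Text</a>
    </td>
  </tr>
</table>"""

root = ET.fromstring(html)
link = root.find(".//td[@id='this-is-another-cell']/a[2]")
print(link.text)  # Text
```

The ``.//`` prefix lets the search descend through any number of intermediate tags before matching the labelled cell.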
##### Unique tails
The second possibility to shorten an XPath direction is to restrict it to those tags that make its ending unique. What an XPath query does is ask the computer to look through the code for all paths matching our directions. So if we know which parts of our directions make the desired location unique, that unique part is all the computer needs to know.
In our example, what makes the "Text" link unique is that its ``<a>`` tag is the only one anywhere in the code that appears second within its nest. Admittedly, this is a rare situation, but for our special case the relevant piece of the XPath boils down to:<br>
"``.//a[2]``".
Again, we have used "``.//``" to tell our computer to allow for all possible tags before our desired ``a[2]``. However, allowing so much flexibility also bears a risk: if the ending we picked is not actually unique after all, the query will match locations we did not intend.
#### 1.B.3 Tool: Browser plugins help you find the XPath
The logic behind XPath is simple. However, in how many cases do we know the HTML code of a website whose text we want to collect? The good news: you don't have to sift through HTML documents yourself in order to tease out the XPath you want. Luckily, there are various browser plugins that help you extract the correct path with a few simple clicks.
One option is the "Firebug" add-on for Mozilla Firefox. Look at this video demonstration to see how it works:
[](http://youtu.be/XsysldVfAmk?hd=1 "Extracting an XPath (3 minute video)")
Here is an incomplete list of these very useful helpers:
* Mozilla Firefox: The <a href="https://www.mozilla.org/en-US/firefox/developer/" target="_blank">Developer Kit</a> contains all you need.
* Google Chrome: for <a href="https://chrome.google.com/webstore/detail/xpath-helper/hgimnogjllphhhkhlmebbmlgjoejdpjl"
target="_blank">XPath helper</a>
* <a href="https://stackoverflow.com/questions/34456722/getting-xpath-for-element-in-safari" target="_blank">Tutorial for Apple Safari</a>
#### Exercise: Find and compare XPaths
For this exercise, please familiarise yourself with how to extract an XPath using your browser. Once your browser is set up, please direct it to the <a href="https://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG" target="_blank">GPO website</a>.
Please extract the XPath to one "Text"-link of your choice.
How would you have to change the extracted path in order to direct your programme to the "PDF"-link?
## 1.C Another way to navigate inside your file: CSS selectors
Cascading Style Sheets (CSS) are a popular way to lay out HTML documents. Using CSS, the programmer can specify the layout of all alike HTML tags in a single command. For example, the programmer could specify that all URLs shall be displayed in a red, bold font with italic style. Rather than adding this information to each link tag, CSS allows them to set this feature once and have it applied whenever the link tag appears, e.g.:
`` a {``<br>
`` font-family: Arial, sans-serif;``<br>
`` font-weight: bold;``<br>
`` font-style: italic;``<br>
`` color: red;``<br>
`` }``<br>
To identify the tag of interest, the programmer uses a so-called "CSS selector". Besides applying formatting, CSS selectors can be used to navigate inside an HTML document.
Since the logic of these selectors mirrors the XPath language closely, the interested reader is referred to <a href="https://www.w3schools.com/cssref/css_selectors.asp">this tutorial</a> for more details on CSS selectors. Thankfully, the Firefox Developer Edition is able to provide the CSS selectors as described above.
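As a compact preview of how CSS selectors navigate a document, here is BeautifulSoup's ``select()`` run on a small inline document whose class names mirror the GPO excerpt above (the document and its ``href`` value are placeholders, not the live page):

```python
# Query an inline document by CSS class and by parent > child relationship.
from bs4 import BeautifulSoup

html = """<table class="browse-node-table">
  <tr>
    <td><span class="results-line2">Appropriation.</span></td>
    <td><a href="link.to.text">Text</a></td>
  </tr>
</table>"""

soup = BeautifulSoup(html, "html.parser")
date = soup.select("span.results-line2")[0].get_text()  # select by class
href = soup.select("td > a")[0]["href"]                 # select by parent/child
print(date, href)  # Appropriation. link.to.text
```

``span.results-line2`` matches any ``<span>`` carrying that class, while ``td > a`` matches any ``<a>`` that is a direct child of a ``<td>`` — the same ideas as XPath steps, in a different syntax.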
## 2 Scrape basics
In scraping, we ultimately try to collect the content of a tag inside an HTML document. After we received it, we process it further. The content could be the link to a file, which we then download or import into our software. In many applications, we want to collect the text e.g. from a government announcement online.
The hoops we go through are always the same, regardless of what we are trying to collect.
1. Load the website containing the desired data
2. Store a copy of the underlying HTML file in your computer's memory
3. Parse the HTML file so you can query it
4. Apply your query to the parsed copy
5. Store your returned value (or process it further)
Let's go through these in turn.
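The five steps can be sketched in miniature before we walk through them. To keep the sketch runnable without network access, steps 1 and 2 are replaced by an inline HTML string standing in for the downloaded page:

```python
from bs4 import BeautifulSoup

# Steps 1 & 2 would normally be: page = requests.get(url); html = page.content
# Here an inline string stands in for the downloaded page.
html = "<html><body><h3>Congressional Hearings</h3><p>Some text.</p></body></html>"

# Step 3: parse the HTML so it can be queried
soup = BeautifulSoup(html, "html.parser")

# Step 4: apply a query to the parsed copy
headline = soup.find("h3").get_text()

# Step 5: store the returned value (here, just in a variable)
print(headline)  # Congressional Hearings
```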
### 2.1 Load the website & store a copy of the underlying HTML file
```
import requests
gpo = requests.get('http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG')
```
This is what we got:
```
gpo.content
```
Does not look like an HTML file, right? At this point, Python does not know it is looking at an HTML document. All it sees is a large chunk of text.
To be able to run XPath queries or otherwise process our file, we first have to:
### 2.2 Parse the HTML
This is where BeautifulSoup comes in. It offers several parsers (and other parsing libraries exist as well) to turn the raw HTML into its queryable "soup".
The result looks much more recognizable.
```
from bs4 import BeautifulSoup
gpo_parsed = BeautifulSoup(gpo.content, 'html.parser')
print(gpo_parsed.prettify()[6967:8374])
```
### 2.3 Apply your query
Now that Python knows it is dealing with an HTML file, we can write a query to extract the information we want.
BeautifulSoup offers two ways to collect the information contained in an HTML node: one can either search the HTML code for specific tags and retrieve the information contained in them, or use a CSS selector to pinpoint the position of a particular tag of interest. Not to worry, we will return to XPath when applying another package further below.
#### Using 'find' or 'find_all'
Using the search function comes with the option to either find all or only the first of the nodes matching the search key. Here is the code to retrieve all links from the GPO's page.
```
gpo_parsed.find_all("a")
```
And this yields only the first match, as does indexing the result of 'find_all'.
```
gpo_parsed.find("a")
gpo_parsed.find_all("a")[0]
```
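On a small, made-up snippet the relationship between the two commands is easy to verify:

```python
from bs4 import BeautifulSoup

html = '<p><a href="/a">one</a> <a href="/b">two</a> <a href="/c">three</a></p>'
soup = BeautifulSoup(html, "html.parser")

all_links = soup.find_all("a")    # every matching node, collected in a list
first_link = soup.find("a")       # only the first match

print(len(all_links))             # 3
print(first_link == all_links[0]) # True: find is a shortcut for the first find_all result
```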
#### Using CSS selectors
An alternative is using the CSS selectors. You can use the Firefox Development Editor to extract the CSS selector for the position of interest. For example, this yields all paragraphs of the main text on the page:
```
gpo_parsed.select('#browse-layout-mask p')
```
#### Reducing it to the text
The find_all command extracts all nodes that fit the search terms. Searching this page for "h3" tags, for example, finds only one: the title "Congressional Hearings". Regardless of how many items it finds, BeautifulSoup stores the returned values in a list. If it finds one item that fits the search query, the list contains 1 item. If it finds 3, the list contains 3.
Note that the result of such an "h3" query contains not only the value of the node ("Congressional Hearings"), but also the tags and their attributes. To restrict the returned value to the text only, BeautifulSoup provides the useful ".get_text" command. However, the command can only be applied to individual items of a list, not to the entire list at once. Thus we have to identify the number of the list element from which we want to extract the text before applying the command. In Python, one selects individual list items by stating their position in square brackets.
Extracting the text of the headline thus translates into the command:
```
gpo_parsed.find_all("h3")[0].get_text()
```
Besides removing the starting and closing tags, adding ".get_text()" will also remove tags inside the text you have extracted. For instance, the third paragraph in the text displayed at the start of this section contains a link (signalled by the ``<a href=...>`` tag). Applying ".get_text()" likewise removes the URL from the returned value.
To see this, use the following code to strip all tags (``<a>`` or otherwise) from the last paragraph of our GPO page.
```
gpo_parsed.select('#browse-layout-mask p')[2].get_text()
```
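The same effect can be demonstrated offline on a made-up paragraph containing a link:

```python
from bs4 import BeautifulSoup

html = '<p>Visit the <a href="https://www.gpo.gov">GPO website</a> for more.</p>'
soup = BeautifulSoup(html, "html.parser")

paragraph = soup.find("p")
# .get_text() drops the <a> tag and its href, keeping only the visible text
print(paragraph.get_text())  # Visit the GPO website for more.
```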
### 2.4 Store the returned value
Since we have now processed the collected text, we conclude by storing it on our local hard drive for future applications. In this example, we write a new text file containing the headline of the GPO's page.
```
file_name='my first file.txt'
text=gpo_parsed.find_all("h3")[0].get_text()
file = open(file_name,'w')
file.write(text)
file.close()
```
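An alternative, slightly more idiomatic way to write the same thing uses a `with` block, which closes the file automatically even if an error occurs. The temporary directory below is only used to keep this sketch from cluttering your working folder:

```python
import os
import tempfile

text = "Congressional Hearings"
file_name = os.path.join(tempfile.gettempdir(), "my first file.txt")

# 'with' closes the file automatically when the block ends
with open(file_name, "w") as f:
    f.write(text)

# read it back to confirm the write succeeded
with open(file_name) as f:
    print(f.read())  # Congressional Hearings
```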
### 2.5 Exercise: Go through all steps yourself
To practice your scraping skill, try to collect the transcript of an individual hearing.
To do this, we have to change the URL somewhat.
```
exercise=requests.get('https://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG&browsePath=115%2FSENATE%2FCommittee+on+Foreign+Relations&isCollapsed=false&leafLevelBrowse=false&isDocumentResults=true&ycord=325.5')
```
Now write the code to collect the text (not the file) for the Hearing "S. Hrg. 115-4 - Nomination of Rex Tillerson to Be Secretary of State".
Print the first 1000 characters, just to be sure. Then, store your hearing into a local text file.
## 3 Adding complex pages: Let your computer browse the web
Now that we know all about giving directions, it is time to let your computer browse the web. This section introduces the Selenium package. Originally designed for testing websites, Selenium allows you to control your web browser without moving your mouse or touching your keyboard. Instead, the program navigates the site based on the XPath directions introduced above.
The introduction to Selenium remains very close to the goals of this tutorial. Please see the package's <a href="https://selenium-python.readthedocs.io/" target="_blank">documentation</a> for more information. Furthermore, specific tutorials can be found easily online as the Selenium package is very popular among Python users.
#### Browser setup
Before letting it navigate the web, we have to give our program additional abilities. When we want our smartphone to learn a new trick, we install a new app. In Python, our programming language, the equivalent of installing an app is loading a module.
For what we want to do in this tutorial, we will have to load four modules. Our program uses the Operating System module ("os") which allows the program to work on your hard drive, the urllib module ("urllib") to collect text from a webpage, and the Selenium module ("selenium") to mimic the user's click routine on the page. Finally, we use a module called "time" to allow our program to take short breaks when we want it to wait before browsing on.
```
import os
import urllib
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
```
#### Starting the browser
With these modules in memory, our program is now ready to start browsing. The first step is to open a browser. In our example, we will browse the web using Mozilla Firefox. However, Selenium works on most popular browsers.
Our computer does not have its own mouse or keyboard with which it could open a new browser window. Instead, what Selenium uses is a so-called "webdriver". A webdriver basically opens the channel between our program and the browser installed on your computer.
To ask the webdriver to open up Firefox, all we need is:
```
webdriver.Firefox()
```
To open a Chrome window, the command is almost identical:
```
webdriver.Chrome()
```
Please close both browser windows manually again.
The promise of this tutorial was that the computer would browse the web without your interaction. Yet, you were just asked to close two browser windows. Not to worry, the program can also close browser windows (or open new tabs and the like). However, for it to be able to do that, we have to give the browser window a name. Once it has a name, we can tell the program which browser window to work on.
To see this, please run the following code. What will happen is that a new Firefox window opens. We call this window "my_fox". The program will then wait for 5 seconds before it closes the "my_fox"-window again.
```
my_fox=webdriver.Firefox()
sleep(5)
my_fox.close()
```
#### Basic navigation
So far, so spooky. Now let's have the programme go to our GPO website.
First, we open a new browser window again:
```
gpo=webdriver.Firefox()
```
Now, we tell the programme to get the GPO's link into that browser window:
```
gpo.get("http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG")
```
<b>Your turn:</b><br>Let the browser now move to a website of your choice:
```
gpo.get("https://www.berkeley.edu")
```
Getting to a page is nice, but we also want to interact with that page. Recall that the browser sees this page as a series of tags. Selenium refers to these tags as "elements" on the page. What we have to do then is direct the program at the page element we want to interact with. This is where the XPath language we learnt above comes in. But before we get back to that, let's look at a more intuitive way to interact.
In Selenium, it's not strictly necessary to know each and every element by its XPath. While XPath is the most precise way to get where we want, there are less general, but more user-friendly ways as well. For instance, the Selenium package includes a function that searches the webpage for elements containing a certain piece of text.
Say, I want to start browsing the Hearings of the 115th Congress. First, let's return to that page again:
```
gpo.get("http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG")
```
To locate the link we want to click on, we have to supply a unique piece of text to the programme. Let's use "115th Congress".
We also want to store the location of this element so we can use it in later commands. The logic is the same as with naming our browser window "my_fox" or "gpo". Once we assign a name to the element, we can ask the program to use it in later applications. For this example, we call the location "congress_115".
```
congress_115=gpo.find_element_by_partial_link_text("115th Congress")
```
Now that the program knows the element, why not have it click on it?
```
congress_115.click()
```
And let's go look at the Hearings.
```
hearing=gpo.find_element_by_partial_link_text("House Hearings")
hearing.click()
```
Oh, actually we meant the Senate Hearings; specifically those of the Appropriations Committee.
```
gpo.back()
sleep(.5)
hearing=gpo.find_element_by_partial_link_text("Senate Hearings")
hearing.click()
sleep(.5)
committee=gpo.find_element_by_partial_link_text("Appropriations")
committee.click()
```
And now let's have a look at one of those texts. We will use the first one from the top.
```
text= gpo.find_elements_by_partial_link_text("Text")
text[0].click()
```
You can see how this is coming along nicely. But before we go on, please notice two things about the code we just executed. First, there is an extra "s" in the command "``find_element``<b>s</b>``_by_partial_link_text``". It was added to the command because the text we are looking for fits more than one element on the page. Our program would have been confused if we had used the command without the extra "s".
The second point to note is the zero in the square brackets. This is a Python convention: lists are numbered starting from zero, so the first list item is item zero, and the tenth item would be item nine. In order to click on the first element, we thus have to write "[0]".
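The convention is easy to verify on a plain Python list (the link names below are made up):

```python
links = ["link0", "link1", "link2", "link3"]

print(links[0])  # the FIRST item: 'link0'
print(links[2])  # the THIRD item: 'link2'
# the tenth item of a longer list would be links[9]
```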
Back to our text. As you can see in your browser, the GPO provides transcripts of the Hearings as HTML documents with basic, unformatted text. This is good news for us, since we do not have to worry about finding the text in a particular location on the page. Rather, we can download the whole page and store it into a text file. In the end, we will do precisely that: ask our program to download and store the many HTML files that it finds behind all the "Text" links.
There is one inconvenience between us and the conclusion of this tutorial right now: the Hearing text was opened in a new browser window. Before we can start storing individual text files, we need to get back to our original window.
The way we do this is to let the program exploit the keyboard shortcuts of our browser. If you want to close a tab, you have two options: you can click on the "x" in its right-hand corner, or you can press "CTRL + w" on your keyboard ("COMMAND + w" for Macs). Our program is able to do just that as well.
In the code below, we first ensure that the browser tab is active. We do this by having the program locate a tag that is inside the HTML we are looking at. Probably the most general tag in HTML world is the "body" tag. We will have our program search for this one. Now that we are sure that the program is looking at the browser tab we want to close, we will send the key combination "CTRL + w" right to it.
```
gpo.find_element_by_tag_name('*').send_keys(Keys.CONTROL + 'w')
```
Welcome back!<br>
#### Storing the text
Rather than opening the link under "Text", we want to store the HTML file it leads to onto our computer. Let's first look at this link once. Remember from our basic HTML above that the link is found in the "href" attribute of the ``<a>`` tag. To extract this link, we write:
```
hearing_text_link = text[0].get_attribute('href')
print(hearing_text_link)
```
Notice that the extracted link ends with ".htm", a file extension for HTML files (another one is ".html"). What we thus have in front of us is the path to an HTML file of the Hearing transcript. What is left to do is use this link to download the ".htm" file and save it as a text file (".txt"). Thanks to the minimal page formatting used by the GPO to display the transcripts, this conversion amounts to little more than changing the file extension from ".htm" to ".txt". The urllib package we loaded at the start of this tutorial will handle the download.
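The renaming step can be sketched with the standard library alone. The URL below is a hypothetical example standing in for the link extracted above:

```python
import posixpath
from urllib.parse import urlparse

# hypothetical transcript link, standing in for the extracted one
hearing_text_link = "https://www.gpo.gov/fdsys/pkg/CHRG-115shrg24573/html/CHRG-115shrg24573.htm"

# take the file name from the URL path and swap the extension
file_name = posixpath.basename(urlparse(hearing_text_link).path)
root, _ = posixpath.splitext(file_name)
txt_name = root + ".txt"

print(txt_name)  # CHRG-115shrg24573.txt
```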
However, in some cases, there is one last hurdle we need to clear. When we navigate to a website, our browser sends some information about us to it e.g. the type of browser we use and in what version. In its default mode, Python communicates to the website that it operates through a computer-controlled browser. Some websites seek to block computer-controlled bots and spiders from crawling their content.
The GPO is among those, as the following error message indicates:
```
hearing=urllib.request.Request(hearing_text_link)
urllib.request.urlopen(hearing).read()
```
To avoid being blocked, you need to change the default header information when requesting the webpages. For example, you may specify your browser type and version. You may also write any other information in there, as long as you deviate from the default.
```
hearing=urllib.request.Request(hearing_text_link, headers={'User-Agent': "Mozilla/56.0.1 (Windows 10; Win64; x64)"})
urllib.request.urlopen(hearing).read()[:1000]
```
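Building such a request can be tried without touching the network at all, since constructing a `Request` object does not yet contact the server. The User-Agent string below is again just an example:

```python
import urllib.request

url = "https://www.gpo.gov/fdsys/browse/collection.action?collectionCode=CHRG"

# constructing the Request does not open a connection yet
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)

# urllib normalizes header names, so the key is looked up as 'User-agent'
print(req.get_header("User-agent"))
```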
All that we have left to do then is to set the file name which we want to use.
```
text= urllib.request.urlopen(hearing).read()
file_name="my second file.txt"
file = open(file_name,'wb')
file.write(text)
file.close()
```
That's it! Our program just learned how to download its first Hearing transcript! The program will have to do it many times over in order to store all the Hearings that we can find online. But before we get to do that, let's first close the browser window again:
```
gpo.quit()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# ONNX Runtime: Tutorial for Nuphar execution provider
**Accelerating model inference via compiler, using Docker Images for ONNX Runtime with Nuphar**
This example shows how to accelerate model inference using Nuphar, an execution provider that leverages just-in-time compilation to generate optimized executables.
For more background about Nuphar, please check [Nuphar-ExecutionProvider.md](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/Nuphar-ExecutionProvider.md) and its [build instructions](https://www.onnxruntime.ai/docs/how-to/build.html#nuphar).
#### Tutorial Roadmap:
1. Prerequisites
2. Create and run inference on a simple ONNX model, and understand how ***compilation*** works in Nuphar.
3. Create and run inference on a model using ***LSTM***, run symbolic shape inference, edit LSTM ops to Scan, and check Nuphar speedup.
4. ***Quantize*** the LSTM model and check speedup in Nuphar (CPU with AVX2 support is required).
5. Working on real models from onnx model zoo: ***BERT squad***, ***GPT-2*** and ***Bidirectional Attention Flow ([BiDAF](https://arxiv.org/pdf/1611.01603))***.
6. ***Ahead-Of-Time (AOT) compilation*** to save just-in-time compilation cost on model load.
7. Performance tuning for single thread inference.
## 1. Prerequisites
Please make sure you have installed the following Python packages. Besides, a C++ compiler/linker is required for ahead-of-time compilation. Please make sure you have g++ if running on Linux, or Visual Studio 2017 on Windows.
For simplicity, you may use [Nuphar docker image](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/README.md) from Microsoft Container Registry.
```
import cpufeature
import hashlib
import numpy as np
import onnx
from onnx import helper, numpy_helper
import os
from timeit import default_timer as timer
import shutil
import subprocess
import sys
import tarfile
import urllib.request
def is_windows():
return sys.platform.startswith('win')
if is_windows():
    assert shutil.which('cl.exe'), 'Please make sure MSVC compiler and linker are in PATH.'
else:
assert shutil.which('g++'), 'Please make sure g++ is installed.'
def print_speedup(name, delta_baseline, delta):
print("{} speed-up {:.2f}%".format(name, 100*(delta_baseline/delta - 1)))
print(" Baseline: {:.3f} s, Current: {:.3f} s".format(delta_baseline, delta))
def create_cache_dir(cache_dir):
# remove any stale cache files
if os.path.exists(cache_dir):
shutil.rmtree(cache_dir)
os.makedirs(cache_dir, exist_ok=True)
def md5(file_name):
hash_md5 = hashlib.md5()
with open(file_name, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_md5.update(chunk)
return hash_md5.hexdigest()
```
The Nuphar package in onnxruntime is required too. Please make sure you are using a Nuphar-enabled build.
```
import onnxruntime
from onnxruntime.nuphar.model_editor import convert_to_scan_model
from onnxruntime.nuphar.model_quantizer import convert_matmul_model
from onnxruntime.nuphar.rnn_benchmark import generate_model, perf_test
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference
```
## 2. Create and run inference on a simple ONNX model
Let's start with a simple model: Y = ((X + X) * X + X) * X + X
```
model = onnx.ModelProto()
opset = model.opset_import.add()
opset.domain = '' # default ONNX domain (the original line used '==', a no-op comparison)
opset.version = 7 # ONNX opset 7 is required for LSTM op later
model.ir_version = onnx.IR_VERSION
graph = model.graph
X = 'input'
Y = 'output'
# declare graph input/output with shape [seq, batch, 1024]
dim = 1024
model.graph.input.add().CopyFrom(helper.make_tensor_value_info(X, onnx.TensorProto.FLOAT, ['seq', 'batch', dim]))
model.graph.output.add().CopyFrom(helper.make_tensor_value_info(Y, onnx.TensorProto.FLOAT, ['seq', 'batch', dim]))
# create nodes: Y = ((X + X) * X + X) * X + X
num_nodes = 5
for i in range(num_nodes):
n = helper.make_node('Mul' if i % 2 else 'Add',
[X, X if i == 0 else 'out_'+str(i-1)],
['out_'+str(i) if i < num_nodes - 1 else Y],
'node'+str(i))
model.graph.node.add().CopyFrom(n)
# save the model
simple_model_name = 'simple.onnx'
onnx.save(model, simple_model_name)
```
We will use the Nuphar execution provider to run inference on the model that we created above, and use a settings string to check the generated code.
Because of the redirection of output, we dump the lowered code from a subprocess to a log file:
```
code_to_run = '''
import onnxruntime
s = 'codegen_dump_lower:verbose'
providers = [('NupharExecutionProvider', {'nuphar_settings': s}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession('simple.onnx', providers=providers)
'''
log_file = 'simple_lower.log'
with open(log_file, "w") as f:
subprocess.run([sys.executable, '-c', code_to_run], stdout=f, stderr=f)
```
The lowered log is similar to C source code, but the whole file is lengthy to show here. Let's just check the last few lines that are most important:
```
with open(log_file) as f:
log_lines = f.readlines()
log_lines[-10:]
```
The compiled code showed that the nodes of Add/Mul were fused into a single function, and vectorization was applied in the loop. The fusion was automatically done by the compiler in the Nuphar execution provider, and did not require any manual model editing.
Next, let's run inference on the model and compare the accuracy and performance with numpy:
```
seq = 128
batch = 16
input_data = np.random.rand(seq, batch, dim).astype(np.float32)
sess = onnxruntime.InferenceSession(simple_model_name)
simple_feed = {X:input_data}
simple_output = sess.run([], simple_feed)
np_output = ((((input_data + input_data) * input_data) + input_data) * input_data) + input_data
assert np.allclose(simple_output[0], np_output)
simple_repeats = 100
start_ort = timer()
for i in range(simple_repeats):
sess.run([], simple_feed)
end_ort = timer()
start_np = timer()
for i in range(simple_repeats):
np_output = ((((input_data + input_data) * input_data) + input_data) * input_data) + input_data
end_np = timer()
print_speedup('Fusion', end_np - start_np, end_ort - start_ort)
```
## 3. Create and run inference on a model using LSTM
Now, let's take one step further to work on a 4-layer LSTM model, created from onnxruntime.nuphar.rnn_benchmark module.
```
lstm_model = 'LSTMx4.onnx'
input_dim = 256
hidden_dim = 1024
generate_model('lstm', input_dim, hidden_dim, bidirectional=False, layers=4, model_name=lstm_model)
```
**IMPORTANT**: Nuphar generates code before knowing shapes of input data, unlike other execution providers that do runtime shape inference. Thus, shape inference information is critical for compiler optimizations in Nuphar. To do that, we run symbolic shape inference on the model. Symbolic shape inference is based on the ONNX shape inference, and enhanced by sympy to better handle Shape/ConstantOfShape/etc. ops using symbolic computation.
**IMPORTANT**: When running multi-threaded inference, Nuphar currently uses TVM's parallel schedule, which has its own thread pool that's compatible with OpenMP and MKLML. The TVM thread pool has not been integrated with the ONNX Runtime thread pool, so intra_op_num_threads won't control it. Please make sure the build is with OpenMP or MKLML, and use OMP_NUM_THREADS to control the thread pool.
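One way to pin the thread count from inside a script is via `os.environ`. This is only a sketch: the variable must be set before the OpenMP runtime is first initialised (e.g. before importing onnxruntime), and exporting it in the shell before launching Python works just as well.

```python
import os

# must run before onnxruntime (and its TVM thread pool) is first imported
os.environ["OMP_NUM_THREADS"] = "4"
print(os.environ["OMP_NUM_THREADS"])  # 4
```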
```
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(lstm_model)), lstm_model)
```
Now, let's check baseline performance on the generated model, using CPU execution provider.
```
sess_baseline = onnxruntime.InferenceSession(lstm_model, providers=['CPUExecutionProvider'])
seq = 128
input_data = np.random.rand(seq, 1, input_dim).astype(np.float32)
lstm_feed = {sess_baseline.get_inputs()[0].name:input_data}
lstm_output = sess_baseline.run([], lstm_feed)
```
To run RNN models in Nuphar execution provider efficiently, LSTM/GRU/RNN ops need to be converted to Scan ops. This is because Scan is more flexible, and supports quantized RNNs.
```
lstm_scan_model = 'Scan_LSTMx4.onnx'
convert_to_scan_model(lstm_model, lstm_scan_model)
```
After conversion, let's compare performance and accuracy with baseline:
```
sess_nuphar = onnxruntime.InferenceSession(lstm_scan_model)
output_nuphar = sess_nuphar.run([], lstm_feed)
assert np.allclose(lstm_output[0], output_nuphar[0])
lstm_repeats = 10
start_lstm_baseline = timer()
for i in range(lstm_repeats):
sess_baseline.run([], lstm_feed)
end_lstm_baseline = timer()
start_nuphar = timer()
for i in range(lstm_repeats):
sess_nuphar.run([], lstm_feed)
end_nuphar = timer()
print_speedup('Nuphar Scan', end_lstm_baseline - start_lstm_baseline, end_nuphar - start_nuphar)
```
## 4. Quantize the LSTM model
Let's get more speed-ups from Nuphar by quantizing the floating point GEMM/GEMV in LSTM model to int8 GEMM/GEMV.
**NOTE:** For inference speed of the quantized model, a CPU with AVX2 instructions is preferred.
```
cpufeature.CPUFeature['AVX2'] or 'No AVX2, quantization model might be slow'
```
We can use onnxruntime.nuphar.model_quantizer to quantize floating point GEMM/GEMVs. Assuming GEMM/GEMV takes form of input * weights, weights are statically quantized per-column, and inputs are dynamically quantized per-row.
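The quantization scheme can be sketched in plain NumPy, independent of Nuphar. The shapes and data below are made up; the point is the per-row scales for the dynamically quantized inputs and the per-column scales for the statically quantized weights:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(4, 8).astype(np.float32)  # inputs: quantized dynamically, per-row
w = rng.rand(8, 3).astype(np.float32)  # weights: quantized statically, per-column

# symmetric int8 scales: per-row for inputs, per-column for weights
x_scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
w_scale = np.abs(w).max(axis=0, keepdims=True) / 127.0
x_q = np.round(x / x_scale).astype(np.int8)
w_q = np.round(w / w_scale).astype(np.int8)

# int8 GEMM (accumulated in int32), then rescale back to float
y_deq = (x_q.astype(np.int32) @ w_q.astype(np.int32)) * (x_scale * w_scale)
y_fp32 = x @ w

print(np.max(np.abs(y_deq - y_fp32)))  # small quantization error
```

The rescale works because each output element is a sum of products, and each product carries one row scale of `x` and one column scale of `w`.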
```
lstm_quantized_model = 'Scan_LSTMx4_int8.onnx'
convert_matmul_model(lstm_scan_model, lstm_quantized_model)
```
Now run the quantized model, and check accuracy. Please note that quantization may cause accuracy loss, so we relax the comparison threshold a bit.
```
sess_quantized = onnxruntime.InferenceSession(lstm_quantized_model)
output_quantized = sess_quantized.run([], lstm_feed)
assert np.allclose(lstm_output[0], output_quantized[0], rtol=1e-3, atol=1e-3)
```
Now check quantized model performance:
```
start_quantized = timer()
for i in range(lstm_repeats):
sess_quantized.run([], lstm_feed)
end_quantized = timer()
print_speedup('Quantization', end_nuphar - start_nuphar, end_quantized - start_quantized)
```
To check RNN quantization performance, please use rnn_benchmark.perf_test.
```
rnn_type = 'lstm' # could be 'lstm', 'gru' or 'rnn'
num_threads = cpufeature.CPUFeature['num_physical_cores'] # no hyper thread
input_dim = 80 # size of input dimension
hidden_dim = 512 # size of hidden dimension in cell
bidirectional = True # specify RNN being bidirectional
layers = 6 # number of stacked RNN layers
seq_len = 40 # length of sequence
batch_size = 1 # size of batch
original_ms, scan_ms, int8_ms = perf_test(rnn_type, num_threads, input_dim, hidden_dim, bidirectional, layers, seq_len, batch_size)
print_speedup('Nuphar Quantization speed up', original_ms / 1000, int8_ms / 1000)
```
## 5. Working on real models
### 5.1 BERT Squad
BERT (Bidirectional Encoder Representations from Transformers) applies Transformers to language modelling. With Nuphar, we may fuse and compile the model to accelerate inference on CPU.
#### Download model and test data
```
# download BERT squad model
cwd = os.getcwd()
bert_model_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/bert-squad/model/bertsquad-10.tar.gz'
bert_model_local = os.path.join(cwd, 'bertsquad-10.tar.gz')
if not os.path.exists(bert_model_local):
urllib.request.urlretrieve(bert_model_url, bert_model_local)
with tarfile.open(bert_model_local, 'r') as f:
f.extractall(cwd)
```
#### Run symbolic shape inference
Note that this model has computations like `min(100000, seq_len)`, which could be simplified to `seq_len` if we know `seq_len` is not going to be too big. We can do this by setting int_max. Besides, auto_merge is used to make sure all nodes in the entire model can have their shapes inferred, by merging symbolic dims when broadcasting.
```
bert_model_dir = os.path.join(cwd, 'bertsquad-10')
bert_model = os.path.join(bert_model_dir, 'bertsquad10.onnx')
bert_model_with_shape_inference = os.path.join(bert_model_dir, 'bertsquad10_shaped.onnx')
# run symbolic shape inference
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(bert_model), auto_merge=True, int_max=100000), bert_model_with_shape_inference)
```
#### Run inference on original model, using CPU execution provider, with maximum optimization
```
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_baseline = onnxruntime.InferenceSession(bert_model, sess_options=sess_options, providers=['CPUExecutionProvider'])
# load test data
test_data_dir = os.path.join(bert_model_dir, 'test_data_set_1')
tps = [onnx.load_tensor(os.path.join(test_data_dir, 'input_{}.pb'.format(i))) for i in range(len(sess_baseline.get_inputs()))]
bert_feed = {tp.name:numpy_helper.to_array(tp) for tp in tps}
bert_output_baseline = sess_baseline.run([], bert_feed)
bert_repeats = 20
start_bert_baseline = timer()
for i in range(bert_repeats):
sess_baseline.run([], bert_feed)
end_bert_baseline = timer()
```
#### Run inference on the model with symbolic shape inference, using Nuphar execution provider
First let's check accuracy:
```
sess = onnxruntime.InferenceSession(bert_model_with_shape_inference)
output = sess.run([], bert_feed)
assert all([np.allclose(o, ob, atol=1e-4) for o, ob in zip(output, bert_output_baseline)])
```
Then check speed:
```
start_nuphar = timer()
for i in range(bert_repeats):
sess.run([], bert_feed)
end_nuphar = timer()
print_speedup('Nuphar BERT squad', end_bert_baseline - start_bert_baseline, end_nuphar - start_nuphar)
```
### 5.2 GPT-2 with fixed batch size
GPT-2 is a language model using Generative Pre-Trained Transformer for text generation. With Nuphar, we may fuse and compile the model to accelerate inference on CPU.
#### Download model and test data
```
# download GPT-2 model
cwd = os.getcwd()
gpt2_model_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz'
gpt2_model_local = os.path.join(cwd, 'gpt2-10.tar.gz')
if not os.path.exists(gpt2_model_local):
urllib.request.urlretrieve(gpt2_model_url, gpt2_model_local)
with tarfile.open(gpt2_model_local, 'r') as f:
f.extractall(cwd)
```
#### Change batch dimension to fixed value, and run symbolic shape inference
The GPT-2 model from model zoo has a symbolic batch dimension. By replacing it with a fixed value, compiler would be able to generate better code.
```
gpt2_model_dir = os.path.join(cwd, 'GPT2')
gpt2_model = os.path.join(gpt2_model_dir, 'model.onnx')
# edit batch dimension from symbolic to int value for better codegen
mp = onnx.load(gpt2_model)
mp.graph.input[0].type.tensor_type.shape.dim[0].dim_value = 1
onnx.save(mp, gpt2_model)
gpt2_model_with_shape_inference = os.path.join(gpt2_model_dir, 'model_shaped.onnx')
# run symbolic shape inference
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(gpt2_model), auto_merge=True), gpt2_model_with_shape_inference)
```
#### Run inference and compare accuracy/performance to CPU provider
```
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_baseline = onnxruntime.InferenceSession(gpt2_model, sess_options=sess_options, providers=['CPUExecutionProvider'])
# load test data
input_name = [i.name for i in sess_baseline.get_inputs()][0] # This model only has one input
test_data_dir = os.path.join(gpt2_model_dir, 'test_data_set_0')
tp = onnx.load_tensor(os.path.join(test_data_dir, 'input_0.pb'))
gpt2_feed = {input_name:numpy_helper.to_array(tp)}
gpt2_output_baseline = sess_baseline.run([], gpt2_feed)
gpt2_repeats = 100
start_gpt2_baseline = timer()
for i in range(gpt2_repeats):
sess_baseline.run([], gpt2_feed)
end_gpt2_baseline = timer()
sess = onnxruntime.InferenceSession(gpt2_model_with_shape_inference)
output = sess.run([], gpt2_feed)
assert all([np.allclose(o, ob, atol=1e-4) for o, ob in zip(output, gpt2_output_baseline)])
start_nuphar = timer()
for i in range(gpt2_repeats):
output = sess.run([], gpt2_feed)
end_nuphar = timer()
print_speedup('Nuphar GPT-2', end_gpt2_baseline - start_gpt2_baseline, end_nuphar - start_nuphar)
```
### 5.3 BiDAF with quantization
BiDAF is a machine comprehension model that uses LSTMs. The inputs to this model are paragraphs of contexts and queries, and the outputs are start/end indices of words in the contexts that answers the queries.
First let's download the model:
```
# download BiDAF model
cwd = os.getcwd()
bidaf_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.tar.gz'
bidaf_local = os.path.join(cwd, 'bidaf-9.tar.gz')
if not os.path.exists(bidaf_local):
urllib.request.urlretrieve(bidaf_url, bidaf_local)
with tarfile.open(bidaf_local, 'r') as f:
f.extractall(cwd)
```
Now let's check the performance of the CPU provider:
```
bidaf_dir = os.path.join(cwd, 'bidaf')
bidaf = os.path.join(bidaf_dir, 'bidaf.onnx')
sess_baseline = onnxruntime.InferenceSession(bidaf, providers=['CPUExecutionProvider'])
# load test data
test_data_dir = os.path.join(cwd, 'bidaf', 'test_data_set_3')
tps = [onnx.load_tensor(os.path.join(test_data_dir, 'input_{}.pb'.format(i))) for i in range(len(sess_baseline.get_inputs()))]
bidaf_feed = {tp.name:numpy_helper.to_array(tp) for tp in tps}
bidaf_output_baseline = sess_baseline.run([], bidaf_feed)
```
The context in this test data:
```
' '.join(list(bidaf_feed['context_word'].reshape(-1)))
```
The query:
```
' '.join(list(bidaf_feed['query_word'].reshape(-1)))
```
And the answer:
```
' '.join(list(bidaf_feed['context_word'][bidaf_output_baseline[0][0]:bidaf_output_baseline[1][0]+1].reshape(-1)))
```
Now put all steps together:
```
# editing
bidaf_converted = 'bidaf_mod.onnx'
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(bidaf)), bidaf_converted)
convert_to_scan_model(bidaf_converted, bidaf_converted)
# When quantizing, there's an only_for_scan option to quantize only the GEMV inside Scan ops.
# This is useful when the LSTM input dims are much bigger than the hidden dims.
# BiDAF has several LSTMs with input dims of 800/1400/etc., while the hidden dim is 100.
# So unlike the LSTMx4 model above, we use only_for_scan here
convert_matmul_model(bidaf_converted, bidaf_converted, only_for_scan=True)
# inference and verify accuracy
sess = onnxruntime.InferenceSession(bidaf_converted)
output = sess.run([], bidaf_feed)
assert all([np.allclose(o, ob) for o, ob in zip(output, bidaf_output_baseline)])
```
Check performance after all these steps:
```
bidaf_repeats = 100
start_bidaf_baseline = timer()
for i in range(bidaf_repeats):
    sess_baseline.run([], bidaf_feed)
end_bidaf_baseline = timer()
start_nuphar = timer()
for i in range(bidaf_repeats):
    sess.run([], bidaf_feed)
end_nuphar = timer()
print_speedup('Nuphar quantized BiDAF', end_bidaf_baseline - start_bidaf_baseline, end_nuphar - start_nuphar)
```
The benefit of quantization in BiDAF is not as great as in the LSTM sample above, because BiDAF has relatively small hidden dimensions, which limits the gain from optimization inside Scan ops. However, this model still benefits from fusion, vectorization, etc.
## 6. Ahead-Of-Time (AOT) compilation
Nuphar runs just-in-time (JIT) compilation when loading models. The compilation may lead to a slow cold start. We can use the create_shared script to build a dll from the JIT code and accelerate model loading.
```
start_jit = timer()
sess = onnxruntime.InferenceSession(bidaf_converted)
end_jit = timer()
'JIT took {:.3f} seconds'.format(end_jit - start_jit)
# use settings to enable JIT cache
bidaf_cache_dir = os.path.join(bidaf_dir, 'cache')
create_cache_dir(bidaf_cache_dir)
settings = 'nuphar_cache_path:{}'.format(bidaf_cache_dir)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
```
The object files of the JIT code are now stored in the cache directory; let's link them into a dll:
```
bidaf_cache_versioned_dir = os.path.join(bidaf_cache_dir, os.listdir(bidaf_cache_dir)[0])
# use onnxruntime.nuphar.create_shared module to create dll
subprocess.run([sys.executable, '-m', 'onnxruntime.nuphar.create_shared', '--input_dir', bidaf_cache_versioned_dir], check=True)
os.listdir(bidaf_cache_versioned_dir)
```
Check the model loading speed-up with AOT dll:
```
start_aot = timer()
settings = 'nuphar_cache_path:{}'.format(bidaf_cache_dir)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
end_aot = timer()
print_speedup('AOT', end_jit - start_jit, end_aot - start_aot)
```
Moreover, Nuphar AOT also supports:
* Generating the JIT cache with AVX/AVX2/AVX-512 and building an AOT dll that includes support for all of these CPUs, which makes deployment easier when targeting different CPUs with one package.
* Baking the model checksum into the AOT dll, so the model can be validated against a given AOT dll.
```
# create object files for different CPUs
cache_dir = os.path.join(os.getcwd(), 'lstm_cache')
model_name = lstm_quantized_model
model_checksum = md5(model_name)
repeats = lstm_repeats
feed = lstm_feed
time_baseline = end_lstm_baseline - start_lstm_baseline
multi_isa_so = 'avx_avx2_avx512.so'
create_cache_dir(cache_dir)
settings = 'nuphar_cache_path:{}'.format(cache_dir)
for isa in ['avx512', 'avx2', 'avx']:
    settings_with_isa = settings + ', nuphar_codegen_target:' + isa
    providers = [('NupharExecutionProvider', {'nuphar_settings': settings_with_isa}), 'CPUExecutionProvider']
    sess = onnxruntime.InferenceSession(model_name, providers=providers)
cache_versioned_dir = os.path.join(cache_dir, os.listdir(cache_dir)[0])
# link object files to AOT dll
subprocess.run([sys.executable, '-m', 'onnxruntime.nuphar.create_shared', '--input_dir', cache_versioned_dir, '--input_model', model_name, '--output_name', multi_isa_so], check=True)
# now load the model with AOT dll
# NOTE: when nuphar_codegen_target is not set, it defaults to current CPU ISA
settings = 'nuphar_cache_path:{}, nuphar_cache_so_name:{}, nuphar_cache_model_checksum:{}, nuphar_cache_force_no_jit:on'.format(cache_dir, multi_isa_so, model_checksum)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(model_name, providers=providers)
# force to a different ISA which is a subset of current CPU
# NOTE: if an incompatible ISA is used, exception on invalid instructions would be thrown
for valid_isa in ['avx2', 'avx']:
    settings_with_isa = 'nuphar_cache_path:{}, nuphar_cache_so_name:{}, nuphar_cache_model_checksum:{}, nuphar_codegen_target:{}, nuphar_cache_force_no_jit:on'.format(cache_dir, multi_isa_so, model_checksum, valid_isa)
    providers = [('NupharExecutionProvider', {'nuphar_settings': settings_with_isa}), 'CPUExecutionProvider']
    sess = onnxruntime.InferenceSession(model_name, providers=providers)
    start_nuphar = timer()
    for i in range(repeats):
        sess.run([], feed)
    end_nuphar = timer()
    print_speedup('{} in {}'.format(model_name, valid_isa), time_baseline, end_nuphar - start_nuphar)
```
## 7. Performance tuning for single-thread inference
By default, Nuphar enables a parallel schedule for lower inference latency with multiple threads, when built with MKLML or OpenMP. For some models, users may want to run single-threaded inference for better throughput with multiple concurrent inference threads, and turning off the parallel schedule may make inference a bit faster in a single thread.
```
# set OMP_NUM_THREADS to 1 for single thread inference
# this makes the sessions below run in a single thread
os.environ['OMP_NUM_THREADS'] = '1'
sess = onnxruntime.InferenceSession(bidaf_converted)
start_baseline = timer()
for i in range(bidaf_repeats):
    sess.run([], bidaf_feed)
end_baseline = timer()
# use NUPHAR_PARALLEL_MIN_WORKLOADS=0 to turn off parallel schedule, using settings string
# it can be set from environment variable too: os.environ['NUPHAR_PARALLEL_MIN_WORKLOADS'] = '0'
settings = 'nuphar_parallel_min_workloads:0'
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
start = timer()
for i in range(bidaf_repeats):
    sess.run([], bidaf_feed)
end = timer()
print_speedup('Single thread perf w/o parallel schedule', end_baseline - start_baseline, end - start)
del os.environ['OMP_NUM_THREADS']
```
# Graph Measures
In this section we'll cover some common network analysis techniques. This doesn't cover everything NetworkX is capable of, but it should get you started exploring the rest of the package.
First we are going to need to import some other packages.
```
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
```
This is a little Jupyter magic ([literally](https://ipython.org/ipython-doc/3/interactive/tutorial.html#magics-explained)), to make sure plots show up in the notebook.
```
%matplotlib inline
```
## Degree Distribution
A common feature of complex networks is their degree distribution. This is often represented as a degree rank plot. Let's check out the degree rank plot of a BA graph.
```
BA = nx.barabasi_albert_graph(10000,1)
```
To get the correct degree sequence, we need to get the degrees sorted in descending order. Most NetworkX functions return a dictionary, with the keys being the nodes (or edges) and the values being the result of whatever measure you are running. So we want to sort the values in reverse order.
```
degrees = BA.degree()
degree_sequence = sorted(degrees.values(),reverse=True)
```
We could do this in one line if we wanted...
```
degree_sequence = sorted(BA.degree().values(),reverse=True)
```
Now we need to do some plotting. Plotting using matplotlib is a lot like plotting using MATLAB. Because the degree distribution of a BA graph is a power-law, we'd like to use a plot with log scales. Here is how we'd do it.
```
# loglog tells matplotlib to use log scales.
# The x values, range(1,10001), are the ranks,
# and the degree_sequence are the y values.
# The String 'k.' means use black (k) dots (.)
plt.loglog(range(1,BA.order()+1),degree_sequence,'k.')
plt.xlabel('Rank')
plt.ylabel('Degree')
plt.ylim(1,max(degree_sequence)+1)
plt.xlim(.9,10001)
```
Matplotlib is a powerful tool more info can be found on using it [here](http://matplotlib.org/users/beginner.html).
### Degree Distribution of models
In the original paper where the Barabási–Albert model was introduced, it was stated that it provided a good explanation for the Autonomous Systems Network. Let's build a network with similar degree structure to a recent snapshot of the Autonomous Systems Network. The data was retrieved from [UCLA's Internet Research Lab's Topology Data](http://irl.cs.ucla.edu/topology/).
First, read in the network; it is in the data folder, labeled `AS2013Sep.pickle`.
```
AS = nx.read_gpickle('./data/AS2013Sep.pickle')
```
Let's find out the number of nodes and edges in the network, and its average degree
```
AS.order(),AS.size(),(2.0*AS.size())/AS.order()
```
Let's use these values as approximations to create a BA graph of the same size with almost the same number of edges
```
BA = nx.barabasi_albert_graph(AS.order(), 8)  # Fill in the rest
```
### Exercise
Find the degree sequence of each, and use the code below to plot each. Is this a good model?
```
BA_deg_seq = #
AS_deg_seq = #
plt.loglog(BA_deg_seq,'b.',label='BA Model')
plt.loglog(AS_deg_seq,'r.',label='AS Data')
plt.xlabel('Rank')
plt.ylabel('Degree')
plt.legend()
```
#### A note on power laws.
It is often claimed that networks have power-law degree distributions. That is, the probability of degree $k$ is proportional to
$$Pr[k] \sim \frac{1}{k^\alpha}$$
where $\alpha$ is some constant. Often this claim is based on linear regressions of degree/rank plots. However, the appropriate way to fit power laws is with maximum likelihood techniques. See [1] for more info.
[1] Clauset, Aaron, Cosma Rohilla Shalizi, and Mark EJ Newman. "Power-law distributions in empirical data." _SIAM review_ 51.4 (2009): 661-703.
## Centrality
Identifying important nodes is a common task in complex network analysis. Degree is a simple measure of centrality, but there are many others. Let's explore a few on some real data on Terrorists in Colonial America. I wish I could claim I came up with this, but I didn't; all credit goes to
[1] http://kieranhealy.org/blog/archives/2013/06/09/using-metadata-to-find-paul-revere/
[2] Fischer, David Hackett. Historians' fallacies: Toward a logic of historical thought. Vol. 1970. London: Routledge & Kegan Paul, 1971.
The data file contains a graph with two types of nodes, 'Organization' and 'Person'. Organizations are different groups who met in colonial America and supported independence from England. People are people attending those meetings. First let's read the file
```
G = nx.read_gpickle('./data/Revolutionaries.gpickle')
```
Let's check out the edges
```
G.edges()
```
We can see that the edges are between people and organizations. We are curious about the connections between people. We could write a function to create a new graph in which two people are connected if they co-attend a meeting, with an edge weight that indicates the number of meetings they both attended. But that sounds hard; luckily, NetworkX can do it for us. First let's make a list of nodes for organizations and people.
```
orgs = [n for n in G.nodes() if G.node[n]['type']=='Organization']
people = [n for n in G.nodes() if G.node[n]['type']=='Person']
```
Now we can use that handy function
```
O = nx.bipartite.weighted_projected_graph(G,orgs) # Connections between organizations based on shared attendees
R = nx.bipartite.weighted_projected_graph(G,people) # Connections between people based on co-meeting attendance
```
We can use a quick python trick to determine who has the highest degree. Adding the keyword `weight` indicates that we should use _weighted_ edges.
```
deg = R.degree(weight='weight')
sorted(deg.items(), # Look at all the degrees as a tuple (node,degree)
key=lambda i: i[1], # Sort the list by the second item in the tuple
reverse=True)[:5] # Reverse the list (highest degree first), and only give the first 5
```
Another important measure of centrality is the betweenness centrality
```
btw = nx.betweenness_centrality(R,weight='weight')
sorted(btw.items(),
key=lambda i: i[1],
reverse=True)[:5]
```
Try plotting the betweenness centrality vs the degree for each node. Instead of using `plt.loglog`, use `plt.semilogy`
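One possible solution, sketched here on a freshly generated BA graph so that it is self-contained (the Agg backend keeps it runnable without a display, and `dict(G.degree())` works whether your NetworkX version returns a dict or a view):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this also runs headless
import matplotlib.pyplot as plt
import networkx as nx

G = nx.barabasi_albert_graph(300, 2, seed=1)
deg = dict(G.degree())
btw = nx.betweenness_centrality(G)
nodes = list(G.nodes())
# semilogy: linear degree on x, log-scaled betweenness on y
# (a tiny offset keeps zero-betweenness leaf nodes on the plot)
plt.semilogy([deg[n] for n in nodes], [btw[n] + 1e-6 for n in nodes], 'k.')
plt.xlabel('Degree')
plt.ylabel('Betweenness centrality')
plt.savefig('degree_vs_betweenness.png')
```

For the revolutionary network, substitute `R` and the `deg`/`btw` results computed above.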
### Exercise
Do a similar analysis with the network of organizations `O`. Which organization is most central to the revolution?
First calculate the degree and betweenness for each of the nodes
```
deg = O.degree()
btw = nx.betweenness_centrality(O)
```
Since the number of organizations is small, it might not be the worst idea to draw the network. Here is a Red, White and Blue network drawing...
```
size = [G.degree(org)**2 for org in O.nodes()]
width = [d['weight'] for (u,v,d) in O.edges(data=True)]
plt.figure(figsize=(14,10)) #We'll make it bigger
nx.draw(O, width=width,
with_labels=True,
edge_color='r',
node_color='b',
node_size=size,
font_size=24)
```
## Phase Transitions
Some random graph models experience phase transitions like other physical phenomena. For example, the Erdos-Renyi graph that we already explored experiences a phase transition in the size of the giant connected component when the average degree of the model crosses a certain threshold. We are going to use NetworkX to explore that threshold.
Recall that an Erdos-Renyi random graph is one where there is an edge between each pair of nodes with probability $p$. The ER model has an expected number of edges $\mathbb{E}[|E|]={n \choose 2}p$. With a little math on the degree distribution, we can find that the average degree will be $k=np$, and $p=\frac{k}{n}$.
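As a quick numerical sanity check of these formulas (pure NumPy, sampling the upper triangle of an adjacency matrix directly rather than going through NetworkX):

```python
import numpy as np

n, k = 1000, 5
p = k / n
rng = np.random.RandomState(0)
# Each of the n-choose-2 possible edges exists independently with probability p;
# sampling only the strict upper triangle counts each pair once.
upper = np.triu(rng.rand(n, n) < p, 1)
m = upper.sum()
print(m, n * (n - 1) / 2 * p)  # realized vs. expected number of edges
print(2 * m / n)               # empirical average degree, close to k = n*p
```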
The giant component is defined as the largest connected component in the graph. Let's explore how the size of the giant component changes for a graph of size 100, as we increase the average degree.
```
n = 100
ks = np.arange(.1,3,.02)
GCC_size = []
for k in ks:
    G = nx.gnp_random_graph(n,k/n)
    GC = sorted(nx.connected_component_subgraphs(G),key=lambda C: len(C),reverse=True)[0]
    GCC_size.append(float(len(GC))/n)
plt.plot(ks,GCC_size,marker='.',lw=0)
plt.xlabel('Average Degree')
plt.ylabel('Relative size of Giant Component')
```
Looks pretty messy. This is because each graph is a random instance. Let's make 50 graphs for each possible average degree.
```
N = 50 # Number of times to create a graph
n = 100 # Graph Size
ks = np.arange(.1,3,.1) # A bunch of average degrees, separated by .1 from .1 to 3
GCC_size = [] #List to store the size of the giant component
for k in ks:
    GCs = []
    for _ in range(N):
        G = nx.gnp_random_graph(n,k/n) # generate the graph
        GC = sorted(nx.connected_component_subgraphs(G),key=lambda C: len(C),reverse=True)[0] # grab the largest component
        GCs.append(float(len(GC))/n) # measure its size
    GCC_size.append(np.mean(GCs)) # take the average and append it to the list
plt.plot(ks,GCC_size,marker='.',lw=0)
plt.xlabel('Average Degree')
plt.ylabel('Average Relative size of Giant Component')
```
Now let's test it for various values of $n$
```
ks = np.arange(.1,3,.1)
N = 50
for n in [10,50,100,250,500]:
    print(n)
    GCC_size = []
    for k in ks:
        GCs = []
        for _ in range(N):
            G = nx.gnp_random_graph(n,k/n)
            GC = sorted(nx.connected_component_subgraphs(G),key=lambda C: len(C),reverse=True)[0]
            GCs.append(float(len(GC))/n)
        GCC_size.append(np.mean(GCs))
    plt.plot(ks,GCC_size,marker='.',label=str(n),lw=0)
plt.legend(loc='lower right')
plt.xlabel('Average Degree')
plt.ylabel('Relative size of Giant Component')
```
Notice how the transition gets sharper as $n$ gets larger.
## Community Detection
Determining different node types solely from network data is one of the most powerful tools in network analysis. NetworkX has limited capacity for community detection, but some new algorithms are coming with version 2.0. I've included one function here in the tutorial.
```
import community
```
The first is a classic algorithm due to Girvan and Newman. It progressively removes the highest-betweenness edges from a graph until the graph becomes disconnected, then repeats the process on the successively smaller groups. Let's test it on a generated graph we know has two communities.
A connected caveman graph consists of $l$ complete graphs of $k$ nodes each, connected in a ring.
```
G = nx.connected_caveman_graph(2,10)
group1 = range(10)
group2 = range(10,20)
```
Let's see whether Girvan–Newman can partition the graph correctly
```
comm = community.girvan_newman(G)
```
`comm` will have the breakdown of the graph at each stage of the algorithm, so the first item in the list is the graph broken into two parts.
```
comm[0]
```
### Exercise
Now you try with the karate club graph
```
KC = nx.karate_club_graph()
comm = community.girvan_newman(KC)
```
Let's see how the algorithm did on identifying different groups. In the Karate Club Graph, nodes have an attribute `club`, which is the group affiliation and is either `Mr. Hi` or `Officer`.
```
KC.nodes(data=True)
```
Create two lists, one with all the nodes who are members of Mr. Hi's group and those that are members of Officer's group
```
mrHi = #
officer = #
```
Compare them to the divisions found by the algorithm.
```
sorted(comm[0][0]),sorted(mrHi)
sorted(comm[0][1]),sorted(officer)
```
Since this graph isn't that large, we could actually plot it. We'll make the first found community red, and the second community blue. If the nodes are misclassified we'll put a thick border around them.
```
colors = []
for n in KC.nodes():
    if n in comm[0][0]:
        colors.append('r')
    else:
        colors.append('b')
lwidths = []
for n in KC.nodes():
    if n in comm[0][0] and n in mrHi:
        lwidths.append(0.5)
    elif n in comm[0][1] and n in officer:
        lwidths.append(0.5)
    else:
        lwidths.append(2)
nx.draw(KC,node_color=colors,linewidths=lwidths)
```
# 19. Gradient Boosting Regression
[](https://colab.research.google.com/github/rhennig/EMA6938/blob/main/Notebooks/19.GradientBoostingRegression.ipynb)
In this notebook, we will use a gradient boosted trees model for regression of $({\bf X}, {\bf y})$ data to obtain a function $f({\bf x})$ that best models the labels $y$.
A gradient boosted trees model sequentially adds decision trees, each fit to the residuals of the current model.
To illustrate the behavior of gradient boosting for regression, we will fit a simple one-dimensional function to the same data set that we previously used for linear regression, decision tree regression, and random forest regression.
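To make the residual-fitting idea concrete before turning to scikit-learn, here is a from-scratch sketch that boosts one-split regression stumps instead of full decision trees (the function names are illustrative, not part of any library):

```python
import numpy as np

def fit_stump(x, r):
    """Fit the best single-split stump to residuals r by sum of squared errors."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lm, rm = best
    return lambda z: np.where(z <= t, lm, rm)

def gradient_boost(x, y, n_estimators=100, learning_rate=0.1):
    pred = np.full_like(y, y.mean())      # start from the mean prediction
    for _ in range(n_estimators):
        stump = fit_stump(x, y - pred)    # each new stump fits the current residuals
        pred = pred + learning_rate * stump(x)
    return pred

rng = np.random.RandomState(0)
x = rng.uniform(0, 2, 200)
y = np.cos(x) + 2 * np.sin(x) + 3 * np.cos(2 * x)
pred = gradient_boost(x, y)
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(round(rmse, 3))  # far below the RMSE of simply predicting the mean
```

Each added stump nudges the ensemble toward the data; the learning rate scales how big that nudge is.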
```
# Import the numpy, panda, sklearn, and matplotlib libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.tree import plot_tree
plt.rc('xtick', labelsize=18)
plt.rc('ytick', labelsize=18)
```
### Create a one-dimensional dataset for regression
```
# Generate a data set for machine learning
np.random.seed(seed=5)
x=np.linspace(0, 2, 300)
x=x+np.random.normal(0,.3,x.shape)
y=np.cos(x)+2*np.sin(x)+3*np.cos(x*2)+np.random.normal(0,1,x.shape)
# Split the dataset into 80% for training and 20% for testing
x = x.reshape((x.size,1))
X_train,X_test,y_train,y_test = train_test_split(x, y, train_size=0.8, shuffle=True)
# Plot the training and testing dataset
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(X_train, y_train, color='blue', label='Training')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.set_xlabel('X Values',fontsize=20)
ax.set_ylabel('cos(x)+2sin(x)+3cos(2x)',fontsize=20)
ax.set_title('Training and testing data',fontsize=25)
plt.legend(fontsize=20)
plt.show()
```
### Train Gradient Boosting Regression Model
```
# Fitting gradient boosting regression to the dataset
regressor = GradientBoostingRegressor()
regressor.fit(X_train, y_train)
# Regressor score is the coefficient of determination of the prediction
print('Training score =', np.round(regressor.score(X_train,y_train),3))
print('Testing score =', np.round(regressor.score(X_test,y_test),3))
y_train_pred = regressor.predict(X_train)
training_mse = mean_squared_error(y_train, y_train_pred)
y_test_pred = regressor.predict(X_test)
testing_mse = mean_squared_error(y_test, y_test_pred)
print('Training RMSE = ', np.round(np.sqrt(training_mse),3))
print('Testing RMSE = ', np.round(np.sqrt(testing_mse),3))
```
The training and testing scores are closer together than for the previously trained decision tree and random forest models. This indicates that gradient boosted trees are less prone to overfitting.
Let us visualize the model and data to see the results.
### Visualization of Model Performance
```
# Calculate results for gradient boosting regression
X_model = np.linspace(np.min(x), np.max(x), 10000)
X_model = X_model.reshape((X_model.size,1))
y_model_pred = regressor.predict(X_model)
y_truth = np.cos(X_model)+2*np.sin(X_model)+3*np.cos(X_model*2)
# Plot the whole dataset
fig,ax=plt.subplots(figsize=(24,8))
ax.scatter(X_train, y_train, label='Data')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.plot(X_model, y_model_pred, color='red', label='Model')
ax.plot(X_model, y_truth, color='green', label='Truth')
ax.set_xlabel('x-Values', fontsize=20)
ax.set_ylabel('y-Values', fontsize=20)
ax.set_title('Performance', fontsize=25)
ax.legend(loc='upper right', fontsize=20)
plt.show()
```
As with decision tree and random forest models, gradient boosted trees also result in **piecewise constant** models.
Let's check the predicted $y$ and true $y$ values using a scatter plot.
```
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(y_test, y_test_pred, color="orange")
ax.scatter(y_train, y_train_pred, color="blue")
ax.set_xlabel('Truth', fontsize=20)
ax.set_ylabel('Prediction', fontsize=20)
plt.show()
```
### Hyperparameter Optimization with Cross-Validation
To address overfitting, we should optimize the hyperparameters for gradient boosted trees.
The two main hyperparameters for gradient boosted trees are the
`learning_rate` and `n_estimators`.
1. The `learning_rate` is usually denoted as α.
- It determines how fast the model learns. Each tree added modifies the overall model. The learning rate modifies the magnitude of the modification.
- The lower the learning rate, the slower the model learns. The advantage of a slower learning rate is that the model becomes more robust and generalizes better. In statistical learning, models that learn slowly often perform better.
- However, learning slowly comes at a cost: it takes more time to train the model, which brings us to the other significant hyperparameter.
2. The `n_estimators` hyperparameter determines the number of trees used in the model. If the learning rate is low, we need more trees to train the model. Be very careful selecting the number of trees, as too many trees creates a risk of overfitting.
In cell `[4]` above, set the hyperparameter:
`regressor = GradientBoostingRegressor(hyperparameter = value)`
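To see the trade-off between these two hyperparameters in action, here is a sketch on synthetic data similar to ours; `staged_predict` is scikit-learn's way of inspecting the model after each added tree:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
x = rng.uniform(0, 2, (300, 1))
y = np.cos(x[:, 0]) + 2 * np.sin(x[:, 0]) + 3 * np.cos(2 * x[:, 0]) + rng.normal(0, 0.5, 300)

results = {}
for lr in (0.5, 0.1):
    gbr = GradientBoostingRegressor(learning_rate=lr, n_estimators=200, random_state=0)
    gbr.fit(x, y)
    # staged_predict yields the predictions after 1, 2, ..., n_estimators trees
    results[lr] = [np.sqrt(np.mean((y - p) ** 2)) for p in gbr.staged_predict(x)]
for lr, rmses in results.items():
    print(lr, round(rmses[9], 3), round(rmses[-1], 3))  # training RMSE after 10 vs. 200 trees
```

The higher learning rate reaches a low training error with far fewer trees; the lower rate needs more estimators but tends to produce a more robust model.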
### Grid Search for Optimal Hyperparameters
Instead of optimizing hyperparameters one by one, we will use a grid search for the optimization of some of the hyperparameters of the decision tree model with cross-validation. The optimal values of hyperparameters depend on each other. The grid search varies all the parameters together, which ensures that we obtain a somewhat optimal model.
```
# List possible hyperparameters
regressor.get_params().keys()
# Grid search cross-validation
# Hyper parameters range intialization for tuning
parameters={"learning_rate" : [0.1, 0.3, 0.5],
"n_estimators" : [20, 40, 80]}
grid_search = GridSearchCV(regressor,param_grid=parameters,
scoring='neg_mean_squared_error',cv=3,verbose=1)
grid_search.fit(X_train, y_train)
# Optimial hyperparameters
tuned_parameters = grid_search.best_params_
print(tuned_parameters)
tuned_regressor = GradientBoostingRegressor(**tuned_parameters)
tuned_regressor.fit(X_train, y_train)
print('Training score =', np.round(tuned_regressor.score(X_train,y_train),3))
print('Testing score =', np.round(tuned_regressor.score(X_test,y_test),3))
y_train_pred = tuned_regressor.predict(X_train)
training_mse = mean_squared_error(y_train, y_train_pred)
y_test_pred = tuned_regressor.predict(X_test)
testing_mse = mean_squared_error(y_test, y_test_pred)
print('Training RMSE = ', np.round(np.sqrt(training_mse),3))
print('Testing RMSE = ', np.round(np.sqrt(testing_mse),3))
```
The tuned model performs very similarly to the one using default parameters.
- It predicts similar training and testing errors.
### Visualization of Model Performance
```
# Calculate results for the tuned gradient boosting regression
X_model = np.linspace(np.min(x), np.max(x), 1000)
X_model = X_model.reshape((X_model.size,1))
y_model_pred = tuned_regressor.predict(X_model)
y_truth = np.cos(X_model)+2*np.sin(X_model)+3*np.cos(X_model*2)
# Plot the whole dataset
fig,ax=plt.subplots(figsize=(24,8))
ax.scatter(X_train, y_train, label='Data')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.plot(X_model, y_model_pred, color='red', label='Model')
ax.plot(X_model, y_truth, color='green', label='Truth')
ax.set_xlabel('x-Values', fontsize=20)
ax.set_ylabel('y-Values', fontsize=20)
ax.set_title('Performance', fontsize=25)
ax.legend(loc='upper right', fontsize=20)
plt.show()
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(y_test, y_test_pred, color="orange")
ax.scatter(y_train, y_train_pred, color="blue")
ax.set_xlabel('Truth', fontsize=20)
ax.set_ylabel('Prediction', fontsize=20)
plt.show()
```
# PyCaret 2 Regression Example
This notebook was created using PyCaret 2.0. Last updated: 31-07-2020
```
# check version
from pycaret.utils import version
version()
```
# 1. Loading Dataset
```
from pycaret.datasets import get_data
data = get_data("insurance")
```
# 2. Initialize Setup
### Notes on `setup()` arguments of interest
#### Arguments worth adding
- log_data=True is supposed to save the training and test data as csv, but it didn't for me... Maybe this only applies with MLflow?
- session_id=123 fixes the random seed
#### How the data is split
- With data_split_shuffle=False and folds_shuffle=False, the train/validation/test split stays closer to time-series order
- Both default to True
#### MLflow
- log_experiment=True logs all metrics and parameters to the MLflow server
- Default is False
- log_plots=True logs certain plots to MLflow as png files
- Default is False
- log_profile=True also logs the data profile to MLflow as an html file
- Default is False
```
from pycaret.regression import *
help(setup)
from pycaret.regression import *
reg1 = setup(
data,
target="charges",
session_id=123,
log_experiment=True,
experiment_name="insurance1",
log_data=True,
silent=True,
)
reg1
```
# 3. Compare Baseline
```
best_model = compare_models(fold=5)
```
# 4. Create Model
```
lightgbm = create_model("lightgbm")
import numpy as np
lgbms = [create_model("lightgbm", learning_rate=i) for i in np.arange(0.1, 1, 0.1)]
print(len(lgbms))
```
# 5. Tune Hyperparameters
```
help(tune_model)
tuned_lightgbm = tune_model(lightgbm,
n_iter=50,
optimize="RMSE",
#optimize="MAE",
)
tuned_lightgbm
```
# 6. Ensemble Model
```
dt = create_model("dt")
bagged_dt = ensemble_model(dt, n_estimators=50)
boosted_dt = ensemble_model(dt, method="Boosting")
```
# 7. Blend Models
```
blender = blend_models()
```
# 8. Stack Models
```
stacker = stack_models(
estimator_list=compare_models(
n_select=5,
fold=5,
whitelist=models(type="ensemble").index.tolist()
)
)
```
# 9. Analyze Model
```
plot_model(dt)
plot_model(dt, plot="error")
plot_model(dt, plot="feature")
evaluate_model(dt)
```
# 10. Interpret Model
## Shap cannot be used in Docker; it is not installed because it breaks the image
```
#interpret_model(lightgbm)
#interpret_model(lightgbm, plot="correlation")
#interpret_model(lightgbm, plot="reason", observation=12)
```
# 11. AutoML()
```
help(automl)
best = automl(optimize="MAE")
best
```
# 12. Predict Model
```
pred_holdouts = predict_model(lightgbm)
pred_holdouts.head()
new_data = data.copy()
new_data.drop(["charges"], axis=1, inplace=True)
predict_new = predict_model(best, data=new_data)
predict_new.head()
```
# 13. Save / Load Model
```
save_model(best, model_name="best-model")
from pycaret.regression import * # Since v2.0, models can be loaded without running setup()
loaded_bestmodel = load_model("best-model")
print(loaded_bestmodel)
from sklearn import set_config
set_config(display="diagram")
loaded_bestmodel[0]
from sklearn import set_config
set_config(display="text")
```
# 14. Deploy Model
```
#deploy_model(best, model_name="best-aws", authentication={"bucket": "pycaret-test"})
```
# 15. Get Config / Set Config
```
X_train = get_config("X_train")
X_train.head()
get_config("seed")
from pycaret.regression import set_config
set_config("seed", 999)
get_config("seed")
```
# 16. MLFlow UI
- Opened port 5000, but could not use the UI...
```
# !mlflow ui
```
# End
Thank you. For more information / tutorials on PyCaret, please visit https://www.pycaret.org
# Deploy and Distribute TensorFlow
In this notebook you will learn how to deploy TensorFlow models to TensorFlow Serving (TFS), using the REST API or the gRPC API, and how to train a model across multiple devices.
## Imports
```
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras
import time
print("python", sys.version)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
```

## Exercise 1 – Deploying a Model to TensorFlow Serving
## Save/Load a `SavedModel`
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.
X_test = X_test / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
MODEL_NAME = "my_fashion_mnist"
!rm -rf {MODEL_NAME}
import time
model_version = int(time.time())
model_path = os.path.join(MODEL_NAME, str(model_version))
os.makedirs(model_path)
tf.saved_model.save(model, model_path)
for root, dirs, files in os.walk(MODEL_NAME):
    indent = ' ' * root.count(os.sep)
    print('{}{}/'.format(indent, os.path.basename(root)))
    for filename in files:
        print('{}{}'.format(indent + ' ', filename))
!saved_model_cli show --dir {model_path}
!saved_model_cli show --dir {model_path} --tag_set serve
!saved_model_cli show --dir {model_path} --tag_set serve \
--signature_def serving_default
!saved_model_cli show --dir {model_path} --all
```
**Warning**: as you can see, the method name is empty. This is [a bug](https://github.com/tensorflow/tensorflow/issues/25235), hopefully it will be fixed shortly. In the meantime, you must use `keras.experimental.export()` instead of `tf.saved_model.save()`:
```
!rm -rf {MODEL_NAME}
model_path = keras.experimental.export(model, MODEL_NAME).decode("utf-8")
!saved_model_cli show --dir {model_path} --all
```
Let's write a few test instances to a `npy` file so we can pass them easily to our model:
```
X_new = X_test[:3]
np.save("my_fashion_mnist_tests.npy", X_new, allow_pickle=False)
input_name = model.input_names[0]
input_name
```
And now let's use `saved_model_cli` to make predictions for the instances we just saved:
```
!saved_model_cli run --dir {model_path} --tag_set serve \
--signature_def serving_default \
--inputs {input_name}=my_fashion_mnist_tests.npy
```
## TensorFlow Serving
Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run:
```bash
docker pull tensorflow/serving
docker run -it --rm -p 8501:8501 \
-v "`pwd`/my_fashion_mnist:/models/my_fashion_mnist" \
-e MODEL_NAME=my_fashion_mnist \
tensorflow/serving
```
Once you are finished using it, press Ctrl-C to shut down the server.
```
import json
input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": X_new.tolist(),
})
print(input_data_json[:200] + "..." + input_data_json[-200:])
```
Now let's use TensorFlow Serving's REST API to make predictions:
```
import requests
SERVER_URL = 'http://localhost:8501/v1/models/my_fashion_mnist:predict'
response = requests.post(SERVER_URL, data=input_data_json)
response.raise_for_status()
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
```
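To turn the returned class probabilities into hard predictions, take the argmax over the last axis. A minimal self-contained sketch (the `y_proba` values here are illustrative, not the actual server output):

```python
import numpy as np

# Illustrative probabilities for 3 instances over the 10 Fashion MNIST classes
y_proba = np.zeros((3, 10))
y_proba[0, 9] = 0.96  # instance 0 -> class 9 (ankle boot)
y_proba[1, 2] = 0.99  # instance 1 -> class 2 (pullover)
y_proba[2, 1] = 0.97  # instance 2 -> class 1 (trouser)

# Hard class predictions: index of the highest probability per row
y_pred = np.argmax(y_proba, axis=1)
print(y_pred)  # → [9 2 1]
```

The same one-liner applies unchanged to the `y_proba` array built from the REST response above.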
### Using Serialized Examples
```
serialized = []
for image in X_new:
image_data = tf.train.FloatList(value=image.ravel())
features = tf.train.Features(
feature={
"image": tf.train.Feature(float_list=image_data),
}
)
example = tf.train.Example(features=features)
serialized.append(example.SerializeToString())
[data[:100]+b'...' for data in serialized]
def parse_images(serialized):
expected_features = {
"image": tf.io.FixedLenFeature([28 * 28], dtype=tf.float32)
}
examples = tf.io.parse_example(serialized, expected_features)
return tf.reshape(examples["image"], (-1, 28, 28))
parse_images(serialized)
serialized_inputs = keras.layers.Input(shape=[], dtype=tf.string)
images = keras.layers.Lambda(lambda serialized: parse_images(serialized))(serialized_inputs)
y_proba = model(images)
ser_model = keras.models.Model(inputs=[serialized_inputs], outputs=[y_proba])
SER_MODEL_NAME = "my_ser_fashion_mnist"
!rm -rf {SER_MODEL_NAME}
ser_model_path = keras.experimental.export(ser_model, SER_MODEL_NAME).decode("utf-8")
!saved_model_cli show --dir {ser_model_path} --all
```
```bash
docker run -it --rm -p 8500:8500 -p 8501:8501 \
-v "`pwd`/my_ser_fashion_mnist:/models/my_ser_fashion_mnist" \
-e MODEL_NAME=my_ser_fashion_mnist \
tensorflow/serving
```
```
import base64
import json
ser_input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": [{"b64": base64.b64encode(data).decode("utf-8")}
for data in serialized],
})
print(ser_input_data_json[:200] + "..." + ser_input_data_json[-200:])
import requests
SER_SERVER_URL = 'http://localhost:8501/v1/models/my_ser_fashion_mnist:predict'
response = requests.post(SER_SERVER_URL, data=ser_input_data_json)
response.raise_for_status()
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
!python3 -m pip install --no-deps tensorflow-serving-api
import grpc
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
channel = grpc.insecure_channel('localhost:8500')
predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = SER_MODEL_NAME
request.model_spec.signature_name = "serving_default"
input_name = ser_model.input_names[0]
request.inputs[input_name].CopyFrom(tf.compat.v1.make_tensor_proto(serialized))
result = predict_service.Predict(request, 10.0)
result
output_name = ser_model.output_names[0]
output_name
shape = [dim.size for dim in result.outputs[output_name].tensor_shape.dim]
shape
y_proba = np.array(result.outputs[output_name].float_val).reshape(shape)
y_proba.round(2)
```

## Exercise 2 – Distributed Training
```
keras.backend.clear_session()
distribution = tf.distribute.MirroredStrategy()
with distribution.scope():
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=25)
```
# Fan-Tas-Tic test and debug
* Read switch inputs
* Make the LEDs blink, experiment with timing
* Fire the solenoids, experiment with intensities
* Try out quickfire rules
Make sure to connect to the `DEVICE` port on the TM4C123 eval board.
```
import os, serial
from time import sleep
from numpy import *
from colorsys import *
```
# Communication functions
```
def updateLed( sObj, dat=None, channel=0 ):
if dat is None:
nLeds = 1024
ledDat = bytearray( os.urandom( nLeds*3 ) )
elif type(dat) is int:
nLeds = dat
ledDat = bytearray( os.urandom( nLeds*3 ) )
else:
nLeds = len(dat)
ledDat = bytearray( dat )
sDat = bytes("LED {0} {1}\n".format(channel, len(ledDat)), "utf8") + ledDat
sObj.write( sDat )
```
# Open serial port connection
```
try:
s.close()
except:
pass
# Note that the baudrate is ignored (USB virtual serial port!)
s = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
```
# Get ID and Software Version
```
print(s.read_all()) #Clear receive buffer
s.write(b"\n*IDN?\n") #First \n clears send buffer
s.read_until()
```
# Get Switch state
```
#%%timeit #1000 loops, best of 3: 1.78 ms per loop
s.write(b"SW?\n")
s.read_all()
```
# Addressable LEDs
### Setup a lower speed WS2811 LED strand on CH1
```
s.write(b"LEC 1 2400000\n")
```
Datasheet spec. would be **1.6 Mbit**, but my LED strand glitches like hell with that setting.
I found the upper operation limit by trial and error at **3.1 Mbit** and the lower one at **1.7 Mbit**. So now I'm using their average.
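The chosen setting is simply the midpoint of those two empirically found limits:

```python
upper_mbit = 3.1  # highest bit rate at which the strand still works
lower_mbit = 1.7  # lowest bit rate at which the strand still works
midpoint = (upper_mbit + lower_mbit) / 2
print(int(midpoint * 1_000_000))  # → 2400000, the value passed to "LEC 1"
```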
### Setup high speed WS2812 LED strip on CH0
```
s.write(b"LEC 0 3200000\n")
```
# Benchmark LED throughput (worst case)
```
%%timeit
x = array(ones(256 * 3), dtype=uint8)
updateLed(s, x, 0)
updateLed(s, x, 1)
updateLed(s, x, 2)
# [USB only] 20.6 ms ± 5.91 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
# [USB + SPI send] 24.8 ms ± 3.92 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
It looks like the USB communication is the bottleneck at the moment. Note that the firmware can transmit on several channels simultaneously, but it will block if a channel that has not yet finished transmitting is updated again.
**TLDR:** If you need > 30 Hz refresh rate, do not connect more than 256 LEDs per output
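A quick sanity check of that 30 Hz figure, using the measured ~24.8 ms mean loop time from the benchmark above:

```python
bytes_per_channel = 256 * 3  # 256 RGB LEDs, 3 bytes each
loop_time_s = 0.0248         # measured: USB + SPI send, 3 channels of 256 LEDs
refresh_hz = 1 / loop_time_s
print(round(refresh_hz, 1))  # → 40.3, comfortably above 30 Hz
```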
# Play with LEDs
### Glitch test
Send the same values many times over. The LEDs should show an alternating faint pink / faint green and not flicker at all!
```
x = zeros(59 * 3, dtype=uint8 )
x[0:-1][::2] = 1
#x[-6:] = 1
for i in range(8000):
updateLed(s, x, 0)
updateLed(s, x, 1)
updateLed(s, x, 2)
```
### Turn ON one color after another
```
updateLed(s, zeros(1024*3, dtype=uint8), 1)
while(True):
x = zeros( 59*3, dtype=uint8 )
for i in range(len(x)):
x[i] = 25
# updateLed(s, x, 0)
updateLed(s, x, 1)
# updateLed(s, x, 2)
sleep(1/60)
```
### All LEDs same color
```
x = array( [10, 90, 80]*59, dtype=uint8 )
updateLed( s, x, 0 )
updateLed( s, x, 1 )
updateLed( s, x, 2 )
```
### Gamma corrected fade-UP
```
# Precompute quadratic brightness values
bVals = array( arange(33)**2/4, dtype=uint8 )
bVals[-1] = 255
for bVal in bVals:
x = array( [bVal, bVal, bVal]*5, dtype=uint8 )
updateLed( s, x, 0 )
sleep(0.03)
```
### Each LED a random color
```
# Set it once
x = array(random.randint(0,255,58*3),dtype=uint8)
updateLed(s, x, 0)
updateLed(s, x, 1)
updateLed(s, x, 2)
# Set it in a loop
while True:
x = array(random.randint(0,2,58*3)*25,dtype=uint8)
updateLed( s, x, 0 )
updateLed( s, x, 1 )
updateLed( s, x, 2 )
sleep(0.05)
```
# Rainbow
### Darken LEDs
```
s.write(b"LEC 1 2400000\n")
updateLed( s , zeros(70*3, dtype=uint8), 0 )
updateLed( s , zeros(70*3, dtype=uint8), 1 )
updateLed( s , zeros(70*3, dtype=uint8), 2 )
```
### Setup color values
```
nCols = 200
x = arange(nCols)/(nCols-1)
x2 = list(map( hsv_to_rgb, x, ones(nCols), ones(nCols)*1.0))
rawOut = array( array(x2) * 255, dtype=uint8 )
```
### Roll color values through the string
```
while(True):
updateLed( s, array(rawOut[:60]/5, dtype=uint8), 0 )
updateLed( s, rawOut[:60], 1 )
updateLed( s, rawOut[:60], 2 )
rawOut = roll(rawOut, 3)
sleep(1/60)
```
# Enable reporting of Switch Events
```
#SWE : <OnOff> En./Dis. reporting of switch events.
s.write(b"HI 0x48\n")
s.write(b"SWE 1\n")
s.write(b"SW?\n")
print(s.read_all())
```
# Play with Relay Outputs
### Flush serial input buffer
```
print(s.read_all())
s.write(b"\n*IDN?\n")
print(s.read_until())
```
### Unstuck relay
```
for i in range(30):
s.write(b"SOE 1\n")
sleep(0.05)
s.write(b"SOE 0\n")
sleep(0.05)
s.write(b"SOE 0\n")
```
### Flippers
```
for i in (0x3C, 0x3E):
s.write("OUT {0} 0 30 1000\n".format(i).encode("utf8"))
sleep(0.1)
```
### Enable 24 V solenoid power (careful!)
```
s.write(b"SOE 1\n")
```
### Set a Solenoid to a specific power
```
#OUT : <hwIndex> <PWMlow>
s.write(b"OUT 0x40 0\n")
```
### Trigger a solenoid pulse
```
#OUT : <hwIndex> <PWMlow> [tPulse] [PWMhigh]
s.write(b"OUT 0x40 0 20 2\n")
s.write(b"OUT 0x41 0 20 2\n")
s.write(b"OUT 0x42 0 20 2\n")
```
### Mushrooms
```
for i in (0x40, 0x41, 0x42):
s.write( "OUT {0} 0 30 2\n".format(i).encode("utf8") )
sleep( 0.1 )
s.write( b"OUT 0x3D 0 30 1000\n" )
```
# Setup quick-fire rules
Right flipper switch, negative edge --> Pulse right flipper solenoid for 100 ms
Add a 1000 ms hold-off time (triggers max. once a second).
```
#RUL <ID> <IDin> <IDout> <trHoldOff> <tPulse> <pwmOn> <pwmOff> <bPosEdge>
s.write(b"RUL 0 0x4F 0x3E 1000 100 1000 0 0\n")
```
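The positional `RUL` arguments are easy to mix up; a small helper that names them makes rule definitions self-documenting. The helper itself is an illustration, not part of the firmware protocol (parameters are printed in decimal, which the rule listings further down also use):

```python
def rul_command(rule_id, hw_index_in, hw_index_out, t_hold_off_ms,
                t_pulse_ms, pwm_on, pwm_off, pos_edge):
    """Build a RUL command: <ID> <IDin> <IDout> <trHoldOff> <tPulse> <pwmOn> <pwmOff> <bPosEdge>."""
    return "RUL {} {} {} {} {} {} {} {}\n".format(
        rule_id, hw_index_in, hw_index_out, t_hold_off_ms,
        t_pulse_ms, pwm_on, pwm_off, int(pos_edge))

# Right flipper switch (0x4F), neg. edge -> pulse right flipper coil (0x3E)
# for 100 ms with a 1000 ms hold-off, mirroring the command above
cmd = rul_command(0, 0x4F, 0x3E, 1000, 100, 1000, 0, False)
print(cmd)  # → RUL 0 79 62 1000 100 1000 0 0
```

`s.write(cmd.encode("utf8"))` would then send it, just like the `s.write` calls used elsewhere in this notebook.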
### Disable the rules
```
for i in range(16):
s.write("RULE {0} 0\n".format(i).encode("UTF8"))
```
### Rules for basic Flipper operation (Fan-Tas-Tic)
```
#RUL <ID><IDin><IDout><trHoldOff><tPulse><pwmOn><pwmOff><bPosEdge>
# Note that buttons are active low
# Flipper rules
# Attack + Hold on neg. edge
s.write(b"RUL 0 0x4C 0x3C 200 75 1500 500 0\n")
s.write(b"RUL 1 0x4F 0x3E 200 75 1500 500 0\n")
# Release on pos. edge
s.write(b"RUL 2 0x4C 0x3C 0 0 0 0 1\n")
s.write(b"RUL 3 0x4F 0x3E 0 0 0 0 1\n")
# Jet bumper rules
for rulId, hwIndexIn, hwIndexOut in zip( (4,5,6,7), (0x05, 0x04, 0x03, 0x1A), (0x40, 0x41, 0x42, 0x3D) ):
if hwIndexOut == 0x3D:
power = 1500
else:
power = 4
rulStr = "RUL {0} {1} {2} 0 20 {3} 0 1\n".format( rulId, hwIndexIn, hwIndexOut, power )
print( rulStr )
s.write( rulStr.encode("UTF8") )
# disable debouncing for the jet bumper inputs
#s.write( "DEB {0} 0\n".format(hwIndexIn).encode("UTF8") )
# Captive ball rules
for rulId, hwIndexIn, hwIndexOut in zip( (8,9,10), (0x16, 0x23, 0x00), (0x46, 0x47, 0x43) ):
rulStr = "RUL {0} {1} {2} 1000 75 3 0 1\n".format( rulId, hwIndexIn, hwIndexOut )
print( rulStr )
s.write( rulStr.encode("UTF8") )
```
```
#Flippers
RUL 0 0x4C 0x3C 200 75 3000 500 0
RUL 1 0x4F 0x3E 200 75 3000 500 0
RUL 2 0x4C 0x3C 0 0 0 0 1
RUL 3 0x4F 0x3E 0 0 0 0 1
#Jet bumpers
RUL 4 5 64 0 15 4 0 1
RUL 5 4 65 0 15 4 0 1
RUL 6 3 66 0 15 4 0 1
RUL 7 26 61 0 15 4000 0 1
#Captive holes
#R
RUL 8 0x16 0x46 500 75 3 0 1
#L
RUL 9 0x23 0x47 500 75 3 0 1
#T
RUL 10 0x00 0x43 500 75 3 0 1
```
```
s.write(b"SWE 0\n")
s.write(b"SOE 0\n")
s.close()
```
```
# SELECT DISTINCT ?personVal ?relationVal ?toPVal WHERE {
# ?s a tbio:Person .
# ?s ?p ?o .
# ?p rdfs:subPropertyOf ?familyOP .
# FILTER ( ?familyOP = tbio:hasFamilyRelation || ?familyOP = tbio:isFamilyRelationOf ) .
# BIND(STR(?s) AS ?personStr) .
# BIND(REPLACE(?personStr, "http://tbio.orient.cas.cz#", "") AS ?personVal) .
# BIND(STR(?p) AS ?relationStr) .
# BIND(REPLACE(?relationStr, "http://tbio.orient.cas.cz#", "") AS ?relationVal) .
# BIND(STR(?o) AS ?toPStr) .
# BIND(REPLACE(?toPStr, "http://tbio.orient.cas.cz#", "") AS ?toPVal) .
# }
import pandas as pd
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/RequestsFromTana/20190520'
filename = 'familyRelations_20190516.tsv'
read_filename = '{0}/{1}'.format(filepath, filename)
workDf = pd.read_csv(read_filename, delimiter='\t')
workDf = workDf.fillna('')
workDf.shape, workDf.head()
```
# Assign all relations
```
workList = []
for idx in range(0, len(workDf)):
row = workDf.loc[idx]
sPerson = str(row['?personVal'])
relation = str(row['?relationVal'])
tPerson = str(row['?toPVal'])
if sPerson == '' or tPerson == '':
continue
toRow = [sPerson, relation, tPerson]
if toRow not in workList:
workList.append(toRow)
else:
print("DUP!", toRow)
print(len(workList))
```
# Filter out inverse relations
```
def findInverse(INsPerson, INtPerson):
for work in workList:
sPerson = work[0]
relation = work[1]
tPerson = work[2]
# if INsPerson == sPerson and INtPerson == tPerson:
# # print(INsPerson, INtPerson, work)
# return relation
if INsPerson == tPerson and INtPerson == sPerson:
# print(INsPerson, INtPerson, work)
return relation
return ''
relationList = []
for work in workList:
sPerson = work[0]
relation = work[1]
tPerson = work[2]
if 'has' in relation:
relationList.append(work)
continue
OUTrelation = findInverse(sPerson, tPerson)
if OUTrelation == '':
relationList.append(work)
# if 'has' in OUTrelation:
# resList.append([sPerson, OUTrelation, tPerson])
# elif OUTrelation != '':
# resList.append(work)
print(len(relationList), relationList[0:5])
# orgDf = pd.DataFrame(resList, columns=['SPerson', 'Relation', 'TPerson'])
# orgDf.drop_duplicates(keep='first', inplace=True)
# orgDf.sort_values(by=['SPerson', 'TPerson'], inplace=True)
# orgDf.head()
# orgDf.count()
```
# Read person-family map table
```
familymembers = 'Familymembers_20190521.xlsx'
read_familymembers = '{0}/{1}'.format(filepath, familymembers)
fmDf = pd.read_excel(read_familymembers)
fmDf.shape, fmDf.head()
fmDic = {}
# familiesDic = []
for idx in range(0, len(fmDf)):
row = fmDf.loc[idx]
person = str(row['personStr'])
family = str(row['familyStr'])
if person not in fmDic:
fmDic[person] = family
elif family != fmDic[person]:
print("Dup:", person, family, fmDic[person])
# if family not in familiesDic:
# familiesDic.append(family)
# fmDic
# print(len(familiesDic), familiesDic)
def getFamilyName(INperson):
if INperson not in fmDic:
return ''
return fmDic[INperson]
relationFamilyList = []
for row in relationList:
sPerson = row[0]
relation = row[1]
tPerson = row[2]
sFamily = getFamilyName(sPerson)
if sFamily == '':
continue
tFamily = getFamilyName(tPerson)
if tFamily == '':
continue
if sFamily != tFamily:
relationFamilyList.append([sPerson, relation, tPerson, sFamily, tFamily])
print(len(relationFamilyList), relationFamilyList[0:5])
```
# Remove parallel relations
```
# marriageList = []
resList = []
def duplicate(INsPerson, INtPerson):
# for row in marriageList:
for row in resList:
sPerson = row[0]
tPerson = row[2]
if sPerson == INsPerson and INtPerson == tPerson:
# print(INsPerson, INtPerson)
return True
if tPerson == INsPerson and INtPerson == sPerson:
# print(INsPerson, INtPerson)
return True
return False
for row in relationFamilyList:
sPerson = row[0]
tPerson = row[2]
if duplicate(sPerson, tPerson) == False:
resList.append(row)
# marriageList.append([sPerson, tPerson])
personList = []
familyList = []
for res in resList:
sPerson = res[0]
# relation = res[1]
tPerson = res[2]
sFamily = res[3]
tFamily = res[4]
if sPerson not in personList:
personList.append(sPerson)
if tPerson not in personList:
personList.append(tPerson)
if sFamily not in familyList:
familyList.append(sFamily)
if tFamily not in familyList:
familyList.append(tFamily)
print(len(personList), personList[0:3])
print(len(familyList), familyList[0:3])
personDf = pd.DataFrame(personList)
personDf = personDf.rename(columns = {0:'PersonName'})
personDf.sort_values(by=['PersonName'], inplace=True)
personDf.reset_index(inplace=True, drop=True)
personDf['ID'] = personDf.index + 1000
personDf.head()
familyDf = pd.DataFrame(familyList)
familyDf = familyDf.rename(columns = {0:'FamilyName'})
familyDf.sort_values(by=['FamilyName'], inplace=True)
familyDf.reset_index(inplace=True, drop=True)
familyDf['ID'] = familyDf.index + 2000
familyDf.head()
```
# map person/family to ids
```
def getPersonID(INperson):
return personDf.loc[personDf['PersonName'] == INperson]['ID'].item()
def getFamilyID(INfamily):
return familyDf.loc[familyDf['FamilyName'] == INfamily]['ID'].item()
finalList = []
for res in resList:
sPerson = res[0]
relation = res[1]
tPerson = res[2]
sFamily = res[3]
tFamily = res[4]
sPersonID = getPersonID(sPerson)
tPersonID = getPersonID(tPerson)
sFamilyID = getFamilyID(sFamily)
tFamilyID = getFamilyID(tFamily)
row = [sPersonID, tPersonID, sFamilyID, tFamilyID, 'Undirected', sPerson, tPerson, sFamily, tFamily, relation]
if row not in finalList:
finalList.append(row)
print(len(finalList), finalList[0:5])
finalDf = pd.DataFrame(finalList, columns=['SourcePersonID', 'TargetPersonID', 'SourceFamilyID',
'TargetFamilyID', 'Type', 'SourcePersonName',
'TargetPersonName', 'SourceFamilyName', 'TargetFamilyName',
'Relation'])
finalDf.head()
write_file_to = '{0}/{1}'.format(filepath, 'familyRelations_20190521_v6.xlsx')
finalDf.to_excel(write_file_to, index=False)
```
# Write Person/Family node IDs
```
write_file_to = '{0}/{1}'.format(filepath, 'familyRelations_Node_20190521_v6.xlsx')
with pd.ExcelWriter(write_file_to) as writer:
personDf.to_excel(writer, 'PersonNodes', index=False)
familyDf.to_excel(writer, 'FamilyNodes', index=False)
writer.save()
```
----
<img src="../../../files/refinitiv.png" width="20%" style="vertical-align: top;">
# Data Library for Python
----
## Content layer - Pricing snapshot
This notebook demonstrates how to retrieve Pricing snapshot data.
#### Learn more
To learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) into the Refinitiv Developer Community portal you will have free access to a number of learning materials like
[Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start),
[Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning),
[Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs)
and much more.
#### Getting Help and Support
If you have any questions regarding using the API, please post them on
the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html).
The Refinitiv Developer Community will be happy to help.
## Set the configuration file location
For a better ease of use, you have the option to set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
```
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
```
## Some Imports to start with
```
import refinitiv.data as rd
from refinitiv.data.content import pricing
```
## Open the data session
The open_session() function creates and open sessions based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
```
# Open the session named 'platform.rdp' defined in the 'sessions' section of the refinitiv-data.config.json configuration file
rd.open_session('platform.rdp')
```
## Retrieve data
### Get Price Snapshot
```
response = rd.content.pricing.Definition(
['EUR=', 'GBP=', 'JPY=', 'CAD='],
fields=['BID', 'ASK']
).get_data()
response.data.df
```
### Alternative method for Snapshot data
If the above code generates an error, this could be because you are not licensed for the RDP Snapshot API OR you are using a platform session only connected to a Real-Time Distribution System (a.k.a. TREP).
If this is the case, then the following can be used to request a Snapshot from the Streaming data feed (assuming you have a Streaming Data licence) - note the **get_stream()** call
```
non_streaming = rd.content.pricing.Definition(
['EUR=', 'GBP=', 'JPY=', 'CAD='],
fields=['BID', 'ASK']
).get_stream()
# We want to just snap the current prices, don't need updates
# Open the instrument in non-streaming mode
non_streaming.open(with_updates=False)
# Snapshot the prices at the time of the open() call
non_streaming.get_snapshot()
```
### Alternative ways of accessing instruments + values
#### Direct Access to fields
```
non_streaming['EUR=']['BID']
eur = non_streaming['EUR=']
eur['ASK']
```
#### Iterate on fields
```
print('JPY=')
for field_name, field_value in non_streaming['JPY=']:
print(f"\t{field_name} : {field_value}")
```
#### Iterate on instruments and fields
```
for instrument in non_streaming:
print(instrument.name)
for field_name, field_value in instrument:
print(f"\t{field_name} : {field_value}")
```
### Close the stream
```
non_streaming.close()
```
## Close the session
```
rd.close_session()
```
```
from local.torch_basics import *
from local.test import *
from local.core import *
from local.layers import *
from local.data.all import *
from local.optimizer import *
from local.learner import *
from local.metrics import *
from local.text.all import *
from local.callback.rnn import *
from local.callback.all import *
from local.notebook.showdoc import *
```
# Transfer learning in text
> How to fine-tune a language model and train a classifier
## Finetune a pretrained Language Model
First we get our data and tokenize it.
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
df_tok,count = tokenize_df(df, 'text')
```
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that `tokenize_df` returns the count of the words in the corpus to make it easy to create a vocabulary.
```
splits = RandomSplitter()(range_of(df_tok))
vocab = make_vocab(count)
dsrc = DataSource(df_tok, [[attrgetter("text"), Numericalize(vocab)]], splits=splits, dl_type=LMDataLoader)
```
Then we use that `DataSource` to create a `DataBunch`. Here we need the `LMDataLoader` subclass of `TfmdDL`, which concatenates all the texts in a source (with a shuffle at each epoch for the training set), splits the stream into `bs` chunks, then reads continuously through it.
```
dbunch = dsrc.databunch(bs=64, seq_len=72, after_batch=Cuda)
dbunch.show_batch()
```
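The concatenate-then-chunk scheme an LM dataloader uses can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not fastai's actual implementation:

```python
import numpy as np

stream = np.arange(100)  # stand-in for the concatenated token ids of all texts
bs, seq_len = 4, 5

# Drop the remainder so the stream splits evenly into bs parallel chunks
usable = (len(stream) // bs) * bs
chunks = stream[:usable].reshape(bs, -1)  # shape (4, 25): each row is read continuously

# One batch: seq_len tokens as input, the same tokens shifted by one as target
x = chunks[:, 0:seq_len]
y = chunks[:, 1:seq_len + 1]
print(x.shape, y.shape)  # → (4, 5) (4, 5)
print(x[0], y[0])        # → [0 1 2 3 4] [1 2 3 4 5]
```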
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
```
learn = language_model_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy, Perplexity()], path=path, opt_func = partial(Adam, wd=0.1)).to_fp16()
learn.freeze()
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7,0.8))
learn.unfreeze()
learn.fit_one_cycle(4, 1e-2, moms=(0.8,0.7,0.8))
```
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
```
learn.show_results()
learn.save_encoder('enc1')
```
## Use it to train a classifier
For classification, we need two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
```
splits = RandomSplitter()(range_of(df_tok))
dsrc = DataSource(df_tok, splits=splits, tfms=[
[attrgetter("text"), Numericalize(vocab)],
[attrgetter("label"), Categorize()]], dl_type=SortedDL)
```
We once again use a subclass of `TfmdDL` for the dataloaders, since we want to sort the texts by order of length (sortish for the training set). We also use `pad_input` to create batches from texts of different lengths.
```
dbunch = dsrc.databunch(before_batch=pad_input, after_batch=Cuda)
dbunch.show_batch(max_n=2)
```
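Sorting by length before batching, then padding each batch only up to its own longest text, is what keeps padding waste low. A toy sketch of the idea (simplified, not the actual `SortedDL`/`pad_input` code):

```python
texts = [[1, 2, 3, 4, 5, 6], [7, 8], [9, 10, 11], [12]]
pad_idx, bs = 0, 2

# Sort indices by length (descending), then pad each batch to its own max length
order = sorted(range(len(texts)), key=lambda i: -len(texts[i]))
batches = []
for b in range(0, len(order), bs):
    group = [texts[i] for i in order[b:b + bs]]
    width = max(len(t) for t in group)
    batches.append([t + [pad_idx] * (width - len(t)) for t in group])

print(batches[0])  # → [[1, 2, 3, 4, 5, 6], [9, 10, 11, 0, 0, 0]]
print(batches[1])  # → [[7, 8], [12, 0]]
```

Grouping similarly sized texts together means the second batch is only 2 tokens wide instead of 6.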
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
```
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, opt_func=Adam, drop_mult=0.5)
learn = learn.load_encoder('enc1')
```
Then we can train with gradual unfreezing and differential learning rates.
```
learn.fit_one_cycle(4, moms=(0.8,0.7,0.8))
learn.unfreeze()
learn.opt = learn.create_opt()
learn.fit_one_cycle(8, slice(1e-5,1e-3), moms=(0.8,0.7,0.8))
learn.show_results(max_n=5)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# Generate a unique seed
my_code = "Пушкарёва"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
np.random.seed(my_seed)
# Generate a random, normally distributed sample
N = 10000
sample = np.random.normal(0, 1, N)
plt.hist(sample, bins=100)
plt.show()
# Build the array of target class labels: 0 if the value in sample is below t, 1 otherwise
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
plt.hist(target_labels, bins=100)
plt.show()
# Using the templates below (or your own approach, if you prefer),
# implement functions to compute accuracy, precision, recall and F1
def confusion_matrix(target_labels, model_labels) :
tp = 0
tn = 0
fp = 0
fn = 0
for i in range(len(target_labels)) :
if target_labels[i] == 1 and model_labels[i] == 1 :
tp += 1
if target_labels[i] == 0 and model_labels[i] == 0 :
tn += 1
if target_labels[i] == 0 and model_labels[i] == 1 :
fp += 1
if target_labels[i] == 1 and model_labels[i] == 0 :
fn += 1
return tp, tn, fp, fn
def metrics_list(target_labels, model_labels):
metrics_result = []
metrics_result.append(sk.metrics.accuracy_score(target_labels, model_labels))
metrics_result.append(sk.metrics.precision_score(target_labels, model_labels))
metrics_result.append(sk.metrics.recall_score(target_labels, model_labels))
metrics_result.append(sk.metrics.f1_score(target_labels, model_labels))
return metrics_result
# First experiment: t = 0, the model returns 0 or 1 with 50% probability each
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
# Compute and print the accuracy, precision, recall and F1 metrics.
metrics_list(target_labels, model_labels)
# Second experiment: t = 0, the model returns 0 with probability 25% and 1 with probability 75%
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
labels = np.random.randint(4, size=N)
model_labels = np.array([0 if i == 0 else 1 for i in labels])
np.random.shuffle(model_labels)
# Compute and print the accuracy, precision, recall and F1 metrics.
metrics_list(target_labels, model_labels)
# Analyze which of the metrics are applicable in the first and second experiments.
# Third experiment: t = 2, the model returns 0 or 1 with 50% probability each
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
# Compute and print the accuracy, precision, recall and F1 metrics.
metrics_list(target_labels, model_labels)
# Fourth experiment: t = 2, the model always returns 0
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.zeros(N)
# Compute and print the accuracy, precision, recall and F1 metrics.
metrics_list(target_labels, model_labels)
# Analyze which of the metrics are applicable in the third and fourth experiments.
```
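As a cross-check for the sklearn numbers, the four metrics follow directly from the tp/tn/fp/fn counts of a confusion-matrix helper like the one defined above. A minimal self-contained sketch (the function name is illustrative):

```python
def metrics_from_counts(tp, tn, fp, fn):
    # Standard definitions; guard against empty denominators
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return [accuracy, precision, recall, f1]

# tp=50, tn=30, fp=10, fn=10:
# accuracy 0.8, precision ≈ 0.833, recall ≈ 0.833, F1 ≈ 0.833
print(metrics_from_counts(50, 30, 10, 10))
```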
# TSS
```
from collections import defaultdict
import warnings
import logging
import gffutils
import pybedtools
import pandas as pd
import copy
import re
from gffutils.pybedtools_integration import tsses
logging.basicConfig(level=logging.INFO)
gencode_gtf = '/home/cmb-panasas2/skchoudh/genomes/C_albicans_SC5314/Assembly22/annotation/C_albicans_SC5314_version_A22-s07-m01-r50_features.encode.gtf'
gencode_gtf_db = '/home/cmb-panasas2/skchoudh/genomes/C_albicans_SC5314/Assembly22/annotation/C_albicans_SC5314_version_A22-s07-m01-r50_features.encode.gtf.db'
prefix = '/home/cmb-panasas2/skchoudh/genomes/C_albicans_SC5314/Assembly22/annotation/C_albicans_SC5314_version_A22-s07-m01-r50_features.encode.gffutils'
chrsizes = '/home/cmb-panasas2/skchoudh/genomes/C_albicans_SC5314/Assembly22/fasta_v50/C_albicans_SC5314_version_A22-s07-m01-r50_chromosomes_clean_records.sizes'
def create_gene_dict(db):
'''
Store each feature line db.all_features() as a dict of dicts
'''
gene_dict = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
for line_no, feature in enumerate(db.all_features()):
gene_ids = feature.attributes['gene_id']
feature_type = feature.featuretype
if feature_type == 'gene':
if len(gene_ids)!=1:
logging.warning('Found multiple gene_ids on line {} in gtf'.format(line_no))
break
else:
gene_id = gene_ids[0]
gene_dict[gene_id]['gene'] = feature
else:
transcript_ids = feature.attributes['transcript_id']
for gene_id in gene_ids:
for transcript_id in transcript_ids:
gene_dict[gene_id][transcript_id][feature_type].append(feature)
return gene_dict
db = gffutils.create_db(gencode_gtf, dbfn=gencode_gtf_db, keep_order=True,
merge_strategy='merge', force=True)
db = gffutils.FeatureDB(gencode_gtf_db)
gene_dict = create_gene_dict(db)
def get_gene_list(gene_dict):
return list(set(gene_dict.keys()))
def get_UTR_regions(gene_dict, gene_id, transcript, cds):
if len(cds)==0:
return [], []
utr5_regions = []
utr3_regions = []
utrs = gene_dict[gene_id][transcript]['UTR']
first_cds = cds[0]
last_cds = cds[-1]
for utr in utrs:
## Push all cds at once
## Sort later to remove duplicates
strand = utr.strand
if strand == '+':
if utr.stop < first_cds.start:
utr.feature_type = 'five_prime_UTR'
utr5_regions.append(utr)
elif utr.start > last_cds.stop:
utr.feature_type = 'three_prime_UTR'
utr3_regions.append(utr)
else:
#raise RuntimeError('Error with cds: {}\t {} \t {}'.format(utr, last_cds, first_cds))
print('Error with cds: {}\t {} \t {}'.format(utr, last_cds, first_cds))
elif strand == '-':
if utr.stop < first_cds.start:
utr.feature_type = 'three_prime_UTR'
utr3_regions.append(utr)
elif utr.start > last_cds.stop:
utr.feature_type = 'five_prime_UTR'
utr5_regions.append(utr)
else:
#raise RuntimeError('Error with cds')
print('Error with cds: {}\t {} \t {}'.format(utr, last_cds, first_cds))
return utr5_regions, utr3_regions
def create_bed(regions, bedtype='0'):
'''Create bed from list of regions
bedtype: 0 or 1
0-Based or 1-based coordinate of the BED
'''
bedstr = ''
for region in regions:
assert len(region.attributes['gene_id']) == 1
## GTF start is 1-based, so shift by one while writing
## to 0-based BED format
if bedtype == '0':
start = region.start - 1
else:
start = region.start
bedstr += '{}\t{}\t{}\t{}\t{}\t{}\n'.format(region.chrom,
start,
region.stop,
re.sub('\.\d+', '', region.attributes['gene_id'][0]),
'.',
region.strand)
return bedstr
def rename_regions(regions, gene_id):
regions = list(regions)
if len(regions) == 0:
return []
for region in regions:
region.attributes['gene_id'] = gene_id
return regions
def merge_regions(db, regions):
if len(regions) == 0:
return []
merged = db.merge(sorted(list(regions), key=lambda x: x.start))
return merged
def merge_regions_nostrand(db, regions):
if len(regions) == 0:
return []
merged = db.merge(sorted(list(regions), key=lambda x: x.start), ignore_strand=True)
return merged
utr5_bed = ''
utr3_bed = ''
gene_bed = ''
exon_bed = ''
intron_bed = ''
start_codon_bed = ''
stop_codon_bed = ''
cds_bed = ''
gene_list = []
for gene_id in get_gene_list(gene_dict):
gene_list.append(gene_dict[gene_id]['gene'])
utr5_regions, utr3_regions = [], []
exon_regions, intron_regions = [], []
    start_codon_regions, stop_codon_regions = [], []
cds_regions = []
for feature in gene_dict[gene_id].keys():
if feature == 'gene':
continue
cds = list(gene_dict[gene_id][feature]['CDS'])
exons = list(gene_dict[gene_id][feature]['exon'])
merged_exons = merge_regions(db, exons)
introns = db.interfeatures(merged_exons)
utr5_region, utr3_region = get_UTR_regions(gene_dict, gene_id, feature, cds)
utr5_regions += utr5_region
utr3_regions += utr3_region
exon_regions += exons
intron_regions += introns
cds_regions += cds
merged_utr5 = merge_regions(db, utr5_regions)
renamed_utr5 = rename_regions(merged_utr5, gene_id)
merged_utr3 = merge_regions(db, utr3_regions)
renamed_utr3 = rename_regions(merged_utr3, gene_id)
merged_exons = merge_regions(db, exon_regions)
renamed_exons = rename_regions(merged_exons, gene_id)
merged_introns = merge_regions(db, intron_regions)
renamed_introns = rename_regions(merged_introns, gene_id)
merged_cds = merge_regions(db, cds_regions)
renamed_cds = rename_regions(merged_cds, gene_id)
utr3_bed += create_bed(renamed_utr3)
utr5_bed += create_bed(renamed_utr5)
exon_bed += create_bed(renamed_exons)
intron_bed += create_bed(renamed_introns)
cds_bed += create_bed(renamed_cds)
gene_bed = create_bed(gene_list)
gene_bedtool = pybedtools.BedTool(gene_bed, from_string=True)
utr5_bedtool = pybedtools.BedTool(utr5_bed, from_string=True)
utr3_bedtool = pybedtools.BedTool(utr3_bed, from_string=True)
exon_bedtool = pybedtools.BedTool(exon_bed, from_string=True)
intron_bedtool = pybedtools.BedTool(intron_bed, from_string=True)
cds_bedtool = pybedtools.BedTool(cds_bed, from_string=True)
gene_bedtool.remove_invalid().sort().saveas('{}.genes.bed'.format(prefix))
utr5_bedtool.remove_invalid().sort().saveas('{}.UTR5.bed'.format(prefix))
utr3_bedtool.remove_invalid().sort().saveas('{}.UTR3.bed'.format(prefix))
exon_bedtool.remove_invalid().sort().saveas('{}.exon.bed'.format(prefix))
intron_bedtool.remove_invalid().sort().saveas('{}.intron.bed'.format(prefix))
cds_bedtool.remove_invalid().sort().saveas('{}.cds.bed'.format(prefix))
for gene_id in get_gene_list(gene_dict):
start_codons = []
stop_codons = []
for start_codon in db.children(gene_id, featuretype='start_codon'):
## 1 -based stop
## 0-based start handled while converting to bed
start_codon.stop = start_codon.start
start_codons.append(start_codon)
for stop_codon in db.children(gene_id, featuretype='stop_codon'):
stop_codon.start = stop_codon.stop
stop_codon.stop = stop_codon.stop+1
stop_codons.append(stop_codon)
merged_start_codons = merge_regions(db, start_codons)
renamed_start_codons = rename_regions(merged_start_codons, gene_id)
merged_stop_codons = merge_regions(db, stop_codons)
renamed_stop_codons = rename_regions(merged_stop_codons, gene_id)
start_codon_bed += create_bed(renamed_start_codons)
stop_codon_bed += create_bed(renamed_stop_codons)
start_codon_bedtool = pybedtools.BedTool(start_codon_bed, from_string=True)
stop_codon_bedtool = pybedtools.BedTool(stop_codon_bed, from_string=True)
start_codon_bedtool.remove_invalid().sort().saveas('{}.start_codon.bed'.format(prefix))
stop_codon_bedtool.remove_invalid().sort().saveas('{}.stop_codon.bed'.format(prefix))
## TSS
polyA_sites_bed = ''
tss_sites_bed = ''
for gene_id in get_gene_list(gene_dict):
tss_sites = []
polyA_sites = []
for transcript in db.children(gene_id, featuretype='transcript'):
start_t = copy.deepcopy(transcript)
stop_t = copy.deepcopy(transcript)
start_t.stop = start_t.start + 1
stop_t.start = stop_t.stop
if transcript.strand == '-':
start_t, stop_t = stop_t, start_t
polyA_sites.append(start_t)
tss_sites.append(stop_t)
merged_polyA_sites = merge_regions(db, polyA_sites)
renamed_polyA_sites = rename_regions(merged_polyA_sites, gene_id)
merged_tss_sites = merge_regions(db, tss_sites)
renamed_tss_sites = rename_regions(merged_tss_sites, gene_id)
polyA_sites_bed += create_bed(renamed_polyA_sites)
tss_sites_bed += create_bed(renamed_tss_sites)
polyA_sites_bedtool = pybedtools.BedTool(polyA_sites_bed, from_string=True)
tss_sites_bedtool = pybedtools.BedTool(tss_sites_bed, from_string=True)
polyA_sites_bedtool.remove_invalid().sort().saveas('{}.polyA_sites.bed'.format(prefix))
tss_sites_bedtool.remove_invalid().sort().saveas('{}.tss_sites.bed'.format(prefix))
```
# TSS
```
tss = tsses(db, as_bed6=True, merge_overlapping=False)
tss.remove_invalid().sort().saveas('{}.tss_temp.bed'.format(prefix))
promoter = tss.slop(l=1000, r=1000, s=True, g=chrsizes)
promoter.remove_invalid().sort().saveas('{}.promoter.1000.bed'.format(prefix))
for l in [1000, 2000, 3000, 4000, 5000]:
promoter = tss.slop(l=l, r=l, s=True, g=chrsizes)
promoter.remove_invalid().sort().saveas('{}.promoter.{}.bed'.format(prefix, l))
for x in db.featuretypes():
print(x)
for gene_id in get_gene_list(gene_dict):
for transcript in db.children(gene_id, featuretype='transcript'):
#print(transcript.attributes)
pass
chrom_sizes = pd.read_table('/home/cmb-06/as/skchoudh/genomes/C_albicans_SC5314/Assembly22/fasta_v50/C_albicans_SC5314_version_A22-s07-m01-r50_chromosomes_clean_records.sizes', names=['chrom', 'size']).set_index('chrom')
chrom_sizes
def clean_gff(db, chrom_sizes, gffout):
    # check if the coordinates are within the chromosome's boundaries
with open(gffout, 'w') as f:
for feature in db.all_features():
if feature.stop <= chrom_sizes.loc[feature.chrom]['size']:
f.write('{}\n'.format(feature))
clean_gff(db, chrom_sizes, '/home/cmb-panasas2/skchoudh/genomes/C_albicans_SC5314/Assembly22/annotation/C_albicans_SC5314_version_A22-s07-m01-r50_features.encode.cleaned.gtf')
for feature in db.all_features():
    print(feature.stop)
    print(chrom_sizes.loc[feature.chrom]['size'])
break
```
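To make the coordinate shift in `create_bed` above concrete, here is a minimal, self-contained sketch of the GTF (1-based, inclusive) to BED (0-based, half-open) conversion it performs. The `Region` class is a hypothetical stand-in for a `gffutils` feature object, used only for illustration.

```python
import re

# Hypothetical stand-in for a gffutils feature (illustration only).
class Region:
    def __init__(self, chrom, start, stop, gene_id, strand):
        self.chrom, self.start, self.stop = chrom, start, stop
        self.attributes = {'gene_id': [gene_id]}
        self.strand = strand

def to_bed_line(region):
    # GTF is 1-based and inclusive; BED is 0-based and half-open,
    # so only the start coordinate shifts by one.
    start = region.start - 1
    # Strip any trailing version suffix from the gene id (e.g. ".4").
    gene_id = re.sub(r'\.\d+', '', region.attributes['gene_id'][0])
    return '{}\t{}\t{}\t{}\t{}\t{}'.format(
        region.chrom, start, region.stop, gene_id, '.', region.strand)

# A 1-based GTF interval [100, 200] becomes the BED interval [99, 200).
print(to_bed_line(Region('chr1', 100, 200, 'ENSG00000123.4', '+')))
```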

# A simple pipeline using hypergroup to perform community detection and network analysis
A social network of a [karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) was studied by Wayne W. Zachary [1] over a period of three years, from 1970 to 1972. The network captures 34 members of the club, documenting 78 pairwise links between members who interacted outside it. During the study a conflict arose between the administrator "John A" and the instructor "Mr. Hi" (pseudonyms), which split the club in two. Half of the members formed a new club around Mr. Hi; members of the other half found a new instructor or gave up karate. Based on the collected data, Zachary correctly assigned all but one member of the club to the group they actually joined after the split.
[1] W. Zachary, An information flow model for conflict and fission in small groups, Journal of Anthropological Research 33, 452-473 (1977)
## Data Preparation
### Import packages: SAS Wrapper for Analytic Transfer and open source libraries
```
import swat
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
# Also import networkx used for rendering a network
import networkx as nx
%matplotlib inline
```
### Connect to Cloud Analytic Services in SAS Viya
```
s = swat.CAS('http://cas.mycompany.com:8888') # REST API
```
### Load the action set for hypergroup
```
s.loadactionset('hypergroup')
```
### Load data into CAS
The data set is taken from https://en.wikipedia.org/wiki/Zachary%27s_karate_club.
```
df = pd.DataFrame.from_records([[2,1],[3,1],[3,2],[4,1],[4,2],[4,3],[5,1],[6,1],[7,1],[7,5],[7,6],[8,1],[8,2],[8,3],[8,4],[9,1],[9,3],[10,3],[11,1],[11,5],[11,6],[12,1],[13,1],[13,4],[14,1],[14,2],[14,3],[14,4],[17,6],[17,7],[18,1],[18,2],[20,1],[20,2],[22,1],[22,2],[26,24],[26,25],[28,3],[28,24],[28,25],[29,3],[30,24],[30,27],[31,2],[31,9],[32,1],[32,25],[32,26],[32,29],[33,3],[33,9],[33,15],[33,16],[33,19],[33,21],[33,23],[33,24],[33,30],[33,31],[33,32],[34,9],[34,10],[34,14],[34,15],[34,16],[34,19],[34,20],[34,21],[34,23],[34,24],[34,27],[34,28],[34,29],[34,30],[34,31],[34,32],[34,33]],
columns=['FROM','TO'])
df['SOURCE'] = df['FROM'].astype(str)
df['TARGET'] = df['TO'].astype(str)
df.head()
```
**Hypergroup** doesn't support numeric source and target columns - so make sure to cast them as varchars.
```
if s.tableexists('karate').exists:
s.CASTable('KARATE').droptable()
dataset = s.upload(df,
importoptions=dict(filetype='csv',
vars=[dict(type='double'),
dict(type='double'),
dict(type='varchar'),
dict(type='varchar')]),
casout=dict(name='KARATE', promote=True)).casTable
```
## Data Exploration
### Get to know your data (what are variables?)
```
dataset.head(5)
dataset.summary()
```
### Graph rendering utility
```
def renderNetworkGraph(filterCommunity=-1, size=18, sizeVar='_HypGrp_',
colorVar='', sizeMultipler=500, nodes_table='nodes',
edges_table='edges'):
''' Build an array of node positions and related colors based on community '''
nodes = s.CASTable(nodes_table)
if filterCommunity >= 0:
nodes = nodes.query('_Community_ EQ %F' % filterCommunity)
nodes = nodes.to_frame()
nodePos = {}
nodeColor = {}
nodeSize = {}
communities = []
i = 0
for nodeId in nodes._Value_:
nodePos[nodeId] = (nodes._AllXCoord_[i], nodes._AllYCoord_[i])
if colorVar:
nodeColor[nodeId] = nodes[colorVar][i]
if nodes[colorVar][i] not in communities:
communities.append(nodes[colorVar][i])
nodeSize[nodeId] = max(nodes[sizeVar][i],0.1)*sizeMultipler
i += 1
communities.sort()
# Build a list of source-target tuples
edges = s.CASTable(edges_table)
if filterCommunity >= 0:
edges = edges.query('_SCommunity_ EQ %F AND _TCommunity_ EQ %F' %
(filterCommunity, filterCommunity))
edges = edges.to_frame()
edgeTuples = []
for i, p in enumerate(edges._Source_):
edgeTuples.append( (edges._Source_[i], edges._Target_[i]) )
# Add nodes and edges to the graph
plt.figure(figsize=(size,size))
graph = nx.DiGraph()
graph.add_edges_from(edgeTuples)
# Size mapping
getNodeSize=[nodeSize[v] for v in graph]
# Color mapping
jet = cm = plt.get_cmap('jet')
getNodeColor=None
if colorVar:
getNodeColor=[nodeColor[v] for v in graph]
cNorm = colors.Normalize(vmin=min(communities), vmax=max(communities))
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)
# Using a figure here to work-around the fact that networkx doesn't
# produce a labelled legend
f = plt.figure(1)
ax = f.add_subplot(1,1,1)
for community in communities:
ax.plot([0],[0], color=scalarMap.to_rgba(community),
label='Community %s' % '{:2.0f}'.format(community), linewidth=10)
# Render the graph
nx.draw_networkx_nodes(graph, nodePos, node_size=getNodeSize,
node_color=getNodeColor, cmap=jet)
nx.draw_networkx_edges(graph, nodePos, width=1, alpha=0.5)
nx.draw_networkx_labels(graph, nodePos, font_size=11, font_family='sans-serif')
if len(communities) > 0:
plt.legend(loc='upper left', prop={'size':11})
plt.title('Zachary Karate Club social network', fontsize=30)
plt.axis('off')
plt.show()
```
### Execute community and hypergroup detection
```
# Create output table objects
edges = s.CASTable('edges', replace=True)
nodes = s.CASTable('nodes', replace=True)
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
edges = edges,
vertices = nodes
)
renderNetworkGraph(size=10, sizeMultipler=2000)
```
>**Note:** The network of the Zachary karate club. Node 1 represents the instructor and node 34 the club president.
```
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
community = True,
edges = edges,
vertices = nodes
)
```
How many hypergroups and communities do we have?
```
nodes.distinct()
nodes.summary()
```
### Basic community analysis
What are the biggest communities?
```
topKOut = s.CASTable('topKOut', replace=True)
nodes[['_Community_']].topk(
aggregator = 'N',
topK = 4,
casOut = topKOut
)
topKOut = topKOut.sort_values('_Rank_').head(10)
topKOut.columns
nCommunities = len(topKOut)
ind = np.arange(nCommunities) # the x locations for the groups
plt.figure(figsize=(8,4))
p1 = plt.bar(ind + 0.2, topKOut._Score_, 0.5, color='orange', alpha=0.75)
plt.ylabel('Vertices', fontsize=12)
plt.xlabel('Community', fontsize=12)
plt.title('Number of nodes for the top %s communities' % '{:2.0f}'.format(nCommunities))
plt.xticks(ind + 0.2, topKOut._Fmtvar_)
plt.show()
```
>**Note:** This shows that the biggest communities have up to 18 vertices.
What nodes belong to community 1?
```
nodes.query('_Community_ EQ 1').head(5)
```
What edges do we have?
```
edges.head(5)
```
### Render the network graph
```
renderNetworkGraph(size=10, colorVar='_Community_', sizeMultipler=2000)
```
### Analyze node centrality
How important is a user in the network?
```
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
community = True,
centrality = True,
mergeCommSmallest = True,
allGraphs = True,
graphPartition = True,
scaleCentralities = 'central1', # Returns centrality values closer to 1 in the center
edges = edges,
vertices = nodes
)
nodes.head()
```
Betweenness centrality quantifies the number of times a node acts as a bridge along the shortest path(s) between two other nodes. As such, it describes the importance of a node in a network.
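As an aside, the same statistic can be cross-checked outside CAS with `networkx`, which bundles the Zachary karate club graph (nodes labelled 0–33, so the instructor is node 0 and the president node 33). This is a sketch for intuition only, not part of the SAS pipeline.

```python
import networkx as nx

# Betweenness centrality on the bundled Zachary karate club graph.
G = nx.karate_club_graph()
bc = nx.betweenness_centrality(G, normalized=True)

# Rank members by how often they bridge shortest paths between others.
ranking = sorted(bc, key=bc.get, reverse=True)
print(ranking[:3])  # the instructor (0) and the president (33) dominate
```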
```
renderNetworkGraph(size=10, colorVar='_Community_', sizeVar='_Betweenness_')
```
### Filter communities
Show only community 1.
```
renderNetworkGraph(1, size=10, sizeVar='_CentroidAngle_', sizeMultipler=5)
s.close()
```
>Falko Schulz ▪ Principal Software Developer ▪ Business Intelligence Visualization R&D ▪ SAS® Institute ▪ [falko.schulz@sas.com](mailto:falko.schulz@sas.com) ▪ http://www.sas.com
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
def calcR2(H,T,slope,igflag=0):
"""
%
% [R2,S,setup, Sinc, SIG, ir] = calcR2(H,T,slope,igflag);
%
% Calculated 2% runup (R2), swash (S), setup (setup), incident swash (Sinc)
% and infragravity swash (SIG) elevations based on parameterizations from runup paper
% also Iribarren (ir)
% August 2010 - Included 15% runup (R16) statistic that, for a Guassian distribution,
% represents mean+sigma. It is calculated as R16 = setup + swash/4.
% In a wave tank, Palmsten et al (2010) found this statistic represented initiation of dune erosion.
%
%
% H = significant wave height, reverse shoaled to deep water
% T = deep-water peak wave period
% slope = radians
% igflag = 0 (default)use full equation for all data
% = 1 use dissipative-specific calculations when dissipative conditions exist (Iribarren < 0.3)
% = 2 use dissipative-specific (IG energy) calculation for all data
%
% based on:
% Stockdon, H. F., R. A. Holman, P. A. Howd, and J. Sallenger A. H. (2006),
% Empirical parameterization of setup, swash, and runup,
% Coastal Engineering, 53, 573-588.
% author: hstockdon@usgs.gov
# Converted to Python by csherwood@usgs.gov
"""
g = 9.81
# make slopes positive!
slope = np.abs(slope)
# compute wavelength and Iribarren
L = (g*T**2) / (2.*np.pi)
sqHL = np.sqrt(H*L)
ir = slope/np.sqrt(H/L)
    if igflag == 2:  # use dissipative equations (IG) for ALL data
        R2 = 1.1*(0.039*sqHL)
        S = 0.046*sqHL
        setup = 0.016*sqHL
        Sinc = SIG = np.nan  # the incident/IG split is not defined by the dissipative fit
    elif igflag == 1 and ir < 0.3:  # if dissipative site, use dissipative equations
        R2 = 1.1*(0.039*sqHL)
        S = 0.046*sqHL
        setup = 0.016*sqHL
        Sinc = SIG = np.nan
    else:  # if intermediate/reflective site, use full equations
        setup = 0.35*slope*sqHL
        Sinc = 0.75*slope*sqHL
        SIG = 0.06*sqHL
        S = np.sqrt(Sinc**2 + SIG**2)
        R2 = 1.1*(setup + S/2.)
    R16 = 1.1*(setup + S/4.)  # computed for every branch so the 7-value return never fails
return R2, S, setup, Sinc, SIG, ir, R16
target_R2 = 1.95
tana = np.arange(0.02, 0.16, .02)
Srange = np.arctan(tana)
Trange = np.arange(6.,14.,.5)
Hrange = np.arange(.8,5,.2)
nS = len(Srange)
nH = len(Hrange)
nT = len(Trange)
R2a = np.nan*np.ones((nS,nH,nT))
print(np.shape(R2a))
for i, slope in enumerate(Srange):
for j, H in enumerate(Hrange):
for k, T in enumerate(Trange):
R2, _, _, _, _, _, _, = calcR2(H,T,slope)
R2a[i,j,k]=R2
print(np.shape(np.squeeze(R2a[-1,:,:])))
print(np.shape(Hrange))
print(np.shape(Trange))
#np.squeeze(R2a[-1,:,:])
fig, ax = plt.subplots()
for i, slope in enumerate(Srange):
if(np.max(R2a[i,:,:]))>=target_R2:
CS=ax.contour(Trange,Hrange,np.squeeze(R2a[i,:,:]),levels=[target_R2,5])
ax.clabel(CS, fmt='{:.3f}'.format(slope),inline=1, fontsize=10)
plt.xlabel('Period (s)')
plt.ylabel('Height (m)')
plt.title('Runup Elevation = {:.1f} m'.format(target_R2))
dfp = pd.read_csv('41025_H_v_T.csv',header = 0,delimiter=',')
```
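For reference, the intermediate/reflective branch of `calcR2` implements the Stockdon et al. (2006) parameterizations, with $\beta$ the beach slope and $H_0$, $L_0 = gT^2/2\pi$ the deep-water wave height and wavelength:

$$
\langle\eta\rangle = 0.35\,\beta\sqrt{H_0 L_0}, \qquad
S_{inc} = 0.75\,\beta\sqrt{H_0 L_0}, \qquad
S_{IG} = 0.06\sqrt{H_0 L_0}
$$

$$
S = \sqrt{S_{inc}^2 + S_{IG}^2}, \qquad
R_2 = 1.1\left(\langle\eta\rangle + \frac{S}{2}\right), \qquad
R_{16} = 1.1\left(\langle\eta\rangle + \frac{S}{4}\right)
$$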
```
import azureml.core
print(azureml.core.VERSION)
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.core import Workspace
import os
# ws = Workspace.from_config(auth=InteractiveLoginAuthentication(tenant_id=os.environ["AML_TENANT_ID"]))
ws = Workspace.from_config()
ws
from azureml.core.compute import ComputeTarget, AmlCompute
compute_name = "cpu-cluster-2"
if compute_name not in ws.compute_targets:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
min_nodes=0,
max_nodes=1)
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
compute_target.wait_for_completion(
show_output=True, min_node_count=None, timeout_in_minutes=20)
# Show the result
print(compute_target.get_status().serialize())
from azureml.core.compute import AmlCompute
compute_target = AmlCompute(ws, compute_name)
print(compute_target)
from azureml.core import Datastore
datastore = ws.get_default_datastore()
from azureml.core import Environment
envs = Environment.list(workspace=ws)
# List Environments and packages in my workspace
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
#print("packages", envs[env].python.conda_dependencies.serialize_to_string())
# Get curated environment
curated_environment = Environment.get(workspace=ws, name="AzureML-Tutorial") # Custom environment: Environment.get(workspace=ws,name="myenv",version="1")
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
aml_run_config = RunConfiguration()
aml_run_config.target = compute_target
# Assign the curated environment
aml_run_config.environment = curated_environment
# from azureml.core.runconfig import RunConfiguration
# from azureml.core.conda_dependencies import CondaDependencies
# aml_run_config = RunConfiguration()
# aml_run_config.target = compute_target
# # Use conda_dependencies.yml to create a conda environment in the Docker image for execution
# aml_run_config.environment.python.user_managed_dependencies = False
# # Specify CondaDependencies obj, add necessary packages
# aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
# conda_packages=['pandas','scikit-learn', 'pyarrow'],
# pip_packages=['azureml-sdk[automl,explain]', 'azureml-dataprep[fuse,pandas]'],
# pin_sdk_version=False)
```
## Step 0: Grab an open dataset and register it
This is baseline data. If the `Dataset` does not exist, create and register it. Not a part of the Pipeline.
```
from azureml.core import Dataset
if 'titanic_ds' not in ws.datasets.keys():
# create a TabularDataset from Titanic training data
web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
'https://dprepdata.blob.core.windows.net/demo/Titanic2.csv']
titanic_ds = Dataset.Tabular.from_delimited_files(path=web_paths)
titanic_ds.register(workspace = ws,
name = 'titanic_ds',
description = 'new titanic training data',
create_new_version = True)
titanic_ds = Dataset.get_by_name(ws, 'titanic_ds')
type(titanic_ds)
# if not 'titanic_files_ds' in ws.datasets.keys() :
# # create a TabularDataset from Titanic training data
# web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
# 'https://dprepdata.blob.core.windows.net/demo/Titanic2.csv']
# titanic_ds = Dataset.File.from_files(path=web_paths)
#
# titanic_ds.register(workspace = ws,
# name = 'titanic_files_ds',
# description = 'File Dataset of titanic training data',
# create_new_version = True)
```
## Step 1: Dataprep
```
%%writefile dataprep.py
from azureml.core import Run
import pandas as pd
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
from sklearn.model_selection import train_test_split
import argparse
import os
RANDOM_SEED=42
def prepare_age(df):
# Fill in missing Age values from distribution of present Age values
mean = df["Age"].mean()
std = df["Age"].std()
is_null = df["Age"].isnull().sum()
# compute enough (== is_null().sum()) random numbers between the mean, std
rand_age = np.random.randint(mean - std, mean + std, size = is_null)
# fill NaN values in Age column with random values generated
age_slice = df["Age"].copy()
age_slice[np.isnan(age_slice)] = rand_age
df["Age"] = age_slice
df["Age"] = df["Age"].astype(int)
# Quantize age into 5 classes
df['Age_Group'] = pd.qcut(df['Age'],5, labels=False)
df.drop(['Age'], axis=1, inplace=True)
return df
def prepare_fare(df):
df['Fare'].fillna(0, inplace=True)
df['Fare_Group'] = pd.qcut(df['Fare'],5,labels=False)
df.drop(['Fare'], axis=1, inplace=True)
return df
def prepare_genders(df):
genders = {"male": 0, "female": 1, "unknown": 2}
df['Sex'] = df['Sex'].map(genders)
df['Sex'].fillna(2, inplace=True)
df['Sex'] = df['Sex'].astype(int)
return df
def prepare_embarked(df):
df['Embarked'].replace('', 'U', inplace=True)
df['Embarked'].fillna('U', inplace=True)
ports = {"S": 0, "C": 1, "Q": 2, "U": 3}
df['Embarked'] = df['Embarked'].map(ports)
return df
parser = argparse.ArgumentParser()
parser.add_argument('--output_path', dest='output_path', required=True)
args = parser.parse_args()
titanic_ds = Run.get_context().input_datasets['titanic_ds']
df = titanic_ds.to_pandas_dataframe().drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1)
df = prepare_embarked(prepare_genders(prepare_fare(prepare_age(df))))
os.makedirs(os.path.dirname(args.output_path), exist_ok=True)
pq.write_table(pa.Table.from_pandas(df), args.output_path)
print(f"Wrote prepared data to {args.output_path}")
from azureml.pipeline.core import PipelineData
prepped_data_path = PipelineData("titanic_train", datastore).as_dataset()
from azureml.pipeline.steps import PythonScriptStep
dataprep_step = PythonScriptStep(
name="dataprep",
script_name="dataprep.py",
compute_target=compute_target,
runconfig=aml_run_config,
arguments=["--output_path", prepped_data_path],
inputs=[titanic_ds.as_named_input("titanic_ds")],
outputs=[prepped_data_path],
allow_reuse=True
)
```
### Step 2: Train with AutoMLStep
```
from azureml.train.automl import AutoMLConfig
prepped_data_potds = prepped_data_path.parse_parquet_files(file_extension=None)
X = prepped_data_potds.drop_columns('Survived')
y = prepped_data_potds.keep_columns('Survived')
# Change iterations to a reasonable number (50) to get better accuracy
automl_settings = {
"iteration_timeout_minutes" : 10,
"iterations" : 2,
"experiment_timeout_hours" : 0.25,
"primary_metric" : 'AUC_weighted',
"n_cross_validations" : 2
}
automl_config = AutoMLConfig(task = 'classification',
path = '.',
debug_log = 'automated_ml_errors.log',
compute_target = compute_target,
run_configuration = aml_run_config,
featurization = 'auto',
training_data = prepped_data_potds,
label_column_name = 'Survived',
**automl_settings)
print("AutoML config created.")
from azureml.pipeline.core import TrainingOutput
from azureml.pipeline.steps import AutoMLStep
metrics_data = PipelineData(name='metrics_data',
datastore=datastore,
pipeline_output_name='metrics_output',
training_output=TrainingOutput(type='Metrics'))
model_data = PipelineData(name='best_model_data',
datastore=datastore,
pipeline_output_name='model_output',
training_output=TrainingOutput(type='Model'))
train_step = AutoMLStep(name='AutoML_Classification',
automl_config=automl_config,
passthru_automl_config=False,
outputs=[metrics_data,model_data],
allow_reuse=True)
print("train_step created.")
```
## Step 3: Register the model
```
%%writefile register_model.py
from azureml.core.model import Model
from azureml.core.run import Run, _OfflineRun
from azureml.core import Workspace
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--model_name", required=True)
parser.add_argument("--model_path", required=True)
args = parser.parse_args()
print(f"model_name : {args.model_name}")
print(f"model_path: {args.model_path}")
run = Run.get_context()
ws = Workspace.from_config() if type(run) == _OfflineRun else run.experiment.workspace
model = Model.register(workspace=ws,
model_path=args.model_path,
model_name=args.model_name)
print("Registered version {0} of model {1}".format(model.version, model.name))
from azureml.pipeline.core.graph import PipelineParameter
# The model name with which to register the trained model in the workspace.
model_name = PipelineParameter("model_name", default_value="TitanicSurvival")
register_step = PythonScriptStep(script_name="register_model.py",
name="register_model",
allow_reuse=False,
arguments=["--model_name", model_name, "--model_path", model_data],
inputs=[model_data],
compute_target=compute_target,
runconfig=aml_run_config)
```
## Submit it
```
from azureml.core import Experiment
if 'titanic_automl' not in ws.experiments.keys():
Experiment(ws, 'titanic_automl')
experiment = ws.experiments['titanic_automl']
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(ws, [dataprep_step, train_step, register_step])
run = experiment.submit(pipeline, show_output=True)
run.wait_for_completion()
```
### Examine Results from Pipeline
#### Retrieve the metrics of all child runs
Outputs of the run above can be used as inputs to other steps in the pipeline. In this tutorial, we examine them by retrieving the output data and running some checks.
```
metrics_output = run.get_pipeline_output('metrics_output')
num_file_downloaded = metrics_output.download('.', show_progress=True)
import pandas as pd
# import numpy as np
import json
with open(metrics_output._path_on_datastore) as f:
metrics_output_result = f.read()
deserialized_metrics_output = json.loads(metrics_output_result)
df = pd.DataFrame(deserialized_metrics_output)
df
model_output = run.get_pipeline_output('model_output')
model_output
num_file_downloaded = model_output.download('.', show_progress=True)
# In order to load the model in the local notebook environment you'd need to have installed the needed packages and set up the imports
# !pip install xgboost==0.90
!pip list
# import pickle
# import xgboost
# with open(model_output._path_on_datastore, "rb" ) as f:
# best_model = pickle.load(f)
# best_model
# You could inference and try predictions with that model at this point, calculate your own metrics, etc.
# Run on local machine
# ws = Workspace.from_config()
# experiment = ws.experiments['titanic_automl']
# run = next(run for run in experiment.get_runs() if run.id == 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB')
# automl_run = next(r for r in run.get_children() if r.name == 'AutoML_Classification')
# outputs = automl_run.get_outputs()
# metrics = outputs['default_metrics_AutoML_Classification']
# model = outputs['default_model_AutoML_Classification']
#
# metrics.get_port_data_reference().download('.')
# model.get_port_data_reference().download('.')
```
```
# Required Libraries
import numpy as np
import pandas as pd
import sklearn
from sklearn.cluster import KMeans # K-Means Clustering
from sklearn.neighbors import KNeighborsClassifier # KNN Classification
from sklearn import metrics # Prediction Accuracy
from sklearn.decomposition import PCA # Principal Component Analysis
from sklearn.model_selection import KFold # KFold validation
import matplotlib.pyplot as plt
# Read dataset and split into train and test
X_raw = pd.read_csv("Iris.csv") #Read dataset
X_raw = X_raw.drop('Id', axis=1) # drop the Id column (not a useful feature)
y_raw = X_raw['Species']
X_raw = X_raw.drop('Species', axis=1) #drop label column for clustering
# Reduce the dimension of the dataset to 2
pca = PCA(n_components=2)
reduced_train = pca.fit_transform(X_raw)
# Implement KMeans and KNN
f = 2
kfold = KFold(n_splits=f, shuffle=True, random_state=0) # KFold Model : Splitting 2 fold test and train data
kmeans = KMeans(n_clusters=3, random_state=0) # KMeans Model : number of centroids :3
knn = KNeighborsClassifier(n_neighbors=6) # KNN Model : number of neighbors :6
# Saving train and test datasets to use again for plots.
test_sets = []
kmeans_preds = []
knn_preds = []
for train_index , test_index in kfold.split(reduced_train):
X_train , X_test = reduced_train[train_index,:],reduced_train[test_index,:]
y_train , y_test = y_raw[train_index] , y_raw[test_index]
test_sets.append((X_test, y_test))
kmeans.fit(X_train)
kmeans_preds.append(kmeans.predict(X_test))
knn.fit(X_train, y_train)
knn_preds.append(knn.predict(X_test))
#Plot KMeans Distribution
for i in range(f):
(X_test, y_test) = test_sets[i] # Taken saved test data fold(i)
yup = kmeans_preds[i] # Taken saved prediction results
label_names = np.unique(yup) # Get unique label names
centroids = kmeans.cluster_centers_ # Get cluster centers from the model
for j in label_names: # Plot distribution
plt.scatter(X_test[yup == j , 0] , X_test[yup == j , 1] , label = j)
plt.scatter(centroids[:,0] , centroids[:,1] , s = 80, color = 'k')
plt.legend()
plt.show()
# Plot KNN Distribution
for i in range(f):
ysp = knn_preds[i] # Taken saved prediction results
(X_test, y_test) = test_sets[i] # Taken saved test data fold(i)
label_names = np.unique(ysp) # Get unique label names
print("KNN Accuracy:",sklearn.metrics.accuracy_score(y_test, ysp))
for j in label_names: # Plot distribution
plt.scatter(X_test[ysp == j , 0] , X_test[ysp == j , 1] , label = j)
plt.legend()
plt.show()
```
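The PCA-then-cluster flow above can be reproduced without `Iris.csv` by using scikit-learn's bundled copy of the Iris data. This is a minimal sketch of the same idea, not the notebook's exact pipeline.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Load Iris directly from scikit-learn instead of a CSV file.
X, y = load_iris(return_X_y=True)

# Reduce to 2 principal components, then cluster into 3 groups
# (one per species, as in the notebook above).
X2 = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

print(X2.shape)          # (150, 2)
print(len(set(labels)))  # 3
```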
```
%%time
#print("1")
import tensorflow as tf
from numba import cuda
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
from keras.preprocessing.sequence import pad_sequences
# #print("2")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
import pickle
from keras.layers import Dense, Input, Dropout
#print("3")
from keras import Sequential
#print("4")
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
import keras
print(keras.__version__)
from sklearn.model_selection import train_test_split
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import StratifiedKFold
from keras.constraints import maxnorm
from sklearn.pipeline import Pipeline
import joblib  # `from sklearn.externals import joblib` was removed in modern scikit-learn
print("5")
%%time
with open(r"../input/challengedadata/comments.txt", "rb") as f:
clean_train_comments = pickle.load(f)
f.close()
with open(r"../input/challengedadata/targets.txt", "rb") as ft:
y= pickle.load(ft)
ft.close()
y = [int(s) for s in y]
#tfidf vectorization
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5,
ngram_range=(1, 2),
stop_words='english')
# We transform each comment into a vector
X = tfidf.fit_transform(clean_train_comments ).toarray()
vocab_size = len(tfidf.vocabulary_) + 1
print(vocab_size)
maxlen = max([len(x) for x in X])
X_pad = pad_sequences(X, padding='post', maxlen=maxlen, dtype='float32') #https://stackoverflow.com/questions/54031161/how-can-i-get-around-keras-pad-sequences-rounding-float-values-to-zero
# end of tfidf vectorization
# standard scaler
scaler = StandardScaler(with_mean=False)
X_pad_scal = scaler.fit_transform(X_pad)
# end of scaler
# Define the model.
callbacks = [
EarlyStopping(
monitor='val_accuracy',
min_delta=1e-2,
patience=2,
verbose=1)
]
def model0(optimizer="Adam", dropout=0.1, init="uniform", dense_nparams=256, activation="relu", weight_constraint=3.0):
    # adapted from https://medium.com/@am.benatmane/keras-hyperparameter-tuning-using-sklearn-pipelines-grid-search-with-cross-validation-ccfc74b0ce9f
    METRICS = [
        tf.keras.metrics.BinaryAccuracy(name='accuracy'),
        tf.keras.metrics.AUC(name='auc')
    ]
    nbr_features = 120536 - 1  # vocabulary size minus one (was 2500 in earlier runs)
    model = Sequential()
    model.add(Dense(dense_nparams, activation=activation, input_shape=(nbr_features,), kernel_initializer=init, kernel_constraint=maxnorm(weight_constraint)))
    model.add(Dropout(dropout))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=METRICS)
    return model
%%time
# model parameters
param_grid = {
    "batch_size": [8, 16, 32, 64, 128],
    "epochs": [3],  # [6,7,8,9,10]
    "optimizer": ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam'],
    "init": ['uniform', 'lecun_uniform', 'normal', 'zero', 'glorot_normal', 'glorot_uniform', 'he_normal', 'he_uniform'],
    "activation": ['softmax', 'softplus', 'softsign', 'relu', 'tanh', 'sigmoid', 'hard_sigmoid', 'linear'],
    "dropout": [0.1, 0.2, 0.3, 0.4],
    "dense_nparams": [64, 128, 256, 512],
    "weight_constraint": [1, 2, 3, 4, 5]
}
# "max_features" : [2000,2500,3000,3500],
# "ngram_range" : [(1,1),(1,2),(1,3)]
# } # learn_rate = [0.001, 0.01, 0.1, 0.2, 0.3]
# momentum = [0.0, 0.2, 0.4, 0.6, 0.8, 0.9] # we don't use thes two as per docuemntation: https://keras.io/optimizers/ the default parameters of the optimizers have been set according to the respective papers, and should then not be changed.
model = KerasClassifier(build_fn=model0, verbose=10)
# Unfortunately, if I define the model as a pipeline like below, I cannot grid-search over the Keras model0's
# parameters. Luckily those are almost all the parameters we want to search over, so it suffices to vary the
# tfidf parameters manually each time.
# estimator = Pipeline([("tfidf", TfidfVectorizer(analyzer ="word", stop_words='english')),
# ('ss', StandardScaler(with_mean=False)),
# ("kc", model)])
skf = StratifiedKFold(n_splits=5, shuffle = True, random_state = 1001) #5
# Model exploration
# n_jobs must be set to 1, otherwise it goes OOM
random_search = RandomizedSearchCV(estimator = model, param_distributions=param_grid, n_iter=24, scoring='roc_auc', n_jobs=1, cv=skf.split(X_pad,y), verbose=10, random_state=9, refit = "roc_auc") #24
random_search.fit(X_pad,y, **{"callbacks": callbacks})
best_model = random_search.best_estimator_
print('\n All results:')
print(random_search.cv_results_)
print('\n Best normalized roc score for %d-fold search with %d parameter combinations:' % (5, 24))
print(random_search.best_score_ ) #*2 -1
print('\n Best hyperparameters:')
print(random_search.best_params_)
# save model
joblib.dump(random_search.best_estimator_, '../working/simpleNetGrided.pkl')
```
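`RandomizedSearchCV` above samples a fixed number of parameter combinations (`n_iter=24`) instead of exhausting the full grid. The sampling idea can be sketched with the standard library; the reduced `param_grid` below is a hypothetical subset of the grid in the notebook, for illustration only:

```python
import itertools
import random

# A reduced, hypothetical subset of the notebook's param_grid
param_grid = {
    "batch_size": [8, 16, 32, 64, 128],
    "dropout": [0.1, 0.2, 0.3, 0.4],
    "dense_nparams": [64, 128, 256, 512],
}

def sample_param_combos(grid, n_iter, seed=9):
    """Randomly pick n_iter distinct combinations out of the full grid,
    mirroring what RandomizedSearchCV(n_iter=..., random_state=...) does
    when every parameter is given as a list."""
    all_combos = [dict(zip(grid, values))
                  for values in itertools.product(*grid.values())]
    rng = random.Random(seed)
    return rng.sample(all_combos, min(n_iter, len(all_combos)))

combos = sample_param_combos(param_grid, n_iter=24)
```

Each sampled combination would then be cross-validated, and the best-scoring one kept — exactly the trade-off the notebook makes to keep the search tractable.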
<a href="https://colab.research.google.com/github/Tessellate-Imaging/Monk_Object_Detection/blob/master/example_notebooks/1_gluoncv_finetune/TRAIN-gluon-ssd_300_vgg16_atrous_coco.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Installation
- Run these commands
- git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
- cd Monk_Object_Detection/1_gluoncv_finetune/installation
- Select the right requirements file and run
- cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
```
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/1_gluoncv_finetune/installation && cat requirements_colab.txt | xargs -n 1 -L 1 pip install
# For Local systems and cloud select the right CUDA version
# !cd Monk_Object_Detection/1_gluoncv_finetune/installation && cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
```
## Dataset Directory Structure
    Parent_Directory (root)
    |
    |-----------Images (img_dir)
    |                 |
    |                 |------------------img1.jpg
    |                 |------------------img2.jpg
    |                 |------------------.........(and so on)
    |
    |
    |-----------train_labels.csv (anno_file)
## Annotation file format
| Id | Labels |
|----|--------|
| img1.jpg | x1 y1 x2 y2 label1 x1 y1 x2 y2 label2 |
- Labels: xmin ymin xmax ymax label
- xmin, ymin - top left corner of bounding box
- xmax, ymax - bottom right corner of bounding box
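Given this format, a row's `Labels` string can be split into box tuples as sketched below; `parse_labels` is a hypothetical helper written for illustration, not part of the Monk library:

```python
def parse_labels(label_string):
    """Split a Monk-style annotation string into (xmin, ymin, xmax, ymax, label)
    tuples. Assumes every box contributes exactly 5 whitespace-separated tokens:
    four integer corner coordinates followed by a class label."""
    tokens = label_string.split()
    assert len(tokens) % 5 == 0, "each box needs 5 tokens"
    boxes = []
    for i in range(0, len(tokens), 5):
        xmin, ymin, xmax, ymax = (int(t) for t in tokens[i:i + 4])
        boxes.append((xmin, ymin, xmax, ymax, tokens[i + 4]))
    return boxes

# Two hypothetical kangaroo boxes in one annotation row
boxes = parse_labels("10 20 110 220 kangaroo 5 5 50 60 kangaroo")
```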
# About the Network
1. Blog 1 on VGG Network - https://towardsdatascience.com/vgg-neural-networks-the-next-step-after-alexnet-3f91fa9ffe2c
2. Blog 2 on VGG Network - https://medium.com/coinmonks/paper-review-of-vggnet-1st-runner-up-of-ilsvlc-2014-image-classification-d02355543a11
3. Blog 1 on SSD - https://towardsdatascience.com/review-ssd-single-shot-detector-object-detection-851a94607d11
4. Blog 2 on SSD - https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab
5. Blog on Atrous Convolution - https://towardsdatascience.com/review-deeplabv1-deeplabv2-atrous-convolution-semantic-segmentation-b51c5fbde92d
6. Reference Tutorial - https://gluon.mxnet.io/chapter08_computer-vision/object-detection.html
```
import os
import sys
sys.path.append("Monk_Object_Detection/1_gluoncv_finetune/lib/");
from detector_prototype import Detector
gtf = Detector();
```
# Sample Dataset Credits
- credits: https://github.com/experiencor/kangaroo
```
root = "Monk_Object_Detection/example_notebooks/sample_dataset/kangaroo/";
img_dir = "Images/";
anno_file = "train_labels.csv";
batch_size=4;
gtf.Dataset(root, img_dir, anno_file, batch_size=batch_size);
pretrained = True;
gpu=True;
model_name = "ssd_300_vgg16_atrous_coco";
gtf.Model(model_name, use_pretrained=pretrained, use_gpu=gpu);
gtf.Set_Learning_Rate(0.001);
epochs=10;
params_file = "saved_model.params";
gtf.Train(epochs, params_file);
```
# Running Inference
```
import os
import sys
sys.path.append("Monk_Object_Detection/1_gluoncv_finetune/lib/");
from inference_prototype import Infer
model_name = "ssd_300_vgg16_atrous_coco";
params_file = "saved_model.params";
class_list = ["kangaroo"];
gtf = Infer(model_name, params_file, class_list, use_gpu=True);
img_name = "Monk_Object_Detection/example_notebooks/sample_dataset/kangaroo/test/kg4.jpeg";
visualize = True;
thresh = 0.85;
output = gtf.run(img_name, visualize=visualize, thresh=thresh);
```
# Author - Tessellate Imaging - https://www.tessellateimaging.com/
# Monk Library - https://github.com/Tessellate-Imaging/monk_v1
Monk is an open-source low-code tool for computer vision and deep learning
## Monk features
- low-code
- unified wrapper over major deep learning frameworks - keras, pytorch, gluoncv
- syntax invariant wrapper
## Enables
- to create, manage and version control deep learning experiments
- to compare experiments across training metrics
- to quickly find best hyper-parameters
## At present it only supports transfer learning, but we are working each day to incorporate
- GUI based custom model creation
- various object detection and segmentation algorithms
- deployment pipelines to cloud and local platforms
- acceleration libraries such as TensorRT
- preprocessing and post processing libraries
## To contribute to Monk AI or Monk Object Detection repository raise an issue in the git-repo or dm us on linkedin
- Abhishek - https://www.linkedin.com/in/abhishek-kumar-annamraju/
- Akash - https://www.linkedin.com/in/akashdeepsingh01/
```
import QUANTAXIS as QA
```
# Here we demonstrate the full order / trade / settlement workflow
We first create an account object and a backtest broker object
```
# Initialize an account
user= QA.QA_User(username='admin',password='940809x')
portfolio= user.new_portfolio('order_example')
Account=portfolio.new_account()
# Initialize a backtest broker
B = QA.QA_BacktestBroker()
```
On the first day, buy 000001 with the full position
```
# Buy '000001' with the available cash
Order=Account.send_order(code='000001',
price=11,
money=0.04*Account.cash_available,
time='2018-05-09',
towards=QA.ORDER_DIRECTION.BUY,
order_model=QA.ORDER_MODEL.MARKET,
amount_model=QA.AMOUNT_MODEL.BY_MONEY
)
# Print the capital occupied by the order
print('Capital occupied by the order: {}'.format((Order.amount*Order.price)*(1+Account.commission_coeff)))
# Remaining account cash
print('Remaining account cash: {}'.format(Account.cash_available))
Account.hold
Account.init_hold
Account.hold_available
```
At this point the account's cash has not decreased, because this step only submits the order (committed but not yet filled)
The backtest broker receives the order and returns it (if it was created successfully)
```
rec_mes=B.receive_order(QA.QA_Event(order=Order))
print(rec_mes)
#B.query_orders(rec_mes.account_cookie,rec_mes.realorder_id)
import pandas as pd
B.query_orders(Account.account_cookie)
#pd.DataFrame(list(B.deal_message.values()),columns=B.orderstatus_headers).set_index(['account_cookie','realorder_id'])
B.query_orders(Account.account_cookie,'filled')
trade_mes=B.query_orders(Account.account_cookie,'filled')
res=trade_mes.loc[Order.account_cookie,Order.realorder_id]
Order.trade(res.trade_id,res.trade_price,res.trade_amount,res.trade_time)
#Order.trade((trade_id, trade_price, trade_amount, trade_time))
```
The account receives the fill report returned by the backtest broker and updates itself
```
Account.history_table
```
Now we can print the current state (which can be understood as having bought 000001 during the trading day, before the close)
```
print('Available account cash: {}'.format(Account.cash_available))
Account.hold
Account.hold_available
Account.init_hold.index_name='code'
import pandas as pd
pd.concat([Account.hold_available,Account.init_hold])
+Account.hold_available
```
Note that when the order was first submitted, the available cash was only 950.2999999999302, while after the buy was filled the available cash was 3339.9289999998837. This is because the order used the market-order model (QA.ORDER_MODEL.MARKET), so the actual fill price was 10.96 yuan
After the buy, the account holds 90,800 shares of 000001
```
Account.hold
```
After the buy, the account's cash table is extended
```
Account.cash
```
Because the A-share market settles T+1, the sellable amount is currently 0
```
Account.sell_available
```
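The T+1 rule shown above — shares bought today cannot be sold until after settlement — can be sketched as a toy accounting class. This is an illustration of the mechanism only, not QUANTAXIS code:

```python
class T1Account:
    """Toy sketch of T+1 position accounting: shares bought today go into a
    pending bucket and only become sellable after settle() runs at end of day."""

    def __init__(self):
        self.sell_available = {}   # shares that may be sold now
        self._pending = {}         # shares bought today, locked by the T+1 rule

    def buy(self, code, amount):
        # A fill today does not make shares sellable today
        self._pending[code] = self._pending.get(code, 0) + amount

    def settle(self):
        # End-of-day settlement releases today's buys for sale tomorrow
        for code, amount in self._pending.items():
            self.sell_available[code] = self.sell_available.get(code, 0) + amount
        self._pending = {}

acct = T1Account()
acct.buy("000001", 90800)
before = acct.sell_available.get("000001", 0)   # still locked
acct.settle()
after = acct.sell_available.get("000001", 0)    # released after settlement
```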
# Run settlement
```
Account.settle()
```
# After settlement
```
Account.cash
Account.cash_available
Account.sell_available
Account.hold
```
# Execute the sell
The current position is 90,800 shares of 000001
```
holdnum=Account.sell_available.get('000001',0)
holdnum
```
Submit a sell order for the entire sellable position
```
Order=Account.send_order(code='000001',
price=11,
amount=holdnum,
time='2018-05-10',
towards=QA.ORDER_DIRECTION.SELL,
order_model=QA.ORDER_MODEL.MARKET,
amount_model=QA.AMOUNT_MODEL.BY_AMOUNT
)
Order
Account.cash_available # the order has not been successfully filed yet, so available cash is unchanged
rec_mes=B.receive_order(QA.QA_Event(order=Order))
print(rec_mes)
trade_mes=B.query_orders(Account.account_cookie,'filled')
res=trade_mes.loc[Order.account_cookie,Order.realorder_id]
Order.trade(res.trade_id,res.trade_price,res.trade_amount,res.trade_time)
Account.cash_available # the order has been filled, so cash_available is updated immediately
Account.history_table
Account.orders
Account.orders.order_list
```
# Testing a T+0 account
```
# Initialize an account
AccountT0=portfolio.new_account(running_environment=QA.RUNNING_ENVIRONMENT.TZERO,init_hold={'000001':10000},init_cash=200000)
# Initialize a backtest broker
B = QA.QA_BacktestBroker()
AccountT0.init_assets
AccountT0.init_hold
AccountT0.hold_available
AccountT0.sell_available
```
This is a T+0 account holding 10,000 shares of 000001
```
Order=AccountT0.send_order(code='000001',
price=11,
amount=AccountT0.sell_available.get('000001',0),
time='2018-05-10',
towards=QA.ORDER_DIRECTION.SELL,
order_model=QA.ORDER_MODEL.MARKET,
amount_model=QA.AMOUNT_MODEL.BY_AMOUNT
)
Order.datetime
rec_mes=B.receive_order(QA.QA_Event(order=Order))
print(rec_mes)
rec_mes
B.query_orders(AccountT0.account_cookie,'filled')
trade_mes=B.query_orders(AccountT0.account_cookie,'filled')
res=trade_mes.loc[Order.account_cookie,Order.realorder_id]
Order.trade(res.trade_id,res.trade_price,res.trade_amount,res.trade_time)
AccountT0.sell_available
Order.trade_time
AccountT0.buy_available
AccountT0.hold_available
AccountT0.running_time
AccountT0.datetime
AccountT0.buy_available.get('000001')
r=AccountT0.close_positions_order
AccountT0.date
for Order in r:
    #print(vars(Order))
    rec_mes=B.receive_order(QA.QA_Event(order=Order))
    trade_mes=B.query_orders(AccountT0.account_cookie,'filled')
    res=trade_mes.loc[Order.account_cookie,Order.realorder_id]
    Order.trade(res.trade_id,res.trade_price,res.trade_amount,res.trade_time)
AccountT0.cash
AccountT0.sell_available
AccountT0.hold_available
AccountT0.hold
AccountT0.settle()
AccountT0.daily_hold
AccountT0.daily_cash
AccountT0.hold_table()
AccountT0.hold_price()
AccountT0.datetime
AccountT0.sell_available
risk_t0=QA.QA_Risk(AccountT0)
risk_t0.init_assets
risk_t0.init_cash-risk_t0.assets
risk_t0.assets.iloc[-1]/risk_t0.init_cash-1
risk_t0.profit_construct
```
# Testing a futures account
```
rb_ds=QA.QA_fetch_future_min_adv('RBL8','2019-01-01','2019-02-01','15min')
AccountFuture=portfolio.new_account(init_cash=1000000,allow_sellopen=True,allow_t0=True,account_cookie='future_test',market_type=QA.MARKET_TYPE.FUTURE_CN,frequence=QA.FREQUENCE.FIFTEEN_MIN)
#Account.reset_assets(10000000)
Broker=QA.QA_BacktestBroker()
#rb_ds.data
# Buy 'RBL8' with the available cash
hq=next(rb_ds.panel_gen)
order=AccountFuture.send_order(code='RBL8',
price=hq.open,
money=AccountFuture.cash_available,
time=hq.datetime[0],
towards=QA.ORDER_DIRECTION.BUY_OPEN,
order_model=QA.ORDER_MODEL.MARKET,
amount_model=QA.AMOUNT_MODEL.BY_MONEY
)
Broker.receive_order(QA.QA_Event(order=order,market_data=hq))
trade_mes = Broker.query_orders(AccountFuture.account_cookie, 'filled')
res = trade_mes.loc[order.account_cookie, order.realorder_id]
order.trade(res.trade_id, res.trade_price, res.trade_amount, res.trade_time)
AccountFuture.save()
```
# pyplearnr demo
Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit-learn pipelines using GridSearchCV or RandomizedSearchCV.
Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor.
After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model.
Various default pipeline step parameters for the grid-search are available for quick iteration over different pipelines, with the option to ignore/override them in a flexible way.
This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults.
## Titanic dataset example
Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial.
### Import data
```
import pandas as pd
df = pd.read_pickle('trimmed_titanic_data.pkl')
df.info()
```
By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual.
Thus, there is no missing/null data.
## Set categorical features as type 'category'
In order to one-hot encode categorical data, it's best to mark the features that are considered categorical:
```
simulation_df = df.copy()
categorical_features = ['Survived','Pclass','Sex','Embarked','Title']
for feature in categorical_features:
    simulation_df[feature] = simulation_df[feature].astype('category')
simulation_df.info()
```
## One-hot encode categorical features
```
simulation_df = pd.get_dummies(simulation_df,drop_first=True)
simulation_df.info()
```
Now we have 17 features.
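What `pd.get_dummies(..., drop_first=True)` does to each categorical column can be sketched in plain Python — a simplified illustration of the behavior, not the pandas implementation:

```python
def one_hot_drop_first(values):
    """Sketch of one-hot encoding with drop_first=True for a single column:
    k categories become k-1 indicator columns, dropping the first (sorted)
    category, whose value is implied when all other indicators are 0."""
    categories = sorted(set(values))
    kept = categories[1:]  # drop_first=True removes the first category
    return {c: [1 if v == c else 0 for v in values] for c in kept}

# Hypothetical 'Embarked' column with categories C, Q, S
encoded = one_hot_drop_first(["S", "C", "Q", "S"])
```

Dropping the first category avoids a redundant column: a row of all zeros unambiguously means the dropped category.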
### Split into input/output data
```
# Set output feature
output_feature = 'Survived_1'
# Get all column names
column_names = list(simulation_df.columns)
# Get input features
input_features = [x for x in column_names if x != output_feature]
# Split into features and responses
X = simulation_df[input_features].copy()
y = simulation_df[output_feature].copy()
```
### Null model
```
simulation_df['Survived_1'].value_counts().values/float(simulation_df['Survived_1'].value_counts().values.sum())
```
Thus, the null accuracy is ~62% if we always predict death.
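The null accuracy follows directly from the class counts. Assuming the standard 891-passenger Titanic training set (549 deaths, 342 survivals, matching the ~62% quoted above):

```python
# Survival counts: 0 = died, 1 = survived
counts = {0: 549, 1: 342}

# The null model always predicts the majority class ("died"),
# so its accuracy equals the majority-class frequency.
null_accuracy = max(counts.values()) / sum(counts.values())
```

Any model worth keeping should beat this baseline.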
### Import pyplearnr and initialize optimized pipeline collection
```
%matplotlib inline
%load_ext autoreload
import sys
import os
sys.path.append("./pyplearnr")
optimized_pipelines = {}
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_schematic = [
{'scaler': {
'none': {},
'standard': {},
'min_max': {},
'normal': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,31),
'weights': ['uniform','distance']
}}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='auc')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {0:59})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=59)
%autoreload
kfcv.plot_best_pipeline_scores(number_size=10,markersize=8, figsize=(9,3), box_line_thickness=1)
%autoreload
kfcv.plot_contest(color_by='scaler', markersize=3)
%autoreload
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:6})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=8)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores(number_size=18, markersize=14)
%autoreload
%matplotlib inline
kfcv.plot_contest(number_size=8, markersize=7, all_folds=True, figsize=(10,40),
color_by='scaler', box_line_thickness=2)
kfcv.pipelines[29]
# cmap = pylab.cm.viridis
# print cmap.__doc__
worst_pipelines = [85, 67, 65, 84, 69, 83]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [86, 75, 84, 79, 85, 83]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [77, 61, 81, 83, 74, 82, 84]
for pipeline_ind in worst_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
best_pipelines = [89, 93, 2, 91, 4, 3]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [91, 93, 5, 43, 4, 100]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [5, 4, 91, 3, 55, 49, 2]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'standard': {},
'normal': {},
'min_max': {},
'binary': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,30)
},
# 'svm': {
# 'C': np.array([1.00000000e+00])
# }
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='accuracy')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:24, 2:55})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=55)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores()
%autoreload
%matplotlib inline
kfcv.plot_contest()
best_pipelines = [91, 44, 89, 45, 3, 90]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [21, 18, 40, 38, 36, 35, 24]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [55, 39, 41, 42, 47, 40, 114, 110]
for pipeline_ind in best_pipelines:
    print pipeline_ind, kfcv.pipelines[pipeline_ind]
%autoreload
kfcv.print_report()
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {2:18})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=18)
%autoreload
kfcv.print_report()
best_inner_fold_pipelines = {
2: 9
}
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = best_inner_fold_pipelines)
best_outer_fold_pipeline = 45
kfcv.fit(X.values, y.values, pipelines,
best_outer_fold_pipeline = best_outer_fold_pipeline)
```
# Regression
```
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
reload(ppl)
data = pd.read_csv('Advertising.csv',index_col=0)
# Start with all features
feature_cols = ['TV','Radio','Newspaper']
# Split data
X = data[feature_cols]
y = data.Sales
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=5,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'none': {},
'standard': {}
}},
{'pre_estimator': {
'polynomial_features': {
'degree': range(1,5)
}
}},
{'estimator': {
'linear_regression': {},
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse')
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse', best_outer_fold_pipeline=1)
%autoreload
kfcv.print_report()
%autoreload
kfcv.print_report()
%%time
%autoreload
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
    model_name = []
    feature_interactions = pipeline_step_combo[0]
    if feature_interactions:
        model_name.append('interactions')
    feature_selection_type = pipeline_step_combo[1]
    if feature_selection_type:
        model_name.append('select')
    scale_type = pipeline_step_combo[2]
    if scale_type:
        model_name.append(scale_type)
    transform_type = pipeline_step_combo[3]
    if transform_type:
        model_name.append(transform_type)
    estimator = pipeline_step_combo[4]
    model_name.append(estimator)
    model_name = '_'.join(model_name)
    print model_name
    # Set pipeline keyword arguments
    optimized_pipeline_kwargs = {
        'feature_selection_type': feature_selection_type,
        'scale_type': scale_type,
        'transform_type': transform_type
    }
    # Initialize pipeline
    optimized_pipeline = ppl.PipelineOptimization(estimator, **optimized_pipeline_kwargs)
    # Set pipeline fitting parameters
    fit_kwargs = {
        'cv': 10,
        'num_parameter_combos': None,
        'n_jobs': -1,
        'random_state': None,
        'suppress_output': True,
        'use_default_param_dist': True,
        'param_dist': None,
        'test_size': 0.2  # 20% saved as test set
    }
    # Fit data
    optimized_pipeline.fit(X, y, **fit_kwargs)
    # Save optimized pipeline
    optimized_pipelines[model_name] = optimized_pipeline
```
### KNN with and without pre-processing and various options
#### Basic KNN
Here we do a K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation with a grid search over the default of 1 to 30 nearest neighbors and the use of either "uniform" or "distance" weights:
```
%%time
estimator = 'knn'
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': None,
'scale_type': None,
'transform_type': None
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
```
Note the default OptimizedPipeline parameters and those for its fit() method.
The OptimizedPipeline class contains all of the data associated with the nested stratified k-folds cross-validation.
After use of the fit() method, this includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports.
A report can be printed immediately after the fit by setting the suppress_output keyword argument to True.
Printing the OptimizedPipeline instance also shows the report:
```
print optimized_pipeline
```
The report lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters.
If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class.
It turns out that the best settings for this optimized pipeline are 12 neighbors and the 'uniform' weight.
Note how I've set the random_state keyword argument to 6 so that the models can be compared using the same test/train split.
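Why a fixed random_state makes models comparable can be sketched with a toy reproducible split — an illustration of the idea, not pyplearnr's actual splitting code:

```python
import random

def split_indices(n, test_size=0.2, random_state=6):
    """Sketch of a reproducible test/train split: seeding the shuffle with
    the same random_state always yields the same permutation, so every
    pipeline fit with that seed is evaluated on identical data."""
    indices = list(range(n))
    random.Random(random_state).shuffle(indices)
    cut = int(n * test_size)
    return indices[cut:], indices[:cut]  # train, test

train_a, test_a = split_indices(100)
train_b, test_b = split_indices(100)  # identical split, same seed
```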
#### Default pipeline step grid parameters
The default parameters to grid-search over for k-nearest neighbors are 1 to 30 neighbors and either the 'uniform' or 'distance' weight.
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input:
```
pre_processing_grid_parameters,classifier_grid_parameters,regression_grid_parameters = \
optimized_pipeline.get_default_pipeline_step_parameters(X.shape[0])
classifier_grid_parameters['knn']
```
#### KNN with custom pipeline step grid parameters
These default parameters can be ignored by setting the use_default_param_dist keyword argument to False.
The param_dist keyword argument can be used to keep default parameters (if use_default_param_dist set to True) or to be used as the sole source of parameters (if use_default_param_dist set to False).
Here is a demonstration of generation of default parameters with those in param_dist being overridden:
```
%%time
estimator_name = 'knn'
model_name = 'custom_override_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(30,500)
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
```
Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30.
Here's an example of only using param_dist for parameters:
```
%%time
model_name = 'from_scratch_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(10,30)
}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': False,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
```
Note how the estimator\_\_weights parameter isn't set for the KNN estimator.
### KNN with scaling
The currently supported scaling options are standard, normal, min-max, and binary using scikit-learn's StandardScaler, Normalizer, MinMaxScaler, and Binarizer, respectively. These are set by the pipeline initialization kwarg 'scale_type' like this:
```
%%time
estimator = 'knn'
scaling_options = ['standard','normal','min-max','binary']
for scaling_option in scaling_options:
    model_name = '%s_%s'%(scaling_option,estimator_name)
    optimized_pipeline_kwargs = {
        'scale_type': scaling_option
    }
    # Initialize pipeline
    optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
    # Set pipeline fitting parameters
    fit_kwargs = {
        'random_state': 6,
        'use_default_param_dist': True,
        'suppress_output': True
    }
    # Fit data
    optimized_pipeline.fit(X,y,**fit_kwargs)
    # Save
    optimized_pipelines[model_name] = optimized_pipeline
```
Let's compare the pipelines so far:
```
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Binary scaling fed into a KNN classifier appears to have the best training score.
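Before customizing their settings in the next section, it helps to see what these two scalers actually compute per feature. Here is a plain-Python sketch of the scikit-learn behavior (an illustration, not the library implementation):

```python
def min_max_scale(values, feature_range=(0, 1)):
    """Sketch of MinMaxScaler for one feature:
    x -> lo + (x - min) * (hi - lo) / (max - min)."""
    lo, hi = feature_range
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

def binarize(values, threshold=0.5):
    """Sketch of Binarizer for one feature: 1 if value > threshold else 0."""
    return [1 if v > threshold else 0 for v in values]

scaled = min_max_scale([2.0, 4.0, 6.0])                        # default (0, 1) range
shifted = min_max_scale([2.0, 4.0, 6.0], feature_range=(1, 2))  # custom range
bits = binarize([0.2, 0.5, 0.9])                                # default 0.5 threshold
```

The `feature_range` and `threshold` knobs here are exactly what the grid search below tunes via `scaler__feature_range` and `scaler__threshold`.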
#### KNN with custom min-max and binary scaling settings
MinMaxScaler scales each feature value to between 0 and 1 by default. Different scaling ranges can be gridded over by setting the 'scaler\_\_feature_range' keyword argument in param_dist.
Binarizer sets each value to 0 or 1 depending on a threshold. The default for pyplearnr is 0.5. This can be changed by setting 'scaler\_\_threshold' using param_dist.
Here is an example of setting both:
```
%%time
reload(ppl)
estimator = 'knn'
scaling_options = ['min_max','binary']
param_dists = {
'min_max': {
'scaler__feature_range': [(1,2),(3,4)]
},
'binary': {
'scaler__threshold': np.arange(0,1,0.1)
}
}
for scaling_option in scaling_options:
    model_name = 'custom_%s_%s'%(scaling_option,estimator_name)
    optimized_pipeline_kwargs = {
        'scale_type': scaling_option
    }
    # Initialize pipeline
    optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
    # Set pipeline fitting parameters
    fit_kwargs = {
        'random_state': 6,
        'use_default_param_dist': True,
        'suppress_output': True,
        'param_dist': param_dists[scaling_option]
    }
    # Fit data
    optimized_pipeline.fit(X,y,**fit_kwargs)
    # Save
    optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Switching the range for min_max scaling boosted it to rank 1 for pipeline training scores:
```
print optimized_pipelines['custom_min_max_knn']
```
The range of 1 to 2 for the MinMaxScaler appeared to be the best.
### KNN with feature selection using SelectKBest with f_classif
Currently only one form of feature selection, SelectKBest with f_classif, is supported. This is set using the 'feature_selection_type' keyword argument.
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Feature selection with KNN had a mid-level training score:
```
print optimized_pipelines['select_knn']
```
SelectKBest with f_classif chose 5 features as the best to use in the model.
The features selected by SelectKBest can be accessed normally, using the mask obtained from the get_support() method on the columns:
```
feature_selection_mask = optimized_pipelines['select_knn'].pipeline.named_steps['feature_selection'].get_support()
print np.array(X.columns)[feature_selection_mask]
```
Thus, Pclass 3, being male, and the titles Miss, Mr, and Mrs were considered the most important features by SelectKBest using f_classif.
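The selection step itself is simple: rank the per-feature scores (here, f_classif F-statistics) and keep the k highest. A sketch of the masking logic — illustrative only, not scikit-learn's code:

```python
def top_k_mask(scores, k):
    """Sketch of SelectKBest's selection step: given one score per feature,
    build a boolean keep-mask for the k highest-scoring features --
    the same kind of mask get_support() returns."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = set(ranked[:k])
    return [i in keep for i in range(len(scores))]

# Hypothetical scores for 4 features; keep the 2 best
mask = top_k_mask([0.1, 5.2, 3.3, 0.4], k=2)
```

Applying such a mask to the column names, as done above, recovers which features survived.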
#### Setting custom feature selection
The default number of features is 1 to all of them. This can be gridded over different values by setting 'feature_selection\_\_k' in param_dist:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_selection__k': [5,7,8]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_select_knn']
```
### KNN using feature interactions
Feature products of different degrees can be used as additional features by setting the 'feature_interactions' OptimizedPipeline keyword argument to True:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['interaction_knn']
```
The optimal number of interactions (number of features multiplied by each other at once) was found to be 1.
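Conceptually, degree-2 interactions are just the pairwise products of the original features; here is a plain-Python sketch of the idea (not pyplearnr's actual implementation):

```python
from itertools import combinations

def prod(values):
    # multiply a sequence of numbers together
    result = 1.0
    for v in values:
        result *= v
    return result

def interaction_features(row, degree=2):
    # one product per `degree`-sized combination of features
    return [prod(combo) for combo in combinations(row, degree)]

row = [2.0, 3.0, 5.0]             # one sample with three features
print(interaction_features(row))  # [6.0, 10.0, 15.0]
```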
#### KNN using custom number of feature interactions
The 'feature_interactions__degree' parameter dictates the degree of the interactions. The default setting is to try no interactions (degree 1) and pairwise interactions (degree 2). Setting this in param_dist allows custom degrees:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_interactions__degree': [2,3,4]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_interaction_knn']
```
### KNN with pre-processing transforms
Currently Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) are supported as pre-processing options.
#### KNN with PCA pre-processing
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'pca_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 'pca'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['pca_knn']
```
We can inspect the transformed data after PCA as usual:
```
transformed_data = optimized_pipelines['pca_knn'].pipeline.named_steps['transform'].transform(X.values)
column_names = ['PCA_%d'%(feature_ind+1) for feature_ind in range(transformed_data.shape[1])]
pca_df = pd.DataFrame(transformed_data,columns=column_names)
pca_df.plot(x='PCA_1',y='PCA_2',style='ro')
```
This is currently a very manual process and would become difficult with more processing steps. I'm thinking of automating this in the future with a class containing all optimized pipelines.
Any of the parameters displayed in the pipeline section of the report (iterated_power, random_state, whiten, n_components, etc.) can be set in param_dist via 'transform\_\_setting', as done previously.
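For example, to grid over the number of PCA components (the values here are hypothetical):

```python
# Hypothetical grid over the PCA step's n_components parameter
param_dist = {
    'transform__n_components': [2, 3, 4]
}
```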
#### KNN with t-SNE pre-processing
The t-SNE algorithm can be used as a pre-processing algorithm as well by setting the 'transform_type' keyword argument to 't-sne':
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 't-sne_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 't-sne'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Unfortunately, the t-SNE step takes longer than most in pyplearnr, and it also resulted in the worst score so far. I'll try to optimize this in the future.
### Reducing the number of grid combinations
Setting the 'num_parameter_combos' fit() method keyword argument to an integer will limit the number of grid combinations to perform using RandomizedSearchCV instead of GridSearchCV:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'less_combos_%s'%(estimator_name)
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'num_parameter_combos': 5
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
This is a good way to speed up computations and give you an idea as to how long a particular pipeline takes to train.
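Under the hood, this amounts to evaluating a random subset of the full Cartesian product of the grid instead of every combination; here is a plain-Python sketch of the idea (not pyplearnr's actual code):

```python
import itertools
import random

param_grid = {
    'n_neighbors': list(range(1, 31)),   # 30 values
    'weights': ['uniform', 'distance'],  # 2 values
}

# The full grid is every combination of parameter values
keys = sorted(param_grid)
all_combos = list(itertools.product(*(param_grid[k] for k in keys)))
print(len(all_combos))  # 60

# A randomized search evaluates only a fixed-size random subset
rng = random.Random(6)
sampled = rng.sample(all_combos, 5)
print(len(sampled))  # 5
```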
Here's the corresponding report:
```
print optimized_pipelines['less_combos_knn']
```
The best parameter combination, of those attempted by RandomizedSearchCV, was 12 nearest neighbors with the 'uniform' weight.
### Other models
This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost:
```
%%time
classifiers = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
for estimator in classifiers:
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Logistic regression, random forest, multilayer perceptron, and adaboost outperform KNN, even with all of the attempted pre-processing so far.
### Putting it all together
Different combinations of these options can be strung together simultaneously to iterate over multiple models:
```
%%time
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'feature_interactions': feature_interactions,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black',figsize=(10,40))
print optimized_pipelines['min_max_pca_multilayer_perceptron']
len(optimized_pipelines.keys())
```
Out of 240 possible pipelines, the best, with a test score of 0.899, appears to be min-max scaling between 0 and 1 funneled into PCA and then into a multilayer perceptron with one hidden layer of size 5.
It took roughly 3 hours to find it.
### Predicting survival with the optimal model
All one has to do to make a prediction is use the .predict method of the pipeline in the .pipeline field.
Here's an example of predicting whether I would survive on the Titanic. I'm 32, would probably have one family member with me, might be Pclass 1 (I'd hope), male, and have a Ph.D. (if that's what they mean by Dr.). I'm using the median Fare for Pclass 1 and randomly chose a city of embarkation:
```
personal_stats = [32,1,0,df[df['Pclass']==1]['Fare'].median(),0,0,1,1,0,1,0,0,0,0,0,0]
zip(personal_stats,X.columns)
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict(personal_stats)
```
Looks like I died!
Let's look at my predicted probability of surviving:
```
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict_proba(personal_stats)
```
I would have a 0.77% chance of survival.
## Summary
I've shown how to use pyplearnr to try out 240 different pipeline combinations, validated with stratified 10-fold cross-validation, using a combination of simple keyword arguments and some additional customization options. I've also shown how to access the model parameters, predict survival, and check the actual predicted probability according to the optimized pipeline.
Please let me know if you have any questions or suggestions about how to improve this tool, my code, the approach I'm taking, etc.
```
%%time
%matplotlib inline
import pyplearnr as ppl
repeated_k_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_k_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_k_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_k_folds],
}
repeated_kfcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
%%time
reload(ppl)
%matplotlib inline
import pyplearnr as ppl
repeated_five_folds = []
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
'cv': 5,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_five_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_five_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_five_folds],
}
repeated_fivefcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
import sys
sys.path.append('/Users/cmshymansky/documents/code/library/pairplotr')
import pairplotr as ppr
repeated_fivefcv_df.info()
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
repeated_fivefcv_df['train scores'].describe()
from matplotlib import pylab as plt
ax = plt.subplot(111)
print ax
# repeated_fivefcv_df.plot(ax=ax,x='train scores',y='test scores',style='bo')
repeated_kfcv_df.plot(ax=ax,x='train scores',y='test scores',style='ro')
print dir(repeated_k_folds[0].grid_search)
all_scores = []
for x in repeated_k_folds[0].grid_search.grid_scores_:
all_scores.extend(list(x.cv_validation_scores))
print max(x.cv_validation_scores),x.best_score_
print repeated_k_folds[0].grid_search.cv_results_
pd.Series(all_scores).plot(kind='hist',color='grey',bins=8)
def get_bootstrapped_datasets(orig_data_set, num_samples=100, points_per_sample=50):
import random
data_sets = []
for i in range(num_samples):
sample = [random.choice(orig_data_set) for x in range(points_per_sample)]
data_sets.append(sample)
return data_sets
def cdf(aList, x):
''' 'aList' must be sorted (low to high) '''
returnVal=0
for v in aList:
if v<=x:
returnVal+=1
return returnVal/float(len(aList))
def inv_cdf(aList, percentile):
''' 'percentile' is between 0 and 1.
'aList' must be sorted (low to high)
'''
returnVal = 0
for i in xrange(len(aList)):
if cdf(aList, aList[i])>=percentile:
returnVal = aList[i]
break
return returnVal
def conf_interval(data_set, alpha=0.05):
data_set.sort()
low_end = inv_cdf(data_set, alpha)
high_end = inv_cdf(data_set, 1-alpha)
return (low_end, high_end)
from matplotlib import pylab as plt
bootstrapped_samples = get_bootstrapped_datasets(repeated_fivefcv_df['test scores'].values)
avg_vals = [float(sum(l))/len(l) for l in bootstrapped_samples]
conf_10000 = conf_interval(avg_vals)
pd.Series(avg_vals).hist(bins=10, normed=True)
plt.axvspan(conf_10000[0],conf_10000[1],alpha=0.5,color='red')
from sklearn.learning_curve import learning_curve
import numpy as np
fig, ax = plt.subplots(1,1, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
N, train_lc, val_lc = learning_curve(optimized_pipeline.pipeline,
X, y, cv=5,
train_sizes=np.linspace(0.3, 1, 25))
ax.plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax.plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax.hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax.set_ylim(0, 1)
ax.set_xlim(N[0], N[-1])
ax.set_xlabel('training size')
ax.set_ylabel('score')
ax.legend(loc='best')
# ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
# ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
# ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
# color='gray', linestyle='dashed')
# ax[i].set_ylim(0, 1)
# ax[i].set_xlim(N[0], N[-1])
# ax[i].set_xlabel('training size')
# ax[i].set_ylabel('score')
# ax[i].set_title('degree = {0}'.format(degree), size=14)
# ax[i].legend(loc='best')
train_lc
# Set output feature
output_feature = 'diabetes'
# Get input features
input_features = [x for x in X_interaction.columns if x != output_feature]
# Split into features and responses
X = X_interaction.copy()
y = test_df[output_feature].copy()
reload(ppl)
ppl.OptimizationBundle().get_options()
%%time
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Fit data
optimized_pipeline.fit(X,y,random_state=6)
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import sklearn.metrics as sklearn_metrics
X_array = X.copy().values
y_array = y.copy().values
param_grid = {
'estimator__n_neighbors': range(31),
'estimator__weights': ['uniform', 'distance']
}
X_train, X_val, y_train, y_val = \
train_test_split(X_array,y_array,test_size=0.2,random_state=6,stratify=y_array)
from sklearn.model_selection import StratifiedKFold
kfolds_kwargs = dict(
n_splits=10,
shuffle=True,
random_state=6
)
skf = StratifiedKFold(**kfolds_kwargs)
fold_optimizations = {}
for fold_ind, data_inds in enumerate(skf.split(X_train, y_train)):
fold_optimizations[fold_ind] = {}
train_index, test_index = data_inds[0],data_inds[1]
X_train_inner, X_test_inner = X_array[train_index], X_array[test_index]
y_train_inner, y_test_inner = y_array[train_index], y_array[test_index]
pipeline = Pipeline([('estimator',KNeighborsClassifier(n_neighbors=11,weights='distance'))])
pipeline.fit(X_train_inner,y_train_inner)
y_pred_inner = pipeline.predict(X_test_inner)
confusion_matrix = sklearn_metrics.confusion_matrix(y_test_inner, y_pred_inner)
score = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['confusion_matrix'] = confusion_matrix
fold_optimizations[fold_ind]['score'] = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['pipeline'] = pipeline
print np.array([fold_optimizations[fold_ind]['score'] for fold_ind in fold_optimizations]).mean()
y_pred = pipeline.predict(X_val)
test_confusion_matrix = sklearn_metrics.confusion_matrix(y_val, y_pred)
score = test_confusion_matrix.trace()/float(test_confusion_matrix.sum())
print score
# TRAIN: [1 3] TEST: [0 2]
# TRAIN: [0 2] TEST: [1 3]
fold_optimizations
print dir(optimized_pipeline.grid_search.best_estimator_)
dir(folds[0].named_steps['estimator'])
```
# Advanced Feature Engineering in Keras
## Learning Objectives
1. Process temporal feature columns in Keras.
2. Use Lambda layers to perform feature engineering on geolocation features.
3. Create bucketized and crossed feature columns.
## Introduction
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [Solution Notebook](../solutions/4_keras_adv_feat_eng.ipynb) for reference.
## Set up environment variables and load necessary libraries
We will start by importing the necessary libraries for this lab.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from tensorflow.keras import models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
```
## Load taxifare dataset
The Taxi Fare dataset contains 106,545 rows and has been pre-processed and split for use in this lab. Note that it is the same dataset used in the BigQuery feature engineering labs. The fare_amount column is the target: the continuous value we'll train a model to predict.
First, let's download the .csv data by copying the data from a cloud storage bucket.
```
if not os.path.isdir("../data"):
os.makedirs("../data")
# The `gsutil cp` command allows you to copy data between the bucket and current directory.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/taxi-train1_toy.csv ../data
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/taxi-valid1_toy.csv ../data
```
Let's check that the files were copied correctly and look like we expect them to.
```
!ls -l ../data/*.csv
!head ../data/*.csv
```
## Create an input pipeline
Typically, you will use a two-step process to build the pipeline. Step 1 is to define the columns of data, i.e., which column we're predicting for, and the default values. Step 2 is to define two functions: one to select the features and label you want to use, and one to load the training data. Also, note that pickup_datetime is a string, and we will need to handle this in our feature-engineered model.
```
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# A function to define features and labels
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility function to create a tf.data dataset from CSV files matching a pattern
def load_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size,
CSV_COLUMNS,
DEFAULTS)
dataset = dataset.map(features_and_labels) # features, label
if mode == 'train':
dataset = dataset.shuffle(1000).repeat()
    # prefetch the next batch to overlap data preparation with training
dataset = dataset.prefetch(1)
return dataset
```
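The features_and_labels function above just pops the label (and any unwanted columns) out of each row; running the same logic on a plain Python dict (with hypothetical values) makes the behavior concrete:

```python
LABEL_COLUMN = 'fare_amount'

def features_and_labels(row_data):
    # drop columns the model should never see
    for unwanted_col in ['key']:
        row_data.pop(unwanted_col)
    # separate the label from the remaining features
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label

row = {'fare_amount': 7.5, 'passenger_count': 2.0, 'key': 'abc'}
features, label = features_and_labels(row)
print(features)  # {'passenger_count': 2.0}
print(label)     # 7.5
```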
## Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques.
```
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
```
We'll build our DNN model and inspect the model architecture.
```
model = build_dnn_model()
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
```
## Train the model
To train the model, simply call [model.fit()](https://keras.io/models/model/#fit). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5
NUM_EVAL_EXAMPLES = 14906
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
```
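It's worth checking the schedule arithmetic: with these settings, each Keras "epoch" is really one of NUM_EVALS slices of a full pass over the training data:

```python
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5

# number of batches per evaluation slice
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
print(steps_per_epoch)  # 1863

# examples consumed per Keras "epoch"
print(steps_per_epoch * TRAIN_BATCH_SIZE)  # 59616
```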
### Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
```
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
```
### Predict with the model locally
To predict with Keras, you simply call [model.predict()](https://keras.io/models/model/#predict) and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
```
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
```
## Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
### Temporal Feature Columns
**Lab Task #1**: Processing temporal feature columns in Keras
We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.
```
# TODO 1a - Your code here
# TODO 1b - Your code here
# TODO 1c - Your code here
```
### Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a pair of coordinates. In our training data set, we restricted our data points to pickups and drop-offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
#### Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
```
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
```
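Stripping away the TensorFlow ops, this is the ordinary planar distance formula; here is a plain-Python sanity check (note the result is in raw coordinate units, not kilometers):

```python
import math

def euclidean_py(lon1, lat1, lon2, lat2):
    # straight-line distance between two coordinate points
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return math.sqrt(londiff * londiff + latdiff * latdiff)

# a 3-4-5 right triangle in coordinate space
print(euclidean_py(0.0, 0.0, 3.0, 4.0))  # 5.0
```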
#### Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude' that adds 78 to each longitude value and divides by 8. Our longitude values range from -78 to -70, so 78 is the absolute value of the minimum and the span of the range is 8. Adding 78 shifts each value into [0, 8], and dividing by 8 returns a value scaled to [0, 1].
```
def scale_longitude(lon_column):
return (lon_column + 78)/8.
```
Next, we create a function named 'scale_latitude' that subtracts 37 from each latitude value and divides by 8. Our latitude values range from 37 to 45, so 37 is the minimum and the span of the range is 8. Subtracting 37 shifts each value into [0, 8], and dividing by 8 returns a value scaled to [0, 1].
```
def scale_latitude(lat_column):
return (lat_column - 37)/8.
```
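We can sanity-check both scalers in plain Python (the functions are repeated here so the snippet runs standalone): values inside NYC's coordinate ranges should land in [0, 1]:

```python
def scale_longitude(lon_column):
    return (lon_column + 78)/8.

def scale_latitude(lat_column):
    return (lat_column - 37)/8.

# longitude -74 sits at the midpoint of [-78, -70]
print(scale_longitude(-74.0))  # 0.5
# latitude 41 sits at the midpoint of [37, 45]
print(scale_latitude(41.0))    # 0.5
# range endpoints map to 0 and 1
print(scale_longitude(-78.0), scale_longitude(-70.0))  # 0.0 1.0
```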
### Putting it all together
We now create a function called transform. The transform function takes our numeric and string column features as inputs to the model, scales the geolocation features, and then adds the Euclidean distance (computed by the 'euclidean' function defined above) as a transformed variable. Lastly, it bucketizes the latitude and longitude features.
**Lab Task #2:** We will use Lambda layers to create two new "geo" functions for our model.
**Lab Task #3:** Creating the bucketized and crossed feature columns
```
def transform(inputs, numeric_cols, string_cols, nbuckets):
print("Inputs before features transformation: {}".format(inputs.keys()))
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
    # Scaling longitude from range [-78, -70] to [0, 1]
# TODO 2a
# TODO -- Your code here.
# Scaling latitude from range [37, 45] to [0, 1]
# TODO 2b
# TODO -- Your code here.
# add Euclidean distance
transformed['euclidean'] = layers.Lambda(
euclidean,
name='euclidean')([inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# TODO 3a
# TODO -- Your code here.
# TODO 3b
# TODO -- Your code here.
# create embedding columns
feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)
print("Transformed features: {}".format(transformed.keys()))
print("Feature columns: {}".format(feature_columns.keys()))
return transformed, feature_columns
```
Next, we'll create our DNN model now with the engineered features. We'll set `NBUCKETS = 10` to specify 10 buckets when bucketizing the latitude and longitude.
```
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
```
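The custom `rmse` metric in the cell above can be sanity-checked against a plain NumPy equivalent — a small sketch, independent of TensorFlow:

```python
import numpy as np

def rmse_np(y_true, y_pred):
    # root of the mean squared difference, mirroring the Keras metric above
    return np.sqrt(np.mean(np.square(np.asarray(y_pred) - np.asarray(y_true))))

print(rmse_np([0.0, 0.0], [3.0, 4.0]))  # -> 3.5355..., i.e. sqrt(12.5)
```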
Let's see how our model architecture has changed now.
```
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS+3,
steps_per_epoch=steps_per_epoch)
```
As before, let's visualize the training curves.
```
plot_curves(history, ['loss', 'mse'])
```
Let's make a prediction with this new model with engineered features on the example we had above.
```
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
```
Below we summarize our training results comparing our baseline model with our model with engineered features.
| Model | Taxi Fare | Description |
|--------------------|-----------|-------------------------------------------|
| Baseline | value? | Baseline model - no feature engineering |
| Feature Engineered | value? | Feature Engineered Model |
Copyright 2021 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import librosa
import os
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import dct
from scipy.signal import spectrogram
import operator
import pickle
import time
import csv
from random import shuffle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from sklearn import metrics
import logging
import joblib
```
## for GPU only
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
### CHOICE 1 : Network without dropout or batch norm
- comment line `net.to(device)` to run on CPU
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,128,3)
self.conv2 = nn.Conv2d(128,384,3)
self.conv3 = nn.Conv2d(384,768,3)
self.conv4 = nn.Conv2d(768,2048,3)
self.conv5 = nn.Conv2d(2048,50,1) # network with 50 outputs
# self.conv5 = nn.Conv2d(2048,10,1) # network with 10 outputs
self.pool1 = nn.MaxPool2d((2,4),(2,4))
self.pool2 = nn.MaxPool2d((2,4),(2,4))
self.pool3 = nn.MaxPool2d((2,6),(2,6))
self.pool4 = nn.MaxPool2d((1,7),(1,7))
# self.sig = nn.Sigmoid()
def forward(self,x):
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
x = self.pool4(F.relu(self.conv4(x)))
x = self.conv5(x)
x = x.view(-1, 50) # network with 50 outputs
# x = x.view(-1,10) # network with 10 outputs
# x = self.sig(x)
return x
net = Net()
net.to(device)
```
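For `x.view(-1, 50)` to work, the spatial dimensions must be exactly 1×1 after `pool4`, so that `conv5` (a 1×1 convolution) yields one 50-dimensional vector per example. A quick shape trace confirms how the conv/pool stack collapses the input — assuming, purely for illustration, a 40×1366 mel-spectrogram (the real shape comes from the pickled dataset):

```python
def conv2d_out(h, w, k):
    # 'valid' convolution with a k x k kernel, stride 1
    return h - k + 1, w - k + 1

def pool_out(h, w, kh, kw):
    # max pooling with kernel (kh, kw) and matching stride
    return (h - kh) // kh + 1, (w - kw) // kw + 1

h, w = 40, 1366                                # hypothetical input shape
h, w = pool_out(*conv2d_out(h, w, 3), 2, 4)    # conv1 + pool1
h, w = pool_out(*conv2d_out(h, w, 3), 2, 4)    # conv2 + pool2
h, w = pool_out(*conv2d_out(h, w, 3), 2, 6)    # conv3 + pool3
h, w = pool_out(*conv2d_out(h, w, 3), 1, 7)    # conv4 + pool4
print(h, w)  # -> 1 1
```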
### CHOICE 2 : Network with drop out and batch norm
- comment the line `net.to(device)` to run on CPU
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,128,3)
self.norm1 = nn.BatchNorm2d(128)
self.conv2 = nn.Conv2d(128,384,3)
self.norm2 = nn.BatchNorm2d(384)
self.conv3 = nn.Conv2d(384,768,3)
self.norm3 = nn.BatchNorm2d(768)
self.conv4 = nn.Conv2d(768,2048,3)
self.conv5 = nn.Conv2d(2048,50,1) # network with 50 outputs
# self.conv5 = nn.Conv2d(2048,10,1) # network with 10 outputs
self.pool1 = nn.MaxPool2d((2,4),(2,4))
self.pool2 = nn.MaxPool2d((2,4),(2,4))
self.pool3 = nn.MaxPool2d((2,6),(2,6))
self.pool4 = nn.MaxPool2d((1,7),(1,7))
self.drop = nn.Dropout2d(.5)
# self.sig = nn.Sigmoid()
def forward(self,x):
x = self.pool1(F.relu(self.norm1(self.conv1(x))))
x = self.pool2(self.drop(F.relu(self.norm2(self.conv2(x)))))
x = self.pool3(self.drop(F.relu(self.norm3(self.conv3(x)))))
x = self.pool4(F.relu(self.conv4(x)))
x = self.conv5(x)
x = x.view(-1, 50) # network with 50 outputs
# x = x.view(-1,10) # network with 10 outputs
# x = self.sig(x)
return x
net = Net()
net.to(device)
```
### Specify file to load pre-saved network parameters IF NETWORK PARAMETERS SAVED
- specify filename in `filename`
- following trained networks available
- network_10_epoch_10_output_norm_all_ex.pt
- network_10_epoch_BNDO_norm_all_ex.pt
- network_shuffled.pt
- network_10_epoch_1.pt
- network_10_epoch_norm_all_ex.pt
- network_10_epoch_2.pt
- network_10_epoch_norm_binwise.pt
```
filename = 'network_10_epoch_BNDO_norm_all_ex.pt'
net.load_state_dict(torch.load(os.path.join('./networks',filename)))
net.eval()
```
### creating the training, test and validation dataset
```
# indices = list(np.arange(len(keys)))
# shuffle(indices)
# train_ind = indices[0:15516]
# val_ind = indices[15517:20689]
# test_ind = indices[20690:]
# print(len(train_ind))
# print(len(val_ind))
# print(len(test_ind))
# with open('train_ind.pickle','wb') as handle:
# pickle.dump(train_ind,handle)
# with open('val_ind.pickle','wb') as handle:
# pickle.dump(val_ind,handle)
# with open('test_ind.pickle','wb') as handle:
# pickle.dump(test_ind,handle)
```
- available datasets:
- combined_dict_norm_all_examples.pickle
- combined_dict_norm_binwise.pickle
- combined_dict_norm_per.pickle
- combined_dict_updated.pickle - network_10_epoch_1
- combined_dict_norm_all_examples_newSpecs.pkl
```
with open('../database/train_ind.pickle','rb') as handle:
train_ind = pickle.load(handle)
with open('../database/val_ind.pickle','rb') as handle:
val_ind = pickle.load(handle)
with open('../database/test_ind.pickle','rb') as handle:
test_ind = pickle.load(handle)
with open('../database/combined_dict_norm_binwise.pickle','rb') as handle:
combined_dict = pickle.load(handle)
# combined_dict = joblib.load('../database/combined_dict_norm_all_examples_newSpecs.pkl')
with open('../database/sorted_tags.pickle', 'rb') as handle:
sorted_stats = pickle.load(handle)
```
# TRAINING
### loading the stored training, test and validation data
```
# TEST TO CHECK CONTENTS OF DICTIONARY
plt.imshow(combined_dict['2']['mel_spectrum'][0],aspect=20)
plt.show()
```
### Calculating weights for weighted Binary Cross Entropy Loss
```
pos_weight = []
for i in range(50): # make 50 for 50 output
pos_weight.append(sorted_stats[0][1]/sorted_stats[i][1])
pos_weight = np.array(pos_weight).reshape(1,50) # make 50 for 50 output
print(pos_weight)
pos_weight = torch.from_numpy(pos_weight)
pos_weight = pos_weight.float()
print(type(pos_weight))
```
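The weighting scheme above divides the count of the most frequent tag by each tag's own count, so rarer tags receive proportionally larger positive weights in the loss. A toy illustration with hypothetical `sorted_stats`-style data:

```python
# (tag, count) pairs sorted by descending frequency, mimicking sorted_stats
stats = [("guitar", 100), ("piano", 50), ("flute", 25)]
pos_weight = [stats[0][1] / s[1] for s in stats]
print(pos_weight)  # -> [1.0, 2.0, 4.0]: the rarest tag is weighted 4x
```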
### Loss function and Optimization functions
```
# criterion = nn.CrossEntropyLoss()
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight).cuda()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
keys = list(combined_dict.keys())
print(len(keys))
```
- comment the line `inputs = inputs.to(device)` and `labels = labels.to(device)` to run on CPU
```
# remember to change the name of file before execution
batch_size=4
num_channels=1
start_time = time.time()
loss_hist=[]
for epoch in range(10): # loop over the dataset multiple times
running_loss = 0.0
loss_epoch = []
#creating a mini batch
for i in range(0,len(train_ind),4):
shp = combined_dict[keys[train_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[train_ind[i]]]['output'].shape[0] # outputs 50 labels
# lab_shp=10 # outputs 10 labels
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[train_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[train_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
# labels[j,:] = labels[j,:]*np.arange(50) # was done for crossentropyloss
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
labels = torch.from_numpy(labels)
labels = labels.float()
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs,labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
# print(i)
if i % 20 == 0: # every 5 mini-batches, since i advances by 4
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 5))
loss_epoch.append(running_loss/5)
running_loss = 0.0
loss_hist.append(loss_epoch)
end_time = time.time()
print('Finished Training')
print('Exe. Time:',end_time-start_time)
```
### to save the network parameters onto disk
```
# change name before saving the network
torch.save(net.state_dict(),'./networks/network_10_epoch_BNDO_norm_binwise.pt')
```
# CODE TO EVALUATE VALIDATION SET PERFORMANCE
```
keys = list(combined_dict.keys())
print(len(keys))
```
- comment line `inputs = inputs.to(device)` to run on CPU
```
batch_size=4
num_channels=1
out_all = np.zeros((1,50))# change for 50
labels_all = np.zeros((1,50)) # change for 50
for i in range(0,len(val_ind),4):
shp = combined_dict[keys[val_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[val_ind[i]]]['output'].shape[0]
# lab_shp = 10 # uncomment for 10 output
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[val_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[val_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
labels_all = np.concatenate((labels_all,labels),axis=0)
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
inputs = inputs.to(device)
outputs = net(inputs)
temp = outputs.cpu().detach().numpy()
out_all = np.concatenate((out_all,temp),axis=0)
if i%100 == 0:
print(i)
print('Finished')
labels_all = labels_all[1:,:]
out_all = out_all[1:,:]
print(labels_all.shape)
print(out_all.shape)
out_all_tensor = torch.from_numpy(out_all)
out_all_prob = torch.sigmoid(out_all_tensor).numpy() # for auc_roc
out_all_bin_tensor = torch.round(torch.sigmoid(out_all_tensor))
# out_all_bin = np.where(out_all_prob<.3,0,1) # to change the threshold of rounding
out_all_bin = out_all_bin_tensor.numpy() # for other metrics
print(out_all_bin.shape)
```
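`torch.round(torch.sigmoid(x))` amounts to thresholding the logits at 0, i.e. the probabilities at 0.5 (up to tie-handling at exactly 0.5). A NumPy sketch of the same binarization step:

```python
import numpy as np

def binarize(logits, threshold=0.5):
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits)))  # sigmoid
    return (probs >= threshold).astype(int)

print(binarize([-2.0, -0.1, 0.1, 3.0]))  # -> [0 0 1 1]
```

Lowering `threshold` (as in the commented-out `np.where(out_all_prob<.3,0,1)`) trades precision for recall.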
### calculating per tag metrics
```
metrics_dict={}
for i in range(50): # change for 50
# print(i)
precision,recall,fscore,support = metrics.precision_recall_fscore_support(labels_all[:,i],out_all_bin[:,i])
metrics_dict[sorted_stats[i][0]] = {}
metrics_dict[sorted_stats[i][0]]['precision'] = precision
metrics_dict[sorted_stats[i][0]]['recall'] = recall
metrics_dict[sorted_stats[i][0]]['fscore'] = fscore
metrics_dict[sorted_stats[i][0]]['support'] = support
for key in metrics_dict.keys():
print(key,':',metrics_dict[key])
print('\n')
```
### calculating the metrics (precision, recall, fscore) using all tags at once
```
precision,recall,fscore,support = metrics.precision_recall_fscore_support(labels_all,out_all_bin)
```
### calculating the AUC-ROC curve
```
auc_roc = metrics.roc_auc_score(labels_all,out_all_prob)
print(auc_roc)
```
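AUC-ROC has a handy interpretation: it is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one, with ties counting half. A NumPy check on toy data:

```python
import numpy as np

y = np.array([0, 0, 1, 1])            # toy labels
p = np.array([0.1, 0.4, 0.35, 0.8])   # toy predicted probabilities
pos, neg = p[y == 1], p[y == 0]
# fraction of (positive, negative) pairs ranked correctly, ties counted half
auc = np.mean((pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :]))
print(auc)  # -> 0.75, matching metrics.roc_auc_score(y, p)
```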
# CODE FOR TEST SET
```
batch_size=4
num_channels=1
out_all = np.zeros((1,50))# change for 50
labels_all = np.zeros((1,50)) # change for 50
for i in range(0,len(test_ind)-4,4):
shp = combined_dict[keys[test_ind[i]]]['mel_spectrum'][0].shape
lab_shp = combined_dict[keys[test_ind[i]]]['output'].shape[0]
# lab_shp = 10 # uncomment for 10 output
inputs = np.zeros((batch_size,num_channels,shp[0],shp[1]+1))
labels = np.zeros((batch_size,lab_shp))
for j in range(4):
inputs[j,0,:,0:-1] = combined_dict[keys[test_ind[i+j]]]['mel_spectrum'][0]
labels[j,:] = combined_dict[keys[test_ind[i+j]]]['output'] # remove last indexing if output is 50 labels
labels_all = np.concatenate((labels_all,labels),axis=0)
inputs = torch.from_numpy(inputs)
inputs = inputs.float()
inputs = inputs.to(device)
outputs = net(inputs)
temp = outputs.cpu().detach().numpy()
out_all = np.concatenate((out_all,temp),axis=0)
if i%100 == 0:
print(i)
print('Finished')
labels_all = labels_all[1:,:]
out_all = out_all[1:,:]
print(labels_all.shape)
print(out_all.shape)
out_all_tensor = torch.from_numpy(out_all)
out_all_prob = torch.sigmoid(out_all_tensor).numpy() # for auc_roc
out_all_bin_tensor = torch.round(torch.sigmoid(out_all_tensor))
# out_all_bin = np.where(out_all_prob<.3,0,1) # to change the threshold of rounding
out_all_bin = out_all_bin_tensor.numpy() # for other metrics
print(out_all_bin.shape)
auc_roc = metrics.roc_auc_score(labels_all,out_all_prob)
print(auc_roc)
```
- storing all metrics in the dictionary
```
metrics_dict['all_precision'] = precision
metrics_dict['all_recall'] = recall
metrics_dict['all_fscore'] = fscore
metrics_dict['all_support'] = support
metrics_dict['auc_roc'] = auc_roc
metrics_dict['loss_hist'] = loss_hist
# saving the metric to disk
with open('./metrics/metrics_10_epoch_BNDO_norm_all_ex_test.pickle','wb') as handle:
pickle.dump(metrics_dict,handle)
```
# To run this, you need to run (or have run) the following in docker:
```
pip install textblob
pip install nltk
pip install twitterscraper
pip install pandas_datareader
pip install yahoo-finance
```
```
from twitterscraper import query_tweets
from twitterscraper.query import query_tweets_once as query_tweets_advanced
from sklearn.decomposition import LatentDirichletAllocation as LDA
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import csr_matrix
import seaborn as sns
import hypertools as hyp
import numpy as np
from textblob import TextBlob as tb
import pandas_datareader as pdr
import pandas as pd
import numpy as np
import datetime as dt
from yahoo_finance import Share
import nltk
nltk.download('brown')
nltk.download('punkt')
%matplotlib inline
```
# Define some useful Twitter-related functions
- Find most recent tweets containing a given keyword
- Fit topic models to a set of tweets
- Do sentiment analyses on tweets
- Get the tweet text and dates
```
# function for scraping twitter for one or more keywords and returning a dictionary with:
# - tweets: the tweet text (list of length n_tweets)
# - datetimes: the tweet date/time (as a DateTime object)
# - topicvecs: the tweet topic vectors (numpy array with n_tweets rows and n_topics columns)
# - topwords: the top n words from each topic (list of length n_topics, where each element is a list of n_words)
# - sentiments: the sentiment valence of each tweet (numpy array of length n_tweets)
def twitter_witch(keywords, n_tweets=500, n_topics=10, n_words=5, model=None, use_advanced=False):
#if keywords is a list, combine all keywords into a single string, where each word is separated by " OR "
if type(keywords) == list:
if use_advanced:
print('Cannot scrape lists of advanced queries')
return None
else:
keywords = ' OR '.join(keywords)
#get the tweets
tweets = []
if not use_advanced:
for tweet in query_tweets(keywords, n_tweets)[:n_tweets]:
tweets.append(tweet)
else:
tweets = query_tweets_advanced(keywords, num_tweets=n_tweets, limit=n_tweets)
#get the tweet text
tweet_text = list(map(lambda x: x.text, tweets))
#fit a topic model to the tweet text
n_features = 1000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_features, stop_words='english')
tf = tf_vectorizer.fit_transform(tweet_text)
vocab = tf_vectorizer.get_feature_names()
if model == None:
lda = LDA(n_topics=n_topics, max_iter=5, learning_method='online', learning_offset=50., random_state=0).fit(tf)
else:
lda = model
tweet_topics = lda.fit(tf)
def get_top_words(model, vocab, n_words):
top_words = []
for topic_idx, topic in enumerate(model.components_):
next = topic.argsort()[:-n_words - 1:-1]
top_words.append(list(map(lambda x: vocab[x], next)))
return top_words
def tweet_sentiment(tweet):
b = tb(tweet)
return np.sum(np.array(list(map(lambda x: x.sentiment.polarity, b.sentences))))
#get the tweet datetimes
tweet_dts = list(map(lambda x: x.timestamp, tweets))
return {'tweets': tweet_text,
'datetimes': tweet_dts,
'topicvecs': lda.components_.T,
'topwords': get_top_words(lda, vocab, n_words),
'sentiments': np.array(list(map(tweet_sentiment, tweet_text))),
'model': lda}
```
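The slicing trick `topic.argsort()[:-n_words - 1:-1]` inside `get_top_words` returns the indices of the `n_words` largest weights in descending order — it sorts ascending, then walks the result backwards. A minimal illustration:

```python
import numpy as np

topic = np.array([0.1, 0.7, 0.2, 0.5])   # toy topic-word weights
n_words = 2
top = topic.argsort()[:-n_words - 1:-1]  # walk the ascending sort backwards
print(top)  # -> [1 3]: the words with weights 0.7 and 0.5
```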
# Define some useful finance-related functions
Given a stock ticker symbol and a date, return a dictionary with the following keys/values (strings or floats, or None if unavailable):
- name: The company name
- symbol: The ticker symbol
- open: The opening price from that day
- close: The closing price from that day
- vol: The trading volume from that day
- price_change: The change in price from the previous day, in whatever the trading currency is
- percent_change: The change in price from the previous day, as a percentage
- currency: The currency (e.g. USD)
```
def datefloor(date):
'''
Reset a date to 00:00:00 and return the new datetime object
'''
return dt.datetime(year=date.year, month=date.month, day=date.day)
def finance_wizard(name, date):
def floater(n):
'''
Turn n into a float. Acceptable values are: floats or dataframes
'''
if type(n) == pd.core.series.Series:
return n.values[-1]
else:
return n
x = Share(name.upper())
info = {'name': '',
'symbol': name,
'open': np.nan,
'close': np.nan,
'vol': np.nan,
'price_change': np.nan,
'percent_change': np.nan,
'currency': ''}
info['name'] = x.get_name()
info['currency'] = x.get_currency()
if info['name'] == None: #ticker symbol not found
return info
end = datefloor(date) + dt.timedelta(1)
start = end - dt.timedelta(4) #look up to 5 days prior to the target date to account for days when the markets were closed
try:
data = pdr.data.DataReader(name.upper(), 'yahoo', start, end)
except:
return info
info['open'] = floater(data.loc[data.index[-1]]['Open'])
info['close'] = floater(data.loc[data.index[-1]]['Adj Close'])
info['vol'] = floater(data.loc[data.index[-1]]['Volume'])
info['price_change'] = info['open'] - floater(data.loc[data.index[-2]]['Open'])
info['percent_change'] = np.divide(info['price_change'], info['open'])
return info
```
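`datefloor` truncates a datetime to midnight so that daily market data aligns on date boundaries. A quick stdlib check (restating the definition from above so the snippet is self-contained):

```python
import datetime as dt

def datefloor(date):
    # reset a datetime to 00:00:00 of the same day
    return dt.datetime(year=date.year, month=date.month, day=date.day)

d = datefloor(dt.datetime(2017, 5, 1, 13, 45, 30))
print(d)  # -> 2017-05-01 00:00:00
```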
## Tweet topics and sentiments as market predictors
Define a stock symbol and company name. Define a start and end date. For each day, get up to n tweets containing the company name (via twitter_witch), along with financial info for that day (via finance_wizard).
```
def data_getter(d, keyword, symbol, n_tweets, model=None):
search_string = '"' + keyword + '"%20since%3A' + date2str(d) + '%20until%3A' + date2str(d + dt.timedelta(1))
if model == None:
twitter_data = twitter_witch(search_string, n_tweets=n_tweets, use_advanced=True)
else:
twitter_data = twitter_witch(search_string, n_tweets=n_tweets, use_advanced=True, model=model)
finance_data = finance_wizard(symbol, d)
print('.', end='')
return [twitter_data, finance_data]
def date2str(date):
return date.strftime('%Y-%m-%d')
def get_tweets_and_stocks(symbol='AAPL', keyword='apple', start=None, end=None, n_tweets=100):
if end == None:
end = dt.datetime.today()
if start == None:
start = end - dt.timedelta(30)
def daterange(start, end):
start = datefloor(start)
end = datefloor(end)
dates = [start]
d = start
while d < end:
d = d + dt.timedelta(1)
dates.append(d)
return dates
if start >= end:
return None
dates = daterange(start, end)
data1 = data_getter(dates[0], keyword, symbol, n_tweets)
data2 = list(map(lambda x: data_getter(x, keyword, symbol, n_tweets, model=data1[0]['model']), dates))
print('done')
twitter_data = [data1[0]]
financial_data = [data1[1]]
for d in np.arange(len(data2)):
twitter_data.append(data2[d][0])
financial_data.append(data2[d][1])
return {'tweets': twitter_data, 'stocks': financial_data}
today = dt.datetime.today()
lastmonth = today - dt.timedelta(100)
info = get_tweets_and_stocks(symbol='SPY', keyword='financial', n_tweets=1000, start=lastmonth, end=today)
#compile useful info (by day)
average_sentiments = list(map(lambda x: np.mean(x['sentiments']), info['tweets']))
average_topics = np.vstack(list(map(lambda x: np.mean(x['topicvecs'], axis=0), info['tweets'])))
closing_prices = list(map(lambda x: x['close'], info['stocks']))
hyp.plot(average_topics, 'o', group=average_sentiments, palette='coolwarm');
h = sns.jointplot(x=np.array(average_sentiments), y=np.array(closing_prices), kind='reg');
h.set_axis_labels(xlabel='Sentiment', ylabel='Adjusted closing price (' + info['stocks'][0]['currency'] + ')');
```
# xarray use case: Neural network training
**tl;dr**
1. This notebook is an example of reading from a climate model netCDF file to train a neural network. Neural networks (for use in parameterization research) require random columns of several stacked variables at a time.
2. Experiments in this notebook show:
1. Reading from raw climate model output files is super slow (1s per batch... need speeds on the order of ms)
2. open_mfdataset is half as fast as opening the same dataset with open_dataset
3. Pure h5py is much faster than reading the same dataset using xarray (even using the h5 backend)
3. Currently, I revert to preformatting the dataset (flatten time, lat, lon). This gets the reading speed down to milliseconds per batch.
**Conclusions**
Reading straight from the raw netCDF files (with all dimensions intact) is handy and might be necessary for later applications (using continuous time slices or lat-lon regions for RNNs or CNNs).
However, at the moment this is many orders of magnitude too slow. Preprocessing seems required.
What would be a good way of speeding this up without too extensive post processing?
```
import xarray as xr
import numpy as np
xr.__version__
```
## Load an example dataset
I uploaded a sample dataset here: http://doi.org/10.5281/zenodo.2559313
The files are around 1GB large. Let's download it.
NOTE: I have all my data on an SSD
```
# Modify this path!
DATADIR = '/local/S.Rasp/tmp/'
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_1.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_2.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_concat.nc
!ls -lh $DATADIR/sample_SPCAM*
```
The files are typical climate model output files. `sample_SPCAM_1.nc` and `sample_SPCAM_2.nc` are two contiguous output files. `sample_SPCAM_concat.nc` is the concatenated version of the two files.
```
%%time
ds = xr.open_mfdataset(DATADIR + 'sample_SPCAM_1.nc')
ds
```
## Random columns for machine learning parameterizations
For the work on ML parameterizations that a few of us are doing now, we would like to work one column at a time. One simple example would be predicting the temperature and humidity tendencies (TPHYSTND and PHQ) from the temperature and humidity profiles (TAP and QAP).
This means we would like to give the neural network a stacked vector containing the inputs (2 x 30 levels) and ask it to predict the outputs (also 2 x 30 levels).
In NN training, we usually train on a batch of data at a time. Batches typically have a few hundred samples (columns in our case). It is really important that the samples in a batch are not correlated but rather represent a random sample of the entire dataset.
To achieve this we will write a data generator that loads the batches by randomly selecting along the time, lat and lon dimensions.
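The generator below draws random flat indices and converts them back into (time, lat, lon) triples with `np.unravel_index`; a minimal illustration of that conversion:

```python
import numpy as np

# 2 times x 3 lats x 4 lons = 24 columns, each addressed by one flat index
t, la, lo = np.unravel_index([0, 5, 13], (2, 3, 4))
print(t, la, lo)  # flat 5 -> (0, 1, 1), flat 13 -> (1, 0, 1)
```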
```
class DataGenerator(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, fn_or_ds, batch_size=128, input_vars=['TAP', 'QAP'], output_vars=['TPHYSTND', 'PHQ'],
shuffle=True, engine='netcdf4'):
self.ds = xr.open_mfdataset(fn_or_ds, engine=engine) if type(fn_or_ds) is str else fn_or_ds
self.batch_size = batch_size
self.input_vars = input_vars
self.output_vars = output_vars
self.ntime, self.nlat, self.nlon = self.ds.time.size, self.ds.lat.size, self.ds.lon.size
self.ntot = self.ntime * self.nlat * self.nlon
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
def __getitem__(self, index):
time_indices, lat_indices, lon_indices = np.unravel_index(
self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
)
X, Y = [], []
for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
X.append(
np.concatenate(
[self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.input_vars]
)
)
Y.append(
np.concatenate(
[self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.output_vars]
)
)
return np.array(X), np.array(Y)
```
### Multi-file dataset
Let's start by using the split dataset `sample_SPCAM_1.nc` and `sample_SPCAM_2.nc`.
```
gen = DataGenerator(DATADIR + 'sample_SPCAM_[1-2].nc')
# This is how we get one batch of inputs and corresponding outputs
x, y = gen[0]
x.shape, y.shape
# A little test function to check the timing.
def test(g, n):
for i in range(n):
x, y = g[i]
%%time
test(gen, 10)
# does shuffling make a big difference
gen = DataGenerator(DATADIR + 'sample_SPCAM_[1-2].nc', shuffle=True)
%time test(gen, 10)
```
So it takes more than one second to read one batch. This is way too slow to train a neural network in a reasonable amount of time. Shuffling doesn't seem to be a huge problem, but even without shuffling I am probably accessing the data in a different order than saved on disc.
Let's check what actually takes that long.
```
%load_ext line_profiler
%lprun -f gen.__getitem__ test(gen, 10)
```
Output:
```
Timer unit: 1e-06 s
Total time: 24.5229 s
File: <ipython-input-57-78b9d254df3b>
Function: __getitem__ at line 18
Line # Hits Time Per Hit % Time Line Contents
==============================================================
18 def __getitem__(self, index):
19 10 17.0 1.7 0.0 time_indices, lat_indices, lon_indices = np.unravel_index(
20 10 267.0 26.7 0.0 self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
21 )
22
23 10 10.0 1.0 0.0 X, Y = [], []
24 1290 4642.0 3.6 0.0 for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
25 1280 1399.0 1.1 0.0 X.append(
26 1280 1721.0 1.3 0.0 np.concatenate(
27 1280 12256070.0 9575.1 50.0 [self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.input_vars]
28 )
29 )
30 1280 2393.0 1.9 0.0 Y.append(
31 1280 1750.0 1.4 0.0 np.concatenate(
32 1280 12253415.0 9573.0 50.0 [self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.output_vars]
33 )
34 )
35
36 10 1218.0 121.8 0.0 return np.array(X), np.array(Y)
```
### Using the concatenated dataset
Let's see whether it makes a difference to use the pre-concatenated dataset.
```
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
%time test(gen, 10)
ds = xr.open_mfdataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
%time test(gen, 10)
```
So yes, it approximately halves the time but only if the single dataset is NOT opened with `open_mfdataset`.
### With h5py engine
Let's see whether using the h5py backend makes a difference
```
import h5netcdf
ds.close()
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc', engine='h5netcdf')
gen = DataGenerator(ds)
%%time
test(gen, 10)
```
Doesn't seem to speed it up
```
ds.close()
```
### Using plain h5py
Let's write a version of the data generator that uses plain h5py for data loading.
```
import h5py
class DataGeneratorH5(object):
def __init__(self, fn, batch_size=128, input_vars=['TAP', 'QAP'], output_vars=['TPHYSTND', 'PHQ'], shuffle=True):
self.ds = xr.open_dataset(fn)
self.batch_size = batch_size
self.input_vars = input_vars
self.output_vars = output_vars
self.ntime, self.nlat, self.nlon = self.ds.time.size, self.ds.lat.size, self.ds.lon.size
self.ntot = self.ntime * self.nlat * self.nlon
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
# Close xarray dataset and open h5py object
self.ds.close()
self.ds = h5py.File(fn, 'r')
def __getitem__(self, index):
time_indices, lat_indices, lon_indices = np.unravel_index(
self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
)
X, Y = [], []
for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
X.append(
np.concatenate(
[self.ds[v][itime, :, ilat, ilon] for v in self.input_vars]
)
)
Y.append(
np.concatenate(
[self.ds[v][itime, :, ilat, ilon] for v in self.output_vars]
)
)
return np.array(X), np.array(Y)
gen = DataGeneratorH5(f'{DATADIR}sample_SPCAM_concat.nc')
%%time
test(gen, 10)
gen.ds.close()
```
So this is significantly faster than xarray.
## Use in a simple neural network
How would we actually use this data generator for network training...
Note that this neural network will not actually learn much because we didn't normalize the input data. But we only care about computational performance here, right?
```
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import Sequential
tf.keras.__version__
model = Sequential([
Dense(128, input_shape=(60,), activation='relu'),
Dense(60),
])
model.summary()
model.compile('adam', 'mse')
# Load the xarray version using the concatenated dataset
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
model.fit_generator(iter(gen), steps_per_epoch=gen.n_batches)
```
So as you can see, it would take around 1 hour to go through one epoch (i.e. the entire dataset once). This is crazy slow since we only used 2 days of data. The full dataset contains a year of data...
## Pre-processing the dataset
To solve this issue, I have resorted to prestacking the data, preshuffling it and saving it to disk in a convenient format.
These files contain the exactly same information for the input (features) and output (targets) variables required.
The files only have two dimensions: sample, which is the shuffled, flattened time, lat and lon dimensions and lev which is the stacked vertical coordinate.
The preprocessing for these two files only takes a few seconds but for an entire year of data, the preprocessing alone can take around an hour.
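A minimal NumPy sketch of the flattening step in that preprocessing (hypothetical sizes and variable names): the (time, lat, lon) axes are collapsed into a single sample axis, and the input variables are stacked along the level axis:

```python
import numpy as np

ntime, nlev, nlat, nlon = 2, 30, 3, 4
TAP = np.random.rand(ntime, nlev, nlat, nlon)   # toy stand-ins for the
QAP = np.random.rand(ntime, nlev, nlat, nlon)   # climate-model variables

def flatten(var):
    # move lev last, merge (time, lat, lon) into a single sample axis
    return var.transpose(0, 2, 3, 1).reshape(-1, nlev)

features = np.concatenate([flatten(TAP), flatten(QAP)], axis=1)
print(features.shape)  # -> (24, 60): ntime*nlat*nlon samples, 2*nlev levels
```

Shuffling then amounts to permuting rows of this 2-D array once, after which batches are contiguous slices.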
```
!wget -P $DATADIR https://zenodo.org/record/2559313/files/preproc_features.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/preproc_targets.nc
!ls -lh $DATADIR/preproc*
ds = xr.open_dataset(f'{DATADIR}preproc_features.nc')
ds
# Write a new data generator
class DataGeneratorPreproc(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, feature_fn, target_fn, batch_size=128, shuffle=True, engine='netcdf4'):
self.feature_ds = xr.open_dataset(feature_fn, engine=engine)
self.target_ds = xr.open_dataset(target_fn, engine=engine)
self.batch_size = batch_size
self.ntot = self.feature_ds.sample.size
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
def __getitem__(self, index):
batch_indices = self.indices[index*self.batch_size:(index+1)*self.batch_size]
X = self.feature_ds.features.isel(sample=batch_indices)
Y = self.target_ds.targets.isel(sample=batch_indices)
return X, Y
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc')
x, y = gen[0]
x.shape, y.shape
%%time
test(gen, 10)
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc', shuffle=False)
%%time
test(gen, 10)
gen.feature_ds.close(); gen.target_ds.close()
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc', engine='h5netcdf')
%%time
test(gen, 10)
```
So these are the sort of times that are required for training a neural network.
### Pure h5py version
```
class DataGeneratorPreprocH5(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, feature_fn, target_fn, batch_size=128):
self.feature_ds = xr.open_dataset(feature_fn)
self.target_ds = xr.open_dataset(target_fn)
self.batch_size = batch_size
self.ntot = self.feature_ds.sample.size
self.n_batches = self.ntot // batch_size
# Close xarray dataset and open h5py object
self.feature_ds.close()
self.feature_ds = h5py.File(feature_fn, 'r')
self.target_ds.close()
self.target_ds = h5py.File(target_fn, 'r')
def __getitem__(self, index):
X = self.feature_ds['features'][index*self.batch_size:(index+1)*self.batch_size, :]
Y = self.target_ds['targets'][index*self.batch_size:(index+1)*self.batch_size, :]
return X, Y
gen.feature_ds.close(); gen.target_ds.close()
gen = DataGeneratorPreprocH5(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc')
%%time
test(gen, 10)
```
So again, the pure h5py version is an order of magnitude faster than the xarray version.
## End
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
import numpy as np
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
    s = 1/(1 + math.exp(-x))  # the instructions ask for math.exp here; np.exp comes next
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x)  # you will see this give an error when you run it, because x is a vector (a list)
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
    s = 1/(1 + np.exp(-x))  # np.exp works element-wise on scalars and arrays alike
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
    s = sigmoid(x)  # reuse the sigmoid implemented above
    ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
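A quick shape check makes this concrete (using the matrix from equation (3) above):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])                       # shape (2, 3)
x_norm = np.linalg.norm(x, axis=1, keepdims=True)  # shape (2, 1)
normalized = x / x_norm  # broadcasting "stretches" x_norm across the 3 columns
print(x_norm.shape, normalized.shape)  # → (2, 1) (2, 3)
```

After the division, every row of `normalized` has unit length, even though `x` and `x_norm` have different shapes.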
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims= True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
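The distinction is easy to verify on a small example:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(np.multiply(A, B))  # element-wise: [[ 5 12] [21 32]]
print(A * B)              # same as np.multiply
print(np.dot(A, B))       # matrix product: [[19 22] [43 50]]
```

Mixing up these two operations is a very common source of shape bugs, so it's worth checking which one you mean whenever you write `*`.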
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
    loss = np.dot(y - yhat, y - yhat)  # sum of squared differences
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
# Section 1

          \ /
     ---> O ---> -->
          / \

Each neuron/node is a function with simple (but potentially nonlinear) behavior,
e.g. the thresholding (ReLU) function F(x) = { 0 : x <= 0 ; x : x > 0 }.
Neural networks can approximate complicated functions from modular components.
```
from __future__ import print_function
import torch; print(torch.__version__)
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim #set of optimizers
import torch.utils.data as utils #inputs/outputs
import matplotlib.pyplot as plt
class Net(nn.Module):
    def __init__(self, breadth=500, depth=2):
        super(Net, self).__init__()
        # A stack of affine layers (y = Wx + b): 1 -> breadth -> ... -> breadth -> 1
        self.breadth = breadth
        self.depth = depth
        self.fc1 = nn.Linear(1, self.breadth)
        for dep in range(2, self.depth):
            setattr(self, 'fc%d' % dep, nn.Linear(self.breadth, self.breadth))
        setattr(self, 'fc%d' % self.depth, nn.Linear(self.breadth, 1))
    def forward(self, x):
        # This "runs" the neural network. ReLU is applied after every layer
        # except the last, so the output can also take negative values.
        for dep in range(1, self.depth + 1):
            fc = getattr(self, 'fc%d' % dep)
            x = fc(x)
            if dep < self.depth:
                x = F.relu(x)
        return x
net = Net(breadth=300, depth=3)
print(net)
inputs = Variable(torch.arange(0,10,0.05))
#true_vals = torch.mul(inputs, inputs)
true_vals = torch.sin(inputs*inputs)
plt.plot(list(inputs.data), list(true_vals.data))
plt.show()
net.zero_grad()
# Do before new gradients to avoid depending on old data
outputs = net(Variable(torch.Tensor([0])))
outputs.backward(torch.randn(1)) # use random gradients to break symmetry
learning_rate = 1
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
from tqdm import trange # Used to provide progress bar
# create your optimizer
optimizer = optim.Adam(net.parameters())
criterion = nn.MSELoss()
num_epochs = 100
t = trange(num_epochs)
reshaped_inputs = inputs.view(-1, 1)      # structure with each input in its own row
reshaped_outputs = true_vals.view(-1, 1)  # outputs and true vals must match in dimension -- a common mistake
for epoch in t:  # loop over the dataset multiple times
    # forward + backward + optimize
    outputs = net(reshaped_inputs)
    error = reshaped_outputs - outputs
    loss = (error ** 2).mean()  # equivalent to criterion(outputs, reshaped_outputs)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()  # zero the parameter gradients before the next iteration
    t.set_description('ML (loss=%g)' % loss.item())  # update loss information
print('Finished Training')
t = trange(10)
loss = []
breadth = []
reshaped_inputs = inputs.view(-1, 1)
reshaped_outputs = true_vals.view(-1, 1)
for i in t:
    breadth_num = 100*i + 50
    breadth.append(breadth_num)
    net = Net(breadth=breadth_num)
    # A fresh optimizer is needed for each new network's parameters
    optimizer = optim.Adam(net.parameters())
    num_epochs = 100
    for epoch in range(num_epochs):
        outputs = net(reshaped_inputs)
        loss_current = ((reshaped_outputs - outputs) ** 2).mean()
        loss_current.backward()
        optimizer.step()
        optimizer.zero_grad()
    loss.append(loss_current.item())  # record the final loss for this breadth
    t.set_description('ML (loss=%g)' % loss_current.item())
plt.plot(breadth, loss, 'g')
plt.show()
t = trange(10)
loss = []
depth = []
reshaped_inputs = inputs.view(-1, 1)
reshaped_outputs = true_vals.view(-1, 1)
for i in t:
    depth_num = i + 2
    depth.append(depth_num)
    net = Net(depth=depth_num)
    # A fresh optimizer is needed for each new network's parameters
    optimizer = optim.Adam(net.parameters())
    num_epochs = 100
    for epoch in range(num_epochs):
        outputs = net(reshaped_inputs)
        loss_current = ((reshaped_outputs - outputs) ** 2).mean()
        loss_current.backward()
        optimizer.step()
        optimizer.zero_grad()
    loss.append(loss_current.item())  # record the final loss for this depth
    t.set_description('ML (loss=%g)' % loss_current.item())
plt.plot(depth, loss, 'g')
plt.show()
t = trange(20)
for i in t:
num_pts = 10*(i+1)
inputs = Variable(torch.arange(0,10,1.0/(i+1)))
# inputs = Variable(torch.arange(0,10,0.05))
#true_vals = torch.mul(inputs, inputs)
true_vals = torch.sin(inputs*inputs)
plt.plot(list(inputs.data), list(true_vals.data))
plt.show()
```
# Chapter 5 - Image Filtering
In this chapter, we're introducing the concept of Image Filtering.
Filters can be applied to 2D image data for various applications. We can broadly differentiate between low-pass filters, which smooth images
(retain low-frequency components), and high-pass filters, which retain contours / edges (i.e. high frequencies).
## Low-pass filters
Low pass filters are typically applied to reduce noise in images. Noise can be seen as random artifacts in an image.
For example, salt & pepper noise describes the random occurrence of black / white pixels in the image, while
gaussian noise is a random increase/decrease in each pixel’s color value, following a gaussian distribution.

*Figure 1: Different noise types. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
Low-pass filters assume that the pixel variation in the image should be lower than perceived, i.e. pixels should have a
color value close to their neighbours'. Low-pass filters therefore replace each pixel's value with an average of the values in
the pixel's neighbourhood. The neighbours can either be weighted based on their distance to the center pixel, or equally.
Moving a filter over all possible positions in an image is called a *convolution*; the filter is called a *Kernel* or *Mask*
and is denoted *H*.
When convolving a filter over an image, we flip the kernel by 180° and then, at each position, compute the weighted
average between the filter values and the underlying pixels. If we do not flip the kernel, we speak of a cross-correlation instead of a convolution.
For symmetric kernels such as the Gaussian filter, a convolution and a cross-correlation will of course produce
the same results.

*Figure 2: Convolution vs. Cross-corelation formula. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
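The relationship between the two operations can be sketched in a few lines of Python (a naive, zero-padded reference implementation for illustration; in practice one would use `scipy.signal.convolve2d` / `correlate2d`):

```python
import numpy as np

def cross_correlate2d(img, H):
    """Slide the kernel H over img (zero-padded) and take weighted sums."""
    kh, kw = H.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))  # zero-padding at the border
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * H)
    return out

def convolve2d(img, H):
    """A convolution is a cross-correlation with the kernel flipped by 180°."""
    return cross_correlate2d(img, np.flip(H))

img = np.zeros((5, 5)); img[2, 2] = 1.0  # a single white pixel on black
box = np.full((3, 3), 1/9)               # symmetric box filter
# For a symmetric kernel the flip changes nothing, so both agree:
assert np.allclose(convolve2d(img, box), cross_correlate2d(img, box))
```

For an asymmetric kernel (e.g. a Sobel derivative filter) the two operations give different results, which is exactly why the flip matters.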
In the following example, a smoothing averaging filter is applied to a 1D signal, bringing the neighbouring values significantly closer together and reducing outliers.

*Figure 3: Smoothing a 1D signal with an averaging filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
The same principle can be applied to 2D data like images. In the following example, a 3×3 filter with uniform weights 1/9 averages a neighbourhood of 9 pixels, overwriting the central pixel with the equally weighted average of all 9 pixels.
Such a filter is called a "box" filter.

*Figure 4: Illustration of a box filter over a black image with a white square. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
The output is - of course - a blurred version of the very same image.

*Figure 5: Output of a smoothed image using a box-filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
While the box filter smooths an image quite well, it produces horizontal and vertical artifacts, since the filter itself has
sharp edges. These artifacts are also called "aliasing" and are caused by the high-frequency components of the box filter.
A better way to smooth an image is with a Gaussian filter, a filter implementing the 2D Gaussian function.
For good results, take a sufficiently large Gaussian filter with smooth edges, i.e. a standard deviation small enough relative
to the filter size that the outermost values of the filter are close to 0 while the filter stays smooth.

*Figure 6: Gaussian filter visualization. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*

*Figure 7: Gaussian Filter comparison. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
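A 2D Gaussian kernel can be built directly from the formula (a small sketch; in practice `scipy.ndimage.gaussian_filter` or OpenCV's `GaussianBlur` do this internally):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """size x size kernel sampling exp(-(x^2+y^2)/(2*sigma^2)), normalized to sum 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()  # weights must sum to 1 to preserve brightness

G = gaussian_kernel(5, sigma=1.0)
print(G.sum().round(6))      # → 1.0
print(G[2, 2] == G.max())    # the center pixel gets the largest weight → True
```

Normalizing the kernel to sum 1 is what keeps the overall image brightness unchanged after filtering.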
When we apply any filter on an image, the question remains how to deal with the image boundary. Since - in most cases - we don't
want the resulting image to be smaller than the input image, we have to simulate additional boundary pixels.
There are different strategies with varying results, like zero-padding (surrounding black pixels), wrap-around (repeating the image),
copy-edge (always use the outermost pixel values) or reflect across edge (mirroring around the edge, gives the best results).
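These padding strategies map directly onto `numpy.pad` modes, which makes their differences easy to inspect on a small row of pixels (a sketch, not tied to any particular imaging library):

```python
import numpy as np

row = np.array([1, 2, 3, 4])

zero   = np.pad(row, 1, mode="constant")  # zero-padding: surround with black pixels
wrap   = np.pad(row, 1, mode="wrap")      # wrap-around: repeat the image
edge   = np.pad(row, 1, mode="edge")      # copy-edge: reuse the outermost pixel
mirror = np.pad(row, 1, mode="reflect")   # reflect across edge: mirror around the edge
```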
### Non-linear Low-pass filters
Gaussian filters or box filters do not remove salt & pepper noise well, since they are strongly influenced by outliers.
That's where **median filters** come into play. A median filter cannot be expressed as a classical convolution like a Gaussian
filter; instead, it takes the median pixel value of the neighbourhood. The median filter is therefore much less influenced
by strong noise, while it also preserves edges much better than linear smoothing filters.

*Figure 8: Median Filter removing Salt & Pepper Noise. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
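A quick way to see this difference is to corrupt a constant image with a few extreme outliers and compare a median filter against a Gaussian filter (a sketch assuming `scipy` is available; the image size and noise values are made up):

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

rng = np.random.default_rng(0)
image = np.full((32, 32), 100.0)
# sprinkle a few "salt" (bright) and "pepper" (dark) outlier pixels
image[rng.integers(0, 32, 10), rng.integers(0, 32, 10)] = 255.0
image[rng.integers(0, 32, 10), rng.integers(0, 32, 10)] = 0.0

med = median_filter(image, size=3)
gau = gaussian_filter(image, sigma=1.0)

# the median recovers the constant background far better than the
# Gaussian filter, which merely smears the outliers into their neighbourhoods
median_err = np.abs(med - 100.0).mean()
gaussian_err = np.abs(gau - 100.0).mean()
```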
Another such filter is the **bilateral filter**. It preserves edges even better by adapting the kernel
locally to the intensity profile of the underlying image. It only averages pixels of similar brightness: pixels whose
brightness difference to the center pixel falls below a threshold.

*Figure 9: Bilateral filter with mask. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
The extent to which neighbouring pixels have to be similar to the central pixel is controlled via a factor *sigma*.
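The "only average similar pixels" idea fits in a few lines. The following 1-D sketch (all parameter values are illustrative, not from the lecture) weights each neighbour by both spatial distance and brightness difference, and consequently leaves a step edge essentially intact:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=1.0, sigma_r=0.2, radius=3):
    """Average only neighbours whose value is close to the centre pixel."""
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))      # distance weight
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        # brightness weight: dissimilar neighbours get ~zero influence
        rangew = np.exp(-(signal[idx] - signal[i])**2 / (2 * sigma_r**2))
        w = spatial * rangew
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# a sharp step edge survives the filtering almost unchanged
step = np.concatenate([np.zeros(10), np.ones(10)])
smoothed = bilateral_1d(step)
```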
## High-pass filters
High-pass filters are mainly used for edge detection since they react to sharp changes in pixel intensity. Edges are sharp changes in
an image function's intensity values. Applying the first derivative to an image would leave us with an image where sharp edges
are highlighted.

*Figure 10: Image derivative detecting edges. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
We therefore construct a filter that acts like a derivative by approximating the image derivative
*dI(x,y) / dx ~ I(x+1, y) - I(x,y)* and *dI(x,y) / dy ~ I(x, y+1) - I(x,y)*.
So we essentially compare each pixel to its direct neighbour and take the difference as an output.

*Figure 11: Partial derivative filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
More advanced filters are larger in size and therefore produce fewer artifacts. The Sobel filter is an example of a larger
derivative filter:

*Figure 12: Prewitt & Sobel filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
The direction of the edge can be determined by calculating the pixel region's gradient, i.e. the direction of fastest intensity change.
The gradient direction is given by *angle = arctan2(dI/dy, dI/dx)*, the two-dimensional arcus tangens of the image derivative
values. The edge strength is given by the gradient's magnitude: *strength = sqrt((dI/dx)^2 + (dI/dy)^2)*.
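A minimal worked example of these formulas with forward differences (pure NumPy, on a synthetic image with a single vertical edge):

```python
import numpy as np

# synthetic image: dark left half, bright right half -> one vertical edge
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# forward-difference approximations of the partial derivatives
dIdx = np.zeros_like(image)
dIdy = np.zeros_like(image)
dIdx[:, :-1] = image[:, 1:] - image[:, :-1]   # I(x+1, y) - I(x, y)
dIdy[:-1, :] = image[1:, :] - image[:-1, :]   # I(x, y+1) - I(x, y)

strength = np.sqrt(dIdx**2 + dIdy**2)
direction = np.arctan2(dIdy, dIdx)

# the edge shows up as a column of strength 1 at x = 3,
# with gradient direction 0 (pointing along +x, towards the bright side)
```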
A big problem for high-pass filters is Gaussian noise: there will always be steep differences between neighbouring pixels, caused
by the Gaussian noise produced by the image sensor. It is therefore best practice to lightly smooth the image with a
Gaussian filter before applying a high-pass filter.
In the following graphic, we see the original image I, the kernel H, the resulting image I*H when H is applied, as well as the derived
image d(I*H)/dx.

*Figure 13: Process steps for edge detection. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
A more efficient approach is to include the smoothing in the filter itself, giving us the filter dH/dx as seen in
the following image:

*Figure 13: Gaussian smoothing within a derivative filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
This is called a "derivative of Gaussian" filter: it combines a normal Gaussian filter with a high-pass 2x1 derivative filter.

*Figure 14: Difference of Gaussians filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
Since we deal with two partial derivatives, we'd need to filter the image twice. A solution to this is given by the
"Laplacian of Gaussian" filter, which finds the derivative in all directions simultaneously. It can be approximated by
subtracting a smaller-radius Gaussian filter from a larger-radius Gaussian filter (a "Difference of Gaussians").

*Figure 15: Laplacian of Gaussian. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
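The "subtract a narrow Gaussian from a wide one" construction can be verified on a unit impulse: the resulting kernel sums to (approximately) zero, so it ignores constant regions but responds at an intensity step (a sketch assuming `scipy`; the sigma values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# build a difference-of-gaussians kernel by filtering a unit impulse
impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0
dog = gaussian_filter(impulse, sigma=2.0) - gaussian_filter(impulse, sigma=1.0)

# the kernel sums to ~0 (no response on constant regions),
# but it responds strongly at an intensity step
step = np.zeros((21, 21))
step[:, 10:] = 1.0
response = gaussian_filter(step, sigma=2.0) - gaussian_filter(step, sigma=1.0)
```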
## Canny edge detection
Canny edge detection uses partial Gaussian derivative filters to find all edges in an image. It then sets all pixel values to 0 that
fall below a given threshold. Finally, Canny keeps only the local maximum of each edge along the gradient direction (non-maximum
suppression), i.e. it only keeps the peak of a wide edge response.

*Figure 16: Canny edge detection. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)*
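The thresholding and peak-taking steps are easy to sketch in 1-D (the gradient values below are made up for illustration):

```python
import numpy as np

# 1-D sketch of the last two Canny steps: thresholding and
# non-maximum suppression along the gradient
gradient = np.array([0.0, 0.1, 0.4, 0.9, 0.5, 0.1, 0.0, 0.05])

threshold = 0.2
g = np.where(gradient >= threshold, gradient, 0.0)   # suppress weak responses

# keep only local maxima, thinning the wide edge response to its peak
peak = np.zeros_like(g)
for i in range(1, len(g) - 1):
    if g[i] > 0 and g[i] >= g[i - 1] and g[i] >= g[i + 1]:
        peak[i] = g[i]
```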
# Overview summary
Let me quickly summarize the main differences between smoothing and derivative filters.
Smoothing filters always contain positive filter values that sum to 1 to preserve the overall brightness of constant regions. They are constructed to remove high-frequency components.
In contrast, derivative filters have two regions with opposite signs to get a high response in regions of high contrast. Their components sum to 0 so that they produce no response on regions of constant color. They are created to highlight high-frequency components, not to remove them.
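The sum-to-1 versus sum-to-0 property, and its consequence on constant regions, is easy to check numerically (pure NumPy/SciPy sketch):

```python
import numpy as np
from scipy.ndimage import convolve

box = np.ones((3, 3)) / 9.0                 # smoothing: values sum to 1
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])         # derivative: values sum to 0

flat = np.full((5, 5), 7.0)                 # a constant-brightness region
smoothed = convolve(flat, box)              # brightness preserved
derived = convolve(flat, sobel_x)           # no response at all
```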
```
from IPython.display import display, Markdown as md
display(md(f"# Social Media Monitoring"))
display(md(f"**Search for a Politician**"))
display(md(f"***The following table contains data on the politicians and their associated social accounts***"))
import os
from smm_wrapper import SMM
import qgrid
from ipywidgets import widgets, Output
from IPython.display import display, clear_output
smm = SMM()
politicians_df = smm.dv.politicians_df()
qgrid_widget = qgrid.show_grid(politicians_df)
qgrid_widget
```
Instructions
```
politician_id = 3
agg = 'month'  # aggregation granularity: 'day', 'week' or 'month'
smm.api.tweets_by(politician_id=politician_id, aggregate_by=agg)
display(md(f"**Select a politician**"))
def on_button_clicked(b):
global politician
# use the out widget so the output is overwritten when two or more
# searches are performed
with out:
try:
politician = smm.api.politician_search(politician_id=searchTerm.value)
clear_output()
display('Result found for #: {}. Name and last name: {} {}'.format(politician['politician_id'],
politician['firstname'],
politician['name']))
except:
clear_output()
display(md(f'The politician with id *"{searchTerm.value}"* was not found. Please enter a number between 1 and 2516'))
# create and display the button
searchTerm = widgets.Text()
button = widgets.Button(description="Search")
example = md("Please enter a politician id from the table above. Example: *2193 for Angela Merkel*")
display(example, searchTerm, button)
# the output widget is used to remove the output after the search field
out = Output()
display(out)
# set the event
button.on_click(on_button_clicked)
# trigger the event with the default value
on_button_clicked(button)
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, RadioButtonGroup
output_notebook()
import pandas as pd
from datetime import datetime
def modify_tw(doc):
#getting the data from SMM api
def extracting_data(politician_id, agg):
filtered_tweets = smm.api.tweets_by(politician_id=politician_id, aggregate_by=agg)
filtered_replies = smm.api.reply_to(politician_id=politician_id, aggregate_by=agg)
filtered_posts = smm.api.posts_by(politician_id=politician_id, aggregate_by=agg)
filtered_comments = smm.api.comments_by(politician_id=politician_id, aggregate_by=agg)
filtered_tweets['labels'] = [datetime.strptime(item, '%Y-%m-%d') for item in filtered_tweets['labels']]
filtered_replies['labels'] = [datetime.strptime(item, '%Y-%m-%d') for item in filtered_replies['labels']]
filtered_posts['labels'] = [datetime.strptime(item, '%Y-%m-%d') for item in filtered_posts['labels']]
filtered_comments['labels'] = [datetime.strptime(item, '%Y-%m-%d') for item in filtered_comments['labels']]
source_tw = ColumnDataSource(data = dict(labels = filtered_tweets['labels'],values = filtered_tweets['values']))
source_re = ColumnDataSource(data = dict(labels=filtered_replies['labels'],values=filtered_replies['values']))
source_p = ColumnDataSource(data = dict(labels = filtered_posts['labels'],values = filtered_posts['values']))
source_co = ColumnDataSource(data = dict(labels=filtered_comments['labels'],values=filtered_comments['values']))
return(source_tw, source_re, source_p, source_co)
#getting the data for the plots
source_tw, source_re, source_p, source_co = extracting_data(politician['politician_id'], agg='month')
#creating plots
#twitter plot
t = figure(title="Aggregated tweets and replies by Politician ID", x_axis_type='datetime',
plot_height=400, plot_width=700,
background_fill_color='#efefef', x_axis_label='Date', y_axis_label='Tweets')
t.line('labels', 'values', source = source_tw, legend="number of tweets", line_width=2, line_color="blue")
t.line('labels', 'values', source = source_re, legend="number of replies", line_width=1, line_color="red")
t.legend.location = "top_left"
t.legend.click_policy="hide"
#facebook posts plot
p = figure(title="Aggregated facebook posts by Politician ID", x_axis_type='datetime',
plot_height=400, plot_width=700,
background_fill_color='#efefef', x_axis_label='Date', y_axis_label='Facebook posts')
p.line('labels', 'values', source = source_p, legend="number of posts", line_width=2, line_color="blue")
p.legend.location = "top_left"
p.legend.click_policy="hide"
#facebook comments plot
c = figure(title="Aggregated facebook comments by Politician ID", x_axis_type='datetime',
plot_height=400, plot_width=700,
background_fill_color='#efefef', x_axis_label='Date', y_axis_label='Facebook comments')
c.line('labels', 'values', source = source_co, legend="number of comments", line_width=2, line_color="green")
c.legend.location = "top_left"
c.legend.click_policy="hide"
#radio button handler
def callback(attr, old, new):
if new == 0:
agg = 'day'
elif new == 1:
agg = 'week'
else:
agg = 'month'
source_tw_up, source_re_up, source_p_up, source_co_up = extracting_data(politician['politician_id'], agg=agg)
source_tw.data, source_re.data, source_p.data, source_co.data = source_tw_up.data, source_re_up.data, source_p_up.data, source_co_up.data
#choosing the aggregation
radio_button_group = RadioButtonGroup(
labels=["Daily", "Weekly", "Monthly"], active=2)
radio_button_group.on_change('active', callback)
doc.add_root(column(radio_button_group, t, p, c))
show(modify_tw, notebook_url="http://10.6.13.139:8020")
```
<a href="https://colab.research.google.com/github/SepideHematian/my_course_projects/blob/main/602/Project1/Final_version_project1_group1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data602 - Project 1
**Group A/1** This project was done by the following people:
* Tahereh Hematian Pour Fard
* Kent Butler
* Leslie Li
* Colleen Boarman
# Import modules
```
import pandas as pd
import numpy as np
pd.options.mode.chained_assignment = None # default='warn'
```
# Load Data
---
```
# from google.colab import drive
# drive.mount('/content/gdrive')
# Load all data
df_a = pd.read_csv('AAPL.csv')
df_bc = pd.read_csv('BTC-USD.csv')
df_s = pd.read_csv('SPY.csv')
# Map data in tuples for convenience
# AAPL => df_a
# BTC => df_bc
# SPY => df_s
DATA_MAP = [('AAPL', df_a), ('BTC', df_bc), ('SPY', df_s)]
TICKER = 0
DF = 1
df_a.shape, df_bc.shape, df_s.shape
df_a.head()
for data in DATA_MAP:
data[DF].rename(columns={'Adj Close':'Adj_Close'}, inplace=True)
```
## Problem 1
---
a. Change the ‘Date’ column to pd.timestamp format. (10pt)
b. Output the median timestamp for each df.
**a. Re-type Date as a timestamp**
```
# Re-type the Date column
for data in DATA_MAP:
data[DF]['Date'] = pd.to_datetime(data[DF]['Date'])
```
**b. Output median dates per ticker**
```
# Output median timestamps
for data in DATA_MAP:
print (data[DF]['Date'].median())
```
**Observation**
The trading calendar for `BTC` is different than that of `AAPL` and `SPY`, which will result in slightly different time series data.
## Problem 2
----
**a.** Define a udf, called `daily_return`, as $\frac{x_t - x_{(t-1)}}{x_{(t-1)}}$
**b.** Apply this function to both “Open” and “Adj Close” for each df; name the new columns properly.
For each df:
* **c.** Output the average Open_daily_return
* **d.** Output the average Adj_Close_daily_return, weighted by `volume`
```
# Aggregate previous Open and Adj_Close columns for convenience
for data in DATA_MAP:
data[DF]['Prev_Open'] = data[DF]['Open'].shift(1)
data[DF]['Prev_Adj_Close'] = data[DF]['Adj_Close'].shift(1)
# Fill in missing values
for data in DATA_MAP:
data[DF].backfill(inplace=True)
```
**a. Define a udf to calculate daily return**
```
# Calculate percent change between the given fields
def daily_return (x, field, prevField):
prevVal = x[prevField]
return (x[field] - prevVal)/prevVal
```
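As a sanity check, this udf matches pandas' built-in `Series.pct_change`, which computes exactly (x_t - x_{t-1}) / x_{t-1}:

```python
import numpy as np
import pandas as pd

prices = pd.Series([100.0, 102.0, 99.0, 101.97])

builtin = prices.pct_change()                          # built-in daily return
manual = (prices - prices.shift(1)) / prices.shift(1)  # same formula by hand
```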
**b. Apply to “Open” and “Adj Close”**
```
# Calculate daily returns
for data in DATA_MAP:
data[DF]['Open_daily_return'] = data[DF].apply(daily_return, axis=1, args=('Open','Prev_Open'))
data[DF]['Adj_Close_daily_return'] = data[DF].apply(daily_return, axis=1, args=('Adj_Close','Prev_Adj_Close'))
# We expect 0 returns in the first row, since current Open/Close == Prev Open/Close due to backfill
df_s
```
**c. Average open Daily Return**
```
# Average over the given feature
def feature_average (df, avgField):
return df[avgField].mean()
# Print results
for data in DATA_MAP:
print (data[TICKER], ' avg Open return: ', feature_average(data[DF], 'Open_daily_return'))
```
**d. Average Daily Return weighted by Volume**
```
# Define a weighted average return
def weighted_average_return (df, returnField, weightField):
return (df[returnField] * df[weightField]).sum() / df[weightField].sum()
# Print results
for data in DATA_MAP:
print (data[TICKER], ' weighted avg return: ', weighted_average_return(data[DF], 'Adj_Close_daily_return', 'Volume'))
```
**Observation**
It appears that the volume of trade on the `AAPL` ticker makes up for a lower average daily return than `BTC`; and `SPY` appears to have the opposite situation - volume is low enough to bring down the weighted average.
## Problem 3
---
**a.** Concatenate the 3 df into a single df, vertically. Make sure you have a column `ticker` that tells you which ticker the observation is. (10pt)
### Output:
* **b.** By ticker, output the Min, Max, Mean of `Open_daily_return`
* **c.** Percentile rank `Open_daily_return` across the df; name this column properly.
* **d.** Within each quartile bucket of `Open_daily_return` (0 - 25%, 25% - 50%, 50% ~ 75%, and > 75%), count the number of observations of each ticker
**a. Concatenate into a single df**
```
# Mark each df with its ticker of origin before merging
for data in DATA_MAP:
data[DF]['ticker'] = [data[TICKER]] * len(data[DF].index)
# merge
mergeDf = pd.concat ([df[DF] for df in DATA_MAP])
df_a.shape, df_bc.shape, df_s.shape
mergeDf
```
**b. Output Min, Max, Mean by ticker**
```
# Accumulate statistics on Open_daily_return, by ticker
mergeDf.groupby('ticker')['Open_daily_return'].agg(['min', 'max', 'mean'])
```
**c. Percentile Rank Open_daily_return across the df**
```
# Compute quartiles of Open_daily_return across the entire dataset
mergeDf['Open_daily_return_Quartiles'] = pd.qcut(mergeDf['Open_daily_return'], q=[0, .25, .5, .75, 1], labels=['1st Qtile', '2nd Qtile', '3rd Qtile', '4th Qtile'])
```
**d. Number of observations of each ticker**
```
# Accumulate counts on Open_daily_return_Quartiles by ticker
mergeDf.groupby(['Open_daily_return_Quartiles','ticker'])['Open_daily_return'].agg(['count'])
```
**Observation**
As it is clear from the values of quartiles, the most volatile assets are `BTC` and `SPY` because their values have great variation between some of the quartiles - interestingly `BTC` peaks in Q1 and Q4, while `SPY` is exactly opposite, peaking in Q2 and Q3, and dropping drastically in Q1 and Q4. `AAPL` is almost uniform in a sense, as Q1-Q4 have similar values.
## Problem 4
---
Using the concatenated df from step 3, pivot the long table into wide (10pt)
* Indexed by date,
* Columned by ticker,
* Valued by Open
Output:
* Show the shape of data prior and after the pivot.
* Compute the average of `Open`,for each group by ticker
```
# Shape before
mergeDf.shape
# Create the Open pivot
openDf = mergeDf.pivot(index='Date', columns='ticker', values='Open')
# Shape after
openDf.shape
openDf
```
**Average Open by ticker**
```
# Compute average of each ticker
openDf.agg(['mean'])
```
## Problem 5
---
Using the 3 dfs from step 2:
* a. perform outer join of the 3 dfs on dates into another df, denoted as df_merge. (10pt)
* b. rename the column names properly
* c. melt df_merge, denoted as df_melt, indexed by date, ticker;
**Output**:
* d. Compute the average Open by ticker, and compare the numbers against the output from 4.
**a. Join 3 dataFrames**
```
# Revert steps from Problem 3 by removing the Ticker column
for data in DATA_MAP:
data[DF].drop(columns='ticker', inplace=True)
# Merge datasets
df_merge = pd.merge(df_a, df_s, how='outer', on='Date', suffixes=['_a','_s'])
df_merge = pd.merge(df_merge, df_bc, how='outer', on='Date', suffixes=['','_bc'])
```
**b. Rename columns**
```
# Ensure BTC columns are properly labelled
df_merge.rename(columns={'Open':'Open_bc','High':'High_bc','Low':'Low_bc','Close':'Close_bc','Adj Close':'Adj Close_bc','Volume':'Volume_bc','Prev_Open':'Prev_Open_bc','Prev_Adj_Close':'Prev_Adj_Close_bc','Open_daily_return':'Open_daily_return_bc','Adj_Close_daily_return':'Adj_Close_daily_return_bc','ticker':'ticker_bc'}, inplace=True)
```
**c. Melt df_merge and index by date & ticker**
```
# Given a dataFrame and list of suffixes and matching tickers,
# extract a dataFrame for each suffix/ticker pair
# example:
# df_splitBySuffix(myDf, ['_x','_y'], ['X-ticker', 'Y-ticker']])
#
def df_splitBySuffix(df, suffixes, tickers):
dfs = []
if suffixes is not None:
# filter out sub-dataframes
for idx,suffix in enumerate(suffixes):
cols = [col for col in df.columns if suffix in col]
cols.sort()
# Create name map for removing suffixes from col names
name_map = {}
for col in df.columns:
if col in cols:
# strip suffix
name_map[col] = col[0:-len(suffix)]
# Add Date columns which will not be suffixed
cols.append('Date')
# Split columns related to suffix out into separate df
this_df = df[cols]
# Rename columns for uniformity, for later concatenation
this_df.rename(columns=name_map, inplace=True)
# Add a ticker column
this_df['ticker'] = [tickers[idx]] * len(this_df.index)
dfs.append(this_df)
else:
dfs.append(df)
return dfs
# Define suffixes and tickers, matching by position
suffixes = ['_a','_bc','_s']
tickers = ['AAPL', 'BTC', 'SPY']
# Split our merged df out by suffix so we can melt columns
# using distinct ticker indices
dfs = df_splitBySuffix(df_merge, suffixes, tickers)
# Accumulate melted dfs
melted = []
for df in dfs:
melted.append(df.melt(id_vars=['Date','ticker']))
# Concatenate melted dfs
df_melt = pd.concat(melted)
# Inspect results
df_melt[df_melt['ticker'] == 'SPY']
```
**d. Compute average Open by ticker, and compare output**
```
# Compute average Open by Ticker
res = df_melt[df_melt['variable'] == 'Open'].groupby('ticker').mean()
# Rename this output column for display clarity
res.rename(columns={'value':'Problem5Result'}, inplace=True)
# Package original results for comparison
cols = openDf.columns
ores = openDf.mean()
# Convert from Series to DF with same index and columns
oresDf = pd.DataFrame(ores, index=cols, columns=['Problem4Result'])
# Compare results side-by-side
pd.merge(oresDf, res, how='inner', on='ticker')
```
## Problem 6
---
Using df_melt from 5., define a 7-day moving average of `Open_daily_return`, denoted as Open_daily_return_7day_MA (10pt)
Plot 3 time-series of `Open_daily_return_7day_MA` for each ticker
(hint, in Seaborn’s plotting functions, try the argument, ‘hue’)
(Alternatively, you can also use 3 separate dfs to perform the plot)
```
# Calculate 7-day moving average of Open_daily_return, across tickers
Open_daily_return_7day_MA = df_melt[df_melt['variable'] == 'Open_daily_return'].groupby(['ticker'], as_index = False).rolling(window=7).mean()
import matplotlib.pyplot as plt
tickers = ['AAPL', 'BTC', 'SPY']
# Plot 7-day moving average for each ticker
for ticker in tickers:
fig, ax = plt.subplots(figsize=(8,6))
ma7 = Open_daily_return_7day_MA[Open_daily_return_7day_MA['ticker'] == ticker]['value'].dropna()
x = range(len(ma7))
ax.plot(x, 100*ma7)
ax.set_xlabel('Day', fontsize=15)
ax.set_ylabel(f'7-D:MA %', fontsize=15, rotation=0, labelpad=25)
ax.set_title(f'7-day Moving Average of {ticker}', fontsize=15)
ax.legend([ticker])
plt.show()
tickers = ['AAPL', 'BTC', 'SPY']
legend = []
fig, ax = plt.subplots(figsize=(8,6))
# Plot 7-day moving average for each ticker in an aggregate view
for ticker in tickers:
ma7 = Open_daily_return_7day_MA[Open_daily_return_7day_MA['ticker'] == ticker]['value'].dropna()
x = range(len(ma7))
ax.plot(x, ma7)
ax.set_xlabel('Day', fontsize=15)
ax.set_ylabel(f'7-D:MA', fontsize=15, rotation=0, labelpad=20)
ax.set_title(f'7-day Moving Average of {ticker}', fontsize=15)
legend.append(ticker)
ax.legend(legend)
plt.show()
```
**Observation**
`BTC` appears to be the most volatile asset among the three tickers that we have analyzed, because even its average (7-day MA) is highly volatile. Also, it seems that `AAPL` is somewhat seasonal, with peaks happening every season.
## Problem 7
---
Like step 5 and perform an inner join (instead of outer join).
Please notice the df shape difference between this inner join output. (again, please rename the columns properly) (10pt)
**Output**:
The difference in the number of rows between 5 and 7
A short sentence describing your investigation on the root cause why the 3 dfs’ dates are not aligned.
```
# Perform merge in 2 steps due to limitation of the merge function
df_merge2 = pd.merge(df_a, df_s, how='inner', on='Date', suffixes=['_a','_s'])
df_merge2 = pd.merge(df_merge2, df_bc, how='inner', on='Date', suffixes=['_x','_bc'])
df_merge2
# Ensure BTC columns are properly named
df_merge2.rename(columns={'Open':'Open_bc','High':'High_bc','Low':'Low_bc','Close':'Close_bc','Adj Close':'Adj Close_bc','Volume':'Volume_bc','Prev_Open':'Prev_Open_bc','Prev_Adj Close':'Prev_Adj_Close_bc','Open_daily_return':'Open_daily_return_bc','Adj_Close_daily_return':'Adj_Close_daily_return_bc','ticker':'ticker_bc'}, inplace=True)
# Compare inner vs. outer joins - expecting larger number for outer join
print ('Inner join rows: ', df_merge2.index.size)
print ('Outer join rows: ', df_merge.index.size)
```
**Observations**:
* `AAPL` and `SPY` data provided only 252 rows each
* `BTC` provided 364 rows
Hence, inner joining on Date can provide at most 252 rows, assuming `AAPL` and
`SPY` reported data on the same days and `BTC` covers those dates.
Outer joining should provide 364 rows, since an outer join pulls all data from all datasets involved.
Note that the inner join results in a rename of all columns, which makes the data more difficult to work with.
## Problem 8
---
A typical ML practice is to standardize/normalize across all features (sometimes targets as well) (10pt)
Two common scaling practices are
* Min Max Scalar
* Standard Scalar
Write your own scalar UDFs, and apply them to the 3 renamed `Open_daily_return` columns from 7.; output the scaled features to new columns
```
# Define min/max scaling as a lambda (parentheses matter for operator precedence)
udf_min_max = lambda x: (x-x.min())/(x.max()-x.min())
# Define min/max scaling function
def min_max_scalar(x):
return (x-x.min(axis=0))/(x.max(axis=0)-x.min(axis=0))
# Extract the Open_daily_return columns into a new df
opendaily = df_merge2[['Open_daily_return_a', 'Open_daily_return_bc', 'Open_daily_return_s']]
# Rename columns for convenience
opendaily.columns = ['AAPL', 'BTC', 'SPY']
# Apply the min/max scaler to the Open_daily_returns
opendaily_minmax = opendaily.apply(udf_min_max)
opendaily_minmax
# Define standard scaler as a lambda
udf_standard = lambda x: (x-x.mean())/x.std()
# Apply the standard scaler to the Open_daily_returns
opendaily_standard = opendaily.apply(udf_standard)
opendaily_standard
```
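One subtlety worth checking: without parentheses, operator precedence turns `x - x.min() / x.max() - x.min()` into something else entirely. A quick numeric sanity check of the two scalers on a toy array:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 10.0])

minmax = (x - x.min()) / (x.max() - x.min())   # maps the data onto [0, 1]
standard = (x - x.mean()) / x.std()            # zero mean, unit standard deviation
```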
## Problem 9
---
Another ML practice to slice data row-wise into training and testing data. For now, we are going to ignore the fact that the data is time-series, and just randomly shuffle the data and split them into training and test sets. (20pt)
**Part I:**
Write a function that inputs a dataframe, a parameter that determines the size of the training (vs. testing) set, and a seed parameter in case users need to repeat the randomness.
Hint, there are a couple approaches, and feel free to choose either
Shuffle the df, and split the df into ‘training’ and ‘testing’ set
Use pd.DataFrame.sample()
Output: apply your function to the df of 7, use training = 80%, and set seed =1. return both training and testing data.
```
# Permutation-based solution
import numpy as np
# fix the seed so that we can repeat our experiment
np.random.seed(1)
# specify the percentage of training
pct_train = 80
# find the number of rows in our data frame
n = len(df_merge2)
# create a shuffled array of numbers equal to the number of rows
shuffled_indices = np.random.permutation(n)
print(shuffled_indices)
# get the number of rows that should be in the training set
n_train_sample = int(pct_train*n/100)
# convert arrays to list
index_train_list = shuffled_indices[:n_train_sample].tolist()
index_test_list = shuffled_indices[n_train_sample:].tolist()
print(len(index_train_list))
print(len(index_test_list))
# use train and test list index to get the training and test dataframes
df_train = df_merge2.iloc[index_train_list]
df_test = df_merge2.iloc[index_test_list]
df_train
# Using pd.DataFrame.sample() for train/test split
def sample_train_test(df, pct_train, seed):
# sample rows dateframe for training
df_train = df.sample(frac=pct_train/100, random_state = seed)
# get indicies that are left for creating a list of indices for test dataframe
test_index = list(set(df.index.tolist()) - set(df_train.index.tolist()))
# get the test dataframe
df_test = df.iloc[test_index]
return df_train, df_test
# Execute the train/test split
train_set, test_set = sample_train_test (df_merge2, pct_train = 80, seed = 1)
print(train_set.shape)
print(test_set.shape)
```
**Part II**
Sometimes, one needs to do Stratified splitting. That is, randomly splitting data into train and test within each stratum. Change the function that you just wrote, and make sure it can handle the group by splitting.
Output: Use df from step 3, let the strata be ticker labels, use training = 80%, and set seed =1. return both training and testing data.
```
# Use original datasets as input
for data in DATA_MAP:
print(data[DF].shape)
train_list = []
test_list = []
for data in DATA_MAP:
# Call the split on this single df
train_set, test_set = sample_train_test (data[DF], 80, 1)
# Append to output results
train_list.append(train_set)
test_list.append(test_set)
# Assemble all results
train_result = pd.concat([df for df in train_list], join='outer')
test_result = pd.concat([df for df in test_list], join='outer')
# Print final train/test results
print('train:', train_result.shape)
print('test:', test_result.shape)
```
# Multi Armed Bandit Problem
The multi-armed bandit (MAB) problem is one of the classical problems in reinforcement learning. A multi-armed bandit is actually a slot machine, a gambling game played in a casino where you pull the arm (lever) and get a payout (reward) based on some randomly generated probability distribution. A single slot machine is called a one-armed bandit, and when there are multiple slot machines they are called multi-armed bandits or k-armed bandits. Multi-armed bandits are shown below,

As each slot machine gives us a reward from its own probability distribution, our goal is to find out which slot machine will give us the maximum cumulative reward over a sequence of time. So at each time step t, the agent performs an action, i.e. pulls an arm of the slot machine, and receives a reward; the goal of our agent is to maximize the cumulative reward.
We define the value of an arm Q(a) as average rewards received by pulling the arm,
$$Q(a) = \frac{Sum \,of \,rewards \,\,received \,from \,the \,arm}{Total\, number \,of \,times \,the \,arm \,was \,pulled} $$
So the optimal arm is the one which gives us the maximum cumulative reward, i.e.
$$Q(a^*) = \max_a \, Q(a)$$
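The running-average definition of Q(a) can also be computed incrementally, without storing all past rewards, via Q = Q + (r - Q)/n. A small simulation for a single arm (the reward distribution here is arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# rewards of a single arm, drawn from an arbitrary distribution
rewards = rng.normal(loc=1.5, scale=1.0, size=5000)

count = 0
Q = 0.0
for r in rewards:
    count += 1
    Q += (r - Q) / count   # incremental form of the sample average

# Q now equals sum-of-rewards / number-of-pulls, and converges
# towards the arm's true mean reward as the pull count grows
```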
The goal of our agent is to find the optimal arm and also to minimize the regret, which can be defined as the cost of not knowing which of the k arms is optimal. Now how do we find the best arm? Should we explore all the arms, or keep choosing the arm which already gives us the maximum cumulative reward? Here comes the exploration-exploitation dilemma. We will now see how to resolve this dilemma using the following exploration strategies,
1. Epsilon-greedy policy
2. Softmax exploration
3. Upper Confidence bound algorithm
4. Thompson sampling technique
First let us import the libraries,
```
import gym_bandits
import gym
import numpy as np
import math
import random
env = gym.make("BanditTenArmedGaussian-v0")
```
### Epsilon-Greedy Policy
```
def epsilon_greedy(epsilon):
rand = np.random.random()
if rand < epsilon:
action = env.action_space.sample()
else:
action = np.argmax(Q)
return action
```
Now, let us initialize all the necessary variables
```
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
```
Start pulling the arm!!!!!!!!
```
for i in range(num_rounds):
# Select the arm using epsilon greedy
arm = epsilon_greedy(0.5)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
### Softmax Exploration
```
def softmax(tau):
total = sum([math.exp(val/tau) for val in Q])
probs = [math.exp(val/tau)/total for val in Q]
threshold = random.random()
cumulative_prob = 0.0
for i in range(len(probs)):
cumulative_prob += probs[i]
if (cumulative_prob > threshold):
return i
return np.argmax(probs)
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
for i in range(num_rounds):
# Select the arm using softmax
arm = softmax(0.5)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
### Upper Confidence Bound
```
def UCB(iters):
ucb = np.zeros(10)
#explore all the arms
if iters < 10:
return iters
else:
for arm in range(10):
# calculate upper bound
upper_bound = math.sqrt((2*math.log(sum(count))) / count[arm])
# add upper bound to the Q value
ucb[arm] = Q[arm] + upper_bound
# return the arm which has maximum value
return (np.argmax(ucb))
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
for i in range(num_rounds):
# Select the arm using UCB
arm = UCB(i)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
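The quantity computed in `UCB` above is the UCB1 score

$$
\mathrm{ucb}(a) = Q(a) + \sqrt{\frac{2 \ln t}{N(a)}}
$$

where $N(a)$ is the number of times arm $a$ has been pulled and $t = \sum_a N(a)$ is the total number of pulls so far; the exploration bonus shrinks as an arm is sampled more often, so rarely tried arms keep getting revisited.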
### Thompson Sampling
```
def thompson_sampling(alpha,beta):
samples = [np.random.beta(alpha[i]+1,beta[i]+1) for i in range(10)]
return np.argmax(samples)
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
# initialize alpha and beta values
alpha = np.ones(10)
beta = np.ones(10)
for i in range(num_rounds):
# Select the arm using thompson sampling
arm = thompson_sampling(alpha,beta)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
# If it is a positive reward increment alpha
if reward >0:
alpha[arm] += 1
# If it is a negative reward increment beta
else:
beta[arm] += 1
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
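As a self-contained sanity check of the Beta-posterior update used above (the three arm probabilities here are made-up Bernoulli arms for illustration, not taken from the environment):

```
import numpy as np

rng = np.random.default_rng(0)
true_probs = [0.2, 0.5, 0.8]   # hypothetical Bernoulli arms
alpha = np.ones(3)             # Beta(1, 1) priors, i.e. uniform
beta = np.ones(3)

for _ in range(2000):
    # sample one draw from each arm's posterior and play the best
    arm = int(np.argmax(rng.beta(alpha, beta)))
    reward = float(rng.random() < true_probs[arm])
    alpha[arm] += reward       # success -> increment alpha
    beta[arm] += 1 - reward    # failure -> increment beta

counts = alpha + beta - 2      # pulls per arm
print(np.argmax(alpha / (alpha + beta)), counts)
```

After a few thousand rounds the posterior mean is highest for the best arm, and nearly all pulls concentrate on it.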
# Sensor invariance of signal bouts
We assume that the bouts in the signal are caused by encounters of a plume filament with high gas concentration.
The aim of this figure is to show that the sensor bouts are sensor-invariant, that is, encountering them is (by and large) independent of the sensor used. As we will show, it is particularly the bout onsets that allow us to identify corresponding bouts of gas concentration across all sensors.
### Preliminaries
```
from __future__ import unicode_literals
import sys
import os
# add path to the directory containing the plumy module to PYTHONPATH
import numpy as np
import pandas as pd
import matplotlib as mpl
from matplotlib import cm
plumy_path = os.path.abspath(os.path.join(os.path.pardir, os.path.pardir))
sys.path.append(os.path.join(plumy_path))
toplevel_path = os.path.abspath(os.path.join(os.path.pardir, os.path.pardir, os.path.pardir))
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm.auto import tqdm
from plumy.utils import DataSelector
from plumy.utils import HDFDataSelector
from plumy.utils import ZipFileDataSelector
from plumy.bouts import *
import plumy.bouts
import importlib
importlib.reload(plumy.bouts)
plt.rc('text', usetex=False)
mpl.rcParams['savefig.dpi'] = 150  # for print, go to 600
rem_dupes = True  # Drop duplicate timestamps
resample = True  # Signal resampling active
# three interchangeable data selectors; the HDF-cache variant is used below
path = os.path.join(toplevel_path, 'WTD_upload')  # path to dataset
ds = DataSelector(path, drop_duplicates=rem_dupes, resample=resample, verbose=False, use_HDFcache=True)
path = os.path.join(toplevel_path, 'WTD_upload.zip')
dsz = ZipFileDataSelector(path, drop_duplicates=rem_dupes, resample=resample, verbose=False, use_HDFcache=True)
ds = dsz
path = os.path.join(toplevel_path, 'WTD_upload.zip_HDFcache')
dsh = HDFDataSelector(path, drop_duplicates=rem_dupes, resample=resample, verbose=False)
ds = dsh
sensornames = ["TGS2611", # Sensor 1
"TGS2612", # Sensor 2
"TGS2610", # Sensor 3
"TGS2602", # Sensor 4
"TGS2600a", # Sensor 5
"TGS2600b", # Sensor 6
"TGS2620a", # Sensor 7
"TGS2620b"] # Sensor 8
```
### Load the data
```
gas = 1
voltage = 5
speed = 1
trial = 'all'
print("using Gas: {}, Voltage: {}, Fan Speed: {}, Trial #{}.".format(
DataSelector.GasNames[gas],
DataSelector.SensorVoltages[voltage], DataSelector.AltFanSpeeds[speed], trial))
data = []
for dist in tqdm(range(1,7)):
data.append(ds.select(gas,dist,voltage,speed))
sensornames_bynumber = ['Sensor{}'.format(i) for i in range(1,9) ]
distance = 5 # middle row because bouts overlap less here (on the first board it's mayhem)
ebcs = []
halflife = 40.
smooth_std = 30.
for i,sn in enumerate(sensornames_bynumber):
ebcss = make_boutcounters(data, sensorname=sn, boardname='Board5', halflife=halflife, smooth_std=smooth_std,
ampthresh=None, use_data_baseline=True)
ebcs.append(ebcss)
```
### Analysis
#### Artifacts on sensor 1 (TGS 2611)
Unfortunately, the signals from sensor 1 on board 5 contain artefacts that we were not able to correct. Below we show one example, but the artefacts exist on all recordings from that sensor on that board that we looked at. Thus, we exclude sensor 1 from further analysis.
```
sensor = 0
distance = 0
trial = 19
e = ebcs[sensor][distance][trial]
s = e.signal
timeax = np.arange(0, len(s)*0.01, 0.01)
f = plt.figure(figsize=(4,2))
ax = f.gca()
ax.plot(timeax, s)
ax.set_xlim(0,timeax[-1])
ax.set_xlabel('time [s]')
ax.set_ylabel('response [a.u.]')
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
```
#### No response on sensor 2 (TGS 2612)
Sensor 2 shows only a very small (if any) response to the stimulus that we analyse here. See the analysis below - the response to the gas should kick in around t=60s. Hence, we do not show sensor 2 in the actual figure for the paper.
```
sensor = 1
distance = 0
trial = 19
e = ebcs[sensor][distance][trial]
s = e.signal
timeax = np.arange(0, len(s)*0.01, 0.01)
f = plt.figure(figsize=(4,2))
ax = f.gca()
for i in range(20): # loop over trials
e = ebcs[sensor][distance][i]
ax.plot(timeax, e.signal)
ax.set_xlim(0,timeax[-1])
ax.set_xlabel('time [s]')
ax.set_ylabel('response [a.u.]')
ax.set_title("sensor 2 (TGS 2612), all trials")
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
```
### Compare bout occurrence across all sensors
```
trial = 19
distance = 3
f = plt.figure(figsize=(8,4))
gs = mpl.gridspec.GridSpec(6,2, wspace=0.5, hspace=0.4)
ax = f.add_subplot(gs[:,0])
yticks = []
maxy = 800
for i in range(2,8): #sensor1 artifacts, sensor2 no response
signal = ebcs[i][distance][trial].signal
signal = signal.rolling(300, win_type='gaussian').mean(std=smooth_std)
signal = signal.dropna()
s = signal.values - signal[signal.index[0]]
if i == 3: #sensor 4, scale down by factor 10 to get approx. same amplitude
s = s / 10.
s = s + (i-2)* 100. + 30
ax.plot(signal.index, s, color='k')
yticks.append(s[0])
#panel decoration
ax.set_ylim(0,maxy)
ax.set_xlim(0, signal.index[-1])
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_yticks(yticks)
ax.set_yticklabels([sensornames[si] for si in range(2,8)])
#ax.set_yticklabels(["Sensor {}".format(si) for si in xrange(3,9)])
ax.set_xticklabels(["{:d}".format(int(xtl/1000)) for xtl in ax.get_xticks()])
ax.set_xlabel("time [s]")
#scalebar
ax.plot([20,20], [670,770], color='k', lw=2)
ax.text(8000,720, "∆V 100 mV\n(TGS2610: 1000 mV)", fontsize=7)
#bouts
ax = f.add_subplot(gs[:,1])
yticks = []
maxy = 800
for i in range(2,8): #sensor1 artifacts, sensor2 no response
offset = (i-2) + 0.1
if i == 3:
scale = 1.
else:
scale=10.
line = plumy.bouts.plot_bout(ebcs[i][distance][trial], ax, offset, scale)
data = line[0].get_data()[1]
yticks.append(data[0])
#decorate panel
ax.set_yticks(yticks)
ax.set_yticklabels([sensornames[si] for si in range(2,8)])
#ax.set_yticklabels(["Sensor {}".format(si) for si in xrange(3,9)])
ax.set_xlim(0, len(data)/100)
ax.set_ylim(-1,7)
ax.set_xlabel('time [s]')
#add scalebar
ax.plot([20,20], [6.5,7], color='k', lw=1)
ax.text(30,6.5, "1 a.u.\n(TGS2602: 0.1 a.u.)", fontsize=7)
f.text(0.05,0.87,"A)", fontsize=12, weight='bold')
f.text(0.5,0.87,"B)", fontsize=12, weight='bold')
f.savefig('Figures/Fig. 6 - sensor invariance.png',dpi=600)
```
Sensor 2 is not shown because it doesn't show any discernible response to the stimulus.
The response of Sensor 3 shows the most bouts; it is probably the most sensitive to the signal. Sensors 7 and 8 are most likely of the same type, and their responses are very similar (but not identical). Sensors 5 and 6 also show very similar responses, with Sensor 6 having a slightly higher amount of noise.
### Bouts only, no other signal
```
f = plt.figure(figsize=(8,3))
ax = f.add_subplot(111)
color_iter = iter(cm.rainbow(np.linspace(0,1,6)))
for i in range(2,8):
ebc = ebcs[i][distance][trial]
col = next(color_iter)
s = ebc.smooth_time_deriv_ewma()
p = ebc.filtered_posneg
for j in p.T.astype(int):
lp = ax.plot(np.arange(j[0], j[1]), (s[j[0]:j[1]] - s[j[0]]), color=col)
lp[0].set_label(sensornames[i])
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.legend(frameon=False, fontsize=8)
```
Not so intuitive, because everything overlaps. Normalising for maximum height does not really help; it makes things even harder to survey. Therefore, we pick sensors with clear responses and compare those pairwise:
e.g., sensors 3 + 4 and 7 + 8. Sensors 3 and 4 are of different types but give clean responses; 7 and 8 are putatively the same sensor type. While the latter also holds for 5 + 6, their responses are noisy and their overlap is not as good.
```
f = plt.figure(figsize=(6,3.5))
gs = mpl.gridspec.GridSpec(2,1, hspace=0.4)
pairs = [[6,7], [2,3]]
markers = ['x','+']
lines = ['-','-']
color_iters = [iter(cmm([0.2,0.8], alpha=0.9)) for cmm in [cm.RdBu, cm.PuOr]]
yticks = [[0,0.5,1],[0.0, 0.5, 1.0]]
for i,pair in enumerate(pairs):
ax = f.add_subplot(gs[i])
for pi,pa in enumerate(pair):
ebc = ebcs[pa][distance][trial]
s = ebc.smooth_time_deriv_ewma()
p = ebc.filtered_posneg
color = next(color_iters[i])
#normalise by max height
max_height = 0
for j in p.T.astype(int):
height = s[j[1]] - s[j[0]]
if height > max_height:
max_height = height
print(max_height)
for j in p.T.astype(int):
lp = ax.plot(np.arange(j[0], j[1])/100., (s[j[0]:j[1]] - s[j[0]])/max_height, linestyle=lines[pi], linewidth=.6, color=color)
lpl = ax.plot((j[0]/100., j[0]/100.), (0,1), linestyle='-', linewidth=.2, color=color)
lpm = ax.plot(j[0]/100., 1, linestyle='none', marker=markers[pi], markersize=4, color=color)
lp[0].set_label(sensornames[pa])
# ax.set_frame_on(True)
ax.set_frame_on(False)
# for sp in ["top", "bottom", "right"]:
# ax.spines[sp].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xlim(0,320)
ax.set_ylim(-0.01,1.05)
ax.set_yticks(yticks[i])
lg = plt.legend(loc="upper right", frameon=False, fontsize=8)
lg.set_frame_on(False)
ax.set_xticks(range(0,251,50))
ax.set_xticklabels([])
ax.set_xlabel('time [s]')
ax.set_xticklabels(range(0,251,50))
ax.set_ylabel('bout amp. [a.u.]')
f.text(0.015,0.89, "A)", fontweight='bold')
f.text(0.015,0.44, "B)", fontweight='bold')
f.savefig("Figures/Fig. 7 - Bout coincidence.png", dpi=600)
```
### Test for event correlation
In order to quantify the similarity of bouts across sensors, we adopt an approach first described by Schreiber et al. (2003) for measuring the similarity of event series. It is based on convolving a time series of discrete events (in the original study, neuronal action potentials) with a Gaussian kernel, thus creating a continuous time series. The similarity of two event series is then quantified by the Pearson correlation of these continuous series.
Here, we apply this measure to the bout onsets as discrete events. Fig. S2 depicts the bout onset times for the signals in Fig. 5 (Acetaldehyde, source distance 1.18 m, trial 19). We convolved these time series with Gaussian kernels of width $\sigma = 2\,\mathrm{s}$ and then computed the pairwise correlation coefficients between the generated continuous time series. This analysis was done for all 20 trials present in the data set for Acetaldehyde, measured at 1.18 m distance from the source. The average correlation between all time series was $c = 0.38 \pm 0.21$ (standard deviation).
To check against a random background, we scrambled the trials, i.e., we computed correlations between time series randomly chosen from different trials. Here we obtained $c = 0.17 \pm 0.15$. Fig. S3 depicts the histograms of pairwise correlations obtained in matched and randomised trials. A two-sample Kolmogorov-Smirnov test confirmed that the correlations observed in matched trials are significantly different from those in randomised trials ($p = 2.1 \times 10^{-29}$).
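The measure can be illustrated on toy event trains (the event times below are made up, and `scipy.ndimage.gaussian_filter1d` stands in for the notebook's rolling-window smoothing):

```
import numpy as np
from scipy.ndimage import gaussian_filter1d

def event_corr(t1, t2, sigma=2.0, length=100):
    """Schreiber-style similarity: binarize event times, smooth, correlate."""
    s1 = np.zeros(length); s1[t1] = 1.0
    s2 = np.zeros(length); s2[t2] = 1.0
    s1 = gaussian_filter1d(s1, sigma)
    s2 = gaussian_filter1d(s2, sigma)
    return float(np.corrcoef(s1, s2)[0, 1])

# nearly coincident events score close to 1, shifted events much lower
print(event_corr([10, 40, 70], [11, 41, 69]))
print(event_corr([10, 40, 70], [25, 55, 90]))
```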
#### References
Schreiber, S., Fellous, J. M., Whitmer, D., Tiesinga, P., and Sejnowski, T. J. (2003). A new correlation-based measure of spike timing reliability. Neurocomputing 52-54, 925–931. doi:10.1016/S0925-2312(02)00838-X.
```
# window size is 10 sigma
# pad 5 sigma at the end to avoid truncating late events
def convolve_with_gaussian(train, sigma, rng=None):
    if rng is None:
        rng = [0, 26000 + 5 * sigma]
    hist, bins = np.histogram(train, bins=rng[1] - rng[0], range=rng)
    ts = pd.Series(hist)
    signal = ts.rolling(10 * sigma, win_type='gaussian', center=True).mean(std=sigma)
    signal = signal.dropna().values
    retval = np.concatenate((np.zeros(5 * sigma), signal))  # pad 5 sigma at the start that were dropped as NaN
    return retval
distance = 3
trial = 19
f = plt.figure(figsize=(5,4))
gs = mpl.gridspec.GridSpec(1,1, left=0.3)
ax = f.add_subplot(gs[0])
sigs = []
sigma = 200
for i in range(2,8):
bouts = ebcs[i][distance][trial].filtered_posneg
train = bouts[0]
sig_smooth = convolve_with_gaussian(train, sigma)
sigs.append(sig_smooth)
for ons in train/100.:
ax.plot([ons,ons], [i-0.25,i+0.25], color='k')
xaxis = np.arange(len(sig_smooth))/100.
ax.plot(xaxis, i-0.5+sig_smooth/max(sig_smooth), color=[0.5,0.5,0.5,0.5])
sigs = np.array(sigs)
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_yticks(range(2,8))
ax.set_yticklabels(sensornames[2:8])
ax.set_xlabel('time [s]')
ax.set_xlim(0,260)
f.savefig('Figures/Fig. S4 - Event onsets.png', dpi=600)
sigma = 200
corrs_pertrial = []
for trial in range(20):
sigs = []
for i in range(2,8):
bouts = ebcs[i][distance][trial].filtered_posneg
train = bouts[0]
sigs.append(convolve_with_gaussian(train, sigma))
sigs = np.array(sigs)
corr = np.corrcoef(sigs)
all_corrs = []
for i in range(corr.shape[0]):
for j in range(i+1, corr.shape[1]):
all_corrs.append(corr[i,j])
corrs_pertrial.extend(all_corrs)
corrs_random = []
for trial in range(20):
trialperm = np.random.permutation(20)
sigs = []
for i in range(2,8):
bouts = ebcs[i][distance][trialperm[i]].filtered_posneg
train = bouts[0]
sigs.append(convolve_with_gaussian(train, sigma))
sigs = np.array(sigs)
corr = np.corrcoef(sigs)
all_corrs = []
for i in range(corr.shape[0]):
for j in range(i+1, corr.shape[1]):
all_corrs.append(corr[i,j])
corrs_random.extend(all_corrs)
f = plt.figure(figsize=(5,2.8))
gs = mpl.gridspec.GridSpec(1,1, bottom=0.2)
ax = f.add_subplot(gs[0])
bins = np.linspace(-1,1,30)
ax.plot(bins, np.histogram(corrs_pertrial, bins=30, range=(-1,1))[0], label='matched trials', color='k',zorder=1)
ax.plot(bins, np.histogram(corrs_random, bins=30, range=(-1,1))[0], label='random trials', color='gray', zorder=0, ls=":")
plt.legend(frameon=False, loc="upper left", fontsize=8)
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_ylabel("number of pairs")
ax.set_xlabel("correlation")
ax.set_ylim(-3, ax.get_ylim()[1])
ax.set_xlim(-.5, 1.)
print(u"matched trials: corr = {:.2f} ± {:.2f}".format(np.mean(corrs_pertrial), np.std(corrs_pertrial)))
print(u"random trials: corr = {:.2f} ± {:.2f}".format(np.mean(corrs_random), np.std(corrs_random)))
import scipy.stats as ss
p = ss.ks_2samp(corrs_pertrial, corrs_random)
print("Kolmogorov-Smirnov 2 sample test: p = {:.2g}".format(p.pvalue))
f.savefig('Figures/Fig. S5 - Event correlation statistics.png', dpi=600)
```
```
# Importing required libraries
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
ad = pd.read_csv("Advertising.csv")
ad.drop("Unnamed: 0", axis=1, inplace=True)
ad.head()
```
## The Bias
**We will use Linear Regression to display bias or underfitting**
```
# Fitting Linear Regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(ad.TV.values.reshape(-1,1), ad.Sales)
plt.figure(figsize=(8,5))
plt.subplot(111, facecolor='black')
plt.scatter(ad.TV, ad.Sales, color='gold')
plt.plot(ad.TV,model.predict(ad.TV.values.reshape(-1,1)),color='red',linewidth=4)
plt.text(0,22,"Restrictive Model\nHigh Bias\nUnderfitting", color='white',size=15)
plt.text(220,1,"@DataScienceWithSan", color='white')
```
Simple Linear Regression is an inflexible model that assumes a linear relationship between input and output variables. This assumption, approximation, and restriction introduce **bias** into the model.
*Hence, bias refers to the error observed when approximating a complex problem using a simple (or restrictive) model.*
```
# Generating random data
np.random.seed(0)
x = np.random.normal(0,1,100)
y = 30 + 4*x - 2*(x**2) + 3*(x**3) + np.random.normal(0,1,100)
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.ylim(10,60)
plt.xlim(-1,2)
plt.scatter(x,y,s=20,c='seagreen')
plt.plot(x, 27+12*x, color='orange', linewidth=1)
plt.subplot(122)
plt.ylim(10,60)
plt.xlim(-1,2)
plt.scatter(x,y,s=20,c='seagreen')
x2 = np.linspace(-3,3,50)
plt.plot(x2,30 + 4*x2 - 2*(x2**2) + 3*(x2**3), c='orange', linewidth=2)
```
The plot on the right is considerably more flexible than the one on the left: it fits the data more smoothly. The plot on the left, by contrast, represents a poorly fitted model that assumes a linear relationship in the data. Poor fitting due to high bias is also known as ***underfitting***. Underfitting results in poor performance and low accuracy and can be rectified by using more flexible models.
Let’s summarise the key points about bias:
* Bias is introduced when restrictive (inflexible) models are used to solve complex problems
* As the flexibility of a model increases, the bias starts decreasing for training data.
* Bias can cause underfitting, which further leads to poor performance and predictions.
## The Variance
**Performing KNN on above data to show high variance or overfitting**
```
from sklearn import neighbors
knn = neighbors.KNeighborsRegressor(n_neighbors=1, weights='distance')
knn.fit(ad.TV.values.reshape(-1,1), ad.Sales)
x_points_ad = np.linspace(0,300,100)
y_knn_ad = knn.predict(x_points_ad.reshape(-1,1))
plt.figure(figsize=(8,5))
plt.subplot(111, facecolor='black')
plt.scatter(ad.TV, ad.Sales, color='gold')
plt.plot(x_points_ad, y_knn_ad, color='red',linewidth=4)
plt.text(0,22,"Complex Model\nHigh Variance\nOverfitting", color='white', size=15)
plt.text(220,1,"@DataScienceWithSan", color='white')
```
In Machine Learning, when a model performs so well on the training dataset that it almost memorizes every outcome, it is likely to perform quite badly on the testing dataset.
This phenomenon is known as ***overfitting*** and is generally observed while building very complex and flexible models.
What is Variance?
***Variance is the amount by which our model will have to change if it were to make predictions on a different dataset.***
Let’s simplify the above statement → our model should not yield high errors when estimating the outputs of unseen data. That is, if the model shows good results on the training dataset but poor results on testing, it is said to have high variance.
Hence, building a very complex model can come at the cost of overfitting. One must understand that too much learning can bring high variance.
Let’s see some of the key points about Variance.
* Variance is the amount by which a model needs changing if it were to make predictions on unseen data.
* High variance is equivalent to the overfitting of the model.
* Restrictive models such as Linear Regression show low variance, whereas more complex and flexible models can introduce high variance.
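A quick numeric illustration of this train/test gap (the synthetic sine data and seeds below are arbitrary choices for illustration, using scikit-learn's `KNeighborsRegressor` as in the cells above):

```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for k in (1, 15):
    knn = KNeighborsRegressor(n_neighbors=k).fit(X_tr, y_tr)
    scores[k] = (knn.score(X_tr, y_tr), knn.score(X_te, y_te))
    print("k={}: train R^2={:.2f}, test R^2={:.2f}".format(k, *scores[k]))
```

With `k=1` the model memorizes the training set (a perfect train score) but generalizes worse; with `k=15` the two scores are much closer together.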
```
## Making Random Data
np.random.seed(0)
x = np.random.normal(0,10,50)
y = 0.1*x + 0.01*(x**2) + 0.01*(x**3) + np.random.normal(0,10,50)
x = np.array(x).reshape(-1,1)
y = np.array(y).reshape(-1,1)
plt.figure(figsize=(14,8))
## Fitting and plotting Linear Regression
regression_model = LinearRegression()
regression_model.fit(x,y)
plt.subplot(221)
plt.scatter(x, y, edgecolor='skyblue',color='royalblue')
plt.plot(x,regression_model.predict(x),color='orange',linewidth=1)
plt.title("Figure 1 - Linear Regression")
#############################################################################
## Fitting and plotting polynomial regression
x_points = np.linspace(-25,25,100)
y2 = 0.1*x_points + 0.01*(x_points**2) + 0.01*(x_points**3)
plt.ylim(-150,150)
plt.subplot(222)
plt.scatter(x, y, edgecolor='skyblue',color='royalblue')
plt.plot(x_points,y2,color='orange',linewidth=2)
plt.title("Figure 2 - Polynomial Regression")
#############################################################################
## Fitting and plotting KNN with high k
from sklearn import neighbors
knn = neighbors.KNeighborsRegressor(n_neighbors=9, weights='distance')
knn.fit(x, y)
y_knn_h = knn.predict(x_points.reshape(-1,1))
plt.subplot(223)
plt.scatter(x, y, edgecolor='skyblue', color='royalblue')
plt.plot(x_points, y_knn_h, color='orange',linewidth=2)
plt.title("Figure 3 - KNN with 9 nearest neighbors")
#############################################################################
## Fitting and plotting KNN with k=1
from sklearn import neighbors
knn = neighbors.KNeighborsRegressor(n_neighbors=1, weights='distance')
knn.fit(x, y)
y_knn_h = knn.predict(x_points.reshape(-1,1))
#plt.subplot(111,facecolor='navy')
plt.subplot(224)
plt.scatter(x, y, edgecolor='skyblue',color='royalblue' )
plt.plot(x_points, y_knn_h, color='orange', linewidth=2)
plt.title("Figure 4 - KNN with 1 nearest neighbor")
```
Now take a few seconds to observe these plots and see how increasing the complexity of our model to train on the same data reduces bias and underfitting, thus reducing the training error.
* Figure 1 shows the most restrictive model that is the linear model as we saw earlier. The bias in this model is certainly the highest.
* Figure 2, the polynomial regression, is a smoother curve and reduces the training error further, hence showing a somewhat lesser bias than the linear model.
* Figure 3 shows the K-Nearest Neighbors (KNN) model with K=9, which fits the data more accurately than the previous two models.
* Figure 4, the KNN model with K=1, closely follows the data and hardly misses any samples. It shows the least bias and underfitting of all the models above, but very high variance and overfitting.
## Bias Variance trade-off
It is important to keep in mind that some bias and some variance will always be there while building a Machine Learning model. Both bias and variance add to the overall error in the model.
To minimize the reducible error (bias+variance), we must find the sweet spot between the two, where both bias and variance coexist with minimum possible values. This is called Bias-Variance Trade-off. It’s more of a trade-off between Prediction Accuracy (Variance) and Model Interpretability (Bias).
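The trade-off can be made concrete by estimating bias² and variance empirically over many resampled training sets (the sine target, noise level, and polynomial degrees below are illustrative assumptions, not part of the analysis above):

```
import numpy as np

rng = np.random.RandomState(42)
x = np.linspace(0, 3, 40)
f_true = np.sin(2 * x)                      # noiseless target

results = {}
for degree in (1, 3, 9):                    # restrictive -> flexible
    preds = []
    for _ in range(200):                    # resample the training noise
        y = f_true + rng.normal(0, 0.3, x.size)
        preds.append(np.polyval(np.polyfit(x, y, degree), x))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - f_true) ** 2)
    variance = preds.var(axis=0).mean()
    results[degree] = (bias2, variance)
    print("degree {}: bias^2 = {:.4f}, variance = {:.4f}".format(degree, bias2, variance))
```

Bias² falls and variance rises as the degree increases; the total reducible error is minimized somewhere in between.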
```
import codecs
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
```
Load data in Tensorflow.
```
root = "../"
training_data_folder = '%straining_data/web-radio/output/rec' % root
embDir = '%sembeddings' % root
what = 'artist'
uri_file = '%s/%s.emb.u' % (embDir, what)
vector_file = '%s/%s.emb.v' % (embDir, what)
# header_file = '%s/%s.emb.h' % (embDir, what)
training_file = '%s/%s.dat' % (training_data_folder, what)
vectors = np.array([line.strip().split(' ') for line in codecs.open(vector_file, 'r', 'utf-8')])
# heads = np.array([line.strip() for line in codecs.open(header_file, 'r', 'utf-8')])
uris = np.array([line.strip() for line in codecs.open(uri_file, 'r', 'utf-8')])
train_array = np.array([line.strip().split(' ') for line in codecs.open(training_file, 'r', 'utf-8')])
pd.DataFrame(train_array, columns=['seed', 'target', 'score']).head()
```
Data pre-processing: I want to substitute the seed and target with their embeddings
```
def get_embs(x):
    # uri to embedding
    v = vectors[np.argwhere(uris == x)]
    if v.size == 0:
        result = -2. * np.ones(vectors[0].size)
    else:
        result = v[0][0]
    return result.astype('float32')

col1 = np.array([get_embs(xi) for xi in train_array[:, 0]])
col2 = np.array([get_embs(xi) for xi in train_array[:, 1]])
col1 = np.concatenate((col1, [12., 45., 73.] * np.ones((train_array.shape[0], 3))), axis=1)
col2 = np.concatenate((col2, [12., 45., 73.] * np.ones((train_array.shape[0], 3))), axis=1)
col3 = np.array(train_array[:, 2]).astype('float32')
col3 = col3.reshape((col3.size, 1))
training_vector = np.concatenate((col1, col2, col3), axis=1)
training_vector.shape
```
Split test and train
```
train, test = train_test_split(training_vector, train_size=0.7)
train_vector = train[:, :-1]
train_label = train[:, -1]
train_label = train_label.reshape((len(train_label), 1))
test_vector = test[:, :-1]
test_label = test[:, -1]
test_label = test_label.reshape((len(test_label), 1))
print('Train')
print(train_vector.shape)
print(train_label.shape)
print('Test')
print(test_vector.shape)
print(test_label.shape)
# Parameters
learning_rate = 0.1
num_steps = 1000
batch_size = 64
display_step = 100
# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = train_vector[0].size
num_output = int(num_input / 2)
num_output_wrap = train_label[0].size
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input], name="X")
Y = tf.placeholder(tf.float32, [None, num_output_wrap], name="Y")
```
Neural network
```
# Create model
def neural_net(x):
with tf.name_scope('hidden_1') as scope:
# Hidden fully connected layer with 256 neurons
w1 = tf.Variable(tf.random_normal([num_input, n_hidden_1]), name='w')
b1 = tf.Variable(tf.random_normal([n_hidden_1]), name='b')
layer_1 = tf.add(tf.matmul(x, w1), b1, name='o')
with tf.name_scope('hidden_2') as scope:
# Hidden fully connected layer with 256 neurons
w2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='w')
b2 = tf.Variable(tf.random_normal([n_hidden_2]), name='b')
layer_2 = tf.add(tf.matmul(layer_1, w2), b2, name='o')
with tf.name_scope('out_layer') as scope:
# Output fully connected layer with a neuron for each class
wo = tf.Variable(tf.random_normal([n_hidden_2, num_output]), name='w')
bo = tf.Variable(tf.random_normal([num_output]), name='b')
out_layer = tf.add(tf.matmul(layer_2, wo), bo, name="o")
with tf.name_scope('u_norm') as scope:
row_sum = tf.reduce_sum(out_layer, axis=1, keepdims=True)
return tf.divide(out_layer, row_sum)
def weighted_l2(a, b, w):
with tf.name_scope('weighted_l2') as scope:
# https://stackoverflow.com/a/8861999/1218213
q = tf.subtract(a, b, name="q")
# return np.sqrt((w * q * q).sum())
pow_q = tf.cast(tf.pow(q, 2), tf.float32, name="q-power")
return tf.reduce_sum(tf.multiply(w, pow_q), axis=1, name="o", keepdims=True)
def compute_penalty(expected, taken, total):
with tf.name_scope('penalty') as scope:
penalty = tf.divide(tf.subtract(expected, taken), total)
return tf.cast(penalty, tf.float32)
def neural_net_wrap(x, previous_out):
with tf.name_scope('nn_wrapper') as scope:
lt = previous_out.shape.as_list()[0] # vertical size of the tensor
lh = previous_out[0].shape.as_list()[0] # horizontal size of the tensor
seed, target = tf.split(x, [lh, lh], axis=1)
bs = tf.equal(seed, -2.)
bt = tf.equal(target, -2.)
_ones = tf.ones_like(previous_out, tf.float32)
max_distance = weighted_l2(_ones, _ones * -1., previous_out)
bad_mask = tf.logical_or(bs, bt)
good_mask = tf.logical_not(bad_mask)
bs_count = tf.count_nonzero(tf.logical_not(bs), axis=1, keepdims=True)
good_count = tf.count_nonzero(good_mask, axis=1, keepdims=True)
_zeros = tf.zeros_like(previous_out, tf.float32)
_seed = tf.where(good_mask, seed, _zeros)
_target = tf.where(good_mask, target, _zeros)
# distance
d = weighted_l2(_seed, _target, previous_out)
# how much info I am not finding
penalty = compute_penalty(bs_count, good_count, lh)
multiplier = tf.subtract(1., penalty)
# score
s = tf.divide(tf.subtract(max_distance, d), max_distance)
return tf.multiply(s, multiplier)
# Construct model
intermediate = neural_net(X)
logits = neural_net_wrap(X, intermediate)
logits.shape
# Define loss and optimizer
# loss_op = MSE
loss_op = tf.reduce_mean(tf.square(tf.subtract(logits, Y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model: count a prediction as correct if it is within 0.1 of the target
correct_pred = tf.less(tf.abs(tf.subtract(logits, Y)), 0.1)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
def next_batch(num, data, labels):
"""
Return a total of `num` random samples and labels.
"""
idx = np.arange(0, len(data))
np.random.shuffle(idx)
idx = idx[:num]
data_shuffle = data[idx]
labels_shuffle = labels[idx]
return data_shuffle, labels_shuffle
with tf.Session() as sess:
writer = tf.summary.FileWriter("output", sess.graph)
# Run the initializer
sess.run(init)
print("Start learning")
for step in range(1, num_steps + 1):
batch_x, batch_y = next_batch(batch_size, train_vector, train_label)
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
preds, my_weights, loss, acc = sess.run([logits, intermediate, loss_op, accuracy],
feed_dict={X: batch_x, Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
# print("Predictions %s VS %s" % (preds[0], batch_y[0]))
np.set_printoptions(precision=2)
print("My weights %s" % np.mean(my_weights, axis=0))
print("Optimization Finished!")
print("Testing Accuracy:",
sess.run(accuracy, feed_dict={X: test_vector, Y: test_label}))
writer.close()
```
```
from collections import namedtuple
import json
import matplotlib.pyplot as plt
import pandas as pd
import requests
try:
import geopandas as gpd
except ModuleNotFoundError:
!pip install geopandas
import geopandas as gpd
#get the Folium library for map generation
try:
import folium
except ModuleNotFoundError:
!pip install folium -q
import folium
#get the descartes library for map generation
try:
import descartes
except ModuleNotFoundError:
!pip install descartes -q
import descartes
# Read in the data from the URL with Python's requests library
url = 'https://gcc.azure-api.net/traffic/carparks?format=json'
response = requests.get(url)
data = response.json()
# Coordinates = namedtuple('Coordinates', 'latitude longitude')
def extract_parking_data(data: dict) -> pd.DataFrame:
data = data['d2lm$d2LogicalModel']['d2lm$payloadPublication']['d2lm$situation']
parking_records = []
for record in data:
location_data = record['d2lm$situationRecord']
coords_pt = location_data['d2lm$groupOfLocations']['d2lm$locationContainedInGroup']['d2lm$pointByCoordinates']
coords_pt = coords_pt['d2lm$pointCoordinates']
lat = coords_pt['d2lm$latitude']
long = coords_pt['d2lm$longitude']
spaces_available = int(location_data['d2lm$totalCapacity']) - int(location_data['d2lm$occupiedSpaces'])
name = location_data['d2lm$carParkIdentity']
parking_records.append({
'name': name,
'latitude': lat,
'longitude': long,
'spaces': spaces_available
})
return pd.DataFrame(parking_records)
df = extract_parking_data(data)
df.head(9)
```
The following `clean_name` method is applied to the `name` column of the DataFrame - this performs some simple data cleaning operations on the text.
Data Cleaning is a key part of any data analytics task, including in the Urban Analytics domain!
```
def clean_name(x):
    """ Function to perform data cleaning on the car park name
    returned from the API
    """
    x = x.replace('&amp;', '&').replace('\t', '')
    x = x.split(":CPG")[0]
    return x
df['name'] = df['name'].apply(clean_name)
# further cleaning/preparation: convert latitude/longitude to numeric floating point types
df['latitude'] = pd.to_numeric(df['latitude'])
df['longitude'] = pd.to_numeric(df['longitude'])
df
```
For geospatial analysis, there's a very good library that builds on top of Pandas and adds geometry/geography-based data types and operations - **GeoPandas**.
Below we convert the DataFrame extracted from the API into a **GeoDataFrame** - this is a data structure provided by GeoPandas that subclasses the DataFrame class, and augments it with geospatial functionality.
```
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.longitude, df.latitude))
gdf.head()
```
A quick (and dirty) plot of the relative locations of the car parks - not a proper visualization!
```
gdf.plot()
plt.title("Glasgow Car Parks", fontsize=16)
plt.show()
```
### Asking Questions of this data
**Suppose we arrive at a given car park and find that it has been closed. We want to find the nearest car park to this location.**
```
# We use the 'nearest_points' operation from the Shapely library
from shapely.ops import nearest_points
def get_nearest(carpark):
    point = carpark.geometry
    idx = carpark.name  # Series.name is the row's index label here, not the 'name' column
    multipoint = gdf.drop(idx, axis=0).geometry.unary_union  # all other car parks merged into one MultiPoint
    _, nearest_geom = nearest_points(point, multipoint)
    return gdf[gdf.geometry == nearest_geom]['name'].item()
gdf['nearest_car_park'] = gdf.apply(get_nearest, axis=1)
gdf.head()
gdf[['name', 'nearest_car_park']]
# create leaflet web map centered/zoomed to Glasgow
GLASGOW = (55.871, -4.2518)
ZOOM_START = 13
M = folium.Map(location=GLASGOW, zoom_start=ZOOM_START, tiles='cartodbpositron')
# add red markers for each car park
cols = ['latitude', 'longitude', 'name', 'spaces']
for lat, lng, name, spaces in gdf[cols].values:
folium.CircleMarker(location=(float(lat), float(lng)), radius=5, color='#dc143c',
fill=True, fill_color='#dc143c', tooltip=f"{name}: <br/><b>{spaces}</b> spaces").add_to(M)
# show the Map
M
```
### Reading GeoJSON: Public Parks, Glasgow
Open Dataset - a GeoJSON file with data about public parks in Glasgow.
http://opendata.uksouth.cloudapp.azure.com/dataset/e42cc638-989b-43e1-a666-8f9ccc54fe88/resource/965b2846-8081-429e-a44c-8cd70115b8d4/download/country-parks.geojson
```
url = 'http://opendata.uksouth.cloudapp.azure.com/dataset/e42cc638-989b-43e1-a666-8f9ccc54fe88/resource/965b2846-8081-429e-a44c-8cd70115b8d4/download/country-parks.geojson'
gdf2 = gpd.read_file(url)
centre = (55.8, -4.2518)
M = folium.Map(location=centre, zoom_start=11, tiles='cartodbpositron')
# show the parks on the map
folium.GeoJson(
gdf2[['NAME', 'geometry']].to_json(), # convert the GeoDataFrame to GeoJSON
name="Parks in Glasgow",
show=True,
tooltip=folium.features.GeoJsonTooltip(
fields=['NAME'],
aliases=['Park name:'],
)
).add_to(M)
M
```
Just to show we can, the car park points can be added to the same map.
```
# add red markers for each car park
cols = ['latitude', 'longitude', 'name', 'spaces']
for lat, lng, name, spaces in gdf[cols].values:
folium.CircleMarker(location=(lat, lng), radius=5, color='#dc143c',
fill=True, fill_color='#dc143c', tooltip=f"{name}: <br/><b>{spaces}</b> spaces").add_to(M)
M
gdf2
```
We can access the geometric centroid of each park using the `centroid` attribute that exists on a GeoSeries.
```
gdf2['geometry'].centroid
```
This can be plotted on the Folium map
```
for i, row in gdf2.iterrows():
    centroid = row.geometry.centroid.coords[0]  # Shapely coordinate order: (longitude, latitude)
    print(centroid)
    # Folium expects (latitude, longitude), so reverse the coordinate pair
    folium.CircleMarker(location=centroid[::-1], radius=2, color='green',
                        fill=True, fill_color='pink').add_to(M)
M
```
# MAST Table Access Protocol PanSTARRS 1 DR2 Demo
This tutorial demonstrates how to use PyVO (with astroquery for name resolution) to access PanSTARRS 1 Data Release 2 via a Virtual Observatory standard Table Access Protocol (TAP) service at MAST, and how to work with the resulting data. It relies on Python 3, PyVO, and astroquery, as well as some other common scientific packages.
***
### Table of Contents
1. [TAP Service Introduction](#TAP-Service-Introduction)
2. [Imports](#Imports)
3. [Connecting to a TAP Service](#Connecting-to-a-TAP-Service)
4. [Use Cases](#Use-Cases)
5. [Additional Resources](#Additional-Resources)
6. [About This Notebook](#About-this-Notebook)
***
## TAP Service Introduction
Table Access Protocol (TAP) services allow more direct and flexible access to astronomical data than the simpler types of IVOA standard data services. Queries are built with the SQL-like Astronomical Data Query Language (ADQL), and can include geographic / spatial queries as well as filtering on other characteristics of the data. This also allows the user fine-grained control over the returned columns, unlike the fixed set of columns returned from cone, image, and spectral services.
For this example, we'll be using the astropy-affiliated PyVO client, which is interoperable with other valid TAP services, including those at MAST. PyVO documentation is available at ReadTheDocs: https://pyvo.readthedocs.io
We'll be using PyVO to call the TAP service at MAST serving PanSTARRS 1 Data Release 2, now with individual detection information. The schema is described within the service, and we'll show how to inspect it. The schema is also the same as the one available via the CasJobs interface, with an additional view added for the most common positional queries. CasJobs has its own copy of the schema documentation, which can be accessed through its own site: http://mastweb.stsci.edu/ps1casjobs/
***
## Imports
```
# Use the pyvo library as our client to the data service.
import pyvo as vo
# For resolving objects with tools from MAST
from astroquery.mast import Mast
# For handling ordinary astropy Tables in responses
from astropy.table import Table
# For displaying and manipulating some types of results
%matplotlib inline
import requests
import astropy
import numpy as np
import pylab
import time
import json
from matplotlib import pyplot as plt
# For calling the object name resolver. Note these are Python 3 dependencies
#import sys
#import http.client as httplib
#from urllib.parse import quote as urlencode
#from urllib.request import urlretrieve
# To allow display tweaks for wider response tables
from IPython.core.display import display
from IPython.core.display import HTML
# suppress unimportant unit warnings from many TAP services
import warnings
warnings.filterwarnings("ignore", module="astropy.io.votable.*")
warnings.filterwarnings("ignore", module="pyvo.utils.xml.elements")
```
***
## Connecting to a TAP Service
The PyVO library is able to connect to any TAP service, given the "base" URL as noted in metadata registry resources describing the service. This is the URL for the PanSTARRS 1 DR2 TAP service.
```
TAP_service = vo.dal.TAPService("http://vao.stsci.edu/PS1DR2/tapservice.aspx")
TAP_service.describe()
```
### List available tables
```
TAP_tables = TAP_service.tables
for tablename in TAP_tables.keys():
if not "tap_schema" in tablename:
TAP_tables[tablename].describe()
print("Columns={}".format(sorted([k.name for k in TAP_tables[tablename].columns ])))
print("----")
```
# Use Cases
## Simple Positional Query
This searches the mean object catalog for objects within 0.2 degrees of M87 (RA=187.706, Dec=12.391 in degrees). The view used contains information from the [ObjectThin](https://outerspace.stsci.edu/x/W4Oc) table (which has information on object positions and the number of available measurements) and the [MeanObject](https://outerspace.stsci.edu/x/WYOc) table (which has information on photometry averaged over the multiple epochs of observation).
Note that the results are restricted to objects with `nDetections>1`, where `nDetections` is the total number of times the object was detected on the single-epoch images in any filter at any time. Objects with `nDetections=1` tend to be artifacts, so this is a quick way to eliminate most spurious objects from the catalog.
This query runs in TAP's asynchronous mode, which is a queued batch mode with some overhead and longer timeouts, useful for big catalogs like PanSTARRS. It may not be necessary for all queries to PS1 DR2, but the PyVO client can automatically handle the additional processing required over synchronous mode.
```
job = TAP_service.run_async("""
SELECT objID, RAMean, DecMean, nDetections, ng, nr, ni, nz, ny, gMeanPSFMag, rMeanPSFMag, iMeanPSFMag, zMeanPSFMag, yMeanPSFMag
FROM dbo.MeanObjectView
WHERE
CONTAINS(POINT('ICRS', RAMean, DecMean),CIRCLE('ICRS',187.706,12.391,.2))=1
AND nDetections > 1
""")
TAP_results = job.to_table()
TAP_results
```
## Get DR2 light curve for RR Lyrae star KQ UMa
This time we start with the object name, use the MAST name resolver (which relies on Simbad and NED) to convert it to RA and Dec, and then run a spatial TAP query against the PS1 DR2 mean object catalog at those coordinates.
```
objname = 'KQ UMa'
coords = Mast.resolve_object(objname)
ra,dec = coords.ra.value,coords.dec.value
radius = 1.0/3600.0 # radius = 1 arcsec
query = """
SELECT objID, RAMean, DecMean, nDetections, ng, nr, ni, nz, ny, gMeanPSFMag,
rMeanPSFMag, iMeanPSFMag, zMeanPSFMag, yMeanPSFMag
FROM dbo.MeanObjectView
WHERE
CONTAINS(POINT('ICRS', RAMean, DecMean),CIRCLE('ICRS',{},{},{}))=1
AND nDetections > 1
""".format(ra,dec,radius)
print(query)
job = TAP_service.run_async(query)
TAP_results = job.to_table()
TAP_results
```
### Get Repeated Detection Information
Extract all the objects with the same object ID from the [Detection](https://outerspace.stsci.edu/x/b4Oc) table, which contains all the individual measurements for this source. The results are joined to the [Filter](https://outerspace.stsci.edu/x/nIOc) table to convert the filter numbers to names.
```
objid = TAP_results['objID'][0]
query = """
SELECT
objID, detectID, Detection.filterID as filterID, Filter.filterType, obsTime, ra, dec,
psfFlux, psfFluxErr, psfMajorFWHM, psfMinorFWHM, psfQfPerfect,
apFlux, apFluxErr, infoFlag, infoFlag2, infoFlag3
FROM Detection
NATURAL JOIN Filter
WHERE objID={}
ORDER BY filterID, obsTime
""".format(objid)
print(query)
job = TAP_service.run_async(query)
detection_TAP_results = job.to_table()
detection_TAP_results
```
### Plot the light curves
The `psfFlux` values from the Detection table are converted from Janskys to AB magnitudes. Measurements in the 5 different filters are plotted separately.
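As a quick standalone check of this conversion (a sketch separate from the notebook's tables; the helper name is ours): the AB system zero point corresponds to a flux density of about 3631 Jy, which should map to a magnitude of approximately zero.

```python
import numpy as np

def jy_to_abmag(flux_jy):
    """Convert a flux density in Janskys to an AB magnitude."""
    return -2.5 * np.log10(flux_jy) + 8.90

# The AB zero point is ~3631 Jy, so the result should be ~0 mag
print(jy_to_abmag(3631.0))
```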
```
# convert flux in Jy to magnitudes
t = detection_TAP_results['obsTime']
mag = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
#detection_TAP_results['filterType'] is a byte string, compare accordingly:
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where(detection_TAP_results['filterType']==filter)
pylab.plot(t[w],mag[w],'-o')
pylab.ylabel(filter.decode('ascii')+' [mag]')
pylab.xlim(xlim)
pylab.gca().invert_yaxis()
if i==0:
pylab.title(objname)
pylab.xlabel('Time [MJD]')
pylab.tight_layout()
```
Plot differences from the mean magnitudes in the initial search.
```
# convert flux in Jy to magnitudes
t = detection_TAP_results['obsTime']
mag = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
#detection_TAP_results['filterType'] is a byte string, compare accordingly:
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where(detection_TAP_results['filterType']==filter)
magmean = TAP_results[filter.decode('ascii')+'MeanPSFMag'][0]
pylab.plot(t[w],mag[w] - magmean,'-o')
    pylab.ylabel('{} [mag - {:.2f}]'.format(filter.decode('ascii'),magmean))
pylab.xlim(xlim)
pylab.gca().invert_yaxis()
if i==0:
pylab.title(objname)
pylab.xlabel('Time [MJD]')
pylab.tight_layout()
```
### Identify bad data
There is one clearly bad $z$ magnitude with a very large difference. Select the bad point and look at it in more detail.
Note that indexing a table (or numpy array) with a logical expression selects just the rows where that expression is true.
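As a minimal standalone illustration of this indexing pattern (plain NumPy with made-up magnitudes, not the notebook's data):

```python
import numpy as np

mags = np.array([15.2, 15.3, 19.8, 15.1])  # one obviously bad measurement
mean_mag = 15.2
# A logical expression produces a boolean mask; indexing with it keeps only the True rows
bad = mags[np.abs(mags - mean_mag) > 2]
print(bad)  # -> [19.8]
```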
```
detection_TAP_results[ (detection_TAP_results['filterType']=='z') & (np.abs(mag-TAP_results['zMeanPSFMag'][0]) > 2) ]
```
From examining this table, it looks like `psfQfPerfect` is bad. This flag is the PSF-weighted fraction of unmasked pixels in the image (see the [documentation](https://outerspace.stsci.edu/x/IoOc) for more details). Values near unity indicate good data that is not significantly affected by bad pixels.
Check all the `psfQfPerfect` values for the $z$ filter to see if this value really is unusual. The list of values below are sorted by magnitude. The bad point is the only value with `psfQfPerfect` < 0.95.
```
w = np.where(detection_TAP_results['filterType']=='z')
zdtab = detection_TAP_results[w]
zdtab['mag'] = mag[w]
zdtab['dmag'] = zdtab['mag'] - TAP_results['zMeanPSFMag'][0]
ii = np.argsort(-np.abs(zdtab['dmag']))
zdtab = zdtab[ii]
zdtab['objID','obsTime','mag','dmag','psfQfPerfect']
```
### Repeat the plot with bad psfQfPerfect values excluded
Do the plot again but exclude low psfQfPerfect values.
```
# convert flux in Jy to magnitudes
t = detection_TAP_results['obsTime']
mag = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
magmean = 0.0*mag
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
magmean[detection_TAP_results['filterType']==filter] = TAP_results[filter.decode('ascii')+'MeanPSFMag'][0]
dmag = mag - magmean
dmag1 = dmag[detection_TAP_results['psfQfPerfect']>0.9]
# fix the x and y axis ranges
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
# flip axis direction for magnitude
ylim = np.array([dmag1.max(),dmag1.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where((detection_TAP_results['filterType']==filter) & (detection_TAP_results['psfQfPerfect']>0.9))[0]
pylab.plot(t[w],dmag[w],'-o')
    pylab.ylabel('{} [mag - {:.2f}]'.format(filter.decode('ascii'),magmean[w[0]]))
pylab.xlim(xlim)
pylab.ylim(ylim)
if i==0:
pylab.title(objname)
pylab.xlabel('Time [MJD]')
pylab.tight_layout()
```
### Plot versus the periodic phase instead of epoch
Plot versus phase using known RR Lyr period from Simbad (table [J/AJ/132/1202/table4](http://vizier.u-strasbg.fr/viz-bin/VizieR-3?-source=J/AJ/132/1202/table4&-c=KQ%20UMa&-c.u=arcmin&-c.r=2&-c.eq=J2000&-c.geom=r&-out.max=50&-out.form=HTML%20Table&-oc.form=sexa)).
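Phase folding maps each observation time onto the interval [0, 1) using the known period; a minimal sketch with hypothetical observation times (the times here are illustrative, only the period is from Simbad):

```python
import numpy as np

period = 0.48636  # days, from Simbad
t_obs = np.array([0.0, 0.25, 0.48636, 1.0])  # hypothetical observation times in days
# Each time maps to its fractional position within one period
phase = (t_obs % period) / period
print(phase)
```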
```
period = 0.48636 #days, from Simbad
# convert flux in Jy to magnitudes
t = (detection_TAP_results['obsTime'] % period) / period
mag = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
magmean = 0.0*mag
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
magmean[detection_TAP_results['filterType']==filter] = TAP_results[filter.decode('ascii')+'MeanPSFMag'][0]
dmag = mag - magmean
dmag1 = dmag[detection_TAP_results['psfQfPerfect']>0.9]
# fix the x and y axis ranges
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
# flip axis direction for magnitude
ylim = np.array([dmag1.max(),dmag1.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where((detection_TAP_results['filterType']==filter) & (detection_TAP_results['psfQfPerfect']>0.9))[0]
w = w[np.argsort(t[w])]
pylab.plot(t[w],dmag[w],'-o')
    pylab.ylabel('{} [mag - {:.2f}]'.format(filter.decode('ascii'),magmean[w[0]]))
pylab.xlim(xlim)
pylab.ylim(ylim)
if i==0:
pylab.title(objname)
pylab.xlabel('Phase')
pylab.tight_layout()
```
## Repeat search using eclipsing binary KIC 2161623
From [Villanova Kepler Eclipsing Binaries](http://keplerebs.villanova.edu)
```
objname = 'KIC 2161623'
coords = Mast.resolve_object(objname)
ra,dec = coords.ra.value,coords.dec.value
radius = 1.0/3600.0 # radius = 1 arcsec
query = """
SELECT objID, RAMean, DecMean, nDetections, ng, nr, ni, nz, ny, gMeanPSFMag, rMeanPSFMag, iMeanPSFMag, zMeanPSFMag, yMeanPSFMag
FROM dbo.MeanObjectView
WHERE
CONTAINS(POINT('ICRS', RAMean, DecMean),CIRCLE('ICRS',{},{},{}))=1
AND nDetections > 1
""".format(ra,dec,radius)
print(query)
job = TAP_service.run_async(query)
TAP_results = job.to_table()
TAP_results
```
### Get Repeated Detection Information
This time include the `psfQfPerfect` limit directly in the database query.
```
objid = TAP_results['objID'][0]
query = """
SELECT
objID, detectID, Detection.filterID as filterID, Filter.filterType, obsTime, ra, dec,
psfFlux, psfFluxErr, psfMajorFWHM, psfMinorFWHM, psfQfPerfect,
apFlux, apFluxErr, infoFlag, infoFlag2, infoFlag3
FROM Detection
NATURAL JOIN Filter
WHERE objID={}
AND psfQfPerfect >= 0.9
ORDER BY filterID, obsTime
""".format(objid)
print(query)
job = TAP_service.run_async(query)
detection_TAP_results = job.to_table()
# add magnitude and difference from mean
detection_TAP_results['magmean'] = 0.0
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
detection_TAP_results['magmean'][detection_TAP_results['filterType']==filter] = TAP_results[filter.decode('ascii')+'MeanPSFMag'][0]
detection_TAP_results['mag'] = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
detection_TAP_results['dmag'] = detection_TAP_results['mag']-detection_TAP_results['magmean']
detection_TAP_results
t = detection_TAP_results['obsTime']
dmag = detection_TAP_results['dmag']
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
ylim = np.array([dmag.max(),dmag.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where(detection_TAP_results['filterType']==filter)[0]
pylab.plot(t[w],dmag[w],'-o')
magmean = detection_TAP_results['magmean'][w[0]]
    pylab.ylabel('{} [mag - {:.2f}]'.format(filter.decode('ascii'),magmean))
pylab.xlim(xlim)
pylab.ylim(ylim)
if i==0:
pylab.title(objname)
pylab.xlabel('Time [MJD]')
pylab.tight_layout()
```
### Plot versus phase using known period
Eclipsing binaries vary by essentially the same amount in all filters, since the eclipse is a geometrical effect, so we combine the data into a single light curve. We fold it using the known period and plot versus phase.
```
period = 2.2834698
bjd0 = 54999.599837
t = ((detection_TAP_results['obsTime']-bjd0) % period) / period
dmag = detection_TAP_results['dmag']
w = np.argsort(t)
t = t[w]
dmag = dmag[w]
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
ylim = np.array([dmag.max(),dmag.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,6))
pylab.plot(t,dmag,'-o')
pylab.xlim(xlim)
pylab.ylim(ylim)
pylab.xlabel('Phase')
pylab.ylabel('Delta magnitude from mean [mag]')
pylab.title(objname)
pylab.tight_layout()
```
## Repeat search for another eclipsing binary KIC 8153568
```
objname = 'KIC 8153568'
coords = Mast.resolve_object(objname)
ra,dec = coords.ra.value,coords.dec.value
radius = 1.0/3600.0 # radius = 1 arcsec
query = """
SELECT objID, RAMean, DecMean, nDetections, ng, nr, ni, nz, ny, gMeanPSFMag, rMeanPSFMag, iMeanPSFMag, zMeanPSFMag, yMeanPSFMag
FROM dbo.MeanObjectView
WHERE
CONTAINS(POINT('ICRS', RAMean, DecMean),CIRCLE('ICRS',{},{},{}))=1
AND nDetections > 1
""".format(ra,dec,radius)
print(query)
job = TAP_service.run_async(query)
TAP_results = job.to_table()
TAP_results
objid = TAP_results['objID'][0]
query = """
SELECT
objID, detectID, Detection.filterID as filterID, Filter.filterType, obsTime, ra, dec,
psfFlux, psfFluxErr, psfMajorFWHM, psfMinorFWHM, psfQfPerfect,
apFlux, apFluxErr, infoFlag, infoFlag2, infoFlag3
FROM Detection
NATURAL JOIN Filter
WHERE objID={}
AND psfQfPerfect >= 0.9
ORDER BY filterID, obsTime
""".format(objid)
print(query)
job = TAP_service.run_async(query)
detection_TAP_results = job.to_table()
# add magnitude and difference from mean
detection_TAP_results['magmean'] = 0.0
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
detection_TAP_results['magmean'][detection_TAP_results['filterType']==filter] = TAP_results[filter.decode('ascii')+'MeanPSFMag'][0]
detection_TAP_results['mag'] = -2.5*np.log10(detection_TAP_results['psfFlux']) + 8.90
detection_TAP_results['dmag'] = detection_TAP_results['mag']-detection_TAP_results['magmean']
detection_TAP_results
t = detection_TAP_results['obsTime']
dmag = detection_TAP_results['dmag']
xlim = np.array([t.min(),t.max()])
xlim = xlim + np.array([-1,1])*0.02*(xlim[1]-xlim[0])
ylim = np.array([dmag.max(),dmag.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(10,10))
for i, filter in enumerate([b'g',b'r',b'i',b'z',b'y']):
pylab.subplot(511+i)
w = np.where(detection_TAP_results['filterType']==filter)[0]
pylab.plot(t[w],dmag[w],'-o')
magmean = detection_TAP_results['magmean'][w[0]]
    pylab.ylabel('{} [mag - {:.2f}]'.format(filter.decode('ascii'),magmean))
pylab.xlim(xlim)
pylab.ylim(ylim)
if i==0:
pylab.title(objname)
pylab.xlabel('Time [MJD]')
pylab.tight_layout()
```
Eclipsing binaries vary by essentially the same amount in all filters, since the eclipse is a geometrical effect, so we combine the data into a single light curve.
Wrap using known period and plot versus phase. Plot two periods of the light curve this time.
This nice light curve appears to show a secondary eclipse.
```
period = 3.6071431
bjd0 = 54999.289794
t = ((detection_TAP_results['obsTime']-bjd0) % period) / period
dmag = detection_TAP_results['dmag']
w = np.argsort(t)
# extend to two periods
nw = len(w)
w = np.append(w,w)
t = t[w]
# add one to second period
t[-nw:] += 1
dmag = dmag[w]
xlim = [0,2.0]
ylim = np.array([dmag.max(),dmag.min()])
ylim = ylim + np.array([-1,1])*0.02*(ylim[1]-ylim[0])
pylab.rcParams.update({'font.size': 14})
pylab.figure(1,(12,6))
pylab.plot(t,dmag,'-o')
pylab.xlim(xlim)
pylab.ylim(ylim)
pylab.xlabel('Phase')
pylab.ylabel('Delta magnitude from mean [mag]')
pylab.title(objname)
pylab.tight_layout()
```
***
# Additional Resources
## Table Access Protocol
* IVOA standard for RESTful web service access to tabular data
* http://www.ivoa.net/documents/TAP/
## PanSTARRS 1 DR 2
* Catalog for PanSTARRS with additional Detection information
* https://outerspace.stsci.edu/display/PANSTARRS/
## Astronomical Query Data Language (2.0)
* IVOA standard for querying astronomical data in tabular format, with geometric search support
* http://www.ivoa.net/documents/latest/ADQL.html
## PyVO
* An astropy-affiliated package
* Finds and retrieves astronomical data available from archives that support standard IVOA Virtual Observatory service protocols
* https://pyvo.readthedocs.io/en/latest/index.html
***
## About this Notebook
**Authors:** Rick White & Theresa Dower, STScI Archive Scientist & Software Engineer
**Updated On:** 01/09/2020
***
<img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/>
```
from sklearn import linear_model
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from astropy.stats import LombScargle
%matplotlib inline
plt.style.use('seaborn')
# in order to use custom modules in parent path
import os
import sys
nb_dir = os.path.split(os.getcwd())[0]
if nb_dir not in sys.path:
sys.path.append(nb_dir)
from mfilter.implementations.simulate import SimulateSignal
from mfilter.types.arrays import Array
from mfilter.types.frequencyseries import FrequencySamples
from mfilter.implementations.regressions import *
```
$$ \int_{-\infty}^{\infty} \frac{\tilde{x}(f)\tilde{h}(f)}{S_n(f)} e^{2\pi i f t_0} df$$
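In discrete, evenly sampled form this integral is an inverse FFT of the whitened cross-spectrum, with the peak of the output locating the template in the data. A minimal self-contained sketch (plain NumPy, with a flat PSD for white noise; the irregularly sampled case handled below needs the regression-based transforms instead):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
template = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)  # Gaussian pulse
shift = 17
data = np.roll(template, shift) + 0.01 * rng.normal(size=n)  # template hidden at lag 17

x_ft = np.fft.fft(data)
h_ft = np.fft.fft(template)
psd = np.ones(n)  # white noise: flat power spectral density

# Inverse FFT of the whitened cross-spectrum = matched-filter output as a function of lag t0
z = np.fft.ifft(x_ft * h_ft.conjugate() / psd).real
print(z.argmax())  # -> 17, the true shift
```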
```
n_samples = 100
freq = [0.5/8000, 0.001, 0.01, 0.1]
weights=[1.5, 0.4, 0.4, 0.4]
config="mix1"
pos_start_peaks = 0
n_peaks = 1
simulated = SimulateSignal(n_samples, freq, weights=weights, noise_level=0.1,
dwindow="tukey", underlying_delta=50)
weights2= np.array(weights) / 2
simulated2 = SimulateSignal(n_samples, freq, weights=weights2, noise_level=0.2,
dwindow="tukey", underlying_delta=50)
times = simulated.get_times(configuration=config)
# data = simulated.get_data(pos_start_peaks=pos_start_peaks, n_peaks=n_peaks,
# with_noise=True,
# configuration=config)
noise = simulated.get_noise(None)
temp = simulated.get_data(pos_start_peaks=n_samples//4, n_peaks=2, with_noise=False,
configuration=config)
temp = abs(temp)
temp2 = simulated.get_data(pos_start_peaks=n_samples//4, n_peaks=0.5,
configuration=config)
temp2 = abs(temp2)
temp3 = simulated2.get_data(pos_start_peaks=n_samples//4, n_peaks=1, with_noise=False,
configuration=config)
temp3 = abs(temp3)
temp4 = simulated2.get_data(pos_start_peaks=n_samples//4, n_peaks=1.5, with_noise=False,
configuration=config)
temp4 = abs(temp4)
data = temp + noise
# templates with same energy (any value, here we take the energy of the data)
E = np.sum(data**2)
E_n = np.sum(noise**2)
print("ratio E.signal/E.noise: ", E/E_n)
# E = E_n
#temp *= E / np.sum(temp**2)
#temp2 *= E / np.sum(temp2**2)
plt.figure(figsize=(15, 4))
plt.plot(times, data, label='data')
plt.plot(times, temp, label='template 1')
plt.plot(times, temp2, label='template 2')
plt.plot(times, temp3, label='template 3')
plt.plot(times, temp4, label='template 4')
plt.legend()
plt.figure()
plt.plot(times, noise, 'k', label="noise")
samples_per_peak = 5
T = max(times) - min(times)
fs = n_samples / T
print("sampling rate is: ", (n_samples / T))
df = 1 / T / samples_per_peak
max_freq = 2 * max(freq) + 20 * df
freqs = FrequencySamples(Array(times),
minimum_frequency=0,
maximum_frequency=max_freq)
print(len(freqs))
def lomb_psd(values, freqs, times):
lomb = LombScargle(times, values, normalization="standard")
if freqs.has_zero:
zero_idx = freqs.zero_idx
psd = np.zeros(len(freqs))
if zero_idx == 0:
psd[1:] = lomb.power(freqs.data[1:])
psd[0] = 0.0000001
else:
neg_freq, pos_freq = freqs.split_by_zero()
right_psd = lomb.power(pos_freq)
left_psd = lomb.power(np.abs(neg_freq))
psd[:zero_idx] = left_psd
psd[zero_idx] = 0.000001
psd[zero_idx+1:] = right_psd
else:
psd = lomb.power(np.abs(freqs.data))
return psd
psd = lomb_psd(noise, freqs, times)
psd_data = lomb_psd(data, freqs, times)
print(len(freqs), len(times))
plt.figure()
plt.plot(freqs.data, psd)
plt.plot(freqs.data, psd_data, 'g')
# plt.xlim([0, 0.2])
# plt.ylim([0, 0.02])
T
# say that my psd of noise is the square of the ft of the noise
#psd = (abs(noise_ft)**2)
#print(np.where(psd == 0.0))
#psd[0] = 0.000000001
reg = ElasticNetRegression(alpha=0.001, l1_ratio=0.7)
reg = RidgeRegression(alpha=0.01)
F = Dictionary(times, freqs)
data_ft = reg.get_ft(data, F)
temp_ft = reg.get_ft(temp, F)
temp2_ft = reg.get_ft(temp2, F)
temp3_ft = reg.get_ft(temp3, F)
temp4_ft = reg.get_ft(temp4, F)
noise_ft = reg.get_ft(noise, F)
plt.plot(freqs.data, abs(noise_ft)**2)
plt.plot(freqs.data, psd , 'g', alpha=0.5)
print("lambda is: ", np.sqrt(1 / df**2), np.sqrt(df))
def get_z(ft, temp_ft, psd, F):
corr = ft * temp_ft.conjugate() / psd
return np.dot(F.matrix, corr).real
z_data = get_z(data_ft, temp_ft, psd, F)
z_data2 = get_z(data_ft, temp2_ft, psd, F)
z_data3 = get_z(data_ft, temp3_ft, psd, F)
z_data4 = get_z(data_ft, temp4_ft, psd, F)
z_noise = get_z(noise_ft, temp_ft, psd, F)
z_noise2 = get_z(noise_ft, temp2_ft, psd, F)
z_noise3 = get_z(noise_ft, temp3_ft, psd, F)
z_noise4 = get_z(noise_ft, temp4_ft, psd, F)
plt.plot(times - times[n_samples//2], np.roll(z_data, n_samples//2))
# plt.plot(times - times[n_samples//2], np.roll(noise, n_samples//2), 'r', alpha=0.4)
# plt.plot(times - times[n_samples//2], np.random.normal(0, 0.05, n_samples), 'g', alpha=0.5)
h1 = get_z(temp_ft, temp_ft, psd, F)
plt.plot(times - times[n_samples//2], np.roll(np.sqrt(abs(h1)), n_samples//2))
def get_sigma(temp_ft, psd):
return np.sum(temp_ft * temp_ft.conjugate() / psd)
sigma_temp = get_sigma(temp_ft, psd)
sigma2_temp = get_sigma(temp2_ft, psd)
sigma3_temp = get_sigma(temp3_ft, psd)
sigma4_temp = get_sigma(temp4_ft, psd)
print("var is ", sigma_temp, sigma2_temp, sigma3_temp, sigma4_temp)
snr_data = z_data / np.sqrt(sigma_temp.real)
snr_data2 = z_data2 / np.sqrt(sigma2_temp.real)
snr_data3 = z_data3 / np.sqrt(sigma3_temp.real)
snr_data4 = z_data4 / np.sqrt(sigma4_temp.real)
snr_noise = z_noise / np.sqrt(sigma_temp.real)
snr_noise2 = z_noise2 / np.sqrt(sigma2_temp.real)
snr_noise3 = z_noise3 / np.sqrt(sigma3_temp.real)
snr_noise4 = z_noise4 / np.sqrt(sigma4_temp.real)
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(15,4))
ax1.plot(times - times[n_samples//2], np.roll(snr_data, n_samples//2), 'r')
ax1.plot(times - times[n_samples//2], np.roll(snr_data2, n_samples//2), 'b')
ax1.plot(times - times[n_samples//2], np.roll(snr_data3, n_samples//2), 'g')
ax1.plot(times - times[n_samples//2], np.roll(snr_data4, n_samples//2), 'k')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise, n_samples//2), 'r')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise2, n_samples//2), 'b')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise3, n_samples//2), 'g', alpha=0.5)
ax2.plot(times - times[n_samples//2], np.roll(snr_noise4, n_samples//2), 'k', alpha=0.5)
np.sum(data_ft.conjugate() * temp_ft / psd) / np.sqrt(sigma_temp)
lomb = LombScargle(times, snr_data2, normalization="standard")
if freqs.has_zero:
zero_idx = freqs.zero_idx
neg_freq, pos_freq = freqs.split_by_zero()
right_psd = lomb.power(pos_freq)
left_psd = lomb.power(np.abs(neg_freq))
psd_z = np.zeros(len(freqs))
psd_z[:zero_idx] = left_psd
psd_z[zero_idx] = 0.000001
psd_z[zero_idx+1:] = right_psd
else:
psd_z = lomb.power(np.abs(freqs.data))
plt.plot(freqs.data, psd_z)
norm = sp.stats.norm(0, (sigma_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise))))
print("for data: ", (1 - norm.cdf(max(snr_data))))
norm = sp.stats.norm(0, (sigma2_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise2))))
print("for data: ", (1 - norm.cdf(max(snr_data2))))
norm = sp.stats.norm(0, (sigma3_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise3))))
print("for data: ", (1 - norm.cdf(max(snr_data3))))
norm = sp.stats.norm(0, (sigma4_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise4))))
print("for data: ", (1 - norm.cdf(max(snr_data4))))
# or using same sigma as the original noise
norm = sp.stats.norm(0, 0.)
plt.figure(figsize=(15, 4))
plt.plot(times, np.dot(F.matrix, data_ft / np.sqrt(psd)).real, 'k')
plt.plot(times, np.roll(np.dot(F.matrix, temp_ft / np.sqrt(psd)).real, snr_data.argmax() + 1), '.-', label="temp1")
# plt.plot(times, np.roll(np.dot(F.matrix, temp2_ft / np.sqrt(psd)).real, snr_data2.argmax() + 1), 'o--', label="temp2")
# plt.plot(times, np.roll(np.dot(F.matrix, temp3_ft / np.sqrt(psd)).real, snr_data3.argmax()+1), label="temp3")
# plt.plot(times, np.roll(np.dot(F.matrix, temp4_ft / np.sqrt(psd)).real, snr_data4.argmax()+1), label="temp4")
plt.legend()
plt.plot((times + times[n_samples//3] + times[snr_data.argmax()]) % max(times), temp)
plt.plot(times, np.roll(temp, n_samples//3 + snr_data.argmax()))
max(times)
norm.stats()
nn = np.random.normal(0, 5, 500)
dtt = 0.4
tt = np.arange(500) * dtt
plt.plot(tt, nn)
nn_ft = sp.fft(nn)
freq = np.fft.fftfreq(500, d=dtt)
f, nn_psd = sp.signal.welch(nn, 1/dtt, return_onesided=False, nfft=500)
plt.plot(freq, np.abs(nn_ft)**2 / 500)
plt.plot(f, nn_ft / nn_psd)
nn_r = np.fft.ifft(nn_ft / nn_psd)
plt.plot(tt, nn_r.real)
plt.plot(tt, nn_r.imag, 'r')
print(np.std(nn_r.real), np.std(nn))
```
# Deps
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai import *
from fastai.text import *
from fastai.vision import *
from fastai.callbacks import *
def CountVocab(ds):
return len(set([p for o in ds.items for p in o]))
def VocabTransfered(path_dict,itos_new):
old_itos = pickle.load(open(path_dict, 'rb'))
vocab = [];
n_vocab_trns = 0
for i,w in enumerate(itos_new):
if w in old_itos:
vocab.append(w)
n_vocab_trns +=1
return n_vocab_trns, len(itos_new), vocab;
def SampleTexts(df,max_tokens = 100000000):
lens = np.array(df.apply(lambda x : len(re.findall(r'\b',x.text))/2,axis=1))
n_tokens = np.sum(lens)
if n_tokens <= max_tokens:
return range(0,df.shape[0]), n_tokens
#if np.sum(lens) > max_tokens:
iDec = (-lens).argsort()
cumSum = np.cumsum(lens[iDec])
iCut = np.argmax(cumSum>max_tokens)
return iDec[:iCut], cumSum[iCut-1]
def calc_f1(cm):
"Calculate the various f1 scores using the confustion matrix"
n_rows, n_cols = cm.shape
if(n_rows != n_cols):
return 0
f1 = np.zeros(n_rows)
tp = 0
    for i in range(0,n_rows):
tp += cm[i,i]
col_sum = np.sum(cm[:,i])
if col_sum == 0 or cm[i,i] == 0:
f1[i] = 0
else:
pr = cm[i,i]/col_sum
rc = cm[i,i]/(np.sum(cm[i,:]))
f1[i] = (2*pr*rc)/(pr+rc)
f1_macro = np.mean(f1)
f1_micro = tp/np.sum(cm)
f1_weighted = np.sum(cm,axis=1)@f1/np.sum(cm)
return f1_micro,f1_macro,f1_weighted,f1
def calc_f1_batch(x,y):
"Calculate the various f1 scores as using prediction and target"
n_classes = 3
tp = np.zeros(n_classes)
tp_plus_fp = np.zeros(n_classes)
tp_plus_fn = np.zeros(n_classes)
f1 = np.zeros(n_classes)
# on_batch_end - uses += to work with batches as metric
preds = x.argmax(1)
targets = y
for i in range(0,n_classes):
tp[i] += ((preds==i) * (targets==i)).float().sum()
tp_plus_fp[i] += (preds==i).float().sum()
tp_plus_fn[i] += (targets==i).float().sum()
# on_epoch_end
for i in range(0,n_classes):
if tp[i] == 0 or tp_plus_fp[i] == 0:
f1[i] = 0
else:
pr = tp[i]/tp_plus_fp[i]
rc = tp[i]/tp_plus_fn[i]
f1[i] = (2*pr*rc)/(pr+rc)
n = tp_plus_fn.sum()
f1_macro = np.mean(f1)
f1_micro = tp.sum()/n
f1_weighted = tp_plus_fn@f1/n
return torch.from_numpy(np.array(f1_weighted))
class f1(Callback):
"Calculate f1 score in a callback"
def __init__(self, n_classes, method = 'f1_micro'):
self.n_classes = n_classes
self.tp = np.zeros(n_classes)
self.tp_plus_fp = np.zeros(n_classes)
self.tp_plus_fn = np.zeros(n_classes)
self.f1 = np.zeros(n_classes)
self.method = method
def on_epoch_begin(self, **kwargs):
self.tp.fill(0)
self.tp_plus_fp.fill(0)
self.tp_plus_fn.fill(0)
self.f1.fill(0)
def on_batch_end(self, last_output, last_target, **kwargs):
preds = last_output.argmax(1)
targets = last_target
for i in range(0,self.n_classes):
self.tp[i] += ((preds==i) * (targets==i)).float().sum()
self.tp_plus_fp[i] += (preds==i).float().sum()
self.tp_plus_fn[i] += (targets==i).float().sum()
def on_epoch_end(self, **kwargs):
for i in range(0,self.n_classes):
if self.tp[i] == 0 or self.tp_plus_fp[i] == 0:
self.f1[i] = 0
else:
pr = self.tp[i]/self.tp_plus_fp[i]
rc = self.tp[i]/self.tp_plus_fn[i]
self.f1[i] = (2*pr*rc)/(pr+rc)
if self.method == 'f1_macro':
self.metric = np.mean(self.f1)
else:
n = self.tp_plus_fn.sum()
if self.method == 'f1_micro':
self.metric = self.tp.sum()/n
elif self.method == 'f1_weighted':
self.metric = self.tp_plus_fn@self.f1/n
class Results():
# class containing mutable data type to return results from a callback
def __init__(self):
self.vals = []
self.epoch = []
def add(self,epoch,val):
self.epoch.append(epoch)
self.vals.append(val)
class RecordTestImprovement(TrackerCallback):
"A `TrackerCallback` that calculates the test metric every time the validation metric improves."
def __init__(self, learn:Learner, results:Results, data_test, monitor:str='val_loss', mode:str='auto', every:str='improvement'):
super().__init__(learn, monitor=monitor, mode=mode)
self.results = results
self.every = every
self.data_test = data_test
def on_epoch_end(self, epoch, **kwargs:Any)->None:
"Compare the value monitored to its best score and maybe save the model."
current = self.get_monitor_value()
if current is not None and self.operator(current, self.best):
previous_data = self.learn.data
self.learn.data = self.data_test
x,y = self.learn.get_preds(pbar = self.learn.recorder.pbar);
f1 = calc_f1_batch(x,y)
self.learn.data = previous_data
self.results.add(epoch,f1)
self.best = current
import ntpath
txt_proc = [TokenizeProcessor(tokenizer=Tokenizer(lang='el') ),NumericalizeProcessor()]
def cross_val(path,data_dir,source_txt,n_folds,lrs,n_cycles,bs,pretrained_fnames=None,ft_lm=False,stratified=True,ft_lm_epochs=100,rnd_seed=1024):
"""Simulate results from the paper by performing cross validation.
1) Without pretrained language model pretrained_fnames=None
2) With pretrained language model pretrained_fnames = [learn.save(THIS_FILE_NAME),pickle.dump(data_lm.vocab.itos, THIS_FILE_NAME)]
3) UlmFit pretrained_fnames and ft_lm = True
"""
if n_folds < 3:
return
ident = ntpath.basename(source_txt)
lm_encoder_file = 'cv_enc_' + ident
vocab = None;
df = pd.read_csv(path/f'{data_dir}/{source_txt}');
n_recs = len(df.index)
n_classes = df.iloc[:,2].value_counts().count()
vocab_sizes = np.zeros(n_folds,'float')
vocab_trns_val = np.zeros(n_folds,'float')
vocab_trns_test = np.zeros(n_folds,'float')
vocab_lm_trns = np.zeros(n_folds,'float')
f1_val = np.zeros([n_folds,n_classes])
val_metrics = []
val_losses = []
f1_test = np.zeros([n_folds,n_classes])
test_metrics = []
if stratified:
df_list = []
for i in range(0,n_folds):
df_sample = df.groupby('Polarity1', group_keys=False).apply(
lambda x: x.sample(frac=1/(n_folds-i),random_state=rnd_seed))
df = df.drop(df_sample.index)
df_list.append(df_sample)
df = pd.concat(df_list)
idx = range(0,n_recs)
else:
np.random.seed(rnd_seed)
idx = np.random.permutation(n_recs)
cuts = np.linspace(0,n_recs,n_folds+1,dtype='int')
for i in range(0,n_folds):
val_idx = idx[cuts[i]:cuts[i+1]]
if i == n_folds-1:
test_idx = idx[cuts[0]:cuts[1]]
else:
test_idx = idx[cuts[i+1]:cuts[i+2]]
trn_idx = np.setdiff1d(range(0,n_recs), np.concatenate((val_idx,test_idx)))
if pretrained_fnames != None:
data_class_lm = (TextList.from_df(df,path_class/data_dir,cols=1,processor=txt_proc)
.split_by_idxs(trn_idx,val_idx)
.label_for_lm()
.databunch(bs=bs,num_workers=0))
vocab = data_class_lm.vocab
vocab_lm_trns[i],_,_= VocabTransfered(str(pretrained_fnames[1]) + '.pkl',vocab.itos)
learn = language_model_learner(data_class_lm, AWD_LSTM,pretrained_fnames=pretrained_fnames, drop_mult=0.3)
if ft_lm:
# Fine tune the given language model
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(ft_lm_epochs, 1e-3, moms=(0.8,0.7),callbacks=[ShowGraph(learn),
SaveModelCallback(learn,monitor='accuracy',mode='max',name='cv_bestmodel_lm_ft_' + ident)])
learn.load('cv_bestmodel_lm_ft_' + ident)
learn.save_encoder(lm_encoder_file)
data_class = (TextList.from_df(df,path_class/data_dir,cols=1,vocab=vocab,processor=txt_proc)
.split_by_idxs(trn_idx,val_idx)
.label_from_df(cols=2)
.databunch(bs=bs,num_workers=0))
vocab_sizes[i] = len(data_class.vocab.itos)
vocab_trns_val[i] = CountVocab(data_class.valid_ds)
data_class_val_is_test = (TextList.from_df(df,path_class/data_dir,cols=1,vocab=vocab,processor=txt_proc)
.split_by_idxs(trn_idx,test_idx)
.label_from_df(cols=2)
.databunch(bs=bs,num_workers=0))
        vocab_trns_test[i] = CountVocab(data_class_val_is_test.valid_ds)
learn = text_classifier_learner(data_class, AWD_LSTM, drop_mult=0.5,
metrics=[accuracy,f1(3,'f1_macro'),f1(3,'f1_micro'),f1(3,'f1_weighted')])
if pretrained_fnames != None:
learn.load_encoder(lm_encoder_file)
learn.freeze()
learn.fit_one_cycle(n_cycles[0], lrs[0], moms=(0.8,0.7))
learn.freeze_to(-2)
learn.fit_one_cycle(n_cycles[1], slice(lrs[1]/(2.6**4),lrs[1]), moms=(0.8,0.7))
learn.freeze_to(-3)
learn.fit_one_cycle(n_cycles[2], slice(lrs[2]/(2.6**4),lrs[2]), moms=(0.8,0.7))
learn.unfreeze()
test_results = Results()
learn.fit_one_cycle(n_cycles[3], slice(lrs[3]/(2.6**4),lrs[3]), moms=(0.8,0.7),
callbacks=[ShowGraph(learn),
SaveModelCallback(learn,monitor='f1',mode='max',name='cv_bestmodel_'+ str(i)),
RecordTestImprovement(learn,test_results,data_class_val_is_test,monitor='f1',mode='max')])
learn.load('cv_bestmodel_'+ str(i));
interp = ClassificationInterpretation.from_learner(learn)
_,_,_,f1_val[i,:] = calc_f1(interp.confusion_matrix())
# get results directly from recorded metrics
val_metrics.append(learn.recorder.metrics)
val_losses.append(learn.recorder.val_losses)
test_metrics.append(test_results)
learn.data = data_class_val_is_test
interp = ClassificationInterpretation.from_learner(learn)
_,_,_,f1_test[i,:] = calc_f1(interp.confusion_matrix())
return ((vocab_sizes,vocab_lm_trns,vocab_trns_val,vocab_trns_test),(f1_val),(f1_test),(val_metrics,test_metrics,val_losses))
def extract_test_res(max_epoch,test_res):
n_folds = len(test_res)
extracted = np.zeros([n_folds,max_epoch])
for idx, result in enumerate(test_res):
epochs = np.array(result.epoch)
indices = np.zeros(max_epoch).astype(np.int)
if len(epochs) > 1:
indices[epochs[1:]-1] = 1
vals = np.array(result.vals)
extracted[idx,:] = vals[indices.cumsum()]
return extracted
def get_progress(f1_res):
max_epoch = len(f1_res[3][0][0]);
f1_test = extract_test_res(max_epoch,f1_res[3][1])
f1_val = np.array([[epoch_res[-1] for epoch_res in fold_res] for fold_res in f1_res[3][0]])
f1_val_loss = np.array([[epoch_res for epoch_res in fold_res] for fold_res in f1_res[3][2]])
i_last_increase = f1_val.mean(axis=0).argmax()
return f1_test, f1_val, f1_val_loss, i_last_increase
path_class = Path('/home/jupyter/tutorials/')
source_txt_class='GRGE_sentiment.csv'
path_lang_model = Path('/home/jupyter/tutorials/')
data_dir = 'data'
wiki_txt_file = 'el_wiki_df_all.csv'
```
# Greek Election Tweets Sentiment (<a href="https://link.springer.com/content/pdf/10.1007%2Fs10579-018-9420-4.pdf">tsakalidis2018</a>)
Data <a href="http://mklab.iti.gr/resources/tsakalidis2017building.zip">source</a>
```
paper_class_counts = pd.Series({'Neutral' : 979, 'Negative' : 582, 'Positive' : 79})
paper_baseline = 0.4463
sota = 0.8066
txt_proc = [TokenizeProcessor(tokenizer=Tokenizer(lang='el') ),NumericalizeProcessor()]
```
## Process data
```
bs = 128
proc_data = True
stratified = True
rnd_seed = 1024
# process all the data in the beginning - the vocab is small because of the size of the data set, therefore it seems
# to be important to use the same training indices for the lm and classification model as this results in the largest
# vocab for classification.
if proc_data:
    # split the classification data into train, test and validation sets and choose the test set
i_test_fold = 0
n_folds = 10
df = pd.read_csv(path_class/f'{data_dir}/{source_txt_class}');
n_recs = len(df.index)
if stratified:
df_list = []
for i in range(0,n_folds):
df_sample = df.groupby('Polarity1', group_keys=False).apply(
lambda x: x.sample(frac=1/(n_folds-i),random_state=rnd_seed))
df = df.drop(df_sample.index)
df_list.append(df_sample)
df = pd.concat(df_list)
idx = range(0,n_recs)
else:
np.random.seed(rnd_seed)
idx = np.random.permutation(n_recs)
cuts = np.linspace(0,n_recs,n_folds+1,dtype='int')
test_idx = idx[cuts[i_test_fold]:cuts[i_test_fold+1]]
if i_test_fold == n_folds-1:
valid_idx = idx[cuts[0]:cuts[1]]
else:
valid_idx = idx[cuts[i_test_fold+1]:cuts[i_test_fold+2]]
train_idx = np.setdiff1d(range(0,n_recs), np.concatenate((valid_idx,test_idx)))
data_class_lm = (TextList.from_df(df, path_class,cols=1,processor = txt_proc)
.split_by_idxs(train_idx,valid_idx)
.label_for_lm()
.databunch(bs=bs))
data_class_lm.save('tmp_data_lm'+ source_txt_class)
data_class = (TextList.from_df(df,path_class,cols=1,vocab=data_class_lm.train_ds.vocab,processor = txt_proc)
.split_by_idxs(train_idx,valid_idx)
.label_from_df(cols=2)
.databunch(bs=bs))
#print(f'Vocab size for classification model:{len(data_class.vocab.itos)}')
data_class.save('tmp_data_class'+ source_txt_class)
data_class_val_is_test = (TextList.from_df(df, path_class,cols=1,vocab=data_class_lm.train_ds.vocab,processor = txt_proc)
.split_by_idxs(train_idx,test_idx)
.label_from_df(cols=2)
.databunch(bs=bs))
data_class_val_is_test.save('tmp_data_class_test'+ source_txt_class)
else:
data_class_lm = TextLMDataBunch.load(path_class, 'tmp_data_lm'+ source_txt_class, bs=bs)
data_class = TextClasDataBunch.load(path_class, 'tmp_data_class'+ source_txt_class, bs=bs)
data_class_val_is_test = TextClasDataBunch.load(path_class, 'tmp_data_class_test'+ source_txt_class, bs=bs)
#print(f'Vocab size for language model:{len(data_class_lm.vocab.itos)}')
#print(f'Vocab size for classification model:{len(data_class.vocab.itos)}')
print(f'Vocab size:{len(data_class_lm.vocab.itos)}')
print(f'Vocab present in val:{CountVocab(data_class_lm.valid_ds)}')
print(f'Vocab present in test:{CountVocab(data_class_val_is_test.valid_ds)}')
```
## Quick sanity checks
Confirm the class distributions and the metric by computing the weighted f1 score obtained when always predicting the majority class, and compare it to Table 4 in the paper.
```
df = pd.read_csv(path_class/f'data/{source_txt_class}');
class_counts = np.array(df.iloc[:,2].value_counts())
cm_mc = np.zeros([class_counts.size,class_counts.size])
mc = class_counts.argmax()
cm_mc[:,mc] = class_counts
f1_micro,f1_macro,f1_weighted,_= calc_f1(cm_mc)
print(f'f1 scores using majority class\nMicro: {f1_micro}\nMacro: {f1_macro}\nWeighted: {f1_weighted} <- mc from paper')
if(df.Polarity1.value_counts().equals(paper_class_counts) and round(f1_weighted,4) == paper_baseline):
print("Basic test passed")
else:
print("Something has gone wrong")
```
## Train classifier without pre-trained language model
To confirm that ULMFiT gives an improvement, train the classifier without a pretrained language model.
### Train
```
learn = text_classifier_learner(data_class, AWD_LSTM,drop_mult=0.5,
metrics=[accuracy,f1(3,'f1_macro'),f1(3,'f1_micro'),f1(3,'f1_weighted')])
learn.freeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7))
# try to overfit
learn.unfreeze()
learn.fit_one_cycle(300, slice(2.5e-3/(2.6**4),2.5e-3), moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='f1',mode='max',name='bestmodel_sent_base')])
```
### Examine Results
#### Results on validation set
```
learn.load('bestmodel_sent_base');
learn.data = data_class
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
_,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Macro weighted f1 on best valid set without pre-trained language model: {f1_macro_weighted}')
```
#### Results on test set
```
learn.load('bestmodel_sent_base');
learn.data = data_class_val_is_test
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
_,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Macro weighted f1 on best test set without pre-trained language model: {f1_macro_weighted}')
```
### Cross Validation
The results in the paper report the mean weighted f1 score over 10 folds. Examine the effect of selecting the best model over an increasing number of training cycles.
```
n_folds = 10
lr = 2e-2
lrs = np.array([lr,lr/2,lr/4,lr/20])
n_cycles = np.array([1,1,1,300])
bs=128
stratified = True
pretrained_fnames = None;
ft_lm = False
f1_res = cross_val(path_class,data_dir,source_txt_class,n_folds,lrs,n_cycles,bs,pretrained_fnames,ft_lm)
max_epoch = len(f1_res[3][0][0]);
f1_test = extract_test_res(max_epoch,f1_res[3][1])
f1_val = np.array([[epoch_res[-1] for epoch_res in fold_res] for fold_res in f1_res[3][0]])
f1_val_loss = np.array([[epoch_res for epoch_res in fold_res] for fold_res in f1_res[3][2]])
i_last_increase = f1_val.mean(axis=0).argmax()
print(f'Highest mean f1 micro on validation set: {f1_val.mean(axis=0)[i_last_increase]}')
print(f'Mean f1 micro on Test set with model from best val set: {f1_test.mean(axis=0)[i_last_increase]}')
print(f'Highest mean f1 micro on Test set: {f1_test.mean(axis=0).max()}')
ax = plt.subplot()
ax.plot(f1_test.mean(axis=0),label='f1 test');
ax.plot(f1_val.mean(axis=0),label='f1 valid');
ax.plot(f1_val_loss.mean(axis=0),label='valid loss')
ax.set_xlim(1, i_last_increase);
ax.legend(loc='lower right', shadow=True, fontsize='x-large');
ax.set_title('Training progress without ULMFit');
import pickle
with open('base_progress.pkl', 'wb') as f:
pickle.dump(f1_res, f)
```
## Perform classification using the wiki language model without fine tuning
### Save wiki103 encoder
```
learn = language_model_learner(data_class_lm,AWD_LSTM, pretrained_fnames=[path_lang_model/f'models/bestmodel_lm5_{wiki_txt_file}',path_lang_model/f'data/dict_{wiki_txt_file}'], drop_mult=0.3)
learn.save_encoder('wk103_enc')
```
### Train
```
learn = text_classifier_learner(data_class, AWD_LSTM,drop_mult=0.5,
metrics=[accuracy,f1(3,'f1_macro'),f1(3,'f1_micro'),f1(3,'f1_weighted')])
learn.load_encoder('wk103_enc')
learn.freeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(300, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='f1',mode='max',name='bestmodel_sent_lm_wk103')])
```
### Examine Results
#### Result on validation set
```
learn.load('bestmodel_sent_lm_wk103');
learn.data = data_class
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
_,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Macro weighted f1 on best valid set without finetuning the language model: {f1_macro_weighted}')
```
#### Result on test set
```
learn.load('bestmodel_sent_lm_wk103');
learn.data = data_class_val_is_test
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
_,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Macro weighted f1 on test set without finetuning the language model: {f1_macro_weighted}')
```
### Cross Validation
```
n_folds = 10
lr = 2e-2
lrs = np.array([lr,lr/2,lr/4,lr/20])
n_cycles = np.array([1,1,1,300])
bs=128
stratified = True
pretrained_fnames = [path_lang_model/f'models/bestmodel_lm5_{wiki_txt_file}',path_lang_model/f'data/dict_{wiki_txt_file}']
ft_lm = False
f1_res = cross_val(path_class,data_dir,source_txt_class,n_folds,lrs,n_cycles,bs,pretrained_fnames,ft_lm,stratified)
max_epoch = len(f1_res[3][0][0]);
f1_test = extract_test_res(max_epoch,f1_res[3][1])
f1_val = np.array([[epoch_res[-1] for epoch_res in fold_res] for fold_res in f1_res[3][0]])
f1_val_loss = np.array([[epoch_res for epoch_res in fold_res] for fold_res in f1_res[3][2]])
i_last_increase = f1_val.mean(axis=0).argmax()
print(f'Highest mean f1 micro on validation set: {f1_val.mean(axis=0)[i_last_increase]}')
print(f'Mean f1 micro on Test set with model from best val set: {f1_test.mean(axis=0)[i_last_increase]}')
print(f'Highest mean f1 micro on Test set: {f1_test.mean(axis=0).max()}')
ax = plt.subplot()
ax.plot(f1_test.mean(axis=0),label='f1 test');
ax.plot(f1_val.mean(axis=0),label='f1 valid');
ax.plot(f1_val_loss.mean(axis=0),label='valid loss')
ax.set_xlim(1, i_last_increase);
ax.legend(loc='lower right', shadow=True, fontsize='x-large');
ax.set_title('Training progress without fine tuning the language model');
import pickle
with open('ulm_progress.pkl', 'wb') as f:
pickle.dump(f1_res, f)
```
## ULMFiT
### Fine tune the language model using given tweets
```
learn = language_model_learner(data_class_lm, AWD_LSTM, pretrained_fnames=[path_lang_model/f'models/bestmodel_lm5_{wiki_txt_file}',path_lang_model/f'data/dict_{wiki_txt_file}'], drop_mult=0.3)
learn.lr_find()
learn.recorder.plot(skip_end=5)
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(100, 1e-3, moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='accuracy',mode='max',name='bestmodel_sent_lm_wk103_ft' )])
learn.load('bestmodel_sent_lm_wk103_ft');
learn.save_encoder('sent_fine_tuned_enc')
```
### Train the classifier
```
learn = text_classifier_learner(data_class, AWD_LSTM,drop_mult=0.5,
metrics=[accuracy,f1(3,'f1_macro'),f1(3,'f1_micro'),f1(3,'f1_weighted')])
learn.load_encoder('sent_fine_tuned_enc')
learn.freeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(300, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7),callbacks=[ShowGraph(learn), SaveModelCallback(learn,monitor='accuracy',mode='max',name='bestmodel_sent_lm_wk103_ft')])
```
### Examine results
#### Result on validation set
```
learn.load('bestmodel_sent_lm_wk103_ft');
learn.data = data_class
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
f1_micro,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Accuracy on best valid set after finetuning the language model: {f1_micro}')
print(f'Macro weighted f1 on best valid set after finetuning the language model: {f1_macro_weighted}')
```
#### Result on test set
```
learn.load('bestmodel_sent_lm_wk103_ft');
learn.data = data_class_val_is_test
interp = ClassificationInterpretation.from_learner(learn)
cm = interp.plot_confusion_matrix(figsize=(6,6), dpi=60)
f1_micro,_,f1_macro_weighted,_ = calc_f1(interp.confusion_matrix())
print(f'Accuracy on test set after finetuning the language model: {f1_micro}')
print(f'Macro weighted f1 on test set after finetuning the language model: {f1_macro_weighted}')
```
### Cross Validation
```
n_folds = 10
lr = 2e-2
lrs = np.array([lr,lr/2,lr/4,lr/20])
n_cycles = np.array([1,1,1,300])
bs=128
stratified = True
pretrained_fnames = [path_lang_model/f'models/bestmodel_lm5_{wiki_txt_file}',path_lang_model/f'data/dict_{wiki_txt_file}']
ft_lm = True
f1_res = cross_val(path_class,data_dir,source_txt_class,n_folds,lrs,n_cycles,bs,pretrained_fnames,ft_lm,stratified)
max_epoch = len(f1_res[3][0][0]);
f1_test = extract_test_res(max_epoch,f1_res[3][1])
f1_val = np.array([[epoch_res[-1] for epoch_res in fold_res] for fold_res in f1_res[3][0]])
f1_val_loss = np.array([[epoch_res for epoch_res in fold_res] for fold_res in f1_res[3][2]])
i_last_increase = f1_val.mean(axis=0).argmax()
print(f'Highest mean f1 micro on validation set: {f1_val.mean(axis=0)[i_last_increase]}')
print(f'Mean f1 micro on Test set with model from best val set: {f1_test.mean(axis=0)[i_last_increase]}')
print(f'Highest mean f1 micro on Test set: {f1_test.mean(axis=0).max()}')
ax = plt.subplot()
ax.plot(f1_test.mean(axis=0),label='f1 test');
ax.plot(f1_val.mean(axis=0),label='f1 valid');
ax.plot(f1_val_loss.mean(axis=0),label='valid loss')
ax.set_xlim(1, i_last_increase);
ax.legend(loc='lower right', shadow=True, fontsize='x-large');
ax.set_title('Training progress ULMFit');
import pickle
with open('ulmfit_progress.pkl', 'wb') as f:
pickle.dump(f1_res, f)
```
## Examine learning progress
```
with open('base_progress.pkl', 'rb') as f:
base_progress = pickle.load(f)
with open('ulm_progress.pkl', 'rb') as f:
ulm_progress = pickle.load(f)
with open('ulmfit_progress.pkl', 'rb') as f:
ulmfit_progress = pickle.load(f)
f1_test_base, f1_val_base, f1_val_loss_base, i_last_increase_base = get_progress(base_progress)
f1_test_ulm, f1_val_ulm, f1_val_loss_ulm, i_last_increase_ulm = get_progress(ulm_progress)
f1_test_ulmfit, f1_val_ulmfit, f1_val_loss_ulmfit, i_last_increase_ulmfit = get_progress(ulmfit_progress)
print('Mean f1 macro on test set with model from best val set')
print(f'Base: {100*f1_test_base.mean(axis=0)[i_last_increase_base]:.2f} %')
print(f'ULM: {100*f1_test_ulm.mean(axis=0)[i_last_increase_ulm]:.2f} %')
print(f'ULMFit: {100*f1_test_ulmfit.mean(axis=0)[i_last_increase_ulmfit]:.2f} %')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.plot(f1_test_base.mean(axis=0),'b',label='f1 test base');
ax.plot(f1_val_base.mean(axis=0),'b',label='f1 valid base',linewidth=0.5);
ax.plot(f1_val_loss_base.mean(axis=0),'b',label='valid loss base')
ax.plot(f1_test_ulm.mean(axis=0),'g',label='f1 test ulm');
ax.plot(f1_val_ulm.mean(axis=0),'g',label='f1 valid ulm',linewidth=0.5);
ax.plot(f1_val_loss_ulm.mean(axis=0),'g',label='valid loss ulm')
ax.plot(f1_test_ulmfit.mean(axis=0),'r',label='f1 test ulmfit');
ax.plot(f1_val_ulmfit.mean(axis=0),'r',label='f1 valid ulmfit',linewidth=0.5);
ax.plot(f1_val_loss_ulmfit.mean(axis=0),'r',label='valid loss ulmfit')
ax.axvline(x=i_last_increase_base,linestyle=':',color='b',label='last valid increase base')
ax.axvline(x=i_last_increase_ulm,linestyle=':',color='g',label='last valid increase ulm')
ax.axvline(x=i_last_increase_ulmfit,linestyle=':',color='r',label='last valid increase ulmfit')
ax.set_xlim(1, np.max([i_last_increase_base,i_last_increase_ulm,i_last_increase_ulmfit]));
ax.legend(loc='lower right', shadow=True, fontsize='x-large');
ax.set_title('Training progress');
table_dict = {'': ['SOTA','Base','ULM','ULMFit'],
'f1 micro %': [80.66,
f1_test_base.mean(axis=0)[i_last_increase_base]*100,
f1_test_ulm.mean(axis=0)[i_last_increase_ulm]*100,
f1_test_ulmfit.mean(axis=0)[i_last_increase_ulmfit]*100]}
pd.DataFrame(data=table_dict).round(2).style.hide_index()
```
```
!pip install transformers
!pip install sentence-transformers
import numpy as np
import pandas as pd
import torch
import csv
from scipy import stats
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
device = 'cuda'
SINGLE_TRAIN_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/train/lcp_single_train.tsv"
SINGLE_TEST_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/test-labels/lcp_single_test.tsv"
MULTI_TRAIN_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/train/lcp_multi_train.tsv"
MULTI_TEST_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/test-labels/lcp_multi_test.tsv"
model = SentenceTransformer('paraphrase-distilroberta-base-v1')
def prepare_dataset(TRAIN_DATAPATH, TEST_DATAPATH):
df_train = pd.read_csv(TRAIN_DATAPATH, sep = '\t', quotechar="'", quoting = csv.QUOTE_NONE)
df_test = pd.read_csv(TEST_DATAPATH, sep = '\t', quotechar="'", quoting = csv.QUOTE_NONE)
df_train['complexity'] = df_train['complexity'].astype(float)
df_test['complexity'] = df_test['complexity'].astype(float)
train_input = [i for i in df_train['sentence']]
test_input = [i for i in df_test['sentence']]
labels = [i for i in df_train['complexity']]
test_labels = [i for i in df_test['complexity']]
train_emb = model.encode(train_input)
test_emb = model.encode(test_input)
return train_emb, test_emb, labels, test_labels
class NN(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.linear1 = nn.Linear(input_dim, 1536)
self.linear2 = nn.Linear(1536, 3072)
self.linear3 = nn.Linear(3072, 3072)
self.linear4 = nn.Linear(3072, 1536)
self.linear5 = nn.Linear(1536, 768)
self.linear6 = nn.Linear(768, 768)
self.linear7 = nn.Linear(768, 768)
self.linear8 = nn.Linear(768, 256)
self.linear9 = nn.Linear(256, 128)
self.linear10 = nn.Linear(128, 64)
self.linear11 = nn.Linear(64, 1)
def forward(self, input): # gelu or elu in initial layers is quite good, gives 0.5 r
out = F.gelu(self.linear1(input))
out = F.gelu(self.linear2(out))
out = F.gelu(self.linear3(out))
out = F.gelu(self.linear4(out))
out = F.gelu(self.linear5(out))
out = F.gelu(self.linear6(out))
out = F.gelu(self.linear7(out))
out = F.gelu(self.linear8(out))
out = F.gelu(self.linear9(out))
out = F.gelu(self.linear10(out))
        out = torch.sigmoid(self.linear11(out))  # F.sigmoid is deprecated in recent PyTorch
out = torch.squeeze(out)
return out
def pearson_loss(target, output):
eps = 0.000001
output_mean = torch.mean(output)
target_mean = torch.mean(target)
x = output - output_mean.expand_as(output)
y = target - target_mean.expand_as(target)
pearson = torch.dot(x, y)/ ((torch.std(output) + eps) * (torch.std(target) + eps))
loss = (-1.0 * pearson / len(target))
return loss
input_dim = 768
print("++++Input Dimension of NN: " + str(input_dim))
nn_num_epochs = 1500
nn_model = NN(input_dim)
nn_model.to(device)
nn_optimizer = optim.Adam(nn_model.parameters(), lr = 0.00001)
def train_nn(nn_model, input):
nn_model.train()
nn_optimizer.zero_grad()
output = nn_model(input)
loss = pearson_loss(labels, output)
loss.backward()
nn_optimizer.step()
return loss.item()
def test_nn(nn_model, input):
nn_model.eval()
with torch.no_grad():
output = nn_model(input)
return output
def calculate_metrics(y, y_hat):
vx = y.astype(float)
vy = y_hat.astype(float)
pearsonR = np.corrcoef(vx, vy)[0, 1]
spearmanRho = stats.spearmanr(vx, vy)
MSE = np.mean((vx - vy) ** 2)
MAE = np.mean(np.absolute(vx - vy))
RSquared = (pearsonR ** 2)
print("Pearson's R: {}".format(pearsonR))
print("Spearman's rho: {}".format(spearmanRho))
print("R Squared: {}".format(RSquared))
print("MSE: {}".format(MSE))
print("MAE: {}".format(MAE))
train_emb, test_emb, labels, test_labels = prepare_dataset(SINGLE_TRAIN_DATAPATH, SINGLE_TEST_DATAPATH)
nn_input = torch.tensor(train_emb, device = device, requires_grad = True)
labels = torch.tensor(labels, dtype = torch.float32, device = device, requires_grad = True)
nn_input_test = torch.tensor(test_emb, device = device)
print("++++++Running for single++")
for epoch in range(nn_num_epochs):
nn_train_loss = train_nn(nn_model, nn_input)
print("Epoch {} : {}".format(epoch + 1, nn_train_loss))
output = test_nn(nn_model, nn_input_test)
print("------Metrics for test-----")
calculate_metrics(np.array(test_labels), np.array(output.tolist()))
train_emb, test_emb, labels, test_labels = prepare_dataset(MULTI_TRAIN_DATAPATH, MULTI_TEST_DATAPATH)
nn_input = torch.tensor(train_emb, device = device, requires_grad = True)
labels = torch.tensor(labels, dtype = torch.float32, device = device, requires_grad = True)
nn_input_test = torch.tensor(test_emb, device = device)
print("++++++Running for multi+++")
for epoch in range(nn_num_epochs):
nn_train_loss = train_nn(nn_model, nn_input)
print("Epoch {} : {}".format(epoch + 1, nn_train_loss))
output = test_nn(nn_model, nn_input_test)
print("------Metrics for test-----")
calculate_metrics(np.array(test_labels), np.array(output.tolist()))
```
# Oscillations
Two practicals
## 1. Finding $g$ by using a pendulum
$\begin{aligned}
T = 2\pi \sqrt{\frac{l}{g}}
\end{aligned}$
- $T$: period in s
- $l$: length in m
- $g$: acceleration due to gravity in $m/s^2$
Linearisation of the square-root relationship (as $T \propto \sqrt{l}$):
$\begin{aligned}
T^2 &= (2 \pi)^2 \frac{l}{g}\\
\underbrace{T^2}_{y} &= \overbrace{\frac{4 \pi^2}{g}}^{m} \underbrace{l}_{x}
\end{aligned}$
### Method
- 10 periods, 5 times
- min. 5 lengths
| String length / cm ± 0.05cm |
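Once the period data are collected, the analysis can be sketched as follows. The lengths and periods below are placeholders standing in for real measurements; the fit uses the linearised form $T^2 = (4\pi^2/g)\,l$, so the slope gives $g$.

```python
import numpy as np

# Placeholder data: five string lengths (m) and mean periods (s); in the real
# practical each T comes from timing 10 oscillations, repeated 5 times.
l = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
T = 2 * np.pi * np.sqrt(l / 9.81)  # stand-in for measured values

# Linearised form: T^2 = (4*pi^2 / g) * l, so the slope m gives g = 4*pi^2 / m.
slope, intercept = np.polyfit(l, T**2, 1)
g = 4 * np.pi**2 / slope
print(f"g = {g:.2f} m/s^2")
```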
A wave transports energy from one location to another.
**Travelling waves:**
```
Motion of waves -->
/ \ Longitudinal
Transverse | <---->
\ /
oscillations
```
```
import matplotlib.pyplot as plt
```
## Wave equation
$\begin{aligned}
v &= \lambda f = \frac{\lambda}{T}\\
f &= \frac{1}{T}
\end{aligned}$
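As a concrete check of these relations (the 440 Hz tone and 343 m/s speed of sound are illustrative values, not part of the practical):

```python
# v = lambda * f rearranged for a 440 Hz tone in air (speed of sound ~343 m/s assumed).
v = 343.0            # wave speed in m/s
f = 440.0            # frequency in Hz
wavelength = v / f   # from v = lambda * f
T = 1 / f            # from f = 1/T
print(f"lambda = {wavelength:.3f} m, T = {T:.5f} s")
```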
## Simple Harmonic Motion
$\underbrace{a}_{\text{acceleration}} = -\overbrace{k}^{\text{proportionality factor}} \underbrace{x}_{\text{displacement}}$
$\omega = 2\pi f = \frac{2 \pi}{T}$
$\begin{aligned}
x(t) &= \underbrace{A}_{\text{Amplitude}} \cos(\underbrace{\omega}_{\text{angular frequency}} t + \underbrace{\varphi}_{\text{phase shift}})\\
v(t) &= -\omega A \sin(\omega t + \varphi)\\
a(t) &= -\omega^2 A \cos(\omega t + \varphi)\\
\iff a(t) &= -\underbrace{k}_{\omega^2} x(t)
\end{aligned}$
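A quick numerical check that the solution above really satisfies the defining equation $a = -\omega^2 x$ (amplitude, frequency, and phase below are chosen arbitrarily for illustration):

```python
import numpy as np

# SHM solution with arbitrary illustrative parameters.
A, omega, phi = 0.05, 2 * np.pi * 1.5, 0.3
t = np.linspace(0, 2, 500)

x = A * np.cos(omega * t + phi)
a = -omega**2 * A * np.cos(omega * t + phi)

# a(t) should equal -omega^2 * x(t) at every instant.
print("max residual:", np.max(np.abs(a + omega**2 * x)))
```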
Longitudinal waves (e.g. sound) travel as compressions and rarefactions.
**Non-polarised** light: oscillation at all angles about the axis of propagation
**Polarised** light: only oscillates in one plane
Polarised sources include screens (phone, computer) and reflected light.
When totally polarised light is incident on an analyser, the intensity of light let through is $$I = I_0 \cos^2 \underbrace{\theta}_{\text{angle between polarised light and analyser}}.$$
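A sketch of how the formula behaves at a few analyser angles (intensity units are arbitrary):

```python
import numpy as np

I0 = 100.0  # intensity of the incident polarised light, arbitrary units
for theta_deg in (0, 30, 45, 60, 90):
    I = I0 * np.cos(np.radians(theta_deg)) ** 2  # Malus's law
    print(f"theta = {theta_deg:2d} deg -> I = {I:.1f}")
```

Note that at 45° exactly half the intensity is transmitted, and at 90° (crossed polarisers) essentially none.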
## Measuring speed of sound
* oscilloscope
* signal generator
* ultrasound emitter - receiver
1. Connect signal generator and oscilloscope.
2. Set up emitter and receiver.
3. Starting from 0cm (ignoring offset inside devices), measure phase change with cursors.
4. Move receiver away by 5cm until 25cm.
5. Record data, plot time against distance.
6. Using $\text{speed} = \frac{\text{distance}}{\text{time}}$, we know $\text{speed of sound} = \frac{1}{\text{gradient}}$ of graph.
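The analysis in step 6 can be sketched with made-up readings (distances and delays simulated here for a speed of sound of $343\,m/s$):

```
import numpy as np

# Made-up readings: receiver distances d (m) and measured delays t (s)
d = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
t = d / 343.0

# plotting time against distance, the gradient is 1 / speed
gradient, offset = np.polyfit(d, t, 1)
speed = 1.0 / gradient
print(round(speed))  # 343
```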
# Wave behaviour (4.4)
**Reflection**: a wave bouncing off a surface
```
Incident
ray            Normal
      \          |          /
       \         |         /
        \        |        /
         \       |       /
    ______\______|______/______
      Angle          Angle of
        of           reflection
    incidence
```
$\text{Angle } i = \text{Angle } r$
**Refraction**: the change in direction of a wave when it passes a boundary between 2 media.
Eg: from air to water; from water to glass
```
  Medium 1 | Medium 2
           |     .
           |    .   refracted
           |   .    ray
  _ _ _ _ _|_._ _ _ _ _   normal
         . |
       .   |
     .     |
  incident |
  ray      |
```
Angles: $\theta_i$ (incident), $\theta_{\text{refracted}}$ (refracted); refractive index of medium 1: $n_1$, of medium 2: $n_2$.
## Practical: Investigating Snell/Descartes Law
* Changing incident angle
* measuring angle of refraction
* Plot one against the other
Goal: find $n_2$.
$\begin{aligned}
&n_1 \sin\theta_1 = n_2 \sin\theta_2\\
&\sin\theta_2 = \frac{n_1}{n_2} \sin\theta_1
\end{aligned}$
Total internal reflection occurs when $\sin\theta_2$ would have to exceed 1:
$\begin{aligned}
&\frac{n_1}{n_2} \sin\theta_1 > 1\\
\implies &\theta_1 > \arcsin\frac{n_2}{n_1}
\end{aligned}$
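As a numeric sketch, the critical angle for an assumed water-to-air boundary ($n_1 \approx 1.33$, $n_2 \approx 1.00$):

```
import numpy as np

# Critical angle for light going from water into air (approximate indices)
n1, n2 = 1.33, 1.00
theta_c = np.degrees(np.arcsin(n2 / n1))
print(round(theta_c, 1))  # ~48.8 degrees; TIR for incidence angles >= theta_c
```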
The angle of incidence at which the refracted ray passes along the boundary is called the **critical angle** $\theta_c$. When the incident angle is $< \theta_c$ there is both refraction and reflection; when it is $\geq \theta_c$ there is no refracted ray: total internal reflection (T.I.R.).
When a wave is refracted, the speed changes but the frequency remains constant. Since $v = \lambda f$, $\lambda$ must change, so refraction changes the wavelength.
## Diffraction
Waves diffract when they go through an aperture, especially when wavelength $\approx$ aperture.
```
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 6)
w1 = .5 * np.sin(t)
w2 = 1. * np.sin(t + np.pi)
w = w1 + w2
plt.figure()
plt.plot(t, w1)
plt.plot(t, w2)
plt.plot(t, w, '--')
plt.legend(["wave 1", "wave 2", "sum of both waves"])
plt.xlabel("t / s")
plt.title("Superposition of waves")
plt.show()
```
## Mutable variables
```
var i:Int = 1
```
We can reassign a value to `i`
```
i = 2
```
## Immutable values
```
val s:String = "a"
```
Declaring `s` as a `val` prevents reassignment
```
s = "c"
```
## Function
We can declare an inline function; its type takes the form of a function type, as we can see below where the function maps a `String` to an `Int`.
```
val f:String => Int =
(s: String) => {
val removeWhiteChars = s.replaceAll(" ", "")
    removeWhiteChars.size
}
f("So, Scala is as powerful as it is simple !")
```
Since functions can be defined as `val`s then they can be _serialized_. This is **crucial** to understand since it is linked to closures serialization.
> Note: the `return` keyword can be avoided because everything is an expression, see below
## Types
Scala is strongly and statically typed, hence we can encapsulate our structure (or even behavior) using a powerful object oriented system.
### Class
```
class NewClass(val a:String, b:Int)
```
Instances of `NewClass` can be created using `new`.
```
val newClassInstance = new NewClass("ok", 1)
```
`a` is a public field
```
newClassInstance.a
```
`b` is a constructor parameter
```
newClassInstance.b
```
#### Methods
Behavior can be declared using methods (local functions); a method essentially looks like this:
```
def <method name> ( (<arg name> : <type name>)* ) = <body>
```
So we assign (`=`) a body to a name with its typed arguments.
```
class ClassWithBehavior(i:Int) {
def add(j:Int) = i + j
def random():Int = i + scala.util.Random.nextInt
}
val classWithBehaviorInstance = new ClassWithBehavior(5)
classWithBehaviorInstance.add(10)
classWithBehaviorInstance.random()
```
#### `apply`
There is a special method in Scala that has an important meaning: `apply`.
If an `apply` method is declared in any type (see below) then an instance of this type can be used as a function which respects the `apply` signature.
```
class ThisCanBeAFunction(log:String=>Unit) {
def apply(s:String) = log(s)
}
val trickyFun:ThisCanBeAFunction = new ThisCanBeAFunction((s:String) => println("trick: " + s))
trickyFun("no apply")
```
> Note: `Unit` is like void in any other language, but it is a real type in Scala and can be used as such. Its only instance is the unit value `()`.
### Trait
Abstraction in Scala can be declared in `trait`s, which are like interfaces but allow more flexibility: a trait can be mixed in, hold state, and implement methods.
However, a trait cannot be instantiated on its own and needs to be extended first.
```
trait Partial {
def todo:Double
val value:Int = 10
def done:String = ""+scala.util.Random.nextPrintableChar
}
class FullType extends Partial {
override val value:Int = 20
def todo:Double = scala.util.Random.nextDouble
}
```
### Object
Sometimes we just want to lift an implementation into an object (think about singletons). For this, you can simply declare a new structure as an `object`, which combines declaration and instantiation.
```
object FullObject extends Partial {
val todo:Double = 1.0
}
FullObject.todo
```
### Type Parameter (~ generic)
Some behavior can be declared without fixing the real types onto which it'll be applied; in Java these are called generics.
Scala allows type parameters to be associated to high level types in order to bag behaviors together (for instance).
```
trait Converted[I, O] {
def i:I
def convert(i:I):O
val o:O = convert(i)
}
```
We can now override and give _type values_.
Note: we can also override the modifier of a field: a `def` can be overridden with a `val`.
```
class StringToInt(val i:String) extends Converted[String, Int] {
def convert(s:String) = s.toInt
}
val converted:StringToInt = new StringToInt("124")
val convertedO:Int = converted.o
```
## Case
The language keyword `case` can be used in different scenarios, but all are about **dealing with structures that can be parsed and introspected safely**. This implies that the structure has to be static and known by the compiler.
### Immutable structures
Based on the fact that the structure is statically known at compile time, the compiler can help in several ways. One thing it can do, for instance, is generate code based on the known structure. For that purpose, the `case` modifier is used to declare immutable structures. That is because immutable structures are hard to deal with by hand: for instance, updating a value requires a copy constructor, which is tedious to maintain.
Immutable structures in Scala are then defined using `case` and will generate (at least):
* an attached object with **the same name**, called the companion object, defining a static `apply` function that acts as a factory method
* a `copy` method, which creates a new instance keeping all untouched values of the current instance
* a deconstructor, which explodes the current structure so that we can deal with its internal values (see **Pattern Matching** below).
```
case class ImmutableClass[V](k:String, value:V)
```
We can now create an instance of this class.
The `new` keyword can be omitted because the function `ImmutableClass.apply` is actually called.
```
val immutableData:ImmutableClass[Int] = ImmutableClass[Int]("zero", 0)
```
The `copy` method is now available
```
immutableData.copy("o", immutableData.value)
```
#### Default and named values
Using the `copy` function (for instance) can be tedious if there are many arguments, and you don't want to repeat yourself.
So Scala also has the concept of default values for a function's arguments; in the case of `copy`, the Scala compiler assigns the value of the current instance as the default value of each respective argument.
To simplify its usage (regarding order especially), there is also the concept of named parameters, allowing arguments to be passed by name
```
case class ImmutableClassWithDefault[V](k:String, value:V)
val immutableClassWithDefaultInstance:ImmutableClassWithDefault[Double] = ImmutableClassWithDefault[Double]("zero", 0.0)
immutableClassWithDefaultInstance.copy(k = "o")
```
### Interesting Types
#### Tuples
```
val stringAndFloat:(String, Float) = ("one", 1)
val stringAndFloatAndInt:(String, Float, Int) = ("one", 1, 1)
```
We can then safely access elements by index, while still relying on the type information.
```
stringAndFloatAndInt._2 + " is a Float"
:markdown
**Note**: string interpolation is also possible like this
${stringAndFloatAndInt._2} is a Float
```
#### Option
Nulls are bad; Scala hates `null`s.
Since it's still relevant for an object to have _no value_, and we want to deal with that safely, Scala uses a (now) common pattern: optional values.
An optional value is a type that has only two forms:
* `None`: no value
* `Some(v)`: has a value `v`
Options have many advantages and methods attached to them; although they can seem to complicate the code at first, with a bit of practice `Option` becomes a killer feature that clarifies the code while increasing its safety. Some interesting functions are listed below.
```
val noValueOption:None.type = None
val someStringValue :Some[String] = Some("ok")
val optionString1:Option[String] = noValueOption
val optionString2:Option[String] = someStringValue
```
> Note: `Nothing` is a specific type in Scala which is actually **a subtype of all** types; hence it sits at the bottom of the type hierarchy and a `Nothing` value can be assigned to any other type.
```
:markdown
- `getOrElse`: return the value or the provided default: **${noValueOption.getOrElse("hello")}** and **${someStringValue.getOrElse("no")}**
- `map`: deal with contained value if any: **${optionString2.map((s:String) => s.size)}**
- `orElse`: chain optional values: **${optionString1 orElse optionString2}**
```
> Note: sometimes omitting the `.` in Scala can make sense like in `optionString1 orElse optionString2`.
> Note': so, yes dots and parenthesis can be omitted in Scala
#### Collections
Collections in Scala are by default immutable structures from the package `scala.collection.immutable`, so you can use them like any other `case class` for instance.
```
val oneToThree = List(1, 2, 3)
```
Hence adding an element creates a new list object referring to the previous one as its tail or init (for instance):
```
val oneToFour = oneToThree :+ 4
```
The initial list remains unchanged though
```
oneToThree
```
### Pattern matching
One of the most interesting features of the Scala language regarding conciseness and readability is its pattern matching.
Thanks to static typing, immutable values and other characteristics of the language we can safely introspect structures and values to deal with different logical paths.
This is the second usage of `case`: it is now used to declare a structure that needs to be matched against a given value. When a match is found, the behavior attached to the case is executed.
Think of it as a powerful `switch` statement.
```
(optionString1 orElse optionString2) match {
case Some("ok") => 0
case Some(x) if x.size >2 => 1
  case x => // lowercase letter == catch-all
val message = "unknown"
s"$x is $message"
}
```
Matches are type-safe, and thus `case` patterns can be checked, so that you'll avoid mixing things up:
```
(optionString1 orElse optionString2) match {
case Some("ok") => 0
case Some(x) if x.size >2 => 1
case "not a string" => ???
case x =>
val message = "unknown"
s"$x is $message"
}
```
## Everything is an expression
Whatever you do in Scala, you will return a value: simply the result of the last statement of the expression (code block, ...).
This avoids decoupling the declaration from the assignment, as in:
```
var s:String = null
if (true) {
s = "ok"
} else {
s = "nok"
}
```
### `if`
```
val ifExp:String = if (true) "ok" else "nok"
```
### `for`
```
val forExp:Seq[(Int, Int)] = for {
listElement1 <- 1 to 10
listElement2 <- 2 to 20 by 2
} yield (listElement1, listElement2)
```
## Types are inferred
By the way, if you felt submerged by the type annotations... I agree. Luckily, although Scala is strongly and statically typed, type annotations are optional.
Since everything is an expression, everything has a type, and the type can be inferred from the last statement of the expression. When several branches need to be resolved, the least upper bound in the common type hierarchy is taken, the topmost type being `Any`.
```
val optionString = if (scala.util.Random.nextBoolean) None else Some("ok")
val listOfDoubles = optionString match {
case Some("ok") => List(1.0, 2, 3)
case Some(x) => Nil
case None => List(0d)
}
val mess = listOfDoubles.map(x => (x, scala.util.Random.nextBoolean)).map {
case (x, true) => s"$x is true"
case (x, false) if x > 2 => x - 2
case x => x
}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
val complex = Future { Left ( Option ( List( (1, 'a'), (2, 'b') ) zip List("one", "two") ) ) }
```
## Short notation: `_`
When using functional paradigm like in Spark for instance, we have to deal with a ton of functions in order to describe the logic in a smooth and comprehensible manner.
Which means that we regularly have to define small and sometimes trivial functions, which look like bloated using the full function definition notation.
```
List(1, 2, 3).
map(x => x + 1).
map(x => (x, x % 2)).
toMap.
mapValues(x => x == 0)
```
The above code can take a simple form using the short notation, using the underscore (`_`) as a replacement for the $i^{th}$ argument.
```
def addMethod(x:Int, y:Int):Int = x + y
val add = (x:Int, y:Int) => x + y
val addShort:(Int, Int)=>Int = _ + _
val listOfTuples = (1 to 100)
listOfTuples.reduce(addMethod)
listOfTuples.reduce(add)
listOfTuples.reduce(addShort)
listOfTuples.reduce((x:Int, y:Int) => x + y)
listOfTuples.reduce(_ + _)
```
# Class Project Reference
# What analysis are you going to do?
Please run your study idea by Jeremy or Jin before progressing.
A few potentially interesting ones may include:
* Scary VS non-scary
* Loud VS quiet
* People VS no-people
* Anything your imagination comes up with!
# Designing your GLM
Before we can make our GLM in SPM we need to create onsets. Once you have cleared your research question with Jin or Jeremy go ahead and start coding the movie as was done in the excel file available in the Class Project folder on canvas. The movie clips are also in that folder.
You want to record the time that each event or block of interest starts and when it stops so we can calculate onsets and durations.
Remember that when we convert from seconds to TR's we want to subtract 1 TR since we start counting at TR 0.
Our study had 456 TR's in each run, the 1st TR of the first run counts as TR 0, the last TR of the first run as 455 and the first TR of the 2nd run counts as TR 456.
For each condition you have, save your onsets as conditionname.txt and conditionname_dur.txt.
# FTP
Next we need to move the onsets we created to the server. FTP stands for File Transfer Protocol.
Download Cyberduck here: https://cyberduck.io/?l=en for mac or pc.
Install Cyberduck and click `Add Connection`
Click on the pull-down menu where it says FTP with the hard drive symbol next to it and select SFTP.
Under server put your server: (hera or eros).dartmouth.edu
Enter your username (pbs60x) and password then click connect.
This is now showing you your file listing in your home directory on your server.
Make a new folder by clicking on the `Action` button then `New Folder`.
Label the folder YOURCOMPARISON_ONSETS.
Drag and drop your onsets into your new folder.
Now you can reconnect to your VNC session and continue from there.
# GLM
Once you have your onsets created you want to build the GLM for each subject.
First let's generate/load the nuisance regressors for a subject. You will need to do this separately for each subject when you run their GLM.
Building your nuisance regressors (linear trends for each run, binary run regressor, the constant is generated automatically)
In matlab type
```
runregressors=[linspace(0,1,456)' zeros(456,1) ones(456,1); zeros(456,1) linspace(0,1,456)' zeros(456,1)]
```
You will use this for each subject in combination with the subjects motion and the onsets you made to create your GLM.
I would suggest opening gedit text editor (Click the Foot, Accessories, gedit) and copying this command in to a new document so you can use it multiple times.
<img src="Images/CA_runregressors.png"></img>
Remember, you only want to run the GLM for subjects that you think should be included in the analysis. Be prepared to say why you decided that subjects should be included or excluded in the analysis in the write-up.
Next let's load in a subject's motion regressors and combine the nuisance regressors.
```
load('/afs/dbic.dartmouth.edu/usr/PBS60/DATA/prep/epi_norm/s001/rp_aepi_r01.txt')
load('/afs/dbic.dartmouth.edu/usr/PBS60/DATA/prep/epi_norm/s001/rp_aepi_r02.txt')
s001_regressors=[[rp_aepi_r01;rp_aepi_r02] runregressors]
whos
cd ~
```
You should see at least 4 variables: importantly, one named for each of the files you loaded, plus runregressors and your subject's combined regressors. `whos` shows you what variables are in the matlab workspace and `cd ~` brings you back to your home directory.
Do this for each subject that you are including in the analysis. When you are done make sure you have regressors for all of the subjects you want to run using whos and save these files so we don't need to remake them later:
```
save('all_subject_regressors.mat')
```
Once you have these variables saved as a .mat file, let's output each subject's regressors to a text file:
```
mkdir('~/SUBJECT_REGRESSORS')
cd('~/SUBJECT_REGRESSORS')
csvwrite('s001_regressors.txt',s001_regressors)
```
Do this for each of the subjects, replacing s001 with each subject's number.
Next we need to make some directories to store the GLM files for each subject you are using. Below replace YOURANALYSIS with something descriptive like SCARY_NOTSCARY
```
cd ~
mkdir ./GLM_YOURANALYSIS
cd GLM_YOURANALYSIS
mkdir ./s001
mkdir ./s002
...
```
Continue until you have made directories for each subject.
# GLM: Specify 1st-level
Now return to the SPM GUI.
To run a GLM on the first subject we need to choose `Specify 1st-level` from the SPM menu.
# Choose output directory of GLM
You want to select the subject's GLM directory you created
Once you do this it should say something like .../PBS60/pbs60a/GLM_YOURANALYSIS/s001
# Units for design
We can specify our study design in seconds OR in TRs. We are going to use TRs here. Click `Timing parameters` > `Units for design` and select `Scans`.
Next select `Interscan interval`, click button `Specify` and type in 2.5 since our scans were acquired every 2.5 seconds
# Loading our preprocessed files
Our class data was preprocessed and is good to go for us to make a 1st-level (single subject) GLM.
We are going to be using 4D .nii files for this part. These files contain data from multiple TRs (scans) and make data management easier.
Click `Data & Design` then `Scans`
Navigate to /afs/dbic.dartmouth.edu/usr/PBS60/DATA/prep/epi_norm/s001/
Type swuaepi* in filter
Below filter type 1:456
<img src="Images/Lab_03_DataDesign_Scans.png"></img>
This should show 912 files, 456 from each run.
The top file should be highlighted; if not, click on the top-most file, then scroll to the bottom, hold down shift and click the last one. This should move all of them to the Selected menu; then click `Done`.
# Conditions
This tab is where we setup how many experimental conditions we have. We can have as many conditions as we want.
Click on `Conditions`, `New Condition`, `Name` then `Specify`. You want to type in YOURCONDITION1 or something similar like that so you can remember that we are marking when the movie is on in this condition. Do this again for as many conditions as you have.
Let's use the onsets you created. The onsets are values that tell the computer/model what times the events we care about started.
Next click on `Onsets` then `Specify`
Since we already loaded the movieon.mat file that contains our onsets, we can load them by just naming the matlab variable. Check that it looks correct by typing `whos`, which will show you the loaded variables AND their sizes. If you have 10 events/durations then the variables should be of size 10x1.
# Nuisance regressors
Click Multiple regressors, Specify and Navigate to /afs/dbic.dartmouth.edu/usr/PBS60/ `yourdirectory` /SUBJECT_REGRESSORS/ and select this subject's _regressors.txt file.
# Saving GLM and creating model
Before you try to run your GLM you should save it.
Click `File` then `Save Batch`
Save it as GLM_s001_YOURCONDITIONS.mat or something descriptive of your analysis.
Once you have saved you can click play (green arrow).
# Estimating model
Next we need to estimate the model. This means we want to apply the results of the GLM we estimated above to the brain data.
Click `Estimate` from the SPM menu. This only asks us for one thing... the SPM.mat file within the subjects GLM directory.
Select the SPM.mat file in your GLM folder then click the green play button and wait for a bit.
# Contrast of interest
Go back to the SPM menu and click `Results`
Select your SPM.mat file
Click `Define contrast`, name it based on your condition, ie MOVIEON or scaryVSnonscary etc.
We want to identify regions that are more active when the movie is on so we want to weight the first column and ignore the others.
Click `t-contrast` since we want a directional contrast.
Click in the contrast weights vector and enter #'s that appropriately match your comparison of interest. Feel free to ask for help at this point.
Click `Submit`.
Doing this weights our columns of interest by the factors entered above and treats all of the other columns as nuisance regressors.
Click `Done`
A different window will ask you if you want to do any masking... click none
Title for comparison: yourcomparison
You don't need to do any adjustment for p-value, click through the defaults for p-value and extent.
This will create con_0001.nii and spmT_0001.nii files in the GLM directory for the current subject.
# Repeat for each subject
Repeat the steps above for each subject, substituting the new subject number for s001. When you are all done you should have GLM folder for each subject with files in them.
# Random Effects (RFX)
Once you have run your GLM on each subject then you can run RFX on the resulting files to identify areas that were consistently activated across individuals. Below replace YOURANALYSIS with something descriptive like SCARY_NOTSCARY
```
cd ~
mkdir ./RFX_YOURANALYSIS
cd('RFX_YOURANALYSIS')
```
Click `Specify 2nd-level`.
This will open the Batch Editor. Select the current directory, ".", as the output directory from the left menu.
Under `Design` choose `One-sample t-test`, then under `Scans` click `Specify`.
Select the con_0001.nii files for each subject that you want to include in your random effects model.
Next, from the `Batch Editor` menubar select `SPM`, `Stats`, `Model Estimation`.
This will create a new module below `Factorial design`.
Click on the `Model Estimation` module, then `Dependency`, and click `OK` in the new window that pops up.
Next, from the `Batch Editor` menubar select `SPM`, `Stats`, `Contrast Manager`.
This will create a new module below `Model Estimation`.
Click on the `Contrast Manager` module, then `Dependency`, select `Model Estimation` and click `OK` in the new window that pops up.
Next, within the `Contrast Manager` module click on `Contrast Sessions` then `New: T-contrast`.
Edit the name to be `MOVIEON` and change the weights vector to be `1`.
Click `File`, `Save Batch` and save as RFX_YOURANALYSIS.mat in your home directory.
Click play, then inspect your results (spmT file) in your RFX_YOURANALYSIS folder in xjview once matlab says it's done processing.
# Viewing and exporting your data
View the results of your RFX analysis in xjview. To do this load the spmT_0001 file in your random effects folder.
<img src="Images/CA_xjview.png"></img>
Click `Done` and you should see your image open with a default p value of 0.001 and cluster size of 5. The p-value listed here is the p-value for each voxel and the cluster size is the minimum number of voxels that have to be next to each other to show up in our image.
Below the top right window pane select single T1
<img src="Images/CA_xjviewT1.png"></img>
Multiple comparisons here
<img src="Images/CA_xjviewThresholds.png"></img>
# Visualization
Once you have identified a significance threshold/cluster size that you are going to use, you can click on Render V (it should say "View" but is often obscured). This will make a simple surface representation of your data. You can screen shot this from your computer.
Mac: Click back to your desktop or finder, then press Command, Shift 4 at the same time. Click and drag to take a screen shot of the views you want.
PC: https://support.microsoft.com/en-us/help/13776/windows-use-snipping-tool-to-capture-screenshots
Next you can make some publication quality slice views by clicking `Slice View`, which is directly above Render. You can select how many columns and rows you want, and how far apart each slice should be. You can also view the slices in different orientations. You can take screen shots of the images the same way as above.
# General notes for writeup
Image Acquisition:
3T Siemens Prisma fMRI scanner equipped with a 32-channel head coil.
We collected a scout, anatomical scan, and two functional scans.
Anatomical scan parameters (MPRAGE)
* Voxel size: 1 x 1 x 1 mm
* Slices: 192 sagittal slices
* TE: 2.32 ms
* TR: 2300 ms
* Flip angle: 8 degrees
* Matrix size: 256 x 256
* FOV: 240 mm x 240 mm
* Acceleration: Grappa 2
Functional scan parameters
* Voxel size: 3 x 3 x 3 mm (3 mm isotropic)
* Slices: 40 transversal slices
* TE: 35 ms
* TR: 2500 ms
* Flip angle: 79 degrees
* Matrix size: 80 x 80
* FOV: 240 mm x 240 mm
* Acceleration: Grappa 2
* Measurements: 456
Image Preprocessing:
All imaging preprocessing and subsequent analyses were conducted in SPM12 (Wellcome Department of Cognitive Neurology) in conjunction with a suite of tools for preprocessing and analysis (https://github.com/ddwagner/SPM12w). Functional images were slice-time-corrected and realigned to account for temporal differences in slice acquisition and head motion, respectively. Resulting volumes were spatially normalized to the ICBM 152 template brain (Montreal Neurological Institute) and spatially smoothed using a 6-mm (FWHM) Gaussian kernel.
(gravity)=
# Gravity
## Newton's Law of Universal Gravitation
The gravitational force \\(F\\) that body A exerts on body B has a magnitude that is directly proportional to the mass of body A \\(m_a\\) and the mass of body B \\(m_b\\), and inversely proportional to the square of the distance between their centres \\(R\\).
Mathematically:
\\[F=\frac{Gm_am_b}{R^2}\\]
where \\(G\\) is the gravitational constant, equal to \\(6.674\times 10^{-11}Nm^2kg^{-2}\\).
Therefore, the gravitational force exerted on a mass \\(m\\) by a planet of mass \\(m_p\\) is:
\\[F=\frac{Gm_pm}{R_p^2}\\]
where \\(m_p\\) and \\(R_p\\) are the mass and radius of a planet respectively.
The local gravitational acceleration \\(g\\) is:
\\[a=\frac{F}{m}=\frac{Gm_p}{R_p^2}=g\\]
In terms of density \\(\rho\\) (substituting \\(m_p=\frac{4}{3}\pi\rho R_p^3\\)):
\\[g=\frac{4\pi\rho R_p^3G}{3R_p^2}=\frac{4\pi\rho R_pG}{3}\\]
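As a quick numeric check that both expressions give the same \\(g\\) (Earth's mass and radius below are assumed values, not given in the notes):

```
import numpy as np

# Assumed values for Earth
G = 6.674e-11                  # gravitational constant (N m^2 kg^-2)
m_E, R_E = 5.972e24, 6.371e6   # mass (kg) and mean radius (m)

g_mass = G * m_E / R_E**2               # g = G m_p / R_p^2
rho = m_E / (4 / 3 * np.pi * R_E**3)    # mean density
g_rho = 4 * np.pi * rho * R_E * G / 3   # g = 4 pi rho R_p G / 3
print(round(g_mass, 2), round(g_rho, 2))  # both ~9.82 m/s^2
```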
## Gravitational Potential Energy
The change in energy is the work done as a force \\(F\\) moves a body from position 1 (\\(R_1\\)) to position 2 (\\(R_2\\)). In the case of the change in gravitational potential energy \\(GPE\\) on Earth:
\\[F=-\frac{Gm_Em}{R^2}\\]
where \\(m_E\\) and \\(R\\) are the mass of the Earth and distance from Earth's centre respectively,
\\[GPE_2-GPE_1=-\int_{R_1}^{R_2}F(R)dR=Gm_Em(\frac{1}{R_1}-\frac{1}{R_2})\\]
Taking \\(R_2\\) to be infinitely far away from the Earth:
\\[GPE(\infty)-GPE(R_1)=Gm_Em(\frac{1}{R_1}-0)=\frac{Gm_Em}{R_1}\\]
Rearranging for \\(GPE(R_1)\\):
\\[GPE(R_1)=-\frac{Gm_Em}{R_1}\\]
For a body at an elevation \\(h\\) close to the surface, \\(GPE\\) is roughly equal to \\(mgh\\).
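A small numeric check of this near-surface approximation, again with assumed Earth values:

```
# Assumed values for Earth (not given in the notes)
G = 6.674e-11
m_E, R_E = 5.972e24, 6.371e6   # mass (kg) and mean radius (m)
m, h = 1.0, 100.0              # a 1 kg body raised by 100 m

exact = -G * m_E * m / (R_E + h) - (-G * m_E * m / R_E)  # GPE(R_E+h) - GPE(R_E)
g = G * m_E / R_E**2
approx = m * g * h
print(approx, abs(exact - approx) / approx)  # ~982 J, relative difference ~1.6e-5
```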
## Escape Velocity
To fully escape the gravitational field of a planet, the object must reach "infinity" where \\(GPE=0\\). Thus, taking position 2 as infinity:
\\[KE(1)+GPE(R_E)=KE(2)+GPE(\infty)\\]
To find the minimum velocity required, we set \\(KE(2)=0\\), and since \\(GPE(\infty)=0\\):
\\[KE(1)+GPE(R_E)=0\\]
\\[\frac{mv_e^2}{2}-\frac{Gm_Em}{R_E}=0\\]
Rearranging for escape velocity \\(v_e\\):
\\[v_e=\sqrt{\frac{2Gm_E}{R_E}}\\]
### Maxwell-Boltzmann distribution
At absolute temperature \\(T\\), the mean energy of monoatomic molecules is given by:
\\[\frac{3}{2}k_BT\\]
where \\(k_B\\) is the Boltzmann constant, equal to \\(1.38\times10^{-23}m^2kgs^{-2}K^{-1}\\).
Ignoring potential energy and assuming that the energy of the monoatomic molecules is only from their kinetic energy, we can equate the above equation with the kinetic energy equation:
\\[\frac{mv_{mean}^2}{2}=\frac{3}{2}k_BT\\]
where \\(m\\) is the mass of that monoatomic molecule, and \\(v_{mean}\\) is the mean velocity of the monoatomic molecules at temperature \\(T\\).
Rearranging for \\(v_{mean}\\):
\\[v_{mean}=\sqrt{\frac{3k_BT}{m}}\\]
The probability of a molecule having a velocity higher than some value \\(v\\) is given by:
\\[\frac{v}{v_{mean}}e^{-1.27(\frac{v}{v_{mean}})^2}\\]
## Tutorial Problem 5.3
What is the probability of Helium atoms escaping from the Moon's surface?
Given that the mass and radius of the Moon are \\(7.35\times10^{22}kg\\) and \\(1.74\times10^6m\\) respectively, average surface temperature on the Moon is \\(400K\\), and mass of a helium atom is \\(6.54\times10^{-27}kg\\).
```
import numpy as np
import matplotlib.pyplot as plt
def escape(m, R, G=6.674e-11): # function for calculating escape velocity given mass and radius
return np.sqrt((2*G*m)/R)
mM = 7.35e22 # mass of Moon (kg)
RM = 1.74e6 # radius of Moon (m)
ve = escape(mM, RM)
print("Escape velocity of the Moon is %.f m/s" % (ve))
def find_vmean(m, T, kB=1.38e-23):
return np.sqrt((3*kB*T)/m)
mHe = 6.54e-27 # mass of helium atom (kg)
TM = 400 # surface temperature on the Moon (T)
vm = find_vmean(mHe, TM)
print("Mean velocity of He atoms on the Moon is %.f m/s" % (vm))
def maxwell_boltzmann(v, v_mean): # maxwell-boltzmann distribution
return v/v_mean * np.exp(-1.27*(v/v_mean)**2)
v = np.linspace(0, 5000, 1001)
prob = maxwell_boltzmann(v, vm) # array of probabilities at different velocities
# plot probability distribution
fig = plt.figure(figsize=(10,8))
plt.plot(v, prob, 'k')
plt.plot(v[475], prob[475], 'ro', label='velocity >= %.fm/s, probability = %.2f' % (v[475], prob[475]))
plt.xlabel('velocity (m/s)')
plt.ylabel('probability')
plt.title('Probability distribution of He molecules with a velocity greater than some value v', fontsize=14)
plt.legend(loc='upper right', fontsize=12)
plt.grid(True)
plt.show()
```
### References
Course notes from Lecture 5 of the module ESE 95011 Mechanics
# Introduction to Biomechanics
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
## Biomechanics @ UFABC
```
from IPython.display import IFrame
IFrame('http://demotu.org', width='100%', height=500)
```
## Biomechanics
The origin of the word *Biomechanics* is evident:
$$ Biomechanics := bios \, (life) + mechanics $$
Professor Herbert Hatze, on a letter to the editors of the Journal of Biomechanics in 1974, proposed a (very good) definition for *the science called Biomechanics*:
> "*Biomechanics is the study of the structure and function of biological systems by means of the methods of mechanics.*"
Hatze H (1974) [The meaning of the term biomechanics](https://github.com/demotu/BMC/blob/master/courses/HatzeJB74biomechanics.pdf).
### Biomechanics & Mechanics
And Hatze, advocating for *Biomechanics to be a science of its own*, argues that Biomechanics **is not** simply Mechanics of (applied to) living systems:
> "*It would not be correct to state that 'Biomechanics is the study of the mechanical aspects of the structure and function of biological systems' because biological systems do not have mechanical aspects. They only have biomechanical aspects (otherwise mechanics, as it exists, would be sufficient to describe all phenomena which we now call biomechanical features of biological systems).*" Hatze (1974)
### Biomechanics vs. Mechanics
To support this argument, Hatze illustrates the difference between Biomechanics and the application of Mechanics, with an example of a javelin throw: studying the mechanics aspects of the javelin flight trajectory (use existing knowledge about aerodynamics and ballistics) vs. studying the biomechanical aspects of the phase before the javelin leaves the thrower’s hand (there are no established mechanical models for this system).
### Branches of Mechanics
**A good knowledge of Mechanics is a necessary, but not sufficient, condition for a good knowledge of Biomechanics**.
In fact, only a subset of Mechanics matters to Biomechanics, the Classical Mechanics subset, the domain of mechanics for bodies with moderate speeds $(\ll 3\times10^8\,m/s!)$ and not very small sizes $(\gg 3\times10^{-9}\,m!)$ as shown in the following diagram (image from [Wikipedia](http://en.wikipedia.org/wiki/Classical_mechanics)):
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/f/f0/Physicsdomains.svg/500px-Physicsdomains.svg.png" width=300 alt="Domains of mechanics"/></figure>
### Biomechanics & other Sciences I
One last point about the excellent letter from Hatze, already in 1974 he points for the following problem:
> "*The use of the term biomechanics imposes rather severe restrictions on its meaning because of the established definition of the term, mechanics. This is unfortunate, since the synonym Biomechanics, as it is being understood by the majority of biomechanists today, has a much wider meaning.*" Hatze (1974)
Although the term Biomechanics may sound new to you, it's not rare that people think the use of methods outside the realm of Mechanics as Biomechanics.
For instance, electromyography and thermography are two methods that although may be useful in Biomechanics, particularly the former, they clearly don't have any relation with Mechanics; Electromagnetism and Thermodynamics are other [branches of Physics](https://en.wikipedia.org/wiki/Branches_of_physics).
### Biomechanics & Engineering
Even seeing Biomechanics as a field of Science, as argued by Hatze, it's also possible to refer to Engineering Biomechanics considering that Engineering is "*the application of scientific and mathematical principles to practical ends*" [[The Free Dictionary](http://www.thefreedictionary.com/engineering)] and particularly that "*Engineering Mechanics is the application of Mechanics to solve problems involving common engineering elements*" [[Wikibooks]](https://en.wikibooks.org/wiki/Engineering_Mechanics), and, last but not least, that Biomedical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes [[Wikipedia](https://en.wikipedia.org/wiki/Biomedical_engineering)].
### Applications of Biomechanics
Biomechanics matters to fields of science and technology related to biology and health and it's also relevant for the development of synthetic systems inspired on biological systems, as in robotics. To illustrate the variety of applications of Biomechanics, this is the current list of topics covered in the Journal of Biomechanics:
```
from IPython.display import IFrame
IFrame('http://www.jbiomech.com/aims', width='100%', height=500)
```
### On the branches of Mechanics and Biomechanics I
Mechanics is a branch of the physical sciences that is concerned with the state of rest or motion of bodies that are subjected to the action of forces. In general, this subject can be subdivided into three branches: rigid-body mechanics, deformable-body mechanics, and fluid mechanics (Hibbeler, 2012; Ruina and Rudra, 2015).
(Classical) Mechanics is typically partitioned in Statics and Dynamics (Hibbeler, 2012; Ruina and Rudra, 2015).
In turn, Dynamics is divided in Kinematics and Kinetics.
This classification is clear; dynamics is the study of the motions of bodies and Statics is the study of forces in the absence of changes in motion. Kinematics is the study of motion without considering its possible causes (forces) and Kinetics is the study of the possible causes of motion.
### On the branches of Mechanics and Biomechanics II
Nevertheless, it's common in Biomechanics to adopt a slightly different classification: to partition it between Kinematics and Kinetics, and then Kinetics into Statics and Dynamics (David Winter, Nigg & Herzog, and Vladimir Zatsiorsky, among others, use this classification in their books). The rationale is that we first separate the study of motion considering or not its causes (forces). Partitioning (Bio)Mechanics in this way is useful because it is simpler to study and describe (measure) the kinematics of human motion and then move to the more complicated issue of understanding (measuring) the forces related to human motion.
Anyway, these different classifications reveal a certain contradiction between Mechanics (particularly from an engineering point of view) and Biomechanics; some scholars will say that this taxonomy in Biomechanics is simply wrong and it should be corrected to align with the Mechanics. Be aware.
### The future of Biomechanics
(Human) Movement Science combines many disciplines of science (such as, Physiology, Biomechanics, and Psychology) for the study of human movement. Professor Benno Nigg claims that with the growing concern for the well-being of humankind, Movement Science will have an important role:
> Movement science will be one of the most important and most recognized science fields in the twenty-first century... The future discipline of movement science has a unique opportunity to become an important contributor to the well-being of mankind.
Nigg BM (1993) [Sport science in the twenty-first century](http://www.ncbi.nlm.nih.gov/pubmed/8230394). Journal of Sports Sciences, 77, 343-347.
And so Biomechanics will also become an important contributor to the well-being of humankind.
### Biomechanics and the Biomedical Engineering at UFABC I
At the university level, the study of Mechanics is typically done in the disciplines Statics and Dynamics (rigid-body mechanics), Strength of Materials (deformable-body mechanics), and Mechanics of Fluids (fluid mechanics). Consequently, the study on Biomechanics must also cover these topics for a greater understanding of the structure and function of biological systems.
### Biomechanics and the Biomedical Engineering at UFABC II
The Biomedical Engineering degree at UFABC covers these topics for the study of biological systems in different courses: Ciência dos Materiais Biocompatíveis, Modelagem e Simulação de Sistemas Biomédicos, Métodos de Elementos Finitos aplicados a Sistemas Biomédicos, Mecânica dos Fluidos, Caracterização de Biomateriais, Sistemas Biológicos, and last but not least, [Biomecânica I](http://demotu.org/ensino/biomecanica-i/) & Biomecânica II.
How much of biological systems is in fact studied in these disciplines varies a lot. Anyway, none of these courses cover the study of human motion with implications to health, rehabilitation, and sports, except the last course. This is the reason why the courses [Biomecânica I](http://demotu.org/ensino/biomecanica-i/) & II focus on the analysis of the human movement.
### More on Biomechanics
The Wikipedia page on biomechanics is a good place to read more about Biomechanics:
```
from IPython.display import IFrame
IFrame('http://en.m.wikipedia.org/wiki/Biomechanics', width='100%', height=400)
```
## History of Biomechanics
Biomechanics progressed basically with the advancements in Mechanics and with the invention of instrumentations for measuring mechanical quantities and computing.
The development of Biomechanics was only possible because people became more interested in the understanding of the structure and function of biological systems and to apply these concepts to the progress of the humankind.
## Aristotle (384-322 BC)
Aristotle was the first to have written about the movement of animals in his works *On the Motion of Animals (De Motu Animalium)* and *On the Gait of Animals (De Incessu Animalium)* [[Works by Aristotle]](http://classics.mit.edu/Browse/index-Aristotle.html).
Aristotle clearly already knew what we nowadays refer as Newton's third law of motion:
"*For as the pusher pushes so is the pushed pushed, and with equal force.*" [Part 3, [On the Motion of Animals](http://classics.mit.edu/Aristotle/motion_animals.html)]
### Aristotle & the Scientific Revolution I
Although Aristotle's contributions were invaluable to humankind, to make his discoveries he doesn't seem to have employed anything similar to what we today refer as [scientific method](https://en.wikipedia.org/wiki/Scientific_method) (the systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses).
Most of the Physics of Aristotle was ambiguous or incorrect; for example, for him there was no motion without a force. He even deduced that speed was proportional to force and inversely proportional to resistance [[Book VII, Physics](http://classics.mit.edu/Aristotle/physics.7.vii.html)]. Perhaps Aristotle was too influenced by the observation of motion of a body under the action of a friction force, where this notion is not at all unreasonable.
### Aristotle & the Scientific Revolution II
If Aristotle performed any observation/experiment at all in his works, he probably was not good on that as, ironically, evinced in this part of his writing:
> "Males have more teeth than females in the case of men, sheep, goats, and swine; in the case of other animals observations have not yet been made". Aristotle [The History of Animals](http://classics.mit.edu/Aristotle/history_anim.html).
## Leonardo da Vinci (1452-1519)
<div><figure><img src='https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Da_Vinci_Vitruve_Luc_Viatour.jpg/353px-Da_Vinci_Vitruve_Luc_Viatour.jpg' width="240" alt="Vitruvian Man" style="float:right;margin: 0 0 0 20px;"/></figure></div>
Contributions of Leonardo to Biomechanics:
- Studies on the proportions of humans and animals
- Anatomy studies of the human body, especially the foot
- Studies on the mechanical function of muscles
<br><br>
*"Le proporzioni del corpo umano secondo Vitruvio", also known as the [Vitruvian Man](https://en.wikipedia.org/wiki/Vitruvian_Man), drawing by [Leonardo da Vinci](https://en.wikipedia.org/wiki/Leonardo_da_Vinci) circa 1490 based on the work of [Marcus Vitruvius Pollio](https://en.wikipedia.org/wiki/Vitruvius) (1st century BC), depicting a man in supposedly ideal human proportions (image from [Wikipedia](https://en.wikipedia.org/wiki/Vitruvian_Man)).*
## Giovanni Alfonso Borelli (1608-1679)
<div><figure><img src='https://upload.wikimedia.org/wikipedia/commons/d/d5/Giovanni_Borelli_-_lim_joints_%28De_Motu_Animalium%29.jpg' width="240" alt="Borelli" style="float:right;margin: 0 0 0 20px;"/></figure></div>
- [The father of biomechanics](https://en.wikipedia.org/wiki/Giovanni_Alfonso_Borelli); the first to use modern scientific method into 'Biomechanics' in his book [De Motu Animalium](http://www.e-rara.ch/doi/10.3931/e-rara-28707).
- Proposed that the levers of the musculoskeletal system magnify motion rather than force.
- Calculated the forces required for equilibrium in various joints of the human body before Newton published the laws of motion.
<br><br>
*Excerpt from the book De Motu Animalium*.
## More on the history of Biomechanics
See:
- http://courses.washington.edu/bioen520/notes/History_of_Biomechanics_(Martin_1999).pdf
- [http://biomechanics.vtheatre.net/doc/history.html](http://biomechanics.vtheatre.net/doc/history.html)
- Chapter 1 of Nigg and Herzog (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678)
### The International Society of Biomechanics
The biomechanics community has an official scientific society, the [International Society of Biomechanics](http://isbweb.org/), with a journal, the [Journal of Biomechanics](http://www.jbiomech.com), and an e-mail list, the [Biomch-L](http://biomch-l.isbweb.org):
```
from IPython.display import IFrame
IFrame('https://biomch-l.isbweb.org/forums/2-General-Discussion', width='100%', height=400)
```
## Biomechanics by (my) examples
Biomechanics has been applied to many different problems; let's see a few examples of applications of Biomechanics:
- Clinical gait analysis: [http://demotu.org/services/cga/](http://demotu.org/services/cga/)
- Biomechanics of sports: [http://demotu.org/biomechanics-of-the-bicycle-kick/](http://demotu.org/biomechanics-of-the-bicycle-kick/)
### Examples of Biomechanics Classes around the World
```
from IPython.display import IFrame
IFrame('http://pages.uoregon.edu/karduna/biomechanics/bme.htm', width='100%', height=400)
```
## Biomechanics classes @ UFABC
- **[Biomecânica I](http://demotu.org/ensino/biomecanica-i/)**
- **[Biomecânica II](http://demotu.org/ensino/biomecanica-ii/)**
## Problems
1. Go to [Biomechanics Classes on the Web](http://pages.uoregon.edu/karduna/biomechanics/) to visit websites of biomechanics classes around the world and find out how biomechanics is studied in different fields.
2. Find examples of applications of biomechanics in different areas.
3. Watch the video [The Weird World of Eadweard Muybridge](http://youtu.be/5Awo-P3t4Ho) to learn about [Eadweard Muybridge](http://en.wikipedia.org/wiki/Eadweard_Muybridge), an important person to the development of instrumentation for biomechanics.
4. Think about practical problems in nature that can be studied in biomechanics with simple approaches (simple modeling and low-tech methods) or very complicated approaches (complex modeling and high-tech methods).
5. What might the study of the biomechanics of athletes, children, the elderly, persons with disabilities, other animals, and computer animation for the cinema industry have in common, and how might they differ?
6. Visit the website of the Laboratory of Biomechanics and Motor Control at UFABC and find out what we do and if there is anything you are interested in.
7. Is there anything in biomechanics that interests you? How could you pursue this interest?
## References
- [Biomechanics - Wikipedia, the free encyclopedia](http://en.wikipedia.org/wiki/Biomechanics)
- [Mechanics - Wikipedia, the free encyclopedia](http://en.wikipedia.org/wiki/Mechanics)
- [International Society of Biomechanics](http://isbweb.org/)
- [Biomech-l, the biomechanics' e-mail list](http://biomch-l.isbweb.org/)
- [Journal of Biomechanics' aims](http://www.jbiomech.com/aims)
- <a href="http://courses.washington.edu/bioen520/notes/History_of_Biomechanics_(Martin_1999).pdf">A Genealogy of Biomechanics</a>
- Hatze H (1974) [The meaning of the term biomechanics](https://github.com/demotu/BMC/blob/master/courses/HatzeJB74biomechanics.pdf). Journal of Biomechanics, 7, 189–190.
- Hibbeler RC (2012) [Engineering Mechanics: Statics](http://books.google.com.br/books?id=PSEvAAAAQBAJ). Prentice Hall; 13 edition.
- Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
- Ruina A, Rudra P (2015) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, EUA: Wiley.
- Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics.
- Zatsiorsky VM (2002) [Kinetics of human motion](http://books.google.com.br/books?id=wp3zt7oF8a0C). Human Kinetics.
```
%load_ext autoreload
%autoreload 2
from deluca.envs import PlanarQuadrotor
from deluca.envs.classic._planar_quadrotor import dissipative
from deluca.agents import ILQR
from deluca.agents import ILC
from deluca.agents import IGPC
from deluca.agents._ilqr import rollout
import numpy as np
global_log = []
wind = 0.4
angle = 0.0
wind_func = dissipative
T = 50
ALPHA = 0.2
LR = 0.0015625
env_true, env_sim = PlanarQuadrotor(wind=wind, wind_func=wind_func), PlanarQuadrotor(wind_func=wind_func)
print('-------------- warmup on ilqr ----------------')
ilqr_sim = ILQR()
c, _ = ilqr_sim.train(env_sim, 20)
print(f"Warmup cost is {c}")
global_log += [f"Warmup cost is {c}"]
print('----------- compute zero_cost -----------')
_,_,ZEROCOST = rollout(env_true, ilqr_sim.U, ilqr_sim.k, ilqr_sim.K, ilqr_sim.X)
print('ZEROCOST:' + str(ZEROCOST))
global_log += ['ZEROCOST: ' + str(ZEROCOST)]
print('-------------- ilqr_true ----------------')
ilqr_true = ILQR()
c, log = ilqr_true.train(env_true, T, ilqr_sim.U)
global_log += log[5:]
print(global_log)
```

### Hyperparameter Search - ILC Alpha

```
ilc = ILC()
c_min = 1000000000
good_alpha = 1.0
# for alpha in [0.1,0.2,0.5, 0.8, 1.0, 2.0, 5.0, 10.0]:
for alpha in [ALPHA]:
print(f"Trying alpha {alpha}")
c, log = ilc.train(env_true, env_sim, T, ilqr_sim.U, ilqr_sim.k, ilqr_sim.K, ilqr_sim.X, ref_alpha=alpha)
if alpha == ALPHA: # best alpha
global_log += log
if c < c_min:
c_min = c
good_alpha = alpha
print(f"Better Min Changing alpha to {good_alpha}")
env_true.close(), env_sim.close()
```

### Hyperparameter Search - IGPC Alpha

```
igpc = IGPC()
U = ilqr_sim.U
c_min = 1000000000
good_alpha = 100000
good_lr = 10000
# for alpha in [0.5, 1.0]:
# for lr in [0.001, 0.005, 0.01, 0.05, 0.1]:
# for alpha in [0.25,0.5,0.75,1.0]:
# for alpha in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
for alpha in [0.05]:
for lr in [0.004]:
print(f"Trying pair {alpha},{lr}")
c, log = igpc.train(env_true, env_sim, T, U, ilqr_sim.k, ilqr_sim.K, ilqr_sim.X, lr=lr, ref_alpha=alpha)
# _,_,_,_,c = Eastman_closed(env_true, env_sim, U, 10, k, K, X, lr=lr, ref_alpha=alpha)
if alpha == 0.05 and lr == 0.004: # best alpha/lr
global_log += log
if c < c_min:
c_min = c
good_alpha = alpha
good_lr = lr
print(f"Better Min Changing alpha,lr to {good_alpha},{lr}")
print('good_alpha:' + str(good_alpha))
print('good_lr:' + str(good_lr))
print('c_min:' + str(c_min))
# Plot alpha = 0.5 experiment
import matplotlib.pyplot as plt
from pprint import pprint
import re
import sys
global_log_alpha_half = ''
for s in global_log:
global_log_alpha_half += s + '\n'
print(global_log_alpha_half)
def postprocess(a):
rs, vs = a
newrs, newvs = [], []
ref = 0
for (i, (r, v)) in enumerate(zip(rs, vs)):
r = int(r)
if i == len(rs) - 1:
newrs += [r]
newvs += [v]
else:
next_r = int(rs[i + 1])
newrs += list(range(r, next_r))
newvs += [v for _ in range(r, next_r)]
return newrs, newvs
### This file is to plot the results of the first experiment on Quadrotor
all_results = {}
for param in [-10.0]:
txt = global_log_alpha_half
pattern = f"ZEROCOST: ([0-9.]*)"
results = re.findall(pattern, txt)
zerocost = float(results[0])
pattern = f"(.*): t = [0-9]*, r = ([0-9]*), c = ([0-9.]*)"
results = list(zip(*re.findall(pattern, txt)))
if len(results) > 0:
results = results[0], list(map(float, results[1])), list(map(float, results[2]))
all_keys = list(set(results[0]))
results_dict = {}
for k in all_keys:
indices = [i for i, x in enumerate(results[0]) if x == k]
results_dict[k] = [
[0] + [results[1][i] for i in indices],
[zerocost] + [results[2][i] for i in indices],
]
all_results[param] = results_dict
print(all_results)
#print(postprocess(all_results[-10.0]['Eastman (closed+de+lr+0.01)']))
plt.figure()
fig, ax = plt.subplots(1, 1)
x_dict = {-10.0: 15}
for (i, w) in enumerate([-10.0]):
a = ax
px, py = list(range(0, 100)), [
all_results[w]["(iLQR)"][1][0] for _ in range(0, 100)
]
a.plot(px, py, label="iLQR (closed)", color="aqua")
px, py = postprocess(all_results[w]["(iLQR)"])
a.plot(px, py, label="iLQR (oracle)", color="black")
px, py = postprocess(all_results[w]["(iLC)"])
a.plot(px, py, label="iLC", color="green")
px, py = postprocess(all_results[w]["(iGPC)"])
a.plot(px, py, label="iGPC", color="red")
a.set_ylim(0, 5000)
a.set_xlim(0, 30)
a.set_xlabel("No. of Rollouts")
a.set_ylabel("Cost")
a.set_title(f"Quadrotor with dissipative={wind}")
# #plt.yscale('log')
a.legend(loc="best")
plt.show()
```
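The `postprocess` helper above expands sparse `(rollout, value)` samples into a dense step curve for plotting. Restating it as a self-contained sketch with a tiny worked example:

```python
def postprocess(a):
    """Expand sparse (rollout index, value) pairs into a dense step curve."""
    rs, vs = a
    newrs, newvs = [], []
    for i, (r, v) in enumerate(zip(rs, vs)):
        r = int(r)
        if i == len(rs) - 1:
            # last sample: emit it as-is
            newrs += [r]
            newvs += [v]
        else:
            # repeat the current value for every index up to the next sample
            next_r = int(rs[i + 1])
            newrs += list(range(r, next_r))
            newvs += [v for _ in range(r, next_r)]
    return newrs, newvs

print(postprocess(([0, 2, 5], [10.0, 8.0, 6.0])))
# -> ([0, 1, 2, 3, 4, 5], [10.0, 10.0, 8.0, 8.0, 8.0, 6.0])
```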
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow.keras
print(tensorflow.__version__)
# limit GPU memory usage: cap TensorFlow at 50% of the GPU memory and let the allocation grow as needed
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
import matplotlib.pyplot as plt
train_path = r'D:\ML_workspace\cotton_prediction\datasets\train'  # raw strings so '\t' is not read as a tab
test_path = r'D:\ML_workspace\cotton_prediction\datasets\test'
image_size = [224,224]
resnet = ResNet50(input_shape = image_size + [3], weights = 'imagenet', include_top = False)
# in the previous step we loaded the pretrained weights from the ImageNet model
# (ImageNet has 1000 categories; the weights download is about 90 MB).
# we set include_top=False because we choose the input size ourselves, i.e. [224,224],
# and because it drops the original input and classification layers so we can add our own.
# since the convolutional base is pretrained, we don't train the model entirely:
for layer in resnet.layers:
layer.trainable = False
folders = glob(r'D:\ML_workspace\cotton_prediction\datasets\train\*')
folders
resnet.output
# now we will flatten it, so that we can add the last number of nodes, on our wish.
x = Flatten()(resnet.output)
len(folders)
# because we have many categories we will go with softmax or else if we just had 2 categories, sigmoid would have been better.
prediction = Dense(len(folders), activation='softmax')(x)
model = Model(inputs = resnet.input, outputs = prediction)
model.summary()
# it will be a combination of 50 layers because we are using ResNet50.
# tell the model what cost and optimization method to use
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
train_datagen = ImageDataGenerator(rescale = 1./255,
zoom_range = 0.2,
shear_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
train_set = train_datagen.flow_from_directory(r'D:\ML_workspace\cotton_prediction\datasets\train',
target_size=(224, 224),
batch_size = 32,
class_mode = 'categorical')
test_set = test_datagen.flow_from_directory(r'D:\ML_workspace\cotton_prediction\datasets\test',
target_size = (224,224),
batch_size = 32,
class_mode = 'categorical')
# fit_generator is deprecated; Model.fit accepts generators directly
r = model.fit(train_set,
              validation_data=test_set,
              epochs=20,
              steps_per_epoch=len(train_set),
              validation_steps=len(test_set))
# ResNet50 has 25 million params.
# loss
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label = 'val_loss')
plt.legend()
plt.savefig('lossVal_loss_ResNet50')  # save before show(); show() clears the current figure
plt.show()
# accuracy
plt.plot(r.history['accuracy'], label= 'accuracy')
plt.plot(r.history['val_accuracy'], label='validation_accuracy')
plt.legend()
plt.savefig('acurracyVal_accuracy_ResNet50')  # save before show(); show() clears the current figure
plt.show()
y_pred = model.predict(test_set)
y_pred
len(y_pred)
y_pred.shape
final_pred = np.argmax(y_pred, axis=1)
final_pred
len(final_pred)
final_pred.shape
model.save('model_ResNet50.h5')
from tensorflow.keras.models import load_model
model = load_model('model_ResNet50.h5')  # filename must match the case used in save()
```
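To map the `final_pred` indices back to class names, Keras generators expose a `class_indices` dict (via `train_set.class_indices`). A minimal NumPy-only sketch with a hypothetical two-class mapping — the class names and softmax outputs below are illustrative, not from the cotton dataset:

```python
import numpy as np

# hypothetical mapping, of the kind produced by train_set.class_indices
class_indices = {'diseased': 0, 'healthy': 1}
index_to_class = {v: k for k, v in class_indices.items()}

y_pred = np.array([[0.9, 0.1],
                   [0.2, 0.8]])          # softmax outputs for two samples
final_pred = np.argmax(y_pred, axis=1)   # index of the highest-probability class per row
labels = [index_to_class[i] for i in final_pred]
print(labels)  # -> ['diseased', 'healthy']
```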
# This notebook is for SV3 results from QSO catalogs
```
import numpy as np
import fitsio
from matplotlib import pyplot as plt
import os
from astropy.table import Table,join,unique
from desitarget.sv3 import sv3_targetmask
```
## Look at z success based on Edmond's file
```
#read Edmond's file
fq = Table.read('/global/cscratch1/sd/edmondc/SHARE/QSO_CATALOG/QSO_catalog_SV3.fits')
#change tile column name to TILEID for matching
fq['TILE'].name = 'TILEID'
fq.dtype.names
#get the redrock redshift file for everything, then select quasars on their first observation with good fiberstatus
ff = fitsio.read('/global/cfs/cdirs/desi/spectro/redux/everest/zcatalog/ztile-sv3-dark-cumulative.fits')
wqso = ff['PRIORITY'] == 103400
wqso &= ff['COADD_FIBERSTATUS'] == 0
ff = ff[wqso]
print(len(ff))
#join, keeping everything from redrock; the repeat columns from the quasar file have '_QF' in the name
ffq = join(ff,fq,keys=['TARGETID','TILEID','LOCATION'],join_type='left',uniq_col_name='{col_name}{table_name}',table_names=['','_QF'])
print(len(ff),len(ffq),len(np.unique(ffq['TARGETID'])))
```
### Now, we have the data ready to look at redshift success rates
```
#A "success" is anything that made it to Edmond's file. Masked values in the _QF columns are rows with failures
selgood = ~ffq['Z_QF'].mask
print(len(ffq[selgood]))
#just for fun, compare to zwarn == 0 and see how many are both selgood and zwarn == 0
wz = ffq['ZWARN'] == 0
print(len(ffq[wz]),len(ffq[wz&selgood]))
efac = 1 # can change this if we want to plot vs efftime instead of TSNR2
nb = 10
rng=(20,75)
a = np.histogram(ffq[selgood]['TSNR2_QSO'],bins=nb,range=rng)
b = np.histogram(ffq['TSNR2_QSO'],bins=a[1])
#plt.clf()
fr = np.mean(a[0]/b[0])
dl = a[0]/b[0]#/fr
varl = dl*b[0]*(1.-dl) #variance for binomial distribution
wv = varl == 0
varl[wv] = 1
el = np.sqrt(varl)/b[0]#/fr
bs = a[1][1]-a[1][0]
vs = (a[1][:-1]+bs/2.)*efac
ol = np.ones(len(vs))*fr
#em = erf((vs-17)/40)*.98
chi2null = np.sum((dl-fr)**2./el**2.)
plt.errorbar(vs,dl,el,label=' chi2null/dof='+str(round(chi2null,1))+'/'+str(nb))#,fmt='ko')
plt.plot(vs,ol,'k:')
plt.title('SV3 QSO ')
plt.xlabel('TSNR2_QSO')
plt.ylabel('fraction of good z')
plt.legend(loc='upper left')
plt.show()
#check fraction with z > 1.6; some fluctuations in z success can be in regions we don't care as much about
efac = 1 # can change this if we want to plot vs efftime instead of TSNR2
nb = 10
rng=(20,75)
selz = ffq['Z_QF'] > 1.6
a = np.histogram(ffq[selgood&selz]['TSNR2_QSO'],bins=nb,range=rng)
b = np.histogram(ffq['TSNR2_QSO'],bins=a[1])
#plt.clf()
fr = np.mean(a[0]/b[0])
dl = a[0]/b[0]#/fr
varl = dl*b[0]*(1.-dl) #variance for binomial distribution
wv = varl == 0
varl[wv] = 1
el = np.sqrt(varl)/b[0]#/fr
bs = a[1][1]-a[1][0]
vs = (a[1][:-1]+bs/2.)*efac
ol = np.ones(len(vs))*fr
#em = erf((vs-17)/40)*.98
chi2null = np.sum((dl-fr)**2./el**2.)
plt.errorbar(vs,dl,el,label=' chi2null/dof='+str(round(chi2null,1))+'/'+str(nb))#,fmt='ko')
plt.plot(vs,ol,'k:')
plt.title('SV3 QSO ')
plt.xlabel('TSNR2_QSO')
plt.ylabel('fraction with good z > 1.6')
plt.legend(loc='upper left')
plt.show()
w = fq['Z']*0 == 0
print(len(fq[w]),len(fq))
ff = fitsio.read('/global/cfs/cdirs/desi/survey/catalogs/SV3/LSS/LSScats/test/QSOAlltiles_full.dat.fits')
print('total number of unique reachable QSO targets is '+str(len(ff)))
wo = ff['LOCATION_ASSIGNED'] == 1
print('total number of unique observed QSO targets is '+str(len(ff[wo])))
wz = ff['ZWARN'] == 0
print('total number of unique QSO targets with good redshifts is '+str(len(ff[wz])))
wq = ff['SPECTYPE'] == 'QSO'
print('total number of unique QSO targets with good redshifts and spectype qso is '+str(len(ff[wz&wq])))
print('targeting completeness is '+str(len(ff[wo])/len(ff)))
print('redshift success rate is '+str(len(ff[wz])/len(ff[wo])))
ngl = [len(ff[wz])]
ntm = [1]
for nt in range(1,7):
wt = ff['NTILE'] > nt
ntm.append(nt+1)
ngl.append(len(ff[wz&wt]))
plt.plot(ntm,np.array(ngl)/len(ff[wz]),'g-')
plt.xlabel('minimum number of tiles')
plt.ylabel('fraction of good QSO redshifts kept')
plt.show()
nz = np.loadtxt('/global/cfs/cdirs/desi/survey/catalogs/SV3/LSS/LSScats/test/QSO_N_nz.dat').transpose()
plt.plot(nz[0],nz[3],':',color='darkgreen',label='BASS/MzLS')
nz = np.loadtxt('/global/cfs/cdirs/desi/survey/catalogs/SV3/LSS/LSScats/test/QSO_S_nz.dat').transpose()
plt.plot(nz[0],nz[3],'--',color='lightgreen',label='DECaLS')
plt.legend()
plt.xlim(0.6,3.5)
plt.ylim(0,0.00007)
xl = [0.32,0.32]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [0.6,0.6]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [0.8,0.8]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [1.05,1.05]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [1.3,1.3]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [1.6,1.6]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
xl = [2.1,2.1]
yl = [0,0.001]
plt.plot(xl,yl,'k-')
plt.xlabel('redshift')
plt.ylabel(r'comoving number density ($h$/Mpc)$^3$')
plt.show()
zl = [0.8,1.05,1.3,1.6,2.1]
for i in range(0,len(zl)):
if i == len(zl)-1:
zmin=zl[0]
zmax=zl[-1]
else:
zmin = zl[i]
zmax = zl[i+1]
xils = np.loadtxt('/global/cscratch1/sd/ajross/SV3xi/xi024SV3_testQSO_S'+str(zmin)+str(zmax)+'5st0.dat').transpose()
xil = np.loadtxt('/global/cscratch1/sd/ajross/SV3xi/xi024SV3_testQSO'+str(zmin)+str(zmax)+'5st0.dat').transpose()
xiln = np.loadtxt('/global/cscratch1/sd/ajross/SV3xi/xi024SV3_testQSO_N'+str(zmin)+str(zmax)+'5st0.dat').transpose()
plt.plot(xil[0],xil[0]**2.*xiln[1],'^:',color='darkgreen',label='BASS/MzLS')
plt.plot(xil[0],xil[0]**2.*xils[1],'v--',color='lightgreen',label='DECaLS')
plt.plot(xil[0],xil[0]**2.*xil[1],'s-g',label='combined')
xilin = np.loadtxt(os.environ['HOME']+'/BAOtemplates/xi0Challenge_matterpower0.44.04.08.015.00.dat').transpose()
plt.plot(xilin[0],xilin[0]**2.*xilin[1]*1.1,'k-.',label=r'$\xi_{\rm 0}(z=0),b/D(z)=\sqrt{1.1},\beta=0.4$')
plt.title('QSO SV3, '+str(zmin)+' < z < '+str(zmax))
plt.xlim(0,80)
plt.ylim(-50,80)
plt.xlabel(r'$s$ (Mpc/h)')
plt.ylabel(r'$s^2\xi_0$')
plt.legend()
plt.show()
```
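The error bars in the plots above treat each TSNR2 bin as a set of binomial trials: with $n$ targets and $k$ good redshifts in a bin, the success fraction $p = k/n$ has variance $p(1-p)/n$. A self-contained NumPy sketch of that computation on mock data (the TSNR2 values and success flags below are randomly generated, not DESI data):

```python
import numpy as np

gen = np.random.default_rng(42)
tsnr = gen.uniform(20, 75, 5000)   # mock TSNR2_QSO values
good = gen.random(5000) < 0.95     # mock "good redshift" flags

nb = 10
k, edges = np.histogram(tsnr[good], bins=nb, range=(20, 75))  # successes per bin
n, _ = np.histogram(tsnr, bins=edges)                         # trials per bin

p = k / n
err = np.sqrt(p * (1.0 - p) / n)   # binomial standard error of the fraction
centers = 0.5 * (edges[:-1] + edges[1:])
print(p, err)
```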
## Looking good, though this should have been easy given high completeness
<h1>CS4618: Artificial Intelligence I</h1>
<h1>Gradient Descent</h1>
<h2>
Derek Bridge<br>
School of Computer Science and Information Technology<br>
University College Cork
</h2>
<h1>Initialization</h1>
$\newcommand{\Set}[1]{\{#1\}}$
$\newcommand{\Tuple}[1]{\langle#1\rangle}$
$\newcommand{\v}[1]{\pmb{#1}}$
$\newcommand{\cv}[1]{\begin{bmatrix}#1\end{bmatrix}}$
$\newcommand{\rv}[1]{[#1]}$
$\DeclareMathOperator{\argmax}{arg\,max}$
$\DeclareMathOperator{\argmin}{arg\,min}$
$\DeclareMathOperator{\dist}{dist}$
$\DeclareMathOperator{\abs}{abs}$
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import add_dummy_feature
from sklearn.linear_model import SGDRegressor
```
<h1>Acknowledgement</h1>
<ul>
<li>I based 5 of the diagrams on ones to be found in A. Géron: <i>Hands-On Machine Learning with Scikit-Learn, Keras &
TensorFlow (2nd edn)</i>, O'Reilly, 2019
</li>
</ul>
<h1>Gradient Descent</h1>
<ul>
<li><b>Gradient Descent</b> is a generic method for finding optimal solutions to problems that involve
minimizing a loss function.
</li>
<li>It is a <em>search</em> in the model's <b>parameter space</b> for values of the parameters that minimize
the loss function.
</li>
<li>Conceptually:
<ul>
<li>
It starts with an initial guess for the values of the parameters.
</li>
<li>
Then repeatedly:
<ul>
<li>It updates the parameter values — hopefully to reduce the loss.
</li>
</ul>
</li>
</ul>
<img src="images/fog.jpg" alt="" />
</li>
<li>
Ideally, it keeps doing this until <b>convergence</b> — changes to the parameter values do not result
in lower loss.
</li>
<li>The key to this algorithm is how to update the parameter values.</li>
</ul>
<h2>The update rule</h2>
<ul>
<li>To update the parameter values to reduce the loss:
<ul>
<li>Compute the gradient vector.
<ul>
<li>But this points 'uphill' and we want to go 'downhill'.</li>
<li>And we want to make 'baby steps' (see later), so we use a <b>learning rate</b>,
$\alpha$, which is between 0 and 1.
</li>
</ul>
</li>
<li>So subtract $\alpha$ times the gradient vector from $\v{\beta}$.</li>
</ul>
$$\v{\beta} \gets \v{\beta} - \alpha\nabla_{\v{\beta}}J(\v{X}, \v{y}, \v{\beta})$$
Or
$$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$$
</li>
<li>(BTW, this is vectorized. Naive loop implementations are wrong: they lose the
<em>simultaneous</em> update of the $\v{\beta}_j$.)
</li>
</ul>
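<ul>
    <li>As a quick illustration (a minimal sketch on hypothetical toy data, not part of the module's code), here is a single vectorized update, starting from $\v{\beta} = \v{0}$ on data generated by $y = 1 + 2x$:</li>
</ul>
```python
import numpy as np

# Hypothetical toy data: the column of 1s is already added; y = 1 + 2x exactly
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

m = X.shape[0]
alpha = 0.1
beta = np.zeros(2)

# One update: beta <- beta - (alpha / m) * X^T (X beta - y)
beta = beta - (alpha / m) * X.T.dot(X.dot(beta) - y)
beta  # a baby step from [0, 0] towards the optimum [1, 2]
```
<ul>
    <li>Repeating this update is all there is to the algorithm; the rest is deciding when to stop.</li>
</ul>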
<h2>Gradient descent algorithm</h2>
<ul>
<li>Pseudocode (in fact, this is for <b>batch gradient descent</b>, see later):
<ul style="background: lightgrey; list-style: none">
<li>initialize $\v{\beta}$ randomly
<li>
repeat until convergence
<ul>
<li>
$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$
</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>Baby steps</h2>
<ul>
<li>We'll use an example with a single feature/single parameter $\beta_1$ in order to visualize.</li>
<li>We update $\beta_1$ gradually, one baby step at a time, until the algorithm converges on minimum loss:
<figure>
<img src="images/baby_steps1.png" />
</figure>
</li>
<li>The size of the steps is determined by <!--a <b>hyperparameter</b> called--> the learning rate.
<!--
<ul>
<li>(Hyperparamters are explained in CS4619)</li>
</ul>
-->
</li>
<li>If the learning rate is too small, it will take many updates until convergence:
<figure>
<img src="images/baby_steps2.png" />
</figure>
</li>
<li>If the learning rate is too big, the algorithm might jump across the valley — it may even end up with
higher loss than before, making the next step bigger.
<ul>
<li>This might make the algorithm <b>diverge</b>.
</li>
</ul>
<figure>
<img src="images/baby_steps3.png" />
</figure>
</li>
</ul>
<h2>Why we need to scale for Gradient Descent</h2>
<ul>
<li>If we are doing OLS regression using the Normal Equation, we do not need to scale the features.
But if we are doing OLS regression using Gradient Descent, we do need to scale the features.
</li>
<li>If features have different ranges, it affects the shape of the 'bowl'.</li>
<li>E.g. features 1 and 2 have similar ranges of values — a 'bowl':
<figure>
<img src="images/scaled.png" />
</figure>
<ul>
<li>The algorithm goes straight towards the minimum.</li>
</ul>
</li>
<li>E.g. feature 1 has smaller values than feature 2 — an elongated 'bowl':
<figure>
<img src="images/unscaled.png" />
</figure>
<ul>
<li>Since feature 1 has smaller values, it takes a larger change in $\v{\beta}_1$ to affect
the loss function, which is why it is elongated.
</li>
<li>It takes more steps to get to the minimum — steeply down but not really towards the
goal, followed by a long march down a nearly flat valley.
</li>
<li>It makes it more difficult to choose a value for the learning rate that avoids divergence:
a value that suits one feature may not suit another.
</li>
</ul>
</li>
</ul>
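<ul>
    <li>A minimal sketch of this effect on hypothetical data (the numbers are made up for illustration): a learning rate that converges comfortably on the scaled data diverges on the unscaled data.</li>
</ul>
```python
import numpy as np

def J(X, y, beta):
    # OLS loss (assumes X contains all 1s in its first column)
    return np.mean((X.dot(beta) - y) ** 2) / 2.0

def run_batch_gd(X, y, alpha, num_iterations):
    m, n = X.shape
    beta = np.zeros(n)
    for _ in range(num_iterations):
        beta -= (alpha / m) * X.T.dot(X.dot(beta) - y)
    return J(X, y, beta)

# Hypothetical feature with a large range; y depends on it exactly linearly
x = np.array([1000.0, 2000.0, 3000.0, 4000.0])
y = 50.0 + 0.01 * x
X_unscaled = np.column_stack([np.ones_like(x), x])
x_std = (x - x.mean()) / x.std()
X_scaled = np.column_stack([np.ones_like(x), x_std])

with np.errstate(over="ignore", invalid="ignore"):
    loss_unscaled = run_batch_gd(X_unscaled, y, alpha=0.1, num_iterations=100)
loss_scaled = run_batch_gd(X_scaled, y, alpha=0.1, num_iterations=1000)
# loss_unscaled blows up (divergence); loss_scaled is essentially zero
```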
<h2>Variants of Gradient Descent</h2>
<ul>
<li>There are, in fact, three variants:
<ul>
<li>Batch Gradient Descent;</li>
<li>Stochastic Gradient Descent; and</li>
<li>Mini-batch Gradient Descent.</li>
</ul>
</li>
</ul>
<h1>Batch Gradient Descent</h1>
<ul>
<li>The pseudocode we saw earlier (repeated here for convenience) is Batch Gradient Descent:
<ul style="background: lightgrey; list-style: none">
<li>initialize $\v{\beta}$ randomly
<li>
repeat until convergence
<ul>
<li>
$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$
</li>
</ul>
</li>
</ul>
</li>
<li>Why is it called <em>Batch</em> Gradient Descent?
<ul>
<li>The update involves a calculation over the <em>entire</em> training set $\v{X}$
on every iteration.
</li>
<li>This can be slow for large training sets.</li>
</ul>
</li>
</ul>
<h2>Batch Gradient Descent in numpy</h2>
<ul>
<li>For the hell of it, let's implement it ourselves.</li>
<li>Again for the purposes of this explanation, we will use the entire dataset as our training set.</li>
</ul>
```
# Loss function for OLS regression (assumes X contains all 1s in its first column)
def J(X, y, beta):
return np.mean((X.dot(beta) - y) ** 2) / 2.0
def batch_gradient_descent_for_ols_linear_regression(X, y, alpha, num_iterations):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_iterations)
for iter in range(num_iterations):
beta -= (1.0 * alpha / m) * X.T.dot(X.dot(beta) - y)
Jvals[iter] = J(X, y, beta)
return beta, Jvals
# Use pandas to read the CSV file
df = pd.read_csv("../datasets/dataset_corkA.csv")
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Batch Gradient Descent
beta, Jvals = batch_gradient_descent_for_ols_linear_regression(X, y, alpha = 0.03, num_iterations = 500)
# Display beta
beta
```
<ul>
<li>Bear in mind that the coefficients it finds are on the scaled data.</li>
</ul>
<ul>
<li>It's a good idea to plot the values of the loss function against the number of iterations.
</li>
<li>For OLS regression done using Batch Gradient Descent, if the loss ever increases, then:
<ul>
<li>
the code might be incorrect; or
</li>
<li>
the value of $\alpha$ is too big and is causing divergence.
</li>
</ul>
</li>
</ul>
```
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<ul>
<li>The algorithm leaves us with the problem of choosing the number of iterations.</li>
<li>An alternative is to use a very large number of iterations but exit when the gradient vector
becomes tiny:
<ul>
<li>when its norm becomes smaller than <b>tolerance</b>, $\eta$.</li>
</ul>
</li>
</ul>
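<ul>
    <li>A sketch of that stopping rule, on hypothetical toy data ($\eta$ is the tolerance; the function and data names are illustrative only):</li>
</ul>
```python
import numpy as np

def batch_gd_with_tolerance(X, y, alpha, max_iterations, tolerance):
    m, n = X.shape
    beta = np.zeros(n)
    for _ in range(max_iterations):
        gradient = (1.0 / m) * X.T.dot(X.dot(beta) - y)
        # Exit early once the norm of the gradient vector is below the tolerance
        if np.linalg.norm(gradient) < tolerance:
            break
        beta -= alpha * gradient
    return beta

# Hypothetical toy data (column of 1s already added); the optimum is [1, 2]
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
beta = batch_gd_with_tolerance(X, y, alpha=0.1, max_iterations=10_000, tolerance=1e-8)
```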
<ul>
<li>Here's an interactive version that allows you to choose the value of $\alpha$ and to decide
whether to scale the data or not.
</li>
</ul>
```
def bgd(scale=True, alpha=0.03):
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale the data, if requested
if scale:
X = StandardScaler().fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Batch Gradient Descent
beta, Jvals = batch_gradient_descent_for_ols_linear_regression(X, y, alpha, num_iterations = 3000)
# Display beta
print("beta: ", beta)
# Plot loss
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
interactive_plot = interactive(bgd, {'manual': True},
scale=True, alpha=[("0.00009", 0.00009), ("0.0009", 0.0009), ("0.009", 0.009), ("0.09", 0.09), ("0.9", 0.9)])
interactive_plot
```
<ul>
<li>
Some people suggest a variant of Batch Gradient Descent in which the value of $\alpha$ is decreased
over time, i.e. its value in later iterations is smaller
<ul>
<li>Why do they suggest this? </li>
<li>And why isn't it necessary?
</li>
</ul>
</li>
<li>(But, we'll revisit this idea in Stochastic Gradient Descent.)</li>
</ul>
<h1>Stochastic Gradient Descent</h1>
<ul>
<li>As we saw, in each iteration, Batch Gradient Descent does a calculation on the entire
training set, which, for large training sets, may be slow.
</li>
<li><b>Stochastic Gradient Descent (SGD)</b>:
<ul>
<li>On each iteration, it picks just <em>one</em> training example $\v{x}$ at random and computes
the gradients on just that
one example
$$\v{\beta} \gets \v{\beta} - \alpha\v{x}^T(\v{x}\v{\beta} - y)$$
</li>
</ul>
</li>
<li>This gives a huge speed-up.</li>
<li>It enables us to train on huge training sets since only one example needs to be in memory in each iteration.
</li>
<li>But, because it is stochastic (the randomness), the loss will not necessarily decrease on each iteration:
<ul>
<li><em>On average</em>, the loss decreases, but in any one iteration, loss may go up or down.</li>
<li>Eventually, it will get close to the minimum, but it will continue to go up and down a bit.
<ul>
<li>So, once you stop it, the $\v{\beta}$ will be close to the best, but not
necessarily optimal.
</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>SGD in scikit-learn</h2>
<ul>
<li>The <code>fit</code> method of scikit-learn's <code>SGDRegressor</code> class is doing
what we have described:
<ul>
<li>You must scale the features but it inserts the extra column of 1s for us.</li>
<li>You can supply a <code>learning_rate</code> and lots of other things
(in the code below, we'll just use the defaults).
</li>
</ul>
</li>
<li>(Again, we'll train on the whole dataset.)</li>
</ul>
```
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Create the SGDRegressor and fit the model
sgd = SGDRegressor()
sgd.fit(X, y)
```
<h2>SGD in numpy</h2>
<ul>
<li>For the hell of it, let's implement a simple version ourselves</li>
<li>(Again, we'll train on the whole dataset.)</li>
</ul>
```
def stochastic_gradient_descent_for_ols_linear_regression(X, y, alpha, num_epochs):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_epochs * m)
for epoch in range(num_epochs):
for i in range(m):
rand_idx = np.random.randint(m)
xi = X[rand_idx:rand_idx + 1]
yi = y[rand_idx:rand_idx + 1]
beta -= alpha * xi.T.dot(xi.dot(beta) - yi)
Jvals[epoch * m + i] = J(X, y, beta)
return beta, Jvals
```
<ul>
<li>(One common alternative to the code above is to shuffle between epochs and remove the randomness within the
inner loop.)
</li>
</ul>
```
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Stochastic Gradient Descent
beta, Jvals = stochastic_gradient_descent_for_ols_linear_regression(X, y, alpha = 0.03, num_epochs = 50)
# Display beta
beta
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<ul>
<li>Quite a bumpy ride!</li>
<li>So, let's try <b>simulated annealing</b>.</li>
</ul>
<h2>Simulated Annealing</h2>
<ul>
<li>As we discussed, SGD does not settle at the minimum.</li>
<li>One solution is to gradually reduce the learning rate:
<ul>
<li>Updates start out 'large' so you make progress.</li>
<li>But, over time, updates get smaller, allowing SGD to settle at or near the global minimum.</li>
</ul>
</li>
<li>The function that determines how to reduce the learning rate is called the <b>learning schedule</b>.
<ul>
<li>Reduce it too quickly and you may not converge on or near to the global minimum.</li>
<li>Reduce it too slowly and you may still bounce around a lot and, if stopped after too few iterations,
may end up
with a suboptimal solution.
</li>
</ul>
</li>
</ul>
```
def learning_schedule(t):
return 5 / (t + 50)
def stochastic_gradient_descent_for_ols_linear_regression_with_simulated_annealing(X, y, num_epochs):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_epochs * m)
for epoch in range(num_epochs):
for i in range(m):
rand_idx = np.random.randint(m)
xi = X[rand_idx:rand_idx + 1]
yi = y[rand_idx:rand_idx + 1]
alpha = learning_schedule(epoch * m + i)
beta -= alpha * xi.T.dot(xi.dot(beta) - yi)
Jvals[epoch * m + i] = J(X, y, beta)
return beta, Jvals
# Run the Stochastic Gradient Descent
beta, Jvals = stochastic_gradient_descent_for_ols_linear_regression_with_simulated_annealing(X, y, num_epochs = 50)
# Display beta
beta
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<h1>Mini-Batch Gradient Descent</h1>
<ul>
<li>Batch Gradient Descent computes gradients from the full training set.</li>
<li>Stochastic Gradient Descent computes gradients from just one example.</li>
<li>Mini-Batch Gradient Descent lies between the two:
<ul>
<li>It computes gradients from a small randomly-selected subset of the training set, called a
<b>mini-batch</b>.
</li>
</ul>
</li>
<li>Since it lies between the two:
<ul>
<li>It may bounce less and get closer to the global minimum than SGD…
<ul>
<li>…although both of them can reach the global minimum with a good learning schedule.</li>
</ul>
</li>
<li>Its time and memory costs lie between the two.</li>
</ul>
</li>
</ul>
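<ul>
    <li>A minimal sketch (not part of the module's code) that mirrors the earlier numpy implementations, using the shuffle-between-epochs convention mentioned above:</li>
</ul>
```python
import numpy as np

def mini_batch_gd_for_ols_linear_regression(X, y, alpha, num_epochs, batch_size):
    m, n = X.shape
    beta = np.random.randn(n)
    for epoch in range(num_epochs):
        # Shuffle, then walk through the training set one mini-batch at a time
        shuffled = np.random.permutation(m)
        for start in range(0, m, batch_size):
            batch = shuffled[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            beta -= (alpha / len(batch)) * Xb.T.dot(Xb.dot(beta) - yb)
    return beta

# Hypothetical toy data (column of 1s already added); the optimum is [1, 2]
np.random.seed(0)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
beta = mini_batch_gd_for_ols_linear_regression(X, y, alpha=0.1, num_epochs=20_000, batch_size=2)
```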
<h1>The Normal Equation versus Gradient Descent</h1>
<ul>
<li>Efficiency/scaling-up to large training sets:
<ul>
<li>Normal Equation:
<ul>
<li>is linear in $m$, so can handle large training sets efficiently if they fit into
main memory;
</li>
<li>but it has to compute the inverse (or pseudo-inverse) of an $n \times n$ matrix, which takes
time between quadratic and cubic in $n$, and so is only feasible for smallish $n$ (up to
a few thousand).
</li>
</ul>
</li>
<li>Gradient Descent:
<ul>
<li>SGD scales really well to huge $m$;</li>
<li>All three Gradient Descent methods can handle huge $n$ (even 100s of 1000s).</li>
</ul>
</li>
</ul>
</li>
<li>Finding the global minimum for OLS regression:
<ul>
<li>Normal Equation: guaranteed to find the global minimum.</li>
<li>Gradient Descent: all a bit dependent on number of iterations, learning rate, learning schedule.</li>
</ul>
</li>
<li>Feature scaling:
<ul>
<li>Normal Equation: scaling is not needed.
</li>
<li>Gradient Descent: scaling <em>is</em> needed.</li>
</ul>
</li>
<li>Finally, Gradient Descent is a general method, whereas the Normal Equation is only for OLS regression.</li>
</ul>
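<ul>
    <li>For comparison, the Normal Equation solution is one line of numpy (a sketch on the same kind of hypothetical toy data):</li>
</ul>
```python
import numpy as np

# Hypothetical toy data (column of 1s already added); y = 1 + 2x exactly
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Normal Equation via the pseudo-inverse: beta = pinv(X) y
beta = np.linalg.pinv(X).dot(y)  # no learning rate, no iterations, no scaling
```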
<h1>Non-Convex Functions</h1>
<ul>
<li>The loss function for OLS regression is convex and it has a slope that never changes abruptly.
<ul>
<li>This gives us good 'guarantees' about reaching the minimum
(depending on such things as running for long enough, using a learning rate that isn't too high,
and whether we are using Batch, Mini-Batch or Stochastic Gradient Descent).
</li>
</ul>
</li>
<li>But Gradient Descent is a generic method: you can use it to find the minima of other loss functions.</li>
<li>But not all loss functions are convex, which can cause problems for Gradient Descent:
<figure>
<img src="images/local_minima.png" />
</figure>
<ul>
<li>The algorithm might converge to a local minimum, instead of the global minimum.</li>
<li>It may take a long time to cross a plateau.</li>
</ul>
</li>
<li>What do we do about this?
<ul>
<li>One thing is to prefer Stochastic Gradient Descent (or Mini-Batch Gradient Descent):
because of the way they 'bounce around', they might even escape a
local minimum, and might even get to the global minimum.
</li>
<li>In this context, simulated annealing is also useful: updates start out 'large' allowing these
algorithms to make
progress and even escape local minima; but, over time, updates get smaller, allowing
these algorithms to settle at or near the global minimum.
</li>
<li>But, with simulated annealing, if you reduce the learning rate too quickly, you may
still get stuck in a local minimum.
</li>
</ul>
</li>
</ul>
# Set working directory
```
import os
cwd = os.path.split(os.getcwd())
if cwd[-1] == 'tutorials':
os.chdir('..')
assert os.path.split(os.getcwd())[-1] == 'BRON'
```
# Import modules
```
import pandas as pd
import csv
import json
import statistics
import time
from memory_profiler import memory_usage
from typing import Tuple, Set, List, Dict
from path_search.path_search_BRON import main_attack
from meta_analysis.find_riskiest_software import load_graph_network, riskiest_software
```
# BRON-JSON
BRON-JSON is the JSON-based implementation of BRON. Run the next code cell to build BRON-JSON.
```
from download_threat_information.download_threat_data import _download_attack, _download_capec, _download_cwe, _download_cve, main
from download_threat_information.parsing_scripts.parse_attack_tactic_technique import link_tactic_techniques
from download_threat_information.parsing_scripts.parse_cve import parse_cve_file
from download_threat_information.parsing_scripts.parse_capec_cwe import parse_capec_cwe_files
from BRON.build_BRON import build_graph, BRON_PATH
# Download threat information
out_path = 'download_threat_information'
cve_years = ['2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011',
'2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020']
main(cve_years)
# Parse threat data
filename = os.path.join(out_path, 'raw_enterprise_attack.json')
link_tactic_techniques(filename, out_path)
cve_path = os.path.join(out_path, 'raw_CVE.json.gz')
save_path_file = "cve_map_cpe_cwe_score.json"
save_file = os.path.join(out_path, save_path_file)
parse_cve_file(cve_path, save_file)
capec_file = os.path.join(out_path, 'raw_CAPEC.json')
cwe_file = os.path.join(out_path, 'raw_CWE.zip')
parse_capec_cwe_files(capec_file, cwe_file, save_path=out_path)
# Build BRON
BRON_folder_path = 'full_data/full_output_data'
os.makedirs(BRON_folder_path, exist_ok=True)
input_data_folder = 'download_threat_information'
BRON_original_id_to_bron_id_path = os.path.join(BRON_folder_path, BRON_PATH)
os.makedirs(BRON_original_id_to_bron_id_path, exist_ok=True)
build_graph(BRON_folder_path, input_data_folder)
```
# BRON-Graph-DB
BRON-Graph-DB stores BRON in ArangoDB. Run the following code cell to connect to BRON-Graph-DB.
```
import arango
SERVER_IP = 'bron.alfa.csail.mit.edu'
USERNAME = 'guest'
PASSWORD = 'guest'
DB = 'BRON'
client = arango.ArangoClient(hosts=f"http://{SERVER_IP}:8529")
db = client.db(DB, username=USERNAME, password=PASSWORD, auth_method="basic")
```
# Path search queries
Two queries require searching graph paths. For them, the input is a CSV file of node IDs and the output is a CSV file with the IDs of nodes connected to each of the input nodes along an edge in BRON.
The first query finds the threats connected to the top 10 CVEs which involves 390 nodes. The second query finds the threats and vulnerabilities connected to the top 25 CWEs which involves 322K nodes.
# Query: Threats connected to top 10 CVEs
```
top_10_cves_starting_file = 'tutorials/top_10_cves_starting_point.csv'
top_10_cves_results_file = 'tutorials/top_10_cves_search_results.csv'
```
## BRON-JSON
```
top_10_cves_times_BRON_JSON = []
for i in range(30):
start_time = time.time()
main_attack(BRON_folder_path, top_10_cves_starting_file, top_10_cves_results_file, 'cve', length=False)
top_10_cves_times_BRON_JSON.append(time.time() - start_time)
print("Min: ", min(top_10_cves_times_BRON_JSON))
print("Max: ", max(top_10_cves_times_BRON_JSON))
print("Mean: ", statistics.mean(top_10_cves_times_BRON_JSON))
print("SD: ", statistics.stdev(top_10_cves_times_BRON_JSON))
def top_10_cves_path_search_BRON_JSON():
main_attack(BRON_folder_path, top_10_cves_starting_file, top_10_cves_results_file, 'cve', length=False)
top_10_cves_mem_usages_BRON_JSON = []
for i in range(30):
mem_usage = memory_usage(top_10_cves_path_search_BRON_JSON)
top_10_cves_mem_usages_BRON_JSON.append(max(mem_usage))
print("Min: ", min(top_10_cves_mem_usages_BRON_JSON))
print("Max: ", max(top_10_cves_mem_usages_BRON_JSON))
print("Mean: ", statistics.mean(top_10_cves_mem_usages_BRON_JSON))
print("SD: ", statistics.stdev(top_10_cves_mem_usages_BRON_JSON))
```
## BRON-Graph-DB
```
query_template_bron_id = """
FOR c IN {}
FILTER c.original_id == "{}"
RETURN c._key
"""
query_template_connections = """
WITH tactic, technique, capec, cwe, cve, cpe
FOR vertex
IN 1..5
{} "{}"
GRAPH "BRONGraph"
OPTIONS {{ uniqueVertices: 'global', bfs: true }}
RETURN DISTINCT vertex._key
"""
def execute_query(query: str) -> Set[str]:
assert db.aql.validate(query)
cursor = db.aql.execute(query)
results = {_ for _ in cursor}
return results
def convert_original_to_bron_id(data_type: str, original_ids: Tuple[str, ...]) -> Tuple[str, ...]:
bron_ids_list = []
for original_id in original_ids:
query_bron_id = query_template_bron_id.format(data_type, original_id)
results_bron_id = execute_query(query_bron_id)
bron_ids_list.append(results_bron_id.pop())
return tuple(bron_ids_list)
def save_search_results_csv(connections_list: List[Dict[str, Set[str]]], results_file: str):
csv_columns = ['tactic', 'technique', 'capec', 'cwe', 'cve', 'cpe']
with open(results_file, 'w') as f:
writer = csv.DictWriter(f, fieldnames=csv_columns)
writer.writeheader()
for data in connections_list:
writer.writerow(data)
def path_search_BRON_Graph_DB(data_type: str, starting_file: str, results_file: str, length: bool=False):
with open(starting_file) as f:
original_ids_list = [tuple(line) for line in csv.reader(f)]
original_ids = original_ids_list[0]
bron_ids = convert_original_to_bron_id(data_type, original_ids)
directions = ('INBOUND', 'OUTBOUND')
connections_list = [] # List of dictionaries for each ID
for bron_id in bron_ids:
connections = {'tactic': set(), 'technique': set(), 'capec': set(), 'cwe': set(), 'cve': set(), 'cpe': set()}
connections[data_type].add(bron_id) # Add known connection of itself
full_bron_id = f'{data_type}/{bron_id}'
for direction in directions:
query_connections = query_template_connections.format(direction, full_bron_id)
results_connections = execute_query(query_connections)
for result in results_connections:
result_split = result.split('_')
connections[result_split[0]].add(result)
if length: # Store number of data types instead of IDs
connections_count = dict()
for data_type_key, entries in connections.items():
connections_count[data_type_key] = len(entries)
connections_list.append(connections_count)
else:
connections_list.append(connections)
save_search_results_csv(connections_list, results_file)
top_10_cves_times_BRON_Graph_DB = []
for i in range(30):
start_time = time.time()
path_search_BRON_Graph_DB('cve', top_10_cves_starting_file, top_10_cves_results_file)
top_10_cves_times_BRON_Graph_DB.append(time.time() - start_time)
print("Min: ", min(top_10_cves_times_BRON_Graph_DB))
print("Max: ", max(top_10_cves_times_BRON_Graph_DB))
print("Mean: ", statistics.mean(top_10_cves_times_BRON_Graph_DB))
print("SD: ", statistics.stdev(top_10_cves_times_BRON_Graph_DB))
def top_10_cves_path_search_BRON_Graph_DB():
path_search_BRON_Graph_DB('cve', top_10_cves_starting_file, top_10_cves_results_file)
top_10_cves_mem_usages_BRON_Graph_DB = []
for i in range(30):
mem_usage = memory_usage(top_10_cves_path_search_BRON_Graph_DB)
top_10_cves_mem_usages_BRON_Graph_DB.append(max(mem_usage))
print("Min: ", min(top_10_cves_mem_usages_BRON_Graph_DB))
print("Max: ", max(top_10_cves_mem_usages_BRON_Graph_DB))
print("Mean: ", statistics.mean(top_10_cves_mem_usages_BRON_Graph_DB))
print("SD: ", statistics.stdev(top_10_cves_mem_usages_BRON_Graph_DB))
```
# Query: Threats and vulnerabilities connected to top 25 CWEs
```
top_25_cwes_starting_file = 'tutorials/top_25_cwes_starting_point.csv'
top_25_cwes_results_file = 'tutorials/top_25_cwes_search_results.csv'
```
## BRON-JSON
```
top_25_cwes_times_BRON_JSON = []
for i in range(30):
start_time = time.time()
main_attack(BRON_folder_path, top_25_cwes_starting_file, top_25_cwes_results_file, 'cwe', length=False)
top_25_cwes_times_BRON_JSON.append(time.time() - start_time)
print("Min: ", min(top_25_cwes_times_BRON_JSON))
print("Max: ", max(top_25_cwes_times_BRON_JSON))
print("Mean: ", statistics.mean(top_25_cwes_times_BRON_JSON))
print("SD: ", statistics.stdev(top_25_cwes_times_BRON_JSON))
def top_25_cwes_path_search_BRON_JSON():
main_attack(BRON_folder_path, top_25_cwes_starting_file, top_25_cwes_results_file, 'cwe', length=False)
top_25_cwes_mem_usages_BRON_JSON = []
for i in range(30):
mem_usage = memory_usage(top_25_cwes_path_search_BRON_JSON)
top_25_cwes_mem_usages_BRON_JSON.append(max(mem_usage))
print("Min: ", min(top_25_cwes_mem_usages_BRON_JSON))
print("Max: ", max(top_25_cwes_mem_usages_BRON_JSON))
print("Mean: ", statistics.mean(top_25_cwes_mem_usages_BRON_JSON))
print("SD: ", statistics.stdev(top_25_cwes_mem_usages_BRON_JSON))
```
## BRON-Graph-DB
```
top_25_cwes_times_BRON_Graph_DB = []
for i in range(30):
start_time = time.time()
path_search_BRON_Graph_DB('cwe', top_25_cwes_starting_file, top_25_cwes_results_file)
top_25_cwes_times_BRON_Graph_DB.append(time.time() - start_time)
print("Min: ", min(top_25_cwes_times_BRON_Graph_DB))
print("Max: ", max(top_25_cwes_times_BRON_Graph_DB))
print("Mean: ", statistics.mean(top_25_cwes_times_BRON_Graph_DB))
print("SD: ", statistics.stdev(top_25_cwes_times_BRON_Graph_DB))
def top_25_cwes_path_search_BRON_Graph_DB():
path_search_BRON_Graph_DB('cwe', top_25_cwes_starting_file, top_25_cwes_results_file)
top_25_cwes_mem_usages_BRON_Graph_DB = []
for i in range(30):
mem_usage = memory_usage(top_25_cwes_path_search_BRON_Graph_DB)
top_25_cwes_mem_usages_BRON_Graph_DB.append(max(mem_usage))
print("Min: ", min(top_25_cwes_mem_usages_BRON_Graph_DB))
print("Max: ", max(top_25_cwes_mem_usages_BRON_Graph_DB))
print("Mean: ", statistics.mean(top_25_cwes_mem_usages_BRON_Graph_DB))
print("SD: ", statistics.stdev(top_25_cwes_mem_usages_BRON_Graph_DB))
```
# Query: Riskiest software
This query outputs the Affected Product Configuration with the highest sum of CVSS scores for connected Vulnerabilities, which involves 2,453K nodes.
## BRON-JSON
```
riskiest_software_times_BRON_JSON = []
for i in range(30):
start_time = time.time()
graph = load_graph_network(f'{BRON_folder_path}/BRON.json')
riskiest_software(graph)
riskiest_software_times_BRON_JSON.append(time.time() - start_time)
print("Min: ", min(riskiest_software_times_BRON_JSON))
print("Max: ", max(riskiest_software_times_BRON_JSON))
print("Mean: ", statistics.mean(riskiest_software_times_BRON_JSON))
print("SD: ", statistics.stdev(riskiest_software_times_BRON_JSON))
def riskiest_software_BRON_JSON():
graph = load_graph_network(f'{BRON_folder_path}/BRON.json')
riskiest_software(graph)
riskiest_software_mem_usages_BRON_JSON = []
for i in range(30):
max_mem_usage = max(memory_usage(riskiest_software_BRON_JSON))
riskiest_software_mem_usages_BRON_JSON.append(max_mem_usage)
print("Min: ", min(riskiest_software_mem_usages_BRON_JSON))
print("Max: ", max(riskiest_software_mem_usages_BRON_JSON))
print("Mean: ", statistics.mean(riskiest_software_mem_usages_BRON_JSON))
print("SD: ", statistics.stdev(riskiest_software_mem_usages_BRON_JSON))
```
## BRON-Graph-DB
```
query_riskiest_software = """
WITH cve, cpe
FOR c in cpe
LET cvss_scores = (
FOR vertex
IN 1..1
INBOUND c._id
CveCpe
OPTIONS { uniqueVertices: 'global', bfs: true }
RETURN vertex.metadata.weight
)
RETURN { cpe_node: c.original_id, cvss_score: SUM(cvss_scores) }
"""
def execute_query(query: str) -> Set[str]:
assert db.aql.validate(query)
cursor = db.aql.execute(query)
results = [_ for _ in cursor]
return results
def riskiest_software_BRON_Graph_DB():
results_riskiest_software = execute_query(query_riskiest_software)
highest_software = set()
highest_score = -1
for cpe_cvss_dict in results_riskiest_software:
cpe_node = cpe_cvss_dict['cpe_node']
cvss_score = cpe_cvss_dict['cvss_score']
if cvss_score > highest_score:
highest_software = {cpe_node}
highest_score = cvss_score
elif cvss_score == highest_score:
highest_software.add(cpe_node)
return highest_software, highest_score
riskiest_software_times_BRON_Graph_DB = []
for i in range(30):
start_time = time.time()
riskiest_software_BRON_Graph_DB()
riskiest_software_times_BRON_Graph_DB.append(time.time() - start_time)
print("Min: ", min(riskiest_software_times_BRON_Graph_DB))
print("Max: ", max(riskiest_software_times_BRON_Graph_DB))
print("Mean: ", statistics.mean(riskiest_software_times_BRON_Graph_DB))
print("SD: ", statistics.stdev(riskiest_software_times_BRON_Graph_DB))
riskiest_software_mem_usages_BRON_Graph_DB = []
for i in range(30):
max_mem_usage = max(memory_usage(riskiest_software_BRON_Graph_DB))
riskiest_software_mem_usages_BRON_Graph_DB.append(max_mem_usage)
print("Min: ", min(riskiest_software_mem_usages_BRON_Graph_DB))
print("Max: ", max(riskiest_software_mem_usages_BRON_Graph_DB))
print("Mean: ", statistics.mean(riskiest_software_mem_usages_BRON_Graph_DB))
print("SD: ", statistics.stdev(riskiest_software_mem_usages_BRON_Graph_DB))
```
# Photometric monitoring
## Setup
```
%load_ext autoreload
%autoreload 2
import glob as glob
import matplotlib as mpl
import matplotlib.patheffects as PathEffects
import matplotlib.pyplot as plt
import matplotlib.transforms as transforms
import numpy as np
import pandas as pd
import seaborn as sns
import corner
import json
import pathlib
import pickle
import utils
import warnings
from astropy import constants as const
from astropy import units as uni
from astropy.io import ascii, fits
from astropy.time import Time
from mpl_toolkits.axes_grid1 import ImageGrid
# Default figure dimensions
FIG_WIDE = (11, 5)
FIG_LARGE = (8, 11)
# Figure style
sns.set(style="ticks", palette="colorblind", color_codes=True, context="talk")
params = utils.plot_params()
plt.rcParams.update(params)
```
## [Download data](https://www.dropbox.com/sh/74sihxztgd82jjz/AADgB_f5RYc3De3IEioUGAfha?dl=1)
Unzip this into a folder named `data` in the same level as this notebook
## Load
```
dirpath = "data/photometric_act"
mid_transit_times = {
"Transit 1": "2016-06-22 08:18:00",
"Transit 2": "2017-06-10 07:05:00",
"Transit 3": "2018-06-04 07:24:00",
"Transit 4": "2018-06-21 06:56",
"Transit 5": "2018-08-22 03:30:00",
}
# Load processed data
df_stell_data = pd.read_csv(
f"{dirpath}/HATP23_lc_norm_v3.csv",
names=["t_HJD", "t_UT", "f"],
parse_dates=[1],
infer_datetime_format=True,
)
# Load model data
df_stell_model = pd.read_csv(
f"{dirpath}/HATP23_GP_model_Prot7_v3.csv", names=["t_HJD", "f", "f_err"]
)
```
## Plot
```
fig, ax = plt.subplots(figsize=FIG_WIDE)
ax.plot(df_stell_data["t_HJD"], df_stell_data["f"], "r.", alpha=0.5, mew=0)
ax.plot(df_stell_model["t_HJD"], df_stell_model["f"], color="grey")
f_d = df_stell_model["f"] - df_stell_model["f_err"]
f_u = df_stell_model["f"] + df_stell_model["f_err"]
ax.fill_between(df_stell_model["t_HJD"], f_d, f_u, alpha=0.3, lw=0, color="grey")
p_kwargs = {"ls": "--", "c": "darkgrey", "lw": 1.0}
trans = transforms.blended_transform_factory(ax.transData, ax.transAxes)
for transit_name, t0 in mid_transit_times.items():
t_mid = Time(t0).jd - 2.4e6
ax.axvline(t_mid, **p_kwargs)
ax.annotate(
transit_name,
xy=(t_mid, 0.1),
xycoords=trans,
ha="right",
rotation=90.0,
fontsize=12,
)
# Save
ax.set_ylim(0.88, 0.98)
ax.set_xlabel("Date (HJD - 2400000)")
ax.set_ylabel("Flux relative to comparisons")
fig.tight_layout()
fig.set_size_inches(FIG_WIDE)
utils.savefig("../paper/figures/photometric_act/phot_mon_full.pdf")
```
| github_jupyter |
# 09 - Ensemble Methods - Bagging
by [Alejandro Correa Bahnsen](albahnsen.com/)
version 0.2, May 2016
## Part of the class [Machine Learning for Security Informatics](https://github.com/albahnsen/ML_SecurityInformatics)
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Kevin Markham](https://github.com/justmarkham)
Why are we learning about ensembling?
- Very popular method for improving the predictive performance of machine learning models
- Provides a foundation for understanding more sophisticated models
## Lesson objectives
Students will be able to:
- Define ensembling and its requirements
- Identify the two basic methods of ensembling
- Decide whether manual ensembling is a useful approach for a given problem
- Explain bagging and how it can be applied to decision trees
- Explain how out-of-bag error and feature importances are calculated from bagged trees
- Explain the difference between bagged trees and Random Forests
- Build and tune a Random Forest model in scikit-learn
- Decide whether a decision tree or a Random Forest is a better model for a given problem
# Part 1: Introduction
Ensemble learning is a widely studied topic in the machine learning community. The main idea behind
the ensemble methodology is to combine several individual base classifiers into a single
classifier that outperforms each of them.
Nowadays, ensemble methods are among
the most popular and well-studied machine learning techniques; notably,
since 2009 all the first-place and second-place winners of the [KDD Cup](https://www.sigkdd.org/kddcup/) have used ensemble methods. The core
principle in ensemble learning is to induce random perturbations into the learning procedure in
order to produce several different base classifiers from a single training set, and then combine the
base classifiers to make the final prediction. Several methods have been proposed to induce
these random perturbations and thereby create the different base classifiers, in
particular:
* bagging
* pasting
* random forests
* random patches
Finally, after the base classifiers
are trained, they are typically combined using either:
* majority voting
* weighted voting
* stacking
There are three main reasons why ensemble
methods perform better than single models: statistical, computational, and representational. First, from a statistical point of view, when the learning set is too
small, an algorithm can find several good models within the search space that achieve the same
performance on the training set $\mathcal{S}$. Nevertheless, without a validation set, there is
a risk of choosing the wrong model. The second reason is computational: in general, algorithms
rely on some local search optimization and may get stuck in a local optimum. An ensemble may
mitigate this by starting the local searches from different points of the search space. The last
reason is representational. In most cases, for a learning set of finite size, the true function
$f$ cannot be represented by any of the candidate models. By combining several models in an
ensemble, it may be possible to obtain a model with a larger coverage of the space of
representable functions.

## Example
Let's pretend that instead of building a single model to solve a binary classification problem, you created **five independent models**, and each model was correct about 70% of the time. If you combined these models into an "ensemble" and used their majority vote as a prediction, how often would the ensemble be correct?
```
import numpy as np
# set a seed for reproducibility
np.random.seed(1234)
# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations
mod1 = np.random.rand(1000)
mod2 = np.random.rand(1000)
mod3 = np.random.rand(1000)
mod4 = np.random.rand(1000)
mod5 = np.random.rand(1000)
# each model independently predicts 1 (the "correct response") if its random number is greater than 0.3
preds1 = np.where(mod1 > 0.3, 1, 0)
preds2 = np.where(mod2 > 0.3, 1, 0)
preds3 = np.where(mod3 > 0.3, 1, 0)
preds4 = np.where(mod4 > 0.3, 1, 0)
preds5 = np.where(mod5 > 0.3, 1, 0)
# print the first 20 predictions from each model
print(preds1[:20])
print(preds2[:20])
print(preds3[:20])
print(preds4[:20])
print(preds5[:20])
# average the predictions and then round to 0 or 1
ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)
# print the ensemble's first 20 predictions
print(ensemble_preds[:20])
# how accurate was each individual model?
print(preds1.mean())
print(preds2.mean())
print(preds3.mean())
print(preds4.mean())
print(preds5.mean())
# how accurate was the ensemble?
print(ensemble_preds.mean())
```
**Note:** As you add more models to the voting process, the probability of error decreases, which is known as [Condorcet's Jury Theorem](http://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem).
## What is ensembling?
**Ensemble learning (or "ensembling")** is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.
- **Regression:** take the average of the predictions
- **Classification:** take a vote and use the most common prediction, or take the average of the predicted probabilities
For ensembling to work well, the models must have the following characteristics:
- **Accurate:** they outperform the null model
- **Independent:** their predictions are generated using different processes
**The big idea:** If you have a collection of individually imperfect (and independent) models, the "one-off" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.
There are two basic **methods for ensembling:**
- Manually ensemble your individual models
- Use a model that ensembles for you
### Theoretical performance of an ensemble
If we assume that each one of the $T$ base classifiers has a probability $\rho$ of
being correct, the probability of an ensemble making the correct decision, assuming independence,
denoted by $P_c$, can be calculated using the binomial distribution
$$P_c = \sum_{j>T/2}^{T} {{T}\choose{j}} \rho^j(1-\rho)^{T-j}.$$
Furthermore, it can be shown that if $T$ is odd and $T\ge3$, then:
$$
\lim_{T \to \infty} P_c= \begin{cases}
1 &\mbox{if } \rho>0.5 \\
0 &\mbox{if } \rho<0.5 \\
0.5 &\mbox{if } \rho=0.5 ,
\end{cases}
$$
leading to the conclusion that
$$
\rho \ge 0.5 \quad \text{and} \quad T\ge3 \quad \Rightarrow \quad P_c\ge \rho.
$$
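This formula is easy to check numerically; the sketch below (using only the standard library, with `ensemble_accuracy` as our own helper name) evaluates it for odd $T$ and reproduces the five-model example from Part 1:

```
from math import comb

def ensemble_accuracy(rho, T):
    """P_c: probability that a majority vote of T independent base
    classifiers, each correct with probability rho, is correct (odd T)."""
    return sum(comb(T, j) * rho**j * (1 - rho)**(T - j)
               for j in range(T // 2 + 1, T + 1))

print(ensemble_accuracy(0.7, 5))    # five 70%-accurate models -> ~0.837
print(ensemble_accuracy(0.7, 25))   # more voters push P_c toward 1
```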
# Part 2: Manual ensembling
What makes a good manual ensemble?
- Different types of **models**
- Different combinations of **features**
- Different **tuning parameters**

*Machine learning flowchart created by the [winner](https://github.com/ChenglongChen/Kaggle_CrowdFlower) of Kaggle's [CrowdFlower competition](https://www.kaggle.com/c/crowdflower-search-relevance)*
```
# read in and prepare the vehicle training and testing data
import zipfile
import pandas as pd
with zipfile.ZipFile('../datasets/vehicles_train.csv.zip', 'r') as z:
    f = z.open('vehicles_train.csv')
    train = pd.read_csv(f)
with zipfile.ZipFile('../datasets/vehicles_test.csv.zip', 'r') as z:
    f = z.open('vehicles_test.csv')
    test = pd.read_csv(f)
# encode vtype as an integer in both sets
train['vtype'] = train.vtype.map({'car':0, 'truck':1})
test['vtype'] = test.vtype.map({'car':0, 'truck':1})
train.head()
```
### Train different models
```
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
models = {'lr': LinearRegression(),
          'dt': DecisionTreeRegressor(),
          'nb': GaussianNB(),
          'nn': KNeighborsRegressor()}
# Train all the models
X_train = train.iloc[:, 1:]
X_test = test.iloc[:, 1:]
y_train = train.price
y_test = test.price
for model in models.keys():
    models[model].fit(X_train, y_train)
# predict test for each model
y_pred = pd.DataFrame(index=test.index, columns=models.keys())
for model in models.keys():
    y_pred[model] = models[model].predict(X_test)
# Evaluate each model
from sklearn.metrics import mean_squared_error
for model in models.keys():
    print(model, np.sqrt(mean_squared_error(y_pred[model], y_test)))
```
### Evaluate the error of the mean of the predictions
```
np.sqrt(mean_squared_error(y_pred.mean(axis=1), y_test))
```
## Comparing manual ensembling with a single model approach
**Advantages of manual ensembling:**
- Increases predictive accuracy
- Easy to get started
**Disadvantages of manual ensembling:**
- Decreases interpretability
- Takes longer to train
- Takes longer to predict
- More complex to automate and maintain
- Small gains in accuracy may not be worth the added complexity
# Part 3: Bagging
The primary weakness of **decision trees** is that they don't tend to have the best predictive accuracy. This is partially due to **high variance**, meaning that different splits in the training data can lead to very different trees.
**Bagging** is a general purpose procedure for reducing the variance of a machine learning method, but is particularly useful for decision trees. Bagging is short for **bootstrap aggregation**, meaning the aggregation of bootstrap samples.
What is a **bootstrap sample**? A random sample with replacement:
```
# set a seed for reproducibility
np.random.seed(1)
# create an array of 1 through 20
nums = np.arange(1, 21)
print(nums)
# sample that array 20 times with replacement
print(np.random.choice(a=nums, size=20, replace=True))
```
**How does bagging work (for decision trees)?**
1. Grow B trees using B bootstrap samples from the training data.
2. Train each tree on its bootstrap sample and make predictions.
3. Combine the predictions:
- Average the predictions for **regression trees**
- Take a vote for **classification trees**
Notes:
- **Each bootstrap sample** should be the same size as the original training set.
- **B** should be a large enough value that the error seems to have "stabilized".
- The trees are **grown deep** so that they have low bias/high variance.
Bagging increases predictive accuracy by **reducing the variance**, similar to how cross-validation reduces the variance associated with a train/test split (for estimating out-of-sample error) by splitting many times and averaging the results.
```
# set a seed for reproducibility
np.random.seed(123)
n_samples = train.shape[0]
n_B = 10
# create ten bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_B)]
samples
# show the rows for the first decision tree
train.iloc[samples[0], :]
```
Build one tree for each sample
```
from sklearn.tree import DecisionTreeRegressor
# grow each tree deep
treereg = DecisionTreeRegressor(max_depth=None, random_state=123)
# DataFrame for storing predicted price from each tree
y_pred = pd.DataFrame(index=test.index, columns=list(range(n_B)))
# grow one tree for each bootstrap sample and make predictions on testing data
for i, sample in enumerate(samples):
    X_train = train.iloc[sample, 1:]
    y_train = train.iloc[sample, 0]
    treereg.fit(X_train, y_train)
    y_pred[i] = treereg.predict(X_test)
y_pred
```
Results of each tree
```
for i in range(n_B):
    print(i, np.sqrt(mean_squared_error(y_pred[i], y_test)))
```
Results of the ensemble
```
y_pred.mean(axis=1)
np.sqrt(mean_squared_error(y_test, y_pred.mean(axis=1)))
```
## Bagged decision trees in scikit-learn (with B=500)
```
# define the training and testing sets
X_train = train.iloc[:, 1:]
y_train = train.iloc[:, 0]
X_test = test.iloc[:, 1:]
y_test = test.iloc[:, 0]
# instruct BaggingRegressor to use DecisionTreeRegressor as the "base estimator"
from sklearn.ensemble import BaggingRegressor
bagreg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500,
                          bootstrap=True, oob_score=True, random_state=1)
# fit and predict
bagreg.fit(X_train, y_train)
y_pred = bagreg.predict(X_test)
y_pred
# calculate RMSE
np.sqrt(mean_squared_error(y_test, y_pred))
```
## Estimating out-of-sample error
For bagged models, out-of-sample error can be estimated without using **train/test split** or **cross-validation**!
On average, each bagged tree uses about **two-thirds** of the observations. For each tree, the **remaining observations** are called "out-of-bag" observations.
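The "two-thirds" figure comes from the bootstrap itself: an observation is left out of a single bootstrap sample with probability $(1 - 1/n)^n \to 1/e \approx 0.368$, so about 63% of the observations end up "in bag". A quick numerical check (an illustrative sketch, not part of the original lesson):

```
import numpy as np

rng = np.random.RandomState(0)
n = 10_000
sample = rng.choice(n, size=n, replace=True)   # one bootstrap sample
in_bag_frac = len(np.unique(sample)) / n       # fraction of distinct rows drawn
print(in_bag_frac)                             # close to 1 - 1/e = 0.632...
```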
```
# show the first bootstrap sample
samples[0]
# show the "in-bag" observations for each sample
for sample in samples:
    print(set(sample))
# show the "out-of-bag" observations for each sample
for sample in samples:
    print(sorted(set(range(n_samples)) - set(sample)))
```
How to calculate **"out-of-bag error":**
1. For every observation in the training data, predict its response value using **only** the trees in which that observation was out-of-bag. Average those predictions (for regression) or take a vote (for classification).
2. Compare all predictions to the actual response values in order to compute the out-of-bag error.
When B is sufficiently large, the **out-of-bag error** is an accurate estimate of **out-of-sample error**.
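The two steps above can be sketched with a toy stand-in for a tree (here each "model" simply predicts the mean of its bootstrap sample, so the example stays self-contained and dependency-light):

```
import numpy as np

rng = np.random.RandomState(42)
n, B = 20, 500
y = rng.rand(n)                                   # made-up response values

pred_sum = np.zeros(n)
pred_cnt = np.zeros(n)
for b in range(B):
    idx = rng.choice(n, size=n, replace=True)     # bootstrap sample
    fit = y[idx].mean()                           # toy "model": predict the sample mean
    oob = np.setdiff1d(np.arange(n), idx)         # out-of-bag rows for this model
    pred_sum[oob] += fit                          # step 1: predict only where out-of-bag
    pred_cnt[oob] += 1
oob_pred = pred_sum / pred_cnt                    # average the OOB predictions
oob_rmse = np.sqrt(np.mean((oob_pred - y) ** 2))  # step 2: compare to the truth
print(oob_rmse)
```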
```
# compute the out-of-bag R-squared score (not MSE, unfortunately!) for B=500
bagreg.oob_score_
```
## Estimating feature importance
Bagging increases **predictive accuracy**, but decreases **model interpretability** because it's no longer possible to visualize the tree to understand the importance of each feature.
However, we can still obtain an overall summary of **feature importance** from bagged models:
- **Bagged regression trees:** calculate the total amount that **MSE** is decreased due to splits over a given feature, averaged over all trees
- **Bagged classification trees:** calculate the total amount that **Gini index** is decreased due to splits over a given feature, averaged over all trees
# Part 4: Random Forests
Random Forests is a **slight variation of bagged trees** that has even better performance:
- Exactly like bagging, we create an ensemble of decision trees using bootstrapped samples of the training set.
- However, when building each tree, each time a split is considered, a **random sample of m features** is chosen as split candidates from the **full set of p features**. The split is only allowed to use **one of those m features**.
- A new random sample of features is chosen for **every single tree at every single split**.
- For **classification**, m is typically chosen to be the square root of p.
- For **regression**, m is typically chosen to be somewhere between p/3 and p.
What's the point?
- Suppose there is **one very strong feature** in the data set. When using bagged trees, most of the trees will use that feature as the top split, resulting in an ensemble of similar trees that are **highly correlated**.
- Averaging highly correlated quantities does not significantly reduce variance (which is the entire goal of bagging).
- By randomly leaving out candidate features from each split, **Random Forests "decorrelates" the trees**, such that the averaging process can reduce the variance of the resulting model.
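The claim that averaging highly correlated quantities barely reduces variance can be checked directly (a small numerical sketch added for illustration):

```
import numpy as np

rng = np.random.RandomState(0)
B, n_rep = 25, 10_000

# B independent, zero-mean, unit-variance estimators:
# the variance of their average shrinks like 1/B
indep = rng.randn(n_rep, B).mean(axis=1).var()

# B perfectly correlated estimators (all identical copies):
# averaging changes nothing
shared = rng.randn(n_rep, 1)
corr = np.repeat(shared, B, axis=1).mean(axis=1).var()

print(indep)   # close to 1/25 = 0.04
print(corr)    # close to 1.0
```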
# Part 5: Building and tuning decision trees and Random Forests
- Major League Baseball player data from 1986-87: [data](https://github.com/justmarkham/DAT8/blob/master/data/hitters.csv), [data dictionary](https://cran.r-project.org/web/packages/ISLR/ISLR.pdf) (page 7)
- Each observation represents a player
- **Goal:** Predict player salary
```
# read in the data
with zipfile.ZipFile('../datasets/hitters.csv.zip', 'r') as z:
    f = z.open('hitters.csv')
    hitters = pd.read_csv(f, sep=',', index_col=False)
# remove rows with missing values
hitters.dropna(inplace=True)
hitters.head()
# encode categorical variables as integers
hitters['League'] = pd.factorize(hitters.League)[0]
hitters['Division'] = pd.factorize(hitters.Division)[0]
hitters['NewLeague'] = pd.factorize(hitters.NewLeague)[0]
hitters.head()
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# scatter plot of Years versus Hits colored by Salary
hitters.plot(kind='scatter', x='Years', y='Hits', c='Salary', colormap='jet', xlim=(0, 25), ylim=(0, 250))
# define features: exclude career statistics (which start with "C") and the response (Salary)
feature_cols = hitters.columns[hitters.columns.str.startswith('C') == False].drop('Salary')
feature_cols
# define X and y
X = hitters[feature_cols]
y = hitters.Salary
```
## Predicting salary with a decision tree
Find the best **max_depth** for a decision tree using cross-validation:
```
# list of values to try for max_depth
max_depth_range = range(1, 21)
# list to store the average RMSE for each value of max_depth
RMSE_scores = []
# use 10-fold cross-validation with each value of max_depth
from sklearn.model_selection import cross_val_score
for depth in max_depth_range:
    treereg = DecisionTreeRegressor(max_depth=depth, random_state=1)
    MSE_scores = cross_val_score(treereg, X, y, cv=10, scoring='neg_mean_squared_error')
    RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot max_depth (x-axis) versus RMSE (y-axis)
plt.plot(max_depth_range, RMSE_scores)
plt.xlabel('max_depth')
plt.ylabel('RMSE (lower is better)')
# show the best RMSE and the corresponding max_depth
sorted(zip(RMSE_scores, max_depth_range))[0]
# max_depth=2 was best, so fit a tree using that parameter
treereg = DecisionTreeRegressor(max_depth=2, random_state=1)
treereg.fit(X, y)
# compute feature importances
pd.DataFrame({'feature':feature_cols, 'importance':treereg.feature_importances_}).sort_values('importance')
```
## Predicting salary with a Random Forest
```
from sklearn.ensemble import RandomForestRegressor
rfreg = RandomForestRegressor()
rfreg
```
### Tuning n_estimators
One important tuning parameter is **n_estimators**, which is the number of trees that should be grown. It should be a large enough value that the error seems to have "stabilized".
```
# list of values to try for n_estimators
estimator_range = range(10, 310, 10)
# list to store the average RMSE for each value of n_estimators
RMSE_scores = []
# use 5-fold cross-validation with each value of n_estimators (WARNING: SLOW!)
for estimator in estimator_range:
    rfreg = RandomForestRegressor(n_estimators=estimator, random_state=1, n_jobs=-1)
    MSE_scores = cross_val_score(rfreg, X, y, cv=5, scoring='neg_mean_squared_error')
    RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot n_estimators (x-axis) versus RMSE (y-axis)
plt.plot(estimator_range, RMSE_scores)
plt.xlabel('n_estimators')
plt.ylabel('RMSE (lower is better)')
```
### Tuning max_features
The other important tuning parameter is **max_features**, which is the number of features that should be considered at each split.
```
# list of values to try for max_features
feature_range = range(1, len(feature_cols)+1)
# list to store the average RMSE for each value of max_features
RMSE_scores = []
# use 10-fold cross-validation with each value of max_features (WARNING: SLOW!)
for feature in feature_range:
    rfreg = RandomForestRegressor(n_estimators=150, max_features=feature, random_state=1, n_jobs=-1)
    MSE_scores = cross_val_score(rfreg, X, y, cv=10, scoring='neg_mean_squared_error')
    RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot max_features (x-axis) versus RMSE (y-axis)
plt.plot(feature_range, RMSE_scores)
plt.xlabel('max_features')
plt.ylabel('RMSE (lower is better)')
# show the best RMSE and the corresponding max_features
sorted(zip(RMSE_scores, feature_range))[0]
```
### Fitting a Random Forest with the best parameters
```
# max_features=8 is best and n_estimators=150 is sufficiently large
rfreg = RandomForestRegressor(n_estimators=150, max_features=8, oob_score=True, random_state=1)
rfreg.fit(X, y)
# compute feature importances
pd.DataFrame({'feature':feature_cols, 'importance':rfreg.feature_importances_}).sort_values('importance')
# compute the out-of-bag R-squared score
rfreg.oob_score_
```
### Reducing X to its most important features
```
# check the shape of X
X.shape
# set a threshold for which features to include
# (SelectFromModel replaces the removed `rfreg.transform` method)
from sklearn.feature_selection import SelectFromModel
print(SelectFromModel(rfreg, threshold=0.1, prefit=True).transform(X).shape)
print(SelectFromModel(rfreg, threshold='mean', prefit=True).transform(X).shape)
print(SelectFromModel(rfreg, threshold='median', prefit=True).transform(X).shape)
# create a new feature matrix that only includes important features
X_important = SelectFromModel(rfreg, threshold='mean', prefit=True).transform(X)
# check the RMSE for a Random Forest that only includes important features
rfreg = RandomForestRegressor(n_estimators=150, max_features=3, random_state=1)
scores = cross_val_score(rfreg, X_important, y, cv=10, scoring='neg_mean_squared_error')
np.mean(np.sqrt(-scores))
```
## Comparing Random Forests with decision trees
**Advantages of Random Forests:**
- Performance is competitive with the best supervised learning methods
- Provides a more reliable estimate of feature importance
- Allows you to estimate out-of-sample error without using train/test split or cross-validation
**Disadvantages of Random Forests:**
- Less interpretable
- Slower to train
- Slower to predict

*Machine learning flowchart created by the [second place finisher](http://blog.kaggle.com/2015/04/20/axa-winners-interview-learning-telematic-fingerprints-from-gps-data/) of Kaggle's [Driver Telematics competition](https://www.kaggle.com/c/axa-driver-telematics-analysis)*
```
# This line imports the NumPy package
import numpy as np
```
# Introduction
**Pandas** is designed to make data pre-processing and data analysis fast and easy in Python. Pandas adopts many coding idioms from NumPy, such as avoiding `for` loops, but pandas is designed for working with heterogeneous data represented in tabular format.
To use Pandas, you need to import the `pandas` module, using for example:
```
import pandas as pd
```
This import style is quite standard; all objects and functions of the `pandas` package will now be invoked with the `pd.` prefix.
# Series
Pandas has two main data structures, **Series** and **DataFrame**. A Series is the Pandas version of a 1-D NumPy array: a single-dimension array-like object containing a *sequence of values*, together with an array of *data labels*, called its **index**.
A Series can be created easily from a Python list:
```
ts = pd.Series([4, 8, 1, 3])
print(ts)
```
The underlying structure can be recovered with the `values` attribute:
```
print(ts.values)
```
The string representation of a Series displays two columns: the first column represents the index array, the second column represents the values array. Since no index was specified, the default index consists of increasing integers starting from 0. To create a Series with its own index, you can write:
```
ts = pd.Series([4, 8, 1, 3], index=['first', 'second', 'third', 'fourth'])
print(ts)
```
The labels in the index can be used to select values in the Series (note the list in the second line):
```
print(ts['first'])
print(ts[['second', 'fourth']])
```
Using NumPy functions or NumPy-like operations, such as boolean indexing, universal functions, and so on, will preserve the indexes:
```
print(ts[ts > 3])
print(np.exp(ts))
```
You can think of a Series as a kind of fixed-length, ordered Python `dict`, mapping index values to data values. In fact, it is possible to create a Series directly from a Python `dict`:
```
my_dict = {'Pisa': 80, 'London': 300, 'Paris': 1}
ts = pd.Series(my_dict)
print(ts)
```
Arithmetic operations on Series are automatically aligned on the index labels:
```
ts1 = pd.Series([4, 8, 1, 3], index=['first', 'second', 'third', 'fourth'])
ts2 = pd.Series([4, 8, 1], index=['first', 'second', 'pisa'])
ts_sum = ts1 + ts2
print(ts_sum)
```
Here two values are correctly computed (those corresponding to the labels `first` and `second`). The labels `third` and `fourth` in `ts1` are missing from `ts2`, and the label `pisa` in `ts2` is missing from `ts1`. Hence, for each of these index labels, a `NaN` value (*not a number*) appears, which Pandas treats as a **missing value**.
The `pd.isnull` and `pd.notnull` functions detect missing data, as do the corresponding instance methods:
```
print(pd.isnull(ts_sum))
print(ts_sum.notnull())
```
Both Series and its index have a `name` attribute:
```
ts_sum.name = 'sum'
ts_sum.index.name = 'new name'
print(ts_sum)
```
# DataFrame
A DataFrame is a rectangular table of data. It contains an ordered list of columns, and every column can have a different value type. A DataFrame has both a *row index* and a *column index*. It can be thought of as a dict of Series (one per column), all sharing the same index labels.
There are many ways to construct a DataFrame, but the most common is using a dictionary of equally-sized Python lists (or NumPy arrays):
```
cars = {'Brand': ['Honda Civic', 'Toyota Corolla', 'Ford Focus', 'Audi A4'],
        'Price': [22000, 25000, 27000, 35000]}
df = pd.DataFrame(cars)
print(df)
```
The resulting DataFrame receives its index automatically, as with Series.
To pretty-print a DataFrame in a Jupyter notebook, it is enough to write its name (or use the `head()` instance method for very long DataFrames):
```
df.head()
```
It is possible to change the order of the columns at DataFrame construction time. If you provide a column name not included in the dictionary, a column with missing values will appear:
```
df = pd.DataFrame(cars, columns=['Color', 'Price', 'Brand'])
print(df)
```
If working with a large table, it is sometimes useful to have a list of all the column names. This is given by the `keys()` method:
```
print(df.keys())
```
Many features from the NumPy package can be directly used with Pandas DataFrames:
```
print(df.values)
print(df.shape)
```
Another common way to create a DataFrame is to use a *nested dict of dicts*:
```
pop = {'Nevada': {2001: 2.4, 2002: 2.9},
       'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
```
If this nested dict is passed to the DataFrame, the outer dict keys are interpreted as column labels, and the inner keys are interpreted as row labels:
```
df = pd.DataFrame(pop)
df
```
## Accessing a DataFrame
We now create a more complex DataFrame:
```
dict_of_list = {'birth': [1860, 1770, 1858, 1906],
                'death': [1911, 1827, 1924, 1975],
                'city': ['Kaliste', 'Bonn', 'Lucques', 'Saint-Petersburg']}
composers_df = pd.DataFrame(dict_of_list, index=['Mahler', 'Beethoven', 'Puccini', 'Shostakovich'])
composers_df
```
There are multiple ways of accessing values or series of values in a DataFrame. Unlike with Series, simple brackets give access to a column, not an index; for example:
```
composers_df['city']
```
returns a Series. Alternatively one can also use the attributes syntax and access columns by using:
```
composers_df.city
```
The attribute syntax has some limitations, so if something does not work as expected, revert to the bracket notation.
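Two typical limitations of the attribute syntax, shown here with made-up column names: a column that shadows an existing DataFrame method, and a name that is not a valid Python identifier:

```
import pandas as pd

df = pd.DataFrame({'mean': [1, 2], 'two words': [3, 4]})
print(df['mean'])           # bracket access always works
print(callable(df.mean))    # True: df.mean is the method, not the column
# df.two words              # SyntaxError: not a valid attribute name
print(df['two words'])      # brackets handle it fine
```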
When specifying multiple columns, a DataFrame is returned:
```
composers_df[['city', 'birth']]
```
One of the important differences from a regular NumPy array is that **regular indexing doesn't work with a Pandas DataFrame**. Instead one has to use either the `iloc` or the `loc` attribute.
**Remember that `loc` and `iloc` are attributes, not methods, hence they use brackets `[]` and not parentheses `()`.**
The `loc` attribute allows you to recover elements by using the index labels, while the `iloc` attribute can be used for regular positional indexing:
```
print(composers_df.iloc[0,1])
print(composers_df.loc['Mahler', 'death'])
```
## Adding columns
It is very simple to add a column to a Dataframe:
```
composers_df['country'] = '???'
composers_df
```
Alternatively, an existing list can be used:
```
composers_df['country2'] = ['Austria','Germany','Italy','Russia']
composers_df
```
## Deleting columns
The `del` keyword is used to delete columns:
```
del composers_df['country']
composers_df
```
# Importing Excel files as DataFrames
Another very common way of "creating" a Pandas DataFrame is by importing a table from another format like CSV or Excel. In order to import Excel files, you need to install the `openpyxl` package (older pandas versions used `xlrd`):
```shell
> pip install openpyxl
```
An Excel table is provided in the [composers.xlsx](data/composers.xlsx) file and can be read with the `pd.read_excel` function. There are many more readers for other types of data (csv, json, html etc.) but we focus here on Excel.
```
composers_df = pd.read_excel('data/composers.xlsx')
composers_df
```
The reader automatically recognized the headers of the file. However, it created a new index. If needed, we can specify which column to use as the index:
```
composers_df = pd.read_excel('data/composers.xlsx', index_col = 'composer')
composers_df
```
If we open the file in Excel, we see that it is composed of more than one sheet. Clearly, when not specifying anything, the reader only reads the first sheet. However we can specify a sheet:
```
composers_df = pd.read_excel('data/composers.xlsx', index_col = 'composer', sheet_name='Sheet2')
composers_df
```
As you can see above, some information is missing. Some missing values are marked as "`unknown`" while others are `NaN`. `NaN` is the standard symbol for unknown/missing values and is understood by Pandas, while "`unknown`" is just seen as text.
This is impractical, as we now have columns with a mix of numbers and text, which will make later computations difficult. What we would like to do is to replace all "irrelevant" values with the standard `NaN` symbol that says "*no information*".
For this we can use the `na_values` argument to specify what should be a `NaN`:
```
composers_df = pd.read_excel('data/composers.xlsx', index_col='composer', sheet_name='Sheet2',
                             na_values=['unknown'])
composers_df
```
# Plotting DataFrames
We will learn more about plotting later, but let's look here at some possibilities offered by Pandas. Pandas builds on top of Matplotlib but exploits the knowledge contained in DataFrames to improve the default output.
We can pass Series to Matplotlib, which manages to understand them. Here's a default scatter plot:
```
import matplotlib.pyplot as plt
composers_df = pd.read_excel('data/composers.xlsx', index_col = 'composer', sheet_name='Sheet5')
plt.plot(composers_df.birth, composers_df.death, 'o')
plt.show()
```
Different types of plots are accessible when using the `plot` function of DataFrame instances via the `kind` option. The variables to plot are column names passed as keywords instead of whole series like in Matplotlib:
```
composers_df.plot(x = 'birth', y = 'death', kind = 'scatter')
plt.show()
composers_df.plot(x = 'birth', y = 'death', kind = 'scatter',
                  title = 'Composer birth and death', grid = True, fontsize = 15)
plt.show()
```
Some additional plotting options are available in the plot() module. For example histograms:
```
composers_df.plot.hist(alpha = 0.5)
plt.show()
```
Here you see again the gain from using Pandas: without specifying anything, Pandas made a histogram of the two columns containing numbers, labelled the axes, and even added a legend to the plot.
# DataFrame Operations
One of the great advantages of using Pandas to handle tabular data is how simple it is to extract valuable information from them. Here we are going to see various types of operations that are available for this.
## Matrix operations
The strength of NumPy is its natural way of handling matrix operations, and Pandas reuses many of these features. For example, one can use simple mathematical operations to operate at the cell level:
```
df = pd.read_excel('data/composers.xlsx')
df
df['birth'] * 2
np.log(df['birth'])
```
We can directly use an operation's output to create a new column:
```
df['age'] = df['death'] - df['birth']
df
```
Here we applied functions only to Series. Indeed, since our DataFrame contains e.g. strings, no arithmetic operation can be applied to it as a whole. If however we have a homogeneous DataFrame, this is possible:
```
df[['birth', 'death']] * 2
```
## Column operations
There are other types of functions whose purpose is to summarize the data. For example the mean or standard deviation. Pandas by default applies such functions column-wise and returns a series containing e.g. the mean of each column:
```
df.mean(numeric_only=True)
```
Note that columns for which a mean does not make sense, like the city, are excluded: `numeric_only=True` restricts the computation to numeric columns (older pandas versions allowed `np.mean(df)` and silently dropped them).
Sometimes one needs to apply to a column a very specific function that is not provided by default. In that case we can use one of the different `apply` methods of Pandas.
The simplest case is to apply a function to a column, or Series, of a DataFrame. Let's say for example that we want to label an age > 60 as 'old' and an age ≤ 60 as 'young'. We can define the following general function:
```
define_age = lambda x: 'old' if x > 60 else 'young'
```
We can now apply this function on an entire Series:
```
df.age.apply(define_age)
```
We can also apply a function to an entire DataFrame. For example we can ask how many composers have birth and death dates within the XIXth century:
```
nineteen_century_count = lambda x: np.sum( (x >= 1800) & (x < 1900) )
df[['birth','death']].apply(nineteen_century_count)
```
## Boolean Indexing
Just like with Numpy, it is possible to subselect parts of a Dataframe using boolean indexing.
```
mask = df['birth'] > 1859
print(mask)
```
Just like in Numpy we can use this logical Series as an index to select elements in the Dataframe.
```
df[mask]
```
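Several conditions can be combined with the element-wise operators `&` and `|`, as long as each condition is wrapped in parentheses. A self-contained sketch on a toy table:

```python
import pandas as pd

demo = pd.DataFrame({
    "composer": ["Mahler", "Brahms", "Shostakovich"],
    "birth": [1860, 1833, 1906],
    "death": [1911, 1897, 1975],
})

# each condition must be parenthesized before combining with & or |
selected = demo[(demo["birth"] > 1850) & (demo["death"] < 1950)]
selected["composer"].tolist()  # ['Mahler']
```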
# Data Merging
Often information comes from different sources and it is necessary to combine it into one object. We are going to see the different ways in which information contained in separate Dataframes can be combined in a meaningful way.
## Concatenation
The simplest way we can combine two Dataframes is simply to "paste" them together:
```
composers1 = pd.read_excel('data/composers.xlsx', index_col='composer',sheet_name='Sheet1')
composers1
composers2 = pd.read_excel('data/composers.xlsx', index_col='composer',sheet_name='Sheet3')
composers2
```
To be concatenated, Dataframes need to be provided as a list to the `pd.concat` function:
```
all_composers = pd.concat([composers1,composers2])
all_composers
```
One potential problem is that the two tables contain duplicated information:
```
all_composers.loc['Mahler']
```
It is very easy to get rid of these duplicates using:
```
all_composers.drop_duplicates()
```
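`drop_duplicates()` also accepts a `subset` argument (which columns define a duplicate) and a `keep` argument (which copy survives). A small sketch on made-up rows:

```python
import pandas as pd

demo = pd.DataFrame({
    "composer": ["Mahler", "Mahler", "Verdi"],
    "birth": [1860, 1860, 1813],
})

# default: keep the first copy of each fully identical row
deduped = demo.drop_duplicates()
# define duplicates by the 'composer' column only, keep the last copy
by_name = demo.drop_duplicates(subset="composer", keep="last")
len(deduped), len(by_name)  # (2, 2)
```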
## Joining
Another classical case is that of two tables with a similar index but containing different information:
```
composers1 = pd.read_excel('data/composers.xlsx', index_col='composer',sheet_name='Sheet1')
composers1
composers2 = pd.read_excel('data/composers.xlsx', index_col='composer',sheet_name='Sheet4')
composers2
```
If we use simple concatenation, this doesn't help us much. We just end up with a large matrix with lots of `NaN`'s:
```
pd.concat([composers1, composers2])
```
The better way of doing this is to **join** the tables. This is a classical database concept available in Pandas. `join()` operates on two tables: the first one is the "left" table, which uses `join()` as a method; the other table is the "right" one.
Let's try the default join settings:
```
composers1.join(composers2)
```
We see that Pandas was smart enough to notice that the two tables had the same index name and used it to combine them. We also see that one element from the second table (Brahms) is missing. The reason for this is the way indices not present in both tables are handled. There are four ways of doing this with two tables, called here the "left" and "right" table.
### Join left
The two Dataframes that should be merged have a common index, but not necessarily the same items. For example here Shostakovich is missing in the second table, while Brahms is missing in the first one.
When using the "left" join, we use the first Dataframe as basis and only use the indices that appear there.
```
composers1.join(composers2, how = 'left')
```
### Join right
When using the "right" join, we use the second Dataframe as basis and only use the indices that appear there.
```
composers1.join(composers2, how = 'right')
```
### Inner join
When using the "inner" join, we return only the items that appear in both Dataframes:
```
composers1.join(composers2, how = 'inner')
```
### Outer join
When using the "outer" join, we return all the items that appear in either Dataframe:
```
composers1.join(composers2, how = 'outer')
```
## Merging
Sometimes tables don't have the same indices but similar contents that we want to merge. For example let's imagine we have the two Dataframes below:
```
composers1 = pd.read_excel('data/composers.xlsx', sheet_name='Sheet1')
composers1
composers2 = pd.read_excel('data/composers.xlsx', sheet_name='Sheet6')
composers2
```
The indices don't match and are not the composer name. In addition the columns containing the composer names have different labels. Here we can use `merge()` and specify which columns we want to use for merging, and what type of merging we need (inner, left etc.)
```
pd.merge(composers1, composers2, left_on='composer', right_on='last name')
```
Again, a join type other than the default inner join can be selected via the `how` argument.
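For example, with two small illustrative tables (names and cities invented for the sketch), an outer merge keeps the rows that are present on only one side:

```python
import pandas as pd

left = pd.DataFrame({"composer": ["Mahler", "Brahms"], "birth": [1860, 1833]})
right = pd.DataFrame({"last name": ["Mahler", "Verdi"], "city": ["Vienna", "Milan"]})

# default inner merge: only rows matched on both sides survive
inner = pd.merge(left, right, left_on="composer", right_on="last name")
# outer merge: unmatched rows are kept, with NaN in the missing columns
outer = pd.merge(left, right, left_on="composer", right_on="last name", how="outer")
len(inner), len(outer)  # (1, 3)
```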
# Data Splitting
Often Pandas tables mix regular variables (e.g. the size of cells in microscopy images) with categorical variables (e.g. the type of cell to which they belong). In that case, it is quite common to split the data along the categories before doing computations. Pandas makes this very easy.
## Grouping
```
composers_df.head()
```
What if we now want to count how many composers we have in each category? In classical computing we would maybe write a for loop to count occurrences. Pandas simplifies this with the `groupby()` function, which groups elements by a certain criterion, e.g. a categorical variable like the period:
```
composer_grouped = composers_df.groupby('period')
composer_grouped
```
The output is a bit cryptic. What we actually have is a new *GroupBy* object, which has a lot of handy properties. First, let's see what the groups actually are. As for a Dataframe, let's look at a summary of the object:
```
composer_grouped.describe()
```
So we have a Dataframe with a statistical summary of the contents. The "names" of the groups are here the indices of the Dataframe. These names are simply all the different categories that were present in the column we used for grouping. Now we can recover a single group:
```
composer_grouped.get_group('baroque')
```
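To answer the counting question directly, the group object exposes `size()`, which returns the number of rows per group. A self-contained sketch on toy data:

```python
import pandas as pd

demo = pd.DataFrame({"period": ["baroque", "baroque", "romantic"]})

# number of rows in each group, indexed by group name
counts = demo.groupby("period").size()
counts["baroque"]  # 2
```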
If one has multiple categorical variables, one can also do a grouping on several levels. For example here we want to classify composers both by period and country. For this we just give two column names to the `groupby()` function:
```
composer_grouped = composers_df.groupby(['period','country'])
composer_grouped.get_group(('baroque','Germany'))
```
The main advantage of this GroupBy object is that it allows us to do both computations and plotting very quickly without having to loop over the different categories. Pandas does all the work for us: it applies functions to each group and then reassembles the results into a Dataframe (or a Series, depending on the output).
For example we can apply most functions we used for Dataframes (mean, sum etc.) on groups as well and Pandas seamlessly does the work for us.
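For instance, a mean applied to a grouped numeric column returns one value per group, reassembled into a Series indexed by group name (toy data, values illustrative):

```python
import pandas as pd

demo = pd.DataFrame({
    "period": ["baroque", "baroque", "romantic"],
    "birth": [1685, 1678, 1860],
})

# the mean is computed separately inside each group
mean_birth = demo.groupby("period")["birth"].mean()
mean_birth["baroque"]  # 1681.5
```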
## Unstacking
Let's have a look again at one of our grouped Dataframe on which we applied some summary function like a mean on the age column:
```
composers_df['age'] = composers_df['death'] - composers_df['birth']
composers_df.groupby(['country','period']).age.mean()
```
Here we have two levels of index, the main one being the country, which contains all periods. For plotting, however, we often need the information in another format. In particular, we would like each of these values to be one observation in a regular table. For example, we could have a country vs period table where all elements are the mean age. To do that we need to **unstack** our multi-level Dataframe:
```
composer_unstacked = composers_df.groupby(['country','period']).age.mean().unstack()
composer_unstacked
```
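The reshaping can be sketched on a toy two-level result (countries, periods, and ages are illustrative):

```python
import pandas as pd

demo = pd.DataFrame({
    "country": ["Austria", "Austria", "Italy"],
    "period": ["romantic", "classic", "romantic"],
    "age": [51, 35, 87],
})

# grouping on two columns gives a Series with a two-level (country, period) index
stacked = demo.groupby(["country", "period"])["age"].mean()
# unstack() pivots the inner level ('period') into columns; missing cells become NaN
table = stacked.unstack()
table.loc["Austria", "romantic"]  # 51.0
```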
## How to forecast time series in BigQuery ML
This notebook accompanies the article
[How to do time series forecasting in BigQuery](https://towardsdatascience.com/how-to-do-time-series-forecasting-in-bigquery-af9eb6be8159)
## Install library and extensions if needed
You don't need to do this if you use AI Platform Notebooks
```
#!pip install google-cloud-bigquery
%load_ext google.cloud.bigquery
```
## Helper plot functions
```
import matplotlib.pyplot as plt
import pandas as pd
def plot_historical_and_forecast(input_timeseries, timestamp_col_name, data_col_name, forecast_output=None, actual=None):
input_timeseries = input_timeseries.sort_values(timestamp_col_name)
plt.figure(figsize=(20,6))
plt.plot(input_timeseries[timestamp_col_name], input_timeseries[data_col_name], label = 'Historical')
plt.xlabel(timestamp_col_name)
plt.ylabel(data_col_name)
if forecast_output is not None:
forecast_output = forecast_output.sort_values('forecast_timestamp')
forecast_output['forecast_timestamp'] = pd.to_datetime(forecast_output['forecast_timestamp'])
x_data = forecast_output['forecast_timestamp']
y_data = forecast_output['forecast_value']
confidence_level = forecast_output['confidence_level'].iloc[0] * 100
low_CI = forecast_output['confidence_interval_lower_bound']
upper_CI = forecast_output['confidence_interval_upper_bound']
# Plot the data, set the linewidth, color and transparency of the
# line, provide a label for the legend
plt.plot(x_data, y_data, alpha = 1, label = 'Forecast', linestyle='--')
# Shade the confidence interval
plt.fill_between(x_data, low_CI, upper_CI, color = '#539caf', alpha = 0.4, label = str(confidence_level) + '% confidence interval')
# actual
if actual is not None:
actual = actual.sort_values(timestamp_col_name)
plt.plot(actual[timestamp_col_name], actual[data_col_name], label = 'Actual', linestyle='--')
# Display legend
plt.legend(loc = 'upper center', prop={'size': 16})
```
## Plot the time series
The first step, as with any machine learning problem, is to gather the training data and explore it. Assume that we have the data on rentals until mid-June of 2015 and we'd like to predict for the rest of the month. We can gather the past 6 weeks of data using
```
%%bigquery df
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
ORDER BY date
plot_historical_and_forecast(df, 'date', 'numrentals');
```
## Train ARIMA model
We can use this data to train an ARIMA model, telling BigQuery which column is the data column and which one the timestamp column:
```
!bq ls ch09eu || bq mk --location EU ch09eu
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date') AS
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
```
We can get the forecast data using:
```
%%bigquery fcst
SELECT * FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
plot_historical_and_forecast(df, 'date', 'numrentals', fcst);
%%bigquery actual
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
ORDER BY date
plot_historical_and_forecast(df, 'date', 'numrentals', fcst, actual);
```
## Forecasting a bunch of series
So far, I have been forecasting the overall rental volume for all the bicycle stations in Hyde Park. How do we predict the rental volume for each individual station? Use the time_series_id_col:
```
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date',
time_series_id_col='start_station_name') AS
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-01-01' AND '2015-06-15'
```
Note that instead of training the series on 45 days (May 1 to June 15), I'm now training on a longer time period.
That's because the aggregate time series will tend to be smoother and much easier to predict than the time series
for individual stations. So, we have to show the model a longer trendline.
```
%%bigquery
SELECT *
FROM ML.ARIMA_COEFFICIENTS(MODEL ch09eu.numrentals_forecast)
ORDER BY start_station_name
%%bigquery fcst
SELECT
*
FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
ORDER By start_station_name, forecast_timestamp
%%bigquery df
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15' -- this is just for plotting, hence we'll keep this 45 days.
%%bigquery actual
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
```
As you would expect, the time series aggregated over all the stations is much smoother and more predictable than the time series of any single station (single-station data is noisier). So, some forecasts will be better than others.
```
%%bigquery stations
SELECT DISTINCT start_station_name
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
ORDER by start_station_name ASC
stations
station = stations['start_station_name'].iloc[3] # Hyde Park Corner
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[6] # Serpentine Car Park,
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[4] # Knightsbridge
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
```
## Evaluation
As you can see from the graphs above, the prediction accuracy varies by station. Can we gauge how good the prediction for a station is going to be?
```
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.numrentals_forecast)
ORDER BY variance DESC
```
Note that Hyde Park Corner (#0 on the list) is expected to be worse than Serpentine Corner (#5 on the list). That does pan out. We expected Knightsbridge (#10) to be the best overall, but it appears that this is a case where cycling activity really picked up in an unexpected way.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
This file is part of MADIP: Molecular Atlas Data Integration Pipeline
This file provides some additional nomenclature refinements and concentration calculations
Copyright 2021 Blue Brain Project / EPFL
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from itertools import chain
import re
import pickle as pkl
pd.options.display.max_columns = None
pd.options.display.max_rows = None
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
sns.set_style("whitegrid")
with open('../data/2_df_best_alignedIDs_9May2021.pkl','rb') as f:
df = pkl.load(f)
print(len(df))
```
## Some additional checks and refinements
```
df.loc[df['gene_id_final'].str.contains(';')]
df.loc[df['gene_id_final']=='C2CD4CC2CD4FAMILY','gene_id_final'] = 'C2CD4CC2CD4'
df.loc[df['gene_id_final'].isna()]
df.loc[df['raw_data']==0]
df.loc[df['raw_data']<0]
#for figures
df['log_raw_data'] = np.log(df['raw_data'])
print(len(df['raw_data_units'].unique()))
df['raw_data_units'].unique()
df.loc[df['raw_data_units']=='Protein Abundance (Summerized TMT Reporter Ion Intensities)','raw_data_units'] = 'tmt abundance'
df.loc[df['raw_data_units']=='LFQ','raw_data_units'] = 'LFQintensity'
print(len(df['raw_data_units'].unique()))
df['raw_data_units'].unique()
len(df.loc[(df['Uniprot_unified'].isna()) & (df['uniprot_from_gn'] == 'NoMapping')])
df.loc[(df['Uniprot_unified'].isna()) & (df['uniprot_from_gn'] == 'NoMapping'),'Study'].unique()
df.loc[df['Study']=='Duda 2018','raw_data_units'].unique()
df.loc[df['Study']=='Kjell 2020','raw_data_units'].unique()
df.loc[df['Study']=='Carlyle 2017','raw_data_units'].unique()
df.loc[df['Study']=='Fecher 2019','raw_data_units'].unique()
df.loc[df['Study']=='Han 2014','raw_data_units'].unique()
```
### Check Uniprot to get protein seq to count number of possible peptides
```
# Uniprot 21july2020
#(taxonomy:"Mus musculus (Mouse) [10090]" OR taxonomy:"Rattus norvegicus (Rat) [10116]" OR taxonomy:"Homo sapiens (Human) [9606]") AND reviewed:yes
#(taxonomy:"Mus musculus (Mouse) [10090]" OR taxonomy:"Rattus norvegicus (Rat) [10116]" OR taxonomy:"Homo sapiens (Human) [9606]") AND reviewed:no
uniprot_rev = pd.read_csv('../data/uniprot_rev_taxonomyMRH_21july2020.tab', sep='\t')
print(len(uniprot_rev))
uniprot_unrev = pd.read_csv('../data/uniprot_taxonomyMRH_unreviewed_21july2020.gz', sep='\t')
print(len(uniprot_unrev))
uniprot_rev.loc[uniprot_rev['Entry'].str.contains("-")].head()
uniprot_unrev.loc[uniprot_unrev['Entry'].str.contains("-")].head()
uniprot_rev = uniprot_rev.loc[~uniprot_rev['Gene names'].isna()]
uniprot_unrev = uniprot_unrev.loc[~uniprot_unrev['Gene names'].isna()]
print(len(uniprot_rev))
print(len(uniprot_unrev))
uniprot_rev['Gene names'] = uniprot_rev['Gene names'].str.upper()
uniprot_unrev['Gene names'] = uniprot_unrev['Gene names'].str.upper()
s1 = uniprot_rev['Gene names'].str.split(' ').apply(pd.Series, 1).stack()
s1.index = s1.index.droplevel(-1)
s1.name = 'Gene names'
del uniprot_rev['Gene names']
uniprot_rev = uniprot_rev.join(s1)
s2 = uniprot_unrev['Gene names'].str.split(' ').apply(pd.Series, 1).stack()
s2.index = s2.index.droplevel(-1)
s2.name = 'Gene names'
del uniprot_unrev['Gene names']
uniprot_unrev = uniprot_unrev.join(s2)
len(uniprot_rev['Gene names'].unique())/len(uniprot_rev)
uniprot_rev.columns
uniprot_rev = uniprot_rev.drop(columns = ['Entry name', 'Status', 'Protein names', 'Organism', 'Length'])
uniprot_unrev = uniprot_unrev.drop(columns = ['Entry name', 'Status', 'Protein names', 'Organism', 'Length'])
uniprot_rev = pd.DataFrame(uniprot_rev.groupby('Gene names')['Entry'].apply(list))
uniprot_unrev = pd.DataFrame(uniprot_unrev.groupby('Gene names')['Entry'].apply(list))
uniprot_rev_dict = pd.Series(uniprot_rev['Entry'].values,index=uniprot_rev.index).to_dict()
uniprot_unrev_dict = pd.Series(uniprot_unrev['Entry'].values,index=uniprot_unrev.index).to_dict()
# the grouped index already holds one gene name per row (flattening the strings would split them into characters)
uniprot_rev_genes = list(set(uniprot_rev.index.tolist()))
uniprot_unrev_genes = list(set(uniprot_unrev.index.tolist()))
uniprot_rev_ids = list(set([item for sublist in uniprot_rev['Entry'].values.tolist() for item in sublist]))
uniprot_unrev_ids = list(set([item for sublist in uniprot_unrev['Entry'].values.tolist() for item in sublist]))
# Data downloaded on 05june2020 is from
# ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/idmapping/
mouse_uniprot_ids = pd.read_csv('../data/MOUSE_10090_idmapping.dat.gz',header=None,sep='\t')
rat_uniprot_ids = pd.read_csv('../data/RAT_10116_idmapping.dat.gz',header=None,sep='\t')
human_uniprot_ids = pd.read_csv('../data/HUMAN_9606_idmapping.dat.gz',header=None,sep='\t')
mouse_uniprot_ids.columns = ['Uniprot','ID_type','ID']
rat_uniprot_ids.columns = ['Uniprot','ID_type','ID']
human_uniprot_ids.columns = ['Uniprot','ID_type','ID']
mouse_uniprot_ids['id_of_organism'] = 'mouse'
rat_uniprot_ids['id_of_organism'] = 'rat'
human_uniprot_ids['id_of_organism'] = 'human'
#combine data for multiple organisms
uniprot_ids_mrh = pd.concat([mouse_uniprot_ids,rat_uniprot_ids,human_uniprot_ids],ignore_index=True,sort=True)
print((len(mouse_uniprot_ids['Uniprot'].unique())+len(rat_uniprot_ids['Uniprot'].unique())+len(human_uniprot_ids['Uniprot'].unique()))/len(uniprot_ids_mrh['Uniprot'].unique()),len(uniprot_ids_mrh))
#keep only needed id-types
print(len(uniprot_ids_mrh))
uniprot_ids_mrh = uniprot_ids_mrh.loc[uniprot_ids_mrh['ID_type'].isin(['UniProtKB-ID', 'Gene_Name','GeneID','Gene_Synonym','GeneCards','HGNC'])].copy()
print(len(uniprot_ids_mrh))
uniprot_ids_mrh = uniprot_ids_mrh.reset_index(drop=True)
uniprot_ids_mrh['ID'] = uniprot_ids_mrh['ID'].str.upper()
len(uniprot_ids_mrh.loc[uniprot_ids_mrh['ID_type']=='Gene_Name','Uniprot'].unique())
uniprot_ids_mrh_gnDupl = uniprot_ids_mrh.loc[uniprot_ids_mrh['ID_type']=='Gene_Name',['Uniprot','ID']]
print(len(uniprot_ids_mrh_gnDupl))
uniprot_ids_mrh_gnDupl = uniprot_ids_mrh_gnDupl.drop_duplicates(keep=False)
print(len(uniprot_ids_mrh_gnDupl))
uniprot_ids_mrh_gnDupl.head(10)
uniprot_gn = uniprot_ids_mrh.loc[uniprot_ids_mrh['ID_type']=='Gene_Name',['Uniprot','ID']].groupby('Uniprot').aggregate(lambda tdf: tdf.unique().tolist())
uniprot_ids_mrh_dict = pd.Series(uniprot_gn['ID'].values,index=uniprot_gn.index).to_dict()
gn_uniprot = uniprot_ids_mrh.loc[uniprot_ids_mrh['ID_type']=='Gene_Name',['Uniprot','ID']].groupby('ID').aggregate(lambda tdf: tdf.unique().tolist())
uniprot_gn_mrh_dict = pd.Series(gn_uniprot['Uniprot'].values,index=gn_uniprot.index).to_dict()
print(len(df))
print(len(df.loc[~((df['Uniprot_unified'].isna()) & (df['uniprot_from_gn'] == 'NoMapping'))]))
print(len(df.loc[((df['Uniprot_unified'].isna()) & (df['uniprot_from_gn'] == 'NoMapping'))]))
print(len(df.loc[~df['Uniprot_unified'].isna()]))
print(len(df.loc[df['Uniprot_unified'].isna()]))
print(len(df.loc[df['Uniprot'].isna()]))
len(df.loc[(~df['Uniprot_unified'].isna()) & (df['Uniprot_unified'].str.contains('CON_'))])
df.loc[(~df['Uniprot_unified'].isna()) & (df['Uniprot_unified'].str.contains('CON_'))].head()
```
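The split/stack idiom used above (splitting the space-separated 'Gene names' column and re-joining the pieces as rows) can also be written with `Series.explode` (available since pandas 0.25); a minimal equivalent on toy data:

```python
import pandas as pd

demo = pd.DataFrame({"Entry": ["P1", "P2"], "Gene names": ["VIM ACTG1", "ACTB"]})

# split into lists, then emit one row per list element, repeating 'Entry'
demo["Gene names"] = demo["Gene names"].str.split(" ")
exploded = demo.explode("Gene names")
exploded["Gene names"].tolist()  # ['VIM', 'ACTG1', 'ACTB']
```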
#### Given
```
# gn to uniprot
uniprot_rev_dict["VIM"]
# gn to uniprot
uniprot_unrev_dict["VIM"]
# gn to list of uniprots, based on ftp mapping file
uniprot_gn_mrh_dict['VIM']
df.columns
```
#### Needed
```
df.columns
df_fgn = df[['gene_id_final', 'Uniprot', 'Uniprot_unified']].copy()
print(len(df_fgn))
df_fgn = df_fgn.drop_duplicates(keep='first')
df_fgn = df_fgn.reset_index(drop=True)
print(len(df_fgn))
df_fgn['uniprot_from_gn'] = df_fgn['gene_id_final'].copy()
df_fgn['uniprot_from_gn'] = df_fgn['uniprot_from_gn'].map(uniprot_rev_dict).fillna(df_fgn['uniprot_from_gn'].map(uniprot_unrev_dict).fillna(df_fgn['uniprot_from_gn'].map(uniprot_gn_mrh_dict).fillna('NoMapping'))) #
df_fgn.head()
print(len(df_fgn.loc[df_fgn['uniprot_from_gn']=='NoMapping']))
len(df_fgn.loc[df_fgn['uniprot_from_gn']=='NoMapping'])/len(df_fgn)
df_fgn.loc[(df_fgn['uniprot_from_gn']=='NoMapping') & (df_fgn['Uniprot'].isna())]
len(df_fgn.loc[( (df_fgn['uniprot_from_gn']=='NoMapping') & (df_fgn['Uniprot'].isna()) ) ])
df_fgn.loc[( (df_fgn['uniprot_from_gn']=='NoMapping') & (df_fgn['Uniprot'].isna()) ) ]
df_fgn.loc[( (df_fgn['uniprot_from_gn']=='NoMapping') & (df_fgn['Uniprot_unified'].isna()) ) ]
print(len(df_fgn))
df_fgn = df_fgn.loc[~( (df_fgn['uniprot_from_gn']=='NoMapping') & (df_fgn['Uniprot'].isna()) ) ]
print(len(df_fgn))
df_fgn.loc[df_fgn['uniprot_from_gn']=='NoMapping'].head()
len(df_fgn.loc[(~df_fgn['Uniprot'].isna()) & (df_fgn['Uniprot'].str.contains("-"))])
len(df_fgn.loc[(df_fgn['Uniprot'].isna())])
len(df_fgn.loc[(df_fgn['Uniprot'].isna()) & (df_fgn['uniprot_from_gn'] != "NoMapping")] )
len(df_fgn.loc[(df_fgn['Uniprot'].isna()) & (~df_fgn['Uniprot_unified'].isna())])
len(df_fgn.loc[(~df_fgn['Uniprot'].isna()) & (df_fgn['Uniprot_unified'].isna())])
len(df_fgn.loc[(df_fgn['Uniprot_unified'].isna())])
```
Uniprot_final
1. Uniprot isna & uniprot_from_gn != "NoMapping" -> Uniprot_final = uniprot_from_gn[0]
1.5. Uniprot isna & uniprot_from_gn == "NoMapping" -> Uniprot_final = np.nan
2. Uniprot_unified in Uniprot -> Uniprot_final = Uniprot_unified
3. uniprot_from_gn == NoMapping -> Uniprot_final = Uniprot.split(";")[0].split("-")[0]
4. any(uniprot_from_gn) in Uniprot -> Uniprot_final = uniprot_from_gn which is in Uniprot
5. Uniprot_unified in uniprot_from_gn -> Uniprot_final = Uniprot_unified (attention, check for consistency)
```
#check if at least one item in list exists in another list
any_in = lambda a, b: any(i in b for i in a)
def get_uniprot_final(index,row):
uniprot = row['Uniprot']
uniprot_from_gn = row['uniprot_from_gn']
#case 1
if (isinstance(uniprot,float))&(uniprot_from_gn!="NoMapping"):
return uniprot_from_gn[0].split("-")[0]
#case 1.5
elif (isinstance(uniprot,float))&(uniprot_from_gn=="NoMapping"):
return np.nan
else:
uniprots_list0 = uniprot.replace(" ","").split(";")
uniprots_list = [x.split("-")[0] for x in uniprots_list0 if x is not None]
uniprot_unified = row['Uniprot_unified'].split("-")[0]
#case 2
if uniprot_unified in uniprots_list:
return uniprot_unified
#case 3
elif uniprot_from_gn=="NoMapping":
return uniprots_list[0]
#case 4
elif any_in(uniprot_from_gn,uniprots_list):
ugn = [x for x in uniprot_from_gn if x in uniprots_list]
if isinstance(ugn,list):
if len(ugn)>0:
return ugn[0]
else:
print("attention #################",index)
#case 5
elif uniprot_unified in uniprot_from_gn:
#print("check ",index)
return uniprot_unified
#case 6
else:
if any_in(uniprots_list,uniprot_rev_ids):
ugn1 = [x for x in uniprots_list if x in uniprot_rev_ids]
if isinstance(ugn1,list):
if len(ugn1)>0:
return ugn1[0]
else:
print("attention #################",index)
elif any_in(uniprots_list,uniprot_unrev_ids):
ugn2 = [x for x in uniprots_list if x in uniprot_unrev_ids]
if isinstance(ugn2,list):
if len(ugn2)>0:
return ugn2[0]
else:
print("attention #################",index)
else:
return "#".join(['attention',uniprot_from_gn[0]])
#print("ATTENTTION",index)
# make counts dict and return the most common
#return "#".join(['attention',uniprots_list[0]])
# need Uniprot to get protein seq to count number of possible peptides
#df_fgn = df_fgn.drop(columns='Uniprot_final')
df_fgn = df_fgn.reset_index(drop=True)
df_fgn['Uniprot_final'] = None
for index,row in df_fgn.iterrows():
#print(index)
df_fgn.loc[index,'Uniprot_final'] = get_uniprot_final(index,row)
len(df_fgn.loc[df_fgn['Uniprot_final'].str.contains("attention")])
```
##### Manually check remaining ids
```
df_fgn.loc[(df_fgn['gene_id_final']=='SLC8A1') & (df_fgn['Uniprot_final']=='attention#P32418'),'Uniprot_final'] = "P32418"
df_fgn.loc[(df_fgn['gene_id_final']=='GM15800') & (df_fgn['Uniprot_final']=='attention#Q6GQX8'),'Uniprot_final'] = "Q6GQX8"
df_fgn.loc[(df_fgn['gene_id_final']=='TPR') & (df_fgn['Uniprot_final']=='attention#P12270'),'Uniprot_final'] = "P12270"
df_fgn.loc[(df_fgn['gene_id_final']=='TSC2') & (df_fgn['Uniprot_final']=='attention#P49816'),'Uniprot_final'] = "P49816"
df_fgn.loc[(df_fgn['gene_id_final']=='ARMCX4') & (df_fgn['Uniprot_final']=='attention#Q5H9R4'),'Uniprot_final'] = "Q5H9R4"
df_fgn.loc[(df_fgn['gene_id_final']=='IFI204') & (df_fgn['Uniprot_final']=='attention#P0DOV2'),'Uniprot_final'] = "P0DOV2"
df_fgn.loc[(df_fgn['gene_id_final']=='DOS') & (df_fgn['Uniprot_final']=='attention#Q66L44'),'Uniprot_final'] = "Q66L44" # DOS is synonymous GN in Q66L44 entry
df_fgn.loc[(df_fgn['gene_id_final']=='MEX3A') & (df_fgn['Uniprot_final']=='attention#A1L020'),'Uniprot_final'] = "A1L020"
df_fgn.loc[(df_fgn['gene_id_final']=='EVI5L') & (df_fgn['Uniprot_final']=='attention#Q96CN4'),'Uniprot_final'] = "Q96CN4"
df_fgn.loc[(df_fgn['gene_id_final']=='RCOR1') & (df_fgn['Uniprot_final']=='attention#Q9UKL0'),'Uniprot_final'] = "Q9UKL0"
df_fgn.loc[(df_fgn['gene_id_final']=='NAGLU') & (df_fgn['Uniprot_final']=='attention#P54802'),'Uniprot_final'] = "P54802"
df_fgn.loc[(df_fgn['gene_id_final']=='TRPM3') & (df_fgn['Uniprot_final']=='attention#Q9HCF6'),'Uniprot_final'] = "Q9HCF6"
df_fgn.loc[(df_fgn['gene_id_final']=='FXR2') & (df_fgn['Uniprot_final']=='attention#Q9WVR4'),'Uniprot_final'] = "Q9WVR4"
df_fgn.loc[(df_fgn['gene_id_final']=='OSBPL10') & (df_fgn['Uniprot_final']=='attention#S4R1M9'),'Uniprot_final'] = "S4R1M9"
df_fgn.loc[(df_fgn['gene_id_final']=='SAP18') & (df_fgn['Uniprot_final']=='attention#O55128'),'Uniprot_final'] = "O55128"
df_fgn.loc[(df_fgn['gene_id_final']=='HLA-A') & (df_fgn['Uniprot_final']=='attention#P04439'),'Uniprot_final'] = "P04439"
df_fgn.loc[(df_fgn['gene_id_final']=='HLA-C') & (df_fgn['Uniprot_final']=='attention#P10321'),'Uniprot_final'] = "P10321"
df_fgn.loc[(df_fgn['gene_id_final']=='C21ORF33') & (df_fgn['Uniprot_final']=='attention#P0DPI2'),'Uniprot_final'] = "P0DPI2" # C21ORF33 is synonymous GN in Uniprot
df_fgn.loc[(df_fgn['gene_id_final']=='HMG1L1') & (df_fgn['Uniprot_final']=='attention#B2RPK0'),'Uniprot_final'] = "B2RPK0"
df_fgn.loc[(df_fgn['gene_id_final']=='SRP54C') & (df_fgn['Uniprot_final']=='attention#Q99JZ9'),'Uniprot_final'] = "Q99JZ9"
df_fgn.loc[(df_fgn['gene_id_final']=='SYNJ2BP-COX16') & (df_fgn['Uniprot_final']=='attention#A0A087WYV9'),'Uniprot_final'] = "A0A087WYV9"
df_fgn.loc[(df_fgn['gene_id_final']=='PPP1R2P3') & (df_fgn['Uniprot_final']=='attention#Q6NXS1'),'Uniprot_final'] = "Q6NXS1"
df_fgn.loc[(df_fgn['gene_id_final']=='5330417C22RIK') & (df_fgn['Uniprot_final']=='attention#A0A0A0MQC6'),'Uniprot_final'] = "A0A0A0MQC6"
df_fgn.loc[df_fgn['Uniprot_final'].str.contains("attention")]
df_fgn.loc[df_fgn['Uniprot_final'].isna()]
# in some applications can be useful:
#df['Uniprot_final'] = None
#df.loc[df['Uniprot_unified'].isna(), 'Uniprot_final'] = df.loc[df['Uniprot_unified'].isna(),'uniprot_from_gn']
#df.loc[~df['Uniprot_unified'].isna(), 'Uniprot_final'] = df.loc[~df['Uniprot_unified'].isna(),'Uniprot_unified']
#df.loc[df['Uniprot_unified'].isna(),'Uniprot_final'] = df.loc[df['Uniprot_unified'].isna(),'uniprot_from_gn'].map(lambda x: x[0])
# clean data
print(df.loc[df['Uniprot_unified'].isin(['D3YYU8D3Z0M8','P01900P14427','P47963Q5RKP3','P63242Q8BGY2','Q3TCJ1D3Z4D8','Q6ZWZ6P63323','Q8CAY6Q80X81','Q8CHF5Q9JKP5','Q91WK5Q9CV53','Q9CQK7E9PXV5','Q9CR27Q9CYF6','Q9DCS2E9Q5B2','Q9JM76D3Z2F8']),'Study'].unique())
df_fgn.loc[df_fgn['Uniprot_final']=='D3YYU8D3Z0M8','Uniprot_final'] = 'D3YYU8'
df_fgn.loc[df_fgn['Uniprot_final']=='P01900P14427','Uniprot_final'] = 'P01900'
df_fgn.loc[df_fgn['Uniprot_final']=='P47963Q5RKP3','Uniprot_final'] = 'P47963'
df_fgn.loc[df_fgn['Uniprot_final']=='P63242Q8BGY2','Uniprot_final'] = 'P63242'
df_fgn.loc[df_fgn['Uniprot_final']=='Q3TCJ1D3Z4D8','Uniprot_final'] = 'Q3TCJ1'
df_fgn.loc[df_fgn['Uniprot_final']=='Q6ZWZ6P63323','Uniprot_final'] = 'Q6ZWZ6'
df_fgn.loc[df_fgn['Uniprot_final']=='Q8CAY6Q80X81','Uniprot_final'] = 'Q8CAY6'
df_fgn.loc[df_fgn['Uniprot_final']=='Q8CHF5Q9JKP5','Uniprot_final'] = 'Q8CHF5'
df_fgn.loc[df_fgn['Uniprot_final']=='Q91WK5Q9CV53','Uniprot_final'] = 'Q91WK5'
df_fgn.loc[df_fgn['Uniprot_final']=='Q9CQK7E9PXV5','Uniprot_final'] = 'Q9CQK7'
df_fgn.loc[df_fgn['Uniprot_final']=='Q9CR27Q9CYF6','Uniprot_final'] = 'Q9CR27'
df_fgn.loc[df_fgn['Uniprot_final']=='Q9DCS2E9Q5B2','Uniprot_final'] = 'Q9DCS2'
df_fgn.loc[df_fgn['Uniprot_final']=='Q9JM76D3Z2F8','Uniprot_final'] = 'Q9JM76'
df.loc[df['Uniprot_unified']=='P13864'].head(1)
df.loc[df['Uniprot_unified']=='Q76I79'].head(1)
df = df.drop(columns=['gn_from_uniprot', 'uniprot_from_gn'])
df = df.reset_index(drop=True)
print(len(df))
df_all = pd.merge(df,df_fgn,how='inner',on=['gene_id_final', 'Uniprot', 'Uniprot_unified'])
len(df_all)
extra = pd.merge(df,df_fgn, how='left', indicator=True)
extra.head()
extra['_merge'].unique()
len(extra.loc[extra['_merge']=='left_only'])
extra.loc[(extra['_merge']=='left_only') & (~extra['uniprot_from_gn'].isna()) & (~extra['Uniprot'].isna())]
df = df_all.copy()
```
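The `indicator=True` left merge in the cell above flags which rows failed to match. As a minimal, standalone sketch of that anti-join pattern (with made-up toy frames, not the real data):

```python
import pandas as pd

left = pd.DataFrame({'k': [1, 2, 3], 'a': ['x', 'y', 'z']})
right = pd.DataFrame({'k': [1, 2], 'b': ['p', 'q']})

# indicator=True adds a '_merge' column: 'both', 'left_only', or 'right_only'
m = pd.merge(left, right, how='left', on='k', indicator=True)
left_only = m.loc[m['_merge'] == 'left_only', 'k'].tolist()
print(left_only)  # rows present only in the left frame
```

Rows tagged `left_only` are exactly the ones that need fallback ID-mapping treatment.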
##### Theoretical peptide number
```
#Proteases
#trypsin: (.*?(?:K|R|$))
#lysC: (.*?(?:K|$))
df['TheorPepNum'] = None
# needed only for:
studiesTheorPepNum = ['Hamezah 2019', 'Sharma 2015, isolated', 'Guergues 2019','Sharma 2015, cultured',
'Han 2014', 'Kjell 2020', 'Krogager 2018','Hamezah 2018',
'Zhu 2018', 'Fecher 2019', 'Carlyle 2017','McKetney 2019',
'Geiger 2013','Hasan 2020', 'Bai 2020'] #,'Wisniewski 2015','Duda 2018'
# trypsin | trypsin + LysC
studiesTrypsin = ['Hamezah 2018','Krogager 2018','Sharma 2015, isolated','Sharma 2015, cultured','Han 2014','Hamezah 2019','Guergues 2019','Kjell 2020','Zhu 2018','Fecher 2019','Carlyle 2017','McKetney 2019','Hasan 2020','Bai 2020'] #,'Wisniewski 2015','Duda 2018']
# 'Krogager 2018': Trypsin & L-Lys ### both (?), sequential - for approx trypsin because of enzyme specificity
# 'Han 2014': use trypsin for approximation (from supp info: The proteins were first digested with trypsin (enzyme-to-substrate ratio [w/w] of 1:100) at 37°C overnight, after which the peptides were collected by centrifugation. In the second digestion, the filter units were washed sequentially with water, UA buffer, and 40 mM ammonium bicarbonate, respectively, and the proteins were cleaved with trypsin (enzyme-to-substrate ratio [w/w] of 1:200).)
# 'Geiger 2013': digested in solution with endoprotease Lys-C.
# 'Hamezah 2019': Trypsin was set as the digestive enzyme
# 'Guergues 2019': Trypsin/Lys-C protease - for approx trypsin because of enzyme specificity
# 'Kjell 2020': digested with LysC and trypsin - for approx trypsin because of enzyme specificity
# 'Zhu 2018': two-step digestion was performed at 37 °C with Lys-C and trypsin - for approx trypsin because of enzyme specificity
# 'Fecher 2019': LysC (Promega) followed by 4-h incubation with 0.15 μg trypsin - for approx trypsin because of enzyme specificity
# 'Carlyle 2017': Trypsin-digested peptides
# 'McKetney 2019': Trypsin was added to the protein lysate sample at a ratio of 50:1 w/v and digested overnight.
# 'Hasan 2020': Sequencing-grade modified porcine trypsin
# 'Bai 2020': Lys-C (Wako, 1:100 w/w) at 21 C for 2 h, diluted by 4-fold to reduce urea to 2 M for the addition of trypsin - for approx trypsin because of enzyme specificity
# 'Wisniewski 2015': Lys-C and trypsin were used for sequential digestion of proteins.
# using LysC and trypsin.
# lysC
studiesLysC = ['Geiger 2013']
#
uniprot_TP = df.loc[df['Study'].isin(studiesTheorPepNum),'Uniprot_final'].unique().tolist()
uniprot_tryp = df.loc[df['Study'].isin(studiesTrypsin),'Uniprot_final'].unique().tolist()
uniprot_lysc = df.loc[df['Study'].isin(studiesLysC),'Uniprot_final'].unique().tolist()
print(len(df.loc[df['molecular_weight_kDa'].isna(),'Uniprot_final'].unique()))
uniprot_noMW = df.loc[df['molecular_weight_kDa'].isna(),'Uniprot_final'].unique().tolist()
print(len(uniprot_TP),len(uniprot_noMW))
uniprot_all = df['Uniprot_final'].unique().tolist()
#uniprot_toQuery = list(set(uniprot_TP + uniprot_noMW))
uniprot_toQuery = list(set(uniprot_all))
print(len(uniprot_toQuery))
with open('../data/uniprot_toQuery_9May2021.txt', 'w') as f:
    f.write("\n".join(uniprot_toQuery))
with open('../data/uniprot_tryp_9May2021.txt', 'w') as f:
    f.write("\n".join(uniprot_tryp))
with open('../data/uniprot_lysc_9May2021.txt', 'w') as f:
    f.write("\n".join(uniprot_lysc))
with open('../data/uniprot_noMW_9May2021.txt', 'w') as f:
    f.write("\n".join(uniprot_noMW))
#! cut -f1 ../data/uniprot_toQuery.txt | cut -f1 -d"-" | sort | uniq | wc -l
#curl to get Uniprot data
mw_seq = pd.read_csv("../data/main_run",header=None, sep = '\t')
mw_seq.columns = ['Uniprot_id','MWDa','seq']
mw_seq['MWDa'] = mw_seq['MWDa'].str.replace(",","").astype('float64')
print(len(mw_seq))
mw_seq = mw_seq.drop_duplicates(keep='first').reset_index(drop=True)
print(len(mw_seq))
mw_seq.head()
mw_seq.loc[mw_seq['Uniprot_id']=='P13864']
mw_seq.loc[mw_seq['Uniprot_id']=='Q76I79']
notFoundIds= mw_seq.loc[(mw_seq['MWDa'].isna()) | (mw_seq['seq'].isna()),'Uniprot_id' ].unique()
len(notFoundIds)
len(mw_seq.loc[(mw_seq['MWDa'].isna()) | (mw_seq['seq'].isna())])
len(mw_seq.loc[mw_seq['MWDa'].isna()])
len(mw_seq.loc[mw_seq['seq'].isna()])
mw_seq.loc[(mw_seq['MWDa'].isna()) | (mw_seq['seq'].isna())]
df.loc[df['Uniprot_final'].isin(notFoundIds)].head()
df.loc[df['Uniprot_final'].isin(notFoundIds),'Study'].unique()
add4notFoundIds = df.loc[(df['Uniprot_final'].isin(notFoundIds)) & (df['uniprot_from_gn']=='NoMapping'), 'Uniprot' ].str.split(";").tolist()
add4notFoundIdsUniq = list(set(list(chain.from_iterable(add4notFoundIds))))
len(add4notFoundIdsUniq)
with open('../data/uniprot_toQuery_add4notFoundIdsUniq_9May2021.txt', 'w') as f:
    f.write("\n".join(add4notFoundIdsUniq))
# curl for extra data
mw_seq2 = pd.read_csv("../data/run4notFound",header=None, sep = '\t')
mw_seq2.columns = ['Uniprot_id','MWDa','seq']
mw_seq2['MWDa'] = mw_seq2['MWDa'].str.replace(",","").astype('float64')
print(len(mw_seq2))
mw_seq2 = mw_seq2.drop_duplicates(keep='first').reset_index(drop=True)
print(len(mw_seq2))
mw_seq2.head()
notFoundIds2 = mw_seq2.loc[(mw_seq2['MWDa'].isna()) | (mw_seq2['seq'].isna()),'Uniprot_id' ].unique()
len(notFoundIds2)
add4notFoundIdsFromGN = list(set(list(chain.from_iterable(df.loc[(df['Uniprot_final'].isin(notFoundIds)) & (df['uniprot_from_gn']!='NoMapping'), 'uniprot_from_gn' ].tolist()))))
with open('../data/add4notFoundIdsFromGNUniq_9May2021.txt', 'w') as f:
    f.write("\n".join(add4notFoundIdsFromGN))
#curl extra data
mw_seq3 = pd.read_csv("../data/run4notFoundFromGN",header=None, sep = '\t')
mw_seq3.columns = ['Uniprot_id','MWDa','seq']
mw_seq3['MWDa'] = mw_seq3['MWDa'].str.replace(",","").astype('float64')
print(len(mw_seq3))
mw_seq3 = mw_seq3.drop_duplicates(keep='first').reset_index(drop=True)
print(len(mw_seq3))
mw_seq3.head()
mw_seq_full = pd.concat([mw_seq, mw_seq2,mw_seq3]).reset_index(drop=True)
mw_seq_full = mw_seq_full.drop_duplicates(keep='first').reset_index(drop=True)
mw_seq_full = mw_seq_full.loc[~((mw_seq_full['MWDa'].isna()) | (mw_seq_full['seq'].isna()))]
len(mw_seq_full)
#mw_seq_full = mw_seq.copy() # leftover from an earlier run; this would discard the mw_seq2/mw_seq3 entries concatenated above
print(len(mw_seq_full))
mw_seq_full = mw_seq_full.drop_duplicates(keep='first').reset_index(drop=True)
print(len(mw_seq_full))
mw_seq_full = mw_seq_full.loc[~((mw_seq_full['MWDa'].isna()) | (mw_seq_full['seq'].isna()))]
print(len(mw_seq_full))
mw_seq_full['MWkDa'] = mw_seq_full['MWDa']/1000.0
# enzyme-specific cut seq to estimate the number of peptides that are theoretically possible in the experiment
def trypsin_count(seq):
    return len(re.findall('(.{6,29}?(?:R|K|$))', seq)) ### Trypsin cleaves after K/R
def lysC_count(seq):
    return len(re.findall('(.{6,29}?(?:K|$))', seq)) ### Lys-C cleaves after K
mw_seq_full['trypsin'] = mw_seq_full['seq'].apply(trypsin_count)
mw_seq_full['lysC'] = mw_seq_full['seq'].apply(lysC_count)
mw_seq_full = mw_seq_full.drop(columns=['MWDa','seq'])
mw_seq_full.head()
len(df.loc[(~df['Uniprot_final'].isin(mw_seq_full['Uniprot_id'])) & (df['Study']=='Duda 2018'),'Uniprot_final'].unique())
len(df.loc[(~df['Uniprot_final'].isin(mw_seq_full['Uniprot_id'])) & (df['Study']=='Wisniewski 2015'),'Uniprot_final'].unique())
df.loc[df['Study']=='Wisniewski 2015','molecular_weight_kDa'].unique()
df2 = pd.merge(df, mw_seq_full, left_on = 'Uniprot_final', right_on='Uniprot_id', how='left')
len(df2.loc[(df2['MWkDa'].isna()) | (df2['trypsin'].isna()) | (df2['lysC'].isna())])
df3 = df2.loc[(df2['MWkDa'].isna()) | (df2['trypsin'].isna()) | (df2['lysC'].isna()),['gene_names','gene_name_unified','gene_id_final','Uniprot','Uniprot_unified','uniprot_from_gn','Uniprot_final']].copy().reset_index(drop=True)
def list2str(elem):
    return ";".join(elem)
df3['uniprot_from_gn'] = df3['uniprot_from_gn'].apply(list2str)
print(len(df3))
df3 = df3.drop_duplicates(keep='first')
print(len(df3))
print(len(df3.loc[df3['Uniprot_final'].isna()]))
print(len(df3.loc[df3['Uniprot_final']=='NoMapping']))
print(len(df3.loc[df3['Uniprot_final'].str.contains(';')]))
print(len(df3.loc[df3['Uniprot'].isna()]))
print(len(df3.loc[df3['Uniprot']=='NoMapping']))
print(len(df3.loc[df3['Uniprot'].str.contains(';')]))
print(len(df3.loc[df3['uniprot_from_gn'].isna()]))
print(len(df3.loc[df3['uniprot_from_gn']=='NoMapping']))
print(len(df3.loc[df3['uniprot_from_gn']=='N;o;M;a;p;p;i;n;g']))
print(len(df3.loc[df3['uniprot_from_gn'].str.contains(';')]))
mw_seq_full.head()
mw_seq_full_dict = mw_seq_full.set_index('Uniprot_id').T.to_dict('list')
df3.columns
# get MW and theor pep nums
#df3['currentUniprotID'] = None
df3['MWkDa'] = None
df3['trypsin'] = None
df3['lysC'] = None
df3 = df3.reset_index(drop=True)
for idx,elem in df3.iterrows():
    check = 0
    if elem['Uniprot_final'] in mw_seq_full_dict:
        key = elem['Uniprot_final']
        #df3.loc[idx,'currentUniprotID'] = key
        df3.loc[idx,'MWkDa'] = mw_seq_full_dict[key][0]
        df3.loc[idx,'trypsin'] = mw_seq_full_dict[key][1]
        df3.loc[idx,'lysC'] = mw_seq_full_dict[key][2]
        check = check+1
    else:
        multiMW = list()
        multiTr = list()
        multiLys = list()
        for i in elem['Uniprot'].split(';'):
            i = i.split('-')[0]
            if i in mw_seq_full_dict:
                multiMW.append(mw_seq_full_dict[i][0])
                multiTr.append(mw_seq_full_dict[i][1])
                multiLys.append(mw_seq_full_dict[i][2])
        if len(multiMW) > 0:
            df3.loc[idx,'MWkDa'] = np.median(multiMW)
            df3.loc[idx,'trypsin'] = np.median(multiTr)
            df3.loc[idx,'lysC'] = np.median(multiLys)
            check = check+1
    if ((check == 0) & (elem['uniprot_from_gn'] != 'N;o;M;a;p;p;i;n;g')):
        multiMW = list()
        multiTr = list()
        multiLys = list()
        for j in elem['uniprot_from_gn'].split(';'):
            j = j.split('-')[0]
            if j in mw_seq_full_dict:
                multiMW.append(mw_seq_full_dict[j][0])
                multiTr.append(mw_seq_full_dict[j][1])
                multiLys.append(mw_seq_full_dict[j][2])
        if len(multiMW) > 0:
            df3.loc[idx,'MWkDa'] = np.median(multiMW)
            df3.loc[idx,'trypsin'] = np.median(multiTr)
            df3.loc[idx,'lysC'] = np.median(multiLys)
            check = check+1
    elif ((check == 0) & (elem['uniprot_from_gn'] == 'N;o;M;a;p;p;i;n;g')):
        print("check ", idx)
    if check == 0:
        print("elem: ", idx)
df3.iloc[30]
len(mw_seq_full_dict)
print(len(df3.loc[df3['MWkDa'].isna()]))
print(len(df3.loc[df3['trypsin'].isna()]))
print(len(df3.loc[df3['lysC'].isna()]))
# these seem to be due to isoforms: the entry is found in UniProt, but no MW and sequence are returned when the isoform is not specified
# isoforms are different, diff MW and diff seq
df3.loc[df3['MWkDa'].isna()].head()
len(df3.loc[df3['MWkDa'].isna(),'gene_id_final'].unique())
len(df3.loc[(df3['MWkDa'].isna()) & (df3['Uniprot'].str.contains('-')),'gene_id_final'].unique()) # a possible protein isoform is known for only 1 gene
len(df2.loc[df2['gene_id_final'].isin(df3.loc[df3['MWkDa'].isna(),'gene_id_final'].unique())])/len(df2) # ~0.05% of genes -> drop them to be more accurate
df2nn = df2.loc[(df2['MWkDa'].isna()) | (df2['trypsin'].isna()) | (df2['lysC'].isna())].copy()
print(len(df2))
df2 = df2.loc[~( (df2['MWkDa'].isna()) | (df2['trypsin'].isna()) | (df2['lysC'].isna()) )]
print(len(df2))
print(len(df2nn))
df2nn = df2nn.drop(columns=['Uniprot_id', 'MWkDa', 'trypsin','lysC'])
df2nn.columns
df3.columns
df3 = df3.drop(columns=['uniprot_from_gn'])
df2nnm = pd.merge(df2nn, df3, how='inner', on = ['gene_names', 'gene_name_unified','gene_id_final', 'Uniprot', 'Uniprot_unified','Uniprot_final'])
len(df2nnm)
df2 = df2.drop(columns='Uniprot_id')
df4conc = pd.concat([df2,df2nnm], ignore_index=True, sort=False)
df4conc.loc[df4conc['Study'].isin(studiesTrypsin),'TheorPepNum'] = df4conc.loc[df4conc['Study'].isin(studiesTrypsin),'trypsin']
df4conc.loc[df4conc['Study'].isin(studiesLysC),'TheorPepNum'] = df4conc.loc[df4conc['Study'].isin(studiesLysC),'lysC']
len(df4conc.loc[df4conc['molecular_weight_kDa'].isna()])
df4conc.loc[df4conc['molecular_weight_kDa'].isna(),'molecular_weight_kDa'] = df4conc.loc[df4conc['molecular_weight_kDa'].isna(),'MWkDa']
len(df4conc.loc[df4conc['molecular_weight_kDa'].isna()])
df4conc.loc[df4conc['molecular_weight_kDa'] != df4conc['MWkDa'],['molecular_weight_kDa','MWkDa']].drop_duplicates(keep='first').head()
df4conc = df4conc.drop(columns = ['MWkDa', 'trypsin', 'lysC'])
df4conc = df4conc.reset_index(drop=True)
```
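As a quick sanity check of the in-silico digestion counters defined earlier: the regex counts fragments of 6-29 residues that end at a cleavage site (K/R for trypsin) or at the C-terminus. A sketch with made-up toy sequences (not real proteins):

```python
import re

def trypsin_count(seq):
    # same pattern as above: 6-29 residues followed by K/R (or end of sequence)
    return len(re.findall('(.{6,29}?(?:R|K|$))', seq))

print(trypsin_count("AAAAAAK"))        # 6 residues + terminal K: one peptide
print(trypsin_count("AAAK"))           # too short to be counted
print(trypsin_count("AAAAAARBBBBBB"))  # one R-terminated fragment + one C-terminal fragment
```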
### Calculate concentrations
##### based on total protein approach from Wiśniewski et al. https://doi.org/10.1074/mcp.M113.037309
```
df = df4conc.copy()
df['conc_uM'] = None
df['log_conc_uM'] = None
df['copyNum'] = None
df['totalProtein'] = None
df['totalVolume'] = None
```
##### Median cellular concentration [nM]
```
df.loc[df['raw_data_units']=='Median cellular concentration [nM]','Study'].unique()
df.loc[df['raw_data_units']=='Median cellular concentration [nM]','conc_uM'] = df.loc[df['raw_data_units']=='Median cellular concentration [nM]','raw_data']/1000.0 # nM to uM
```
##### LFQintensity
```
df.loc[df['TheorPepNum']==np.max(df['TheorPepNum']),'gene_id_final'].unique() # TTN = Titin
df['sample_full_id'] = df['Study'].astype(str) + '_' + df['Organism'].astype(str) + '_' + df['location'].astype(str) + '_' + df['Age_days'].astype(str) + '_' + df['condition'].astype(str) + '_' + df['sample_id'].astype(str)
len(df['sample_full_id'].unique())
len(df.loc[df['raw_data_units']=='LFQintensity','sample_full_id'].unique())
def protRulLFQ(lfq,detectabilityNormFactor, mw_kDa):
    tcp = 200 #tcp = Total cellular protein concentration [g/l] is 200-300 g/l typically
    papc = 200 # "Protein amount per cell [pg]" = 200
    avogadro = 6.02214129e23
    mwWeightedNormalizedSummedIntensities = np.sum(lfq/detectabilityNormFactor*(mw_kDa*1000))
    factor = papc*1e-12*avogadro/mwWeightedNormalizedSummedIntensities
    copyNum = lfq/detectabilityNormFactor*factor # if detectability is the number of theoretical peptides
    totalProtein = np.sum(copyNum*mw_kDa*1000*1e12/avogadro) # *1000 because of kDa to Da
    totalVolume = totalProtein/tcp*1000 # femtoliters
    calcConc = copyNum/(totalVolume*1e-15)/avogadro*1e6 # micromolar
    return([calcConc,copyNum,totalProtein,totalVolume])
# this will take a while
for sample_full_id in df.loc[df['raw_data_units']=='LFQintensity','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRulLFQ(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'TheorPepNum'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
```
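A toy check of the proteomic-ruler arithmetic used in `protRulLFQ`: because the scaling factor is derived from the summed MW-weighted intensities, the resulting copy numbers always add up to a total protein mass of exactly `papc` (200 pg) per cell, and hence a volume of 1000 fL. The intensities below are made up:

```python
import numpy as np

def proteomic_ruler(intensity, n_pep, mw_kDa, tcp=200.0, papc=200.0):
    # same arithmetic as protRulLFQ above: scale intensities so that the
    # summed protein mass per cell equals papc (200 pg)
    avogadro = 6.02214129e23
    norm = np.sum(intensity / n_pep * (mw_kDa * 1000))
    factor = papc * 1e-12 * avogadro / norm
    copy_num = intensity / n_pep * factor
    total_protein_pg = np.sum(copy_num * mw_kDa * 1000 * 1e12 / avogadro)
    total_volume_fl = total_protein_pg / tcp * 1000
    conc_uM = copy_num / (total_volume_fl * 1e-15) / avogadro * 1e6
    return conc_uM, copy_num, total_protein_pg, total_volume_fl

# hypothetical sample of three proteins
intensity = np.array([1e6, 5e5, 2e6])
n_pep = np.array([20.0, 10.0, 40.0])
mw = np.array([50.0, 25.0, 100.0])  # kDa
conc, copies, tot_pg, vol_fl = proteomic_ruler(intensity, n_pep, mw)
print(round(tot_pg, 6), round(vol_fl, 6))  # 200 pg and 1000 fL by construction
```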
##### iBAQ
```
#iBAQ: total intensity divided by the number of tryptic peptides between 6 and 30 amino acids in length. Jocelyn F. Krey et al 2013
def protRulIBAQ(ibaq, mw_kDa):
    tcp = 200 #tcp = Total cellular protein concentration [g/l] is 200-300 g/l typically
    papc = 200 # "Protein amount per cell [pg]" = 200
    avogadro = 6.02214129e23
    mwWeightedNormalizedSummedIntensities = np.sum(ibaq*(mw_kDa*1000))
    factor = papc*1e-12*avogadro/mwWeightedNormalizedSummedIntensities
    copyNum = ibaq*factor # adapted for iBAQ as sharma['detectabilityNormFactor'] = sharma['MedPepCount']
    totalProtein = np.sum(copyNum*mw_kDa*1000*1e12/avogadro) # *1000 because of kDa to Da
    totalVolume = totalProtein/tcp*1000 # femtoliters
    calcConc = copyNum/(totalVolume*1e-15)/avogadro*1e6 # micromolar
    return([calcConc,copyNum,totalProtein,totalVolume])
```
```
# note: a single protRulIBAQ call over all iBAQ rows at once would mix samples,
# so the normalization is applied per sample in the loop below
for sample_full_id in df.loc[df['raw_data_units']=='iBAQ','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRulIBAQ(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
```
##### Protein concentration (mol/g protein)
```
def protRul_molPerGramProt(molgramprot, mw_kDa):
    tcp = 200 #tcp = Total cellular protein concentration [g/l] is 200-300 g/l typically
    papc = 200 # "Protein amount per cell [pg]" = 200
    avogadro = 6.02214129e23
    copyNum = molgramprot*avogadro*papc*1e-12
    totalProtein = np.sum(copyNum*mw_kDa*1000*1e12/avogadro) # *1000 because of kDa to Da
    totalVolume = totalProtein/tcp*1000 # femtoliters
    calcConc = copyNum/(totalVolume*1e-15)/avogadro*1e6 # micromolar
    return([calcConc,copyNum,totalProtein,totalVolume])
for sample_full_id in df.loc[df['raw_data_units']=='Protein concentration (mol/g protein)','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRul_molPerGramProt(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
len(df.loc[(df['Study']=='Wisniewski 2015') & (df['molecular_weight_kDa'].isna())])
```
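A quick unit check of the mol-per-gram-protein conversion above, using a hypothetical abundance of 1 µmol per g total protein and the assumed 200 pg of protein per cell:

```python
avogadro = 6.02214129e23
papc_g = 200 * 1e-12   # assumed protein amount per cell: 200 pg, in grams
mol_per_g = 1e-6       # hypothetical abundance: 1 umol per g total protein

copies_per_cell = mol_per_g * avogadro * papc_g
print(f"{copies_per_cell:.3e}")  # roughly 1.2e8 copies per cell
```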
##### Mean concentration [mol/(g total protein)]
```
# rat brain density: 1.04-1.05 g/ml https://link.springer.com/chapter/10.1007/978-3-7091-9115-6_12 https://www.researchgate.net/post/What_is_the_average_wet_weight_for_dissected_RAT_brain_areas_substantia_nigra_striatum_septum_hippocampus_nucleus_accumbens_or_VTA
# human brain density: 1.03 kg/l
# From Fischer et al 2004 Average protein density is a molecular-weight-dependent function:
# ro = 1.41 + 0.145 * exp(-M(kDa)/13(4)) g/cm3 1cm3 = 0.001L = 1 mL
# or ~1.4 g/cm3 = 1.4 g/ml = 1.4*1000.0 g/l
# 1 cm3 = 1e-3 L = 1 ml
# protRul_molPerGramProt is reused here as defined above
for sample_full_id in df.loc[df['raw_data_units']=='Mean concentration [mol/(g total protein)]','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRul_molPerGramProt(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
```
##### IntensityL
```
# SILAC data, apply the same method as in LFQ, approximation
for sample_full_id in df.loc[df['raw_data_units']=='IntensityL','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRulLFQ(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'TheorPepNum'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
```
##### tmt abundance
```
#apply the same method as in LFQ, approximation
for sample_full_id in df.loc[df['raw_data_units']=='tmt abundance','sample_full_id'].unique():
    #print(sample_full_id)
    df.loc[df['sample_full_id']==sample_full_id,['conc_uM','copyNum','totalProtein','totalVolume']] = protRulLFQ(df.loc[df['sample_full_id']==sample_full_id,'raw_data'],df.loc[df['sample_full_id']==sample_full_id,'TheorPepNum'],df.loc[df['sample_full_id']==sample_full_id,'molecular_weight_kDa'])
df['log_conc_uM'] = np.log(df['conc_uM'].astype('float64'))
df['log_conc_uM'] = df['log_conc_uM'].astype('float64')
df = df.reset_index(drop=True)
df.columns
fig = plt.figure(figsize=(12, 6))
ax = sns.violinplot(x="Study", y="log_conc_uM", data=df)
ax.grid(False)
plt.xticks(rotation=75)
ax.set_ylabel('log_conc_uM')
plt.show()
with open('../data/3_df_with_conc_PerSampleNorm_9May2021.pkl','wb') as f:
    pkl.dump(df,f)
```
<img src="NYUDB-01.png">
# THE TRAGEDY OF THE PREQUELS
## Does bandwagoning in the Star Wars community <em>plague</em> Episodes I, II, and III?
### Feroz Khalidi, [*fk597@nyu.edu*](mailto:fk597@nyu.edu)
## A long time ago, in a galaxy not so far away...
...George Lucas created the epic space movie series called ***Star Wars***. Today there is broad agreement that the original trilogy (Episodes IV, V, and VI) was better than the prequels (Episodes I, II, and III). Fans deride the newer films' plots, and redditors turn their dialogue into memes (see [r/prequelmemes](https://www.reddit.com/r/prequelmemes)). However, this project was inspired in part by an ex-girlfriend who, after watching the entire Star Wars saga, said she loved the character Jar Jar Binks, who for many epitomizes all that is wrong with the prequels (I should have known then that the relationship wouldn't last. We clearly had irreconcilable differences).
But were the newer movies actually that much worse than those of the original trilogy? As a child, I loved the prequels; I grew up on them and became enthralled with their worlds. At some point though, I became wholly convinced that they were awful films. The characters seemed flat and fake and forced into roles they should have never filled. The wellspring of fantasy the CGI once provided dried up. *I freaking hated Jar Jar Binks.*
I never thought poorly of them when I first saw them. Honestly, I probably hadn't begun to dislike them until a friend had mentioned disliking them to me. Maybe I was just a four-year-old being four years old, but at the time pod racing seemed like the most exciting sport ever. Darth Maul and Jango Fett were also the coolest.
Public opinion tends to be shaped by elite cues: individuals rely heavily on the opinions of experts in order to form their own. The same soundbites can be parroted repeatedly. The feelings of others quickly become our own genuine sentiments. For my final project in the NYU Stern Data BootCamp course, I examined whether there might be a bandwagon effect within the Star Wars community. Additionally, I was interested in exploring some of the ways the prequels differ from the original trilogy in a quantifiable way. Obviously there is much to a movie that cannot be analyzed from data alone. Factors such as the actors' performances and the quality of the special effects, for example, are highly subjective and qualitative. However, people's opinions of and reactions to them can be measured and quantified. This also makes any significant differences that arise solely from a movie's script and prescribed dialogue that much more interesting to explore.
First, I look at survey data gathered on individuals' attitudes towards the different Star Wars movies and main characters. Next, I use the Star Wars API (via the `swapi` helper library) to examine which characters appear in which movies. From there, I get the script for each film and do some basic analysis of who speaks in which movies. Lastly, just for fun, I conduct some basic natural language processing of the dialogue to tease out any patterns hidden within.
##### "If many believe so, it is so"
## INITIALIZATION
First, let's load the libraries necessary. I'll need to be able to look at data frames and perform some basic mathematical and statistical manipulations. I will then need packages to generate graphics that can represent them well. Additional libraries are also needed to access the Star Wars API, to scrape and process the movie scripts, and to process the natural language contained within them.
Note: I keep all the data sources linked directly and do all the data manipulation in the script such that someone could open up this notebook and run it. At times, it makes the notebook a bit cluttered, especially when parsing the scripts, but I think the benefits gained from reproducibility far outweigh the excess code.
```
import pandas as pd
import numpy as np
import math
from collections import Counter
import matplotlib.pyplot as plt
import plotly
from plotly import tools
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
import swapi as sw
import requests
from bs4 import BeautifulSoup
import re
import nltk
import string
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
%matplotlib inline
plotly.offline.init_notebook_mode(connected=True)
```
## PART A
### Gathering and Preparing the Survey Data
The first dataset I look at comes from a FiveThirtyEight poll that the site had run through Survey Monkey Audience, surveying 1,186 respondents from June 3 to 6, 2014 (the data is available on [GitHub](https://github.com/fivethirtyeight/data/tree/master/star-wars-survey)). Seventy-nine percent (*n*=835) of those respondents said they had watched at least one of the “Star Wars” films. I am accessing it directly through the 538 Github repository and reading the csv with pandas. After cleaning and parsing the data, I will look at the participants' ratings of the different films and characters.
```
URL = 'https://raw.githubusercontent.com/fivethirtyeight/data/master/star-wars-survey/StarWars.csv' #data location
star_wars = pd.read_csv(URL, encoding="ISO-8859-1")#accessing it
star_wars.iloc[0:10, 9:] #preview a slice of the data; the loops below retitle the columns
for index in range(3,9):
    star_wars.rename(
        columns={star_wars.columns[index]:'Seen: ' + star_wars.iloc[0,index][11:]},
        inplace=True
    )
for index in range(9,15):
    star_wars.rename(
        columns={star_wars.columns[index]:'Ranking: ' + star_wars.iloc[0,index][11:]},
        inplace=True
    )
for index in range(15,29):
    star_wars.rename(
        columns={star_wars.columns[index]:'Popularity: ' + star_wars.iloc[0,index]},
        inplace=True
    )
#recode as booleans and set index
star_wars.rename(columns={star_wars.columns[31]:'Do you consider yourself to be a fan of the Expanded Universe?'}, inplace=True)
star_wars.drop(0, inplace=True)
star_wars.set_index('RespondentID', inplace=True)
yes_no = {
'Yes': True,
'No': False
}
star_wars.iloc[:,0] = star_wars.iloc[:,0].map(yes_no)
star_wars.iloc[:,1] = star_wars.iloc[:,1].map(yes_no)
star_wars.iloc[:,30] = star_wars.iloc[:,30].map(yes_no)
for index in range(2,8):
    col_name = star_wars.columns[index]
    star_wars[col_name] = pd.notnull(star_wars[col_name])
star_wars[star_wars.columns[8:14]] = star_wars[star_wars.columns[8:14]].astype(float)
string_to_number = { #recode popularity data
'Very favorably' : 2,
'Somewhat favorably' : 1,
'Neither favorably nor unfavorably (neutral)' : 0,
'Somewhat unfavorably' : -1,
'Very unfavorably' : -2,
'Unfamiliar (N/A)' : None,
None : None
}
no_nan = star_wars[star_wars.columns[14:28]].fillna('Unfamiliar (N/A)')
star_wars[star_wars.columns[14:28]] = no_nan.applymap(lambda x: string_to_number[x])
star_wars.index = range(len(star_wars))
star_wars.head() #preview the data
```
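The favorability recoding above maps Likert labels onto a −2…+2 scale, with `Unfamiliar (N/A)` treated as missing. A minimal sketch of the same idea on a toy series:

```python
import pandas as pd

string_to_number = {
    'Very favorably': 2,
    'Somewhat favorably': 1,
    'Neither favorably nor unfavorably (neutral)': 0,
    'Somewhat unfavorably': -1,
    'Very unfavorably': -2,
    'Unfamiliar (N/A)': None,
}

# filling NaN first guarantees every label is a key in the mapping
s = pd.Series(['Very favorably', None, 'Somewhat unfavorably'])
recoded = s.fillna('Unfamiliar (N/A)').map(string_to_number)
print(recoded.tolist())
```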
## PART B
### Plotting the Overall Rankings
Now we want to look at people's rankings of each movie. This is taken from each participant's ranking the six movies from first to sixth such that the closer to 1 a rating is, the better.
```
ranking = star_wars.iloc[:,8:14] #splice the ranking data into a separate variable
ranking_means = ranking.apply(np.mean) #calculate and store the means
ranking_sem = ranking.apply(np.std) / np.sqrt(len(ranking)) #calculate and store the std errors
width = 0.2 #width of the bars
index = np.arange(len(ranking_means)) #index for x_locations of bars
fig, ax = plt.subplots()
rects = ax.bar(index, ranking_means, width, color='grey', yerr=ranking_sem)
ax.set_ylabel('Ranking(mean) [lower is better]')
ax.set_title('Average ranking for the first six Star Wars movies')
ax.set_xticks(index)
ax.set_xticklabels([(x[9:]) for x in ranking.columns], rotation='vertical');
```
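The `yerr` values above are standard errors of the mean, i.e. the standard deviation divided by √n (note that `np.std` defaults to the population form, `ddof=0`). A quick standalone check with toy numbers:

```python
import numpy as np

def mean_sem(x):
    # SEM with population standard deviation (ddof=0), matching np.std's default
    x = np.asarray(x, dtype=float)
    return x.mean(), x.std() / np.sqrt(len(x))

m, se = mean_sem([1.0, 2.0, 3.0, 4.0])
print(m, round(se, 4))  # 2.5 0.559
```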
Error bars represent the standard errors of the mean. The graph is not very intuitive, because taller bars correspond to worse (numerically higher) rankings, but we can see that the original trilogy is, on average, ranked more highly among the six films than the prequels. It also does not look very nice. Here we'll replot it using the interactive plotly library.
```
numerals = ['I', 'II', 'III', 'IV', 'V', 'VI']
xdata = ['Ep ' + x for x in numerals] #create abbreviations for x
trace1 = go.Bar( #Generating the bars
hoverinfo = 'text',
x = xdata,
textfont=dict(
family='sans serif',
size=18,
color='#1f77b4'
),
y = [6 - x for x in ranking_means], #recoding such that shorter is worse, taller better
text = [str(ranking_means[x]) + '<br>Average Film Ranking (out of 6)<br>' + list(ranking.columns)[x][9:] for x in range(6)], #hover captions
marker=dict(
color='rgb(252,223,0)'),
opacity=1,
width=.6,
)
data = [trace1]
layout = go.Layout( #making it look and function well
title='<b>Original Trilogy Reigns Supreme</b> <br> Average Films Rankings',
titlefont =dict(color='rgb(255,255,255)',
size =21,
family = 'open sans'),
plot_bgcolor='rgba(10, 0, 0, 1)',
paper_bgcolor ='rgba(10, 0, 0, 1)',
height=600,
annotations=[ ],
xaxis=dict(tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor='rgb(255,255,255)', linewidth=1)
,
yaxis=dict(range = [0, 5], tickvals = [0,.5,1,1.5,2,2.5,3,3.5,4, 4.5, 5],tickmode="array", ticktext = [6, 5.5, 5, 4.5, 4, 3.5, 3, 2.5, 2, 1.5, 1], tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor ='rgb(0,0,0)', linewidth=0),
)
fig = go.Figure(data=data, layout=layout) #now let's plot it
iplot(fig)
```
Much better. The prequels are clearly ranked lower among the six films than are the original films. But we need to explore the data further, for a couple of reasons. First, this includes people who have not seen all six films, and it might be better to exclude them because they cannot give a fair assessment. Second, this includes everyone, fans and non-fans. Obviously, if you have seen all six Star Wars films you are more likely to be a fan than if you haven't, but there are still some people, like my Gungan-loving ex, who have seen all six films but are not themselves fans.
## PART C
### Looking for Film-Ranking Differences between Fans and Non-Fans
```
seen_all = star_wars[(star_wars['Seen: Episode I The Phantom Menace'] == True) & #store separately data for only individuals who have seen all six movies
                     (star_wars['Seen: Episode II Attack of the Clones'] == True) &
                     (star_wars['Seen: Episode III Revenge of the Sith'] == True) &
                     (star_wars['Seen: Episode IV A New Hope'] == True) &
                     (star_wars['Seen: Episode V The Empire Strikes Back'] == True) &
                     (star_wars['Seen: Episode VI Return of the Jedi'] == True)]
seen_all_ranking = seen_all.iloc[:,8:14] #get the rankings
seen_all_ranking_means = seen_all_ranking.apply(np.mean) #get the means
fan = seen_all[seen_all["Do you consider yourself to be a fan of the Star Wars film franchise?"] == True]
fan_rankings = fan.iloc[:,8:14] #get the fans' rankings
non_fan = seen_all[seen_all["Do you consider yourself to be a fan of the Star Wars film franchise?"] == False]
non_fan_rankings = non_fan.iloc[:,8:14] #get the non-fans' rankings
print('seen all: n =', len(seen_all)) #how many have seen all the six films
print('seen all, fans: n =', len(fan))
print('seen all, non_fans: n =', len(non_fan))
fan_ranking_means = fan_rankings.apply(np.mean) #get the fans' means
fan_ranking_sem = fan_rankings.apply(np.std) / np.sqrt(len(fan_rankings)) #std error
non_fan_ranking_means = non_fan_rankings.apply(np.mean) #ditto for non-fans
non_fan_ranking_sem = non_fan_rankings.apply(np.std) / np.sqrt(len(non_fan_rankings))
width = 0.2 #let's plot it all
index = np.arange(len(ranking_means))
fig, ax = plt.subplots()
rects_fan = ax.bar(index, fan_ranking_means, width, color='blue', yerr=fan_ranking_sem)
rects_non_fan = ax.bar(index + width, non_fan_ranking_means, width, color='green', yerr=non_fan_ranking_sem)
ax.set_ylabel('Ranking (mean) [lower is better]')
ax.set_title('Average rating for the first six Star Wars movies by group')
ax.set_xticks(index + width/2)
ax.set_xticklabels([(x[9:]) for x in ranking.columns], rotation='vertical');
ax.legend((rects_fan[0], rects_non_fan[0]), ('Self-Identify as Fans', 'Do not Self-identify as Fans'));
```
There it is! Fans uniformly dislike the prequels and revere the original trilogy, but the trend is much weaker among the non-fans: on average, non-fans rank Episode I almost as highly as Episodes IV and VI.
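As an aside, the per-column mean and standard error computed with `apply` above can be written more compactly with pandas' built-in reductions. A minimal sketch on toy rankings (hypothetical numbers, not the survey data); note that `DataFrame.sem` defaults to `ddof=1`, so `ddof=0` is passed to match the `np.std(...) / sqrt(n)` computation used above:

```python
import numpy as np
import pandas as pd

# Toy rankings standing in for the survey columns (hypothetical values).
rankings = pd.DataFrame({
    "Episode I": [1, 4, 6, 3],
    "Episode IV": [2, 1, 1, 2],
})

means = rankings.mean()      # per-column mean
sems = rankings.sem(ddof=0)  # per-column standard error, matching np.std's default ddof=0

# Equivalent to the population-std / sqrt(n) computation above.
manual_sem = rankings.values.std(axis=0, ddof=0) / np.sqrt(len(rankings))
assert np.allclose(sems.values, manual_sem)
```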
But just as before, let's reconfigure and reconstruct the figure in plotly.
```
trace1 = go.Bar( #Fans bar
name = 'Self-Identify as Fans',
hoverinfo = 'text',
x = xdata,
textfont=dict(
family='sans serif',
size=18,
color='#1f77b4'
),
y = [6 - x for x in fan_ranking_means],
    text = [str(fan_ranking_means[x]) + '<br>Average Fan Film Ranking (out of 6)<br>' + list(ranking.columns)[x][9:] for x in range(6)], #hover text
marker=dict(
        color='#fd972b'),
opacity=1,
width=.4,
)
trace2 = go.Bar( #Non-Fans
name = 'Do not Self-Identify as Fans',
hoverinfo = 'text',
x = xdata,
textfont=dict(
family='sans serif',
size=18,
color='#1f77b4'
),
y = [6 - x for x in non_fan_ranking_means], #recoding such that shorter is worse, taller better
text = [str(non_fan_ranking_means[x]) + '<br>Average Non-Fan Film Ranking (out of 6)<br>' + list(ranking.columns)[x][9:] for x in range(6)], #hover text
marker=dict(
        color='#1a75bc'),
opacity=1,
width=.4,
)
data = [trace1, trace2]
layout = go.Layout( #make it look nice
barmode='group',
title='<b>Prequel-Hate among Fans</b> <br> Average Films Rankings by Self-Identified Fan Status',
titlefont =dict(color='rgb(255,255,255)',
size =21,
family = 'open sans'),
plot_bgcolor='rgba(10, 0, 0, 1)',
paper_bgcolor ='rgba(10, 0, 0, 1)',
height=700,
annotations=[ ],
xaxis=dict(tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor='rgb(255,255,255)', linewidth=1)
,
yaxis=dict(range = [0, 5], tickvals = [0,.5,1,1.5,2,2.5,3,3.5,4, 4.5, 5],tickmode="array", ticktext = [6, 5.5, 5, 4.5, 4, 3.5, 3, 2.5, 2, 1.5, 1], tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor ='rgb(0,0,0)', linewidth=0), #making the ticks match the inverted values to preserve proportionality
legend=dict(orientation = 'h',font=dict(color='rgb(255,255,255)')),
)
fig = go.Figure(data=data, layout=layout)
iplot(fig) #plot it
```
The above figure clearly shows that respondents who identified as fans ranked the prequels much lower than the non-fans did. This suggests a bandwagon effect within the Star Wars community: as a member, it seems canon to love the original trilogy and sacrilege to like the prequels. One benefit of the SurveyMonkey Audience is that it probably suffers less self-selection bias than openly advertised Star Wars polls: fewer than half of the participants had seen all six films and nearly a quarter had seen none, which strengthens the external validity of the data to the overall population. But at least everyone still hates Jar Jar, right?
## Part D
### Evaluating Character Popularity between Fans and Non-Fans
This was an idea that has been sitting with me for a while. When did I start hating Jar Jar? Why did I find it so appalling that someone whose opinion I otherwise valued was so clearly wrong about something so clearly awful?
Jar Jar Binks is possibly one of the most hated characters in any film series. Despite his ostensibly loveable goofiness and being both caring and loyal, Jar Jar is almost universally despised. He is easy to hate. For many Star Wars fans, he represents everything that George Lucas did to ruin the series. He is a computer-generated mess of nonsensical dialogue and concocted idiocy who simultaneously forces the plot along and is completely unnecessary to the story. But to someone initially unfamiliar with the series, and also to my younger self, he wasn't that bad. *I think I might have even liked him...*
To evaluate the possibly insidious effects of bandwagoning on individuals' views of the characters, I am again taking the data from only participants who had seen all 6 films and splitting it along fans and non-fans. From there I will plot the results.
```
seen_all_JJB = seen_all.iloc[:,14:28] #get character popularities from seen_all data
seen_all_JJB_means = seen_all_JJB.apply(np.mean) #means
seen_all_JJB_sem = seen_all_JJB.apply(np.std) / np.sqrt(len(seen_all_JJB)) #std err
fan_JJB = fan.iloc[:,14:28] #now for fans
fan_JJB_means = fan_JJB.apply(np.mean)
fan_JJB_sem = fan_JJB.apply(np.std) / np.sqrt(len(fan_JJB))
non_fan_JJB = non_fan.iloc[:,14:28] #ditto for non-fans
non_fan_JJB_means = non_fan_JJB.apply(np.mean)
non_fan_JJB_sem = non_fan_JJB.apply(np.std) / np.sqrt(len(non_fan_JJB))
for character in seen_all_JJB.columns: #plot each character's popularity among fans and non-fans using a for loop
fig, ax = plt.subplots()
index = 1
width = 0.2
rects_fan_JJB = ax.bar(index, fan_JJB_means[character], width, color='blue', yerr=fan_JJB_sem[character])
rects_non_fan_JJB = ax.bar(index + width, non_fan_JJB_means[character], width, color='green', yerr=non_fan_JJB_sem[character])
ax.set_title(character);
```
First, one can see a significant difference between fans and non-fans in their favorability toward Jar Jar Binks. While non-fans don't mind or even like the oafish Gungan, fans actively dislike him. Second, aside from Jar Jar and Padme, fans show consistently higher favorability. This makes sense, as fans of a series are more likely to find its characters favorable than someone who has seen all six films and still does not consider themselves a fan. Interestingly, Padme is also rated higher by non-fans than by fans; like Jar Jar, she appears exclusively in the prequels. Anakin also appears predominantly in the prequels, but because Darth Vader is Anakin Skywalker [spoiler alert! I'm sorry], fans likely imagine the two as more or less the same character. This is supported by their nearly equal ratings, which fall within each other's standard error bars; among non-fans, however, this pattern does not appear.
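The fan-versus-non-fan gap discussed above can be summarized in one number per character by differencing the two mean series. A minimal sketch with hypothetical favorability means (illustrative values, not the survey results):

```python
import pandas as pd

# Hypothetical favorability means on a 1-5 scale (illustrative only).
fan_means = pd.Series({"Jar Jar Binks": 2.1, "Padme Amidala": 3.4, "Yoda": 4.6})
non_fan_means = pd.Series({"Jar Jar Binks": 3.2, "Padme Amidala": 3.6, "Yoda": 4.2})

# Negative gap: non-fans rate the character higher; positive: fans do.
gap = (fan_means - non_fan_means).sort_values()
print(gap.index[0])  # character most favored by non-fans relative to fans
```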
I am interested in further exploring this relationship between characters and the trilogies in which they appear. But before that, let's create a nicer figure with plotly.
```
trace1 = go.Bar(
name = 'Self-Identify as Fans',
hoverinfo = 'text',
x = [character[12:] for character in list(seen_all_JJB.columns)],
textfont=dict(
family='sans serif',
size=18,
color='#1f77b4'
),
y = fan_JJB_means,
    text = [str(fan_JJB_means[x]) + '<br>Average Fan Character Rating<br>' + seen_all_JJB.columns[x][12:] for x in range(len(seen_all_JJB.columns))], #hover text
marker=dict(
        color='#fd972b'),
opacity=1,
width=.4,
)
trace2 = go.Bar(
name = 'Do not Self-Identify as Fans',
hoverinfo = 'text',
x = [character[12:] for character in list(seen_all_JJB.columns)],
textfont=dict(
family='sans serif',
size=18,
color='#1f77b4'
),
y = non_fan_JJB_means,
text = [str(non_fan_JJB_means[x]) + '<br>Average Non-Fan Character Rating<br>' + seen_all_JJB.columns[x][12:] for x in range(len(seen_all_JJB.columns))],
marker=dict(
        color='#1a75bc'),
opacity=1,
width=.4,
)
data = [trace1, trace2]
layout = go.Layout(
barmode='group',
title='<b>The Jar Jar Effect</b> <br> Average Character Ratings by Self-Identified Fan Status',
titlefont =dict(color='rgb(255,255,255)',
size =21,
family = 'open sans'),
plot_bgcolor='rgba(10, 0, 0, 1)',
paper_bgcolor ='rgba(10, 0, 0, 1)',
height=700,
annotations=[ ],
xaxis=dict(anchor='free',position=0.22,tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor='rgb(0,0,0)', linewidth=0)
,
yaxis=dict(tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor ='rgb(0,0,0)', linewidth=0),
legend=dict(orientation = 'h',font=dict(color='rgb(255,255,255)'), y = 0),
margin=dict(t = 100),
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
```
## PART E
### Creating a Venn Diagram of Characters Appearing in Each Trilogy
The Star Wars API (swapi) is the world's first quantified and programmatically-formatted set of Star Wars data.
Its maintainers have watched and re-watched each Star Wars film, collecting and formatting the data in JSON, and the API comes with its own Python helper library. [For more info, please see their documentation.](https://swapi.co/documentation)
Here we use swapi to get the characters in each movie and create a responsive Venn diagram from that information.
```
x = {sw.get_film(movie): sw.get_film(movie).get_characters().items for movie in range(1,7)} #get all the character items for each film
d = [] #need to recode movies 0 = 4, 1=5,2=6, 3=1, 4=2, 5=3
for movie in range(len(list(x.values()))): #this for loop creates the structure for a data frame that will contain the info
for person in list(x.values())[movie]:
d.append({'Person': str(person)[10:].rstrip('>'), 'Movie': (movie+4) if movie<3 else (movie-2)})
df = pd.DataFrame(d)
df = df.groupby('Person')['Movie'].apply(list).reset_index()
df.at[8, 'Person'] = 'Beru Lars' #fix a garbled name (DataFrame.set_value is deprecated; .at does the same scalar assignment)
df['VD Category'] = "" #VD = Venn Diagram
#parse through it, sorting whether it's a prequel or sequel
preq_only = [] #initialize a bunch of variables for the following loop
both = []
ot_only = []
preq_only_count = 0
both_count = 0
ot_only_count = 0
preq_lbls = []
ot_lbls = []
both_lbls = []
for index, row in df.iterrows():
    if (any(i in [1,2,3] for i in row[1])) and (not any(i in [4,5,6] for i in row[1])): #in the prequels but not the OT
        df.at[index, 'VD Category'] = "Prequels" #write via df.at, since assigning to the row copy returned by iterrows() does not modify the DataFrame
        preq_only.append(row[0]) #add the name to the list of prequel-only characters
        preq_only_count += 1 #increment the counter (len() of the list above would also work, but it's nice to have the int as its own variable)
        preq_lbls.append(row[1]) #store which movies they're in for the hover text
    elif (not any(i in [1,2,3] for i in row[1])) and (any(i in [4,5,6] for i in row[1])): #in the OT but not the prequels
        df.at[index, 'VD Category'] = "Original Trilogy"
        ot_only.append(row[0])
        ot_only_count += 1
        ot_lbls.append(row[1])
    elif (any(i in [1,2,3] for i in row[1])) and (any(i in [4,5,6] for i in row[1])): #in both (an 'else' would suffice, but I prefer to spell out the condition I have in mind)
        df.at[index, 'VD Category'] = "Both"
        both.append(row[0])
        both_count += 1
        both_lbls.append(row[1])
df.head(10) #preview it
```
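The categorization loop above is effectively doing set algebra on each character's movie list. The same three-way split can be sketched with Python sets (toy data, not the swapi results):

```python
# Episode numbers each character appears in (toy data).
appearances = {
    "Jar Jar Binks": {1, 2},
    "Luke Skywalker": {3, 4, 5, 6},
    "Han Solo": {4, 5, 6},
}
prequels, originals = {1, 2, 3}, {4, 5, 6}

# A character is "prequels only" if their movies intersect the prequels
# but not the original trilogy, and symmetrically for the other regions.
preq_only = {c for c, m in appearances.items() if m & prequels and not m & originals}
ot_only = {c for c, m in appearances.items() if m & originals and not m & prequels}
both = {c for c, m in appearances.items() if m & prequels and m & originals}
```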
Now that the data has been initially parsed, let's combine the groups in an order suitable for the Venn diagram.
```
#note: coordinates are hard coded, but changing the height and width of the layout resizes proportionally, effectively achieving the same result
ot_x= [2.42 for x in range(ot_only_count)] #x coordinates for the right (OT) circle
ot_y=np.linspace(1.9,.1, ot_only_count) #y coordinates for the right circle
preq_x = [.74 for x in range(math.ceil((preq_only_count / 2)))] + [1.14 for x in range(math.floor((preq_only_count / 2)))]
preq_y=np.concatenate((np.linspace(1.9,.1, math.ceil((preq_only_count / 2))),np.linspace(1.9,.1, math.floor((preq_only_count / 2)))))
both_x = [1.68 for x in range(both_count)]
both_y = (np.linspace(1.6,.4, both_count))
all_x = ot_x + preq_x + both_x
all_y = np.concatenate((ot_y, preq_y, both_y))
all_text = ot_only+preq_only+both
lbls = ot_lbls + preq_lbls + both_lbls
labels=['' for x in all_text] #generate hover text
for i in range(len(all_text)):
lbl = ''
for movie in lbls[i]:
lbl += numerals[movie-1] + ' '
labels[i] = all_text[i] + '<br>Movies: ' + lbl.rstrip()
annotations = []
for i in range(len(all_text)):#generate the words to be plotted on the VD
annotations.append(dict(x=all_x[i], y=all_y[i], text=all_text[i],
font=dict(family='Arial', size=14,
color='black'),
showarrow=False,))
annotations.append(dict(x=2.42, y=-.1, text='Original Trilogy Only', #label the right circle
                        font=dict(family='Arial', size=18,
                                  color='white'),
                        showarrow=False,))
annotations.append(dict(x=1, y=-.1, text='Prequels Only', #label the left circle
                        font=dict(family='Arial', size=18,
                                  color='white'),
                        showarrow=False,))
annotations.append(dict(x=1.68, y=.1, text='Both', #label the intersection
                        font=dict(family='Arial', size=18,
                                  color='white'),
                        showarrow=False,))
trace0 = go.Scatter( #create the scatterplot
x=all_x,
y=all_y,
text=labels,
mode='none',
hoverinfo = 'text',
hoveron = 'points',
textfont=dict(
color='black',
size=12,
family='Arial',
),
)
data = [trace0]
layout = { #make it look good, note:this one had to be done as a dictionary manually so that the shapes attribute would function properly
    'title' :'<b>Twin Suns of Tatooine</b> <br>Characters Appearing in the Different Trilogies',
'titlefont' :{'color' :'rgb(255,255,255)',
'size' :21,
'family': 'open sans'},
'plot_bgcolor':'rgba(10, 0, 0, 1)',
'paper_bgcolor':'rgba(10, 0, 0, 1)',
'hovermode': 'closest',
'xaxis': {
'showticklabels': False,
'autotick': False,
'showgrid': False,
'zeroline': False,
},
'yaxis': {
'showticklabels': False,
'autotick': False,
'showgrid': False,
'zeroline': False,
},
'shapes': [
{
'opacity': 1, #create two background white circles so opacity works on black page
'xref': 'x',
'yref': 'y',
'fillcolor': 'white',
'x0': 0,
'y0': 0,
'x1': 2,
'y1': 2,
'type': 'circle',
'line': {
'color': 'white',
},
},
{
'opacity': 1, #second
'xref': 'x',
'yref': 'y',
'fillcolor': 'white',
'x0': 1.35,
'y0': 0,
'x1': 3.35,
'y1': 2,
'type': 'circle',
'line': {
'color': 'white',
},
},
{
'opacity': 0.3, #Left circle
'xref': 'x',
'yref': 'y',
'fillcolor': 'green',
'x0': 0,
'y0': 0,
'x1': 2,
'y1': 2,
'type': 'circle',
'line': {
'color': 'green',
},
},
{
'opacity': 0.3, #right
'xref': 'x',
'yref': 'y',
'fillcolor': 'blue',
'x0': 1.35,
'y0': 0,
'x1': 3.35,
'y1': 2,
'type': 'circle',
'line': {
'color': 'blue',
},
}
],
'margin': { #get the dimensions right
'l': 20,
'r': 20,
'b': 100,
't': 100
},
'height': 800,
'width': 1000,
'annotations': annotations, #add appropriate labels
}
fig = {
'data': data,
'layout': layout,
}
iplot(fig) #plot it
```
As this Venn diagram illustrates, there are a lot of characters in the prequels. Most of the characters that appear in the original trilogy are nameless and otherwise forgettable stormtroopers and rebel fighters that did not get their own character items in swapi. Compared to the initial list from the 538 diagram, we see that Anakin, Jar Jar, and Padme appear in the prequels only, Han and Lando appear in the original trilogy only, and Boba, Darth Vader, Leia, Luke, Obi-Wan, Palpatine, R2, and Yoda appear in both. However, just because a character appears in a film does not mean they were a real presence in it. For example, Luke and Leia appear in the prequels only as infants at the end of Episode III. One might argue that Anakin does appear in the original trilogy, when Darth Vader takes off his mask [again, spoiler alert!] and when he appears as a Force ghost at the end of Episode VI. So, while a binary is initially helpful, there is more information to be found in the dialogue of the movie scripts themselves.
## PART F
### Collecting and Parsing the Star Wars Scripts for Dialogue
The third set of data comes from IMSDB (The Internet Movie Script Database: "The web's largest movie script resource!"). It consists of the six scripts to the original trilogy and the prequels. They are contained in HTML webpages and so need to be parsed with Beautiful Soup 4. Some of them use different formatting and so require novel and somewhat esoteric code to get at the language itself. The code itself isn't that brilliant, but automating the pattern recognition for some of the scripts took careful examination (with a few manual corrections afterward).
```
url4 = 'http://www.imsdb.com/scripts/Star-Wars-A-New-Hope.html' #get the urls from IMSDB
url5 = 'http://www.imsdb.com/scripts/Star-Wars-The-Empire-Strikes-Back.html'
url6 = 'http://www.imsdb.com/scripts/Star-Wars-Return-of-the-Jedi.html'
url1 = 'http://www.imsdb.com/scripts/Star-Wars-The-Phantom-Menace.html'
url2 = 'http://www.imsdb.com/scripts/Star-Wars-Attack-of-the-Clones.html'
url3 = 'http://www.imsdb.com/scripts/Star-Wars-Revenge-of-the-Sith.html'
ep4 = requests.get(url4) #access the information contained
ep5 = requests.get(url5)
ep6 = requests.get(url6)
ep1 = requests.get(url1)
ep2 = requests.get(url2)
ep3 = requests.get(url3)
ep4_soup = BeautifulSoup(ep4.content, 'html.parser') #parse the html
ep5_soup = BeautifulSoup(ep5.content, 'html.parser')
ep6_soup = BeautifulSoup(ep6.content, 'html.parser')
ep1_soup = BeautifulSoup(ep1.content, 'html.parser')
ep2_soup = BeautifulSoup(ep2.content, 'html.parser')
ep3_soup = BeautifulSoup(ep3.content, 'html.parser')
ep4_script = ep4_soup.find_all('pre')[0] #for these five scripts, the screenplay is contained within the <pre> tags
ep4_raw = ep4_script.get_text()
ep5_script = ep5_soup.find_all('pre')[0]
ep5_raw = ep5_script.get_text()
ep6_script = ep6_soup.find_all('pre')[0]
ep6_raw = ep6_script.get_text()
ep1_script = ep1_soup.find_all('pre')[0]
ep1_raw = ep1_script.get_text()
ep2_script = ep2_soup.find_all('pre')[0]
ep2_raw = ep2_script.get_text()
ep3_raw = ep3_soup.get_text() #episode 3 required finding the beginning and end manually
for m in re.finditer('STAR WARS EPISODE 3: REVENGE OF THE SITH SCRIPT', ep3_raw):
I1 = m.start()
for m in re.finditer('END TITLES.', ep3_raw):
I2 = m.end()
ep3_raw = ep3_raw[I1:I2]
```
Now we have the raw scripts and can begin to access the dialogue itself. I am interested in seeing how often a character actually speaks in each movie. The basic structure I use for each script is the same but the exact characters or patterns of characters that signal the beginning of a dialogue and the current speaker vary from one to another. Hence, I modified the code for each script. There are also some typos and differences in names across different scripts (and even sometimes variations within) and so I do some looping and cleaning up a bit before and after parsing.
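The indentation convention described above can be sketched on a toy fragment before tackling the real scripts. This is a minimal sketch with made-up indent widths and dialogue; the real screenplays each use their own widths, which is why the code below hard-codes them per episode:

```python
# Toy screenplay fragment: a deep indent marks the speaker,
# a shallower indent marks that speaker's dialogue.
script = (
    "          LUKE\n"
    "     I can't believe it.\n"
    "          YODA\n"
    "     That is why you fail.\n"
)
SPEAKER_INDENT = " " * 10   # assumed widths for this toy example only
DIALOGUE_INDENT = " " * 5

parsed, char, temp = [], None, []
for line in script.split("\n"):
    # Check the deeper (speaker) indent first, since a speaker line
    # also starts with the shallower dialogue indent.
    if line.startswith(SPEAKER_INDENT):
        if char is not None:
            parsed.append([char, " ".join(temp)])  # flush the previous speaker
        char, temp = line.strip(), []
    elif line.startswith(DIALOGUE_INDENT):
        temp.append(line.strip())
if char is not None:
    parsed.append([char, " ".join(temp)])  # flush the final speaker
```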
```
ep4_split = ep4_raw.split('\n') #split the script into lines
ep4_length = len(ep4_split)
speaker_spaces = ' ' * 37 #a 37-space indent indicates a speaker (compared against a 37-character slice below)
dialogue_spaces = ' ' * 25 #a 25-space indent indicates dialogue
ind = ep4_split.index(' Another blast shakes them as they struggle along their way.') #find where dialogue begins
end = ep4_split.index(' INT. MASSASSI OUTPOST - MAIN THRONE ROOM')
j=-1 #-1 so it doesn't start copying until after the first character is done speaking
ep4_parsed = []
char = []
lines = []
while ind < end: #walk through every line; when a speaker is found, store the name and accumulate their dialogue until the next speaker, then repeat
if ep4_split[ind] == '':
ind = ind + 1
elif ep4_split[ind] != '':
if ep4_split[ind][15] != ' ':
ind = ind + 1
elif ep4_split[ind][15] == ' ':
if ep4_split[ind][0:37] == speaker_spaces:
if j >= 0:
lines = ''.join(temp)
ep4_parsed.append([char, lines])
char = ep4_split[ind][37:]
temp = []
j = j + 1
ind = ind + 1
elif ep4_split[ind][:25] == dialogue_spaces:
temp.append(ep4_split[ind][25:])
ind = ind + 1
else:
ind = ind + 1
lines = ''.join(temp) # to make sure we get the last lines of dialogue
ep4_parsed.append([char, lines])
for line in ep4_parsed: #some cleaning up
line[1] = line[1].replace(') ', ')')
line[1] = line[1].replace(')', ') ')
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0][-7:] == 'S VOICE':
line[0] = line[0][:-8]
if line[0] == 'VADER':
line[0] = 'DARTH VADER'
elif line[0] == 'BEN':
line[0] = 'OBI-WAN'
ep5_split = ep5_raw.split('\n')
ep5_length = len(ep5_split) # number of lines
speaker_spaces = '\t\t\t\t'
dialogue_spaces = '\t\t'
ind = ep5_split.index('transmitter. His Tauntaun shifts and moans nervously beneath him.') #find where dialogue begins
end = ep5_split.index('Luke wriggles his fingers, makes a fist, and relaxes it. His hand is ') #ends
j=-1
ep5_parsed = []
char = []
lines = [] #same basic structure but with tab characters instead of spaces
while ind < end:
if ep5_split[ind] == '':
ind = ind + 1
elif ep5_split[ind][0] != '':
if ep5_split[ind][0] != '\t':
ind = ind + 1
elif ep5_split[ind][0] == '\t':
if ep5_split[ind][:4] == speaker_spaces:
if j >= 0:
lines = ''.join(temp)
ep5_parsed.append([char, lines])
char = ep5_split[ind][4:]
temp = []
j = j + 1
ind = ind + 1
elif ep5_split[ind][:2] == dialogue_spaces:
temp.append(ep5_split[ind][2:])
ind = ind + 1
else:
ind = ind + 1
lines = ''.join(temp) # to make sure we get the last line
ep5_parsed.append([char, lines])
for line in ep5_parsed:
line[1] = line[1].replace(') ', ')')
line[1] = line[1].replace(')', ') ')
line[1] = line[1].replace('\t(',' (')
line[1] = line[1].replace('\t ','')
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0][-7:] == 'S VOICE':
line[0] = line[0][:-8]
if line[0] == 'VADER':
line[0] = 'DARTH VADER'
elif line[0] == 'BEN':
line[0] = 'OBI-WAN'
elif line[0] == 'EMPEROR':
line[0] = 'PALPATINE'
#this script gave little indication of a speaker other than the name being in all caps, so other all-caps words that were not the beginnings of dialogue had to be changed first
ep6_raw = ep6_raw.replace('INTO HYPERSPACE.', 'Into hyperspace')
ep6_raw = ep6_raw.replace('PIETT is leaning over', 'Piett is leaning over')
ep6_raw = ep6_raw.replace('THE VOICES of our heroes.','The Voices of our heroes.')
ep6_raw = ep6_raw.replace('FAR AHEAD, Leia and the first', 'Far Ahead, Leia and the first')
ep6_raw = ep6_raw.replace('THEIR POV. Not far', 'Their POV. Not far')
ep6_raw = ep6_raw.replace('BOBA FETT standing near', 'Boba Fett standing near')
ep6_raw = ep6_raw.replace('MON MOTHMA, the', 'Mon Mothma, the')
ep6_raw = ep6_raw.replace('ATTACK CALL. All hell breaks loose', 'Attack Call. All hell breaks loose')
ep6_raw = ep6_raw.replace('SCOUT standing over her with his', 'Scout standing over her with his')
ep6_raw = ep6_raw.replace('ANOTHER PART OF THE FOREST: Luke and the last', 'Another part of the Forest: Luke and the last')
ep6_raw = ep6_raw.replace('RANCOR emerges. The guard runs', 'Rancor emerges. The guard runs')
ep6_split = ep6_raw.split('\n')
ind = ep6_split.index('SHUTTLE CAPTAIN')
fin = ep6_split.index('Han is stunned by this news. She smiles, and they embrace.')
j = 0
dialogue = 0
temp = []
ep6_parsed = []
for line in ep6_split[ind:fin]:
if dialogue == 1:
temp_line = line
if line.startswith(' ') == 1:
temp_line = temp_line[1:]
if line.endswith(' ') == 0:
temp_line = ''.join([temp_line, ' '])
temp.append(temp_line)
if line == '':
lines = ''.join(temp)
ep6_parsed.append([char, lines])
dialogue = 0
j = j + 1
if line != '':
index = line.find(' ')
if index != 0 and index != 1:
letters = set(line[:index])
ALL_CAPS = all(letter.isupper() for letter in letters)
if ALL_CAPS == 1:
subtext = line.find('(')
if subtext != -1:
char = line[:subtext-1]
temp = []
temp.extend([line[subtext:], ' '])
else:
char = line
temp = []
dialogue = 1
#temp.extend([' ', ep2_split[ind][3:]])
for line in ep6_parsed: #clean it up
line[1] = line[1][:-2]
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0] == 'VADER':
line[0] = 'DARTH VADER'
elif line[0] == 'BEN':
line[0] = 'OBI-WAN'
elif line[0] == 'EMPEROR':
line[0] = 'PALPATINE'
#This one was particularly tough. A dialogue begins with a colon, and the word before the colon is the speaker,
#so we need a way to find where the name starts. A dialogue ends either at two newlines or at the next
#speaker's colon, so we take whichever comes first (the minimum of the two distances) and continue from there.
#A handful of lines violated this pattern, but roughly ten misparsed samples is negligible relative to the
#overall length of the script.
ep1_raw = ep1_raw.replace('B :A', 'B : A')
ep1_raw = ep1_raw.replace('br QUI-GON : Just relax', '\n\nQUI-GON : Just relax')
ep1_raw = ep1_raw.replace(' PADME : I\'m Padme', '\n\nPADME : I\'m Padme')
ep1_raw = ep1_raw.replace('Igave', 'I gave')
ep1_raw = ep1_raw.replace('maste ', 'master ')
ep1_raw = ep1_raw.replace(' WATTO :\n(subtitled) Fweepa', '\n\nWATTO : (subtitled) Fweepa')
ep1_raw = ep1_raw.replace(' JAR JAR : Who, mesa?? SEBULBA :\n', '\n\nJAR JAR : Who, mesa??\n\nSEBULBA : ')
ep1_raw = ep1_raw.replace('Yes? CAPT', 'Yes?\n\nCAPT')
ep1_raw = ep1_raw.replace('power charge. ANAKIN :\n', 'power charge.\n\nANAKIN : \n')
ep1_raw = ep1_raw.replace('poerhaps', 'perhaps')
ep1_raw = ep1_raw.replace('patienc,', 'patience')
ep1_raw = ep1_raw.replace(' :\n', ' : ')
ep1_raw = ep1_raw.replace('-\nA: Toogi!', '-\nA : Toogi!')
ep1_raw = ep1_raw.replace('(Cont\'d) : Oops!', '(Cont\'d) Oops!')
ep1_raw = ep1_raw.replace('\n\nPADME looks at him, not knowing what to say. PADME', '\nPADME looks at him, not knowing what to say.\n\nPADME')
ep1_raw = ep1_raw.replace('\n\nQUI-GON hurries into the shop, followed by ARTOO. QUI-GON','\nQUI-GON hurries into the shop, followed by ARTOO.\n\nQUI-GON')
ep1_raw = ep1_raw.replace('The\nprize money would more than pay for the parts they need. JAR JAR', 'The prize money would more than pay for the parts they need.\n\nJAR JAR')
ep1_raw = ep1_raw.replace('FANTA. JAR JAR', 'FANTA.\n\nJAR JAR')
beg_string = 'of the battleships.'
beg = ep1_raw.find(beg_string)
ind = beg + len(beg_string)
end_string = 'They give each other a concerned look.'
end = ep1_raw.find(end_string)
ep1_to_be_parsed = ep1_raw[ind:end]
num_dialogues = ep1_to_be_parsed.count(' : ')
ep1_parsed = []
delimiter = ' : '
delimiter2 = '\n\n'
j = 0
start = 0
while j <= 1108: #hard-coded number of dialogues; the loop finds whichever delimiter comes next and splices the text accordingly
fin = ep1_to_be_parsed.find(delimiter, start)
beg = ep1_to_be_parsed.rfind('\n', start, fin)
char = ep1_to_be_parsed[beg+1:fin]
nxt = ep1_to_be_parsed.find(delimiter, fin+3)
nxt1 = ep1_to_be_parsed.rfind('\n', fin+3, nxt)
nxt2 = ep1_to_be_parsed.find(delimiter2, fin)
if nxt1 == -1:
endd = nxt2
elif nxt1 != -1:
endd = min(nxt1, nxt2)
lines = ep1_to_be_parsed[fin+3:endd]
ep1_parsed.append([char, lines])
start = endd
j = j + 1
#cleaning things up
for line in ep1_parsed:
line[1] = line[1].replace('\n', ' ')
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0] == 'PADME' or line[0] == 'PAMDE' or line[0] == 'AMIDALA':
line[0] = 'PADMÉ'
elif line[0] == 'QU-IG0N' or line[0] == 'GUI-GON':
line[0] = 'QUI-GON'
elif line[0] == 'ANKAIN' or line[0] == 'ANAKNI' or line[0] == 'ANAKN':
line[0] = 'ANAKIN'
elif line[0] == 'PALAPATINE':
line[0] = 'PALPATINE'
#Nice and simple tabs again
ep2_script = ep2_soup.find_all('pre')[0]
ep2_raw = ep2_script.get_text()
ep2_raw = ep2_raw.replace('\n\nCAPTAIN TYPHO\nWe made it. I guess I was wrong,\nthere was no danger at all.','\n\n\t\t\t\tCAPTAIN TYPHO\n\t\t\tWe made it. I guess I was wrong,\n\t\t\tthere was no danger at all.')
ep2_split = ep2_raw.split('\n')
ep2_length = len(ep2_split) # number of lines
speaker_spaces = '\t\t\t\t'
dialogue_spaces = '\t\t\t'
ind = ep2_split.index("One of the FIGHTER PILOTS jumps from the wing of his ship and removes his helmet. He is CAPTAIN TYPHO, SENATOR AMIDALA'S Security Officer. He moves over to a WOMAN PILOT.",) #find where dialogue begins
end = ep2_split.index('EXT. NABOO LAKE RETREAT, LODGE, GARDEN - LATE DAY')
j=-1
ep2_parsed = []
char = []
lines = []
while ind < end:
if ep2_split[ind] == '':
ind = ind + 1
elif ep2_split[ind][0] != '':
if ep2_split[ind][0] != '\t':
ind = ind + 1
elif ep2_split[ind][0] == '\t':
if ep2_split[ind][:4] == speaker_spaces:
if ep2_split[ind][4] == '(':
temp.extend([' ', ep2_split[ind][4:]])
ind = ind + 1
elif ep2_split[ind][4] != '(':
if j >= 0:
lines = ''.join(temp)
ep2_parsed.append([char, lines])
j = j + 1
else:
char = ep2_split[ind][4:]
j = j + 1
char = ep2_split[ind][4:]
temp = []
ind = ind + 1
elif ep2_split[ind][:3] == dialogue_spaces:
temp.extend([' ', ep2_split[ind][3:]])
ind = ind + 1
else:
ind = ind + 1
lines = ''.join(temp) # to make sure we get the last line
ep2_parsed.append([char, lines])
for line in ep2_parsed:
line[1] = line[1][1:]
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0] == 'AMIDALA' or line[0] == 'PADME' or line[0] == 'PAMDE':
line[0] = 'PADMÉ'
elif line[0] == 'MACE' or line[0] == 'WINDU' or line[0] == 'MACE-WINDU':
line[0] = 'MACE WINDU'
elif line[0] == 'DOOKU':
line[0] = 'COUNT DOOKU'
elif line[0] == 'JANGO':
line[0] = 'JANGO FETT'
elif line[0] == 'OBI-WAM' or line[0] == 'OBI-WAN (V.O.)':
line[0] = 'OBI-WAN'
elif line[0] == 'C-3PO':
line[0] = 'THREEPIO'
elif line[0] == 'ELAN':
line[0] = 'ELAN SLEAZEBAGGANO'
elif line[0] == 'BOBA':
line[0] = 'BOBA FETT'
#here it was just finding colons again, a few violations which had to be pre corrected and some post-parsing, loop cleaning
ep3_raw = ep3_raw.replace('MACE WiNDU:','MACE WINDU:' )
ep3_raw = ep3_raw.replace('MACE: ', 'MACE:')
ep3_raw = ep3_raw.replace('MACE:', 'MACE WINDU: ')
ep3_raw = ep3_raw.replace('five other Senators: ', 'five other Senators:\n')
ep3_raw = ep3_raw.replace(' dark cloak: ', ' dark cloak:\n')
ep3_split = ep3_raw.split('\n')
ind = ep3_split.index('ANAKIN smiles as he blasts a TRADE FEDERATION DROID DROP FIGHTER.')
end = ep3_split.index('235 EXT. NABOO-MAIN SQUARE-DAWN')
j=0
ep3_parsed = []
char = []
lines = []
while ind < end:
if ep3_split[ind] == '':
ind = ind + 1
elif ep3_split[ind] != '':
index = ep3_split[ind].find(': ')
if index == -1:
ind = ind + 1
else:
char = ep3_split[ind][:index]
lines = ep3_split[ind][index+2:]
ep3_parsed.append([char, lines])
j = j + 1
ind = ind + 1
for line in ep3_parsed:
if line[0][-1] == ' ':
line[0]= line[0].rstrip()
if line[0][-1] == '\t':
line[0]= line[0][:-1]
if line[0] == 'PADME':
line[0] = 'PADMÉ'
elif line[0] == 'MACE WlNDU':
line[0] = 'MACE WINDU'
elif line[0] == 'DARTH SlDIOUS' or line[0] == 'DABTH SIDIOUS':
line[0] = 'DARTH SIDIOUS'
elif line[0] == 'GlDDEAN DANU' or line[0] == 'GiDDEAN DANU':
line[0] = 'GIDDEAN DANU'
elif line[0] == 'ANAKINN':
line[0] = 'ANAKIN'
elif line[0] == 'QUI -GON':
line[0] = 'QUI-GON'
elif line[0] == 'C-3PO' or line[0] == 'G-3PO':
line[0] = 'THREEPIO'
elif line[0] == 'Kl-ADI-MUNDI':
line[0] = 'KI-ADI-MUNDI'
elif line[0] == 'CAPTAIN ANTILLES':
line[0] = 'WEDGE'
```
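The colon-delimited format of Episodes I and III can also be handled with a single regular expression instead of index arithmetic. A minimal sketch on a toy fragment (made-up lines, not the real script text); in practice the pre-parsing fixes above would still be needed, since the real scripts violate the pattern in places:

```python
import re

# Toy fragment in the "SPEAKER: dialogue" format; the stage direction
# without a colon is skipped automatically.
script = """ANAKIN: You underestimate my power.
OBI-WAN: It's over, Anakin.
A TRADE FEDERATION CRUISER flies past.
PADMÉ: Obi-Wan, there is good in him."""

# An all-caps name at the start of a line, a colon, then the dialogue.
pattern = re.compile(r"^([A-ZÉ][A-ZÉ0-9' \-]*):\s*(.+)$", re.MULTILINE)
parsed = pattern.findall(script)
```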
Now that the parsing is over, we can begin to take the sorted data and explore it. First let's create manageable dataframes out of it.
```
labels = ['Character', 'Dialogue']
df4 = pd.DataFrame.from_records(ep4_parsed, columns=labels)
df5 = pd.DataFrame.from_records(ep5_parsed, columns=labels)
df6 = pd.DataFrame.from_records(ep6_parsed, columns=labels)
df1 = pd.DataFrame.from_records(ep1_parsed, columns=labels)
df2 = pd.DataFrame.from_records(ep2_parsed, columns=labels)
df3 = pd.DataFrame.from_records(ep3_parsed, columns=labels)
print('Episode VI Dialogue:')
df6.head()
```
## PART G
### Extracting the Dialogue for Analysis
We're going to combine the dialogues for the prequels and for the original trilogy in order to find the differences in speakers between them.
```
Characters_IV = [line[0] for line in ep4_parsed]
Dialogues_IV = [line[1] for line in ep4_parsed]
Characters_V = [line[0] for line in ep5_parsed]
Dialogues_V = [line[1] for line in ep5_parsed]
Characters_VI = [line[0] for line in ep6_parsed]
Dialogues_VI = [line[1] for line in ep6_parsed]
Characters_I = [line[0] for line in ep1_parsed]
Dialogues_I = [line[1] for line in ep1_parsed]
Characters_II = [line[0] for line in ep2_parsed]
Dialogues_II = [line[1] for line in ep2_parsed]
Characters_III = [line[0] for line in ep3_parsed]
Dialogues_III = [line[1] for line in ep3_parsed]
Characters_OT = Characters_IV + Characters_V + Characters_VI #to be sorted
Characters_Prequels = Characters_I + Characters_II + Characters_III
Char_OT_Raw = Characters_IV + Characters_V + Characters_VI #stored separately because sorting replaces the unsorted list, and I want a raw unsorted copy for matching dialogue later without having to clear and restore the variable every time in the notebook
Char_Prequels_Raw = Characters_I + Characters_II + Characters_III
Dialogues_OT = Dialogues_IV + Dialogues_V + Dialogues_VI
Dialogues_Prequels = Dialogues_I + Dialogues_II + Dialogues_III
```
Now let's find the top speakers in each trilogy.
```
sorted_OT = Characters_OT # sorts by the quantity of times a speaker appears in the prequels and original trilogy
sorted_OT.sort(key=Counter(sorted_OT).get, reverse=True)
sorted_Prequels = Characters_Prequels
sorted_Prequels.sort(key=Counter(sorted_Prequels).get, reverse=True)
def unique(seq): #this function will collapse the data into a set that keeps the sorted order so that we keep the most talkative speakers first
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
Top_Characters_OT = unique(sorted_OT)
Top_Characters_Prequels = unique(sorted_Prequels)
Top_Characters_OT[:9] #after looking through the list, these 9 characters seemed substantial enough to warrant further examination
Top_Characters_Prequels[:11] #ditto but only 11
```
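The sort-then-deduplicate pattern above can be illustrated on a toy speaker list (a sketch; the names here are made up, not the real script data):

```python
from collections import Counter

# Toy list of speakers: "LUKE" appears 3 times, "HAN" twice, "LEIA" once
speakers = ["HAN", "LUKE", "LEIA", "LUKE", "HAN", "LUKE"]

# Sort by descending frequency (Python's sort is stable, so ties keep their order)
speakers.sort(key=Counter(speakers).get, reverse=True)

# Collapse to unique names while preserving the sorted order
seen = set()
top = [x for x in speakers if not (x in seen or seen.add(x))]
print(top)  # -> ['LUKE', 'HAN', 'LEIA']
```

The `seen.add(x)` trick works because `set.add` returns `None`, so it both records the name and evaluates falsy inside the `or`.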
Now we're also going to find how often the characters for which we have popularity data spoke in the different trilogies.
```
def index_character(list_of_characters, name): #this function allows me to see whether a speaker in the dialogue data matches the ones from the survey
for i, s in enumerate(list_of_characters):
if name == s: # exact match required; `in` would also match substrings
return i
return -1
# exclude R2-D2 because he cannot speak
Character_List = ['HAN', 'LUKE', 'LEIA', 'ANAKIN', 'OBI-WAN', 'PALPATINE', 'DARTH VADER', 'LANDO', 'BOBA FETT', 'THREEPIO', 'JAR JAR', 'PADMÉ', 'YODA']
OT_Dialogue_Count = np.zeros(len(Character_List), dtype=int) # use builtin int: np.int is deprecated in recent NumPy
Prequels_Dialogue_Count = np.zeros(len(Character_List), dtype=int)
for names in Characters_OT: #this for loop adds one every time a speaker from the survey speaks in the original trilogy
if index_character(Character_List, names) != -1:
OT_Dialogue_Count[index_character(Character_List, names)] += 1
for names in Characters_Prequels: #ditto but counts for the prequels
if index_character(Character_List, names) != -1:
Prequels_Dialogue_Count[index_character(Character_List, names)] += 1
print(Character_List) #how often did the different characters speak in the two trilogies
print('Prequels: ',Prequels_Dialogue_Count)
print('Original Trilogy: ',OT_Dialogue_Count)
```
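As a design note, the linear-scan `index_character` lookup works but is O(n) per call, and it is called twice per dialogue line. A dictionary-based tally gives the same counts in a single pass with O(1) lookups (a sketch with made-up speaker lists, not the real data):

```python
# Hypothetical lists standing in for Characters_OT and Character_List
characters = ["HAN", "LUKE", "HAN", "WEDGE", "LEIA"]
character_list = ["HAN", "LUKE", "LEIA"]  # characters we have survey data for

# One-pass tally: dict membership test replaces the linear index scan
counts = {name: 0 for name in character_list}
for name in characters:
    if name in counts:
        counts[name] += 1
print(counts)  # -> {'HAN': 2, 'LUKE': 1, 'LEIA': 1}
```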
This is good information but let's represent it graphically.
```
fig, ax = plt.subplots()
width = 0.2
index = np.arange(len(Character_List))
labels2 = seen_all_JJB.columns[:10].append(seen_all_JJB.columns[11:]) #exclude R2
OT = ax.bar(index - .5*width, OT_Dialogue_Count, width, color='blue')
prequels = ax.bar(index + .5*width, Prequels_Dialogue_Count, width, color='red')
ax.set_ylabel('Instances of dialogue')
ax.set_title('Dialogue Count')
ax.set_xticks(index)
ax.set_xticklabels([(x[12:]) for x in labels2], rotation='vertical'); # labels2 holds the survey column names; the earlier `labels` list is just the DataFrame header
ax.legend((OT[0], prequels[0]), ('Original Trilogy', 'Prequels'));
```
And now again in plotly.
```
trace5 = go.Bar(
name = 'Prequels',
hoverinfo = 'text',
y = [(x[12:]) for x in labels2],
x = -1*Prequels_Dialogue_Count,
orientation = 'h',
text = [str(x) + ' lines of dialogue' for x in Prequels_Dialogue_Count],
opacity=1,
width=.55,
)
trace6 = go.Bar(
name = 'base',
hoverinfo = 'none',
y = [(x[12:]) for x in labels2],
x = Prequels_Dialogue_Count,
orientation = 'h',
opacity=0,
width=.55,
showlegend=False,
)
trace7 = go.Bar(
name = 'Original Trilogy',
hoverinfo = 'text',
y = [(x[12:]) for x in labels2],
x = OT_Dialogue_Count,
orientation = 'h',
text = [str(x) + ' lines of dialogue' for x in OT_Dialogue_Count],
opacity=1,
width=.55,
)
data2 = [trace5, trace6, trace7]
layout = go.Layout(
barmode='stack',
yaxis=dict(autorange='reversed', tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor ='rgb(0,0,0)', linewidth=0 ),
title='<b>Whose Line is it Anyway?</b> <br> Instances of Dialogue per Character',
titlefont =dict(color='rgb(255,255,255)',
size =21,
family = 'open sans'),
plot_bgcolor='rgba(10, 0, 0, 1)',
paper_bgcolor ='rgba(10, 0, 0, 1)',
margin=dict(l=160),
annotations=[ ],
legend=dict(orientation = 'h', font=dict(color='rgb(255,255,255)'), traceorder = 'normal'),
xaxis=dict(tickvals = [-600,-400,-200,0,200,400],tickmode="array", ticktext = [600,400,200,0,200,400], tickfont=dict(family='open sans', size=14,color='rgb(255,255,255)'), linecolor ='rgb(0,0,0)', linewidth=0),
)
fig = go.Figure(data=data2, layout=layout)
iplot(fig)
```
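The negative-x values in `trace5`, combined with the remapped `tickvals`/`ticktext` in the layout, are what produce the back-to-back ("population pyramid") look: left-hand bars are plotted with negated values while the axis labels show their absolute values (the invisible `base` trace simply re-centers the stack). A minimal sketch of that remapping, with toy values:

```python
# Values plotted on the left side are negated; tick labels show absolute values
left_values = [600, 400, 200]
plotted = [-v for v in left_values]

tickvals = [-600, -400, -200, 0, 200, 400]
ticktext = [abs(v) for v in tickvals]

print(plotted)   # -> [-600, -400, -200]
print(ticktext)  # -> [600, 400, 200, 0, 200, 400]
```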
This graph augments the previous plotly figures that looked at popularity between fans and non-fans. We can see that Jar Jar does not even speak that often relative to some of the other characters in the survey and yet he is strongly disliked by fans, even more so than the evil Emperor Palpatine [not sure if that one counts as a spoiler or not].
Although it is unclear whether non-fans like the prequels more because they like the characters or vice versa, it is likely an endogenous effect.
## PART H
### Auxiliary Natural Language Processing
Now, since we already have all this data, I thought it might be interesting to process the scripts with the Natural Language Toolkit (NLTK). I'm not looking for anything in particular so much as I am just curious what might be found!
```
n_top_Prequels = 11 #some initial cleaning to make keeping track of the data easier
n_top_OT = 9
aux_top_chars_OT = Top_Characters_OT[:n_top_OT]
aux_top_chars_Prequels = Top_Characters_Prequels[:n_top_Prequels]
top_char_diags_OT = [" " for x in aux_top_chars_OT]
top_char_diags_Prequels = [" " for x in aux_top_chars_Prequels]
for i in range(len(Char_OT_Raw)): #combine all the strings together for most talkative characters in Original Trilogy
for j in range(n_top_OT):
if Char_OT_Raw[i] == aux_top_chars_OT[j]:
top_char_diags_OT[j] += Dialogues_OT[i] + " "
for i in range(len(Char_Prequels_Raw)): #same but for prequels
for j in range(n_top_Prequels):
if Char_Prequels_Raw[i] == aux_top_chars_Prequels[j]:
top_char_diags_Prequels[j] += Dialogues_Prequels[i] + " "
stop = set(stopwords.words('english')) #will remove stop words from the script
exclude = set(string.punctuation) #removes the punctuation marks
lemma = WordNetLemmatizer() #converts words to their stems i.e. women to woman and lying to lie
def clean(doc): #function that cleans the text corpus
stop_free = " ".join([i for i in doc.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
return normalized
dialogues_OT_clean = [clean(doc) for doc in top_char_diags_OT[:n_top_OT]] # run that function on OT top characters' dialogue
for i in range(n_top_OT): #generate tokens and each speaker's common bigrams
tokens = nltk.word_tokenize(dialogues_OT_clean[i])
text = nltk.Text(tokens)
print(aux_top_chars_OT[i])
text.collocations() # prints the bigrams itself (it returns None, so don't pass it to print)
#get bigrams for the prequels
dialogues_Prequels_clean = [clean(doc) for doc in top_char_diags_Prequels]
for i in range(n_top_Prequels):
tokens = nltk.word_tokenize(dialogues_Prequels_clean[i])
text = nltk.Text(tokens)
print(aux_top_chars_Prequels[i])
text.collocations() # prints the bigrams itself (it returns None, so don't pass it to print)
```
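The `clean` pipeline above (lowercase → drop stop words → strip punctuation → lemmatize) can be sketched without NLTK's corpus downloads by hard-coding a tiny stop-word set and skipping lemmatization — both simplifications are assumptions for illustration, not the real pipeline:

```python
import string

# Tiny stand-in for stopwords.words('english'); the real set is much larger
stop = {"the", "a", "is", "to", "of"}
exclude = set(string.punctuation)

def clean_simple(doc):
    # lowercase the text and drop stop words
    stop_free = " ".join(w for w in doc.lower().split() if w not in stop)
    # strip punctuation characters
    punc_free = "".join(ch for ch in stop_free if ch not in exclude)
    return punc_free  # the real pipeline would lemmatize each word here

print(clean_simple("The Force is strong, to be sure."))  # -> force strong be sure
```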
If you have seen the Star Wars films before, you may agree that these are pretty accurate characterizations of what the characters often say. Many of these are the names of other characters, but others represent idiosyncratic expressions. Even if it is just two words, they capture moments in the movies, like Palpatine tempting Anakin to join the dark side in order to save Padmé, or Obi-Wan telling Luke how his father was seduced by the dark side. This was a fun way to end the assignment, and it just begins to explore the possibilities for finding patterns in the Star Wars scripts.
# Conclusion
In this project I sought to explore the possible effects of bandwagoning within the Star Wars community that has caused many fans to perhaps unfairly deride the prequels. While it is impossible to know for sure the degree to which a bandwagon effect is present, the data do seem to suggest that there is increased polarization towards the trilogies in fans, probably as a result of taking cues from other fans. Certainly the argument for it is strengthened rather than weakened by the analysis contained here. There is an association between considering oneself a fan of Star Wars and disliking both the prequels and Jar Jar Binks. Therefore, I think there is a solid case for individuals' self-identifying as fans leading to their wanting to adopt canon views, but such causality cannot be concluded here.
However, this is just one of many possible explanations. While the data supported everything I had initially thought of the community when I first considered this project, that may itself be another psychological effect: confirmation bias. I tried to keep any conclusions drawn within the scope of this notebook limited and qualified, preferring to present the data, code, and figures instead. I hope the reader draws their own conclusions, even if they are opposed to mine.
<img src="NYUBCpng-01.png">
# Huggingface SageMaker-SDK - GPT2 Fine-tuning example
1. [Introduction](#Introduction)
2. [Development Environment and Permissions](#Development-Environment-and-Permissions)
1. [Installation](#Installation)
2. [Permissions](#Permissions)
3. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket)
3. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job)
1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job)
2. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3)
3. [Text Generation on Local](#Text-Generation-on-Local)
# Introduction
This notebook is a version of HuggingFace's [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) modified to work with Japanese data.
**Apart from the changes needed to handle Japanese data, nothing was modified to run it on SageMaker.**
The data used is the [Japanese wikiHow summarization dataset](https://github.com/Katsumata420/wikihow_japanese).
In this demo, we run a SageMaker training job using Amazon SageMaker's HuggingFace Estimator.
_**NOTE: This demo has been tested on a SageMaker Notebook instance.**_
# Development Environment and Permissions
## Installation
This notebook uses SageMaker's `conda_pytorch_p36` kernel.
**_Note: If you run inference tests in this notebook, you may need to upgrade PyTorch (if the installed version is old)._**
```
!pip install --upgrade pip
!pip install --upgrade torch
!pip install "sagemaker>=2.48.1" "transformers==4.9.2" "datasets[s3]==1.11.0" --upgrade
!pip install sentencepiece
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True # due to some bug of tokenizer config loading
```
## Permissions
If you use SageMaker from a local environment, you need access to an IAM role with the permissions required by SageMaker. See [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for details.
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
# Preparing the data
Run `create_wikihow_dataset.ipynb` beforehand to prepare the Japanese wikiHow summarization dataset.
```
import pandas as pd
from tqdm import tqdm
train = pd.read_json('./wikihow_japanese/data/output/train.jsonl', orient='records', lines=True)
train
dev = pd.read_json('./wikihow_japanese/data/output/dev.jsonl', orient='records', lines=True)
dev
with open('train.txt', 'w') as output_file:
for row in tqdm(train.itertuples(), total=train.shape[0]):
src = row.src
tgt = row.tgt
tokens = tokenizer.tokenize(src)
src = "".join(tokens).replace('▁', '')
text = '<s>' + src + '[SEP]' + tgt + '</s>'
output_file.write(text + '\n')
with open('dev.txt', 'w') as output_file:
for row in tqdm(dev.itertuples(), total=dev.shape[0]):
src = row.src
tgt = row.tgt
tokens = tokenizer.tokenize(src)
src = "".join(tokens).replace('▁', '')
text = '<s>' + src + '[SEP]' + tgt + '</s>'
output_file.write(text + '\n')
```
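Each source/target pair is written as a single training line of the form `<s>src[SEP]tgt</s>` (the loop above additionally runs the source through the tokenizer and strips the `▁` sentencepiece markers to remove spaces). A minimal sketch of just the formatting step, with toy strings and no tokenizer:

```python
# Toy record standing in for one row of the wikiHow DataFrame
src = "how to water a cactus"
tgt = "water sparingly"

# One training example per line: <s>source[SEP]target</s>
line = "<s>" + src + "[SEP]" + tgt + "</s>"
print(line)  # -> <s>how to water a cactus[SEP]water sparingly</s>
```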
## Uploading data to `sagemaker_session_bucket`
Upload the data to S3.
```
s3_prefix = 'samples/datasets/wikihow'
input_train = sess.upload_data(
path='train.txt',
key_prefix=f'{s3_prefix}/train'
)
input_validation = sess.upload_data(
path='dev.txt',
key_prefix=f'{s3_prefix}/valid'
)
# S3 upload paths for the data
print(input_train)
print(input_validation)
```
# Fine-tuning & starting Sagemaker Training Job
To create a `HuggingFace` training job, you need a `HuggingFace` Estimator.
The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator you define which fine-tuning script to use as `entry_point`, which `instance_type` to use, which `hyperparameters` to pass, and so on.
```python
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters={
'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
}
)
```
When you create a SageMaker training job, SageMaker starts and manages the EC2 instances required to run the `huggingface` container.
It uploads the fine-tuning script `train.py`, downloads the data from the `sagemaker_session_bucket` into `/opt/ml/input/data` inside the container, and runs the training job.
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```
The `hyperparameters` defined in the `HuggingFace` estimator are passed as named arguments.
SageMaker also exposes useful properties of the training environment through environment variables, including:
* `SM_MODEL_DIR`: a string representing the path where the training job writes model artifacts. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: an integer representing the number of GPUs available on the host.
* `SM_CHANNEL_XXXX`: a string representing the path to the directory containing the input data for the given channel. For example, if you specify two input channels named `train` and `test` in the HuggingFace estimator's `fit` call, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
To run this training job in a local environment, set `instance_type='local'` (or `instance_type='local_gpu'` for GPU; GPU use requires additional setup, see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode)).
**_Note: this does not work inside SageMaker Studio._**
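Inside a training script, these variables are typically read with `os.environ`; a short sketch (the fallback defaults here are illustrative, chosen to match SageMaker's usual container paths):

```python
import os

# Read SageMaker-provided paths, falling back to defaults for local testing
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
train_dir = os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train")
num_gpus = int(os.environ.get("SM_NUM_GPUS", "0"))

print(model_dir, train_dir, num_gpus)
```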
```
# requirements.txt is processed before the training job runs (use it to add libraries to the container)
# The file follows: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/requirements.txt
# The one difference: it pins transformers >= 4.8.0, because the HuggingFace container version lags behind upstream, so we upgrade it
!pygmentize ./scripts/requirements.txt
# Code executed by the training job
# Change: AutoTokenizer -> T5Tokenizer
!pygmentize ./scripts/run_clm.py
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'model_name_or_path':'rinna/japanese-gpt2-medium',
'train_file': '/opt/ml/input/data/train/train.txt',
'validation_file': '/opt/ml/input/data/validation/dev.txt',
'do_train': 'True',
'do_eval': 'True',
'num_train_epochs': 10,
'per_device_train_batch_size': 1,
'per_device_eval_batch_size': 1,
'use_fast_tokenizer': 'False',
'save_steps': 1000,
'save_total_limit': 1,
'output_dir':'/opt/ml/model',
}
```
## Creating an Estimator and start a training job
```
# estimator
huggingface_estimator = HuggingFace(
role=role,
entry_point='run_clm.py',
source_dir='./scripts',
instance_type='ml.p3.8xlarge',
instance_count=1,
volume_size=200,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
hyperparameters=hyperparameters,
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': input_train, 'validation': input_validation})
# Approximate run time with ml.p3.8xlarge and 10 epochs
# Training seconds: 3623
# Billable seconds: 3623
```
## Download fine-tuned model from s3
```
import os
OUTPUT_DIR = './output/'
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
from sagemaker.s3 import S3Downloader
# Download the trained model
S3Downloader.download(
s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
local_path='.', # local path where *.targ.gz is saved
sagemaker_session=sess # sagemaker session used for training the model
)
# Extract into OUTPUT_DIR
!tar -zxvf model.tar.gz -C output
```
## Text Generation on Local
```
import torch
from transformers import AutoModelForCausalLM, T5Tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True # due to some bug of tokenizer config loading
model = AutoModelForCausalLM.from_pretrained('output/')
model.to(device)
model.eval()
def generate_summary(body, num_gen=5):
input_text = '<s>'+body+'[SEP]'
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
out = model.generate(input_ids, do_sample=True, top_p=0.95, top_k=40,
num_return_sequences=num_gen, max_length=1024, bad_words_ids=[[1], [5]])
print('='*5, 'Source', '='*5)
print(body)
print('-'*5, 'Summary', '-'*5)
for sent in tokenizer.batch_decode(out):
sent = sent.split('</s>')[1]
sent = sent.replace('</s>', '')
print(sent)
body = dev.src[0]
generate_summary(body)
body = dev.src[1]
generate_summary(body)
body = dev.src[2]
generate_summary(body)
```
# Maze tutorial
In this tutorial, we tackle the maze problem.
We use this classical game to demonstrate how
- a new scikit-decide domain can be easily created
- to find solvers from scikit-decide hub matching its characteristics
- to apply a scikit-decide solver to a domain
- to create its own rollout function to play a trained solver on a domain
Notes:
- In order to focus on scikit-decide use, we put some code not directly related to the library in a [separate module](./maze_utils.py) (like maze generation and display).
- A similar maze domain is already defined in [scikit-decide hub](https://github.com/airbus/scikit-decide/blob/master/skdecide/hub/domain/maze/maze.py) but we do not use it for the sake of this tutorial.
```
from enum import Enum
from math import sqrt
from time import sleep
from typing import Any, NamedTuple, Optional, Union
from IPython.display import clear_output, display
# import Maze class from utility file for maze generation and display
from maze_utils import Maze
from stable_baselines3 import PPO
from skdecide import DeterministicPlanningDomain, Solver, Space, Value
from skdecide.builders.domain import Renderable, UnrestrictedActions
from skdecide.hub.solver.astar import Astar
from skdecide.hub.solver.stable_baselines import StableBaseline
from skdecide.hub.space.gym import EnumSpace, ListSpace, MultiDiscreteSpace
from skdecide.utils import match_solvers
# choose standard matplolib inline backend to render plots
%matplotlib inline
```
## About the maze problem
The maze problem consists in making an agent find the goal in a maze by moving up, down, left, or right without going through walls.
We show you such a maze by using the Maze class defined in the [maze module](./maze_utils.py). Here the agent starts at the top-left corner and the goal is at the bottom-right corner of the maze. The following colour convention is used:
- dark purple: walls
- yellow: empty cells
- light green: goal
- blue: current position
```
# size of maze
width = 25
height = 19
# generate the maze
maze = Maze.generate_random_maze(width=width, height=height)
# starting position
entrance = 1, 1
# goal position
goal = height - 2, width - 2
# render the maze
ax, image = maze.render(current_position=entrance, goal=goal)
display(image.figure)
```
## MazeDomain definition
In this section, we will wrap the Maze utility class so that it will be recognized as a scikit-decide domain. Several steps are needed.
### States and actions
We begin by defining the state space (agent positions) and action space (agent movements).
```
class State(NamedTuple):
x: int
y: int
class Action(Enum):
up = 0
down = 1
left = 2
right = 3
```
### Domain type
Then we define the domain type from a base template (`DeterministicPlanningDomain`) with optional refinements (`UnrestrictedActions` and `Renderable`). This corresponds to the following characteristics:
- `DeterministicPlanningDomain`:
- only one agent
- deterministic starting state
- handle only actions
- actions are sequential
- deterministic transitions
- white box transition model
- goal states are defined
- positive costs (i.e. negative rewards)
- fully observable
- `UnrestrictedActions`: all actions are available at each step
- `Renderable`: can be displayed
We also specify the type of states, observations, events, transition values, ...
This is needed so that solvers know how to work properly with this domain, and this will also help IDE or Jupyter to propose you intelligent code completion.
```
class D(DeterministicPlanningDomain, UnrestrictedActions, Renderable):
T_state = State # Type of states
T_observation = State # Type of observations
T_event = Action # Type of events
T_value = float # Type of transition values (rewards or costs)
T_predicate = bool # Type of logical checks
T_info = None # Type of additional information in environment outcome
T_agent = Union # Inherited from SingleAgent
```
### Actual domain class
We can now implement the maze domain by
- deriving from the above domain type
- filling all non-implemented methods
- adding a constructor to define the maze & start/end positions.
We also define (to help solvers that can make use of it)
- a heuristic for search algorithms
*NB: To know the methods not yet implemented, one can either use an IDE which can find them automatically or the [code generators](https://airbus.github.io/scikit-decide/guide/codegen.html) page in the online documentation, which generates the corresponding boilerplate code.*
```
class MazeDomain(D):
"""Maze scikit-decide domain
Attributes:
start: the starting position
end: the goal to reach
maze: underlying Maze object
"""
def __init__(self, start: State, end: State, maze: Maze):
self.start = start
self.end = end
self.maze = maze
# display
self._image = None # image to update when rendering the maze
self._ax = None # subplot in which the maze is rendered
def _get_next_state(self, memory: D.T_state, action: D.T_event) -> D.T_state:
"""Get the next state given a memory and action.
Move agent according to action (except if bumping into a wall).
"""
next_x, next_y = memory.x, memory.y
if action == Action.up:
next_x -= 1
if action == Action.down:
next_x += 1
if action == Action.left:
next_y -= 1
if action == Action.right:
next_y += 1
return (
State(next_x, next_y)
if self.maze.is_an_empty_cell(next_x, next_y)
else memory
)
def _get_transition_value(
self,
memory: D.T_state,
action: D.T_event,
next_state: Optional[D.T_state] = None,
) -> Value[D.T_value]:
"""Get the value (reward or cost) of a transition.
Set cost to 1 when moving (energy cost)
and to 2 when bumping into a wall (damage cost).
"""
return Value(cost=1 if next_state != memory else 2)
def _get_initial_state_(self) -> D.T_state:
"""Get the initial state.
Set the start position as initial state.
"""
return self.start
def _get_goals_(self) -> Space[D.T_observation]:
"""Get the domain goals space (finite or infinite set).
Set the end position as goal.
"""
return ListSpace([self.end])
def _is_terminal(self, state: State) -> D.T_predicate:
"""Indicate whether a state is terminal.
Stop an episode only when goal reached.
"""
return self._is_goal(state)
def _get_action_space_(self) -> Space[D.T_event]:
"""Define action space."""
return EnumSpace(Action)
def _get_observation_space_(self) -> Space[D.T_observation]:
"""Define observation space."""
return MultiDiscreteSpace([self.maze.height, self.maze.width])
def _render_from(self, memory: State, **kwargs: Any) -> Any:
"""Render visually the maze.
Returns:
matplotlib figure
"""
# store used matplotlib subplot and image to only update them afterwards
self._ax, self._image = self.maze.render(
current_position=memory,
goal=self.end,
ax=self._ax,
image=self._image,
)
return self._image.figure
def heuristic(self, s: D.T_state) -> Value[D.T_value]:
"""Heuristic to be used by search algorithms.
Here Euclidean distance to goal.
"""
return Value(cost=sqrt((self.end.x - s.x) ** 2 + (self.end.y - s.y) ** 2))
```
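The Euclidean heuristic defined above is one choice; on a 4-connected grid where each move costs at least 1 (and bumping into a wall does not change position), the Manhattan distance is also admissible — it never overestimates the remaining cost — and is tighter. A standalone sketch comparing the two, with illustrative coordinates:

```python
from math import sqrt

def euclidean(x1, y1, x2, y2):
    # straight-line distance: admissible but loose on a grid
    return sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def manhattan(x1, y1, x2, y2):
    # |dx| + |dy|: at least as large as Euclidean, still never overestimates
    # the true cost for 4-connected unit-cost moves
    return abs(x2 - x1) + abs(y2 - y1)

print(euclidean(1, 1, 4, 5))  # -> 5.0
print(manhattan(1, 1, 4, 5))  # -> 7
```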
### Domain factory
To use scikit-decide solvers on the maze problem, we will need a domain factory recreating the domain at will.
Indeed the method `solve_with()` used [later](#Training-solver-on-the-domain) needs such a domain factory so that parallel solvers can create identical domains on separate processes.
(Even though we do not use parallel solvers in this particular notebook.)
Here is such a domain factory reusing the maze created in the [first section](#About-the-maze-problem). We render again the maze using the `render` method of the wrapping domain.
```
# define start and end state from tuples defined above
start = State(*entrance)
end = State(*goal)
# domain factory
domain_factory = lambda: MazeDomain(maze=maze, start=start, end=end)
# instantiate the domain
domain = domain_factory()
# init the start position
domain.reset()
# display the corresponding maze
display(domain.render())
```
## Solvers
### Finding suitable solvers
The library hub includes a lot of solvers. We can use `match_solvers` function to show available solvers that fit the characteristics of the defined domain, according to the mixin classes used to define the [domain type](#domain-type).
```
match_solvers(domain=domain)
```
In the following, we will restrict ourselves to 2 solvers:
- `StableBaseline`, quite generic, allowing us to use reinforcement learning (RL) algorithms by wrapping [stable_baselines3](https://github.com/DLR-RM/stable-baselines3) solvers
- `Astar` (A*), more specific, coming from path planning.
### PPO solver
We first try a solver coming from the Reinforcement Learning community that makes use of OpenAI [stable_baselines3](https://github.com/DLR-RM/stable-baselines3), giving access to a lot of RL algorithms.
Here we choose the [Proximal Policy Optimization (PPO)](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) solver. It directly optimizes the weights of the policy network using stochastic gradient ascent. See more details in stable baselines [documentation](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) and [original paper](https://arxiv.org/abs/1707.06347).
#### Solver instantiation
```
solver = StableBaseline(
PPO, "MlpPolicy", learn_config={"total_timesteps": 10000}, verbose=True
)
```
#### Training solver on the domain
The solver will try to find an appropriate policy to solve the maze.
```
MazeDomain.solve_with(solver, domain_factory)
```
The chosen syntax allows scikit-decide's core *autocast* mechanism to be applied to the solver, so that generic solvers can be used to solve more specific domains. For instance, a solver that normally applies to multi-agent domains can also apply to a single-agent domain thanks to this *autocast* mechanism.
#### Rolling out the solution (found by PPO)
We can use the trained solver to roll out an episode to see if this is actually solving the maze.
For educative purpose, we define here our own rollout (which will probably be needed if you want to actually use the solver in a real case). If you want to take a look at the (more complex) one already implemented in the library, see the [utils.py](https://github.com/airbus/scikit-decide/blob/master/skdecide/utils.py) module.
```
def rollout(
domain: MazeDomain,
solver: Solver,
max_steps: int,
pause_between_steps: Optional[float] = 0.01,
):
"""Roll out one episode in a domain according to the policy of a trained solver.
Args:
domain: the maze domain to solve
solver: a trained solver
max_steps: maximum number of steps allowed to reach the goal
pause_between_steps: time (s) paused between agent movements.
No pause if None.
"""
# Initialize episode
solver.reset()
observation = domain.reset()
# Initialize image
figure = domain.render(observation)
display(figure)
# loop until max_steps or goal is reached
for i_step in range(1, max_steps + 1):
if pause_between_steps is not None:
sleep(pause_between_steps)
# choose action according to solver
action = solver.sample_action(observation)
# step the domain with the chosen action
outcome = domain.step(action)
observation = outcome.observation
# update image
figure = domain.render(observation)
clear_output(wait=True)
display(figure)
# final state reached?
if domain.is_terminal(observation):
break
# goal reached?
is_goal_reached = domain.is_goal(observation)
if is_goal_reached:
print(f"Goal reached in {i_step} steps!")
else:
print(f"Goal not reached after {i_step} steps!")
return is_goal_reached, i_step
```
We set a maximum number of steps to reach the goal according to maze size in order to decide if the proposed solution is working or not.
```
max_steps = maze.width * maze.height
print(f"Rolling out a solution with max_steps={max_steps}")
rollout(domain=domain, solver=solver, max_steps=max_steps, pause_between_steps=None)
```
As you can see, the goal is not reached at the end of the episode. Although PPO is a generic algorithm that applies to a lot of problems, it seems unable to solve this maze. This is actually due to the fact that the reward is sparse (you get rewarded only when you reach the goal), and it is nearly impossible for this kind of RL algorithm to reach the goal just by chance without shaping the reward.
#### Cleaning up the solver
Some solvers need proper cleaning before being deleted.
```
solver._cleanup()
```
Note that this is automatically done if you use the solver within a `with` statement. The syntax would look something like:
```python
with solver_factory() as solver:
MyDomain.solve_with(solver, domain_factory)
rollout(domain=domain, solver=solver)
```
### A* solver
We now use [A*](https://en.wikipedia.org/wiki/A*_search_algorithm) well known to be suited to this kind of problem because it exploits the knowledge of the goal and of heuristic metrics to reach the goal (e.g. euclidean or Manhattan distance).
A* (pronounced "A-star") is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency.
One major practical drawback is its *O(b^d)* space complexity, as it stores all generated nodes in memory.
See more details in the [original paper](https://ieeexplore.ieee.org/document/4082128): P. E. Hart, N. J. Nilsson and B. Raphael, "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," in IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, July 1968.
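To make the algorithm concrete, here is a minimal, self-contained A* sketch on a tiny hard-coded grid — independent of scikit-decide; the grid, unit costs, and Manhattan heuristic are all illustrative choices:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid of 0 (free) / 1 (wall) cells.

    Returns the number of moves on a shortest path, or None if unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            # stay on the grid and avoid walls
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

maze_grid = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
]
print(astar(maze_grid, (0, 0), (0, 3)))  # -> 7
```

The heuristic steers the search toward the goal while the `best_g` map prevents re-expanding nodes through worse paths, which is exactly what makes A* both complete and optimal with an admissible heuristic.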
#### Solver instantiation
We use the heuristic previously defined in MazeDomain class.
```
solver = Astar(heuristic=lambda d, s: d.heuristic(s))
```
#### Training solver on the domain
```
MazeDomain.solve_with(solver, domain_factory)
```
#### Rolling out the solution (found by A*)
We use the same rollout function and maximum number of steps as for the PPO solver.
```
rollout(domain=domain, solver=solver, max_steps=max_steps, pause_between_steps=None)
```
This time, the goal is reached!
The fact that A* (which was designed for path planning problems) can do better than Deep RL here is due to:
- mainly the fact that this algorithm uses more information from the domain to solve it efficiently, namely the fact that all rewards are negative here ("positive cost") + exhaustively given list of next states (which enables to explore a structured graph, instead of randomly looking for a sparse reward)
- the possible use of an admissible heuristic (distance to goal), which speeds up even more solving (while keeping optimality guarantee)
#### Cleaning up the solver
```
solver._cleanup()
```
## Conclusion
We saw how to define from scratch a scikit-decide domain by specifying its characteristics at the finer level possible, and how to find the existing solvers matching those characteristics.
We also managed to apply a quite classical solver from the RL community (PPO) as well as a more specific solver (A*) for the maze problem. Some important lessons:
- Even though PPO is for many the go-to method for decision making, it was not able to solve the "simple" maze problem;
- More precisely, PPO seems not well-fitted to structured domains with sparse rewards (e.g. goal state to reach);
- Solvers that take more advantage of all characteristics available are generally more suited, as A* demonstrated.
That is why it is important to define the domain with the finer granularity possible and also to use the solvers that can exploit at most the known characteristics of the domain.
```
# https://stackoverflow.com/questions/63714679/plotting-gannt-chart-using-timestamps
# https://plotly.com/python-api-reference/generated/plotly.express.timeline.html
# https://towardsdatascience.com/working-with-datetime-in-pandas-dataframe-663f7af6c587
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import plotly.express as px
report = '~/qs/time/data/report-1-1-2020.csv'
df = pd.read_csv(report)
df
#dictionary = csv.DictReader(open(report))
#for row in dictionary:
# print(row)
start = df['From']
end = df['To']
activity = df['Activity type']
df['Day'] = start.str[0:10]
day = df['Day']
#day = pd.to_datetime(df.From)
day
df['From'] = pd.to_datetime(df.From)
start = df['From']
start
df['To'] = pd.to_datetime(df.To)
end = df['To']
end
activity
new_df = pd.DataFrame([
dict(Start='2019-12-31 23:07:13', End='2020-01-01 3:56:37', Day="1/1/2020", Activity="Sleeping"),
dict(Start='2020-01-01 3:56:43', End='2020-01-01 4:02:15', Day="1/1/2020", Activity="Toilet"),
dict(Start='2020-01-01 4:03:06', End='2020-01-01 6:45:39', Day="1/1/2020", Activity="Internet"),
dict(Start='2020-01-01 6:56:53', End='2020-01-01 7:21:20', Day="1/1/2020", Activity="Grooming"),
dict(Start='2020-01-01 7:27:13', End='2020-01-01 8:48:33', Day="1/1/2020", Activity="Cleaning"),
dict(Start='2020-01-01 8:52:21', End='2020-01-01 10:04:12', Day="1/1/2020", Activity="Internet"),
dict(Start='2020-01-01 10:45:36', End='2020-01-01 12:03:52', Day="1/1/2020", Activity="Writing"),
dict(Start='2020-01-01 20:38:54', End='2020-01-02 7:07:04', Day="1/1/2020", Activity="Sleeping"),
])
fig = px.timeline(new_df, x_start='Start', x_end='End', y='Day', color='Activity'
)
# you can manually set the range as well
fig.update_layout(xaxis=dict(
title='Timestamp',
tickformat = '%H:%M:%S',
range=['2020-01-01 00:00:00','2020-01-01 23:59:59']
))
fig.show()
#https://stackoverflow.com/questions/51505291/timeline-bar-graph-using-python-and-matplotlib
#https://pythontic.com/visualization/charts/brokenbarchart_horizontal
#https://matplotlib.org/gallery/lines_bars_and_markers/broken_barh.html
fig, ax = plt.subplots()
# Each day is one row of (start_minute, duration_in_minutes) bars
day_bars = [
    [(0, 600), (1200, 240)],   # 1/1
    [(30, 420), (900, 300)],   # 1/2
    [(60, 360), (1000, 200)],  # 1/3
]
facecolors = ['tab:blue', 'tab:green', 'tab:orange']
yticks = [10, 20, 30]
for bars, y, color in zip(day_bars, yticks, facecolors):
    # broken_barh takes a list of (xmin, xwidth) pairs and ONE (ymin, yheight) band
    ax.broken_barh(bars, (y - 4, 8), facecolors=color)
ax.set_ylim(0, 40)
ax.set_xlim(0, 1440)
ax.set_xlabel('Minute of Day')
ax.set_yticks(yticks)
ax.set_yticklabels(['1/1', '1/2', '1/3'])
ax.grid(True)
"""ax.annotate('race interrupted', (65, 15),
xytext=(0.8, 0.9), textcoords='axes fraction',
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
horizontalalignment='right', verticalalignment='top')
"""
plt.gca().invert_yaxis()
plt.title('Activities Chart')
plt.show()
```
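To drive `broken_barh` from the report's actual timestamps rather than hand-typed minutes, each `From`/`To` pair must be converted into a `(start_minute, duration)` tuple. A small sketch of that conversion (the helper name `to_bar` is ours; timestamps follow the report's format above):

```python
from datetime import datetime

def to_bar(start_str, end_str, fmt='%Y-%m-%d %H:%M:%S'):
    """Convert a From/To timestamp pair into (minute_of_day, duration_minutes)."""
    start = datetime.strptime(start_str.strip(), fmt)
    end = datetime.strptime(end_str.strip(), fmt)
    start_minute = start.hour * 60 + start.minute
    duration = (end - start).total_seconds() / 60
    return (start_minute, duration)

print(to_bar('2020-01-01 03:56:43', '2020-01-01 04:02:15'))
```

Mapping the resulting tuples per day gives exactly the `xranges` lists that `broken_barh` expects.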
# Analyze Facebook Data Using IBM Watson and IBM Watson Studio
This is a three-part notebook meant to show how anyone can enrich and analyze a combined dataset of unstructured and structured information with IBM Watson and IBM Watson Studio. For this example we are using a standard Facebook Analytics export which features texts from posts, articles and thumbnails, along with standard performance metrics such as likes, shares, and impressions.
**Part I** will use the Natural Language Understanding and (optionally) Visual Recognition services from IBM Watson to enrich the Facebook posts, thumbnails, and articles by pulling out `Sentiment`, `Emotion`, `Entities`, `Keywords`, and `Images`. The end result of Part I will be additional features and metrics we can visualize in Part III.
**Part II** will set up multiple pandas DataFrames containing the values and metrics needed to find insights in the Part III tests and experiments.
**Part III** will use charts to visualize the features that we discovered during enrichment and show how they correlate with customer impressions.
#### You should only need to change data in the Setup portion of this notebook. All places where you see <span style="color: red"> User Input </span> is where you should be adding inputs.
### Table of Contents
### [**Part I - Enrich**](#part1)<br>
1. [Setup](#setup)<br>
1.1 [Install Watson Developer Cloud and BeautifulSoup Packages](#setup1)<br>
1.2 [Install PixieDust](#pixie)<br>
1.3 [Restart Kernel](#restart)<br>
1.4 [Import Packages and Libraries](#setup2)<br>
1.5 [Add Service Credentials From IBM Cloud for Watson Services](#setup3)<br>
2. [Load Data](#load)<br>
2.1 [Load Data From Cloud Object Storage as a pandas DataFrame](#load1)<br>
2.2 [Set Variables](#load2)<br>
3. [Prepare Data](#prepare)<br>
3.1 [Data Cleansing with Python](#prepare1)<br>
3.2 [Beautiful Soup to Extract Thumbnails and Extended Links](#prepare2)<br>
4. [Enrich Data](#enrich)<br>
4.1 [NLU for Post Text](#nlupost)<br>
4.2 [NLU for Thumbnail Text](#nlutn)<br>
4.3 [NLU for Article Text](#nlulink)<br>
4.4 [Visual Recognition](#visual)<br>
5. [Write Data](#write)<br>
5.1 [Convert DataFrame to new CSV](#write1)<br>
5.2 [Write Data to Cloud Object Storage](#write2)<br>
### [**Part II - Data Preparation**](#part2)<br>
1. [Prepare Data](#prepare)<br>
1.1 [Create Multiple DataFrames for Visualizations](#visualizations)<br>
1.2 [Create a Consolidated Sentiment and Emotion DataFrame](#tone)<br>
1.3 [Create a Consolidated Keyword DataFrame](#keyword)<br>
1.4 [Create a Consolidated Entity DataFrame](#entity)<br>
### [**Part III - Analyze**](#part3)<br>
1. [Setup](#2setup)<br>
1.1 [Assign Variables](#2setup2)<br>
2. [Visualize Data](#2visual)<br>
2.1 [Run PixieDust Visualization Library with Display() API](#2visual1)
# <a id="part1"></a>Part I - Enrich
## <a id="setup"></a>1. Setup
To prepare your environment, you need to install some packages and enter credentials for the Watson services.
### <a id="setup1"></a> 1.1 Install Latest Watson Developer Cloud and Beautiful Soup Packages
You need to install these packages:
- [Watson APIs Python SDK](https://github.com/watson-developer-cloud/python-sdk): a client library for Watson services.
- <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" target="_blank" rel="noopener noreferrer">Beautiful Soup</a>: a library to parse data from HTML for enriching the Facebook data.
- <a href="https://ibm-cds-labs.github.io/pixiedust/" target="_blank" rel="noopener noreferrer">PixieDust</a>: a library to visualize the data.
Install the Watson Python SDK package:
```
!pip -q install --user --no-warn-script-location ibm-watson==4.3.0
```
Install the Beautiful Soup package:
```
!pip -q install --user beautifulsoup4==4.8.2
```
<a id="pixie"></a>
### 1.2 Install PixieDust Library
```
!pip -q install --user --no-warn-script-location --upgrade pixiedust==1.1.14
```
<a id="restart"></a>
### 1.3 Restart Kernel
> Required after installs/upgrades only.
If any libraries were just installed or upgraded, <span style="color: red">restart the kernel</span> before continuing. After this has been done once, you might want to comment out the `!pip install` lines above for cleaner output and a faster "Run All".
<a id="setup2"></a>
### 1.4 Import Packages and Libraries
> Tip: To check if you have a package installed, open a new cell and write: `help(<package-name>)`.
```
import json
import sys
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions, EmotionOptions, SentimentOptions
import operator
from functools import reduce
from io import StringIO
import numpy as np
from bs4 import BeautifulSoup as bs
from operator import itemgetter
from os.path import join, dirname
import pandas as pd
import requests
# Suppress some pandas warnings
pd.options.mode.chained_assignment = None # default='warn'
# Suppress SSL warnings
requests.packages.urllib3.disable_warnings()
```
<a id='setup3'></a>
### 1.5 Add Service Credentials From IBM Cloud for Watson Services
Edit the following cell to provide your credentials for Watson and Natural Language Understanding and Visual Recognition.
You must create a Watson Natural Language Understanding service and, optionally, a Watson Visual Recognition service on [IBM Cloud](https://cloud.ibm.com/).
1. Create a service for [Natural Language Understanding (NLU)](https://cloud.ibm.com/catalog/services/natural-language-understanding).
1. Create a service for [Visual Recognition](https://cloud.ibm.com/catalog/services/visual-recognition).
1. Insert API keys and URLs in the following cell.
1. Run the cell.
### <span style="color: red"> _User Input_</span>
```
# @hidden_cell
# Watson Natural Language Understanding (NLU)
NATURAL_LANGUAGE_UNDERSTANDING_API_KEY = ''
NATURAL_LANGUAGE_UNDERSTANDING_URL = ''
# Watson Visual Recognition (optional)
VISUAL_RECOGNITION_API_KEY = ''
VISUAL_RECOGNITION_URL = ''
# Create the Watson clients
nlu_auth = IAMAuthenticator(NATURAL_LANGUAGE_UNDERSTANDING_API_KEY)
nlu = NaturalLanguageUnderstandingV1(version='2020-03-09',
authenticator=nlu_auth)
nlu.set_service_url(NATURAL_LANGUAGE_UNDERSTANDING_URL)
visual_recognition = False # Making visrec optional.
if VISUAL_RECOGNITION_API_KEY and VISUAL_RECOGNITION_URL:
vr_auth = IAMAuthenticator(VISUAL_RECOGNITION_API_KEY)
visual_recognition = VisualRecognitionV3(version='2019-03-09',
authenticator=vr_auth)
visual_recognition.set_service_url(VISUAL_RECOGNITION_URL)
else:
print("Skipping Visual Recognition")
```
## <a id='load'></a>2. Load Data
The data you'll be analyzing is a sample of a standard export of the Facebook Insights Post information from the <a href="https://www.facebook.com/ibmwatson/" target="_blank" rel="noopener noreferrer">IBM Watson Facebook page</a>. Engagement metrics such as clicks, impressions, and so on, are altered and do not reflect actual post performance. The data is on the Watson Studio community page.
### <a id="load1"></a>2.1 Load the Data as a pandas DataFrame
To get the data and load it into a pandas DataFrame:
1. Load the file by clicking the **Find and Add Data** icon and then dragging and dropping the file onto the pane or browsing for the file. The data is stored in the object storage container that is associated with your project.
1. Click in the next cell and then choose **Insert to code > pandas DataFrame** from below the file name and then run the cell. Change the inserted variable name to `df_data_1`
### <span style="color: red"> _User Input_</span>
```
# **Insert to code > pandas DataFrame**
```
### <a id='load2'></a>2.2 Set variables
You need to set these variables:
- The name of the DataFrame
- Your credentials for the source file
- A file name for the enriched DataFrame
Define a variable, `df`, for the DataFrame that you just created. If necessary, change the original DataFrame name to match the one you created.
```
# Make sure this uses the variable above. The number will vary in the inserted code.
try:
df = df_data_1
except NameError as e:
print('Error: Setup is incorrect or incomplete.\n')
print('Follow the instructions to insert the pandas DataFrame above, and edit to')
print('make the generated df_data_# variable match the variable used here.')
raise
```
**Select the cell below and place your cursor on an empty line below the comment.**
Insert the credentials for the file you want to enrich: click the **Find and Add Data** (`10/01`) icon in the upper right, then click `Insert to code` under the file and choose `Insert Credentials`.
**Change the inserted variable name to `credentials_1`**
### <span style="color: red"> _User Input_</span>
```
# insert credentials for file - Change to credentials_1
# @hidden_cell
# The following code contains the credentials for a file in your IBM Cloud Object Storage.
# You might want to remove those credentials before you share your notebook.
# Make sure this uses the variable above. The number will vary in the inserted code.
try:
credentials = credentials_1
except NameError as e:
print('Error: Setup is incorrect or incomplete.\n')
print('Follow the instructions to insert the file credentials above, and edit to')
print('make the generated credentials_# variable match the variable used here.')
raise
```
<a id='prepare'></a>
## 3. Prepare Data
You'll prepare the data by cleansing it and extracting the URLs. Many of the posts contain both text and a URL. The first task is to separate URLs from the text so that they can be analyzed separately. Then you need to get thumbnails for the photos and links, and convert any shortened URLs to full URLs.
<a id='prepare1'></a>
### 3.1 Data Cleansing with Python
To cleanse the data, you'll rename a column, remove noticeable noise, and pull the URLs out into a new column to run through NLU.
Change the name of the `Post Message` column to `Text`:
```
df.rename(columns={'Post Message': 'Text'}, inplace=True)
# Drop the rows that have no value for the text.
df.dropna(subset=['Text'], inplace=True)
```
Use the `str.partition` function to remove strings that contain "http" and "www" from the `Text` column and save them in new DataFrames, then add all web addresses to a new `Link` column in the original DataFrame. This process captures all web addresses: https, http, and www.
```
df_http = df["Text"].str.partition("http")
df_www = df["Text"].str.partition("www")
# Combine delimiters with actual links
df_http["Link"] = df_http[1].map(str) + df_http[2]
df_www["Link1"] = df_www[1].map(str) + df_www[2]
# Include only Link columns
df_http.drop(df_http.columns[0:3], axis=1, inplace = True)
df_www.drop(df_www.columns[0:3], axis=1, inplace = True)
# Merge http and www DataFrames
dfmerge = pd.concat([df_http, df_www], axis=1)
# The following steps will allow you to merge data columns from the left to the right
dfmerge = dfmerge.apply(lambda x: x.str.strip()).replace('', np.nan)
# Use fillna to fill any blanks with the Link1 column
dfmerge["Link"].fillna(dfmerge["Link1"], inplace = True)
# Delete Link1 (www column)
dfmerge.drop("Link1", axis=1, inplace = True)
# Combine Link data frame
df = pd.concat([dfmerge,df], axis = 1)
# Make sure text column is a string
df["Text"] = df["Text"].astype("str")
# Strip links from Text column
df['Text'] = df['Text'].apply(lambda x: x.split('http')[0])
df['Text'] = df['Text'].apply(lambda x: x.split('www')[0])
```
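As a quick illustration of what the partition-and-split trick above produces for a single post (the post text is made up):

```python
text = "Watch Watson analyze a debate http://ibm.co/example"

# str.partition splits at the FIRST occurrence, keeping the delimiter
before, sep, after = text.partition("http")
link = sep + after                 # what ends up in the Link column
stripped = text.split("http")[0]   # what remains in the Text column

print(link)      # http://ibm.co/example
print(stripped)  # 'Watch Watson analyze a debate '
```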
### <a id='prepare2'></a> 3.2 Extract thumbnails and extended links
A standard Facebook export does not provide the thumbnail that usually summarizes the link or photo associated with each post. Use the Beautiful Soup library to go into the HTML of the post and extract the thumbnail text:
```
# The Link column may contain NaN for posts without a URL; those are skipped below.
piclinks = []
description = []
for url in df["Link"]:
if pd.isnull(url):
piclinks.append("")
description.append("")
continue
try:
# Skip certificate check with verify=False.
# Don't do this if your urls are not secure.
page3 = requests.get(url, verify=False)
if page3.status_code != requests.codes.ok:
piclinks.append("")
description.append("")
continue
except Exception as e:
print("Skipping url %s: %s" % (url, e))
piclinks.append("")
description.append("")
continue
soup3 = bs(page3.text,"lxml")
pic = soup3.find('meta', property ="og:image")
if pic:
piclinks.append(pic["content"])
else:
piclinks.append("")
content = None
desc = soup3.find(attrs={'name':'Description'})
if desc:
content = desc['content']
if not content or content == 'null':
# Try again with lowercase description
desc = soup3.find(attrs={'name':'description'})
if desc:
content = desc['content']
if not content or content == 'null':
description.append("")
else:
description.append(content)
# Save thumbnail descriptions to df in a column titled 'Thumbnails'
df["Thumbnails"] = description
# Save image links to df in a column titled 'Image'
df["Image"] = piclinks
```
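The `og:image` lookup above can be illustrated without any network access. Here is a dependency-free sketch using the standard library's `html.parser` in place of Beautiful Soup (the HTML snippet and class name are invented for the example):

```python
from html.parser import HTMLParser

class OGImageParser(HTMLParser):
    """Collect the content of the first <meta property="og:image"> tag."""
    def __init__(self):
        super().__init__()
        self.og_image = None

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == 'meta' and d.get('property') == 'og:image' and self.og_image is None:
            self.og_image = d.get('content')

html = '''<html><head>
<meta property="og:image" content="https://example.com/thumb.png">
<meta name="description" content="An example article.">
</head><body></body></html>'''

parser = OGImageParser()
parser.feed(html)
print(parser.og_image)  # https://example.com/thumb.png
```

Beautiful Soup's `soup.find('meta', property="og:image")` does the same lookup with less code, which is why the notebook uses it.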
Convert shortened links to full links using the `requests` module. This is only necessary if the Facebook page uses different links than the articles themselves. For this example, we are using IBM Watson's Facebook export, which uses an IBM short link.
```
shortlink = df["Link"]
extendedlink = []
for link in shortlink:
if isinstance(link, float): # Float is not a URL, probably NaN.
extendedlink.append('')
else:
try:
extended_link = requests.Session().head(link, allow_redirects=True).url
extendedlink.append(extended_link)
except Exception as e:
print("Skipping link %s: %s" % (link, e))
extendedlink.append('')
df["Extended Links"] = extendedlink
```
<a id='enrich'></a>
## 4. Enrich Data
<a id='nlupost'></a>
### 4.1 NLU for the Post Text
The following script is an example of how to use Natural Language Understanding to iterate through each post and extract enrichment features for future analysis.
For this example, we are looking at the `Text` column in our DataFrame, which contains the text of each post. NLU can also iterate through a column of URLs, or other freeform text. There's a list within a list for the Keywords and Entities features to allow gathering multiple entities and keywords from each piece of text.
Each extracted feature is appended to the DataFrame in a new column that's defined at the end of the script. To run this same script for other columns, set the loop iterable to the column name; if you are using URLs, change the `text=` parameter to `url=`; and update the new column names as necessary.
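The sentiment and emotion bookkeeping in the loop below can be previewed on a canned response shaped like NLU's JSON (the scores here are invented):

```python
import operator

# A hypothetical, pre-fetched NLU result for one post
enriched_json = {
    "sentiment": {"document": {"score": 0.62, "label": "positive"}},
    "emotion": {"document": {"emotion": {
        "joy": 0.71, "sadness": 0.05, "anger": 0.02, "fear": 0.03, "disgust": 0.01}}},
}

# Pick the emotion with the maximum score, exactly as the loop does
emotions = enriched_json["emotion"]["document"]["emotion"]
top_emotion = max(emotions.items(), key=operator.itemgetter(1))[0]

print(top_emotion, emotions[top_emotion])               # joy 0.71
print(enriched_json["sentiment"]["document"]["label"])  # positive
```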
```
# Define the list of features to get enrichment values for entities, keywords, emotion and sentiment
features = Features(entities=EntitiesOptions(), keywords=KeywordsOptions(), emotion=EmotionOptions(), sentiment=SentimentOptions())
overallSentimentScore = []
overallSentimentType = []
highestEmotion = []
highestEmotionScore = []
kywords = []
entities = []
# Go through every response and enrich the text using NLU.
for text in df['Text']:
if not text:
# print("Text is empty")
overallSentimentScore.append('0')
overallSentimentType.append('0')
highestEmotion.append("")
highestEmotionScore.append("")
kywords.append("")
entities.append("")
continue
else:
# We are assuming English to avoid errors when the language cannot be detected.
enriched_json = nlu.analyze(text=text, features=features, language='en').get_result()
# Get the SENTIMENT score and type
if 'sentiment' in enriched_json:
if('score' in enriched_json['sentiment']["document"]):
overallSentimentScore.append(enriched_json["sentiment"]["document"]["score"])
else:
overallSentimentScore.append('0')
if('label' in enriched_json['sentiment']["document"]):
overallSentimentType.append(enriched_json["sentiment"]["document"]["label"])
else:
overallSentimentType.append('0')
else:
overallSentimentScore.append('0')
overallSentimentType.append('0')
# Read the EMOTIONS into a dict and get the key (emotion) with maximum value
if 'emotion' in enriched_json:
me = max(enriched_json["emotion"]["document"]["emotion"].items(), key=operator.itemgetter(1))[0]
highestEmotion.append(me)
highestEmotionScore.append(enriched_json["emotion"]["document"]["emotion"][me])
else:
highestEmotion.append("")
highestEmotionScore.append("")
# Iterate and get KEYWORDS with a confidence of over 70%
if 'keywords' in enriched_json:
tmpkw = []
for kw in enriched_json['keywords']:
if(float(kw["relevance"]) >= 0.7):
tmpkw.append(kw["text"])
# Convert multiple keywords in a list to a string and append the string
kywords.append(', '.join(tmpkw))
else:
kywords.append("")
# Iterate and get Entities with a confidence of over 30%
if 'entities' in enriched_json:
tmpent = []
for ent in enriched_json['entities']:
if(float(ent["relevance"]) >= 0.3):
tmpent.append(ent["type"])
# Convert multiple entities in a list to a string and append the string
entities.append(', '.join(tmpent))
else:
entities.append("")
# Create columns from the list and append to the DataFrame
if highestEmotion:
df['TextHighestEmotion'] = highestEmotion
if highestEmotionScore:
df['TextHighestEmotionScore'] = highestEmotionScore
if overallSentimentType:
df['TextOverallSentimentType'] = overallSentimentType
if overallSentimentScore:
df['TextOverallSentimentScore'] = overallSentimentScore
df['TextKeywords'] = kywords
df['TextEntities'] = entities
```
After we extract all of the Keywords and Entities from each Post, we have columns with multiple Keywords and Entities separated by commas. For our Analysis in Part II, we also wanted the top Keyword and Entity for each Post. Because of this, we added two new columns to capture the `MaxTextKeyword` and `MaxTextEntity`.
```
# Choose first of Keywords and Entities
df["MaxTextKeywords"] = df["TextKeywords"].apply(lambda x: x.split(',')[0])
df["MaxTextEntity"] = df["TextEntities"].apply(lambda x: x.split(',')[0])
```
<a id='nlutn'></a>
### 4.2 NLU for Thumbnail Text
We will repeat the same process for Thumbnails and Article Text.
```
# Define the list of features to get enrichment values for entities, keywords, emotion and sentiment
features = Features(entities=EntitiesOptions(), keywords=KeywordsOptions(), emotion=EmotionOptions(), sentiment=SentimentOptions())
overallSentimentScore = []
overallSentimentType = []
highestEmotion = []
highestEmotionScore = []
kywords = []
entities = []
# Go through every response and enrich the text using NLU.
for text in df['Thumbnails']:
if not text:
overallSentimentScore.append(' ')
overallSentimentType.append(' ')
highestEmotion.append(' ')
highestEmotionScore.append(' ')
kywords.append(' ')
entities.append(' ')
continue
enriched_json = nlu.analyze(text=text, features=features, language='en').get_result()
# Get the SENTIMENT score and type
    if 'sentiment' in enriched_json:
        if('score' in enriched_json['sentiment']["document"]):
            overallSentimentScore.append(enriched_json["sentiment"]["document"]["score"])
        else:
            overallSentimentScore.append("")
        if('label' in enriched_json['sentiment']["document"]):
            overallSentimentType.append(enriched_json["sentiment"]["document"]["label"])
        else:
            overallSentimentType.append("")
    else:
        # Keep the lists aligned with the DataFrame when sentiment is missing
        overallSentimentScore.append("")
        overallSentimentType.append("")
# Read the EMOTIONS into a dict and get the key (emotion) with maximum value
if 'emotion' in enriched_json:
me = max(enriched_json["emotion"]["document"]["emotion"].items(), key=operator.itemgetter(1))[0]
highestEmotion.append(me)
highestEmotionScore.append(enriched_json["emotion"]["document"]["emotion"][me])
else:
highestEmotion.append("")
highestEmotionScore.append("")
# Iterate and get KEYWORDS with a confidence of over 70%
    if 'keywords' in enriched_json:
        tmpkw = []
        for kw in enriched_json['keywords']:
            if(float(kw["relevance"]) >= 0.7):
                tmpkw.append(kw["text"])
        # Convert multiple keywords in a list to a string and append the string
        kywords.append(', '.join(tmpkw))
    else:
        # Keep the list aligned with the DataFrame when keywords are missing
        kywords.append("")
# Iterate and get Entities with a confidence of over 30%
if 'entities' in enriched_json:
tmpent = []
for ent in enriched_json['entities']:
if(float(ent["relevance"]) >= 0.3):
tmpent.append(ent["type"])
# Convert multiple entities in a list to a string and append the string
entities.append(', '.join(tmpent))
else:
entities.append("")
# Create columns from the list and append to the DataFrame
if highestEmotion:
df['ThumbnailHighestEmotion'] = highestEmotion
if highestEmotionScore:
df['ThumbnailHighestEmotionScore'] = highestEmotionScore
if overallSentimentType:
df['ThumbnailOverallSentimentType'] = overallSentimentType
if overallSentimentScore:
df['ThumbnailOverallSentimentScore'] = overallSentimentScore
df['ThumbnailKeywords'] = kywords
df['ThumbnailEntities'] = entities
```
Add two new columns to capture the `MaxThumbnailKeyword` and `MaxThumbnailEntity`:
```
# Set 'Max' to first one from keywords and entities lists
df["MaxThumbnailKeywords"] = df["ThumbnailKeywords"].apply(lambda x: x.split(',')[0])
df["MaxThumbnailEntity"] = df["ThumbnailEntities"].apply(lambda x: x.split(',')[0])
```
<a id='nlulink'></a>
### 4.3 NLU for Article Text
```
# Define the list of features to get enrichment values for entities, keywords, emotion and sentiment
features = Features(entities=EntitiesOptions(), keywords=KeywordsOptions(), emotion=EmotionOptions(), sentiment=SentimentOptions())
overallSentimentScore = []
overallSentimentType = []
highestEmotion = []
highestEmotionScore = []
kywords = []
entities = []
article_text = []
# Go through every response and enrich the article using NLU
for url in df['Extended Links']:
if not url:
overallSentimentScore.append(' ')
overallSentimentType.append(' ')
highestEmotion.append(' ')
highestEmotionScore.append(' ')
kywords.append(' ')
entities.append(' ')
article_text.append(' ')
continue
# Run links through NLU to get entities, keywords, emotion and sentiment.
# Use return_analyzed_text to extract text for Tone Analyzer to use.
try:
enriched_json = nlu.analyze(url=url,
features=features,
language='en',
return_analyzed_text=True).get_result()
article_text.append(enriched_json["analyzed_text"])
except Exception as e:
print("Skipping url %s: %s" % (url, e))
overallSentimentScore.append(' ')
overallSentimentType.append(' ')
highestEmotion.append(' ')
highestEmotionScore.append(' ')
kywords.append(' ')
entities.append(' ')
article_text.append(' ')
continue
    # Get the SENTIMENT score and type
    if 'sentiment' in enriched_json:
        if('score' in enriched_json['sentiment']["document"]):
            overallSentimentScore.append(enriched_json["sentiment"]["document"]["score"])
        else:
            overallSentimentScore.append('')
        if('label' in enriched_json['sentiment']["document"]):
            overallSentimentType.append(enriched_json["sentiment"]["document"]["label"])
        else:
            overallSentimentType.append('')
    else:
        # Keep the lists aligned with the DataFrame when sentiment is missing
        overallSentimentScore.append('')
        overallSentimentType.append('')
# Read the EMOTIONS into a dict and get the key (emotion) with maximum value
if 'emotion' in enriched_json:
me = max(enriched_json["emotion"]["document"]["emotion"].items(), key=operator.itemgetter(1))[0]
highestEmotion.append(me)
highestEmotionScore.append(enriched_json["emotion"]["document"]["emotion"][me])
else:
highestEmotion.append('')
highestEmotionScore.append('')
# Iterate and get KEYWORDS with a confidence of over 70%
if 'keywords' in enriched_json:
tmpkw = []
for kw in enriched_json['keywords']:
if(float(kw["relevance"]) >= 0.7):
tmpkw.append(kw["text"])
# Convert multiple keywords in a list to a string and append the string
kywords.append(', '.join(tmpkw))
else:
kywords.append("")
# Iterate and get Entities with a confidence of over 30%
if 'entities' in enriched_json:
tmpent = []
for ent in enriched_json['entities']:
if(float(ent["relevance"]) >= 0.3):
tmpent.append(ent["type"])
# Convert multiple entities in a list to a string and append the string
entities.append(', '.join(tmpent))
else:
entities.append("")
# Create columns from the list and append to the DataFrame
if highestEmotion:
df['LinkHighestEmotion'] = highestEmotion
if highestEmotionScore:
df['LinkHighestEmotionScore'] = highestEmotionScore
if overallSentimentType:
df['LinkOverallSentimentType'] = overallSentimentType
if overallSentimentScore:
df['LinkOverallSentimentScore'] = overallSentimentScore
df['LinkKeywords'] = kywords
df['LinkEntities'] = entities
df['Article Text'] = article_text
```
Add two new columns to capture the `MaxLinkKeyword` and `MaxLinkEntity`:
```
# Set 'Max' to first one from keywords and entities lists
df["MaxLinkKeywords"] = df["LinkKeywords"].apply(lambda x: x.split(',')[0])
df["MaxLinkEntity"] = df["LinkEntities"].apply(lambda x: x.split(',')[0])
```
<a id='visual'></a>
### 4.4 Visual Recognition
The cell below uses Visual Recognition to classify the thumbnail images.
> NOTE: When using the **free tier** of Visual Recognition, _classify_ has a limit of 250 images per day.
```
if visual_recognition:
piclinks = df["Image"]
picclass = []
piccolor = []
pictype1 = []
pictype2 = []
pictype3 = []
for pic in piclinks:
if not pic or pic == 'default-img':
picclass.append(' ')
piccolor.append(' ')
pictype1.append(' ')
pictype2.append(' ')
pictype3.append(' ')
continue
classes = []
enriched_json = {}
try:
enriched_json = visual_recognition.classify(url=pic).get_result()
except Exception as e:
print("Skipping url %s: %s" % (pic, e))
if 'error' in enriched_json:
print(enriched_json['error'])
if 'images' in enriched_json and 'classifiers' in enriched_json['images'][0]:
classes = enriched_json['images'][0]["classifiers"][0]["classes"]
color1 = None
class1 = None
type_hierarchy1 = None
for iclass in classes:
# Grab the first color, first class, and first type hierarchy.
# Note: Usually you'd filter by 'score' too.
if not type_hierarchy1 and 'type_hierarchy' in iclass:
type_hierarchy1 = iclass['type_hierarchy']
if not class1:
class1 = iclass['class']
if not color1 and iclass['class'].endswith(' color'):
color1 = iclass['class'][:-len(' color')]
if type_hierarchy1 and class1 and color1:
# We are only using 1 of each per image. When we have all 3, break.
break
picclass.append(class1 or ' ')
piccolor.append(color1 or ' ')
type_split = (type_hierarchy1 or '/ / / ').split('/')
pictype1.append(type_split[1] if len(type_split) > 1 else '-')
pictype2.append(type_split[2] if len(type_split) > 2 else '- ')
pictype3.append(type_split[3] if len(type_split) > 3 else '-')
df["Image Color"] = piccolor
df["Image Class"] = picclass
df["Image Type"] = pictype1
df["Image Subtype"] = pictype2
df["Image Subtype2"] = pictype3
```
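The `type_hierarchy` splitting at the end of the cell above can be seen on a sample value (the hierarchy string is invented but follows the `/level1/level2/level3` shape the service returns):

```python
type_hierarchy = '/animal/mammal/dog'

# Splitting on '/' yields a leading empty string, hence indices 1..3
type_split = type_hierarchy.split('/')   # ['', 'animal', 'mammal', 'dog']
pictype1 = type_split[1] if len(type_split) > 1 else '-'
pictype2 = type_split[2] if len(type_split) > 2 else '-'
pictype3 = type_split[3] if len(type_split) > 3 else '-'

print(pictype1, pictype2, pictype3)  # animal mammal dog
```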
<a id='write'></a>
## 5. Write Data
### <a id='write1'></a>5.1 Convert the DataFrame to a New CSV and Write It to Cloud Object Storage
Save a copy of the enriched DataFrame as a file in Cloud Object Storage. The `upload_file` call reuses the `credentials` variable created in section 2.2, so no user input is required. Simply run the next cell.
```
import ibm_boto3
from ibm_botocore.client import Config

cos = ibm_boto3.client(service_name='s3',
                       ibm_api_key_id=credentials['IBM_API_KEY_ID'],
                       ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
                       ibm_auth_endpoint=credentials['IBM_AUTH_ENDPOINT'],
                       config=Config(signature_version='oauth'),
                       endpoint_url=credentials['ENDPOINT'])
# Build the enriched file name from the original filename.
localfilename = 'enriched_' + credentials['FILE']
# Write a CSV file from the enriched pandas DataFrame.
df.to_csv(localfilename, index=False)
# Use the upload_file method with the credentials to put the file in Object Storage.
cos.upload_file(localfilename, Bucket=credentials['BUCKET'],Key=localfilename)
# If you want to use the enriched local file, you can read it back in.
# This might be handy if you already enriched and just want to re-run
# from this cell and below. Uncomment the following line.
# df = pd.read_csv(localfilename)
```
<a id="part2"></a>
# Part II - Data Preparation
<a id='prepare'></a>
## 1. Prepare Data
<a id='visualizations'></a>
### 1.1 Prepare Multiple DataFrames for Visualizations
Before we can create the separate tables for each Watson feature, we need to organize and reformat the data. First, we need to determine which data points are tied to metrics. Second, we need to make sure each metric is numeric. _(This is necessary for PixieDust in Part III)_
```
# Put the lifetime metrics in a list
metrics = [metric for metric in df.columns.values.tolist() if 'Lifetime' in metric]
```
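The same coercion idiom appears throughout the preparation steps below; as a quick illustration of what `pd.to_numeric(..., errors='coerce')` does (the column values here are made up for illustration):

```python
import pandas as pd

# Hypothetical metric column with mixed strings, as exported data often has.
sample = pd.DataFrame({"Lifetime Post Total Reach": ["1200", "3400", "n/a", ""]})

# errors='coerce' turns anything unparseable into NaN instead of raising.
sample["Lifetime Post Total Reach"] = pd.to_numeric(
    sample["Lifetime Post Total Reach"], errors="coerce"
)

print(sample["Lifetime Post Total Reach"].tolist())  # [1200.0, 3400.0, nan, nan]
```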
<a id='tone'></a>
### 1.2 Create a Consolidated Sentiment and Emotion DataFrame
You'll create a DataFrame for the sentiment and emotion of the post text and a DataFrame for the sentiment and emotion of the article text. Then you'll combine them into one DataFrame.
#### Post Sentiment and Emotion DataFrame
```
# Create a list with only Post sentiment and emotion values
post_tones = ["Text","TextHighestEmotion", "TextHighestEmotionScore", "TextOverallSentimentType", "TextOverallSentimentScore"]
# Add the metrics to the list
post_tones.extend(metrics)
# Create a new DataFrame with metrics and sentiment and emotion
df_post_tones = df[post_tones].copy()
# Determine which tone values are supposed to be numeric and ensure they are numeric.
post_numeric_values = ["TextHighestEmotionScore", "TextOverallSentimentScore"]
for i in post_numeric_values:
    df_post_tones[i] = pd.to_numeric(df_post_tones[i], errors='coerce')
# Make all metrics numeric
for i in metrics:
    df_post_tones[i] = pd.to_numeric(df_post_tones[i], errors='coerce')
# Add a column to identify which part of the post was enriched
df_post_tones["Type"] = "Post"
```
#### Article Sentiment and Emotion DataFrame
```
# Create a list with only Article sentiment and emotion values
article_tones = ["Text", "LinkHighestEmotion", "LinkHighestEmotionScore", "LinkOverallSentimentType", "LinkOverallSentimentScore"]
# Add the metrics to the list
article_tones.extend(metrics)
# Create a new DataFrame with metrics and sentiment and emotion
df_article_tones = df[article_tones].copy()
# Determine which values are supposed to be numeric and ensure they are numeric.
art_numeric_values = ["LinkHighestEmotionScore", "LinkOverallSentimentScore"]
for i in art_numeric_values:
    df_article_tones[i] = pd.to_numeric(df_article_tones[i], errors='coerce')
# Make all metrics numeric
for i in metrics:
    df_article_tones[i] = pd.to_numeric(df_article_tones[i], errors='coerce')
# Add a column to identify which part of the post was enriched
df_article_tones["Type"] = "Article"
```
#### Combine Post and Article DataFrames to Make DataFrame with Sentiment and Emotion
```
# First make the Column Headers the same
df_post_tones.rename(columns={"TextHighestEmotion":"Emotion",
"TextHighestEmotionScore":"Emotion Score",
"TextOverallSentimentType": "Sentiment",
"TextOverallSentimentScore": "Sentiment Score"
},
inplace=True)
df_article_tones.rename(columns={"LinkHighestEmotion":"Emotion",
"LinkHighestEmotionScore":"Emotion Score",
"LinkOverallSentimentType": "Sentiment",
"LinkOverallSentimentScore": "Sentiment Score"
},
inplace=True)
# Combine into one data frame
df_tones = pd.concat([df_post_tones, df_article_tones])
# Only keep the positive, neutral, and negative sentiments. The others are empty or unusable.
df_tones = df_tones[df_tones.Sentiment.isin(['positive', 'neutral', 'negative'])]
```
<a id='keyword'></a>
### 1.3 Create a Consolidated Keyword DataFrame
You'll create DataFrames for the keywords of the article text, the thumbnail text, and the post text. Then you'll combine them into one DataFrame.
#### Article Keyword DataFrame
```
# Create a list with only Article Keywords
article_keywords = ["Text", "MaxLinkKeywords"]
# Add the metrics to the list
article_keywords.extend(metrics)
# Create a new DataFrame with keywords and metrics
df_article_keywords = df[article_keywords].copy()
# Make all metrics numeric
for i in metrics:
    df_article_keywords[i] = pd.to_numeric(df_article_keywords[i], errors='coerce')
# Drop NA values in the keywords column
df_article_keywords['MaxLinkKeywords'].replace(' ', np.nan, inplace=True)
df_article_keywords.dropna(subset=['MaxLinkKeywords'], inplace=True)
# Add a column to identify which part of the post was enriched
df_article_keywords["Type"] = "Article"
```
#### Thumbnail Keyword DataFrame
```
# Create a list with only Thumbnail Keywords
thumbnail_keywords = ["Text", "MaxThumbnailKeywords"]
# Add the metrics to the list
thumbnail_keywords.extend(metrics)
# Create a new DataFrame with keywords and metrics
df_thumbnail_keywords = df[thumbnail_keywords].copy()
# Make all metrics numeric
for i in metrics:
    df_thumbnail_keywords[i] = pd.to_numeric(df_thumbnail_keywords[i], errors='coerce')
# Drop NA values in the keywords column
df_thumbnail_keywords['MaxThumbnailKeywords'].replace(' ', np.nan, inplace=True)
df_thumbnail_keywords.dropna(subset=['MaxThumbnailKeywords'], inplace=True)
# Add a column to identify which part of the post was enriched
df_thumbnail_keywords["Type"] = "Thumbnails"
```
#### Post Keyword DataFrame
```
# Create a list with only Post Keywords
post_keywords = ["Text", "MaxTextKeywords"]
# Add the metrics to the list
post_keywords.extend(metrics)
# Create a new DataFrame with keywords and metrics
df_post_keywords = df[post_keywords].copy()
# Make all metrics numeric
for i in metrics:
    df_post_keywords[i] = pd.to_numeric(df_post_keywords[i], errors='coerce')
# Drop NA values in the keywords column
df_post_keywords['MaxTextKeywords'].replace(' ', np.nan, inplace=True)
df_post_keywords.dropna(subset=['MaxTextKeywords'], inplace=True)
# Add a column to identify which part of the post was enriched
df_post_keywords["Type"] = "Posts"
```
#### Combine Post, Thumbnail, and Article DataFrames to Make One Keywords DataFrame
```
# First make the column headers the same
df_post_keywords.rename(columns={"MaxTextKeywords": "Keywords"}, inplace=True)
df_thumbnail_keywords.rename(columns={"MaxThumbnailKeywords":"Keywords"}, inplace=True)
df_article_keywords.rename(columns={"MaxLinkKeywords":"Keywords"}, inplace=True)
# Combine into one data frame
df_keywords = pd.concat([df_post_keywords, df_thumbnail_keywords, df_article_keywords])
# Discard posts with lower total reach to make charting easier
df_keywords = df_keywords[df_keywords["Lifetime Post Total Reach"] > 20000]
```
<a id='entity'></a>
### 1.4 Create a Consolidated Entity DataFrame
You'll create DataFrames for the entities of the article text, the thumbnail text, and the post text. Then you'll combine them into one DataFrame.
#### Article Entity DataFrame
```
# Create a list with only Article Entities
article_entities = ["Text", "MaxLinkEntity"]
# Add the metrics to the list
article_entities.extend(metrics)
# Create a new DataFrame with entities and metrics
df_article_entities = df[article_entities].copy()
# Make all metrics numeric
for i in metrics:
    df_article_entities[i] = pd.to_numeric(df_article_entities[i], errors='coerce')
# Drop NA values in the entities column
df_article_entities['MaxLinkEntity'] = df_article_entities['MaxLinkEntity'].replace(r'\s+', np.nan, regex=True)
df_article_entities.dropna(subset=['MaxLinkEntity'], inplace=True)
# Add a column to identify which part of the post was enriched
df_article_entities["Type"] = "Article"
```
#### Thumbnail Entity DataFrame
```
# Create a list with only Thumbnail Entities
thumbnail_entities = ["Text", "MaxThumbnailEntity"]
# Add the metrics to the list
thumbnail_entities.extend(metrics)
# Create a new DataFrame with entities and metrics
df_thumbnail_entities = df[thumbnail_entities].copy()
# Make all metrics numeric
for i in metrics:
    df_thumbnail_entities[i] = pd.to_numeric(df_thumbnail_entities[i], errors='coerce')
# Drop NA values in the entities column
df_thumbnail_entities['MaxThumbnailEntity'] = df_thumbnail_entities['MaxThumbnailEntity'].replace(r'\s+', np.nan, regex=True)
df_thumbnail_entities.dropna(subset=['MaxThumbnailEntity'], inplace=True)
# Add a column to identify which part of the post was enriched
df_thumbnail_entities["Type"] = "Thumbnails"
```
#### Post Entity DataFrame
```
# Create a list with only Post Entities
post_entities = ["Text", "MaxTextEntity"]
# Add the metrics to the list
post_entities.extend(metrics)
# Create a new DataFrame with entities and metrics
df_post_entities = df[post_entities].copy()
# Make all metrics numeric
for i in metrics:
    df_post_entities[i] = pd.to_numeric(df_post_entities[i], errors='coerce')
# Drop NA values in the entities column
df_post_entities['MaxTextEntity'] = df_post_entities['MaxTextEntity'].replace(r'\s+', np.nan, regex=True)
df_post_entities.dropna(subset=['MaxTextEntity'], inplace=True)
# Add a column to identify which part of the post was enriched
df_post_entities["Type"] = "Posts"
```
#### Combine Post, Thumbnail, and Article DataFrames to Make One Entity DataFrame
```
# First make the column headers the same
df_post_entities.rename(columns={"MaxTextEntity": "Entities"}, inplace=True)
df_thumbnail_entities.rename(columns={"MaxThumbnailEntity":"Entities"}, inplace=True)
df_article_entities.rename(columns={"MaxLinkEntity":"Entities"}, inplace=True)
# Combine into one data frame
df_entities = pd.concat([df_post_entities, df_thumbnail_entities, df_article_entities])
df_entities["Entities"] = df_entities["Entities"].replace('', np.nan)
df_entities.dropna(subset=["Entities"], inplace=True)
```
<a id='image_dataframe'></a>
### 1.5 Create a Consolidated Image DataFrame
#### Combine Metrics with Type Hierarchy, Class and Color to Make One Image DataFrame
```
if visual_recognition:
    # Create a list with only Visual Recognition columns
    pic_keywords = ['Image Type', 'Image Subtype', 'Image Subtype2', 'Image Class', 'Image Color']
    # Add the metrics to the list
    pic_keywords.extend(metrics)
    # Create a new DataFrame with image data and metrics
    df_pic_keywords = df[pic_keywords].copy()
    # Make all metrics numeric
    for i in metrics:
        df_pic_keywords[i] = pd.to_numeric(df_pic_keywords[i], errors='coerce')
    # Discard posts with lower total reach to make charting easier
    df_pic_keywords = df_pic_keywords[df_pic_keywords["Lifetime Post Total Reach"] > 15000]
if visual_recognition:
    images = df_pic_keywords[df_pic_keywords['Image Type'] != ' ']
```
<a id="part3"></a>
# Part III
<a id='2setup'></a>
## 1. Setup
<a id='2setup2'></a>
### 1.1 Assign Variables
Assign new DataFrames to variables.
```
entities = df_entities
tones = df_tones
keywords = df_keywords
```
<a id='2visual'></a>
## 2. Visualize Data
<a id='2visual1'></a>
### 2.1 Run PixieDust Visualization Library with Display() API
PixieDust lets you visualize your data in just a few clicks using the display() API. You can find more info at https://pixiedust.github.io/pixiedust/displayapi.html.
#### We can use a pie chart to identify how lifetime engagement was broken up by sentiment.
Click on the `Options` button to change the chart. Here are some things to try:
* Add *Type* to make the breakdown show *Post* or *Article*.
* Show *Emotion* instead of *Sentiment* (or both).
* Try a different metric.
```
import pixiedust
display(tones)
```
#### Now let's look at the same statistics as a bar chart.
It is the same line of code. Use the `Edit Metadata` button to see how PixieDust knows to show us a bar chart. If you don't see the button, use the menu and select `View > Cell Toolbar > Edit Metadata`.
A bar chart is better at showing more information. We added `Cluster By: Type` so we already see numbers for posts and articles. Notice what the chart tells you. Most of our articles and posts are `positive`. But what sentiment really engages more users? Click on `Options` and try this:
* Change the aggregation to `AVG`.
What sentiment leads to higher average engagement?
```
display(tones)
```
#### Now let's look at the entities that were detected by Natural Language Understanding.
The following bar chart shows the entities that were detected. This time we are stacking negative feedback and "likes" to get a picture of the kind of feedback the entities were getting. We chose a horizontal, stacked bar chart with descending values for a little variety.
* Try a different renderer and see what you get.
```
display(entities)
```
#### Next we look at the keywords detected by Natural Language Understanding
```
display(keywords)
```
#### Now let's take a look at what Visual Recognition can show us.
See how the images influenced the metrics. We've used visual recognition to identify a class and a type hierarchy for each image. We've also captured the top recognized color for each image. Our sample data doesn't have a significant number of data points, but these three charts demonstrate how you could:
1. Recognize image classes that correlate to higher total reach.
1. Add a type hierarchy for a higher level abstraction or to add grouping/stacking to the class data.
1. Determine if image color correlates to total reach.
Visual recognition makes it surprisingly easy to do all of the above. Of course, you can easily try different metrics as you experiment. If you are not convinced that you should add ultramarine laser pictures to all of your articles, then you might want to do some research with a better data sample.
```
if visual_recognition:
    display(images)
if visual_recognition:
    display(images)
if visual_recognition:
    display(images)
```
<hr>
Copyright © IBM Corp. 2017, 2018. This notebook and its source code are released under the terms of the Apache 2.0 License.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
```
import sys, os
import tensorflow as tf
import numpy as np
import json
from PIL import Image
import glob
import matplotlib.pyplot as plt
# If running on Colab
colab = False
if colab:
    !git clone --depth 1 --branch ver1.1 https://github.com/malheading/Surface_Crack_Segmentation.git
# filepath = '/content/Surface_Crack_Segmentation/crack_100.json'
# else:
# filepath = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/crack_100.json"
# with open(filepath) as json_file:
# json_data = json.load(json_file)
if colab:
    pass
else:
    img_positive_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Positive_jw/"
    img_negative_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Negative_jw/"
    label_positive_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Positive_Segmentation/"
positive_imgs = glob.glob(img_positive_path + "*.*")
negative_imgs = glob.glob(img_negative_path + "*.*")
label_positive_datas = glob.glob(label_positive_path + "*.*")
x_train_positive = []
for i in range(len(positive_imgs)):
    Im = Image.open(positive_imgs[i])
    Im = Im.resize((128,128))
    x_train_positive.append(np.array(Im))
x_train_positive = np.array(x_train_positive,dtype=np.float32)
x_train_negative = []
for i in range(len(negative_imgs)):
    Im = Image.open(negative_imgs[i])
    Im = Im.resize((128,128))
    x_train_negative.append(np.array(Im))
x_train_negative = np.array(x_train_negative, dtype=np.float32)
print("x_train_positive shape :",x_train_positive.shape)
print("x_train_negative shape :",x_train_negative.shape)
# Concatenate the crack and no-crack images to build x_train.
x_train = np.concatenate((x_train_negative,x_train_positive))
x_train = x_train/255.0
print("x_train shape :",x_train.shape)
y_train_positive = []
for i in range(len(label_positive_datas)):
    Im = Image.open(label_positive_datas[i])
    Im = Im.resize((128,128), Image.BOX)
    y_train_positive.append(np.ceil(np.array(Im)))
y_train_positive = np.array(y_train_positive,dtype=np.float32)/255.0
y_train_positive = y_train_positive.reshape(y_train_positive.shape[0],y_train_positive.shape[1],y_train_positive.shape[2],1)
# plt.imshow(y_train_positive[0])
plt.imshow(y_train_positive[0].reshape((128,128)))
# y_train_positive.dtype
# Negative labels have every pixel set to 0
y_train_negative = np.zeros((x_train_negative.shape[0], 128, 128, 1),dtype=np.float32)
# Concatenate the labels (y_train) as well
y_train = np.concatenate((y_train_negative,y_train_positive))
print(y_train.shape)
```
# Creating the dataset
```
# Create the dataset with from_tensor_slices.
BATCH_SIZE = 10
# reshuffle_each_iteration=False keeps the shuffle fixed, so take()/skip() below
# always split off the same train/test examples.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000, reshuffle_each_iteration=False).batch(BATCH_SIZE)
# The training set is 0.85 * 200 / 10 --> 17 batches (170 images);
# the test set is the remaining 3 batches (30 images).
validation_split = 0.85
train_dataset_size = int(y_train.shape[0] * validation_split / BATCH_SIZE)
train_dataset = dataset.take(train_dataset_size)
test_dataset = dataset.skip(train_dataset_size)
```
# Defining the model
```
CHANNEL = 3
OUTPUT_CHANNELS = 3
# Use MobileNetV2 as the base model to build a lighter network.
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, CHANNEL], include_top=False)
base_model.summary()
# Layers to use for feature extraction in the U-Net.
# We will use the activations of these layers.
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
layers = [base_model.get_layer(name).output for name in layer_names]
```
# U-Net has the following structure; parts of it perform precise localization.
```
# U-Net combines a deep feature-extraction (down) path with skip connections.
# Build the feature-extraction model; this is called the 'down_stack'.
down_stack = tf.keras.Model(inputs=base_model.input, outputs=layers)
# Feature extraction is already handled by MobileNet, so trainable = False
down_stack.trainable = False
# Define an upsample function that builds one layer of the up_stack.
def upsample(filters, size, apply_dropout=False):
    initializer = tf.random_normal_initializer(0., 0.02)
    result = tf.keras.Sequential()
    result.add(
        tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding='same',
                                        kernel_initializer=initializer, use_bias=False))
    result.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        result.add(tf.keras.layers.Dropout(0.5))
    result.add(tf.keras.layers.ReLU())
    return result
up_stack = [
upsample(512, 3), # 4x4 -> 8x8
upsample(256, 3), # 8x8 -> 16x16
upsample(128, 3), # 16x16 -> 32x32
upsample(64, 3), # 32x32 -> 64x64
]
def build_model(num_output_channels):
    input_layer = tf.keras.layers.Input(shape=[128, 128, 3])
    x = input_layer
    # Run the down stack
    skips = down_stack(x)
    x = skips[-1]
    skips = reversed(skips[:-1])
    # Upsample and merge in the skip connections
    for up, skip in zip(up_stack, skips):
        x = up(x)
        # Concatenate the incoming skip connection with the upsampled features
        concat = tf.keras.layers.Concatenate()
        x = concat([x, skip])
    # The spatial resolution at this point is 64x64.
    # A final Conv2DTranspose gives output shape (None, 128, 128, num_output_channels).
    last_layer = tf.keras.layers.Conv2DTranspose(num_output_channels, 3, strides=2, padding='same')  # 64x64 -> 128x128
    x = last_layer(x)
    return tf.keras.Model(inputs=input_layer, outputs=x)
```
# Compile the model
### Sparse categorical cross-entropy is computed over the 3 channels {0,1,2} for each pixel.
### Because the loss is interpreted pixel-wise, segmentation is possible.
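As a rough plain-NumPy sketch of what this loss computes (not TensorFlow's actual implementation, and with toy shapes chosen only for illustration):

```python
import numpy as np

def pixelwise_scce(logits, labels):
    """Mean sparse categorical cross-entropy over every pixel.

    logits: (H, W, C) raw model outputs; labels: (H, W) integer class ids.
    """
    # Numerically stable softmax over the channel axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = labels.shape
    # Pick the predicted probability of the true class at each pixel.
    true_probs = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.log(true_probs).mean()

# Toy 2x2 "image" with 3 classes: confident, correct logits give a small loss.
logits = np.zeros((2, 2, 3))
logits[..., 0] = 5.0                 # strongly predict class 0 everywhere
labels = np.zeros((2, 2), dtype=int)
loss = pixelwise_scce(logits, labels)
```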
```
OUTPUT_CHANNELS = 3
model = build_model(OUTPUT_CHANNELS)
model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),metrics=['accuracy'])
if colab:
    from tensorflow.keras.utils import plot_model
    plot_model(model, show_shapes=True)
    # You can confirm that a model.png file has been saved.
    plt.figure(figsize=(12, 25))
    plt.imshow(np.array(Image.open('model.png')))
else:
    model.summary()
```
# Let's plot the initial predictions (epoch=0).
```
sample_image, sample_mask = next(iter(dataset))
def show_predictions(dataset=None, num=1, epoch=None):
    if dataset:
        for image, mask in dataset.take(num):
            predicted_mask = model.predict(image)
            # Reduce the 3 output channels to 1 by taking the argmax
            predicted_mask = tf.argmax(predicted_mask, axis=-1)
            predicted_mask = np.array(predicted_mask).reshape((10, 128, 128, 1))
            # display([image[0], mask[0], predicted_mask])
            plt.figure(figsize=(15, 5))
            for i in range(BATCH_SIZE):
                plt.subplot(3, BATCH_SIZE, i+1)
                plt.imshow(image[i])
                plt.subplot(3, BATCH_SIZE, i+BATCH_SIZE+1)
                plt.imshow(np.array(mask[i]).reshape(128, 128))
                plt.subplot(3, BATCH_SIZE, i+2 * BATCH_SIZE+1)
                plt.imshow(predicted_mask[i].reshape(128, 128))
    else:
        predicted_mask = model.predict(sample_image)
        predicted_mask = tf.argmax(predicted_mask, axis=-1)
        predicted_mask = np.array(predicted_mask).reshape((10, 128, 128, 1))
        plt.figure(figsize=(15, 5))
        if epoch:
            plt.title("Current epoch: {}".format(epoch))
        for i in range(BATCH_SIZE):
            plt.subplot(3, BATCH_SIZE, i+1)
            plt.imshow(sample_image[i])
            plt.subplot(3, BATCH_SIZE, i+BATCH_SIZE+1)
            plt.imshow(np.array(sample_mask[i]).reshape(128, 128))
            plt.subplot(3, BATCH_SIZE, i+2 * BATCH_SIZE+1)
            plt.imshow(predicted_mask[i].reshape(128, 128))
        # plt.show()
        if epoch:
            if colab:
                save_path = "/content/Surface_Crack_Segmentation/fig_saves/"
            else:
                save_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/fig_saves/"  # change this save path to your own
            file_name = "{}.png".format(epoch)
            plt.savefig(save_path + file_name)
        plt.show()
# Plot predictions from the untrained initial model.
# The predicted masks are still rather weak.
show_predictions(dataset, 1)
```
# Training
```
# Visualize training progress from sample_image and sample_mask at the end of every epoch.
from IPython.display import clear_output

class DisplayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        clear_output(wait=True)
        show_predictions(epoch=epoch)
EPOCHS = 100
# STEPS_PER_EPOCH = x_train.shape[0]/BATCH_SIZE # use this if you don't split train/validation
STEPS_PER_EPOCH = train_dataset_size
# model_history = model.fit(dataset, epochs=EPOCHS,
#                           steps_per_epoch=STEPS_PER_EPOCH,
#                           callbacks=[DisplayCallback()]) # use this if you don't split train/validation
# Create a checkpoint callback
if colab:
    save_dir = './ckpt_dat/'
    checkpoint_path = save_dir + "{epoch:03d}.ckpt"
    callback_autosave = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=1, save_weights_only=True)
else:
    checkpoint_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/CKPT/{epoch:03d}.ckpt"
    callback_autosave = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=1, save_weights_only=True)
model_history = model.fit(train_dataset, validation_data=test_dataset,
epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH,
callbacks=[DisplayCallback(),callback_autosave])
if colab:
    model.save("/content/Surface_Crack_Segmentation/MY_MODEL")
else:
    model.save("D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/MY_MODEL")
# plt.figure(figsize=(12,10))
# plt.subplot(2,2,1)
# plt.plot(model_history.history['accuracy'])
# plt.title('accuracy')
# plt.subplot(2,2,2)
# plt.plot(model_history.history['loss'])
# plt.title('loss')
# plt.subplot(2,2,3)
# plt.plot(model_history.history['val_accuracy'])
# plt.title('val_accuracy')
# plt.subplot(2,2,4)
# plt.plot(model_history.history['val_loss'])
# plt.title('val_loss')
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.plot(model_history.history['accuracy'],label = 'train')
plt.plot(model_history.history['val_accuracy'], label = 'validation')
plt.title('accuracy')
plt.legend()
plt.subplot(1,2,2)
plt.plot(model_history.history['loss'],label='train')
plt.plot(model_history.history['val_loss'],label='validation')
plt.title('loss')
plt.legend()
model_history.history['val_accuracy'][-1]
model_history.history['val_loss'][-1]
```
<a href="https://colab.research.google.com/github/Dakini/AnimeColorDeOldify/blob/master/ImageColorizerColabSketch2Gray.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### **<font color='blue'> Artistic Colorizer </font>**
#◢ DeOldify - Colorize your own photos!
####**Credits:**
Special thanks to:
Jason Antic for creating the DeOldify for training and inferencing.
Matt Robinson and María Benavente for pioneering the DeOldify image colab notebook.
Dana Kelley for doing things, breaking stuff & having an opinion on everything.
---
#◢ Verify Correct Runtime Settings
**<font color='#FF000'> IMPORTANT </font>**
In the "Runtime" menu for the notebook window, select "Change runtime type." Ensure that the following are selected:
* Runtime Type = Python 3
* Hardware Accelerator = GPU
```
import torch
if not torch.cuda.is_available():
    print('GPU not available.')
```
#◢ Git clone and install DeOldify
```
!git clone https://github.com/Dakini/AnimeColorDeOldify.git DeOldify
cd DeOldify
```
#◢ Setup
```
!pip install -r colab_requirements.txt
import fastai
from deoldify.visualize import *
torch.backends.cudnn.benchmark = True
!mkdir 'models'
!wget https://www.dropbox.com/s/6me8m9e7nfmlid6/tDEFrpvevtu6WGRKf2uV5cFtsFAEhuA5kmN7FpgZ.pth?dl=0 -O ./models/ColorizeArtistic_gen.pth
stats = ([0.7137, 0.6628, 0.6519],[0.2970, 0.3017, 0.2979])
colorizer = get_image_colorizer(artistic=True,stats=stats)
```
#◢ Instructions
### source_url
Type in a url to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, upload it first to a site like Imgur.
### render_factor
The default value of 12 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the image is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality images in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality images, but the colors may get slightly washed out.
### watermarked
Selected by default, this places a watermark icon of a palette at the bottom left corner of the image. This is intended to be a standard way to convey to others viewing the image that it is colorized by AI. We want to help promote this as a standard, especially as the technology continues to improve and the distinction between real and fake becomes harder to discern. This palette watermark practice was initiated and led by MyHeritage in the MyHeritage In Color feature (which uses a newer version of DeOldify than what you're using here).
### post_process
Selected by default, this post-processes the output image. Post-processing usually works well for images that contain some shading; however, it does not work for images that are mainly line drawings (sketches). It is recommended to turn this off if you are colorising a sketch.
#### How to Download a Copy
Simply right click on the displayed image and click "Save image as..."!
## Pro Tips
You can evaluate how well the image is rendered at each render_factor by using the code at the bottom (that cell under "See how well render_factor values perform on a frame here").
## Troubleshooting
If you get a 'CUDA out of memory' error, you probably have the render_factor too high.
#◢ Colorize!!
```
source_url = 'https://i.imgur.com/h1rCA4a.png' #@param {type:"string"}
render_factor = 12 #@param {type:"slider", min:7, max:45, step:1}
watermarked = False #@param {type:"boolean"}
if source_url is not None and source_url != '':
    image_path = colorizer.plot_transformed_image_from_url(url=source_url, render_factor=render_factor, compare=True, post_process=False, watermarked=watermarked)
    show_image_in_notebook(image_path)
else:
    print('Provide an image url and try again.')
```
## See how well render_factor values perform on the image here
```
for i in range(10, 45, 2):
    colorizer.plot_transformed_image('test_images/sketch/0_.png', render_factor=i, display_render_factor=True, post_process=False, figsize=(8,8))
```
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
import os
import sys
CURRENT_DIR = os.path.abspath(os.path.dirname(__name__))
LIBRARY_DIR = os.path.join(CURRENT_DIR, '..', '..')
sys.path.append(LIBRARY_DIR)
def saveas(name):
    image_name = '{}.png'.format(name)
    image_path = os.path.join(LIBRARY_DIR, 'site', '2017', '12', '09', 'images', image_name)
    plt.savefig(image_path, facecolor='#f8fafb', bbox_inches='tight')
blue = '#348ABD'
red = '#E24A33'
black = '#000000'
purple = '#988ED5'
green = '#8EBA42'
from itertools import product
import numpy as np
from neupy.algorithms.competitive.neighbours import find_neighbours_on_rect_grid
from examples.competitive.utils import plot_2d_grid
grid = np.array(list(product(range(9), range(9))))
fig = plt.figure(figsize=(6, 6))
plt.title("SOFM grid")
plot_2d_grid(
np.transpose(grid.reshape((9, 9, 2)), (2, 0, 1)),
color=blue)
plt.scatter(*grid.T, color=blue)
plt.xlim(-1, 9)
plt.ylim(-1, 9)
plt.xticks([])
plt.yticks([])
saveas('sofm-grid')
fig = plt.figure(figsize=(13, 10))
for index, radius in enumerate(range(6), start=1):
    plt.subplot(2, 3, index)
    red, blue = ('#E24A33', '#348ABD')
    plt.title('2D 10x10 SOFM grid with \nlearning radius = {}'.format(radius))
    neuron_winner = (4, 4)
    neighbour = find_neighbours_on_rect_grid(
        np.zeros((9, 9)), neuron_winner, radius)
    neighbour = neighbour.ravel()
    plot_2d_grid(
        np.transpose(grid.reshape((9, 9, 2)), (2, 0, 1)),
        color=blue)
    plt.scatter(*grid[neighbour == 0].T, color=blue)
    plt.scatter(*grid[neighbour == 1].T, color=red, s=50, zorder=100)
    plt.scatter(*neuron_winner, color=red, s=100, zorder=100)
    plt.xlim(-1, 9)
    plt.ylim(-1, 9)
    plt.xticks([])
    plt.yticks([])
fig.tight_layout()
saveas('sofm-learning-radius-comparison')
from neupy import algorithms
fig = plt.figure(figsize=(15, 18))
for index, radius in enumerate(range(8), start=1):
    plt.subplot(4, 2, index)
    radius = radius // 2
    neuron_winner = (8, 4)
    if index % 2 == 1:
        plt.title('SOFM 10x10 feature map \nbefore training')
        current_grid = grid.copy()
    else:
        plt.title('SOFM 10x10 feature map \nafter training')
        sofm = algorithms.SOFM(
            n_inputs=2,
            step=0.4,
            features_grid=(9, 9),
            learning_radius=radius,
            std=1,
            weight=current_grid.T)
        sofm.train([[15, 4]], epochs=1)
        current_grid = sofm.weight.T
    neighbour = find_neighbours_on_rect_grid(
        np.zeros((9, 9)), neuron_winner, radius)
    neighbour = neighbour.ravel()
    plot_2d_grid(
        np.transpose(current_grid.reshape((9, 9, 2)), (2, 0, 1)),
        color=blue)
    plt.scatter(*current_grid[neighbour == 0].T, color=blue)
    plt.scatter(*current_grid[neighbour == 1].T, color=red, s=50, zorder=100)
    plt.scatter(15, 4, color='#000000', s=100)
    plt.xlim(-1, 17)
    plt.ylim(-1, 9)
    plt.xticks([])
    plt.yticks([])
fig.tight_layout()
saveas('sofm-training-learning-radius-comparison')
```
# Passive Aggressive Regressor with Scale
This code template is for regression analysis using a simple `PassiveAggressiveRegressor`, with Scale as the feature-rescaling technique. Passive-aggressive algorithms are a group of algorithms for large-scale learning.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import PassiveAggressiveRegressor
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist, and convert string class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```
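As a quick illustration of what these two helpers do, here is a sketch on a hypothetical toy DataFrame (not the dataset loaded above):

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame: a numeric column with a NaN and a string column with a None
toy = pd.DataFrame({
    "age": [25.0, np.nan, 40.0],
    "city": ["NY", "LA", None],
})
# Numeric nulls are replaced by the column mean, string nulls by the mode
toy["age"] = toy["age"].fillna(toy["age"].mean())
toy["city"] = toy["city"].fillna(toy["city"].mode()[0])
# String categories become one-hot (dummy) columns
encoded = pd.get_dummies(toy)
print(encoded.columns.tolist())  # ['age', 'city_LA', 'city_NY']
```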
Calling preprocessing functions on the feature and target set.
```
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Scaling
sklearn.preprocessing.scale(X, *, axis=0, with_mean=True, with_std=True, copy=True)
Standardize a dataset along any axis. Center to the mean and component wise scale to unit variance.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html)
```
x_train=scale(x_train)
x_test=scale(x_test)
```
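One caveat worth noting: calling `scale` separately on the train and test sets standardizes each with its *own* mean and variance, which makes the two sets inconsistent. A common alternative, sketched here with made-up arrays standing in for `x_train`/`x_test`, is to fit a `StandardScaler` on the training data only:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins for x_train / x_test
x_tr = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
x_te = np.array([[2.0, 25.0]])

scaler = StandardScaler().fit(x_tr)   # statistics come from the training set only
x_tr_s = scaler.transform(x_tr)
x_te_s = scaler.transform(x_te)       # the test set reuses those training statistics
print(x_tr_s.mean(axis=0))            # approximately [0. 0.]
```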
### Model
The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C.
> **C** ->Maximum step size (regularization). Defaults to 1.0.
> **max_iter** ->The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.
> **tol**->The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).
> **early_stopping**->Whether to use early stopping to terminate training when the validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs.
> **validation_fraction**->The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.
> **n_iter_no_change**->Number of iterations with no improvement to wait before early stopping.
> **shuffle**->Whether or not the training data should be shuffled after each epoch.
> **loss**->The loss function to be used: epsilon_insensitive: equivalent to PA-I in the reference paper. squared_epsilon_insensitive: equivalent to PA-II in the reference paper.
> **epsilon**->If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
```
model = PassiveAggressiveRegressor(random_state=123)
model.fit(x_train,y_train)
```
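Because passive-aggressive models support online learning, the same estimator can also be trained incrementally with `partial_fit`, which is useful when data arrives in a stream or doesn't fit in memory. A minimal sketch on synthetic data (not the dataset above):

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

# Synthetic near-linear data: y = 2*x0 - x1 + 0.5*x2 + small noise
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.randn(200)

online_model = PassiveAggressiveRegressor(random_state=0)
for start in range(0, len(X), 50):   # feed the "stream" in chunks of 50 rows
    online_model.partial_fit(X[start:start + 50], y[start:start + 50])
print(online_model.score(X, y))
```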
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the quality of our model.
score: The `score` function returns the coefficient of determination R2 of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the fraction of variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, which penalizes the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
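RMSE (the square root of MSE) is often reported alongside MSE because it is in the same units as the target. A small self-contained sketch with made-up values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])
mse = mean_squared_error(y_true, y_hat)
rmse = np.sqrt(mse)
print(round(mse, 4), round(rmse, 4))  # 0.375 0.6124
```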
#### Prediction Plot
First, we plot the actual test-set observations, with the record number on the x-axis and the true value on the y-axis. Then we overlay the model's predictions for the same records as a second line.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),y_pred[0:20], color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Arpit Somani , Github: [Profile](https://github.com/arpitsomani8)
```
# Load the pandas data analysis library
import pandas as pd
# Read the CSV data into the DataFrame "data"
data = pd.read_csv("data/stockprices.csv")
# Preview the first 5 rows of data
data.head()
# Select variables whose positive or negative correlation with nasdaq exceeds a threshold
corMatrix = data.corr()
corMatrix[(abs(corMatrix) > 0.5) & (corMatrix != 1)]['nasdaq'].dropna()
# Load the matplotlib plotting library
import matplotlib.pyplot as plt
# Plot each series normalized by its maximum
plt.plot(data['nasdaq']/data['nasdaq'].max(), c='darkblue')
plt.plot(data['goog']/data['goog'].max(), c='green')
plt.plot(data['aapl']/data['aapl'].max(), c='red')
plt.plot(data['amzn']/data['amzn'].max(), c='yellow')
plt.show()
df10 = ((data['aapl']-data['aapl'].shift(1))/data['aapl'].shift(1)*100).rename('aaplder1')
df11 = ((data['aapl']-data['aapl'].shift(2))/data['aapl'].shift(2)*100).rename('aaplder2')
df12 = ((data['aapl']-data['aapl'].shift(3))/data['aapl'].shift(3)*100).rename('aaplder3')
df13 = ((data['aapl']-data['aapl'].shift(4))/data['aapl'].shift(4)*100).rename('aaplder4')
df20 = ((data['amzn']-data['amzn'].shift(1))/data['amzn'].shift(1)*100).rename('amznder1')
df21 = ((data['amzn']-data['amzn'].shift(2))/data['amzn'].shift(2)*100).rename('amznder2')
df22 = ((data['amzn']-data['amzn'].shift(3))/data['amzn'].shift(3)*100).rename('amznder3')
df23 = ((data['amzn']-data['amzn'].shift(4))/data['amzn'].shift(4)*100).rename('amznder4')
df30 = ((data['goog']-data['goog'].shift(1))/data['goog'].shift(1)*100).rename('googder1')
df31 = ((data['goog']-data['goog'].shift(2))/data['goog'].shift(2)*100).rename('googder2')
df32 = ((data['goog']-data['goog'].shift(3))/data['goog'].shift(3)*100).rename('googder3')
df33 = ((data['goog']-data['goog'].shift(4))/data['goog'].shift(4)*100).rename('googder4')
dfn = ((data['nasdaq'].shift(-1)-data['nasdaq'])/data['nasdaq']*100).rename('nder')
#df10 = data['aapl']
#df11 = data['aapl'].shift(1).rename('aapl-1')
#df12 = data['aapl'].shift(2).rename('aapl-2')
#df13 = data['aapl'].shift(3).rename('aapl-3')
#df14 = data['aapl'].shift(4).rename('aapl-4')
#df20 = data['amzn']
#df21 = data['amzn'].shift(1).rename('amzn-1')
#df22 = data['amzn'].shift(2).rename('amzn-2')
#df23 = data['amzn'].shift(3).rename('amzn-3')
#df24 = data['amzn'].shift(4).rename('amzn-4')
#df30 = data['goog']
#df31 = data['goog'].shift(1).rename('goog-1')
#df32 = data['goog'].shift(2).rename('goog-2')
#df33 = data['goog'].shift(3).rename('goog-3')
#df34 = data['goog'].shift(4).rename('goog-4')
#dfn = ((data['nasdaq'].shift(-2)-data['nasdaq'].shift(-1))/data['nasdaq'].shift(-1)*100).rename('nder')
#df = pd.concat([df10,df11,df12,df13,df14,df20,df21,df22,df23,df24,df30,df31,df32,df33,df34,dfn], axis=1)
df = pd.concat([df10,df11,df12,df13,df20,df21,df22,df23,df30,df31,df32,df33,dfn], axis=1)
df = df.dropna()
df
df.loc[(df['nder'] > 0), 'nder'] = 1
df.loc[(df['nder'] < 0), 'nder'] = -1
#print(df)
corMatrix = df.corr()
corMatrix[(abs(corMatrix) > 0.07) & (corMatrix != 1)]['nder'].dropna()
#corMatrix
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from math import sqrt
# Feature selection based on correlation
df2 = df[['aaplder1','aaplder2','aaplder3','aaplder4','amznder1','amznder2','amznder3','amznder4','googder1','googder2','googder3','googder4','nder']]
# 229 records for training, 15 for validation (15 days)
# the last column is the target; all previous columns are inputs
col = df2.shape[1]
total = df2.shape[0]
n = 229
x = df2.iloc[0:n,0:col-1]
x_val = df2.iloc[n:,0:col-1]
y = df2.iloc[0:n,col-1:col]
y_val = df2.iloc[n:,col-1:col]
# Train the model and compute predictions
# Parameter documentation: https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html
model = MLPRegressor(hidden_layer_sizes=(100), max_iter=100,
activation='relu', solver='lbfgs', random_state=1)
model.fit(x, y['nder'])
y_pred = model.predict(x_val)
#y_pred[y_pred < 0] = 0 # The NN can output negative values; clamp those cases to 0
# Plot the real series and the prediction
dias = range(total-n)
plt.plot(dias, y_val, color='green') # ACTUAL
plt.plot(dias, y_pred, color='red') # PREDICTION
plt.show()
# Compute the r2 coefficient and Pearson's r
print("R2 coefficient: " + str(model.score(x, y)))
# Note: sqrt(r2) equals |r| only when the score is non-negative
print("Pearson coefficient (r): " + str(sqrt(model.score(x, y))))
# Compute the root mean squared error (RMSE)
print("Root Mean Squared Error (RMSE): " + str(sqrt(mean_squared_error(y_val, y_pred))))
#print("Mean of the target: " + str(y_val['nder'].mean()))
#print("% error: " + str(sqrt(mean_squared_error(y_val, y_pred)) / y_val['nder'].mean()))
```
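Since `nder` was collapsed to ±1 direction labels before training, a natural complementary metric is the directional hit-rate: the fraction of days where the sign of the regressor's output matches the true direction. A hypothetical sketch with made-up values (not the model outputs above):

```python
import numpy as np

# Hypothetical true direction labels (+1/-1) and raw regressor outputs
y_true = np.array([1, -1, 1, 1, -1])
y_raw = np.array([0.3, -0.2, -0.1, 0.6, -0.4])
hit_rate = np.mean(np.sign(y_raw) == y_true)
print(hit_rate)  # 0.8
```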
# Analysis of trained models and training logs
This notebook shows how to load, process, and analyze logs that are automatically generated during training. It also demonstrates how to make plots to examine performance of a single model or compare performance of multiple models.
Prerequisites:
- To run this example live, you must train at least two models to generate the trained log directories and set the paths below.
Each log directory contains the following:
- args.txt: the arguments fed into regression.py to train the model
- split: the train-tune-test split used to train the model
- final_evaluation.txt: final evaluation metrics (MSE, Pearson's r, Spearman's r, and r^2) on each of the split sets
- predictions: the model's score predictions for every variant in each of the split sets
- the trained model itself: see the inference notebook for more information on how to use this
This codebase provides several convenient functions for loading this log data.
```
# reload modules before executing code in order to make development and debugging easier
%load_ext autoreload
%autoreload 2
# this jupyter notebook is running inside of the "notebooks" directory
# for relative paths to work properly, we need to set the current working directory to the root of the project
# for imports to work properly, we need to add the code folder to the system path
import os
from os.path import abspath, join, isdir, basename
import sys
# if there's a "notebooks" directory in the cwd, we've already set the cwd, so no need to do it again
if not isdir("notebooks"):
    os.chdir("..")
module_path = abspath("code")
if module_path not in sys.path:
    sys.path.append(module_path)
import numpy as np
import pandas as pd
import sklearn.metrics as skm
from scipy.stats import pearsonr, spearmanr
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import analysis as an
```
# Define the log directories
To run this script live, you must train at least two models. As an example, we are using the avGFP linear regression and fully connected models, trained using the arguments in `pub/regression_args/avgfp_main_lr.txt` and `pub/regression_args/avgfp_main_fc.txt`. You can use these or train your own models. For comparing performance of many trained models, you must write your own function to collect the log directory names. Using them with this example is then relatively straightforward.
```
log_dir_lr = "output/training_logs/log_local_local_2020-09-22_22-02-33_avgfp_lr_lr0.0001_bs128_DKPQxV5s"
log_dir_fc = "output/training_logs/log_local_local_2020-09-22_22-02-36_avgfp_fc-3xh100_lr0.0001_bs32_RbLfpQvW"
log_dirs = [log_dir_lr, log_dir_fc]
```
# Loading score predictions (single model)
The utility function uses the dataset tsv as a base and adds columns for the set name (train, tune, test, etc) and the predicted score.
```
ds_lr = an.load_predictions(log_dir_lr)
ds_lr.head()
```
# Loading evaluation metrics (single model)
```
metrics_lr = an.load_metrics(log_dir_lr)
metrics_lr
```
Sometimes it is convenient to have access to other aspects of the model, such as the learning rate and batch size. You can load the regression arguments as a dictionary using `an.load_args()`. Or, you can use `an.load_metrics_and_args` to load both the metrics and arguments in a single dataframe. The combined dataframe is set up so that each row can be a different model, which helps with comparisons between models.
```
met_args_lr = an.load_metrics_and_args(log_dir_lr)
met_args_lr
```
# Evaluating a single model
The dataframe contains variants from all sets (train, tune, test, etc), so if you are interested in a single set, you must select just those variants.
```
# before creating the testset-only dataframe, add a column with mean absolute error, used below
ds_lr["abs_err"] = np.abs(ds_lr["score"] - ds_lr["prediction"])
# create a subset view of the dataframe containing only test set variants
ds_lr_stest = ds_lr.loc[ds_lr.set_name == "stest"]
```
## Scatterplot of predicted vs. true score
```
fig, ax = plt.subplots(1)
sns.scatterplot(x="score", y="prediction", data=ds_lr_stest, ax=ax)
# draw a line of equivalence
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
lims = [max(x0, y0), min(x1, y1)]
ax.plot(lims, lims, ':k')
ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score (Linear regression)")
plt.show()
plt.close(fig)
```
## Mean absolute error by number of mutations
```
# plot the mean absolute error vs. number of mutations
# can do this more easily with pandas groupby, apply
grouped_mean = ds_lr_stest.groupby("num_mutations", as_index=False).mean()
fig, ax = plt.subplots(1)
sns.stripplot(x="num_mutations", y="abs_err", data=grouped_mean[grouped_mean.num_mutations < 13], ax=ax)
ax.set(ylabel="Mean absolute error", xlabel="Number of mutations", title="Mean absolute error by number of mutations")
plt.show()
plt.close(fig)
```
## Additional evaluation metrics
The regression training script automatically computes a few metrics, but you can also use the true and predicted scores to compute your own. Here, let's recompute Pearson's correlation coefficient and compare it to the same metric computed during training.
```
my_pearsonr = pearsonr(ds_lr_stest["score"], ds_lr_stest["prediction"])[0]
my_pearsonr
# the pearsonr from the metrics dataframe
met_args_lr.loc[0, "stest_pearsonr"]
```
There's a small amount of floating point imprecision, but otherwise the values are identical.
```
np.isclose(my_pearsonr, met_args_lr.loc[0, "stest_pearsonr"])
```
# Loading score predictions and metrics (multiple models)
The functions used above also accept lists of log directories. For loading predictions, you can optionally specify column names, otherwise the column names will be automatically labeled by number.
```
ds = an.load_predictions(log_dirs, col_names=["lr", "fc"])
ds.head()
```
Loading metrics is also straightforward. Note that `an.load_metrics()` does not support multiple log dirs, only `an.load_metrics_and_args()`.
```
metrics = an.load_metrics_and_args(log_dirs)
metrics
```
# Comparing multiple models
Make multiple scatterplots for different models. Note again, we must subset the dataframe to select our desired train/tune/test set.
```
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
for i, pred_col in enumerate(["lr", "fc"]):
    ax = sns.scatterplot(x="score", y=pred_col, data=ds[ds.set_name == "stest"], ax=axes[i])
    # draw a line of equivalence
    x0, x1 = ax.get_xlim()
    y0, y1 = ax.get_ylim()
    lims = [max(x0, y0), min(x1, y1)]
    ax.plot(lims, lims, ':k')
    ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score ({})".format(pred_col))
plt.show()
plt.close(fig)
```
Compare performance metrics between models.
```
metrics["parsed_net_file"] = metrics["net_file"].apply(lambda nf: basename(nf).split(".")[0])
fig, ax = plt.subplots(1)
ax = sns.stripplot(x="parsed_net_file", y="stest_pearsonr", data=metrics)
ax.set(xlabel="Network", ylabel="Pearson's r", title="Performance (test set)")
plt.show()
plt.close(fig)
```
```
%pylab inline
```
# FaceDetection
- with high-level API (WebcamFaceDetector)
```
from facelib import WebcamFaceDetector
detector = WebcamFaceDetector()
# please wait: it shows a window
detector.run()
```
- with low-level API (FaceDetector)
```
import matplotlib.pyplot as plt
from facelib import FaceDetector
img = plt.imread('facelib/imgs/face_rec.jpg')
detector = FaceDetector()
faces, boxes, scores, landmarks = detector.detect_align(img)
plt.imshow(faces.cpu()[0]);
```
# AgeGenderEstimator
- with high-level API (WebcamAgeGenderEstimator)
```
from facelib import WebcamAgeGenderEstimator
estimator = WebcamAgeGenderEstimator()
# please wait: it shows a window
estimator.run()
```
- with low-level API (AgeGenderEstimator)
```
import matplotlib.pyplot as plt
from facelib import AgeGenderEstimator, FaceDetector
img = plt.imread('facelib/imgs/face_rec.jpg')
face_detector = FaceDetector()
age_gender_detector = AgeGenderEstimator()
faces, boxes, scores, landmarks = face_detector.detect_align(img)
genders, ages = age_gender_detector.detect(faces)
print(genders, ages)
```
# FacialExpression
- with high-level API (WebcamEmotionDetector)
```
from facelib import WebcamEmotionDetector
detector = WebcamEmotionDetector()
# please wait: it shows a window
detector.run()
```
- with low-level API (EmotionDetector)
```
import matplotlib.pyplot as plt
from facelib import FaceDetector, EmotionDetector
img = plt.imread('facelib/imgs/face_rec.jpg')
face_detector = FaceDetector(face_size=(224, 224))
emotion_detector = EmotionDetector()
faces, boxes, scores, landmarks = face_detector.detect_align(img)
list_of_emotions, probab = emotion_detector.detect_emotion(faces)
print(list_of_emotions)
```
# Add New Person
```
from facelib import add_from_webcam
add_from_webcam(person_name='sajjad')
from facelib import add_from_folder
add_from_folder(person_name='sajjad', folder_path='./test/')
```
# FaceRecognition
- with high-level API (WebcamVerify)
```
from facelib import WebcamVerify
verifier = WebcamVerify()
# please wait: it shows a window
verifier.run()
```
- with low-level API (FaceRecognizer)
```
import matplotlib.pyplot as plt
from facelib import FaceRecognizer, FaceDetector
from facelib import update_facebank, load_facebank, special_draw, get_config
conf = get_config()
detector = FaceDetector()
face_rec = FaceRecognizer(conf)
face_rec.model.eval()
update = True
if update:
    targets, names = update_facebank(conf, face_rec.model, detector)
else:
    targets, names = load_facebank(conf)
img = plt.imread('facelib/imgs/face_rec.jpg')
faces, boxes, scores, landmarks = detector.detect_align(img)
results, score = face_rec.infer(conf, faces, targets)
names[results.cpu()]
```
This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D [C5G7](https://www.oecd-nea.org/science/docs/2003/nsc-doc2003-16.pdf) benchmark model using the `openmc.MGXSLibrary` class to create the supporting data library on the fly.
# Generate MGXS Library
```
import os
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
import openmc
%matplotlib inline
```
We will now create the multi-group library using data directly from Appendix A of the [C5G7](https://www.oecd-nea.org/science/docs/2003/nsc-doc2003-16.pdf) benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark.
This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy).
*Note*: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section.
```
# Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant)
groups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8))
uo2_xsdata = openmc.XSdata('uo2', groups)
uo2_xsdata.order = 0
# When setting the data let the object know you are setting the data for a temperature of 294K.
uo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,
3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.)
uo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2,
3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.)
uo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2,
1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.)
uo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2,
4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.)
uo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7,
0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.)
```
We will now add the scattering matrix data.
*Note*: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing Groups, Scattering Order] to best enable other scattering representations. We will follow the more familiar approach in this notebook, and then use numpy's `numpy.rollaxis` function to change the ordering to what we need (scattering order on the inner dimension).
```
# The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns
# (i.e., below the diagonal is up-scattering).
scatter_matrix = \
[[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]]
scatter_matrix = np.array(scatter_matrix)
scatter_matrix = np.rollaxis(scatter_matrix, 0, 3)
uo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.)
```
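As a quick transcription check (a sketch using plain NumPy, independent of OpenMC): for consistent multi-group data, the total (here transport-corrected) cross section in each group should equal the absorption cross section plus the row sum of the scattering matrix over outgoing groups:

```python
import numpy as np

# The UO2 values entered above, repeated here so this check is self-contained
total = np.array([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,
                  3.11801E-1, 3.95168E-1, 5.64406E-1])
absorption = np.array([8.0248E-3, 3.7174E-3, 2.6769E-2, 9.6236E-2,
                       3.0020E-2, 1.1126E-1, 2.8278E-1])
scatter = np.array(
    [[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.0, 0.0, 0.0],
     [0.0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.0, 0.0, 0.0],
     [0.0, 0.0, 4.50940E-1, 2.67920E-3, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 4.52565E-1, 5.56640E-3, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],
     [0.0, 0.0, 0.0, 0.0, 1.29680E-3, 2.65802E-1, 1.68090E-2],
     [0.0, 0.0, 0.0, 0.0, 0.0, 8.54580E-3, 2.73080E-1]])

# Sum over outgoing groups (columns) and compare against total minus absorption
print(np.allclose(total, absorption + scatter.sum(axis=1), atol=1e-5))  # True
```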
Now that the UO2 data has been created, we can move on to the remaining materials using the same process.
However, we will actually skip repeating the above for now. Our simulation will instead use the `c5g7.h5` file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem.
For now we will show how you would use the `uo2_xsdata` information to create an `openmc.MGXSLibrary` object and write to disk.
```
# Initialize the library
mg_cross_sections_file = openmc.MGXSLibrary(groups)
# Add the UO2 data to it
mg_cross_sections_file.add_xsdata(uo2_xsdata)
# And write to disk
mg_cross_sections_file.export_to_hdf5('mgxs.h5')
```
# Generate 2-D C5G7 Problem Input Files
To build the actual 2-D model, we will first begin by creating the `materials.xml` file.
First we need to define materials that will be used in the problem. In other notebooks, either nuclides or elements were added to materials at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use the `Material.add_macroscopic` method to specify a macroscopic object. Unlike for nuclides and elements, we do not need provide information on atom/weight percents as no number densities are needed.
When assigning macroscopic objects to a material, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). The density of a macroscopic dataset is set to 1.0 in the `openmc.Material` object by default when a macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required.
Aside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks.
```
# For every cross section data set in the library, assign an openmc.Macroscopic object to a material
materials = {}
for xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']:
    materials[xs] = openmc.Material(name=xs)
    materials[xs].set_density('macro', 1.)
    materials[xs].add_macroscopic(xs)
```
Now we can go ahead and produce a `materials.xml` file for use by OpenMC
```
# Instantiate a Materials collection, register all Materials, and export to XML
materials_file = openmc.Materials(materials.values())
# Set the location of the cross sections file to our pre-written set
materials_file.cross_sections = 'c5g7.h5'
materials_file.export_to_xml()
```
Our next step will be to create the geometry information needed for our assembly and to write that to the `geometry.xml` file.
We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.
```
# Create the surface used for each pin
pin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf')
# Create the cells which will be used to represent each pin type.
cells = {}
universes = {}
for material in materials.values():
    # Create the cell for the material inside the cladding
    cells[material.name] = openmc.Cell(name=material.name)
    # Assign the half-spaces to the cell
    cells[material.name].region = -pin_surf
    # Register the material with this cell
    cells[material.name].fill = material
    # Repeat the above for the material outside the cladding (i.e., the moderator)
    cell_name = material.name + '_moderator'
    cells[cell_name] = openmc.Cell(name=cell_name)
    cells[cell_name].region = +pin_surf
    cells[cell_name].fill = materials['water']
    # Finally add the two cells we just made to a Universe object
    universes[material.name] = openmc.Universe(name=material.name)
    universes[material.name].add_cells([cells[material.name], cells[cell_name]])
```
The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types
```
lattices = {}
# Instantiate the UO2 Lattice
lattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly')
lattices['UO2 Assembly'].dimension = [17, 17]
lattices['UO2 Assembly'].lower_left = [-10.71, -10.71]
lattices['UO2 Assembly'].pitch = [1.26, 1.26]
u = universes['uo2']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['UO2 Assembly'].universes = \
[[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
[u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
[u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]]
# Create a containing cell and universe
cells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly')
cells['UO2 Assembly'].fill = lattices['UO2 Assembly']
universes['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly')
universes['UO2 Assembly'].add_cell(cells['UO2 Assembly'])
# Instantiate the MOX Lattice
lattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly')
lattices['MOX Assembly'].dimension = [17, 17]
lattices['MOX Assembly'].lower_left = [-10.71, -10.71]
lattices['MOX Assembly'].pitch = [1.26, 1.26]
m = universes['mox43']
n = universes['mox7']
o = universes['mox87']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['MOX Assembly'].universes = \
[[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m],
[m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
[m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
[m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
[m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
[m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
[m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
[m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
[m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
[m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]]
# Create a containing cell and universe
cells['MOX Assembly'] = openmc.Cell(name='MOX Assembly')
cells['MOX Assembly'].fill = lattices['MOX Assembly']
universes['MOX Assembly'] = openmc.Universe(name='MOX Assembly')
universes['MOX Assembly'].add_cell(cells['MOX Assembly'])
# Instantiate the reflector Lattice
lattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly')
lattices['Reflector Assembly'].dimension = [1,1]
lattices['Reflector Assembly'].lower_left = [-10.71, -10.71]
lattices['Reflector Assembly'].pitch = [21.42, 21.42]
lattices['Reflector Assembly'].universes = [[universes['water']]]
# Create a containing cell and universe
cells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly')
cells['Reflector Assembly'].fill = lattices['Reflector Assembly']
universes['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly')
universes['Reflector Assembly'].add_cell(cells['Reflector Assembly'])
```
Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined.
After that we can create the final cell to contain the entire core.
```
lattices['Core'] = openmc.RectLattice(name='3x3 core lattice')
lattices['Core'].dimension= [3, 3]
lattices['Core'].lower_left = [-32.13, -32.13]
lattices['Core'].pitch = [21.42, 21.42]
r = universes['Reflector Assembly']
u = universes['UO2 Assembly']
m = universes['MOX Assembly']
lattices['Core'].universes = [[u, m, r],
[m, u, r],
[r, r, r]]
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-32.13, boundary_type='reflective')
max_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum')
min_y = openmc.YPlane(y0=-32.13, boundary_type='vacuum')
max_y = openmc.YPlane(y0=+32.13, boundary_type='reflective')
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = lattices['Core']
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
```
Before we commit to the geometry, we should view it using the Python API's plotting capability
```
root_universe.plot(origin=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500),
color_by='material')
```
OK, it looks pretty good, let's go ahead and write the file
```
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
```
We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.
```
tallies_file = openmc.Tallies()
# Instantiate a tally Mesh
mesh = openmc.RegularMesh()
mesh.dimension = [17 * 2, 17 * 2]
mesh.lower_left = [-32.13, -10.71]
mesh.upper_right = [+10.71, +32.13]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
```
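As a quick sanity check on the "one mesh cell per pin" comment above, the arithmetic works out: each 17x17 assembly is 21.42 cm across, and the mesh spans two assemblies with 34 cells per side. This is plain Python, independent of OpenMC:

```
# Each 17x17 assembly is 21.42 cm wide, so the pin pitch is 21.42 / 17 cm
assembly_pitch = 21.42
pins_per_side = 17
pin_pitch = assembly_pitch / pins_per_side
# The mesh spans two assemblies (from -32.13 to +10.71 cm) with 17 * 2 cells
mesh_width = 10.71 - (-32.13)
cell_width = mesh_width / (17 * 2)
# One mesh cell per pin: the two widths should agree (both 1.26 cm)
print(pin_pitch, cell_width)
```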
With the geometry and materials finished, we now just need to define simulation parameters for the `settings.xml` file. Note the use of the `energy_mode` attribute of our `settings_file` object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain.
This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!
```
# OpenMC simulation parameters
batches = 150
inactive = 50
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
# Tell OpenMC this is a multi-group problem
settings_file.energy_mode = 'multi-group'
# Set the verbosity to 6 so we don't see output for every batch
settings_file.verbosity = 6
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Tell OpenMC we want to run in eigenvalue mode
settings_file.run_mode = 'eigenvalue'
# Export to "settings.xml"
settings_file.export_to_xml()
```
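The "500,000 active histories" figure quoted above follows directly from these settings; a one-line check in plain Python:

```
# Active histories = (total batches - inactive batches) * particles per batch
batches, inactive, particles = 150, 50, 5000
active_histories = (batches - inactive) * particles
print(active_histories)  # 500000
```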
Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.
```
# Run OpenMC
openmc.run()
```
# Results Visualization
Now that we have run the simulation, let's look at the fission rate tally we set up earlier.
```
# Load the last statepoint file and keff value
sp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')
# Get the OpenMC pin power tally data
mesh_tally = sp.get_tally(name='mesh tally')
fission_rates = mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
fission_rates.shape = mesh.dimension
# Normalize to the average pin power
fission_rates /= np.mean(fission_rates[fission_rates > 0.])
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
fission_rates[fission_rates == 0.] = np.nan
# Plot the pin powers
plt.figure()
plt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower')
plt.colorbar()
plt.title('Pin Powers')
plt.show()
```
There we have it! We have just successfully run the C5G7 benchmark model!
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
    for word in words:
        # Map out-of-vocabulary words to the UNK token (index 3), not PAD
        index = dictionary.get(word, dictionary['UNK'])
        if index == dictionary['UNK']:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
with open('english-train', 'r') as fopen:
text_from = fopen.read().lower().split('\n')[:-1]
with open('vietnam-train', 'r') as fopen:
text_to = fopen.read().lower().split('\n')[:-1]
print('len from: %d, len to: %d'%(len(text_from), len(text_to)))
concat_from = ' '.join(text_from).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
concat_to = ' '.join(text_to).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab to size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(text_to)):
text_to[i] += ' EOS'
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate, batch_size):
def cells(reuse=False):
return tf.nn.rnn_cell.GRUCell(size_layer,reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
batch_size = tf.shape(self.X)[0]
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
decoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, decoder_input)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
_, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded,
dtype = tf.float32)
with tf.variable_scope("decoder"):
rnn_cells_dec = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
outputs, _ = tf.nn.dynamic_rnn(rnn_cells_dec, decoder_embedded,
initial_state = last_state,
dtype = tf.float32)
self.logits = tf.layers.dense(outputs,to_dict_size)
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
y_t = tf.argmax(self.logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
X = str_idx(text_from, dictionary_from)
Y = str_idx(text_to, dictionary_to)
maxlen_question = max([len(x) for x in X]) * 2
maxlen_answer = max([len(y) for y in Y]) * 2
def pad_sentence_batch(sentence_batch, pad_int, maxlen):
padded_seqs = []
seq_lens = []
max_sentence_len = maxlen
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(maxlen)
return padded_seqs, seq_lens
for i in range(epoch):
total_loss, total_accuracy = 0, 0
X, Y = shuffle(X, Y)
for k in range(0, len(text_to), batch_size):
index = min(k + batch_size, len(text_to))
batch_x, seq_x = pad_sentence_batch(X[k: index], PAD, maxlen_answer)
batch_y, seq_y = pad_sentence_batch(Y[k: index], PAD, maxlen_answer)
predicted, accuracy, loss, _ = sess.run([tf.argmax(model.logits,2),
model.accuracy, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += accuracy
total_loss /= (len(text_to) / batch_size)
total_accuracy /= (len(text_to) / batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
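To make the preprocessing above concrete, here is a minimal, standalone sketch of the tokenize-and-pad steps (the same logic as `str_idx` and `pad_sentence_batch`). The toy dictionary is a stand-in for the vocabularies built earlier, with the same special ids (PAD=0, GO=1, EOS=2, UNK=3):

```
PAD, UNK = 0, 3
dictionary = {'PAD': 0, 'GO': 1, 'EOS': 2, 'UNK': 3, 'hello': 4, 'world': 5}

def str_idx(corpus, dic):
    # Map each sentence to a list of token ids, falling back to UNK
    return [[dic.get(w, UNK) for w in sent.split()] for sent in corpus]

def pad_sentence_batch(batch, pad_int, maxlen):
    # Right-pad every sequence in the batch to a fixed length
    return [seq + [pad_int] * (maxlen - len(seq)) for seq in batch]

ids = str_idx(['hello world', 'hello unknownword'], dictionary)
padded = pad_sentence_batch(ids, PAD, maxlen=4)
print(padded)  # [[4, 5, 0, 0], [4, 3, 0, 0]]
```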
```
import numpy as np
import pandas as pd
#import matplotlib.pylab as plt
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_score
from sklearn import cluster
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
import seaborn as sns
sns.set()
from sklearn.neighbors import NearestNeighbors
from yellowbrick.cluster import KElbowVisualizer
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from mpl_toolkits.mplot3d import Axes3D
from sklearn.metrics import accuracy_score
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
```
## Visualize the data and drop the columns that are not needed
```
dfRead = pd.read_csv('Suma_todasLasSesiones.csv')
df = dfRead.drop(['Sesion','Id'], axis=1)
#df = df[df['Fsm']!=0]
```
## Data filtering
## Histogram of the grades
```
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
datos = df.drop(['Nota'], axis=1).hist()
plt.grid(True)
plt.show()
```
## Create the cluster data and the categories
```
clusters = df[['Nota']]
X = df.drop(['Nota'], axis=1)
## Normalize the data so the values fall in the (0, 1) range
scaler = MinMaxScaler(feature_range=(0, 1))
x = scaler.fit_transform(X)
```
## Define the clustering methods
```
def clusterDBscan(x):
db = cluster.DBSCAN(eps=0.175, min_samples=5)
db.fit(x)
return db.labels_
def clusterKMeans(x, n_clusters):
return cluster.k_means(x, n_clusters=n_clusters)[1]
```
## Define functions to reduce dimensionality, in case they are needed
```
def reducir_dim(x, ndim):
pca = PCA(n_components=ndim)
return pca.fit_transform(x)
def reducir_dim_tsne(x, ndim):
pca = TSNE(n_components=ndim)
return pca.fit_transform(x)
```
## Plot candidate numbers of clusters based on the silhouette score
```
def calculaSilhoutter(x, clusters):
res=[]
fig, ax = plt.subplots(1,figsize=(20, 5))
for numCluster in range(2, 7):
res.append(silhouette_score(x, clusterKMeans(x,numCluster )))
ax.plot(range(2, 7), res)
ax.set_xlabel("n clusters")
ax.set_ylabel("silouhette score")
ax.set_title("K-Means")
calculaSilhoutter(x, clusters)
```
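As a hedged illustration of how the silhouette curve guides the choice of k, the score peaks at the true number of clusters on synthetic data (here three well-separated blobs; `X_demo`, `scores`, and `best_k` are hypothetical names, not part of the analysis above):

```
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three tight, well-separated blobs: the silhouette score should peak at k=3
X_demo, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=0)
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_demo)
    scores[k] = silhouette_score(X_demo, labels)
best_k = max(scores, key=scores.get)
print(best_k)
```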
## Plot candidate numbers of clusters based on the elbow method
```
model = KMeans()
visualizer = KElbowVisualizer(model, k=(2,7), metric='calinski_harabasz', timings=False)
visualizer.fit(x) # Fit the data to the visualizer
visualizer.show()
clus_km = clusterKMeans(x, 3)
clus_db = clusterDBscan(x)
def reducir_dataset(x, how):
if how == "pca":
res = reducir_dim(x, ndim=2)
elif how == "tsne":
res = reducir_dim_tsne(x, ndim=2)
else:
return x[:, :2]
return res
results = pd.DataFrame(np.column_stack([reducir_dataset(x, how="tsne"), clusters, clus_km, clus_db]), columns=["x", "y", "clusters", "clus_km", "clus_db"])
def mostrar_resultados(res):
"""Muestra los resultados de los algoritmos
"""
fig, ax = plt.subplots(1, 3, figsize=(20, 5))
sns.scatterplot(data=res, x="x", y="y", hue="clusters", ax=ax[0], legend="full")
ax[0].set_title('Ground Truth')
sns.scatterplot(data=res, x="x", y="y", hue="clus_km", ax=ax[1], legend="full")
ax[1].set_title('K-Means')
sns.scatterplot(data=res, x="x", y="y", hue="clus_db", ax=ax[2], legend="full")
ax[2].set_title('DBSCAN')
mostrar_resultados(results)
kmeans = KMeans(n_clusters=3,init = "k-means++")
kmeans.fit(x)
labels = kmeans.predict(x)
X['Cluster_Km']=labels
dfRead['Cluster_Km']=labels
X.groupby('Cluster_Km').mean()
```
## DBSCAN
```
neigh = NearestNeighbors(n_neighbors=2)
nbrs = neigh.fit(x)
distances, indices = nbrs.kneighbors(x)
distances = np.sort(distances, axis=0)
distances = distances[:,1]
plt.plot(distances)
plt.ylim(0,0.25)
dbscan = cluster.DBSCAN(eps=0.175, min_samples=5)
dbscan.fit(x)
clusterDbscan = dbscan.labels_
X['Cluster_DB']=clusterDbscan
dfRead['Cluster_DB']=clusterDbscan
X.groupby('Cluster_DB').mean()
dfRead
```
## Inter-Subject Correlation and Inter-Subject Functional Correlation
[Contributions](#contributions)
The functional connectivity methods that we used in previous notebooks compared time series of BOLD activity between voxels within a participant to infer how different regions of the brain were interacting. However, BOLD activity contains multiple components ([Figure a](#fig1)):
1. Task-based/stimulus-evoked signal that is reliable across participants
2. Intrinsic fluctuations in neural activity that are participant specific
3. Scanner or physiological noise
In this notebook, we will consider methods that combine data across participants to eliminate #2 and #3 when calculating fMRI reliability (intersubject correlation, ISC, [Hasson et al., 2004](https://doi.org/10.1126/science.1089506)) and connectivity (intersubject functional correlation, ISFC, [Simony et al., 2016](https://doi.org/10.1038/ncomms12141)). ISC and ISFC help isolate #1 because it is the only component that ought to be shared across participants.
[Figure b,c](#fig1) show how ISC differs from functional connectivity: rather than correlating brain regions, which preserves participant-specific activity and noise, ISC correlates between the brains of different participants in order to capture only the activity that is shared. In ISC, this correlation is done for every voxel in the brain to the matching voxel in other brains, producing a full brain map. [Figure e](#fig1) shows this as the diagonal of a correlation matrix, where each cell corresponds to a voxel in subject X correlated with the same anatomical voxel in subject Y. In practice, to simplify the computation and the interpretation it is typical for ISC to compare each individual participant with the average of all other participants.
[Figure d](#fig1) shows ISFC: the correlation of every voxel in one participant with every other voxel in another participant (or average of other participants). This is like FCMA except it is between participants rather than within participants. In fact, these analyses use the same computational tricks. ISFC is valuable because it allows us to identify activity coupling in voxels that are not aligned across participants: the off diagonal in [Figure e](#fig1) represents correlations for voxels in different parts of the brain.
<a id="fig1"></a>
We will use ISC and ISFC to identify brain regions that respond preferentially to narrative stories, rather than to a random assortment of words (replicating Simony et al., 2016). Furthermore, seed-based connectivity analysis does not show differences between resting state, random words, and intact narratives, but ISFC does distinguish between these conditions (Simony et al., 2016). Thus, ISFC shows greater sensitivity to the task than seed-based functional connectivity.
## Goal of this script
1. To run intersubject correlation (ISC).
2. To run intersubject functional correlation (ISFC).
3. To use ISFC to examine a network of brain regions that responds to narrative stimuli.
## Table of Contents
[1. The ISC-ISFC Workflow](#isc_isfc_wkflow)
[2. ISC](#isc)
>[2.1 The "Pieman" data](#dataset)
>[2.2 Data file preparation](#data_prep_isc)
>[2.3 Compute ISC](#isc_compute)
>[2.4 ISC with statistical tests](#isc_stats)
[3. ISFC](#isfc)
>[3.1 Parcel the data](#isfc_parcel)
>[3.2 Compute FC and ISFC](#fc_isfc)
[4. Spatial Correlation](#spat_corr)
>[4.1 Spatial inter-subject correlation](#spatial_isc)
#### Exercises
>[1](#ex1) [2](#ex2) [3](#ex3) [4](#ex4) [5](#ex5) [6](#ex6) [7](#ex7) [8](#ex8) [9](#ex9)
>[Novel contribution](#novel)
## 1. The ISC-ISFC workflow <a id="isc_isfc_wkflow"></a>
The following sequence of steps is recommended for successfully running ISC and ISFC using [BrainIAK](http://brainiak.org/).
1. [**Data Preparation:**](#data_prep_isc) Organize a data directory with fMRI subject data that you want to process. All subjects must be in the same anatomical space for analysis. Also you need to create a whole-brain mask. The outcome of this is an array of anatomically-aligned and temporally-aligned brain data.
2. [**Compute ISC:**](#isc_compute) The ISC function computes correlations across subjects for corresponding voxels in the mask. It uses the `compute_correlation` function in BrainIAK, which is optimized for fast execution (and was used in FCMA).
3. [**Permutation Test for ISC:**](#isc_stats) Perform statistical analysis to determine significant correlation values for ISC.
4. [**Compute ISFC:**](#isfc_compute) The ISFC function computes correlations for every voxel in one subject with every other voxel averaged across subjects.
5. [**Cluster the ISFC results:**](#clust_isfc) Create clusters based on the correlation values.
6. [**Perform ISFC permutation:**](#perm) Perform permutation tests to determine the significance of the results.
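Before turning to real data, the leave-one-out logic at the heart of step 2 can be sketched in a few lines of NumPy on toy data (an illustration only, not the BrainIAK implementation; all names here are hypothetical):

```
import numpy as np

rng = np.random.default_rng(0)
n_trs, n_voxels, n_subjects = 100, 5, 10

# Toy data: a shared stimulus-evoked signal plus subject-specific noise
shared = rng.standard_normal((n_trs, n_voxels))
data = shared[:, :, None] + 0.5 * rng.standard_normal((n_trs, n_voxels, n_subjects))

# Leave-one-out ISC: correlate each subject's voxel time series with the
# average of the remaining subjects' time series for the same voxel
isc_loo = np.zeros((n_subjects, n_voxels))
for s in range(n_subjects):
    others = data[:, :, np.arange(n_subjects) != s].mean(axis=2)
    for v in range(n_voxels):
        isc_loo[s, v] = np.corrcoef(data[:, v, s], others[:, v])[0, 1]

# The shared signal dominates, so the mean leave-one-out ISC is high
print(isc_loo.mean())
```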
```
import warnings
import sys
if not sys.warnoptions:
warnings.simplefilter("ignore")
import os
import glob
import time
from copy import deepcopy
import numpy as np
import pandas as pd
from nilearn import datasets
from nilearn import surface
from nilearn import plotting
from nilearn.input_data import NiftiMasker, NiftiLabelsMasker
import nibabel as nib
from brainiak import image, io
from brainiak.isc import isc, isfc, permutation_isc
import matplotlib.pyplot as plt
import seaborn as sns
%autosave 5
%matplotlib inline
sns.set(style = 'white', context='talk', font_scale=1, rc={"lines.linewidth": 2})
```
## 2. ISC <a id="isc"></a>
### 2.1 The "Pieman" data <a id="dataset"></a>
For this script we will use the "Pieman" dataset from [Simony et al. (2016)](https://doi.org/10.1038/ncomms12141). A description of the dataset is as follows:
>18 native English speakers were scanned (15 females, ages: 18–31), corresponding to the replication dataset from the Pieman study.
>Stimuli for the experiment were generated from a 7 min real life story (["Pie Man", Jim O'Grady](https://www.youtube.com/watch?v=3nZzSUDECLo)) recorded at a live storytelling performance (["The Moth" storytelling event](https://themoth.org/), New York City). Subjects listened to the story from beginning to end (intact condition).
>In addition, subjects listened to scrambled versions of the story, which were generated by dividing the original stimulus into segments of different timescales (paragraphs and words) and then permuting the order of these segments. To generate the scrambled stimuli, the story was segmented manually by identifying the end points of each word and paragraph. Two adjacent short words were assigned to a single segment in cases where we could not separate them. Following segmentation, the intact story was scrambled at two timescales: short—‘words’ (W; 608 words, 0.7±0.5 s each) and long—‘paragraphs’ (P; 11 paragraphs, 38.1±17.6 s each). Laughter and applause were classified as single word events (4.4% of the words). Twelve seconds of neutral music and 3 s of silence preceded, and 15 s of silence followed, each playback in all conditions. These music and silence periods were discarded from all analyses.
More details about the experiment may be accessed in the methods section of the paper.
### 2.2 Data File Preparation <a id="data_prep_isc"></a>
**Loading and preparing the data:**
BrainIAK has methods to efficiently load data. We have used some of these functions in previous notebooks.
> *load_images:* reads data from all subjects in a list that you provide. This is like the function load_images_from_dir but here we specify the names manually.
> *load_boolean_mask:* Create a binary mask from a brain volume
> *mask_images:* Loads the brain images and masks them with the mask provided
> *image.MaskedMultiSubjectData.from_masked_images:* Creates a list of arrays, with each item in the list corresponding to one subject's data. This data format is accepted by the BrainIAK ISC and ISFC function.
```
# Set up experiment metadata
from utils import pieman2_dir, results_path
print('Data directory is: %s' % pieman2_dir)
dir_mask = os.path.join(pieman2_dir, 'masks/')
mask_name = os.path.join(dir_mask, 'avg152T1_gray_3mm.nii.gz')
all_task_names = ['word', 'intact1']
all_task_des = ['word level scramble', 'intact story']
n_subjs_total = 18
group_assignment_dict = {task_name: i for i, task_name in enumerate(all_task_names)}
# Where do you want to store the data
dir_out = results_path + 'isc/'
if not os.path.exists(dir_out):
os.makedirs(dir_out)
print('Dir %s created ' % dir_out)
```
### Helper functions
We provide helper functions to load the data.
<div class="alert alert-block alert-warning">
<strong>Memory limits</strong> Be aware this is going to be run on 18 participants and may push the limits of your memory and computational resources if you are on a laptop. If you want to run it on fewer participants to protect memory, change `n_subjs` to be lower (e.g. 10); however, the anticipated results may not generalize to lower sample sizes.
</div>
```
# Reduce the number of subjects per condition to make this notebook faster
upper_limit_n_subjs = 18
def get_file_names(data_dir_, task_name_, verbose = False):
"""
Get all the participant file names
Parameters
----------
data_dir_ [str]: the data root dir
task_name_ [str]: the name of the task
Return
----------
fnames_ [list]: file names for all subjs
"""
c_ = 0
fnames_ = []
# Collect all file names
    for subj in range(1, n_subjs_total + 1):
fname = os.path.join(
data_dir_, 'sub-%.3d/func/sub-%.3d-task-%s.nii.gz' % (subj, subj, task_name_))
# If the file exists
if os.path.exists(fname):
# Add to the list of file names
fnames_.append(fname)
if verbose:
print(fname)
c_+= 1
if c_ >= upper_limit_n_subjs:
break
return fnames_
"""load brain template"""
# Load the brain mask
brain_mask = io.load_boolean_mask(mask_name)
# Get the list of nonzero voxel coordinates
coords = np.where(brain_mask)
# Load the brain nii image
brain_nii = nib.load(mask_name)
"""load bold data"""
# load the functional data
fnames = {}
images = {}
masked_images = {}
bold = {}
group_assignment = []
n_subjs = {}
for task_name in all_task_names:
fnames[task_name] = get_file_names(pieman2_dir, task_name)
images[task_name] = io.load_images(fnames[task_name])
masked_images[task_name] = image.mask_images(images[task_name], brain_mask)
# Concatenate all of the masked images across participants
bold[task_name] = image.MaskedMultiSubjectData.from_masked_images(
masked_images[task_name], len(fnames[task_name])
)
# Convert nans into zeros
bold[task_name][np.isnan(bold[task_name])] = 0
# compute the group assignment label
n_subjs_this_task = np.shape(bold[task_name])[-1]
group_assignment += list(
np.repeat(group_assignment_dict[task_name], n_subjs_this_task)
)
n_subjs[task_name] = np.shape(bold[task_name])[-1]
print('Data loaded: {} \t shape: {}' .format(task_name, np.shape(bold[task_name])))
```
**Exercise 1:**<a id="ex1"></a> Inspect the data and report on the following details.
- Brain template
- Report the shape of `brain_nii`, `brain_mask`
- Visualize `brain_nii` and `brain_mask` by plotting the 30th slice along the Z dimension.
- Describe what `coords` refers to
- Visualize `coords` with a 3d plot. For this, only plot every 10th point, otherwise the plot will be slow to load.
- Brain data
- Inspect the shape of `bold`. How many subjects do we have for each task condition? Do different subjects have the same number of TRs/voxels?
```
# Insert code below
```
### 2.3 Compute ISC <a id="isc_compute"></a>
ISC is the correlation of each voxel's time series for a participant with the corresponding (anatomically aligned) voxel time series in the average of the other participants' brains. BrainIAK has functions for computing ISC by feeding in the concatenated participant data.
This will take about 10 minutes to complete.
```
# run ISC, loop over conditions
isc_maps = {}
for task_name in all_task_names:
isc_maps[task_name] = isc(bold[task_name], pairwise=False)
print('Shape of %s condition:' % task_name, np.shape(isc_maps[task_name]))
```
The output of ISC is a voxel by participant matrix (showing the result of each individual with the group). Below we will visualize the ISC matrix for one participant and condition back on to the brain to see where activity is correlated between participants.
```
# set params
subj_id = 0
task_name = 'intact1'
save_data = False
# Make the ISC output a volume
isc_vol = np.zeros(brain_nii.shape)
# Map the ISC data for the first participant into brain space
isc_vol[coords] = isc_maps[task_name][subj_id, :]
# make a nii image of the isc map
isc_nifti = nib.Nifti1Image(isc_vol, brain_nii.affine, brain_nii.header)
# Save the ISC data as a volume
if save_data:
isc_map_path = os.path.join(dir_out, 'ISC_%s_sub%.2d.nii.gz' % (task_name, subj_id))
nib.save(isc_nifti, isc_map_path)
# Plot the data as a statmap
threshold = .2
f, ax = plt.subplots(1,1, figsize = (12, 5))
plotting.plot_stat_map(
isc_nifti,
threshold=threshold,
axes=ax
)
ax.set_title('ISC map for subject {}, task = {}' .format(subj_id,task_name))
```
**Exercise 2:** <a id="ex2"></a> Visualize the averaged ISC map (averaged across participants) for each task.
- Make the averaged ISC map for the two conditions (intact, word-scrambled).
- Visualize them using `plotting.plot_stat_map`.
Make sure to compare the two maps using the same xyz cut, threshold and vmax.
```
# Insert code here
```
This analysis was performed in volumetric space; however, nilearn makes it easy to compare this data in surface space (assuming the alignment to MNI standard is excellent). Here's an example of surface plot.
```
# set some plotting params
subj_id = 0
task_name = 'intact1'
threshold = .2
view = 'medial'
# get a surface
fsaverage = datasets.fetch_surf_fsaverage5()
# Make the ISC output a volume
isc_vol = np.zeros(brain_nii.shape)
# Map the ISC data for the first participant into brain space
isc_vol[coords] = isc_maps[task_name][subj_id, :]
# make a nii image of the isc map
isc_intact_1subj = nib.Nifti1Image(isc_vol, brain_nii.affine, brain_nii.header)
# make "texture"
texture = surface.vol_to_surf(isc_intact_1subj, fsaverage.pial_left)
# plot
title_text = ('ISC map, {} for one participant'.format(task_name))
surf_map = plotting.plot_surf_stat_map(
fsaverage.infl_left, texture,
hemi='left', view=view,
title= title_text,
threshold=threshold, cmap='RdYlBu_r',
colorbar=True,
bg_map=fsaverage.sulc_left)
```
**Exercise 3:** <a id="ex3"></a> Visualize the averaged ISC map using surface plot.
- Visualize the average ISC maps using `plotting.plot_surf_stat_map` for:
- both conditions
- both `medial` view and `lateral` views
Make sure you are using the same threshold and vmax for all plots.
```
# Insert code here
```
**Exercise 4:** <a id="ex4"></a> Compare the averaged ISC map for the two task conditions. What are some brain regions showing stronger correlation in the intact story condition (vs. the word-level scramble condition)? What does this tell us about the processing of language?
Hint: The following [paper](https://doi.org/10.1523/JNEUROSCI.3684-10.2011) this work comes from will help.
**A:**
### 2.4 ISC with statistical tests <a id="isc_stats"></a>
BrainIAK provides several nonparametric statistical tests for ISC analysis ([Nastase et al., 2019](https://doi.org/10.1101/600114)). Nonparametric tests are preferred due to the inherent correlation structure across ISC values—each subject contributes to the ISC of other subjects, violating assumptions of independence required for standard parametric tests (e.g., t-test, ANOVA). We will use the permutation test below.
#### 2.4.1 Permutation test
Permutation tests are used to compute a null distribution of values. We have used permutation tests in previous notebooks and the steps outlined here are similar to what was done in prior notebooks with one small change, incorporating the group of subjects to compute the ISC:
1. Prepare the data. Here we have two conditions (intact and word_scramble), so we compute ISC for both conditions and concatenate the data for these two conditions for all subjects.
> We use leave-one-subject-out (`pairwise=False`) to compute ISC and then use these correlations `isc_maps_all_tasks` to compute statistics.
2. We are going to permute the condition label for each subject to simulate the randomization of conditions. To do this, we first need to assign subjects to the correct experimental conditions that they were in. We have prepared such a list of assignments when we loaded the data and stored the information in the variable: `group_assignment`.
3. The next steps are executed internally in BrainIAK in the function `permutation_isc`:
> - For each permutation iteration:
>> - BrainIAK permutes the group assignment for each subject.
>> - A mean of the ISC values is then computed for this shuffled
group for each condition.
>> - A difference of group means is computed between each condition.
> - The difference values for all iterations is collected and forms the null distribution.
4. Finally, we pick a threshold value that corresponds to `p` percent of this distribution.
`permutation_isc` returns the actual observed ISC values, p-values, and optionally the resampling distribution.
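The shuffle-and-recompute loop described in step 3 can be sketched directly for a single voxel (toy ISC values with a hypothetical effect size, not output of `permutation_isc`):

```
import numpy as np

rng = np.random.default_rng(0)
# Toy leave-one-out ISC values for one voxel: condition A runs hotter than B
isc_a = 0.4 + 0.05 * rng.standard_normal(18)
isc_b = 0.2 + 0.05 * rng.standard_normal(18)
values = np.concatenate([isc_a, isc_b])
groups = np.array([0] * 18 + [1] * 18)

observed = values[groups == 0].mean() - values[groups == 1].mean()

# Build the null distribution by shuffling condition labels across subjects
n_permutations = 1000
null = np.empty(n_permutations)
for i in range(n_permutations):
    shuffled = rng.permutation(groups)
    null[i] = values[shuffled == 0].mean() - values[shuffled == 1].mean()

# Two-sided p-value, with the +1 correction so p is never exactly zero
p = (1 + np.sum(np.abs(null) >= np.abs(observed))) / (1 + n_permutations)
print(observed, p)
```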
```
# Concatenate ISCs from both tasks
isc_maps_all_tasks = np.vstack([isc_maps[task_name] for
task_name in all_task_names])
print('group_assignment: {}'.format(group_assignment))
print('isc_maps_all_tasks: {}' .format(np.shape(isc_maps_all_tasks)))
# permutation testing
n_permutations = 1000
summary_statistic='mean'
observed, p, distribution = permutation_isc(
isc_maps_all_tasks,
pairwise=False,
group_assignment=group_assignment,
summary_statistic=summary_statistic,
n_permutations=n_permutations
)
p = p.ravel()
observed = observed.ravel()
print('observed:{}'.format(np.shape(observed)))
print('p:{}'.format(np.shape(p)))
print('distribution: {}'.format(np.shape(distribution)))
```
**Exercise 5:** <a id="ex5"></a> Interpret the results from the permutation test.
- What's the logic of the permutation test?
- What are the outputs `observed`, `p`, `distribution`
- Visualize the correlation contrast map (e.g. intact > word-level scramble) with a significance criterion (e.g. p < .005). Which region(s) showed higher ISC under the chosen contrast?
**A:**
## 3. ISFC <a id="isfc"></a>
The goal of ISFC is to find coupling between brain regions across participants. For example the angular gyrus in subject 1 could be correlated to the pre-frontal cortex in subject 2, if they share some cognitive state. For completely random cognitive states across these two subjects, the correlation should be zero. ISFC helps us identify such commonalities across subjects.
In this section, we will compare functional connectivity vs. ISFC on the Pieman data. Whereas FC is computed within individuals, ISFC is computed between individuals. Hence the only correlations that should be robust in ISFC are those that are present across individuals. At the end of the exercises, you will qualitatively replicate [Simony et al. (2016)](https://doi.org/10.1038/ncomms12141), showing that ISFC is sensitive to the cognitive state of the participants.
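The leave-one-out logic behind ISFC can be sketched in plain NumPy. This is a simplified illustration, not BrainIAK's implementation — `brainiak.isc.isfc` handles summary statistics, vectorization, and edge cases for you; the function name `isfc_loo` and the array shapes below are assumptions made for the sketch:

```python
import numpy as np

def isfc_loo(data):
    """Leave-one-out ISFC sketch.  data: (n_TRs, n_rois, n_subjects).
    Each subject's ROI time series are correlated with the average time
    series of all *other* subjects, giving one (n_rois, n_rois) matrix
    per subject; these are summarized with the median."""
    n_trs, n_rois, n_subj = data.shape
    mats = np.zeros((n_subj, n_rois, n_rois))
    for s in range(n_subj):
        others = np.delete(data, s, axis=2).mean(axis=2)  # (n_TRs, n_rois)
        # corrcoef stacks the (n_rois, n_TRs) row blocks; keep the
        # subject-vs-group quadrant of the full correlation matrix
        full = np.corrcoef(data[:, :, s].T, others.T)
        mats[s] = full[:n_rois, n_rois:]
    return np.median(mats, axis=0)

isfc_mat = isfc_loo(np.random.randn(50, 4, 6))
```

Unlike ordinary FC (computed within each individual), only signal shared across individuals survives this subject-vs-group correlation.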
### 3.1 Parcel the data <a id="isfc_parcel"></a>
ISFC in voxel space is very computationally intensive, so for this notebook we will divide the brain into a smaller number of parcels. We are going to use predefined ROI masks to select the voxels.
```
# load a parcel
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm', symmetric_split=True)
plotting.plot_roi(atlas.maps, title='the harvard-oxford parcel')
n_regions = len(atlas.labels)-1 # rm background region
n_TRs = np.shape(bold[task_name])[0]
print('number of voxels:\t {}'.format(np.shape(bold[task_name][1])))
print('number of parcels:\t {}'.format(n_regions))
```
Convert the bold data into ROI parcels
```
# Get a masker for the atlas
masker_ho = NiftiLabelsMasker(labels_img=atlas.maps)
# Transform the data to the parcel space
bold_ho = {
task_name:np.zeros((n_TRs, n_regions, n_subjs[task_name]))
for task_name in all_task_names}
# Collect all data
row_has_nan = np.zeros(shape=(n_regions,), dtype=bool)
for task_name in all_task_names:
for subj_id in range(n_subjs[task_name]):
# get the data for task t, subject s
nii_t_s = nib.load(fnames[task_name][subj_id])
bold_ho[task_name][:,:,subj_id] = masker_ho.fit_transform(nii_t_s)
# figure out missing rois
row_has_nan_ = np.any(np.isnan(bold_ho[task_name][:,:,subj_id]),axis=0)
row_has_nan[row_has_nan_] = True
# Figure out which ROI has missing values
roi_select = np.logical_not(row_has_nan)
n_roi_select = np.sum(roi_select)
rois_filtered = np.array(atlas.labels[1:])[roi_select]
bold_ho_filtered = {
task_name:np.zeros((n_TRs, n_roi_select, n_subjs[task_name]))
for task_name in all_task_names
}
# Remove ROIs with missing values
for task_name in all_task_names:
for subj_id in range(n_subjs[task_name]):
bold_ho_filtered[task_name][:,:,subj_id] = bold_ho[task_name][:,roi_select,subj_id]
print('ROI selected\n {}'.format(rois_filtered))
print('ROI removed due to missing values :( \n {}'.format(np.array(atlas.labels[1:])[row_has_nan]))
```
### 3.2 Compute FC and ISFC <a id="fc_isfc"></a>
Here we compute FC and ISFC on the parcellated data.
```
# Compute FC
fc_maps = {
task_name:np.zeros((n_roi_select,n_roi_select))
for task_name in all_task_names
}
for task_name in all_task_names:
for subj_id in range(n_subjs[task_name]):
fc_maps[task_name] += np.corrcoef(
bold_ho_filtered[task_name][:,:,subj_id].T
) / n_subjs[task_name]
np.fill_diagonal(fc_maps[task_name], np.nan)
# Compute ISFC
isfc_maps_ho = {}
for task_name in all_task_names:
isfc_maps_ho[task_name] = isfc(data=bold_ho_filtered[task_name],
summary_statistic='median',
vectorize_isfcs=False)
```
**Exercise 6:** <a id="ex6"></a> Visualize the FC/ISFC matrices, averaged across subjects
- Use `imshow` to visualize the 4 correlation matrices: (FC vs. ISFC) x (word-level scrambled vs. intact story)
- Mark the rows (or columns) with the ROI labels
```
# Insert code here
```
**Exercise 7:** <a id="ex7"></a> Visualize FC/ISFC connectivity strength on glass brains
- Use `plot_connectome` to visualize the 4 correlation matrices: (FC vs. ISFC) x (word-level scrambled vs. intact story)
- Use common `edge_threshold` for all plots.
*Hint:* The plot_connectome function takes as an input a correlation matrix (such as the FC one plotted above) and also a set of coordinates that define the XYZ coordinates that corresponds to each column/row of the correlation matrix. In order to get the coordinates of these ROIs, we recommend you use: `plotting.find_parcellation_cut_coords(atlas.maps)[roi_select]`
```
# Insert code here
```
**Exercise 8:** <a id="ex8"></a> Do FC maps look different across conditions? How about ISFC? And why? Hint: consult this [paper](https://doi.org/10.1038/ncomms12141).
**A:**
## 4. Spatial pattern correlation across subjects <a id="spat_corr"></a>
### 4.1 Spatial inter-subject correlation <a id="spatial_isc"></a>
<br>
We can apply the idea of inter-subject analysis to RSA. So far, ISC has been computed between aligned pairs of voxels across time points; this is commonly referred to as temporal ISC. However, we could instead correlate between aligned pairs of time points across voxels: how does the pattern of activity across voxels at one time point in one participant correlate with the average pattern of the other participants at that time point? Doing this for each time point generates a time course of correlations that tracks the general ebb and flow of coupling in brain activity across participants. In practice this only requires a simple [transposition](https://lihan.me/2018/01/numpy-reshape-and-transpose/) of the voxel and time dimensions: if the data are in the format (TRs, voxels, subjects), `data.transpose(1, 0, 2)` yields an array in the format (voxels, TRs, subjects).
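A quick sketch of the reshaping step (the array sizes here are hypothetical):

```python
import numpy as np

# hypothetical data: 300 TRs x 100 voxels x 18 subjects
data = np.random.randn(300, 100, 18)

# swap the TR and voxel axes so that isc() now correlates spatial
# patterns (across voxels) at each aligned time point
data_spatial = data.transpose(1, 0, 2)
print(data_spatial.shape)  # (100, 300, 18)
```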
#### Compare the two task conditions with spatial ISC
One way to compare the intact and word_scramble conditions is to plot the correlation values, by TR, for each condition. Again, we can use the same ISC functions from above after transposing the data matrices.
```
# Get a list of ROIs.
roi_mask_path = os.path.join(pieman2_dir,'masks','rois')
all_roi_fpaths = glob.glob(os.path.join(roi_mask_path, '*.nii.gz'))
# Collect all ROIs
all_roi_names = []
all_roi_nii = {}
all_roi_masker = {}
for roi_fpath in all_roi_fpaths:
# Compute ROI name
roi_fname = os.path.basename(roi_fpath)
roi_name = roi_fname.split('.')[0]
all_roi_names.append(roi_name)
# Load roi nii file
roi_nii = nib.load(roi_fpath)
all_roi_nii[roi_name] = roi_nii
# Make roi maskers
all_roi_masker[roi_name] = NiftiMasker(mask_img=roi_nii)
print('Path to all roi masks: {}'.format(roi_mask_path))
print('Here are all ROIs:\n{}'.format(all_roi_names))
# Make a function to load data for one ROI
def load_roi_data(roi_name):
# Pick a roi masker
roi_masker = all_roi_masker[roi_name]
# Preallocate
bold_roi = {task_name:[] for i, task_name in enumerate(all_task_names)}
# Gather data
for task_name in all_task_names:
for subj_id in range(n_subjs[task_name]):
# Get the data for task t, subject s
nii_t_s = nib.load(fnames[task_name][subj_id])
bold_roi[task_name].append(roi_masker.fit_transform(nii_t_s))
# Reformat the data to std form
bold_roi[task_name] = np.transpose(np.array(bold_roi[task_name]), [1,2,0])
return bold_roi
```
Compute spatial ISC on some ROIs.
```
roi_selected = ['dPCC', 'vPCUN', 'V1']
roi_selected_names = ['dorsal posterior cingulate cortex', 'ventral precuneus', 'primary visual cortex']
# compute sISC for all ROIs
iscs_roi_selected = []
for j, roi_name in enumerate(roi_selected):
print(j, roi_name)
# Load data
bold_roi = load_roi_data(roi_name)
# Compute isc
iscs_roi = {}
for task_name in all_task_names:
iscs_roi[task_name] = isc(np.transpose(bold_roi[task_name], [1,0,2]))
iscs_roi_selected.append(iscs_roi)
# Plot the spatial ISC over time
col_pal = sns.color_palette(palette='colorblind', n_colors=len(all_task_names))
ci = 95
f, axes = plt.subplots(len(roi_selected), 1, figsize=(14, 5 * len(roi_selected)), sharex=True)
# For each ROI
for j, roi_name in enumerate(roi_selected):
# For each task
for i, task_name in enumerate(all_task_names):
sns.tsplot(
iscs_roi_selected[j][task_name],
color=col_pal[i], ci=ci,
ax=axes[j]
)
f.legend(all_task_des)
sns.despine()
# Label the plot
for j, roi_name in enumerate(roi_selected):
axes[j].axhline(0, color='black', linestyle='--', alpha=.3)
axes[j].set_ylabel('Linear correlation')
axes[j].set_title('Spatial inter-subject correlation, {}'. format(roi_selected_names[j]))
axes[-1].set_xlabel('TRs')
```
**Exercise 9:**<a id="ex9"></a> Interpret the spatial ISC results you observed above.
**A:**
**Novel contribution:**<a id="novel"></a> be creative and make one new discovery by adding an analysis, visualization, or optimization.
Here are some ideas:
- The ISC package in BrainIAK supports other statistical tests. Study one of them, describe the logic behind it, and re-run the analysis with that test.
- Conduct a sliding window spatial ISC analysis.
- Perform some clustering on the ISFC matrices. Use the clustering (instead of the anatomical parcel) to re-analyze the data, and compare ISFC vs. FC across tasks. See [here](http://scikit-learn.org/stable/modules/clustering.html) for a good resource on clustering algorithms.
### Contributions<a id="contributions"></a>
E. Simony and U. Hasson for providing data
C. Baldassano and C. Chen provided initial code
M. Kumar, C. Ellis and N. Turk-Browne produced the initial notebook 4/4/18
S. Nastase enhanced the ISC brainiak module; added the section on statistical testing
Q. Lu added solutions; switched to S. Nastase's ISC module; replicated Lerner et al 2011 & Simony et al. 2016; added spatial ISC.
M. Kumar edits to section introductions and explanation on permutation test.
K.A. Norman provided suggestions on the overall content and made edits to this notebook.
C. Ellis incorporated edits from cmhn-s19
# Abnormality Detection in Musculoskeletal Radiographs
The objective is to build a machine learning model that can detect abnormalities in X-ray radiographs. Such models can help provide healthcare access in parts of the world where access to skilled radiologists is limited. A study on the Global Burden of Disease and the worldwide impact of all diseases found that "musculoskeletal conditions affect more than 1.7 billion people worldwide. They are the 2nd greatest cause of disabilities, and have the 4th greatest impact on the overall health of the world population when considering both death and disabilities" (www.usbji.org, n.d.).
This project implements a deep neural network using DenseNet169, inspired by the Stanford MURA paper (Rajpurkar et al., 2018).
## XR_WRIST Study Type
## Phase 3: Data Preprocessing
As per the paper, I normalized each image to have the same mean and standard deviation as the images in the ImageNet training set. The paper scales the variable-sized images to 320 x 320, but I chose 224 x 224 instead. I then augmented the data during training by applying random lateral inversions and rotations of up to 30 degrees using Keras' `ImageDataGenerator`.
```
from keras.applications.densenet import DenseNet169, DenseNet121, preprocess_input
from keras.preprocessing.image import ImageDataGenerator, load_img, image
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, MaxPool2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, Callback
from keras import regularizers
import pandas as pd
from tqdm import tqdm
import os
import numpy as np
import random
from keras.optimizers import Adam
import keras.backend as K
import cv2
import matplotlib.pyplot as plt
```
### 3.1 Data preprocessing
```
#Utility function to find the list of files in a directory excluding the hidden files.
def listdir_nohidden(path):
for f in os.listdir(path):
if not f.startswith('.'):
yield f
```
### 3.1.1 Creating a csv file containing path to image & csv
```
def create_images_metadata_csv(category,study_types):
"""
This function creates a csv file containing the path of images, label.
"""
image_data = {}
study_label = {'positive': 1, 'negative': 0}
#study_types = ['XR_ELBOW','XR_FINGER','XR_FOREARM','XR_HAND','XR_HUMERUS','XR_SHOULDER','XR_WRIST']
#study_types = ['XR_ELBOW']
i = 0
image_data[category] = pd.DataFrame(columns=['Path','Count', 'Label'])
for study_type in study_types: # Iterate throught every study types
DATA_DIR = 'data/MURA-v1.1/%s/%s/' % (category, study_type)
patients = list(os.walk(DATA_DIR))[0][1] # list of patient folder names
for patient in tqdm(patients): # for each patient folder
for study in os.listdir(DATA_DIR + patient): # for each study in that patient folder
if(study != '.DS_Store'):
label = study_label[study.split('_')[1]] # get label 0 or 1
path = DATA_DIR + patient + '/' + study + '/' # path to this study
for j in range(len(list(listdir_nohidden(path)))):
image_path = path + 'image%s.png' % (j + 1)
image_data[category].loc[i] = [image_path,1, label] # add new row
i += 1
image_data[category].to_csv(category+"_image_data.csv",index = None, header=False)
#New function create image array by study level
def getImagesInArrayNew(train_dataframe):
images = []
labels = []
for i, data in tqdm(train_dataframe.iterrows()):
img = cv2.imread(data['Path'])
# #random rotation
# angle = random.randint(-30,30)
# M = cv2.getRotationMatrix2D((img_width/2,img_height/2),angle,1)
# img = cv2.warpAffine(img,M,(img_width,img_height))
#resize
img = cv2.resize(img,(img_width,img_height))
img = img[...,::-1].astype(np.float32)
images.append(img)
labels.append(data['Label'])
images = np.asarray(images).astype('float32')
#normalization
mean = np.mean(images[:, :, :])
std = np.std(images[:, :, :])
images[:, :, :] = (images[:, :, :] - mean) / std
labels = np.asarray(labels)
return {'images': images, 'labels': labels}
```
#### 3.1.1.1 Variables intialization
```
img_width, img_height = 224, 224
#Keras ImageDataGenerator to load, transform the images of the dataset
BASE_DATA_DIR = 'data/'
IMG_DATA_DIR = 'MURA-v1.1/'
```
### 3.1.2 XR_WRIST ImageDataGenerators
I am going to train a model for every study type and ensemble them. Hence I am preparing data per study type for the model to be trained on.
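The ensembling step itself could be as simple as averaging the sigmoid outputs of the per-study-type models and thresholding; a hedged sketch (the function and variable names are illustrative, not from this notebook):

```python
import numpy as np

def ensemble_predict(per_model_probs):
    """Average per-image sigmoid outputs across study-type models,
    then threshold at 0.5.  per_model_probs: list of aligned arrays."""
    avg = np.mean(np.stack(per_model_probs), axis=0)
    return (avg > 0.5).astype(int)

# two hypothetical models scoring the same two images
preds = ensemble_predict([np.array([0.9, 0.2]), np.array([0.7, 0.4])])
print(preds)  # [1 0]
```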
```
train_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'train/XR_WRIST'
valid_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'valid/XR_WRIST'
train_datagen = ImageDataGenerator(
rotation_range=30,
horizontal_flip=True
)
test_datagen = ImageDataGenerator(
rotation_range=30,
horizontal_flip=True
)
study_types = ['XR_WRIST']
create_images_metadata_csv('train',study_types)
create_images_metadata_csv('valid',study_types)
valid_image_df = pd.read_csv('valid_image_data.csv', names=['Path','Count', 'Label'])
train_image_df = pd.read_csv('train_image_data.csv', names=['Path', 'Count','Label'])
dd={}
dd['train'] = train_image_df
dd['valid'] = valid_image_df
valid_dict = getImagesInArrayNew(valid_image_df)
train_dict = getImagesInArrayNew(train_image_df)
train_datagen.fit(train_dict['images'],augment=True)
test_datagen.fit(valid_dict['images'],augment=True)
validation_generator = test_datagen.flow(
x=valid_dict['images'],
y=valid_dict['labels'],
batch_size = 1
)
train_generator = train_datagen.flow(
x=train_dict['images'],
y=train_dict['labels']
)
```
### 3.2 Building a model
As per the MURA paper, I replaced the fully connected layer with one that has a single output, followed by a sigmoid nonlinearity. The paper optimizes a weighted binary cross entropy loss:
$$
L(X, y) = -w_{T,1} \cdot y \log p(Y=1|X) - w_{T,0} \cdot (1-y) \log p(Y=0|X)
$$
where $p(Y=1|X)$ is the probability the network assigns to the abnormal label, $w_{T,1} = |N_T| / (|A_T| + |N_T|)$, and $w_{T,0} = |A_T| / (|A_T| + |N_T|)$, with $|A_T|$ and $|N_T|$ the numbers of abnormal and normal images of study type $T$ in the training set, respectively.
I chose to use the default binary cross entropy instead. The network is trained with Adam using default parameters, a batch size of 8, and an initial learning rate of 0.0001 that is decayed by a factor of 10 each time the validation loss plateaus after an epoch.
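For reference, the paper's weighted loss could be sketched in plain NumPy. This illustrates the formula only — the notebook itself trains with the default binary cross entropy, and `weighted_bce` plus the example class counts below are assumptions:

```python
import numpy as np

def weighted_bce(y_true, y_pred, n_abnormal, n_normal, eps=1e-7):
    """Weighted binary cross entropy from the MURA paper:
    L = -w1 * y * log(p) - w0 * (1 - y) * log(1 - p),
    with w1 = |N_T| / (|A_T| + |N_T|) and w0 = |A_T| / (|A_T| + |N_T|)."""
    total = n_abnormal + n_normal
    w1 = n_normal / total    # up-weights the rarer abnormal (positive) class
    w0 = n_abnormal / total  # down-weights the more common normal class
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return np.mean(-w1 * y_true * np.log(y_pred)
                   - w0 * (1 - y_true) * np.log(1 - y_pred))

# hypothetical counts: 100 abnormal vs. 300 normal wrist images
loss = weighted_bce(np.array([1.0, 0.0]), np.array([0.9, 0.1]), 100, 300)
```

Because the weights sum to 1 and depend only on the class counts, mistakes on the rarer class cost proportionally more.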
### 3.2.1 Model parameters
```
#model parameters for training
#K.set_learning_phase(1)
nb_train_samples = len(train_dict['images'])
nb_validation_samples = len(valid_dict['images'])
epochs = 10
batch_size = 8
steps_per_epoch = nb_train_samples//batch_size
print(steps_per_epoch)
n_classes = 1
def build_model():
base_model = DenseNet169(input_shape=(None, None,3),
weights='imagenet',
include_top=False,
pooling='avg')
# i = 0
# total_layers = len(base_model.layers)
# for layer in base_model.layers:
# if(i <= total_layers//2):
# layer.trainable = True
# i = i+1
x = base_model.output
predictions = Dense(n_classes,activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)
return model
model = build_model()
#Compiling the model
model.compile(loss="binary_crossentropy", optimizer='adam', metrics=['acc', 'mse'])
#callbacks for early stopping incase of reduced learning rate, loss unimprovement
early_stop = EarlyStopping(monitor='val_loss', patience=8, verbose=1, min_delta=1e-4)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=1, verbose=1, min_lr=0.0001)
callbacks_list = [early_stop, reduce_lr]
```
### 3.2.2 Training the Model
```
#train the model
model_history = model.fit_generator(
train_generator,
epochs=epochs,
workers=0,
use_multiprocessing=False,
steps_per_epoch = nb_train_samples//batch_size,
validation_data=validation_generator,
validation_steps=nb_validation_samples //batch_size,
callbacks=callbacks_list
)
model.save("densenet_mura_rs_v3_xr_wrist.h5")
```
### 3.2.3 Visualizing the model
```
#There was a bug in keras to use pydot in the vis_utils class. In order to fix the bug, i had to comment out line#55 in vis_utils.py file and reload the module
#~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/utils
from keras.utils import plot_model
from keras.utils.vis_utils import *
import keras
import importlib
importlib.reload(keras.utils.vis_utils)
import pydot
plot_model(model, to_file='images/densenet_archi_xr_shoulder_v3.png', show_shapes=True)
```
### 3.3 Performance Evaluation
```
#Now that we have trained our model, we can see the metrics during the training process
plt.figure(0)
plt.plot(model_history.history['acc'],'r')
plt.plot(model_history.history['val_acc'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.figure(1)
plt.plot(model_history.history['loss'],'r')
plt.plot(model_history.history['val_loss'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.figure(2)
plt.plot(model_history.history['mean_squared_error'],'r')
plt.plot(model_history.history['val_mean_squared_error'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("MSE")
plt.title("Training MSE vs Validation MSE")
plt.legend(['train','validation'])
plt.show()
#Now we evaluate the trained model with the validation dataset and make a prediction.
#The predicted class is obtained by thresholding the sigmoid output at 0.5.
ev = model.evaluate_generator(validation_generator, steps=(nb_validation_samples //batch_size)+1, workers=0, use_multiprocessing=False)
ev[1]
#pred = model.predict_generator(validation_generator, steps=1, batch_size=1, use_multiprocessing=False, max_queue_size=25, verbose=1)
validation_generator.reset()
#pred = model.predict_generator(validation_generator,steps=nb_validation_samples)
pred_batch = model.predict_on_batch(valid_dict['images'])
predictions = []
for p in pred_batch:
if(p > 0.5):
predictions+=[1]
else:
predictions+=[0]
error = np.sum(np.not_equal(predictions, valid_dict['labels'])) / valid_dict['labels'].shape[0]
pred = predictions
print('Confusion Matrix')
from sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score
import seaborn as sn
cm = confusion_matrix(valid_dict['labels'], pred)  # sklearn expects (y_true, y_pred)
plt.figure(figsize = (30,20))
sn.set(font_scale=1.4) #for label size
sn.heatmap(cm, annot=True, annot_kws={"size": 20},cmap="YlGnBu") # font size
plt.show()
print()
print('Classification Report')
print(classification_report(valid_dict['labels'], pred, target_names=["0","1"]))
from sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score
cohen_kappa_score(valid_dict['labels'], pred)
```
### ROC Curve
```
from sklearn.metrics import roc_curve
fpr_keras, tpr_keras, thresholds_keras = roc_curve(valid_dict['labels'], pred_batch)
from sklearn.metrics import auc
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
plt.figure(2)
plt.xlim(0.0, 0.2)
plt.ylim(0.65, 0.9)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
```
# D-optimal experiment design: comparing ABPG and Frank-Wolfe
Solve the D-Optimal experiment design problem
$$
\begin{array}{ll}
\textrm{minimize} & F(x):=-\log\left(\det\left(\sum_{i=1}^n x_i V_i V_i^T\right)\right) \\
\textrm{subject to} & \sum_{i=1}^n x_i = 1, \\
& x_i\geq 0, \quad i=1,\ldots,n
\end{array}
$$
where $V_i\in R^m$ for $i=1,\ldots,n$.
Methods compared:
* Original Frank-Wolfe method
* Frank-Wolfe method with away steps
* Bregman Proximal Gradient (BPG) method with adaptive line search
* Accelerated Bregman Proximal Gradient (ABPG) method with gain adaptation
```
cd C:\\github\accbpg
import numpy as np
import accbpg
def compare_FW_ABPG(m, n, Nmax, Nskip):
f, h, L, x0Kh = accbpg.D_opt_design(m, n)
x0KY = accbpg.D_opt_KYinit(f.H)
x0Mx = (1-1e-3)*x0KY + 1e-3*x0Kh
_, F_FWKh, _, _, T_FWKh = accbpg.D_opt_FW(f.H, x0Kh, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_FWKY, _, _, T_FWKY = accbpg.D_opt_FW(f.H, x0KY, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_WAKh, _, _, T_WAKh = accbpg.D_opt_FW_away(f.H, x0Kh, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_WAKY, _, _, T_WAKY = accbpg.D_opt_FW_away(f.H, x0KY, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_LSKh, _, T_LSKh = accbpg.BPG(f, h, L, x0Kh, maxitrs=Nmax, linesearch=True, ls_ratio=1.5, verbskip=Nskip)
_, F_LSKY, _, T_LSKY = accbpg.BPG(f, h, L, x0Mx, maxitrs=Nmax, linesearch=True, ls_ratio=1.5, verbskip=Nskip)
_, F_ABKh, _, _, _, T_ABKh = accbpg.ABPG_gain(f, h, L, x0Kh, gamma=2, maxitrs=Nmax, ls_inc=1.5, ls_dec=1.5, restart=True, verbskip=Nskip)
_, F_ABKY, _, _, _, T_ABKY = accbpg.ABPG_gain(f, h, L, x0Mx, gamma=2, maxitrs=Nmax, ls_inc=1.5, ls_dec=1.5, restart=True, verbskip=Nskip)
f_vals = [F_FWKh, F_FWKY, F_WAKh, F_WAKY, F_LSKh, F_LSKY, F_ABKh, F_ABKY]
t_vals = [T_FWKh, T_FWKY, T_WAKh, T_WAKY, T_LSKh, T_LSKY, T_ABKh, T_ABKY]
return f_vals, t_vals
import matplotlib
import matplotlib.pyplot as plt
# Plot required number of iterations and time
matplotlib.rcParams.update({'font.size': 14, 'font.family': 'serif'})
matplotlib.rcParams.update({'text.usetex': True})
labels = [r"FW", r"FW KY", r"FW-away", r"FW-away KY", r"BPG-LS", r"BPG-LS KY", r"ABPG-g", r"ABPG-g KY"]
linestyles=['g-', 'g--', 'k-', 'k--', 'b-.', 'b:', 'r-', 'r--']
F1, T1 = compare_FW_ABPG(100, 10000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F1, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-2000, 100000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
T1 = [ts - ts.min() + 1e-3 for ts in T1]
accbpg.plot_comparisons(ax2, F1, labels, x_vals=T1, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 60], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m100n10000.pdf", bbox_inches="tight")
F2, T2 = compare_FW_ABPG(100, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F2, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-400, 40000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F2, labels, x_vals=T2, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 30], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m100n1000.pdf", bbox_inches="tight")
F3, T3 = compare_FW_ABPG(300, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F3, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-100, 5000], ylim=[1e-8,100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F3, labels, x_vals=T3, plotdiff=True, yscale="log", xscale="linear", xlim=[-0.2, 60], ylim=[1e-8, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m300n1000.pdf", bbox_inches="tight")
#F4, T4 = compare_FW_ABPG(800, 2500, 100000, 10000)
F4, T4 = compare_FW_ABPG(400, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F4, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-50, 2000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F4, labels, x_vals=T4, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 60], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m400n1000.pdf", bbox_inches="tight")
F5, T5 = compare_FW_ABPG(500, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F5, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-100, 1500], ylim=[1e-6, 2],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F5, labels, x_vals=T5, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 12], ylim=[1e-6, 2],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower center", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m500n1000.pdf", bbox_inches="tight")
F6, T6 = compare_FW_ABPG(350, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F6, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-50, 2500], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F6, labels, x_vals=T6, plotdiff=True, yscale="log", xscale="linear", xlim=[-0.1, 10], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m350n1000.pdf", bbox_inches="tight")
```
# Dictionaries
The third and final new type of variable we'll introduce here is the **dictionary**. Like dictionaries where you look up words and are provided with a definition of that word, Python dictionaries store two pieces of information. These two pieces are referred to as the **key** and its **value**. Dictionaries are a collection of **key-value pairs**.
<div class="alert alert-success">
A <b>dictionary</b> is a mutable collection of items, possibly of mixed type, that are stored as key-value pairs.
</div>
## Defining a dictionary
Specifically, dictionaries are defined using curly brackets `{}`. Within the curly brackets are key-value pairs. Each key-value pair is created using a colon (`:`) between the key and its value. And, separate key-value pairs are stored by separating each key-value pair, using a comma (`,`).
Here we create `my_dictionary` which stores two key-value pairs.
```
# Create a dictionary
my_dictionary = {'key_1' : 'value_1', 'key_2' : 'value_2'}
```
All of our now-familiar operations that we use for lists and tuples are also helpful when operating on dictionaries.
For example, we can use `print()` to retrieve the contents of the dictionary:
```
# Check the contents of the dictionary
print(my_dictionary)
```
We can use `type()` to return the variable type. Note that dictionaries return 'dict' as their variable type:
```
# Check the type of the dictionary
type(my_dictionary)
```
And, we can use `len()` to return the number of key-value pairs stored in a given dictionary:
```
# Dictionaries also have a length
# length refers to how many pairs there are
len(my_dictionary)
```
## Indexing: dictionaries
As with lists and tuples, indexing occurs using square brackets `[]`. However, dictionaries are unique in that they are indexed by their keys. When a specific *key* is indexed, the *value* stored in that key is returned.
For example, if we index to specify we want information from 'key_1', note that 'value_1' is returned. This is because we index *by* keys to return the *values* stored in those keys:
```
# Dictionaries are indexed using their keys
my_dictionary['key_1']
```
## Uses: dictionaries
Dictionaries are particularly helpful when you want to store related pieces of information. For example, if you had a list of names and their respective emails, you would want to store these two pieces of information in such a way that you know which email address belongs to which individual. A dictionary is perfect for this! The names of the individuals therefore become the keys and their respective emails the values:
```
student_emails = {
'Betty Jennings' : 'bjennings@eniac.org',
'Ada Lovelace' : 'ada@analyticengine.com',
'Alan Turing' : 'aturing@thebomb.gov',
'Grace Hopper' : 'ghopper@navy.usa'
}
student_emails
```
Dictionaries, like lists, are *mutable*: once a dictionary is created, its values *can* be updated.
For example, if you wanted to store information about students' attendance for a particular lab, you could store `True` for all students who attended and `False` for all who failed to attend.
```
# create a dictionary of student attendance
lab_attendance = {
'A1234' : True,
'A5678' : False,
'A9123' : True
}
lab_attendance
```
If later on you were made aware that the student with the ID 'A5678' did in fact attend lab, you could update the value stored for that key.
This occurs as we've seen with other types of collections: the key is indexed and the value for that key is then assigned. The distinction here is that the *value* for the specified key is updated, not the key itself.
```
# change value of specified key
lab_attendance['A5678'] = True
lab_attendance
```
With this change, all three keys now store the value `True`.
## Key Deletion
Because dictionaries are mutable, key-value pairs can also be removed from the dictionary using `del`.
In our example above, if student 'A5678' dropped the course, they could be dropped from the dictionary as well. The resulting dictionary now has only two key-value pairs.
```
print(lab_attendance)
len(lab_attendance)
## remove key-value pair using del
del lab_attendance['A5678']
print(lab_attendance)
len(lab_attendance)
```
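A closely related tool is the built-in `pop` method, which removes a key and hands back its value in one step. A small sketch, again re-creating the dictionary so the snippet stands alone:

```python
lab_attendance = {'A1234': True, 'A5678': False, 'A9123': True}

# pop removes the key-value pair and returns the value
was_present = lab_attendance.pop('A5678')
print(was_present)          # False
print(len(lab_attendance))  # 2
```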
## Operators: dictionaries
The operators we've discussed previously can be used when working with dictionaries.
To determine if a specified *key* is present in a dictionary we can again use the `in` operator:
```
if 'A1234' in lab_attendance:
print('Yes, that student is in this class')
```
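Note that `in` checks *keys*, not values. And if you want to look a value up without risking a `KeyError` for a missing key, the `get` method returns a default instead. A small sketch:

```python
lab_attendance = {'A1234': True, 'A5678': False, 'A9123': True}

print('A1234' in lab_attendance)           # True:  'in' searches the keys
print(True in lab_attendance)              # False: values are not searched
print(lab_attendance.get('A0000', False))  # False: default for a missing key
```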
## Dictionary Properties
When storing key-value pairs in dictionaries, there are a number of additional rules and properties that are important to understand in order to use dictionaries effectively:
**Property #1**
- Only one value per key. No duplicate keys allowed.
- If duplicate keys specified during assignment, the last assignment wins.
In this example here, there are three key-value pairs specified in the creation of the dictionary; however, only the last key-value pair is stored:
```
# Last duplicate key assigned wins
{'Student' : 97, 'Student': 88, 'Student' : 91}
```
**Property #2**
- **keys** must be of an immutable type (string, tuple, integer, float, etc)
- Note: **values** can be of any type
Here, this code produces an error, as the dictionary attempts to use a *mutable* type for the dictionary's key:
```
# lists are not allowed as key types
# this code will produce an error
{['Student'] : 97}
```
**Property #3**
- Dictionary keys are case sensitive.
As with everything in code, capital and lowercase letters are distinct characters, so case matters: the key 'Student' and the key 'STUDENT' are two distinct keys and will be treated as such when the dictionary is created.
```
{'Student' : 97, 'student': 88, 'STUDENT' : 91}
```
## Exercises
Q1. **Which of the following would create a dictionary of length 3?**
A) `{'Student_1' : 97, 'Student_2'}`
B) `{'Student_1', 'Student_2', 'Student_3'}`
C) `['Student_1' : 97, 'Student_2': 88, 'Student_3' : 91]`
D) `{'Student_1' : 97, 'Student_2': 88, 'Student_3' : 91}`
E) `('Student_1' : 97, 'Student_2': 88, 'Student_3' : 91)`
Q2. **Fill in the '---' in the code below to return the value stored in the second key.**
```python
height_dict = {'height_1' : 60, 'height_2': 68, 'height_3' : 65, 'height_4' : 72}
height_dict[---]
```
Q3. **Write the code that would create a dictionary `car` that stores values about your dream car's `make`, `model`, and `year`.**
Q4. **What would the value of `result` be after this code has executed?**
```python
dictionary = {'alpha' : [8, 12],
'beta' : [13, 30],
'theta' : [4, 8]}
check = 10
for item in dictionary:
temp = dictionary[item]
if temp[0] <= check <= temp[1]:
result = item
```
Q5. **Why does the following code produce an error?**
```python
student_emails = {
'Betty Jennings' : 'bjennings@eniac.org',
'Ada Lovelace' : ['ada@analyticengine.com'],
'Ada Lovelace' : 'aturing@thebomb.gov',
['Grace Hopper'] : 'ghopper@navy.usa'
}
```
A) duplicate keys
B) mutable key specified
C) keys are case sensitive
D) mutable value specified
# <center> Practical assignments in digital signal processing </center>
# <center> Lab 4 </center>
# <center> Acoustic features </center>
```
from glob import glob
import hashlib
import IPython.display as ipd
import os
import librosa
import librosa.display
import librosa.filters
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import scipy
import scipy.fft
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
# Helper function for plotting an audio signal.
def draw_waveform(wav, sr, figsize=(14, 5)):
# Plot the audio signal in the time domain
plt.figure(figsize=figsize)
plt.grid(True)
librosa.display.waveplot(wav, sr=sr)
plt.show()
# For this assignment we will need the yes/no dataset.
# You can read about the dataset here: https://www.openslr.org/1/
# Let's download it:
!rm -f waves_yesno.tar.gz
!wget -q https://www.openslr.org/resources/1/waves_yesno.tar.gz
# Unpack it:
!tar -xzf waves_yesno.tar.gz
# P.S. If for some reason the data did not download,
# it can be fetched from: https://www.openslr.org/1/
# Load one of the files:
wav, sr = librosa.load("waves_yesno/0_1_0_1_1_1_0_0.wav")
draw_waveform(wav, sr)
ipd.Audio(wav, rate=sr)
```
As you can hear, two words are spoken in this dataset (yes/no in Hebrew). Each file consists of 8 utterances. The word labels are given in the file names.
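Since the labels live in the file names (e.g. `0_1_0_1_1_1_0_0.wav`), they can be recovered by simple string parsing. A minimal sketch:

```python
import os

filename = "waves_yesno/0_1_0_1_1_1_0_0.wav"
name = os.path.basename(filename)[:-4]       # strip the directory and ".wav"
labels = [int(token) for token in name.split("_")]
print(labels)  # [0, 1, 0, 1, 1, 1, 0, 0]
```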
```
# Plot the spectrogram of the loaded file:
stft = librosa.stft(wav)
stft_db = librosa.amplitude_to_db(abs(stft))
plt.figure(figsize=(15, 10))
librosa.display.specshow(stft_db, sr=sr, x_axis='time', y_axis='hz');
```
# Task 0.1: Spectrogram analysis (0.5 points)
1. Look at the spectrogram and try to find features that distinguish an utterance of "yes" from one of "no".
2. In which frequencies is the main energy of this speech signal concentrated?
1. An utterance of "yes" can be recognized by high signal energy in the high-frequency region (from 1000 to 3000 Hz), whereas "no" is characterized only by the low-frequency region (below 1000 Hz).
2. The energy of this speech signal lies in the range 0–4000 Hz, and most of it in the range 0–1000 Hz.
# Task 1: Mel scale (1 point)
Plot the spectrogram on the [mel scale](https://en.wikipedia.org/wiki/Mel_scale).
Use the formula introduced by Douglas O'Shaughnessy.
```
def mel(spec):
    # spec — frequency value(s) in Hz; the Hz-to-mel conversion is applied elementwise
mel_spec = 2595.0 * np.log10(1.0 + spec / 700.0)
return mel_spec
def test_mel():
x = np.random.randint(100, size=(1000, 100))
x_mel = mel(x)
x_hz = 700.0 * (10.0 ** (x_mel / 2595.0) - 1.0)
assert np.allclose(x, x_hz), "TEST Hertz -> Mel -> Hertz failed."
print("All OK!")
test_mel()
```
# Mel filters
Some of the most popular acoustic features are filter banks (fbanks).
fbanks are computed by applying several triangular filters (number of filters = number of fbanks) to the mel spectrogram. To avoid performing two operations on the spectrogram, the conversion to the mel scale followed by filtering in the mel scale can be replaced by converting the mel filters to the Hz scale and applying them directly to the Hz spectrogram.
## Task 2 (3 points)
Implement the fbank computation function.
```
def mel_filters(sr, n_fft, n_mels):
"""
Build triangular mel filters in the Hz scale
:param sr — sample rate
:param n_fft — length of the FFT window
:param n_mels — number of filters
:return mel filters matrix of shape [n_mels, n_fft // 2 + 1]
"""
# Initialize the weights
weights = np.zeros((n_mels, 1 + n_fft // 2))
# Center freqs of each FFT bin
fft_freqs = np.linspace(0, sr / 2, 1 + n_fft // 2)
# "Center freqs" of mel bands — uniformly spaced between limits
mel_freqs = np.linspace(mel(0.0), mel(sr / 2), n_mels + 2)
mel_freqs = 700.0 * (10.0 ** (mel_freqs / 2595.0) - 1.0)
f_diff = np.diff(mel_freqs)
ramps = np.subtract.outer(mel_freqs, fft_freqs)
for i in range(n_mels):
# lower and upper slopes for all bins...
lower = -ramps[i] / f_diff[i]
upper = ramps[i + 2] / f_diff[i + 1]
# ...then intersect them with each other and zero:
weights[i] = np.maximum(0, np.minimum(lower, upper))
enorm = 2.0 / (mel_freqs[2:n_mels + 2] - mel_freqs[:n_mels])
weights *= enorm[:, np.newaxis]
return weights
assert mel_filters(32, 46, 4).shape == (4, 24) and \
mel_filters(65, 45, 5).shape == (5, 23), "Wrong shape."
assert np.allclose(mel_filters(16, 8, 4),
librosa.filters.mel(16, 8, n_mels=4, htk=True))
assert np.allclose(mel_filters(8600, 512, 40),
librosa.filters.mel(8600, 512, n_mels=40, htk=True))
print("All OK!")
def get_fbanks(wav: np.ndarray, sr: int, window_ms=25, step_mc=10, n_fbanks=40):
# wav — input signal
# sr — sample rate
# window_ms — window length in milliseconds
# step_mc — stft step in milliseconds
# n_fbanks — number of filters
# return fbank matrix [n_fbanks, time]
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
wav_padded = np.pad(wav, int(n_fft // 2), mode="reflect")
window = scipy.signal.get_window("hann", Nx=n_fft)
spectrogram = np.zeros((n_fft // 2 + 1, len(wav) // hop_length + 1),
dtype=np.complex64)
for i in range(spectrogram.shape[1]):
j = i * hop_length
spectrogram[:, i] = np.abs(
scipy.fft.fft(wav_padded[j:j + n_fft] * window)[:1 + n_fft // 2]
) ** 2
mel_basis = mel_filters(sr=sr, n_fft=n_fft, n_mels=n_fbanks)
return np.dot(mel_basis, spectrogram)
def test_fbank(wav, sr, window_ms=25, step_mc=10, n_fbanks=40):
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
fbanks_lib = librosa.feature.melspectrogram(wav, sr, n_fft=n_fft,
hop_length=hop_length,
n_mels=n_fbanks, htk=True)
fbanks = get_fbanks(wav, sr, window_ms=window_ms,
step_mc=step_mc, n_fbanks=n_fbanks)
if fbanks_lib.shape != fbanks.shape:
print("TEST FAILED")
print(f"Shape {fbanks_lib.shape} != {fbanks.shape}.")
if not np.allclose(fbanks_lib, fbanks):
print("TEST FAILED")
print(f"Average diff is {np.mean(np.abs(fbanks_lib - fbanks))}.")
return -1
print("TEST PASSED")
return 0
assert test_fbank(wav[:sr * 1], sr) == 0, "1 sec wav test failed."
assert test_fbank(wav, sr) == 0, "Full wav test failed."
print("All OK!")
fbanks = get_fbanks(wav, sr)
plt.figure(figsize=(14, 10))
librosa.display.specshow(fbanks, sr=sr, x_axis='time')
plt.ylabel("Filter number")
plt.show()
```
## Task 3 (3 points)
Implement [MFCC](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) computation.
```
def dct(S):
"""DCT (type 2)"""
N = S.shape[0]
k = np.arange(N).reshape(1, -1)
n = np.arange(N).reshape(-1, 1)
M = 2 * np.dot(S.T, np.cos(np.pi * (n + 0.5) * k / N)).T
M[0, :] /= np.sqrt(2)
M /= np.sqrt(2 * N)
return M
def get_mfcc(wav: np.ndarray, sr: int, window_ms=25, step_mc=10, n_mfcc=13):
# wav — input signal
# sr — sample rate
# window_ms — window length in milliseconds
# step_mc — stft step in milliseconds
# n_mfcc — number of filters
# return mfcc matrix [n_mfcc, time]
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
# Get mel-spectrogram
mel_spec = get_fbanks(wav, sr, window_ms=window_ms, step_mc=step_mc)
# Convert power to decibels
magnitude = np.abs(mel_spec)
log_spec = 10.0 * np.log10(np.maximum(1e-10, magnitude))
log_spec = np.maximum(log_spec, log_spec.max() - 80.0)
# Apply discrete cosine transform (type 2)
mfcc = dct(log_spec)[:n_mfcc]
return mfcc
def test_mfcc(wav, sr, window_ms=25, step_mc=10, n_mfcc=13):
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
mfcc_lib = librosa.feature.mfcc(wav, sr, n_fft=n_fft, hop_length=hop_length,
n_mels=40, n_mfcc=n_mfcc, htk=True)
mfcc = get_mfcc(wav, sr, window_ms=window_ms,
step_mc=step_mc, n_mfcc=n_mfcc)
if mfcc_lib.shape != mfcc.shape:
print("TEST FAILED")
print(f"Shape {mfcc_lib.shape} != {mfcc.shape}.")
if not np.allclose(mfcc_lib, mfcc, atol=1e-4):
print("TEST FAILED")
print(f"Average diff is {np.mean(np.abs(mfcc_lib - mfcc))}.")
return -1
print("TEST PASSED")
return 0
assert test_mfcc(wav[:sr * 1], sr) == 0, "1 sec wav test failed."
assert test_mfcc(wav, sr) == 0, "Full wav test failed."
print("All OK!")
mfcc = get_mfcc(wav, sr)
plt.figure(figsize=(15, 10))
librosa.display.specshow(mfcc, sr=sr, x_axis='time')
plt.ylabel("Filter number")
plt.show()
```
# Word classification
Let's build a simple system that classifies the words yes/no.
Load the full dataset:
```
def load_yn_dataset(directory):
X, labels = [], []
for f in glob(directory + "/*.wav"):
name = os.path.basename(f)[:-4]
y = [int(l) for l in name.split("_")]
x, _ = librosa.load(f)
X.append(x)
labels.append(y)
return X, labels
X, Y = load_yn_dataset("waves_yesno/")
```
Set aside 20% for testing:
```
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, random_state=1
)
# The sample at index 6 of X_test is corrupted, let's remove it:
X_test = X_test[:6] + X_test[7:]
Y_test = Y_test[:6] + Y_test[7:]
```
## Task 4* (1 point)
A voice activity detector (VAD) determines whether speech is present in the current frame.
Implement a simple VAD.
Tune the VAD so that word boundaries are detected well.
```
def moving_average(data, window):
"""Moving average filter"""
return np.convolve(data, np.ones(window), "same") / window
def detect_va(x, sr=22050):
"""Voice activity detector"""
mfcc = get_mfcc(x, sr=sr)
# Moving average filter and level adjusting:
mfcc_smoothed = moving_average(mfcc[1], 10) - 125
vad = np.zeros_like(mfcc_smoothed)
vad[mfcc_smoothed > 0] = 1
return vad
plt.figure(figsize=(14, 3))
plt.plot(detect_va(X_train[6]));
# train_VA: 1 - voice, 0 - silence
# test_VA: 1 - voice, 0 - silence
train_VA = [detect_va(x) for x in X_train]
test_VA = [detect_va(x) for x in X_test]
def test_VAD(VA, Y):
def check_diff(diff, num_words):
if diff.sum() != 0:
print("VAD detected speech at the beginning (or end) of audio.")
return -1
if not (diff > 0).sum() == num_words:
print("Wrong number of words. Each audio contains 8 words.")
return -2
return 0
for i, (va, y) in enumerate(zip(VA, Y)):
diff = va[1:] - va[:-1]
assert check_diff(diff, len(y)) == 0, f"Bad {i}-th example."
test_VAD(train_VA, Y_train)
test_VAD(test_VA, Y_test)
```
## Task 5* (2 points)
Train a classifier that determines which word was spoken. Use the VAD to split the input files into individual words. The classification can be done, for example, with an SVM on the averaged features of the extracted words, or in any other way convenient for you.
```
def prepare_dataset(x, vad, y):
train, target = [], []
for i, (xi, vai, yi) in enumerate(zip(x, vad, y)):
mfcc = get_mfcc(xi, sr=22050)
# Get indices of VAD changing values:
indices = np.where(vai[:-1] != vai[1:])[0] + 1
# Extract speech parts from signal:
for j in range(0, len(indices), 2):
sample = np.mean(mfcc[:, indices[j]:indices[j + 1]], axis=1)
train.append(sample)
target.append(yi[j // 2])
return np.array(train), np.array(target)
x_train, y_train = prepare_dataset(X_train, train_VA, Y_train)
x_test, y_test = prepare_dataset(X_test, test_VA, Y_test)
print(f"Train set class balance: " \
f"{len(y_train[y_train == 1]) / len(y_train) * 100:.2f}%")
print(f"Test set class balance: " \
f"{len(y_test[y_test == 1]) / len(y_test) * 100:.2f}%")
```
The classes are well balanced, so we can use accuracy as the evaluation metric.
```
clf = make_pipeline(StandardScaler(),
LinearSVC(random_state=44, tol=1e-5))
clf.fit(x_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(x_test)) * 100:.2f}%.")
```
Wow, the model achieved 100% accuracy! Let's do PCA and check whether the samples can really be perfectly separated by some decision boundary.
```
pca = PCA(n_components=2)
y_pca = pca.fit_transform(x_train)
plt.figure(figsize=(14, 6))
plt.scatter(y_pca[:, 0], y_pca[:, 1], c=y_train)
plt.grid()
plt.show()
```
The data really is perfectly separable, so 100% accuracy is plausible.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Support-Vector-Machines" data-toc-modified-id="Support-Vector-Machines-1">Support Vector Machines</a></span></li><li><span><a href="#Prepare-data" data-toc-modified-id="Prepare-data-2">Prepare data</a></span></li><li><span><a href="#Linear-SVM" data-toc-modified-id="Linear-SVM-3">Linear SVM</a></span></li><li><span><a href="#Non-linear-SVM" data-toc-modified-id="Non-linear-SVM-4">Non-linear SVM</a></span></li><li><span><a href="#References" data-toc-modified-id="References-5">References</a></span></li></ul></div>
Support Vector Machines
------
```
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report, confusion_matrix
import re
import string
from sklearn.model_selection import train_test_split
from sklearn import svm
```
## Prepare data
```
# Load data
try:
# Local version
path = "../data/"
filename = 'microfinance_tweets.csv'
data = pd.read_csv(path+filename, encoding="ISO-8859-1")
except (FileNotFoundError, pd.errors.ParserError):
# If not local, get from remote repo. Helpful if using colab.
url = 'https://raw.githubusercontent.com/DeltaAnalytics/machine_learning_for_good_data/master/microfinance_tweets.csv'
data = pd.read_csv(url)
# It always a good to visually inspect the data
data.head()
data.loc[data['Sentiment'] == 'negative', 'Sentiment'] = -1
data.loc[data['Sentiment'] == 'neutral', 'Sentiment'] = 0
data.loc[data['Sentiment'] == 'positive', 'Sentiment'] = 1
data.head()
train, test = train_test_split(data, test_size=0.2, random_state=42)
vectorizer = CountVectorizer()
train_features = vectorizer.fit_transform(train['Comments'])
test_features = vectorizer.transform(test['Comments'])
```
We have vectorized our data such that each index corresponds with a word as well as the frequency of that word in the text.
```
print(train_features[0])
```
## Linear SVM
There are many types of SVMs, but we will first try a linear SVM, the most basic. This means the decision boundary will be linear. <br>
Another input is `decision_function_shape`, with two options: one-versus-rest and one-versus-one. This relates to how the decision boundary separates points, i.e. whether it separates negative points from everyone else, or negative points from neutral points, and so on (https://pythonprogramming.net/support-vector-machine-parameters-machine-learning-tutorial/). The default is one-versus-rest. One-versus-rest takes less computational power, but it may be thrown off by outliers and may not do well on imbalanced datasets, e.g. ones with more of one class than another.
```
clf = svm.SVC(kernel='linear')
clf.fit(train_features, train['Sentiment'])
y_train = clf.predict(train_features)
print(confusion_matrix(train['Sentiment'],y_train))
print(classification_report(train['Sentiment'],y_train))
y_pred = clf.predict(test_features)
print(confusion_matrix(test['Sentiment'],y_pred))
print(classification_report(test['Sentiment'],y_pred))
```
What do you think of the performance of the SVM? We could also tune the regularization parameter `C` (or `gamma` for non-linear kernels) to account for overfitting, but it doesn't look like we've overfit much given the training and test performances.
Remember that support vectors are the data points that lie closest to the decision surface (or hyperplane). We can figure out what those data points are below for each class we are classifying, noting that we have three classes for negative, neutral, and positive.
```
print(clf.support_vectors_)
```
We can check for the number of points in each class using another function. Here we see that most support vectors are in our last class, the positive class.
```
clf.n_support_
```
We can also find the support vector in our original data using the indices provided for us with clf.support_
```
clf.support_
print(train_features[8])
```
## Non-linear SVM
We can also try different kernel types: `rbf` is the Gaussian (radial basis function) kernel, and `sigmoid` resembles the sigmoid function used in logistic regression. Let's fit both and compare:
```
clf = svm.SVC(kernel='rbf')
clf.fit(train_features, train['Sentiment'])
y_pred = clf.predict(test_features)
print(confusion_matrix(test['Sentiment'],y_pred))
print(classification_report(test['Sentiment'],y_pred))
clf = svm.SVC(kernel='sigmoid')
clf.fit(train_features, train['Sentiment'])
y_pred = clf.predict(test_features)
print(confusion_matrix(test['Sentiment'],y_pred))
print(classification_report(test['Sentiment'],y_pred))
```
It looks like the linear SVM performs best on this model from both a precision and a recall perspective. Remember that precision is the accuracy of the positive predictions, and recall is how much of the true positive space we are capturing.
What does this mean about our underlying data?
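As a reminder of how those two quantities are computed, here is a small sketch on a made-up set of binary labels:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]

# precision: of the 3 predicted positives, 2 are correct -> 2/3
# recall: of the 4 actual positives, 2 were found -> 2/4
print(precision_score(y_true, y_pred))  # 0.666...
print(recall_score(y_true, y_pred))     # 0.5
```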
References
-------
- https://stackabuse.com/implementing-svm-and-kernel-svm-with-pythons-scikit-learn/
- https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html
- https://gist.github.com/WittmannF/60680723ed8dd0cb993051a7448f7805
<br>
<br>
<br>
----
# Federated PyTorch TinyImageNet Tutorial
## Using low-level Python API
# Long-Living entities update
* We now may have director running on another machine.
* We use Federation API to communicate with Director.
* Federation object should hold a Director's client (for user service)
* Keep in mind that several API instances may be connected to one Director.
* For now, we do not concern ourselves with how a Director is started.
* But it knows the data shape and target shape for the DataScience problem in the Federation.
* Director holds the list of connected envoys, we do not need to specify it anymore.
* Director and Envoys are responsible for encrypting connections, we do not need to worry about certs.
* Yet we MUST have a cert to communicate to the Director.
* We MUST know the FQDN of a Director.
* Director communicates data and target shape to the Federation interface object.
* Experiment API may use this info to construct a dummy dataset and a `shard descriptor` stub.
```
!pip install torchvision==0.8.1
```
## Connect to the Federation
```
# Create a federation
from openfl.interface.interactive_api.federation import Federation
# please use the same identificator that was used in signed certificate
client_id = 'api'
cert_dir = 'cert'
director_node_fqdn = 'localhost'
# 1) Run with API layer - Director mTLS
# If the user wants to enable mTLS, they must provide the CA root chain and a signed key pair to the federation interface
# cert_chain = f'{cert_dir}/root_ca.crt'
# api_certificate = f'{cert_dir}/{client_id}.crt'
# api_private_key = f'{cert_dir}/{client_id}.key'
# federation = Federation(client_id=client_id, director_node_fqdn=director_node_fqdn, director_port='50051',
# cert_chain=cert_chain, api_cert=api_certificate, api_private_key=api_private_key)
# --------------------------------------------------------------------------------------------------------------------
# 2) Run with TLS disabled (trusted environment)
# Federation can also determine local fqdn automatically
federation = Federation(client_id=client_id, director_node_fqdn=director_node_fqdn, director_port='50051', tls=False)
federation.target_shape
shard_registry = federation.get_shard_registry()
shard_registry
# First, request a dummy_shard_desc that holds information about the federated dataset
dummy_shard_desc = federation.get_dummy_shard_descriptor(size=10)
dummy_shard_dataset = dummy_shard_desc.get_dataset('train')
sample, target = dummy_shard_dataset[0]
print(sample.shape)
print(target.shape)
```
## Creating a FL experiment using Interactive API
```
from openfl.interface.interactive_api.experiment import TaskInterface, DataInterface, ModelInterface, FLExperiment
```
### Register dataset
```
import torchvision
from torchvision import transforms as T
normalize = T.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
augmentation = T.RandomApply(
[T.RandomHorizontalFlip(),
T.RandomRotation(10),
T.RandomResizedCrop(64)],
p=.8
)
training_transform = T.Compose(
[T.Lambda(lambda x: x.convert("RGB")),
augmentation,
T.ToTensor(),
normalize]
)
valid_transform = T.Compose(
[T.Lambda(lambda x: x.convert("RGB")),
T.ToTensor(),
normalize]
)
from torch.utils.data import Dataset
class TransformedDataset(Dataset):
"""Image Person ReID Dataset."""
def __init__(self, dataset, transform=None, target_transform=None):
"""Initialize Dataset."""
self.dataset = dataset
self.transform = transform
self.target_transform = target_transform
def __len__(self):
"""Length of dataset."""
return len(self.dataset)
def __getitem__(self, index):
img, label = self.dataset[index]
label = self.target_transform(label) if self.target_transform else label
img = self.transform(img) if self.transform else img
return img, label
class TinyImageNetDataset(DataInterface):
def __init__(self, **kwargs):
self.kwargs = kwargs
@property
def shard_descriptor(self):
return self._shard_descriptor
@shard_descriptor.setter
def shard_descriptor(self, shard_descriptor):
"""
Describe per-collaborator procedures or sharding.
This method will be called during a collaborator initialization.
Local shard_descriptor will be set by Envoy.
"""
self._shard_descriptor = shard_descriptor
self.train_set = TransformedDataset(
self._shard_descriptor.get_dataset('train'),
transform=training_transform
)
self.valid_set = TransformedDataset(
self._shard_descriptor.get_dataset('val'),
transform=valid_transform
)
def get_train_loader(self, **kwargs):
"""
Output of this method will be provided to tasks with optimizer in contract
"""
return DataLoader(
self.train_set, num_workers=8, batch_size=self.kwargs['train_bs'], shuffle=True
)
def get_valid_loader(self, **kwargs):
"""
Output of this method will be provided to tasks without optimizer in contract
"""
return DataLoader(self.valid_set, num_workers=8, batch_size=self.kwargs['valid_bs'])
def get_train_data_size(self):
"""
Information for aggregation
"""
return len(self.train_set)
def get_valid_data_size(self):
"""
Information for aggregation
"""
return len(self.valid_set)
fed_dataset = TinyImageNetDataset(train_bs=64, valid_bs=64)
```
### Describe the model and optimizer
```
import os
import glob
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
"""
MobileNetV2 model
"""
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.model = torchvision.models.mobilenet_v2(pretrained=True)
self.model.requires_grad_(False)
self.model.classifier[1] = torch.nn.Linear(in_features=1280, \
out_features=200, bias=True)
def forward(self, x):
x = self.model.forward(x)
return x
model_net = Net()
params_to_update = []
for param in model_net.parameters():
if param.requires_grad == True:
params_to_update.append(param)
optimizer_adam = optim.Adam(params_to_update, lr=1e-4)
def cross_entropy(output, target):
"""Multi-class cross-entropy loss
"""
return F.cross_entropy(input=output,target=target)
```
### Register model
```
from copy import deepcopy
framework_adapter = 'openfl.plugins.frameworks_adapters.pytorch_adapter.FrameworkAdapterPlugin'
model_interface = ModelInterface(model=model_net, optimizer=optimizer_adam, framework_plugin=framework_adapter)
# Save the initial model state
initial_model = deepcopy(model_net)
```
## Define and register FL tasks
```
task_interface = TaskInterface()
import torch
import tqdm
# The Interactive API supports registering functions defined in the main module or imported.
def function_defined_in_notebook(some_parameter):
print(f'Also I accept a parameter and it is {some_parameter}')
# Task interface currently supports only standalone functions.
@task_interface.add_kwargs(**{'some_parameter': 42})
@task_interface.register_fl_task(model='net_model', data_loader='train_loader', \
device='device', optimizer='optimizer')
def train(net_model, train_loader, optimizer, device, loss_fn=cross_entropy, some_parameter=None):
device = torch.device('cuda')
if not torch.cuda.is_available():
device = 'cpu'
function_defined_in_notebook(some_parameter)
train_loader = tqdm.tqdm(train_loader, desc="train")
net_model.train()
net_model.to(device)
losses = []
for data, target in train_loader:
data, target = torch.tensor(data).to(device), torch.tensor(
target).to(device)
optimizer.zero_grad()
output = net_model(data)
loss = loss_fn(output=output, target=target)
loss.backward()
optimizer.step()
losses.append(loss.detach().cpu().numpy())
return {'train_loss': np.mean(losses),}
@task_interface.register_fl_task(model='net_model', data_loader='val_loader', device='device')
def validate(net_model, val_loader, device):
device = torch.device('cuda')
net_model.eval()
net_model.to(device)
val_loader = tqdm.tqdm(val_loader, desc="validate")
val_score = 0
total_samples = 0
with torch.no_grad():
for data, target in val_loader:
samples = target.shape[0]
total_samples += samples
data, target = torch.tensor(data).to(device), \
torch.tensor(target).to(device, dtype=torch.int64)
output = net_model(data)
pred = output.argmax(dim=1,keepdim=True)
val_score += pred.eq(target).sum().cpu().numpy()
return {'acc': val_score / total_samples,}
```
## Time to start a federated learning experiment
```
# create an experiment in the federation
experiment_name = 'tinyimagenet_test_experiment'
fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
# The following command zips the workspace and python requirements to be transferred to collaborator nodes
fl_experiment.start(
model_provider=model_interface,
task_keeper=task_interface,
data_loader=fed_dataset,
rounds_to_train=5,
opt_treatment='CONTINUE_GLOBAL'
)
# If the user wants to stop the IPython session, they can later reconnect and check how the experiment is going
# fl_experiment.restore_experiment_state(model_interface)
fl_experiment.stream_metrics(tensorboard_logs=False)
```
# Acquire Sentinel-2 MSI Data for California
This notebook is used for gathering data from California from the Sentinel-2 satellites. Specifically, we are looking to acquire the surface reflectance data (atmosphere-corrected, Level-2A), since that is what we used for our baseline model testing and evaluation with the BigEarthNet data. We will gather data from a variety of geographic areas, but most of our focus will be on the agricultural regions of California (e.g., the Central Valley).
```
## ee package
!pip install earthengine-api --upgrade
!pip install Shapely
!pip install folium
import time
from math import sin, cos, sqrt, atan2, radians
import pandas as pd
import ee #https://developers.google.com/earth-engine/guides/python_install
from shapely.geometry import box
import folium
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import find_peaks
def authenticate():
# Trigger the authentication flow.
ee.Authenticate()
class MSICalifornia():
def __init__(self, center_lat=43.771114, center_lon=-116.736866, edge_len=0.005, year=2019):
'''
Parameters:
center_lat: latitude for the location coordinate
center_lon: longitude for the location coordinate
edge_len: edge length in degrees for the rectangle given the location coordinates
year: year the satellite data should pull images for
'''
# Initialize the library.
ee.Initialize()
# Error handle parameter issues
if center_lat >= -90 and center_lat <= 90:
self.center_lat = center_lat
else:
raise ValueError('Please enter float value for latitude between -90 and 90')
if center_lon >= -180 and center_lon <= 180:
self.center_lon = center_lon
else:
raise ValueError('Please enter float value for longitude between -180 and 180')
if (type(edge_len) == float and (edge_len <= 0.5 and edge_len >= 0.005)):
self.edge_len = edge_len
else:
raise ValueError('Please enter float value for edge length between 0.5 and 0.005')
# (range is 2017 to year prior)
if ((type(year) == int) and (year >= 2017 and year <= int(time.strftime("%Y")) - 1)):
self.year = year
else:
raise ValueError(
'Please enter an integer value for year > 2017 and less than the current year')
# initialize remaining variables
self.label = []
self.comment = dict()
self.image = ee.Image()
self.simple_image = ee.Image()
self.base_asset_directory = None
# Create the bounding box using GEE API
self.aoi_ee = self.__create_bounding_box_ee()
# Estimate the area of interest
self.dist_lon = self.__calc_distance(
self.center_lon - self.edge_len / 2, self.center_lat, self.center_lon + self.edge_len / 2, self.center_lat)
self.dist_lat = self.__calc_distance(
self.center_lon, self.center_lat - self.edge_len / 2, self.center_lon, self.center_lat + self.edge_len / 2)
print('The selected area is approximately {:.2f} km by {:.2f} km'.format(
self.dist_lon, self.dist_lat))
self.model_projection = "EPSG:3857"
def __create_bounding_box_ee(self):
'''Creates a rectangle for pulling image information using center coordinates and edge_len'''
return ee.Geometry.Rectangle([self.center_lon - self.edge_len / 2, self.center_lat - self.edge_len / 2, self.center_lon + self.edge_len / 2, self.center_lat + self.edge_len / 2])
def __create_bounding_box_shapely(self):
'''Returns a box for coordinates to plug in as an image add-on layer'''
return box(self.center_lon - self.edge_len / 2, self.center_lat - self.edge_len / 2, self.center_lon + self.edge_len / 2, self.center_lat + self.edge_len / 2)
@staticmethod
def __calc_distance(lon1, lat1, lon2, lat2):
'''Calculates the distance between 2 coordinates'''
# Reference: https://stackoverflow.com/questions/19412462/getting-distance-between-two-points-based-on-latitude-longitude
# approximate radius of earth in km
R = 6373.0
lon1 = radians(lon1)
lat1 = radians(lat1)
lon2 = radians(lon2)
lat2 = radians(lat2)
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
def pull_Sentinel2_data(self):
# 11 of 13 spectral bands are retained (B1 and B10 are dropped). Band 10 has no surface
# reflectance per http://bigearth.net/static/documents/BigEarthNet_IGARSS_2019.pdf
# Note: the baseline model only used the 10m and 20m bands (bands 1 and 9 removed);
# B9 is dropped again before export in write_image_google_drive.
band_names = ['B2', 'B3', 'B4', 'B5',
'B6', 'B7', 'B8', 'B8A', 'B9',
'B11', 'B12']
random_month = np.random.randint(1,13)
start_date = f'{self.year}-{random_month:02d}-01'
if random_month != 12:
end_date = f'{self.year}-{random_month + 1:02d}-01'
else:
end_date = f'{self.year + 1}-01-01'
self.Sentinel_MSI = (ee.ImageCollection('COPERNICUS/S2_SR')
.filterDate(start_date, end_date)
.filterBounds(self.aoi_ee)
.select(band_names)
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 1))
.median().clip(self.aoi_ee))
return random_month
def plot_map(self):
'''Plot folium map using the GEE API - the map includes the area of interest box and the Sentinel-2 RGB layer'''
def add_ee_layer(self, ee_object, vis_params, show, name):
'''Checks if image object classifies as ImageCollection, FeatureCollection, Geometry or single Image
and adds to folium map accordingly'''
try:
if isinstance(ee_object, ee.image.Image):
map_id_dict = ee.Image(ee_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
elif isinstance(ee_object, ee.imagecollection.ImageCollection):
ee_object_new = ee_object.median()
map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
elif isinstance(ee_object, ee.geometry.Geometry):
folium.GeoJson(
data=ee_object.getInfo(),
name=name,
overlay=True,
control=True
).add_to(self)
elif isinstance(ee_object, ee.featurecollection.FeatureCollection):
ee_object_new = ee.Image().paint(ee_object, 0, 2)
map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
except:
print("Could not display {}".format(name))
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
myMap = folium.Map(location=[self.center_lat, self.center_lon], zoom_start=11)
#aoi_shapely = self.__create_bounding_box_shapely()
#folium.GeoJson(aoi_shapely, name="Area of Interest").add_to(myMap)
# Add Sentinel-2 RGB quarterly layers
start = time.time()
visParams = {'max': 3000}
# Add Sentinel-2 RGB (MSI) layer for the randomly selected month
myMap.add_ee_layer(self.Sentinel_MSI.select(['B2','B3','B4']), visParams, show=False, name="Sentinel2A")
end = time.time()
print("ADDED S2 RGB LAYERS \t\t--> " + str(round((end - start) / 60, 2)) + " min")
return myMap
def write_image_google_drive(self, filename):
'''Writes predicted image out as an image to Google Drive as individual TIF files
They will need to be combined after the fact'''
bands = ['B2', 'B3', 'B4', 'B5',
'B6', 'B7', 'B8', 'B8A',
'B11', 'B12']
tasks = []
for band in bands:
tasks.append(ee.batch.Export.image.toDrive(
crs=self.model_projection,
region=self.aoi_ee,
image=self.Sentinel_MSI.select(band),
description=f'{filename}_msi_{band}',
scale = 10,
maxPixels=1e13))
print(f"Writing To Google Drive filename = {filename}.tif")
for t in tasks:
t.start()
authenticate()
```
## Degree to distance calculation
- One degree of latitude equals approximately 364,080 feet (69 miles), one minute equals 6,068 feet (1.15 miles), and one-second equals 101 feet.
- One-degree of longitude equals 288,200 feet (54.6 miles), one minute equals 4,800 feet (0.91 mile), and one second equals 80 feet.
- 1.60934 km per mile
- 9748 square kilometers per squared degree
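As a quick sanity check on the figures above, the same haversine formula used in `__calc_distance` (with the R = 6373 km Earth radius assumed there) reproduces the kilometers-per-degree values; this standalone sketch is equivalent to that private helper:

```python
from math import sin, cos, sqrt, atan2, radians

def haversine_km(lon1, lat1, lon2, lat2, R=6373.0):
    """Great-circle distance in km between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

# one degree of latitude is ~69 miles (~111 km) anywhere on the globe
print(haversine_km(0.0, 0.0, 0.0, 1.0))
# one degree of longitude shrinks with latitude: ~54.6 miles (~88 km) near 38 degrees N
print(haversine_km(-121.0, 38.0, -120.0, 38.0))
```

The second figure matches the bullet above once the cosine-of-latitude shrinkage is accounted for.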
```
# Latitude and Longitude of center point
edge_len = 0.25
# Grab Central Valley region of California
# Fresno to Bakersfield
#lat_range = np.arange(35.125,35.375,edge_len)
#lon_range = np.arange(-119.875,-119.625,edge_len)
#lat_range = np.arange(35.125,35.375,edge_len)
#lon_range = np.arange(-119.125,-118.875,edge_len)
# Sacramento to Merced
#lat_range = np.arange(37.125,37.375,edge_len)
#lon_range = np.arange(-121.125,-120.875,edge_len)
#lat_range = np.arange(37.125,37.375,edge_len)
# Calexico Region
#lat_range = np.arange(32.625,33.375,edge_len)
#lon_range = np.arange(-115.875,-114.875,edge_len)
# North of Sacramento
lat_range = np.arange(39.875, 40.125,edge_len)
lon_range = np.arange(-122.125,-121.875,edge_len)
year = 2019
# Iterate over range of lats and longs
for lat in lat_range:
for lon in lon_range:
print(f'Evaluating irrigation at {lat}, {lon}')
# Instantiate the model
model = MSICalifornia(
center_lat=lat,
center_lon=lon,
edge_len=edge_len,
year=year)
month = model.pull_Sentinel2_data()
base_filename = f'S2SR_{month}_{year}_{lat}_{lon}'
model.write_image_google_drive(base_filename)
%%time
model.plot_map()
rgb_img = model.Sentinel_MSI.select(['B2', 'B3', 'B4', 'B5',
'B6', 'B7', 'B8', 'B8A',
'B11', 'B12'])
rgb_img.getInfo()
model.Sentinel_MSI.bandTypes().getInfo()
```
```
import numpy as np
parameters = {'W1': np.array([[-0.00416758, -0.00056267],
[-0.02136196, 0.01640271],
[-0.01793436, -0.00841747],
[ 0.00502881, -0.01245288]]),
'W2': np.array([[-0.01057952, -0.00909008, 0.00551454, 0.02292208]]),
'b1': np.array([[ 0.],
[ 0.],
[ 0.],
[ 0.]]),
'b2': np.array([[ 0.]])}
cache = {'A1': np.array([[-0.00616578, 0.0020626 , 0.00349619],
[-0.05225116, 0.02725659, -0.02646251],
[-0.02009721, 0.0036869 , 0.02883756],
[ 0.02152675, -0.01385234, 0.02599885]]),
'A2': np.array([[ 0.5002307 , 0.49985831, 0.50023963]]),
'Z1': np.array([[-0.00616586, 0.0020626 , 0.0034962 ],
[-0.05229879, 0.02726335, -0.02646869],
[-0.02009991, 0.00368692, 0.02884556],
[ 0.02153007, -0.01385322, 0.02600471]]),
'Z2': np.array([[ 0.00092281, -0.00056678, 0.00095853]])}
X = np.array([[ 1.62434536, -0.61175641, -0.52817175],
[-1.07296862, 0.86540763, -2.3015387 ]])
Y = np.array([[ True, False, True]], dtype=bool)
m = X.shape[1]
m
# Backward propagation for a single hidden layer neural network
W1, W2 = parameters.get('W1'), parameters.get('W2')
W1.shape
W2.shape
A1, A2 = cache.get('A1'), cache.get('A2')
A1.shape
A2.shape
```

```
# Gradients for backpropagation
dZ2 = A2 - Y
dZ2
print(dZ2.shape)
print(A1.shape)
dW2 = np.dot(dZ2, A1.T) / m
dW2
# axis=1 implies summing horizontally row wise
db2 = np.sum(dZ2, axis=1, keepdims=True) / m
db2
```

```
# For dZ1, we first need the derivative of the hidden activation g1(Z1);
# for tanh, g1'(Z1) = 1 - A1**2
grad_g1Z1 = 1 - np.power(A1, 2)
grad_g1Z1.shape
dZ1 = np.dot(W2.T, dZ2) * grad_g1Z1
dZ1.shape
dW1 = np.dot(dZ1, X.T) / m
dW1.shape
db1 = np.sum(dZ1, axis=1, keepdims=True) / m
db1.shape
```
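The cells above can be collected into a single backward-pass function. This is a sketch assuming, as the `1 - np.power(A1, 2)` term implies, a tanh hidden activation and a sigmoid output:

```python
import numpy as np

def backward_propagation(parameters, cache, X, Y):
    """Backward pass for a 1-hidden-layer net (tanh hidden, sigmoid output)."""
    m = X.shape[1]
    W2 = parameters['W2']
    A1, A2 = cache['A1'], cache['A2']
    dZ2 = A2 - Y                                      # (1, m)
    dW2 = np.dot(dZ2, A1.T) / m                       # (1, n_h)
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m      # (1, 1)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))   # (n_h, m); tanh'(Z1) = 1 - A1**2
    dW1 = np.dot(dZ1, X.T) / m                        # (n_h, n_x)
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m      # (n_h, 1)
    return {'dW1': dW1, 'db1': db1, 'dW2': dW2, 'db2': db2}
```

Calling this with the `parameters`, `cache`, `X`, and `Y` defined earlier reproduces the individual gradient cells above.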
# Logistic Regression Forward Propagation and Loss
```
import numpy as np
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-1 * z))
### END CODE HERE ###
return s
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
print(f'Shape of w: {w.shape}')
print(f'b is a scalar: {b}')
print(f'Shape of X: {X.shape}')
print(f'Shape of Y: {Y.shape}')
X
m = X.shape[1]
m
# Model output
A = sigmoid(np.dot(w.T, X) + b)
A
cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
cost
```
# Backward propagation to find grads
```
dz = A - Y
dz
dw = 1/m * np.dot(X, dz.T)
dw
db = 1/m * np.sum(dz)
db
```
# Optimization
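The optimization cells themselves are not included in this excerpt. As a hedged sketch, plain batch gradient descent using the `dw` and `db` gradients derived above would look like the following (the `sigmoid` definition is repeated for self-containment, and the learning rate is an arbitrary illustrative choice):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.1):
    """Plain batch gradient descent for logistic regression."""
    m = X.shape[1]
    costs = []
    for i in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)                       # forward pass
        cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
        dw = np.dot(X, (A - Y).T) / m                         # gradients, as derived above
        db = np.sum(A - Y) / m
        w = w - learning_rate * dw                            # update step
        b = b - learning_rate * db
        costs.append(cost)
    return w, b, costs
```

On the toy `w, b, X, Y` used earlier, the recorded costs decrease steadily over iterations.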
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import os
data_dir = os.path.join(os.pardir, "data")
raw_dir = os.path.join(data_dir, "raw")
```
## Training Data
```
train = pd.read_csv(os.path.join(raw_dir, "train", "train.csv"), nrows=10000000,
dtype={'acoustic_data': np.int16, 'time_to_failure': np.float64})
train.shape
for n in range(5):
print(train["time_to_failure"].values[n])
fig, ax = plt.subplots(2,1, figsize=(10,6))
ax[0].plot(train.index.values, train["time_to_failure"].values, c="darkred")
ax[0].set_title("Time to quake of 10 Mio rows")
ax[0].set_xlabel("Index")
ax[0].set_ylabel("Time to quake in ms");
ax[1].plot(train.index.values, train["acoustic_data"].values, c="mediumseagreen")
ax[1].set_title("Signal of 10 Mio rows")
ax[1].set_xlabel("Index")
ax[1].set_ylabel("Acoustic Signal");
plt.tight_layout()
fig, ax = plt.subplots(3,1,figsize=(20,18))
ax[0].plot(train.index.values[0:50000], train["time_to_failure"].values[0:50000], c="Red")
ax[0].set_xlabel("Index")
ax[0].set_ylabel("Time to quake")
ax[0].set_title("What does the second quaketime pattern look like?")
ax[1].plot(train.index.values[0:49999], np.diff(train["time_to_failure"].values[0:50000]))
ax[1].set_xlabel("Index")
ax[1].set_ylabel("Difference between quaketimes")
ax[1].set_title("Are the jumps always the same?")
ax[2].plot(train.index.values[0:4000], train["time_to_failure"].values[0:4000])
ax[2].set_xlabel("Index from 0 to 4000")
ax[2].set_ylabel("Time to Quake")
ax[2].set_title("How does the quaketime change within the first block?");
```
## Test Data
```
test_dir = os.path.join(raw_dir, "test")
test_files = os.listdir(test_dir)
print(test_files[0:5])
len(test_files)
seg = pd.read_csv(os.path.join(test_dir, test_files[0]))
seg.head()
fig, ax = plt.subplots(4,1, figsize=(10,15))
for n in range(4):
seg = pd.read_csv(os.path.join(test_dir, test_files[n]))
ax[n].plot(seg["acoustic_data"].values, c="mediumseagreen")
ax[n].set_xlabel("Index")
ax[n].set_ylabel("Signal")
ax[n].set_ylim([-300, 300])
ax[n].set_title("Test {}".format(test_files[n]));
```
## Train signal distribution
```
train["acoustic_data"].describe()
low = train["acoustic_data"].mean() - 3 * train["acoustic_data"].std()
high = train["acoustic_data"].mean() + 3 * train["acoustic_data"].std()
index = (train["acoustic_data"] > low) & (train["acoustic_data"] < high)
sns.distplot(train.loc[index, "acoustic_data"], kde=False, bins=150)
np.histogram(train["acoustic_data"])
f, ax = plt.subplots()
sns.distplot(train["acoustic_data"], kde=False, bins=150, ax=ax)
ax.set_ylim([0, 100])
stepsize = np.diff(train["time_to_failure"])
train = train.drop(train.index[len(train)-1])
train = train.assign(stepsize = stepsize)
train.stepsize = train.stepsize.apply(lambda l: np.round(l, 10))
stepsize_counts = train.stepsize.value_counts()
stepsize_counts
```
The giant stepsize of 11.54 seconds is the jump directly after the earthquake. Otherwise, the vast majority of stepsizes are 1 ns or very close to it. There are several hundred stepsizes that are close to 1 ms.
# This notebook is to test a single batch run in ADAM
```
from adam import Batch
from adam import Batches
from adam import BatchRunManager
from adam import PropagationParams
from adam import OpmParams
from adam import ConfigManager
from adam import ProjectsClient
from adam import RestRequests
from adam import AuthenticatingRestProxy
import time
import os
```
This sets up authenticated access to the server. It needs to be done before pretty much everything you want to do with ADAM.
```
# ConfigManager loads the config set up via adamctl.
# See the README at https://github.com/B612-Asteroid-Institute/adam_home/blob/master/README.md
config_manager = ConfigManager()
# Change the "dev" to a different adamctl config name to have your notebook talk to a different ADAM server,
# or comment out to use the default server, prod.
config_manager.set_default_env('dev')
config = config_manager.get_config()
print(f"Using the {config_manager.get_default_env()} ADAM config")
```
## Example Inputs
```
# 6x1 state vector (position [km], velocity [km/s])
state_vec = [130347560.13690618,
-74407287.6018632,
-35247598.541470632,
23.935241263310683,
27.146279819258538,
10.346605942591514]
# Lower triangular covariance matrix (21 elements in a list)
covariance = [3.331349476038534e-04,
4.618927349220216e-04, 6.782421679971363e-04,
-3.070007847730449e-04, -4.221234189514228e-04, 3.231931992380369e-04,
-3.349365033922630e-07, -4.686084221046758e-07, 2.484949578400095e-07, 4.296022805587290e-10,
-2.211832501084875e-07, -2.864186892102733e-07, 1.798098699846038e-07, 2.608899201686016e-10, 1.767514756338532e-10,
-3.041346050686871e-07, -4.989496988610662e-07, 3.540310904497689e-07, 1.869263192954590e-10, 1.008862586240695e-10, 6.224444338635500e-10]
```
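For reference, the 21-element list packs the 6x6 symmetric covariance row by row in lower-triangular order. A small helper (an illustrative sketch, not part of the ADAM SDK) expands such a list back into a full matrix:

```python
import numpy as np

def expand_lower_triangular(flat, n=6):
    """Expand a row-major lower-triangular list into a full symmetric n x n matrix."""
    assert len(flat) == n * (n + 1) // 2
    mat = np.zeros((n, n))
    mat[np.tril_indices(n)] = flat          # np.tril_indices is row-major, matching the list
    return mat + np.tril(mat, -1).T         # mirror the strictly-lower part

# demo with a counting pattern: row 0 gets 1; row 1 gets 2, 3; and so on
demo = expand_lower_triangular(list(range(1, 22)))
print(demo.shape)  # (6, 6)
```

Applied to the `covariance` list above, this yields the symmetric position/velocity covariance the 21 elements encode.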
### Set Parameters
Commented parameters are optional. Uncomment to use.
```
propagation_params = PropagationParams({
'start_time': '2017-10-04T00:00:00Z', # propagation start time in ISO format
'end_time': '2017-10-11T00:00:00Z', # propagation end time in ISO format
'project_uuid': config['workspace'],
# 'step_size': 60 * 60, # step size (seconds)
# 'propagator_uuid': '00000000-0000-0000-0000-000000000002', # force model
# 'description': 'some description' # description of run
})
opm_params = OpmParams({
'epoch': '2017-10-04T00:00:00Z',
'state_vector': state_vec,
# 'mass': 500.5, # object mass
# 'solar_rad_area': 25.2, # object solar radiation area (m^2)
# 'solar_rad_coeff': 1.2, # object solar radiation coefficient
# 'drag_area': 33.3, # object drag area (m^2)
# 'drag_coeff': 2.5, # object drag coefficient
# 'covariance': covariance, # object covariance
# 'perturbation': 3, # sigma perturbation on state vector
# 'hypercube': 'FACES', # hypercube propagation type
# 'originator': 'Robot', # originator of run
# 'object_name': 'TestObj', # object name
# 'object_id': 'test1234', # object ID
})
```
### Submit and Run Propagation
```
batch = Batch(propagation_params, opm_params)
print("Submitting OPM:")
print(batch.get_opm_params().generate_opm())
# Submit and wait until batch run is ready
auth_rest = AuthenticatingRestProxy(RestRequests())
batches_module = Batches(auth_rest)
BatchRunManager(batches_module, [batch]).run()
```
### Get Status and Parts Count
```
# Get final status and parts count
parts_count = batch.get_state_summary().get_parts_count()
print("Final state: %s, part count %s\n" % (batch.get_calc_state(), parts_count))
```
### Get Ephemeris of Specified Part
```
# Get ephemeris of specified part
part_to_get = 0
eph = batch.get_results().get_parts()[part_to_get].get_ephemeris()
print("Ephemeris:")
print(eph)
```
### Get ending state vector
```
# Get the end state vector (uncomment to use)
end_state_vector = batch.get_results().get_end_state_vector()
print("State vector at the end of propagation:")
print(end_state_vector)
```
#Price Momentum Factor Algorithm
By Gil Wassermann
Strategy taken from "130/30: The New Long-Only" by Andrew Lo and Pankaj Patel
Part of the Quantopian Lecture Series:
* www.quantopian.com/lectures
* github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.
Let us imagine that we are traders at a large bank, watching our screens as stock prices fluctuate up and down. Suddenly, everyone around us is buying one particular security. Demand has increased, so the stock price increases. We panic. Is there some information that we missed? Are we out of the loop? In our panic, we blindly buy some shares so we do not miss the boat on the next big thing. Demand further increases as a result of the hype surrounding the stock, driving up the price even more.
Now let us take a step back. From the observational perspective of a quant, the price of the security is increasing because of the animal spirits of investors. In essence, the price is going up because the price is going up. As quants, if we can identify these irrational market forces, we can profit from them.
In this notebook we will go step-by-step through the construction of an algorithm to find and trade equities experiencing momentum in price.
First, let us import all the necessary libraries for our algorithm.
```
import numpy as np
import pandas as pd
from scipy.signal import argrelmin, argrelmax
import statsmodels.api as sm
import talib
import matplotlib.pyplot as plt
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Latest
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.research import run_pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor
```
#Price Momentum
In this notebook, we will use indicators outlined in "130/30: The New Long-Only" by Andrew Lo and Pankaj Patel and combine them to create a single factor. It should be clarified that we are looking for long-term momentum as opposed to intra-day momentum. These indicators are:
* Slope of the 52-Week Trendline (20-day Lag)
* Percent Above 260-Day Low (20-day Lag)
* 4/52-Week Oscillator (20-day Lag)
* 39-Week Return (20-day Lag)
##Lag
One thing that all of the indicators have in common is that they are calculated using a 20-day lag. This lag is a way of smoothing out the stock signal so that we can filter out noise and focus on concrete, underlying trends. To calculate the lag, we take our desired data series and compute its 20-day simple moving average, which is the arithmetic mean of the series' last 20 entries.
Let's see an example of this for the closing price of Apple (AAPL) stock from August 2014 to August 2015. We will abstract out the lag calculation into a helper function because we will be needing it so often in the algorithm.
NB: we remove the first 20 entries of the results, as these will always be undefined (here, NaN) because there is not a 20-day window with which to calculate the lag. We also have a check to determine if the entire column of data is NaN, as this can cause issues with the TA-Lib library.
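For readers without TA-Lib installed, the same lag can be sketched in plain NumPy. Assuming a NaN-free series, this matches `talib.SMA(col, 20)[20:]` used below:

```python
import numpy as np

def lag_sma(col, window=20):
    """Simple moving average with the first `window` undefined entries dropped.

    Equivalent to talib.SMA(col, window)[window:] for a NaN-free series.
    """
    kernel = np.ones(window) / window
    # 'valid' convolution yields rolling means; drop the first so exactly
    # `window` leading entries are lost, matching the TA-Lib helper
    return np.convolve(col, kernel, mode='valid')[1:]

prices = np.linspace(100.0, 120.0, 60)   # toy upward-drifting close series
lagged = lag_sma(prices)
print(len(prices) - len(lagged))  # 20 entries lost to the lag window
```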
```
# check if entire column is NaN. If yes, return True
def nan_check(col):
if np.isnan(np.sum(col)):
return True
else:
return False
# helper to calculate lag
def lag_helper(col):
# TA-Lib raises an error if whole colum is NaN,
# so we check if this is true and, if so, skip
# the lag calculation
if nan_check(col):
return np.nan
# 20-day simple moving average
else:
return talib.SMA(col, 20)[20:]
AAPL_frame = get_pricing('AAPL', start_date='2014-08-08', end_date='2015-08-08', fields='close_price')
# convert to np.array for helper function and save index of timeseries
AAPL_index = AAPL_frame.index
AAPL_frame = AAPL_frame.as_matrix()
# calculate lag
AAPL_frame_lagged = lag_helper(AAPL_frame)
plt.plot(AAPL_index, AAPL_frame, label='Close')
plt.plot(AAPL_index[20:], AAPL_frame_lagged, label='Lagged Close')
plt.legend(loc=2)
plt.xlabel('Date')
plt.title('Close Prices vs Close Prices (20-Day Lag)')
plt.ylabel('AAPL Price');
```
As you can see from the graph, the lagged closing prices follow the same general pattern as the unlagged prices, but with less extreme peaks and troughs. For the rest of the notebook we will use lagged prices, as we are interested in long-term trends.
## Slope of 52-Week Trendline
One of the oldest indicators of price momentum is the trendline. The basic idea is to create a bounding line around stock prices that predict when a price should pivot. A trendline that predicts a ceiling is called a resistance trendline, and one that predicts a floor is a support trendline.
To calculate a support trendline here, we take a lagged series and find its pronounced local minima (here, a local minimum is defined as a data point lower than the five preceding and the five following points). We then connect the first local minimum and the last local minimum by a straight line. For a resistance trendline, the process is the same, except it uses local maxima. This is just one of many methodologies for calculating trendlines.
Let us code up a function to return the gradient of the trendline. We will include a boolean variable `support` that, when set to `True` gives a support trendline and when set to `False` gives a resistance trendline. Let us have a look at the same dataset of AAPL stock and plot its trendlines.
NB: The y-intercepts used here are purely aesthetic and have no meaning as the indicator itself is only based on the slope of the trendline
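Before wiring this into a Pipeline factor, the core calculation can be sketched standalone. Here local minima are found with a plain NumPy scan (a stand-in for `scipy.signal.argrelmin` with `order=5`), applied to a synthetic rising, oscillating series:

```python
import numpy as np

def local_minima(col, order=5):
    """Indices i where col[i] is strictly below the `order` points on each side."""
    idx = []
    for i in range(order, len(col) - order):
        window = np.concatenate([col[i - order:i], col[i + 1:i + order + 1]])
        if np.all(col[i] < window):
            idx.append(i)
    return idx

def support_slope(col, order=5):
    """Gradient of the line joining the first and last pronounced local minima."""
    minima = local_minima(col, order)
    if len(minima) < 2:
        return np.nan
    return (col[minima[-1]] - col[minima[0]]) / (minima[-1] - minima[0])

t = np.arange(200.0)
series = 100 + 0.1 * t + 5 * np.sin(t / 8.0)  # uptrend of 0.1/day plus oscillation
print(support_slope(series) > 0)  # successive troughs climb with the trend
```

Because each trough sits roughly 0.1 per day higher than the last, the recovered support slope is close to the underlying trend of 0.1.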
```
# Custom Factor 1 : Slope of 52-Week trendline
def trendline_function(col, support):
# NaN check for speed
if nan_check(col):
return np.nan
# lag transformation
col = lag_helper(col)
# support trendline
if support:
# get local minima
minima_index = argrelmin(col, order=5)[0]
# make sure line can be drawn
if len(minima_index) < 2:
return np.nan
else:
# return gradient
return (col[minima_index[-1]] - col[minima_index[0]]) / (minima_index[-1] - minima_index[0])
# resistance trendline
else:
# get local maxima
maxima_index = argrelmax(col, order=5)[0]
if len(maxima_index) < 2:
return np.nan
else:
return (col[maxima_index[-1]] - col[maxima_index[0]]) / (maxima_index[-1] - maxima_index[0])
# make the lagged frame the default
AAPL_frame = AAPL_frame_lagged
# use day count rather than dates to ensure straight lines
days = list(range(0,len(AAPL_frame),1))
# get points to plot
points_low = [(101.5 + (trendline_function(AAPL_frame, True)*day)) for day in days]
points_high = [94 + (trendline_function(AAPL_frame, False)*day) for day in days]
# create graph
plt.plot(days, points_low, label='Support')
plt.plot(days, points_high, label='Resistance')
plt.plot(days, AAPL_frame, label='Lagged Closes')
plt.xlim([0, max(days)])
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.legend(loc=2);
```
As you can see, at the beginning of the time frame these lines seem to describe the pivot points of the curve well. Therefore, it appears that betting against the stock when its price nears the resistance line and betting on the stock when its price nears the support line is a decent strategy. One issue is that these trendlines change over time. Even at the end of the above graph, it appears that the lines need to be redrawn in order to accommodate new prevailing price trends.
Now let us create our factor. In order to maintain flexibility between the types of trendlines, we need a way to pass the variable `support` into our Pipeline calculation. To do this, we create a function that returns a `CustomFactor` class, closing over a variable (`support`) that would otherwise not be visible inside the factor's `compute` method.
Also, we have abstracted out the trendline calculation so that we can use the builtin Numpy function `apply_along_axis` instead of creating and appending the results of the trendline calculation for each column to a list, which is a slower process.
```
def create_trendline_factor(support):
class Trendline(CustomFactor):
# 52 week + 20d lag
window_length = 272
inputs=[USEquityPricing.close]
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(trendline_function, 0, close, support)
return Trendline
temp_pipe_1 = Pipeline()
trendline = create_trendline_factor(support=True)
temp_pipe_1.add(trendline(), 'Trendline')
results_1 = run_pipeline(temp_pipe_1, '2015-08-08', '2015-08-08')
results_1.head(20)
```
## Percent Above 260-Day Low
This indicator is relatively self-explanatory. Whereas the trendline metric gives a more in-depth picture of price momentum (as the line itself shows how this momentum has evolved over time), this metric is fairly blunt. It is calculated as the price of a stock today less the minimum price in a retrospective 260-day window, all divided by that minimum price.
Let us have a look at a visualization of this metric for the same window of AAPL stock.
```
# Custom Factor 2 : % above 260 day low
def percent_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return (col[-1] - min(col)) / min(col)
print 'Percent above 260-day Low: %f%%' % (percent_helper(AAPL_frame) * 100)
# create the graph
plt.plot(days, AAPL_frame)
plt.axhline(min(AAPL_frame), color='r', label='260-Day Low')
plt.axhline(AAPL_frame[-1], color='y', label='Latest Price')
plt.fill_between(days, AAPL_frame)
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.xlim([0, max(days)])
plt.title('Percent Above 260-Day Low')
plt.legend();
```
Now we will create the `CustomFactor` for this metric. We will use the same abstraction process as above for run-time efficiency.
```
class Percent_Above_Low(CustomFactor):
# 260 days + 20 lag
window_length = 280
inputs=[USEquityPricing.close]
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(percent_helper, 0, close)
temp_pipe_2 = Pipeline()
temp_pipe_2.add(Percent_Above_Low(), 'Percent Above Low')
results_2 = run_pipeline(temp_pipe_2, '2015-08-08', '2015-08-08')
results_2.head(20)
```
NB: There are a lot of 0's here for this output. Although this might seem odd at first, it makes sense when we consider that there are many securities on a downwards trend. These stocks would be prime candidates to give a value of 0 as their current price is as low as it has ever been in this lookback window.
##4/52-Week Price Oscillator
This is calculated as the average close price over four weeks divided by the average close price over 52 weeks, all minus 1. To understand what this value measures, let us consider what happens to the oscillator in different scenarios. This particular oscillator gives a sense of relative performance between the previous four weeks and the previous year. For example, a value of 0.05 indicates that the stock's recent closes are outperforming its previous year's performance by 5%. A positive value is an indicator of momentum, as recent performance is stronger than normal; the larger the number, the more momentum.
As close prices cannot be negative, this oscillator is bounded below by -1 and unbounded above. Let us create a graph to show how, given a particular 52-week average, the value of the oscillator is affected by its four-week average.
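A couple of toy cases make the bounds concrete. This sketch simply applies the 4/52 formula to synthetic close arrays (20 trading days standing in for four weeks, 260 for a year):

```python
import numpy as np

def price_oscillator(closes, short=20):
    """4/52-week oscillator: short-window mean over full-window mean, minus 1."""
    return np.mean(closes[-short:]) / np.mean(closes) - 1

flat = np.full(260, 100.0)            # no recent move: oscillator is exactly 0
rising = np.concatenate([np.full(240, 100.0), np.full(20, 120.0)])

print(price_oscillator(flat))         # 0.0
print(price_oscillator(rising) > 0)   # True: recent closes outperform the year
```

However far recent closes fall, the value can approach but never reach -1, while a recent surge can push it arbitrarily high.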
```
# set 52-week average
av_52w = 100.
# create list of possible last four-week averages
av_4w = xrange(0,200)
# create list of oscillator values
osc = [(x / av_52w) - 1 for x in av_4w]
# draw graph
plt.plot(av_4w, osc)
plt.axvline(100, color='r', label='52-Week Average')
plt.xlabel('Four-Week Average')
plt.ylabel('4/52 Oscillator')
plt.legend();
```
Now let us create a Pipeline factor and observe some values.
```
# Custom Factor 3: 4/52 Price Oscillator
def oscillator_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return np.nanmean(col[-20:]) / np.nanmean(col) - 1
class Price_Oscillator(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 272
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(oscillator_helper, 0, close)
temp_pipe_3 = Pipeline()
temp_pipe_3.add(Price_Oscillator(), 'Price Oscillator')
results_3 = run_pipeline(temp_pipe_3, '2015-08-08', '2015-08-08')
results_3.head(20)
```
Once again, let us use AAPL stock as an example.
```
# get two averages
av_4w = np.nanmean(AAPL_frame[-20:])
av_52w = np.nanmean(AAPL_frame)
# create the graph
plt.plot(days, AAPL_frame)
plt.fill_between(days[-20:], AAPL_frame[-20:])
plt.axhline(av_4w, color='y', label='Four-week Average' )
plt.axhline(av_52w, color='r', label='Year-long Average')
plt.ylim([80,140])
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.title('4/52 Week Oscillator')
plt.legend();
```
The section shaded blue under the graph represents the last four weeks of close prices. The fact that this average (shown by the yellow line) is greater than the year-long average (shown by the red line), means that the 4/52 week oscillator for this date will be positive. This fact is backed by our pipeline output, which gives the value of the metric to be 9.4%.
##39-Week Return
This is calculated as the difference between the price today and the price 39 weeks prior, all divided by the price 39 weeks prior.
Although returns might seem too ubiquitous a metric to be useful or special, the important thing to highlight here is the window length chosen. By choosing a larger window length (here, 39 weeks) as opposed to daily returns, we see larger fluctuations in value. This is because a larger time window exposes the metric to larger trends and higher volatility.
In the graph below, we illustrate this point by plotting returns calculated over different time windows. To do this we will look at a AAPL close prices between 2002 and 2016. We will also mark important dates in the history of Apple in order to highlight this metric's descriptive power for larger trends.
NB: 39-week return is not a metric that is event driven. The inclusion of these dates is illustrative as opposed to predictive.
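The windowed-return calculation used below can be written as one helper. This is a sketch using `np.roll`, mirroring the cells that follow (215 trading days stands in for 39 weeks, as in the notebook), demonstrated on a toy random-walk price series:

```python
import numpy as np

def windowed_returns(prices, window):
    """Simple returns over a lookback of `window` observations."""
    shifted = np.roll(prices, window)
    # the first `window` entries wrap around and are meaningless, so drop them
    return ((prices - shifted) / shifted)[window:]

rng = np.random.RandomState(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.randn(1000)))  # toy geometric random walk
daily = windowed_returns(prices, 1)
long_run = windowed_returns(prices, 215)
print(long_run.std() > daily.std())  # True: longer windows swing harder
```

This is exactly the effect visible in the plot below: the 39-week return curve has far greater amplitude than the 1-day curve.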
```
# create a new longer frame of AAPL close prices
AAPL_frame = get_pricing('AAPL', start_date='2002-08-08', end_date='2016-01-01', fields='close_price')
# use dates as index
AAPL_index = AAPL_frame.index[20:]
AAPL_frame = lag_helper(AAPL_frame.as_matrix())
# 1d returns
AAPL_1d_returns = ((AAPL_frame - np.roll(AAPL_frame, 1))/ np.roll(AAPL_frame,1))[1:]
# 1w returns
AAPL_1w_returns = ((AAPL_frame - np.roll(AAPL_frame, 5))/ np.roll(AAPL_frame, 5))[5:]
# 1m returns
AAPL_1m_returns = ((AAPL_frame - np.roll(AAPL_frame, 30))/ np.roll(AAPL_frame, 30))[30:]
# 39w returns
AAPL_39w_returns = ((AAPL_frame - np.roll(AAPL_frame, 215))/ np.roll(AAPL_frame, 215))[215:]
# plot close prices
plt.plot(AAPL_index[1:], AAPL_1d_returns, label='1-day Returns')
plt.plot(AAPL_index[5:], AAPL_1w_returns, label='1-week Returns')
plt.plot(AAPL_index[30:], AAPL_1m_returns, label='1-month Returns')
plt.plot(AAPL_index[215:], AAPL_39w_returns, label='39-week Returns')
# show events
# iPhone release
plt.axvline('2007-07-29')
# iPod mini 2nd gen. release
plt.axvline('2005-02-23')
# iPad release
plt.axvline('2010-04-03')
# iPhone 5 release
plt.axvline('2012-09-21')
# Apple Watch
plt.axvline('2015-04-24')
# labels
plt.xlabel('Days')
plt.ylabel('Returns')
plt.title('Returns')
plt.legend();
```
There are a few important characteristics to note on the graph above.
Firstly, as we expected, the amplitude of the signal of returns with a longer window length is larger.
Secondly, these product releases, many of which were announced several months beforehand, all lie in or adjacent to a peak in the 39-week return series. Therefore, it would seem that this window length is a useful tool for capturing information on larger trends.
Now let us create the custom factor and run the Pipeline.
```
# Custom Factor 4: 39-week Returns
def return_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return (col[-1] - col[-215]) / col[-215]
class Return_39_Week(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 235
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(return_helper, 0, close)
temp_pipe_4 = Pipeline()
temp_pipe_4.add(Return_39_Week(), '39 Week Return')
results_4 = run_pipeline(temp_pipe_4, '2015-08-08','2015-08-08')
results_4.head(20)
```
##Aggregation
Let us create the full Pipeline. Once again, we will need a proxy for the S&P 500 for the ordering logic. Also, given the large window lengths needed for the algorithm, we will employ the trick of multiple outputs per factor. This is explained in detail here (https://www.quantopian.com/posts/new-feature-multiple-output-pipeline-custom-factors). Instead of having to process several dataframes, we only need to deal with one large one and then apply our helper functions. This will speed up our computation considerably in the backtester.
```
# This factor creates the synthetic S&P500
class SPY_proxy(CustomFactor):
inputs = [morningstar.valuation.market_cap]
window_length = 1
def compute(self, today, assets, out, mc):
out[:] = mc[-1]
# using helpers to boost speed
class Pricing_Pipe(CustomFactor):
inputs = [USEquityPricing.close]
outputs = ['trendline', 'percent', 'oscillator', 'returns']
window_length=280
def compute(self, today, assets, out, close):
out.trendline[:] = np.apply_along_axis(trendline_function, 0, close[-272:], True)
out.percent[:] = np.apply_along_axis(percent_helper, 0, close)
out.oscillator[:] = np.apply_along_axis(oscillator_helper, 0, close[-272:])
out.returns[:] = np.apply_along_axis(return_helper, 0, close[-235:])
def Data_Pull():
# create the pipeline for the data pull
Data_Pipe = Pipeline()
# create SPY proxy
Data_Pipe.add(SPY_proxy(), 'SPY Proxy')
# run all on same dataset for speed
trendline, percent, oscillator, returns = Pricing_Pipe()
# add the calculated values
Data_Pipe.add(trendline, 'Trendline')
Data_Pipe.add(percent, 'Percent')
Data_Pipe.add(oscillator, 'Oscillator')
Data_Pipe.add(returns, 'Returns')
return Data_Pipe
results = run_pipeline(Data_Pull(), '2015-08-08', '2015-08-08')
results.head(20)
```
We will now use the Lo/Patel ranking logic described in the Traditional Value notebook (https://www.quantopian.com/posts/quantopian-lecture-series-long-slash-short-traditional-value-case-study) in order to combine these descriptive metrics into a single factor.
NB: `standard_frame_compute` and `composite_score` have been combined into a single function called `aggregate_data`.
```
# limit effect of outliers
def filter_fn(x):
if x <= -10:
x = -10.0
elif x >= 10:
x = 10.0
return x
# combine data
def aggregate_data(df):
# basic clean of dataset to remove infinite values
df = df.replace([np.inf, -np.inf], np.nan)
df = df.dropna()
# need standardization params from synthetic S&P500
df_SPY = df.sort(columns='SPY Proxy', ascending=False)
# create separate dataframe for SPY
# to store standardization values
df_SPY = df_SPY.head(500)
# get dataframes into numpy array
df_SPY = df_SPY.as_matrix()
# store index values
index = df.index.values
# get data into a numpy array for speed
df = df.as_matrix()
# get one empty row on which to build standardized array
df_standard = np.empty(df.shape[0])
for col_SPY, col_full in zip(df_SPY.T, df.T):
# summary stats for S&P500
mu = np.mean(col_SPY)
sigma = np.std(col_SPY)
col_standard = np.array(((col_full - mu) / sigma))
# create vectorized function (lambda equivalent)
fltr = np.vectorize(filter_fn)
col_standard = (fltr(col_standard))
# scale by the number of factors so the composite sum stays within [-10, 10]
col_standard = (col_standard / df.shape[1])
# attach calculated values as new row in df_standard
df_standard = np.vstack((df_standard, col_standard))
# get rid of first entry (empty scores)
df_standard = np.delete(df_standard, 0, 0)
# sum up transformed data
df_composite = df_standard.sum(axis=0)
# put into a pandas dataframe and connect numbers
# to equities via reindexing
df_composite = pd.Series(data=df_composite, index=index)
# sort descending
df_composite.sort(ascending=False)
return df_composite
ranked_scores = aggregate_data(results)
ranked_scores
```
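As an aside, the `filter_fn`/`np.vectorize` pair above performs the same clamping as `np.clip`, which is natively vectorized and avoids the Python-level loop. A quick equivalence check, using a standalone re-implementation of the clamp (not the pipeline code itself):

```python
import numpy as np

def filter_fn(x):
    # clamp a single score to the range [-10, 10], as in the pipeline above
    if x <= -10:
        return -10.0
    elif x >= 10:
        return 10.0
    return x

scores = np.array([-25.0, -10.0, -3.5, 0.0, 9.9, 10.0, 42.0])
via_vectorize = np.vectorize(filter_fn)(scores)
via_clip = np.clip(scores, -10.0, 10.0)
print(np.array_equal(via_vectorize, via_clip))  # True
```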
## Stock Choice
Now that we have our ranking system, let us have a look at the histogram of the ranked scores. This will allow us to see general trends in the metric and diagnose any issues with our ranking system as a factor. The red lines give the cut-off points for our trading baskets.
```
# histogram
ranked_scores.hist()
# baskets
plt.axvline(ranked_scores[26], color='r')
plt.axvline(ranked_scores[-6], color='r')
plt.xlabel('Ranked Scores')
plt.ylabel('Frequency')
plt.title('Histogram of Ranked Scores of Stock Universe');
```
Although there does appear to be some positive skew, this looks to be a robust metric as the tails of this distribution are very thin. A thinner tail means that our ranking system has identified special characteristics about our stock universe possessed by only a few equities. More thorough statistical analysis would have to be conducted in order to see if this strategy could generate good alpha returns. This robust factor analysis will be covered in a later notebook.
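The skew observation above can be quantified rather than eyeballed. A minimal sketch of the third standardized moment on synthetic data (not the actual ranked scores): a symmetric distribution should score near 0, while a right-skewed one scores well above it.

```python
import numpy as np

def sample_skewness(x):
    # third standardized moment: E[(x - mu)^3] / sigma^3
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 3) / sigma ** 3

rng = np.random.default_rng(0)
symmetric = rng.normal(size=100_000)          # skewness near 0
right_skewed = rng.exponential(size=100_000)  # population skewness = 2
print(sample_skewness(symmetric), sample_skewness(right_skewed))
```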
Please see the attached algorithm for a full implementation!
*The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory or other services by Quantopian.*
*In addition, the content of the website neither constitutes investment advice nor offers any opinion with respect to the suitability of any security or any specific investment. Quantopian makes no guarantees as to accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.*
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from PIL import Image
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from wordcloud import WordCloud,STOPWORDS, ImageColorGenerator
from sklearn.feature_extraction import text
import nltk
from nltk.tokenize import word_tokenize
from nltk import pos_tag, bigrams, FreqDist, ne_chunk
import detectEnglish
```
### Step 1: Exploratory Data Analysis
```
listing = pd.read_csv('listings.csv')
calendar = pd.read_csv('calendar.csv')
review = pd.read_csv('reviews.csv')
print(listing.shape)
print(calendar.shape)
print(review.shape)
```
#### a. Inspect Listing Dataset
```
X_cols = ['neighbourhood_group_cleansed','neighbourhood_cleansed','property_type','room_type','bed_type','beds','host_is_superhost','host_response_time']
listing.describe()
#number of unique hosts
unique_host = listing['host_id'].unique()
'Number of unique hosts in listing dataset:', len(unique_host)
#number of nulls columns in listing
null_col = listing.columns[listing.isna().sum()>0]
col_null_val = listing[null_col].isna().sum().sort_values(ascending=False)
per_null_val = (listing[null_col].isna().sum()/listing.shape[0]*100).sort_values(ascending=False)
nulls_listing_df = pd.DataFrame(col_null_val,columns=['#Nulls'])
nulls_listing_df['%Nulls']=per_null_val
print('Number of columns with missing data:',nulls_listing_df.shape[0])
nulls_listing_df
#super host %
print(pd.DataFrame(listing.groupby('host_is_superhost')['host_id'].size()/listing.shape[0]*100))
```
#### b. Calendar Dataset
```
#how many unique listings in calendar:
unique_listing_calendar = calendar['listing_id'].unique()
'Number of unique listings in calendar dataset:', len(unique_listing_calendar)
```
#### c. Review Dataset
```
print('Number of unique listings in review dataset:',len(review['listing_id'].unique()))
print('Number of reviews', review.shape[0])
review.describe()
# Number of missing data in comments field (review dataset)
print(review['comments'].isna().sum())
```
Observations:
- 44 columns have missing data in the listing dataset
- No missing data in the calendar dataset
- Only 18 comments are missing from the review dataset
### Price prediction
```
X= listing[X_cols]
y=listing['price']
```
### Sentiment Analysis
```
#drop missing comments
review = review.dropna(subset =['comments'],how='any')
review.shape[0]
#detect non-english comments
review['detect_Eng'] = review['comments'].apply(lambda row: detectEnglish.isEnglish(row))
review.head()
non_eng = review[review['detect_Eng']==False]
print('There are {} non English comments'.format(non_eng.shape[0]))
non_eng['comments'].tail(10)
# Remove non-english comments from review
review = review[review['detect_Eng']==True]
review.shape[0]
#Remove 'This is an automated posting' comments which are not real comments
review['automated_posting'] = review['comments'].apply(lambda row: "This is an automated posting" in row)
auto_posting = review[review['automated_posting']==True]
auto_posting.shape[0]
review = review[review['automated_posting']==False]
review.shape[0]
analyzer = SentimentIntensityAnalyzer()
review['review subjectivity Textblob']= review['comments'].apply(lambda row:TextBlob(row).sentiment.subjectivity )
review['review polarity Textblob']= review['comments'].apply(lambda row:TextBlob(row).sentiment.polarity)
review['review polarity Vader']= review['comments'].apply(lambda row:analyzer.polarity_scores(row)['compound'])
review['Average Polarity']=(review['review polarity Vader'] + review['review polarity Textblob'])/2
review['review Sentiment Textblob'] = np.where(review['review polarity Textblob']>= 0.01, 1, \
(np.where(review['review polarity Textblob']<= -0.01, -1, 0)))
review['review Sentiment Vader'] = np.where(review['review polarity Vader']>= 0.05, 1, \
(np.where(review['review polarity Vader']<=-0.05, -1, 0)))
print(review['review Sentiment Textblob'].value_counts())
print(review['review Sentiment Vader'].value_counts())
review['Sentiment (lexicon)'] = np.where(review['Average Polarity']>0, 'Positive',\
(np.where(review['Average Polarity']<0, 'Negative','Neutral')))
print('Combined sentiment from both methods:',review['Sentiment (lexicon)'].value_counts())
review['Average Polarity'].describe()
print('Random comments with the highest positive sentiment: \n')
cl_positive = review.loc[review['Average Polarity']>0.9,['comments']].sample(6).values
for c in cl_positive:
print("*"+c[0])
print('Random comments with the highest neutral sentiment: \n')
cl_neutral = review.loc[review['Average Polarity']==0,['comments']].sample(5).values
for c in cl_neutral:
print("*"+c[0])
print('Random comments with the highest negative sentiment: \n')
cl_negative = review.loc[review['Average Polarity']<-0.5,['comments']].sample(1).values
for c in cl_negative:
print("*"+c[0])
```
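The nested `np.where` calls above implement a three-way threshold on the polarity score. The same mapping, isolated, with the cutoffs used above (±0.01 for TextBlob, ±0.05 for VADER's compound score):

```python
import numpy as np

def polarity_to_label(polarity, threshold):
    """Map continuous polarity scores to +1 / 0 / -1 using a symmetric threshold."""
    polarity = np.asarray(polarity, dtype=float)
    return np.where(polarity >= threshold, 1,
                    np.where(polarity <= -threshold, -1, 0))

scores = np.array([0.8, 0.04, 0.0, -0.04, -0.6])
print(polarity_to_label(scores, 0.05))  # [ 1  0  0  0 -1]
```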
### Wordcloud
```
eng_stopword = set(text.ENGLISH_STOP_WORDS)
review['tidy_cm'] = review['comments'].apply(lambda x: ' '.join([w for w in str(x).split() if len(w)>1]))
review['tidy_cm'] = review['tidy_cm'].str.split().apply(lambda x: ' '.join(k for k in x if k.lower() not in eng_stopword))
review['tidy_cm'] = review['tidy_cm'].str.replace("n't", " not")
review['tidy_cm'] = review['tidy_cm'].str.replace("'s", " ")
review['tidy_cm'] = review['tidy_cm'].str.replace(r"[^a-zA-Z']", " ")
def get_top_n_bigram(corpus,gram, n):
vec = text.CountVectorizer(ngram_range=(gram, gram),stop_words = eng_stopword )
bag_of_words = vec.fit_transform(corpus)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)
return words_freq[:n]
uni_grams = pd.DataFrame(get_top_n_bigram(review['tidy_cm'],1,40), columns=['Words','count'])
bi_grams = pd.DataFrame(get_top_n_bigram(review['tidy_cm'],2,30), columns=['Words','count'])
tri_grams = pd.DataFrame(get_top_n_bigram(review['tidy_cm'],3,30), columns=['Words','count'])
def plot_gram(data):
    data = data.sort_values(by=['count'], ascending=False)
sns.set(rc={'figure.figsize':(12,7)})
ax = sns.barplot(x='Words', y='count', data = data, palette = 'Blues_d');
ax.set_xticklabels(labels = data['Words'], rotation=90);
ax.set_title('Top grams from Reviews');
plot_gram(uni_grams)
plot_gram(bi_grams)
plot_gram(tri_grams)
house_mask = np.array(Image.open("house3.png"))
comments = ' '.join([text.lower() for text in review['tidy_cm']])
comments_dist = nltk.FreqDist(word for word in word_tokenize(comments))
comments_dist
wordcloud = WordCloud(width=800, height=500,background_color='white', max_font_size=80, stopwords=set(),random_state=42,\
mask=house_mask,contour_width=0.5,contour_color='Green')
wordcloud.generate_from_frequencies(comments_dist)
plt.figure(figsize=(15, 10))
plt.imshow(wordcloud,interpolation="bilinear")
plt.axis('off')
plt.show()
def process1(pos_tag_list):
processed_list = []
i=0
for t1, t2 in zip(pos_tag_list,pos_tag_list[1:]):
if t1[0]==("able"):
processed_list.append(t1[0]+"-"+t2[0])
elif t1[0]==("unable"):
processed_list.append(t1[0]+"-"+t2[0])
elif t1[0] == "not" and (t2[1].startswith('JJ') or t2[1].startswith('VB')):
processed_list.append(t1[0]+"-"+t2[0])
elif t1[1].startswith('JJ') :
processed_list.append(t1[0])
elif t1[1].startswith('NN') :
processed_list.append(t1[0])
return processed_list
review['pos_tag_cm'] = review['tidy_cm'].apply(lambda row:' '.join(process1(pos_tag(word_tokenize(row.lower())))))
neg = review[review['Sentiment (lexicon)']=='Negative']
neg_comments = ' '.join([text.lower() for text in neg['pos_tag_cm']])
neg_dist = nltk.FreqDist(word for word in word_tokenize(neg_comments))
neg_dist
house_mask2 = np.array(Image.open("house4.png"))
wordcloud = WordCloud(width=800, height=500,background_color='white', max_font_size=80, colormap="copper", stopwords=set(),random_state=42,\
mask=house_mask2,contour_width=0.5,contour_color='orange')
wordcloud.generate_from_frequencies(neg_dist)
plt.figure(figsize=(15, 10))
plt.imshow(wordcloud,interpolation="bilinear")
plt.axis('off')
plt.show()
```
# Showcasing Dataset and PipelineParameter
This notebook demonstrates how a **FileDataset** or **TabularDataset** can be parametrized with **PipelineParameters** in an AML Pipeline. By parametrizing datasets, you can dynamically run pipeline experiments with different datasets without any code change.
A common use case is building a training pipeline with a sample of your training data for quick iterative development. When you're ready to test and deploy your pipeline at scale, you can pass in your full training dataset to the pipeline experiment without making any changes to your training script.
## Azure Machine Learning and Pipeline SDK-specific imports
```
import azureml.core
from azureml.core import Workspace, Experiment, Dataset, RunConfiguration
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.environment import CondaDependencies
from azureml.data.dataset_consumption_config import DatasetConsumptionConfig
from azureml.widgets import RunDetails
from azureml.pipeline.core import PipelineParameter
from azureml.pipeline.core import Pipeline, PipelineRun
from azureml.pipeline.steps import PythonScriptStep
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep="\n")
```
## Create an Azure ML experiment
Let's create an experiment named "showcasing-dataset" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.
```
# Choose a name for the run history container in the workspace.
experiment_name = "showcasing-dataset"
source_directory = "."
experiment = Experiment(ws, experiment_name)
experiment
```
## Create or Attach an AmlCompute cluster
You will need to create a compute target for your pipeline run. In this tutorial, you use the default `AmlCompute` as your training compute resource.
```
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == "AmlCompute":
found = True
print("Found existing compute target.")
compute_target = cts[amlcompute_cluster_name]
if not found:
print("Creating a new compute target...")
provisioning_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
# vm_priority = 'lowpriority', # optional
max_nodes=4,
)
# Create the cluster.
compute_target = ComputeTarget.create(
ws, amlcompute_cluster_name, provisioning_config
)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output=True, timeout_in_minutes=10)
# For a more detailed view of current AmlCompute status, use get_status().
conda_dep = CondaDependencies()
conda_dep.add_pip_package("pandas")
run_config = RunConfiguration(conda_dependencies=conda_dep)
```
## Dataset Configuration
The following steps detail how to create a FileDataset and TabularDataset from an external CSV file, and configure them to be used by a Pipeline.
```
file_dataset = Dataset.File.from_files(
"https://dprepdata.blob.core.windows.net/demo/Titanic.csv"
)
file_pipeline_param = PipelineParameter(
name="file_ds_param", default_value=file_dataset
)
file_ds_consumption = DatasetConsumptionConfig(
"file_dataset", file_pipeline_param
).as_mount()
tabular_dataset = Dataset.Tabular.from_delimited_files(
"https://dprepdata.blob.core.windows.net/demo/Titanic.csv"
)
tabular_pipeline_param = PipelineParameter(
name="tabular_ds_param", default_value=tabular_dataset
)
tabular_ds_consumption = DatasetConsumptionConfig(
"tabular_dataset", tabular_pipeline_param
)
```
We will setup a training script to ingest our passed-in datasets and print their contents. **NOTE** the names of the datasets referenced inside the training script correspond to the `name` of their respective DatasetConsumptionConfig objects we defined above.
```
%%writefile train_with_dataset.py
from azureml.core import Run
input_file_ds_path = Run.get_context().input_datasets["file_dataset"]
with open(input_file_ds_path, "r") as f:
content = f.read()
print(content)
input_tabular_ds = Run.get_context().input_datasets["tabular_dataset"]
tabular_df = input_tabular_ds.to_pandas_dataframe()
print(tabular_df)
```
<a id='index1'></a>
## Create a Pipeline with a Dataset PipelineParameter
Note that `file_ds_consumption` and `tabular_ds_consumption` are specified as both arguments and inputs when creating the step.
```
train_step = PythonScriptStep(
name="train_step",
script_name="train_with_dataset.py",
arguments=["--param1", file_ds_consumption, "--param2", tabular_ds_consumption],
inputs=[file_ds_consumption, tabular_ds_consumption],
compute_target=compute_target,
source_directory=source_directory,
runconfig=run_config,
)
print("train_step created")
pipeline = Pipeline(workspace=ws, steps=[train_step])
print("pipeline with the train_step created")
```
<a id='index2'></a>
## Submit a Pipeline with a Dataset PipelineParameter
Pipelines can be submitted with default values of PipelineParameters by not specifying any parameters.
```
# Pipeline will run with default file_ds and tabular_ds
pipeline_run = experiment.submit(pipeline)
print("Pipeline is submitted for execution")
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion()
```
<a id='index3'></a>
## Submit a Pipeline with a different Dataset PipelineParameter value from the SDK
The training pipeline can be reused with different input datasets by passing them in as PipelineParameters
```
iris_file_ds = Dataset.File.from_files(
"https://raw.githubusercontent.com/Azure/MachineLearningNotebooks/"
"4e7b3784d50e81c313c62bcdf9a330194153d9cd/how-to-use-azureml/work-with-data/"
"datasets-tutorial/train-with-datasets/train-dataset/iris.csv"
)
iris_tabular_ds = Dataset.Tabular.from_delimited_files(
"https://raw.githubusercontent.com/Azure/MachineLearningNotebooks/"
"4e7b3784d50e81c313c62bcdf9a330194153d9cd/how-to-use-azureml/work-with-data/"
"datasets-tutorial/train-with-datasets/train-dataset/iris.csv"
)
pipeline_run_with_params = experiment.submit(
pipeline,
pipeline_parameters={
"file_ds_param": iris_file_ds,
"tabular_ds_param": iris_tabular_ds,
},
)
RunDetails(pipeline_run_with_params).show()
pipeline_run_with_params.wait_for_completion()
```
<a id='index4'></a>
## Dynamically Set the Dataset PipelineParameter Values using a REST Call
Let's publish the pipeline we created previously, so we can generate a pipeline endpoint. We can then submit the iris datasets to the pipeline REST endpoint by passing in their IDs.
```
published_pipeline = pipeline.publish(
name="Dataset_Pipeline",
description="Pipeline to test Dataset PipelineParameter",
continue_on_step_failure=True,
)
published_pipeline
published_pipeline.submit(
ws,
experiment_name="publishedexperiment",
pipeline_parameters={
"file_ds_param": iris_file_ds,
"tabular_ds_param": iris_tabular_ds,
},
)
from azureml.core.authentication import InteractiveLoginAuthentication
import requests
auth = InteractiveLoginAuthentication()
aad_token = auth.get_authentication_header()
rest_endpoint = published_pipeline.endpoint
print(
"You can perform HTTP POST on URL {} to trigger this pipeline".format(rest_endpoint)
)
# specify the param when running the pipeline
response = requests.post(
rest_endpoint,
headers=aad_token,
json={
"ExperimentName": "MyRestPipeline",
"RunSource": "SDK",
"DataSetDefinitionValueAssignments": {
"file_ds_param": {"SavedDataSetReference": {"Id": iris_file_ds.id}},
"tabular_ds_param": {"SavedDataSetReference": {"Id": iris_tabular_ds.id}},
},
},
)
try:
response.raise_for_status()
except Exception:
raise Exception(
"Received bad response from the endpoint: {}\n"
"Response Code: {}\n"
"Headers: {}\n"
"Content: {}".format(
rest_endpoint, response.status_code, response.headers, response.content
)
)
run_id = response.json().get("Id")
print("Submitted pipeline run: ", run_id)
published_pipeline_run_via_rest = PipelineRun(ws.experiments["MyRestPipeline"], run_id)
RunDetails(published_pipeline_run_via_rest).show()
published_pipeline_run_via_rest.wait_for_completion()
```
<a id='index5'></a>
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Universe Selection
by Gil Wassermann, Maxwell Margenot
Selecting the product space in which an algorithm trades can be as important as, if not more than, the strategy itself. In this lecture, we will walk through the basics of constructing a universe.
## What is a Universe?
On a high level, universe selection is the process of choosing the pool of securities upon which your algorithm will trade. For example, an algorithm designed to play with the characteristics of a universe consisting of technology equities may perform exceptionally well in that universe with the tradeoff of falling flat in other sectors. Experimenting with different universes by tweaking their components is an essential part of developing a trading strategy.
Using Pipeline and the full US Stock dataset, we have access to over 8000 securities to choose from each day. However, the securities within this basket are markedly different. Some are different asset classes, some belong to different sectors and super-sectors, some employ different business models, some practice different management styles, and so on. By defining a universe, a trader can narrow in on securities with one or more of these attributes in order to craft a strategy that is most effective for that subset of the population.
Without a properly-constructed universe, your algorithm may be exposed to risks that you just aren't aware of. For example, it could be possible that your universe selection methodology only selects a stock basket whose constituents do not trade very often. Let's say that your algorithm wants to place an order of 100,000 shares for a company that only trades 1,000 shares on a given day. The inability to fill this order or others might prevent you from achieving the optimal weights for your portfolio, thereby undermining your strategy. These risks can be controlled for by careful and thoughtful universe selection.
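One common way to control for that liquidity risk is to screen on average daily dollar volume before admitting a name to the universe. A minimal pandas sketch with hypothetical tickers and volumes (the rule that an order should be at most 1% of typical daily dollar volume is an illustrative assumption, not a prescription from this lecture):

```python
import pandas as pd

# hypothetical average daily share volume and price per asset
universe = pd.DataFrame({
    "price":      [50.0, 10.0, 200.0, 5.0],
    "avg_volume": [2_000_000, 1_000, 500_000, 80_000],
}, index=["AAA", "BBB", "CCC", "DDD"])

# average daily dollar volume; require enough to absorb our target order
universe["dollar_volume"] = universe["price"] * universe["avg_volume"]
target_order_dollars = 1_000_000
# keep names where our order is at most 1% of typical daily dollar volume
liquid = universe[universe["dollar_volume"] >= 100 * target_order_dollars]
print(list(liquid.index))  # ['AAA', 'CCC']
```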
In Zipline, universes are often implemented as a Pipeline screen. If you are not familiar with Pipeline, feel free to check out the [Pipeline Tutorial](https://www.quantrocket.com/code/?filter=zipline). Below is an example implementation of a universe that limits Pipeline output to the 500 securities with the largest revenue each day. This can be seen as a naive implementation of the Fortune500.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from zipline.pipeline.data import master
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.research import run_pipeline
from zipline.pipeline.data import sharadar
from zipline.pipeline.factors import CustomFactor
revenue = sharadar.Fundamentals.slice(dimension='ARQ', period_offset=0).REVENUE.latest
pipe = Pipeline(
columns={
'Revenue': revenue
},
screen=revenue.top(500)
)
res = run_pipeline(pipe, start_date='2016-01-04', end_date='2016-01-04', bundle='usstock-1d-bundle')
print("There are %d assets in this universe." % len(res))
res.head(10) # print 10 constituents
```
This is a good start, but again, it is a very naive universe. Normally, high revenue is a characteristic of a healthy, thriving company, but there are many other things that play into the construction of a good universe. While this idea has a reasonable economic basis, more analysis has to be conducted to determine the efficacy of this universe. There may be more subtle things occurring independently of the revenue of its constituent companies.
For the rest of this notebook, we will design our own universe, profile it and check its performance. Let's create the Lectures500!
## Lectures500
### Sector Exposure
If I create a universe that only looks at equities in the technology sector, my algorithm will have an extreme sector bias. Companies in the same industry sector are affected by similar macroeconomic trends and therefore their performance tends to be correlated. In the case of particular strategies, we may find the benefits of working exclusively within a particular sector greater than the downside risks, but this is not suitable for creating a general-purpose, quality universe.
Let's have a look at the sector breakdown of the Lectures500.
```
# Rename our universe to Lectures500
Lectures500 = revenue.top(500)
def get_sectors(day, universe, bundle):
pipe = Pipeline(columns={'Sector': master.SecuritiesMaster.usstock_Sector.latest}, screen=universe)
# Drop the datetime level of the index, since we only have one day of data
return run_pipeline(pipe, start_date=day, end_date=day, bundle=bundle).reset_index(level=0, drop=True)
def calculate_sector_counts(sectors):
counts = (sectors.groupby('Sector').size())
return counts
lectures500_sectors = get_sectors('2016-01-04', Lectures500, 'usstock-1d-bundle')
lectures500_counts = calculate_sector_counts(lectures500_sectors)
def plot_sector_counts(sector_counts):
bar = plt.subplot2grid((10,12), (0,0), rowspan=10, colspan=6)
pie = plt.subplot2grid((10,12), (0,6), rowspan=10, colspan=6)
# Bar chart
sector_counts.plot(
kind='bar',
color='b',
rot=30,
ax=bar,
)
bar.set_title('Sector Exposure - Counts')
# Pie chart
sector_counts.plot(
kind='pie',
colormap='Set3',
autopct='%.2f %%',
fontsize=12,
ax=pie,
)
pie.set_ylabel('') # This overwrites default ylabel, which is None :(
pie.set_title('Sector Exposure - Proportions')
plt.tight_layout();
plot_sector_counts(lectures500_counts)
```
From the above plots it is clear that there is a mild sector bias towards the consumer discretionary industry. Any big events that affect companies in this sector will have a large effect on this universe and any algorithm that uses it.
One option is to equal-weight the sectors, so that equities from each industry sector make up an identical proportion of the final universe. This, however, comes with its own disadvantages. In a sector-equal Lectures500, the universe would include some lower-revenue real estate equities at the expense of higher-revenue consumer discretionary equities.
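The sector-equal variant described above can be sketched with a groupby: rank within each sector and take the same number of names from each. Hypothetical data; `head(2)` stands in for the per-sector quota:

```python
import pandas as pd

stocks = pd.DataFrame({
    "sector":  ["Tech", "Tech", "Tech", "Energy", "Energy", "Health", "Health", "Health"],
    "revenue": [90, 80, 70, 60, 50, 40, 30, 20],
}, index=list("ABCDEFGH"))

# top 2 names per sector by revenue -> identical count from each sector
balanced = (stocks.sort_values("revenue", ascending=False)
                  .groupby("sector")
                  .head(2))
print(balanced.groupby("sector").size().to_dict())  # {'Energy': 2, 'Health': 2, 'Tech': 2}
```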
### Turnover
Another thing to consider when designing a universe is the rate at which the universe changes. Turnover is a way of measuring this rate of change. Turnover is defined as the number of equities to enter or exit the universe in a particular time window.
Let us imagine a universe with a turnover of 0. This universe would be completely unchanged by market movements. Moreover, stocks inappropriate for the universe would never be removed and stocks that should be included will never enter.
Conversely, imagine a universe that changes every one of its constituents every day. An algorithm built on this universe will be forced to sell its entire portfolio every day. This incurs transaction costs which erode returns.
When creating a universe, there is an inherent tradeoff between stagnation and sensitivity to the market.
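Turnover as defined here (entries plus exits per day) can be computed directly from daily membership flags. A toy sketch with three hypothetical assets over four days:

```python
import pandas as pd

# daily universe membership for three assets (1 = in universe)
membership = pd.DataFrame(
    {"AAA": [1, 1, 1, 1], "BBB": [0, 1, 1, 0], "CCC": [0, 0, 1, 1]},
    index=pd.date_range("2015-01-05", periods=4),
)

# entries + exits per day = sum of absolute day-over-day changes
turnover = membership.diff().abs().sum(axis=1).iloc[1:]
print(turnover.tolist())  # [1.0, 1.0, 1.0]
```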
Let's have a look at the turnover for the Lectures500!
```
res = run_pipeline(Pipeline(columns={'Lectures500' : Lectures500}), start_date='2015-01-01', end_date='2016-01-01', bundle='usstock-1d-bundle')
res = res.unstack().fillna(False).astype(int)
def calculate_daily_turnover(unstacked):
return (unstacked
.diff() # Get 1/0 (True/False) showing where values changed from previous day.
.abs() # take absolute value so that any turnover is a 1
.iloc[1:] # Drop first row, which is meaningless after diff().
.groupby(axis=1, level=0)
.sum()) # Group by universe and count number of 1 values in each row.
def plot_daily_turnover(unstacked):
# Calculate locations where the inclusion state of an asset changed.
turnover = calculate_daily_turnover(unstacked)
# Write the data to an axis.
ax = turnover.plot(figsize=(14, 8))
# Add style to the axis.
ax.grid(False)
ax.set_title('Changes per Day')
ax.set_ylabel('Number of Added or Removed Assets')
def print_daily_turnover_stats(unstacked):
turnover = calculate_daily_turnover(unstacked)
print(turnover.describe().loc[['mean', 'std', '25%', '50%', '75%', 'min', 'max']])
plot_daily_turnover(res)
print_daily_turnover_stats(res)
```
#### Smoothing
A good way to reduce turnover is through smoothing functions. Smoothing is the process of taking noisy data and aggregating it in order to analyze its underlying trends. When applied to universe selection, a good smoothing function prevents equities at the universe boundary from entering and exiting frequently.
One example of a potential smoothing function is a filter that finds equities that have passed the Lectures500 criteria for 16 or more days out of the past 21 days. We will call this filter `AtLeast16`. This aggregation of many days of data lends a certain degree of flexibility to the edges of our universe. If, for example, Equity XYZ is very close to the boundary for inclusion, in a given month, it may flit in and out of the Lectures500 day after day. However, with the `AtLeast16` filter, Equity XYZ is allowed to enter and exit the daily universe a maximum of 5 times before it is excluded from the smoothed universe.
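The `AtLeast16` idea can be sketched with a trailing rolling sum over daily pass/fail flags: an asset is in the smoothed universe on a given day only if it passed the criteria on at least 16 of the trailing 21 days. On a synthetic flag series that flickers at the boundary, the smoothed membership changes far less often than the raw one:

```python
import pandas as pd

# 30 days of daily pass/fail flags for one asset near the universe boundary
flags = pd.Series([1] * 10 + [0, 1] * 5 + [1] * 10)  # flickers in the middle

# smoothed membership: passed on at least 16 of the trailing 21 days
smoothed = flags.rolling(window=21).sum() >= 16

# raw membership changes 10 times; the smoothed version changes only once
print(int(flags.diff().abs().sum()), int(smoothed.astype(int).diff().abs().sum()))
```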
Let's apply a smoothing function to our universe and see its effect on turnover.
```
from zipline.pipeline.filters import AtLeastN
Lectures500 = AtLeastN(inputs=[Lectures500],
window_length=21,
N=16,)
res_smoothed = run_pipeline(Pipeline(columns={'Lectures500 Smoothed' : Lectures500}),
start_date='2015-01-01',
end_date='2016-01-01',
bundle='usstock-1d-bundle')
res_smoothed = res_smoothed.unstack().fillna(False).astype(int)
plot_daily_turnover(res_smoothed)
print_daily_turnover_stats(res_smoothed)
```
Looking at the metrics, we can see that the smoothed universe has a lower turnover than the original Lectures500. Since this is a good characteristic, we will add this logic to the universe.
NB: Smoothing can also be accomplished by downsampling.
---
**Next Lecture:** [The Capital Asset Pricing Model and Arbitrage Pricing Theory](Lecture30-CAPM-and-Arbitrage-Pricing-Theory.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# Job Sequencing with Integer Lengths
# Hamiltonian
We take the Hamiltonian from the paper below.
https://arxiv.org/abs/1302.5843
$\displaystyle H = H_A + H_B$
$\displaystyle H_A = A \sum_{i=1}^N \left( 1 - \sum_\alpha x_{i,\alpha} \right)^2 + A\sum_{\alpha=1}^m \left( \sum_{n=1}^M ny_{n,\alpha} + \sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right) \right)^2$
$\displaystyle H_B = B \sum_i L_ix_{i,1}$
# A small change to the Hamiltonian
We made a small change to the Hamiltonian because the original one does not yield good answers.
① We split the single coefficient $A$ in $H_A$ into two separate coefficients $A_1, A_2$, one per constraint term.
② We added a new term $\displaystyle A_1\sum_{\alpha}\left( 1 - \sum_n y_{n,\alpha} \right)^2$ to $H_A$ so that exactly one slack value $y_{n,\alpha}$ is selected per machine.
$\displaystyle H_A = A_1\sum_i \left( 1 - \sum_\alpha x_{i,\alpha} \right)^2 + A_1\sum_{\alpha}\left( 1 - \sum_n y_{n,\alpha} \right)^2 + A_2\sum_\alpha \left( \sum_n ny_{n,\alpha} + \sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right) \right)^2$
$\displaystyle = A_1\sum_i \left( -2 \sum_\alpha x_{i,\alpha} + \left( \sum_\alpha x_{i,\alpha} \right)^2 \right) + A_1\sum_\alpha \left( -2 \sum_n y_{n,\alpha} + \left( \sum_n y_{n,\alpha} \right)^2 \right)$
$\displaystyle + A_2\sum_\alpha \left( \left( \sum_n ny_{n,\alpha} \right)^2 + 2\left( \sum_n ny_{n,\alpha} \right)\left(\sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right)\right) +\left(\sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right)\right)^2 \right) + Const.$
$\displaystyle = A_1\sum_i \sum_\alpha \left( -x_{i,\alpha}^2 + \sum_{\beta \left( \gt \alpha \right)} 2x_{i,\alpha}x_{i, \beta} \right) + A_1\sum_\alpha \sum_n \left( -y_{n,\alpha}^2 + \sum_{m \left( \gt n \right) } 2y_{n,\alpha}y_{m, \alpha} \right) + A_2\sum_\alpha \sum_n \left( n^2y_{n, \alpha}^2 + \sum_{m \left( \gt n \right) } 2nmy_{n,\alpha}y_{m, \alpha} \right) $
$\displaystyle + A_2\sum_\alpha \sum_i \left( \left( \sum_n 2nL_i y_{n,\alpha} \left( x_{i,\alpha} - x_{i,1}\right) \right) + L_i^2 \left( x_{i,\alpha} - x_{i,1}\right)^2 + \sum_{j \left( \gt i \right) } 2L_iL_j \left( x_{i,\alpha} - x_{i,1}\right) \left( x_{j,\alpha} - x_{j,1}\right) \right) + Const.$
$\displaystyle =\sum_\alpha \sum_i \left( - A_1x_{i,\alpha}^2 + A_2L_i^2 \left( x_{i,\alpha} - x_{i,1}\right)^2 + \sum_{\beta \left( \gt \alpha \right) } 2A_1x_{i,\alpha}x_{i, \beta} + \sum_{j \left( \gt i \right) } 2A_2L_iL_j \left( x_{i,\alpha} - x_{i,1}\right) \left( x_{j,\alpha} - x_{j,1}\right)+ \sum_n 2A_2nL_i y_{n,\alpha} \left( x_{i,\alpha} - x_{i,1}\right) \right)$
$\displaystyle + \sum_\alpha \sum_n \left( \left( -A_1 + A_2n^2 \right) y_{n, \alpha}^2 + \sum_{m \left( \gt n \right) } 2\left(A_1+ A_2nm \right) y_{n,\alpha}y_{m, \alpha} \right) + Const.$
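Before reading the class below, it may help to check the flattened index conventions it uses: $x_{i,\alpha}$ lives at index $i \cdot m + \alpha$, and $y_{n,\alpha}$ at $\mathrm{offset} + \alpha \cdot \Delta_{max} + (n-1)$. A quick sanity check that this layout covers every QUBO variable exactly once (sizes taken from the example further below):

```python
n_jobs, n_machine, max_delta = 7, 3, 7
offset = n_jobs * n_machine

def u(i, alpha):
    # Binary x_{i,alpha}: job i assigned to machine alpha.
    return i * n_machine + alpha

def v(n, alpha):
    # Binary y_{n,alpha}: slack value n chosen on machine alpha.
    return offset + alpha * max_delta + (n - 1)

idx = [u(i, a) for i in range(n_jobs) for a in range(n_machine)]
idx += [v(n, a) for a in range(n_machine) for n in range(1, max_delta + 1)]

# Every index in [0, size) appears exactly once.
assert sorted(idx) == list(range(n_machine * (n_jobs + max_delta)))
```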
# QUBO class
```
import blueqat.wq as wq
import numpy as np
class Qubo():
def __init__(self, jobs, n_machine, max_delta, A1, A2, B):
self.__jobs = jobs
self.__n_jobs = len(jobs)
self.__n_machine = n_machine
self.__max_delta = max_delta
self.__A1 = A1
self.__A2 = A2
self.__B = B
self.__index_offset = self.__n_jobs * self.__n_machine
def __calc_sum_alpha_n_m(self, qubo, alpha, n):
A1 = self.__A1
A2 = self.__A2
for m in range(n + 1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
v_m_alpha = self.__index_offset + alpha * self.__max_delta + m - 1
qubo[v_n_alpha][v_m_alpha] += 2 * (A2 * n * m + A1)
def __calc_sum_alpha_n(self, qubo, alpha):
A1 = self.__A1
A2 = self.__A2
for n in range(1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
qubo[v_n_alpha][v_n_alpha] += (A2 * n ** 2 - A1)
self.__calc_sum_alpha_n_m(qubo, alpha, n)
def __calc_sum_alpha_i_beta(self, qubo, alpha, i):
A1 = self.__A1
for beta in range(alpha + 1, self.__n_machine):
u_i_alpha = i * self.__n_machine + alpha
u_i_beta = i * self.__n_machine + beta
qubo[u_i_alpha][u_i_beta] += 2 * A1
def __calc_sum_alpha_i_j(self, qubo, alpha, i):
A2 = self.__A2
Li = self.__jobs[i]
for j in range(i + 1, self.__n_jobs):
Lj = self.__jobs[j]
u_i_alpha = i * self.__n_machine + alpha
u_j_alpha = j * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
u_j_0 = j * self.__n_machine
qubo[u_i_alpha][u_j_alpha] += 2 * A2 * Li * Lj
qubo[u_i_alpha][u_j_0] += -2 * A2 * Li * Lj
qubo[u_i_0][u_j_alpha] += -2 * A2 * Li * Lj
qubo[u_i_0][u_j_0] += 2 * A2 * Li * Lj
def __calc_sum_alpha_i_n(self, qubo, alpha, i):
A2 = self.__A2
Li = self.__jobs[i]
u_i_alpha = i * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
for n in range(1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
qubo[u_i_alpha][v_n_alpha] += 2 * A2 * n * Li
qubo[u_i_0][v_n_alpha] += -2 * A2 * n * Li
def __calc_sum_alpha_i(self,qubo, alpha):
A1 = self.__A1
A2 = self.__A2
for i in range(self.__n_jobs):
u_i_alpha = i * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
Li = self.__jobs[i]
qubo[u_i_alpha][u_i_alpha] += -A1 + A2 * Li ** 2
qubo[u_i_0][u_i_0] += A2 * Li ** 2
qubo[u_i_0][u_i_alpha] += -2 * A2 * Li ** 2
self.__calc_sum_alpha_i_beta(qubo, alpha, i)
self.__calc_sum_alpha_i_j(qubo, alpha, i)
self.__calc_sum_alpha_i_n(qubo, alpha, i)
def __calc_constraint_func(self,qubo):
for alpha in range(self.__n_machine):
self.__calc_sum_alpha_i(qubo, alpha)
self.__calc_sum_alpha_n(qubo, alpha)
def __calc_objective_func(self,qubo):
B = self.__B
for i in range(self.__n_jobs):
u_i_0 = i * self.__n_machine
Li = self.__jobs[i]
qubo[u_i_0][u_i_0] += B * Li
def __calc_qubo(self, qubo):
self.__calc_constraint_func(qubo)
self.__calc_objective_func(qubo)
def get_qubo(self):
size = self.__n_machine * (self.__n_jobs + self.__max_delta)
qubo = np.zeros((size, size))
self.__calc_qubo(qubo)
return qubo
def show_answer(self, solution):
print(f"Solution is {solution}")
assigned_job_sizes = np.zeros(self.__n_machine, dtype=int)
for i in range(self.__n_jobs):
assigned = False
for alpha in range(self.__n_machine):
                u_i_alpha = i * self.__n_machine + alpha
if(solution[u_i_alpha] > 0):
print(f"Job{i} has been assigned to the machine{alpha}.")
assigned_job_sizes[alpha] += self.__jobs[i]
assigned = True
if assigned == False:
print(f"Job{i} has not been assigned.")
for alpha in range(self.__n_machine):
print(f"Total size of jobs assigned to machine{alpha} is {assigned_job_sizes[alpha]}.")
```
Let's solve it. We choose $A_1, A_2, B$ by looking at the relative balance of the terms.
```
jobs = [1,1,2,2,5,5,7] # the numbers are lengths(Li) of jobs
n_machine = 3
max_delta = 7 # maximum allowed difference M1 - Malpha; choose a value yourself
A1 = 1
A2 = (A1 / max(jobs) ** 2) * 0.9
B = (A1 / max(jobs)) * 0.5
qubo = Qubo(jobs, n_machine, max_delta, A1, A2, B)
annealer = wq.Opt()
annealer.qubo = qubo.get_qubo()
for _ in range(10):
solution = annealer.sa()
qubo.show_answer(solution)
print()
```
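Since simulated annealing is stochastic, it can be useful to score the returned solutions directly by their QUBO energy $x^\mathsf{T} Q x$ and keep the best one. A minimal sketch (the $2\times2$ matrix here is a made-up example, not the job-sequencing QUBO):

```python
import numpy as np

def qubo_energy(Q, x):
    """Energy x^T Q x of a binary solution vector x under QUBO matrix Q."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Toy upper-triangular QUBO matrix for illustration only.
Q = np.array([[1.0, -2.0],
              [0.0,  3.0]])
```

For example, `qubo_energy(Q, [1, 1])` evaluates to $1 - 2 + 3 = 2$.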
```
## import modules
import pandas as pd
import re
import numpy as np
## tell python to display output and print multiple objects
from IPython.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
## create a range between the start and end
## dates of the course
start_date = pd.to_datetime("2021-03-29")
end_date = pd.to_datetime("2021-06-02")
st_alldates = pd.date_range(start_date, end_date)
## subset to days in that range equal to Tuesday or Thursday
st_tuth = st_alldates[st_alldates.day_name().isin(['Tuesday', 'Thursday'])]
## create data frame with that information
st_dates = [re.sub("2021\\-", "", str(day.date())) for day in st_tuth]
course_sched = pd.DataFrame({'dow': st_tuth.day_name(), 'st_tuth': st_dates})
course_sched['date_toprint'] = course_sched.dow.astype(str) + " " + \
course_sched.st_tuth.astype(str)
course_sched = course_sched['date_toprint']
## display the resulting date sequence
display(course_sched)
## create the actual content
### list of concepts
concepts = ["Course intro. and checking software setup",
"Workflow basics: command line, Github workflow, basic LaTeX syntax, pre-analysis plans",
"Python basic data wrangling: data structures (vectors; lists; dataframes; matrices), control flow, and loops",
"Python basic data wrangling: basic regular expressions and text mining",
"Python basic data wrangling: combining data (row binds, column binds, joins); aggregation",
"Review of visualization: ggplot; plotnine",
"Problem set one review",
"Python: writing your own functions and simulation",
"Python: text data using nltk and gensim",
"Python: spatial data using geopandas",
"SQL: reading data from a database and basic SQL (postgres) syntax",
"SQL: more advanced SQL syntax (subqueries; window functions)",
"Python: reading data from APIs and basic web scraping",
"Python: interacting with cloud computing resources (Amazon S3; Dartmouth's Andes or Polaris servers)",
"Problem set two review",
"Interactive visualization: bokeh in Python",
"Workflow: Beamer",
"Workflow: Beamer and Tikz graphics",
"Final presentations"]
## combine
course_sched_concepts = pd.DataFrame({'Week': course_sched,
'Concepts': concepts})
df = course_sched_concepts.copy()
## add datacamp modules conditionally
col = "Concepts"
topics = [df[col] == "Python basic data wrangling: data structures (vectors; lists; dataframes; matrices), control flow, and loops",
df[col] == "Python basic data wrangling: basic regular expressions and text mining",
df[col] == "Python basic data wrangling: combining data (row binds, column binds, joins); aggregation",
df[col] == "Review of visualization: ggplot; plotnine",
          df[col] == "Python: writing your own functions and simulation",
df[col] == "Python: text data using nltk and gensim",
df[col] == "SQL: reading data from a database and basic SQL (postgres) syntax",
df[col] == "SQL: more advanced SQL syntax (subqueries; window functions)",
df[col] == "Python: reading data from APIs and basic web scraping"]
datacamp_modules = ["Python basics; python lists; Pandas: extracting and transforming data; Intermediate python for data science (loops)",
"First three modules of regular expressions in Python",
"Merging DataFrames with Pandas",
"Introduction to Data Visualization with ggplot2",
"Python data science toolbox (Part one): user-written functions, default args, lambda functions and error handling",
"Natural language processing fundamentals in Python",
"Introduction to databases in Python",
"Intermediate SQL",
"Importing JSON data and working with APIs; Importing data from the Internet"]
df["DataCamp module(s) (if any)"] = np.select(topics,
datacamp_modules,
default = "")
date_col = "Week"
due_dates = [df[date_col] == "Thursday 04-15",
df[date_col] == "Thursday 04-22",
df[date_col] == "Thursday 05-13",
df[date_col] == "Tuesday 06-01"]
assig = ["Problem set one",
"1-page project proposal",
"Problem set two",
"Slides for final presentation (due Monday 05.31 at 9 am)"]
df["Due (11:59 PM EST unless otherwise specified)"] = np.select(due_dates,
assig,
default = "")
HTML(df.to_html(index=False))
```
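The conditional column assignments above rely on `np.select`, which pairs a list of boolean conditions with a list of values and falls back to `default` wherever no condition matches. A minimal standalone illustration (the labels here are made up):

```python
import numpy as np

vals = np.array(["a", "b", "c"])
conds = [vals == "a", vals == "c"]           # boolean masks, checked in order
choices = ["first", "third"]                 # value used where each mask is True
out = np.select(conds, choices, default="")
# out -> ["first", "", "third"]
```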
# Week 10 Discussion
## Infographic
* [Racial Discrimination in Auto Insurance Prices][propublica]
[propublica]: https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-methodology
## Links
* [Learn X in Y Minutes, X = JavaScript][js-intro] -- a brief intro to JavaScript
* [MDN JavaScript Guide][js-guide] -- a detailed guide to JavaScript
* [MDN Learning Materials][web-intro] -- more information about web development
* [UC Berkeley Library's GeoData][geodata]
Please fill out TA evals!
[js-intro]: https://learnxinyminutes.com/docs/javascript/
[js-guide]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide
[web-intro]: https://developer.mozilla.org/en-US/docs/Learn
[geodata]: https://geodata.lib.berkeley.edu/
## Web Visualization
Web browsers are ubiquitous and support interactivity (via JavaScript), so the web is an excellent platform for visualizations.
Popular JavaScript libraries used for web visualizations:
<table><tr>
<th>Library</th><th>Based On</th><th>Python Support</th><th>Description</th>
</tr><tr>
<td><a href="https://d3js.org/">D3.js</a></td><td>-</td><td><a href="http://mpld3.github.io/">mpld3</a></td>
<td>
Short for Data-Driven Documents, D3 allows you to bind data to HTML tags.
In other words, you can use data to control the structure and style of a
web page.
</td>
</tr><tr>
<td><a href="https://vega.github.io/vega/">Vega</a></td><td>D3.js</td><td>-</td>
<td>
A visualization grammar (the same idea as ggplot) built on top of D3. You
write a description of what you want in JSON, and Vega produces a D3
visualization.
</td>
</tr><tr>
<td><a href="https://vega.github.io/vega-lite/">Vega Lite</a></td><td>Vega</td><td><a href="https://altair-viz.github.io/">altair</a></td>
<td>
A visualization grammar for <em>common statistical graphics</em> built on top of
Vega. You write a JSON description which is translated to Vega and then D3.
</td>
</tr><tr>
<td><a href="https://plot.ly/javascript/">plotly.js</a></td><td>D3.js</td><td><a href="https://plot.ly/python/">plotly</a></td>
<td>
A visualization library that supports the Python, R, Julia, and MATLAB
plotly packages. Although this is an open-source library, development
is controlled by Plotly (a private company).
</td>
</tr><tr>
<td><a href="http://bokeh.pydata.org/en/latest/docs/dev_guide/bokehjs.html">BokehJS</a></td><td>-</td><td><a href="http://bokeh.pydata.org/">bokeh</a></td>
<td>
A visualization library designed to be used from other (non-JavaScript)
languages. You write Python, R, or Scala code to produce visualizations.
</td>
</tr><tr>
<td><a href="http://leafletjs.com/">Leaflet</a></td><td>-</td><td><a href="https://github.com/python-visualization/folium">folium</a></td>
<td>
An interactive maps library that can display GeoJSON data.
</td>
</tr></table>
Also worth mentioning is the [pygal](http://www.pygal.org/en/stable/) package, which produces SVG plots that can be viewed in a web browser but do not require any JavaScript library.
## Static Visualizations
```
import pandas as pd
dogs = pd.read_feather("data/dogs.feather")
dogs.head()
```
To display Bokeh plots in a Jupyter notebook, you must first call the setup function `output_notebook()`. You don't have to do this if you're going to save your plots to HTML instead.
```
import bokeh.io # conda install bokeh
bokeh.io.output_notebook()
```
Now we can make a plot with the `bokeh.plotting` interface. (Older Bokeh releases also shipped a high-level `bokeh.charts` submodule for common statistical plots, but it has since been removed.) You can also use functions in the `bokeh.models` submodule to fine-tune plots.
Bokeh's plotting functions work with data frames in [tidy](http://vita.had.co.nz/papers/tidy-data.pdf) form.
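If your data frame is in wide form instead, pandas can reshape it into tidy form with `melt`; a small sketch with made-up dog data:

```python
import pandas as pd

# Hypothetical wide table: one column per year.
wide = pd.DataFrame({"breed": ["Akita", "Beagle"],
                     "2016": [10, 50],
                     "2017": [12, 48]})

# Tidy form: one row per (breed, year) observation.
tidy = wide.melt(id_vars="breed", var_name="year", value_name="popularity")
```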
```
from bokeh.plotting import figure, show
#colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
#colors = [colormap[x] for x in flowers['species']]
p = figure(title = "Dogs", width = 300, height = 300)
p.xaxis.axis_label = "Datadog Score"
p.yaxis.axis_label = "Popularity"
p.scatter("datadog", "popularity", source = dogs, fill_alpha = 0.2)
show(p)
# Optional: save the plot to a standalone HTML file.
#bokeh.io.output_file("MY_PLOT.html")
```
## Maps
```
import folium
# Make a map.
m = folium.Map(location = [45.5236, -122.6750])
# Optional: set up a Figure to control the size of the map.
fig = folium.Figure(width = 600, height = 200)
fig.add_child(m)
# Optional: save the map to a standalone HTML file.
# fig.save("MY_MAP.html")
```
The dataset about recent restaurant inspections in Yolo County is available [here](http://anson.ucdavis.edu/~nulle/yolo_food.feather)
```
food = pd.read_feather("data/yolo_food.feather")
food.head()
food.shape
food = food[food.lat.notna() & food.lng.notna()]
m = folium.Map(location = [38.54, -121.74], zoom_start = 11)
cols = ["FacilityName", "lat", "lng"]
for name, lat, lng in food[cols].itertuples(index = False):
popup = folium.Popup(name, parse_html = True)
folium.Marker([float(lat), float(lng)], popup = popup).add_to(m)
fig = folium.Figure(width = 800, height = 400)
fig.add_child(m)
```
Folium can also display boundaries stored in GeoJSON files. See the README for more info.
You can convert shapefiles to GeoJSON with geopandas.
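For reference, here is a minimal hand-built example of the GeoJSON structure folium consumes, using only the standard library (the place name and coordinates are illustrative; with geopandas the shapefile conversion is roughly `gpd.read_file("x.shp").to_file("x.geojson", driver="GeoJSON")`, assuming the GeoJSON driver is available):

```python
import json

# A minimal hand-built FeatureCollection of the kind folium can display.
gj = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"name": "Davis"},
        "geometry": {"type": "Point", "coordinates": [-121.74, 38.54]},
    }],
}
text = json.dumps(gj)
```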
```
m = folium.Map(location = [37.76, -122.44], zoom_start = 12)
m.choropleth("shapefiles/sf_neighborhoods.geojson", fill_opacity = 0.2, fill_color = "green")
fig = folium.Figure(width = 800, height = 400)
fig.add_child(m)
```
## Interactive Visualizations
In order to make a visualization interactive, you need to run some code when the user clicks on a widget. The code can run _client-side_ on the user's machine, or _server-side_ on your server.
For client-side interactivity:
* Your code must be written in JavaScript.
* You can host your visualization on any web server. No special setup is needed.
* Your visualization will use the user's CPU and memory.
For server-side interactivity:
* Your code can be written in any language the server supports. This may require special setup.
* Your visualization will use the server's CPU and memory.
* You can update the data in real-time.
* You can save data submitted by the user.
Shiny is a server-side framework for R. There are lots of server-side frameworks for Python. Two of the most popular are [Django][django] and [Flask][flask].
[django]: https://www.djangoproject.com/
[flask]: http://flask.pocoo.org/
### Client-side
Client-side interactivity is cheaper to get started with because you can use a free web server (like GitHub Pages).
Let's make the dogs plot interactive so that the user can select which variables get plotted. Unfortunately, Bokeh's high-level charts don't work with this kind of interactivity, so we have to build the plot with the lower-level plotting functions. We'll lose the color-coding, although you could still add that with a bit more work.
```
dogs.head()
import bokeh.layouts
from bokeh.models import ColumnDataSource, CustomJS, widgets
from bokeh.plotting import figure, show
original = ColumnDataSource(dogs)
source = ColumnDataSource({"x": dogs["datadog"], "y": dogs["popularity"]})
plt = figure(title = "Dogs", tools = [])
plt.xaxis.axis_label = "datadog"
plt.yaxis.axis_label = "popularity"
plt.scatter("x", "y", source = source, fill_alpha = 0.2)
# Callback for x selector box.
callback_x = CustomJS(args = {"original": original, "source": source, "axis": plt.xaxis[0]}, code = """
// This is the JavaScript code that will run when the x selector box is changed.
// You can use the alert() function to "print" values.
//alert(cb_obj.value);
axis.axis_label = cb_obj.value;
source.data['x'] = original.data[cb_obj.value];
source.change.emit();
""")
# Callback for y selector box.
callback_y = CustomJS(args = {"original": original, "source": source, "axis": plt.yaxis[0]}, code = """
// This is the JavaScript code that will run when the y selector box is changed.
axis.axis_label = cb_obj.value;
source.data['y'] = original.data[cb_obj.value];
source.change.emit();
""")
# Set up selector boxes.
numeric_cols = ["datadog", "popularity", "lifetime_cost", "longevity"]
sel_x = widgets.Select(title = "x-axis", options = numeric_cols, value = "datadog")
sel_y = widgets.Select(title = "y-axis", options = numeric_cols, value = "popularity")
sel_x.js_on_change("value", callback_x)
sel_y.js_on_change("value", callback_y)
# Position the selector boxes to the right of the plot.
layout = bokeh.layouts.column(sel_x, sel_y)
layout = bokeh.layouts.row(plt, layout)
show(layout)
```
### Server-side
Server-side interactivity is a lot more flexible. Flask is a simple framework with great documentation, so it's easy to get started with.
The core of a flask website (or "app") is a script with functions that return the text that should be displayed on each page.
See `hello_app.py` for an example flask website.
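A minimal sketch of what such a script might look like (a hypothetical stand-in for `hello_app.py`, not its actual contents):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Each view function returns the text (or HTML) for one page.
    return "Hello!"

@app.route("/about")
def about():
    return "A minimal Flask app."
```

Saving this as `hello_app.py` and running `flask run` (with the `FLASK_APP` environment variable set to the file) serves both pages locally.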
#### Example: Query Slack
As an example, let's make a flask website that displays recent messages from the class' Slack.
First you need to [get a Slack API token][slack-apps]. Make sure it has the `channels:read` and `channels:history` permissions.
Then you can use the `slackclient` package to query the Slack API.
[slack-apps]: https://api.slack.com/apps
```
from slackclient import SlackClient
with open("flask/slack_token") as f:
slack_token = f.readline().strip()
sc = SlackClient(slack_token)
```
We'll display messages from the `#flask` channel.
Slack tracks channels by ID, not name, so we need to get the channel ID.
Use `channels.list` to get a list of public channels:
```
channels = sc.api_call("channels.list")
channels = channels["channels"]
chan_id = next(x["id"] for x in channels if x["name"] == "flask")
chan_id
```
Now let's get the history of the channel:
```
history = sc.api_call("channels.history", channel = chan_id)
messages = pd.DataFrame(history["messages"])
messages
```
These steps are turned into a flask website in `slack_app.py`.
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
img = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(img), 'with dimensions:', img.shape)
plt.imshow(img) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def color_select(img,red_threshold, green_threshold, blue_threshold):
color_select = np.copy(img)
rgb_threshold = [red_threshold, green_threshold, blue_threshold]
thresholds = (img[:,:,0] < rgb_threshold[0]) \
| (img[:,:,1] < rgb_threshold[1]) \
| (img[:,:,2] < rgb_threshold[2])
color_select[thresholds] = [0,0,0]
return color_select
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
#return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
[x1_L,y1_L,x2_L,y2_L,count_L,slope_L]=[0,0,0,0,0,0]
[x1_R,y1_R,x2_R,y2_R,count_R,slope_R]=[0,0,0,0,0,0]
if lines is None:
return
for line in lines:
for x1,y1,x2,y2 in line:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope=((y2-y1)/(x2-x1))
if slope<0:
x1_L+=x1
y1_L+=y1
x2_L+=x2
y2_L+=y2
count_L+=1
else:
x1_R+=x1
y1_R+=y1
x2_R+=x2
y2_R+=y2
count_R+=1
if count_L>0:
x1_L/=count_L
y1_L/=count_L
x2_L/=count_L
y2_L/=count_L
slope_L=((y2_L-y1_L)/(x2_L-x1_L))
offset_L=y1_L-slope_L*x1_L
y1_L=img.shape[0]-1
y2_L=img.shape[0]*16/27
if slope_L != 0:
x1_L=(y1_L-offset_L)/slope_L
x2_L=(y2_L-offset_L)/slope_L
cv2.line(img, (int(x1_L), int(y1_L)), (int(x2_L), int(y2_L)), [0, 255, 0], thickness)
if count_R>0:
x1_R/=count_R
y1_R/=count_R
x2_R/=count_R
y2_R/=count_R
slope_R=((y2_R-y1_R)/(x2_R-x1_R))
offset_R=y1_R-slope_R*x1_R
y1_R=img.shape[0]-1
y2_R=img.shape[0]*16/27
if slope_R !=0:
x1_R=(y1_R-offset_R)/slope_R
x2_R=(y2_R-offset_R)/slope_R
cv2.line(img, (int(x1_R), int(y1_R)), (int(x2_R), int(y2_R)), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
filenames=os.listdir("test_images/")
os.makedirs('test_images_output', exist_ok=True)
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
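One low-tech way to tune is a small grid sweep that saves one annotated image per parameter combination. A sketch of the bookkeeping only — the parameter values are arbitrary starting points, and the actual pipeline calls are left as comments:

```python
from itertools import product

# Hypothetical parameter grid; the values are arbitrary starting points.
canny_pairs = [(25, 150), (50, 100), (50, 150)]   # (low, high) thresholds
hough_thresholds = [20, 40, 60]

trials = []
for (low, high), thresh in product(canny_pairs, hough_thresholds):
    # In the real pipeline you would run canny(...) and hough_lines(...)
    # here and save the annotated result under a parameter-tagged name.
    trials.append(f"canny_{low}_{high}_hough_{thresh}.jpg")
```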
```
# TODO: Build your pipeline that will draw lane lines on the test_images then save them to the test_images_output directory.
for filename in filenames:
img = mpimg.imread('test_images/'+filename)
gray=grayscale(img)
blurred=gaussian_blur(gray,5)
canny_o=canny(blurred,25,150)
[h,w]=canny_o.shape
clipped=region_of_interest(canny_o,np.array([[(0,h),(int(5*w/12),int(16*h/27)),(int(7*w/12),int(16*h/27)),(w,h)]],dtype=np.int32))
hough_lines_o=hough_lines(clipped, rho=2, theta=1*math.pi/180, threshold=int(h/10), min_line_len=int(h/3), max_line_gap=int(h/2))
weighted=weighted_img(hough_lines_o, img, α=0.7, β=1.0, γ=0.)
plt.imshow(weighted)
mpimg.imsave('test_images_output/'+filename,weighted)
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the final output (image where lines are drawn on lanes)
    gray = grayscale(image)
    blurred = gaussian_blur(gray, 5)
    canny_o = canny(blurred, 50, 100)
    [h, w] = canny_o.shape
    # Trapezoidal region of interest (tuned separately for the videos)
    vertices = np.array([[(int(w/7), int(h*10/11)), (int(5*w/12), int(17*h/27)),
                          (int(7*w/12), int(17*h/27)), (int(w*6/7), int(h*10/11))]], dtype=np.int32)
    clipped = region_of_interest(canny_o, vertices)
    hough_lines_o = hough_lines(clipped, rho=2, theta=math.pi/180, threshold=int(h/11),
                                min_line_len=int(h/6), max_line_gap=int(h/3))
    weighted = weighted_img(hough_lines_o, image, α=0.7, β=1.0, γ=0.)
    return weighted
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
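One way to sketch that averaging/extrapolation step is with a hypothetical helper like the one below — this is not the project's reference solution, and it assumes the Hough segments have already been split into left and right groups by slope sign:

```python
import numpy as np

def average_lane_line(segments, y_bottom, y_top):
    """Average the slope/intercept of Hough segments and extrapolate one
    solid line between the image bottom and the top of the region of interest."""
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:  # skip vertical segments to avoid division by zero
            continue
        m = (y2 - y1) / (x2 - x1)
        slopes.append(m)
        intercepts.append(y1 - m * x1)
    if not slopes:
        return None  # nothing detected in this frame
    m, b = np.mean(slopes), np.mean(intercepts)
    # invert y = m*x + b to get x at the two extrapolation heights
    return (int((y_bottom - b) / m), y_bottom, int((y_top - b) / m), y_top)
```

Calling this once for the left group and once for the right group inside `draw_lines` yields the two solid lines; keeping a running average across video frames also helps smooth out jitter.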
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
## clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,3)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
```
%load_ext autoreload
%autoreload 2
import qcodes as qc
```
# QCoDeS config
The QCoDeS config module uses JSON files to store QCoDeS configuration.
The config file controls various options to QCoDeS such as the default path and name of the database in which your data is stored and logging level of the debug output. QCoDeS is shipped with a default configuration. As we shall see, you can overwrite these default values in several different ways to customize the configuration. In particular, you may want to change the path of your database which by default is `~/experiments.db` (here, `~` stands for the path of the user's home directory). In the following example, I have changed the default path of my database, represented by the key `db_location`,
in such a way that my data will be stored inside a sub-folder within my home folder.
QCoDeS loads both the defaults and the active configuration at the module import so that you can directly inspect them
```
qc.config.current_config
qc.config.defaults
```
One can inspect what the configuration options mean at runtime
```
print(qc.config.describe('core'))
```
## Configuring QCoDeS
Defaults are the settings that are shipped with the package, which you can overwrite programmatically.
A way to customize QCoDeS is to write your own JSON files; they are expected to be in the directories printed below.
One of them will be empty until you first define the corresponding environment variable in the OS.
They are ordered by "weight", meaning that the last file always wins if it overwrites any preconfigured defaults or values in the other files.
Simply copy the file to one of these directories and you are good to go.
```
print("\n".join([qc.config.home_file_name, qc.config.env_file_name, qc.config.cwd_file_name]))
```
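The "last file wins" behaviour is essentially a key-by-key merge over the config sources in order of increasing weight. A minimal sketch of that precedence logic (hypothetical file contents — this is not QCoDeS's actual loader):

```python
def merge_configs(*configs):
    """Merge config dicts; keys in later (heavier) configs win."""
    merged = {}
    for cfg in configs:
        merged.update(cfg)
    return merged

defaults = {"db_location": "~/experiments.db", "loglevel": "WARNING"}
home_cfg = {"loglevel": "INFO"}          # e.g. a file in the home directory
cwd_cfg = {"db_location": "./local.db"}  # e.g. a file in the current working directory

# the cwd file is heaviest, so it wins for db_location;
# loglevel is only overridden by the home file
active = merge_configs(defaults, home_cfg, cwd_cfg)
```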
The easiest way to add something to the configuration is to use the provided helper:
```
qc.config.add("base_location", "/dev/random", value_type="string", description="Location of data", default="/dev/random")
```
This will add a `base_location` entry with the value `/dev/random` to the current configuration, validate its value to be of type string, and also set the description and the desired default.
The new entry is saved in the `user` part of the configuration.
```
print(qc.config.describe('user.base_location'))
```
You can also manually update the configuration from a specific file by supplying the path of its directory as the argument of the `qc.config.update_config` method, as follows:
```
qc.config.update_config(path="C:\\Users\\jenielse\\")
```
## Saving changes
All the changes made to the defaults are stored, and one can then decide to save them to the desired location.
```
help(qc.config.save_to_cwd)
help(qc.config.save_to_env)
help(qc.config.save_to_home)
```
### Using a custom configured variable in your experiment:
Simply get the value you have set before with dot notation.
For example:
```
loc_provider = qc.data.location.FormatLocation(fmt=qc.config.user.base_location)
qc.data.data_set.DataSet.location_provider = loc_provider
```
## Changing core
One can change the core values at runtime, but there is no guarantee that they are going to be valid.
Since user configuration shadows the default one that comes with QCoDeS, apply care when changing the values under `core` section. This section is, primarily, meant for the settings that are determined by QCoDeS core developers.
```
qc.config.current_config.core.loglevel = 'INFO'
```
But one can manually validate via
```
qc.config.validate()
```
This will raise an exception in case of bad input:
```
qc.config.current_config.core.loglevel = 'YOLO'
qc.config.validate()
# NOTE that you now have a broken config!
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Health/CALM/CALM-moving-out-3.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# CALM - Moving Out 3
## Part 3 - Accommodation
📚You will need somewhere to live. Think about the advantages and disadvantages of an [apartment](https://en.wikipedia.org/wiki/Apartment), [townhouse](https://simple.wikipedia.org/wiki/Townhouse)/[duplex](https://en.wikipedia.org/wiki/Duplex_(building)), and a [single detached house](https://en.wikipedia.org/wiki/Single-family_detached_home) and record your thoughts in the cell below. Remember to `Run` the cell once you have finished.
```
%%writefile moving_out_2.txt
✏️
Advantages of an Apartment
1.
2.
3.
Disadvantages of an Apartment
1.
2.
3.
Advantages of a Townhouse or Duplex
1.
2.
3.
Disadvantages of a Townhouse or Duplex
1.
2.
3.
Advantages of a Single Detached House
1.
2.
3.
Disadvantages of a Single Detached House
1.
2.
3.
The best choice of housing for a retired couple with no children
who do not want to cut grass or do other maintenance is
because
The best choice of housing for a middle-aged couple with two small children
who want room for children and friends to visit is
because
The best choice of housing for a young couple with a small child is
because
The best choice of housing for a young, single person who travels frequently for work is
because
The type of home I picture myself in when I decide to move out is (be descriptive)
```
### Finding a Rental Home
📚For the purpose of this project you will consider rental properties only. Find an online listing (e.g. from [Kijiji](https://www.kijiji.ca/)) for a suitable place to rent in the area you would like to live.
Carefully read the listing and fill in the information in the following cell, and `Run` the cell once you have finished.
```
%%writefile moving_out_3.txt
✏️
Link to listing:
Address:
Type of accommodation:
Rent per month:
Utilities included in rent:
Damage deposit or security deposit amount:
Other costs not included in rent (e.g. parking, coin laundry):
Summary of other important information:
```
### Roommate
📚Some expenses can be decreased by having a roommate. For the purposes of this project you may choose to have one roommate. Complete the statements then `Run` the cell below.
```
%%writefile moving_out_4.txt
✏️
Advantages of living on my own are:
1.
2.
3.
Disadvantages of living on my own are:
1.
2.
3.
Advantages of living with a roommate are:
1.
2.
3.
Disadvantages of living with a roommate are:
1.
2.
3.
I have decided to live (on my own / with a roommate) because
Essential characteristics of a roommate are:
1.
2.
3.
4.
```
### Moving Expenses
📚There are one-time moving expenses to consider. Follow the instructions in the code cell below to edit the numbers, then run the cell to calculate and store your expenses.
```
#✏️
# If you plan to have a roommate, change this to a 1 instead of a 0
roommate = 0
# Security Deposit or Damage Deposit: this is usually equal to one month's rent.
# You can get your deposit back when you move out, if you take care of your home.
# Some landlords also charge a non-refundable pet fee.
damageDeposit = 0
petFee = 0
# If you plan to have a pet, estimate the cost of the pet, toys, furniture, etc.
petPurchase = 1000
# There are sometimes utility activation or hookup costs
electricalActivation = 10
naturalGasActivation = 10
waterActivation = 0
internetActivation = 0
mobilePhoneActivitation = 25
mobilePhonePurchase = 300
furnitureAndAppliances = 500
movingCosts = 300
# 📚 You've finished editing numbers, now run the cell to calculate and store expenses
if roommate == 1:
    movingExpenses = (
        damageDeposit +
        electricalActivation +
        naturalGasActivation +
        waterActivation +
        internetActivation +
        furnitureAndAppliances
    ) / 2 + mobilePhoneActivitation + mobilePhonePurchase + movingCosts + petFee + petPurchase
else:
    movingExpenses = (
        damageDeposit +
        electricalActivation +
        naturalGasActivation +
        waterActivation +
        internetActivation +
        furnitureAndAppliances +
        mobilePhoneActivitation +
        mobilePhonePurchase +
        movingCosts +
        petFee +
        petPurchase
    )
%store roommate
%store movingExpenses
print('Moving expenses will be about $' + str(movingExpenses))
```
📚`Run` the next cell to check that your answers have been stored. If you get an error, make sure that you have run all of the previous code cells in this notebook.
```
#📚 Run this cell to check that your answers have been stored.
print('Roommate:', roommate)
print('Moving expenses:', movingExpenses)
with open('moving_out_2.txt', 'r') as file2:
    print(file2.read())
with open('moving_out_3.txt', 'r') as file3:
    print(file3.read())
with open('moving_out_4.txt', 'r') as file4:
    print(file4.read())
```
📚You have now completed this section. Proceed to [section 4](./CALM-moving-out-4.ipynb)
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Assignment 1 - Python Basics Practice
*This assignment is a part of the course ["Data Analysis with Python: Zero to Pandas"](https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas)*
In this assignment, you'll get to practice some of the concepts and skills covered in the following notebooks:
1. [First Steps with Python and Jupyter](https://jovian.ml/aakashns/first-steps-with-python)
2. [A Quick Tour of Variables and Data Types](https://jovian.ml/aakashns/python-variables-and-data-types)
3. [Branching using Conditional Statements and Loops](https://jovian.ml/aakashns/python-branching-and-loops)
As you go through this notebook, you will find a **???** in certain places. To complete this assignment, you must replace all the **???** with appropriate values, expressions or statements to ensure that the notebook runs properly end-to-end.
Some things to keep in mind:
* Make sure to run all the code cells, otherwise you may get errors like `NameError` for undefined variables.
* Do not change variable names, delete cells or disturb other existing code. It may cause problems during evaluation.
* In some cases, you may need to add some code cells or new statements before or after the line of code containing the **???**.
* Since you'll be using a temporary online service for code execution, save your work by running `jovian.commit` at regular intervals.
* Questions marked **(Optional)** will not be considered for evaluation, and can be skipped. They are for your learning.
You can make submissions on this page: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-1-python-basics-practice
If you are stuck, you can ask for help on the community forum: https://jovian.ml/forum/t/assignment-1-python-practice/7761 . You can get help with errors or ask for hints, but **please don't ask for or share the full working answer code** on the forum.
## How to run the code and save your work
The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks.
Before starting the assignment, let's save a snapshot of the assignment to your Jovian.ml profile, so that you can access it later and continue your work.
```
# Install the library
!pip install jovian --upgrade --quiet
# Import it
import jovian
project_name='python-practice-assignment'
# Capture and upload a snapshot
jovian.commit(project=project_name, privacy='secret', environment=None)
```
You'll be asked to provide an API Key, to securely upload the notebook to your Jovian.ml account. You can get the API key from your Jovian.ml profile page after logging in / signing up. See the docs for details: https://jovian.ml/docs/user-guide/upload.html . The privacy of your assignment notebook is set to *Secret*, so that you and the evaluators can access it, but it will not be shown on your public profile to other users.
## Problem 1 - Variables and Data Types
**Q: Assign your name to the variable `name`.**
```
name = ???
```
**Q: Assign your age (real or fake) to the variable `age`.**
```
age = ???
```
**Q: Assign a boolean value to the variable `has_android_phone`.**
```
has_android_phone = ???
```
You can check the values of these variables by running the next cell.
```
name, age, has_android_phone
```
**Q: Create a dictionary `person` with keys `"Name"`, `"Age"`, `"HasAndroidPhone"` and values using the variables defined above.**
```
person = ???
```
Let's use the `person` dictionary to print a nice message.
```
print("{} is aged {}, and owns an {}.".format(
    person["Name"],
    person["Age"],
    "Android phone" if person["HasAndroidPhone"] else "iPhone"
))
```
**Q (Optional): Use a `for` loop to display the `type` of each value stored against each key in `person`.**
Here's the expected output for the key `"Name"`:
```
The key "Name" has the value "Derek" of the type "<class 'str'>"
```
```
# this is optional
???
```
Now that you've solved one problem, it would be a good idea to record a snapshot of your notebook.
```
jovian.commit(project=project_name,environment=None)
```
## Problem 2 - Working with Lists
**Q: Create a list containing the following 3 elements:**
* your favorite color
* the number of pets you have
* a boolean value describing whether you have previous programming experience
```
my_list = ???
```
Let's see what the list looks like:
```
my_list
```
**Q: Complete the following `print` and `if` statements by accessing the appropriate elements from `my_list`.**
*Hint*: Use the list indexing notation `[]`.
```
print('My favorite color is', ???)
print('I have {} pet(s).'.format(???))
if ???:
    print("I have previous programming experience")
else:
    print("I do not have previous programming experience")
```
**Q: Add your favorite single digit number to the end of the list using the appropriate list method.**
```
my_list.???
```
Let's see if the number shows up in the list.
```
my_list
```
**Q: Remove the first element of the list, using the appropriate list method.**
*Hint*: Check out methods of list here: https://www.w3schools.com/python/python_ref_list.asp
```
my_list.???
my_list
```
**Q: Complete the `print` statement below to display the number of elements in `my_list`.**
```
print("The list has {} elements.".format(???))
```
Well done, you're making good progress! Save your work before continuing
```
jovian.commit(project=project_name,environment=None)
```
## Problem 3 - Conditions and loops
**Q: Calculate and display the sum of all the numbers divisible by 7 between 18 and 534 i.e. `21+28+35+...+525+532`**.
*Hint*: One way to do this is to loop over a `range` using `for` and use an `if` statement inside it.
```
# store the final answer in this variable
sum_of_numbers = 0
# perform the calculation here
???
print('The sum of all the numbers divisible by 7 between 18 and 534 is', sum_of_numbers)
```
If you are not able to figure out the solution to this problem, you can ask for hints on the community forum: https://jovian.ml/forum/t/assignment-1-python-practice/7761 . Remember to save your work before moving forward.
```
jovian.commit(project=project_name,environment=None)
```
## Problem 4 - Flying to the Bahamas
**Q: A travel company wants to fly a plane to the Bahamas. Flying the plane costs 5000 dollars. So far, 29 people have signed up for the trip. If the company charges 200 dollars per ticket, what is the profit made by the company?**
Fill in values or arithmetic expressions for the variables below.
```
cost_of_flying_plane = ???
number_of_passengers = ???
price_of_ticket = ???
profit = ???
print('The company makes of a profit of {} dollars'.format(profit))
```
**Q (Optional): Out of the 29 people who took the flight, only 12 buy tickets to return from the Bahamas on the same plane. If flying the plane back also costs 5000 dollars, does the company make an overall profit or loss? The company charges the same fee of 200 dollars per ticket for the return flight.**
Use an `if` statement to display the result.
```
# this is optional
???
# this is optional
if ???:
    print("The company makes an overall profit of {} dollars".format(???))
else:
    print("The company makes an overall loss of {} dollars".format(???))
```
Great work so far! Want to take a break? Remember to save and upload your notebook to record your progress.
```
jovian.commit(project=project_name,environment=None)
```
## Problem 5 - Twitter Sentiment Analysis
Are you ready to perform some *Data Analysis with Python*? In this problem, we'll analyze some fictional tweets and find out whether the overall sentiment of Twitter users is happy or sad. This is a simplified version of an important real world problem called *sentiment analysis*.
Before we begin, we need a list of tweets to analyze. We're picking a small number of tweets here, but the exact same analysis can also be done for thousands, or even millions of tweets. The collection of data that we perform analysis on is often called a *dataset*.
```
tweets = [
"Wow, what a great day today!! #sunshine",
"I feel sad about the things going on around us. #covid19",
"I'm really excited to learn Python with @JovianML #zerotopandas",
"This is a really nice song. #linkinpark",
"The python programming language is useful for data science",
"Why do bad things happen to me?",
"Apple announces the release of the new iPhone 12. Fans are excited.",
"Spent my day with family!! #happy",
"Check out my blog post on common string operations in Python. #zerotopandas",
"Freecodecamp has great coding tutorials. #skillup"
]
```
Let's begin by answering a very simple but important question about our dataset.
**Q: How many tweets does the dataset contain?**
```
number_of_tweets = ???
```
Let's create two lists of words: `happy_words` and `sad_words`. We will use these to check if a tweet is happy or sad.
```
happy_words = ['great', 'excited', 'happy', 'nice', 'wonderful', 'amazing', 'good', 'best']
sad_words = ['sad', 'bad', 'tragic', 'unhappy', 'worst']
```
To identify whether a tweet is happy, we can simply check whether it contains any of the words from `happy_words`. Here's an example:
```
sample_tweet = tweets[0]
sample_tweet
is_tweet_happy = False
# Get a word from happy_words
for word in happy_words:
    # Check if the tweet contains the word
    if word in sample_tweet:
        # Word found! Mark the tweet as happy
        is_tweet_happy = True
```
Do you understand what we're doing above?
> For each word in the list of happy words, we check if it is a part of the selected tweet. If the word is indeed a part of the tweet, we set the variable `is_tweet_happy` to `True`.
```
is_tweet_happy
```
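The same check can be written more compactly with Python's built-in `any` — an equivalent one-liner, shown here only as an aside:

```python
happy_words = ['great', 'excited', 'happy', 'nice', 'wonderful', 'amazing', 'good', 'best']
sample_tweet = "Wow, what a great day today!! #sunshine"

# True as soon as one happy word occurs as a substring of the tweet
is_tweet_happy = any(word in sample_tweet for word in happy_words)
```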
**Q: Determine the number of tweets in the dataset that can be classified as happy.**
*Hint*: You'll need to use a loop inside another loop to do this. Use the code from the example shown above.
```
# store the final answer in this variable
number_of_happy_tweets = 0
# perform the calculations here
???
print("Number of happy tweets:", number_of_happy_tweets)
```
If you are not able to figure out the solution to this problem, you can ask for hints on the community forum: https://jovian.ml/forum/t/assignment-1-python-practice/7761 . Also try adding `print` statements inside your loops to inspect variables and make sure your logic is correct.
**Q: What fraction of the total number of tweets are happy?**
For example, if 2 out of 10 tweets are happy, then the answer is `2/10` i.e. `0.2`.
```
happy_fraction = ???
print("The fraction of happy tweets is:", happy_fraction)
```
To identify whether a tweet is sad, we can simply check whether it contains any of the words from `sad_words`.
**Q: Determine the number of tweets in the dataset that can be classified as sad.**
```
# store the final answer in this variable
number_of_sad_tweets = 0
# perform the calculations here
???
print("Number of sad tweets:", number_of_sad_tweets)
```
**Q: What fraction of the total number of tweets are sad?**
```
sad_fraction = ???
print("The fraction of sad tweets is:", sad_fraction)
```
The rest of this problem is optional. Let's save your work before continuing.
```
jovian.commit(project=project_name,environment=None)
```
Great work! Even with some basic analysis, we already know a lot about the sentiment of the tweets given to us. Let us now define a metric called the "sentiment score" to summarize the overall sentiment of the tweets.
**Q (Optional): Calculate the sentiment score, which is defined as the difference between the fraction of happy tweets and the fraction of sad tweets.**
```
sentiment_score = ???
print("The sentiment score for the given tweets is", sentiment_score)
```
In a real world scenario, we could calculate & record the sentiment score for all the tweets sent out every day. This information can be used to plot a graph and study the trends in the changing sentiment of the world. The following graph was created using the Python data visualization library `matplotlib`, which we'll cover later in the course.
<img src="https://i.imgur.com/6CCIwCb.png" style="width:400px">
What does the sentiment score represent? Based on the value of the sentiment score, can you identify if the overall sentiment of the dataset is happy or sad?
**Q (Optional): Display whether the overall sentiment of the given dataset of tweets is happy or sad, using the sentiment score.**
```
if ???:
    print("The overall sentiment is happy")
else:
    print("The overall sentiment is sad")
```
Finally, it's also important to track how many tweets are neutral i.e. neither happy nor sad. If a large fraction of tweets are marked neutral, maybe we need to improve our lists of happy and sad words.
**Q (Optional): What is the fraction of tweets that are neutral, i.e. neither happy nor sad?**
```
# store the final answer in this variable
number_of_neutral_tweets = 0
# perform the calculation here
???
neutral_fraction = ???
print('The fraction of neutral tweets is', neutral_fraction)
```
Ponder upon these questions and try some experiments to hone your skills further:
* What are the limitations of our approach? When will it go wrong or give incorrect results?
* How can we improve our approach to address the limitations?
* What are some other questions you would like to ask, given a list of tweets?
* Try collecting some real tweets from your Twitter timeline and repeat this analysis. Do the results make sense?
**IMPORTANT NOTE**: If you want to try out these experiments, please create a new notebook using the "New Notebook" button on your Jovian.ml profile, to avoid making unintended changes to your assignment submission notebook.
## Submission
Congratulations on making it this far! You've reached the end of this assignment, and you just completed your first data analysis problem. It's time to record one final version of your notebook for submission.
Make a submission here by filling the submission form: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-1-python-basics-practice
```
jovian.commit(project=project_name,environment=None)
```
# Particle Swarm Optimization
> Research and understanding of the algorithm.
The idea of this notebook is to review a general implementation of the particle swarm optimization algorithm, to get a feel for its logic, the intuition behind it, and the parameters it involves.
```
import random
import math
import matplotlib.pyplot as plt
```
First, we define an objective function.
```
# objective function
def objective_function(x):
    y = x[0]**3 + x[1]**2
    return y
```
Next, we define the parameters.
```
bounds = [(-4, 4), (-4, 4)]  # lower and upper bounds of the variables
nv = 2  # number of variables
mm = -1  # minimization problem: mm = -1; maximization problem: mm = 1
# Optional parameters (to get the best performance out of PSO we need to tune these)
particle_size = 100  # number of particles
iterations = 500  # maximum number of iterations
w = 0.95  # inertia constant
c1 = 2  # cognitive constant
c2 = 2  # social constant
```
Now let's walk through the algorithm.
```
# Algoritmo
class Particle:
def __init__(self, bounds):
self.particle_position = [] # posición de la particula
self.particle_velocity = [] # velocidad de la particula
self.local_best_particle_position = [] # mejor posición de la particula
self.fitness_local_best_particle_position = initial_fitness # valor inicial de la función objetivo de la mejor particula
self.fitness_particle_position = initial_fitness # valor de la función objetivo de la posición de la particula
for i in range(nv):
self.particle_position.append(random.uniform(bounds[i][0], bounds[i][1])) # generamos una posición inicial al azar
self.particle_velocity.append(random.uniform(-1, 1)) # generamos la velocidad inicial al azar
def evaluate(self, objective_function):
self.fitness_particle_position = objective_function(self.particle_position)
if mm == -1:
if self.fitness_particle_position < self.fitness_local_best_particle_position:
self.local_best_particle_position = self.particle_position # actualizamos el mejor "local"
self.fitness_local_best_particle_position = self.fitness_particle_position # actualizamos el fitness del mejor "local"
if mm == 1:
if self.fitness_particle_position > self.fitness_local_best_particle_position:
self.local_best_particle_position = self.particle_position # actualizamos el mejor "local"
self.fitness_local_best_particle_position = self.fitness_particle_position # actualizamos el fitness del mejor "local"
def update_velocity(self, global_best_particle_position):
for i in range(nv):
r1 = random.random()
r2 = random.random()
cognitive_velocity = c1*r1*(self.local_best_particle_position[i] - self.particle_position[i])
social_velocity = c2*r2*(global_best_particle_position[i] - self.particle_position[i])
self.particle_velocity[i] = w*self.particle_velocity[i] + cognitive_velocity + social_velocity
def update_position(self, bounds):
for i in range(nv):
self.particle_position[i] = self.particle_position[i] + self.particle_velocity[i]
# Validamos y reparamos para satisfacer el limite superior
if self.particle_position[i] > bounds[i][1]:
self.particle_position[i] = bounds[i][1]
# Validamos y reparamos para satisfacer el limite inferior
if self.particle_position[i] < bounds[i][0]:
self.particle_position[i] = bounds[i][0]
class PSO():
def __init__(self, objective_function, bounds, particle_size, iterations):
fitness_global_best_particle_position = initial_fitness
global_best_particle_position = []
swarm_particle = []
for i in range(particle_size):
swarm_particle.append(Particle(bounds))
A = []
for i in range(iterations):
for j in range(particle_size):
swarm_particle[j].evaluate(objective_function)
if mm == -1:
if swarm_particle[j].fitness_particle_position < fitness_global_best_particle_position:
global_best_particle_position = list(swarm_particle[j].particle_position)
fitness_global_best_particle_position = float(swarm_particle[j].fitness_particle_position)
if mm == 1:
if swarm_particle[j].fitness_particle_position > fitness_global_best_particle_position:
global_best_particle_position = list(swarm_particle[j].particle_position)
fitness_global_best_particle_position = float(swarm_particle[j].fitness_particle_position)
for j in range(particle_size):
swarm_particle[j].update_velocity(global_best_particle_position)
swarm_particle[j].update_position(bounds)
A.append(fitness_global_best_particle_position) # Guardamos el mejor fitness
print('Optimal solution:', global_best_particle_position)
print('Objective function value:', fitness_global_best_particle_position)
print('Evolutionary process of the objective function value:')
plt.plot(A)
if mm == -1:
    initial_fitness = float("inf")   # for a minimization problem
if mm == 1:
    initial_fitness = -float("inf")  # for a maximization problem
# Run PSO
PSO(objective_function, bounds, particle_size, iterations)
```
## Key points to remember
+ Population: swarm
+ Solutions: particles
+ Value assigned to each particle: fitness
+ The particles move through the search space
+ A particle's movements are guided by:
    - knowledge of its own best position in the search space;
    - knowledge of the best position found by the entire swarm.
+ As better positions are discovered, they guide the swarm's movements.
+ Iterate until a solution is found (a solution is not always found).
+ The goal is to minimize or maximize a cost function.
+ Advantage: fast convergence.
+ Disadvantage: it can get trapped in a local minimum instead of the global minimum.
+ It does not compute derivatives like other optimizers, so it can be used with non-differentiable functions.
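The velocity and position updates described above can be condensed into a minimal, self-contained sketch. The function names (`pso_minimize`, `sphere`) and the hyperparameter values here are illustrative choices, not taken from the code above:

```python
import random

def sphere(x):
    # Toy cost function to minimize: sum of squares, global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def pso_minimize(f, bounds, n_particles=30, iterations=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    # Random initial positions, zero initial velocities
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-known position
    pbest_fit = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_fit[i])][:]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fit = f(pos[i])
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

random.seed(1)
best, best_fit = pso_minimize(sphere, bounds=[(-5, 5), (-5, 5)])
```

On the sphere function the swarm converges quickly toward the origin, illustrating both the fast convergence and the simplicity (no derivatives) noted above.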
### References:
http://ijcsi.org/papers/IJCSI-9-6-2-264-271.pdf
https://www.youtube.com/watch?v=JhgDMAm-imI
https://www.youtube.com/watch?v=7uZcuaUvwq0&t=134s
```
#export
from fastai2.data.all import *
from fastai2.text.core import *
from nbdev.showdoc import *
#default_exp text.models.awdlstm
#default_cls_lvl 3
```
# AWD-LSTM
> AWD LSTM from [Smerity et al.](https://arxiv.org/pdf/1708.02182.pdf)
## Basic NLP modules
On top of the PyTorch or the fastai [`layers`](/layers.html#layers), the language models use some custom layers specific to NLP.
```
#export
def dropout_mask(x, sz, p):
"Return a dropout mask of the same type as `x`, size `sz`, with probability `p` to cancel an element."
return x.new(*sz).bernoulli_(1-p).div_(1-p)
t = dropout_mask(torch.randn(3,4), [4,3], 0.25)
test_eq(t.shape, [4,3])
assert ((t == 4/3) + (t==0)).all()
#export
class RNNDropout(Module):
"Dropout with probability `p` that is consistent on the seq_len dimension."
def __init__(self, p=0.5): self.p=p
def forward(self, x):
if not self.training or self.p == 0.: return x
return x * dropout_mask(x.data, (x.size(0), 1, x.size(2)), self.p)
dp = RNNDropout(0.3)
tst_inp = torch.randn(4,3,7)
tst_out = dp(tst_inp)
for i in range(4):
for j in range(7):
if tst_out[i,0,j] == 0: assert (tst_out[i,:,j] == 0).all()
else: test_close(tst_out[i,:,j], tst_inp[i,:,j]/(1-0.3))
#export
import warnings
#export
class WeightDropout(Module):
    "A module that wraps another layer, replacing some of its weights by 0 during training."
def __init__(self, module, weight_p, layer_names='weight_hh_l0'):
self.module,self.weight_p,self.layer_names = module,weight_p,L(layer_names)
for layer in self.layer_names:
#Makes a copy of the weights of the selected layers.
w = getattr(self.module, layer)
self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
self.module._parameters[layer] = F.dropout(w, p=self.weight_p, training=False)
def _setweights(self):
"Apply dropout to the raw weights."
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=self.training)
def forward(self, *args):
self._setweights()
with warnings.catch_warnings():
#To avoid the warning that comes because the weights aren't flattened.
warnings.simplefilter("ignore")
return self.module.forward(*args)
def reset(self):
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=False)
if hasattr(self.module, 'reset'): self.module.reset()
module = nn.LSTM(5,7).cuda()
dp_module = WeightDropout(module, 0.4)
wgts = getattr(dp_module.module, 'weight_hh_l0')
tst_inp = torch.randn(10,20,5).cuda()
h = torch.zeros(1,20,7).cuda(), torch.zeros(1,20,7).cuda()
x,h = dp_module(tst_inp,h)
new_wgts = getattr(dp_module.module, 'weight_hh_l0')
test_eq(wgts, getattr(dp_module, 'weight_hh_l0_raw'))
assert 0.2 <= (new_wgts==0).sum().float()/new_wgts.numel() <= 0.6
#export
class EmbeddingDropout(Module):
    "Apply dropout with probability `embed_p` to an embedding layer `emb`."
def __init__(self, emb, embed_p):
self.emb,self.embed_p = emb,embed_p
def forward(self, words, scale=None):
if self.training and self.embed_p != 0:
size = (self.emb.weight.size(0),1)
mask = dropout_mask(self.emb.weight.data, size, self.embed_p)
masked_embed = self.emb.weight * mask
else: masked_embed = self.emb.weight
if scale: masked_embed.mul_(scale)
return F.embedding(words, masked_embed, ifnone(self.emb.padding_idx, -1), self.emb.max_norm,
self.emb.norm_type, self.emb.scale_grad_by_freq, self.emb.sparse)
enc = nn.Embedding(10, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)
tst_inp = torch.randint(0,10,(8,))
tst_out = enc_dp(tst_inp)
for i in range(8):
assert (tst_out[i]==0).all() or torch.allclose(tst_out[i], 2*enc.weight[tst_inp[i]])
#export
class AWD_LSTM(Module):
"AWD-LSTM inspired by https://arxiv.org/abs/1708.02182"
initrange=0.1
def __init__(self, vocab_sz, emb_sz, n_hid, n_layers, pad_token=1, hidden_p=0.2, input_p=0.6, embed_p=0.1,
weight_p=0.5, bidir=False):
store_attr(self, 'emb_sz,n_hid,n_layers,pad_token')
self.bs = 1
self.n_dir = 2 if bidir else 1
self.encoder = nn.Embedding(vocab_sz, emb_sz, padding_idx=pad_token)
self.encoder_dp = EmbeddingDropout(self.encoder, embed_p)
self.rnns = nn.ModuleList([self._one_rnn(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.n_dir,
bidir, weight_p, l) for l in range(n_layers)])
self.encoder.weight.data.uniform_(-self.initrange, self.initrange)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([RNNDropout(hidden_p) for l in range(n_layers)])
self.reset()
def forward(self, inp, from_embeds=False):
bs,sl = inp.shape[:2] if from_embeds else inp.shape
if bs!=self.bs: self._change_hidden(bs)
output = self.input_dp(inp if from_embeds else self.encoder_dp(inp))
new_hidden = []
for l, (rnn,hid_dp) in enumerate(zip(self.rnns, self.hidden_dps)):
output, new_h = rnn(output, self.hidden[l])
new_hidden.append(new_h)
if l != self.n_layers - 1: output = hid_dp(output)
self.hidden = to_detach(new_hidden, cpu=False, gather=False)
return output
def _change_hidden(self, bs):
self.hidden = [self._change_one_hidden(l, bs) for l in range(self.n_layers)]
self.bs = bs
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
"Return one of the inner rnn"
rnn = nn.LSTM(n_in, n_out, 1, batch_first=True, bidirectional=bidir)
return WeightDropout(rnn, weight_p)
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return (one_param(self).new_zeros(self.n_dir, self.bs, nh), one_param(self).new_zeros(self.n_dir, self.bs, nh))
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return tuple(torch.cat([h, h.new_zeros(self.n_dir, bs-self.bs, nh)], dim=1) for h in self.hidden[l])
if self.bs > bs: return (self.hidden[l][0][:,:bs].contiguous(), self.hidden[l][1][:,:bs].contiguous())
return self.hidden[l]
def reset(self):
"Reset the hidden states"
[r.reset() for r in self.rnns if hasattr(r, 'reset')]
self.hidden = [self._one_hidden(l) for l in range(self.n_layers)]
```
This is the core of an AWD-LSTM model, with an embedding of `vocab_sz` tokens of dimension `emb_sz` and `n_layers` stacked LSTMs (potentially bidirectional via `bidir`): the first one goes from `emb_sz` to `n_hid`, the last one from `n_hid` to `emb_sz`, and all the inner ones from `n_hid` to `n_hid`. `pad_token` is passed to the PyTorch embedding layer. The dropouts are applied as such:
- the embeddings are wrapped in `EmbeddingDropout` of probability `embed_p`;
- the result of this embedding layer goes through an `RNNDropout` of probability `input_p`;
- each LSTM has `WeightDropout` applied with probability `weight_p`;
- between two of the inner LSTMs, an `RNNDropout` is applied with probability `hidden_p`.

The module returns the output of the last LSTM. The `hidden_p` dropout is only applied between the inner LSTMs, so no dropout is applied to this final output, which is what should be fed to a decoder (in the case of a language model).
```
tst = AWD_LSTM(100, 20, 10, 2)
x = torch.randint(0, 100, (10,5))
r = tst(x)
test_eq(tst.bs, 10)
test_eq(len(tst.hidden), 2)
test_eq([h_.shape for h_ in tst.hidden[0]], [[1,10,10], [1,10,10]])
test_eq([h_.shape for h_ in tst.hidden[1]], [[1,10,20], [1,10,20]])
test_eq(r.shape, [10,5,20])
test_eq(r[:,-1], tst.hidden[-1][0][0]) #hidden state is the last timestep in raw outputs
#hide
#test bs change
x = torch.randint(0, 100, (6,5))
r = tst(x)
test_eq(tst.bs, 6)
# hide
# cuda
tst = AWD_LSTM(100, 20, 10, 2, bidir=True).to('cuda')
tst.reset()
x = torch.randint(0, 100, (10,5)).to('cuda')
r = tst(x)
x = torch.randint(0, 100, (6,5), device='cuda')
r = tst(x)
#export
def awd_lstm_lm_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups = L(groups + [nn.Sequential(model[0].encoder, model[0].encoder_dp, model[1])])
return groups.map(params)
splits = awd_lstm_lm_split
#export
awd_lstm_lm_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
def awd_lstm_clas_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(model[0].module.encoder, model[0].module.encoder_dp)]
groups += [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].module.rnns, model[0].module.hidden_dps)]
groups = L(groups + [model[1]])
return groups.map(params)
#export
awd_lstm_clas_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## QRNN
```
#export
class AWD_QRNN(AWD_LSTM):
"Same as an AWD-LSTM, but using QRNNs instead of LSTMs"
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
from fastai2.text.models.qrnn import QRNN
rnn = QRNN(n_in, n_out, 1, save_prev_x=(not bidir), zoneout=0, window=2 if l == 0 else 1, output_gate=True, bidirectional=bidir)
rnn.layers[0].linear = WeightDropout(rnn.layers[0].linear, weight_p, layer_names='weight')
return rnn
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return one_param(self).new_zeros(self.n_dir, self.bs, nh)
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return torch.cat([self.hidden[l], self.hidden[l].new_zeros(self.n_dir, bs-self.bs, nh)], dim=1)
if self.bs > bs: return self.hidden[l][:, :bs]
return self.hidden[l]
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=False)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
# hide
# test bidir=True
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=True)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
#export
awd_qrnn_lm_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
awd_qrnn_clas_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
### Simulating From the Null Hypothesis
Load in the data below, and use the exercises to assist with answering the quiz questions below.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
full_data = pd.read_csv('coffee_dataset.csv')
sample_data = full_data.sample(200)
```
`1.` If you were interested in studying whether the average height for coffee drinkers is the same as for non-coffee drinkers, what would the null and alternative hypotheses be? Write them in the cell below, and use your answer to answer the first quiz question below.
**Since there is no directional component associated with this statement, a two-sided (not equal to) alternative seems most reasonable.**
$$H_0: \mu_{coff} - \mu_{no} = 0$$
$$H_1: \mu_{coff} - \mu_{no} \neq 0$$
**$\mu_{coff}$ and $\mu_{no}$ are the population mean values for coffee drinkers and non-coffee drinkers, respectively.**
`2.` If you were interested in studying whether the average height for coffee drinkers is less than non-coffee drinkers, what would the null and alternative be? Place them in the cell below, and use your answer to answer the second quiz question below.
**In this case, the question has a direction: whether the average height for coffee drinkers is less than that for non-coffee drinkers. Below is one way you could write the null and alternative. Since the mean for coffee drinkers is listed first here, the alternative suggests this difference is negative.**
$$H_0: \mu_{coff} - \mu_{no} \geq 0$$
$$H_1: \mu_{coff} - \mu_{no} < 0$$
**$\mu_{coff}$ and $\mu_{no}$ are the population mean values for coffee drinkers and non-coffee drinkers, respectively.**
`3.` For 10,000 iterations: bootstrap the sample data, calculate the mean height for coffee drinkers and non-coffee drinkers, and calculate the difference in means for each sample. You will want to have three arrays at the end of the iterations - one for each mean and one for the difference in means. Use the results of your sampling distribution, to answer the third quiz question below.
```
nocoff_means, coff_means, diffs = [], [], []
for _ in range(10000):
bootsamp = sample_data.sample(200, replace = True)
coff_mean = bootsamp[bootsamp['drinks_coffee'] == True]['height'].mean()
nocoff_mean = bootsamp[bootsamp['drinks_coffee'] == False]['height'].mean()
# append the info
coff_means.append(coff_mean)
nocoff_means.append(nocoff_mean)
diffs.append(coff_mean - nocoff_mean)
np.std(nocoff_means) # the standard deviation of the sampling distribution for nocoff
np.std(coff_means) # the standard deviation of the sampling distribution for coff
np.std(diffs) # the standard deviation for the sampling distribution for difference in means
plt.hist(nocoff_means, alpha = 0.5);
plt.hist(coff_means, alpha = 0.5); # They look pretty normal to me!
plt.hist(diffs, alpha = 0.5); # again normal - this is by the central limit theorem
```
`4.` Now, use your sampling distribution for the difference in means and [the docs](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.normal.html) to simulate what you would expect if your sampling distribution were centered on zero. Also, calculate the observed sample mean difference in `sample_data`. Use your solutions to answer the last questions in the quiz below.
**We would expect the sampling distribution to be normal by the Central Limit Theorem, and we know the standard deviation of the sampling distribution of the difference in means from the previous question, so we can use this to simulate draws from the sampling distribution under the null hypothesis. If there is truly no difference, then the difference between the means should be zero.**
```
null_vals = np.random.normal(0, np.std(diffs), 10000) # Here are 10000 draws from the sampling distribution under the null
plt.hist(null_vals); #Here is the sampling distribution of the difference under the null
```
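To answer the final quiz questions you would also compute the observed difference in sample means and compare it to `null_vals`. Since `coffee_dataset.csv` is not reproduced here, the sketch below uses simulated stand-in heights — the group sizes, means, and standard deviations are made up for illustration only:

```python
import numpy as np

np.random.seed(42)

# Hypothetical stand-ins for the two groups in sample_data (values are made up)
coff_heights = np.random.normal(68.1, 3.0, 120)    # coffee drinkers
nocoff_heights = np.random.normal(66.8, 3.0, 80)   # non-coffee drinkers
obs_diff = coff_heights.mean() - nocoff_heights.mean()

# Null distribution: centered on zero, spread equal to the sampling std of the difference
null_std = np.sqrt(3.0**2 / 120 + 3.0**2 / 80)     # analytic stand-in for np.std(diffs)
null_vals = np.random.normal(0, null_std, 10000)

# Two-sided p-value: fraction of null draws at least as extreme as the observed difference
p_value = (np.abs(null_vals) >= np.abs(obs_diff)).mean()
print(obs_diff, p_value)
```

Because the alternative here is two-sided ($\neq$), both tails count as "at least as extreme"; a small p-value would lead us to reject $H_0$.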
# Faster Random Games
Speed up the random games by not first generating moves that are then handed to the RandomPlayer to pick one at random. Instead, a random checker is simply moved twice by 1-6 steps (if possible). This method, and in particular its speed, is very important for MCTS.
New: <br>
- play_random_fast()
- execute_random_move()
Removed: <br>
- The RandomPlayer Cython class
- play_random()
```
%load_ext Cython
%%cython
import itertools
import numpy as np
import random
from libc.stdlib cimport rand
class Game:
PLAYERS = ['black', 'white']
def __init__(self):
        # Game board: the index is the position on the board and the value is the number of checkers there.
        # This number is positive for black and negative for white checkers.
cdef int points[24]
points[:] = [2,0,0,0,0,-5,0,-3,0,0,0,5,-5,0,0,0,3,0,5,0,0,0,0,-2]
self.points = points
        # Checker positions
self.refresh_piece_positions()
        # Checkers on the bar
self.black_taken = 0
self.white_taken = 0
self.players = ['black', 'white']
self.turns = 0
    # Method that returns exactly the 198 features used in TD-Gammon 0.0
    # Following "Reinforcement Learning: An Introduction", Sutton & Barto, 2017
def extractFeatures(self, player):
points = self.points
        # 196 features encode the state of the board points, 98 for each player
        # 0,1,2,3,4,5 checkers are encoded as
        # 0000, 1000, 1100, 1110, 1110.5, 1111
        # (4th bit = (n-3)/2)
features = []
        # Encode the white checkers
whites = 0
for point in points[:24]:
point = -point
if point > 0:
whites += point
features += self.encodePoint(point)
else:
features += [0.,0.,0.,0.]
        # White checkers on the bar, n/2
features.append(self.white_taken/2.)
        # White checkers already borne off, n/15
off = 15 - whites + self.white_taken
features.append(off/15.)
        # Encode the black checkers
blacks = 0
for point in points[:24]:
if point > 0:
blacks += point
features += self.encodePoint(point)
else:
features += [0.,0.,0.,0.]
        # Black checkers on the bar, n/2
features.append(self.black_taken/2.)
        # Black checkers already borne off, n/15
off = 15 - blacks + self.black_taken
features.append(off/15.)
        # Two features for the current player
if player == self.players[0]:
features += [1., 0.]
else:
features += [0., 1.]
return np.array(features).reshape(1, -1)
def encodePoint(self, point):
if point == 0:
return [0.,0.,0.,0.]
elif point == 1:
return [1.,0.,0.,0.]
elif point == 2:
return [1.,1.,0.,0.]
elif point == 3:
return [1.,1.,1.,0.]
else:
return [1.,1.,1.,(point-3)/2.]
def play(self, player, debug=False):
cdef int player_num
        # Who starts is random
player_num = rand() % 2
        # Play until there is a winner
while not self.get_winner():
            # Execute a move
self.next_step(player[player_num], player_num, debug=debug)
            # The other player's turn
player_num = (player_num + 1) % 2
        # Print the win stats
#self.print_game_state()
return self.get_winner()
def play_random_fast(self, start_player, debug=False):
player_num = 0 if start_player == self.players[0] else 1
while not self.get_winner():
            # Execute a move (two random dice)
move1 = self.execute_random_move(self.players[player_num])
move2 = self.execute_random_move(self.players[player_num])
            # The other player's turn
player_num = (player_num + 1) % 2
            # Debugging
if debug:
print("Current Player:", self.players[player_num])
print("Move:", move1, move2)
self.print_game_state()
print()
return self.get_winner()
def execute_random_move(self, player):
chk = self.black_checkers[:] if player == self.players[0] else self.white_checkers[:]
random.shuffle(chk)
dice = random.randint(1,6)
if player == self.players[1]:
dice = -dice
        # Checkers on the bar
if player == self.players[0] and self.black_taken > 0 or player == self.players[1] and self.white_taken > 0:
pos = dice - 1 if player == self.players[0] else len(self.points) + dice
if self.is_target_valid(pos, player):
self.execute_move(('bar', pos), player)
return ('bar', pos)
else:
            # Checkers on the board
for c in chk:
if self.is_target_valid(c + dice, player):
self.execute_move((c, c + dice), player)
return (c, c + dice)
def next_step(self, player, player_num, debug=False):
cdef int a,b
self.turns += 1
        # Roll the dice
a = rand() % 6 + 1
b = rand() % 6 + 1
roll = (a, b)
        # Compute the moves
moves = self.get_moves(roll, self.players[player_num])
        # Ask the player which of the moves they would like to make
move = player.get_action(moves, self) if moves else None
        # Execute the move if possible
if move:
            # Execute the individual sub-moves
if debug:
print(move)
self.execute_moves(move, self.players[player_num])
        # Debugging
if debug:
print("Current Player:", self.players[player_num])
print("Moves:", moves)
print("Roll:", roll, "| Move:", move)
self.print_game_state()
print()
def execute_moves(self, moves, player):
        # Execute the sub-moves
for m in moves:
self.execute_move(m, player)
    # Executes a single move
def execute_move(self, move, player):
if move != (0,0):
piece = 1 if player == self.players[0] else - 1
            # Take the checker off its old position, unless it comes from the bar
if move[0] != "bar":
self.points[move[0]] -= piece
elif player == self.players[0]:
self.black_taken -= 1
else:
self.white_taken -= 1
            # Place the checker on the target point, if it is still on the board
if move[1] >= 0 and move[1] < len(self.points):
                # If a lone opponent checker is already there, it is hit and put on the bar
if player == self.players[0] and self.points[move[1]] == -1:
self.points[move[1]] = 0
self.white_taken += 1
elif player == self.players[1] and self.points[move[1]] == 1:
self.points[move[1]] = 0
self.black_taken += 1
                # Place the checker
self.points[move[1]] += piece
            # Update the positions of the black and white checkers
self.refresh_piece_positions()
def get_state(self):
return (self.points[:], self.black_taken, self.white_taken)
    # Resets the game to the given position
def reset_to_state(self, state):
self.points = state[0][:]
self.black_taken = state[1]
self.white_taken = state[2]
        # Update the positions of the black and white checkers
self.refresh_piece_positions()
    # Updates the lists of checker positions
def refresh_piece_positions(self):
        # Positions of the black checkers
self.black_checkers = [i for i in range(24) if self.points[i] > 0]
        # Positions of the white checkers
self.white_checkers = sorted([i for i in range(24) if self.points[i] < 0], reverse=True)
def get_moves(self, roll, player):
        # Doubles (both dice equal)?
if roll[0] == roll[1]:
return self.get_quad_moves(roll[0], player)
        # Does the player have checkers that must first re-enter the game?
if self.has_bar_pieces(player):
return self.get_bar_to_board_moves(roll, player)
        # Otherwise find the remaining moves
else:
return self.generate_moves(roll, player)
def get_bar_to_board_moves(self, roll, player):
moves = []
        # Are the home points shown by the dice blocked?
pos0 = roll[0] - 1 if player == self.players[0] else len(self.points) - roll[0]
pos1 = roll[1] - 1 if player == self.players[0] else len(self.points) - roll[1]
val1 = self.is_target_valid(pos0, player)
val2 = self.is_target_valid(pos1, player)
taken = self.black_taken if player == self.players[0] else self.white_taken
        # If both dice can be used, they must be used
if taken > 1 and val1 and val2:
moves.append((("bar", pos0), ("bar", pos1)))
else:
            # If that is not possible, find other moves for the second die
if val1:
bar_move = ("bar", pos0)
singles = self.generate_single_move(bar_move, roll[1], player)
moves += [(bar_move, s) for s in singles]
if val2:
bar_move = ("bar", pos1)
singles = self.generate_single_move(bar_move, roll[0], player)
moves += [(bar_move, s) for s in singles]
return moves
def generate_moves(self, roll, player):
valid = self.is_target_valid
board = self.points
        # Determine all pair combinations of the current positions
chk = self.black_checkers if player == self.players[0] else self.white_checkers
comb = list(itertools.combinations(chk, 2))
comb += [(a,a) for a in chk if board[a] > 1 or board[a] < -1]
        # Search for moves
moves = []
        # Black moves up the point numbers, white moves down
if player == self.players[1]:
roll = (-roll[0], -roll[1])
        # Check every move
for (a,b) in comb:
            # Move two checkers
a0 = valid(a + roll[0], player)
a1 = valid(a + roll[1], player)
if a0 and valid(b + roll[1], player):
moves.append(((a, a + roll[0]), (b, b + roll[1])))
if a1 and valid(b + roll[0], player) and not (a==b and a0):
moves.append(((a, a + roll[1]), (b, b + roll[0])))
            # Move a single checker
farpos = a + roll[0] + roll[1]
if a == b and farpos >= 0 and farpos < len(self.points) and valid(farpos, player):
if a0:
moves.append(((a, a + roll[0]), (a + roll[0], farpos)))
elif a1:
moves.append(((a, a + roll[1]), (a + roll[1], farpos)))
        # If no two-move combination can be generated, check whether a single move is possible
if moves == []:
m1 = self.generate_single_move(prev_move=None, dice=roll[0], player=player)
m2 = self.generate_single_move(prev_move=None, dice=roll[1], player=player)
m1.extend(m2)
moves += [((0,0), m) for m in m1]
        # Return the moves
return moves
def generate_single_move(self, prev_move, dice, player):
chk = self.black_checkers if player == self.players[0] else self.white_checkers
if player == self.players[1] and dice > 0:
dice = -dice
        # All regular moves
moves = [(x, x + dice) for x in chk if self.is_target_valid(x + dice, player)]
        # Can the checker be moved on directly?
if prev_move and self.is_target_valid(prev_move[1] + dice, player):
moves.append((prev_move[1], prev_move[1] + dice))
return moves
def generate_double_move(self, prev_move, dice, player):
chk = self.black_checkers if player == self.players[0] else self.white_checkers
if player == self.players[1] and dice > 0:
dice = -dice
        # Get the single moves
s_moves = self.generate_single_move(prev_move, dice, player)
        # Append a further move
moves = [((a,b),(b,b+dice)) for (a,b) in s_moves if self.is_target_valid(b+dice, player)]
for x in chk:
for y in s_moves:
if self.is_target_valid(x+dice, player):
if ((x,x+dice) == y and self.points[x] > 2) or (x,x+dice) != y:
moves.append((y, (x,x+dice)))
return moves
def generate_triple_move(self, prev_move, dice, player):
chk = self.black_checkers if player == self.players[0] else self.white_checkers
if player == self.players[1] and dice > 0:
dice = -dice
        # Get the double moves
d_moves = self.generate_double_move(prev_move, dice, player)
        # Append a further move
moves = [((a,b), (c,d), (d, d+dice)) for ((a,b), (c,d)) in d_moves if self.is_target_valid(d+dice, player)]
for x in chk:
for (y,z) in d_moves:
if self.is_target_valid(x+dice, player):
if (((x,x+dice) == y or (x,x+dice) == z) and self.points[x] > 2) or ((x,x+dice) != y and (x,x+dice) != z):
moves.append((y, z, (x,x+dice)))
return moves
    # With doubles, 4 moves must be made
def get_quad_moves(self, dice, player):
        # Move up to 4 checkers off the bar
if self.has_bar_pieces(player):
return self.get_quad_bar_to_board_moves(dice, player)
        # Find 4 moves
else:
return self.generate_quad_moves(dice, player)
def get_quad_bar_to_board_moves(self, dice, player):
moves = []
        # Is the home point shown by the dice blocked?
pos = dice - 1 if player == self.players[0] else len(self.points) - dice
valid = self.is_target_valid(pos, player)
taken = self.black_taken if player == self.players[0] else self.white_taken
        # If all dice can be used, they must be used
if valid:
bar_move = ("bar", pos)
if taken >= 4:
moves.append((bar_move, bar_move, bar_move, bar_move))
elif taken == 3:
singles = self.generate_single_move(bar_move, dice, player)
moves += [(bar_move, bar_move, bar_move, s) for s in singles]
elif taken == 2:
doubles = self.generate_double_move(bar_move, dice, player)
moves += [(bar_move, bar_move, d1, d2) for (d1, d2) in doubles]
else:
triples = self.generate_triple_move(bar_move, dice, player)
moves += [(bar_move, t1, t2, t3) for (t1, t2, t3) in triples]
return moves
    """
    Possible combinations:
    1. quad
    2. triple + single
    3. double + double
    4. double + single + single
    5. single + single + single + single
    The neck-breaker of every fast backgammon program:
    computing all the possibilities for 4 dice!
    """
def generate_quad_moves(self, dice, player):
        # Optimize (local aliases)
board = self.points
valid = self.is_target_valid
        # White runs in the opposite direction
if player == self.players[1] and dice > 0:
dice = -dice
        # Determine all pair combinations of the current positions
chk = self.black_checkers if player == self.players[0] else self.white_checkers
        # All positions with at least one checker
single = chk[:]
        # All positions with at least two checkers
double = [x for x in chk if abs(board[x]) >= 2]
        # All positions with at least three checkers
triple = [x for x in chk if abs(board[x]) >= 3]
        # All positions with at least four checkers
quad = [x for x in chk if abs(board[x]) >= 4]
        # Collect the valid target points
valid_dict = {}
for s in single:
for i in range(4):
target = s+dice*(i+1)
if target not in valid_dict:
if i == 0:
valid_dict[target] = valid(target, player)
else:
valid_dict[target] = valid(target, player) and target >= 0 and target < len(self.points)
moves = []
        # Quads, combination 1
for q in quad:
if valid_dict[q+dice]:
moves.append(((q, q+dice),(q, q+dice),(q, q+dice),(q, q+dice)))
#Triples
for t in triple:
            # Combination 2
for s in single:
if t != s and valid_dict[t+dice] and valid_dict[s+dice]:
moves.append(((t, t+dice),(t, t+dice),(t, t+dice),(s, s+dice)))
            # Follow-up moves for triples
if valid_dict[t+dice] and valid_dict[t+dice*2]:
moves.append(((t, t+dice),(t, t+dice),(t, t+dice),(t+dice, t+dice*2)))
#Doubles
for d in double:
d1 = valid_dict[d+dice]
d2 = valid_dict[d+dice*2]
d3 = valid_dict[d+dice*3]
            # Combination 3
for ds in double:
                # No duplicates please
if ds > d:
if d1 and valid_dict[ds+dice]:
moves.append(((d, d+dice),(d, d+dice),(ds, ds+dice),(ds, ds+dice)))
            # Combination 4
for s1 in single:
for s2 in single:
if s2 > s1 and d != s1 and d != s2:
if valid_dict[d+dice] and valid_dict[s1+dice] and valid_dict[s2+dice]:
moves.append(((d, d+dice),(d, d+dice),(s1, s1+dice),(s2, s2+dice)))
                    # A double move followed by a follow-up move and a single
if d1 and d2 and d != s1 and valid_dict[s1+dice]:
moves.append(((d, d+dice),(d, d+dice),(d+dice, d+dice*2),(s1, s1+dice)))
            # Follow-up moves for doubles
            # Two moves each with two checkers
if d1 and d2:
moves.append(((d, d+dice),(d, d+dice),(d+dice, d+dice*2),(d+dice, d+dice*2)))
            # A double followed by a double follow-up move
if d1 and d2 and d3:
moves.append(((d, d+dice),(d, d+dice),(d+dice, d+dice*2),(d+dice*2, d+dice*3)))
#Singles
for s1 in single:
sv1 = valid_dict[s1+dice]
sv2 = valid_dict[s1+dice*2]
sv3 = valid_dict[s1+dice*3]
sv4 = valid_dict[s1+dice*4]
for s2 in single:
sec1 = valid_dict[s2+dice]
sec2 = valid_dict[s2+dice*2]
sec3 = valid_dict[s2+dice*3]
for s3 in single:
for s4 in single:
                        # Combination 5
if s4 > s3 > s2 > s1:
if sv1 and valid_dict[s2+dice] and valid_dict[s3+dice] and valid_dict[s4+dice]:
moves.append(((s1, s1+dice),(s2, s2+dice),(s3, s3+dice),(s4, s4+dice)))
                    # Moves with at least three checkers
if s2 > s1 and s3 != s2 and s3 != s1:
                        # Two single moves followed by a follow-up move
if sv1 and sec1 and valid_dict[s3+dice] and valid_dict[s3+dice*2]:
moves.append(((s1, s1+dice),(s2, s2+dice),(s3, s3+dice),(s3+dice, s3+dice*2)))
                # Follow-up moves for singles
if s2 > s1:
                    # One move with the first checker and three with the second
if sv1 and sec1 and sec2 and sec3:
moves.append(((s1, s1+dice),(s2, s2+dice),(s2+dice, s2+dice*2),(s2+dice*2, s2+dice*3)))
                    # Two moves with the first and two with the second
if sv1 and sv2 and sec1 and sec2:
moves.append(((s1, s1+dice),(s1+dice, s1+dice*2),(s2, s2+dice),(s2+dice, s2+dice*2)))
                    # Three moves with the first checker and one more with the second
if sv1 and sv2 and sv3 and sec1:
moves.append(((s1, s1+dice),(s1+dice, s1+dice*2),(s1+dice*2, s1+dice*3),(s2, s2+dice)))
                # Two moves with the first checker followed by a double
for d in double:
if d != s1 and sv1 and sv2 and valid_dict[d+dice]:
moves.append(((s1, s1+dice),(s1+dice, s1+dice*2),(d, d+dice),(d, d+dice)))
            # Four moves with one single checker
if sv1 and sv2 and sv3 and sv4:
moves.append(((s1, s1+dice),(s1+dice, s1+dice*2),(s1+dice*2, s1+dice*3),(s1+dice*3, s1+dice*4)))
        # Sort the tuples to find and remove duplicates
        # This adds computation here, but reduces the number of moves considerably!
if player == self.players[0]:
return set([tuple(sorted(x)) for x in moves])
else:
return set([tuple(sorted(x, reverse=True)) for x in moves])
#Checks whether the given target point is valid
def is_target_valid(self, int target, str player):
#Do we land beyond the board?
if target < 0 or target >= len(self.points):
return self.can_offboard(player)
#Check whether the target is blocked (2 or more opposing checkers present)
if player == self.players[0]:
return self.points[target] > -2
elif player == self.players[1]:
return self.points[target] < 2
#Does the player have checkers that were hit by the opponent?
def has_bar_pieces(self, str player):
if player == self.players[0]:
return self.black_taken > 0
elif player == self.players[1]:
return self.white_taken > 0
#Does the player have all of their checkers in their home board?
def can_offboard(self, str player):
if player == self.players[0]:
return self.black_taken == 0 and self.black_checkers[0] > 18
elif player == self.players[1]:
return self.white_taken == 0 and self.white_checkers[0] < 7
#Returns the winner, if there is one
def get_winner(self):
if self.black_checkers == [] and self.black_taken == 0:
return self.players[0]
elif self.white_checkers == [] and self.white_taken == 0:
return self.players[1]
else:
return None
def get_opponent(self, player):
return self.players[0] if player == self.players[1] else self.players[1]
def Clone(self):
g = Game()
g.reset_to_state(self.get_state())
return g
def print_game_state(self):
print("GameState:", self.points, "|", self.black_checkers, "|", self.white_checkers)
bl = sum([self.points[i] for i in range(len(self.points)) if self.points[i] > 0])
wt = sum([self.points[i] for i in range(len(self.points)) if self.points[i] < 0])
print("Black/White:", bl, "/", wt, "Bar:", self.black_taken, "/", self.white_taken)
g = Game()
g.play_random_fast(debug=True)
import time
wins = {'black': 0, 'white' : 0}
# Measure the time and play, this time without logging
start = time.time()
for i in range(1000):
game = Game()
winner = game.play_random_fast()
wins[winner] += 1
#print("Game", i, "goes to", winner)
end = time.time()
# Pretty-print the result
print(wins)
print("1000 games in", end - start, "seconds")
```
Twice as fast as the RandomPlayer!
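The deduplication trick used in the move generator above, sorting each move tuple into a canonical order before collecting them in a set, can be illustrated in isolation. This is a minimal sketch with made-up `(from, to)` move tuples, not tied to the `Game` class:

```python
# Each candidate play is a tuple of (from, to) sub-moves. The same play can be
# generated in several orders; sorting the sub-moves gives a canonical form,
# so a set collapses the order permutations into a single entry.
moves = [
    ((1, 4), (4, 7)),
    ((4, 7), (1, 4)),   # same play, generated in a different order
    ((2, 5), (5, 8)),
]

unique = set(tuple(sorted(m)) for m in moves)
print(len(moves), "candidates ->", len(unique), "unique plays")
```

The per-move sorting costs a little extra time, but every duplicate it removes saves a full game-tree branch later, which is why it pays off overall.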