# CA5 Phase 2
## Mohammad Ali Zare
### 810197626
In this assignment we must classify X-ray scans of patients and tell whether they're **Normal** or have **COVID19**/**Pneumonia** (the `PNEUMA` class).
We do this using neural networks, trying different parameters to see how the model's performance changes.
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
LABELS = ['COVID19', 'NORMAL', 'PNEUMA']
from google.colab import drive
drive.mount('/content/drive')
!unzip /content/drive/MyDrive/AI/xray.zip
train_dataset = keras.preprocessing.image_dataset_from_directory(
    './Data/train', color_mode='grayscale', batch_size=32, image_size=(80, 80))
test_dataset = keras.preprocessing.image_dataset_from_directory(
    './Data/test', color_mode='grayscale', batch_size=32, image_size=(80, 80))
```
# 2
```
plt.figure(figsize=(10, 10))
for imgs, labels in train_dataset.take(1):
    # index of the first sample of each class within this batch
    class_i = []
    class_i.append(np.where(labels == 0)[0][0])
    class_i.append(np.where(labels == 1)[0][0])
    class_i.append(np.where(labels == 2)[0][0])
    for j, i in enumerate(class_i):
        ax = plt.subplot(3, 3, j + 1)
        plt.imshow(imgs[i].numpy().astype("uint8")[:, :, 0], cmap='Greys_r')
        plt.title(train_dataset.class_names[labels[i]])
        plt.axis('off')

counts = [0, 0, 0]
for imgs, labels in train_dataset.take(-1):
    # assumes every class appears in each batch; otherwise the counts misalign
    for i, c in enumerate(np.unique(labels, return_counts=True)[1]):
        counts[i] += c

barWidth = 0.4
plt.figure(figsize=(7, 7))
bars1 = counts
r1 = np.arange(len(bars1))
plt.bar(r1, bars1, color='#7f6d5f', width=barWidth, edgecolor='white', label='Train')
plt.xlabel('Label', fontweight='bold')
plt.ylabel('Frequency', fontweight='bold')
plt.xticks([r + 0.01 for r in range(len(bars1))], train_dataset.class_names)
plt.legend()
plt.show()
```
--------
```
def plot_report(report):
    plt.figure(figsize=(8, 8))
    plt.plot(report.history['loss'], label='Train')
    plt.plot(report.history['val_loss'], label='Test')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.title('Loss Plot')
    plt.legend()
    plt.show()

    plt.figure(figsize=(8, 8))
    plt.plot(report.history['accuracy'], label='Train')
    plt.plot(report.history['val_accuracy'], label='Test')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.title('Accuracy Plot')
    plt.legend()
    plt.show()
```
----
# 3. Model Summary
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01),
    metrics=['accuracy', keras.metrics.Precision()]
)
```
### Summary
The summary shows that our network has four layers. The first is the input layer with 80*80 = 6400 neurons (one per pixel). The second and third are hidden layers with 1024 neurons each. The last is the output layer with 3 neurons (one per class). Overall we have 7,607,299 parameters (weights and biases) to train.
```
model.summary()
```
------------
# 4. Tanh and Relu (non-normalized data)
```
dataGen = keras.preprocessing.image.ImageDataGenerator()
ntrain_dataset = dataGen.flow_from_directory(
    '/content/Data/train',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32
)
ntest_dataset = dataGen.flow_from_directory(
    '/content/Data/test',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32
)
nunshuffled_train = dataGen.flow_from_directory(
    '/content/Data/train',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32,
    shuffle=False
)
nunshuffled_test = dataGen.flow_from_directory(
    '/content/Data/test',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32,
    shuffle=False
)
```
## Relu
As the plot and report show, the model doesn't train at all. The reason is that our activation function is ReLU and the data is not normalized. Since ReLU is unbounded, the large raw pixel values produce very large activations, which can cause exploding gradients and numeric overflow. For this reason the loss spiked in the first epoch and then became NaN, so the network couldn't learn.
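A tiny numpy sketch (illustrative, not part of the assignment code) shows the scale problem: with raw pixel values up to 255 and 6400 inputs per neuron, even sensibly initialized weights yield huge pre-activations, and ReLU passes them through unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.full(6400, 255.0)  # one all-white unnormalized image, flattened
# Glorot-scale initialization for a 6400 -> 1024 dense layer
W = rng.normal(0.0, np.sqrt(2.0 / (6400 + 1024)), size=(6400, 1024))
z = x @ W                 # pre-activations of the first hidden layer
print(np.abs(z).mean())   # typically in the hundreds, not O(1)
```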
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01),
    metrics=['accuracy']
)
report0 = model.fit(ntrain_dataset, validation_data=ntest_dataset, epochs=10)
plot_report(report0)
pred1_test = model.predict(nunshuffled_test, batch_size=32)
pred1_train = model.predict(nunshuffled_train, batch_size=32)
print("Test:")
print(classification_report(nunshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(nunshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
## Tanh
We can see in the plot that the accuracy and loss barely change. The reason here is the **vanishing gradient** problem. With large unnormalized inputs, tanh saturates near its bounds of -1 and 1, where its derivative is close to zero; furthermore, in backpropagation these small derivatives are multiplied together by the chain rule and become even smaller. As a result, the updates to the weights and biases are tiny and not much learning happens.
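The saturation is easy to see numerically (a small sketch, not part of the assignment): the derivative of tanh is $1 - \tanh^2(x)$, which underflows to zero for the large pre-activations that unnormalized inputs produce.

```python
import numpy as np

def tanh_grad(x):
    # derivative of tanh: 1 - tanh(x)^2
    return 1.0 - np.tanh(x) ** 2

print(tanh_grad(0.5))    # ~0.79: a healthy gradient near zero
print(tanh_grad(200.0))  # 0.0 in float64: saturated, no learning signal
```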
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='tanh'))
model.add(keras.layers.Dense(1024, activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01),
    metrics=['accuracy']
)
report00 = model.fit(ntrain_dataset, validation_data=ntest_dataset, epochs=10)
plot_report(report00)
pred1_test = model.predict(nunshuffled_test, batch_size=32)
pred1_train = model.predict(nunshuffled_train, batch_size=32)
print("Test:")
print(classification_report(nunshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(nunshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
## 4.3.
Tanh's output is bounded, so it won't overflow the way ReLU did, but in both cases the network couldn't learn at all. Normalization helps both: with ReLU the values stay small and don't overflow or explode, and with tanh the neurons stay out of saturation so the updates are more effective.
# 5
After trying several networks, the current model achieved acceptable F1 scores (above 0.90).
```
dataGenNorm = keras.preprocessing.image.ImageDataGenerator(rescale=1/255.0)
train_dataset = dataGenNorm.flow_from_directory(
    '/content/Data/train',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32,
)
test_dataset = dataGenNorm.flow_from_directory(
    '/content/Data/test',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32
)
unshuffled_train = dataGenNorm.flow_from_directory(
    '/content/Data/train',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32,
    shuffle=False
)
unshuffled_test = dataGenNorm.flow_from_directory(
    '/content/Data/test',
    target_size=(80, 80),
    color_mode='grayscale',
    batch_size=32,
    shuffle=False
)
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01),
    metrics=['accuracy', keras.metrics.Precision()]
)
report1 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)
plot_report(report1)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
# 6. Optimizers
#### I. Momentum
Normally, each weight update is the product of the learning rate and the gradient of the loss. With momentum we add a new term: to each update we add the product of the momentum value and the previous update, so past updates keep influencing the current one. Because of this, a single batch can't reverse the direction of descent if most of the previous batches were moving the same way.
With momentum, learning happens faster because the updates are bigger. The larger updates can also help the optimizer skip past local minima.
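The update rule described above can be sketched in a few lines (a standalone illustration, mirroring what `keras.optimizers.SGD(momentum=...)` does internally):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: the step blends the previous
    update (velocity) with the current gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# minimize f(w) = w**2 (gradient 2w) starting from w = 5.0
w, v = 5.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, 2 * w, v)
print(w)  # very close to the minimum at 0
```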
#### III. Is momentum always good?
A very large momentum can cause erratic behavior and stall the learning process, while a very small value makes little difference.
#### IV. Adam
Adam combines momentum with per-parameter adaptive learning rates; its behavior on our data is examined in section 6.4 below.
### 6.2. momentum = 0.5
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.5),
    metrics=['accuracy', keras.metrics.Precision()]
)
report2 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)
plot_report(report2)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
### 6.2. momentum = 0.9
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    metrics=['accuracy', keras.metrics.Precision()]
)
report3 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)
plot_report(report3)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
### 6.2. momentum = 0.99
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.99),
    metrics=['accuracy', keras.metrics.Precision()]
)
report4 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)
plot_report(report4)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
## 6.2. Different Momentums
We can see that with a large momentum (0.99) the model doesn't perform well and stops learning.
With momentum = 0.5, previous updates affect the result less than with momentum = 0.9. As a result, the low-momentum (0.5) run still responds to noise and behaves in a more zigzag fashion, while the 0.9 run moves more smoothly because previous batches have more effect. Overall 0.9 seems the best compromise between the very high and low values.
## 6.4 Adam
We can see from the plot that Adam zigzags less and moves faster toward a lower loss, although SGD got slightly better final results. Adam doesn't require much manual tuning, whereas SGD sometimes does (we saw momentum as an example).
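A minimal numpy sketch of the Adam update (an illustration using the standard defaults β₁=0.9, β₂=0.999, not Keras's exact code): it keeps running averages of the gradient and the squared gradient, giving each parameter its own effective step size.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # first moment (momentum-like)
    v = b2 * v + (1 - b2) * grad ** 2  # second moment (per-parameter scale)
    m_hat = m / (1 - b1 ** t)          # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# the very first step has size ~lr regardless of the gradient's magnitude
w, m, v = 10.0, 0.0, 0.0
w, m, v = adam_step(w, grad=1000.0, m=m, v=v, t=1)
print(10.0 - w)  # ~0.001: the step is scale-invariant
```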
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy', keras.metrics.Precision()]
)
report5 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)
plot_report(report5)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
# 7. Epoch
#### II. Why do we need multiple epochs?
The model learns its parameters gradually, usually with a gradient-descent algorithm that minimizes the loss. A single pass over the data is often not enough for the algorithm to update its parameters and generalize well, so we repeat the learning process over multiple iterations (epochs) to fit the data better. Another reason is that we may not have a lot of new data to train on, so we reuse the same data across epochs to learn its features. With enough data, however, we might reach a good result in one pass.
#### III. Are more epochs always better?
Too many epochs may lead to overfitting. The model learns the features of the training data so thoroughly that it even fits the noise, so it can't generalize well: although it performs well on the training data, it performs poorly on the test data.
To prevent overfitting, we can use early stopping: we halt the learning process when the results on the evaluation data start to worsen.
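In Keras this is available as the `EarlyStopping` callback; the patience logic behind it can be sketched in plain Python (a simplified illustration, not Keras's actual implementation):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch (0-based) at which training would stop: when the
    validation loss hasn't improved for `patience` consecutive epochs."""
    best, best_epoch = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# validation loss improves, then rises: stop `patience` epochs after the best
losses = [1.0, 0.8, 0.6, 0.5, 0.55, 0.6, 0.7, 0.8]
print(early_stop_epoch(losses))  # 6

# in Keras:
# model.fit(..., callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)])
```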
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy', keras.metrics.Precision()]
)
report6 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)
plot_report(report6)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
# 8. Loss
#### I. MSE
We can see that the model with MSE isn't good: it stops learning and stalls.
#### II. Why MSE is not good for classifaction?
The formula for MSE is
$\frac{1}{n}\left((actual_1 - predict_1)^2 + ... + (actual_n - predict_n)^2\right)$
If we compute the gradient $\frac{dL}{dW}$ for a sigmoid-style output, we get a term of the form $(predict - actual) \cdot predict \cdot (1 - predict) \cdot x$. Because we are classifying, $predict$ lies between 0 and 1, so as the prediction approaches 0 or 1 (the network becoming certain about a class), the factor $predict(1 - predict)$ makes the gradient vanish. This causes very small weight updates, and the learning process eventually stalls with no further progress.
MSE is used mostly for regression problems.
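A small numeric sketch of this effect (an illustration for the binary sigmoid case): the MSE gradient carries the extra $predict(1-predict)$ factor, while the cross-entropy gradient with respect to the pre-activation is simply $predict - actual$.

```python
p, y = 0.999, 0.0  # a confidently wrong prediction

mse_grad = (p - y) * p * (1 - p)  # the p*(1-p) factor kills the gradient
ce_grad = p - y                   # cross-entropy keeps a strong signal

print(mse_grad)  # ~0.001: almost no learning signal
print(ce_grad)   # 0.999: strong correction
```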
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='mean_squared_error',
    optimizer='adam',
    metrics=['accuracy', keras.metrics.Precision()]
)
report8 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)
plot_report(report8)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
# 9. Regularization
Regularization methods are used to prevent overfitting.
### 9.2. L2=0.0001
With L2 regularization we add a penalty term to the loss: a predefined constant ($\lambda$) times the sum of squared weights. Its gradient shrinks every weight slightly toward zero on each update (weight decay), which discourages large weights and keeps the model from fitting the training data too perfectly.
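In gradient-descent terms, the penalty $\lambda w^2$ adds $2\lambda w$ to each weight's gradient (a sketch under these standard assumptions, not Keras's exact code):

```python
def l2_sgd_step(w, grad, lr=0.01, lam=0.0001):
    # loss' = loss + lam * w**2  =>  d(loss')/dw = grad + 2*lam*w
    return w - lr * (grad + 2 * lam * w)

# even with zero data gradient, the weight decays steadily toward 0
w = 5.0
for _ in range(1000):
    w = l2_sgd_step(w, grad=0.0)
print(w)  # slightly below 5.0: slow, steady shrinkage
```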
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))
model.add(keras.layers.Dense(1024, activation='relu', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))
model.add(keras.layers.Dense(3, activation='softmax', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy', keras.metrics.Precision()]
)
report8 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)
plot_report(report8)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
### 9.3. dropout = 0.1
To combat overfitting, ensemble methods can be used, where the results of multiple models are combined into one final prediction. But training multiple architectures is expensive. With dropout we can simulate having multiple models: in each layer we randomly drop/ignore some neurons' outputs, which is like having a layer with fewer neurons (a different architecture). On each iteration the data is seen by a different sub-model, so the overall result generalizes better.
We can see from the plots and reports that this model performs well and doesn't suffer from overfitting.
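A numpy sketch of (inverted) dropout as applied at training time (illustrative, assuming the standard formulation Keras also uses): surviving activations are scaled by $1/(1-p)$ so the expected value is unchanged and nothing special is needed at inference.

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: zero a fraction ~p of units, rescale the rest."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
out = dropout(x, p=0.1, rng=rng)
print(out.mean())  # close to 1.0: the expectation is preserved
```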
```
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(80, 80, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dropout(0.1))
model.add(keras.layers.Dense(1024, activation='relu'))
model.add(keras.layers.Dropout(0.1))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy', keras.metrics.Precision()]
)
report9 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)
plot_report(report9)
pred1_test = model.predict(unshuffled_test, batch_size=32)
pred1_train = model.predict(unshuffled_train, batch_size=32)
print("Test:")
print(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))
print("\nTrain:")
print(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))
```
```
# Quick and dirty test: Auditory Perception whole docs vs. other categories
```
### Positive corpus from all Auditory abstracts
- 146 documents in batch_05_AP_pmids (most are actually AP)
### Compare Auditory perception to corpus for other topics
Decreasing distance:
- 1000 disease documents
- 1000 arousal documents
- 1000 auditory perception documents
- 1000 psychology documents, psyc_1000_ids
- 156 new arousal documents, batch_04_AR_pmids (most are probably AP, but a few probably are not)
## Setup our deepdive app
deepdive_app/my_app/:
- db.url # name for this db
- deepdive.conf # contains extractors, inference rules, specify holdout.
- input/raw_sentences
- input/annotated_sentences
- input/init.sh
- udf/* # user defined functions used within deepdive.conf
Steps to build app:
deepdive initdb:
- db started with schema
- runs init.sh to preload deepdive postgres db with raw, annotated sentences
deepdive run:
- creates run/* directory for each run
- runs deepdive.conf which holds the deepdive pipeline extractors and rules
- in particular, the extractors set up the features and rules to be used by deepdive
We have some of these items as templates in a template directory.
```
!pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start # deepdive and medic
templates='/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/templates_deepdive_app_bagofwords'
app_dir='/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'
%mkdir {app_dir}
%cd {app_dir}
%cp -r {templates}/* {app_dir}/
# modify the postgres db name
#
!echo postgresql://localhost/8_0_3_quick_auditory_perception > db.url
```
## Fill input directory based on document abstracts
```
%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input'
```
### Prepping the Auditory Perception positive and negative training sets, plus a set of unknowns to test
Training set:
- 146 documents as positive
- 1000 documents as negative
Our unknowns are a mix of positive, likely negative, and most likely negative abstracts from other topics.
```
def get_abstracts(pmid_list_file):
abstracts=!medic --format tsv write --pmid-list {pmid_list_file} 2>/dev/null
return([a.split('\t', 2) for a in abstracts])
ap_146 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/batch_05_AP_pmids')
ap_1000 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/AP00_1000_ids')
diss = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/diss_1000_ids')
# psyc = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/psyc_1000_ids')
# ar_1000 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/AR_1000_ids')
```
### annotated sentences
my_id sentences [tf] \N
- column 3, true, false, null
- column 4 null id for deepdive's use
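For example, a single annotated row in this format (hypothetical PubMed id and lemmas; columns are tab-separated, `t` marks a positive example) might look like:

```
12345678	{auditory, cortex, perception, response}	t	\N
```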
```
import io
import codecs
from spacy.en import English
nlp = English(parser=True, tagger=True) # so we can sentence parse
def spacy_lemma_gt_len(text, length=2):
    '''Create bag of unique lemmas, requiring lemma length > length
    Note: setting length to 1 may mess up our postgres arrays as we would
    get commas here, unless we were to quote everything.
    '''
    tokens = []
    # doc = nlp(text.decode('utf8'))
    parsed_data = nlp(text)
    for token in parsed_data:
        if len(token.lemma_) > length:
            tokens.append(token.lemma_.lower())
    return(list(set(tokens)))
# def remove_stop_words():
# pass
# def spacy_lemma_biwords_gt_len(text, length=3):
# '''Create bag of unique bi-lemmas, requiring lemma length > length
# We are crudely eliminating any bi-lemmas that have commas in them to save us in loading postgres arrays.
# '''
# biwords = []
# parsed_data = nlp(text)
# skip_chars = [',', '"', "'"]
# for i in range(1, len(parsed_data) - 1):
# skip = False
# biword = u'{} {}'.format(parsed_data[i].lemma_.lower(), parsed_data[i+1].lemma_.lower())
# if (parsed_data[i].lemma_ in skip_chars or parsed_data[i+1].lemma_ in skip_chars):
# skip = True
# if len(biword) > length and not skip:
# biwords.append(biword)
# return(list(set(biwords)))
def get_scored_abstract_bow(abstracts, score):
    '''Return annotated bag of words.
    my_id sentences [tf] \N
    - score (postgres boolean) : t f \N
    - column 3, true, false, null.
    - column 4 null id for deepdive's use.
    - {{}} is to wrap list as postgres array.
    '''
    results = []
    for a in abstracts:
        # bow = spacy_lemma_gt_len(a[2].decode('utf8'), length=2)
        bow = spacy_lemma_gt_len(a[2].decode('utf-8'), length=2)
        # maybe remove stop words
        bow = u', '.join(bow)
        results.append(u'{}\t{{{}}}\t{}\t{}'.format(a[0], bow, score, '\\N'))
    return(results)
def write_raw_sentences(fname, annotations, score=None):
    '''
    Annotations (list of lists) : [[id, title, abstract],...]
    score (postgres boolean) : t f \N'''
    with codecs.open(fname, 'a', encoding='utf-8') as f:
        for a in annotations:
            # 4th column: literal \N placeholder id for deepdive's use
            f.write(u'{}\t{}\t{}\t\\N\n'.format(a[0], a[2].decode('utf-8'), score))

def write_annotated_sentences(fname, annotations):
    '''
    Annotations (list of strings) : ["id\tbagofwords\tpostgres_boolean\t\N",...]
    '''
    with codecs.open(fname, 'a', encoding='utf-8') as f:
        for a in annotations:
            a = a.replace('"', '')  # avoid postgres malformed array on unescaped quotes
            f.write(a + '\n')
%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input'
%rm ./raw_sentences
write_raw_sentences('raw_sentences', ap_146, 't')
write_raw_sentences('raw_sentences', diss, 'f')
write_raw_sentences('raw_sentences', ap_1000, '\N')
ap_146_pos = get_scored_abstract_bow(ap_146, 't')
diss_neg = get_scored_abstract_bow(diss, 'f')
ap_1000_null = get_scored_abstract_bow(ap_1000, '\N')
%rm './annotated_sentences'
write_annotated_sentences('annotated_sentences', ap_146_pos)
write_annotated_sentences('annotated_sentences', diss_neg)
write_annotated_sentences('annotated_sentences', ap_1000_null)
```
### Add in null equivalents for the entire training set.
This is so we can see their predictions, whether they are in the holdout fraction or not.
Otherwise we cannot see the results for the non-holdout portion.
```
write_raw_sentences('raw_sentences', ap_146, '\N')
write_raw_sentences('raw_sentences', diss, '\N')
ap_146_null = get_scored_abstract_bow(ap_146, '\N')
diss_null = get_scored_abstract_bow(diss, '\N')
write_annotated_sentences('annotated_sentences', ap_146_null)
write_annotated_sentences('annotated_sentences', diss_null)
```
## Explanation of how the sentences get into deepdive
`input/init.sh` is executed when we run `deepdive initdb`.
## init and run deepdive app
From the top level of this deepdive app.
```
%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'
```
## inspect results
1. inspect - deepdive's calibration graphs showing accuracy, holdout and holdout + unknowns (nulls)
2. extract and inspect expectation vs test values
3. Recall that only the 'holdout' portion of the training data gets an expectation assigned.
### How to get reports out on all the training, not just the holdout?
Just put all the training back in as nulls.
```
cmd = ('select has_term,sentence_id,id,category,expectation '
'from _annotated_sentences_has_term_inference order by random() limit 10')
!deepdive sql "{cmd}"
# output tsv
# !deepdive sql eval "{cmd}" format=tsv
```
### Running deepdive another time, get different holdouts.
- The holdout fraction in the deepdive.conf file hasn't changed.
- The holdout fraction seems to simply be a rough guide.
- There is nothing in documentation about specifying the random seed or how the random selection is made.
## Review our expected input numbers
Yes, everything checks out. We have duplicates due to the pseudo-labeling of the training set, and a few more because we hadn't cleaned the 1000-document unknown-to-predict set of overlaps with the training set.
1146 training records: 146 true, 1000 false.
```
!wc input/raw_sentences
!cut -f1 input/raw_sentences | sort | uniq | wc
!cut -f1 input/annotated_sentences | sort | uniq | wc
!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | wc
```
```
!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | sed 's/ *. //' > mult_rec_pmids
!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | sed 's/ *. //' | wc # number training records
#!grep -h -f mult_rec_pmids input/raw_sentences input/annotated_sentences | cut -f1,3 | sort | uniq -c | wc
```
## Pull out clean result sets for the full training and unknown test sets.
```
fields = 'terms,has_term,sentence_id,expectation,sentence'
!deepdive sql 'DROP TABLE IF EXISTS cc_neg_holdout'
!deepdive sql 'DROP TABLE IF EXISTS cc_pos_holdout'
!deepdive sql 'DROP TABLE IF EXISTS cc_training'
neg_holdout = ('SELECT DISTINCT r.terms,has_term,a.sentence_id,expectation INTO '
'cc_neg_holdout FROM '
'_annotated_sentences_has_term_inference as a JOIN '
'_raw_sentences as r ON '
'a.sentence_id = r.sentence_id WHERE '
'NOT a.has_term '
'ORDER BY a.sentence_id') # if include r.terms would see we pseudonulled all these.
pos_holdout = ('SELECT DISTINCT r.terms,has_term,a.sentence_id,expectation INTO '
'cc_pos_holdout FROM '
'_annotated_sentences_has_term_inference as a JOIN '
'_raw_sentences as r ON '
'a.sentence_id = r.sentence_id WHERE '
'a.has_term '
'ORDER BY a.sentence_id') # if include r.terms would see we pseudonulled all these.
# test = ("SELECT DISTINCT r.terms,a.has_term,a.sentence_id,a.expectation FROM "
# "_annotated_sentences_has_term_inference as a JOIN "
# "_raw_sentences as r ON "
# "a.sentence_id = r.sentence_id JOIN "
# "cc_neg_holdout as n ON a.sentence_id = n.sentence_id WHERE "
# "a.has_term IS NULL AND r.terms IS NOT NULL "
# "ORDER BY a.sentence_id") # 259
pos_neg_input = ("SELECT DISTINCT r.terms,a.has_term,a.sentence_id,a.expectation INTO "
"cc_all_input FROM "
"_annotated_sentences_has_term_inference as a JOIN "
"_raw_sentences as r ON "
"a.sentence_id = r.sentence_id LEFT JOIN "
"cc_neg_holdout as n ON a.sentence_id = n.sentence_id WHERE "
"a.has_term IS NULL AND r.terms IS NOT NULL "
"ORDER BY a.sentence_id")
training = ("SELECT DISTINCT a.terms,a.has_term,a.sentence_id,a.expectation INTO "
"cc_training FROM "
"cc_all_input as a LEFT JOIN "
"cc_pos_holdout as p ON "
"a.sentence_id = p.sentence_id LEFT JOIN "
"cc_neg_holdout as n ON "
"a.sentence_id = n.sentence_id WHERE "
"p.sentence_id IS NULL AND n.sentence_id IS NULL AND a.terms IS NOT NULL")
report = "SELECT * FROM cc_pos_holdout UNION ALL SELECT * FROM cc_training"
# report = ("SELECT * "
# "cc_all_input UNION ALL "
# "SELECT * FROM cc_pos_holdout UNION ALL "
# "SELECT * FROM cc_training ")
# WHERE "
# "c.has_term IS NULL AND r.terms IS NOT NULL "
# "ORDER BY a.sentence_id")
# unk_input = ("SELECT DISTINCT r.terms,has_term,a.sentence_id,category,expectation FROM "
# "_annotated_sentences_has_term_inference as a JOIN "
# "_raw_sentences as r ON "
# "a.sentence_id = r.sentence_id")
#result=!deepdive sql "{neg_holdout}"
#result=!deepdive sql "{pos_holdout}"
#pos_neg_input_results=!deepdive sql "{pos_neg_input}" # should be 1000
###unk_input_results=!deepdive sql "{unk_input}" # should be 1000
#test=!deepdive sql "{test}"
#test = !deepdive sql "{training}"
test = !deepdive sql "{report}"
print(len(test))
test[0:6]
neg_training = 'select has_term,sentence_id,category,expectation from _annotated_sentences_has_term_inference as a WHERE NOT a.has_term'
cmd = ('select {} FROM '
       '_annotated_sentences_has_term_inference as a JOIN '
       '_raw_sentences as r ON '
       'a.sentence_id = r.sentence_id'.format(fields))
results=!deepdive sql eval "{cmd}" format=tsv
fields = fields.split(',')
print(fields)
#results[0:10]
```
## plot our curves by getting predictions from deepdive by sql.
Holdout data has_term is t or f
Total returned = holdout + null labeled trues + null labeled falses + to be predicteds
2438 = (259f + 33t) + 146 + 1000 + 1000
As sentences (pubmed ids) are shared between classes:
- remove the holdouts (they are retained in the pseudo-null labeled classes)
- extract correct labels onto the pseudo-null
- maybe by queries back to _annotated_sentences
- remove any 'to be predicteds' that are also in the pseudo-null
- because we hadn't cleaned these out prior to building our input files.
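The cleanup steps above can be sketched with plain Python sets of sentence_ids (the ids below are hypothetical placeholders, not real rows):

```python
# hypothetical sentence_id sets illustrating the cleanup steps above
all_rows = {1, 2, 3, 4, 5, 6}      # everything returned by the report query
holdout = {1, 2}                   # positive + negative holdouts
pseudo_null = {3, 4}               # pseudo-null labeled sentences
kept = all_rows - holdout          # step 1: drop the holdout rows
to_predict = kept - pseudo_null    # step 3: drop 'to be predicteds' duplicated in the pseudo-null
```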
```
cmd = ('SELECT has_term,sentence_id,id,category,expectation '
'FROM _annotated_sentences_has_term_inference, _annotated_sentences')
# deepdive sql 'select has_term,sentence_id,id,category,expectation from _annotated_sentences_has_term_inference'
pdrx = !deepdive sql eval "{cmd}" format=tsv
print(len(pdrx))
# Total returned = holdout + null labeled trues + null labeled falses + to be predicteds
# 2438 = (259f + 33t) + 146 + 1000 + 1000
```
```
%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'
%alias plot_cal /Users/ccarey/Documents/Projects/NAMI/rdoc/scripts/plot_deepdive_calibration.R
%plot_cal ./run/LATEST/calibration/_annotated_sentences.has_term.tsv custom_stats_plots/test > /dev/null 2>&1
```
<!---
images are loaded from the root of the notebook rather than the current directory
-->
```
# side by side
# <tr>
# <td><img src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_histogram.png width=200 height=200 /> </td>
# <td><img src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_stacked_histogram.png /> </td>
# </tr>
# or
# ![my image]./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_stacked_histogram.png
# or (also works for pdf)
from IPython.display import HTML
HTML('<iframe src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_histogram.png width=350 height=350></iframe>')
```
# Appendix 1. In R, plot our own curves from deepdive's calibration
DeepDive produces some diagnostics.
- *calibration/....png*. But we can't distinguish the holdouts from the predictions on the unknowns in DeepDive's png.
- *calibration/....tsv*. See deepdive documentation.
tsv columns:
[bucket_from] [bucket_to] [num_predictions] [num_true] [num_false]
- buckets are the min and max extent of the probability bins. (1.00 = 100% probability document is on topic).
- Column 3 is predicted from unknowns + holdouts
- Columns 4 and 5 are computed only from the holdouts.
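Given that column layout, per-bucket holdout accuracy can be computed from the tsv directly. A minimal sketch, assuming the five-column tab-separated format described above (the row values are hypothetical):

```python
# parse one row of deepdive's calibration tsv (assumed layout: the five columns above)
row = "0.9\t1.0\t50\t30\t3"  # hypothetical bucket
bucket_from, bucket_to, n_pred, n_true, n_false = row.split("\t")
# fraction of holdout sentences in this bucket that were labeled true
holdout_accuracy = int(n_true) / (int(n_true) + int(n_false))
```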
--------
## 1. KMeans vs GMM
In this first example, we'll generate a Gaussian dataset and try to cluster it, to see whether the resulting clusters match the dataset's original labels.
We can use sklearn's [make_blobs](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html) function to create a dataset of Gaussian blobs:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cluster, datasets, mixture
%matplotlib inline
n_samples = 1000
varied = datasets.make_blobs(n_samples=n_samples,
cluster_std=[5, 1, 0.5],
random_state=3)
X, y = varied[0], varied[1]
plt.figure( figsize=(16,12))
plt.scatter(X[:,0], X[:,1], c=y, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))
plt.show()
```
Now, when we hand this dataset to a clustering algorithm, we obviously won't pass in the labels. So let's start with k-means and see how it handles this dataset. Will it produce clusters that match the original labels?
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3)
pred = kmeans.fit_predict(X)
plt.figure( figsize=(16,12))
plt.scatter(X[:,0], X[:,1], c=pred, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))
plt.show()
```
How did k-means do? Was it able to find clusters that match, or at least resemble, the original labels?
Now let's try clustering with [GaussianMixture](http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html):
```
# TODO: Import GaussianMixture
from sklearn.mixture import GaussianMixture
# TODO: Create an instance of Gaussian Mixture with 3 components
gmm = GaussianMixture(n_components=3)
# TODO: fit the dataset
gmm = gmm.fit(X)
# TODO: predict the clustering labels for the dataset
pred_gmm = gmm.predict(X)
# Plot the clusters
plt.figure( figsize=(16,12))
plt.scatter(X[:,0], X[:,1], c=pred_gmm, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))
plt.show()
```
Comparing the k-means and GMM clustering results visually, which one better matches the original labels?
# 2. KMeans vs GMM - the Iris Dataset
For the second example, we'll use a dataset with more than two features. The Iris dataset works well here, since it's reasonable to assume that its data is Gaussian-distributed.
The Iris dataset is a labeled dataset with four features:
```
import seaborn as sns
iris = sns.load_dataset("iris")
iris.head()
```
There are several ways to visualize a dataset with more than two features (e.g. [PairGrid](https://seaborn.pydata.org/generated/seaborn.PairGrid.html), [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), or [projecting to a lower dimension with PCA](http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html#sphx-glr-auto-examples-decomposition-plot-pca-iris-py)). Let's try PairGrid, since it doesn't distort the dataset -- it simply plots every pair of features against each other in a grid of subplots:
```
g = sns.PairGrid(iris, hue="species", palette=sns.color_palette("cubehelix", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])
g.map(plt.scatter)
plt.show()
```
If we cluster the Iris dataset using KMeans, how closely would the resulting clusters match the original labels?
```
kmeans_iris = KMeans(n_clusters=3)
pred_kmeans_iris = kmeans_iris.fit_predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])
iris['kmeans_pred'] = pred_kmeans_iris
g = sns.PairGrid(iris, hue="kmeans_pred", palette=sns.color_palette("cubehelix", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])
g.map(plt.scatter)
plt.show()
```
How do these clusters match the original labels?
You can clearly see that visual inspection is no longer useful if we're working with multiple dimensions like this. So how can we evaluate the clustering result versus the original labels?
You guessed it. We can use an external cluster validation index such as the [adjusted Rand score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) which generates a score between -1 and 1 (where an exact match will be scored as 1).
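A useful property to keep in mind: ARI is invariant to how the cluster ids are numbered, so a partition identical to the ground truth scores a perfect 1.0 even with swapped labels. A quick sanity check:

```python
from sklearn.metrics import adjusted_rand_score

# ARI ignores the arbitrary numbering of cluster ids: a relabeled
# but otherwise identical partition still scores a perfect 1.0
labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]  # same partition, ids swapped
score = adjusted_rand_score(labels_true, labels_pred)
```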
```
# TODO: Import adjusted rand score
from sklearn.metrics import adjusted_rand_score
# TODO: calculate adjusted rand score passing in the original labels and the kmeans predicted labels
iris_kmeans_score = adjusted_rand_score(iris['species'], iris['kmeans_pred'])
# Print the score
iris_kmeans_score
```
What if we cluster using Gaussian Mixture models? Would it earn a better ARI score?
```
gmm_iris = GaussianMixture(n_components=3).fit(iris[['sepal_length','sepal_width','petal_length','petal_width']])
pred_gmm_iris = gmm_iris.predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])
iris['gmm_pred'] = pred_gmm_iris
# TODO: calculate adjusted rand score passing in the original
# labels iris['species'] and the GMM predicted labels
iris_gmm_score = adjusted_rand_score(iris['species'], iris['gmm_pred'])
# Print the score
iris_gmm_score
```
Thanks to the ARI scores, we have a clear indicator of which clustering result better matches the original labels.
--------
# AUTOMATIC DIFFERENTIATION WITH [TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html#automatic-differentiation-with-torch-autograd)
When training neural networks, the most frequently used algorithm is **back propagation**. In this algorithm, parameters (model weights) are adjusted according to the **gradient** of the loss function with respect to the given parameter.
To compute those gradients, PyTorch has a built-in differentiation engine called **torch.autograd**. It supports automatic computation of gradient for any computational graph.
Consider the simplest one-layer neural network, with input ***x***, parameters ***w*** and ***b***, and some loss function. It can be defined in PyTorch in the following manner:
```
import torch
x = torch.ones(5) # input tensor
y = torch.zeros(3) # expected output
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
z = torch.matmul(x, w)+b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) # binary cross-entropy loss that takes logits as input
```
## Tensors, Functions and Computational graph
This code defines the following **computational graph**:

In this network, ***w*** and ***b*** are **parameters**, which we need to optimize. Thus, we need to be able to compute the gradients of loss function with respect to those variables. In order to do that, we set the **requires_grad** property of those tensors.
**NOTE**:
- You can set the value of **requires_grad** when creating a tensor, or later by using **x.requires_grad_(True)** method.
A function that we apply to tensors to construct computational graph is in fact an object of class **Function**. This object knows how to compute the function in the forward direction, and also how to compute its derivative during the ***backward propagation step***. A reference to the backward propagation function is stored in **grad_fn** property of a tensor. You can find more information of **Function** in the [documentation](https://pytorch.org/docs/stable/autograd.html#function).
```
print(f"Gradient function for z = {z.grad_fn}")
print(f"Gradient function for loss = {loss.grad_fn}")
```
## Computing Gradients
To optimize weights of parameters in the neural network, we need to compute the derivatives of our loss function with respect to parameters, namely, we need $\frac{\partial loss}{\partial w}$
and $\frac{\partial loss}{\partial b}$ under some fixed values of ***x*** and ***y***. To compute those derivatives, we call **loss.backward()**, and then retrieve the values from **w.grad** and **b.grad**:
```
loss.backward()
print(w.grad)
print(b.grad)
```
**NOTE**:
- We can only obtain the **grad** properties for the ***leaf nodes*** of the computational graph, which have **requires_grad** property set to **True**. For all other nodes in our graph, gradients will not be available.
- We can only perform gradient calculations using **backward** once on a given graph, for performance reasons. If we need to do several **backward** calls on the same graph, we need to pass **retain_graph=True** to the **backward** call.
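Both notes can be seen on a tiny example: calling **backward** twice requires `retain_graph=True` on the first call, and the second call *adds* into `.grad` rather than replacing it (a minimal sketch, not part of the original tutorial):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = (x ** 2).sum()
y.backward(retain_graph=True)   # keep the graph so we can call backward again
first = x.grad.clone()          # dy/dx = 2x = 4
y.backward()                    # gradients accumulate into .grad
second = x.grad.clone()         # now 8, not 4
```

This accumulation is why training loops zero the gradients (e.g. `optimizer.zero_grad()`) before each backward pass.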
## Disabling Gradient Tracking
By default, all tensors with **requires_grad=True** are tracking their computational history and support gradient computation. However, there are some cases when we do not need to do that, for example, when we have trained the model and just want to apply it to some input data, i.e. we only want to do forward computations through the network. We can stop tracking computations by surrounding our computation code with **torch.no_grad()** block:
```
z = torch.matmul(x, w)+b
print(z.requires_grad)
with torch.no_grad(): # operations inside this block do not build a computational graph
z = torch.matmul(x, w)+b
print(z.requires_grad)
```
Another way to achieve the same result is to use the **detach()** method on the tensor:
```
z = torch.matmul(x, w)+b
z_det = z.detach() # copies tensor z detached from the computational graph (not the same as clone())
print(z_det.requires_grad)
```
There are reasons you might want to disable gradient tracking:
- To mark some parameters in your neural network as **frozen parameters**. This is a very common scenario for [finetuning a pretrained network](https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html).
- To **speed up computations** when you are only doing forward pass, because computations on tensors that do not track gradients would be more efficient.
## More on Computational Graphs
Conceptually, autograd keeps a record of data (tensors) and all executed operations (along with the resulting new tensors) in a directed acyclic graph (DAG) consisting of [Function](https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) objects. In this DAG, leaves are the input tensors, roots are the output tensors. By tracing this graph from roots to leaves, you can automatically compute the gradients using the chain rule.
In a forward pass, autograd does two things simultaneously:
- run the requested operation to compute a resulting tensor.
- maintain the operation’s gradient function in the DAG.
The backward pass kicks off when **.backward()** is called on the DAG root. **autograd** then:
- computes the gradients from each **.grad_fn**,
- accumulates them in the respective tensor’s **.grad** attribute
- using the chain rule, propagates all the way to the leaf tensors.
**NOTE**:
- ***DAGs are dynamic in PyTorch*** An important thing to note is that the graph is recreated from scratch; after each **.backward()** call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.
## Optional Reading: Tensor Gradients and Jacobian Products
In many cases, we have a scalar loss function, and we need to compute the gradient with respect to some parameters. However, there are cases when the output function is an arbitrary tensor. In this case, PyTorch allows you to compute so-called ***Jacobian product***, and not the actual gradient.
For a vector function $\vec{y}=f(\vec{x})$ , where $\vec{x}=\langle x_1,\dots,x_n\rangle$ and $\vec{y}=\langle y_1,\dots,y_m\rangle$ , a gradient of $\vec{y}$ with respect to $\vec{x}$ is given by Jacobian matrix:
$$J=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ \vdots & \ddots & \vdots\\ \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)$$
Instead of computing the Jacobian matrix itself, PyTorch allows you to compute **Jacobian Product** $v^T\cdot J$
for a given input vector $v=(v_1 \dots v_m)$. This is achieved by calling **backward** with ***v*** as an argument. The size of $v$ should be the same as the size of the original tensor, with respect to which we want to compute the product:
```
inp = torch.eye(5, requires_grad=True)
out = (inp+1).pow(2) # shape is 5x5
out.backward(torch.ones_like(inp), retain_graph=True) # torch.ones_like() here supplies the vector v from above; an all-ones tensor is the usual choice
print(f"First call\n{inp.grad}")
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nSecond call\n{inp.grad}")
inp.grad.zero_()
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nCall after zeroing gradients\n{inp.grad}")
```
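As a sanity check, the result of the first call can be computed by hand: since `out = (inp + 1) ** 2` elementwise, the derivative is `2 * (inp + 1)`, and with `v` all ones the Jacobian product reduces to that derivative. A NumPy sketch:

```python
import numpy as np

# out = (inp + 1) ** 2 elementwise, so d(out)/d(inp) = 2 * (inp + 1);
# with v = ones, the Jacobian product equals this derivative directly
inp = np.eye(5)
grad = 2 * (inp + 1)   # 4.0 on the diagonal, 2.0 elsewhere
```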
--------
## Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality
## The data is in the database we've been using
+ host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com'
+ database='training'
+ port=5432
+ user='dot_student'
+ password='qgis'
+ table name = 'winequality'
```
import pg8000
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets, tree, metrics
%matplotlib inline
```
## Query for the data and create a numpy array
```
conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database="training")
cursor = conn.cursor()
cursor.execute("select * from information_schema.columns where table_name='winequality'")
results = cursor.fetchall()
conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database="training")
cursor = conn.cursor()
cursor.execute("select column_name from information_schema.columns where table_name='winequality'") #LIMIT 10
results = cursor.fetchall()
conn.rollback()
for x in results:
print(x)
conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database="training")
cursor = conn.cursor()
db = []
cursor.execute("SELECT * from winequality")
for item in cursor.fetchall():
db.append(item)
results_list = []
for y in results:
results_list.append(y)
result_array = np.array(db)
```
## Split the data into features (x) and target (y, the last column in the table)
### Remember you can cast the results into a numpy array and then slice out what you want
```
x = result_array[:,:11]
y = result_array[:,11]
y
x
```
## Create a decision tree with the data
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt = dt.fit(x,y)
```
## Run 10-fold cross validation on the model
```
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in newer versions
scores = cross_val_score(dt,x,y,cv=10)
np.mean(scores)
```
## If you have time, calculate the feature importance and graph based on the code in the [slides from last class](http://ledeprogram.github.io/algorithms/class9/#21)
### Use [this tip for getting the column names from your cursor object](http://stackoverflow.com/questions/10252247/how-do-i-get-a-list-of-column-names-from-a-psycopg2-cursor)
```
plt.plot(dt.feature_importances_,'o')
plt.ylim(0,1)
```
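One way to pair the importances with names is sketched below on synthetic data (the column names are hypothetical placeholders, not the real winequality columns; in the notebook you would take them from `cursor.description` as the tip suggests):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)  # only the first column carries signal
names = ['fixed_acidity', 'volatile_acidity', 'citric_acid']  # hypothetical columns

dt = DecisionTreeClassifier(random_state=0).fit(X, y)
# pair each importance with its column name, highest first
ranked = sorted(zip(names, dt.feature_importances_), key=lambda p: -p[1])
```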
--------
**Build Adjacency Matrix**
**Note:** You must put the generated JSON file into a zip file. We probably should code this in too.
```
import sqlite3
import json
# Progress Bar I found on the internet.
# https://github.com/alexanderkuk/log-progress
from progress_bar import log_progress
PLOS_PMC_DB = 'sqlite_data/data.plos-pmc.sqlite'
ALL_DB = 'sqlite_data/data.all.sqlite'
PLOS_PMC_MATRIX = 'json_data/plos-pmc/adjacency_matrix.json'
ALL_MATRIX = 'json_data/all/adjacency_matrix.json'
conn_plos_pmc = sqlite3.connect(PLOS_PMC_DB)
cursor_plos_pmc = conn_plos_pmc.cursor()
conn_all = sqlite3.connect(ALL_DB)
cursor_all = conn_all.cursor()
```
Queries
```
# For getting the maximum row id
QUERY_MAX_ID = "SELECT id FROM interactions ORDER BY id DESC LIMIT 1"
# Get interaction data
QUERY_INTERACTION = "SELECT geneids1, geneids2, probability FROM interactions WHERE id = {}"
# Get all at once
QUERY_ALL_INTERACTION = "SELECT geneids1, geneids2, probability FROM interactions"
actions = [
# {
# "db":PLOS_PMC_DB,
# "matrix" : PLOS_PMC_MATRIX,
# "conn": conn_plos_pmc,
# "cursor": cursor_plos_pmc,
# },
{
"db":ALL_DB,
"matrix" : ALL_MATRIX,
"conn": conn_all,
"cursor": cursor_all,
},
]
```
Step through every interaction.
1. If geneids1 not in matrix - insert it as dict.
2. If geneids2 not in matrix[geneids1] - insert it as []
3. If probability not in matrix[geneids1][geneids2] - insert it.
4. Perform the reverse.
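The symmetric insert in steps 1-4 can be condensed into a small helper (a sketch mirroring the loop body below, with hypothetical gene ids):

```python
def add_edge(matrix, id1, id2, prob):
    """Insert prob under both directions, skipping duplicates (steps 1-4 above)."""
    for a, b in ((id1, id2), (id2, id1)):
        matrix.setdefault(a, {}).setdefault(b, [])
        if prob not in matrix[a][b]:
            matrix[a][b].append(prob)

matrix = {}
add_edge(matrix, 'geneA', 'geneB', 870)
add_edge(matrix, 'geneA', 'geneB', 870)   # duplicate probability is stored once
add_edge(matrix, 'geneA', 'geneB', 920)
```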
```
# for action in actions:
for action in log_progress(actions, every=1, name="Matrix"):
print("Executing SQL query. May take a minute.")
matrix = {}
cursor = action["cursor"].execute(QUERY_ALL_INTERACTION)
interactions = cursor.fetchall()
print("Query complete")
for row in log_progress(interactions, every=10000, name=action["matrix"]+" rows"):
if row == None:
continue
id1 = row[0]
id2 = row[1]
try:
prob = int(round(row[2],2) * 1000)
except Exception:
continue
# Forward
if id1 not in matrix:
matrix[id1] = {}
if id2 not in matrix[id1]:
matrix[id1][id2] = []
if prob not in matrix[id1][id2]:
matrix[id1][id2].append(prob)
# Backwards
if id2 not in matrix:
matrix[id2] = {}
if id1 not in matrix[id2]:
matrix[id2][id1] = []
if prob not in matrix[id2][id1]:
matrix[id2][id1].append(prob)
with open(action["matrix"], "w+") as file:
file.write(json.dumps( matrix ))
print("All Matrices generated")
action["conn"].close()
```
--------
# Introduction to Digital Image Treatment
OpenCV is one of the most popular libraries for DIT. It was originally written in C, but for some time now Python bindings have been available that let us use it with Python's simpler syntax.
Let's begin playing with the library
```
# Import modules
import cv2
import numpy as np
import imutils
import matplotlib.pyplot as plt
```
* **CV2**: The OpenCV Python binding library; it gives us access to all the functions available in the library
* **Numpy**: Numpy is an efficient matrix-manipulation library. We will see that all our images are treated as matrices or vectors, so we need a library to manipulate them.
* **Imutils**: A useful image-transformation library.
* **Matplotlib**: Although this library can be useful for different purposes, we will use it mainly for visualization.
```
# Let's load an image
image = cv2.imread("teach_images/yiyo_pereza.png") # Loads color image
print(image)
```
What did I print? What do those numbers mean?
Most computers interpret an image as an array of 8-bit color values, that is, numbers between 0 and 255. What you see in the last print statement is this representation split into three color channels: Red, Green and Blue, each a matrix storing the pixel color values of the image. This is what your computer 'sees', and it is why computer-vision applications require some math to be developed. Fortunately for us, OpenCV takes care of it!
<img src="teach_images/image_representation.png">
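That "array of numbers" view is easy to reproduce directly in NumPy (a toy 2x2 image, not one of the lesson's files):

```python
import numpy as np

# a tiny 2x2 image: height x width x 3 channels of unsigned 8-bit values
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 0, 0]   # top-left pixel set to pure red (RGB order)
```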
The RGB color space means that you have a 8-bit unsigned representation for every color, below you can find some examples:
* <img src="teach_images/rgb_255_0_0.png" width="20px" align="left" top="10px" bottom="10px"> **Pure Red** [255, 0, 0]
* <img src="teach_images/rgb_0_255_0.png" width="20px" align="left" top="10px" bottom="10px"> **Pure Green** [0, 255, 0]
* <img src="teach_images/rgb_255_255_0.png" width="20px" align="left" top="10px" bottom="10px"> **Mix of Red and Green** [255, 255, 0]
* <img src="teach_images/rgb_0_127_127.png" width="20px" align="left" top="10px" bottom="10px"> **Attenuate mix of Green and Blue** [0, 127, 127]
Keeping this in mind, it means that the last pixel from Yiyo's picture ```[207 221 220]``` has this color:
* <img src="teach_images/rgb_207_221_220.png" width="20px" align="left" top="10px" bottom="10px"> **Yiyo's last pixel** [207 221 220]
Finally, most applications manage the color space as **RGB**, but OpenCV manages it as **BGR**, so please **keep this in mind**, as it will be very important in the upcoming lesson
```
# Let's show the image to see how this array 'looks'
%matplotlib inline
plt.axis('off')
plt.imshow(image)
plt.show()
```
**Why is the first image plotted in a 'strange' way?**
Images are treated as an array of three channels: Red, Green and Blue. OpenCV treats them as BGR, while matplotlib treats them as RGB.
Let's solve this
```
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.axis('off')
plt.imshow(image_rgb)
plt.show()
# Nice, now let's split the channels
image = cv2.imread("teach_images/yiyo_pereza.png")
blue, green, red = cv2.split(image)
# Original one
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
# Splitted channels
plt.figure(figsize=(20, 20))
plt.subplot(131)
plt.axis('off')
plt.title('blue')
plt.imshow(blue)
plt.subplot(132)
plt.axis('off')
plt.title('green')
plt.imshow(green)
plt.subplot(133)
plt.title('red')
plt.axis('off')
plt.imshow(red)
# Let's split another image
image = cv2.imread("teach_images/jerico_2.png")
blue, green, red = cv2.split(image)
# Original one
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
# Splitted channels
plt.figure(figsize=(20, 20))
plt.subplot(131)
plt.axis('off')
plt.title('blue')
plt.imshow(blue)
plt.subplot(132)
plt.axis('off')
plt.title('green')
plt.imshow(green)
plt.subplot(133)
plt.title('red')
plt.axis('off')
plt.imshow(red)
# Let's split another image
image = cv2.imread("teach_images/rose.png")
blue, green, red = cv2.split(image)
# Original one
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
# Splitted channels
plt.figure(figsize=(20, 20))
plt.subplot(131)
plt.axis('off')
plt.title('blue')
plt.imshow(blue)
plt.subplot(132)
plt.axis('off')
plt.title('green')
plt.imshow(green)
plt.subplot(133)
plt.title('red')
plt.axis('off')
plt.imshow(red)
```
One of the main uses of splitting the RGB channels is building masks; notice that the red channel would let us capture most of the information in the rose's petals. We will review the mask concept later in the lesson.
## Grayscale Images
From our previous work, we know that an image is represented as an RGB matrix, but processing a three-channel matrix can be computationally costly; because of this, many applications transform the image into a single-channel array of 'gray' pixel intensities.
Be careful: a grayscale image does not just mean black or white; it is a new representation of the RGB space:
Y = 0.299 x R + 0.587 x G + 0.114 x B
Due to the cones and receptors in our eyes, we are able to perceive nearly twice the amount of green as red, and similarly we notice over twice the amount of red as blue. The conversion weights account for this when converting from RGB to grayscale.
```
# Let's create a grayscale image
image = cv2.imread("teach_images/yiyo_pereza.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(20, 20))
plt.subplot(121)
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.subplot(122)
plt.title('gray')
plt.axis('off')
plt.imshow(gray, cmap='gray')
# Let's print the gray array
print(gray)
# Let's print the shapes of both color and gray
print(image.shape)
print(gray.shape)
```
## Binary Images
One of the main applications of grayscale images is creating binary images, which contain **only** two possible pixel values: 0 or 255. The main application of binary images is mask creation: extracting a region of interest of the image based on which pixels are equal to zero and which are greater than zero.
Let's create some examples
```
# Our objective will be to extract a mask with only the letters and number of a license plate
plate = cv2.imread("teach_images/license_plate.png")
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(20, 20))
plt.subplot(121)
plt.title('Original Plate')
plt.axis('off')
plt.imshow(cv2.cvtColor(plate, cv2.COLOR_BGR2RGB))
plt.subplot(122)
plt.title('Gray Plate')
plt.axis('off')
plt.imshow(gray, cmap='gray')
```
## Thresholding
Thresholding is one of the most common (and basic) segmentation techniques in computer vision, and it allows us to separate the foreground (i.e. the objects that we are interested in) from the background of the image.
For the example, we will use simple inverse thresholding: we must specify a threshold value T; all pixel intensities less than or equal to T are set to 255, and all pixel intensities greater than T are set to 0.
Keep in mind that our main objective is to extract the letters and numbers of the plate, so we want the dark, nearly 'pure black' regions to come out as 255.
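The inverse-threshold rule is simple enough to sketch in NumPy before using `cv2.threshold` (a hypothetical four-pixel row, not the plate image):

```python
import numpy as np

gray = np.array([10, 80, 200, 240], dtype=np.uint8)  # hypothetical pixel row
T = 40
# THRESH_BINARY_INV behaviour: values above T become 0, the rest become 255
binary_inv = np.where(gray > T, 0, 255).astype(np.uint8)
```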
```
# Let's look some representations of some aparts of the image
# The top of the image, which is only yellow
print(gray[0:10])
plt.imshow(gray[0:10], cmap='gray')
# Let's look some representations of some aparts of the image
# One of the letters (mainly black)
print(gray[40: 110, 110:150])
plt.imshow(gray[40: 110, 110:150], cmap='gray')
# Let's look some representations of some aparts of the image
# One of the noisy region
print(gray[100: 170, 80:150])
plt.subplot(121)
plt.title('Original Plate')
plt.imshow(cv2.cvtColor(plate[100: 170, 80:150], cv2.COLOR_BGR2RGB))
plt.subplot(122)
plt.title('Gray Plate')
plt.imshow(gray[100: 170, 80:150], cmap='gray')
# Let's apply a threshold
# If a pixel value is greater than our threshold (in this case,
# 213), we set it to be BLACK, otherwise it is WHITE.
# REMEMBER: Black regions are in the order of 213
(T, thresh_1) = cv2.threshold(gray, 213, 255, cv2.THRESH_BINARY_INV)
# If a pixel value is greater than our threshold (in this case,
# 212), we set it to be BLACK, otherwise it is WHITE.
(T, thresh_2) = cv2.threshold(gray, 212, 255, cv2.THRESH_BINARY_INV)
# If a pixel value is greater than our threshold (in this case,
# 80), we set it to be BLACK, otherwise it is WHITE.
(T, thresh_3) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
# If a pixel value is greater than our threshold (in this case,
# 40), we set it to be BLACK, otherwise it is WHITE.
(T, thresh_4) = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
plt.subplot(221)
plt.title('Greater than 213')
plt.axis('off')
plt.imshow(thresh_1, cmap='gray')
plt.subplot(222)
plt.title('Greater than 212')
plt.axis('off')
plt.imshow(thresh_2, cmap='gray')
plt.subplot(223)
plt.title('Greater than 80')
plt.axis('off')
plt.imshow(thresh_3, cmap='gray')
plt.subplot(224)
plt.title('Greater than 40')
plt.axis('off')
plt.imshow(thresh_4, cmap='gray')
```
For this particular application, a threshold of 40 takes off most of our noise, nice!
## Dilation and Erosion
Morphological transformations have a wide array of uses, i.e. :
* Removing noise
* Isolation of individual elements and joining disparate elements in an image.
* Finding of intensity bumps or holes in an image
Let's look some of them.
* **Dilation**: As its name suggests, it 'dilates' a binary region transforming black pixels into white pixels
<img src="teach_images/dilation.png">
* **Erosion**: The opposite to dilation, it transforms white pixels into black pixels.
<img src="teach_images/dilation.png">
* **Opening**: Opening is just another name of erosion followed by dilation
<img src="teach_images/opening.png">
* **Closing**: Closing is reverse of Opening, Dilation followed by Erosion
<img src="teach_images/closing.png">
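The effect of dilation and erosion can be sketched in one dimension with plain NumPy (a toy 3-pixel max/min kernel, not OpenCV's implementation): a lone white "noise" pixel grows under dilation and vanishes under erosion.

```python
import numpy as np

def dilate1d(x):
    # a pixel turns white if it or either neighbour is white (3-wide kernel)
    p = np.pad(x, 1)
    return np.maximum(np.maximum(p[:-2], p[1:-1]), p[2:])

def erode1d(x):
    # a pixel stays white only if it and both neighbours are white
    p = np.pad(x, 1, constant_values=255)
    return np.minimum(np.minimum(p[:-2], p[1:-1]), p[2:])

row = np.array([0, 0, 255, 0, 0], dtype=np.uint8)  # a lone white 'noise' pixel
```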
Now that you know the basics of image transformation, lets eliminate the noise
```
# Let's use an opening transformation to eliminate most of the noise
kernel = np.ones((5,5),np.uint8)
opening = cv2.morphologyEx(thresh_4, cv2.MORPH_OPEN, kernel)
plt.subplot(121)
plt.title('Original thresholded')
plt.axis('off')
plt.imshow(thresh_4, cmap='gray')
plt.subplot(122)
plt.title('Opening')
plt.axis('off')
plt.imshow(opening, cmap='gray')
```
Nice, most of the noise was taken off!! Now let's begin with a masking job to extract ROIs from our original image
# Masking
'Masking' is the process of extracting a Region of Interest from our images; it is basically approached using a bitwise-AND operation:
```
0101
AND 0011
= 0001
```
In OpenCV, what we will have is a mask with only two possible values: 0 or 255 (remember, a binary image). Let's look at a concrete example using our initial RGB-split rose. The main goal is to extract its petals
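The same bitwise-AND behaviour can be sketched with NumPy on a hypothetical 2x2 image: wherever the mask is 255 the pixel survives unchanged, and wherever it is 0 the pixel is zeroed out.

```python
import numpy as np

img = np.array([[10, 20], [30, 40]], dtype=np.uint8)   # hypothetical gray image
mask = np.array([[255, 0], [0, 255]], dtype=np.uint8)  # binary mask
# AND with 255 (all bits set) keeps the value; AND with 0 clears it
masked = np.bitwise_and(img, mask)
```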
```
# Let's split another image
rose = cv2.imread("teach_images/rose.png")
blue, green, red = cv2.split(rose)
# Original one
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(rose, cv2.COLOR_BGR2RGB))
plt.show()
# Splitted channels
plt.figure(figsize=(20, 20))
plt.subplot(131)
plt.axis('off')
plt.title('blue')
plt.imshow(blue)
plt.subplot(132)
plt.axis('off')
plt.title('green')
plt.imshow(green)
plt.subplot(133)
plt.title('red')
plt.axis('off')
plt.imshow(red)
# Red channel is the most useful for our purposes
(T, thresh_rose) = cv2.threshold(red, 40, 255, cv2.THRESH_BINARY)
plt.subplot(131)
plt.title('Original')
plt.axis('off')
plt.imshow(rose)
plt.subplot(132)
plt.title('Red')
plt.axis('off')
plt.imshow(red)
plt.subplot(133)
plt.title('thresholded')
plt.axis('off')
plt.imshow(thresh_rose, cmap='gray')
# Let's use some morphological transformations to eliminate most of the noise
kernel_1 = np.ones((50,50),np.uint8)
opening_rose = cv2.morphologyEx(thresh_rose, cv2.MORPH_OPEN, kernel_1)
kernel_2 = np.ones((5,5),np.uint8)
closing = cv2.morphologyEx(opening_rose, cv2.MORPH_CLOSE, kernel_2)
plt.figure(figsize=(20, 20))
plt.subplot(231)
plt.title('Original')
plt.axis('off')
plt.imshow(rose)
plt.subplot(232)
plt.title('Red')
plt.axis('off')
plt.imshow(red)
plt.subplot(233)
plt.title('thresholded')
plt.axis('off')
plt.imshow(thresh_rose, cmap='gray')
plt.subplot(234)
plt.title('Opening')
plt.axis('off')
plt.imshow(opening_rose, cmap='gray')
plt.subplot(235)
plt.title('Closing')
plt.axis('off')
plt.imshow(closing, cmap='gray')
# Now, let's implement a masking!!
clone = rose.copy()
masked = cv2.bitwise_and(clone, clone, mask=closing)
plt.subplot(221)
plt.title('Original')
plt.axis('off')
plt.imshow(rose)
plt.subplot(222)
plt.title('Red')
plt.axis('off')
plt.imshow(red)
plt.subplot(223)
plt.title('Mask')
plt.axis('off')
plt.imshow(closing, cmap='gray')
plt.subplot(224)
plt.title('Masked')
plt.axis('off')
plt.imshow(masked)
```
Nice, isn't it? :)
## Contours
There is another basic Image processing concept to review: Contours. Contours are simply the outlines of an object in an image. If the image is simple enough, we might be able to get away with using the grayscale image as an input. if not, we will need to apply some transformation and artificial vision techniques to obtain a properly objetive image as we made with our rose masking.
Let's define an initial goal, we want to know in a tetris game how many pieces we need to play at time:
<img src="teach_images/tetris_goal.png">
For the example above, there are three pieces that must be played properly to keep our 'life' in the game. Let's begin.
```
# Loads the image
tetris = cv2.imread("teach_images/tetris_1.png")
gray = cv2.imread("teach_images/tetris_1.png", 0)
# Applies a threshold
(T, thresh_tetris) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)
# Applies a closing
kernel = np.ones((5,5),np.uint8)
closing = cv2.morphologyEx(thresh_tetris, cv2.MORPH_CLOSE, kernel)
# Find contours
cnts = cv2.findContours(closing.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
clone = gray.copy()
clone = cv2.cvtColor(clone, cv2.COLOR_GRAY2BGR)
# draw the contours
cv2.drawContours(clone, cnts, -1, (0, 0, 255), 2)
print("Found {} contours".format(len(cnts)))
# Shows initial results
plt.figure(figsize=(20, 20))
plt.subplot(231)
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(tetris, cv2.COLOR_BGR2RGB))
plt.subplot(232)
plt.title('Gray')
plt.axis('off')
plt.imshow(gray, cmap='gray')
plt.subplot(233)
plt.title('Thresholded')
plt.axis('off')
plt.imshow(thresh_tetris, cmap='gray')
plt.subplot(234)
plt.title('Closing')
plt.axis('off')
plt.imshow(closing, cmap='gray')
plt.subplot(235)
plt.title('Contours')
plt.axis('off')
plt.imshow(clone)
```
Note that we have drawn all the contours of our image, but we have found five of them. How can we know which of them are Tetris pieces?
Let's look at some more concepts:
### Area:
The number of pixels that reside inside the contour outline. We expect the Tetris pieces to fall within a fixed minimum and maximum area.
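Under the hood, `cv2.contourArea` computes the area of the polygon defined by the contour points, essentially the shoelace formula. A pure-Python sketch of that formula, applied to a hypothetical 10x10 square:

```python
def shoelace_area(points):
    """Polygon area from an ordered list of (x, y) vertices (shoelace formula)."""
    n = len(points)
    acc = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(shoelace_area(square))  # 100.0
```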
### Aspect Ratio
The actual definition of a contour's aspect ratio is as follows:
```
aspect ratio = bounding-box width / bounding-box height
```
We will expect the aspect ratio to fall in one of these ranges:
```[0.2, 0.4] and [0.6, 1.8]```
Let's add a few lines to check the area and the aspect ratio
```
# Loads the image
tetris = cv2.imread("teach_images/tetris_1.png")
gray = cv2.imread("teach_images/tetris_1.png", 0)
# Applies a threshold
(T, thresh_tetris) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)
# Applies a closing
kernel = np.ones((5,5),np.uint8)
closing = cv2.morphologyEx(thresh_tetris, cv2.MORPH_CLOSE, kernel)
# Find contours
contours = cv2.findContours(closing.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
clone = gray.copy()
clone = cv2.cvtColor(clone, cv2.COLOR_GRAY2BGR)
cnts = []
for (i, c) in enumerate(contours):
area = cv2.contourArea(c)
(x, y, w, h) = cv2.boundingRect(c)
checked = 0
aspect_ratio = w / float(h)
checked = (area>400 and area<700)
checked = checked and ((aspect_ratio>=0.2 and aspect_ratio<=0.4) or (aspect_ratio>=0.6 and aspect_ratio<=1.8))
print("contour number: {} area: {} aspect_ratio: {}".format(i, area, aspect_ratio))
print(checked)
if checked:
cnts.append(c)
# draw the contours
cv2.drawContours(clone, cnts, -1, (0, 0, 255), 2)
# Shows initial results
plt.figure(figsize=(20, 20))
plt.subplot(231)
plt.title('Original')
plt.axis('off')
plt.imshow(cv2.cvtColor(tetris, cv2.COLOR_BGR2RGB))
plt.subplot(232)
plt.title('Gray')
plt.axis('off')
plt.imshow(gray, cmap='gray')
plt.subplot(233)
plt.title('Thresholded')
plt.axis('off')
plt.imshow(thresh_tetris, cmap='gray')
plt.subplot(234)
plt.title('Closing')
plt.axis('off')
plt.imshow(closing, cmap='gray')
plt.subplot(235)
plt.title('Contours')
plt.axis('off')
plt.imshow(clone)
# Prints the result
print("The number of pieces to play is {}".format(len(cnts)))
```
## Image Comparing
Finally, let's apply everything we have seen to compare two images.
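The idea behind the comparison below is simple: XOR the two equal-sized grayscale images pixel by pixel; the result is nonzero exactly where they differ. A pure-Python sketch on two tiny, hypothetical 2x2 "images":

```python
img_a = [[10, 20], [30, 40]]
img_b = [[10, 21], [30, 40]]

# Count pixels whose XOR is nonzero, i.e. pixels that differ
diff_count = sum(
    1
    for row_a, row_b in zip(img_a, img_b)
    for pa, pb in zip(row_a, row_b)
    if pa ^ pb
)
print(diff_count)  # 1 differing pixel
```

`np.bitwise_xor` plus `cv2.countNonZero` in the cell below do exactly this, just vectorized over whole images.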
```
# Image compare script
template = cv2.imread("teach_images/pycon_template.png")
testing = cv2.imread("teach_images/pycon_testing.png")
template_gray = cv2.cvtColor(template, cv2.COLOR_RGB2GRAY)
testing_gray = cv2.cvtColor(testing, cv2.COLOR_RGB2GRAY)
# Applies a bitwise XOR operation: the result is nonzero wherever the two images differ
xor = np.bitwise_xor(template_gray, testing_gray)
ones = cv2.countNonZero(xor)
print(ones)
# Let's paint the differences, if there is any
if ones > 0:
result = cv2.absdiff(template, testing)
gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 1, 255, 0)
# Find contours
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]
cv2.drawContours(result, cnts, -1, (0, 0, 255), 1)
plt.figure(figsize=(30, 30))
plt.subplot(221)
plt.title('Template')
plt.axis('off')
plt.imshow(cv2.cvtColor(template, cv2.COLOR_BGR2RGB))
plt.subplot(222)
plt.title('Testing')
plt.axis('off')
plt.imshow(cv2.cvtColor(testing, cv2.COLOR_BGR2RGB))
plt.subplot(223)
plt.title('Thresholded')
plt.axis('off')
plt.imshow(thresh, cmap='gray')
plt.subplot(224)
plt.title('Contours')
plt.axis('off')
plt.imshow(result, cmap='gray')
```
Nice!!! You have drawn the differences between the images.
Hope you have enjoyed this lesson :)
All the best,
José García
<a href="https://colab.research.google.com/github/skredenmathias/DS-Unit-2-Applied-Modeling/blob/master/module4/assignment_applied_modeling_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 3, Module 1*
---
# Define ML problems
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your decisions.
- [ ] Choose your target. Which column in your tabular dataset will you predict?
- [ ] Is your problem regression or classification?
- [ ] How is your target distributed?
- Classification: How many classes? Are the classes imbalanced?
- Regression: Is the target right-skewed? If so, you may want to log transform the target.
- [ ] Choose your evaluation metric(s).
- Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
- Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?
- [ ] Choose which observations you will use to train, validate, and test your model.
- Are some observations outliers? Will you exclude them?
- Will you do a random split or a time-based split?
- [ ] Begin to clean and explore your data.
- [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information?
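For instance (with hypothetical labels), a quick way to sanity-check whether plain accuracy is usable is to compute the majority-class baseline: the accuracy you would get by always predicting the most frequent class:

```python
from collections import Counter

labels = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical win/loss outcomes
majority_freq = Counter(labels).most_common(1)[0][1] / len(labels)
print(majority_freq)  # 0.625 -> within [0.5, 0.7), so accuracy is a reasonable metric
```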
If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.
```
# I will use the worlds_2019 dataset for now.
import pandas as pd
!git clone https://github.com/skredenmathias/DS-Unit-1-Build.git
path = '/content/DS-Unit-1-Build/'
worlds_2019 = pd.read_excel(path+'2019-summer-match-data-OraclesElixir-2019-11-10.xlsx')
print(worlds_2019.shape)
worlds_2019.head()
```
Choose your target. Which column in your tabular dataset will you predict?
```
df = worlds_2019
target = df['result']
# Initially I seek if I can predict if a team will win or lose.
# The goal is to see how much variance is explained by each factor.
# / see how much certain factors contribute to the result.
# From here I might look at questions such as:
# How do win conditions change per patch?
# What are the win percentages for red / blue side? Are different objectives
# more important to one side?
# See how different positions have different degrees of impact based on teams.
#
```
Is your problem regression or classification?
```
# Classification.
```
How is your target distributed?
Classification: How many classes? Are the classes imbalanced?
```
target.value_counts(normalize=True)
```
Choose your evaluation metric(s).
Classification: Is your majority class frequency >= 50% and < 70% ?
If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading.
What evaluation metric will you choose, in addition to or instead of accuracy?
```
# Accuracy. What others could I choose?
```
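Besides accuracy, the usual candidates are precision, recall, F1, and ROC AUC. A pure-Python sketch of precision and recall on hypothetical win/loss predictions:

```python
y_true = [1, 0, 1, 1, 0, 1]  # hypothetical ground truth
y_pred = [1, 0, 0, 1, 1, 1]  # hypothetical model output

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of predicted wins, how many were real
recall = tp / (tp + fn)     # of real wins, how many were caught
print(precision, recall)    # 0.75 0.75
```

`sklearn.metrics.precision_score` and `recall_score` compute the same quantities directly from the label arrays.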
Choose which observations you will use to train, validate, and test your model.
Are some observations outliers? Will you exclude them?
Will you do a random split or a time-based split?
```
# Depends on ceteris paribus, other things equal:
# I might have to keep it on the same patch.
# Feature importances will differ across regions & tournaments & patches.
# Gamelength might be a leak?
# Should I also make a separate df with all 5 players grouped as a team w/
# most of the stats retained?
# Outliers:
# Gamelength beyond 50 minutes. I can filter these out if needed.
# Leaks / uninteresting columns:
# gameid, url, (league), (split), date, week, game, (patchno), playerid,
# (position), (team), gamelength?, total gold?, firsttothreetowers?,
# teamtowerkills?, opptowerkills?,
df.head()
df.columns
```
Begin to clean and explore your data.
```
# Lots of cleaning done in the unit 1 build notebook.
# Will focus on exploration here for now.
df['gamelength'].plot.hist() # We see a small outlier here.
df['gamelength'].describe()
import seaborn as sns
# Note: Will be other outliers in the big dataset.
# Couldn't upload full dataset to Git, 80mb is too big.
# Why is it so big, it's just a text file?
sns.distplot(df['gamelength']);
```
# Fast first model
```
from sklearn.model_selection import train_test_split
train, val = train_test_split(df, test_size=.25)
!pip install category_encoders
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
target = 'result'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
# X_test = test.drop(columns=target)
# y_test = test[target]
X_train.shape, X_val.shape
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
# Get validation accuracy
y_pred = pipeline.predict(X_val)
print('Validation Accuracy:', pipeline.score(X_val, y_val)) # We've got leakage!
print('X_train shape before encoding', X_train.shape)
encoder = pipeline.named_steps['ordinalencoder']
encoded = encoder.transform(X_train)
print('X_train shape after encoding', encoded.shape)
# Plot feature importances to find leak
%matplotlib inline
import matplotlib.pyplot as plt
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, encoded.columns)
# Plot top n feature importances
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey')
```
# XGBoost
```
from xgboost import XGBClassifier
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
```
# Partial dependence plots
```
import matplotlib.pyplot as plt
# plt.rcParams['figure.dpi'] = 72
!pip install pdpbox
!pip install shap
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'teamtowerkills'
isolated = pdp_isolate(
model=pipeline,
dataset=X_val,
model_features=X_val.columns,
feature=feature,
num_grid_points=50
)
pdp_plot(isolated, feature_name=feature, plot_lines=True,
frac_to_plot=0.1) # leakage
plt.xlim(5, 12);
```
# Permutation importances
```
!pip install eli5
import eli5
from eli5.sklearn import PermutationImportance
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=10,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
eli5.show_weights(
permuter,
top=None,
feature_names=feature_names
)
```
# Dropping 'teamtowerkills' & 'opptowerkills'
```
def wrangle(X):
X = X.copy()
# Drop teamtowerkills & opptowerkills
model_breakers = ['teamtowerkills','opptowerkills']
X = X.drop(columns = model_breakers)
return X
# train = wrangle(train)
val = wrangle(val)
val.columns
train.shape, val.shape
```
# Running XGBoost again
```
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
```
# Feature importances, again
```
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=10,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
eli5.show_weights(
permuter,
top=None,
feature_names=feature_names
)
```
# Dropping many more columns, then repeat
```
# Leaks / uninteresting columns:
# gameid, url, (league), (split), date, week, game, (patchno), playerid,
# (position), (team), gamelength?, total gold?, firsttothreetowers?,
# teamtowerkills?, opptowerkills?,
def wrangle2(X):
X = X.copy()
# Drops
low_importance = ['gameid', 'url', 'league', 'split', 'date', 'week',
'patchno', 'position', 'gamelength']
X = X.drop(columns = low_importance)
return X
# train = wrangle(train)
train = wrangle2(train)
val = wrangle2(val)
train.columns
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
```
# Feature importances, iterations
```
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=10,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
eli5.show_weights(
permuter,
top=None,
feature_names=feature_names
)
```
# Shapley plot
```
row = X_val.iloc[[0]]
y_val.iloc[[0]]
row #model.predict(row)
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
base_value = explainer.expected_value,
shap_values = shap_values,
features=row
)
```
# Using importances for feature selection
```
X_train.shape
minimum_importance = 0
mask = permuter.feature_importances_ > minimum_importance
```
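The boolean mask above can then keep only the features with positive permutation importance (e.g. via `X_train[X_train.columns[mask]]`). The selection logic itself is just boolean filtering; a minimal sketch with hypothetical feature names and importances:

```python
feature_names = ["kills", "deaths", "totalgold", "week"]  # hypothetical
importances = [0.12, 0.08, 0.30, -0.01]                   # hypothetical

# Keep only features whose permutation importance is above the minimum
minimum_importance = 0
mask = [imp > minimum_importance for imp in importances]
selected = [name for name, keep in zip(feature_names, mask) if keep]
print(selected)  # ['kills', 'deaths', 'totalgold']
```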
<a href="https://colab.research.google.com/github/chrisart10/DeepLearning.ai-Summary/blob/master/pipeline3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Define the image dimension
```
input_shape =300
```
# Import models via transfer learning
```
import os
from tensorflow.keras import layers
from tensorflow.keras import Model
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
from tensorflow.keras.applications.inception_v3 import InceptionV3
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape = (input_shape, input_shape, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
for layer in pre_trained_model.layers:
layer.trainable = False
# pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
```
# Final learning layers
```
#from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.optimizers import Adam
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final softmax layer for the 6 classes
x = layers.Dense(6, activation='softmax')(x)
model = Model( pre_trained_model.input, x)
model.compile(optimizer = Adam(lr=0.0001),
loss = 'categorical_crossentropy',
metrics = ['accuracy'])
## Rocket science
#model.summary()
```
# Import the dataset from Kaggle
```
! pip install -q kaggle
from google.colab import files
files.upload()
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
```
# Paste the dataset's API command
```
!kaggle datasets download -d sriramr/fruits-fresh-and-rotten-for-classification
#!kaggle datasets download -d kmader/skin-cancer-mnist-ham10000
```
# Extract the zip
```
import os
import zipfile
#local_zip = '/content/skin-cancer-mnist-ham10000.zip'
local_zip = "/content/fruits-fresh-and-rotten-for-classification.zip"
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
```
# Prepare the dataset and set up data augmentation
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Define our example directories and files
base_dir = '/tmp/dataset/'
train_dir = os.path.join( base_dir, 'train')
validation_dir = os.path.join( base_dir, 'test')
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255.,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size = 20,
class_mode = 'categorical',
target_size = (input_shape, input_shape))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory( validation_dir,
batch_size = 20,
class_mode = 'categorical',
target_size = (input_shape, input_shape))
```
# Callback with early stopping
```
import tensorflow as tf
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy') >= 0.98):
print("\nReached 98% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
```
# Train the model
```
history = model.fit(
train_generator,
validation_data = validation_generator,
steps_per_epoch = 32,
epochs = 100,
callbacks=[callbacks],
validation_steps = 32,
verbose = 1)
```
# Visualize the learning curves
```
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
import matplotlib.pyplot as plt
acc = history.history[ 'accuracy' ]
val_acc = history.history[ 'val_accuracy' ]
loss = history.history[ 'loss' ]
val_loss = history.history['val_loss' ]
epochs = range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title ('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot ( epochs, loss, 'r', label='Training loss')
plt.plot ( epochs, val_loss, 'b', label='Validation loss')
plt.title ('Training and validation loss' )
plt.legend(loc=0)
plt.figure()
```
# Test the model
```
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(input_shape, input_shape))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=32)
print(classes[0])
# if classes[0]>0.5:
# print(fn + " is a dog")
# else:
# print(fn + " is a cat")
```
# Saving the model, option 1
```
import time
path = '/tmp/simple_keras_model'
model.save(path)
new_model = tf.keras.models.load_model('/tmp/saved_models/1612553978/')
# Check its architecture
#new_model.summary()
```
# Saving the model, option 2
```
# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('/tmp/saved_models/versions/my_model1.h5')
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('/tmp/saved_models/versions/my_model1.h5')
# Show the model architecture
#new_model.summary()
```
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Dictionaries in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about dictionaries in the Python programming language. By the end of this lab, you'll know the basic dictionary operations in Python, including what a dictionary is and the operations you can perform on it.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#Dic">Dictionaries</a>
<ul>
<li><a href="#content">What are Dictionaries?</a></li>
<li><a href="#key">Keys</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Dictionaries</a>
</li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="Dic">Dictionaries</h2>
<h3 id="content">What are Dictionaries?</h3>
A dictionary consists of keys and values. It is helpful to compare a dictionary to a list: instead of the numerical indexes used by a list, a dictionary has keys. These keys are used to access the values within the dictionary.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsList.png" width="650" />
An example of a Dictionary <code>Dict</code>:
```
# Create the dictionary
Dict = {"key1": 1, "key2": "2", "key3": [3, 3, 3], "key4": (4, 4, 4), ('key5'): 5, (0, 1): 6}
Dict
```
The keys can be strings:
```
# Access to the value by the key
Dict["key1"]
```
Keys can also be any immutable object such as a tuple:
```
# Access to the value by the key
Dict[(0, 1)]
```
Each key is separated from its value by a colon "<code>:</code>". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "<code>{}</code>".
```
# Create a sample dictionary
release_year_dict = {"Thriller": "1982", "Back in Black": "1980", \
"The Dark Side of the Moon": "1973", "The Bodyguard": "1992", \
"Bat Out of Hell": "1977", "Their Greatest Hits (1971-1975)": "1976", \
"Saturday Night Fever": "1977", "Rumours": "1977"}
release_year_dict
```
In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be one single value; however, multiple keys can hold the same value. Keys must be immutable objects such as strings, numbers, or tuples, but values can be any data type.
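For example, two keys may share the same value, but a mutable object such as a list cannot be used as a key:

```python
grades = {"alice": "A", "bob": "A"}  # two keys, same value: fine
print(grades["alice"] == grades["bob"])  # True

try:
    bad = {["a", "list"]: 1}  # lists are mutable, hence unhashable
except TypeError as e:
    print("TypeError:", e)
```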
It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsStructure.png" width="650" />
<h3 id="key">Keys</h3>
You can retrieve the values based on the names:
```
# Get value by keys
release_year_dict['Thriller']
```
This corresponds to:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyOne.png" width="500" />
Similarly for <b>The Bodyguard</b>:
```
# Get value by key
release_year_dict['The Bodyguard']
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyTwo.png" width="500" />
Now let's retrieve the keys of the dictionary using the method <code>keys()</code>:
```
# Get all the keys in dictionary
release_year_dict.keys()
```
You can retrieve the values using the method <code>values()</code>:
```
# Get all the values in dictionary
release_year_dict.values()
```
We can add an entry:
```
# Append value with key into dictionary
release_year_dict['Graduation'] = '2007'
release_year_dict
```
We can delete an entry:
```
# Delete entries by key
del(release_year_dict['Thriller'])
del(release_year_dict['Graduation'])
release_year_dict
```
We can verify if an element is in the dictionary:
```
# Verify the key is in the dictionary
'The Bodyguard' in release_year_dict
```
<hr>
<h2 id="quiz">Quiz on Dictionaries</h2>
<b>You will need this dictionary for the next two questions:</b>
```
# Question sample dictionary
soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic
```
a) In the dictionary <code>soundtrack_dic</code>, what are the keys?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.keys() # The Keys "The Bodyguard" and "Saturday Night Fever"
-->
b) In the dictionary <code>soundtrack_dic</code>, what are the values?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.values() # The values are "1992" and "1977"
-->
<hr>
<b>You will need this dictionary for the following questions:</b>
The albums <b>Back in Black</b>, <b>The Bodyguard</b>, and <b>Thriller</b> have the following music recording sales, in millions: 50, 50, and 65 respectively:
a) Create a dictionary <code>album_sales_dict</code> where the keys are the album name and the sales in millions are the values.
```
# Write your code below and press Shift+Enter to execute
album_sales_dict = {"Back in Black":50,"The Bodyguard":50,"Thriller":65}
album_sales_dict
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict = {"The Bodyguard":50, "Back in Black":50, "Thriller":65}
-->
b) Use the dictionary to find the total sales of <b>Thriller</b>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict["Thriller"]
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict["Thriller"]
-->
c) Find the names of the albums from the dictionary using the method <code>keys</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.keys()
-->
d) Find the names of the recording sales from the dictionary using the method <code>values</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.values()
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
# KNN
Importing required python modules
---------------------------------
```
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import normalize,scale
from sklearn.model_selection import cross_val_score
import numpy as np
import pandas as pd
```
The following libraries have been used :
* **Pandas** : pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
* **Numpy** : NumPy is the fundamental package for scientific computing with Python.
* **Matplotlib** : matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments.
* **Sklearn** : It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
Retrieving the dataset
----------------------
```
data = pd.read_csv('heart.csv', header=None)
df = pd.DataFrame(data)
x = df.iloc[:, 0:5]
x = x.drop(x.columns[1:3], axis=1)
x = pd.DataFrame(scale(x))
y = df.iloc[:, 13]
y = y-1
```
1. Dataset is imported.
2. The imported dataset is converted into a pandas DataFrame.
3. Attributes(x) and labels(y) are extracted.
```
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.4)
```
The data is split into training and test sets, with 40% held out for testing (test_size=0.4).
Plotting the dataset
--------------------
```
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
ax1.scatter(x[1],x[2], c=y)
ax1.set_title("Original Data")
```
Matplotlib is used to plot the loaded pandas DataFrame.
Learning from the data
----------------------
```
model = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(model, x, y, scoring='accuracy', cv=10)
print ("10-Fold Accuracy : ", scores.mean()*100)
model.fit(x_train,y_train)
print ("Testing Accuracy : ",model.score(x_test, y_test)*100)
predicted = model.predict(x)
```
Here **model** is an instance of the KNeighborsClassifier class from sklearn.neighbors. 10-fold cross-validation is used to verify the results.
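As a side note, the fixed choice of `n_neighbors=5` can itself be tuned with the same `cross_val_score` machinery. The sketch below runs on a synthetic dataset (not the heart-disease data) and scans odd values of k:

```python
# Sketch: selecting k for KNN by 10-fold cross-validation.
# The dataset here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

scores_by_k = {}
for k in range(1, 16, 2):
    model = KNeighborsClassifier(n_neighbors=k)
    scores_by_k[k] = cross_val_score(model, X, y, scoring='accuracy', cv=10).mean()

best_k = max(scores_by_k, key=scores_by_k.get)
print("best k:", best_k)
```

Odd values of k avoid ties in the binary vote; the best k on real data should be re-checked the same way.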
```
ax2 = fig.add_subplot(1,2,2)
ax2.scatter(x[1],x[2], c=predicted)
ax2.set_title("KNearestNeighbours")
```
The learned data is plotted.
```
cm = metrics.confusion_matrix(y, predicted)
print (cm/len(y))
print (metrics.classification_report(y, predicted))
plt.show()
```
The confusion matrix is computed to evaluate the accuracy of the classification, and a text report showing the main classification metrics is printed.
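For reference, here is how those two `metrics` calls behave on a tiny set of made-up labels (not the heart-disease predictions); rows of the confusion matrix are true classes and columns are predicted classes:

```python
from sklearn import metrics

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

cm = metrics.confusion_matrix(y_true, y_pred)
print(cm)  # rows = true class, columns = predicted class
print(metrics.classification_report(y_true, y_pred))
```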
| github_jupyter |
# Jupyter Superpower - Extend SQL analysis with Python
> Making collaboration with notebooks possible and sharing polished SQL analyses.
- toc: true
- badges: true
- comments: true
- author: noklam
- categories: ["python", "reviewnb", "sql"]
- hide: false
- canonical_url: https://blog.reviewnb.com/jupyter-sql-notebook/
If you have ever written SQL queries to extract data from a database, chances are you are familiar with an IDE like the screenshot below. The IDE offers features like auto-completion, visualize the query output, display the table schema and the ER diagram. Whenever you need to write a query, this is your go-to tool. However, you may want to add `Jupyter Notebook` into your toolkit. It improves my productivity by complementing some missing features in IDE.
```
#collapse-hide
# !pip install ipython_sql
%load_ext sql
%config SqlMagic.displaycon = False
%config SqlMagic.feedback = False
# Download the file from https://github.com/cwoodruff/ChinookDatabase/blob/master/Scripts/Chinook_Sqlite.sqlite
%sql sqlite:///sales.sqlite.db
from pathlib import Path
DATA_DIR = Path('../_demo/sql_notebook')
%%sql
select ProductId, Sum(Unit) from Sales group by ProductId;
```
## Notebook as a self-contained report
As a data scientist or data analyst, you write SQL queries for ad-hoc analyses all the time. After getting the right data, you make nice-looking charts, put them in a PowerPoint, and you are ready to present your findings. Unlike a well-defined ETL job, you are exploring the data and testing your hypotheses all the time. You make assumptions that are often wrong, but you only realize it a few weeks later, when all you have left is a CSV that you cannot recall how you generated in the first place.
Data is not stationary, so why should your analysis be? I have seen many screenshots and fragmented scripts flying around in organizations. As a data scientist, I learned that you need to be cautious about what you hear. Don't trust people's words easily; verify the result! To achieve that, we need to know exactly how the data was extracted and what assumptions were made. Unfortunately, this information is usually not available. As a result, people redo the same analysis over and over. You would be surprised how common this is in organizations. In fact, numbers often do not align because every department has its own definition for a given metric, the definitions are not shared across the organization, and verbal communication is inaccurate and error-prone. It would be really nice if anyone in the organization could reproduce the same result with a single click. Jupyter Notebook can achieve that reproducibility and keep your entire analysis (documentation, data, and code) in the same place.
## Notebook as an extension of IDE
Writing SQL queries in a notebook gives you extra flexibility of a full programming language alongside SQL.
For example:
* Write complex processing logic that is not easy in pure SQL
* Create visualizations directly from SQL results without exporting to an intermediate CSV
For instance, you can pipe your `SQL` query with `pandas` and then make a plot. It allows you to generate analysis with richer content. If you find bugs in your code, you can modify the code and re-run the analysis. This reduces the hustles to reproduce an analysis greatly. In contrast, if your analysis is reading data from an anonymous exported CSV, it is almost guaranteed that the definition of the data will be lost. No one will be able to reproduce the dataset.
You can make use of the `ipython_sql` library to make queries in a notebook. To do this, you need to use the **magic** function with the inline magic `%` or cell magic `%%`.
```
sales = %sql SELECT * from sales LIMIT 3
sales
```
To make it fancier, you can even parameterize your query with variables. Tools like [papermill](https://www.bing.com/search?q=github+paramter+notebook&cvid=5b17218ec803438fb1ca41212d53d90a&FORM=ANAB01&PC=U531) allow you to parameterize your notebook. If you execute the notebook regularly with a scheduler, you get an updated dashboard. To reference a Python variable, the `$` sign is used.
```
table = "sales"
query = f"SELECT * from {table} LIMIT 3"
sales = %sql $query
sales
```
With a little bit of python code, you can make a nice plot to summarize your finding. You can even make an interactive plot if you want. This is a very powerful way to extend your analysis.
```
import seaborn as sns
sales = %sql SELECT * FROM SALES
sales_df = sales.DataFrame()
sales_df = sales_df.groupby('ProductId', as_index=False).sum()
ax = sns.barplot(x='ProductId', y='Unit', data=sales_df)
ax.set_title('Sales by ProductId');
```
## Notebook as a collaboration tool
Jupyter Notebook is flexible and it fits extremely well with exploratory data analysis. To share to a non-coder, you can share the notebook or export it as an HTML file. They can read the report or any cached executed result. If they need to verify the data or add some extra plots, they can do it easily themselves.
It is true that Jupyter Notebook has an infamous reputation: it is not friendly to version control, and it is hard to collaborate on notebooks. Luckily, there are now efforts that make collaboration on notebooks a lot easier.
What I did not show you here is that the table has an `isDeleted` column. Some of the records are invalid and we should exclude them. In reality, this happens frequently when you are dealing with hundreds of unfamiliar tables. These tables are made for applications and transactions; they do not have analytics in mind. Data analytics is usually an afterthought. Therefore, you need to consult the SME or the maintainer of those tables. It takes many iterations to get the correct data that can produce useful insight.
With [ReviewNB](https://www.reviewnb.com/), you can publish your result and invite a domain expert to review your analysis. This is where notebooks shine; this kind of workflow is not possible with just a SQL script or a screenshot of your finding. The notebook itself is a useful documentation and collaboration tool.
### Step 1 - Review PR online

You can view your notebook and add comments on a particular cell on [ReviewNB](https://www.reviewnb.com/). This lowers the technical barrier, as your analysts do not have to understand Git. They can review changes and make comments on the web without needing to pull code at all. As soon as an analyst makes a suggestion, you can make changes.
### Step 2 - Review Changes

Once you have made changes to the notebook, you can review them side by side. This is far from trivial to do on your local machine: without ReviewNB, you have to pull both notebooks separately, and since Git tracks line-level changes, the diff consists of a lot of confusing noise. It would also be impossible to view changes to a chart with git alone.
### Step 3 - Resolve Discussion

Once the changes are reviewed, you can resolve the discussion and share your insight with the team. You can publish the notebook to an internal sharing platform like [knowledge-repo](https://github.com/airbnb/knowledge-repo) to organize the analysis.
I hope this convinces you that the notebook is a good choice for ad-hoc analytics. It is possible to collaborate on notebooks with the proper software in place. Regardless of whether you use notebooks or not, you should try your best to document the process. Let's make more reproducible analyses!
| github_jupyter |
<img src="../Images/Level1Beginner.png" alt="Beginner" width="128" height="128" align="right">
## Tuples in Python
A tuple is an **immutable** sequence of elements of any type.
It behaves like a list whose individual elements cannot be modified.
The debate between lists and tuples relates to how the runtime manages memory and to the need for immutable values that can serve as reliable keys in sets and maps.
A tuple is created with parentheses enclosing comma-separated elements.
The elements can also be written without parentheses, as long as a trailing comma is included.
The **len()** function applied to a tuple returns the number of elements it contains.
```
sample_tuple = (1, 2, 3, 7, 8, 5)
print("Sample tuple:", sample_tuple)
print("Type:", type(sample_tuple), "\n")

empty_tuple = ()
print("Empty tuple:", empty_tuple, "has", len(empty_tuple), "elements")
print("Type:", type(empty_tuple), "\n")

one_element_tuple = ("single element", )
print("Single-element tuple:", one_element_tuple)
print("Type:", type(one_element_tuple), "\n")

another_element_tuple = 1,
print("Another single-element tuple:", another_element_tuple)
print("Type:", type(another_element_tuple), "\n")
```
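The immutability mentioned at the start can be sketched quickly: in-place assignment raises a TypeError, so "changing" a tuple really means building a new one.

```python
point = (1, 2, 3)
try:
    point[0] = 99          # tuples do not support item assignment
except TypeError as err:
    print("TypeError:", err)

moved = (99,) + point[1:]  # build a new tuple instead
print(moved)               # → (99, 2, 3)
```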
### Concatenation and repetition
The "**+**" operator (concatenation) and the "**\***" operator (repetition) can be used with tuples.
```
one_element_tuple = ("single element", )
another_element_tuple = 1,

concatenated_tuple = one_element_tuple + sample_tuple
print("Concatenated tuple:", concatenated_tuple,
      "has", len(concatenated_tuple), "elements")
print("Type:", type(concatenated_tuple), "\n")

multiplied_tuple = one_element_tuple * 3
print("Repeated tuple:", multiplied_tuple,
      "has", len(multiplied_tuple), "elements")
print("Type:", type(multiplied_tuple), "\n")
```
### Converting tuples to and from lists
The **list(...)** and **tuple(...)** **constructors** create a list or a tuple from any sequence passed as an argument.
```
sample_tuple = (1, 2, 3, 7, 8, 5)
print("Sample tuple:", sample_tuple)
print("Type:", type(sample_tuple), "\n")

sample_list = list(sample_tuple)
print("Sample list:", sample_list)
print("Type:", type(sample_list), "\n")

another_tuple = tuple(sample_list)
print("Another tuple:", another_tuple)
print("Type:", type(another_tuple), "\n")
```
<img src="../Images/Level2Intermediate.png" alt="Intermediate" width="128" height="128" align="right">
### Packing and unpacking
The assignment forms known as **packing** and **unpacking** allow assigning multiple values to a tuple and assigning a tuple to multiple variables.
```
print("packing ...")
one_tuple = True, 145, "Python"
print("one_tuple", one_tuple, "type", type(one_tuple), "has", len(one_tuple), "elements")

print("\nunpacking ...")
boolean_var, integer_var, string_var = one_tuple
print("boolean_var", boolean_var, "\ninteger_var", integer_var, "\nstring_var", string_var)
```
| github_jupyter |
```
#import pandas
import pandas as pd
import os
#Load files
School_info_path=os.path.join("Resources","schools_complete.csv")
Student_info_path=os.path.join("Resources","students_complete.csv")
#Read school files
school_data_df=pd.read_csv(School_info_path)
#Read the student info
student_data_df=pd.read_csv(Student_info_path)
#Determine if there are any missing values in the school data
school_data_df.count()
#Determine data types for the school dataframe
school_data_df.dtypes
#Determine if there are any missing values in student data
student_data_df.count()
#Determine student data types
student_data_df.dtypes
#Clean student names
prefixes_suffixes=["Dr. ", "Mr. ","Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"]
for word in prefixes_suffixes:
student_data_df["student_name"]=student_data_df["student_name"].str.replace(word,"")
student_data_df.head(10)
#Merge the school and student dfs
school_data_complete_df=pd.merge(student_data_df,school_data_df, on="school_name")
#Find Student Count
student_count=school_data_complete_df["Student ID"].count()
#Find school count
school_count=len(school_data_complete_df["school_name"].unique())
#Find total budget of district
total_budget=school_data_df["budget"].sum()
#Calculate reading average
average_reading_score=school_data_complete_df["reading_score"].mean()
#Calculate math average
average_math_score=school_data_complete_df["math_score"].mean()
#Calculate passing math students
passing_math=school_data_complete_df[school_data_complete_df["math_score"]>=70]
passing_math_count=passing_math["student_name"].count()
#Calculate passing reading student
passing_reading=school_data_complete_df[school_data_complete_df["reading_score"]>=70]
passing_reading_count=passing_reading["student_name"].count()
#calculate % of passing math
passing_math_percentage=passing_math_count/float(student_count) * 100
#calculate % of passing reading
passing_reading_percentage=passing_reading_count/float(student_count) * 100
#Get the overall passing student count
passing_math_reading=school_data_complete_df[(school_data_complete_df["math_score"]>=70) & (school_data_complete_df["reading_score"]>=70)]
#Calculate overall %
passing_math_reading_count=passing_math_reading["student_name"].count()
overall_passing_percentage=passing_math_reading_count/float(student_count) * 100
#Add metrics to a df
district_summary_df=pd.DataFrame([{
"Total Schools":school_count,
"Total Students":student_count,
"Total Budget" :total_budget,
"Average Math Score":average_math_score,
"Average Reading Score":average_reading_score,
"% Passing Math":passing_math_percentage,
"% Passing Reading":passing_reading_percentage,
"% Overall Passing":overall_passing_percentage}])
#Format the columns
district_summary_df["Total Students"]=district_summary_df["Total Students"].map("{:,}".format)
district_summary_df["Total Budget"]=district_summary_df["Total Budget"].map("${:,.2f}".format)
district_summary_df["Average Reading Score"]=district_summary_df["Average Reading Score"].map("{:.1f}".format)
district_summary_df["Average Math Score"]=district_summary_df["Average Math Score"].map("{:.1f}".format)
district_summary_df["% Passing Math"]=district_summary_df["% Passing Math"].map("{:.1f}".format)
district_summary_df["% Passing Reading"]=district_summary_df["% Passing Reading"].map("{:.1f}".format)
district_summary_df["% Overall Passing"]=district_summary_df["% Overall Passing"].map("{:.1f}".format)
district_summary_df
per_school_types=school_data_df.set_index(["school_name"])["type"]
per_school_df=pd.DataFrame(per_school_types)
#Get the total students in each hs
per_school_counts=school_data_df.set_index(["school_name"])["size"]
#Calculate the budget per student in each hs
per_school_budget=school_data_df.set_index(["school_name"])["budget"]
per_school_capita=per_school_budget/per_school_counts
#Calculate math scores.
per_school_math=school_data_complete_df.groupby(["school_name"]).mean()["math_score"]
#Calculate reading scores
per_school_reading=school_data_complete_df.groupby(["school_name"]).mean()["reading_score"]
#Count the number of students passing math
school_passing_math=school_data_complete_df[(school_data_complete_df["math_score"]>=70)].groupby(["school_name"]).count()["student_name"]
#Count the number of students passing reading
school_passing_reading=school_data_complete_df[(school_data_complete_df["reading_score"]>=70)].groupby(["school_name"]).count()["student_name"]
#calculate the % of students passing math
per_school_passing_math_percentage=school_passing_math/per_school_counts * 100
#Calculate the % of students passing reading
per_school_passing_reading_percentage=school_passing_reading/per_school_counts * 100
#Calculate the students who passed math and reading
per_passing_math_reading=school_data_complete_df[(school_data_complete_df["math_score"]>=70) & (school_data_complete_df["reading_score"]>=70)].groupby("school_name").count()["student_name"]
per_school_overall_passing=per_passing_math_reading/per_school_counts * 100
#Add the per-school metrics to a data frame
per_school_summary_df=pd.DataFrame({
"School Type":per_school_types,
"Total Students":per_school_counts,
"Total School Budget":per_school_budget,
"Per Student Budget":per_school_capita,
"Average Math Score":per_school_math,
"Average Reading Score":per_school_reading,
"% Passing Math":per_school_passing_math_percentage,
"% Passing Reading":per_school_passing_reading_percentage,
"% Overall Passing":per_school_overall_passing})
#Formatting the df
per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format)
per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format)
# Display the data frame
per_school_summary_df
top_schools=per_school_summary_df.sort_values(["% Overall Passing"],ascending=False)
top_schools.head()
#Sort and show bottom five schools
bottom_schools=per_school_summary_df.sort_values(["% Overall Passing"],ascending=True)
bottom_schools.head()
#Get the students of each grade
ninth_graders=school_data_complete_df[(school_data_complete_df["grade"]=="9th")]
tenth_graders=school_data_complete_df[(school_data_complete_df["grade"]=="10th")]
eleventh_graders=school_data_complete_df[(school_data_complete_df["grade"]=="11th")]
twelfth_graders=school_data_complete_df[(school_data_complete_df["grade"]=="12th")]
#Get the average math scores for each grade
ninth_graders_math_scores=ninth_graders.groupby(["school_name"]).mean()["math_score"]
tenth_graders_math_scores=tenth_graders.groupby(["school_name"]).mean()["math_score"]
eleventh_graders_math_scores=eleventh_graders.groupby(["school_name"]).mean()["math_score"]
twelfth_graders_math_scores=twelfth_graders.groupby(["school_name"]).mean()["math_score"]
#Get the average reading scores for each grade
ninth_graders_reading_scores=ninth_graders.groupby(["school_name"]).mean()["reading_score"]
tenth_graders_reading_scores=tenth_graders.groupby(["school_name"]).mean()["reading_score"]
eleventh_graders_reading_scores=eleventh_graders.groupby(["school_name"]).mean()["reading_score"]
twelfth_graders_reading_scores=twelfth_graders.groupby(["school_name"]).mean()["reading_score"]
#Create a DataFrame with the average math score for each grade
average_math_grade={"9th Grade":ninth_graders_math_scores,"10th Grade":tenth_graders_math_scores,"11th Grade":eleventh_graders_math_scores,"12th Grade":twelfth_graders_math_scores}
math_scores_by_grade=pd.DataFrame(average_math_grade)
# Format each grade column.
math_scores_by_grade["9th Grade"] = math_scores_by_grade["9th Grade"].map("{:.1f}".format)
math_scores_by_grade["10th Grade"] = math_scores_by_grade["10th Grade"].map("{:.1f}".format)
math_scores_by_grade["11th Grade"] = math_scores_by_grade["11th Grade"].map("{:.1f}".format)
math_scores_by_grade["12th Grade"] = math_scores_by_grade["12th Grade"].map("{:.1f}".format)
# Remove the index name.
math_scores_by_grade.index.name = None
# Display the DataFrame.
math_scores_by_grade
#Create a DataFrame with the average reading score for each grade
average_reading_grade={"9th Grade":ninth_graders_reading_scores,"10th Grade":tenth_graders_reading_scores,"11th Grade":eleventh_graders_reading_scores,"12th Grade":twelfth_graders_reading_scores}
reading_scores_by_grade=pd.DataFrame(average_reading_grade)
reading_scores_by_grade.head()
reading_scores_by_grade["9th Grade"] = reading_scores_by_grade["9th Grade"].map("{:.1f}".format)
reading_scores_by_grade["10th Grade"] = reading_scores_by_grade["10th Grade"].map("{:.1f}".format)
reading_scores_by_grade["11th Grade"] = reading_scores_by_grade["11th Grade"].map("{:.1f}".format)
reading_scores_by_grade["12th Grade"] = reading_scores_by_grade["12th Grade"].map("{:.1f}".format)
# Remove the index name.
reading_scores_by_grade.index.name = None
# Display the DataFrame.
reading_scores_by_grade
#Get the info of the spending bins
per_school_capita.describe()
#write the bins
spending_bins=[0,585,630,645,675]
group_names=["<$584","$585-629","$630-644","$645-675"]
#Cut the budget per student into the bins
per_school_capita.groupby(pd.cut(per_school_capita,spending_bins,labels=group_names)).count()
#Categorize spending base on the bins
per_school_summary_df["Spending Ranges (Per Student)"]=pd.cut(per_school_capita,spending_bins,labels=group_names)
per_school_summary_df.index.name=None
per_school_summary_df.head()
#Calculate averages for the desired columns
spending_math_score=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Math Score"]
spending_reading_score=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Reading Score"]
spending_passing_math=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Math"]
spending_passing_reading=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Reading"]
spending_passing_overall=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Overall Passing"]
#Create the new DataFrame
spending_summary_df=pd.DataFrame({
"Average Math Score":spending_math_score,
"Average Reading Score":spending_reading_score,
"% Passing Math":spending_passing_math,
"% Passing Reading":spending_passing_reading,
"% Overall Passing":spending_passing_overall})
# Formatting
spending_summary_df["Average Math Score"] = spending_summary_df["Average Math Score"].map("{:.1f}".format)
spending_summary_df["Average Reading Score"] = spending_summary_df["Average Reading Score"].map("{:.1f}".format)
spending_summary_df["% Passing Math"] = spending_summary_df["% Passing Math"].map("{:.1f}".format)
spending_summary_df["% Passing Reading"] = spending_summary_df["% Passing Reading"].map("{:.1f}".format)
spending_summary_df["% Overall Passing"] = spending_summary_df["% Overall Passing"].map("{:.1f}".format)
spending_summary_df
#Divide the schools into population bins
population_bins=[0,1000,2000,5000]
population_names=["Small (<1000)","Medium (1000-2000)","Large (2000-5000)"]
#Cut the data into the population bins and add it to the DataFrame
per_school_summary_df["School Size"]=pd.cut(per_school_summary_df["Total Students"],population_bins,labels=population_names)
#Get the scores and average for each population size
size_math_score=per_school_summary_df.groupby(["School Size"]).mean()["Average Math Score"]
size_reading_score=per_school_summary_df.groupby(["School Size"]).mean()["Average Reading Score"]
size_passing_math=per_school_summary_df.groupby(["School Size"]).mean()["% Passing Math"]
size_passing_reading=per_school_summary_df.groupby(["School Size"]).mean()["% Passing Reading"]
size_overall_passing=per_school_summary_df.groupby(["School Size"]).mean()["% Overall Passing"]
#Add the info to a data Frame
size_summary_df=pd.DataFrame({
"Average Math Score":size_math_score,
"Average Reading Score":size_reading_score,
"% Passing Math":size_passing_math,
"% Passing Reading":size_passing_reading,
"% Overall Passing":size_overall_passing})
#Format the data frame
size_summary_df["Average Math Score"]=size_summary_df["Average Math Score"].map("{:.1f}".format)
size_summary_df["Average Reading Score"]=size_summary_df["Average Reading Score"].map("{:.1f}".format)
size_summary_df["% Passing Math"]=size_summary_df["% Passing Math"].map("{:.1f}".format)
size_summary_df["% Passing Reading"]=size_summary_df["% Passing Reading"].map("{:.1f}".format)
size_summary_df["% Overall Passing"]=size_summary_df["% Overall Passing"].map("{:.1f}".format)
size_summary_df
#Get the average and scores by school types
type_math_score=per_school_summary_df.groupby(["School Type"]).mean()["Average Math Score"]
type_reading_score=per_school_summary_df.groupby(["School Type"]).mean()["Average Reading Score"]
type_passing_math=per_school_summary_df.groupby(["School Type"]).mean()["% Passing Math"]
type_passing_reading=per_school_summary_df.groupby(["School Type"]).mean()["% Passing Reading"]
type_overall_passing=per_school_summary_df.groupby(["School Type"]).mean()["% Overall Passing"]
#Add the metrics into a Data Frame
type_summary_df=pd.DataFrame({
"Average Math Score":type_math_score,
"Average Reading Score":type_reading_score,
"% Passing Math":type_passing_math,
"% Passing Reading":type_passing_reading,
"% Overall Passing":type_overall_passing})
#Formatting
#Format the data frame
type_summary_df["Average Math Score"]=type_summary_df["Average Math Score"].map("{:.1f}".format)
type_summary_df["Average Reading Score"]=type_summary_df["Average Reading Score"].map("{:.1f}".format)
type_summary_df["% Passing Math"]=type_summary_df["% Passing Math"].map("{:.1f}".format)
type_summary_df["% Passing Reading"]=type_summary_df["% Passing Reading"].map("{:.1f}".format)
type_summary_df["% Overall Passing"]=type_summary_df["% Overall Passing"].map("{:.1f}".format)
type_summary_df
```
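As a side note, the binning pattern used repeatedly above (`pd.cut` with explicit edges and labels) can be shown in isolation. The four budget values below are hypothetical, chosen so one lands in each bin:

```python
import pandas as pd

budgets = pd.Series([500, 600, 640, 660])  # hypothetical per-student budgets
spending_bins = [0, 585, 630, 645, 675]
group_names = ["<$584", "$585-629", "$630-644", "$645-675"]

# pd.cut assigns each value to the right-inclusive interval it falls in
binned = pd.cut(budgets, spending_bins, labels=group_names)
print(binned.tolist())  # → ['<$584', '$585-629', '$630-644', '$645-675']
```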
| github_jupyter |
# Functions
- Functions let you define reusable code, and they organize and simplify programs
- In practice, a function usually implements one small piece of functionality
- A class implements a larger unit of functionality
- Likewise, a function body should not be longer than one screen
Every function in Python actually has a return value (it returns None by default);
if you do not write a return statement, Python returns None (and displays nothing).
If you do write return, the function returns that value.
```
def HJN():
    print('Hello')
    return 1000

b = HJN()
print(b)
HJN

def panduan(number):
    if number % 2 == 0:
        print('O')
    else:
        print('J')

panduan(number=1)
panduan(2)
```
## Defining a function
def function_name(list of parameters):
do something

- random, range, print, and the like, which we used earlier, are in fact functions (or classes)
If a function parameter has a default value, then when you call the function:
you may omit that argument, in which case the default value is used;
otherwise, the value you pass is used.
```
import random
def hahah():
    # Guess-the-number game: keep guessing until correct.
    n = random.randint(0,5)
    while 1:
        N = int(input('>>'))
        if n == N:
            print('smart')
            break
        elif n < N:
            print('too big')    # the guess is larger than the secret number
        elif n > N:
            print('too small')
```
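The default-value rule described before this example can be sketched with a hypothetical function of its own:

```python
def greet(name, greeting='Hello'):
    # When greeting is omitted, the default 'Hello' is used.
    return '%s, %s!' % (greeting, name)

print(greet('Python'))        # → Hello, Python!
print(greet('Python', 'Hi'))  # → Hi, Python!
```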
## Calling a function
- functionName()
- the parentheses "()" perform the call
```
def H():
    print('hahaha')

def B():
    H()

B()

def A(f):
    f()

A(B)
```

## Functions with and without return values
- return hands a value back to the caller
- return can return multiple values
- Usually, when several functions cooperate to complete one task, they have return values

- You can, of course, also explicitly return None
## EP:

```
def main():
    # Careful: this calls the min defined below, which shadows the built-in min.
    print(min(min(5,6), (51,6)))

def min(n1, n2):
    a = n1
    if n2 < a:
        a = n2
    # Bug for discussion: there is no return, so this min returns None;
    # the outer call then compares a tuple with None and raises TypeError in Python 3.

main()
```
## Parameter types and keyword arguments
- regular parameters
- multiple parameters
- default-value parameters
- variable-length parameters
## Regular parameters
## Multiple parameters
## Default-value parameters
## Keyword-only (forced-name) parameters
```
def U(str_):
    # Count the lowercase letters, uppercase letters and digits in a string.
    xiaoxie = 0   # lowercase count
    daxie = 0     # uppercase count
    shuzi = 0     # digit count
    for i in str_:
        ASCII = ord(i)
        if 97 <= ASCII <= 122:
            xiaoxie += 1
        elif 65 <= ASCII <= 90:
            daxie += 1
        elif 48 <= ASCII <= 57:
            shuzi += 1
    return xiaoxie, daxie, shuzi

U('HJi12')
```
## Variable-length parameters
- \*args
> - variable length: it collects however many positional arguments are passed, including none
- the collected values arrive as a tuple
- the name args is only a convention and can be changed
- \**kwargs
> - the collected values arrive as a dict
- the arguments must be written as key=value pairs
- a signature may combine them: name, \*args, name2, \**kwargs (passed by parameter name)
```
# def TT(a,b)            # as written, this line is a SyntaxError (missing colon and body)
def TT(*args,**kwargs):
    print(kwargs)
    print(args)

TT(1,2,3,4,6,a=100,b=1000)
{'key':'value'}
TT(1,2,4,5,7,8,9,)

def B(name1,name2):
    pass

# B(name1=100,2)         # SyntaxError: positional argument follows keyword argument

def sum_(*args,A='sum'):
    res = 0
    count = 0
    for i in args:
        res += i
        count += 1
    if A == "sum":
        return res
    elif A == "mean":
        mean = res / count
        return res, mean
    else:
        print(A, 'is not supported yet')

sum_(-1,0,1,4,A='var')

'aHbK134'.__iter__
b = 'asdkjfh'
for i in b:
    print(i)
2,5
2 + 22 + 222 + 2222 + 22222
```
## Variable scope
- local variables (local)
- global variables (global)
- the globals() function returns a dict of all global variables, including everything imported
- the locals() function returns a dict of all local variables at the current position
```
a = 1000
b = 10

def Y():
    global a, b
    a += 100
    print(a)

Y()

def YY(a1):
    a1 += 100
    print(a1)

YY(a)
print(a)
```
## Note:
- global: you must declare a variable global before assigning to it inside a function
- Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
- 
# Homework
- 1

```
def getPentagonalNumber(n):
    """
    Compute the nth pentagonal number.
    n: the index to evaluate
    """
    return n*(3*n-1)/2

count = 0
for n in range(1,101):
    if count < 9:
        print("%.0f "%getPentagonalNumber(n), end="")
        count += 1
    else:
        print("%.0f"%getPentagonalNumber(n))
        count = 0
```
- 2

```
n = float(input())
def sumDigits(n):
    bai = n // 100
    shi = n // 10 % 10
    ge = n % 100 % 10
    sum = bai + shi + ge
    return sum
print(sumDigits(n))
```
- 3

```
n = float(input())
def sumDigits(n):
    bai = n // 100
    shi = n // 10 % 10
    ge = n % 100 % 10
    sum = bai + shi + ge
    return sum
print(sumDigits(n))
```
- 4

```
touzie = float(input('The amount invested : '))
nialilv = float(input('Annual interest rate : '))
def FutureInvestmentValue(investmentAmount, monthlyinterestRate, years):
    # Standard future-value formula with monthly compounding
    return investmentAmount * (1 + monthlyinterestRate) ** (years * 12)
# Example call: value after 10 years (the horizon here is illustrative)
print(FutureInvestmentValue(touzie, nialilv / 1200, 10))
```
- 5

```
def printChars(ch1,ch2,numberPerLine):
    '''
    Parameters are unused in this version.
    Walks through every character from '1' to 'Z'.
    '''
    print(ord("1"), ord("Z"))
    for i in range(49,91):
        if i % 10 == 0:
            print(' ')
        print(chr(i), end=' ')
```
- 6

```
def numberOfDaysInAYear(year):
    """
    Compute the number of days in a year (leap-year aware).
    year: the year to check
    """
    if (year % 400 == 0) or (year % 4 == 0) and (year % 100 != 0):
        days = 366
    else:
        days = 365
    return days

day1 = int(input(">>"))
day2 = int(input(">>"))
for i in range(day1, day2+1):
    print("Year %d has %d days"%(i, numberOfDaysInAYear(i)))
```
- 7

```
# Rewrite it? No way!!!
```
- 8

- 9


```
def haomiaoshu():
    import time
    localtime = time.asctime(time.localtime(time.time()))
    print("Local time:", localtime)

haomiaoshu()
```
- 10

```
import random
def dice(x,y):
    """
    Sum the two dice and decide win/lose (craps-style rules).
    x, y: the randomly generated dice values
    """
    ying = [7,11]            # winning sums on the first roll
    shu = [2,3,12]           # losing sums on the first roll
    other = [4,5,6,8,9,10]   # sums that establish a "point"
    if x+y in shu:
        print("You lose")
    elif x+y in ying:
        print("You win")
    elif x+y in other:
        print("point is %d"%(x+y))
        num1 = random.randint(1,6)
        num2 = random.randint(1,6)
        print("You rolled %d + %d = %d"%(num1,num2,num1+num2))
        while True:   # keep rolling until the point or a 7 comes up
            if num1+num2 == 7:
                print("you lose")
                break
            if num1+num2 == x+y:
                print("you win")
                break
            num1 = random.randint(1,6)
            num2 = random.randint(1,6)
            print("You rolled %d + %d = %d"%(num1,num2,num1+num2))

x = random.randint(1,6)
y = random.randint(1,6)
print("You rolled %d + %d = %d"%(x,y,x+y))
dice(x,y)
```
- 11
### Look up online how to send email with Python code
```
import smtplib
import email
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

# SMTP host of the mailbox provider
HOST = 'smtp.qq.com'
# Email subject
SUBJECT = 'CSDN blog code'
# Sender address (placeholder)
FROM = 'sender@qq.com'
# Recipient addresses (placeholders)
TO = 'recipient1@qq.com,recipient2@qq.com'
message = MIMEMultipart('related')

#--------------------------------------HTML body-----------------
# Send the HTML body of the email
message_html = MIMEText('<h2 style="color:red;font-size:100px">CSDN blog is great</h2><img src="cid:big">','html','utf-8')
message.attach(message_html)

#-------------------------------------inline image--------------------
# 'rb' reads the file as binary
# Make sure 1.jpg exists in the current directory
image_data = open('1.jpg','rb')
message_image = MIMEImage(image_data.read())
# Close the file we just opened
image_data.close()
message_image.add_header('Content-ID','big')
# Attach the image to the message
# message.attach(message_image)

#-------------------------------------attachment---------------------
# Make sure table.xls exists in the current directory
message_xlsx = MIMEText(open('table.xls','rb').read(),'base64','utf-8')
# Name of the file as it will appear in the attachment
message_xlsx['Content-Disposition'] = 'attachment;filename="test1111.xlsx"'
message.attach(message_xlsx)

# Sender, recipients and subject headers
message['From'] = FROM
message['To'] = TO
message['Subject'] = SUBJECT

# Open an SSL SMTP connection
email_client = smtplib.SMTP_SSL()
# Connect to the provider's host on port 465
email_client.connect(HOST,'465')
# ---------------------------mailbox authorization code------------------------------
result = email_client.login(FROM,'authorization code')
print('login result', result)
email_client.sendmail(from_addr=FROM,to_addrs=TO.split(','),msg=message.as_string())
# Close the mail client
email_client.close()
```
| github_jupyter |
<h2>Segmenting and Clustering Neighbourhoods in Toronto</h2>
The project includes scraping the Wikipedia page for the postal codes of Canada and then processing and cleaning the data for clustering. The clustering is carried out with K-Means, and the clusters are plotted using the Folium library. The boroughs containing the name 'Toronto' are first plotted on their own and then clustered and plotted again.
<h3>All the 3 tasks of <i>web scraping</i>, <i>cleaning</i> and <i>clustering</i> are implemented in the same notebook for the ease of evaluation.</h3>
<h3>Installing and Importing the required Libraries</h3>
```
!pip install beautifulsoup4
!pip install lxml
import requests # library to handle requests
import pandas as pd # library for data analysis
import numpy as np # library to handle data in a vectorized manner
import random # library for random number generation
#!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim # module to convert an address into latitude and longitude values
# libraries for displaying images
from IPython.display import Image
from IPython.core.display import HTML
from IPython.display import display_html
import pandas as pd
import numpy as np
# transforming a JSON file into a pandas dataframe
from pandas.io.json import json_normalize
!conda install -c conda-forge folium=0.5.0 --yes
import folium # plotting library
from bs4 import BeautifulSoup
from sklearn.cluster import KMeans
import matplotlib.cm as cm
import matplotlib.colors as colors
print('Folium installed')
print('Libraries imported.')
```
<h3>Scraping the Wikipedia page for the table of postal codes of Canada</h3>
BeautifulSoup Library of Python is used for web scraping of table from the Wikipedia. The title of the webpage is printed to check if the page has been scraped successfully or not. Then the table of postal codes of Canada is printed.
```
source = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup=BeautifulSoup(source,'lxml')
print(soup.title)
from IPython.display import display_html
tab = str(soup.table)
display_html(tab,raw=True)
```
<h3>The html table is converted to Pandas DataFrame for cleaning and preprocessing.</h3>
```
dfs = pd.read_html(tab)
df=dfs[0]
df.head()
```
<h3>Data preprocessing and cleaning</h3>
```
# Dropping the rows where Borough is 'Not assigned'
df1 = df[df.Borough != 'Not assigned']
# Combining the neighbourhoods with same Postalcode
df2 = df1.groupby(['Postcode','Borough'], sort=False).agg(', '.join)
df2.reset_index(inplace=True)
# Replacing the name of the neighbourhoods which are 'Not assigned' with names of Borough
df2['Neighbourhood'] = np.where(df2['Neighbourhood'] == 'Not assigned',df2['Borough'], df2['Neighbourhood'])
df2
# Shape of data frame
df2.shape
```
<h3>Importing the CSV file containing the latitudes and longitudes for various neighbourhoods in Canada</h3>
```
lat_lon = pd.read_csv('https://cocl.us/Geospatial_data')
lat_lon.head()
```
<h3>Merging the two tables for getting the Latitudes and Longitudes for various neighbourhoods in Canada</h3>
```
lat_lon.rename(columns={'Postal Code':'Postcode'},inplace=True)
df3 = pd.merge(df2,lat_lon,on='Postcode')
df3.head()
```
<h2>The notebook from here includes the Clustering and the plotting of the neighbourhoods of Canada which contain Toronto in their Borough</h2>
<h3>Getting all the rows from the data frame which contains Toronto in their Borough.</h3>
```
df4 = df3[df3['Borough'].str.contains('Toronto',regex=False)]
df4
```
<h3>Visualizing all the Neighbourhoods of the above data frame using Folium</h3>
```
map_toronto = folium.Map(location=[43.651070,-79.347015],zoom_start=10)
for lat, lng, borough, neighbourhood in zip(df4['Latitude'], df4['Longitude'], df4['Borough'], df4['Neighbourhood']):
    label = '{}, {}'.format(neighbourhood, borough)
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_toronto)
map_toronto
```
<h3>The map might not be visible on Github. Check out the README for the map.</h3>
<h3>Using KMeans for clustering the neighbourhoods</h3>
```
k=5
toronto_clustering = df4.drop(['Postcode','Borough','Neighbourhood'], axis=1)
kmeans = KMeans(n_clusters = k,random_state=0).fit(toronto_clustering)
kmeans.labels_
df4.insert(0, 'Cluster Labels', kmeans.labels_)
df4
# create map
map_clusters = folium.Map(location=[43.651070,-79.347015],zoom_start=10)
# set color scheme for the clusters
x = np.arange(k)
ys = [i + x + (i*x)**2 for i in range(k)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, neighbourhood, cluster in zip(df4['Latitude'], df4['Longitude'], df4['Neighbourhood'], df4['Cluster Labels']):
    label = folium.Popup(' Cluster ' + str(cluster), parse_html=True)
    folium.CircleMarker(
        [lat, lon],
        radius=5,
        popup=label,
        color=rainbow[cluster-1],
        fill=True,
        fill_color=rainbow[cluster-1],
        fill_opacity=0.7).add_to(map_clusters)
map_clusters
```
<h3>The map might not be visible on Github. Check out the README for the map.</h3>
```
#Import Required Packages
import requests
import time
import schedule
import os
import json
import newspaper
from bs4 import BeautifulSoup
from datetime import datetime
from newspaper import fulltext
import newspaper
import pandas as pd
import numpy as np
import pickle
#Set Today's Date
#dates = [datetime.today().strftime('%m-%d-%y')]
dates = [datetime.today().strftime('%m-%d')]
```
### Define Urls for Newsapi
```
#Define urls for newsapi
urls=[
'https://newsapi.org/v2/top-headlines?sources=associated-press&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=independent&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=bbc-news&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=reuters&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-wall-street-journal&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-washington-post&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=national-geographic&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=usa-today&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=cnn&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=fox-news&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=al-jazeera-english&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=bloomberg&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=business-insider&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=cnbc&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-new-york-times&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=new-scientist&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=news-com-au&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=newsweek&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-economist&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-hill&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-huffington-post&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-next-web&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-telegraph&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-washington-times&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=time&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-jerusalem-post&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-irish-times&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-globe-and-mail&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=the-american-conservative&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=techcrunch-cn&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4',
'https://newsapi.org/v2/top-headlines?sources=recode&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4'
]
```
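Since each entry above differs only in its `sources` value, the same list could be built programmatically; in this sketch, the `SOURCES` list and the `API_KEY` placeholder are illustrative, not part of the original code:

```python
# The newsapi.org top-headlines endpoint varies only in its `sources` parameter,
# so the URL list can be generated from a list of source ids.
API_KEY = 'YOUR_API_KEY'  # placeholder; substitute a real key
SOURCES = ['associated-press', 'bbc-news', 'reuters', 'cnn']  # extend as needed

urls = [
    'https://newsapi.org/v2/top-headlines?sources={}&apiKey={}'.format(s, API_KEY)
    for s in SOURCES
]
print(urls[0])
```

This also keeps the API key in a single place instead of repeating it in every string.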
### Develop news site folder structure and write top 10 headline urls to API file
```
for date in dates:
    print('saving {} ...'.format(date))
    for url in urls:
        r = requests.get(url)
        source = url.replace('https://newsapi.org/v2/top-headlines?sources=','').replace('&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4','')
        print(source)
        filename = './data/Credible/{0}/articles/{1}/api.txt'.format(source, date)
        os.makedirs(os.path.dirname(filename), exist_ok=True)
        with open(filename, 'w') as f:
            json.dump(json.loads(r.text), f)
print('Finished')
```
### From individual API files, download news source link and extract text using newspaper python package
```
def saving_json():
    print('saving ...')
    for url in urls:
        url = url.strip()
        for date in dates:
            source = url.replace('https://newsapi.org/v2/top-headlines?sources=','').replace('&apiKey=3fb3c0c1e622430b8df3f9693c7a55b4','')
            print(source)
            sourcename = './data/Credible/{0}/articles/{1}/api.txt'.format(source, date)
            os.makedirs(os.path.dirname(sourcename), exist_ok=True)
            with open(sourcename) as f:
                jdata = json.load(f)
            jdata2 = jdata['articles']
            for i in range(0, len(jdata2)):
                r = jdata2[i]['url']
                print(r)
                link = newspaper.Article(r)
                link.download()
                html = link.html
                if 'video' in r:
                    pass
                elif link:
                    try:
                        link.parse()
                        text = fulltext(html)
                        date_longform = dates[0]
                        article = {}
                        article["html"] = html
                        article["title"] = link.title
                        article["url"] = link.url
                        article["date"] = date_longform
                        article["source"] = source
                        article["text"] = link.text
                        article["images"] = list(link.images)
                        article["videos"] = link.movies
                        count = i + 1
                        filename = './data/Credible/{0}/articles/{1}/article_{2}.txt'.format(source, date, count)
                        os.makedirs(os.path.dirname(filename), exist_ok=True)
                        with open(filename, 'w', encoding="utf8", newline='') as file:
                            json.dump(article, file)
                    except:
                        pass
                else:
                    pass
    print('Finished')
    return None

saving_json()
# #Create initial modeling DataFrame - Only Ran Once then commented out
# modeling = pd.DataFrame(columns=('label', 'text', 'title'))
# #Save initial DataFrame - Only Ran Once - Only Ran Once then commented out
# with open('./data/credible_news_df.pickle', 'wb') as file:
# pickle.dump(modeling, file)
#Open Corpus of News Article Text
with open('./data/credible_news_df.pickle', 'rb') as file:
    credible_news_df = pickle.load(file)

i = credible_news_df.shape[0]  # will start adding at the last row of the dataframe
for source in os.listdir("./data/Credible/"):
    for file in os.listdir('./data/Credible/'+source+'/articles/'+dates[0]):
        if file.endswith(".txt") and 'api' not in file:
            curr_file = os.path.join('./data/Credible/'+source+'/articles/'+dates[0], file)
            with open(curr_file) as json_file:
                try:
                    data = json.load(json_file)
                    credible_news_df.loc[i] = [0, data["text"], data["title"]]
                    i = i + 1
                except ValueError:
                    continue

# Will increase daily
credible_news_df.shape

# Save updated data frame
with open('./data/credible_news_df.pickle', 'wb') as file:
    pickle.dump(credible_news_df, file)
```
Universidade Federal do Rio Grande do Sul (UFRGS)
Programa de Pós-Graduação em Engenharia Civil (PPGEC)
# PEC00144: Experimental Methods in Civil Engineering
### Reading the serial port of an Arduino device
---
_Prof. Marcelo M. Rocha, Dr.techn._ [(ORCID)](https://orcid.org/0000-0001-5640-1020)
_Porto Alegre, RS, Brazil_
```
# Importing Python modules required for this notebook
# (this cell must be executed with "shift+enter" before any other Python cell)
import sys
import time
import serial
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from MRPy import MRPy
```
### 1. Setup serial communication
In order to run this notebook, the Python module ``pyserial`` must be installed.
To ensure the module availability, open a conda terminal and issue the command:
conda install -c anaconda pyserial
Before opening the serial port, verify in the Arduino IDE which USB identifier the
board has been assigned (in Windows it has the form "COMxx", while in Linux it
is something like "/dev/ttyXXXX").
```
#port = '/dev/ttyUSB0'
#baud = 9600
port = 'COM5' # change this address according to your computer
baud = 9600 # match this number with the Arduino's output baud rate
Ardn = serial.Serial(port, baud, timeout=1)
time.sleep(3) # this is important to give time for serial settling
```
### 2. Define function for reading one incoming line
```
def ReadSerial(nchar, nvar, nlines=1):
    Ardn.write(str(nlines).encode())
    data = np.zeros((nlines, nvar))
    for k in range(nlines):
        wait = True
        while(wait):
            if (Ardn.inWaiting() >= nchar):
                wait = False
        bdat = Ardn.readline()
        sdat = bdat.decode()
        sdat = sdat.replace('\n',' ').split()
        data[k, :] = np.array(sdat[0:nvar], dtype='int')
    return data
```
### 3. Acquire data lines from serial port
```
try:
    data = ReadSerial(16, 2, nlines=64)
    t = data[:,0]
    LC = data[:,1]
    Ardn.close()
    print('Acquisition ok!')
except:
    Ardn.close()
    sys.exit('Acquisition failure!')
```
### 4. Create ``MRPy`` instance and save to file
```
ti = (t - t[0])/1000
LC = (LC + 1270)/2**23
data = MRPy.resampling(ti, LC)
data.to_file('read_HX711', form='excel')
print('Average sampling rate is {0:5.1f}Hz.'.format(data.fs))
print('Total record duration is {0:5.1f}s.'.format(data.Td))
print((2**23)*data.mean())
```
### 5. Data visualization
```
fig1 = data.plot_time(fig=1, figsize=(12,8), axis_t=[0, data.Td, -0.01, 0.01])
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
path = Path('./WikiTextTR')
path.ls()
LANG_FILENAMES = [str(f) for f in path.rglob("*/*")]
print(len(LANG_FILENAMES))
print(LANG_FILENAMES[:5])
LANG_TEXT = []
for i in LANG_FILENAMES:
    try:
        for line in open(i, encoding="utf-8"):
            LANG_TEXT.append(json.loads(line))
    except:
        continue  # skip a file with malformed lines instead of stopping the whole loop
LANG_TEXT = pd.DataFrame(LANG_TEXT)
LANG_TEXT.head()
LANG_TEXT.to_csv(f"{path}/Wiki_Turkish_Corpus.csv", index=False)
LANG_TEXT = pd.read_csv(f"{path}/Wiki_Turkish_Corpus.csv")
LANG_TEXT.head()
LANG_TEXT.drop(["id","url","title"],axis=1,inplace=True)
# Note: to_csv returns None, so write the file without reassigning LANG_TEXT
(LANG_TEXT.assign(labels = 0)
    .pipe(lambda x: x[['labels', 'text']])
    .to_csv(f"{path}/Wiki_Turkish_Corpus2.csv", index=False))
LANG_TEXT.head()
LANG_TEXT = pd.read_csv(f"{path}/Wiki_Turkish_Corpus2.csv")
LANG_TEXT.head()
def split_title_from_text(text):
words = text.split("\n\n")
if len(words) >= 2:
return ''.join(words[1:])
else:
return ''.join(words)
LANG_TEXT['text'] = LANG_TEXT['text'].apply(lambda x: split_title_from_text(x))
LANG_TEXT.isna().any()
LANG_TEXT.shape
LANG_TEXT['text'].apply(lambda x: len(x.split(" "))).sum()
re1 = re.compile(r' +')
def fixup(x):
    x = x.replace('ü', "u").replace('Ü', 'U').replace('ı', "i").replace(
        'ğ', 'g').replace('İ', 'I').replace('Ğ', "G").replace('ö', "o").replace(
        'Ö', "O").replace('\n\n', ' ').replace("\'",' ').replace('\n\nSection::::',' ').replace(
        '\n',' ').replace('\\', ' \\ ').replace('ç', 'c').replace('Ç', 'C').replace('ş', 's').replace('Ş', 'S')
    return re1.sub(' ', html.unescape(x))
LANG_TEXT.to_csv(f"{path}/Wiki_Turkish_Corpus3.csv", index=False)
LANG_TEXT = pd.read_csv(f"{path}/Wiki_Turkish_Corpus3.csv")#, chunksize=5000)
LANG_TEXT.head()
import torch
torch.cuda.device(0)
torch.cuda.get_device_name(0)
LANG_TEXT.dropna(axis=0, inplace=True)
df = LANG_TEXT.iloc[np.random.permutation(len(LANG_TEXT))]
cut1 = int(0.8 * len(df)) + 1
cut1
df_train, df_valid = df[:cut1], df[cut1:]
df = LANG_TEXT.iloc[np.random.permutation(len(LANG_TEXT))]
cut1 = int(0.8 * len(df)) + 1
df_train, df_valid = df[:cut1], df[cut1:]
df_train.shape, df_valid.shape
df_train.head()
data_lm = TextLMDataBunch.from_df(path, train_df=df_train, valid_df= df_valid, label_cols="labels", text_cols="text")
data_lm.save('data_lm.pkl')
bs=16
data_lm = load_data(path, 'data_lm.pkl', bs=bs)
data_lm.show_batch()
learner = language_model_learner(data_lm, AWD_LSTM, pretrained=False, drop_mult=0.5)
learner.lr_find()
learner.recorder.plot()
learner.fit_one_cycle(1,1e-2)
learner.save("model1")
learner.load("model1")
TEXT = "Birinci dünya savaşında"
N_WORDS = 40
N_SENTENCES = 2
print("\n".join(learner.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES)))
file = open("itos.pkl","wb")
pickle.dump(data_lm.vocab.itos, file)
file.close()
```
# Computational Assignment 1
**Assigned Monday, 9-9-19.**, **Due Thursday, 9-12-19.**
Most of the problems we encounter in computational chemistry are multidimensional. This means that we need to be able to work with vectors and matrices in our code. Even when we consider a 1-dimensional function, we still need to code all of the data points into a list.
Additionally, when we need to analyze data, we don't want to reinvent the wheel. It can be useful to code your own math operation once to learn how it works, but most of the time you should be using existing libraries in your work. We will cover how to use math and science libraries to run calculations.
This notebook will cover the following concepts:
1. Lists and arrays
1. Defining lists
1. Accessing list values
1. Changing lists
1. Counting, sorting, and looping methods
## 1. Lists and Arrays
Computers regularly deal with multidimensional data. Even if you have one-dimensional data from some sort of function, $F(t)$, you would write the data points into a list of numbers. When talking about code, we call these lists or arrays.
```
my_list=[1,2,3,4,5,6,7]
```
Python specifically has a few ways to handle arrays. The most basic data object for handling data is the **list**. The other two data objects are **tuples** and **dictionaries**.
Lists are defined with brackets `[]`
Tuples are defined with parentheses `()`
Dictionaries are defined with curly brackets `{}`
We won't spend much time with tuples. They behave a lot like lists, but once you make one, you can't change any of the values in the tuple. This can be useful if you want to make sure data values are not changed through a calculation. They also tend to be faster to process, but you won't notice the speedup unless you are trying to process lots of data (Gb of data).
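As a quick illustration of the three bracket styles (the variable names here are just for demonstration, and the tuple error is caught on purpose to show immutability):

```python
point = (1.0, 2.5)            # tuple: parentheses, immutable
coords = [1.0, 2.5]           # list: brackets, mutable
info = {"x": 1.0, "y": 2.5}   # dictionary: curly brackets, key/value pairs

coords[0] = 3.0               # lists can be changed in place
print(coords)                 # [3.0, 2.5]

try:
    point[0] = 3.0            # tuples raise an error if you try to change them
except TypeError as err:
    print("tuples are immutable:", err)

print(info["x"])              # dictionary values are looked up by key
```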
## 1.1 Defining Lists
In python, we create **lists** by defining a set of numbers in a bracket
```
[1,2,3]
```
You should also make sure to define your list as a variable so that you can use it later. Otherwise there is no way to call it. If you don't want to overwrite your variable, use a different name for the new variable
```
my_list2=[1,2,3]
print(my_list2)
```
You can also define an empty list. This is sometimes useful if you plan to add elements to the list (more on that later)
```
empty=[]
```
Lists also don't have to be numbers. They can be a collection of strings (text)
```
sonoran_plants=["saguaro","ocotillo","pitaya","creosote"]
```
Your list can be a list of lists! Notice the line break. You can do that when there is a comma. This helps make your code readable, which is one of the underlying philosophies of python
```
sonoran_plants_animals=[["saguaro","ocotillo","pitaya","creosote"],
["coyote","kangaroo mouse","javalina","gila monster"]]
```
Notice that the two nested lists are different sizes. Mathematically, these lists are not going to behave like vectors or matrices. Python will not add or multiply a list according to the rules of matrix algebra. This is fine. When we need matrices and vectors, it will make more sense to use arrays associated with the math library Numpy. And the Numpy library can usually understand most python lists. Most of the time, if you are using a list, you are mostly trying to organize data, not run a heavy calculation.
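To make the distinction concrete, here is a small sketch (assuming NumPy is installed) comparing how `+` behaves on plain lists versus NumPy arrays:

```python
import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]
print(a + b)       # lists concatenate: [1, 2, 3, 4, 5, 6]

va = np.array(a)
vb = np.array(b)
print(va + vb)     # arrays add element-wise like vectors: [5 7 9]
```

So when you need actual vector or matrix algebra, convert your list with `np.array`; for organizing data, a plain list is fine.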
### Exercise 1.1:
Make your own lists.
1. Make a one-dimensional list of numbers
1. A three dimensional list of numbers (a list of three lists)
1. Make a lists of strings
1. Make a two-dimensional list of strings
1. Print all of your lists
Make sure to define your lists with variables. We will use your lists later on
```
trio=[1,2,3]
single_digit_trios=[[1,2,3],
[4,5,6],
[7,8,9]]
dunkin_order=["large","iced","black"]
coffee_pref=[["dunkin","good"],
["starbucks","bad"]]
print(trio)
print(single_digit_trios)
print(dunkin_order)
print(coffee_pref)
```
## 1.2 List indexing
Once you've made a list, you need to know how to get values from the list. To get a value from a list, use square brackets next to the variable name: `my_list[index]`. The first thing to note is that list indices start at 0
```
# Printing the list to remind us of the elements
print(my_list)
my_list[0]
```
Your indexing can also be negative. The list indexing is cyclic, so if the first element is 0, the last element is -1, the second to last element is -2, etc.
```
my_list[-1]
my_list[-2]
```
If your list is a list of lists, calling an index will give you the nested list. You need two indices to get individual items
```
# Printing the list to remind us of the elements
print(sonoran_plants_animals)
sonoran_plants_animals[0]
sonoran_plants_animals[0][-1]
```
You can also make a new sublist by calling a range of indices
```
# Printing the list to remind us of the elements
print(my_list)
my_list[2:5]
```
You can also make a range with negative indices. Order matters here: the more negative number must come first
```
my_list[-4:-1]
```
### Exercise 1.2
Using the lists you made from up above do the following:
1. For each list you made before, print the first and last values
1. For each multi-dimensional list print the first and last entry of each nested list
1. For each one-dimensional list, use a range of indices to make a new sublist
```
trio=[1,2,3]
single_digit_trios=[[1,2,3],
[4,5,6],
[7,8,9]]
dunkin_order=["large","iced","black"]
coffee_pref=[["dunkin","good"],
["starbucks","bad"]]
print(trio[0],
trio[2])
print(single_digit_trios[0][0],
single_digit_trios[0][2],
single_digit_trios[1][0],
single_digit_trios[1][2],
single_digit_trios[2][0],
single_digit_trios[2][2])
print(dunkin_order[0],
dunkin_order[2])
print(coffee_pref[0][0],
coffee_pref[0][1],
coffee_pref[1][0],
coffee_pref[1][1],)
print(trio[0:2])
print(dunkin_order[0:2])
```
## 1.3 Changing lists
First, we can get the length of a list. Many times, our list is very long or read in from a file. We may need to know how long the list actually is
```
# Printing the list to remind us of the elements
print(sonoran_plants)
len(sonoran_plants)
```
We can change our lists after we make them. We can change individual values or we can add or remove values from a list. Note that tuples cannot be changed (they are called immutable)
```
# Printing the list to remind us of the elements
print(my_list)
```
Individual values in a list can be changed
```
my_list[2] = -3
print(my_list)
```
Values can be added to a list
```
my_list.append(8)
print(my_list)
```
Values can be removed from a list
```
my_list.remove(-3)
print(my_list)
```
A quick note about objects. Python is an object-oriented language. Its underlying philosophy is that everything is an object. An object has attributes and methods. You can get information about the attributes, and you can use the methods to change the properties of the object.
In Python you access an object's attributes or methods using the format `object_variable.attribute` or `object_variable.method()`. For a list, you add and remove values using the methods `list.append(x)` and `list.remove(x)`.
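As a sketch of the attribute/method distinction, here is a tiny hypothetical class of our own (the `Cactus` class and its names are made up for illustration):

```python
class Cactus:
    def __init__(self, name):
        self.name = name       # attribute: a piece of data stored on the object
        self.height_cm = 0

    def grow(self, amount):    # method: a function that changes the object
        self.height_cm += amount

saguaro = Cactus("saguaro")
print(saguaro.name)            # access an attribute
saguaro.grow(25)               # call a method
print(saguaro.height_cm)       # 25
```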
We can add the elements of a list to our list
```
my_list.extend([1,2,3,4])
print(my_list)
```
We can insert values at a given index. When using insert, the first value is the index, the second value is the new list element
```
my_list.insert(0,15)
print(my_list)
```
We can remove elements at a given index
```
my_list.pop(3)
print(my_list)
```
### Exercise 1.3
For each one-dimensional list from above
1. Append a new element
1. Remove a previous element
1. Extend the lists with new lists of elements
1. Insert a value at the fourth index
1. Pop the last value
```
trio=[1,2,3]
dunkin_order=["large","iced","black"]
print(trio)
trio.append(4)
print(trio)
trio.remove(1)
print(trio)
trio.extend([5,6,7])
print(trio)
trio.insert(3,4)
print(trio)
trio.pop(6)
print(trio)
print(dunkin_order)
dunkin_order.append("less ice")
print(dunkin_order)
dunkin_order.remove("black")
print(dunkin_order)
dunkin_order.extend(["no milk","no sugar"])
print(dunkin_order)
dunkin_order.insert(3,"cold brew")
print(dunkin_order)
dunkin_order.pop(5)
print(dunkin_order)
```
## 1.4 Counting, sorting, and looping methods
There are a number of other list methods you can use to change your list. To demonstrate these methods, we will make a list of random integers using the append method
```
# Build a list of random integers
import random # import random number generator library
rand_list = []  # note this starts as an empty list
for i in range(0, 100):
    rand_num = random.randrange(0, 10)
    rand_list.append(rand_num)
print(rand_list)
```
We can count the number of times a value is found in our list. This can be really useful for analysis
```
rand_list.count(3)
```
We can determine the index of the first instance of a value
```
rand_list.index(3)
```
The first time that a 3 is found is at list index 0. If you want to keep finding values of 3, you can use a range index to get the other values
```
rand_list[1:-1].index(3)
rand_list[19:-1].index(3)
```
The reason this gives you 1 is that the next instance of 3 is the $20^{\text{th}}$ element $(19+1)$
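Rather than chaining range indices by hand, you can collect every index of a value in one pass with `enumerate` (a small sketch using a fixed example list instead of the random one):

```python
example = [3, 7, 3, 1, 3, 9]

# enumerate pairs each element with its index, so we can keep the matches
indices_of_3 = [i for i, value in enumerate(example) if value == 3]
print(indices_of_3)   # [0, 2, 4]
```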
We can also sort the list
```
rand_list.sort()
print(rand_list)
```
We can reverse the list
```
rand_list.reverse()
print(rand_list)
```
Lastly, any list can be looped over in python
```
sonoran_plants=["saguaro","ocotillo","pitaya","creosote"]
for plants in sonoran_plants:
    print(plants)
```
Multidimensional lists can also be looped over. The first loop counts over the nested loops, the second loop counts over the list elements
```
sonoran_plants_animals=[["saguaro","ocotillo","pitaya","creosote"],
["coyote","kangaroo mouse","javalina","gila monster"]]
for collections in sonoran_plants_animals:
    for name in collections:
        print(name)
```
### Exercise 1.4
For this exercise, we will create a list of random numbers
```
rand1 = []
for i in range(0, 100):
    rand_num1 = random.randrange(0, 10)
    rand1.append(rand_num1)
print(rand1)
```
Using the list above,
1. Count and print the number of instances of each integer (**Hint:** you can use a loop ranging from 0-9 to do this)
2. Loop over the elements of rand1 and make a new list that labels each element as even or odd. For example given the list `[1,6,9]`, you would have a list that looked like `["odd","even","odd"]`.
Hint: You can use the modulo operator `a%b`. This operator gives you the remainder when you divide a by b. If `a/b` has no remainder, then `a%b=0`. See the examples below.
```
# Modulo example
print(4%2)
print(4%3)
# Modulo conditional
i = 4
if (i % 2 == 0):
    print("even")
else:
    print("odd")
```
Remember that you can always make new cells. Use them to test parts of your code along the way and to separate your code to make it readable
```
import random

# 1. Count the instances of each integer
rand = []
for i in range(0, 100):
    randnum = random.randrange(0, 10)
    rand.append(randnum)
print(rand)
print()
for number in range(10):
    print(number, rand.count(number))

# 2. Label each element of a fresh random list as even or odd
rand = []
for i in range(0, 100):
    randnum = random.randrange(0, 10)
    rand.append(randnum)
print(rand)
print()
rand_parity = []
for number in rand:
    if (number % 2 == 0):
        rand_parity.append("even")
    else:
        rand_parity.append("odd")
print(rand_parity)
```

# Assignment 2
```
%pylab inline
# we will be using the EEG/MEG analysis library MNE
# documentation is available here: https://mne.tools/stable/index.html
!pip install -U mne
import mne # let's import MNE
# .. and the sample dataset
from mne.datasets import sample
from mne.channels import combine_channels
from mne.evoked import combine_evoked
# These data were acquired with the Neuromag Vectorview system at
# MGH/HMS/MIT Athinoula A. Martinos Center Biomedical Imaging.
# EEG data from an electrode cap was acquired simultaneously with the MEG.
### EXPERIMENT DESCRIPTION ###
# In this experiment, checkerboard patterns were presented to the subject in
# the left and right visual fields, interspersed by tones to the left or right ear.
# The interval between the stimuli was 750 ms. Occasionally a smiley face was
# presented at the center of the visual field. The subject was asked to press a
# key with the right index finger as soon as possible after the appearance of the face.
# and let's load it!
data_path = sample.data_path()
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
```
**Task 1:** How many EEG channels were used when acquiring the data? [15 Points]
```
# Hint: You can use raw.info or raw.ch_names to figure this out.
EEG_count = 0
for ch_name in raw.ch_names:
    if 'EEG' in ch_name:
        EEG_count += 1
n_channel = len(raw.ch_names)
print(n_channel)
print(f"There were {EEG_count} EEG channels among {len(raw.ch_names)} channels used when acquiring the data.")
```
* There are 60 EEG channels in total.
**Task 2:** Let's look at some channels! [20 Points]
```
# the code below plots EEG channels 1-8 for 3 seconds after 2 minutes
chs = ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
ecg1to8 = raw.plot(order=chan_idxs, start=120, duration=3)
# plot EEG channels 50-60 for 1 second after 200 seconds.
# List comprehension and string concatenation to make the list of channel names
chs = ['EEG 0' + str(i) for i in range(50,61)]
# Get the channel indices
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
# Plot
ecg50to60 = raw.plot(order=chan_idxs, start=200, duration=1)
```
**Task 3:** How long between event and brain activity? [30 Points]
```
# the following code plots the stimulus channel for the same time
channel_indices = [raw.ch_names.index('STI 014')]
stim = raw.plot(order=channel_indices, start=200, duration=1)
# combine the last two plots into one.
# the plot should show EEG channels 50-60 and the stimulus channel together.
# filter by EEG name and then filter again by number 50-60
filtered = filter(lambda x: x[:3] == 'EEG',raw.ch_names)
filtered = filter(lambda x: 50 <= int(x[-3:]) <= 60,filtered)
chs = list(filtered)
# Added this check here to avoid re-appending each time I re-run this cell
if 'STI 014' not in chs:
    chs.append('STI 014')
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
ecg1to8 = raw.plot(order=chan_idxs, start=200, duration=1)
# Estimate the time between stimulus and brain activity.
# filter by EEG name and then filter again by number 50-60
filtered = filter(lambda x: x[:3] == 'EEG',raw.ch_names)
filtered = filter(lambda x: 50 <= int(x[-3:]) <= 60,filtered)
chs = list(filtered)
# let's copy to be data safe and cause no weird errors
# we must load first all the data to allow filtering by frequency
# I bandpassed the data with a lowcut to get rid of the baseline shift and a
# high cut to get rid of the noise
# now I can trust that finding the max corresponds to the first spike
bandpassed_raw = raw.copy()
bandpassed_raw.load_data()
bandpassed_raw = bandpassed_raw.filter(l_freq=2,h_freq=20)
# I took the data and put into into numpy form
bandpassed_data = bandpassed_raw.get_data(chs)
# The data is given back flattened so I arranged it into rows for each channel
bandpassed_data = np.reshape(bandpassed_data, (len(chs), len(raw.times)) )
# I took the mean across the channels axis to get the mean filtered signal
mean_bandpassed_data = np.mean(bandpassed_data, axis=0)
# I got the first sample index of the 200 to 201 duration sample range
# and I also got its last sample index
index_range = np.argwhere((raw.times >= 200) & (raw.times <= 201 ))
firstRangeIndex = index_range[0][0]
lastRangeIndex = index_range[-1][0]
print("ANSWERS:")
# The index of the peak is the starting sample index of the duration range plus
# the index of the max value within the duration range
peak_index = firstRangeIndex + mean_bandpassed_data[firstRangeIndex:lastRangeIndex+1].argmax()
raw.times[peak_index]
print("The time in seconds of the first spike after the stimulus is: {}".format(raw.times[peak_index]))
# Now all we have to do is take the argmax value of the stimulus channel to find its starting index in a similar way to the mean filtered EEG data
stimulus_data = raw.get_data('STI 014')
stim_data_inRange = stimulus_data[0,firstRangeIndex:lastRangeIndex+1]
stimulus_start_index = firstRangeIndex + stim_data_inRange.argmax()
print("The time in seconds of the beginning of the stimulus is: {}".format(raw.times[stimulus_start_index]))
print()
# let's subtract the first stimulus presentation time from the first spike in brain activity time to get the duration between stimulus to response
print("Hence the estimated time in seconds between stimulus and brain activity is: {}".format(raw.times[peak_index]-raw.times[stimulus_start_index]))
print("Also the time in milliseconds is {}".format((raw.times[peak_index]-raw.times[stimulus_start_index])*1000))
```
**Estimation**
* As displayed on the graph of EEG channels 50-60, the highest peak appears in channel EEG 054, roughly 0.36 s after the 200-second mark (i.e., at about 200.36 s).
* The stimulus channel STI 014 peaks at about 200.25 s, roughly 0.25 s after the 200-second mark.
* Therefore, the estimated time between stimulus and brain activity is 200.36 - 200.25 = 0.11 s.
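The argmax-based latency estimate used above can be reproduced on a toy signal. A minimal sketch with synthetic data (the sampling frequency and signal shapes below are made up for illustration, not taken from the recording):

```python
import numpy as np

sfreq = 1000.0                      # assumed sampling frequency in Hz
times = np.arange(0, 1.0, 1 / sfreq)

# Synthetic stimulus: a step at 0.25 s; synthetic response: a bump at 0.36 s
stim = (times >= 0.25).astype(float)
response = np.exp(-((times - 0.36) ** 2) / (2 * 0.01 ** 2))

stim_onset = times[stim.argmax()]    # argmax returns the FIRST sample at the max
peak_time = times[response.argmax()] # sample index of the response peak
latency = peak_time - stim_onset
print(round(latency * 1000), 'ms')   # -> 110 ms
```

The same logic is what the cell above applies to the mean band-passed EEG and the STI 014 channel within the 200-201 s window.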
**Task 4:** Localize different brain waves for different stimuli! [35 Points]
```
# the following code groups all stimuli together
# and allows the visualization of average brain activity per stimuli.
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1,
'auditory/right': 2,
'visual/left': 3,
'visual/right': 4,
'face': 5,
'button': 32}
picks = mne.pick_types(raw.info, eeg=True)
epochs = mne.Epochs(raw, events, event_id=event_dict, picks=picks,
preload=True)
# here we see the average localized brain activity for the right visual stimuli
visual_activity = epochs['visual/right'].plot_psd_topomap()
# here we see the average localized brain activity for the shown 'face'
face_activity = epochs['face'].plot_psd_topomap()
# TODO Please visualize the average brain activity when the subject pushes the button
button_activity = epochs['button'].plot_psd_topomap()
# Which difference do you see between the visual/right, the face, and the button event?
# Which brain region seems active during the button event?
# visual/right and face seem more similar to the button event.
```
* The Beta and Gamma bands look much the same across all three events (visual/right, face, and button), judging both from the visualizations and the approximate values.
* The clearest differences between the events appear in the Delta, Theta, and Alpha bands.
* For instance, in the face event the Delta and Gamma activity is concentrated (red area) on the left side of the scalp, while in the button event it is spread widely over the frontal region.
* The Alpha band also looks noticeably different between the visual/right and button events.
* However, the Delta and Theta bands of the visual/right and button events show some similarity.
**Bonus Task:** What type of event happened in Task 3? [33 Points]
```
# which event type happened?
# the following code groups all stimuli together
# and allows the visualization of average brain activity per stimuli.
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1,
'auditory/right': 2,
'visual/left': 3,
'visual/right': 4,
'face': 5,
'button': 32}
picks = mne.pick_types(raw.info, eeg=True)
epochs = mne.Epochs(raw, events, event_id=event_dict, picks=picks,
preload=True)
# Display the remaining of activities:
# here we see the average localized brain activity for the left visual stimuli
left_visual_activity = epochs['visual/left'].plot_psd_topomap()
# here we see the average localized brain activity for the left auditory stimuli
left_audio_activity = epochs['auditory/left'].plot_psd_topomap()
# here we see the average localized brain activity for the right auditory stimuli
right_audio_activity = epochs['auditory/right'].plot_psd_topomap()
ecg_stimuli = raw.plot(order=chan_idxs, start=200,
duration=1, events=events,
event_color= {
1: 'chocolate',
2: 'darksalmon',
3: 'navy',
4: 'hotpink',
5: 'saddlebrown',
32: 'gold' }
)
```
**Therefore, the brain activity shown at the event line labeled 4 corresponds to the visual/right stimulus.**
```
# You did it!!
#
# ┈┈┈┈┈┈▕▔╲
# ┈┈┈┈┈┈┈▏▕
# ┈┈┈┈┈┈┈▏▕▂▂▂
# ▂▂▂▂▂▂╱┈▕▂▂▂▏
# ▉▉▉▉▉┈┈┈▕▂▂▂▏
# ▉▉▉▉▉┈┈┈▕▂▂▂▏
# ▔▔▔▔▔▔╲▂▕▂▂|
#
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib notebook
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
font = {'weight' : 'medium',
'size' : 13}
matplotlib.rc('font', **font)
import time
import concurrent.futures as cf
import warnings
warnings.filterwarnings("ignore")
import scipy.constants
mec2 = scipy.constants.value('electron mass energy equivalent in MeV')*1e6
c_light = scipy.constants.c
e_charge = scipy.constants.e
r_e = scipy.constants.value('classical electron radius')
```
### Parameters
```
gamma = 5000
rho = 1.5 # Bend radius in m
beta = (1-1/gamma**2)**(1/2)
sigma_x = 50e-6
sigma_z = 50e-6
# Entrance angle
phi = 0.1/rho
```
## code
```
from csr2d.core2 import psi_s, psi_x0_hat
import numpy as np
gamma = 5000
rho = 1.5 # Bend radius in m
beta = (1-1/gamma**2)**(1/2)
sigma_x = 50e-6
sigma_z = 50e-6
nz = 100
nx = 100
dz = (10*sigma_z) / (nz - 1)
dx = (10*sigma_x) / (nx - 1)
zvec = np.linspace(-5*sigma_z, 5*sigma_z, nz)
xvec = np.linspace(-5*sigma_x, 5*sigma_x, nx)
zm, xm = np.meshgrid(zvec, xvec, indexing='ij')
psi_s_grid = psi_s(zm, xm, beta)
psi_x_grid = psi_x0_hat(zm, xm, beta, dx)
from csr2d.core2 import psi_s, psi_x_hat, psi_x0_hat
from scipy.interpolate import RectBivariateSpline
from numba import njit, vectorize, float64
from csr2d.kick2 import green_meshes_hat, green_meshes
# Bypassing the beam, use smooth Gaussian distribution for testing
def lamb_2d(z,x):
return 1/(2*np.pi*sigma_x*sigma_z)* np.exp(-z**2 / 2 / sigma_z**2 - x**2 / 2 / sigma_x**2)
def lamb_2d_prime(z,x):
return 1/(2*np.pi*sigma_x*sigma_z)* np.exp(-z**2 / 2 / sigma_z**2 - x**2 / 2 / sigma_x**2) * (-z / sigma_z**2)
nz = 100
nx = 100
zvec = np.linspace(-5*sigma_z, 5*sigma_z, nz)
xvec = np.linspace(-5*sigma_x, 5*sigma_x, nx)
zm, xm = np.meshgrid(zvec, xvec, indexing='ij')
lambda_grid_filtered = lamb_2d(zm,xm)
lambda_grid_filtered_prime = lamb_2d_prime(zm,xm)
dz = (10*sigma_z) / (nz - 1)
dx = (10*sigma_x) / (nx - 1)
psi_s_grid = psi_s(zm, xm, beta)
psi_s_grid, psi_x_grid, zvec2, xvec2 = green_meshes_hat(nz, nx, dz, dx, rho=rho, beta=beta)
```
# Integral term code development
```
# Convolution for a specific observation point only
@njit
def my_2d_convolve2(g1, g2, ix1, ix2):
d1, d2 = g1.shape
g2_flip = np.flip(g2)
g2_cut = g2_flip[d1-ix1:2*d1-ix1, d2-ix2:2*d2-ix2]
sums = 0
for i in range(d1):
for j in range(d2):
sums+= g1[i,j]*g2_cut[i,j]
return sums
#@njit
# njit doesn't like the condition grid and interpolation....
def transient_calc_lambda(phi, z_observe, x_observe, zvec, xvec, dz, dx, lambda_grid_filtered_prime, psi_s_grid, psi_x_grid):
x_observe_index = np.argmin(np.abs(xvec - x_observe))
#print('x_observe_index :', x_observe_index )
z_observe_index = np.argmin(np.abs(zvec - z_observe))
#print('z_observe_index :', z_observe_index )
# Boundary condition
temp = (x_observe - xvec)/rho
zi_vec = rho*( phi - beta*np.sqrt(temp**2 + 4*(1 + temp)*np.sin(phi/2)**2))
zo_vec = -beta*np.abs(x_observe - xvec)
condition_grid = np.array([(zvec > z_observe - zo_vec[i]) | (zvec < z_observe - zi_vec[i]) for i in range(len(xvec))])
lambda_grid_filtered_prime_bounded = np.where(condition_grid.T, 0, lambda_grid_filtered_prime)
conv_s = my_2d_convolve2(lambda_grid_filtered_prime_bounded, psi_s_grid, z_observe_index, x_observe_index)
conv_x = my_2d_convolve2(lambda_grid_filtered_prime_bounded, psi_x_grid, z_observe_index, x_observe_index)
##conv_s, conv_x = fftconvolve2(lambda_grid_filtered_prime_bounded, psi_s_grid, psi_x_grid)
#Ws_grid = (beta**2 / abs(rho)) * (conv_s) * (dz * dx)
#Wx_grid = (beta**2 / abs(rho)) * (conv_x) * (dz * dx)
#lambda_interp = RectBivariateSpline(zvec, xvec, lambda_grid_filtered) # lambda lives in the observation grid
#lambda_zi_vec = lambda_interp.ev( z_observe - zi_vec, xvec )
#psi_x_zi_vec = psi_x0(zi_vec/2/rho, temp, beta, dx)
#Wx_zi = (beta**2 / rho) * np.dot(psi_x_zi_vec, lambda_zi_vec)*dx
#lambda_zo_vec = lambda_interp.ev( z_observe - zo_vec, xvec )
#psi_x_zo_vec = psi_x0(zo_vec/2/rho, temp, beta, dx)
#Wx_zo = (beta**2 / rho) * np.dot(psi_x_zo_vec, lambda_zo_vec)*dx
#return Wx_grid[ z_observe_index ][ x_observe_index ], Wx_zi, Wx_zo
#return conv_x, Wx_zi, Wx_zo
return conv_x
#return condition_grid
@njit
def transient_calc_lambda_2(phi, z_observe, x_observe, zvec, xvec, dz, dx, lambda_grid_filtered_prime, psi_s_grid, psi_x_grid):
x_observe_index = np.argmin(np.abs(xvec - x_observe))
#print('x_observe_index :', x_observe_index )
z_observe_index = np.argmin(np.abs(zvec - z_observe))
#print('z_observe_index :', z_observe_index )
# Boundary condition
temp = (x_observe - xvec)/rho
zi_vec = rho*( phi - beta*np.sqrt(temp**2 + 4*(1 + temp)*np.sin(phi/2)**2))
zo_vec = -beta*np.abs(x_observe - xvec)
nz = len(zvec)
nx = len(xvec)
# Allocate array for histogrammed data
cond = np.zeros( (nz,nx) )
for i in range(nx):
cond[:,i] = (zvec > z_observe - zo_vec[i]) | (zvec < z_observe - zi_vec[i])
#condition_grid = np.array([(zvec < z_observe - zi_vec[i]) for i in range(len(xvec))])
#condition_grid = np.array([(zvec > z_observe - zo_vec[i]) | (zvec < z_observe - zi_vec[i]) for i in range(len(xvec))])
lambda_grid_filtered_prime_bounded = np.where(cond, 0, lambda_grid_filtered_prime)
conv_s = my_2d_convolve2(lambda_grid_filtered_prime_bounded, psi_s_grid, z_observe_index, x_observe_index)
conv_x = my_2d_convolve2(lambda_grid_filtered_prime_bounded, psi_x_grid, z_observe_index, x_observe_index)
return conv_x
```
# Applying the codes
### Note that numba-jitted code is slower the FIRST time it is called, since that is when JIT compilation happens
```
t1 = time.time()
r1 = transient_calc_lambda(phi, 2*sigma_z, sigma_x, zvec, xvec, dz, dx,lambda_grid_filtered_prime, psi_s_grid, psi_x_grid)
print(r1)
t2 = time.time()
print('Mapping takes:', t2-t1, 'sec')
t1 = time.time()
r1 = transient_calc_lambda_2(phi, 2*sigma_z, sigma_x, zvec, xvec, dz, dx,lambda_grid_filtered_prime, psi_s_grid, psi_x_grid)
print(r1)
t2 = time.time()
print('Mapping takes:', t2-t1, 'sec')
```
## super version for parallelism
```
def transient_calc_lambda_super(z_observe, x_observe):
return transient_calc_lambda(phi, z_observe, x_observe, zvec, xvec, dz, dx,lambda_grid_filtered_prime, psi_s_grid, psi_x_grid)
#@njit
@vectorize([float64(float64,float64)], target='parallel')
def transient_calc_lambda_2_super(z_observe, x_observe):
return transient_calc_lambda_2(phi, z_observe, x_observe, zvec, xvec, dz, dx,lambda_grid_filtered_prime, psi_s_grid, psi_x_grid)
t1 = time.time()
with cf.ProcessPoolExecutor(max_workers=20) as executor:
result = executor.map(transient_calc_lambda_super, zm.flatten(), xm.flatten())
g1 = np.array(list(result)).reshape(zm.shape)
t2 = time.time()
print('Mapping takes:', t2-t1, 'sec')
t1 = time.time()
g4 = transient_calc_lambda_boundary_super_new(zm,xm)
t2 = time.time()
print('Mapping takes:', t2-t1, 'sec')
fig, ax = plt.subplots(figsize=(8,8))
ax = plt.axes(projection='3d')
ax.plot_surface(zm*1e5, xm*1e5, yaya , cmap='inferno', zorder=1)
ax.set_xlabel(r'z $(10^{-5}m)$')
ax.set_ylabel(r'x $(10^{-5}m)$')
ax.set_zlabel(r'$W_x$ $(\times 10^3/m^2)$ ')
ax.zaxis.labelpad = 10
ax.set_title(r'$W_x$ benchmarking')
# To be fixed
from scipy.integrate import quad
def transient_calc_lambda_boundary_quad(phi, z_observe, x_observe, dx):
def integrand_zi(xp):
temp = (x_observe - xp)/rho
zi = rho*( phi - beta*np.sqrt(temp**2 + 4*(1 + temp)*np.sin(phi/2)**2))
#return psi_x_hat(zi/2/rho, temp, beta)*lamb_2d(z_observe - zi, xp)
return psi_x0_hat(zi/2/rho, temp, beta, dx)*lamb_2d(z_observe - zi, xp)
    def integrand_zo(xp):
        temp = (x_observe - xp)/rho
        zo = -beta*np.abs(x_observe - xp)
        #return psi_x_hat(zo/2/rho, temp, beta)*lamb_2d(z_observe - zo, xp)
        return psi_x0_hat(zo/2/rho, temp, beta, dx)*lamb_2d(z_observe - zo, xp)
return quad(integrand_zi, -5*sigma_x, 5*sigma_x)[0]/dx
factor = (beta**2 / rho)*dx
diff = np.abs((g4.reshape(zm.shape) - g3.reshape(zm.shape))/g3.reshape(zm.shape) )* 100
diff = np.abs((g0 - g3.reshape(zm.shape))/g3.reshape(zm.shape)) * 100
g3.shape
fig, ax = plt.subplots(figsize=(8,8))
ax = plt.axes(projection='3d')
ax.plot_surface(zm*1e5, xm*1e5, factor*g3, cmap='inferno', zorder=1)
ax.set_xlabel(r'z $(10^{-5}m)$')
ax.set_ylabel(r'x $(10^{-5}m)$')
ax.set_zlabel(r'$W_x$ $(m^{-2}$) ')
ax.zaxis.labelpad = 10
ax.set_title(r'$W_x$ benchmarking')
fig, ax = plt.subplots(figsize=(8,8))
ax = plt.axes(projection='3d')
ax.plot_surface(zm*1e5, xm*1e5, diff, cmap='inferno', zorder=1)
ax.set_xlabel(r'z $(10^{-5}m)$')
ax.set_ylabel(r'x $(10^{-5}m)$')
ax.set_zlabel(r'$W_x$ $(\times 10^3/m^2)$ ')
ax.zaxis.labelpad = 10
ax.set_title(r'$W_x$ benchmarking')
ax.set_zscale('log')
plt.plot(diff[30:100,100])
```
# M2: Basic Graphing Assignment - Denis Pelevin
```
# Import matplotlib and Pandas
import matplotlib.pyplot as plt
import pandas as pd
# Enable in-cell graphs
%matplotlib inline
# Read in the input files and store them in DataFrames
df_opiods = pd.read_csv('OpiodsVA.csv')
df_pres = pd.read_csv('presidents.csv')
df_cars = pd.read_csv('TOTALNSA.csv')
```
### Problem 1
**Question 1:** Do opioid overdoses tend to be associated with less affluent areas? That is, areas where
families have lower incomes?
**Answer:** I chose a scatter plot for this problem because scatter plots are best at showing the relationship between two numerical variables.
```
# Plotting the data and adjusting the appearance of the plot
fig,ax = plt.subplots()
# Setting properties of graph - Opioids ODs vs Median Income
ax.scatter(df_opiods['MedianHouseholdIncome'], df_opiods['FPOO-Rate'],c = 'blue', alpha = 0.5,s = 50)
ax.xaxis.set_label_text('Median Household Income', fontsize = 14)
ax.yaxis.set_label_text('Opioid Overdoses', fontsize = 14)
# Formatting the graph using Figure methods
fig.suptitle('Opioid ODs vs Median Income', fontsize = 18)
fig.set_size_inches(8,5)
# Display graph
plt.show()
```
**Question 2:** What is the relationship in Virginia counties between opioid overdoses and heroin overdoses?
**Answer:** I chose a scatter plot for this problem because scatter plots are best at showing the relationship between two numerical variables.
```
# Plotting the data and adjusting the appearance of the plot
fig,ax = plt.subplots()
# Setting properties of graph - Opioid ODs vs Heroin ODs
ax.scatter(df_opiods['FPOO-Rate'], df_opiods['FFHO-Rate'], c = 'green', alpha = 0.5,s = 50)
ax.xaxis.set_label_text('Opioid Overdoses', fontsize = 14)
ax.yaxis.set_label_text('Heroin Overdoses', fontsize = 14)
# Formatting the graph using Figure methods
fig.suptitle('Opioid ODs vs Heroin ODs', fontsize = 18)
fig.set_size_inches(8,5)
# Display graph
plt.show()
```
### Problem 2
**Question:** Which states are associated with the greatest number of United States presidents in terms of the presidents’ birthplaces?
**Answer:** I chose a bar graph (or histogram) since it is best for comparing the frequencies of events.
```
# Counting Values and storing them in a dictionary
stateCounts = dict(df_pres['State'].value_counts())
# Sorting the dictionary by frequency values and storing key/value pairs as sub-tuples
sortedList = sorted(stateCounts.items(), key = lambda x: x[1], reverse = True)
# storing sorted values in x and y
x = [state for [state,freq] in sortedList]
y = [freq for [state,freq] in sortedList]
# Plotting the data and adjusting the appearance of the plot
fig, ax = plt.subplots()
# Setting properties of axes
ax.bar(x = x,height = y, color = 'blue', alpha = 0.75)
ax.yaxis.set_label_text('Number of Presidents', fontsize = 14)
ax.xaxis.set_label_text('States', fontsize = 14)
plt.xticks(rotation = 90)
# Setting properties of the graph
fig.suptitle('Presidents Birthplace State Frequency Histogram', fontsize = 18)
fig.set_size_inches(10,5)
# Display graph
plt.show()
```
# Problem 3
**Question:** How have vehicle sales in the United States varied over time?
**Answer:** I chose a line graph since they are the best at showing changes over periods of time (timelines).
```
# Plotting the data and adjusting the appearance of the plot
fig,ax = plt.subplots()
# Setting properties of the graph - Car Sales Overtime
ax.plot(df_cars['DATE'], df_cars['TOTALNSA'],c = 'blue')
ax.xaxis.set_label_text('Date', fontsize = 14)
ax.yaxis.set_label_text('Car Sales', fontsize = 14)
ax.xaxis.set_major_locator(plt.MaxNLocator(50)) # displaying 1 tick per year
plt.xticks(rotation = 90)
# Formatting the graph using Figure methods
fig.suptitle('US Car Sales', fontsize = 18)
fig.set_size_inches(16,5)
# Display graph
plt.show()
```
# Data Analysis
# FINM September Launch
# Homework Solution 5
## Imports
```
import pandas as pd
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from numpy.linalg import svd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
```
## Data
```
data = pd.read_excel("../data/single_name_return_data.xlsx", sheet_name="total returns").set_index("Date")
data.head()
equities = data.drop(columns=['SPY', 'SHV'])
equities.head()
```
## 1 Principal Components
#### 1.1
**Calculate the principal components of the return series.**
Using linear algebra:
```
clean = equities - equities.mean()
u, s, vh = svd(clean)
factors = clean @ vh.T
factors.columns = np.arange(1,23)
factors
```
Using a package:
```
pca = PCA(svd_solver='full')
pca.fit(equities.values)
pca_factors = pd.DataFrame(pca.transform(equities.values),
columns=['Factor {}'.format(i+1) for i in range(pca.n_components_)],
index=equities.index)
pca_factors
```
#### 1.2
**Report the eigenvalues associated with these principal components. Report each eigenvalue as a percentage of the sum of all the eigenvalues. This is the total variation each PCA explains.**
Using linear algebra:
```
PCA_eigenvals = pd.DataFrame(index=factors.columns, columns=['Eigen Value', 'Percentage Explained'])
PCA_eigenvals['Eigen Value'] = s**2
PCA_eigenvals['Percentage Explained'] = s**2 / (s**2).sum()
PCA_eigenvals
```
Using a package (no eigenvalues method):
```
pkg_explained_var = pd.DataFrame(data = pca.explained_variance_ratio_,
index = factors.columns,
columns = ['Explained Variance'])
pkg_explained_var
```
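As a sanity check, the percentage-of-total-eigenvalue calculation from the squared singular values can be verified against the eigenvalues of the sample covariance matrix on synthetic data (a sketch; the matrix below is random, not the return data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # synthetic "returns"
clean = X - X.mean(axis=0)             # center, as in the SVD approach above

# Route 1: squared singular values of the centered data, as a share of their sum
_, s, _ = np.linalg.svd(clean, full_matrices=False)
pct_svd = s**2 / (s**2).sum()

# Route 2: eigenvalues of clean' @ clean, sorted in descending order
eigvals = np.linalg.eigvalsh(clean.T @ clean)[::-1]
pct_cov = eigvals / eigvals.sum()

assert np.allclose(pct_svd, pct_cov)
```

The two routes agree because the squared singular values of the centered data matrix are exactly the eigenvalues of its Gram matrix.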
#### 1.3
**How many PCs are needed to explain 75% of the variation?**
```
pkg_explained_var.cumsum()
```
We need the first 5 PCs to explain at least 75% of the variation.
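Reading the cutoff off the cumulative table can also be done programmatically. A small sketch on illustrative explained-variance shares (the numbers below are made up, not the actual eigenvalues):

```python
import numpy as np

# Illustrative explained-variance ratios, sorted in descending order
explained = np.array([0.40, 0.15, 0.10, 0.06, 0.05, 0.04] + [0.02] * 10)

# Smallest number of PCs whose cumulative share reaches 75%
n_pcs = int(np.searchsorted(np.cumsum(explained), 0.75) + 1)
print(n_pcs)  # -> 5
```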
#### 1.4
**Calculate the correlation between the first (largest eigenvalue) principal component with each of the 22 single-name equities. Which correlation is highest?**
```
corr_4 = equities.copy()
corr_4['factor 1'] = factors[1]
corr_equities = corr_4.corr()['factor 1'].to_frame('Correlation to factor 1')
corr_equities.iloc[:len(equities.columns)]
```
#### 1.5
**Calculate the correlation between the SPY and the first, second, third principal components.**
```
fac_corr = factors[[1,2,3]].copy()  # copy so the assignment below does not touch the factors frame
fac_corr['SPY'] = data['SPY']
SPY_corr = fac_corr.corr()['SPY'].to_frame('Correlation to SPY').iloc[:3]
SPY_corr
```
## 2 PCR and PLS
#### 2.1
**Principal Component Regression (PCR) refers to using PCA for dimension reduction, and then
utilizing the principal components in a regression. Try this by regressing SPY on the first 3 PCs
calculated in the previous section. Report the r-squared.**
```
y_PCR = data['SPY']
X_PCR = factors[[1,2,3]]
model_PCR = LinearRegression().fit(X_PCR,y_PCR)
print('PCR R-squared: ' + str(round(model_PCR.score(X_PCR, y_PCR),3)))
```
#### 2.2
**Calculate the Partial Least Squares estimation of SPY on the 22 single-name equities. Model it for 3 factors. Report the r-squared.**
```
X_PLS = equities
y_PLS = data['SPY']
model_PLS = PLSRegression(n_components=3).fit(X_PLS, y_PLS)
print('PLS R-squared: ' + str(round(model_PLS.score(X_PLS, y_PLS),3)))
```
#### 2.3
**Compare the results between these two approaches and against penalized regression seen in the past homework.**
PCR and PLS both seek to maximize the ability to explain the variation in the y variable, and therefore they tend to have high in-sample $R^{2}$. When using LASSO or Ridge as our model, we form factors conservatively and penalize additional factors. This makes the in-sample $R^{2}$ lower, as we saw in Homework #4, but may yield more robust out-of-sample (OOS) predictions.
# Training and Evaluating ACGAN Model
*by Marvin Bertin*
<img src="../../images/keras-tensorflow-logo.jpg" width="400">
# Imports
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
from collections import defaultdict
try:
    import cPickle as pickle  # Python 2
except ImportError:
    import pickle  # Python 3
import matplotlib.pyplot as plt
import pandas as pd
from PIL import Image
from six.moves import range
from glob import glob
models = tf.contrib.keras.models
layers = tf.contrib.keras.layers
utils = tf.contrib.keras.utils
losses = tf.contrib.keras.losses
optimizers = tf.contrib.keras.optimizers
metrics = tf.contrib.keras.metrics
preprocessing_image = tf.contrib.keras.preprocessing.image
datasets = tf.contrib.keras.datasets
```
# Construct Generator
```
def generator(latent_size, classes=10):
def up_sampling_block(x, filter_size):
x = layers.UpSampling2D(size=(2, 2))(x)
x = layers.Conv2D(filter_size, (5,5), padding='same', activation='relu')(x)
return x
# Input 1
# image class label
image_class = layers.Input(shape=(1,), dtype='int32', name='image_class')
# class embeddings
emb = layers.Embedding(classes, latent_size,
embeddings_initializer='glorot_normal')(image_class)
# 10 classes in MNIST
cls = layers.Flatten()(emb)
# Input 2
# latent noise vector
latent_input = layers.Input(shape=(latent_size,), name='latent_noise')
# hadamard product between latent embedding and a class conditional embedding
h = layers.multiply([latent_input, cls])
# Conv generator
x = layers.Dense(1024, activation='relu')(h)
x = layers.Dense(128 * 7 * 7, activation='relu')(x)
x = layers.Reshape((7, 7, 128))(x)
# upsample to (14, 14, 128)
x = up_sampling_block(x, 256)
# upsample to (28, 28, 256)
x = up_sampling_block(x, 128)
# reduce channel into binary image (28, 28, 1)
generated_img = layers.Conv2D(1, (2,2), padding='same', activation='tanh')(x)
return models.Model(inputs=[latent_input, image_class],
outputs=generated_img,
name='generator')
```
# Construct Discriminator
```
def discriminator(input_shape=(28, 28, 1)):
def conv_block(x, filter_size, stride):
x = layers.Conv2D(filter_size, (3,3), padding='same', strides=stride)(x)
x = layers.LeakyReLU()(x)
x = layers.Dropout(0.3)(x)
return x
input_img = layers.Input(shape=input_shape)
x = conv_block(input_img, 32, (2,2))
x = conv_block(x, 64, (1,1))
x = conv_block(x, 128, (2,2))
x = conv_block(x, 256, (1,1))
features = layers.Flatten()(x)
# binary classifier, image fake or real
fake = layers.Dense(1, activation='sigmoid', name='generation')(features)
# multi-class classifier, image digit class
aux = layers.Dense(10, activation='softmax', name='auxiliary')(features)
return models.Model(inputs=input_img, outputs=[fake, aux], name='discriminator')
```
# Combine Generator with Discriminator
```
# Adam parameters suggested in paper
adam_lr = 0.0002
adam_beta_1 = 0.5
def ACGAN(latent_size = 100):
# build the discriminator
dis = discriminator()
dis.compile(
optimizer=optimizers.Adam(lr=adam_lr, beta_1=adam_beta_1),
loss=['binary_crossentropy', 'sparse_categorical_crossentropy']
)
# build the generator
gen = generator(latent_size)
gen.compile(optimizer=optimizers.Adam(lr=adam_lr, beta_1=adam_beta_1),
loss='binary_crossentropy')
# Inputs
latent = layers.Input(shape=(latent_size, ), name='latent_noise')
image_class = layers.Input(shape=(1,), dtype='int32', name='image_class')
# Get a fake image
fake_img = gen([latent, image_class])
# Only train generator in combined model
dis.trainable = False
fake, aux = dis(fake_img)
combined = models.Model(inputs=[latent, image_class],
outputs=[fake, aux],
name='ACGAN')
combined.compile(
optimizer=optimizers.Adam(lr=adam_lr, beta_1=adam_beta_1),
loss=['binary_crossentropy', 'sparse_categorical_crossentropy']
)
return combined, dis, gen
```
# Load and Normalize MNIST Dataset
```
# reshape to (..., 28, 28, 1)
# normalize dataset with range [-1, 1]
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
# normalize and reshape train set
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
X_train = np.expand_dims(X_train, axis=-1)
# normalize and reshape test set
X_test = (X_test.astype(np.float32) - 127.5) / 127.5
X_test = np.expand_dims(X_test, axis=-1)
nb_train, nb_test = X_train.shape[0], X_test.shape[0]
```
# Training Helper Functions
```
def print_logs(metrics_names, train_history, test_history):
print('{0:<22s} | {1:4s} | {2:15s} | {3:5s}'.format(
'component', *metrics_names))
print('-' * 65)
ROW_FMT = '{0:<22s} | {1:<4.2f} | {2:<15.2f} | {3:<5.2f}'
print(ROW_FMT.format('generator (train)',
*train_history['generator'][-1]))
print(ROW_FMT.format('generator (test)',
*test_history['generator'][-1]))
print(ROW_FMT.format('discriminator (train)',
*train_history['discriminator'][-1]))
print(ROW_FMT.format('discriminator (test)',
*test_history['discriminator'][-1]))
def generate_batch_noise_and_labels(batch_size, latent_size):
# generate a new batch of noise
noise = np.random.uniform(-1, 1, (batch_size, latent_size))
# sample some labels
sampled_labels = np.random.randint(0, 10, batch_size)
return noise, sampled_labels
```
# Train and Evaluate ACGAN on MNIST
```
nb_epochs = 50
batch_size = 100
latent_size = 100  # defined here so the training loop below can reuse it
train_history = defaultdict(list)
test_history = defaultdict(list)
combined, dis, gen = ACGAN(latent_size=latent_size)
for epoch in range(nb_epochs):
print('Epoch {} of {}'.format(epoch + 1, nb_epochs))
nb_batches = int(X_train.shape[0] / batch_size)
progress_bar = utils.Progbar(target=nb_batches)
epoch_gen_loss = []
epoch_disc_loss = []
for index in range(nb_batches):
progress_bar.update(index)
### Train Discriminator ###
# generate noise and labels
noise, sampled_labels = generate_batch_noise_and_labels(batch_size, latent_size)
# generate a batch of fake images, using the generated labels as a conditioner
generated_images = gen.predict([noise, sampled_labels.reshape((-1, 1))], verbose=0)
# get a batch of real images
image_batch = X_train[index * batch_size:(index + 1) * batch_size]
label_batch = y_train[index * batch_size:(index + 1) * batch_size]
# construct discriminator dataset
X = np.concatenate((image_batch, generated_images))
y = np.array([1] * batch_size + [0] * batch_size)
aux_y = np.concatenate((label_batch, sampled_labels), axis=0)
# train discriminator
epoch_disc_loss.append(dis.train_on_batch(X, [y, aux_y]))
### Train Generator ###
# generate 2 * batch size here such that we have
# the generator optimize over an identical number of images as the
# discriminator
noise, sampled_labels = generate_batch_noise_and_labels(2 * batch_size, latent_size)
# we want to train the generator to trick the discriminator
# so all the labels should be not-fake (1)
trick = np.ones(2 * batch_size)
epoch_gen_loss.append(combined.train_on_batch(
[noise, sampled_labels.reshape((-1, 1))], [trick, sampled_labels]))
print('\nTesting for epoch {}:'.format(epoch + 1))
### Evaluate Discriminator ###
# generate a new batch of noise
noise, sampled_labels = generate_batch_noise_and_labels(nb_test, latent_size)
# generate images
generated_images = gen.predict(
[noise, sampled_labels.reshape((-1, 1))], verbose=False)
# construct discriminator evaluation dataset
X = np.concatenate((X_test, generated_images))
y = np.array([1] * nb_test + [0] * nb_test)
aux_y = np.concatenate((y_test, sampled_labels), axis=0)
# evaluate discriminator
# test loss
discriminator_test_loss = dis.evaluate(X, [y, aux_y], verbose=False)
# train loss
discriminator_train_loss = np.mean(np.array(epoch_disc_loss), axis=0)
### Evaluate Generator ###
# make new noise
noise, sampled_labels = generate_batch_noise_and_labels(2 * nb_test, latent_size)
# create labels
trick = np.ones(2 * nb_test)
# evaluate generator
# test loss
generator_test_loss = combined.evaluate(
[noise, sampled_labels.reshape((-1, 1))],
[trick, sampled_labels], verbose=False)
# train loss
generator_train_loss = np.mean(np.array(epoch_gen_loss), axis=0)
### Save Losses per Epoch ###
# append train losses
train_history['generator'].append(generator_train_loss)
train_history['discriminator'].append(discriminator_train_loss)
# append test losses
test_history['generator'].append(generator_test_loss)
test_history['discriminator'].append(discriminator_test_loss)
# print training and test losses
print_logs(dis.metrics_names, train_history, test_history)
# save weights every epoch
gen.save_weights(
'../logs/params_generator_epoch_{0:03d}.hdf5'.format(epoch), True)
dis.save_weights(
'../logs/params_discriminator_epoch_{0:03d}.hdf5'.format(epoch), True)
# Save train test loss history
pickle.dump({'train': train_history, 'test': test_history},
open('../logs/acgan-history.pkl', 'wb'))
```
# Generator Loss History
```
hist = pickle.load(open('../logs/acgan-history.pkl', 'rb'))
for p in ['train', 'test']:
for g in ['discriminator', 'generator']:
hist[p][g] = pd.DataFrame(hist[p][g], columns=['loss', 'generation_loss', 'auxiliary_loss'])
plt.plot(hist[p][g]['generation_loss'], label='{} ({})'.format(g, p))
# get the NE and show as an equilibrium point
plt.hlines(-np.log(0.5), 0, hist[p][g]['generation_loss'].shape[0], label='Nash Equilibrium')
plt.legend()
plt.title(r'$L_s$ (generation loss) per Epoch')
plt.xlabel('Epoch')
plt.ylabel(r'$L_s$')
plt.show()
```
<img src="../../images/gen-loss.png" width="500">
**Generator Loss:**
- loss associated with tricking the discriminator
- the training losses converge to the Nash Equilibrium point
- shakiness comes from the generator and the discriminator competing at the equilibrium.
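The Nash Equilibrium line drawn in the plot above sits at $-\log(0.5) \approx 0.693$: at equilibrium the discriminator is maximally confused and outputs probability 0.5 for real and fake images alike, and binary cross-entropy at that point equals $\ln 2$. A quick sketch of the arithmetic:

```python
import numpy as np

def bce(y_true, p):
    """Binary cross-entropy for a single example with predicted probability p."""
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# A maximally confused discriminator outputs 0.5 for every image
print(bce(1, 0.5), bce(0, 0.5))  # both equal ln(2) ~ 0.693
```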
# Label Classification Loss History
```
for g in ['discriminator', 'generator']:
for p in ['train', 'test']:
plt.plot(hist[p][g]['auxiliary_loss'], label='{} ({})'.format(g, p))
plt.legend()
plt.title(r'$L_c$ (classification loss) per Epoch')
plt.xlabel('Epoch')
plt.ylabel(r'$L_c$')
plt.semilogy()
plt.show()
```
<img src="../../images/class-loss.png" width="500">
**Label classification loss:**
- loss associated with the discriminator getting the correct label
- discriminator and generator losses reach a stable convergence point
# Generate Digits Conditioned on Class Label
```
# load the weights from the last epoch
gen.load_weights(sorted(glob('../logs/params_generator*'))[-1])
# construct batch of noise and labels
noise = np.tile(np.random.uniform(-1, 1, (10, latent_size)), (10, 1))
sampled_labels = np.array([[i] * 10 for i in range(10)]).reshape(-1, 1)
# generate digits
generated_images = gen.predict([noise, sampled_labels], verbose=0)
# arrange them into a grid and un-normalize the pixels
img = (np.concatenate([r.reshape(-1, 28)
for r in np.split(generated_images, 10)
], axis=-1) * 127.5 + 127.5).astype(np.uint8)
# plot images
plt.imshow(img, cmap='gray')
_ = plt.axis('off')
```
<img src="../../images/generated-digits.png" width="500">
## End of Section
<img src="../../images/divider.png" width="100">
# Automatic Differentiation
:label:`sec_autograd`
As we explained in :numref:`sec_calculus`, differentiation is a crucial step in nearly all deep learning optimization algorithms.
While the calculations for taking these derivatives are straightforward, requiring only some basic calculus,
for complex models, working out the updates by hand can be painful (and often error-prone).
Deep learning frameworks expedite this work by automatically calculating derivatives, i.e., *automatic differentiation*.
In practice, based on the model we design, the system builds a *computational graph*,
tracking which data combined through which operations to produce the output.
Automatic differentiation enables the system to subsequently backpropagate gradients.
Here, to *backpropagate* means to trace through the computational graph, filling in the partial derivatives with respect to each parameter.
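To make "filling in partial derivatives" concrete before we use the framework, here is a minimal hand-written forward and backward pass for $y=2\mathbf{x}^{\top}\mathbf{x}$ (a pure-Python sketch of the chain rule; the variable names are my own, not d2l or TensorFlow code):

```python
# Forward pass: record the intermediate values the backward pass will need.
x = [0.0, 1.0, 2.0, 3.0]
dot = sum(xi * xi for xi in x)   # x . x
y = 2 * dot

# Backward pass: walk the graph in reverse, applying the chain rule.
dy_ddot = 2.0                          # y = 2 * dot
dx = [dy_ddot * 2 * xi for xi in x]    # d(dot)/dx_i = 2 * x_i
print(dx)  # [0.0, 4.0, 8.0, 12.0] == 4 * x, matching the analytic gradient
```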
## A Simple Example
As a toy example, (**say that we are interested in differentiating the function $y=2\mathbf{x}^{\top}\mathbf{x}$ with respect to the column vector $\mathbf{x}$**).
To start, we create the variable `x` and assign it an initial value.
```
import tensorflow as tf
x = tf.range(4, dtype=tf.float32)
x
```
[**Before we even calculate the gradient of $y$ with respect to $\mathbf{x}$, we need a place to store it.**]
It is important that we do not allocate new memory every time we take a derivative with respect to a parameter,
because we will often update the same parameters thousands of times, and allocating new memory each time could quickly exhaust it.
Note that the gradient of a scalar-valued function with respect to a vector $\mathbf{x}$ is itself vector-valued and has the same shape as $\mathbf{x}$.
```
x = tf.Variable(x)
```
(**Now let us calculate $y$.**)
```
# Record all computations onto a tape
with tf.GradientTape() as t:
y = 2 * tf.tensordot(x, x, axes=1)
y
```
Since `x` is a vector of length 4, the dot product of `x` with itself yields the scalar output that we assign to `y`.
Next, we [**automatically calculate the gradient of `y` with respect to each component of `x`**] by calling the function for backpropagation, and print those gradients.
```
x_grad = t.gradient(y, x)
x_grad
```
The gradient of the function $y=2\mathbf{x}^{\top}\mathbf{x}$ with respect to $\mathbf{x}$ should be $4\mathbf{x}$.
Let us quickly verify that this gradient was calculated correctly.
```
x_grad == 4 * x
```
[**Now let us calculate another function of `x`.**]
```
with tf.GradientTape() as t:
y = tf.reduce_sum(x)
t.gradient(y, x)  # Overwritten by the newly calculated gradient
```
## Backward for Non-Scalar Variables
When `y` is not a scalar, the most natural interpretation of the derivative of a vector `y` with respect to a vector `x` is a matrix.
For higher-order and higher-dimensional `y` and `x`, the result of differentiation could be a high-order tensor.
However, while these more exotic objects do show up in advanced machine learning (including [**in deep learning**]),
more often when we call backward on a vector, we are trying to calculate the derivatives of the loss function for each constituent of a batch of training examples.
Here, (**our intent is not to compute the differentiation matrix but rather the sum of the partial derivatives computed individually for each example in the batch.**)
```
with tf.GradientTape() as t:
y = x * x
t.gradient(y, x)  # Same as y = tf.reduce_sum(x * x)
```
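A framework-free way to convince yourself of this "gradient of the sum" behavior is a finite-difference check (a pure-Python sketch; the helper names are my own):

```python
# For y_i = x_i^2, the gradient returned for the non-scalar y is the
# gradient of sum(y), i.e. 2 * x_i. Verify with central finite differences.
def grad_of_sum(f, x, eps=1e-6):
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((sum(f(xp)) - sum(f(xm))) / (2 * eps))
    return g

f = lambda x: [xi * xi for xi in x]      # elementwise square
x = [0.0, 1.0, 2.0, 3.0]
numeric = grad_of_sum(f, x)
analytic = [2 * xi for xi in x]
print(numeric)  # close to [0.0, 2.0, 4.0, 6.0]
```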
## Detaching Computation
Sometimes, we wish to [**move some calculations outside of the recorded computational graph**].
For example, say that `y` was calculated as a function of `x`, and that subsequently `z` was calculated as a function of both `y` and `x`.
Now, imagine that we wanted to calculate the gradient of `z` with respect to `x`, but for some reason wished to treat `y` as a constant,
and only take into account the role that `x` played after `y` was calculated.
Here, we can detach `y` to return a new variable `u` that has the same value as `y`,
but discards any information about how `y` was computed in the computational graph.
In other words, the gradient will not flow backwards through `u` to `x`.
Thus, the following backpropagation function computes the partial derivative of `z = u * x` with respect to `x` while treating `u` as a constant,
rather than the partial derivative of `z = x * x * x` with respect to `x`.
```
# Set persistent=True to run t.gradient more than once
with tf.GradientTape(persistent=True) as t:
y = x * x
u = tf.stop_gradient(y)
z = u * x
x_grad = t.gradient(z, x)
x_grad == u
```
Since the computation of `y` was recorded, we can subsequently invoke backpropagation on `y`
to get the derivative of `y = x * x` with respect to `x`, which is `2 * x`.
```
t.gradient(y, x) == 2 * x
```
## Computing the Gradient of Python Control Flow
One benefit of using automatic differentiation is that
[**even if building the computational graph of a function requires passing through Python control flow (e.g., conditionals, loops, or arbitrary function calls), we can still calculate the gradient of the resulting variable**].
In the following snippet, note that the number of iterations of the `while` loop and the evaluation of the `if` statement both depend on the value of the input `a`.
```
def f(a):
b = a * 2
while tf.norm(b) < 1000:
b = b * 2
if tf.reduce_sum(b) > 0:
c = b
else:
c = 100 * b
return c
```
Let us compute the gradient.
```
a = tf.Variable(tf.random.normal(shape=()))
with tf.GradientTape() as t:
d = f(a)
d_grad = t.gradient(d, a)
d_grad
```
We can now analyze the `f` function defined above.
Note that it is piecewise linear in its input `a`.
In other words, for any `a` there exists some constant scalar `k` such that `f(a) = k * a`, where the value of `k` depends on the input `a`.
Consequently, `d / a` allows us to verify that the gradient is correct.
```
d_grad == d / a
```
## Summary
* Deep learning frameworks can automate the calculation of derivatives. To use this, we first attach gradients to those variables with respect to which we desire partial derivatives. We then record the computation of our target value, execute its function for backpropagation, and access the resulting gradient.
## Exercises
1. Why is the second derivative much more expensive to compute than the first derivative?
1. After running the function for backpropagation, immediately run it again and see what happens.
1. In the control flow example where we calculate the derivative of `d` with respect to `a`, what would happen if we changed the variable `a` to a random vector or a matrix?
1. Redesign an example of finding the gradient of the control flow. Run and analyze the result.
1. Let $f(x)=\sin(x)$. Plot $f(x)$ and $\frac{df(x)}{dx}$, where the latter is computed without exploiting that $f'(x)=\cos(x)$.
[Discussions](https://discuss.d2l.ai/t/1757)
# Rossman data preparation
To illustrate the techniques we need to apply before feeding all the data to a Deep Learning model, we are going to take the example of the [Rossmann sales Kaggle competition](https://www.kaggle.com/c/rossmann-store-sales). Given a wide range of information about a store, we are going to try to predict its sales number on a given day. This is very useful for managing stock properly and satisfying demand without wasting anything. The official training set gave a lot of information about various stores in Germany, but it was also allowed to use additional data, as long as it was made public and available to all participants.
We are going to reproduce most of the steps of one of the winning teams, as highlighted in [Entity Embeddings of Categorical Variables](https://arxiv.org/pdf/1604.06737.pdf). In addition to the official data, teams at the top of the leaderboard also used information about the weather, the states of the stores, and the Google Trends of those days. We have assembled all that additional data in one file available for download [here](http://files.fast.ai/part2/lesson14/rossmann.tgz) if you want to replicate those steps.
### A first look at the data
First things first, let's import everything we will need.
```
from fastai.tabular.all import *
```
If you have downloaded the previous file and decompressed it in a folder named rossmann in the fastai data folder, you should see the following list of files with this instruction:
```
path = Config().data/'rossmann'
path.ls()
```
The data that comes from Kaggle is in 'train.csv', 'test.csv', 'store.csv' and 'sample_submission.csv'. The other files are the additional data we were talking about. Let's start by loading everything using pandas.
```
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(path/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
```
To get an idea of the amount of data available, let's just look at the length of the training and test tables.
```
len(train), len(test)
```
So we have more than one million records available. Let's have a look at what's inside:
```
train.head()
```
The `Store` column contains the id of the stores, then we are given the id of the day of the week, the exact date, if the store was open on that day, if there were any promotion in that store during that day, and if it was a state or school holiday. The `Customers` column is given as an indication, and the `Sales` column is what we will try to predict.
If we look at the test table, we have the same columns, minus `Sales` and `Customers`, and it looks like we will have to predict on dates that are after the ones of the train table.
```
test.head()
```
The other table given by Kaggle contains some information specific to the stores: their type, what the competition looks like, if they are engaged in a permanent promotion program, and if so, since when.
```
store.head().T
```
Now let's have a quick look at our four additional dataframes. `store_states` just gives us the abbreviated name of the state of each store.
```
store_states.head()
```
We can match them to their real names with `state_names`.
```
state_names.head()
```
Which is going to be necessary if we want to use the `weather` table:
```
weather.head().T
```
Lastly the googletrend table gives us the trend of the brand in each state and in the whole of Germany.
```
googletrend.head()
```
Before we apply the fastai preprocessing, we will need to join the store table and the additional ones with our training and test table. Then, as we saw in our first example in chapter 1, we will need to split our variables between categorical and continuous. Before we do that, though, there is one type of variable that is a bit different from the others: dates.
We could turn each particular day into a category, but there is cyclical information in dates that we would miss if we did that. We already have the day of the week in our tables, but maybe the day of the month also bears some significance. People might be more inclined to go shopping at the beginning or the end of the month. The number of the week/month is also important to detect seasonal influences.
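One common way to capture the cyclical part (not used in this notebook, shown here only as an illustration) is to encode a periodic field as a sin/cos pair, so that day 31 ends up close to day 1 in feature space:

```python
import math

def cyclic_encode(value, period):
    """Map a periodic value (e.g. day of month) to a point on the unit circle."""
    angle = 2 * math.pi * (value - 1) / period
    return math.cos(angle), math.sin(angle)

d1, d15, d31 = (cyclic_encode(d, 31) for d in (1, 15, 31))
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
# Day 31 is adjacent to day 1 on the circle; day 15 is far from both.
print(dist(d1, d31) < dist(d1, d15))  # True
```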
Then we will try to extract meaningful information from those dates. For instance, promotions on their own are important inputs, but maybe the number of running weeks with a promotion is another useful piece of information, as it will influence customers. A state holiday in itself is important, but it's more significant to know if we are the day before or after such a holiday, as that will impact sales. All of those might seem very specific to this dataset, but you can actually apply them to any tabular data containing time information.
This first step is called feature-engineering and is extremely important: your model will try to extract useful information from your data but any extra help you can give it in advance is going to make training easier, and the final result better. In Kaggle Competitions using tabular data, it's often the way people prepared their data that makes the difference in the final leaderboard, not the exact model used.
### Feature Engineering
#### Merging tables
To merge tables together, we will use this little helper function that relies on the pandas library. It will merge the tables `left` and `right` by looking at the column(s) which names are in `left_on` and `right_on`: the information in `right` will be added to the rows of the tables in `left` when the data in `left_on` inside `left` is the same as the data in `right_on` inside `right`. If `left_on` and `right_on` are the same, we don't have to pass `right_on`. We keep the fields in `right` that have the same names as fields in `left` and add a `_y` suffix (by default) to those field names.
```
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
```
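On a hypothetical toy example (made-up data, just to show the suffix behavior), a column name like `Sales` that exists in both tables keeps the left copy unchanged and renames the right one to `Sales_y`:

```python
import pandas as pd

def join_df(left, right, left_on, right_on=None, suffix='_y'):
    # Same helper as defined above, repeated so this cell runs standalone.
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", suffix))

# Made-up toy tables: `Sales` appears on both sides.
left = pd.DataFrame({'Store': [1, 2], 'Sales': [100, 200]})
right = pd.DataFrame({'Store': [1, 2], 'Sales': [90, 210], 'State': ['HE', 'BY']})
merged = join_df(left, right, 'Store')
print(sorted(merged.columns))  # ['Sales', 'Sales_y', 'State', 'Store']
```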
First, let's replace the state names in the weather table by the abbreviations, since that's what is used in the other tables.
```
weather = join_df(weather, state_names, "file", "StateName")
weather[['file', 'Date', 'State', 'StateName']].head()
```
To double-check the merge happened without incident, we can check that every row has a `State` with this line:
```
len(weather[weather.State.isnull()])
```
We can now safely remove the columns with the state names (`file` and `StateName`) since we'll use the short codes.
```
weather.drop(columns=['file', 'StateName'], inplace=True)
```
To add the weather information to our `store` table, we first use the table `store_states` to match a store code with the corresponding state, then we merge with our weather table.
```
store = join_df(store, store_states, 'Store')
store = join_df(store, weather, 'State')
```
And again, we can check that the merge went well by looking at whether new NaNs were introduced.
```
len(store[store.Mean_TemperatureC.isnull()])
```
Next, we want to join the `googletrend` table to this `store` table. If you remember from our previous look at it, it's not exactly in the same format:
```
googletrend.head()
```
We will need to change the column with the states and the columns with the dates:
- in the column `file`, the state names contain `Rossmann_DE_XX` with `XX` being the code of the state, so we want to remove `Rossmann_DE`. We will do this by creating a new column containing the last part of a split of the string by '\_'.
- in the column `week`, we will extract the date corresponding to the beginning of the week in a new column by taking the last part of a split on ' - '.
In pandas, creating a new column is very easy: you just have to define them.
```
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.head()
```
Let's check everything went well by looking at the values in the new `State` column of our `googletrend` table.
```
store['State'].unique(),googletrend['State'].unique()
```
We have two additional values in the second (`None` and 'SL'), but this isn't a problem since they'll be ignored when we join. One problem, however, is that 'HB,NI' in the first table is named 'NI' in the second one, so we need to change that.
```
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
```
Why do we have a `None` in state? As we said before, there is a global trend for Germany that corresponds to `Rossmann_DE` in the field `file`. For those rows, the previous split failed, which gave the `None` value. We will keep this global trend and put it in a new column.
```
trend_de = googletrend[googletrend.file == 'Rossmann_DE'][['Date', 'trend']]
```
Then we can merge it with the rest of our trends, by adding the suffix '\_DE' to know it's the general trend.
```
googletrend = join_df(googletrend, trend_de, 'Date', suffix='_DE')
```
Then at this stage, we can remove the columns `file` and `week` since they won't be useful anymore, as well as the rows where `State` is `None` (since they correspond to the global trend that we saved in another column).
```
googletrend.drop(columns=['file', 'week'], axis=1, inplace=True)
googletrend = googletrend[~googletrend['State'].isnull()]
```
The last thing missing to be able to join this with our store table is to extract the week from the date, in this table and in the store table: we need to join on week values since each trend is given for the full week that starts on the indicated date. This is linked to the next topic in feature engineering: extracting date parts.
#### Adding dateparts
If your table contains dates, you will need to split the information into several columns for your Deep Learning model to be able to train properly. There is the basic stuff, such as the day number, week number, month number, or year number, but anything that can be relevant to your problem is also useful. Is it the beginning or the end of the month? Is it a holiday?
To help with this, the fastai library has a convenience function called `add_datepart`. It will take a dataframe and a column you indicate, try to read it as a date, then add all those new columns. If we go back to our `googletrend` table, we now have four columns.
```
googletrend.head()
```
If we add the dateparts, we will gain a lot more:
```
googletrend = add_datepart(googletrend, 'Date', drop=False)
googletrend.head().T
```
We chose the option `drop=False` as we want to keep the `Date` column for now. Another option is to add the `time` part of the date, but it's not relevant to our problem here.
Now we can join our Google trends with the information in the `store` table, it's just a join on \['Week', 'Year'\] once we apply `add_datepart` to that table. Note that we only keep the initial columns of `googletrend` with `Week` and `Year` to avoid all the duplicates.
```
googletrend = googletrend[['trend', 'State', 'trend_DE', 'Week', 'Year']]
store = add_datepart(store, 'Date', drop=False)
store = join_df(store, googletrend, ['Week', 'Year', 'State'])
```
At this stage, `store` contains all the information about the stores, the weather on that day and the Google trends applicable. We only have to join it with our training and test table. We have to use `make_date` before being able to execute that merge, to convert the `Date` column of `train` and `test` to proper date format.
```
make_date(train, 'Date')
make_date(test, 'Date')
train_fe = join_df(train, store, ['Store', 'Date'])
test_fe = join_df(test, store, ['Store', 'Date'])
```
#### Elapsed times
Another feature that can be useful is the elapsed time before/after a certain event occurs. For instance the number of days since the last promotion or before the next school holiday. Like for the date parts, there is a fastai convenience function that will automatically add them.
One thing to take into account here is that you will need to use that function on the whole time series you have, even the test data: there might be a school holiday that takes place during the training data and it's going to impact those new features in the test data.
```
all_ftrs = train_fe.append(test_fe, sort=False)
```
We will consider the elapsed times for three events: 'Promo', 'StateHoliday' and 'SchoolHoliday'. Note that those must correspond to booleans in your dataframe. 'Promo' and 'SchoolHoliday' already are (only 0s and 1s) but 'StateHoliday' has multiple values.
```
all_ftrs['StateHoliday'].unique()
```
If we refer to the explanation on Kaggle, 'b' is for Easter, 'c' for Christmas and 'a' for the other holidays. We will just convert this into a boolean that flags any holiday.
```
all_ftrs.StateHoliday = all_ftrs.StateHoliday!='0'
```
Now we can add, for each store, the number of days since or until the next promotion, state or school holiday. This will take a little while since the whole table is big.
```
all_ftrs = add_elapsed_times(all_ftrs, ['Promo', 'StateHoliday', 'SchoolHoliday'],
date_field='Date', base_field='Store')
```
It added four new features per event. If we look at 'StateHoliday', for instance:
```
[c for c in all_ftrs.columns if 'StateHoliday' in c]
```
The column 'AfterStateHoliday' contains the number of days since the last state holiday, 'BeforeStateHoliday' the number of days until the next one. As for 'StateHoliday_bw' and 'StateHoliday_fw', they contain the number of state holidays in the past or future seven days respectively. The same four columns have been added for 'Promo' and 'SchoolHoliday'.
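The idea behind these elapsed-time columns can be sketched in pure Python (a deliberately simplified illustration — the real `add_elapsed_times` works on calendar dates and groups by the `base_field`):

```python
def days_since_last(flags):
    """For each day, number of days since the event flag was last True."""
    out, since = [], None
    for day, flag in enumerate(flags):
        if flag:
            since = day
        out.append(day - since if since is not None else None)
    return out

# One store, one flag per consecutive day (e.g. a simplified StateHoliday).
holiday = [False, True, False, False, True, False]
print(days_since_last(holiday))  # [None, 0, 1, 2, 0, 1]
```

Running the same pass over the reversed list would give the "Before" counterpart, and a rolling 7-day sum of the flags gives the `_bw`/`_fw` style counts.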
Now that we have added those features, we can split again our tables between the training and the test one.
```
train_df = all_ftrs.iloc[:len(train_fe)]
test_df = all_ftrs.iloc[len(train_fe):]
```
One last thing the authors of this winning solution did was to remove the rows with no sales, which correspond to exceptional closures of the stores. This might not have been a good idea since even if we don't have access to the same features in the test data, it can explain why we have some spikes in the training data.
```
train_df = train_df[train_df.Sales != 0.]
```
We will use those for training but since all those steps took a bit of time, it's a good idea to save our progress until now. We will just pickle those tables on the hard drive.
```
train_df.to_pickle(path/'train_clean')
test_df.to_pickle(path/'test_clean')
```
### Introduction
I am testing the idea of using the Jupyter notebook as my script, so the comments are verbose. Hopefully this helps synchronize the notebook content with the video. Comments on this approach are welcome.
More content like this can be found at [robotsquirrelproductions.com](https://robotsquirrelproductions.com/)
Today, I will take you through the steps to control and download data from your Rigol DS1054Z oscilloscope using Python in a Jupyter notebook. This tutorial uses the [DS1054Z library](https://ds1054z.readthedocs.io/en/stable/) written by Philipp Klaus. This library is required to replicate these examples in your environment.
I want to mention some sites I found helpful in learning about this. [Ken Shirriff's blog](http://www.righto.com/2013/07/rigol-oscilloscope-hacks-with-python.html) includes spectrographic analysis and how to export to a .wav file. Of course, the [programming manual](https://beyondmeasure.rigoltech.com/acton/attachment/1579/f-0386/1/-/-/-/-/DS1000Z_Programming%20Guide_EN.pdf) itself also proved helpful.
### Set up the notebook and import the libraries
I want to document the Python version used in this example. Import the `sys` library and print the version information.
```
import sys
print(sys.version)
```
Begin by importing libraries to connect to the oscilloscope and to display the data. The [Matplotlib package](https://matplotlib.org/) provides data plotting functionality.
```
from ds1054z import DS1054Z
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
import numpy as np
```
The [IPython package](https://ipython.org/) provides libraries needed to display the oscilloscope bitmap images.
```
from IPython.display import Image, display
```
Finally, include the libraries from the [SciPy package](https://scipy.org/). These libraries enable single-sided spectral analysis.
```
from scipy.fft import rfft, rfftfreq
```
### Connect to oscilloscope
Next, I need to connect to the oscilloscope to verify it responds to basic commands.
```
scope = DS1054Z('192.168.1.206')
print(scope.idn)
```
The scope should respond with the make, model, and serial number. It looks like it has, so we have a good connection.
Ensure the scope is in the run mode to collect some data.
```
scope.run()
```
### Download a waveform (source screen)
#### Configure the oscilloscope
I have channel 1 connected to a magnetic pickup. For details on which magnetic pickup I used and how I wired it to the oscilloscope, check out my blog post on [selecting magnetic pickups](https://robotsquirrelproductions.com/selecting-a-magnetic-pickup/).
The pickup views a shaft rotating about 600 RPM. I would like to see about 5-6 revolutions on the screen. For this reason, I will set the horizontal time scaling to 50 ms/division. I will also set the probe ratio to unity and the vertical scale to 125 mV/division. These commands use the DS1054Z library, but VISA commands could also be used.
```
scope.set_probe_ratio(1, 1)
scope.set_channel_scale(1, 0.125)
scope.timebase_scale = 0.050
```
#### Configure the trigger
I want to set the trigger to capture a similar waveform each time I read data from the oscilloscope. Setting the trigger fixes the signal with respect to the grid. The magnetic pickup signal rises to approximately 200 mV. For this reason I put the trigger level at 100 mV, about half of the peak.
I switched to VISA commands instead of using the DS1054Z functions. The `write` function sends the VISA commands.
```
d_trigger_level = 0.1
scope.write(':trigger:edge:source CHAN1')
scope.write(':trigger:edge:level ' + format(d_trigger_level))
```
#### Download Rigol screen bitmap
With the signals captured, I place the oscilloscope in **STOP** mode. This ensures the buffer contents do not change as I pull configuration information.
```
scope.stop()
```
Take a screenshot of the data from the scope. A bitmap showing the oscilloscope configuration can be a helpful reference to check some of the calculations below.
```
bmap_scope = scope.display_data
display(Image(bmap_scope, width=800))
```
The screen capture confirms the configuration parameters. For example, the top right shows the trigger configured for a rising edge with a threshold value of 100 mV. In the top left, beside the Rigol logo, the image shows the scope is in stop mode and that the horizontal ("H") axis has 50 ms/division. The bottom left corner indicates that channel 1 has been configured for 125 mV/division.
The screen also has information about the memory layout. The top middle of the screenshot shows a wavy line. The wavy line may have greyed-out areas; reference the image below. The wavy line in the transparent region represents the screen buffer. The origin of the screen buffer begins at the left of this transparent area. In contrast, the RAW memory starts at the left of the wavy line regardless of shading.
```
Image(filename="RigolBufferLayout.png", width=800)
```
#### Download oscilloscope configuration
I now begin preparing for data collection. First, save off the vertical scale characteristics. Next, store the timescale value.
```
d_voltage_scale = scope.get_channel_scale(1)
print('Vertical scale: %0.3f volts' % d_voltage_scale)
d_timebase_scale_actual = float(scope.query(':TIMebase:SCAle?'))
print('Horizontal time scale: %0.3f seconds' % d_timebase_scale_actual)
```
I like to store the instrument identification to describe the device that acquired the signal. I include this descriptive identifier on the plots along with the data. This can be helpful for troubleshooting. For example, if you later find a problem instrument, this helps identify projects or work that might be impacted.
```
str_idn = scope.idn
print(str_idn)
```
#### Download the signal and plot it
Next, I use the `get_waveform_samples` function to download the waveform. I set the mode to **NORM** to capture the screen buffer.
```
d_ch1 = scope.get_waveform_samples(1, mode='NORM')
```
The DS1054Z scope should always return 1200 samples. I use the `scope.memory_depth_curr_waveform` command to pull the number of samples.
```
i_ns = scope.memory_depth_curr_waveform
print('Number of samples: %0.f' % i_ns)
```
The scope has twelve horizontal divisions, so the total time for the sample is 50 ms * 12 = 600 ms. Knowing the number of samples and the total length of time, I estimate the sampling frequency as 1200/600 ms = 2000 hertz.
```
d_fs = i_ns/(12.0 * d_timebase_scale_actual)
print('Sampling frequency: %0.3f hertz' % d_fs)
```
Next, I create a time series vector for the independent axis in the plot.
```
np_d_time = np.linspace(0,(i_ns-1),i_ns)/d_fs
```
Restore the oscilloscope to **RUN** mode.
```
scope.run()
```
Calculation of the time series completes the plotting preparation. Next, I write up the lines needed to create the [timebase plot](https://robotsquirrelproductions.com/vibration-data-visualization/#timebase-plot). To zoom in and see more detail, change the x-axis limits to `plt.xlim([0, 0.1])` and comment out both `plt.xticks(np.linspace(0, xmax, 13))` and `ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f'))` lines. With these changes, the plot will show the first 100 ms.
```
plt.rcParams['figure.figsize'] = [16, 4]
plt.figure()
plt.plot(np_d_time, d_ch1)
plt.grid()
plt.xlabel('Time, seconds')
xmax = 12.0*d_timebase_scale_actual
plt.xlim([0, xmax])
plt.xticks(np.linspace(0, xmax, 13))
ax = plt.gca()
ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f'))
plt.ylabel('Amplitude, volts')
plt.ylim([-4.0*d_voltage_scale, 4.0 *d_voltage_scale])
plt.title(str_idn + '\nChannel 1 | No. Samples %0.f | Sampling freq.: %0.1f Hz' % (i_ns, d_fs))
figure = plt.gcf()
figure.set_size_inches(4*1.6, 4)
plt.savefig('Timebase_Screen.pdf')
plt.show()
```
One note of caution: I do not know if the oscilloscope anti-aliased the signal before downsampling for the screen. The programming manual does not provide much detail. It only says it samples at equal time intervals to rebuild the waveform. Not knowing more details, I believe the RAW waveform should be collected to avoid aliasing problems.
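That caution is easy to illustrate: without an anti-aliasing filter, a 900 Hz tone sampled at 1000 Hz produces exactly the same samples as a sign-flipped 100 Hz tone, so it would masquerade as 100 Hz in the spectrum. A pure-Python sketch with made-up frequencies:

```python
import math

fs = 1000.0            # sampling rate, hertz
n = range(32)
tone_900 = [math.sin(2 * math.pi * 900 * k / fs) for k in n]
tone_100 = [math.sin(2 * math.pi * 100 * k / fs) for k in n]

# sin(2*pi*900*k/1000) = sin(2*pi*k - 2*pi*100*k/1000) = -sin(2*pi*100*k/1000)
alias_ok = all(abs(a + b) < 1e-9 for a, b in zip(tone_900, tone_100))
print(alias_ok)  # True: the 900 Hz tone is indistinguishable from 100 Hz
```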
The plot matches the signal presented in the screen capture. Next, we will take a spectrum of this data to examine the frequency content.
#### Spectral analysis (source: screen)
A separate [video](https://youtu.be/8KWPlno6VP0) and [blog post](https://robotsquirrelproductions.com/spectral-analysis-in-python/) covers the basics of single-sided spectral analysis in Python. For this reason, I only present the code here. These commands calculate the single-sided spectrum and labels for the frequency axis.
```
cx_y = rfft(d_ch1)/float(i_ns/2.)
d_ws = rfftfreq(i_ns,1./d_fs)
```
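The `N/2` normalization used above can be sanity-checked with a tiny hand-rolled DFT: for a sine of amplitude `A` at an exact bin frequency, the bin magnitude divided by `N/2` recovers `A` (pure Python, independent of SciPy):

```python
import cmath, math

N, A, k0 = 64, 0.25, 5                      # samples, amplitude, bin index
x = [A * math.sin(2 * math.pi * k0 * n / N) for n in range(N)]

# DFT at bin k0, then apply the same N/2 normalization as above.
X_k0 = sum(x[n] * cmath.exp(-2j * math.pi * k0 * n / N) for n in range(N))
amp = abs(X_k0) / (N / 2)
print(round(amp, 6))  # 0.25
```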
Create the plot and display the single-sided spectrum. I used the plot function to create linear scales in the code below. In the video I also use a logarithmic vertical axis scale ("log scale"). Log scales show details that linear scales may miss. For example, to see the noise floor, change the second line below to `plt.semilogy(d_ws, abs(cx_y))`. Also, change the fifth line to `plt.xlim([0, 1000])` to set the x-axis limits to 0 to 1000 hertz. The flat section of the spectrum from 200 to 1000 hertz shows the noise floor.
```
plt.figure()
plt.plot(d_ws, abs(cx_y))
plt.grid()
plt.xlabel('Frequency, hertz')
plt.xlim([0, 200])
plt.ylabel('Amplitude, volts')
plt.title('Signal spectrum (Screen)')
figure = plt.gcf()
figure.set_size_inches(4*1.6, 4)
plt.savefig('Spectrum_Screen.pdf')
```
### Download a waveform (RAW)
The previous example downloaded the samples from the screen buffer. This example takes it further and downloads the data stored in memory. The overall workflow will be similar, but some details are different.
Begin by setting the scope mode to **STOP**.
```
scope.stop()
```
Next, get the information needed to make sense of the signals. I start with the vertical scale.
```
d_voltage_scale_raw = scope.get_channel_scale(1)
print('Vertical scale: %0.3f volts' % d_voltage_scale_raw)
```
The sampling frequency can be downloaded directly from the oscilloscope for this example. The value returned by `:ACQuire:SRATe?` should match the value in the screenshot, highlighted by a red rectangle in the image below.
```
d_fs_raw = float(scope.query(":ACQuire:SRATe?"))
print("Sampling rate: %0.1f Msp/s " % (d_fs_raw/1e6))
Image(filename="RigolSampling.png", width=800)
```
The `memory_depth_internal_currently_shown` property returns the number of samples in raw (or deep) memory, i.e., the number of samples stored in raw memory for the signal currently shown on the screen.
```
i_ns_raw = scope.memory_depth_internal_currently_shown
print('Number of samples: %0.f' % i_ns_raw)
```
In keeping with good practices, I pull the instrument identification again.
```
str_idn_raw = scope.idn
print(str_idn_raw)
```
Next, I enter the Python command to download data from the oscilloscope. Downloading the signal takes a lot of time, on the order of 5-8 minutes for my arrangement. I have found that the cell must be run manually, using `Ctrl-Enter`. Alternatively, the `time.sleep()` function could pause the notebook execution and allow the download to complete.
```
d_ch1_raw = scope.get_waveform_samples(1, mode='RAW')
```
Lastly, I set up the time series for this raw waveform.
```
np_d_time_raw = np.linspace(0,(i_ns_raw-1), i_ns_raw)/d_fs_raw
```
Place the oscilloscope in **RUN** mode.
```
scope.run()
```
Now I can plot this channel signal data. To zoom in and see more detail, change the x-axis limits to `plt.xlim([0, 0.1])` and comment out both `plt.xticks(np.linspace(0, xmax, 13))` and `ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f'))` lines. With these changes, the plot will show the first 100 ms.
```
plt.rcParams['figure.figsize'] = [16, 4]
plt.figure()
plt.plot(np_d_time_raw, d_ch1_raw)
plt.grid()
plt.xlabel('Time, seconds')
xmax = float(i_ns_raw)/d_fs_raw
plt.xlim([0, xmax])
plt.xticks(np.linspace(0, xmax, 13))
ax = plt.gca()
ax.xaxis.set_major_formatter(FormatStrFormatter('%0.2f'))
plt.ylabel('Amplitude, volts')
plt.ylim([-4.0*d_voltage_scale_raw, 4.0 *d_voltage_scale_raw])
plt.title(str_idn_raw + '\n' + 'Raw Channel 1')
figure = plt.gcf()
figure.set_size_inches(4*1.6, 4)
plt.savefig('Timebase_Raw.pdf')
plt.show()
```
#### Spectral analysis (source: RAW)
These commands calculate the single-sided spectrum and labels for the frequency axis for the raw data.
```
cx_y_raw = rfft(d_ch1_raw)/float(i_ns_raw/2.)
d_ws_raw = rfftfreq(i_ns_raw,1./d_fs_raw)
```
Create the plot and display the single-sided spectrum. To replicate the log-scale results in the video, change the second line below to `plt.semilogy(d_ws_raw, abs(cx_y_raw))`. Also, change the fifth line to `plt.xlim([0, 1000])` to set the x-axis limits to 0 to 1000 hertz. The flat section of the spectrum from 200 to 1000 hertz shows the noise floor.
```
plt.figure()
plt.plot(d_ws_raw, abs(cx_y_raw))
plt.grid()
plt.xlabel('Frequency, hertz')
plt.xlim([0, 200])
plt.ylabel('Amplitude, -')
plt.title('Signal spectrum (RAW)')
figure = plt.gcf()
figure.set_size_inches(4*1.6, 4)
plt.savefig('Spectrum_Raw.pdf')
```
### Conclusion
I used Python to pull data from a Rigol DS1054Z and plot both timebase and spectrum domain data in this posting. The notebook belongs to a collection of [short examples](https://github.com/RobotSquirrelProd/Shorts) on Github. Here is the [link](https://github.com/RobotSquirrelProd/Shorts/blob/main/Using%20Python%20and%20a%20Rigol%20DS1054Z%20Oscilloscope%20for%20Spectral%20Analysis.ipynb) to the Jupyter notebook. I hope you find the notes helpful, and I look forward to reading your comments.
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>SciPy 2020 Tutorial Index</h2></div>
<div class="alert alert-warning" role="alert"> <strong>NOTE:</strong> This material is subject to change before the tutorial begins. Check out the <a href="https://github.com/pyviz/holoviz/tree/scipy20">scipy20 tag</a> once the tutorial date approaches to access the materials included in the tutorial. For the latest version of the tutorial, visit <a href="https://holoviz.org/tutorial">holoviz.org</a>.
</div>
This tutorial will take you through all of the steps involved in exploring data of many different types and sizes, building simple and complex figures, working with billions of data points, adding interactive behavior, widgets and controls, and deploying full dashboards and applications.
We'll be using a wide range of open-source Python libraries, but focusing on the tools we help maintain as part of the HoloViz project:
[Panel](https://panel.pyviz.org),
[hvPlot](https://hvplot.pyviz.org),
[HoloViews](http://holoviews.org),
[GeoViews](http://geoviews.org),
[Datashader](http://datashader.org),
[Param](http://param.pyviz.org), and
[Colorcet](http://colorcet.pyviz.org).
<img width="800" src="../assets/pn_hp_hv_gv_ds_pa_cs.png"/>
These tools were previously part of [PyViz.org](http://pyviz.org), but have been pulled out into [HoloViz.org](http://holoviz.org) to allow PyViz to be fully neutral and general.
The HoloViz tools have been carefully designed to work together with each other and with the SciPy ecosystem to address a very wide range of data-analysis and visualization tasks, making it simple to discover, understand, and communicate the important properties of your data.
<img align="center" src="../assets/earthquakes.png"></img>
This notebook serves as the homepage of the tutorial, including a table of contents letting you launch each tutorial section.
## Index and Schedule
- **Introduction and setup**
* **5 min** [Setup](./00_Setup.ipynb): Setting up the environment and data files.
* **20 min** [Overview](./01_Overview.ipynb): Overview of the HoloViz tools, philosophy, and approach.
- **Building dashboards using Panel**
* **15 min** [Building_Panels](./02_Building_Panels.ipynb): How to make apps and dashboards from Python objects.
* **5 min** [*Exercise 1*](./exercises/Building_a_Dashboard.ipynb#Exercise-1): Using a mix of visualizable types, create a panel and serve it.
* **10 min** [Interlinked Panels](./03_Interlinked_Panels.ipynb): Customizing linkages between widgets and displayable objects.
* **5 min** [*Exercise 2*](./exercises/Building_a_Dashboard.ipynb#Exercise-2): Add widgets to control your dashboard.
* **10 min** *Break*
- **The `.plot` API: a data-centric approach to visualization**
* **30 min** [Basic Plotting](./04_Basic_Plotting.ipynb): Quick introduction to the `.plot` interface.
* **10 min** [Composing Plots](./05_Composing_Plots.ipynb): Overlaying and laying out `.hvplot` outputs to show relationships.
* **10 min** [*Exercise 3*](./exercises/Plotting.ipynb#Exercise-3): Add some `.plot` or `.hvplot` visualizations to your dashboard.
* **10 min** *Break*
- **Custom interactivity**
* **25 min** [Interlinked Plots](./06_Interlinked_Plots.ipynb): Connecting HoloViews "streams" to customize behavior.
* **10 min** [*Exercise 4*](./exercises/Plotting.ipynb#Exercise-4): Add a linked visualization with HoloViews.
- **Working with large datasets**
* **20 min** [Large Data](./07_Large_Data.ipynb): Using Datashader to pre-render data in Python
* **10 min** *Break*
- **Building advanced dashboards**
* **15 min** [Advanced Dashboards](./08_Advanced_Dashboards.ipynb): Using Panel to create an advanced dashboard with linked plots and streams.
* **30 min** [*Exercise 5*](./exercises/Advanced_Dashboarding.ipynb): Build a new dashboard using everything you've learned so far.
## Related links
You will find extensive support material on the websites for each package. You may find these links particularly useful during the tutorial:
* [hvPlot user guide](https://hvplot.pyviz.org/user_guide): Guide to the plots available via `.hvplot()`
* [HoloViews reference gallery](http://holoviews.org/reference/index.html): Visual reference of all HoloViews elements and containers, along with some other components
* [Panel reference gallery](http://panel.pyviz.org/reference/index.html): Visual reference of all panes, layouts and widgets.
* [PyViz Examples](http://examples.pyviz.org): Example projects using HoloViz and other PyViz tools
```
print('Materialisation Data Test')
import os
import compas
from compas.datastructures import Mesh, mesh_bounding_box_xy
from compas.geometry import Vector, Frame, Scale
HERE = os.getcwd()
FILE_I = os.path.join(HERE, 'blocks and ribs_RHINO', 'sessions', 'bm_vertical_equilibrium', 'simple_tripod.rv2')
FILE_O1 = os.path.join(HERE, 'blocks and ribs_RHINO', 'data', 'form.json')
FILE_O2 = os.path.join(HERE, 'blocks and ribs_RHINO', 'data', 'scaled_form.json')
session = compas.json_load(FILE_I)
mesh = Mesh.from_data(session['data']['form'])
```
### Delete extra faces (more than 4 edges) left when the mesh is subdivided with Catmull-Clark or another subdivision scheme that connects the mesh to the ground
```
delete_faces =[]
for fkey in mesh.faces():
if len(mesh.face_vertices(fkey)) > 4:
delete_faces.append(fkey)
for fkey in delete_faces:
mesh.delete_face(fkey)
mesh.remove_unused_vertices()
```
### scale up the form if needed
```
scaled_mesh = mesh.copy()
box_points = mesh_bounding_box_xy(scaled_mesh)
base_mesh = scaled_mesh.from_points(box_points)
centroid = base_mesh.centroid()
#print (centroid)
frame = Frame(centroid,Vector(1,0,0),Vector(0,1,0))
S = Scale.from_factors([100, 100, 100], frame)
scaled_mesh.transform(S)
```
### Visualise and export Initial Mesh
```
mesh.to_json(FILE_O1)
scaled_mesh.to_json(FILE_O2)
print(mesh)
from pythreejs import *
import numpy as np
from IPython.display import display
vertices = []
for face in mesh.faces():
for v in mesh.face_vertices(face):
xyz = mesh.vertex_attributes(v, "xyz")
vertices.append(xyz)
print(vertices)
vertices = BufferAttribute(
array = np.array(vertices,dtype=np.float32),
normalized = False)
geometry = BufferGeometry(
    attributes={'position': vertices})
geometry.exec_three_obj_method('computeVertexNormals')
mesh_3j = Mesh(geometry=geometry,
material=MeshPhongMaterial(color='#0092D2'),
position=[0,0,0])
c = PerspectiveCamera(position = [0, 5, 5], up = [0, 1, 0],
children=[DirectionalLight(color='white', position=[3,5,1], intensity=0.5)])
scene=Scene(children=[mesh_3j,c, AmbientLight(color='#777777')])
renderer = Renderer(camera=c, scene=scene, controls=[OrbitControls(controlling=c)],
width=800, height=600)
display(renderer)
print(geometry)
from pythreejs._example_helper import use_example_model_ids
use_example_model_ids()
BoxGeometry(
width=5,
height=10,
depth=15,
widthSegments=5,
heightSegments=10,
depthSegments=15)
```
* [1.0 - Introduction](#1.0---Introduction)
- [1.1 - Library imports and loading the data from SQL to pandas](#1.1---Library-imports-and-loading-the-data-from-SQL-to-pandas)
* [2.0 - Data Cleaning](#2.0---Data-Cleaning)
- [2.1 - Pre-cleaning, investigating data types](#2.1---Pre-cleaning,-investigating-data-types)
- [2.2 - Dealing with non-numerical values](#2.2---Dealing-with-non-numerical-values)
* [3.0 - Creating New Features](#3.0---Creating-New-Features)
- [3.1 - Creating the 'gender' column](#3.1---Creating-the-'gender'-column)
- [3.2 - Categorizing job titles](#3.2---Categorizing-job-titles)
* [4.0 - Data Analysis and Visualizations](#4.0---Data-Analysis-and-Visualizations)
- [4.1 - Overview of the gender gap](#4.1---Overview-of-the-gender-gap)
- [4.2 - Exploring the year column](#4.2---Exploring-the-year-column)
- [4.3 - Full time vs. part time employees](#4.3---Full-time-vs.-part-time-employees)
- [4.4 - Breaking down the total pay](#4.4---Breaking-down-the-total-pay)
- [4.5 - Breaking down the base pay by job category](#4.5---Breaking-down-the-base-pay-by-job-category)
- [4.6 - Gender representation by job category](#4.6---Gender-representation-by-job-category)
- [4.7 - Significance testing by exact job title](#4.7---Significance-testing-by-exact-job-title)
* [5.0 - San Francisco vs. Newport Beach](#5.0---San-Francisco-vs.-Newport-Beach)
- [5.1 - Part time vs. full time workers](#5.1---Part-time-vs.-full-time-workers)
    - [5.2 - Comparisons by job category](#5.2---Comparisons-by-job-cateogry)
- [5.3 - Gender representation by job category](#5.3---Gender-representation-by-job-category)
* [6.0 - Conclusion](#6.0---Conclusion)
### 1.0 - Introduction
In this notebook, I will focus on data analysis and preprocessing for the gender wage gap. Specifically, I am going to focus on public jobs in the cities of San Francisco and Newport Beach. This data set is publicly available on [Kaggle](https://www.kaggle.com/kaggle/sf-salaries) and [Transparent California](https://transparentcalifornia.com/).
I also created a web application based on this dataset. You can play around with it [here](https://gendergapvisualization.herokuapp.com/). For a complete list of requirements and files used for my web app, check out my GitHub repository [here](https://github.com/sengkchu/gendergapvisualization).
In this notebook, the following questions will be explored:
+ Is there an overall gender wage gap for public jobs in San Francisco?
+ Is the gender gap really 78 cents on the dollar?
+ Is there a gender wage gap for full time employees?
+ Is there a gender wage gap for part time employees?
+ Is there a gender wage gap if the employees were grouped by job categories?
+ Is there a gender wage gap if the employees were grouped by exact job title?
+ If the gender wage gap exists, is the data statistically significant?
+ If the gender wage gap exists, how does the gender wage gap in San Francisco compare with more conservative cities in California?
Lastly, I want to mention that I am not affiliated with any political group, everything I write in this project is based on my perspective of the data alone.
#### 1.1 - Library imports and loading the data from SQL to pandas
The SQL database is about 18 megabytes, which is small enough for my computer to handle. So I've decided to just load the entire database into memory using pandas. However, I created a function that takes in a SQL query and returns the result as a pandas dataframe just in case I need to use SQL queries.
```
import pandas as pd
import numpy as np
import sqlite3
import matplotlib.pyplot as plt
import seaborn as sns
import gender_guesser.detector as gender
import time
import collections
%matplotlib inline
sns.set(font_scale=1.5)
def run_query(query):
with sqlite3.connect('database.sqlite') as conn:
return pd.read_sql(query, conn)
#Read the data from SQL->Pandas
q1 = '''
SELECT * FROM Salaries
'''
data = run_query(q1)
data.head()
```
### 2.0 - Data Cleaning
Fortunately, this data set is already very clean. However, we should still look into every column. Specifically, we are interested in the data types of each column, and check for null values within the rows.
#### 2.1 - Pre-cleaning, investigating data types
Before we do anything to the dataframe, we are going to simply explore the data a little bit.
```
data.dtypes
data['JobTitle'].nunique()
```
There is no gender column, so we'll have to create one. In addition, we'll need to reduce the number of unique values in the `'JobTitle'` column. `'BasePay'`, `'OvertimePay'`, `'OtherPay'`, and `'Benefits'` are all object columns, so we'll need to find a way to convert them into numeric values.
Let's take a look at the rest of the columns using the `.value_counts()` method.
```
data['Year'].value_counts()
data['Notes'].value_counts()
data['Agency'].value_counts()
data['Status'].value_counts()
```
It looks like the data is split into 4 years. The `'Notes'` column is empty for 148654 rows, so we should just remove it. The `'Agency'` column is also not useful, because we already know the data is for San Francisco.
The `'Status'` column shows a separation for full time employees and part time employees. We should leave that alone for now.
#### 2.2 - Dealing with non-numerical values
Let's tackle the object columns first, we are going to convert everything into integers using the `pandas.to_numeric()` function. If we run into any errors, the returned value will be NaN.
```
def process_pay(df):
cols = ['BasePay','OvertimePay', 'OtherPay', 'Benefits']
print('Checking for nulls:')
for col in cols:
df[col] = pd.to_numeric(df[col], errors ='coerce')
print(len(col)*'-')
print(col)
print(len(col)*'-')
print(df[col].isnull().value_counts())
return df
data = process_pay(data.copy())
```
Looking at our results above, we found 609 null values in `BasePay` and 36163 null values in `Benefits`. We are going to drop the rows with null values in `BasePay`. Not everyone will receive benefits for their job, so it makes more sense to fill in the null values for `Benefits` with zeroes.
```
def process_pay2(df):
df['Benefits'] = df['Benefits'].fillna(0)
df = df.dropna()
print(df['BasePay'].isnull().value_counts())
return df
data = process_pay2(data)
```
Lastly, let's drop the `Agency` and `Notes` columns as they do not provide any information.
```
data = data.drop(columns=['Agency', 'Notes'])
```
### 3.0 - Creating New Features
Unfortunately, this data set does not include demographic information. Since this project is focused on investigating the gender wage gap, we need a way to classify a person's gender. Furthermore, the `JobTitle` column has 2159 unique values. We'll need to simplify this column.
#### 3.1 - Creating the 'gender' column
Due to the limitations of this data set, we'll have to infer the gender of each employee from their first name. The `gender_guesser` library is very useful for this.
```
#Create the 'Gender' column based on employee's first name.
d = gender.Detector(case_sensitive=False)
data['FirstName'] = data['EmployeeName'].str.split().apply(lambda x: x[0])
data['Gender'] = data['FirstName'].apply(lambda x: d.get_gender(x))
data['Gender'].value_counts()
```
We are just going to remove employees with ambiguous or gender neutral first names from our analysis.
```
#Retain data with 'male' and 'female' names.
male_female_only = data[(data['Gender'] == 'male') | (data['Gender'] == 'female')].copy()
male_female_only['Gender'].value_counts()
```
#### 3.2 - Categorizing job titles
Next, we'll have to simplify the `JobTitle` column. To do this, we'll use a brute-force method: an ordered dictionary of keywords and their associated job categories, with generic titles at the bottom of the dictionary and more specific titles at the top. Then we apply it to the column with the `.map()` method.
I used the same labels as this [kernel](https://www.kaggle.com/mevanoff24/data-exploration-predicting-salaries) on Kaggle, but I heavily modified the code for readability.
```
def find_job_title2(row):
#Prioritize specific titles on top
titles = collections.OrderedDict([
('Police',['police', 'sherif', 'probation', 'sergeant', 'officer', 'lieutenant']),
('Fire', ['fire']),
('Transit',['mta', 'transit']),
('Medical',['anesth', 'medical', 'nurs', 'health', 'physician', 'orthopedic', 'pharm', 'care']),
('Architect', ['architect']),
('Court',['court', 'legal']),
('Mayor Office', ['mayoral']),
('Library', ['librar']),
('Public Works', ['public']),
('Attorney', ['attorney']),
('Custodian', ['custodian']),
('Gardener', ['garden']),
('Recreation Leader', ['recreation']),
('Automotive',['automotive', 'mechanic', 'truck']),
('Engineer',['engineer', 'engr', 'eng', 'program']),
('General Laborer',['general laborer', 'painter', 'inspector', 'carpenter', 'electrician', 'plumber', 'maintenance']),
('Food Services', ['food serv']),
('Clerk', ['clerk']),
('Porter', ['porter']),
('Airport Staff', ['airport']),
('Social Worker',['worker']),
('Guard', ['guard']),
('Assistant',['aide', 'assistant', 'secretary', 'attendant']),
('Analyst', ['analy']),
('Manager', ['manager'])
])
#Loops through the dictionaries
for group, keywords in titles.items():
for keyword in keywords:
if keyword in row.lower():
return group
return 'Other'
start_time = time.time()
male_female_only["Job_Group"] = male_female_only["JobTitle"].map(find_job_title2)
print("--- Run Time: %s seconds ---" % (time.time() - start_time))
male_female_only['Job_Group'].value_counts()
```
### 4.0 - Data Analysis and Visualizations
In this section, we are going to use the data to answer the questions stated in the [introduction section](#1.0---Introduction).
#### 4.1 - Overview of the gender gap
Let's begin by splitting the data set in half, one for females and one for males. Then we'll plot the overall income distribution using kernel density estimation with a Gaussian kernel.
```
fig = plt.figure(figsize=(10, 5))
male_only = male_female_only[male_female_only['Gender'] == 'male']
female_only = male_female_only[male_female_only['Gender'] == 'female']
ax = sns.kdeplot(male_only['TotalPayBenefits'], color ='Blue', label='Male', shade=True)
ax = sns.kdeplot(female_only['TotalPayBenefits'], color='Red', label='Female', shade=True)
plt.yticks([])
plt.title('Overall Income Distribution')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay + Benefits ($)')
plt.xlim(0, 350000)
plt.show()
```
The income distribution plot is bimodal. In addition, we see a gender wage gap in favor of males between roughly 110,000 and 275,000 dollars. But this plot doesn't capture the whole story; we need to break down the data some more. First, though, let's explore the percentage of employees by gender.
```
fig = plt.figure(figsize=(5, 5))
colors = ['#AFAFF5', '#EFAFB5']
labels = ['Male', 'Female']
sizes = [len(male_only), len(female_only)]
explode = (0.05, 0)
sns.set(font_scale=1.5)
ax = plt.pie(sizes, labels=labels, explode=explode, colors=colors, shadow=True, startangle=90, autopct='%1.f%%')
plt.title('Estimated Percentages of Employees: Overall')
plt.show()
```
Another key factor we have to consider is the number of employees. How do we know if there are simply more men working at higher-paying jobs? How can we determine if social injustice has occurred?
The chart above only tells us the total percentage of employees across all job categories, but it does give us an overview of the data.
#### 4.2 - Exploring the year column
The data set contain information on employees between 2011-2014. Let's take a look at an overview of the income based on the `Year` column regardless of gender.
```
data_2011 = male_female_only[male_female_only['Year'] == 2011]
data_2012 = male_female_only[male_female_only['Year'] == 2012]
data_2013 = male_female_only[male_female_only['Year'] == 2013]
data_2014 = male_female_only[male_female_only['Year'] == 2014]
plt.figure(figsize=(10,7.5))
ax = plt.boxplot([data_2011['TotalPayBenefits'].values, data_2012['TotalPayBenefits'].values, \
data_2013['TotalPayBenefits'].values, data_2014['TotalPayBenefits'].values])
plt.ylim(0, 350000)
plt.xticks([1, 2, 3, 4], ['2011', '2012', '2013', '2014'])
plt.xlabel('Year')
plt.ylabel('Total Pay + Benefits ($)')
plt.tight_layout()
```
From the boxplots, we see that the total pay increases every year, so we'll have to consider inflation in our analysis. In addition, it is very possible for an employee to stay at their job for multiple years, and we don't want to double-sample those employees.
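One quick way to check for repeat employees is to count how many distinct years each name appears in. Here is a sketch on a toy frame (the names and values are made up; the real check would run on `male_female_only`):

```python
import pandas as pd

# Toy stand-in for the salaries data: names repeated across years
df = pd.DataFrame({
    'EmployeeName': ['ANA LEE', 'ANA LEE', 'BOB WU', 'ANA LEE', 'BOB WU'],
    'Year': [2011, 2012, 2012, 2013, 2013],
})

# Employees appearing in multiple years would be counted several times
# if we pooled all four years together
years_per_name = df.groupby('EmployeeName')['Year'].nunique()
repeaters = years_per_name[years_per_name > 1]
print(repeaters)
```

Restricting the analysis to a single year, as we do next, sidesteps the problem entirely.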
To simplify the data for the purpose of investigating the gender gap, it makes more sense to choose only one year for our analysis. From our data exploration, we noticed that the majority of the `Status` column was blank. Let's break the data down by year using the `.value_counts()` method.
```
years = ['2011', '2012', '2013', '2014']
all_data = [data_2011, data_2012, data_2013, data_2014]
for i in range(4):
print(len(years[i])*'-')
print(years[i])
print(len(years[i])*'-')
print(all_data[i]['Status'].value_counts())
```
The status of the employee is critical to our analysis, and only year 2014 has this information, so it makes sense to focus our analysis on 2014.
```
data_2014_FT = data_2014[data_2014['Status'] == 'FT']
data_2014_PT = data_2014[data_2014['Status'] == 'PT']
```
#### 4.3 - Full time vs. part time employees
Let's take a look at the kernel density estimation plot for part time and full time employees.
```
fig = plt.figure(figsize=(10, 5))
ax = sns.kdeplot(data_2014_PT['TotalPayBenefits'], color = 'Orange', label='Part Time Workers', shade=True)
ax = sns.kdeplot(data_2014_FT['TotalPayBenefits'], color = 'Green', label='Full Time Workers', shade=True)
plt.yticks([])
plt.title('Part Time Workers vs. Full Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay + Benefits ($)')
plt.xlim(0, 350000)
plt.show()
```
If we split the data by employment status, we can see that the kernel density plot is no longer bimodal. Next, let's see how these two plots look if we separate the data by gender.
```
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(hspace=.5)
#Generate the top plot
male_only = data_2014_FT[data_2014_FT['Gender'] == 'male']
female_only = data_2014_FT[data_2014_FT['Gender'] == 'female']
ax = fig.add_subplot(2, 1, 1)
ax = sns.kdeplot(male_only['TotalPayBenefits'], color ='Blue', label='Male', shade=True)
ax = sns.kdeplot(female_only['TotalPayBenefits'], color='Red', label='Female', shade=True)
plt.title('Full Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay & Benefits ($)')
plt.xlim(0, 350000)
plt.yticks([])
#Generate the bottom plot
male_only = data_2014_PT[data_2014_PT['Gender'] == 'male']
female_only = data_2014_PT[data_2014_PT['Gender'] == 'female']
ax2 = fig.add_subplot(2, 1, 2)
ax2 = sns.kdeplot(male_only['TotalPayBenefits'], color ='Blue', label='Male', shade=True)
ax2 = sns.kdeplot(female_only['TotalPayBenefits'], color='Red', label='Female', shade=True)
plt.title('Part Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay & Benefits ($)')
plt.xlim(0, 350000)
plt.yticks([])
plt.show()
```
For part time workers, the KDE plot is nearly identical for both males and females.
For full time workers, we still see a gender gap. We'll need to break down the data some more.
#### 4.4 - Breaking down the total pay
We used total pay including benefits for the x-axis for the KDE plot in the previous section. Is this a fair way to analyze the data? What if men work more overtime hours than women? Can we break down the data some more?
```
male_only = data_2014_FT[data_2014_FT['Gender'] == 'male']
female_only = data_2014_FT[data_2014_FT['Gender'] == 'female']
fig = plt.figure(figsize=(10, 15))
fig.subplots_adjust(hspace=.5)
#Generate the top plot
ax = fig.add_subplot(3, 1, 1)
ax = sns.kdeplot(male_only['OvertimePay'], color ='Blue', label='Male', shade=True)
ax = sns.kdeplot(female_only['OvertimePay'], color='Red', label='Female', shade=True)
plt.title('Full Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Overtime Pay ($)')
plt.xlim(0, 60000)
plt.yticks([])
#Generate the middle plot
ax2 = fig.add_subplot(3, 1, 2)
ax2 = sns.kdeplot(male_only['Benefits'], color ='Blue', label='Male', shade=True)
ax2 = sns.kdeplot(female_only['Benefits'], color='Red', label='Female', shade=True)
plt.ylabel('Density of Employees')
plt.xlabel('Benefits Only ($)')
plt.xlim(0, 75000)
plt.yticks([])
#Generate the bottom plot
ax3 = fig.add_subplot(3, 1, 3)
ax3 = sns.kdeplot(male_only['BasePay'], color ='Blue', label='Male', shade=True)
ax3 = sns.kdeplot(female_only['BasePay'], color='Red', label='Female', shade=True)
plt.ylabel('Density of Employees')
plt.xlabel('Base Pay Only ($)')
plt.xlim(0, 300000)
plt.yticks([])
plt.show()
```
We see a gender gap in all three plots above. Looks like we'll have to dig even deeper and analyze the data by job categories.
But first, let's take a look at the overall correlation for the data set.
```
data_2014_FT.corr()
```
The correlation table above uses Pearson's R to determine the values. The `BasePay` and `Benefits` column are very closely related. We can visualize this relationship using a scatter plot.
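For reference, Pearson's r for two columns $x$ and $y$ is:

$$ r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\sqrt{\sum_{i}(y_i - \bar{y})^2}} $$

A value close to 1 means the two columns increase together, which is what the table shows for `BasePay` and `Benefits`.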
```
fig = plt.figure(figsize=(10, 5))
ax = plt.scatter(data_2014_FT['BasePay'], data_2014_FT['Benefits'])
plt.ylabel('Benefits ($)')
plt.xlabel('Base Pay ($)')
plt.show()
```
This makes a lot of sense, because an employee's benefits are based on a percentage of their base pay. The San Francisco Human Resources department includes this information on their website [here](http://sfdhr.org/benefits-overview).
As we move further into our analysis of the data, it makes the most sense to focus on the `BasePay` column, since both `Benefits` and `OvertimePay` are dependent on `BasePay`.
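As a quick sketch of that proportional relationship, a least-squares fit recovers the benefit rate from noisy data. The 35% rate and salary range below are assumptions for illustration only; the real fit would use the `BasePay` and `Benefits` columns above:

```python
import numpy as np

rng = np.random.default_rng(0)
base_pay = rng.uniform(40_000, 200_000, size=500)            # assumed salary range
benefits = 0.35 * base_pay + rng.normal(0, 2_000, size=500)  # assumed ~35% benefit rate

# Fit a line: the slope estimates the benefit rate
slope, intercept = np.polyfit(base_pay, benefits, 1)
print(f'Estimated benefit rate: {slope:.2f}')
```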
#### 4.5 - Breaking down the base pay by job category
Next we'll analyze the base pay of full time workers by job category.
```
pal = sns.diverging_palette(0, 255, n=2)
ax = sns.factorplot(x='BasePay', y='Job_Group', hue='Gender', data=data_2014_FT,
size=10, kind="bar", palette=pal, ci=None)
plt.title('Full Time Workers')
plt.xlabel('Base Pay ($)')
plt.ylabel('Job Group')
plt.show()
```
At a glance, we can't really draw any conclusive statements about the gender wage gap: some job categories favor females, some favor males, and it really depends on which job group the employee is in. Maybe it makes more sense to calculate the difference between these two bars.
```
salaries_by_group = pd.pivot_table(data = data_2014_FT,
values = 'BasePay',
columns = 'Job_Group', index='Gender',
aggfunc = np.mean)
count_by_group = pd.pivot_table(data = data_2014_FT,
values = 'Id',
columns = 'Job_Group', index='Gender',
aggfunc = len)
salaries_by_group
fig = plt.figure(figsize=(10, 15))
sns.set(font_scale=1.5)
differences = (salaries_by_group.loc['female'] - salaries_by_group.loc['male'])*100/salaries_by_group.loc['male']
labels = differences.sort_values().index
x = differences.sort_values()
y = [i for i in range(len(differences))]
palette = sns.diverging_palette(240, 10, n=28, center ='dark')
ax = sns.barplot(x, y, orient = 'h', palette = palette)
#Draws the two arrows
bbox_props = dict(boxstyle="rarrow,pad=0.3", fc="white", ec="black", lw=1)
t = plt.text(5.5, 12, "Higher pay for females", ha="center", va="center", rotation=0,
size=15,
bbox=bbox_props)
bbox_props2 = dict(boxstyle="larrow,pad=0.3", fc="white", ec="black", lw=1)
t = plt.text(-5.5, 12, "Higher pay for males", ha="center", va="center", rotation=0,
size=15,
bbox=bbox_props2)
#Labels each bar with the percentage of females
percent_labels = count_by_group[labels].iloc[0]*100 \
/(count_by_group[labels].iloc[0] + count_by_group[labels].iloc[1])
for i in range(len(ax.patches)):
p = ax.patches[i]
width = p.get_width()*1+1
ax.text(15,
p.get_y()+p.get_height()/2+0.3,
'{:1.0f}'.format(percent_labels[i])+' %',
ha="center")
ax.text(15, -1+0.3, 'Female Representation',
ha="center", fontname='Arial', rotation = 0)
plt.yticks(range(len(differences)), labels)
plt.title('Full Time Workers (Base Pay)')
plt.xlabel('Mean Percent Difference in Pay (Females - Males)')
plt.xlim(-11, 11)
plt.show()
```
I believe this is a better way to represent the gender wage gap. I calculated the mean difference between female and male pay based on job categories. Then I converted the values into a percentage by using this formula:
$$ \text{Mean Percent Difference} = \frac{\text{(Female Mean Pay - Male Mean Pay)*100}} {\text{Male Mean Pay}} $$
The theory stating that women make 78 cents for every dollar men make implies a 22% pay difference. None of these percentages were more than 10%, and not all of them showed favoritism towards males. However, we should keep in mind that this data set only applies to San Francisco public jobs. We should also keep in mind that we do not have access to job-experience data, which would directly correlate with base pay.
In addition, I included a short table of female representation for each job group on the right side of the graph. We'll dig further into this in the next section.
#### 4.6 - Gender representation by job category
```
contingency_table = pd.crosstab(
data_2014_FT['Gender'],
data_2014_FT['Job_Group'],
margins = True
)
contingency_table
#Assigns the frequency values
femalecount = contingency_table.iloc[0][0:-1].values
malecount = contingency_table.iloc[1][0:-1].values
totals = contingency_table.iloc[2][0:-1]
femalepercentages = femalecount*100/totals
malepercentages = malecount*100/totals
malepercentages=malepercentages.sort_values(ascending=True)
femalepercentages=femalepercentages.sort_values(ascending=False)
length = range(len(femalepercentages))
#Plots the bar chart
fig = plt.figure(figsize=(10, 12))
sns.set(font_scale=1.5)
p1 = plt.barh(length, malepercentages.values, 0.55, label='Male', color='#AFAFF5')
p2 = plt.barh(length, femalepercentages, 0.55, left=malepercentages, color='#EFAFB5', label='Female')
labels = malepercentages.index
plt.yticks(range(len(malepercentages)), labels)
plt.xticks([0, 25, 50, 75, 100], ['0 %', '25 %', '50 %', '75 %', '100 %'])
plt.xlabel('Percentage of Males')
plt.title('Gender Representation by Job Group')
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc=3,
ncol=2, mode="expand", borderaxespad=0)
plt.show()
```
The chart above does not include any information about pay; I wanted to show an overview of gender representation by job category. Female representation is lowest for automotive jobs at under 1%, whereas it is highest for medical jobs at 73%.
#### 4.7 - Significance testing by exact job title
So what if breaking down the wage gap by job category is not good enough? Should we break down the gender gap by exact job title? After all, the argument is for equal pay for equal work, and we can assume equal work if the job titles are exactly the same.
We can use Welch's t-test to determine if there is a statistically significant difference between male and female wages. Welch's t-test is very robust, as it doesn't assume equal variance or equal sample size. It does, however, assume a normal distribution, which is well represented by the KDE plots. I talk about this in detail in my blog post [here](https://codingdisciple.com/hypothesis-testing-welch-python.html).
Let's state our null and alternative hypothesis:
$ H_0 : \text{There is no statistically significant relationship between gender and pay.} $
$ H_a : \text{There is a statistically significant relationship between gender and pay.} $
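The Welch's t statistic that this test computes is:

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} $$

where $\bar{x}_i$, $s_i^2$, and $n_i$ are each group's sample mean, variance, and size. This is what `scipy.stats.ttest_ind_from_stats` with `equal_var=False` evaluates from the summary statistics.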
We are going to use only job titles with more than 100 employees, and job titles with more than 30 females and 30 males, for this t-test. Using a for loop, we'll perform Welch's t-test on every job title that matches our criteria.
```
from scipy import stats
#Significance testing by job title
job_titles = data_2014['JobTitle'].value_counts(dropna=True)
job_titles_over_100 = job_titles[job_titles > 100 ]
t_scores = {}
for title,count in job_titles_over_100.iteritems():
male_pay = pd.to_numeric(male_only[male_only['JobTitle'] == title]['BasePay'])
female_pay = pd.to_numeric(female_only[female_only['JobTitle'] == title]['BasePay'])
if female_pay.shape[0] < 30:
continue
if male_pay.shape[0] < 30:
continue
t_scores[title] = stats.ttest_ind_from_stats(
mean1=male_pay.mean(), std1=(male_pay.std()), nobs1= male_pay.shape[0], \
mean2=female_pay.mean(), std2=(female_pay.std()), nobs2=female_pay.shape[0], \
equal_var=False)
for key, value in t_scores.items():
if value[1] < 0.05:
print(len(key)*'-')
print(key)
print(len(key)*'-')
print(t_scores[key])
print(' ')
print('Male: {}'.format((male_only[male_only['JobTitle'] == key]['BasePay']).mean()))
print('sample size: {}'.format(male_only[male_only['JobTitle'] == key].shape[0]))
print(' ')
print('Female: {}'.format((female_only[female_only['JobTitle'] == key]['BasePay']).mean()))
print('sample size: {}'.format(female_only[female_only['JobTitle'] == key].shape[0]))
len(t_scores)
```
Out of the 25 jobs that were tested using Welch's t-test, 5 jobs resulted in a p-value of less than 0.05. However, not all of them showed favoritism towards males: 'Registered Nurse' and 'Senior Clerk' both showed an average pay in favor of females. We should still take the t-test results with a grain of salt, since we do not have data on the work experience of the employees. Maybe female nurses have more work experience than males; maybe male transit operators have more work experience than females. We don't actually know. Since `BasePay` is a function of work experience, without this critical piece of information we cannot draw any conclusions from the t-test alone. All we know is that a statistically significant difference exists.
### 5.0 - San Francisco vs. Newport Beach
Let's take a look at a more conservative city, such as Newport Beach. This data can be downloaded at Transparent California [here](https://transparentcalifornia.com/salaries/2016/newport-beach/).
We can process the data similar to the San Francisco data set. The following code performs the following:
+ Read the data using pandas
+ Create the `Job_Group` column
+ Create the `Gender` column
+ Create two new dataframes: one for part time workers and one for full time workers
```
#Reads in the data
nb_data = pd.read_csv('newport-beach-2016.csv')
#Creates job groups
def find_job_title_nb(row):
titles = collections.OrderedDict([
('Police',['police', 'sherif', 'probation', 'sergeant', 'officer', 'lieutenant']),
('Fire', ['fire']),
('Transit',['mta', 'transit']),
('Medical',['anesth', 'medical', 'nurs', 'health', 'physician', 'orthopedic', 'pharm', 'care']),
('Architect', ['architect']),
('Court',['court', 'legal']),
('Mayor Office', ['mayoral']),
('Library', ['librar']),
('Public Works', ['public']),
('Attorney', ['attorney']),
('Custodian', ['custodian']),
('Gardener', ['garden']),
('Recreation Leader', ['recreation']),
('Automotive',['automotive', 'mechanic', 'truck']),
('Engineer',['engineer', 'engr', 'eng', 'program']),
('General Laborer',['general laborer', 'painter', 'inspector', 'carpenter', 'electrician', 'plumber', 'maintenance']),
('Food Services', ['food serv']),
('Clerk', ['clerk']),
('Porter', ['porter']),
('Airport Staff', ['airport']),
('Social Worker',['worker']),
('Guard', ['guard']),
('Assistant',['aide', 'assistant', 'secretary', 'attendant']),
('Analyst', ['analy']),
('Manager', ['manager'])
])
#Loops through the dictionaries
for group, keywords in titles.items():
for keyword in keywords:
if keyword in row.lower():
return group
return 'Other'
start_time = time.time()
nb_data["Job_Group"] = nb_data["Job Title"].map(find_job_title_nb)  # was data["JobTitle"], which read from the SF dataframe
#Create the 'Gender' column based on employee's first name.
d = gender.Detector(case_sensitive=False)
nb_data['FirstName'] = nb_data['Employee Name'].str.split().apply(lambda x: x[0])
nb_data['Gender'] = nb_data['FirstName'].apply(lambda x: d.get_gender(x))
nb_data['Gender'].value_counts()
#Retain data with 'male' and 'female' names.
nb_male_female_only = nb_data[(nb_data['Gender'] == 'male') | (nb_data['Gender'] == 'female')]
nb_male_female_only['Gender'].value_counts()
#Separates full time/part time data
nb_data_FT = nb_male_female_only[nb_male_female_only['Status'] == 'FT']
nb_data_PT = nb_male_female_only[nb_male_female_only['Status'] == 'PT']
nb_data_FT.head()
```
#### 5.1 - Part time vs. full time workers
```
fig = plt.figure(figsize=(10, 5))
nb_male_only = nb_data_PT[nb_data_PT['Gender'] == 'male']
nb_female_only = nb_data_PT[nb_data_PT['Gender'] == 'female']
ax = fig.add_subplot(1, 1, 1)
ax = sns.kdeplot(nb_male_only['Total Pay & Benefits'], color ='Blue', label='Male', shade=True)
ax = sns.kdeplot(nb_female_only['Total Pay & Benefits'], color='Red', label='Female', shade=True)
plt.title('Newport Beach: Part Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay + Benefits ($)')
plt.xlim(0, 400000)
plt.yticks([])
plt.show()
```
As in San Francisco, the part time pay distributions for males and females in Newport Beach are nearly identical.
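The "nearly identical" claim can also be made quantitative with a two-sample Kolmogorov-Smirnov test. A sketch on synthetic samples (the real dataframes are not reproduced here):

```python
# Hypothetical check: a two-sample KS test quantifies how far apart two
# empirical distributions are. Data here is synthetic and drawn from the
# same distribution, so the KS statistic should be small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_pay = rng.normal(30000, 8000, 500)    # synthetic part-time pay
female_pay = rng.normal(30000, 8000, 500)  # same distribution

ks_stat, p_value = stats.ks_2samp(male_pay, female_pay)
print(ks_stat, p_value)
```

A small KS statistic (with a large p-value) means the two samples are consistent with a single underlying distribution.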
Let's take a look at the full time workers.
```
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(hspace=.5)
#Generate the top chart
nb_male_only = nb_data_FT[nb_data_FT['Gender'] == 'male']
nb_female_only = nb_data_FT[nb_data_FT['Gender'] == 'female']
ax = fig.add_subplot(2, 1, 1)
ax = sns.kdeplot(nb_male_only['Total Pay & Benefits'], color ='Blue', label='Male', shade=True)
ax = sns.kdeplot(nb_female_only['Total Pay & Benefits'], color='Red', label='Female', shade=True)
plt.title('Newport Beach: Full Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay + Benefits ($)')
plt.xlim(0, 400000)
plt.yticks([])
#Generate the bottom chart
male_only = data_2014_FT[data_2014_FT['Gender'] == 'male']
female_only = data_2014_FT[data_2014_FT['Gender'] == 'female']
ax2 = fig.add_subplot(2, 1, 2)
ax2 = sns.kdeplot(male_only['TotalPayBenefits'], color ='Blue', label='Male', shade=True)
ax2 = sns.kdeplot(female_only['TotalPayBenefits'], color='Red', label='Female', shade=True)
plt.title('San Francisco: Full Time Workers')
plt.ylabel('Density of Employees')
plt.xlabel('Total Pay + Benefits ($)')
plt.xlim(0, 400000)
plt.yticks([])
plt.show()
```
The KDE plot for Newport Beach full time workers has lower kurtosis than the one for San Francisco, and the gender wage gap looks wider for Newport Beach workers than for San Francisco workers. However, these two plots do not tell the full story; we need to break the data down by job category.
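The kurtosis comparison can be computed directly rather than judged by eye. A sketch on synthetic distributions (stand-ins for the real pay columns):

```python
# Sketch: scipy's kurtosis uses Fisher's definition (normal -> 0).
# A Laplace sample is more peaked than a uniform sample, so its
# kurtosis is higher. Numbers are synthetic stand-ins for pay data.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
peaked = rng.laplace(100000, 20000, 1000)  # heavy tails, sharp peak
flat = rng.uniform(40000, 160000, 1000)    # flat, no peak

print(kurtosis(peaked), kurtosis(flat))
```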
#### 5.2 - Comparisons by job category
```
nb_salaries_by_group = pd.pivot_table(data = nb_data_FT,
values = 'Base Pay',
columns = 'Job_Group', index='Gender',
aggfunc = np.mean,)
nb_salaries_by_group
fig = plt.figure(figsize=(10, 7.5))
sns.set(font_scale=1.5)
differences = (nb_salaries_by_group.loc['female'] - nb_salaries_by_group.loc['male'])*100/nb_salaries_by_group.loc['male']
nb_labels = differences.sort_values().index
x = differences.sort_values()
y = [i for i in range(len(differences))]
nb_palette = sns.diverging_palette(240, 10, n=9, center ='dark')
ax = sns.barplot(x, y, orient = 'h', palette = nb_palette)
plt.yticks(range(len(differences)), nb_labels)
plt.title('Newport Beach: Full Time Workers (Base Pay)')
plt.xlabel('Mean Percent Difference in Pay (Females - Males)')
plt.xlim(-25, 25)
plt.show()
```
Most of these jobs show a higher average pay for males. The only job category where females were paid more on average was 'Manager'. Some categories do not contain a single female, so the difference cannot be calculated for them. We should create a contingency table to check the sample size of our data.
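The metric on the x-axis is a simple relative difference; pulled out as a standalone function (the numbers in the example are hypothetical):

```python
# The percent-difference metric behind the bar chart above.
def pct_diff(female_mean, male_mean):
    """Mean percent difference in pay (females - males), relative to males."""
    return (female_mean - male_mean) * 100.0 / male_mean

# Hypothetical averages: females at $95k vs males at $100k -> -5%.
print(pct_diff(95000, 100000))  # -> -5.0
```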
#### 5.3 - Gender representation by job category
```
nb_contingency_table = pd.crosstab(
nb_data_FT['Gender'],
nb_data_FT['Job_Group'],
margins = True
)
nb_contingency_table
```
The number of public jobs is much lower in Newport Beach compared to San Francisco. With only 3 female managers working full time in Newport Beach, we can't really say female managers make more money on average than male managers.
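To see why a sample of 3 is hopeless, consider how the width of a confidence interval for a mean scales with sample size. A sketch with a hypothetical $20k standard deviation in pay:

```python
# Sketch: half-width of the t-based 95% confidence interval for a mean.
# With n=3 the interval is enormous; with n=100 it is tight.
import math
from scipy import stats

def ci_half_width(std, n, confidence=0.95):
    """Half-width of the t-based confidence interval for a sample mean."""
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return t_crit * std / math.sqrt(n)

print(ci_half_width(20000, 3))    # roughly +/- $49.7k
print(ci_half_width(20000, 100))  # roughly +/- $4.0k
```

With only 3 observations, the uncertainty around the mean is larger than most of the pay differences we are trying to detect.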
```
#Assigns the frequency values
nb_femalecount = nb_contingency_table.iloc[0][0:-1].values
nb_malecount = nb_contingency_table.iloc[1][0:-1].values
nb_totals = nb_contingency_table.iloc[2][0:-1]
nb_femalepercentages = nb_femalecount*100/nb_totals
nb_malepercentages = nb_malecount*100/nb_totals
nb_malepercentages=nb_malepercentages.sort_values(ascending=True)
nb_femalepercentages=nb_femalepercentages.sort_values(ascending=False)
nb_length = range(len(nb_malepercentages))
#Plots the bar chart
fig = plt.figure(figsize=(10, 10))
sns.set(font_scale=1.5)
p1 = plt.barh(nb_length, nb_malepercentages.values, 0.55, label='Male', color='#AFAFF5')
p2 = plt.barh(nb_length, nb_femalepercentages, 0.55, left=nb_malepercentages, color='#EFAFB5', label='Female')
labels = nb_malepercentages.index
plt.yticks(range(len(nb_malepercentages)), labels)
plt.xticks([0, 25, 50, 75, 100], ['0 %', '25 %', '50 %', '75 %', '100 %'])
plt.xlabel('Percentage of Males')
plt.title('Gender Representation by Job Group')
plt.legend(bbox_to_anchor=(0, 1, 1, 0), loc=3,
ncol=2, mode="expand", borderaxespad=0)
plt.show()
fig = plt.figure(figsize=(10, 5))
colors = ['#AFAFF5', '#EFAFB5']
labels = ['Male', 'Female']
sizes = [len(nb_male_only), len(nb_female_only)]
explode = (0.05, 0)
sns.set(font_scale=1.5)
ax = fig.add_subplot(1, 2, 1)
ax = plt.pie(sizes, labels=labels, explode=explode, colors=colors, shadow=True, startangle=90, autopct='%1.f%%')
plt.title('Newport Beach: Full Time')
sizes = [len(male_only), len(female_only)]
explode = (0.05, 0)
sns.set(font_scale=1.5)
ax2 = fig.add_subplot(1, 2, 2)
ax2 = plt.pie(sizes, labels=labels, explode=explode, colors=colors, shadow=True, startangle=90, autopct='%1.f%%')
plt.title('San Francisco: Full Time')
plt.show()
```
Looking at the plots above, there are fewer females working full time public jobs in Newport Beach than in San Francisco.
### 6.0 - Conclusion
It is very easy to claim there is a gender wage gap and make general statements about it, but the real concern is whether social injustice and discrimination are involved. Yes, there is an overall gender wage gap in both San Francisco and Newport Beach. In both cases, the income distribution for part time employees was nearly identical for males and females.
For full time public positions in San Francisco, an overall gender wage gap can be observed. When the full time positions were broken down to job categories, the gender wage gap went both ways. Some jobs favored men, some favored women. For full time public positions in Newport Beach, the majority of the jobs favored men.
However, we were missing a critical piece of information in this entire analysis. We don't have any information on the job experience of the employees. Maybe the men just had more job experience in Newport Beach, we don't actually know. For San Francisco, we assumed equal experience by comparing employees with the same exact job titles. Only job titles with a size greater than 100 were chosen. Out of the 25 job titles that were selected, 5 of them showed a statistically significant result with the Welch's t-test. Two of those jobs showed an average base pay in favor of females.
Overall, I do not believe the '78 cents to a dollar' claim is a fair statement. It generalizes the data and oversimplifies the problem. There are many hidden factors that are not shown by the data. Maybe women are less likely to ask for a promotion. Maybe women perform really well in the medical world. Maybe men's bodies are more suitable for the police officer role. Maybe women are more organized than men and make better librarians. The list goes on; the point is, we should always be skeptical of what the numbers tell us. Men and women differ on a fundamental level, and claims of social injustice and gender discrimination should be analyzed on a case by case basis.
# Lung damage - linear regression model
```
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn import linear_model
from sklearn.model_selection import train_test_split
import seaborn as sns
import matplotlib.pyplot as plt
from urls import lung_damage_url
#CSV are read into a dataframe
#Data is read from AWS
lung_damage_df = pd.read_csv(lung_damage_url)
lung_damage_df
#We drop individual_id as this doesn't provide any relevant information to the model
lung_damage_df = lung_damage_df.drop(['individual_id'], axis = 'columns')
lung_damage_df
```
### Testing whether the data is normally distributed
```
fig = plt.figure(figsize = (15,20))
ax = fig.gca()
lung_damage_df.hist(ax = ax)
#lung_damage_df.hist()
```
### Several variables are normally distributed, which suggests linear regression is a reasonable choice
## Correlation analysis
```
fig, axes = plt.subplots(ncols=1, nrows=1,figsize=(15,15))
corr_matrix = lung_damage_df.select_dtypes(include=['int64', 'float64']).corr(method = 'pearson')
sns.heatmap(corr_matrix, annot = True)
axes.set_xticklabels(labels=axes.get_xticklabels(),rotation=45)
```
### Lung damage correlates most strongly with: 1. weight, 2. height and 3. cigarettes a week
## Linear regression model 1 - using all features
```
#One hot encoding by dummy variables
sex_dummy = pd.get_dummies(lung_damage_df.sex)
cancer_dummy = pd.get_dummies(lung_damage_df.ancestry_cancer_flag,prefix='cancer')
diabetes_dummy = pd.get_dummies(lung_damage_df.ancestry_diabetes_flag,prefix='diabetes')
overweight_dummy = pd.get_dummies(lung_damage_df.ancestry_overweight_flag,prefix='overweight')
dummies = pd.concat([sex_dummy, cancer_dummy, diabetes_dummy, overweight_dummy], axis = 'columns')
merged_lung_dummies = pd.concat([lung_damage_df, dummies], axis = 'columns')
X = merged_lung_dummies.drop(['sex', 'ancestry_cancer_flag', 'ancestry_diabetes_flag',
'ancestry_overweight_flag', 'F', 'cancer_False', 'diabetes_False',
'overweight_False', 'lung_damage'], axis = 'columns')
y = lung_damage_df['lung_damage'].to_frame()
print("X")
print(X)
print(X.shape)
print("y")
print(y)
print(y.shape)
#Algorithm
l_reg = linear_model.LinearRegression()
#Checking whether the relationship is appropriate for linear regression
#We can only plot the relationship for one feature at a time
plt.scatter(X['weight_in_kg'], y)
plt.show()
#Same check for a second feature: is the relationship still linear?
l_reg = linear_model.LinearRegression()
plt.scatter(X['height_in_meters'], y)
plt.show()
```
### This suggests the data is linear, so linear regression should work well
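A scatter plot only suggests linearity; a residual check makes it concrete. A minimal sketch on synthetic data (names like `X_demo` are placeholders, not the notebook's variables):

```python
# Sketch: if a linear model is appropriate, residuals scatter around
# zero with no structure. Synthetic data, not the lung damage dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X_demo = rng.uniform(1.4, 2.0, size=(200, 1))           # e.g. heights in meters
y_demo = 0.3 * X_demo[:, 0] + rng.normal(0, 0.02, 200)  # linear + noise

reg = LinearRegression().fit(X_demo, y_demo)
residuals = y_demo - reg.predict(X_demo)
print(residuals.mean())  # ~0: OLS with an intercept centers the residuals
```

Plotting `residuals` against the fitted values (rather than just printing the mean) is the usual visual diagnostic.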
```
#separating between the testing and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 1234, shuffle = True)
#train
model = l_reg.fit(X_train, y_train)
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("predictions: ", predictions)
print("R^2: ", l_reg.score(X, y))
print("mse: ", mse)
print("coeff: ", l_reg.coef_)
print("intercept: ", l_reg.intercept_)
#lung_damage_df[lung_damage_df.lung_damage > 0.8]
#lung_damage_df[lung_damage_df.lung_damage < 0.3]
test_de_prueba = [66, 91.9, 1.63, 20, 5, 0, 50, 70, 1, 1, 1, 0]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
```
***The linear regression model predicted a value greater than 1. A simple post-hoc fix may be suitable***
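One simple way to implement that fix is to clip predictions into the valid [0, 1] range, since lung damage is a fraction. A sketch with hypothetical model outputs:

```python
# Sketch of the "arbitrary fix": clamp out-of-range predictions.
import numpy as np

raw_predictions = np.array([1.07, 0.63, -0.02, 0.49])  # hypothetical model output
clipped = np.clip(raw_predictions, 0.0, 1.0)
print(clipped)
```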
```
test_de_prueba = [35, 70.9, 1.75, 2, 5, 0, 50, 60, 1, 0, 0, 0]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
```
***Last two predictions are reasonable***
## Linear regression model 2
### 3 variables: Weight, height and cigarettes per week
```
X= pd.concat([lung_damage_df.weight_in_kg, lung_damage_df.height_in_meters,
lung_damage_df.cigarettes_a_week], axis = 'columns')
y = lung_damage_df['lung_damage'].to_frame()
print("X")
print(X)
print(X.shape)
print("y")
print(y)
print(y.shape)
l_reg = linear_model.LinearRegression()
#separating between the testing and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 1234, shuffle = True)
#train
model = l_reg.fit(X_train, y_train)
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("predictions: ", predictions)
print("R^2: ", l_reg.score(X, y))
print("mse: ", mse)
print("coeff: ", l_reg.coef_)
print("intercept: ", l_reg.intercept_)
test_de_prueba = [91.9, 1.63, 20]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
test_de_prueba = [70.9, 1.75, 2]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
```
### *Linear regression model 1 gave better results*
## Linear regression model 1*
#### Part of the data is held out of the split and used afterwards to test the predictions
```
#One hot encoding by dummy variables
sex_dummy = pd.get_dummies(lung_damage_df.sex)
cancer_dummy = pd.get_dummies(lung_damage_df.ancestry_cancer_flag,prefix='cancer')
diabetes_dummy = pd.get_dummies(lung_damage_df.ancestry_diabetes_flag,prefix='diabetes')
overweight_dummy = pd.get_dummies(lung_damage_df.ancestry_overweight_flag,prefix='overweight')
dummies = pd.concat([sex_dummy, cancer_dummy, diabetes_dummy, overweight_dummy], axis = 'columns')
merged_lung_dummies = pd.concat([lung_damage_df, dummies], axis = 'columns')
X = merged_lung_dummies.drop(['sex', 'ancestry_cancer_flag', 'ancestry_diabetes_flag',
'ancestry_overweight_flag', 'F', 'cancer_False', 'diabetes_False',
'overweight_False', 'lung_damage'], axis = 'columns')
X = X.iloc[:9000] # the last 1000 rows are held out
X_left = lung_damage_df.iloc[9000:10000] # the held-out rows are kept here
y = lung_damage_df['lung_damage'].to_frame()
y = y.iloc[:9000]
print("X")
print(X)
print(X.shape)
print("y")
print(y)
print(y.shape)
#Algorithm
l_reg = linear_model.LinearRegression()
#separating between the testing and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 1785, shuffle = True)
#train
model = l_reg.fit(X_train, y_train)
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("predictions: ", predictions)
print("R^2: ", l_reg.score(X, y))
print("mse: ", mse)
print("coeff: ", l_reg.coef_)
print("intercept: ", l_reg.intercept_)
```
### R-squared is 97%, which is excellent; this data fits the model almost perfectly.
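For reference, the value reported by `.score()` is the coefficient of determination, R^2 = 1 - SS_res / SS_tot. A small self-contained check with hypothetical numbers:

```python
# Sketch of what .score() computes: R^2 = 1 - SS_res / SS_tot.
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([0.2, 0.5, 0.7, 0.9])    # hypothetical lung damage values
y_pred = np.array([0.25, 0.45, 0.72, 0.88])

ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1 - ss_res / ss_tot

print(r2_manual)  # agrees with sklearn's r2_score
print(r2_score(y_true, y_pred))
```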
## Testing predictions vs real
```
X
#lung_damage_df[lung_damage_df.lung_damage > 0.8]
#lung_damage_df[lung_damage_df.lung_damage < 0.3]
X_left
test_de_prueba = [46, 81.5, 1.64, 0, 2, 0, 53.5, 63.9, 1, 0, 0, 0]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
```
***The prediction says 63% of lung damage, in comparison with 65% real data***
```
test_de_prueba = [32, 53.6, 1.60, 12, 5, 4, 28.1, 41.7, 0, 0, 1, 0]
test_de_prueba = [test_de_prueba]
test_de_prueba = np.array(test_de_prueba).reshape(1,-1)
prediction = model.predict(test_de_prueba)
print(prediction)
```
***49% predicted vs 47% real***
## Conclusions:
- The model performs very well when all features are used
- A simple fix that maps any prediction greater than 1 down to 1 would be suitable
- The data was almost certainly generated by a program rather than collected from real patients, which could explain the model's accuracy
- Even so, the model is good and could be improved further by applying a correction on top of the linear regression
```
import tensorflow as tf
import keras
import keras.backend as K
from sklearn.utils import shuffle
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, f1_score
from collections import Counter
from keras import regularizers
from keras.models import Sequential, Model, load_model, model_from_json
from keras.utils import to_categorical
from keras.layers import Input, Dense, Flatten, Reshape, Concatenate, Dropout
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Conv2DTranspose
from keras.layers.normalization import BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from keras.layers.advanced_activations import LeakyReLU
def get_class_weights(y):
counter = Counter(y)
majority = max(counter.values())
return {cls: float(majority/count) for cls, count in counter.items()}
class Estimator:
l2p = 0.001
@staticmethod
def early_layers(inp, fm = (1,3), hid_act_func="relu"):
# Start
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# 1
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
return x
@staticmethod
def late_layers(inp, num_classes, fm = (1,3), act_func="softmax", hid_act_func="relu", b_name="Identifier"):
# 2
x = Conv2D(32, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# 3
x = Conv2D(32, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# End
x = Flatten()(x)
x = Dense(256, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(64, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(num_classes, activation=act_func, name = b_name)(x)
return x
@staticmethod
def build(height, width, num_classes, name, fm = (1,3), act_func="softmax",hid_act_func="relu"):
inp = Input(shape=(height, width, 1))
early = Estimator.early_layers(inp, fm, hid_act_func=hid_act_func)
late = Estimator.late_layers(early, num_classes, fm, act_func=act_func, hid_act_func=hid_act_func)
model = Model(inputs=inp, outputs=late ,name=name)
return model
import numpy as np
import pandas as pd
from pandas.plotting import autocorrelation_plot
import matplotlib.pyplot as plt
def get_ds_infos():
"""
Read the file includes data subject information.
Data Columns:
0: code [1-24]
1: weight [kg]
2: height [cm]
3: age [years]
4: gender [0:Female, 1:Male]
Returns:
A pandas DataFrame that contains information about data subjects' attributes
"""
dss = pd.read_csv("data_subjects_info.csv")
print("[INFO] -- Data subjects' information is imported.")
return dss
def set_data_types(data_types=["userAcceleration"]):
"""
Select the sensors and the mode to shape the final dataset.
Args:
data_types: A list of sensor data type from this list: [attitude, gravity, rotationRate, userAcceleration]
Returns:
It returns a list of columns to use for creating time-series from files.
"""
dt_list = []
for t in data_types:
if t != "attitude":
dt_list.append([t+".x",t+".y",t+".z"])
else:
dt_list.append([t+".roll", t+".pitch", t+".yaw"])
return dt_list
def creat_time_series(dt_list, act_labels, trial_codes, mode="mag", labeled=True, combine_grav_acc=False):
"""
Args:
dt_list: A list of columns that shows the type of data we want.
act_labels: list of activities
trial_codes: list of trials
mode: It can be "raw" which means you want raw data
for every dimension of each data type,
[attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)].
or it can be "mag" which means you only want the magnitude for each data type: (x^2+y^2+z^2)^(1/2)
labeled: True, if we want a labeled dataset. False, if we only want sensor values.
combine_grav_acc: True, means adding each axis of gravity to corresponding axis of userAcceleration.
Returns:
It returns a time-series of sensor data.
"""
num_data_cols = len(dt_list) if mode == "mag" else len(dt_list*3)
if labeled:
dataset = np.zeros((0,num_data_cols+7)) # "7" --> [act, code, weight, height, age, gender, trial]
else:
dataset = np.zeros((0,num_data_cols))
ds_list = get_ds_infos()
print("[INFO] -- Creating Time-Series")
for sub_id in ds_list["code"]:
for act_id, act in enumerate(act_labels):
for trial in trial_codes[act_id]:
fname = 'A_DeviceMotion_data/'+act+'_'+str(trial)+'/sub_'+str(int(sub_id))+'.csv'
raw_data = pd.read_csv(fname)
raw_data = raw_data.drop(['Unnamed: 0'], axis=1)
vals = np.zeros((len(raw_data), num_data_cols))
if combine_grav_acc:
raw_data["userAcceleration.x"] = raw_data["userAcceleration.x"].add(raw_data["gravity.x"])
raw_data["userAcceleration.y"] = raw_data["userAcceleration.y"].add(raw_data["gravity.y"])
raw_data["userAcceleration.z"] = raw_data["userAcceleration.z"].add(raw_data["gravity.z"])
for x_id, axes in enumerate(dt_list):
if mode == "mag":
vals[:,x_id] = (raw_data[axes]**2).sum(axis=1)**0.5
else:
vals[:,x_id*3:(x_id+1)*3] = raw_data[axes].values
vals = vals[:,:num_data_cols]
if labeled:
lbls = np.array([[act_id,
sub_id-1,
ds_list["weight"][sub_id-1],
ds_list["height"][sub_id-1],
ds_list["age"][sub_id-1],
ds_list["gender"][sub_id-1],
trial
]]*len(raw_data))
vals = np.concatenate((vals, lbls), axis=1)
dataset = np.append(dataset,vals, axis=0)
cols = []
for axes in dt_list:
if mode == "raw":
cols += axes
else:
cols += [str(axes[0][:-2])]
if labeled:
cols += ["act", "id", "weight", "height", "age", "gender", "trial"]
dataset = pd.DataFrame(data=dataset, columns=cols)
return dataset
#________________________________
#________________________________
def ts_to_secs(dataset, w, s, standardize = False, **options):
data = dataset[dataset.columns[:-7]].values
act_labels = dataset["act"].values
id_labels = dataset["id"].values
trial_labels = dataset["trial"].values
mean = 0
std = 1
if standardize:
## Standardize each sensor’s data to have a zero mean and unity standard deviation.
## As usual, we normalize test dataset by training dataset's parameters
if options:
mean = options.get("mean")
std = options.get("std")
print("[INFO] -- Test/Val Data has been standardized")
else:
mean = data.mean(axis=0)
std = data.std(axis=0)
print("[INFO] -- Training Data has been standardized: the mean is = "+str(mean)+" ; and the std is = "+str(std))
data -= mean
data /= std
else:
print("[INFO] -- Without Standardization.....")
## We want the Rows of matrices show each Feature and the Columns show time points.
data = data.T
m = data.shape[0] # Data Dimension
ttp = data.shape[1] # Total Time Points
number_of_secs = int(round(((ttp - w)/s)))
## Create a 3D matrix for Storing Sections
secs_data = np.zeros((number_of_secs , m , w ))
act_secs_labels = np.zeros(number_of_secs)
id_secs_labels = np.zeros(number_of_secs)
k=0
for i in range(0 , ttp-w, s):
j = i // s
if j >= number_of_secs:
break
if id_labels[i] != id_labels[i+w-1]:
continue
if act_labels[i] != act_labels[i+w-1]:
continue
if trial_labels[i] != trial_labels[i+w-1]:
continue
secs_data[k] = data[:, i:i+w]
act_secs_labels[k] = act_labels[i].astype(int)
id_secs_labels[k] = id_labels[i].astype(int)
k = k+1
secs_data = secs_data[0:k]
act_secs_labels = act_secs_labels[0:k]
id_secs_labels = id_secs_labels[0:k]
return secs_data, act_secs_labels, id_secs_labels, mean, std
##________________________________________________________________
ACT_LABELS = ["dws","ups", "wlk", "jog", "std", "sit"]
TRIAL_CODES = {
ACT_LABELS[0]:[1,2,11],
ACT_LABELS[1]:[3,4,12],
ACT_LABELS[2]:[7,8,15],
ACT_LABELS[3]:[9,16],
ACT_LABELS[4]:[6,14],
ACT_LABELS[5]:[5,13],
}
#https://stackoverflow.com/a/45305384/5210098
def f1_metric(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
def eval_id(sdt, mode, ep, cga):
print("[INFO] -- Selected sensor data types: "+str(sdt)+" -- Mode: "+str(mode)+" -- Grav+Acc: "+str(cga))
act_labels = ACT_LABELS [0:4]
print("[INFO] -- Selected activites: "+str(act_labels))
trial_codes = [TRIAL_CODES[act] for act in act_labels]
dt_list = set_data_types(sdt)
dataset = creat_time_series(dt_list, act_labels, trial_codes, mode=mode, labeled=True, combine_grav_acc = cga)
print("[INFO] -- Shape of time-Series dataset:"+str(dataset.shape))
#*****************
TRAIN_TEST_TYPE = "trial" # "subject" or "trial"
#*****************
if TRAIN_TEST_TYPE == "subject":
test_ids = [4,9,11,21]
print("[INFO] -- Test IDs: "+str(test_ids))
test_ts = dataset.loc[(dataset['id'].isin(test_ids))]
train_ts = dataset.loc[~(dataset['id'].isin(test_ids))]
else:
test_trials = [11,12,13,14,15,16]
print("[INFO] -- Test Trials: "+str(test_trials))
test_ts = dataset.loc[(dataset['trial'].isin(test_trials))]
train_ts = dataset.loc[~(dataset['trial'].isin(test_trials))]
print("[INFO] -- Shape of Train Time-Series :"+str(train_ts.shape))
print("[INFO] -- Shape of Test Time-Series :"+str(test_ts.shape))
print("___________________________________________________")
## This Variable Defines the Size of Sliding Window
## ( e.g. 100 means in each snapshot we just consider 100 consecutive observations of each sensor)
w = 128 # 50 equals 1 second for the MotionSense dataset (it is sampled at 50Hz)
## Here we choose the step size for building different snapshots from the time-series data
## ( smaller step size will increase the amount of the instances and higher computational cost may be incurred )
s = 10
train_data, act_train, id_train, train_mean, train_std = ts_to_secs(train_ts.copy(),
w,
s,
standardize = True)
s = 10
test_data, act_test, id_test, test_mean, test_std = ts_to_secs(test_ts.copy(),
w,
s,
standardize = True,
mean = train_mean,
std = train_std)
print("[INFO] -- Training Sections: "+str(train_data.shape))
print("[INFO] -- Test Sections: "+str(test_data.shape))
id_train_labels = to_categorical(id_train)
id_test_labels = to_categorical(id_test)
act_train_labels = to_categorical(act_train)
act_test_labels = to_categorical(act_test)
## Here we add an extra dimension to the datasets just to be ready for using with Convolution2D
train_data = np.expand_dims(train_data,axis=3)
print("[INFO] -- Training Sections:"+str(train_data.shape))
test_data = np.expand_dims(test_data,axis=3)
print("[INFO] -- Test Sections:"+str(test_data.shape))
height = train_data.shape[1]
width = train_data.shape[2]
id_class_numbers = 24
act_class_numbers = 4
fm = (1,5)
print("___________________________________________________")
## Callbacks
#eval_metric= "val_acc"
eval_metric= "val_f1_metric"
early_stop = keras.callbacks.EarlyStopping(monitor=eval_metric, mode='max', patience = 7)
filepath="MID.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor=eval_metric, verbose=0, save_best_only=True, mode='max')
callbacks_list = [early_stop,
checkpoint
]
## Callbacks
eval_id = Estimator.build(height, width, id_class_numbers, name ="EVAL_ID", fm=fm, act_func="softmax",hid_act_func="relu")
eval_id.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc', f1_metric])
print("Model Size = "+str(eval_id.count_params()))
eval_id.fit(train_data, id_train_labels,
validation_data = (test_data, id_test_labels),
epochs = ep,
batch_size = 128,
verbose = 0,
class_weight = get_class_weights(np.argmax(id_train_labels,axis=1)),
callbacks = callbacks_list
)
eval_id.load_weights("MID.best.hdf5")
eval_id.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc',f1_metric])
result1 = eval_id.evaluate(test_data, id_test_labels, verbose = 2)
id_acc = result1[1]
print("***[RESULT]*** ID Accuracy: "+str(id_acc))
rf1 = result1[2].round(4)*100
print("***[RESULT]*** ID F1: "+str(rf1))
preds = eval_id.predict(test_data)
preds = np.argmax(preds, axis=1)
conf_mat = confusion_matrix(np.argmax(id_test_labels, axis=1), preds)
conf_mat = conf_mat.astype('float') / conf_mat.sum(axis=1)[:, np.newaxis]
print("***[RESULT]*** ID Confusion Matrix")
print((np.array(conf_mat).diagonal()).round(3)*100)
d_test_ids = [4,9,11,21]
to_avg = 0
for i in range(len(d_test_ids)):
true_positive = conf_mat[d_test_ids[i],d_test_ids[i]]
print("True Positive Rate for "+str(d_test_ids[i])+" : "+str(true_positive*100))
to_avg+=true_positive
atp = to_avg/len(d_test_ids)
print("Average TP:"+str(atp*100))
f1id = f1_score(np.argmax(id_test_labels, axis=1), preds, average=None).mean()
print("***[RESULT]*** ID Averaged F-1 Score : "+str(f1id))
return [round(id_acc,4), round(f1id,4), round(atp,4)]
results ={}
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate"]
mode = "mag"
ep = 40
cga = False # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate"]
mode = "raw"
ep = 40
cga = False # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
results
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["userAcceleration"]
mode = "mag"
ep = 40
cga = True # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
results
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["userAcceleration"]
mode = "raw"
ep = 40
cga = True # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
results
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate","userAcceleration"]
mode = "mag"
ep = 40
cga = True # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
results
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate","userAcceleration"]
mode = "raw"
ep = 40
cga = True # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"--"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
results
#https://stackoverflow.com/a/45305384/5210098
def f1_metric(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
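As a sanity check, the batch-wise F1 computation above can be mirrored in plain NumPy, independent of the Keras backend. This is only a sketch; `f1_batch` is a hypothetical helper, not part of the notebook's pipeline:

```python
import numpy as np

def f1_batch(y_true, y_pred, eps=1e-7):
    """Batch-wise F1 score, mirroring the Keras-backend metric above."""
    y_pred_round = np.clip(y_pred, 0, 1).round()
    true_positives = (y_true * y_pred_round).sum()
    recall = true_positives / (y_true.sum() + eps)
    precision = true_positives / (y_pred_round.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)

y_true = np.array([[1, 0, 0], [0, 1, 0]])
y_pred = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]])
print(f1_batch(y_true, y_pred))  # close to 1.0: both rows predicted correctly
```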
def eval_id(sdt, mode, ep, cga):
print("[INFO] -- Selected sensor data types: "+str(sdt)+" -- Mode: "+str(mode)+" -- Grav+Acc: "+str(cga))
act_labels = ACT_LABELS [0:4]
print("[INFO] -- Selected activities: "+str(act_labels))
trial_codes = [TRIAL_CODES[act] for act in act_labels]
dt_list = set_data_types(sdt)
dataset = creat_time_series(dt_list, act_labels, trial_codes, mode=mode, labeled=True, combine_grav_acc = cga)
print("[INFO] -- Shape of time-Series dataset:"+str(dataset.shape))
#*****************
TRAIN_TEST_TYPE = "trial" # "subject" or "trial"
#*****************
if TRAIN_TEST_TYPE == "subject":
test_ids = [4,9,11,21]
print("[INFO] -- Test IDs: "+str(test_ids))
test_ts = dataset.loc[(dataset['id'].isin(test_ids))]
train_ts = dataset.loc[~(dataset['id'].isin(test_ids))]
else:
test_trials = [11,12,13,14,15,16]
print("[INFO] -- Test Trials: "+str(test_trials))
test_ts = dataset.loc[(dataset['trial'].isin(test_trials))]
train_ts = dataset.loc[~(dataset['trial'].isin(test_trials))]
print("[INFO] -- Shape of Train Time-Series :"+str(train_ts.shape))
print("[INFO] -- Shape of Test Time-Series :"+str(test_ts.shape))
# print("___________Train_VAL____________")
# val_trail = [11,12,13,14,15,16]
# val_ts = train_ts.loc[(train_ts['trial'].isin(val_trail))]
# train_ts = train_ts.loc[~(train_ts['trial'].isin(val_trail))]
# print("[INFO] -- Training Time-Series :"+str(train_ts.shape))
# print("[INFO] -- Validation Time-Series :"+str(val_ts.shape))
print("___________________________________________________")
## This Variable Defines the Size of Sliding Window
## ( e.g. 100 means in each snapshot we just consider 100 consecutive observations of each sensor)
w = 128 # 50 equals 1 second for the MotionSense dataset (it is sampled at 50 Hz)
## Here we choose the step size for building different snapshots from the time-series data
## ( a smaller step size increases the number of instances, and a higher computational cost may be incurred )
s = 10
train_data, act_train, id_train, train_mean, train_std = ts_to_secs(train_ts.copy(),
w,
s,
standardize = True)
s = 10
test_data, act_test, id_test, test_mean, test_std = ts_to_secs(test_ts.copy(),
w,
s,
standardize = True,
mean = train_mean,
std = train_std)
print("[INFO] -- Training Sections: "+str(train_data.shape))
print("[INFO] -- Test Sections: "+str(test_data.shape))
id_train_labels = to_categorical(id_train)
id_test_labels = to_categorical(id_test)
act_train_labels = to_categorical(act_train)
act_test_labels = to_categorical(act_test)
## Here we add an extra dimension to the datasets just to be ready for using with Convolution2D
train_data = np.expand_dims(train_data,axis=3)
print("[INFO] -- Training Sections:"+str(train_data.shape))
test_data = np.expand_dims(test_data,axis=3)
print("[INFO] -- Test Sections:"+str(test_data.shape))
height = train_data.shape[1]
width = train_data.shape[2]
id_class_numbers = 24
act_class_numbers = 4
fm = (2,5)
print("___________________________________________________")
## Callbacks
#eval_metric= "val_acc"
eval_metric= "val_f1_metric"
early_stop = keras.callbacks.EarlyStopping(monitor=eval_metric, mode='max', patience = 7)
filepath="MID.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor=eval_metric, verbose=0, save_best_only=True, mode='max')
callbacks_list = [early_stop,
checkpoint
]
## Callbacks
eval_id = Estimator.build(height, width, id_class_numbers, name ="EVAL_ID", fm=fm, act_func="softmax",hid_act_func="relu")
eval_id.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc', f1_metric])
print("Model Size = "+str(eval_id.count_params()))
eval_id.fit(train_data, id_train_labels,
validation_data = (test_data, id_test_labels),
epochs = ep,
batch_size = 128,
verbose = 0,
class_weight = get_class_weights(np.argmax(id_train_labels,axis=1)),
callbacks = callbacks_list
)
eval_id.load_weights("MID.best.hdf5")
eval_id.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc',f1_metric])
result1 = eval_id.evaluate(test_data, id_test_labels, verbose = 2)
id_acc = result1[1]
print("***[RESULT]*** ID Accuracy: "+str(id_acc))
rf1 = result1[2].round(4)*100
print("***[RESULT]*** ID F1: "+str(rf1))
preds = eval_id.predict(test_data)
preds = np.argmax(preds, axis=1)
conf_mat = confusion_matrix(np.argmax(id_test_labels, axis=1), preds)
conf_mat = conf_mat.astype('float') / conf_mat.sum(axis=1)[:, np.newaxis]
print("***[RESULT]*** ID Confusion Matrix")
print((np.array(conf_mat).diagonal()).round(3)*100)
d_test_ids = [4,9,11,21]
to_avg = 0
for i in range(len(d_test_ids)):
true_positive = conf_mat[d_test_ids[i],d_test_ids[i]]
print("True Positive Rate for "+str(d_test_ids[i])+" : "+str(true_positive*100))
to_avg+=true_positive
atp = to_avg/len(d_test_ids)
print("Average TP:"+str(atp*100))
f1id = f1_score(np.argmax(id_test_labels, axis=1), preds, average=None).mean()
print("***[RESULT]*** ID Averaged F-1 Score : "+str(f1id))
return [round(id_acc,4), round(f1id,4), round(atp,4)]
## Here we set parameters to build labeled time-series from the "(A)DeviceMotion_data" dataset
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate","userAcceleration"]
mode = "mag"
ep = 40
cga = True # Add gravity to acceleration or not
for i in range(5):
results[str(sdt)+"-2D-"+str(mode)+"--"+str(cga)+"--"+str(i)] = eval_id(sdt, mode, ep, cga)
```
# DAT257x: Reinforcement Learning Explained
## Lab 2: Bandits
### Exercise 2.3: UCB
```
# import numpy as np
# import sys
# if "../" not in sys.path:
# sys.path.append("../")
# from lib.envs.bandit import BanditEnv
# from lib.simulation import Experiment
# #Policy interface
# class Policy:
# #num_actions: (int) Number of arms [indexed by 0 ... num_actions-1]
# def __init__(self, num_actions):
# self.num_actions = num_actions
# def act(self):
# pass
# def feedback(self, action, reward):
# pass
# #Greedy policy
# class Greedy(Policy):
# def __init__(self, num_actions):
# Policy.__init__(self, num_actions)
# self.name = "Greedy"
# self.total_rewards = np.zeros(num_actions, dtype = np.longdouble)
# self.total_counts = np.zeros(num_actions, dtype = np.longdouble)
# def act(self):
# current_averages = np.divide(self.total_rewards, self.total_counts, where = self.total_counts > 0)
# current_averages[self.total_counts <= 0] = 0.5 #Correctly handles Bernoulli rewards; over-estimates otherwise
# current_action = np.argmax(current_averages)
# return current_action
# def feedback(self, action, reward):
# self.total_rewards[action] += reward
# self.total_counts[action] += 1
# #Epsilon Greedy policy
# class EpsilonGreedy(Greedy):
# def __init__(self, num_actions, epsilon):
# Greedy.__init__(self, num_actions)
# if (epsilon is None or epsilon < 0 or epsilon > 1):
# print("EpsilonGreedy: Invalid value of epsilon", flush = True)
# sys.exit(0)
# self.epsilon = epsilon
# self.name = "Epsilon Greedy"
# def act(self):
# choice = None
# if self.epsilon == 0:
# choice = 0
# elif self.epsilon == 1:
# choice = 1
# else:
# choice = np.random.binomial(1, self.epsilon)
# if choice == 1:
# return np.random.choice(self.num_actions)
# else:
# current_averages = np.divide(self.total_rewards, self.total_counts, where = self.total_counts > 0)
# current_averages[self.total_counts <= 0] = 0.5 #Correctly handles Bernoulli rewards; over-estimates otherwise
# current_action = np.argmax(current_averages)
# return current_action
```
Now let's implement a UCB algorithm.
```
# xx = np.ones(10)
# xy = np.divide(2, xx)
# np.sqrt(xy)
# #UCB policy
# class UCB(Greedy):
# def __init__(self, num_actions):
# Greedy.__init__(self, num_actions)
# self.name = "UCB"
# self.round = 0
# def act(self):
# current_action = None
# # self.round += 1
# if self.round < self.num_actions:
# """The first k rounds, where k is the number of arms/actions, play each arm/action once"""
# current_action = self.round
# else:
# """At round t, play the arms with maximum average and exploration bonus"""
# current_averages = np.divide(self.total_rewards, self.total_counts, where = self.total_counts > 0)
# current_averages[self.total_counts <= 0] = 0.5 #Correctly handles Bernoulli rewards; over-estimates otherwise
# exp_bonus = np.sqrt(np.divide(2.0 * np.log(self.round), self.total_counts, where = self.total_counts > 0))
# current_averages = current_averages + exp_bonus
# current_action = np.argmax(current_averages)
# self.round += 1
# return current_action
# class UCB(Greedy):
# def __init__(self, num_actions, k):
# Greedy.__init__(self, num_actions)
# self.name = "UCB"
# self.round = 0
# self.k = k
# self.previous_action = num_actions - 1
# self.num_actions = num_actions
# self.qvalues = np.zeros(num_actions)
# self.counts = np.zeros(num_actions)
# self.t = 0
# def act(self):
# if (self.round < self.k):
# """The first k rounds, play each arm/action once"""
# current_action = self.previous_action + 1
# if current_action >= self.num_actions:
# self.round += 1
# current_action = 0
# self.previous_action = current_action
# if (self.round >= self.k):
# """play the arms with maximum average and exploration bonus"""
# r_hats = self.total_rewards/self.total_counts
# scores = r_hats + np.sqrt(np.log(self.t)/self.total_counts)
# current_action = np.argmax(scores)
# return current_action
# def feedback(self, action, reward):
# self.total_rewards[action] += reward
# self.total_counts[action] += 1
# self.t += 1
```
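The commented-out drafts above can be consolidated into one runnable policy. Below is a stand-alone sketch of the classic UCB1 rule — play each arm once, then pick the arm maximizing average reward plus a `sqrt(2 ln t / n_a)` exploration bonus. It does not subclass `Greedy`, and it is not necessarily the exact solution the lab expects:

```python
import numpy as np

class UCB:
    """UCB1 policy sketch (stand-alone, no Greedy base class)."""
    def __init__(self, num_actions):
        self.num_actions = num_actions
        self.name = "UCB"
        self.total_rewards = np.zeros(num_actions)
        self.total_counts = np.zeros(num_actions)
        self.round = 0

    def act(self):
        if self.round < self.num_actions:
            # First k rounds: play each arm once.
            current_action = self.round
        else:
            # By now every arm has received feedback once, so counts are positive.
            averages = self.total_rewards / self.total_counts
            bonus = np.sqrt(2.0 * np.log(self.round) / self.total_counts)
            current_action = int(np.argmax(averages + bonus))
        self.round += 1
        return current_action

    def feedback(self, action, reward):
        self.total_rewards[action] += reward
        self.total_counts[action] += 1
```

As in the commented code, `UCB(num_actions)` would then plug into `Experiment(env, agent)` the same way the greedy agents do.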
Now let's prepare the simulation.
```
# evaluation_seed = 1239
# num_actions = 10
# trials = 10000
# distribution = "bernoulli"
# # distribution = "normal"
```
What do you think the regret graph would look like?
```
# env = BanditEnv(num_actions, distribution, evaluation_seed)
# agent = UCB(num_actions)
# # K = 1
# # agent = UCB(num_actions, K)
# experiment = Experiment(env, agent)
# experiment.run_bandit(trials)
```
# DAT257x: Reinforcement Learning Explained
## Lab 2: Bandits
### Exercise 2.4 Thompson Beta
```
import numpy as np
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.bandit import BanditEnv
from lib.simulation import Experiment
#Policy interface
class Policy:
#num_actions: (int) Number of arms [indexed by 0 ... num_actions-1]
def __init__(self, num_actions):
self.num_actions = num_actions
def act(self):
pass
def feedback(self, action, reward):
pass
```
Now let's implement a Thompson Beta algorithm.
```
#Thompson Beta policy
class ThompsonBeta(Policy):
def __init__(self, num_actions):
Policy.__init__(self, num_actions)
#PRIOR Hyper-params: successes = 1; failures = 1
self.total_counts = np.zeros(num_actions, dtype = np.longdouble)
self.name = "Thompson Beta"
#For each arm, maintain success and failures
self.successes = np.ones(num_actions, dtype=int)
self.failures = np.ones(num_actions, dtype=int)
def act(self):
"""Sample beta distribution from success and failures"""
"""Play the max of the sampled values"""
current_action = np.argmax(np.random.beta(self.successes, self.failures))
return current_action
def feedback(self, action, reward):
if reward > 0:
self.successes[action] += 1
else:
self.failures[action] += 1
self.total_counts[action] += 1
```
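The core of `act` — drawing one sample per arm from its Beta posterior and playing the argmax — can be demonstrated in isolation with `np.random.beta`, which accepts arrays of shape parameters and returns one draw per arm. The posterior counts below are hypothetical:

```python
import numpy as np

np.random.seed(0)
successes = np.array([1, 50, 1])  # hypothetical per-arm success counts
failures = np.array([1, 1, 50])   # hypothetical per-arm failure counts
samples = np.random.beta(successes, failures)  # one draw per arm
action = int(np.argmax(samples))
print(samples, action)  # arm 1, with many successes, is usually chosen
```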
Now let's prepare the simulation.
```
evaluation_seed = 1239
num_actions = 10
trials = 10000
distribution = "bernoulli"
```
What do you think the regret graph would look like?
```
env = BanditEnv(num_actions, distribution, evaluation_seed)
agent = ThompsonBeta(num_actions)
experiment = Experiment(env, agent)
experiment.run_bandit(trials)
```
Now let's prepare another simulation by setting a different distribution, that is, set `distribution = "normal"`.
Run the simulation and observe the results.
What do you think the regret graph would look like?
<figure>
<IMG SRC="https://raw.githubusercontent.com/mbakker7/exploratory_computing_with_python/master/tudelft_logo.png" WIDTH=250 ALIGN="right">
</figure>
# Exploratory Computing with Python
*Developed by Mark Bakker*
## Notebook 9: Discrete random variables
In this Notebook you learn how to deal with discrete random variables. Many of the functions we will use are included in the `random` subpackage of `numpy`. We will import this package and call it `rnd` so that we don't have to type `np.random.` all the time.
```
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rnd
%matplotlib inline
```
### Random numbers
A random number generator lets you draw, at random, a number from a specified distribution. Several random number generators are included in the `random` subpackage of `numpy`. For example, the `randint(low, high, size)` function returns an integer array of shape `size`, drawn at random from `low` up to (but not including) `high`. Let's flip a coin 10 times and assign a 0 to heads and a 1 to tails. Note that `high` is specified as `1 + 1`, which means it is `1` higher than the largest value we want.
```
rnd.randint(0, 1 + 1, 10)
```
If we call the `randint` function again, we get a different sequence of heads (zeros) and tails (ones):
```
rnd.randint(0, 1 + 1, 10)
```
Internally, the random number generator starts with what is called a *seed*. The seed is a number and is generated automatically (and supposedly at random) when you call the random number generator. The value of the seed exactly defines the sequence of random numbers that you get (so some people may argue that the generated sequence is at best pseudo-random, and you may not want to use the sequence for any serious cryptographic use, but for our purposes they are random enough). For example, let's set `seed` equal to 10
```
rnd.seed(10)
rnd.randint(0, 1 + 1, 10)
```
If we now specify the seed again as 10, we can generate the exact same sequence
```
rnd.seed(10)
rnd.randint(0, 1 + 1, 10)
```
The ability to generate the exact same sequence is useful during code development. For example, by seeding the random number generator, you can compare your output to output of others trying to solve the same problem.
### Flipping a coin
Enough for now about random number generators. Let's flip a coin 100 times and count the number of heads (0-s) and the number of tails (1-s):
```
flip = rnd.randint(0, 1 + 1, 100)
headcount = 0
tailcount = 0
for i in range(100):
if flip[i] == 0:
headcount += 1
else:
tailcount += 1
print('number of heads:', headcount)
print('number of tails:', tailcount)
```
First of all, note that the number of heads and the number of tails add up to 100. Also, note how we counted the heads and tails. We created counters `headcount` and `tailcount`, looped through all flips, and added 1 to the appropriate counter. Instead of a loop, we could have used a condition for the indices combined with a summation as follows
```
headcount = np.count_nonzero(flip == 0)
tailcount = np.count_nonzero(flip == 1)
print('headcount', headcount)
print('tailcount', tailcount)
```
How does that work? You may recall that the `flip == 0` statement returns an array with length 100 (equal to the length of `flip`) with the value `True` when the condition is met, and `False` when it is not. The boolean `True` has the value 1, and the boolean `False` has the value 0. So we simply need to count the nonzero values using the `np.count_nonzero` function to find out how many items are `True`.
The code above is easy, but if we do an experiment with more than two outcomes, it may be cumbersome to count the non-zero items for every possible outcome. So let's try to rewrite this part of the code using a loop. For this specific case the number of lines of code doesn't decrease, but when we have an experiment with many different outcomes this will be much more efficient. Note that `dtype='int'` sets the array to integers.
```
outcomes = np.zeros(2, dtype='int') # Two outcomes. heads are stored in outcome[0], tails in outcome[1]
for i in range(2):
outcomes[i] = np.count_nonzero(flip == i)
print('outcome ', i, ' is ', outcomes[i])
```
### Exercise 1. <a name="back1"></a>Throwing a dice
Throw a dice 100 times and report how many times you throw 1, 2, 3, 4, 5, and 6. Use a seed of 33. Make sure that the reported values add up to 100. Make sure you use a loop in your code as we did in the previous code cell.
<a href="#ex1answer">Answers to Exercise 1</a>
### Flipping a coin twice
Next we are going to flip a coin twice, 100 times, and count the number of tails. We generate a random array of 0-s (heads) and 1-s (tails) with two rows (representing two coin flips) and 100 columns. The sum of the two rows represents the number of tails. The `np.sum` function takes an array and by default sums all the values in the array and returns one number. In this case we want to sum the rows. For that, the `sum` function has a keyword argument called `axis`, where `axis=0` sums over index 0 of the array (the rows), `axis=1` sums over the index 1 of the array (the columns), etc.
```
rnd.seed(55)
flips = rnd.randint(low=0, high=1 + 1, size=(2, 100))
tails = np.sum(flips, axis=0)
number_of_tails = np.zeros(3, dtype='int')
for i in range(3):
number_of_tails[i] = np.count_nonzero(tails == i)
print('number of 0, 1, 2 tails:', number_of_tails)
```
Another way to simulate flipping a coin twice, is to draw a number at random from a set of 2 numbers (0 and 1). You need to replace the number after every draw, of course. The `numpy` function to draw a random number from a given array is called `choice`. The `choice` function has a keyword to specify whether values are replaced or not. Hence the following two ways to generate 5 flips are identical.
```
rnd.seed(55)
flips1 = rnd.randint(low=0, high=1 + 1, size=5)
rnd.seed(55)
flips2 = rnd.choice(range(2), size=5, replace=True)
np.all(flips1 == flips2) # Check whether all values in the two arrays are equal
```
### Bar graph
The outcome of the experiment may also be plotted with a bar graph
```
plt.bar(range(0, 3), number_of_tails)
plt.xticks(range(0, 3))
plt.xlabel('number of tails')
plt.ylabel('occurrence in 100 trials');
```
### Cumulative Probability
Next we compute the experimental probability of 0 tails, 1 tail, and 2 tails through division by the total number of trials (one trial is two coin flips). The three probabilities add up to 1. The cumulative probability distribution is obtained by cumulatively summing the probabilities using the `cumsum` function of `numpy`. The first value is the probability of throwing 0 tails. The second value is the probability of 1 or fewer tails, and the third value is the probability of 2 or fewer tails. The probability is computed as the number of tails divided by the total number of trials.
```
prob = number_of_tails / 100 # number_of_tails was computed two code cells back
cum_prob = np.cumsum(prob) # So cum_prob[0] = prob[0], cum_prob[1] = prob[0] + prob[1], etc.
print('cum_prob ', cum_prob)
```
The cumulative probability distribution is plotted with a bar graph, making sure that all the bars touch each other (by setting the width to 1, in the case below)
```
plt.bar(range(0, 3), cum_prob, width=1)
plt.xticks(range(0, 3))
plt.xlabel('number of tails in two flips')
plt.ylabel('cumulative probability');
```
### Exercise 2. <a name="back2"></a>Flip a coin five times
Flip a coin five times in a row and record how many times you obtain tails (varying from 0-5). Perform the experiment 1000 times. Make a bar graph with the total number of tails on the horizontal axis and the empirically computed probability of getting that many tails on the vertical axis. Execute your code several times (hit [shift]-[enter]) and see that the graph changes a bit every time, as the sequence of random numbers changes every time.
Compute the cumulative probability. Print the values to the screen and make a plot of the cumulative probability function using a bar graph.
<a href="#ex2answer">Answers to Exercise 2</a>
### Probability of a Bernoulli variable
In the previous exercise, we computed the probability of a certain number of heads in five flips experimentally. But we can, of course, compute the value exactly by using a few simple formulas. Consider the random variable $Y$, which is the outcome of an experiment with two possible values 0 and 1. Let $p$ be the probability of success, $p=P(Y=1)$.
Then $Y$ is said to be a Bernoulli variable. The experiment is repeated $n$ times and we define $X$ as the number of successes in the experiment. The variable $X$ has a Binomial Distribution with parameters $n$ and $p$. The probability that $X$ takes value $k$ can be computed as (see for example [here](http://en.wikipedia.org/wiki/Binomial_distribution))
$$P(X=k) = \binom{n}{k}p^k(1-p)^{n-k}$$
The term $\binom{n}{k}$ may be computed with the `comb` function, which in current SciPy versions is imported from the `scipy.special` package (older versions provided it in `scipy.misc`).
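As a quick numerical check of the formula, here is a sketch using Python's built-in `math.comb` (equivalent, for integer arguments, to SciPy's `comb`); `binom_pmf` is a hypothetical helper name:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probabilities of 0..5 tails in five flips of a fair coin
probs = [binom_pmf(k, 5, 0.5) for k in range(6)]
print(probs)  # [0.03125, 0.15625, 0.3125, 0.3125, 0.15625, 0.03125]
```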
### Exercise 3. <a name="back3"></a>Flip a coin 5 times revisited
Go back to the experiment where we flip a coin five times in a row and record how many times we obtain tails.
Compute the theoretical probability for 0, 1, 2, 3, 4, and 5 tails and compare your answer to the probability computed from 1000 trials, 10000 trials, and 100000 trials (use a loop for these three sets of trials). Do you approach the theoretical value with more trials?
<a href="#ex3answer">Answers to Exercise 3</a>
### Exercise 4. <a name="back4"></a>Maximum value of two dice throws
Throw a dice two times and record the maximum value of the two throws. Use the `np.max` function to compute the maximum value. Like the `np.sum` function, the `np.max` function takes an array as input argument and an optional keyword argument named `axis`. Perform the experiment 1000 times and compute the probability that the highest value is 1, 2, 3, 4, 5, or 6. Make a graph of the cumulative probability distribution function using a step graph.
<a href="#ex4answer">Answers to Exercise 4</a>
### Exercise 5. <a name="back5"></a>Maximum value of two dice throws revisited
Refer back to Exercise 4.
Compute the theoretical value of the probability of the highest dice when throwing the dice twice (the throws are labeled T1 and T2, respectively). There are 36 possible outcomes for this experiment. Let $M$ denote the random variable corresponding to this experiment (this means for instance that $M=3$ when your first throw is a 2, and the second throw is a 3). All outcomes of $M$ can easily be written down, as shown in the following Table:
| T1$\downarrow$ T2$\to$ | 1 | 2 | 3 | 4 | 5 | 6 |
|:----------------------:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1 | 1 | 2 | 3 | 4 | 5 | 6 |
| 2 | 2 | 2 | 3 | 4 | 5 | 6 |
| 3 | 3 | 3 | 3 | 4 | 5 | 6 |
| 4 | 4 | 4 | 4 | 4 | 5 | 6 |
| 5 | 5 | 5 | 5 | 5 | 5 | 6 |
| 6 | 6 | 6 | 6 | 6 | 6 | 6 |
Use the 36 possible outcomes shown in the Table to compute the theoretical probability of $M$ being 1, 2, 3, 4, 5, or 6. Compare the theoretical outcome with the experimental outcome for 100, 1000, and 10000 dice throws.
<a href="#ex5answer">Answers to Exercise 5</a>
### Generate random integers with non-equal probabilities
So far, we have generated random numbers for which the probability of each outcome was the same (heads or tails, or the numbers on a dice, considering the throwing device was "fair"). What if we now want to generate outcomes that don't have the same probability? For example, consider the case that we have a bucket with 4 blue balls and 6 red balls. When you draw a ball at random, the probability of a blue ball is 0.4 and the probability of a red ball is 0.6. A sequence of drawing ten balls, with replacement, may be generated as follows
```
balls = np.zeros(10, dtype='int') # zero is blue
balls[4:] = 1 # one is red
print('balls:', balls)
drawing = rnd.choice(balls, 10, replace=True)
print('drawing:', drawing)
print('blue balls:', np.count_nonzero(drawing == 0))
print('red balls:', np.count_nonzero(drawing == 1))
```
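As an aside, `rnd.choice` can also draw weighted outcomes directly through its `p` keyword, which makes the bucket array unnecessary. A small sketch:

```python
import numpy as np
import numpy.random as rnd

rnd.seed(0)
# 0 = blue with probability 0.4, 1 = red with probability 0.6
drawing = rnd.choice(2, size=10, p=[0.4, 0.6])
print('blue balls:', np.count_nonzero(drawing == 0))
print('red balls:', np.count_nonzero(drawing == 1))
```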
### Exercise 6. <a name="back6"></a>Election poll
Consider an election where one million people will vote. 490,000 people will vote for candidate $A$ and 510,000 people will vote for candidate $B$. One day before the election, the company of 'Maurice the Dog' conducts a poll among 1000 randomly chosen voters. Compute whether the Dog will predict the winner correctly using the approach explained above and a seed of 2.
Perform the poll 1000 times. Count how many times the outcome of the poll is that candidate $A$ wins and how many times it is that candidate $B$ wins. What is the probability that the Dog will predict the correct winner based on these 1000 polls of 1000 people?
Compute the probability that the Dog will predict the correct winner based on 1000 polls of 5000 people. Does the probability that the Dog predicts the correct winner increase significantly when he polls 5000 people?
<a href="#ex6answer">Answers to Exercise 6</a>
### Answers to the exercises
<a name="ex1answer">Answers to Exercise 1</a>
```
rnd.seed(33)
dicethrow = rnd.randint(1, 6 + 1, 100)
side = np.zeros(6, dtype='int')
for i in range(6):
side[i] = np.count_nonzero(dicethrow == i + 1)
print('number of times', i + 1, 'is', side[i])
print('total number of throws ', sum(side))
```
<a href="#back1">Back to Exercise 1</a>
<a name="ex2answer">Answers to Exercise 2</a>
```
N = 1000
tails = np.sum(rnd.randint(0, 1 + 1, (5, N)), axis=0)
counttails = np.zeros(6, dtype='int')
for i in range(6):
counttails[i] = np.count_nonzero(tails == i)
plt.bar(range(0, 6), counttails / N)
plt.xlabel('number of tails in five flips')
plt.ylabel('probability');
cumprob = np.cumsum(counttails / N)
print('cumprob:', cumprob)
plt.bar(range(0, 6), cumprob, width=1)
plt.xlabel('number of tails in five flips')
plt.ylabel('cumulative probability');
```
<a href="#back2">Back to Exercise 2</a>
<a name="ex3answer">Answers to Exercise 3</a>
```
from scipy.special import comb  # formerly scipy.misc.comb
print('Theoretical probabilities:')
for k in range(6):
print(k, ' tails ', comb(5, k) * 0.5 ** k * 0.5 ** (5 - k))
for N in (1000, 10000, 100000):
tails = np.sum(rnd.randint(0, 1 + 1, (5, N)), axis=0)
counttails = np.zeros(6)
for i in range(6):
counttails[i] = np.count_nonzero(tails==i)
print('Probability with', N, 'trials: ', counttails / float(N))
```
<a href="#back3">Back to Exercise 3</a>
<a name="ex4answer">Answers to Exercise 4</a>
```
dice = rnd.randint(1, 6 + 1, (2, 1000))
highest_dice = np.max(dice, 0)
outcome = np.zeros(6)
for i in range(6):
outcome[i] = np.sum(highest_dice == i + 1) / 1000
plt.bar(np.arange(1, 7), outcome, width=1)
plt.xlabel('highest dice in two throws')
plt.ylabel('probability');
```
<a href="#back4">Back to Exercise 4</a>
<a name="ex5answer">Answers to Exercise 5</a>
```
for N in [100, 1000, 10000]:
dice = rnd.randint(1, 6 + 1, (2, N))
highest_dice = np.max(dice, axis=0)
outcome = np.zeros(6)
for i in range(6):
outcome[i] = np.sum(highest_dice == i + 1) / N
print('Outcome for', N, 'throws: ', outcome)
# Exact values
exact = np.zeros(6)
for i, j in enumerate(range(1, 12, 2)):
exact[i] = j / 36
print('Exact probabilities: ',exact)
```
<a href="#back5">Back to Exercise 5</a>
<a name="ex6answer">Answers to Exercise 6</a>
```
rnd.seed(2)
people = np.zeros(1000000, dtype='int') # candidate A is 0
people[490000:] = 1 # candidate B is 1
pole = rnd.choice(people, 1000)
poled_for_A = np.count_nonzero(pole == 0)
print('poled for A:', poled_for_A)
if poled_for_A > 500:
print('The Dog will predict the wrong winner')
else:
print('The Dog will predict the correct winner')
Awins = 0
Bwins = 0
for i in range(1000):
people = np.zeros(1000000, dtype='int') # candidate A is 0
people[490000:] = 1 # candidate B is 1
pole = rnd.choice(people, 1000)
poled_for_A = np.count_nonzero(pole == 0)
if poled_for_A > 500:
Awins += 1
else:
Bwins += 1
print('1000 polls of 1000 people')
print('Probability that The Dog predicts candidate A to win:', Awins / 1000)
Awins = 0
Bwins = 0
for i in range(1000):
people = np.zeros(1000000, dtype='int') # candidate A is 0
people[490000:] = 1 # candidate B is 1
pole = rnd.choice(people, 5000)
poled_for_A = np.count_nonzero(pole == 0)
if poled_for_A > 2500:
Awins += 1
else:
Bwins += 1
print('1000 polls of 5000 people')
print('Probability that The Dog predicts candidate A to win:', Awins / 1000)
```
<a href="#back6">Back to Exercise 6</a>
Lambda School Data Science
*Unit 2, Sprint 3, Module 3*
---
# Permutation & Boosting
- Get **permutation importances** for model interpretation and feature selection
- Use xgboost for **gradient boosting**
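As a preview of the first objective, here is a minimal sketch of permutation importance using scikit-learn's `permutation_importance` on a tiny synthetic dataset (the lesson itself uses eli5; this is an assumed equivalent, and the data is made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only the first of three features carries signal
rng = np.random.RandomState(42)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=42)
print(result.importances_mean)  # the first feature should dominate
```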
### Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries:
- category_encoders
- [**eli5**](https://eli5.readthedocs.io/en/latest/)
- matplotlib
- numpy
- pandas
- scikit-learn
- [**xgboost**](https://xgboost.readthedocs.io/en/latest/)
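And a preview of the second objective: the sketch below uses scikit-learn's `GradientBoostingClassifier` as a stand-in for xgboost, on made-up data, just to show the fit/score shape of a boosting workflow:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Made-up data: the label depends on a simple linear rule
rng = np.random.RandomState(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                   max_depth=3, random_state=0)
model.fit(X[:300], y[:300])
print('validation accuracy:', model.score(X[300:], y[300:]))
```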
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
!pip install eli5
# If you're working locally:
else:
DATA_PATH = '../data/'
```
We'll go back to Tanzania Waterpumps for this lesson.
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
target = 'status_group'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
```
# Get permutation importances for model interpretation and feature selection
## Overview
Default Feature Importances are fast, but Permutation Importances may be more accurate.
These links go deeper with explanations and examples:
- Permutation Importances
- [Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
- (Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
There are three types of feature importances:
### 1. (Default) Feature Importances
Fastest, good for first estimates, but be aware:
>**When the dataset has two (or more) correlated features, then from the point of view of the model, any of these correlated features can be used as the predictor, with no concrete preference of one over the others.** But once one of them is used, the importance of others is significantly reduced since effectively the impurity they can remove is already removed by the first feature. As a consequence, they will have a lower reported importance. This is not an issue when we want to use feature selection to reduce overfitting, since it makes sense to remove features that are mostly duplicated by other features. But when interpreting the data, it can lead to the incorrect conclusion that one of the variables is a strong predictor while the others in the same group are unimportant, while actually they are very close in terms of their relationship with the response variable. — [Selecting good features – Part III: random forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
> **The scikit-learn Random Forest feature importance ... tends to inflate the importance of continuous or high-cardinality categorical variables.** ... Breiman and Cutler, the inventors of Random Forests, indicate that this method of “adding up the gini decreases for each individual variable over all trees in the forest gives a **fast** variable importance that is often very consistent with the permutation importance measure.” — [Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
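A quick synthetic demonstration of the correlated-features effect described in the quote above (my addition, not from the lesson): appending an exact copy of an informative column makes each copy look individually less important, even though the pair still carries the same signal.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# shuffle=False puts the informative columns first, so column 0 carries signal
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
X_dup = np.hstack([X, X[:, [0]]])   # exact duplicate of column 0 appended as column 4

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_dup = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_dup, y)

print('feature 0 alone:        ', rf.feature_importances_[0])
print('feature 0 with a twin:  ', rf_dup.feature_importances_[0])
print('feature 0 + twin summed:', rf_dup.feature_importances_[0] + rf_dup.feature_importances_[4])
```

The importance of the original column drops once its twin exists, because the trees split on either copy interchangeably.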
```
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
```
### 2. Drop-Column Importance
The best in theory, but too slow in practice
```
column = 'quantity'
# Fit without column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train.drop(columns=column), y_train)
score_without = pipeline.score(X_val.drop(columns=column), y_val)
print(f'Validation Accuracy without {column}: {score_without}')
# Fit with column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Compare the error with & without column
print(f'Drop-Column Importance for {column}: {score_with - score_without}')
```
### 3. Permutation Importance
Permutation Importance is a good compromise between Feature Importance based on impurity reduction (which is the fastest) and Drop-Column Importance (which is the "best").
[The ELI5 library documentation explains,](https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html)
> Importance can be measured by looking at how much the score (accuracy, F1, R^2, etc. - any score we’re interested in) decreases when a feature is not available.
>
> To do that one can remove feature from the dataset, re-train the estimator and check the score. But it requires re-training an estimator for each feature, which can be computationally intensive. ...
>
>To avoid re-training the estimator we can remove a feature only from the test part of the dataset, and compute score without using this feature. It doesn’t work as-is, because estimators expect feature to be present. So instead of removing a feature we can replace it with random noise - feature column is still there, but it no longer contains useful information. This method works if noise is drawn from the same distribution as original feature values (as otherwise estimator may fail). The simplest way to get such noise is to shuffle values for a feature, i.e. use other examples’ feature values - this is how permutation importance is computed.
>
>The method is most suitable for computing feature importances when a number of columns (features) is not huge; it can be resource-intensive otherwise.
### Do-It-Yourself way, for intuition
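Here is a minimal, self-contained sketch of the idea (on synthetic data rather than the waterpumps set, which is an assumption on my part): fit a model, then shuffle one feature column in the validation set and measure how much the score drops. A big drop means the model was relying on that column.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real data: informative columns come first (shuffle=False)
X, y = make_classification(n_samples=2000, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=42)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_tr, y_tr)
baseline = model.score(X_va, y_va)

rng = np.random.default_rng(42)
drops = []
for col in range(X_va.shape[1]):
    X_shuffled = X_va.copy()
    # permute just this one column, leaving everything else intact
    X_shuffled[:, col] = rng.permutation(X_shuffled[:, col])
    drops.append(baseline - model.score(X_shuffled, y_va))
    print(f'feature {col}: importance = {drops[-1]:.4f}')
```

Note that nothing gets retrained: we only re-score the already-fitted model, which is what makes permutation importance cheaper than drop-column importance.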
### With eli5 library
For more documentation on using this library, see:
- [eli5.sklearn.PermutationImportance](https://eli5.readthedocs.io/en/latest/autodocs/sklearn.html#eli5.sklearn.permutation_importance.PermutationImportance)
- [eli5.show_weights](https://eli5.readthedocs.io/en/latest/autodocs/eli5.html#eli5.show_weights)
- [scikit-learn user guide, `scoring` parameter](https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules)
eli5 doesn't work with pipelines.
```
# Ignore warnings
```
### We can use importances for feature selection
For example, we can remove features with zero importance. The model trains faster and the score does not decrease.
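One way to sketch that workflow (again on synthetic data, and using scikit-learn's own `permutation_importance` from `sklearn.inspection` rather than eli5, which is my substitution): compute the importances, keep only the features whose mean importance is positive, and refit.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 20 columns, but only 5 carry signal (informative columns first: shuffle=False)
X, y = make_classification(n_samples=1500, n_features=20, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1).fit(X_tr, y_tr)
# n_repeats shuffles each column several times and averages the score drop
perm = permutation_importance(rf, X_va, y_va, n_repeats=5, random_state=0)

# Keep only the features whose permutation importance is positive
mask = perm.importances_mean > 0
print(f'kept {mask.sum()} of {mask.size} features')

rf_small = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
rf_small.fit(X_tr[:, mask], y_tr)
small_score = rf_small.score(X_va[:, mask], y_va)
print('score on all features: ', rf.score(X_va, y_va))
print('score on kept features:', small_score)
```

The smaller model trains faster, and the validation score should stay roughly the same because the dropped columns carried little usable signal.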
# Use xgboost for gradient boosting
## Overview
In the Random Forest lesson, you learned this advice:
#### Try Tree Ensembles when you do machine learning with labeled, tabular data
- "Tree Ensembles" means Random Forest or **Gradient Boosting** models.
- [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data.
- Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features.
- A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or **[boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw)** (Gradient Boosting).
- Random Forest's advantage: may be less sensitive to hyperparameters. **Gradient Boosting's advantage:** may get better predictive accuracy.
Like Random Forest, Gradient Boosting uses ensembles of trees. But the details of the ensembling technique are different:
### Understand the difference between boosting & bagging
Boosting (used by Gradient Boosting) is different than Bagging (used by Random Forests).
Here's an excerpt from [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8.2.3, Boosting:
>Recall that bagging involves creating multiple copies of the original training data set using the bootstrap, fitting a separate decision tree to each copy, and then combining all of the trees in order to create a single predictive model.
>
>**Boosting works in a similar way, except that the trees are grown _sequentially_: each tree is grown using information from previously grown trees.**
>
>Unlike fitting a single large decision tree to the data, which amounts to _fitting the data hard_ and potentially overfitting, the boosting approach instead _learns slowly._ Given the current model, we fit a decision tree to the residuals from the model.
>
>We then add this new decision tree into the fitted function in order to update the residuals. Each of these trees can be rather small, with just a few terminal nodes. **By fitting small trees to the residuals, we slowly improve fˆ in areas where it does not perform well.**
>
>Note that in boosting, unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown.
This high-level overview is all you need to know for now. If you want to go deeper, we recommend you watch the StatQuest videos on gradient boosting!
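The residual-fitting loop described in the excerpt can be sketched in a few lines (a toy regression with small trees, my illustration rather than the lesson's classifier): each round fits a shallow tree to what the current ensemble still gets wrong, then nudges the prediction toward the target.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

learning_rate = 0.1
prediction = np.zeros_like(y)       # start from f_hat = 0
trees = []
for _ in range(100):
    residuals = y - prediction      # what the ensemble still gets wrong
    # "rather small, with just a few terminal nodes", as the excerpt says
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)   # slowly improve f_hat
    trees.append(tree)

mse = np.mean((y - prediction) ** 2)
print(round(mse, 4))
```

Note how each tree depends on all the trees grown before it, via the residuals; that sequential dependence is exactly what distinguishes boosting from bagging.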
Let's write some code. We have lots of options for which libraries to use:
#### Python libraries for Gradient Boosting
- [scikit-learn Gradient Tree Boosting](https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting) — slower than other libraries, but [the new version may be better](https://twitter.com/amuellerml/status/1129443826945396737)
- Anaconda: already installed
- Google Colab: already installed
- [xgboost](https://xgboost.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://xiaoxiaowang87.github.io/monotonicity_constraint/)
- Anaconda, Mac/Linux: `conda install -c conda-forge xgboost`
- Windows: `conda install -c anaconda py-xgboost`
- Google Colab: already installed
- [LightGBM](https://lightgbm.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://blog.datadive.net/monotonicity-constraints-in-machine-learning/)
- Anaconda: `conda install -c conda-forge lightgbm`
- Google Colab: already installed
- [CatBoost](https://catboost.ai/) — can accept missing values and use [categorical features](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html) without preprocessing
- Anaconda: `conda install -c conda-forge catboost`
- Google Colab: `pip install catboost`
In this lesson, you'll use a new library, xgboost — but it has an API that's almost the same as scikit-learn, so it won't be a hard adjustment!
#### [XGBoost Python API Reference: Scikit-Learn API](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn)
#### [Avoid Overfitting By Early Stopping With XGBoost In Python](https://machinelearningmastery.com/avoid-overfitting-by-early-stopping-with-xgboost-in-python/)
Why is early stopping better than a For loop, or GridSearchCV, to optimize `n_estimators`?
With early stopping, if `n_iterations` is our number of iterations, then we fit `n_iterations` decision trees.
With a for loop, or GridSearchCV, we'd fit `sum(range(1, n_iterations+1))` trees.
But it doesn't work well with pipelines. You may need to re-run multiple times with different values of other parameters such as `max_depth` and `learning_rate`.
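To make the tree-count arithmetic above concrete (assuming, for illustration, we want to consider up to 100 trees):

```python
n_iterations = 100

# Early stopping: one training run that grows at most n_iterations trees
trees_early_stopping = n_iterations

# A for loop / GridSearchCV over n_estimators = 1..100 refits from scratch
# each time, so the k-th fit grows k trees
trees_grid_search = sum(range(1, n_iterations + 1))

print(trees_early_stopping)   # 100
print(trees_grid_search)      # 5050
```

So searching n_estimators exhaustively costs roughly n²/2 trees, versus n with early stopping.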
#### XGBoost parameters
- [Notes on parameter tuning](https://xgboost.readthedocs.io/en/latest/tutorials/param_tuning.html)
- [Parameters documentation](https://xgboost.readthedocs.io/en/latest/parameter.html)
### Try adjusting these hyperparameters
#### Random Forest
- class_weight (for imbalanced classes)
- max_depth (usually high, can try decreasing)
- n_estimators (too low underfits, too high wastes time)
- min_samples_leaf (increase if overfitting)
- max_features (decrease for more diverse trees)
#### Xgboost
- scale_pos_weight (for imbalanced classes)
- max_depth (usually low, can try increasing)
- n_estimators (too low underfits, too high wastes time/overfits) — Use Early Stopping!
- learning_rate (too low underfits, too high overfits)
For more ideas, see [Notes on Parameter Tuning](https://xgboost.readthedocs.io/en/latest/tutorials/param_tuning.html) and [DART booster](https://xgboost.readthedocs.io/en/latest/tutorials/dart.html).
## Challenge
You will use your portfolio project dataset for all assignments this sprint. Complete these tasks for your project, and document your work.
- Continue to clean and explore your data. Make exploratory visualizations.
- Fit a model. Does it beat your baseline?
- Try xgboost.
- Get your model's permutation importances.
You should try to complete an initial model today, because the rest of the week, we're making model interpretation visualizations.
But, if you aren't ready to try xgboost and permutation importances with your dataset today, you can practice with another dataset instead. You may choose any dataset you've worked with previously.
---
# Partial Correlation
The purpose of this notebook is to understand how to compute the [partial correlation](https://en.wikipedia.org/wiki/Partial_correlation) between two variables, $X$ and $Y$, given a third, $Z$. In particular, these variables are assumed to be Gaussian (or, in general, multivariate Gaussian).
Why is it important to estimate partial correlations? The primary reason for estimating a partial correlation is to use it to detect for [confounding](https://en.wikipedia.org/wiki/Confounding_variable) variables during causal analysis.
## Simulation
Let's start out by simulating 3 data sets. Graphically, these data sets come from the graphs represented by the following.
* $X \rightarrow Z \rightarrow Y$ (serial)
* $X \leftarrow Z \rightarrow Y$ (diverging)
* $X \rightarrow Z \leftarrow Y$ (converging)
```
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import warnings
warnings.filterwarnings('ignore')
plt.style.use('ggplot')
def get_serial_graph():
g = nx.DiGraph()
g.add_node('X')
g.add_node('Y')
g.add_node('Z')
g.add_edge('X', 'Z')
g.add_edge('Z', 'Y')
return g
def get_diverging_graph():
g = nx.DiGraph()
g.add_node('X')
g.add_node('Y')
g.add_node('Z')
g.add_edge('Z', 'X')
g.add_edge('Z', 'Y')
return g
def get_converging_graph():
g = nx.DiGraph()
g.add_node('X')
g.add_node('Y')
g.add_node('Z')
g.add_edge('X', 'Z')
g.add_edge('Y', 'Z')
return g
g_serial = get_serial_graph()
g_diverging = get_diverging_graph()
g_converging = get_converging_graph()
p_serial = nx.nx_agraph.graphviz_layout(g_serial, prog='dot', args='-Kcirco')
p_diverging = nx.nx_agraph.graphviz_layout(g_diverging, prog='dot', args='-Kcirco')
p_converging = nx.nx_agraph.graphviz_layout(g_converging, prog='dot', args='-Kcirco')
fig, ax = plt.subplots(3, 1, figsize=(5, 5))
nx.draw(g_serial, pos=p_serial, with_labels=True, node_color='#e0e0e0', node_size=800, arrowsize=20, ax=ax[0])
nx.draw(g_diverging, pos=p_diverging, with_labels=True, node_color='#e0e0e0', node_size=800, arrowsize=20, ax=ax[1])
nx.draw(g_converging, pos=p_converging, with_labels=True, node_color='#e0e0e0', node_size=800, arrowsize=20, ax=ax[2])
ax[0].set_title('Serial')
ax[1].set_title('Diverging')
ax[2].set_title('Converging')
plt.tight_layout()
```
In the serial graph, `X` causes `Z` and `Z` causes `Y`. In the diverging graph, `Z` causes both `X` and `Y`. In the converging graph, `X` and `Y` cause `Z`. Below, the serial, diverging, and converging data sets are named S, D, and C, respectively.
Note that in the serial graph, the data is sampled as follows.
* $X \sim \mathcal{N}(0, 1)$
* $Z \sim 2 + 1.8 \times X$
* $Y \sim 5 + 2.7 \times Z$
In the diverging graph, the data is sampled as follows.
* $Z \sim \mathcal{N}(0, 1)$
* $X \sim 4.3 + 3.3 \times Z$
* $Y \sim 5.0 + 2.7 \times Z$
Lastly, in the converging graph, the data is sampled as follows.
* $X \sim \mathcal{N}(0, 1)$
* $Y \sim \mathcal{N}(5.5, 1)$
* $Z \sim 2.0 + 0.8 \times X + 1.2 \times Y$
Note the ordering of the sampling with the variables follows the structure of the corresponding graph.
```
import numpy as np
np.random.seed(37)
def get_error(N=10000, mu=0.0, std=0.2):
return np.random.normal(mu, std, N)
def to_matrix(X, Z, Y):
return np.concatenate([
X.reshape(-1, 1),
Z.reshape(-1, 1),
Y.reshape(-1, 1)], axis=1)
def get_serial(N=10000, e_mu=0.0, e_std=0.2):
X = np.random.normal(0, 1, N) + get_error(N, e_mu, e_std)
Z = 2 + 1.8 * X + get_error(N, e_mu, e_std)
Y = 5 + 2.7 * Z + get_error(N, e_mu, e_std)
return to_matrix(X, Z, Y)
def get_diverging(N=10000, e_mu=0.0, e_std=0.2):
Z = np.random.normal(0, 1, N) + get_error(N, e_mu, e_std)
X = 4.3 + 3.3 * Z + get_error(N, e_mu, e_std)
Y = 5 + 2.7 * Z + get_error(N, e_mu, e_std)
return to_matrix(X, Z, Y)
def get_converging(N=10000, e_mu=0.0, e_std=0.2):
X = np.random.normal(0, 1, N) + get_error(N, e_mu, e_std)
Y = np.random.normal(5.5, 1, N) + get_error(N, e_mu, e_std)
Z = 2 + 0.8 * X + 1.2 * Y + get_error(N, e_mu, e_std)
return to_matrix(X, Z, Y)
S = get_serial()
D = get_diverging()
C = get_converging()
```
## Computation
For the three datasets, `S`, `D`, and `C`, we want to compute the partial correlation between $X$ and $Y$ given $Z$. The way to do this is as follows.
* Regress $X$ on $Z$ and also $Y$ on $Z$
* $X = b_X + w_X * Z$
* $Y = b_Y + w_Y * Z$
* With the new weights $(b_X, w_X)$ and $(b_Y, w_Y)$, predict $X$ and $Y$.
* $\hat{X} = b_X + w_X * Z$
* $\hat{Y} = b_Y + w_Y * Z$
* Now compute the residuals between the true and predicted values.
* $R_X = X - \hat{X}$
* $R_Y = Y - \hat{Y}$
* Finally, compute the Pearson correlation between $R_X$ and $R_Y$.
The correlation between the residuals is the partial correlation and runs from -1 to +1. More interesting is the test of significance. If $p > \alpha$, where $\alpha \in \{0.1, 0.05, 0.01\}$, then assume independence. For example, if $\alpha = 0.01$ and $p = 0.20$, then, since $p > \alpha$, we assume $X$ is conditionally independent of $Y$ given $Z$; if instead $p = 0.002 < \alpha$, we would conclude they are conditionally dependent.
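For reference (this formula is not stated in the original), the significance test implemented in `get_cond_indep_test` is the Fisher z-transformation of the partial correlation:

```latex
z \;=\; \sqrt{N-3}\,\left|\tfrac{1}{2}\ln\frac{1+\hat{\rho}_{XY\cdot Z}}{1-\hat{\rho}_{XY\cdot Z}}\right|,
\qquad \text{reject conditional independence when } z > \Phi^{-1}\!\left(1-\tfrac{\alpha}{2}\right)
```

Some references use $\sqrt{N - |Z| - 3}$ when conditioning on $|Z|$ variables; the code here uses $\sqrt{N-3}$.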
```
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from scipy.stats import pearsonr
from scipy import stats
def get_cond_indep_test(c_xy_z, N=10000, alpha=0.01):
    # two-sided critical value from the standard normal distribution
    point = stats.norm.ppf(1 - (alpha / 2.0))
    # Fisher z-transformation of the partial correlation
    z_transform = np.sqrt(N - 3) * np.abs(0.5 * np.log((1 + c_xy_z) / (1 - c_xy_z)))
    return z_transform, point, z_transform > point
def get_partial_corr(M):
X = M[:, 0]
Z = M[:, 1].reshape(-1, 1)
Y = M[:, 2]
mXZ = LinearRegression()
mXZ.fit(Z, X)
pXZ = mXZ.predict(Z)
rXZ = X - pXZ
mYZ = LinearRegression()
mYZ.fit(Z, Y)
pYZ = mYZ.predict(Z)
rYZ = Y - pYZ
c_xy, p_xy = pearsonr(X, Y)
c_xy_z, p_xy_z = pearsonr(rXZ, rYZ)
return c_xy, p_xy, c_xy_z, p_xy_z
```
## Serial graph data
For $X \rightarrow Z \rightarrow Y$, note that the marginal correlation is high (0.99) and significant (p < 0.01). However, given Z, the correlation between X and Y vanishes to -0.01 (p > 0.01). Note the conditional independence test fails to reject the null hypothesis.
```
c_xy, p_xy, c_xy_z, p_xy_z = get_partial_corr(S)
print(f'corr_xy={c_xy:.5f}, p_xy={p_xy:.5f}')
print(f'corr_xy_z={c_xy_z:.5f}, p_xy_z={p_xy_z:.5f}')
print(get_cond_indep_test(c_xy_z))
```
## Diverging graph data
For $X \leftarrow Z \rightarrow Y$, note that the marginal correlation is high (0.99) and significant (p < 0.01). However, given Z, the correlation between X and Y vanishes to 0.01 (p > 0.01). Note the conditional independence test fails to reject the null hypothesis.
```
c_xy, p_xy, c_xy_z, p_xy_z = get_partial_corr(D)
print(f'corr_xy={c_xy:.5f}, p_xy={p_xy:.5f}')
print(f'corr_xy_z={c_xy_z:.5f}, p_xy_z={p_xy_z:.5f}')
print(get_cond_indep_test(c_xy_z))
```
## Converging graph data
For $X \rightarrow Z \leftarrow Y$, note that the marginal correlation is low (-0.00) and insignificant (p > 0.01). However, given Z, the correlation between X and Y grows in magnitude to -0.96 and becomes significant (p < 0.01)! Note the conditional independence test rejects the null hypothesis.
```
c_xy, p_xy, c_xy_z, p_xy_z = get_partial_corr(C)
print(f'corr_xy={c_xy:.5f}, p_xy={p_xy:.5f}')
print(f'corr_xy_z={c_xy_z:.5f}, p_xy_z={p_xy_z:.5f}')
print(get_cond_indep_test(c_xy_z))
```
## Statistically Distinguishable
The `serial` and `diverging` graphs are said to be `statistically indistinguishable` since, in both, $X$ and $Y$ are `conditionally independent` given $Z$. However, the `converging` graph is `statistically distinguishable` since it is the only graph where $X$ and $Y$ are `conditionally dependent` given $Z$.
---
# Starbucks Capstone Challenge
### Introduction
This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
### Example
To give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
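That distinction (completed vs. actually influenced) can be sketched as a small helper, with hypothetical names and day-based times; the real transcript records time in hours, and the real analysis works on DataFrames rather than plain lists.

```python
def offer_influenced(received_t, viewed_t, transactions,
                     difficulty=10, validity_days=10):
    """transactions: list of (time, amount) pairs; times in days.
    viewed_t is None if the customer never opened the offer."""
    expires_t = received_t + validity_days
    # spend accumulated inside the validity window determines completion
    spend = sum(a for t, a in transactions if received_t <= t <= expires_t)
    completed = spend >= difficulty
    # the customer was only influenced if they actually saw the offer in time
    influenced = (completed and viewed_t is not None
                  and received_t <= viewed_t <= expires_t)
    return completed, influenced

# The "buy 10 dollars get 2 off" example: completed, but never viewed
print(offer_influenced(0, None, [(3, 15)]))        # (True, False)
print(offer_influenced(0, 2, [(3, 6), (8, 6)]))    # (True, True)
```

The first call reproduces the tricky case above: an offer-completion record exists, yet the offer had no influence because it was never viewed.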
### Cleaning
This makes data cleaning especially important and tricky.
You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers.
### Final Advice
Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A).
# Data Sets
The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed
Here is the schema and explanation of each variable in the files:
**portfolio.json**
* id (string) - offer id
* offer_type (string) - type of offer ie BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)
**profile.json**
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income
**transcript.json**
* event (str) - record description (ie transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
**Note:** If you are using the workspace, you will need to go to the terminal and run the command `conda update pandas` before reading in the files. This is because the version of pandas in the workspace cannot read in the transcript.json file correctly, but the newest version of pandas can. You can access the terminal from the orange icon in the top left of this notebook.
You can see how to access the terminal and how the install works using the two images below. First you need to access the terminal:
<img src="pic1.png"/>
Then you will want to run the above command:
<img src="pic2.png"/>
Finally, when you enter back into the notebook (use the jupyter icon again), you should be able to run the below cell without any errors.
## Problem Statement
In this project I will determine how likely a customer is to complete an offer. The end goals are:
1. To determine whether sending more offers leads to a higher completion rate.
2. To decide whether customers with a lower completion rate should still be sent offers.
## Exploratory Data Analysis
### Read Data Files
```
#######Run This
import pandas as pd
import numpy as np
import math
import json
import os
%matplotlib inline
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
```
### Save The Data Files
```
if not os.path.isdir('explore'):
os.makedirs('explore')
def data_info(data, filename):
path = os.path.join('explore', filename)
if not os.path.isfile(path):
pd.DataFrame(data).to_csv(path)
print(data.shape)
data_info(portfolio, 'portfolio.csv')
data_info(profile, 'profile.csv')
data_info(transcript, 'transcript.csv')
```
### Clean Portfolio
By looking at the portfolio file we can see that the channels column holds a grouped list of channels, so we'll use sklearn's MultiLabelBinarizer to unpack the channels column into one binary column per channel and then remove the original column from the DataFrame
```
from sklearn.preprocessing import MultiLabelBinarizer
cleaned_portfolio = portfolio.copy()
cleaned_portfolio.rename(columns={'id':'offer_id'}, inplace=True)
s = cleaned_portfolio['channels']
mlb = MultiLabelBinarizer()
channels = pd.DataFrame(mlb.fit_transform(s),columns=mlb.classes_, index=cleaned_portfolio.index)
cleaned_portfolio = cleaned_portfolio.join(channels)
cleaned_portfolio.drop(['channels'], axis=1, inplace=True)
cleaned_portfolio
```
### Clean Profile
By looking at the profile data we can see that there are missing age values; we also observe that the people with missing age values have missing income values as well. Therefore, for now we'll remove all the rows with NaN values (~2000), draw inferences, and then later add the rows with missing values back in and compare results
```
#profile['became_member_on'] = pd.to_datetime(profile['became_member_on'], format='%Y%m%d')
profile.rename(columns={"id":"person"}, inplace=True)
undefined_group = None
cleaned_profile = None
#cleaning profile and dividing it into cleaned_profile and undefined_group
undefined_group = profile.copy()
undefined_group['gender'] = undefined_group['gender'].fillna('U')
undefined_group = undefined_group.loc[undefined_group['gender'] == 'U'].reset_index(drop=True)
cleaned_profile = profile.dropna().reset_index(drop=True)
cleaned_profile
```
### Clean Transcript
From the transcript we can see that the value column holds one of two things: either an offer id or the amount spent in that transaction. We'll split value into 2 columns, offer_id and amount, and then drop the value column
```
cleaned_transcript = transcript.copy()
value = cleaned_transcript['value']
cleaned_transcript['amount'] = [float(i['amount']) if i.get('amount') else 0.0 for i in value]  # float, so cents aren't truncated
cleaned_transcript['offer_id'] = [i['offer_id'] if i.get('offer_id') else (i['offer id'] if i.get('offer id') else '0') for i in value]
cleaned_transcript.drop(['value'], axis=1, inplace=True)
#drop the profile which have no gender or income
cleaned_transcript = cleaned_transcript[~cleaned_transcript.person.isin(undefined_group.person)]
sort_df = cleaned_transcript.sort_values(by=['person', 'time'])
sort_df
```
### Get Data
method: get_valid_data
params: df {the set of all events for a person, e.g. offer received, viewed or completed}
---- The idea is to parse the transaction entries for a person and split them into offer received, offer viewed and offer completed
---- Then create a new column ['g'] which stores the cumulative count for every entry; if offer id 'a' was offered twice, its g column stores the count like this:
offer_id   g
a          0
a          1
The idea behind g is that it will help us merge on [person, offer_id, g] and will prevent duplicates
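The cumcount trick in isolation, on toy data (column names illustrative): merging on (offer_id, g) pairs the i-th receipt with the i-th view instead of producing a cross product.

```python
import pandas as pd

received = pd.DataFrame({"offer_id": ["a", "a"], "time": [0, 168]})
viewed   = pd.DataFrame({"offer_id": ["a", "a"], "time": [6, 190]})
received["g"] = received.groupby("offer_id").cumcount()
viewed["g"]   = viewed.groupby("offer_id").cumcount()

# Joining on (offer_id, g) yields 2 rows; a plain merge on offer_id alone
# would yield 4 (every receipt paired with every view).
paired = received.merge(viewed, on=["offer_id", "g"], suffixes=("_recv", "_view"))
print(len(paired))  # 2
```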
```
def get_valid_data(df):
offer_received = df.loc[df['event'] == 'offer received'].reset_index(drop=True)
offer_viewed = df.loc[df['event'] == 'offer viewed'].reset_index(drop=True)
offer_completed = df.loc[df['event'] == 'offer completed'].reset_index(drop=True)
offer_received['g'] = offer_received.groupby('offer_id').cumcount()
offer_viewed['g'] = offer_viewed.groupby('offer_id').cumcount()
offer_completed['g'] = offer_completed.groupby('offer_id').cumcount()
res = pd.merge(offer_received, offer_viewed, on=['person', 'offer_id', 'g'], how='outer')
res = pd.merge(res, offer_completed, on=['person', 'offer_id', 'g'], how='outer')
return res
offers_completed = sort_df.groupby('person').apply(lambda x: get_valid_data(x))
offers_completed = offers_completed.dropna()
offers_completed = offers_completed.reset_index(drop=True)
offers_completed
```
### Combine Portfolio with the offers completed for every entry
method: valid_offer_completed
parameters: df {offers completed}, cleaned_portfolio {offer metadata such as offer_type and duration}
##### Functions
1. Drop columns like amount_x, amount_y since they only hold 0, and then drop the event columns (offer received etc.)
2. Merge cleaned_portfolio[offer_type, duration] into df on offer_id
3. Drop rows where a user completed an offer before viewing it, i.e. keep only those where time_x <= time_y <= time
```
def valid_offer_completed(df, cleaned_portfolio):
df = df.rename(columns={"offer_id_x":"offer_id"})
offers = cleaned_portfolio[['offer_id', 'offer_type', 'duration']]
df = df.merge(offers,how='left', on='offer_id')
df = df.drop(['amount_x', 'amount_y', 'amount', 'event_x', 'event_y', 'event', 'g'], axis=1).reset_index(drop=True)
df = df[['person','offer_id','time_x','time_y', 'time', 'offer_type', 'duration']]
df = df[(df.time_x <= df.time_y) & (df.time_y <= df.time)]
return df
valid = valid_offer_completed(offers_completed, cleaned_portfolio)
valid = valid.reset_index(drop=True)
valid
```
### Find Informational Offers
Informational offers have no "offer completed" record, so we need to derive a completion time ourselves, because these rows must later be combined with the valid dataframe.
We calculate the completion time from the duration of the informational offer
```
def info_offer(df):
offer_received = df.loc[df['event'] == 'offer received'].reset_index(drop=True)
offer_viewed = df.loc[df['event'] == 'offer viewed'].reset_index(drop=True)
offer_received['g'] = offer_received.groupby('offer_id').cumcount()
offer_viewed['g'] = offer_viewed.groupby('offer_id').cumcount()
res = pd.merge(offer_received, offer_viewed, on=['person', 'offer_id', 'g'], how='outer')
offers = cleaned_portfolio[['offer_id', 'offer_type', 'duration']]
res = res.merge(offers,how='left', on='offer_id')
res['time'] = res['time_x'] + res['duration'] * 24
res = res.dropna()
res = res[res.time_x <= res.time_y]
res['response'] = np.where(res.time_y > res.time , 0, 1)
res = res.loc[res.response == 1]
res = res.drop(['response', 'amount_x', 'amount_y', 'event_x', 'event_y', 'g'], axis=1).reset_index(drop=True)
res = res[['person','offer_id','time_x','time_y', 'time', 'offer_type', 'duration']]
return res
info_df = sort_df[sort_df['offer_id'].isin(['3f207df678b143eea3cee63160fa8bed', '5a8bc65990b245e5a138643cd4eb9837'])]
info_data = info_df.groupby('person').apply(lambda x: info_offer(x))
info_data =info_data.reset_index(drop=True)
info_data
```
### Combine the valid and information dataframes
```
complete = pd.concat([valid, info_data], ignore_index=True, sort=False)
complete
```
### Fill Profile
method: fill_profile
params: gd {grouped data: all the transaction records per person}, df {the customer profile}
1. Find the number of valid offers completed
2. Append the total offers completed for every person to the customer profile
```
df = None
def fill_profile(gd, df):
grouped_data = gd.groupby(['person'])
invalid = []
for index, row in df.iterrows():
if row['person'] in grouped_data.groups.keys():
offers = grouped_data.get_group(row['person'])['offer_type'].value_counts()
df.at[index, 'offers completed'] = offers.sum()
for offer, count in offers.items():
df.at[index, offer] = count
else:
invalid.append(row['person'])
print(len(invalid))
df = df.fillna(0)
return df
df = fill_profile(complete, cleaned_profile)
df
```
### Find Data
method: find_data
parameters: gd {grouped data: all the transaction records per person}, df {the customer profile}
1. Find the total number of offers received for every customer from the original transcript, not the updated one
2. Calculate the completion rate
3. Append the new details to the customer profile dataframe for each user
```
def find_data(gd, df):
gd = gd[(gd.event == 'offer received')].reset_index(drop=True)
grouped_data = gd.groupby(['person'])
for index, row in df.iterrows():
if row['person'] in grouped_data.groups.keys():
events = grouped_data.get_group(row['person'])['event'].count()
df.at[index, 'offers received'] = events
df.at[index, 'completion rate'] = row['offers completed'] * 100 / events
return df
df = find_data(sort_df, df)
df
```
### Find amount
1. Find the total amount spent by each user
```
def find_amount(df):
amount = pd.DataFrame()
values = df.groupby(['person']).sum()
amount['person'] = values.index
amount['total amount'] = values.amount.to_numpy()
return amount
total_amount = find_amount(cleaned_transcript)
df = df.merge(total_amount, on='person')
df = df.reset_index(drop=True)
########### Convert gender to M-0/F-1/O-2
df['gender'] = df['gender'].map({'M': 0, 'F': 1, 'O': 2})
df = df.fillna(0)
df
data_info(df, 'complete_profile_with_missing_values.csv')
data_info(complete, 'transcript_with_missing_values.csv')
```
# Data Visualization
### Visualising the Data in 1D Space
```
import matplotlib.pyplot as plt
import matplotlib
complete.hist(bins=15, color='steelblue', edgecolor='black', linewidth=1.0,
xlabelsize=8, ylabelsize=8, grid=False)
plt.tight_layout(rect=(0, 0, 1.5, 1.5))
df.hist(bins=15, color='steelblue', edgecolor='black', linewidth=2.0,
xlabelsize=20, ylabelsize=20, grid=False)
plt.tight_layout(rect=(0, 0, 5.8, 10))
import seaborn as sns
f, ax = plt.subplots(figsize=(12, 8))
corr = df.corr()
hm = sns.heatmap(round(corr,2), annot=True, ax=ax, cmap="coolwarm",fmt='.2f',
linewidths=.05)
f.subplots_adjust(top=0.93)
t= f.suptitle('Profile Attributes Correlation Heatmap', fontsize=14)
plt.figure(figsize=(15,4))
plt.plot(df['completion rate'].value_counts().sort_index())
```
# Unsupervised Learning
We will use 2 unsupervised learning algorithms to check whether our data is actually separable into approximately 5 clusters.
If we get a good number of clusters (4 or 5), we can go ahead and label all the data points according to our likeliness logic (discussed later)
```
df.index = df['person']
df = df.drop(['person'], axis = 1)
```
### Normalizing the data
Using sklearn's Min Max Scaler we will normalize the data in the range of 0 to 1 so that it becomes easier to work with supervised or unsupervised algorithms
```
from sklearn.preprocessing import MinMaxScaler
def normalize_data(df):
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(df.astype(float)))
df_scaled.columns = df.columns
df_scaled.index = df.index
return df_scaled
df_scaled = normalize_data(df)
df_scaled
```
### Agglomerative Clustering
1. Plot the dendrogram
2. Based on the dendrogram, determine the distance threshold
3. Use the distance threshold to find the number of clusters
4. Check the distribution of clusters
```
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram
X = df_scaled.T.values #Transpose values
Y = pdist(X)
Z = linkage(Y)
plt.subplots(figsize=(18,5))
dendrogram(Z, labels = df_scaled.columns)
```
From the dendrogram we can determine the distance threshold, i.e. the line where we can cut the graph, at about 40 on the y axis
```
from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters=None, affinity='euclidean', linkage='ward', distance_threshold=40)
agg_clusters = np.array(cluster.fit_predict(df_scaled))
unique, counts = np.unique(agg_clusters, return_counts=True)
dict(zip(unique, counts))
```
### K-means Clustering
1. Apply kmeans and find out the optimal number of clusters using the elbow method
2. Analyse the number of clusters formed and select the k where the clusters are evenly distributed
```
from sklearn.cluster import KMeans
from sklearn import metrics
from scipy.spatial.distance import cdist
import numpy as np
import matplotlib.pyplot as plt
# create new plot and data
plt.plot()
X = df_scaled
# k means determine k
distortions = []
for k in range(1,10):
km = KMeans(n_clusters=k, n_init=30)
km.fit(X)
wcss = km.inertia_
km_clusters = km.predict(X)
unique, counts = np.unique(km_clusters, return_counts=True)
print("Cluster ", k, dict(zip(unique, counts)))
distortions.append(wcss)
# Plot the elbow
plt.plot(range(1,10), distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
From the k-means runs we can see that k = 4 and k = 5 give fairly even cluster sizes, and the elbow graph also suggests that around 5 clusters is suitable
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5)
kmeans.fit(df_scaled)
labels = kmeans.predict(df_scaled)
centroids = kmeans.cluster_centers_
unique, counts = np.unique(labels, return_counts=True)
print("Cluster 5",dict(zip(unique, counts)))
```
### Unsupervised Learning Algorithm Results
Agglomerative clustering gives really bad results because of the variability in our dataset, whereas k-means gives an average result. The elbow graph is not well formed, but we do get an idea of the separability of our dataset. Therefore we will now use supervised learning algorithms to properly label our data
## Supervised Learning for Multi-Label Classification
### Determining the likeliness
The logic for determining likeliness is a weighted sum of columns. I determine likeliness from the offer completion rate and the three offer types, i.e. bogo, informational and discount. We assign the completion rate a weight of 3 and the offer types a combined weight of 1. We normalize the dataframe so that the values in the 3 offer-type columns are on the same scale and the weighting logic can be applied.
total weight = 3 (completion rate) + 1 (offer types) = 4
score = {(bogo + informational + discount)/3 + completion_rate*3} / total_weight
Label 4 - Very Likely (score >= 80)
Label 3 - Likely (score >= 60)
Label 2 - Neutral, 50% chance (score >= 40)
Label 1 - Unlikely (score >= 20)
Label 0 - Very Unlikely (score < 20)
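A quick worked example of the weighting, with hypothetical normalized values (not taken from the actual dataframe):

```python
# Hypothetical normalized values for one customer (illustration only)
bogo, informational, discount = 0.5, 0.2, 0.8
completion_rate = 0.9

# score = {(bogo + informational + discount)/3 + completion_rate*3} / 4
score = ((bogo + informational + discount) / 3 + completion_rate * 3) / 4
print(round(score * 100))  # 80 -> Label 4, "Very Likely"
```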
```
def calculate_likeliness(rate):
if rate >= 80 and rate <= 100:
return 4
elif rate >= 60 and rate < 80:
return 3
elif rate >= 40 and rate < 60:
return 2
elif rate >= 20 and rate < 40:
return 1
else:
return 0
def likelihood(row):
    # positional indices into df_scaled's column order
    completion_rate = row[9]
    discount = row[7]
    informational = row[6]
    bogo = row[5]
    rate = ((discount + informational + bogo)/3 + completion_rate*3)/4
    return calculate_likeliness(rate * 100)
df_scaled['likeliness'] = df_scaled.apply(lambda x: likelihood(x), axis=1)
df_scaled
## we can see that the data is well distributed
df_scaled.likeliness.value_counts()
# drop the columns used for determining the likeliness, so that our supervised learning model cannot cheat
df_scaled = df_scaled.drop(['bogo', 'informational', 'discount', 'completion rate'], axis=1)
df_scaled
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
mat = df_scaled.values
X = mat[:,0:7]
Y = mat[:,7]
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
def find_accuracy(y_test, y_pred):
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
from sklearn.svm import LinearSVC
clf = LinearSVC(random_state=0, tol=1e-5)
clf.fit(X_train, y_train)
# print(clf.coef_)
# print(clf.intercept_)
y_pred = clf.predict(X_test)
find_accuracy(y_test, y_pred)
from sklearn.svm import SVC
svclassifier = SVC(kernel='rbf')
svclassifier.fit(X_train, y_train)
y_pred = svclassifier.predict(X_test)
find_accuracy(y_test, y_pred)
from sklearn.svm import SVC
svclassifier = SVC(kernel='poly')
svclassifier.fit(X_train, y_train)
y_pred = svclassifier.predict(X_test)
find_accuracy(y_test, y_pred)
from sklearn.svm import SVC
svclassifier = SVC(kernel='linear')
svclassifier.fit(X_train, y_train)
y_pred = svclassifier.predict(X_test)
find_accuracy(y_test, y_pred)
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier()
rf_classifier.fit(X_train, y_train)
rf_pred = rf_classifier.predict(X_test)
find_accuracy(y_test, rf_pred)
```
## Benchmark Model
The XGBoost algorithm is our benchmark model because it handles tasks like multi-label classification with ease and with very high accuracy
```
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
find_accuracy(y_test, predictions)
```
## Evaluating the Model
#### Now we will evaluate our random forest model
```
from sklearn import model_selection
from sklearn.metrics import mean_absolute_error, mean_squared_error
from math import sqrt
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, rf_pred))
print(classification_report(y_test, rf_pred))
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
scoring = 'accuracy'
results = model_selection.cross_val_score(rf_classifier, X, Y, cv=kfold, scoring=scoring)
print('Accuracy -val set: %.2f%% (%.2f)' % (results.mean()*100, results.std()))
print("MAE test score:", mean_absolute_error(y_test, rf_pred))
print("RMSE test score:", sqrt(mean_squared_error(y_test, rf_pred)))
```
#### We can observe that our model performs really well: its accuracy is nearly that of our benchmark model, with really low MAE and RMSE scores
## Now solving the same problem by filling the missing values
```
undefined = None
income_max = cleaned_profile.income.max()
undefined_group = undefined_group.fillna(income_max)
# undefined = pd.concat([undefined_group, cleaned_profile], ignore_index=True)
# undefined
complete_transcript = transcript.copy()
value = complete_transcript['value']
complete_transcript['amount'] = [float(i['amount']) if i.get('amount') else 0.0 for i in value]
complete_transcript['offer_id'] = [i['offer_id'] if i.get('offer_id') else (i['offer id'] if i.get('offer id') else '0') for i in value]
complete_transcript.drop(['value'], axis=1, inplace=True)
sort_df = complete_transcript.sort_values(by=['person', 'time'])
sort_df
users = sort_df.groupby('person').apply(lambda x: get_valid_data(x))
users = users.dropna()
users = users.reset_index(drop=True)
users
valid_df = valid_offer_completed(users, cleaned_portfolio)
valid_df
complete_info = sort_df[sort_df['offer_id'].isin(['3f207df678b143eea3cee63160fa8bed', '5a8bc65990b245e5a138643cd4eb9837'])]
complete_info_data = complete_info.groupby('person').apply(lambda x: info_offer(x))
complete_info_data =complete_info_data.reset_index(drop=True)
complete_info_data
complete_df = pd.concat([valid_df, complete_info_data], ignore_index=True)
complete_df
full_profile = profile.copy()
full_profile['gender'] = full_profile['gender'].fillna('U')
full_profile['income'] = full_profile['income'].fillna(income_max)
full_profile
df2 = fill_profile(complete_df, full_profile)
df2 = find_data(sort_df, df2)
df2 = df2.fillna(0)
df2
total_amount = find_amount(complete_transcript)
df2 = df2.merge(total_amount, on='person')
df2 = df2.reset_index(drop=True)
df2.index = df2['person']
df2 = df2.drop(['person'], axis = 1)
########### Convert gender to M-0/F-1/O-2/U-3
df2['gender'] = df2['gender'].map({'M': 0, 'F': 1, 'O': 2, 'U':3})
df2_scaled = normalize_data(df2)
df2_scaled
data_info(df2, 'complete_profile.csv')
data_info(complete_df, 'complete_transcript.csv')
df2_scaled['likeliness'] = df2_scaled.apply(lambda x: likelihood(x), axis=1)
df2_scaled
df2_scaled = df2_scaled.drop(['bogo', 'informational', 'discount', 'completion rate'], axis=1)
df2_scaled
mat2 = df2_scaled.values
X2 = mat2[:,0:7]
Y2 = mat2[:,7]
seed = 7
test_size = 0.20
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, Y2, test_size=test_size, random_state=seed)
##Random Forest
rf2_classifier = RandomForestClassifier()
rf2_classifier.fit(X2_train, y2_train)
rf2_pred = rf2_classifier.predict(X2_test)
find_accuracy(y2_test, rf2_pred)
## XGBoost Algorithm
model2 = XGBClassifier()
model2.fit(X2_train, y2_train)
y2_pred = model2.predict(X2_test)
predictions2 = [round(value) for value in y2_pred]
find_accuracy(y2_test, predictions2)
```
## Conclusion
In this project I have tried to determine how likely a user is to complete an offer. I used data visualization to explain some relations in the data. Then I used unsupervised learning techniques to determine how separable the data is and whether we can actually divide it into about 5 clusters. Next I determined the likeliness of every data point and removed the columns used to calculate it, so that our supervised learning model cannot deduce the label from those columns directly. Then I split the data into training and test sets and passed it to several SVM models with different kernels. I observed that tree models like Random Forest and XGBoost perform really well for multi-label tasks like this. Hence we will choose Gradient Boost as the algorithm of our choice and XGBoost as a benchmark model.
For evaluating our model we look at the confusion matrix, where high precision and high recall mean that our results have been labeled correctly.
Although we see very good accuracy for our models, that does not mean they are perfect; it simply means that our model has very little data for now, so classifying it is an easy task, and we also do not have very high-dimensional data. Multi-label classification is easy for low-dimensional data.
#### Missing Values vs Non-Missing Values
When we removed missing values from our data we found a good balance between the classes, but when we added those missing values back and performed inference on that data we noticed that the class imbalance increased. Otherwise, the performance is the same
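One way to quantify that shift is to compare normalized class frequencies side by side. A minimal sketch with toy label series standing in for the likeliness columns of the two dataframes (the values here are illustrative, not from the dataset):

```python
import pandas as pd

# Toy stand-ins for the likeliness labels of the two datasets
without_missing = pd.Series([4, 4, 3, 3, 2, 1, 0, 2, 3, 4])  # NaNs dropped
with_missing    = pd.Series([4, 4, 4, 4, 3, 3, 4, 2, 4, 4])  # NaNs imputed

# Side-by-side class proportions; absent classes become 0
balance = pd.concat(
    {"dropped NaNs": without_missing.value_counts(normalize=True),
     "imputed": with_missing.value_counts(normalize=True)},
    axis=1).fillna(0)
print(balance)
```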
```
df2['offers received'].describe()
df2['total amount'].describe()
df['likeliness'] = df_scaled['likeliness']
def find_info(df2, class_label):
label = df2[df2['likeliness'] == class_label]
print("Likeliness ==", class_label)
print("bogo", label.bogo.sum())
print("discount", label.discount.sum())
print("informational", label.informational.sum())
print("offers received", label['offers received'].sum())
print()
find_info(df, 4)
find_info(df, 3)
find_info(df, 2)
find_info(df, 1)
find_info(df, 0)
```
## Final Verdict
We saw that the Random Forest model performs better than the other models, except for XGBoost. We could also see an improvement in the random forest's performance when we added more data, i.e. when we imputed the missing values and added them to the training and test sets. Since our model is already so accurate, we didn't perform any hyperparameter tuning.
1. From the distribution above we can see that users who have received more informational offers have a lower completion rate than users who have received more bogo/discount offers
2. We can also see that the users who have received the most offers have a low completion rate
3. Lastly, age, income, became_member_on and total_amount have no influence over the completion rate
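Point 3 can be spot-checked from a fitted forest's feature importances. A standalone sketch with synthetic data (the feature names and the data are hypothetical; on the real frame you would read `rf_classifier.feature_importances_` against the columns of X):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic features: only feature 0 drives the label, so its importance
# should dominate while the "noise" features stay near zero
rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = (X[:, 0] > 0.5).astype(int)
rf = RandomForestClassifier(random_state=0).fit(X, y)

for name, imp in zip(["driver", "noise1", "noise2", "noise3"], rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```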
## Recommender System Algorithm
### Objective
We want to help consumers find attorneys. To surface attorneys to consumers, sales consultants often have to help attorneys describe their areas of practice (areas like Criminal Defense, Business or Personal Injury).
To expand their practices, attorneys can branch into related areas of practice. This can allow attorneys to help different customers while remaining within the bounds of their experience.
Attached is an anonymized dataset of attorneys and their specialties. The columns are anonymized attorney IDs and specialty IDs. Please design a process that returns the top 5 recommended practice areas for a given attorney with a set of specialties.
## Data
```
import pandas as pd
from sklearn.preprocessing import normalize
import numpy as np
# Import data
data = pd.read_excel('data.xlsx', 'data')
# View first few rows of the dataset
data.head()
```
## Data Exploration
```
# Information of the dataset
data.info()
# Check missing values
data.isnull().sum()
# Check duplicates
data.duplicated().sum()
# Check unique value count for the two ID's
data['attorney_id'].nunique(), data['specialty_id'].nunique()
# Check number of specialties per attorney
data.groupby('attorney_id')['specialty_id'].nunique().sort_values()
```
The number of specialties of an attorney ranges from 1 to 28.
```
# View a sample: an attorney with 28 specialties
data[data['attorney_id']==157715]
```
## Recommendation System
### Recommendation for Top K Practice Areas based on Similarity for Specialties
#### Step 1: Build the specialty-attorney matrix
```
# Build the specialty-attorney matrix
specialty_attorney = data.groupby(['specialty_id','attorney_id'])['attorney_id'].count().unstack(fill_value=0)
specialty_attorney = (specialty_attorney > 0).astype(int)
specialty_attorney
```
#### Step 2: Build specialty-specialty similarity matrix
```
# Build specialty-specialty similarity matrix
specialty_attorney_norm = normalize(specialty_attorney, axis=1)
similarity = np.dot(specialty_attorney_norm, specialty_attorney_norm.T)
df_similarity = pd.DataFrame(similarity, index=specialty_attorney.index, columns=specialty_attorney.index)
df_similarity
```
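The normalize-then-dot construction above is exactly cosine similarity, so sklearn's helper gives the same matrix — a handy cross-check, shown here on a toy incidence matrix:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import normalize

# Toy specialty-attorney incidence matrix (3 specialties x 3 attorneys)
M = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

# Row-normalize and take dot products, as in the cell above
manual = normalize(M, axis=1) @ normalize(M, axis=1).T
print(np.allclose(manual, cosine_similarity(M)))  # True
```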
#### Step 3: Find the Top K most similar specialties
```
# Find the top k most similar specialties
def topk_specialty(specialty, similarity, k):
result = similarity.loc[specialty].sort_values(ascending=False)[1:k + 1].reset_index()
result = result.rename(columns={'specialty_id': 'Specialty_Recommend', specialty: 'Similarity'})
return result
```
### Testing Recommender System based on Similarity
#### Process:
1. Ask user to input the ID of his/her obtained specialties
2. The system will recommend top 5 practice areas for the user's specialties based on similarity
```
# Test on a specialty sample 1
user_input1 = int(input('Please input your specialty ID: '))
recommend_user1 = topk_specialty(specialty=user_input1, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 1:')
print('--------------------------------------------')
print(recommend_user1)
# Test on a specialty sample 2
user_input2 = int(input('Please input your specialty ID: '))
recommend_user2 = topk_specialty(specialty=user_input2, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 2:')
print('--------------------------------------------')
print(recommend_user2)
```
### Popularity-based Recommendation - If the user requests recommendations based on popularity
```
# Get ranked specialties based on popularity (number of distinct attorneys offering each)
df_specialty_popular = data.groupby('specialty_id')['attorney_id'].nunique().sort_values(ascending=False)
df_specialty_popular
# Top 5 specialties based on popularity among attorneys
print('The 5 most popular specialties:')
print('--------------------------------')
print(df_specialty_popular.nlargest(5, keep='all'))
```
# Import libraries
```
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.metrics import mean_absolute_error
!pip install --upgrade tables
```
# Reading the data from the h5 file
```
df_train = pd.read_hdf("train_data.h5")
df_train['price'] = df_train['price'].map(parse_price)  # parse_price is defined in the helper-functions cell below; run that cell first
df_test = pd.read_hdf("test_data.h5")
df = pd.concat([df_train, df_test])
print(df_train.shape, df_test.shape)
df
```
# Helper functions
```
def parse_price(val):
if isinstance(val, str):
if "₽" in val:
val = val.split('₽')[0]
val = val.replace(' ', '')
return int(val) / 1000000
return float(val)
def parse_area(val):
if isinstance(val, int): return val
if isinstance(val, float): return val
return float(val.split("м")[0].replace(" ", ""))
def parse_floor(val):
if isinstance(val, int): return val
if isinstance(val, str):
return val.split('/')[0]
return val
def get_metro_station(row):
for i in row:
if 'МЦК' in i:
return i
def check_log_model(df, feats, model, cv=5, scoring="neg_mean_absolute_error"):
    df_train = df[ ~df["price"].isnull() ].copy()
    X = df_train[feats]
    y = df_train["price"]
    y_log = np.log(y)
    kf = KFold(n_splits=cv, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in kf.split(X):
        X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
        y_log_train, y_test = y_log.iloc[train_idx], y.iloc[test_idx]
        # fit the model passed in (previously a fresh XGBRegressor built from
        # globals shadowed it, so the model argument was silently ignored)
        model.fit(X_train, y_log_train)
        y_log_pred = model.predict(X_test)
        # train on log-price, but evaluate MAE on the original scale
        y_pred = np.exp(y_log_pred)
        score = mean_absolute_error(y_test, y_pred)
        scores.append(score)
    return np.mean(scores), np.std(scores)
```
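A quick sanity check of the price parser on a hand-made input (the string mimics the raw listing format; it is not taken from the actual dataset). The function body is repeated so this cell runs standalone:

```python
def parse_price(val):
    # same logic as the helper above, repeated so this check runs on its own
    if isinstance(val, str):
        if "₽" in val:
            val = val.split("₽")[0]
        val = val.replace(" ", "")
        return int(val) / 1000000
    return float(val)

print(parse_price("12 500 000 ₽"))  # 12.5 (price in millions of roubles)
```

Note that numeric inputs pass through unscaled, matching the original helper's behavior.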
# Feature engineering
```
params = df["params"].apply(pd.Series)
params = params.fillna(-1)
if "Охрана:" not in df:
df = pd.concat([df, params], axis=1)
obj_feats = params.select_dtypes(object).columns
for feat in obj_feats:
df["{}_cat".format(feat)] = df[feat].factorize()[0]
cat_feats = [x for x in df.columns if "_cat" in x]
cat_feats
# apartment area
df["area"] = df["Общая площадь:"].map(parse_area)
# kitchen area
df["kitchen_area"] = df["Площадь кухни:"].map(parse_area)
geo_block = (
df["geo_block"]
.map(lambda x: x[:int(len(x)/2) ])
.map(lambda x: {"geo_block_{}".format(idx):val for idx,val in enumerate(x) })
.apply(pd.Series)
)
for feat in geo_block.columns:
df["{}_cat".format(feat)] = geo_block[feat].factorize()[0]
geo_cat_feats = [x for x in df.columns if "geo_block" in x and "_cat" in x]
breadcrumbs = (
df["breadcrumbs"]
.map(lambda x: {"breadcrumbs_{}".format(idx):val for idx,val in enumerate(x) })
.apply(pd.Series)
)
for feat in breadcrumbs.columns:
df["{}_cat".format(feat)] = breadcrumbs[feat].factorize()[0]
df
breadcrumbs_cat_feats = [x for x in df.columns if "breadcrumbs" in x and "_cat" in x]
breadcrumbs_cat_feats
metro_station = (
df["breadcrumbs"]
.map(lambda x: get_metro_station(x))
.apply(pd.Series)
)
metro_station.columns = ['metro_station_name']
df["metro_station_cat"] = metro_station.apply(lambda x : pd.factorize(x)[0])
df
```
# Model DecisionTreeRegressor
```
feats = ["area", "kitchen_area", "metro_station_cat"] + geo_cat_feats + cat_feats + breadcrumbs_cat_feats
check_log_model(df, feats, DecisionTreeRegressor(max_depth=20))
```
# Model XGBRegressor
```
md = 20
ne = 700
lr = 0.15
feats = ["area", "kitchen_area", "metro_station_cat"] + geo_cat_feats + cat_feats + breadcrumbs_cat_feats
check_log_model(df, feats, xgb.XGBRegressor(max_depth=md, n_estimators=ne, learning_rate=lr, random_state=0))
```
# Kaggle submit
```
feats = ["area", "kitchen_area", "metro_station_cat"] + geo_cat_feats + cat_feats + breadcrumbs_cat_feats
df_train = df[ ~df["price"].isnull() ].copy()
df_test = df[ df["price"].isnull() ].copy()
X_train = df_train[feats]
y_train = df_train["price"]
y_log_train = np.log(y_train)
X_test = df_test[feats]
model = xgb.XGBRegressor(max_depth=8, n_estimators=700, learning_rate=0.1, random_state=0)
model.fit(X_train, y_log_train)
y_log_pred = model.predict(X_test)
y_pred = np.exp(y_log_pred)
df_test["price"] = y_pred
df_test[ ["id", "price"] ].to_csv("./xgb_location_log_area_v2.csv", index=False)
```
```
print('hello')
for number in [1,2,3]:
print(number)
print('1+3 is {}'.format(1+3))
!pip install psycopg2
import pandas
import psycopg2
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
host=config['myaws']['host']
db=config['myaws']['db']
user=config['myaws']['user']
pwd = config['myaws']['pwd']
conn = psycopg2.connect( host=host,
user=user,
password = pwd,
dbname=db)
cur=conn.cursor()
```
Define SQL statement
```
sql_statement = """ select bathroom,bedroom
from public.house_price_full
where bathroom>2"""
```
Cursor Executes the SQL statement
```
cur.execute(sql_statement)
cur.fetchone()
for bathroom,bedroom in cur.fetchall()[:10]:
print(bathroom,bedroom)
df = pandas.read_sql_query(sql_statement,conn)
df[:]
sql_statement= """
select built_in,
avg(price) as avg_price
from public.house_price_full
group by built_in
order by built_in
"""
df = pandas.read_sql_query(sql_statement,conn)
df[:10]
df_price=pandas.read_sql_query(sql_statement,conn)
df_price.plot(y='avg_price',x='built_in')
sql_statement= """
select price,area
from public.house_price_full
"""
df_price=pandas.read_sql_query(sql_statement,conn)
df_price[:10]
df_price=pandas.read_sql_query(sql_statement,conn)
df_price['area'].hist()
df_price=pandas.read_sql_query(sql_statement,conn)
df_price.plot.scatter(x='area',y='price')
sql_statement= """
select house_type,
avg(price) as avg_price
from public.house_price_full
group by house_type
order by avg_price desc
"""
df_price=pandas.read_sql_query(sql_statement,conn)
df_price.plot.bar(x='house_type',y='avg_price')
sql_statement = """
insert into gp1.student(s_email,s_name,s_major)
values('{}','{}','{}')
""".format('s6@jmu.edu','s5','ia')
print(sql_statement)
conn.rollback()
sql_statement = """
insert into gp1.student(s_email,s_name,s_major)
values('{}','{}','{}')
""".format('s6@jmu.edu','s6','ia')
cur.execute(sql_statement)
conn.commit()
df_student=pandas.read_sql_query('select * from gp1.student',conn)
df_student[:]
sql_statement = """
delete from gp1.student
where s_email = '{}'
""".format('s6@jmu.edu')
print(sql_statement)
cur.execute(sql_statement)
conn.commit()
cur.close()
conn.close()
```
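Interpolating values into SQL with str.format, as in the insert/delete cells above, is vulnerable to SQL injection. psycopg2 supports passing parameters separately, which is the safer pattern; execution is sketched in comments because it needs a live connection:

```python
# Safer alternative: let the driver bind the values instead of str.format
sql = "insert into gp1.student(s_email, s_name, s_major) values (%s, %s, %s)"
params = ('s6@jmu.edu', 's6', 'ia')
# cur.execute(sql, params)
# conn.commit()
print(sql.count("%s") == len(params))  # True: one placeholder per value
```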
# Analysis of Chest X-Ray images
Neural networks have revolutionised image processing in several different domains. Among these is the field of medical imaging. In the following notebook, we will get some hands-on experience in working with Chest X-Ray (CXR) images.
The objective of this exercise is to identify images where an "effusion" is present. This is a classification problem, where we will be dealing with two classes - 'effusion' and 'nofinding'. Here, the latter represents a "normal" X-ray image.
This same methodology can be used to spot various other illnesses that can be detected via a chest x-ray. For the scope of this demonstration, we will specifically deal with "effusion".
## 1. Data Pre-processing
Our data is in the form of grayscale (black and white) images of chest x-rays. To perform our classification task effectively, we need to perform some pre-processing of the data.
First, we load all the relevant libraries.
```
from skimage import io
import os
import glob
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore')
```
Point a variable to the path where the data resides. Note that to use the code below you will need to move the folders effusion/ and nofinding/ into one common folder. You can do something like this:
```
mkdir CXR_Data
move effusion CXR_Data
move nofinding CXR_Data
```
```
DATASET_PATH = './CXR_Data/'  # must match the CXR_Data folder created above
# There are two classes of images that we will deal with
disease_cls = ['effusion', 'nofinding']
```
Next, we read the "effusion" and "nofinding" images.
```
effusion_path = os.path.join(DATASET_PATH, disease_cls[0], '*')
effusion = glob.glob(effusion_path)
effusion = io.imread(effusion[0])
normal_path = os.path.join(DATASET_PATH, disease_cls[1], '*')
normal = glob.glob(normal_path)
normal = io.imread(normal[0])
f, axes = plt.subplots(1, 2, sharey=True)
f.set_figwidth(10)
axes[0].imshow(effusion, cmap='gray')
axes[1].imshow(normal, cmap='gray')
effusion.shape
normal.shape
```
### Data Augmentation ###
Now that we have read the images, the next step is data augmentation. We use the concept of a "data generator" that you learnt in the last section.
```
from skimage.transform import rescale
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=10,
    width_shift_range=0,
    height_shift_range=0,
    vertical_flip=False,)

def preprocess_img(img, mode):
    img = (img - img.min())/(img.max() - img.min())
    img = rescale(img, 0.25, multichannel=True, mode='constant')
    if mode == 'train':
        if np.random.randn() > 0:  # augment roughly half of the training images
            img = datagen.random_transform(img)
    return img
```
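For reference, the first step of `preprocess_img` is a plain min-max normalisation; a NumPy-only sketch with toy pixel values (the actual function then also downscales 1024×1024 images to 256×256 via `rescale(img, 0.25, ...)`):

```
import numpy as np

def min_max_normalize(img):
    # Same first step as preprocess_img: map intensities to [0, 1]
    return (img - img.min()) / (img.max() - img.min())

img = np.array([[0., 128.], [64., 255.]])
norm = min_max_normalize(img)
print(norm.min(), norm.max())  # 0.0 1.0
```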
## 2. Model building
We will be using a ResNet for this task (you learnt about ResNets previously).
For this to work, the script that defines the ResNet model (`resnet.py`) should reside in the same folder as this notebook.
```
import resnet
img_channels = 1
img_rows = 256
img_cols = 256
nb_classes = 2
import numpy as np
import tensorflow as tf
class AugmentedDataGenerator(tf.keras.utils.Sequence):
    'Generates data for Keras'
    def __init__(self, mode='train', ablation=None, disease_cls=['nofinding', 'effusion'],
                 batch_size=32, dim=(256, 256), n_channels=1, shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.labels = {}
        self.list_IDs = []
        self.mode = mode
        for i, cls in enumerate(disease_cls):
            paths = glob.glob(os.path.join(DATASET_PATH, cls, '*'))
            brk_point = int(len(paths)*0.8)
            if self.mode == 'train':
                paths = paths[:brk_point]
            else:
                paths = paths[brk_point:]
            if ablation is not None:
                paths = paths[:int(len(paths)*ablation/100)]
            self.list_IDs += paths
            self.labels.update({p: i for p in paths})
        self.n_channels = n_channels
        self.n_classes = len(disease_cls)
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        X, y = self.__data_generation(list_IDs_temp)
        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        # Initialization
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
        y = np.empty((self.batch_size), dtype=int)
        delete_rows = []
        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            img = io.imread(ID)
            img = img[:, :, np.newaxis]
            if img.shape == (1024, 1024, 1):
                img = preprocess_img(img, self.mode)
                X[i,] = img
                y[i] = self.labels[ID]
            else:
                delete_rows.append(i)
                continue
        X = np.delete(X, delete_rows, axis=0)
        y = np.delete(y, delete_rows, axis=0)
        return X, tf.keras.utils.to_categorical(y, num_classes=self.n_classes)
```
## 3. Ablation Run
In the previous notebook, you learnt about ablation. Briefly, an ablation run is when you systematically modify certain parts of the input (here, the fraction of training data used), in order to observe the corresponding change in the output.
For the following section, we'll be using the Data Generator concept that you previously worked on.
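The generator above splits each class's file list 80/20 into train/validation and then keeps only `ablation` percent of that split. The selection logic can be sketched in isolation (the file names below are made up):

```
paths = ['img_{:03d}.png'.format(i) for i in range(100)]  # hypothetical file list

mode, ablation = 'train', 5
brk_point = int(len(paths) * 0.8)                 # 80/20 train/validation split
split = paths[:brk_point] if mode == 'train' else paths[brk_point:]
kept = split[:int(len(split) * ablation / 100)]   # keep only ablation% of the split

print(len(split), len(kept))  # 80 4
```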
```
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
model.compile(loss='categorical_crossentropy',optimizer='SGD',
metrics=['accuracy'])
training_generator = AugmentedDataGenerator('train', ablation=5)
validation_generator = AugmentedDataGenerator('val', ablation=5)
model.fit(training_generator, epochs=1, validation_data=validation_generator)
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
model.compile(loss='categorical_crossentropy',optimizer='SGD',
metrics=['accuracy'])
training_generator = AugmentedDataGenerator('train', ablation=5)
validation_generator = AugmentedDataGenerator('val', ablation=5)
model.fit(training_generator, epochs=5, validation_data=None)
from sklearn.metrics import roc_auc_score
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import *
class roc_callback(Callback):
    def on_train_begin(self, logs={}):
        logs['val_auc'] = 0

    def on_epoch_end(self, epoch, logs={}):
        y_p = []
        y_v = []
        for i in range(len(validation_generator)):
            x_val, y_val = validation_generator[i]
            y_pred = self.model.predict(x_val)
            y_p.append(y_pred)
            y_v.append(y_val)
        y_p = np.concatenate(y_p)
        y_v = np.concatenate(y_v)
        roc_auc = roc_auc_score(y_v, y_p)
        print('\nVal AUC for epoch {}: {}'.format(epoch, roc_auc))
        logs['val_auc'] = roc_auc
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
model.compile(loss='categorical_crossentropy',optimizer='SGD',
metrics=['accuracy'])
training_generator = AugmentedDataGenerator('train', ablation=20)
validation_generator = AugmentedDataGenerator('val', ablation=20)
auc_logger = roc_callback()
model.fit(training_generator, epochs=5, validation_data=validation_generator, callbacks=[auc_logger])
from functools import partial
import tensorflow.keras.backend as K
from itertools import product
def w_categorical_crossentropy(y_true, y_pred, weights):
    nb_cl = len(weights)
    final_mask = K.zeros_like(y_pred[:, 0])
    y_pred_max = K.max(y_pred, axis=1)
    y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
    y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
    for c_p, c_t in product(range(nb_cl), range(nb_cl)):
        final_mask += (weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
    cross_ent = K.categorical_crossentropy(y_true, y_pred, from_logits=False)
    return cross_ent * final_mask
bin_weights = np.ones((2,2))
bin_weights[0, 1] = 5
bin_weights[1, 0] = 5
ncce = partial(w_categorical_crossentropy, weights=bin_weights)
ncce.__name__ ='w_categorical_crossentropy'
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
model.compile(loss=ncce, optimizer='SGD',
metrics=['accuracy'])
training_generator = AugmentedDataGenerator('train', ablation=5)
validation_generator = AugmentedDataGenerator('val', ablation=5)
model.fit(training_generator, epochs=1, validation_data=None)
```
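The weighted loss above multiplies the ordinary cross-entropy by a weight indexed by the (true class, arg-max predicted class) pair, so with `bin_weights[0, 1] = bin_weights[1, 0] = 5` misclassifications are penalised five times as much. A NumPy sketch of that behaviour for a single one-hot sample (values are illustrative):

```
import numpy as np

def weighted_cross_entropy(y_true, y_pred, weights):
    ce = -np.sum(y_true * np.log(y_pred))              # standard categorical cross-entropy
    w = weights[np.argmax(y_true), np.argmax(y_pred)]  # weight by (true, predicted) pair
    return w * ce

weights = np.ones((2, 2))
weights[0, 1] = weights[1, 0] = 5.0                    # misclassifications cost 5x

y_true = np.array([1.0, 0.0])
correct = weighted_cross_entropy(y_true, np.array([0.9, 0.1]), weights)
wrong = weighted_cross_entropy(y_true, np.array([0.1, 0.9]), weights)
print(correct < wrong)  # True
```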
## 4. Final Run
After deeply examining our data and building some preliminary models, we are finally ready to build a model that will perform our prediction task.
```
class DecayLR(tf.keras.callbacks.Callback):
    def __init__(self, base_lr=0.01, decay_epoch=1):
        super(DecayLR, self).__init__()
        self.base_lr = base_lr
        self.decay_epoch = decay_epoch
        self.lr_history = []

    def on_train_begin(self, logs={}):
        K.set_value(self.model.optimizer.lr, self.base_lr)

    def on_epoch_end(self, epoch, logs={}):
        new_lr = self.base_lr * (0.5 ** (epoch // self.decay_epoch))
        self.lr_history.append(K.get_value(self.model.optimizer.lr))
        K.set_value(self.model.optimizer.lr, new_lr)
model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
sgd = optimizers.SGD(lr=0.005)
bin_weights = np.ones((2,2))
bin_weights[1, 1] = 10
bin_weights[1, 0] = 10
ncce = partial(w_categorical_crossentropy, weights=bin_weights)
ncce.__name__ ='w_categorical_crossentropy'
model.compile(loss=ncce,optimizer= sgd,
metrics=['accuracy'])
training_generator = AugmentedDataGenerator('train', ablation=50)
validation_generator = AugmentedDataGenerator('val', ablation=50)
auc_logger = roc_callback()
filepath = 'models/best_model.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='val_auc', verbose=1, save_best_only=True, mode='max')
decay = DecayLR()
model.fit(training_generator, epochs=10, validation_data=validation_generator, callbacks=[auc_logger, decay, checkpoint])
```
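The `DecayLR` callback above halves the learning rate every `decay_epoch` epochs via `new_lr = base_lr * 0.5 ** (epoch // decay_epoch)`. The schedule can be previewed without training anything:

```
base_lr, decay_epoch = 0.01, 1

# Learning rate produced by the halving schedule for the first five epochs
lrs = [base_lr * (0.5 ** (epoch // decay_epoch)) for epoch in range(5)]
print(lrs)  # [0.01, 0.005, 0.0025, 0.00125, 0.000625]
```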
## 5. Making a Prediction
```
val_model = resnet.ResnetBuilder.build_resnet_18((img_channels, img_rows, img_cols), nb_classes)
val_model.load_weights('models/best_model.hdf5')
effusion_path = os.path.join(DATASET_PATH, disease_cls[0], '*')
effusion = glob.glob(effusion_path)
effusion = io.imread(effusion[-8])
plt.imshow(effusion,cmap='gray')
img = preprocess_img(effusion[:, :, np.newaxis], 'validation')
val_model.predict(img[np.newaxis,:])
```
| github_jupyter |
# Gallery of examples

Here you can browse a gallery of examples using EinsteinPy in the form of Jupyter notebooks.
## [Analyzing Earth using EinsteinPy!](docs/source/examples/Analyzing%20Earth%20using%20EinsteinPy!.ipynb)
[](docs/source/examples/Analyzing%20Earth%20using%20EinsteinPy!.ipynb)
## [Animations in EinsteinPy!](docs/source/examples/Animations%20in%20EinsteinPy.ipynb)
[](docs/source/examples/Animations%20in%20EinsteinPy.ipynb)
## [Einstein Tensor calculations using Symbolic module](docs/source/examples/Einstein%20Tensor%20symbolic%20calculation.ipynb)
[](docs/source/examples/Einstein%20Tensor%20symbolic%20calculation.ipynb)
## [Lambdify in Symbolic module](docs/source/examples/Lambdify%20symbolic%20calculation.ipynb)
[](docs/source/examples/Lambdify%20symbolic%20calculation.ipynb)
## [Playing with Contravariant and Covariant Indices in Tensors(Symbolic)](docs/source/examples/Playing%20with%20Contravariant%20and%20Covariant%20Indices%20in%20Tensors(Symbolic).ipynb)
[](docs/source/examples/Playing%20with%20Contravariant%20and%20Covariant%20Indices%20in%20Tensors(Symbolic).ipynb)
## [Predefined Metrics in Symbolic Module](docs/source/examples/Predefined%20Metrics%20in%20Symbolic%20Module.ipynb)
[](docs/source/examples/Predefined%20Metrics%20in%20Symbolic%20Module.ipynb)
## [Ricci Tensor and Scalar Curvature calculations using Symbolic module](docs/source/examples/Ricci%20Tensor%20and%20Scalar%20Curvature%20symbolic%20calculation.ipynb)
[](docs/source/examples/Ricci%20Tensor%20and%20Scalar%20Curvature%20symbolic%20calculation.ipynb)
<center><em>Gregorio Ricci-Curbastro</em></center>
## [Shadow cast by a thin emission disk around a black hole](docs/source/examples/Shadow%20cast%20by%20an%20thin%20emission%20disk%20around%20a%20black%20hole.ipynb)
[](docs/source/examples/Shadow%20cast%20by%20an%20thin%20emission%20disk%20around%20a%20black%20hole.ipynb)
## [Spatial Hypersurface Embedding for Schwarzschild Space-Time!](docs/source/examples/Plotting%20spacial%20hypersurface%20embedding%20for%20schwarzschild%20spacetime.ipynb)
[](docs/source/examples/Plotting%20spacial%20hypersurface%20embedding%20for%20schwarzschild%20spacetime.ipynb)
## [Symbolically Understanding Christoffel Symbol and Riemann Metric Tensor using EinsteinPy](docs/source/examples/Symbolically%20Understanding%20Christoffel%20Symbol%20and%20Riemann%20Curvature%20Tensor%20using%20EinsteinPy.ipynb)
[](docs/source/examples/Symbolically%20Understanding%20Christoffel%20Symbol%20and%20Riemann%20Curvature%20Tensor%20using%20EinsteinPy.ipynb)
## [Visualizing Event Horizon and Ergosphere (Singularities) of Kerr Metric or Black Hole](docs/source/examples/Visualizing%20Event%20Horizon%20and%20Ergosphere%20(Singularities)%20of%20Kerr%20Metric%20or%20Black%20Hole.ipynb)
[](docs/source/examples/Visualizing%20Event%20Horizon%20and%20Ergosphere%20(Singularities)%20of%20Kerr%20Metric%20or%20Black%20Hole.ipynb)
## [Visualizing Frame Dragging in Kerr Spacetime](docs/source/examples/Visualizing%20Frame%20Dragging%20in%20Kerr%20Spacetime.ipynb)
[](docs/source/examples/Visualizing%20Frame%20Dragging%20in%20Kerr%20Spacetime.ipynb)
## [Visualizing Precession in Schwarzschild Spacetime](docs/source/examples/Visualizing%20Precession%20in%20Schwarzschild%20Spacetime.ipynb)
[](docs/source/examples/Visualizing%20Precession%20in%20Schwarzschild%20Spacetime.ipynb)
## [Weyl Tensor calculations using Symbolic module](docs/source/examples/Weyl%20Tensor%20symbolic%20calculation.ipynb)
[](docs/source/examples/Weyl%20Tensor%20symbolic%20calculation.ipynb)
<center><em>Hermann Weyl</em></center>
| github_jupyter |
```
%matplotlib inline
```
# Straight line Hough transform
The Hough transform in its simplest form is a method to detect straight lines
[1]_.
In the following example, we construct an image with a line intersection. We
then use the [Hough transform](https://en.wikipedia.org/wiki/Hough_transform)
to explore a parameter space for straight lines that may run through the image.
## Algorithm overview
Usually, lines are parameterised as $y = mx + c$, with a gradient
$m$ and y-intercept $c$. However, this would mean that $m$ goes to
infinity for vertical lines. Instead, we therefore construct a segment
perpendicular to the line, leading from the origin to the line. The line is
represented by the length of that segment, $r$, and the angle it makes with
the x-axis, $\theta$.
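As a quick sanity check of this parameterisation, every point $(x, y)$ on the line $(r, \theta)$ satisfies $x\cos\theta + y\sin\theta = r$; the values below are chosen arbitrarily:

```
import numpy as np

theta = np.deg2rad(45)  # normal segment at 45 degrees
r = 10.0

xs = np.array([0.0, 20.0])                      # two x positions
ys = (r - xs * np.cos(theta)) / np.sin(theta)   # solve for y on the line

# Both points map back to the same (r, theta)
print(xs * np.cos(theta) + ys * np.sin(theta))  # [10. 10.]
```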
The Hough transform constructs a histogram array representing the parameter
space (i.e., an $M \times N$ matrix, for $M$ different values of
the radius and $N$ different values of $\theta$). For each
parameter combination, $r$ and $\theta$, we then find the number
of non-zero pixels in the input image that would fall close to the
corresponding line, and increment the array at position $(r, \theta)$
appropriately.
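The voting step can be sketched in a few lines of NumPy (a toy 5×5 image, not the example image used below):

```
import numpy as np

image = np.eye(5)                                  # tiny image: the line y = x

thetas = np.deg2rad(np.arange(-90, 90, 1.0))       # candidate angles
diag = int(np.ceil(np.hypot(*image.shape)))        # bound on |r|
accumulator = np.zeros((2 * diag + 1, len(thetas)), dtype=int)

ys, xs = np.nonzero(image)
for x, y in zip(xs, ys):
    for j, t in enumerate(thetas):
        r = int(round(x * np.cos(t) + y * np.sin(t)))
        accumulator[r + diag, j] += 1              # one vote per pixel per angle

# Every pixel on the diagonal votes for the bin (r=0, theta=-45 degrees)
print(accumulator.max(), accumulator[diag, 45])
```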
We can think of each non-zero pixel "voting" for potential line candidates. The
local maxima in the resulting histogram indicate the parameters of the most
probable lines. In our example, the maxima occur at 45 and 135 degrees,
corresponding to the normal vector angles of each line.
Another approach is the Progressive Probabilistic Hough Transform [2]_. It is
based on the assumption that using a random subset of voting points gives a good
approximation to the actual result, and that lines can be extracted during the
voting process by walking along connected components. This returns the
beginning and end of each line segment, which is useful.
The function `probabilistic_hough` has three parameters: a general threshold
that is applied to the Hough accumulator, a minimum line length and the line
gap that influences line merging. In the example below, we use a threshold of
10, a minimum line length of 5 and a maximum allowed gap of 3 pixels.
## References
.. [1] Duda, R. O. and P. E. Hart, "Use of the Hough Transformation to
Detect Lines and Curves in Pictures," Comm. ACM, Vol. 15,
pp. 11-15 (January, 1972)
.. [2] C. Galamhos, J. Matas and J. Kittler,"Progressive probabilistic
Hough transform for line detection", in IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, 1999.
### Line Hough Transform
```
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
from skimage.feature import canny
from skimage import data
import matplotlib.pyplot as plt
from matplotlib import cm
# Constructing test image
image = np.zeros((200, 200))
idx = np.arange(25, 175)
image[idx[::-1], idx] = 255
image[idx, idx] = 255
# Classic straight-line Hough transform
# Set a precision of 0.5 degree.
tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360)
h, theta, d = hough_line(image, theta=tested_angles)
# Generating figure 1
fig, axes = plt.subplots(1, 3, figsize=(15, 6))
ax = axes.ravel()
ax[0].imshow(image, cmap=cm.gray)
ax[0].set_title('Input image')
ax[0].set_axis_off()
ax[1].imshow(np.log(1 + h),
extent=[np.rad2deg(theta[-1]), np.rad2deg(theta[0]), d[-1], d[0]],
cmap=cm.gray, aspect=1/1.5)
ax[1].set_title('Hough transform')
ax[1].set_xlabel('Angles (degrees)')
ax[1].set_ylabel('Distance (pixels)')
ax[1].axis('image')
ax[2].imshow(image, cmap=cm.gray)
origin = np.array((0, image.shape[1]))
for _, angle, dist in zip(*hough_line_peaks(h, theta, d)):
    y0, y1 = (dist - origin * np.cos(angle)) / np.sin(angle)
    ax[2].plot(origin, (y0, y1), '-r')
ax[2].set_xlim(origin)
ax[2].set_ylim((image.shape[0], 0))
ax[2].set_axis_off()
ax[2].set_title('Detected lines')
plt.tight_layout()
plt.show()
```
### Probabilistic Hough Transform
```
from skimage.transform import probabilistic_hough_line
# Line finding using the Probabilistic Hough Transform
image = data.camera()
edges = canny(image, 2, 1, 25)
lines = probabilistic_hough_line(edges, threshold=10, line_length=5,
line_gap=3)
# Generating figure 2
fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(image, cmap=cm.gray)
ax[0].set_title('Input image')
ax[1].imshow(edges, cmap=cm.gray)
ax[1].set_title('Canny edges')
ax[2].imshow(edges * 0)
for line in lines:
    p0, p1 = line
    ax[2].plot((p0[0], p1[0]), (p0[1], p1[1]))
ax[2].set_xlim((0, image.shape[1]))
ax[2].set_ylim((image.shape[0], 0))
ax[2].set_title('Probabilistic Hough')
for a in ax:
    a.set_axis_off()
plt.tight_layout()
plt.show()
```
| github_jupyter |
```
# Purpose: Analyze results from Predictions Files created by Models
# Inputs: Prediction files from Random Forest, Elastic Net, XGBoost, and Team Ensembles
# Outputs: Figures (some included in the paper, some in SI)
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import astropy.stats as AS
from scipy.stats import pearsonr
from os import listdir
from sklearn.decomposition import PCA
%matplotlib inline
```
## Reading Data, generating In-Sample Scores
```
name_dict = {'lassoRF_prediction': 'Lasso RF','elastic_prediction': 'Elastic Net','RF_prediction': 'Ensemble RF',
'LR_prediction': 'Ensemble LR','weighted_multiRF_prediction': 'Nested RF',
'weighted_avrg_prediction': 'Weighted Team Avg', 'avrg_prediction': 'Team Avg',
'xgboost_prediction': 'Gradient Boosted Tree'}
training=pd.read_csv('../data/train.csv',index_col = 'challengeID')
baseline=np.mean(training, axis=0)
BL_CV_scores = pd.DataFrame(columns = ['outcome','type','model','score_avg'])
for outcome in training.columns.values:
    y = training[outcome].dropna()
    y_hat = baseline[outcome]
    partition_scores = list()
    for i in range(10, 110, 10):
        bools = y.index < np.percentile(y.index, i)
        y_curr = y[bools]
        partition_scores.append(np.linalg.norm(y_curr-y_hat)/len(y_curr))
    bootstrapped_means = AS.bootstrap(np.array(partition_scores), samples=10, bootnum=100, bootfunc=np.mean)
    to_add = pd.DataFrame({'outcome': list(len(bootstrapped_means)*[outcome]), 'type': len(bootstrapped_means)*['In-Sample Error'], 'model': len(bootstrapped_means)*['Baseline'], 'score_avg': bootstrapped_means})
    BL_CV_scores = BL_CV_scores.append(to_add, ignore_index=True)
name_dict
bootstrapped_scores_all = {}
for name in list(name_dict.keys()):
    model_name = name_dict[name]
    data = pd.read_csv(str('../output/final_pred/'+name+'.csv'), index_col='challengeID')
    CV_scores = pd.DataFrame(columns=['outcome', 'type', 'model', 'score_avg'])
    for outcome in training.columns.values:
        y = training[outcome].dropna()
        y_hat = data[outcome][np.in1d(data.index, y.index)]
        partition_scores = list()
        for i in range(10, 110, 10):
            bools = y.index < np.percentile(y.index, i)
            y_curr = y[bools]
            y_hat_curr = y_hat[bools]
            partition_scores.append(np.linalg.norm(y_curr-y_hat_curr)/len(y_curr))
        bootstrapped_means = AS.bootstrap(np.array(partition_scores), samples=10, bootnum=100, bootfunc=np.mean)
        bootstrapped_means = (1-np.divide(bootstrapped_means, BL_CV_scores.score_avg[BL_CV_scores.outcome==outcome]))*100
        to_add = pd.DataFrame({'outcome': list(len(bootstrapped_means)*[outcome]), 'type': len(bootstrapped_means)*['In-Sample Error'], 'model': len(bootstrapped_means)*[model_name], 'score_avg': bootstrapped_means})
        CV_scores = CV_scores.append(to_add, ignore_index=True)
    bootstrapped_scores_all[name] = CV_scores
```
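`AS.bootstrap` above draws resamples (with replacement) of the ten partition scores and applies `np.mean` to each. A NumPy-only sketch of the same idea, with made-up scores:

```
import numpy as np

rng = np.random.default_rng(0)
partition_scores = np.array([0.30, 0.32, 0.29, 0.35, 0.31])  # made-up partition errors

# Resample with replacement and average each resample, mirroring
# AS.bootstrap(np.array(partition_scores), bootnum=100, bootfunc=np.mean)
bootnum = 100
boot_means = np.array([
    rng.choice(partition_scores, size=len(partition_scores), replace=True).mean()
    for _ in range(bootnum)
])

print(boot_means.shape)  # (100,)
```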
## Individual Model Scores
```
GBT_CV = bootstrapped_scores_all['xgboost_prediction']
GBT_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Gradient Boosted Tree'],'score_avg':[0.37543,0.22008,0.02437,0.05453,0.17406,0.19676]})
GBT_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Gradient Boosted Tree'],'score_avg':[0.34379983,0.238180899,0.019950074,0.056877623,0.167392429,0.177202581]})
GBT_scores = GBT_CV.append(GBT_leaderboard.append(GBT_holdout,ignore_index = True),ignore_index = True)
avrg_CV = bootstrapped_scores_all['avrg_prediction']
avrg_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Team Avg'],'score_avg':[0.36587,0.21287,0.02313,0.05025,0.17467,0.20058]})
avrg_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Team Avg'],'score_avg':[0.352115776,0.241462042,0.019888218,0.053480264,0.169287396,0.181767792]})
avrg_scores = avrg_CV.append(avrg_leaderboard.append(avrg_holdout,ignore_index = True),ignore_index = True)
weighted_avrg_CV = bootstrapped_scores_all['weighted_avrg_prediction']
weighted_avrg_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Weighted Team Avg'],'score_avg':[0.36587,0.21287,0.02301,0.04917,0.1696,0.19782]})
weighted_avrg_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Weighted Team Avg'],'score_avg':[0.352115776,0.241462042,0.020189616,0.053818827,0.162462938,0.178098036]})
weighted_avrg_scores = weighted_avrg_CV.append(weighted_avrg_leaderboard.append(weighted_avrg_holdout,ignore_index = True),ignore_index = True)
multi_RF_CV = bootstrapped_scores_all['weighted_multiRF_prediction']
multi_RF_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Nested RF'],'score_avg':[0.38766,0.22353,0.02542,0.05446,0.20228,0.22092]})
multi_RF_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Nested RF'],'score_avg':[0.365114483,0.248124154,0.021174361,0.063930882,0.207400541,0.191352482]})
multi_RF_scores = multi_RF_CV.append(multi_RF_leaderboard.append(multi_RF_holdout,ignore_index = True),ignore_index = True)
LR_CV = bootstrapped_scores_all['LR_prediction']
LR_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Ensemble LR'],'score_avg':[0.37674,0.2244,0.02715,0.05092,0.18341,0.22311]})
LR_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Ensemble LR'],'score_avg':[0.364780108,0.247382526,0.021359837,0.058200047,0.181441591,0.194502527]})
LR_scores = LR_CV.append(LR_leaderboard.append(LR_holdout,ignore_index = True),ignore_index = True)
RF_CV = bootstrapped_scores_all['RF_prediction']
RF_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Ensemble RF'],'score_avg':[0.38615,0.22342,0.02547,0.05475,0.20346,0.22135]})
RF_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Ensemble RF'],'score_avg':[0.364609923,0.247940405,0.021135379,0.064494339,0.208869867,0.191742726]})
RF_scores = RF_CV.append(RF_leaderboard.append(RF_holdout,ignore_index = True),ignore_index = True)
lasso_RF_CV = bootstrapped_scores_all['lassoRF_prediction']
lasso_RF_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Lasso RF'],'score_avg':[0.37483,0.21686,0.02519,0.05226,0.17223,0.20028]})
lasso_RF_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Lasso RF'],'score_avg':[0.361450643,0.243745261,0.020491841,0.054397319,0.165154165,0.180446409]})
lasso_scores = lasso_RF_CV.append(lasso_RF_leaderboard.append(lasso_RF_holdout,ignore_index = True),ignore_index = True)
eNet_CV = bootstrapped_scores_all['elastic_prediction']
eNet_leaderboard = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Elastic Net'],'score_avg':[0.36477,0.21252,0.02353,0.05341,0.17435,0.20224]})
eNet_holdout = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Elastic Net'],'score_avg':[0.350083,0.239361,0.019791,0.055458,0.167224,0.185329]})
eNet_scores = eNet_CV.append(eNet_leaderboard.append(eNet_holdout,ignore_index = True),ignore_index = True)
#bools = np.in1d(eNet_scores.outcome,['gpa','grit','materialHardship'])
#eNet_scores = eNet_scores.loc[bools]
```
## Score Aggregation and Plotting
```
## Baseline Scores:
BL_LB = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Leaderboard'],'model':6*['Baseline'],'score_avg':[0.39273,0.21997,0.02880,0.05341,0.17435,0.20224]})
BL_HO = pd.DataFrame({'outcome':['gpa','grit','materialHardship','eviction','layoff','jobTraining'],'type':6*['Holdout'],'model':6*['Baseline'],'score_avg':[0.425148881,0.252983596,0.024905617,0.055457913,0.167223718,0.185329492]})
scores_all = eNet_scores.append(lasso_scores.append(RF_scores.append(LR_scores.append(multi_RF_scores.append(weighted_avrg_scores.append(avrg_scores.append(GBT_scores,ignore_index = True),ignore_index = True),ignore_index = True),ignore_index = True),ignore_index = True),ignore_index = True), ignore_index = True)
scores_ADJ = scores_all
scores = scores_all.loc[scores_all.type != 'In-Sample Error']
for OUTCOME in training.columns.values:
    f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 7), sharex=True)
    temp = scores.loc[scores.outcome==OUTCOME]
    temp.score_avg.loc[temp.type=='Leaderboard'] = (1-np.divide(temp.score_avg.loc[temp.type=='Leaderboard'], BL_LB.score_avg.loc[BL_LB.outcome==OUTCOME]))*100
    temp.score_avg.loc[temp.type=='Holdout'] = (1-np.divide(temp.score_avg.loc[temp.type=='Holdout'], BL_HO.score_avg.loc[BL_HO.outcome==OUTCOME]))*100
    scores_ADJ.score_avg.loc[(scores_ADJ.outcome==OUTCOME) & (scores_ADJ.type=='Leaderboard')] = (1-np.divide(scores_ADJ.score_avg.loc[(scores_ADJ.outcome==OUTCOME) & (scores_ADJ.type=='Leaderboard')], BL_LB.score_avg.loc[BL_LB.outcome==OUTCOME]))*100
    scores_ADJ.score_avg.loc[(scores_ADJ.outcome==OUTCOME) & (scores_ADJ.type=='Holdout')] = (1-np.divide(scores_ADJ.score_avg.loc[(scores_ADJ.outcome==OUTCOME) & (scores_ADJ.type=='Holdout')], BL_HO.score_avg.loc[BL_HO.outcome==OUTCOME]))*100
    sns.barplot('model', 'score_avg', hue='type', data=temp, ci='sd', ax=ax)
    ax.set_title(str(OUTCOME))
    ax.set_xlabel('Model')
    ax.set_ylabel('Accuracy Improvement over Baseline (%)')
    plt.setp(ax.xaxis.get_majorticklabels(), rotation=30)
    ax.tick_params(labelsize=18)
    plt.savefig(str('../output/fig/'+OUTCOME+'.pdf'))
    bools_L = (scores.type=='Leaderboard') & (scores.outcome==OUTCOME)
    bools_H = (scores.type=='Holdout') & (scores.outcome==OUTCOME)
    print(OUTCOME)
    print('Best Leaderboard Model: ', scores.loc[(bools_L)&(scores.loc[bools_L].score_avg==max(scores.loc[bools_L].score_avg))].model)
    print('Best Holdout Model: ', scores.loc[(bools_H)&(scores.loc[bools_H].score_avg==max(scores.loc[bools_H].score_avg))].model)
    print()
scores = scores_all.loc[scores_all.type=='In-Sample Error']
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(24, 7), sharex=True)
sns.barplot('model','score_avg', hue = 'outcome', data = scores, ci = 'sd', ax=ax)
ax.set_title('In-Sample Model Performance Improvement')
ax.set_xlabel('Model')
ax.set_ylabel('Accuracy Improvement over Baseline (%)')
plt.setp( ax.xaxis.get_majorticklabels(), rotation=30)
plt.ylim([-20,100])
ax.tick_params(labelsize=18)
plt.savefig(str('../output/fig/ALL_IS.pdf'))
scores = scores_all.loc[scores_all.type=='In-Sample Error']
for OUTCOME in training.columns.values:
    f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 7), sharex=True)
    temp = scores.loc[scores.outcome==OUTCOME]
    sns.barplot('model', 'score_avg', data=temp, ci='sd', ax=ax, color='red')
    ax.set_title(str(OUTCOME))
    ax.set_xlabel('Model')
    ax.set_ylabel('Accuracy Improvement over Baseline (%)')
    plt.setp(ax.xaxis.get_majorticklabels(), rotation=30)
    ax.tick_params(labelsize=18)
    plt.savefig(str('../output/fig/'+OUTCOME+'_IS.pdf'))
    bools_L = (scores.type=='Leaderboard') & (scores.outcome==OUTCOME)
    bools_H = (scores.type=='Holdout') & (scores.outcome==OUTCOME)
```
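The improvement figures plotted throughout are computed as `(1 - score/baseline)*100`, so a model that matches the mean-prediction baseline scores 0% and lower error gives a positive percentage. A minimal sketch, using the Gradient Boosted Tree holdout GPA score and the holdout baseline from above:

```
def improvement_over_baseline(score, baseline):
    # Percentage reduction in error relative to the baseline predictor
    return (1 - score / baseline) * 100

# GBT holdout GPA (0.34379983) vs. holdout baseline GPA (0.425148881)
print(round(improvement_over_baseline(0.34379983, 0.425148881), 1))  # 19.1
```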
# Data Partition Performance
```
scores_PLT = scores_ADJ
scores_PLT = scores_PLT.loc[~((scores_ADJ.model=='Elastic Net') & np.in1d(scores_ADJ.outcome,['eviction','layoff','jobTraining']))]
scores_PLT['color'] = [-1]*np.shape(scores_PLT)[0]
for i, OUTCOME in enumerate(['gpa', 'grit', 'materialHardship', 'eviction', 'layoff', 'jobTraining']):
    scores_PLT.color.loc[scores_PLT.outcome==OUTCOME] = i
# LEADERBOARD vs HOLDOUT
scores_X = scores_PLT.loc[scores_PLT.type=='Leaderboard']
scores_Y = scores_PLT.loc[scores_PLT.type=='Holdout']
txt = [str(a) for a,b in zip(scores_X.model,scores_X.outcome)]
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12), sharex=True)
colors = ['red','blue','green','black','yellow','orange']
for i in range(6):
    corr_temp = np.round(pearsonr(scores_X.score_avg.loc[scores_X.color==i],
                                  scores_Y.score_avg.loc[scores_Y.color==i]), decimals=3)
    plt.scatter(x=scores_X.score_avg.loc[scores_X.color==i],
                s=20, y=scores_Y.score_avg.loc[scores_Y.color==i],
                c=colors[i], label=str(scores_X.outcome.loc[scores_X.color==i].iloc[0])+': r^2='+str(corr_temp[0])+' p='+str(corr_temp[1]))
    print(i)
    print(len(scores_X.score_avg.loc[scores_X.color==i]),
          len(scores_Y.score_avg.loc[scores_Y.color==i]))
ax.set_xlabel('Leaderboard Improvement Over Baseline (%)')
ax.set_ylabel('Holdout Improvement Over Baseline (%)')
ax.tick_params(labelsize=18)
plt.ylim([-26, 22])
plt.xlim([-26, 22])
ax.plot([-26,22],[-26,22], 'k-')
ax.legend()
for i, n in enumerate(txt):
    ax.annotate(n, (scores_X.score_avg.iloc[i], scores_Y.score_avg.iloc[i]),
                size=10, textcoords='data')
plt.savefig(str('../output/fig/LB_vs_HO.pdf'))
# LEADERBOARD VS IN-SAMPLE
scores_X = scores_PLT.loc[scores_PLT.type=='Leaderboard']
scores_Y = scores_PLT.loc[scores_PLT.type=='In-Sample Error']
scores_Y = pd.DataFrame(scores_Y.groupby([scores_Y.model,scores_Y.outcome]).mean())
txt = [str(a) for a,b in zip(scores_X.model,scores_X.outcome)]
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12), sharex=True)
colors = ['red','blue','green','black','yellow','orange']
for i in range(6):
    corr_temp = np.round(pearsonr(scores_X.score_avg.loc[scores_X.color==i],
                                  scores_Y.score_avg.loc[scores_Y.color==i]), decimals=3)
    plt.scatter(x=scores_X.score_avg.loc[scores_X.color==i],
                s=20, y=scores_Y.score_avg.loc[scores_Y.color==i],
                c=colors[i], label=str(scores_X.outcome.loc[scores_X.color==i].iloc[0])+': r^2='+str(corr_temp[0])+' p='+str(corr_temp[1]))
    print(i)
    print(len(scores_X.score_avg.loc[scores_X.color==i]),
          len(scores_Y.score_avg.loc[scores_Y.color==i]))
ax.set_xlabel('Leaderboard Improvement Over Baseline (%)')
ax.set_ylabel('In-Sample Error Improvement Over Baseline (%)')
ax.tick_params(labelsize=18)
#plt.ylim([-26, 22])
#plt.xlim([-26, 22])
#ax.plot([-26,22],[-26,22], 'k-')
ax.legend()
for i, n in enumerate(txt):
    ax.annotate(n, (scores_X.score_avg.iloc[i], scores_Y.score_avg.iloc[i]),
                size=10, textcoords='data')
plt.savefig(str('../output/fig/LB_vs_IS.pdf'))
# HOLDOUT VS IN-SAMPLE
scores_X = scores_PLT.loc[scores_PLT.type=='Holdout']
scores_Y = scores_PLT.loc[scores_PLT.type=='In-Sample Error']
scores_Y = scores_Y.groupby([scores_Y.model,scores_Y.outcome]).mean().reset_index()
# UNCOMMENT if STD
#scores_Y.color = [0, 1, 2, 3, 0, 1, 5, 4, 2, 3, 0, 1, 5, 4, 2, 3, 0, 1, 5, 4, 2, 3, 0,
# 1, 5, 4, 2, 3, 0, 1, 5, 4, 2, 3, 0, 1, 5, 4, 2, 3, 0, 1, 5, 4, 2]
txt = [str(a) for a,b in zip(scores_X.model,scores_X.outcome)]
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12), sharex=True)
colors = ['red','blue','green','black','yellow','orange']
for i in range(6):
corr_temp = np.round(pearsonr(scores_X.score_avg.loc[scores_X.color==i],
scores_Y.score_avg.loc[scores_Y.color==i]),decimals = 3)
plt.scatter(x = scores_X.score_avg.loc[scores_X.color==i],
s=20, y = scores_Y.score_avg.loc[scores_Y.color==i],
c = colors[i],label=str(scores_X.outcome.loc[scores_X.color==i].iloc[0])+': r^2='+str(corr_temp[0])+' p='+str(corr_temp[1]))
print(i)
print(len(scores_X.score_avg.loc[scores_X.color==i]),
len(scores_Y.score_avg.loc[scores_Y.color==i]))
ax.set_xlabel('Holdout Improvement Over Baseline (%)')
ax.set_ylabel('In-Sample Error Improvement Over Baseline (%)')
ax.tick_params(labelsize=18)
#plt.ylim([-26, 22])
#plt.xlim([-26, 22])
#ax.plot([-26,22],[-26,22], 'k-')
ax.legend()
for i,n in enumerate(txt):
ax.annotate(n,(scores_X.score_avg.iloc[i],scores_Y.score_avg.iloc[i]),
size = 10,textcoords='data')
plt.savefig(str('../output/fig/HO_vs_IS.pdf'))
```
### Bootstrapping Correlation Values
```
bootnum = 10000
outcome_keys = ['gpa', 'grit', 'materialHardship', 'eviction', 'layoff', 'jobTraining', 'overall']
# one label per bootstrap draw, per outcome
all_keys_boot = [k for k in outcome_keys for _ in range(bootnum)]
scores_ADJ = scores_all
keys = ['gpa', 'grit', 'materialHardship', 'eviction', 'layoff', 'jobTraining','overall']
t1 = ['In-Sample Error']*14
temp = ['Leaderboard']*7
t1.extend(temp)
t2 = ['Leaderboard']*7
temp = ['Holdout']*14
t2.extend(temp)
df_full = pd.DataFrame(columns = ['T1-T2', 'condition', 'avg_corr','sd_corr'])
for [T1,T2] in [['In-Sample Error','Leaderboard'],['In-Sample Error','Holdout'],['Leaderboard','Holdout']]:
X_type = scores_ADJ.loc[scores_ADJ.type==T1]
Y_type = scores_ADJ.loc[scores_ADJ.type==T2]
avg_corr = list([])
# For Ind. Outcomes
for OUTCOME in ['gpa', 'grit', 'materialHardship', 'eviction', 'layoff', 'jobTraining']:
corr = np.zeros(bootnum)
X_OC = X_type.loc[X_type.outcome==OUTCOME]
Y_OC = Y_type.loc[Y_type.outcome==OUTCOME]
X_curr = X_OC.groupby(X_OC.model).score_avg.mean()
Y_curr = Y_OC.groupby(Y_OC.model).score_avg.mean()
for i in range(bootnum):
index = np.random.choice(list(range(len(X_curr))),len(X_curr))
avg_corr.append(pearsonr(X_curr[index].values,Y_curr[index].values)[0])
# For Overall
X_curr = X_type.groupby([X_type.model,X_type.outcome]).score_avg.mean()
Y_curr = Y_type.groupby([Y_type.model,Y_type.outcome]).score_avg.mean()
corr = np.zeros(bootnum)
for i in range(bootnum):
index = np.random.choice(list(range(len(X_curr))),len(X_curr))
avg_corr.append(pearsonr(X_curr[index].values,Y_curr[index].values)[0])
to_add = pd.DataFrame({'T1-T2':7*bootnum*[str(T1)+' w/ '+str(T2)], 'condition': all_keys_boot,
'avg_corr':avg_corr})
    df_full = pd.concat([df_full, to_add], ignore_index=True)  # DataFrame.append was removed in pandas 2.x
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 7), sharex=True)
sns.barplot('T1-T2','avg_corr', hue = 'condition', data = df_full, ci = 'sd', ax=ax)
ax.set_title('Correlation Comparison')
ax.set_xlabel('Data Partitions Compared')
ax.set_ylabel('Avg. Correlation')
plt.setp( ax.xaxis.get_majorticklabels(), rotation=30)
plt.ylim([-1.3,1.2])
ax.tick_params(labelsize=18)
plt.savefig(str('../output/fig/Correlation_Comparison.pdf'))
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 7), sharex=True)
sns.barplot('T1-T2','avg_corr', hue = 'condition', data = df_full.loc[df_full.condition=='overall'], ci = 'sd', ax=ax)
ax.set_title('Correlation Comparison')
ax.set_xlabel('Data Partitions Compared')
ax.set_ylabel('Avg. Correlation')
plt.setp( ax.xaxis.get_majorticklabels(), rotation=30)
plt.ylim([0,1])
ax.tick_params(labelsize=18)
plt.savefig(str('../output/fig/Correlations_Overall.pdf'))
```
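The resampling scheme above can be illustrated on toy data. This is a minimal sketch (the `x`/`y` vectors are made up, not the notebook's scores), using `np.corrcoef`, which returns the same r as `scipy.stats.pearsonr`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)              # e.g. leaderboard improvements
y = x + rng.normal(0.0, 0.1, size=20)      # e.g. holdout improvements

bootnum = 1000
boot_corrs = np.empty(bootnum)
for b in range(bootnum):
    # resample (x, y) PAIRS with replacement, keeping them aligned
    idx = rng.choice(len(x), size=len(x), replace=True)
    boot_corrs[b] = np.corrcoef(x[idx], y[idx])[0, 1]

avg_corr, sd_corr = boot_corrs.mean(), boot_corrs.std()
```

The mean and standard deviation of `boot_corrs` play the role of the `avg_corr` and `sd` error bars in the bar plots above.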
## Feature Importance XGBoost
```
father = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Father'],'score': [0.199531305,0.140893472,0.221546773,0.1923971,0.130434782,0.27181208]})
homevisit = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Home Visit'],'score': [0.203213929,0.209621994,0.189125295,0.112949541,0.036789297,0.187919463]})
child = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Child'],'score': [0.044861065,0.003436426,0.082404594,0.01572542,0.006688963,0.023489933]})
kinder = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
                       'characteristic': 6*['Kindergarten'],'score': [0.003347841,0.003436426,0.00810537,0.008432472,0.003344482,0.006711409]})
mother = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Mother'],'score': [0.349849352,0.515463913,0.360351229,0.569032313,0.66889632,0.395973155]})
other = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Other'],'score': [0.016069635,0.01718213,0.003377237,0.0097999,0.006688963,0.016778523]})
care = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Caregiver'],'score': [0.085369937,0.048109966,0.10570753,0.060713797,0.140468227,0.080536912]})
teacher = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Teacher'],'score': [0.087378641,0.058419244,0.023302938,0.02306395,0.006688963,0.016778524]})
wav1 = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Wave 1'],'score': [0.109809175,0.048109966,0.101654846,0.317288843,0.046822742,0.104026846]})
wav2 = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Wave 2'],'score': [0.126548378,0.085910654,0.125295507,0.122612698,0.117056855,0.073825504]})
wav3 = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Wave 3'],'score': [0.189822567,0.206185568,0.173252278,0.162496011,0.143812707,0.271812079]})
wav4 = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Wave 4'],'score': [0.172079012,0.230240552,0.205336034,0.166826199,0.217391305,0.241610739]})
wav5 = pd.DataFrame({'outcome': ['gpa','eviction','grit','materialHardship','jobTraining','layoff'],
'characteristic': 6*['Wave 5'],'score': [0.388014734,0.422680407,0.380276931,0.214458269,0.471571907,0.302013422]})
who_df = pd.concat([mother,father,care,homevisit,child,teacher,kinder,other],ignore_index = True)
when_df = pd.concat([wav1,wav2,wav3,wav4,wav5],ignore_index = True)
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5), sharex=True)
sns.barplot('characteristic','score', hue = 'outcome', data = who_df,
ci = None,ax=ax)
ax.set_ylabel('Feature Importance (Sum)')
ax.tick_params(labelsize=13)
ax.set_ylim(0,0.7)
plt.savefig('../output/fig/Who_Feature_Importance.pdf')
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5), sharex=True)
sns.barplot('characteristic','score', hue = 'outcome', data = when_df,
ci = None,ax=ax)
ax.set_ylabel('Feature Importance (Sum)')
ax.tick_params(labelsize=13)
ax.set_ylim(0,0.7)
plt.savefig('../output/fig/When_Feature_Importance.pdf')
```
## Comparison of Feature Selection Methods
```
LASSO_files = listdir('../output/LASSO_ALL/')
MI_files = ['data_univariate_feature_selection_' + str(k) + '.csv'
            for k in [5, 15, 50, 100, 200, 300, 500, 700, 1000, 1500, 2000, 3000, 4000]]
msk = [i!='.DS_Store' for i in LASSO_files]
LASSO_files = [i for i,j in zip(LASSO_files,msk) if j]
LASSO_files = np.sort(LASSO_files)
MI_file = MI_files[0]
L_file = LASSO_files[0]
perc_similar = np.zeros((len(LASSO_files),len(MI_files)))
PC1_corr = np.zeros((len(LASSO_files),len(MI_files)))
L_names = []
MI_names = []
for i,L_file in enumerate(LASSO_files):
temp_L = pd.read_csv(('../output/LASSO_ALL/'+L_file))
L_names.append(np.shape(temp_L.columns.values)[0])
L_PC = PCA(n_components=2).fit_transform(temp_L)
for j,MI_file in enumerate(MI_files):
temp_M = pd.read_csv(('../output/MI/'+MI_file))
MI_names.append(np.shape(temp_M.columns.values)[0])
MI_PC = PCA(n_components=2).fit_transform(temp_M)
PC1_corr[i,j] = pearsonr(L_PC[:,0],MI_PC[:,0])[0]
perc_similar[i,j]= sum(np.in1d(temp_L.columns.values,temp_M.columns.values))
data_named = pd.DataFrame(perc_similar,index = L_names, columns = np.unique(MI_names))
columns = data_named.columns.tolist()
columns = columns[::-1]
data_named = data_named[columns]
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 15), sharex=True)
sns.heatmap(data_named, annot = True)
plt.savefig('../output/fig/feature_heatmap.png')
L_names = [str('r^2='+str(i)) for i in np.linspace(0.1,0.9,9)]
MI_names = [str('K='+str(i)) for i in [5,15,50,100,200,300,500,700,1000,1500,2000,3000,4000]]
data_PC = pd.DataFrame(PC1_corr,index = L_names, columns = MI_names)
# reverse the MI columns so this heatmap matches the first one's orientation
columns = data_PC.columns.tolist()
columns = columns[::-1]
data_PC = data_PC[columns]
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 15), sharex=True)
sns.heatmap(data_PC, annot = True)
plt.savefig('../output/fig/PC1_heatmap.png')
```
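The overlap count at the heart of the first heatmap boils down to a membership test over column names. A toy sketch with hypothetical feature names (`np.isin` is the modern spelling of the `np.in1d` call used above):

```python
import numpy as np

lasso_cols = np.array(['f1', 'f2', 'f3', 'f5'])  # hypothetical LASSO picks
mi_cols = np.array(['f2', 'f3', 'f4'])           # hypothetical MI picks

# count how many LASSO-selected features also appear in the MI set
overlap = int(np.isin(lasso_cols, mi_cols).sum())
print(overlap)  # 2 shared features ('f2' and 'f3')
```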
# Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
- **Twitter**: @iamtrask
- **Blog**: http://iamtrask.github.io
### What You Should Already Know
- neural networks, forward and back-propagation
- stochastic gradient descent
- mean squared error
- and train/test splits
### Where to Get Help if You Need it
- Re-watch previous Udacity Lectures
- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)
- Shoot me a tweet @iamtrask
### Tutorial Outline:
- Intro: The Importance of "Framing a Problem" (this lesson)
- [Curate a Dataset](#lesson_1)
- [Developing a "Predictive Theory"](#lesson_2)
- [**PROJECT 1**: Quick Theory Validation](#project_1)
- [Transforming Text to Numbers](#lesson_3)
- [**PROJECT 2**: Creating the Input/Output Data](#project_2)
- Putting it all together in a Neural Network (video only - nothing in notebook)
- [**PROJECT 3**: Building our Neural Network](#project_3)
- [Understanding Neural Noise](#lesson_4)
- [**PROJECT 4**: Making Learning Faster by Reducing Noise](#project_4)
- [Analyzing Inefficiencies in our Network](#lesson_5)
- [**PROJECT 5**: Making our Network Train and Run Faster](#project_5)
- [Further Noise Reduction](#lesson_6)
- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](#project_6)
- [Analysis: What's going on in the weights?](#lesson_7)
# Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
```
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
```
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
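The conversion step described above is a one-liner; a minimal sketch on made-up raw strings:

```python
# Lower-case each review so "The", "the", and "THE" map to the same token
raw_reviews = ["The movie was GREAT", "the plot was thin"]
reviews = [r.lower() for r in raw_reviews]
print(reviews[0])  # "the movie was great"
```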
```
len(reviews)
reviews[0]
labels[0]
```
# Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
```
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
```
# Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
```
from collections import Counter
import numpy as np
```
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
```
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
```
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
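The difference the note describes shows up whenever a review contains consecutive spaces:

```python
text = "good  movie"           # note the double space
print(text.split(' '))         # ['good', '', 'movie'] - empty token kept
print(text.split())            # ['good', 'movie']     - whitespace runs collapsed
```

The empty-string tokens change the word counts, which is why `split(' ')` is needed to reproduce the videos' numbers exactly.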
```
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if labels[i]=='POSITIVE':
every_word_in_reviews = reviews[i].split(' ')
for word in every_word_in_reviews:
positive_counts[word] += 1
total_counts[word] += 1
else:
every_word_in_reviews = reviews[i].split(' ')
for word in every_word_in_reviews:
negative_counts[word]+= 1
total_counts[word] += 1
```
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
```
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
```
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.
**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`.
>Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
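The hinted formula, checked on toy counts (the numbers here are made up):

```python
from collections import Counter

positive_counts = Counter({'amazing': 100, 'the': 500})  # toy counts
negative_counts = Counter({'the': 480})                  # 'amazing' never seen

# the +1 keeps the denominator non-zero for positive-only words
ratio_amazing = positive_counts['amazing'] / float(negative_counts['amazing'] + 1)
ratio_the = positive_counts['the'] / float(negative_counts['the'] + 1)
print(ratio_amazing, ratio_the)  # 100.0 and roughly 1.04
```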
```
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
```
Examine the ratios you've calculated for a few words:
```
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
```
Looking closely at the values you just calculated, we see the following:
* Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
* Right now, 1 is considered neutral, but the positive-to-negative ratios of very positive words are much farther from 1 than the ratios of very negative words, so there is no way to directly compare two numbers and see whether one word conveys the same magnitude of positive sentiment as another conveys negative sentiment. We should center all the values around neutral so that a word's distance from neutral indicates how much sentiment (positive or negative) that word conveys.
* When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
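A quick check of why the log transform achieves this, using an illustrative 4-to-1 ratio:

```python
import numpy as np

very_positive = np.log(4.0)    # a 4-to-1 ratio toward positive
very_negative = np.log(0.25)   # its mirror image, 1-to-4 toward negative
neutral = np.log(1.0)          # a perfectly balanced word

print(very_positive, very_negative, neutral)  # about 1.386, -1.386, 0.0
```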
```
# TODO: Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
```
Examine the new ratios you've calculated for the same words from before:
```
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
```
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
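The negative-slice alternative mentioned alongside the solution code (`most_common()[:-31:-1]` for 30 words) walks the list from least to most common, just like reversing it. A toy `Counter` makes the equivalence concrete:

```python
from collections import Counter

c = Counter({'great': 5, 'ok': 3, 'awful': 1})
via_reversed = list(reversed(c.most_common()))[0:2]
via_slice = c.most_common()[:-3:-1]   # [:-31:-1] would give the bottom 30

print(via_reversed)  # [('awful', 1), ('ok', 3)]
print(via_slice)     # the same two least-common items
```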
```
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
```
# End of Project 1.
## Watch the next video to see Andrew's solution, then continue on to the next lesson.
# Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
```
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
```
# Project 2: Creating the Input/Output Data<a id='project_2'></a>
**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
```
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
```
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
```
vocab_size = len(vocab)
print(vocab_size)
```
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
```
from IPython.display import Image
Image(filename='sentiment_network_2.png')
```
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
```
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
```
Run the following cell. It should display `(1, 74074)`
```
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
```
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
```
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
```
**TODO:** Complete the implementation of `update_input_layer`. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside `layer_0`.
```
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
```
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
```
update_input_layer(reviews[0])
layer_0
```
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`,
depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
```
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
if (label == "POSITIVE"):
return 1
else:
return 0
```
Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.
```
labels[0]
get_target_for_label(labels[0])
```
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
```
labels[1]
get_target_for_label(labels[1])
```
# End of Project 2.
## Watch the next video to see Andrew's solution, then continue on to the next lesson.
# Project 3: Building a Neural Network<a id='project_3'></a>
**TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)
- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions
- Ensure `train` trains over the entire corpus
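The forward-pass requirement in the list above (a linear hidden layer, sigmoid only at the output) can be sketched with tiny illustrative shapes; this is not the class implementation itself, just the shape of the computation:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

vocab_size, hidden_nodes = 6, 3              # tiny illustrative sizes
layer_0 = np.zeros((1, vocab_size))
layer_0[0, [1, 4]] = 1                       # pretend two words occur once each
weights_0_1 = np.full((vocab_size, hidden_nodes), 0.1)  # input -> hidden
weights_1_2 = np.full((hidden_nodes, 1), 0.2)           # hidden -> output

layer_1 = layer_0.dot(weights_0_1)           # NO activation on the hidden layer
layer_2 = sigmoid(layer_1.dot(weights_1_2))  # sigmoid only on the output
print(layer_1, layer_2)
```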
### Where to Get Help if You Need it
- Re-watch earlier Udacity lectures
- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
```
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
#label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
```
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
```
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
**We have not trained the model yet, so the results should be about 50%, since the network will just be guessing between the two possible labels.**
```
mlp.test(reviews[-1000:],labels[-1000:])
```
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
```
mlp.train(reviews[:-1000],labels[:-1000])
```
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
```
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
```
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
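The learning-rate effect can be checked in isolation with a toy sketch that has nothing to do with the network itself: plain gradient descent on the one-dimensional function `f(w) = (w - 3)**2`. The function, step count, and rates below are invented purely for illustration.

```python
# Toy sketch (not the sentiment network): gradient descent on
# f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
def descend(learning_rate, steps=20, w=0.0):
    for _ in range(steps):
        w -= learning_rate * 2 * (w - 3)
    return w

# A moderate rate walks steadily toward the minimum at w = 3 ...
print(descend(0.1))
# ... while a rate above 1.0 overshoots further on every step and diverges.
print(abs(descend(1.1) - 3))
```

The same dynamic applies per weight in the network: too large a step reverses and amplifies the error instead of shrinking it, which is one reason dropping the rate from `0.1` toward `0.001` lets training begin to converge.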
# End of Project 3.
## Watch the next video to see Andrew's solution, then continue on to the next lesson.
# Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
```
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
```
# Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the `SentimentNetwork` class you created earlier into the following cell.
* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used.
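The difference between the two behaviors can be sketched on a tiny made-up vocabulary (the words and the sample review below are invented for illustration):

```python
import numpy as np

word2index = {'the': 0, 'movie': 1, 'was': 2, 'great': 3}
review = "the movie was the best the the great"

# Project 3 behavior: count occurrences, so filler words like "the" dominate.
counts = np.zeros((1, len(word2index)))
for word in review.split(" "):
    if word in word2index:
        counts[0][word2index[word]] += 1

# Project 4 behavior: record presence only, so every known word weighs 1.
binary = np.zeros((1, len(word2index)))
for word in review.split(" "):
    if word in word2index:
        binary[0][word2index[word]] = 1

print(counts)   # the repeated "the" shows up as a large entry
print(binary)   # all known words contribute equally
```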
```
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
```
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
```
That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
```
mlp.test(reviews[-1000:],labels[-1000:])
```
# End of Project 4.
## Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
# Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
```
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
```
# Project 5: Making our Network More Efficient<a id='project_5'></a>
**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the `SentimentNetwork` class from the previous project into the following cell.
* Remove the `update_input_layer` function - you will not need it in this version.
* Modify `init_network`:
>* You no longer need a separate input layer, so remove any mention of `self.layer_0`
>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
* Modify `train`:
>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.
>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.
>* Remove call to `update_input_layer`
>* Use `self`'s `layer_1` instead of a local `layer_1` object.
>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.
>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.
* Modify `run`:
>* Remove call to `update_input_layer`
>* Use `self`'s `layer_1` instead of a local `layer_1` object.
>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review.
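All of these changes rest on one identity: for a 0/1 input vector, the full matrix product `layer_0.dot(weights_0_1)` equals the sum of the weight rows at the non-zero indices, so the zero entries never need to be multiplied at all. A quick sanity check with toy sizes and random weights:

```python
import numpy as np

np.random.seed(1)
weights_0_1 = np.random.randn(10, 5)   # toy input-to-hidden weights

# Dense route: a 1 x 10 binary input with ones at indices 4 and 9.
layer_0 = np.zeros((1, 10))
layer_0[0][4] = 1
layer_0[0][9] = 1
dense_layer_1 = layer_0.dot(weights_0_1)

# Sparse route: skip the multiplications and just add the two rows.
sparse_layer_1 = np.zeros((1, 5))
for index in [4, 9]:
    sparse_layer_1 += weights_0_1[index]

print(np.allclose(dense_layer_1, sparse_layer_1))  # the two routes agree
```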
```
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
```
Run the following cell to recreate the network and train it once again.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
```
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
```
mlp.test(reviews[-1000:],labels[-1000:])
```
# End of Project 5.
## Watch the next video to see Andrew's solution, then continue on to the next lesson.
# Further Noise Reduction<a id='lesson_6'></a>
```
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
```
# Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the `SentimentNetwork` class from the previous project into the following cell.
* Modify `pre_process_data`:
>* Add two additional parameters: `min_count` and `polarity_cutoff`
>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
>* Change so words are only added to the vocabulary if they occur more than `min_count` times in the reviews.
>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`
* Modify `__init__`:
>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data`
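As a rough sketch of the filtering logic (the three `Counter` objects and the log-ratio formula follow the pattern used earlier in the notebook, but the sample reviews and the `min_count` and `polarity_cutoff` values here are made up):

```python
import numpy as np
from collections import Counter

# Tiny invented sample standing in for the full review data.
samples = [("a great great film", "POSITIVE"),
           ("a terrible film", "NEGATIVE"),
           ("great fun", "POSITIVE")]

positive_counts, negative_counts, total_counts = Counter(), Counter(), Counter()
for text, label in samples:
    for word in text.split(" "):
        total_counts[word] += 1
        (positive_counts if label == "POSITIVE" else negative_counts)[word] += 1

min_count = 2          # the project might use something like 20 or 50 instead
polarity_cutoff = 1.0  # keep only strongly polarized words

pos_neg_ratios = Counter()
for word, count in total_counts.items():
    if count >= min_count:
        ratio = positive_counts[word] / float(negative_counts[word] + 1)
        # log scale makes positive and negative polarity symmetric around 0
        pos_neg_ratios[word] = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))

vocab = [w for w, r in pos_neg_ratios.items() if abs(r) >= polarity_cutoff]
print(sorted(vocab))   # only clearly polarized, frequent-enough words survive
```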
```
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
```
Run the following cell to train your network with a small polarity cutoff.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
```
And run the following cell to test its performance.
```
mlp.test(reviews[-1000:],labels[-1000:])
```
Run the following cell to train your network with a much larger polarity cutoff.
```
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
```
And run the following cell to test its performance.
```
mlp.test(reviews[-1000:],labels[-1000:])
```
# End of Project 6.
## Watch the next video to see Andrew's solution, then continue on to the next lesson.
# Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
```
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
```
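The `get_most_similar_words` routine above ranks words by the dot product between their rows of `weights_0_1`. A minimal NumPy sketch of the same idea, using a synthetic three-word weight matrix (the values are made up, not the trained network's):

```python
import numpy as np
from collections import Counter

# Hypothetical 2-D embeddings for three words (rows of a weights_0_1-like matrix)
word2index = {"excellent": 0, "great": 1, "terrible": 2}
weights_0_1 = np.array([[1.0, 0.5],     # "excellent"
                        [0.9, 0.6],     # "great" (points the same way as "excellent")
                        [-1.0, -0.4]])  # "terrible" (points the opposite way)

def get_most_similar_words(focus):
    # Score every word by the dot product of its weight row with the focus word's row
    most_similar = Counter()
    for word, idx in word2index.items():
        most_similar[word] = np.dot(weights_0_1[idx], weights_0_1[word2index[focus]])
    return most_similar.most_common()

ranking = get_most_similar_words("excellent")
print(ranking)
```

Words whose weight rows point in the same direction as the focus word score high; words with opposite-pointing rows end up at the bottom of the ranking.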
<p style="border: 1px solid #e7692c; border-left: 15px solid #e7692c; padding: 10px; text-align:justify;">
<strong style="color: #e7692c">Tip.</strong> <a style="color: #000000;" href="https://nbviewer.jupyter.org/github/PacktPublishing/Hands-On-Computer-Vision-with-Tensorflow/blob/master/ch4/ch4_nb5_explore_imagenet_and_its_tiny_version.ipynb" title="View with Jupyter Online">Click here to view this notebook on <code>nbviewer.jupyter.org</code></a>.
<br/>These notebooks are better read there, as GitHub's default viewer ignores some of the formatting and interactive content.
</p>
<table style="font-size: 1em; padding: 0; margin: 0;">
<tr style="vertical-align: top; padding: 0; margin: 0;">
<td style="vertical-align: top; padding: 0; margin: 0; padding-right: 15px;">
<p style="background: #363636; color:#ffffff; text-align:justify; padding: 10px 25px;">
<strong style="font-size: 1.0em;"><span style="font-size: 1.2em;"><span style="color: #e7692c;">Hands-on</span> Computer Vision with TensorFlow 2</span><br/>by <em>Eliot Andres</em> & <em>Benjamin Planche</em> (Packt Pub.)</strong><br/><br/>
<strong>> Chapter 4: Influential Classification Tools</strong><br/>
</p>
<h1 style="width: 100%; text-align: left; padding: 0px 25px;"><small style="color: #e7692c;">
Notebook 5:</small><br/>Exploring ImageNet and Tiny-ImageNet</h1>
<br/>
<p style="border-left: 15px solid #363636; text-align:justify; padding: 0 10px;">
In this additional notebook, we demonstrate how those interested can acquire <em><strong>ImageNet</strong></em> and its smaller version <em><strong>Tiny-ImageNet</strong></em>, and can set up training pipelines using them. With this notebook, we will also briefly introduce the <code>tf.data</code> API.
</p>
<br/>
<p style="border-left: 15px solid #e7692c; padding: 0 10px; text-align:justify;">
<strong style="color: #e7692c;">Tip.</strong> The notebooks shared on this git repository illustrate some of the notions from the book "<em><strong>Hands-on Computer Vision with TensorFlow 2</strong></em>" written by Eliot Andres and Benjamin Planche and published by Packt. If you enjoyed the insights shared here, <strong>please consider acquiring the book!</strong>
<br/><br/>
The book provides further guidance for those eager to learn about computer vision and to harness the power of TensorFlow 2 and Keras to build performant recognition systems for object detection, segmentation, video processing, smartphone applications, and more.</p>
</td>
<td style="vertical-align: top; padding: 0; margin: 0; width: 250px;">
<a href="https://www.packtpub.com" title="Buy on Packt!">
<img src="../banner_images/book_cover.png" width=250>
</a>
<p style="background: #e7692c; color:#ffffff; padding: 10px; text-align:justify;"><strong>Leverage deep learning to create powerful image processing apps with TensorFlow 2 and Keras. <br/></strong>Get the book for more insights!</p>
<ul style="height: 32px; white-space: nowrap; text-align: center; margin: 0px; padding: 0px; padding-top: 10px;">
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get the book on Amazon!">
<img style="vertical-align: middle; max-width: 72px; max-height: 32px;" src="../banner_images/logo_amazon.png" width="75px">
</a>
</li>
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get your Packt book!">
<img style="vertical-align: middle; max-width: 72px; max-height: 32px;" src="../banner_images/logo_packt.png" width="75px">
</a>
</li>
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get the book on O'Reilly Safari!">
<img style="vertical-align: middle; max-width: 72px; max-height: 32px;" src="../banner_images/logo_oreilly.png" width="75px">
</a>
</li>
</ul>
</td>
</tr>
</table>
```
import os
import glob
from functools import partial  # used by the input pipeline below to pre-set the image size
import tensorflow as tf
from matplotlib import pyplot as plt
```
## Tiny-ImageNet
### Presentation
As presented in the chapter, the *ImageNet* dataset ([http://image-net.org](http://image-net.org)) and its yearly competition pushed forward the development of performant CNNs for image recognition[$^1$](#ref).
While it could have been interesting to reuse this dataset to reproduce the results listed in the book, its huge size makes _ImageNet_ difficult to deploy on most machines (memory-wise). Training on such a dataset would also be a long, expensive task.
Another solution could have been to use only a portion of _ImageNet_. Indeed, the people at Stanford University already compiled such a dataset for one of their famous classes ("_CS231n: Convolutional Neural Networks for Visual Recognition_" - http://cs231n.stanford.edu/). This dataset, _Tiny-ImageNet_ ([https://tiny-imagenet.herokuapp.com](https://tiny-imagenet.herokuapp.com)), contains 200 different classes (against the 1,000 of ImageNet). For each class, it offers 500 training images, 50 validation images, and 50 test images.
### Setup
Tiny-ImageNet can be downloaded at [https://tiny-imagenet.herokuapp.com](https://tiny-imagenet.herokuapp.com) or [http://image-net.org/download-images](http://image-net.org/download-images) (users need the proper access).
***Note:*** Make sure to check the _ImageNet_ terms of use: [http://image-net.org/download-faq](http://image-net.org/download-faq).
Once downloaded, the archive can be unzipped (`unzip tiny-imagenet-200.zip`) at a proper location. Its path is stored into a variable:
```
ROOT_FOLDER = os.path.expanduser('~/datasets/tiny-imagenet-200/')
```
Let us have a look at the directory structure of the dataset:
- <ROOT_FOLDER>/tiny-imagenet-200/
- wnids.txt <-- File with the list of class IDs in the dataset
- words.txt <-- File with the mapping from class IDs to readable labels
- train/ <-- Training folder
- <class_i>/ <-- Folder containing training data of class <class_i>
- images/ <-- Sub-folder with all the images for this class
- ***.JPEG
- <class_i>_boxes.txt <-- Annotations for detection tasks (unused)
- val/ <-- Validation folder
- images/ <-- Folder with all the validation images
- val_annotations.txt <-- File with the list of eval image filenames and
the corresponding class IDs
- test/ <-- Test folder
- images/ <-- Folder containing all the test images
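Given this layout, the glob pattern for one class's training images can be assembled with `os.path.join` (a small sketch; the root path and class ID here are just example values):

```python
import os

root = "~/datasets/tiny-imagenet-200"  # example location (adjust to your setup)
class_id = "n01443537"                 # one of the 200 class IDs listed in wnids.txt
pattern = os.path.join(root, "train", class_id, "images", "*.JPEG")
print(pattern)
```

This is the same pattern the training-image listing function below will build for every class in turn.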
Finally, we define some additional dataset-related constants useful for later:
```
IMAGENET_IDS_FILE_BASENAME = 'wnids.txt' # File in ROOT_FOLDER containing the list of class IDs
IMAGENET_WORDS_FILE_BASENAME = 'words.txt' # File in ROOT_FOLDER containing the mapping from class IDs to readable labels
IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 64, 64, 3 # Image dimensions
```
## Input Pipeline
Datasets come in all forms and sizes. As training a CNN is a complex and heavy process, it is important to have an efficient data pipeline to provide the training batches on time to avoid performance bottlenecks.
In the following section, we will set up an input pipeline for a TensorFlow model, using Tiny-ImageNet as an example.
### Parsing the Labels
_Tiny-ImageNet_ is mainly organized by class. Therefore, let us start by listing and parsing those various classes.
We will use the two text files at the root of _Tiny-ImageNet_ to:
- List the IDs corresponding to the 200 classes. This list will allow us to assign to each ID (IDs are 9-character-long strings) an integer from 0 to 199 (the ID position in the list);
- Build a dictionary to map the IDs to human-readable labels (e.g., '_n01443537_' $ \rightarrow$ '_goldfish, Carassius auratus_')
The first list is the most important, as it defines the categories (mapping the string IDs to numbers) which will be the target of our recognition models. The second structure, the dictionary, will simply allow us at the end to get understandable results.
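Before implementing the parser, a toy illustration of these two structures (a made-up two-class subset, purely to show the mapping):

```python
# Toy stand-ins for the contents of wnids.txt / words.txt:
class_ids = ["n01443537", "n01629819"]  # list position = numerical label
class_readable_labels = {"n01443537": "goldfish, Carassius auratus",
                         "n01629819": "European fire salamander"}

# String ID -> integer category (its position in the list):
num_label = class_ids.index("n01629819")
# Integer category -> human-readable text, going back through the ID:
readable = class_readable_labels[class_ids[num_label]]
print(num_label, readable)
```

The integer labels are what the model trains on; the dictionary is only consulted at the end, to display results.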
```
def _get_class_information(ids_file, words_file):
"""
Extract the class IDs and corresponding human-readable labels from metadata files.
:param ids_file: IDs filename (contains list of unique string class IDs)
:param words_file: Words filename (contains list of tuples <ID, human-readable label>)
:return: List of IDs, Dictionary of labels
"""
with open(ids_file, "r") as f:
class_ids = [line[:-1] for line in f.readlines()] # removing the `\n` for each line
with open(words_file, "r") as f:
words_lines = f.readlines()
class_readable_labels = {}
for line in words_lines:
# We split the line between the ID (9-char long) and the human readable label:
class_id = line[:9]
class_label = line[10:-1]
# If this class is in our dataset, we add it to our id-to-label dictionary:
if class_id in class_ids:
class_readable_labels[class_id] = class_label
return class_ids, class_readable_labels
```
We can directly test this function:
```
ids_file = os.path.join(ROOT_FOLDER, IMAGENET_IDS_FILE_BASENAME)
words_file = os.path.join(ROOT_FOLDER, IMAGENET_WORDS_FILE_BASENAME)
class_ids, class_readable_labels = _get_class_information(ids_file, words_file)
# Let's for example print the 10 first IDs and their human-readable labels:
for i in range(10):
id = class_ids[i]
print('"{}" --> "{}"'.format(id, class_readable_labels[id]))
```
### Listing All Images and Labels
Now that we have the categories defined, we can list all the images along with their respective categorical labels.
Since the dataset structure is different for training/validation/testing splits, we have to cover them separately. This happens often in practice, as defining a normalized structure for datasets is a complicated task (image format, annotation types, folder structure, etc. are heavily affected by the use-cases).
In this example, we will cover only the training and validation split:
```
def _get_train_image_files_and_labels(root_folder, class_ids):
"""
Fetch the lists of training images and numerical labels.
We assume the images are stored as "<root_folder>/train/<class_id>/images/*.JPEG"
:param root_folder: Dataset root folder
:param class_ids: List of class IDs
:return: List of image filenames and List of corresponding labels
"""
image_files, labels = [], []
for i in range(len(class_ids)):
class_id = class_ids[i]
# Grabbing all the image files for this class:
class_image_paths = os.path.join(root_folder, 'train', class_id, 'images', '*.JPEG')
class_images = glob.glob(class_image_paths)
# Creating as many numerical labels:
class_labels = [i] * len(class_images)
image_files += class_images
labels += class_labels
return image_files, labels
def _get_val_image_files_and_labels(root_folder, class_ids):
"""
Fetch the lists of validation images and numerical labels.
We assume the images are stored as "<root_folder>/train/<class_id>/images/*.JPEG"
:param root_folder: Dataset root folder
:param class_ids: List of class IDs
:return: List of image filenames and List of corresponding labels
"""
image_files, labels = [], []
# The file 'val_annotations.txt' contains for each line the image filename and its annotations.
# We parse it to build our dataset lists:
val_annotation_file = os.path.join(root_folder, 'val', 'val_annotations.txt')
with open(val_annotation_file, "r") as f:
anno_lines = f.readlines()
for line in anno_lines:
split_line = line.split('\t') # Splitting the line to extract the various pieces of info
if len(split_line) > 1:
image_file, image_class_id = split_line[0], split_line[1]
# If the label belongs to our dataset, we add the pair
# (`list.index` raises ValueError for unknown IDs, so check membership first):
if image_class_id in class_ids:
class_num_id = class_ids.index(image_class_id)
image_files.append(image_file)
labels.append(class_num_id)
return image_files, labels
```
If we call the method for the training data, we obtain our list of 500 * 200 = 100,000 images and their labels:
```
image_files, image_labels = _get_train_image_files_and_labels(ROOT_FOLDER, class_ids)
print("Number of training images: {}".format(len(image_files)))
```
### Building an Iterable Dataset with TensorFlow
We need to convert this list of filenames into images, and to generate batches our model can iterate over during its training. There are, however, many elements to take into consideration.
For instance, pre-loading all the images may not be possible on modest machines (at least for bigger datasets), but loading images on the fly would cause continuous delays. Also, in several papers we presented in Chapter 4, data scientists apply random transformations to the images at each iteration (cropping, scaling, etc.). Those operations are also time-consuming.
All in all, we would probably need some multi-threaded pipeline for our inputs. Thankfully, TensorFlow provides us with an efficient solution. Its **`tf.data`** API contains several methods to build **`tf.data.Dataset()`** instances, a dataset structure which can be converted into batch iterators for TF models.
***Note:*** The `tf.data` API is thoroughly detailed later in Chapter [7](./ch7).
For instance, a `Dataset` can be created from tensors containing lists of elements. Therefore, we can easily wrap our `image_files` and `image_labels` into a `Dataset`, first converting them into tensors:
```
image_files = tf.constant(image_files)
image_labels = tf.constant(image_labels)
dataset = tf.data.Dataset.from_tensor_slices((image_files, image_labels))
dataset
```
This object has multiple methods to transform its content, batch the elements, shuffle them, etc. Once defined, those operations will be applied only when necessary, i.e., when called by the framework (like any other operation in TF graphs).
Our goal is to have this dataset output batches of images and their labels. So, first things first, let us add an operation to obtain the images from the filenames:
```
def _parse_function(filename, label, size=[IMG_HEIGHT, IMG_WIDTH]):
"""
Parse the provided tensors, loading and resizing the corresponding image.
Code snippet from https://www.tensorflow.org/guide/datasets#decoding_image_data_and_resizing_it (Apache 2.0 License).
:param filename: Image filename (String Tensor)
:param label: Image label
:param size: Size to resize the images to
:return: Image, Label
"""
# Reading the file and returning its content as bytes:
image_string = tf.io.read_file(filename)
# Decoding those into the image
# (with `channels=3`, TF will duplicate the channels of grayscale images so they have 3 channels too):
image_decoded = tf.io.decode_jpeg(image_string, channels=3)
# Converting to float:
image_float = tf.image.convert_image_dtype(image_decoded, tf.float32)
# Resizing the image to the expected dimensions:
image_resized = tf.image.resize(image_float, size)
return image_resized, label
dataset = dataset.map(_parse_function)
```
`dataset.map(fn)` tells the dataset to apply the function `fn` to each element requested at a given iteration. These functions can be chained. For example, we can add another function to randomly transform the training images, to artificially increase the number of different images our model can train on:
```
def _training_augmentation_fn(image, label):
"""
Apply random transformations to augment the training images.
:param images: Images
:param label: Labels
:return: Augmented Images, Labels
"""
# Randomly applied horizontal flip:
image = tf.image.random_flip_left_right(image)
# Random B/S changes:
image = tf.image.random_brightness(image, max_delta=0.1)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.clip_by_value(image, 0.0, 1.0) # keeping pixel values in check
# Random resize and random crop back to expected size:
original_shape = tf.shape(image)
random_scale_factor = tf.random.uniform([1], minval=0.7, maxval=1.3, dtype=tf.float32)
scaled_height = tf.cast(tf.cast(original_shape[0], tf.float32) * random_scale_factor,
tf.int32)
scaled_width = tf.cast(tf.cast(original_shape[1], tf.float32) * random_scale_factor,
tf.int32)
scaled_shape = tf.squeeze(tf.stack([scaled_height, scaled_width]))
image = tf.image.resize(image, scaled_shape)
image = tf.image.random_crop(image, original_shape)
return image, label
dataset = dataset.map(_training_augmentation_fn) # `map` returns a new dataset, so its result must be assigned
```
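This lazy, chained behavior is similar to plain Python's built-in `map`: each stage only wraps the previous one, and nothing runs until elements are actually pulled. A rough pure-Python analogy (not the actual `tf.data` machinery, just its evaluation pattern):

```python
applied = []  # records which stages have actually run

def parse(x):
    applied.append("parse")
    return x * 2

def augment(x):
    applied.append("augment")
    return x + 1

dataset = [1, 2, 3]
pipeline = map(augment, map(parse, dataset))  # chained, but nothing has run yet
assert applied == []                          # lazy: no element processed so far
first = next(pipeline)                        # pulling one element triggers both stages
print(first, applied)
```

`tf.data` applies the same principle, with the added ability to run the mapped functions in parallel (see the `num_parallel_calls` argument used further below).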
We can also specify whether we want the dataset to be shuffled, how many elements each batch should contain, how many times the dataset should be repeated (for multiple epochs), how many batches to pre-fetch, etc.:
```
batch_size = 32
num_epochs = 30
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(num_epochs)
dataset = dataset.prefetch(1)
```
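With these settings and the 100,000 training images listed earlier, one epoch corresponds to 100,000 / 32 = 3,125 batches (the division happens to be exact here); a quick back-of-the-envelope check:

```python
import math

num_images = 500 * 200  # images per class * number of classes
batch_size = 32
num_epochs = 30
batches_per_epoch = math.ceil(num_images / batch_size)  # 100,000 / 32 divides evenly
total_batches = batches_per_epoch * num_epochs
print(batches_per_epoch, total_batches)
```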
***Note:*** More detailed explanations on `Dataset` and its methods, as well as performance recommendations, will be provided in Chapter 7 and its [notebooks](../ch7).
Our dataset is ready, and we can now simply iterate over it to obtain our batches:
```
images, labels = next(iter(dataset))
# Displaying an example:
i = 0
class_id = class_ids[labels[i]]
readable_label = class_readable_labels[class_id]
print(readable_label)
plt.imshow(images[i])
```
As we saw in the previous notebooks, these `tf.data.Dataset` instances can simply be passed to Keras models for training.
### Wrapping Up for Estimators
If we want to pass our dataset to an Estimator, we can wrap the iterable inputs (`images` here) into a dictionary in order to name the content.
```
batch = {'image': images, 'label': labels}
```
We now have our input pipeline ready. We will reuse these variables in the next notebooks. For clarity, we wrap their definition into easy-to-call functions:
```
def _input_fn(image_files, image_labels,
shuffle=True, batch_size=32, num_epochs=None,
augmentation_fn=None, wrap_for_estimator=True, resize_to=None):
"""
Prepares and returns the iterators for a dataset.
:param image_files: List of image files
:param image_labels: List of image labels
:param shuffle: Flag to shuffle the dataset (if True)
:param batch_size: Batch size
:param num_epochs: Number of epochs (to repeat the iteration - infinite if None)
:param augmentation_fn: opt. Augmentation function
:param wrap_for_estimator: Flag to wrap the inputs to be passed for Estimators
:param resize_to: (opt) Dimensions (h x w) to resize the images to
:return: Iterable batched images and labels
"""
# Converting to TF dataset:
image_files = tf.constant(image_files)
image_labels = tf.constant(image_labels)
dataset = tf.data.Dataset.from_tensor_slices((image_files, image_labels))
if shuffle:
dataset = dataset.shuffle(buffer_size=50000)
# Adding parsing operation, to open and decode images:
if resize_to is None:
parse_fn = _parse_function
else:
# We specify to which dimensions to resize the images, if requested:
parse_fn = partial(_parse_function, size=resize_to)
dataset = dataset.map(parse_fn, num_parallel_calls=4)
# Opt. adding some further transformations:
if augmentation_fn is not None:
dataset = dataset.map(augmentation_fn, num_parallel_calls=4) # assign the result, as `map` returns a new dataset
# Further preparing for iterating on:
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(num_epochs)
dataset = dataset.prefetch(1)
if wrap_for_estimator:
dataset = dataset.map(lambda img, label: {'image': img, 'label': label})
return dataset
def tiny_imagenet(phase='train', shuffle=True, batch_size=32, num_epochs=None,
augmentation_fn=_training_augmentation_fn, wrap_for_estimator=True,
root_folder=ROOT_FOLDER, resize_to=None):
"""
Instantiate a Tiny-Image training or validation dataset, which can be passed to any model.
:param phase: Phase ('train' or 'val')
:param shuffle: Flag to shuffle the dataset (if True)
:param batch_size: Batch size
:param num_epochs: Number of epochs (to repeat the iteration - infinite if None)
:param augmentation_fn: opt. Augmentation function
:param wrap_for_estimator: Flag to wrap the inputs to be passed for Estimators
:param root_folder: Dataset root folder
:param resize_to: (opt) Dimensions (h x w) to resize the images to
:return: Dataset pipeline, IDs List, Dictionary to read labels
"""
ids_file = os.path.join(root_folder, IMAGENET_IDS_FILE_BASENAME)
words_file = os.path.join(root_folder, IMAGENET_WORDS_FILE_BASENAME)
class_ids, class_readable_labels = _get_class_information(ids_file, words_file)
if phase == 'train':
image_files, image_labels = _get_train_image_files_and_labels(root_folder, class_ids)
elif phase == 'val':
image_files, image_labels = _get_val_image_files_and_labels(root_folder, class_ids)
else:
raise ValueError("Unknown phase ('train' or 'val' only)")
dataset = _input_fn(image_files, image_labels,
shuffle, batch_size, num_epochs, augmentation_fn,
wrap_for_estimator, resize_to)
return dataset, class_ids, class_readable_labels
```
## ImageNet
For our more ambitious readers, the same process can be followed with the original *ImageNet* dataset, after acquiring it from its website ([http://image-net.org](http://image-net.org)).
However, TensorFlow developers have made public the `tensorflow-datasets` package ([https://github.com/tensorflow/datasets](https://github.com/tensorflow/datasets)), which greatly simplifies the download and usage of many standard datasets (it is still up to the users to make sure they have the proper authorizations / they respect the terms of use for the datasets they download this way).
We will not expand on this further in this notebook, as `tensorflow-datasets` has already been properly introduced in a previous [notebook](./ch4_nb1_implement_resnet_from_scratch.ipynb). The explanations shared there can be directly applied to the _ImageNet_ version provided by this package ([details](https://github.com/tensorflow/datasets/blob/master/docs/datasets.md#imagenet2012)):
```
# !pip install tensorflow-datasets # Uncomment to install the module
import tensorflow_datasets as tfds
imagenet_builder = tfds.builder("imagenet2012")
print(imagenet_builder.info)
# Uncomment to download and get started (check terms of use!):
# imagenet_builder.download_and_prepare()
```
<a id="ref"></a>
#### References
1. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L., 2014. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs].
# Few-Shot Learning With Prototypical Networks
```
import torch
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from matplotlib import pyplot as plt
import cv2
from tensorboardX import SummaryWriter
from torch import optim
from tqdm import tqdm
import multiprocessing as mp
from preprocessing import read_images
from prototypicalNet import PrototypicalNet, train_step, test_step, load_weights
tqdm.pandas(desc="my bar!")
```
## Data Reading and Augmentation
The Omniglot data set is designed for developing more human-like learning algorithms. It contains 1,623 different handwritten characters from 50 different alphabets. To increase the number of classes, all the images are rotated by 90, 180, and 270 degrees, with each rotation treated as one more class, bringing the total to 6,492 (1,623 × 4) classes. Images from 4,200 classes are used as training data, and the rest go to the test set.
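The class bookkeeping described above is simple arithmetic, and NumPy's `rot90` is one way such rotations can be produced (a sketch; the actual `read_images` helper used below is not shown here):

```python
import numpy as np

base_classes = 1623
rotations = (0, 90, 180, 270)  # each rotation of a class becomes a new class
total_classes = base_classes * len(rotations)
train_classes = 4200
test_classes = total_classes - train_classes

# The four rotated variants of one (toy) image:
img = np.arange(9).reshape(3, 3)
rotated = [np.rot90(img, k) for k in range(4)]
print(total_classes, test_classes, len(rotated))
```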
```
# Reading the data
print("Reading background images")
trainx, trainy = read_images(r'D:\_hackerreborn\Prototypical-Networks\input\omniglot\images_background')
print(trainx.shape)
print(trainy.shape)
# Checking if GPU is available
use_gpu = torch.cuda.is_available()
# Converting input to pytorch Tensor
trainx = torch.from_numpy(trainx).float()
if use_gpu:
trainx = trainx.cuda()
trainx.shape, trainy.shape
```
## Model
```
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from tqdm import trange
from time import sleep
import numpy as np
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
class Net(nn.Module):
"""
Image2Vector CNN which takes an image of dimension (28x28x3) and returns a feature vector of length 64
"""
def sub_block(self, in_channels, out_channels=64, kernel_size=3):
block = torch.nn.Sequential(
torch.nn.Conv2d(kernel_size=kernel_size, in_channels=in_channels, out_channels=out_channels, padding=1),
torch.nn.BatchNorm2d(out_channels),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2)
)
return block
def __init__(self):
super(Net, self).__init__()
self.convnet1 = self.sub_block(3)
self.convnet2 = self.sub_block(64)
self.convnet3 = self.sub_block(64)
self.convnet4 = self.sub_block(64)
def forward(self, x):
x = self.convnet1(x)
x = self.convnet2(x)
x = self.convnet3(x)
x = self.convnet4(x)
x = torch.flatten(x, start_dim=1)
return x
class PrototypicalNet(nn.Module):
def __init__(self, use_gpu=False):
super(PrototypicalNet, self).__init__()
self.f = Net()
self.gpu = use_gpu
if self.gpu:
self.f = self.f.cuda()
def forward(self, datax, datay, Ns,Nc, Nq, total_classes):
"""
Implementation of one episode in Prototypical Net
datax: Training images
datay: Corresponding labels of datax
Nc: Number of classes per episode
Ns: Number of support data per class
Nq: Number of query data per class
total_classes: Total classes in training set
"""
k = total_classes.shape[0]
K = np.random.choice(total_classes, Nc, replace=False)
Query_x = torch.Tensor()
if(self.gpu):
Query_x = Query_x.cuda()
Query_y = []
Query_y_count = []
centroid_per_class = {}
class_label = {}
label_encoding = 0
for cls in K:
S_cls, Q_cls = self.random_sample_cls(datax, datay, Ns, Nq, cls)
centroid_per_class[cls] = self.get_centroid(S_cls, Nc)
class_label[cls] = label_encoding
label_encoding += 1
Query_x = torch.cat((Query_x, Q_cls), 0) # Joining all the query set together
Query_y += [cls]
Query_y_count += [Q_cls.shape[0]]
Query_y, Query_y_labels = self.get_query_y(Query_y, Query_y_count, class_label)
Query_x = self.get_query_x(Query_x, centroid_per_class, Query_y_labels)
return Query_x, Query_y
def random_sample_cls(self, datax, datay, Ns, Nq, cls):
"""
Randomly samples Ns examples as support set and Nq as Query set
"""
data = datax[(datay == cls).nonzero()]
perm = torch.randperm(data.shape[0])
idx = perm[:Ns]
S_cls = data[idx]
idx = perm[Ns : Ns+Nq]
Q_cls = data[idx]
if self.gpu:
S_cls = S_cls.cuda()
Q_cls = Q_cls.cuda()
return S_cls, Q_cls
def get_centroid(self, S_cls, Nc):
"""
Returns the centroid (mean support-set embedding) for a class
"""
# Average over the number of support examples actually embedded:
return torch.sum(self.f(S_cls), 0).unsqueeze(1).transpose(0,1) / S_cls.shape[0]
def get_query_y(self, Qy, Qyc, class_label):
"""
Returns labeled representation of classes of Query set and a list of labels.
"""
labels = []
m = len(Qy)
for i in range(m):
labels += [Qy[i]] * Qyc[i]
labels = np.array(labels).reshape(len(labels), 1)
label_encoder = LabelEncoder()
Query_y = torch.Tensor(label_encoder.fit_transform(labels).astype(int)).long()
if self.gpu:
Query_y = Query_y.cuda()
Query_y_labels = np.unique(labels)
return Query_y, Query_y_labels
def get_centroid_matrix(self, centroid_per_class, Query_y_labels):
"""
Returns the centroid matrix where each column is a centroid of a class.
"""
centroid_matrix = torch.Tensor()
if(self.gpu):
centroid_matrix = centroid_matrix.cuda()
for label in Query_y_labels:
centroid_matrix = torch.cat((centroid_matrix, centroid_per_class[label]))
if self.gpu:
centroid_matrix = centroid_matrix.cuda()
return centroid_matrix
def get_query_x(self, Query_x, centroid_per_class, Query_y_labels):
"""
Returns distance matrix from each Query image to each centroid.
"""
centroid_matrix = self.get_centroid_matrix(centroid_per_class, Query_y_labels)
Query_x = self.f(Query_x)
m = Query_x.size(0)
n = centroid_matrix.size(0)
# The expressions below expand both matrices so that they become compatible with each other, in order to calculate the L2 distance.
centroid_matrix = centroid_matrix.expand(m, centroid_matrix.size(0), centroid_matrix.size(1)) # Expanding centroid matrix to "m".
Query_matrix = Query_x.expand(n, Query_x.size(0), Query_x.size(1)).transpose(0,1) # Expanding Query matrix "n" times
Qx = torch.pairwise_distance(centroid_matrix.transpose(1,2), Query_matrix.transpose(1,2))
return Qx
protonet = PrototypicalNet(use_gpu=use_gpu)
optimizer = optim.SGD(protonet.parameters(), lr = 0.01, momentum=0.99)
```
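The nearest-centroid logic implemented in `get_query_x` can be sketched more compactly in NumPy: stack the class centroids, compute the Euclidean distance from each query embedding to each centroid, and classify by the smallest distance (synthetic 2-D embeddings here, not the CNN's output):

```python
import numpy as np

# Synthetic embeddings: 2 class centroids and 3 query vectors, all 2-D
centroids = np.array([[0.0, 0.0],
                      [10.0, 10.0]])
queries = np.array([[1.0, 0.0],
                    [9.0, 10.0],
                    [0.0, 2.0]])

# Distance matrix via broadcasting: one row per query, one column per centroid
dists = np.linalg.norm(queries[:, None, :] - centroids[None, :, :], axis=-1)
pred = dists.argmin(axis=1)  # nearest-centroid classification
print(dists.shape, pred)
```

In the network above, the negated distances are pushed through `log_softmax` so the same nearest-centroid decision can be trained with a cross-entropy-style loss.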
## Training
```
def train_step(datax, datay, Ns,Nc, Nq):
optimizer.zero_grad()
Qx, Qy= protonet(datax, datay, Ns, Nc, Nq, np.unique(datay))
Qx = Qx.max() - Qx # turn distances into similarity scores (larger = closer)
pred = torch.log_softmax(Qx, dim=-1)
loss = F.nll_loss(pred, Qy)
loss.backward()
optimizer.step()
acc = torch.mean((torch.argmax(pred, 1) == Qy).float())
return loss, acc
num_episode = 16000
frame_size = 1000
trainx = trainx.permute(0, 3, 1, 2)
frame_loss = 0
frame_acc = 0
for i in range(num_episode):
loss, acc = train_step(trainx, trainy, 5, 60, 5)
frame_loss += loss.data
frame_acc += acc.data
if( (i+1) % frame_size == 0):
print("Frame Number:", ((i+1) // frame_size), 'Frame Loss: ', frame_loss.data.cpu().numpy().tolist()/ frame_size, 'Frame Accuracy:', (frame_acc.data.cpu().numpy().tolist() * 100) / frame_size)
frame_loss = 0
frame_acc = 0
```
## Testing
```
def test_step(datax, datay, Ns,Nc, Nq):
Qx, Qy= protonet(datax, datay, Ns, Nc, Nq, np.unique(datay))
pred = torch.log_softmax(Qx, dim=-1)
loss = F.nll_loss(pred, Qy)
acc = torch.mean((torch.argmax(pred, 1) == Qy).float())
return loss, acc
num_test_episode = 2000
avg_loss = 0
avg_acc = 0
for _ in range(num_test_episode):
loss, acc = test_step(testx, testy, 5, 60, 15)
avg_loss += loss.data
avg_acc += acc.data
print('Avg Loss: ', avg_loss.data.cpu().numpy().tolist() / num_test_episode , 'Avg Accuracy:', (avg_acc.data.cpu().numpy().tolist() * 100) / num_test_episode)
```
<table>
<tr><td><img style="height: 150px;" src="images/geo_hydro1.jpg"></td>
<td bgcolor="#FFFFFF">
<p style="font-size: xx-large; font-weight: 900; line-height: 100%">AG Dynamics of the Earth</p>
<p style="font-size: large; color: rgba(0,0,0,0.5);">Jupyter notebooks</p>
<p style="font-size: large; color: rgba(0,0,0,0.5);">Georg Kaufmann</p>
</td>
</tr>
</table>
# Applied Geophysics II: Chapter 5: Gravimetry
# Gravity Modelling
----
*Georg Kaufmann,
Geophysics Section,
Institute of Geological Sciences,
Freie Universität Berlin,
Germany*
```
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
# define profile
xmin = -400.
xmax = +400.
xstep = 101
x = np.linspace(xmin,xmax,xstep)
```
## 3D sphere
<img src="figures/sketch_kugel.jpg" style="width:10cm">
$$
g(x) = {{4}\over{3}} \pi G \Delta\rho R^3 {{D}\over{(x^2 + D^2)^{3/2}}}
$$
```
def boug_sphere(x,D=100.,R=50.,drho=500.):
# Bouguer gravity of solid sphere
G = 6.672e-11 # m^3/kg/s^2
boug = 4./3.*np.pi*G*drho * R**3*D/(x**2+D**2)**(3/2)
return boug
def plot_sphere(f1=False,f2=False,f3=False,f4=False,f5=False):
fig,axs = plt.subplots(2,1,figsize=(12,8))
axs[0].set_xlim([-400,400])
axs[0].set_xticks([x for x in np.linspace(-300,300,7)])
#axs[0].set_xlabel('Profile [m]')
axs[0].set_ylim([0,0.4])
axs[0].set_yticks([y for y in np.linspace(0,0.4,5)])
axs[0].set_ylabel('Gravity [mGal]')
axs[0].plot(x,1.e5*boug_sphere(x),linewidth=1.0,linestyle=':',color='black',label='sphere')
if (f1):
axs[0].plot(x,1.e5*boug_sphere(x),linewidth=2.0,linestyle='-',color='red',label='R=50m, D=100m')
if (f2):
axs[0].plot(x,1.e5*boug_sphere(x,D=80),linewidth=2.0,linestyle='--',color='red',label='R=50m, D=80m')
if (f3):
axs[0].plot(x,1.e5*boug_sphere(x,D=120),linewidth=2.0,linestyle=':',color='red',label='R=50m, D=120m')
if (f4):
axs[0].plot(x,1.e5*boug_sphere(x,R=40),linewidth=2.0,linestyle='-',color='green',label='R=40m, D=100m')
if (f5):
axs[0].plot(x,1.e5*boug_sphere(x,R=60),linewidth=2.0,linestyle='-',color='blue',label='R=60m, D=100m')
axs[0].legend()
axs[1].set_xlim([-400,400])
axs[1].set_xticks([x for x in np.linspace(-300,300,7)])
axs[1].set_xlabel('Profile [m]')
axs[1].set_ylim([250,0])
axs[1].set_yticks([y for y in np.linspace(0.,200.,5)])
axs[1].set_ylabel('Depth [m]')
angle = [theta for theta in np.linspace(0,2*np.pi,41)]
R1=50.;D1=100.
R2=50.;D2=80.
R3=50.;D3=120.
R4=40.;D4=100.
R5=60.;D5=100.
if (f1):
axs[1].plot(R1*np.cos(angle),D1+R1*np.sin(angle),linewidth=2.0,linestyle='-',color='red',label='R=50m, D=100m')
if (f2):
axs[1].plot(R2*np.cos(angle),D2+R2*np.sin(angle),linewidth=2.0,linestyle='--',color='red',label='R=50m, D=80m')
if (f3):
axs[1].plot(R3*np.cos(angle),D3+R3*np.sin(angle),linewidth=2.0,linestyle=':',color='red',label='R=50m, D=120m')
if (f4):
axs[1].plot(R4*np.cos(angle),D4+R4*np.sin(angle),linewidth=2.0,linestyle='-',color='green',label='R=40m, D=100m')
if (f5):
axs[1].plot(R5*np.cos(angle),D5+R5*np.sin(angle),linewidth=2.0,linestyle='-',color='blue',label='R=60m, D=100m')
plot_sphere(f3=True)
# call interactive module
w = dict(
f1=widgets.Checkbox(value=True,description='one',continuous_update=False,disabled=False),
#a1=widgets.FloatSlider(min=0.,max=2.,step=0.1,value=1.0),
f2=widgets.Checkbox(value=False,description='two',continuous_update=False,disabled=False),
f3=widgets.Checkbox(value=False,description='three',continuous_update=False,disabled=False),
f4=widgets.Checkbox(value=False,description='four',continuous_update=False,disabled=False),
f5=widgets.Checkbox(value=False,description='five',continuous_update=False,disabled=False))
output = widgets.interactive_output(plot_sphere, w)
box = widgets.HBox([widgets.VBox([*w.values()]), output])
display(box)
```
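As a quick sanity check (added here, not part of the original notebook), the peak anomaly directly above the sphere's centre follows from setting $x=0$ in the formula above, $g(0) = \frac{4}{3}\pi G \Delta\rho R^3 / D^2$. For the default parameters this gives about 0.175 mGal, consistent with the 0–0.4 mGal axis range used in the plot:

```python
import numpy as np

G = 6.672e-11                 # gravitational constant [m^3/kg/s^2]
D, R, drho = 100., 50., 500.  # depth [m], radius [m], density contrast [kg/m^3]

# peak Bouguer anomaly of a buried sphere, directly above its centre (x=0)
g_peak = 4./3.*np.pi*G*drho * R**3 / D**2   # [m/s^2]
print(1.e5*g_peak)                          # convert to mGal
```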
... done
# 1. User Reviews via Steam API (https://partner.steamgames.com/doc/store/getreviews)
```
# import packages
import os
import sys
import time
import json
import numpy as np
import urllib.parse
import urllib.request
from tqdm import tqdm
import plotly.express as px
from datetime import datetime
from googletrans import Translator
import pandas as pd
from pandas import json_normalize
# list package ver. etc.
print("Python version")
print (sys.version)
print("Version info.")
print (sys.version_info)
print('---------------')
```
---
### Data Dictionary:
- Response:
- success - 1 if the query was successful
- query_summary - Returned in the first request
- recommendationid - The unique id of the recommendation
- author
- steamid - the user’s SteamID
- num_games_owned - number of games owned by the user
- num_reviews - number of reviews written by the user
- playtime_forever - lifetime playtime tracked in this app
- playtime_last_two_weeks - playtime tracked in the past two weeks for this app
- playtime_at_review - playtime when the review was written
- last_played - time for when the user last played
- language - language the user indicated when authoring the review
- review - text of written review
- timestamp_created - date the review was created (unix timestamp)
- timestamp_updated - date the review was last updated (unix timestamp)
- voted_up - true means it was a positive recommendation
- votes_up - the number of users that found this review helpful
- votes_funny - the number of users that found this review funny
- weighted_vote_score - helpfulness score
- comment_count - number of comments posted on this review
- steam_purchase - true if the user purchased the game on Steam
- received_for_free - true if the user checked a box saying they got the app for free
- written_during_early_access - true if the user posted this review while the game was in Early Access
- developer_response - text of the developer response, if any
- timestamp_dev_responded - Unix timestamp of when the developer responded, if applicable
---
Source: https://partner.steamgames.com/doc/store/getreviews
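Since `urllib.parse` is already imported above, the long query URL used in the next section could be assembled from its parameters instead of hard-coded — a sketch (the parameter values mirror the hard-coded URL; `urlencode` also percent-encodes `+` and `=` in cursor values automatically):

```python
import urllib.parse

base = 'https://store.steampowered.com/appreviews/393380'
params = {
    'json': 1,
    'filter': 'updated',
    'language': 'all',
    'review_type': 'all',
    'purchase_type': 'all',
    'num_per_page': 100,
    'cursor': '*',
}
# urlencode joins key=value pairs with '&' and percent-encodes reserved characters
url = base + '?' + urllib.parse.urlencode(params)
print(url)
```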
## 1.1 Import
```
# generate game review df
#steam 'chunks' their json files (the game reviews) in sets of 100
#ending with a signature, a 'cursor'. This cursor is then pasted
#onto the same url, to 'grab' the next chunk and so on.
#This sequence ends with an 'end cursor' of 'AoJ4tey90tECcbOXSw=='
#set variables
url_base = 'https://store.steampowered.com/appreviews/393380?json=1&filter=updated&language=all&review_type=all&purchase_type=all&num_per_page=100&cursor='
#first pass
url = urllib.request.urlopen(url_base + '*')
data = json.loads(url.read().decode())
next_cursor = data['cursor']
next_cursor = next_cursor.replace('+', '%2B')
df1 = json_normalize(data['reviews'])
print(next_cursor)
#add results till stopcursor met, then send all results to csv
while True:
time.sleep(0.5) # sleep for half a second between requests
url_temp = url_base + next_cursor
url = urllib.request.urlopen(url_temp)
data = json.loads(url.read().decode())
next_cursor = data['cursor']
next_cursor = next_cursor.replace('+', '%2B')
df2 = json_normalize(data['reviews'])
df1 = pd.concat([df1, df2])
print(next_cursor)
if next_cursor == 'AoJ44PCp0tECd4WXSw==' or next_cursor == '*':
df_steam_reviews = df1
df1 = None
break
#the hashes below are the cursors looped through until the 'end cursor'.
#this is just my way to monitor the download.
# inspect columns
print(df_steam_reviews.info(verbose=True))
# inspect shape
print(df_steam_reviews.shape)
# inspect df
df_steam_reviews
# save that sheet
df_steam_reviews.to_csv('squad_reviews.csv', index=False)
```
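The manual `.replace('+', '%2B')` above handles the most common case, but a cursor could in principle contain other reserved characters. `urllib.parse.quote` with `safe=''` percent-encodes them all — a defensive alternative, not what the original run used:

```python
from urllib.parse import quote

cursor = 'AoJ4tey90tECcbOXSw=='   # example cursor from the comments above
encoded = quote(cursor, safe='')  # '+' -> '%2B', '=' -> '%3D', etc.
print(encoded)
```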
## 1.2 Clean
```
#search for presence of empty cells
df_steam_reviews.isnull().sum(axis = 0)
#drop empty cols 'timestamp_dev_responded' and 'developer_response'
df_steam_reviews = df_steam_reviews.drop(['timestamp_dev_responded', 'developer_response'], axis=1)
# convert unix timestamp columns to datetime format
def time_to_clean(x):
return datetime.fromtimestamp(x)
df_steam_reviews['timestamp_created'] = df_steam_reviews['timestamp_created'].apply(time_to_clean)
df_steam_reviews['timestamp_updated'] = df_steam_reviews['timestamp_updated'].apply(time_to_clean)
df_steam_reviews['author.last_played'] = df_steam_reviews['author.last_played'].apply(time_to_clean)
# inspect
df_steam_reviews
# save that sheet
df_steam_reviews.to_csv('squad_reviews.csv', index=False)
```
# Misc
```
# Squad free weekends:
free_weekends = [
    'Nov 2016', 'Apr 2017', 'Nov 2017', 'Jun 2018',
    'Nov 2018', 'Jul 2019', 'Nov 2019',
]
# major patch days ('?' = unknown):
patch_days = {
    'v1': 'July 1 2015', 'v2': 'Oct 31 2015', 'v3': 'Dec 15 2015',
    'v4': '?', 'v5': 'Mar 30 2016', 'v6': 'May 26 2016',
    'v7': 'Aug 7 2016', 'v8': 'Nov 1 2016', 'v9': 'Mar 9 2017',
    'v10': 'Feb 5 2018', 'v11': 'Jun 6 2018', 'v12': 'Nov 29 2018',
    'v13': 'May ? 2019', 'v14': 'Jun 28 2019', 'v15': 'Jul 22 2019',
    'v16': 'Oct 10 2019', 'v17': 'Nov 25 2019', 'v18': '?',
    'v19': 'May 2 2020',
}
```

```
#v2 (from https://cloud.google.com/translate/docs/simple-translate-call#translate_translate_text-python)
# translate/spellcheck via googletranslate pkg
from google.cloud import translate_v2 as translate
def time_to_translate(x):
    if x is None:  # ignore the 'NaN' reviews
        return 'NaN'
    translate_client = translate.Client()
    if isinstance(x, bytes):
        x = x.decode('utf-8')
    # translate() returns a dict; the translated string is under 'translatedText'
    result = translate_client.translate(x, target_language='en')
    return result['translatedText']
#print(time_to_translate('hola'))
# scratch
df_steam_reviews = pd.read_csv('squad_reviews.csv', low_memory=False)
df_steam_reviews
# display reviews
fig = px.histogram(df_steam_reviews, x="timestamp_created", color="voted_up", width=1000, height=500, title='Positive(True)/Negative(False) Reviews')
fig.show()
# translate/spellcheck t
t['review.translated'] = t['review'].progress_apply(time_to_translate)
t.to_csv('t.csv', index=False)
```
# Benchmarking the Permanent
This tutorial shows how to use the permanent function of The Walrus, which calculates the permanent using Ryser's algorithm.
### The Permanent
The permanent of an $n$-by-$n$ matrix A = $a_{i,j}$ is defined as
$\text{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}.$
The sum here extends over all elements $\sigma$ of the symmetric group $S_n$; i.e. over all permutations of the numbers $1, 2, \ldots, n$. ([see Wikipedia](https://en.wikipedia.org/wiki/Permanent)).
The function `thewalrus.perm` implements [Ryser's algorithm](https://en.wikipedia.org/wiki/Computing_the_permanent#Ryser_formula) to calculate the permanent of an arbitrary matrix using [Gray code](https://en.wikipedia.org/wiki/Gray_code) ordering.
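To make the definition concrete, here is a brute-force evaluation of the sum over all permutations (exponentially slower than Ryser's algorithm, so only usable for tiny matrices):

```python
import itertools
import numpy as np

def perm_bruteforce(A):
    """Permanent via the defining sum over all permutations of S_n."""
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

print(perm_bruteforce(np.ones((4, 4))))  # 4! = 24
```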
## Using the library
Once installed or compiled, one imports the library in the usual way:
```
from thewalrus import perm
```
To use it we need to pass square NumPy arrays, so we also import NumPy:
```
import numpy as np
import time
```
The library provides functions to compute permanents of real and complex matrices. The functions take the matrix as their argument; the number of threads used for the computation is determined via OpenMP.
```
size = 20
matrix = np.ones([size,size])
perm(matrix)
size = 20
matrix = np.ones([size,size], dtype=np.complex128)
perm(matrix)
```
Not surprisingly, the permanent of a matrix containing only ones equals the factorial of the dimension of the matrix, in our case $20!$.
```
from math import factorial
factorial(20)
```
### Benchmarking the performance of the code
For sizes $n=1,\dots,28$ we will generate random unitary matrices and measure the (average) amount of time it takes to calculate their permanent. The number of samples for each size will be geometrically distributed, with 1000 samples for size $n=1$ and 10 samples for $n=28$. The unitaries will be random Haar distributed.
```
a0 = 1000.
anm1 = 10.
n = 28
r = (anm1/a0)**(1./(n-1))
nreps = [(int)(a0*(r**((i)))) for i in range(n)]
nreps
```
The following function generates random Haar-distributed unitaries of dimension $n$:
```
from scipy.linalg import qr
def haar_measure(n):
    '''A random matrix distributed with Haar measure.
    See https://arxiv.org/abs/math-ph/0609050,
    "How to generate random matrices from the classical compact groups"
    by Francesco Mezzadri.'''
    # scipy's top-level `diagonal` and `randn` have been removed; use numpy instead
    z = (np.random.randn(n,n) + 1j*np.random.randn(n,n))/np.sqrt(2.0)
    q,r = qr(z)
    d = np.diagonal(r)
    ph = d/np.abs(d)
    q = np.multiply(q,ph,q)
    return q
```
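A quick check (added here, not in the original) that the construction indeed yields a unitary matrix, i.e. $q q^\dagger = I$ — using NumPy's QR so the snippet is self-contained:

```python
import numpy as np

np.random.seed(0)  # for reproducibility
z = (np.random.randn(5, 5) + 1j*np.random.randn(5, 5)) / np.sqrt(2.0)
q, r = np.linalg.qr(z)
d = np.diagonal(r)
q = q * (d / np.abs(d))  # fix the column phases, as in haar_measure above

# a unitary matrix satisfies q @ q^H == I
print(np.allclose(q @ q.conj().T, np.eye(5)))
```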
Now let's benchmark how the calculation scales with matrix size:
```
times = np.empty(n)
for ind, reps in enumerate(nreps):
#print(ind+1,reps)
start = time.time()
for i in range(reps):
size = ind+1
nth = 1
matrix = haar_measure(size)
res = perm(matrix)
end = time.time()
times[ind] = (end - start)/reps
print(ind+1, times[ind])
```
We can now plot the (average) time it takes to calculate the permanent vs. the size of the matrix:
```
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats=['svg']
plt.semilogy(np.arange(1,n+1),times,"+")
plt.xlabel(r"Matrix size $n$")
plt.ylabel(r"Time in seconds for single thread")
```
We can also fit to the theoretical scaling of $ c n 2^n$ and use it to extrapolate for larger sizes:
```
def fit(n,c):
return c*n*2**n
from scipy.optimize import curve_fit
popt, pcov = curve_fit(fit, np.arange(1,n+1)[15:-1],times[15:-1])
```
The scaling prefactor is
```
popt[0]
```
And we can use it to extrapolate the time it takes to calculate permanents of bigger dimensions:
```
flags = [3600,3600*24*7, 3600*24*365, 3600*24*365*1000]
labels = ["1 hour", "1 week", "1 year", "1000 years"]
plt.semilogy(np.arange(1,n+1), times, "+", np.arange(1,61), fit(np.arange(1,61),popt[0]))
plt.xlabel(r"Matrix size $n$")
plt.ylabel(r"Time in seconds for single thread")
plt.hlines(flags,0,60,label="1 hr",linestyles=u'dotted')
for i in range(len(flags)):
plt.text(0,2*flags[i], labels[i])
```
The specs of the computer on which this benchmark was performed are:
```
!cat /proc/cpuinfo|head -19
```
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import all the necessary files!
import os
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
# Your Code Here
pre_trained_model = InceptionV3(input_shape=(150,150,3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
# Your Code Here
layer.trainable = False
# Print the model summary
pre_trained_model.summary()
# Expected Output is extremely large, but should end with:
#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
#__________________________________________________________________________________________________
#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
#__________________________________________________________________________________________________
#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
#__________________________________________________________________________________________________
#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
#__________________________________________________________________________________________________
#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
#__________________________________________________________________________________________________
#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
#==================================================================================================
#Total params: 21,802,784
#Trainable params: 0
#Non-trainable params: 21,802,784
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output# Your Code Here
# Expected Output:
# ('last layer output shape: ', (None, 7, 7, 768))
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # the model is compiled with metrics=['acc'], so the log key is 'acc'
        if logs.get('acc') is not None and logs.get('acc') > 0.999:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.model.stop_training = True
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense (1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['acc'])
model.summary()
# Expected output will be large. Last few lines should be:
# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0]
# activation_251[0][0]
# activation_256[0][0]
# activation_257[0][0]
# __________________________________________________________________________________________________
# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0]
# __________________________________________________________________________________________________
# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0]
# __________________________________________________________________________________________________
# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0]
# __________________________________________________________________________________________________
# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0]
# ==================================================================================================
# Total params: 47,512,481
# Trainable params: 38,537,217
# Non-trainable params: 8,975,264
# Get the Horse or Human dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip
# Get the Horse or Human Validation dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
local_zip = '//tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = '//tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
train_horses_dir = "/tmp/training/horses/" # Your Code Here
train_humans_dir = "/tmp/training/humans" # Your Code Here
validation_horses_dir = "/tmp/validation/horses" # Your Code Here
validation_humans_dir = "/tmp/validation/humans" # Your Code Here
train_horses_fnames = len(os.listdir(train_horses_dir))
train_humans_fnames = len(os.listdir(train_humans_dir)) # Your Code Here
validation_horses_fnames = len(os.listdir(validation_horses_dir)) # Your Code Here
validation_humans_fnames = len(os.listdir(validation_humans_dir)) # Your Code Here
print(train_horses_fnames)
print(train_humans_fnames)
print(validation_horses_fnames)
print(validation_humans_fnames)
# Expected Output:
# 500
# 527
# 128
# 128
# Define our example directories and files
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255.)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size=20,
class_mode='binary',
target_size=(150, 150))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(validation_dir,
batch_size=20,
class_mode = 'binary',
target_size = (150, 150))
# Expected Output:
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take less than 100 epochs)
callbacks = myCallback()# Your Code Here
history = model.fit_generator(train_generator,
epochs=3,
verbose=1,
validation_data=validation_generator,
callbacks=[callbacks])
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
```
# BUSINESS UNDERSTANDING
# DATA UNDERSTANDING
### Collecting The Sonic Features
Collecting implicitly labeled songs from playlists such as 'top 100 country songs'. The experiment can be rerun with different genres.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
plt.style.use('seaborn')
from genres import collect_genre_features
%load_ext autoreload
%autoreload 2
genres = {
'country': 'https://www.youtube.com/playlist?list=PL3oW2tjiIxvQW6c-4Iry8Bpp3QId40S5S',
'jazz': 'https://www.youtube.com/playlist?list=PL8F6B0753B2CCA128',
'hip_hop': 'https://www.youtube.com/playlist?list=PLAPo1R_GVX4IZGbDvUH60bOwIOnZplZzM',
'classical': 'https://www.youtube.com/playlist?list=PLRb-5mC4V_Lop8KLXqSqMv4_mqw5M9jjW',
'metal': 'https://www.youtube.com/playlist?list=PLfY-m4YMsF-OM1zG80pMguej_Ufm8t0VC',
'electronic': 'https://www.youtube.com/playlist?list=PLDDAxmBan0BKeIxuYWjMPBWGXDqNRaW5S'
}
# collect_genre_features(genres) # Started 8:55 done around 11:28
df = pd.read_json('data/genre_features.json', lines=True)
df.head()
```
Each row contains the sonic features for a unique 10-second audio portion of a song's video.
If a song is longer than 4 minutes, we only have the first 4 minutes of it.
There may be statistical noise in the form of cinematic intros and dialogue.
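How `collect_genre_features` splits the audio is not shown, so this is an assumed sketch: fixed-length 10-second windows can be obtained by reshaping the sample array into equal chunks (the sample rate is a guess):

```python
import numpy as np

sr = 22050                          # assumed sample rate [Hz]
window_s = 10                       # 10-second windows, as described above
signal = np.random.randn(sr * 45)   # a fake 45-second "song"

n_win = len(signal) // (sr * window_s)  # number of complete windows
windows = signal[:n_win * sr * window_s].reshape(n_win, sr * window_s)
print(windows.shape)  # the trailing partial window is dropped
```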
```
df.info()
df.describe()
```
Tempo and Beats columns both have a minimum of 0.
```
df[df.tempo == 0]
display(df[df.song.str.contains('Her World')].head(2))
df[df.song.str.contains('Her World')].tail(2)
df[df.song.str.contains('Marry Me')].tail(3)
```
These are intros and outros.
A case could be made to:
1. drop them as statistical noise in the tempo and beats columns.
2. keep them as relevant audio in the spectral features.
3. replace the 0s with the average tempo of the rest of the song.
As this is a first iteration, we will include the rows with 0s unaltered.
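Option 3 could later be implemented with a grouped transform — a sketch on toy data (the column names match the real frame; the values are made up):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({
    'song':  ['a', 'a', 'a', 'b', 'b'],
    'tempo': [0.0, 120.0, 124.0, 0.0, 90.0],
})

# per-song mean tempo over the non-zero windows, broadcast back to every row
song_mean = (toy['tempo']
             .replace(0, np.nan)
             .groupby(toy['song'])
             .transform('mean'))
toy['tempo'] = toy['tempo'].where(toy['tempo'] > 0, song_mean)
print(toy['tempo'].tolist())  # [122.0, 120.0, 124.0, 90.0, 90.0]
```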
### Examine the distribution of data points among genres.
```
df.groupby('genre').song.count()
df.groupby('genre').song.nunique()
```
The dataset is roughly balanced among genres
### Examine the distribution of features among genres
```
c_options = ['darkorange', 'green', 'deepskyblue', 'mediumblue', 'black', 'deeppink']
colors = {k:v for k, v in zip(genres.keys(), c_options)}
colors
cols = ['chroma_stft', 'spec_cent', 'mfcc1', 'mfcc3']
for g1, g2 in itertools.combinations(df.genre.unique(), 2):
plt.figure(figsize=(18,2))
for i, col in enumerate(cols, start=1):
plt.subplot(1, len(cols), i)
s1 = df.loc[df.genre == g1, col]
s2 = df.loc[df.genre == g2, col]
sns.distplot(s1, label=g1, color=colors[g1])
sns.distplot(s2, label=g2, color=colors[g2])
plt.title(col)
plt.legend()
plt.show()
```
# DATA PREPARATION
### Train Test Split
```
from sklearn.model_selection import train_test_split
y = df['genre']
X = df.drop(['genre', 'song'], axis=1)
X_work, X_holdout, y_work, y_holdout = train_test_split(X, y, test_size=0.2, random_state=111)
X_train, X_test, y_train, y_test = train_test_split(X_work, y_work, test_size=0.2, random_state=111)
y_train.value_counts()
```
The training set is well balanced, so there should be no need for class weighting.
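If a later iteration did need weighting, balanced class weights follow the usual `n_samples / (n_classes * n_c)` heuristic (the one scikit-learn's `class_weight='balanced'` uses) — a sketch with made-up counts:

```python
# hypothetical per-class sample counts
counts = {'country': 900, 'jazz': 850, 'hip_hop': 920}
n_samples = sum(counts.values())
n_classes = len(counts)

# rarer classes get proportionally larger weights
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(weights)
```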
### Data Transformation
- For non-tree models, numerical scaling is required.
- Power transforming will also be tested.
- PCA will be tested for predictive improvements.
All of these will be added into pipelines.
# __MODELING__
```
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.metrics import log_loss, jaccard_score
```
#### Out of the box (Non Grid Search)
- Logistic Regression
- SVM
- Random Forest
- GBTrees
```
labels = list(genres.keys())
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
logit_model = LogisticRegression(solver='lbfgs', multi_class='ovr', n_jobs=-1)
logreg = Pipeline([
('scaler', StandardScaler()),
('model', logit_model)
])
logreg.fit(X_train, y_train)
pred_probas = logreg.predict_proba(X_test)
preds = logreg.predict(X_test)
```
#### Log Loss - Logistic Regression
```
log_loss(y_test, pred_probas, labels=labels)
```
#### Jaccard Score - Logistic Regression
```
jaccard_score(y_test, preds, average='macro').round(5)
print(list(zip(labels, jaccard_score(y_test, preds, average=None, labels=labels).round(5))))
```
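For multiclass labels, the per-class Jaccard score reduces to TP / (TP + FP + FN) — a small numpy sketch reproducing it by hand:

```python
import numpy as np

y_true = np.array(['jazz', 'jazz', 'metal', 'metal', 'jazz'])
y_pred = np.array(['jazz', 'metal', 'metal', 'metal', 'jazz'])

def jaccard_per_class(y_true, y_pred, label):
    # intersection over union of the "is this label" sets
    tp = np.sum((y_true == label) & (y_pred == label))
    fp = np.sum((y_true != label) & (y_pred == label))
    fn = np.sum((y_true == label) & (y_pred != label))
    return tp / (tp + fp + fn)

print(jaccard_per_class(y_true, y_pred, 'jazz'))   # 2/3
print(jaccard_per_class(y_true, y_pred, 'metal'))  # 2/3
```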
#### Power Transformation - Logistic Regression
```
def eval_model(y_test, preds, pred_probas, labels=labels):
    print(f"""Log Loss:
    {log_loss(y_test, pred_probas, labels=labels)}
Jaccard:
    {jaccard_score(y_test, preds, average='macro').round(5)}
    {list(zip(labels, jaccard_score(y_test, preds, average=None, labels=labels).round(5)))}""")
from genres import eval_model
logreg = Pipeline([
('scaler', PowerTransformer()),
('model', logit_model)
])
logreg.fit(X_train, y_train)
pred_probas = logreg.predict_proba(X_test)
preds = logreg.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
PowerTransformer provides a narrow improvement over StandardScaler.
#### PCA - Logistic Regression
```
logreg = Pipeline([
('scaler', PowerTransformer()),
('pca', PCA()),
('model', logit_model)
])
logreg.fit(X_train, y_train)
pred_probas = logreg.predict_proba(X_test)
preds = logreg.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
PCA does not seem to improve performance.
---
---
### Support Vector Machine
```
from sklearn.svm import SVC
svc_model = SVC(gamma='scale', probability=True)
svc = Pipeline([
('scaler', StandardScaler()),
('model', svc_model)
])
svc.fit(X_train, y_train)
pred_probas = svc.predict_proba(X_test)
preds = svc.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
Support Vectors seem to improve predictions. Will PowerTransformer and PCA have any effect?
### Support Vector Machine - Power Transform
```
svc = Pipeline([
('scaler', PowerTransformer()),
('model', svc_model)
])
svc.fit(X_train, y_train)
pred_probas = svc.predict_proba(X_test)
preds = svc.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
Again, PowerTransformer provides a modest increase in performance.
### Support Vector Machine - PCA
```
svc = Pipeline([
('pca', PCA()),
('model', svc_model)
])
svc.fit(X_train, y_train)
pred_probas = svc.predict_proba(X_test)
preds = svc.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
Again, PCA does not improve performance.
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
rfc_model = RandomForestClassifier(n_estimators=200, n_jobs=-1)
rfc_model.fit(X_train, y_train)
pred_probas = rfc_model.predict_proba(X_test)
preds = rfc_model.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
### Random Forest - PCA
```
rfc = Pipeline([
('pca', PCA()),
('model', rfc_model)
])
rfc.fit(X_train, y_train)
pred_probas = rfc.predict_proba(X_test)
preds = rfc.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
Random Forest seems to perform comparably to Support Vectors and PCA *is* helpful for the Forest model.
### Gradient Learning - Trees
```
from sklearn.ensemble import GradientBoostingClassifier # 57 and 70
gbt_model = GradientBoostingClassifier(max_depth=10)
gbt = Pipeline([
('pca', PCA()),
('model', gbt_model)
])
gbt.fit(X_train, y_train)
pred_probas = gbt.predict_proba(X_test)
preds = gbt.predict(X_test)
eval_model(y_test, preds, pred_probas, labels=labels)
```
The Gradient Boosted Trees Jaccard score is comparable to the Random Forest models but the log loss is an improvement.
### __Grid Searching__
1. Support Vector
2. Gradient Boosted Trees
```
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
scorer = make_scorer(log_loss, greater_is_better=False, needs_proba=True)
```
#### Support Vector - Grid Search
```
# svc = Pipeline([
# ('scaler', PowerTransformer()),
# ('model', svc_model)
# ])
# params = {
# 'model__C': [0.5, 1, 10],
# 'model__kernel': ['rbf', 'poly', 'sigmoid'],
# 'model__gamma': [0.1, 0.25, 0.5]
# }
# svc_grid = GridSearchCV(
# estimator=svc,
# param_grid=params,
# cv=5,
# scoring=scorer,
# n_jobs=-1
# )
# svc_grid.fit(X_train, y_train) # Takes some time to run
svc_grid.best_params_
eval_model(
y_test,
svc_grid.predict(X_test),
svc_grid.predict_proba(X_test),
labels=labels
)
```
#### Support Vector Best Performer
```python
{'model__C': 10, 'model__gamma': 0.25, 'model__kernel': 'rbf'}
Log Loss:
0.38742063263445364
Jaccard:
0.77164
[('country', 0.70789), ('jazz', 0.89671), ('hip_hop', 0.71136), ('classical', 0.84577), ('metal', 0.811), ('electronic', 0.65707)]
```
#### Gradient Boosting Classifier - Grid Search
```
# params = {
# 'model__n_estimators': [100,150,200],
# 'model__max_depth': [6,8,10,12,15],
# 'model__subsample': [1, 0.9, 0.8]
# }
# gbt_grid = GridSearchCV(
# estimator=gbt,
# param_grid=params,
# cv=5,
# scoring=scorer,
# n_jobs=-1
# )
# gbt_grid.fit(X_train, y_train)
gbt_grid.best_params_
eval_model(
y_test=y_test,
preds=gbt_grid.predict(X_test),
pred_probas=gbt_grid.predict_proba(X_test),
labels=labels
)
```
#### Gradient Boosting Best Performer
```python
{'model__max_depth': 6, 'model__n_estimators': 200, 'model__subsample': 0.8}
Log Loss:
0.4983750092724258
Jaccard:
0.72957
[('country', 0.65496), ('jazz', 0.82589), ('hip_hop', 0.71101), ('classical', 0.83127), ('metal', 0.73708), ('electronic', 0.61722)]
```
The Gradient Boosting Trees model does not improve upon the Support Vector Machine.
It will, however, scale better and therefore can be a viable option.
### Deep Learning
```
from keras.layers import Dense, Dropout
from keras.models import Sequential
from keras.optimizers import SGD
from keras.utils.np_utils import to_categorical
from keras.wrappers.scikit_learn import KerasClassifier
from keras.regularizers import l2
genre_map = {genre:i for i, genre in enumerate(genres.keys())}
int_labels = y.map(genre_map)  # avoid clobbering `labels`, the genre-name list used in the evaluation section
Y = to_categorical(int_labels)
Y_work, Y_holdout = train_test_split(Y, test_size=0.2, random_state=111)
Y_train, Y_test = train_test_split(Y_work, test_size=0.2, random_state=111)
def build_model(dropout=0.3, optimizer='adam'):
model = Sequential()
model.add(Dense(
40,
activation='relu',
input_shape=(20,),
kernel_regularizer=l2(),
bias_regularizer=l2()
))
model.add(Dropout(rate=dropout))
model.add(Dense(
20,
activation='relu',
kernel_regularizer=l2(),
bias_regularizer=l2()
))
model.add(Dropout(rate=dropout))
model.add(Dense(
6,
activation='softmax',
kernel_regularizer=l2(),
bias_regularizer=l2()
))
model.compile(
optimizer=optimizer,
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
clf = KerasClassifier(build_fn=build_model)
pipe = Pipeline([
('pca', PCA()),
# ('scaling', PowerTransformer()),
('model', clf)
])
history = pipe.fit(
X_train,
Y_train,
model__epochs=200,
model__batch_size=50,
model__validation_data=(X_test,Y_test)
)
```
### Log Loss
```
target = y_test.map(genre_map)
preds = history.predict(X_test)
pred_probas = history.predict_proba(X_test)
log_loss(target, pred_probas)
```
### Jaccard
```
jaccard_score(target, preds, average='macro')
js = jaccard_score(target, preds, average=None, labels=[0,1,2,3,4,5])
list(zip(genres.keys(), js))
```
Deep Learning has not yet produced a robust model.
# EVALUATION
Holdout Time!
```
from sklearn.metrics import confusion_matrix
svc_model = SVC(
C=10,
gamma=0.25,
probability=True,
)
svc = Pipeline([
('scaling', PowerTransformer()),
('model', svc_model)
])
svc.fit(X_work, y_work)
pred_probas = svc.predict_proba(X_holdout)
preds = svc.predict(X_holdout)
eval_model(y_holdout, preds, pred_probas, labels)
```
#### Confusion Matrix
```
cm = confusion_matrix(y_holdout, preds, labels=labels)
ax = sns.heatmap(cm, cmap='gnuplot_r')
ax.set_ylim(6,0)
ax.set_ylabel('True Label', labelpad=20)
ax.set_xlabel('Predicted Label', labelpad=30)
ax.set_xticklabels(labels, rotation=20)
ax.set_yticklabels(labels, rotation=20)
thresh = cm.max()/2
for i,j in itertools.product(range(6),range(6)):
val = cm[i,j]
plt.text(
x=j+0.35,
y=i+0.5,
s=val,
color='white' if val>thresh else 'black',
size=15
)
plt.title(
'Genre Confusion Matrix',
pad=20,
fontdict={'fontweight': 'bold', 'fontsize': 28}
);
```
### How did each category do?
Percent of true labels depicted in Confusion Matrix
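`percentify_cm` lives in the project's `genres` module and isn't shown; a plausible implementation (an assumption, not the project's actual code) row-normalizes the confusion matrix into percentages of each true label:

```python
import numpy as np

def percentify_cm(cm):
    """Convert each row of a confusion matrix to percent of that true label."""
    row_sums = cm.sum(axis=1, keepdims=True)
    return np.round(100 * cm / row_sums, 1)

cm = np.array([[8, 2],
               [1, 9]])
print(percentify_cm(cm))
```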
```
from genres import percentify_cm
pcm = percentify_cm(cm)
ax = sns.heatmap(pcm, cmap='gnuplot_r')
ax.set_ylim(6,0)
ax.set_ylabel('True Label', labelpad=20)
ax.set_xlabel('Predicted Label', labelpad=30)
ax.set_xticklabels(labels, rotation=20)
ax.set_yticklabels(labels, rotation=20)
thresh = pcm.max()/2
for i,j in itertools.product(range(6),range(6)):
val = pcm[i,j]
plt.text(
x=j+0.35,
y=i+0.5,
s=val,
color='white' if val>thresh else 'black',
size=15
)
plt.title(
'Confusion Matrix: Percent of True Label',
pad=20,
fontdict={'fontweight': 'bold', 'fontsize': 22}
);
```
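`percentify_cm` comes from the project's own `genres` module, so its exact implementation isn't shown here; a plausible row-normalizing equivalent (an assumption, not the project's code) would be:

```python
import numpy as np

def percentify_cm_sketch(cm):
    """Convert confusion-matrix counts to percent of each true label (row)."""
    cm = np.asarray(cm, dtype=float)
    row_sums = cm.sum(axis=1, keepdims=True)
    # Guard against empty rows to avoid division by zero.
    row_sums[row_sums == 0] = 1.0
    return np.round(100 * cm / row_sums, 1)

cm = np.array([[8, 2],
               [1, 9]])
pct = percentify_cm_sketch(cm)
print(pct)
```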
# DEPLOYMENT
```
import pickle
with open('genre_clf.pkl', 'wb') as f:
pickle.dump(svc, f)
svc.classes_
pred_probas.sum(axis=0)/ pred_probas.sum()
y_holdout
from genres import classify_rows
classify_rows(df[:10])
from genres import classify
original_fp = '/Users/patrickfuller/Music/iTunes/iTunes Media/Music/Unknown Artist/Unknown Album/Waiting Dare.mp3'
classify(m4a_fp=original_fp)
moonlight_sonata_url = 'https://www.youtube.com/watch?v=4591dCHe_sE'
classify(url=moonlight_sonata_url)
luigi_url = 'https://www.youtube.com/watch?v=EHQ43ObPMHQ'
classify(url=luigi_url)
```
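Loading the serialized classifier back later is the mirror image of the `pickle.dump` above. A minimal round-trip sketch (using an in-memory buffer and a stand-in dict rather than the real fitted pipeline):

```python
import io
import pickle

# A plain dict stands in for the fitted pipeline written to genre_clf.pkl.
model_stub = {'classes_': ['classical', 'electronic', 'rock']}
buf = io.BytesIO()
pickle.dump(model_stub, buf)
buf.seek(0)
restored = pickle.load(buf)
print(restored['classes_'])
```

In production you would `pickle.load` from the `genre_clf.pkl` file, with the same library versions installed that produced it.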
```
from keras import applications
# python image_scraper.py "yellow labrador retriever" --count 500 --label labrador
from keras.preprocessing.image import ImageDataGenerator
from keras_tqdm import TQDMNotebookCallback
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import (Dropout, Flatten, Dense, Conv2D,
Activation, MaxPooling2D)
from keras.applications.inception_v3 import InceptionV3
from keras.layers import GlobalAveragePooling2D
from keras import backend as K
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
import os, glob
from tqdm import tqdm
from collections import Counter
import pandas as pd
from sklearn.utils import shuffle
import numpy as np
import shutil
more_im = glob.glob("collie_lab/*/*.jpg")
more_im = shuffle(more_im)
collie = [x for x in more_im if "coll" in x.split("\\")[-2]]
lab = [x for x in shuffle(more_im) if "lab" in x.split("\\")[-2]]
print(len(collie))
print(len(lab))
for_labeling = collie + lab
for_labeling = shuffle(for_labeling)
Counter([x.split("\\")[-2] for x in more_im]).most_common()
import shutil
from tqdm import tqdm
%mkdir collie_lab_train
%mkdir collie_lab_valid
%mkdir collie_lab_train\\collie
%mkdir collie_lab_train\\lab
%mkdir collie_lab_valid\\collie
%mkdir collie_lab_valid\\lab
for index, image in tqdm(enumerate(for_labeling)):
if index < 1000:
label = image.split("\\")[-2]
image_name = image.split("\\")[-1]
if "coll" in label:
shutil.copy(image, 'collie_lab_train\\collie\\{}'.format(image_name))
if "lab" in label:
shutil.copy(image, 'collie_lab_train\\lab\\{}'.format(image_name))
if index >= 1000:
label = image.split("\\")[-2]
image_name = image.split("\\")[-1]
if "coll" in label:
shutil.copy(image, 'collie_lab_valid\\collie\\{}'.format(image_name))
if "lab" in label:
shutil.copy(image, 'collie_lab_valid\\lab\\{}'.format(image_name))
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=False)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
'collie_lab_train/',
target_size=(150, 150),
batch_size=32,
shuffle=True,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
'collie_lab_valid/',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3)))
model.add(Activation('relu')) #tanh
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu')) #tanh
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(96))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1)) # binary
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit_generator(
train_generator,
steps_per_epoch= 3000 // 32, # give me more data
epochs=30,
callbacks=[TQDMNotebookCallback()],
verbose=0,
validation_data=validation_generator,
validation_steps= 300 // 32)
```
```
import os
path_parent = os.path.dirname(os.getcwd())
os.chdir(path_parent)
from data_utils.utils import get_X_y_from_data, data_dict_from_df_tables
from ggmodel_dev.models.landuse.BE2 import model_dictionnary
import pandas as pd
import numpy as np
from ggmodel_dev.graphmodel import GraphModel, concatenate_graph_specs
from ggmodel_dev.validation import score_model, plot_diagnostic, score_model_dictionnary
import os
os.environ["GGGI_db_username"] = 'postgres'
os.environ["GGGI_db_password"] = '<redacted>'  # never commit real credentials to a notebook
os.environ['GGGI_db_endpoint'] ='database-gggi-1.cg4tog4qy0py.ap-northeast-2.rds.amazonaws.com'
os.environ['GGGI_db_port'] = '5432'
from ggmodel_dev.database import get_variables_df
def prepare_TAI_data():
data_dict = get_variables_df(['ANPi', 'AYi', 'FPi', 'PTTAi', 'TAi'], exclude_tables=['variabledata', 'foodbalancesheet'])
data_dict['livestock'] = data_dict['livestock'].drop(columns=['Description_y', 'Unit_y']).rename(columns={'Description_x': 'Description', 'Unit_x': 'Unit'})
data_dict['foodbalancesheet_new'] = data_dict['foodbalancesheet_new'].query("group == 'animal'")
return data_dict
data_dict = prepare_TAI_data()
data_dict['emissions']
test = data_dict_from_df_tables([data_dict['livestock']])
data_dict['livestock'].query("Variable == 'FPi'")
test = data_dict_from_df_tables([data_dict['livestock']])
test['ANPi'] = test['ANPi'].droplevel(['FBS_item', 'table']).dropna()
test['AYi'] = test['AYi'].droplevel(['FBS_item', 'table']).dropna()
test['FPi'] = test['FPi'].droplevel(['FBS_item', 'table']).dropna()
test['PTTAi'] = test['PTTAi'].droplevel(['FBS_item', 'table', 'Item']).rename_axis(index={"emi_item": 'Item'})
test['TAi'] = data_dict['emissions'].set_index(['ISO', 'Year', 'Item'])['Value']
TAi_nodes = {'FPi': {'type': 'input',
'unit': '1000 tonnes',
'name': 'Food production per food group'},
'AYi': {'type': 'input',
'unit': 'tonnes/head',
'name': 'Vector of animal yields'},
'ANPi': {'type': 'variable',
'unit': 'head',
'name': 'Animals needed for production per animal type',
'computation': lambda FPi, AYi, **kwargs: 1e3 * FPi / AYi
},
'PTTAi': {'type': 'parameter',
'unit': '1',
'name': 'Production to animal population ratio',
},
'TAi': {'type': 'output',
'unit': 'head',
'name': 'Animal population',
'computation': lambda ANPi, PTTAi, **kwargs: PTTAi * ANPi.groupby(level=['ISO', 'Year', 'emi_item']).sum().rename_axis(index={"emi_item": 'Item'})
},
}
model = GraphModel(TAi_nodes)
res = model.run(test)
scores = score_model(model, test)
scores['score_by_Variable']
```
# Recommendations with IBM
In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
## Table of Contents
I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
```
!pip install progressbar
# Import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import progressbar
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
import project_tests as t
import pickle
# nltk
import nltk
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
nltk.download(['punkt', 'wordnet', 'stopwords',
'averaged_perceptron_tagger'])
# Pretty display for notebooks
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
# Show df_content to get an idea of the data
df_content.head()
```
### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
```
# Check null values in df
df.isnull().sum()
# Check null values in df_content
df_content.isnull().sum()
# Count user interaction
user_interaction = df.email.value_counts(dropna=False)
# Distribution of how many articles a user interacts with in the dataset
plt.figure()
plt.hist(user_interaction.values, bins=100)
plt.title('Distribution of how many articles a user \
interacts with in the dataset')
plt.xlabel('interactions')
plt.ylabel('count')
plt.show()
most_articles = df.article_id.value_counts(dropna=False)
cum_user = np.cumsum(most_articles.values)
# 50% of individuals interact with ____ number of articles or fewer.
median_val = len(cum_user[cum_user <= len(user_interaction)/2])
# The maximum number of user-article interactions by any 1 user is ______.
max_views_by_user = user_interaction.iloc[0]
```
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
```
# Find and explore duplicate articles
article_count = df_content.article_id.value_counts(dropna=False)
dup_articles = article_count[article_count > 1]
print('number of duplicate articles is: ', len(dup_articles))
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset=['article_id'], inplace=True)
```
`3.` Use the cells below to find:
**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
```
# The number of unique articles that have at least one interaction
unique_articles = len(most_articles)
# The number of unique articles on the IBM platform
total_articles = df_content.shape[0]
# The number of unique users
unique_users = len(user_interaction)-1
# The number of user-article interactions
user_article_interactions = len(df)
```
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
```
# The most viewed article in the dataset
# as a string with one value following the decimal
most_viewed_article_id = str(most_articles.index[0])
# The most viewed article in the dataset was viewed how many times?
max_views = most_articles.iloc[0]
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
```
### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
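The popularity idea above can be sketched on a toy interaction log (the frame below is illustrative, with the same column names as `df`):

```python
import pandas as pd

# Toy interaction log: each row is one user-article interaction.
toy = pd.DataFrame({
    'user_id':    [1, 1, 2, 2, 3, 3],
    'article_id': [10, 20, 10, 30, 10, 20],
})
# Popularity is simply how often each article appears in the log.
popularity = toy['article_id'].value_counts()
top_2 = list(popularity.index[:2])
print(top_2)
```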
`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
```
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
# Get articles ids
user_by_article = df.groupby(['user_id',
'article_id'])['title'].count().unstack()
articles_interaction = user_by_article.sum().sort_values(ascending=False)
articles_index = articles_interaction.iloc[:n].index
# Get articles titles
df_art_title = df.drop_duplicates(subset=['article_id'])[['article_id',
'title']]
df_art_title.index = df_art_title.article_id
# get list of the top n article titles
top_articles = list(df_art_title.loc[articles_index].title)
return top_articles
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
user_by_article = df.groupby(['user_id',
'article_id'])['title'].count().unstack()
articles_interaction = user_by_article.sum().sort_values(ascending=False)
top_articles = list(articles_interaction.iloc[:n].index)
return top_articles # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```
### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
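The 1/0 encoding described above can be sketched on a toy frame (column names mirror `df`; note that repeat interactions still map to a single 1):

```python
import pandas as pd

toy = pd.DataFrame({
    'user_id':    [1, 1, 1, 2, 2],
    'article_id': [10, 10, 20, 20, 30],
    'title':      ['a', 'a', 'b', 'b', 'c'],
})
# Count interactions per (user, article), then collapse any count to 1.
ui = (toy.groupby(['user_id', 'article_id'])['title']
         .count().unstack().notnull().astype(int))
print(ui)
```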
```
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns
with 1 values where a user interacted with an article and a 0 otherwise
'''
# Fill in the function here
user_item = df.groupby(['user_id',
'article_id'])['title'].agg(lambda x: 1).unstack()
user_item.fillna(0, inplace=True)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
# save the matrix in a pickle file
user_item.to_pickle('user_item_matrix.p')
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
```
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
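For binary vectors the dot product simply counts co-viewed articles, which is why it works as a similarity measure here:

```python
import numpy as np

# Binary interaction vectors for three users over five articles.
u1 = np.array([1, 1, 0, 1, 0])
u2 = np.array([1, 1, 0, 0, 0])   # shares two articles with u1
u3 = np.array([0, 0, 1, 0, 1])   # shares none with u1
# For 0/1 vectors, the dot product is the number of co-viewed articles.
print(u1 @ u2, u1 @ u3)
```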
```
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users
(largest dot product users) are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered list of user ids, from most to least similar
'''
# Compute similarity of each user to the provided user
user_vector = np.array(user_item.loc[user_id]).reshape(-1, 1)
Matrix_item = user_item.drop(user_id)
similarity = np.dot(Matrix_item.values, user_vector)
# sort by similarity
df_smly = pd.DataFrame({'user_id': Matrix_item.index,
'similarity': similarity.flatten()})
df_smly.sort_values(by=['similarity'], inplace=True, ascending=False)
# Create list of just the ids
most_similar_users = list(df_smly.user_id)
return most_similar_users
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
```
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids (str)
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list
of article ids (this is identified by the title column)
'''
article_ids = [float(x) for x in article_ids]
df_2 = df.drop_duplicates(subset=['article_id'])
df_2.set_index('article_id', inplace=True)
article_names = list(df_2.loc[article_ids]['title'])
return article_names
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with
the list of article ids
Description:
Provides a list of the article_ids and article titles that have
been seen by a user
'''
row_user = user_item.loc[user_id]
article_ids = list(row_user[row_user > 0].index)
article_ids = [str(x) for x in article_ids]
article_names = get_article_names(article_ids)
return article_ids, article_names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and
provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# Get user articles
article_ids, _ = get_user_articles(user_id)
# Find similar users
most_similar_users = find_similar_users(user_id)
# How many users for progress bar
n_users = len(most_similar_users)
recs = []
# Create the progressbar
cnter = 0
bar = progressbar.ProgressBar(maxval=n_users+1,
widgets=[progressbar.Bar('=', '[', ']'),
' ', progressbar.Percentage()])
bar.start()
for user in most_similar_users:
# Update the progress bar
cnter += 1
bar.update(cnter)
# Get user articles
ids, _ = get_user_articles(user)
article_not_seen = np.setdiff1d(np.array(ids), np.array(article_ids))
article_not_recs = np.setdiff1d(article_not_seen, np.array(recs))
recs.extend(list(article_not_recs))
# Stop once we have gathered more than m recommendations
if len(recs) > m:
break
bar.finish()
recs = recs[:m]  # trim to exactly m recommendations
return recs
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
```
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
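The tie-breaking rule described above amounts to a two-key descending sort, which can be sketched on a toy neighbors frame:

```python
import pandas as pd

neighbors = pd.DataFrame({
    'neighbor_id':      [4, 7, 9],
    'similarity':       [3, 3, 5],
    'num_interactions': [12, 40, 8],
})
# Highest similarity first; ties broken by total interactions.
ranked = neighbors.sort_values(by=['similarity', 'num_interactions'],
                               ascending=False)
print(list(ranked.neighbor_id))
```

User 9 wins on similarity; users 7 and 4 tie, so 7 ranks first on interactions.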
```
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each
user to the provided user_id
num_interactions - the number of articles viewed by the user
Other Details - sort the neighbors_df by the similarity and then by number
of interactions where highest of each is higher in the dataframe
'''
# similarity
user_vector = np.array(user_item.loc[user_id]).reshape(-1, 1)
Matrix_item = user_item.drop(user_id)
similarity = np.dot(Matrix_item.values, user_vector)
# sort by similarity
df_smly = pd.DataFrame({'neighbor_id': Matrix_item.index,
'similarity': similarity.flatten()})
# Number of interaction
count_inter = df.groupby('user_id')['article_id'].count()
df_inter = pd.DataFrame({'neighbor_id': count_inter.index,
'num_interactions': count_inter.values})
# Merging the two dataframes
neighbors_df = df_smly.merge(df_inter)
# sort the neighbors_df
neighbors_df.sort_values(by=['similarity', 'num_interactions'],
inplace=True, ascending=False)
return neighbors_df
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and
provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose the articles with the most total interactions
before choosing those with fewer total interactions.
'''
# get user articles
article_ids, _ = get_user_articles(user_id)
# find similar users
most_similar_users = list(get_top_sorted_users(user_id).neighbor_id)
# How many users for progress bar
n_users = len(most_similar_users)
recs = []
# Create the progressbar
cnter = 0
bar = progressbar.ProgressBar(maxval=n_users+1,
widgets=[progressbar.Bar('=', '[', ']'), ' ',
progressbar.Percentage()])
bar.start()
for user in most_similar_users:
# Update the progress bar
cnter += 1
bar.update(cnter)
# get user articles
ids, _ = get_user_articles(user)
article_not_seen = np.setdiff1d(np.array(ids), np.array(article_ids))
article_not_recs = np.setdiff1d(article_not_seen, np.array(recs))
recs.extend(list(article_not_recs))
# Stop once we have gathered more than m recommendations
if len(recs) > m:
break
bar.finish()
recs = recs[:m]  # trim to exactly m recommendations
rec_names = get_article_names(recs)
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
```
# Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).neighbor_id.values[0] # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).neighbor_id.values[9] # Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
```
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
**Provide your response here.**
For a new user, we can use the `get_top_articles` function to suggest the most popular articles.
We can improve recommendations for new users with knowledge-based recommendations: ask the user what types of articles they are interested in, then search the data for articles that meet those specifications.
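A minimal sketch of such a knowledge-based filter (the `keyword_recs` helper and the toy catalogue below are hypothetical, not part of the project):

```python
import pandas as pd

# Hypothetical catalogue; in the real data this would be df_content.
articles = pd.DataFrame({
    'article_id': [1, 2, 3],
    'title': ['intro to python streaming',
              'deep learning for audio',
              'python data refinery basics'],
})

def keyword_recs(keyword, catalogue, n=10):
    """Recommend articles whose title mentions a user-supplied keyword."""
    mask = catalogue['title'].str.contains(keyword, case=False)
    return list(catalogue.loc[mask, 'article_id'])[:n]

print(keyword_recs('python', articles))
```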
`7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
```
new_user = '0.0'
# List of the top 10 article ids you would give to
new_user_recs = [str(x) for x in get_top_article_ids(10)]
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
```
### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
def make_content_recs():
'''
INPUT:
OUTPUT:
'''
```
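Since `make_content_recs` is left unimplemented above, here is one possible sketch of a content-based ranking using TF-IDF over article names (an illustration with made-up names, not the project's data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical article names standing in for doc_full_name.
doc_names = ['predicting churn with watson',
             'churn analysis in python',
             'image classification with keras']
tfidf = TfidfVectorizer(stop_words='english')
X = tfidf.fit_transform(doc_names)
# Similarity of article 0 to every article, itself included.
sims = cosine_similarity(X[0], X).ravel()
# Most similar article other than article 0 itself.
best = sims.argsort()[::-1][1]
print(best)
```

Article 1 shares the term "churn" with article 0, so it ranks first; article 2 shares nothing and scores zero.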
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
**Write an explanation of your content based recommendation system here.**
`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
# make recommendations for a brand new user
# make a recommendations for a user who only has interacted with article id '1427.0'
```
### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
```
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
```
# Perform SVD on the User-Item Matrix Here
# use the built in to get the three matrices
u, s, vt = np.linalg.svd(user_item_matrix)
s.shape, u.shape, vt.shape
```
**Provide your response here.**
The lesson provides a data structure with numeric values representing ratings and nulls representing non-interaction. This is not a matrix in the linear algebra sense and cannot be operated on directly (e.g. by SVD). Funk SVD would have to be used to provide a numeric approximation.
The matrix in this exercise contains binary values with a zero representing non-interaction and a one representing interaction. Although not invertible, this matrix can be factored by SVD.
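The contrast can be demonstrated on a toy example (hypothetical values, a minimal sketch): `np.linalg.svd` factors a fully observed binary matrix cleanly, but fails once missing entries are left as NaN.

```python
import numpy as np

# Fully observed binary interaction matrix: classic SVD factors it exactly.
binary = np.array([[1., 0., 1.],
                   [0., 1., 0.],
                   [1., 1., 0.]])
u, s, vt = np.linalg.svd(binary)
reconstructed = u @ np.diag(s) @ vt
print(np.allclose(reconstructed, binary))  # True

# A ratings-style matrix with nulls left as NaN breaks classic SVD.
ratings = np.array([[5., np.nan, 3.],
                    [np.nan, 4., np.nan],
                    [2., 1., np.nan]])
try:
    _, s_nan, _ = np.linalg.svd(ratings)
    failed = np.isnan(s_nan).any()   # some NumPy versions return NaN singular values
except np.linalg.LinAlgError:
    failed = True                    # others raise "SVD did not converge"
print(failed)  # True
```

This is why the binary user-item matrix here can go straight into `np.linalg.svd`, while a ratings matrix with nulls would need an approach like Funk SVD.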
`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
```
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
```
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many articles can we make predictions for in the test set?
* How many articles are we not able to make predictions for because of the cold start problem?
```
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles
for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for
each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
# user-item matrix of the training dataframe
user_item_train = create_user_item_matrix(df_train)
# user-item matrix of the testing dataframe
user_item_test = create_user_item_matrix(df_test)
test_idx = list(user_item_test.index) # test user ids
test_arts = list(user_item_test.columns) # test article ids
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, \
test_arts = create_test_and_train_user_item(df_train, df_test)
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
'How many movies can we make predictions for in the test set?': b,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?':d
}
t.sol_4_test(sol_4_dict)
```
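The hardcoded counts above follow from comparing the train and test ids. As a minimal sketch with hypothetical ids (not the real data), the four quantities reduce to set intersections and differences:

```python
# Hypothetical train/test id sets standing in for the real matrices' indices.
train_users = {1, 2, 3, 4}
test_users = {3, 4, 5, 6}
train_arts = {'a', 'b', 'c'}
test_arts = {'b', 'c', 'd'}

# Users/articles seen during training can be predicted for;
# the rest suffer from the cold start problem.
predictable_users = test_users & train_users
cold_start_users = test_users - train_users
predictable_arts = test_arts & train_arts
cold_start_arts = test_arts - train_arts

print(len(predictable_users), len(cold_start_users))  # 2 2
print(len(predictable_arts), len(cold_start_arts))    # 2 1
```

Applying the same intersections to the real `user_item_train`/`user_item_test` indices and columns yields the answer values used in `sol_4_dict`.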
`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
```
# Fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train)
# Use these cells to see how well you can use the training
# decomposition to predict on test data
# Subset of rows in the user_item_test dataset that you can predict
# Rows that match the test set
test_idx = user_item_test.index
row_idxs = user_item_train.index.isin(test_idx)
u_test = u_train[row_idxs, :]
# Columns that match the test set
test_col = user_item_test.columns
col_idxs = user_item_train.columns.isin(test_col)
vt_test = vt_train[:, col_idxs]
# Test data
train_idx = user_item_train.index
row_idxs_2 = user_item_test.index.isin(train_idx)
sub_user_item_test = user_item_test.loc[row_idxs_2]
from sklearn.metrics import accuracy_score  # needed for the accuracy computations below
latent_feats = np.arange(10, 700+10, 20)
all_errs, train_errs, test_errs = [], [], []
for k in latent_feats:
# restructure with k latent features
s_train_lat, u_train_lat, vt_train_lat = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
u_test_lat, vt_test_lat = u_test[:, :k], vt_test[:k, :]
# take dot product
user_item_train_preds = np.around(np.dot(np.dot(u_train_lat, s_train_lat), vt_train_lat))
user_item_test_preds = np.around(np.dot(np.dot(u_test_lat, s_train_lat), vt_test_lat))
all_errs.append(1 - ((np.sum(user_item_test_preds)+np.sum(np.sum(sub_user_item_test)))/(sub_user_item_test.shape[0]*sub_user_item_test.shape[1])))
# compute prediction accuracy
train_errs.append(accuracy_score(user_item_train.values.flatten(), user_item_train_preds.flatten()))
test_errs.append(accuracy_score(sub_user_item_test.values.flatten(), user_item_test_preds.flatten()))
plt.figure()
plt.plot(latent_feats, all_errs, label='All Errors')
plt.plot(latent_feats, train_errs, label='Train')
plt.plot(latent_feats, test_errs, label='Test')
plt.xlabel('Number of Latent Features')
plt.ylabel('Accuracy')
plt.title('Accuracy vs. Number of Latent Features')
plt.legend()
plt.show()
```
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
**Your response here.**
The figure above shows that the overall accuracy of the model is very high. This result is misleading, however, because of the class imbalance in our data: the matrix contains far more zeros than ones.
The training accuracy increases to near 100% as the number of latent features grows, while the testing accuracy decreases. This could be due to limited variety in the datasets. A solution would be to perform cross-validation to choose the number of latent features, which would let the model see different subsets of the data.
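The effect of class imbalance on accuracy can be sketched with a hypothetical sparse matrix: a model that always predicts zero already scores close to the fraction of zeros, so raw accuracy says little about recommendation quality.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sparse interaction matrix: roughly 2% ones, like a user-item matrix.
mat = (rng.random((100, 100)) < 0.02).astype(int)

# A trivial "model" that always predicts 0 ...
all_zero_preds = np.zeros_like(mat)
# ... already achieves accuracy equal to the fraction of zeros in the data.
baseline_acc = (all_zero_preds == mat).mean()
print(baseline_acc)  # close to 0.98
```

Any useful evaluation should therefore compare against this all-zeros baseline, or better, measure impact on real user behaviour (e.g. an A/B test).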
<a id='conclusions'></a>
### Extras
Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a Flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!
## Conclusion
> Congratulations! You have reached the end of the Recommendations with IBM project!
> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
## Directions to Submit
> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
```
# Applying GrandPrix on the cell cycle single cell nCounter data of PC3 human prostate cancer
_Sumon Ahmed_, 2017, 2018
This notebooks describes how GrandPrix with informative prior over the latent space can be used to infer the cell cycle stages from the single cell nCounter data of the PC3 human prostate cancer cell line.
```
import pandas as pd
import numpy as np
from GrandPrix import GrandPrix
```
# Data description
<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102402/" target="_blank">McDavid et al. (2014)</a> assayed the expression profiles of the PC3 human prostate cancer cell line. They identified the cells in the G0/G1, S and G2/M cell cycle stages. The cells identified as G0/G1, S and G2/M have been mapped to capture times of 1, 2 and 3, respectively. Due to the additional challenge of optimizing pseudotime parameters for periodic data, the random pseudotimes with the largest log likelihood for estimating cell cycle peak time points have been used to initialize the prior.
The __McDavidtrainingData.csv__ file contains the expression profiles of the top __56__ differentially expressed genes in __361__ cells from the PC3 human prostate cancer cell line which have been used in the inference.
The __McDavidCellMeta.csv__ file contains the additional information of the data such as capture time of each cells, different initializations of pseudotimes, etc.
```
Y = pd.read_csv('../data/McDavid/McDavidtrainingData.csv', index_col=[0]).T
mData = pd.read_csv('../data/McDavid/McDavidCellMeta.csv', index_col=[0])
N, D = Y.shape
print('Time Points: %s, Genes: %s'%(N, D))
mData.head()
```
## Model with Informative prior
Capture time points have been used as the informative prior information over pseudotime. The following arguments have been passed to initialize the model.
<!--
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __latent_prior_mean__: _array-like, shape N_ x 1, _optional (default:_ __0__). > Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial mean values of the approximate posterior distribution over pseudotime.
- __latent_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial variance of the approximate posterior distribution over pseudotime.
- __kernel:__ _optional (default: RBF kernel with lengthscale and variance set to 1.0)_. Covariance function to define the mapping from the latent space to the data space in Gaussian process prior.
-->
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __latent_prior_mean__: _array-like, shape N_ x 1. Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1. Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1. Initial mean values of the approximate posterior distribution over pseudotime.
<!--
- __latent_var__: _array-like, shape N_ x 1. Initial variance of the approximate posterior distribution over pseudotime.
-->
- __kernel__: Covariance function to define the mapping from the latent space to the data space in Gaussian process prior. Here we have used the standard periodic covariance function <a href="http://www.ics.uci.edu/~welling/teaching/KernelsICS273B/gpB.pdf" terget="_blank">(MacKay, 1998)</a>, to restrict the Gaussian Process (GP) prior to periodic functions only.
- __predict__: _int_. The number of new points. The mean of the expression level and associated variance of these new data points will be predicted.
```
np.random.seed(10)
sigma_t = .5
prior_mean = mData['prior'].values[:, None]
init_mean = mData['capture.orig'].values[:, None]
X_mean = [init_mean[i, 0] + sigma_t * np.random.randn(1) for i in range(0, N)] # initialisation of latent_mean
mp = GrandPrix.fit_model(data=Y.values, n_inducing_points = 20, latent_prior_mean=prior_mean, latent_prior_var=np.square(sigma_t),
latent_mean=np.asarray(X_mean), kernel={'name':'Periodic', 'ls':5.0, 'var':1.0}, predict=100)
pseudotimes = mp[0]
posterior_var = mp[1]
mean = mp[2] # mean of predictive distribution
var = mp[3] # variance of predictive distribution
Xnew = np.linspace(min(pseudotimes), max(pseudotimes), 100)[:, None]
```
# Visualize the results
The expression profiles of some interesting genes have been plotted against the estimated pseudotime. Each point corresponds to the expression of a particular gene in a cell.
The points are coloured based on cell cycle stages according to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102402/" target="_blank" style="text-decoration:none;">McDavid et al. (2014)</a>. The circular horizontal axis (where both the first and last labels are G2/M) represents the periodicity realized by the method in pseudotime inference.
The solid black line is the posterior predicted mean of expression profiles while the grey ribbon depicts the 95% confidence interval.
The vertical dotted lines are the CycleBase peak times for the selected genes.
To see the expression profiles of a different set of genes, a list containing gene names should be passed to the function `plot_genes`.
```
selectedGenes = ['CDC6', 'MKI67', 'NUF2', 'PRR11', 'PTTG1', 'TPX2']
geneProfiles = pd.DataFrame({selectedGenes[i]: Y[selectedGenes[i]] for i in range(len(selectedGenes))})
```
## Binding gene names with predictive mean and variations
```
geneNames = Y.columns.values
posterior_mean = pd.DataFrame(mean, columns=geneNames)
posterior_var = pd.DataFrame(var, columns=geneNames)
```
## geneData description
The __"McDavid_gene.csv"__ file contains gene-specific information, such as peak time, for the top 56 differentially expressed genes.
```
geneData = pd.read_csv('../data/McDavid/McDavid_gene.csv', index_col=0).T
geneData.head()
%matplotlib inline
from utils import plot_genes
cpt = mData['capture.orig'].values
plot_genes(pseudotimes, geneProfiles, geneData, cpt, prediction=(Xnew, posterior_mean, posterior_var))
```
---
_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._
---
# The Python Programming Language: Functions
<br>
`add_numbers` is a function that takes two numbers and adds them together.
```
def add_numbers(x, y):
return x + y
add_numbers(1, 2)
```
<br>
`add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell.
```
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
```
<br>
`add_numbers` updated to take an optional flag parameter.
```
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True))
```
<br>
Assign function `add_numbers` to variable `a`.
```
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
```
<br>
# The Python Programming Language: Types and Sequences
<br>
Use `type` to return the object's type.
```
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
```
<br>
Tuples are an immutable data structure (cannot be altered).
```
x = (1, 'a', 2, 'b')
type(x)
```
<br>
Lists are a mutable data structure.
```
x = [1, 'a', 2, 'b']
type(x)
```
<br>
Use `append` to append an object to a list.
```
x.append(3.3)
print(x)
```
<br>
This is an example of how to loop through each item in the list.
```
for item in x:
print(item)
```
<br>
Or using the indexing operator:
```
i=0
while( i != len(x) ):
print(x[i])
i = i + 1
```
<br>
Use `+` to concatenate lists.
```
[1,2] + [3,4]
```
<br>
Use `*` to repeat lists.
```
[1]*3
```
<br>
Use the `in` operator to check if something is inside a list.
```
1 in [1, 2, 3]
```
<br>
Now let's look at strings. Use bracket notation to slice a string.
```
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
```
<br>
This will return the last element of the string.
```
x[-1]
```
<br>
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
```
x[-4:-2]
```
<br>
This is a slice from the beginning of the string and stopping before the 3rd element.
```
x[:3]
```
<br>
And this is a slice starting from the 4th element of the string and going all the way to the end.
```
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
```
<br>
`split` returns a list of all the words in a string, or a list split on a specific character.
```
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
```
<br>
Make sure you convert objects to strings before concatenating.
```
'Chris' + 2
'Chris' + str(2)
```
<br>
Dictionaries associate keys with values.
```
x = {'Christopher Brooks': 'brooksch@umich.edu', 'Bill Gates': 'billg@microsoft.com'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
```
<br>
Iterate over all of the keys:
```
for name in x:
print(x[name])
```
<br>
Iterate over all of the values:
```
for email in x.values():
print(email)
```
<br>
Iterate over all of the items in the list:
```
for name, email in x.items():
print(name)
print(email)
```
<br>
You can unpack a sequence into different variables:
```
x = ('Christopher', 'Brooks', 'brooksch@umich.edu')
fname, lname, email = x
fname
lname
```
<br>
Make sure the number of values you are unpacking matches the number of variables being assigned.
```
x = ('Christopher', 'Brooks', 'brooksch@umich.edu', 'Ann Arbor')
fname, lname, email = x
```
<br>
# The Python Programming Language: More on Strings
```
print('Chris' + 2)
print('Chris' + str(2))
```
<br>
Python has a built in method for convenient string formatting.
```
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
```
<br>
# Reading and Writing CSV files
<br>
Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.
* mpg : miles per gallon
* class : car classification
* cty : city mpg
* cyl : # of cylinders
* displ : engine displacement in liters
* drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd
* fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)
* hwy : highway mpg
* manufacturer : automobile manufacturer
* model : model of car
* trans : type of transmission
* year : model year
```
import csv
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
mpg[:3] # The first three dictionaries in our list.
```
<br>
`csv.DictReader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries.
```
len(mpg)
```
<br>
`keys` gives us the column names of our csv.
```
mpg[0].keys()
```
<br>
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
```
sum(float(d['cty']) for d in mpg) / len(mpg)
```
<br>
Similarly this is how to find the average hwy fuel economy across all cars.
```
sum(float(d['hwy']) for d in mpg) / len(mpg)
```
<br>
Use `set` to return the unique values for the number of cylinders the cars in our dataset have.
```
cylinders = set(d['cyl'] for d in mpg)
cylinders
```
<br>
Here's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group.
```
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
```
<br>
Use `set` to return the unique values for the class types in our dataset.
```
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
```
<br>
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
```
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
if d['class'] == t: # if the cylinder amount type matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
```
<br>
# The Python Programming Language: Dates and Times
```
import datetime as dt
import time as tm
```
<br>
`time` returns the current time in seconds since the Epoch. (January 1st, 1970)
```
tm.time()
```
<br>
Convert the timestamp to datetime.
```
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
```
<br>
Handy datetime attributes:
```
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
```
<br>
`timedelta` is a duration expressing the difference between two dates.
```
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
```
<br>
`date.today` returns the current local date.
```
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
```
<br>
# The Python Programming Language: Objects and map()
<br>
An example of a class in python:
```
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
```
<br>
Here's an example of mapping the `min` function between two lists.
```
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
```
<br>
Now let's iterate through the map object to see the values.
```
for item in cheapest:
print(item)
```
<br>
# The Python Programming Language: Lambda and List Comprehensions
<br>
Here's an example of lambda that takes in three parameters and adds the first two.
```
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
```
<br>
Let's iterate from 0 to 999 and return the even numbers.
```
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
```
<br>
Now the same thing but with list comprehension.
```
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
```
<br>
# The Python Programming Language: Numerical Python (NumPy)
```
import numpy as np
```
<br>
## Creating Arrays
Create a list and convert it to a numpy array
```
mylist = [1, 2, 3]
x = np.array(mylist)
x
```
<br>
Or just pass in a list directly
```
y = np.array([4, 5, 6])
y
```
<br>
Pass in a list of lists to create a multidimensional array.
```
m = np.array([[7, 8, 9], [10, 11, 12]])
m
```
<br>
Use the shape method to find the dimensions of the array. (rows, columns)
```
m.shape
```
<br>
`arange` returns evenly spaced values within a given interval.
```
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
```
<br>
`reshape` returns an array with the same data with a new shape.
```
n = n.reshape(3, 5) # reshape array to be 3x5
n
```
<br>
`linspace` returns evenly spaced numbers over a specified interval.
```
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
```
<br>
`resize` changes the shape and size of array in-place.
```
o.resize(3, 3)
o
```
<br>
`ones` returns a new array of given shape and type, filled with ones.
```
np.ones((3, 2))
```
<br>
`zeros` returns a new array of given shape and type, filled with zeros.
```
np.zeros((2, 3))
```
<br>
`eye` returns a 2-D array with ones on the diagonal and zeros elsewhere.
```
np.eye(3)
```
<br>
`diag` extracts a diagonal or constructs a diagonal array.
```
np.diag(y)
```
<br>
Create an array using repeating list (or see `np.tile`)
```
np.array([1, 2, 3] * 3)
```
<br>
Repeat elements of an array using `repeat`.
```
np.repeat([1, 2, 3], 3)
```
<br>
#### Combining Arrays
```
p = np.ones([2, 3], int)
p
```
<br>
Use `vstack` to stack arrays in sequence vertically (row wise).
```
np.vstack([p, 2*p])
```
<br>
Use `hstack` to stack arrays in sequence horizontally (column wise).
```
np.hstack([p, 2*p])
```
<br>
## Operations
Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power.
```
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
```
<br>
**Dot Product:**
$ \begin{bmatrix}x_1 \ x_2 \ x_3\end{bmatrix}
\cdot
\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}
= x_1 y_1 + x_2 y_2 + x_3 y_3$
```
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
```
<br>
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
```
z = np.array([y, y**2])
z
```
<br>
The shape of array `z` is `(2,3)` before transposing.
```
z.shape
```
<br>
Use `.T` to get the transpose.
```
z.T
```
<br>
The number of rows has swapped with the number of columns.
```
z.T.shape
```
<br>
Use `.dtype` to see the data type of the elements in the array.
```
z.dtype
```
<br>
Use `.astype` to cast to a specific type.
```
z = z.astype('f')
z.dtype
```
<br>
## Math Functions
Numpy has many built in math functions that can be performed on arrays.
```
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
```
<br>
`argmax` and `argmin` return the index of the maximum and minimum values in the array.
```
a.argmax()
a.argmin()
```
<br>
## Indexing / Slicing
```
s = np.arange(13)**2
s
```
<br>
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
```
s[0], s[4], s[-1]
```
<br>
Use `:` to indicate a range. `array[start:stop]`
Leaving `start` or `stop` empty will default to the beginning/end of the array.
```
s[1:5]
```
<br>
Use negatives to count from the back.
```
s[-4:]
```
<br>
A second `:` can be used to indicate step-size. `array[start:stop:stepsize]`
Here we are starting at the 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.
```
s[-5::-2]
```
<br>
Let's look at a multidimensional array.
```
r = np.arange(36)
r.resize((6, 6))
r
```
<br>
Use bracket notation to slice: `array[row, column]`
```
r[2, 2]
```
<br>
And use : to select a range of rows or columns
```
r[3, 3:6]
```
<br>
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
```
r[:2, :-1]
```
<br>
This is a slice of the last row, and only every other element.
```
r[-1, ::2]
```
<br>
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`)
```
r[r > 30]
```
<br>
Here we are assigning all values in the array that are greater than 30 to the value of 30.
```
r[r > 30] = 30
r
```
<br>
## Copying Data
Be careful with copying and modifying arrays in NumPy!
`r2` is a slice of `r`
```
r2 = r[:3,:3]
r2
```
<br>
Set this slice's values to zero ([:] selects the entire array)
```
r2[:] = 0
r2
```
<br>
`r` has also been changed!
```
r
```
<br>
To avoid this, use `r.copy` to create a copy that will not affect the original array
```
r_copy = r.copy()
r_copy
```
<br>
Now when r_copy is modified, r will not be changed.
```
r_copy[:] = 10
print(r_copy, '\n')
print(r)
```
<br>
### Iterating Over Arrays
Let's create a new 4 by 3 array of random numbers 0-9.
```
test = np.random.randint(0, 10, (4,3))
test
```
<br>
Iterate by row:
```
for row in test:
print(row)
```
<br>
Iterate by index:
```
for i in range(len(test)):
print(test[i])
```
<br>
Iterate by row and index:
```
for i, row in enumerate(test):
print('row', i, 'is', row)
```
<br>
Use `zip` to iterate over multiple iterables.
```
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
```
```
from elasticsearch import Elasticsearch
from random import randint
es = Elasticsearch([{'host': 'localhost', 'port': 9200}], http_auth=('xxxxxxx', 'xxxxxxxxx'))
# ~ 6,000,000 companies
# ~ 4,000 colleges
# ratio ==> 1500 companies per one college
6000000/4000
# index a batch of randomly generated users (loop added so that `i` is
# defined; the batch size of 100 is illustrative)
for i in range(100):
    doc = {'email': 'name_'+str(i)+'@email.com',
           'number': randint(1,100),
           'company': 'company_'+str(randint(1,100)),
           'school': 'school_'+str(randint(1,10))}
    res = es.index(index="test-index", doc_type='tweet', body=doc)
es.count(index='users')['count']
```
## POST (adding new user)
```
def add_user():
# Can also implement request.json
print('Please fill out the following information.')
email = input('email: ')
number = input('number: ')
company = input('company: ')
school = input('school: ')
doc = {'email': email,
'number': int(number),
'company': company,
'school': school}
#es.index(index='users',doc_type='people',id=es.count(index='users')['count']+1,body=doc)
add_user()
```
## DELETE (delete existing user)
```
def delete_user():
# Delete user based off ID in the users index
print('###########################################')
print('################# WARNING #################')
print('###########################################')
print('You are about to delete a user from Remote.')
print(' ')
answer = input('Do you wish to continue? Y/N ')
if answer.upper() == 'Y':
user_id = input('Enter user ID: ')
# es.delete(index='users',doc_type='people',id=int(user_id))
print('You have removed %s from Remote.com. :(' % user_id)
else:
pass
delete_user()
```
## PUT (update user)
```
def update_user():
print('You are about to update a user\'s information.')
print(' ')
answer = input('Do you wish to continue? Y/N ')
if answer.upper() == 'Y':
user_id = input('Enter user id: ')
print('Please update the following information.')
email = input('email: ')
number = input('number: ')
company = input('company: ')
school = input('school: ')
doc = {'email': email,
'number': int(number),
'company': company,
'school': school}
# es.index(index, doc_type, body, id=user_id)
# return jsonify({'Update': True})
else:
pass
# return jsonify({'Update': False})
update_user()
```
## GET (view user info)
```
def get_user_info():
    user_id = input('Enter user id: ')
    # jsonify needs a Flask request context; returning the raw dict works here
    return es.search(index='users',body={'query': {'match': {'_id':user_id}}})['hits']['hits'][0]['_source']
es.search(index='users',body={'query': {'match': {'_id':'2600'}}})['hits']['hits'][0]['_source']
user = es.search(index='users',body={'query': {'match': {'_id':'2600'}}})['hits']['hits'][0]
print(user)
es.search(index='users',body={'query': {'match': {'company':'company_100'}}})
```
## GET (user's 1st connection)
```
def user_1st_degree():
user_id = input('Enter user id: ')
user_info = es.search(index='users',body={'query': {'match': {'_id':user_id}}})['hits']['hits'][0]['_source']
coworkers = es.search(index='users',body={'query': {'match': {'company':user_info['company']}}})['hits']['hits']
coworker_ids = [coworker['_id'] for coworker in coworkers]
classmates = es.search(index='users',body={'query': {'match': {'school':user_info['school']}}})['hits']['hits']
classmate_ids = [classmate['_id'] for classmate in classmates]
first_deg_conns = list(set(coworker_ids+classmate_ids))
return first_deg_conns
user_1st_degree()
coworkers = es.search(index='users',body={'query': {'match': {'company':'company_7092'}}})['hits']['hits']
coworker_ids = [coworker['_id'] for coworker in coworkers]
# print(coworker_ids)
classmates = es.search(index='users',body={'query': {'match': {'school':'school_303'}}})['hits']['hits']
classmate_ids = [classmate['_id'] for classmate in classmates]
# print(classmate_ids)
total = classmate_ids+ coworker_ids
print(list(set(total)))
es.search(index='users',body={'query': {'match': {'company':'company_7092'}}})
```
## GET (user's 2nd degree connections)
From your 1st degree connections, get their 1st degree connections...this will yield your 2nd degree connections
```
def user_1st_degreex(user_id):
user_info = es.search(index='users',body={'query': {'match': {'_id':user_id}}})['hits']['hits'][0]['_source']
coworkers = es.search(index='users',body={'query': {'match': {'company':user_info['company']}}})['hits']['hits']
coworker_ids = [coworker['_id'] for coworker in coworkers]
classmates = es.search(index='users',body={'query': {'match': {'school':user_info['school']}}})['hits']['hits']
classmate_ids = [classmate['_id'] for classmate in classmates]
first_conns = list(set(coworker_ids+classmate_ids))
return first_conns
def user_2nd_degree():
    user_id = input('Enter user id: ')
    first_conns = user_1st_degreex(user_id)
    second_conns = []
    for conn in first_conns:
        second_conns.extend(user_1st_degreex(conn))
    unique_second_conns = list(set(second_conns))
    return unique_second_conns
user_2nd_degree()
```
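The connection logic above is independent of Elasticsearch: it is just set arithmetic over shared companies and schools. The same idea can be sketched with in-memory data (the tiny graph below is invented for illustration):
```
# Toy data: user -> (company, school); values are made up for illustration
users = {
    'u1': ('acme', 'mit'),
    'u2': ('acme', 'stanford'),
    'u3': ('globex', 'mit'),
    'u4': ('globex', 'stanford'),
}

def first_degree(uid):
    company, school = users[uid]
    # Anyone sharing a company or a school, excluding the user themself
    return {u for u, (c, s) in users.items()
            if (c == company or s == school) and u != uid}

def second_degree(uid):
    first = first_degree(uid)
    second = set()
    for conn in first:
        second |= first_degree(conn)
    # Second-degree connections exclude the user and their first-degree set
    return second - first - {uid}

print(sorted(first_degree('u1')))   # ['u2', 'u3']
print(sorted(second_degree('u1')))  # ['u4']
```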
```
import numpy as np
class Perception(object):
'''
Created on May 14th, 2017
Perception: A very simple model for binary classification
@author: Qi Gong
'''
def __init__(self, eta = 0.01, n_iter = 10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
'''
        X : matrix, shape = [n_samples, n_features]. Training data
        y : vector. Class labels
'''
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0)
self.errors_.append(errors)
return self
def net_input(self, X):
'''
Calculate net_input
input: X. X is training data and a matrix.
'''
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
'''
Predict the label by input X
input : X. X is a matrix.
'''
return np.where(self.net_input(X) > 0.0, 1, -1)
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None)
df.tail()
import matplotlib.pyplot as plt
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', 1, -1)
X = df.iloc[0:100, [0, 2]].values
plt.scatter(X[:50, 0], X[:50, 1], color = 'red', marker = 'o', label = 'setosa')
plt.scatter(X[50:100, 0], X[50:100, 1], color = 'blue', marker = 'x', label = 'versicolor')
plt.xlabel('petal length')
plt.ylabel('sepal length')
plt.legend(loc = 'upper left')
plt.show()
ppn = Perception(eta = 0.1, n_iter = 10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of misclassifications')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution = 0.02):
'''
visualize the decision boundaries for 2D datasheet
'''
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen','gray','cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
#plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha = 0.4, cmap = cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
#plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x = X[y == cl, 0], y = X[y == cl, 1], alpha = 0.8, c = cmap(idx),
marker = markers[idx], label = cl)
plot_decision_regions(X, y, classifier=ppn)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc = 'upper left')
plt.show()
```
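The whole algorithm is the update rule `w += eta * (target - prediction) * x`. On a linearly separable toy set (made up here, not the iris data above) a bare-bones version converges in a few passes:
```
import numpy as np

# Toy linearly separable data: the class is the sign of the first feature
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -0.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(1 + X.shape[1])   # w[0] is the bias term
eta = 0.1

for _ in range(10):            # a few epochs are enough here
    for xi, target in zip(X, y):
        pred = np.where(np.dot(xi, w[1:]) + w[0] > 0.0, 1, -1)
        update = eta * (target - pred)
        w[1:] += update * xi
        w[0] += update

preds = np.where(X.dot(w[1:]) + w[0] > 0.0, 1, -1)
print(preds)   # [ 1  1 -1 -1], matching y
```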
# Implementing an Adaptive Linear Neuron in Python
```
class AdalineGD(object):
def __init__(self, eta = 0.01, n_iter = 50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1+X.shape[1])
self.errors_ = []
for i in range(self.n_iter):
output = self.net_input(X)
error = y - output
self.w_[1:] += self.eta * X.T.dot(error)
self.w_[0] += self.eta * error.sum()
cost = (error**2).sum()/2.0
self.errors_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
return self.net_input(X)
def predict(self, X):
return np.where(self.activation(X) > 0.0, 1, -1)
fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize=(8, 4))
ada1 = AdalineGD(n_iter = 10, eta = 0.01).fit(X, y)
ax[0].plot(range(1, len(ada1.errors_) + 1), np.log10(ada1.errors_), marker = 'o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error')
ax[0].set_title('AdalineGD Learning rate = 0.01')
ada2 = AdalineGD(n_iter = 10, eta = 0.0001).fit(X, y)
ax[1].plot(range(1, len(ada2.errors_) + 1), np.log10(ada2.errors_), marker = 'o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('log(Sum-squared-error)')
ax[1].set_title('AdalineGD learning rate = 0.0001')
plt.show()
```
# Standardize the data
```
X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()
ada = AdalineGD(n_iter = 15, eta = 0.01)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline-Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.show()
plt.plot(range(1, len(ada.errors_) + 1), ada.errors_, marker = 'o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()
```
# Adaptive linear neuron Stochastic Gradient Descent
```
from numpy.random import seed
class AdalineSGD(object):
def __init__(self, eta = 0.01, n_iter = 50, shuffle = True, random_state = None):
self.eta = eta
self.n_iter = n_iter
self.w_initialized = False
self.shuffle = shuffle
if random_state:
seed(random_state)
def fit(self, X, y):
self.__initialize_weights(X.shape[1])
self.errors_ = []
for i in range(self.n_iter):
if self.shuffle:
X, y = self.__shuffle(X, y)
cost = []
for xi, target in zip(X, y):
cost.append(self.__update_weights(xi, target))
avg_cost = sum(cost) / len(y)
self.errors_.append(avg_cost)
return self
def partial_fit(self, X, y):
if not self.w_initialized:
            self.__initialize_weights(X.shape[1])
if y.ravel().shape[0] > 1:
for xi, target in zip(X, y):
self.__update_weights(xi, target)
else:
self.__update_weights(X, y)
return self
def __initialize_weights(self, m):
self.w_ = np.zeros(1+m)
self.w_initialized = True
def __shuffle(self, X, y):
r = np.random.permutation(len(y))
return X[r], y[r]
def __update_weights(self, x, y):
output = self.net_input(x)
error = y - output
self.w_[1:] += self.eta * x.dot(error)
self.w_[0] += self.eta * error
cost = 0.5 * error ** 2
return cost
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
return self.net_input(X)
def predict(self, X):
return np.where(self.activation(X) > 0.0, 1, -1)
```
# Test
```
adasgd = AdalineSGD(n_iter = 15, eta = 0.01, random_state = 1)
adasgd.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=adasgd)
plt.title('Adaline-Gradient Stochastic Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc = 'upper left')
plt.show()
plt.plot(range(1, len(adasgd.errors_) + 1), adasgd.errors_, marker = 'o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()
```
# Sequences
## `sequence.DNA`
`coral.DNA` is the core data structure of `coral`. If you are already familiar with core python data structures, it mostly acts like a container similar to lists or strings, but also provides further object-oriented methods for DNA-specific tasks, like reverse complementation. Most design functions in `coral` return a `coral.DNA` object or something that contains a `coral.DNA` object (like `coral.Primer`). In addition, there are related `coral.RNA` and `coral.Peptide` objects for representing RNA and peptide sequences and methods for converting between them.
To get started with `coral.DNA`, import `coral`:
```
import coral as cor
```
### Your first sequence
Let's jump right into things. Let's make a sequence that's the first 30 bases of gfp from *A. victoria*. To initialize a sequence, you feed it a string of DNA characters.
```
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
display(example_dna)
```
A few things just happened behind the scenes. First, the input was checked to make sure it's DNA (A, T, G, and C). For now, it supports only unambiguous letters - no N, Y, R, etc. Second, the internal representation is converted to an uppercase string - this way, DNA is displayed uniformly and functional elements (like annealing and overhang regions of primers) can be delineated using case. If you input a non-DNA sequence, a `ValueError` is raised.
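The validation step described here can be sketched in plain Python (this stand-in is not coral's actual implementation, just an illustration of the check):
```
def check_dna(seq):
    """Return the uppercase sequence, or raise ValueError for non-DNA input."""
    upper = seq.upper()
    if set(upper) - set('ATGC'):
        raise ValueError('Encountered a non-DNA character')
    return upper

print(check_dna('atgagt'))   # ATGAGT
try:
    check_dna('atgxyz')
except ValueError as e:
    print(e)                 # Encountered a non-DNA character
```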
For the most part, a `sequence.DNA` instance acts like a python container and many string-like operations work.
```
# Extract the first three bases
display(example_dna[0:3])
# Extract the last seven bases
display(example_dna[-7:])
# Reverse a sequence
display(example_dna[::-1])
# Grab every other base starting at index 0
display(example_dna[::2])
# Is the sequence 'AT' in our sequence? How about 'ATT'?
print("'AT' is in our sequence: {}.".format("AT" in example_dna))
print("'ATT' is in our sequence: {}.".format("ATT" in example_dna))
```
Several other common special methods and operators are defined for sequences - you can concatenate DNA (so long as it isn't circular) using `+`, repeat linear sequences using `*` with an integer, check for equality with `==` and `!=` (note: features, not just sequences, must be identical), check the length with `len(dna_object)`, etc.
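These operators come from standard Python special methods; a minimal stand-in class (illustrative only, far simpler than `coral.DNA`) shows the pattern:
```
class TinySeq:
    """Toy container demonstrating the operator support described above."""
    def __init__(self, seq):
        self.seq = seq.upper()
    def __add__(self, other):       # concatenation with +
        return TinySeq(self.seq + other.seq)
    def __mul__(self, n):           # repetition with an integer
        return TinySeq(self.seq * n)
    def __eq__(self, other):        # equality check
        return self.seq == other.seq
    def __len__(self):              # len() support
        return len(self.seq)

a = TinySeq('atg')
b = TinySeq('gga')
print((a + b).seq)          # ATGGGA
print(len(a * 3))           # 9
print(a == TinySeq('ATG'))  # True
```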
### Simple sequences - methods
In addition to slicing, `sequence.DNA` provides methods for common molecular manipulations. For example, reverse complementing a sequence is a single call:
```
example_dna.reverse_complement()
```
An extremely important method is the `.copy()` method. It may seem redundant to have an entire function for copying a sequence - why not just assign a `sequence.DNA` object to a new variable? As in most high-level languages, python does not actually copy entire objects in memory when assignment happens - it just adds another reference to the same data. The short of it is that the very common operation of generating a lot of new variants to a sequence, or copying a sequence, requires the use of a `.copy()` method. For example, if you want to generate a new list of variants where an 'a' is substituted one at a time at each part of the sequence, using `.copy()` returns the correct result (the first example) while directly accessing example_dna has horrible consequences (the edits build up, as they all modify the same piece of data sequentially):
```
example_dna.copy()
# Incorrect way (editing shared + mutable sequence):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print([str(x) for x in variant_list])
print()
# Correct way (copy mutable sequence, then edit):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna.copy()
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print([str(x) for x in variant_list])
```
An important fact about `sequence.DNA` methods and slicing is that none of the operations modify the object in place (they don't mutate their parent). If we look at `example_dna`, it has not been reverse-complemented itself. Running `example_dna.reverse_complement()` returns a new sequence, so if you want to keep the result you need to assign it to a variable:
```
revcomp_dna = example_dna.reverse_complement()
display(example_dna)
display(revcomp_dna)
```
You also have direct access to important attributes of a `sequence.DNA` object. The following are examples of how to get important sequences or information about a sequence.
```
# The top strand - a simple python string in the 5' -> 3' orientation.
example_dna.top
# The bottom strand - another python string, also in the 5' -> 3' orientation.
example_dna.bottom
# Sequences are double stranded, or 'ds' by default.
# This is a directly accessible attribute, not a method, so () is not required.
example_dna.ds
# DNA can be linear or circular - check the boolean `circular` attribute.
example_dna.circular
# You can switch between topologies using the .circularize and .linearize methods.
# Circular DNA has different properties:
# 1) it can't be concatenated to
# 2) sequence searches using .locate will search over the current origin (e.g. from -10 to +10 for a 20-base sequence).
circular_dna = example_dna.circularize()
circular_dna.circular
# Linearization is more complex - you can choose the index at which to linearize a circular sequence.
# This simulates a precise double stranded break at the index of your choosing.
# The following example shows the difference between linearizing at index 0 (default) versus index 2
# (python 0-indexes, so index 2 = 3rd base, i.e. 'g' in 'atg')
print(circular_dna.linearize())
print()
print(circular_dna.linearize(2))
# Sometimes you just want to rotate the sequence around - i.e. switch the top and bottom strands.
# For this, use the .flip() method
example_dna.flip()
```
```
import os
os.chdir('C:\\Users\\SHAILESH TIWARI\\Downloads\\Classification\\hr')
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
train = pd.read_csv('train.csv')
# getting their shapes
print("Shape of train :", train.shape)
#print("Shape of test :", test.shape)
train.shape
train.head()
train.columns
train.isna().sum()
#calculation of percentage of missing data
total = train.isnull().sum().sort_values(ascending=False)
percent_1 = train.isnull().sum()/train.isnull().count()*100
percent_2 = (round(percent_1, 1)).sort_values(ascending=False)
missing_data = pd.concat([total, percent_2], axis=1, keys=['Total', '%'])
missing_data.head(5)
train['is_promoted'].value_counts() #unbalanced
train.shape
# finding the %age of people promoted
promoted = train['is_promoted'].mean() * 100  # computed from the data rather than hard-coded counts
print("Percentage of Promoted Employees is {:.2f}%".format(promoted))
# plotting a histogram of the target
plt.hist(train['is_promoted'])
plt.title('plot to show the gap in Promoted and Non-Promoted Employees', fontsize = 30)
plt.xlabel('0 -No Promotion and 1- Promotion', fontsize = 20)
plt.ylabel('count')
plt.show()
s1=train.dtypes
s1.groupby(s1).count()
train.dtypes
corr_matrix = train.corr(method='pearson')
corr_matrix['is_promoted'].sort_values(kind="quicksort")
#dropping the column
train.drop(['employee_id','region'], axis = 1, inplace = True)
train.head()
train.columns.values
#check for missing value, unique etc
FileNameDesc = pd.DataFrame(columns = ['column_name','missing_count','percent_missing','unique_count'])
for col in list(train.columns.values):
sum_missing = train[col].isnull().sum()
percent_missing = sum_missing/len(train)*100
uniq_count = (train.groupby([col])[col].count()).count()
FileNameDesc = FileNameDesc.append({'column_name':col,'missing_count':sum_missing,
'percent_missing':percent_missing,'unique_count':uniq_count},
ignore_index = True)
FileNameDesc
# Apply mode strategy to populate the missing categorical data
train.groupby('education').agg({'education': np.size})
train["education"]=train["education"].fillna(train["education"].mode()[0])
train["education"]=train["education"].astype('category')
train["education"] = train["education"].cat.codes
train.isnull().sum()
train['previous_year_rating'].unique()
train['previous_year_rating'].mode()
train['previous_year_rating'].fillna(1, inplace = True)
train.isnull().sum()
train.dtypes
data=pd.get_dummies(train,columns=['department','gender','recruitment_channel','previous_year_rating'],drop_first=True)
data
df1=data['is_promoted']
data.drop(['is_promoted'], axis = 1, inplace = True)
data=pd.concat([data,df1],axis=1)
#Key data analysis
len(data)
data.head()
data.isnull().any()
data.isnull().sum()
data.corr()
sns.heatmap(data.corr(),annot=False)
data.columns
x = data.iloc[:,0:22].values
y = data.iloc[:,-1].values  # 1-D target avoids sklearn shape warnings
x
y
from sklearn.preprocessing import StandardScaler # to make the data in standard format to read
sc = StandardScaler() # feature scaling because salary and age are both in different scale
x=sc.fit_transform(x)
pd.DataFrame(x)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test= train_test_split(x,y,test_size=0.20, random_state=0)
# applying logistic regression
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(x_train,y_train)
# prediction for x_test
y_pred = logmodel.predict(x_test)
y_pred
y_test
# concept of confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred)
len(y_test)
sns.pairplot(train)
# applying cross validation on top of algo
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=logmodel, X=x_train,y=y_train,cv=10)
accuracies
accuracies.mean()
# applying the k-nearest neighbours algorithm
from sklearn.neighbors import KNeighborsClassifier
classifier_knn =KNeighborsClassifier(n_neighbors=11,metric='euclidean',p=2)
classifier_knn.fit(x_train,y_train)
y_pred_knn = classifier_knn.predict(x_test)
y_pred_knn
y_test
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_knn)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_knn)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=classifier_knn, X=x_train,y=y_train,cv=10)
accuracies
accuracies.mean()
# applying the Naive Bayes algorithm
from sklearn.naive_bayes import GaussianNB
classifier_nb =GaussianNB()
classifier_nb.fit(x_train,y_train)
y_pred_nb = classifier_nb.predict(x_test)
y_pred_nb
y_test
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_nb)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_nb)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=classifier_nb, X=x_train,y=y_train,cv=10)
accuracies
accuracies.mean()
# support vector machine application through sigmoid kernel
from sklearn.svm import SVC
classifier_svm_sig = SVC(kernel='sigmoid')
classifier_svm_sig.fit(x_train,y_train)
pred_svm_sig = classifier_svm_sig.predict(x_test)
pred_svm_sig
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,pred_svm_sig)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,pred_svm_sig)
# support vector machine application through linear kernel
from sklearn.svm import SVC
classifier_svm_lin = SVC(kernel='linear')
classifier_svm_lin.fit(x_train,y_train)
y_pred_svm_lin = classifier_svm_lin.predict(x_test)
y_pred_svm_lin
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_svm_lin)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_svm_lin)
# support vector machine application through polynomial kernel
from sklearn.svm import SVC
classifier_svm_poly = SVC(kernel='poly')
classifier_svm_poly.fit(x_train,y_train)
y_pred_svm_poly = classifier_svm_poly.predict(x_test)
y_pred_svm_poly
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_svm_poly)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_svm_poly)
# support vector machine application through rbf kernel
from sklearn.svm import SVC
classifier_svm_rbf = SVC(kernel='rbf')
classifier_svm_rbf.fit(x_train,y_train)
y_pred_svm_rbf = classifier_svm_rbf.predict(x_test)
y_pred_svm_rbf
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_svm_rbf)
#accuracy score
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_svm_rbf)
#running decision tree algo
from sklearn.tree import DecisionTreeClassifier
classifier_dt =DecisionTreeClassifier(criterion='entropy') # also can use gini
classifier_dt.fit(x_train,y_train)
y_pred_dt =classifier_dt.predict(x_test)
y_pred_dt
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_dt)
# accuracy score calculation
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_dt)
# running random forest algorithm
from sklearn.ensemble import RandomForestClassifier
classifier_rf =RandomForestClassifier(n_estimators=3, criterion='entropy')
classifier_rf.fit(x_train,y_train)
y_pred_rf =classifier_rf.predict(x_test)
y_pred_rf
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred_rf)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred_rf)
```
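A caveat worth adding to the comparison above: with only ~8.5% of employees promoted, raw accuracy is a weak yardstick, since a model that always predicts "not promoted" already scores ~91.5%. Precision, recall, and F1 are more informative; a plain-Python sketch with invented confusion-matrix counts (not results from this dataset):
```
# Toy confusion-matrix counts, invented for illustration
tn, fp, fn, tp = 9800, 100, 60, 40

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

# High accuracy can coexist with a poor F1 on imbalanced data
print(accuracy, precision, recall, f1)
```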
```
import numpy as np
from numpy import array
import random
from random import randint
import os
import matplotlib.pyplot as plt
import pandas as pd
import keras
from keras.models import Sequential
from keras.layers import Dense, Conv1D, Flatten, Activation, MaxPooling1D, Dropout
from keras.optimizers import SGD
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1" #model will be trained on GPU 1
"""Hyperparameters"""
w = 17280-500 # History window (number of time stamps taken into account)
# i.e., filter(kernel) size
p_w = 5000 # Prediction window (number of time stampes required to be
# predicted)
n_features = 1 # Univariate time series
kernel_size = 2 # Size of filter in conv layers
num_filt_1 = 32 # Number of filters in first conv layer
num_filt_2 = 32 # Number of filters in second conv layer
num_nrn_dl = 40 # Number of neurons in dense layer
num_nrn_ol = p_w # Number of neurons in output layer
conv_strides = 1
pool_size_1 = 2 # Length of window of pooling layer 1
pool_size_2 = 2 # Length of window of pooling layer 2
pool_strides_1 = 2 # Stride of window of pooling layer 1
pool_strides_2 = 2 # Stride of window of pooling layer 2
epochs = 30
dropout_rate = 0.5 # Dropout rate in the fully connected layer
learning_rate = 2e-5
anm_det_thr = 0.8 # Threshold for classifying anomaly (0.5~0.8)
from hdf_helper import *
from stat_helper import *
from data_cleaning import *
import h5py
#df = pd.read_csv('data/datch_3.csv').drop(['Unnamed: 0'], axis = 1)
path = 'dat/cleaned_dat/ch_27.csv.csv'
df_test = pd.read_csv(path, nrows=100)
float_cols = [c for i, c in enumerate(df_test.columns) if i != 0]
float64_cols = {c: np.float64 for c in float_cols}
df = pd.read_csv(path, engine='c', dtype=float64_cols).drop(['Unnamed: 0'], axis = 1)
df = df.replace(np.NAN, 0.0)
zero_outliers = df.loc[:, (df == 0.0).all(axis=0)]
reg_data = df.loc[:,(df != 0.0).any(axis=0)]
#df = reduce_dataset_size(df, cluster_size = 50)
df = smooth_values(df)
from sklearn.preprocessing import RobustScaler  # this import was missing above
scaler = RobustScaler()
df = pd.DataFrame(scaler.fit_transform(df))
w = len(df.index) - p_w
"""
Data preprocessing
"""
# split a univariate sequence into samples
def split_sequence(sequence):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + w
out_end_ix = end_ix + p_w
# check if we are beyond the sequence
if out_end_ix > len(sequence):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
# # define input sequence
# for col in reg_data.columns:
# sampl, labl = split_sequence(list(reg_data[col]))
samples = []
labels = []
batch_sampl, batch_labl = split_sequence(list(reg_data.iloc[:, 0]))
samples.append(batch_sampl)
labels.append(batch_labl)
print()
for i in range(1, len(reg_data.columns)):
    batch_sampl, batch_labl = split_sequence(list(reg_data.iloc[:, i]))
samples.append(batch_sampl)
labels.append(batch_labl)
batch_sample = np.array(samples)
batch_label = np.array(labels)
print(batch_sample.shape)
# summarize the data
# for i in range(5):
# print(X[i], Y[i])
# 2. reshape from [samples, timesteps] into [samples, timesteps, features]
# need to convert batch into 3D tensor of the form [batch_size, input_seq_len, n_features]
batch_sample = batch_sample.reshape((batch_sample.shape[0], batch_sample.shape[2], n_features))
batch_label = batch_label.reshape((batch_label.shape[0],batch_label.shape[1],batch_label.shape[2]))
print(batch_label.shape)
print(batch_sample.shape)
"""Generate model for predictor"""
model = Sequential()
# Convolutional Layer #1
# Computes 32 features using a 1D filter(kernel) of with w with ReLU activation.
# Padding is added to preserve width.
# Input Tensor Shape: [batch_size, w, 1] / batch_size = len(batch_sample)
# Output Tensor Shape: [batch_size, w, num_filt_1] (num_filt_1 = 32 feature vectors)
model.add(Conv1D(filters=num_filt_1,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu',
input_shape=(w, n_features)))
# Pooling Layer #1
# First max pooling layer with a filter of length 2 and stride of 2
# Input Tensor Shape: [batch_size, w, num_filt_1]
# Output Tensor Shape: [batch_size, 0.5 * w, num_filt_1]
model.add(MaxPooling1D(pool_size=pool_size_1))
# strides=pool_strides_1,
# padding='valid'))
# Convolutional Layer #2
# Computes 64 features using a 5x5 filter.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 0.5 * w, 32]
# Output Tensor Shape: [batch_size, 0.5 * w, num_filt_1 * num_filt_2]
model.add(Conv1D(filters=num_filt_2,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu'))
# Max Pooling Layer #2
# Second max pooling layer with a 2x2 filter and stride of 2
# Input Tensor Shape: [batch_size, 0.5 * w, num_filt_1 * num_filt_2]
# Output Tensor Shape: [batch_size, 0.25 * w, num_filt_1 * num_filt_2]
model.add(MaxPooling1D(pool_size=pool_size_2))
# strides=pool_strides_2,
# padding='valid'
# Flatten tensor into a batch of vectors
# Input Tensor Shape: [batch_size, 0.25 * w, num_filt_1 * num_filt_2]
# Output Tensor Shape: [batch_size, 0.25 * w * num_filt_1 * num_filt_2]
model.add(Flatten())
# Dense Layer (Output layer)
# Densely connected layer with 1024 neurons
# Input Tensor Shape: [batch_size, 0.25 * w * num_filt_1 * num_filt_2]
# Output Tensor Shape: [batch_size, 1024]
model.add(Dense(units=num_nrn_dl, activation='relu'))
# Dropout
# Prevents overfitting in deep neural networks
model.add(Dropout(dropout_rate))
# Output layer
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, p_w]
model.add(Dense(units=num_nrn_ol))
# Summarize model structure
model.summary()
'''configure model'''
model.compile(optimizer='adam',
loss='mean_absolute_error')
# sgd = keras.optimizers.SGD(lr=learning_rate,
# decay=1e-6,
# momentum=0.9,
# nesterov=True)
# model.compile(optimizer='sgd',
# loss='mean_absolute_error',
# metrics=['accuracy'])
'''
model_fit = model.fit(batch_sample,
batch_label,
epochs=epochs,
verbose=1)
'''
for i in range(len(reg_data.columns)):
sampl = batch_sample[i].reshape((1,batch_sample.shape[1],batch_sample.shape[2]))
print(sampl.shape)
labl = batch_label[i].reshape((batch_label.shape[1],batch_label.shape[2]))
model.fit(sampl,
labl,
epochs=epochs,
verbose=1)
"""Testing with random interval(DeepAnT)"""
# Set number of test sequences
n_test_seq = 1
# Split a univariate sequence into samples
def generate_test_batch(raw_seq, n_test_seq):
# Sample a portion of the raw_seq randomly
ran_ix = random.randint(0,len(raw_seq) - n_test_seq * w - n_test_seq * p_w)
raw_test_seq = array(raw_seq[ran_ix:ran_ix + n_test_seq * w + n_test_seq * p_w])
batch_test_seq, batch_test_label = list(), list()
ix = ran_ix
for i in range(n_test_seq):
# gather input and output parts of the pattern
        seq_x = raw_seq[ix : ix+w]
seq_y = raw_seq[ix+w : ix+w+p_w]
ix = ix+w+p_w
batch_test_seq.append(seq_x)
batch_test_label.append(seq_y)
return array(batch_test_seq), array(batch_test_label)
batch_test_seq, batch_test_label = generate_test_batch(list(reg_data.iloc[:, 0]), n_test_seq)
batch_test_seq = batch_test_seq.reshape((batch_test_seq.shape[0], w, n_features))
batch_test_label = batch_test_label.reshape((batch_test_label.shape[0], p_w))
# Returns the loss value & metrics values for the model in test mode
model.evaluate(x=batch_test_seq,
y=batch_test_label,
verbose=1)
"""Save Weights (DeepAnT)"""
# save it to disk so we can load it back up anytime
model.save_weights('ch_3_noanom_weights.h5')
"""Predicting future sequence (DeepAnT)"""
# Build model
model = Sequential()
model.add(Conv1D(filters=num_filt_1,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu',
input_shape=(w, n_features)))
model.add(MaxPooling1D(pool_size=pool_size_1))
model.add(Conv1D(filters=num_filt_2,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu'))
model.add(MaxPooling1D(pool_size=pool_size_2))
model.add(Flatten())
model.add(Dense(units=num_nrn_dl, activation='relu'))
model.add(Dropout(dropout_rate))
model.add(Dense(units=num_nrn_ol))
# Load the model's saved weights.
model.load_weights('ch_3_noanom_weights.h5')
raw_seq = list(reg_data.iloc[:, 0])
endix = len(raw_seq) - w - p_w
input_seq = array(raw_seq[endix:endix+w])
target_seq = array(raw_seq[endix+w:endix+w+p_w])
input_seq = input_seq.reshape((1, w, n_features))
# Predict the next time stampes of the sampled sequence
predicted_seq = model.predict(input_seq, verbose=1)
# Print our model's predictions.
print(predicted_seq)
# Check our predictions against the ground truth.
print(target_seq)
'''Visualization of predicted time series'''
in_seq = reg_data.iloc[:, i][endix:endix+w]
tar_seq = reg_data.iloc[:, i][endix+w:endix+w+p_w]
predicted_seq = predicted_seq.reshape((p_w))
d = {'time': reg_data.iloc[:, i][endix+w:endix+w+p_w], 'values': predicted_seq}
df_sine_pre = pd.DataFrame(data=d)
pre_seq = df_sine_pre['values']
fig_predict = plt.figure(figsize=(100, 10))  # create the figure before plotting so savefig captures the plot
plt.plot(in_seq)
plt.plot(tar_seq)
plt.plot(pre_seq)
plt.ylim(top=10)
plt.ylim(bottom=0)
plt.title('Channel 27 Prediction')
plt.ylabel('value')
plt.xlabel('time')
plt.legend(['input_seq', 'target_seq', 'pre_seq'], loc='upper right')
axes = plt.gca()
axes.set_xlim([endix, endix + w + p_w])
fig_predict.savefig('predicted_sequence.png')
plt.show()
'''
"""Predicting random intervals (DeepAnT)"""
# Build model
model = Sequential()
model.add(Conv1D(filters=num_filt_1,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu',
input_shape=(w, n_features)))
model.add(MaxPooling1D(pool_size=pool_size_1))
model.add(Conv1D(filters=num_filt_2,
kernel_size=kernel_size,
strides=conv_strides,
padding='valid',
activation='relu'))
model.add(MaxPooling1D(pool_size=pool_size_2))
model.add(Flatten())
model.add(Dense(units=num_nrn_dl, activation='relu'))
model.add(Dropout(dropout_rate))
model.add(Dense(units=num_nrn_ol))
# Load the model's saved weights.
model.load_weights('ch_1_weights.h5')
# Sample a portion of raw_seq at a random start index
ran_ix = random.randint(1, len(raw_seq) - w - p_w)
input_seq = array(raw_seq[ran_ix : ran_ix + w])
target_seq = array(raw_seq[ran_ix + w : ran_ix + w + p_w])
input_seq = input_seq.reshape((1, w, n_features))
# Predict the next timestamps of the sampled sequence
yhat = model.predict(input_seq, verbose=1)
# Print our model's predictions.
print(yhat)
# Check our predictions against the ground truth.
print(target_seq)
'''
"""
Determins whether a sequence exceeds the threshold for being an anomaly
return boolean value of whether the sequence is an anomaly or not
"""
def anomaly_detector(prediction_seq, ground_truth_seq):
# calculate Euclidean between actual seq and predicted seq
dist = np.linalg.norm(ground_truth_seq - prediction_seq)
if (dist > anm_det_thr):
return true # anomaly
else:
return false # normal
```
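The notebook defines `anomaly_detector` but never calls it. Below is a hedged, self-contained sketch of how it might be used on a pair of prediction/ground-truth windows; the threshold `anm_det_thr` is a placeholder that would normally be tuned on validation data, not a value from the notebook.

```python
import numpy as np

anm_det_thr = 0.8  # placeholder threshold; tune on validation data

def anomaly_detector(prediction_seq, ground_truth_seq):
    # Euclidean distance between the predicted and actual windows
    dist = np.linalg.norm(np.asarray(ground_truth_seq) - np.asarray(prediction_seq))
    return dist > anm_det_thr

# toy windows: an accurate prediction vs. one that misses a spike
normal = anomaly_detector([1.0, 1.1, 0.9], [1.0, 1.0, 1.0])
spike = anomaly_detector([1.0, 1.1, 0.9], [1.0, 5.0, 1.0])
print(normal, spike)  # False True
```

In practice the same check would be applied to every predicted window from `model.predict`, flagging the timestamps whose windows exceed the threshold.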
# Simple Neural Networks: Revised
Back in February I published a post titled [<i>Simple Neural Networks with Numpy</i>](https://a-i-dan.github.io/tanh_NN). I wanted to take a deep dive into the world of neural networks and learn everything that went into making a neural net seem "magical". Now, a few months later, I want to rewrite that post. At the time of the original post, I was a novice in both machine learning and the Python language. While I still consider myself in the beginning stages of my growth, I have learned a lot in recent months and have realized the mistakes that were made in my original post. I thought about deleting the post and rewriting it, but I made this blog to keep a track record of my progress, and deleting the original post goes against this idea. I am sure there will be mistakes in this post as well, since I am still learning this field.
This blog post will be a revised, and much better, version of my original blog post with a similar title. Unlike the original post, building the neural network's architecture will be achieved by taking an object-oriented approach (this is mainly to help me learn). Therefore, the code will not be the most efficient possible and will be a bit more lengthy. This neural network will contain one hidden layer and will use the <b>sigmoid activation function</b>. The full code for this project will be posted below (including plot and predictions), then blocks of code will be separated and explained in further detail.
## What is a Neural Network?
A neural network is loosely based on how the human brain works: many neurons connected to other neurons, passing information through their connections and firing when the input to a neuron surpasses a certain threshold. Our artificial neural network will consist of artificial neurons and synapses with information being passed between them. The synapses, or connections, will be weighted according to the neuron's strength of influence in determining the output. These synaptic weights will go through an optimization process called <b>backpropagation</b>. For each iteration during the training process, backpropagation will be used to go backwards through the layers of the network and adjust the weights according to their contribution to the neural net's error.
Neural networks are essentially self-optimizing functions that map inputs to the correct outputs. We can then place a new input into the function, where it will predict an output based on the function it created with the training data.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-2.png?raw=true' style='display: block; margin: auto; width: 700px;'>
## Neural Net's Goal
This neural network, like all neural networks, will have to learn which features in the data are important for producing the output. In particular, this neural net will be given an input matrix with six samples, each with three feature columns consisting solely of zeros and ones. For example, one sample in the training set may be [0, 1, 1]. The output for each sample will be a single one or zero. The output is determined by the number in the first feature column of the data samples. Using the example given before, the output for [0, 1, 1] would be 0, because the first column contains a zero. An example chart is given below to demonstrate the output for each input sample.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation.png?raw=true' style='display: block; margin: auto; width: 600px;'>
### Full Code
```
import numpy as np # helps with the math
import matplotlib.pyplot as plt # to plot error during training
# input data
inputs = np.array([[0, 1, 0],
[0, 1, 1],
[0, 0, 0],
[1, 0, 0],
[1, 1, 1],
[1, 0, 1]])
# output data
outputs = np.array([[0], [0], [0], [1], [1], [1]])
# create NeuralNetwork class
class NeuralNetwork:
# intialize variables in class
def __init__(self, inputs, outputs):
self.inputs = inputs
self.outputs = outputs
# initialize weights as .50 for simplicity
self.weights = np.array([[.50], [.50], [.50]])
self.error_history = []
self.epoch_list = []
#activation function ==> S(x) = 1/1+e^(-x)
def sigmoid(self, x, deriv=False):
if deriv == True:
return x * (1 - x)
return 1 / (1 + np.exp(-x))
# data will flow through the neural network.
def feed_forward(self):
self.hidden = self.sigmoid(np.dot(self.inputs, self.weights))
# going backwards through the network to update weights
def backpropagation(self):
self.error = self.outputs - self.hidden
delta = self.error * self.sigmoid(self.hidden, deriv=True)
self.weights += np.dot(self.inputs.T, delta)
# train the neural net for 25,000 iterations
def train(self, epochs=25000):
for epoch in range(epochs):
# flow forward and produce an output
self.feed_forward()
# go back though the network to make corrections based on the output
self.backpropagation()
# keep track of the error history over each epoch
self.error_history.append(np.average(np.abs(self.error)))
self.epoch_list.append(epoch)
# function to predict output on new and unseen input data
def predict(self, new_input):
prediction = self.sigmoid(np.dot(new_input, self.weights))
return prediction
# create neural network
NN = NeuralNetwork(inputs, outputs)
# train neural network
NN.train()
# create two new examples to predict
example = np.array([[1, 1, 0]])
example_2 = np.array([[0, 1, 1]])
# print the predictions for both examples
print(NN.predict(example), ' - Correct: ', example[0][0])
print(NN.predict(example_2), ' - Correct: ', example_2[0][0])
# plot the error over the entire training duration
plt.figure(figsize=(15,5))
plt.plot(NN.epoch_list, NN.error_history)
plt.xlabel('Epoch')
plt.ylabel('Error')
plt.show()
```
## Code Breakdown
```
import numpy as np
import matplotlib.pyplot as plt
```
Before getting started, we will need to import the necessary libraries. Only two libraries are needed for this example; without plotting the loss we would only need NumPy. NumPy is a Python math library mainly used for linear algebra applications. Matplotlib is a visualization tool that we will use to create a plot displaying how our error decreases over time.
```
inputs = np.array([[0, 1, 0],
[0, 1, 1],
[0, 0, 0],
[1, 0, 0],
[1, 1, 1],
[1, 0, 1]])
outputs = np.array([[0], [0], [0], [1], [1], [1]])
```
As mentioned earlier, neural networks need data to learn from. We will create our input data matrix and the corresponding outputs matrix with NumPy's `.array()` function. Each sample in the input consists of three feature columns made up of 0s and 1s that produce one output of either a 0 or a 1. We want the neural network to learn that the outputs are determined by the first feature column in each sample.
$$\begin{matrix}
[0 & 1 & 0] ==>\\
[0 & 1 & 1] ==>\\
[1 & 0 & 0] ==>\\
[1 & 0 & 1] ==>\end{matrix}\begin{matrix}
0\\
0\\
1\\
1\end{matrix}$$
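The rule the network has to discover can be sanity-checked in one line of NumPy, using the six training samples from the full code: each label is exactly the first feature of its sample.

```python
import numpy as np

inputs = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0],
                   [1, 0, 0], [1, 1, 1], [1, 0, 1]])
outputs = np.array([[0], [0], [0], [1], [1], [1]])

# each label equals the first feature of its sample
assert (outputs.ravel() == inputs[:, 0]).all()
print("labels match the first feature column")
```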
```
class NeuralNetwork:
def __init__(self, inputs, outputs):
self.inputs = inputs
self.outputs = outputs
self.weights = np.array([[.50], [.50], [.50]])
self.error_history = []
self.epoch_list = []
```
We will take an object-oriented approach to building this particular neural network. We begin by creating a class called "NeuralNetwork" and initializing it by defining the `__init__` function. Our `__init__` function will take the inputs and outputs as arguments. We will also need to define our weights, which, for simplicity, will each start at .50. Because each feature in the data must be connected to the hidden layer, we will need a weight for each feature in the data (three weights). For plotting purposes, we will also create two empty lists: error_history and epoch_list. These will keep track of our neural network's error at each epoch during the training process.
```
def sigmoid(self, x, deriv=False):
if deriv == True:
return x * (1 - x)
return 1 / (1 + np.exp(-x))
```
This neural network will use the sigmoid function, or logistic function, as the activation function. The sigmoid function is a popular nonlinear activation function with a range of (0, 1). The inputs to this function will always be squished down to fit in between the sigmoid function's two horizontal asymptotes at y=0 and y=1. The sigmoid function has some well-known issues that restrict its usage. When we look at the graph below of the sigmoidal curve, we notice that as we reach the two ends of the curve, the derivatives of those points become very small. When these small derivatives are multiplied together during backpropagation, they become smaller and smaller until they are useless. Because the derivatives, or gradients, keep shrinking, the weights in the neural network will not be updated very much, if at all. This will lead the neural network to become stuck, with the situation growing worse for every additional training iteration.
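The saturation behaviour described above is easy to verify numerically: the sigmoid's derivative peaks at 0.25 when x = 0 and is already tiny by x = ±4.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1 - s)

print(sigmoid_deriv(0.0))             # 0.25, the largest possible gradient
print(round(sigmoid_deriv(4.0), 4))   # tiny gradient near the upper asymptote
print(round(sigmoid_deriv(-4.0), 4))  # tiny gradient near the lower asymptote
```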
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Unknown-13?raw=true' style='display: block; margin: auto; width: 600px;'>
The sigmoid function can be written as:
$ S(x) = \frac{1}{1 + e^{-x}} $
And the derivative of the sigmoid function can be written as:
$S'(x) = S(x) \cdot (1 - S(x))$
### How to get Derivative
A derivative is just a fancy word for the slope of the tangent line at a given point. Take a closer look at the sigmoid function's curve on the graph above. Where x=0, the slope is much greater than the slope where x=4 or x=-4. The amount that the weight(s) are updated is based on the derivative. If the slope is a lower value, the neural network is confident in its prediction, and less movement of the weights is needed. If the slope is a higher value, then the neural network's predictions are closer to .50, or 50% (the highest slope value possible for the sigmoid function is at x=0 and y=.5, where y is the prediction). This means the neural network is not very confident in its prediction and is in need of a greater update to the weights.
We can find the derivative of the sigmoid function with the steps below:
$$ S(x) = \frac{1}{1 + e^{-x}} $$
$$ S'(x) = \frac{d}{dx}(1 + e^{-x})^{-1} $$
$$ = -(1 + e^{-x})^{-2} \cdot \frac{d}{dx} (1 + e^{-x})$$
$$ = -(1 + e^{-x})^{-2} \cdot (\frac{d}{dx} (1) + \frac{d}{dx}(e^{-x}))$$
$$ = -(1 + e^{-x})^{-2} \cdot (-e^{-x}) $$
$$ = \frac{-(-e^{-x})}{(1 + e^{-x})^{2}} $$
$$ = \frac{e^{-x}}{(1 + e^{-x})^{2}} $$
<center>This is the derivative of the sigmoid function, but we can simplify it further by splitting the fraction into a product:</center>
$$ = \frac{1}{(1 + e^{-x})} \cdot \frac{e^{-x}}{(1 + e^{-x})} $$
<center>This will pull out another sigmoid function! We can then use a cool trick to continue the simplification: add one and subtract one to $e^{-x}$. Adding one and subtracting one will not change anything because they cancel each other out. It is a fancy way of adding zero.</center>
$$ = \frac{1}{(1 + e^{-x})} \cdot \frac{1 + e^{-x} - 1}{(1 + e^{-x})} $$
<center>By adding and subtracting one in the numerator, we can split the fraction up again and pull out another sigmoid function!</center>
$$ = \frac{1}{(1 + e^{-x})} \cdot (\frac{(1 + e^{-x})}{(1 + e^{-x})} - \frac{1}{(1 + e^{-x})}) $$
<center>Now we can simplify $\frac{(1 + e^{-x})}{(1 + e^{-x})}$ to 1 and end up with the sigmoid functions simplified derivative.</center>
$$ = \frac{1}{(1 + e^{-x})} \cdot (1 - \frac{1}{(1 + e^{-x})}) $$
<center>If we write the sigmoid function as $S(x)$, then the derivative can be written as:</center>
$$ = S(x) \cdot (1 - S(x)) $$
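The derivation above can be double-checked numerically: a central finite difference should agree with $S(x)\cdot(1 - S(x))$ at any point.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# confirm S'(x) = S(x) * (1 - S(x)) at a few sample points
h = 1e-6
for x in [-2.0, 0.0, 1.5]:
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
    analytic = sigmoid(x) * (1 - sigmoid(x))
    assert abs(numeric - analytic) < 1e-9
print("derivative identity verified")
```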
```
def feed_forward(self):
self.hidden = self.sigmoid(np.dot(self.inputs, self.weights))
```
During our neural network's training process, the input data will be fed forward through the network's weights and functions. The result of this feed forward function will be the output of the hidden layer, or the hidden layer's best guess with the weights it is given. Each feature in the input data will have its own weight for its connection to the hidden layer. We start by taking the sum of every feature multiplied by its corresponding weight. Once we have multiplied the input and weight matrices, we can take the results and feed them through the sigmoid function to be squished down into a probability between (0, 1). The forward propagation function can be written like this, where $x_{i}$ and $w_{i}$ are individual features and weights in the matrices:
$$ \hat y = \frac{1}{1 + e^{-(∑x_iw_i)}} $$
To reiterate, the hidden layer will be calculated with the following steps:
* Multiply each feature column with its weight
* Sum the products of the features and weights
* Pass the sum into the sigmoid function to produce the output $\hat y$
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-6.png?raw=true' style='display:block; margin:auto; width:500px;'>
The above image shows the process of multiplying each feature and its corresponding weight, then taking the sum of the products. Each row in the training data will be computed this way. The resulting 4x1 matrix will be fed into the sigmoid activation function, as shown below:
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-4.png?raw=true' style='display:block; margin:auto; width:500px;'>
The above process will result in the hidden layer's prediction. Each row in the $\sum xw$ matrix will be entered into the sigmoid function. The colors represent the individual processes for each row in the $\sum xw$ matrix. <b>Note:</b> this calculation only represents <b>one training iteration</b>, so the resulting $\hat y$ matrix will not be very accurate. By computing the hidden layer this way, then using backpropagation for many iterations, the result will be much more accurate.
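Outside the class, the same forward pass is just two NumPy operations. Running it on the post's six samples with the initial weights of .50 reproduces the 0.62 prediction for an input of [0, 1, 0]:

```python
import numpy as np

inputs = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0],
                   [1, 0, 0], [1, 1, 1], [1, 0, 1]])
weights = np.array([[0.5], [0.5], [0.5]])  # the post's initial weights

z = np.dot(inputs, weights)        # sum of feature * weight for each sample
y_hat = 1 / (1 + np.exp(-z))       # squash each sum into (0, 1)
print(np.round(y_hat.ravel(), 2))  # first prediction is 0.62
```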
```
def backpropagation(self):
self.error = self.outputs - self.hidden
delta = self.error * self.sigmoid(self.hidden, deriv=True)
self.weights += np.dot(self.inputs.T, delta)
```
This is the coolest part of the whole neural net: backpropagation. Backpropagation will go back through the layer(s) of the neural network, determine which weights contributed to the output and the error, then change the weights based on the gradient of the hidden layer's output. This will be explained further, but for now, the whole process can be written like this, where $y$ is the correct output and $\hat y$ is the hidden layer's prediction:
$$ w_{i} = w_i + X^{T}\cdot (y - \hat y) \cdot [\frac{1}{1 + e^{-(∑x_iw_i)}} \cdot (1 - \frac{1}{1 + e^{-(∑x_iw_i)}})]$$
To calculate the error of the hidden layer's predictions, we will simply take the difference between the correct output matrix, $y$, and the hidden layer's matrix, $\hat y$. This process will be shown below.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-5.png?raw=true' style='display:block; margin:auto; width:400px;'>
We can now multiply the error and the derivative of the hidden layer's prediction. We know that the derivative of the sigmoid function is $S(x)\cdot (1 - S(x))$. Therefore, the derivative for each of the hidden layer's predictions would be $[\hat y \cdot (1 - \hat y)]$. For example, the first row in the hidden layer's prediction matrix holds a value of $0.62$. We can substitute $\hat y$ with $0.62$ and the result will be the derivative of the prediction. $0.62 \cdot (1 - 0.62) = 0.2356$. Repeating this process for every row in the $\hat y$ matrix will give you a 4x1 matrix of derivatives which you will then multiply with the error matrix.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-7.png?raw=true' style='display:block; margin:auto; width:400px;'>
Multiplying the error and the derivative is used to find the change that is needed. When the sigmoid function outputs a value with higher confidence (either close to 0 or close to 1), the derivative will be smaller, and therefore the change needed will be smaller. If the sigmoid function outputs a value closer to .50, then the derivative is a larger value, which means there needs to be a larger change in order for the neural net to become more confident.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-8.png?raw=true' style='display:block; margin:auto; width:400px;'>
This step results in the update that will be added to the weights. We get this update by multiplying our "error weighted derivative" from the above step by the inputs. If a feature in the input is a 0, then the update to that weight will be 0, and if the feature is a 1, the update will be added in. This results in a (3x1) matrix that matches the shape of our weights matrix.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-9.png?raw=true' style='display:block; margin:auto; width:400px;'>
Once we have the update matrix, we can add it to our weights matrix to officially change the weights to become stronger. Even after one training iteration there is some noticeable progress! If you look at the updated weights matrix, you may notice that the first weight in the matrix has a higher value. Remember that our neural network must learn that the first feature in the inputs determines the output. We can see that our neural network is already assigning a higher value to the weight connected to the first feature in each input example!
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Untitled%20presentation-10.png?raw=true' style='display:block; margin:auto; width:400px;'>
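The steps above, written as one full weight update without the class scaffolding, show the first weight pulling ahead after a single iteration (the exact values depend only on the initial weights of .50):

```python
import numpy as np

inputs = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0],
                   [1, 0, 0], [1, 1, 1], [1, 0, 1]])
outputs = np.array([[0], [0], [0], [1], [1], [1]])
weights = np.array([[0.5], [0.5], [0.5]])

y_hat = 1 / (1 + np.exp(-np.dot(inputs, weights)))  # forward pass
error = outputs - y_hat                             # prediction error
delta = error * y_hat * (1 - y_hat)                 # error-weighted derivative
weights = weights + np.dot(inputs.T, delta)         # weight update
print(np.round(weights.ravel(), 3))  # the first weight is now the largest
```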
```
def train(self, epochs=25000):
for epoch in range(epochs):
self.feed_forward()
self.backpropagation()
self.error_history.append(np.average(np.abs(self.error)))
self.epoch_list.append(epoch)
```
The time has come to train the neural network. During the training process, the neural net will "learn" which features in the input data correlate with its output, and it will learn to make accurate predictions. To train our neural network, we will create the train function with the number of epochs, or iterations, set to 25,000. This means the neural network will repeat the weight-updating process 25,000 times. Within the train function, we will call our `feed_forward()` function, then the `backpropagation()` function. For each iteration we will also keep track of the error produced after the `feed_forward()` function has completed. We keep track of this by appending the error and epoch to the lists that were initialized earlier. I am sure there is an easier way to do this, but for quick prototyping, this way works just fine for now.
The training process follows the equation below for every weight in our neural net:
* $x_i$ - Feature in Input Data
* $w_i$ - The Weight that is Being Updated
* $X^T$ - Transposed Input Data
* $y$ - Correct Output
* $\hat y$ - Predicted Output
* $(y - \hat y)$ - Error
* $∑x_iw_i$ - Sum of the Products of Input Features and Weights
* $\frac{1}{1 + e^{-(∑x_iw_i)}}$ or $S(∑x_iw_i)$ - Sigmoid Function
$$ w_i = w_i + X^{T}\cdot(y - \hat y) \cdot [S(∑x_iw_i) \cdot (1 - S(∑x_iw_i))] $$
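Stripped of the class scaffolding, the whole training loop is only a few lines. The sketch below also records the mean absolute error; note that the all-zeros sample [0, 0, 0] always produces a prediction of 0.5 (there is no bias term), which is one reason the error curve plateaus rather than reaching zero.

```python
import numpy as np

inputs = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0],
                   [1, 0, 0], [1, 1, 1], [1, 0, 1]], dtype=float)
outputs = np.array([[0], [0], [0], [1], [1], [1]], dtype=float)
weights = np.array([[0.5], [0.5], [0.5]])

errors = []
for epoch in range(5000):
    y_hat = 1 / (1 + np.exp(-inputs @ weights))          # feed forward
    error = outputs - y_hat
    weights += inputs.T @ (error * y_hat * (1 - y_hat))  # backpropagation
    errors.append(np.mean(np.abs(error)))

print(round(errors[0], 3), round(errors[-1], 3))  # error shrinks, then plateaus
```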
```
def predict(self, new_input):
prediction = self.sigmoid(np.dot(new_input, self.weights))
return prediction
```
Now that the neural network has been trained and has learned the important features in the input data, we can begin to make predictions. The prediction function will look similar to the hidden layer, or the `feed_forward()` function. The forward propagation function essentially makes a prediction as well; backpropagation then checks for the error and updates the weights. Our predict function will use the same method as the feed forward function: multiply the input matrix and the weights matrix, then feed the results through the sigmoid function to return a value between 0 and 1. Hopefully, our neural network will make a prediction as close as possible to the actual output.
```
NN = NeuralNetwork(inputs, outputs)
```
We will create our NN object from the NeuralNetwork class and pass in the input matrix and the output matrix.
```
NN.train()
```
We can then call the `.train()` function on our neural network object.
```
example = np.array([[1, 1, 0]])
example_2 = np.array([[0, 1, 1]])
print(NN.predict(example), ' - Correct: ', example[0][0])
print(NN.predict(example_2), ' - Correct: ', example_2[0][0])
```
```
[[0.99089925]] - Correct: 1
[[0.006409]] - Correct: 0
```
Now we can create the two new examples that we want our neural network to make predictions for. We will call these "example" and "example_2". We can then call the `.predict()` function and pass through the arrays. We know that the first number, or feature, in the input determines the output. The first example has a 1 in the first column, so the output should be a 1. The second example has a 0 in the first column, so the output should be a 0.
```
plt.figure(figsize=(15,5))
plt.plot(NN.epoch_list, NN.error_history)
plt.xlabel('Epoch')
plt.ylabel('Error')
plt.show()
```
With the training complete, we can plot the error over each training iteration. The plot shows that there is a huge decrease in error during the earlier epochs, but the error slightly plateaus after approximately 5,000 iterations.
<img src='https://github.com/a-i-dan/a-i-dan.github.io/blob/master/images/NN_Revised/Unknown-14?raw=true' style='display: block; margin: auto; width: 1000px;'>
```
# helper functions
from transfer_learning import NeuralNet_sherpa_optimize
from dataset_loader import data_loader, get_descriptors, one_filter, data_scaler
# modules
import torch
import torch.nn as nn
import torch.optim as optim
import os
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# New
from transfer_learning import MyDataset
from Statistics_helper import stratified_cluster_sample
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import Loss
from ignite.contrib.metrics.regression import R2Score
import time
import sherpa
from sklearn.metrics import r2_score
# file name and data path
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
base_path = os.getcwd()
file_name = "data/CrystGrowthDesign_SI.csv"
"""
Data description.
Descriptors:
'void fraction', 'Vol. S.A.', 'Grav. S.A.', 'Pore diameter Limiting', 'Pore diameter Largest'
Source task:
'H2@100 bar/243K (wt%)'
Target tasks:
'H2@100 bar/130K (wt%)' 'CH4@100 bar/298 K (mg/g)' '5 bar Xe mol/kg' '5 bar Kr mol/kg'
"""
descriptor_columns = [
"void fraction",
"Vol. S.A.",
"Grav. S.A.",
"Pore diameter Limiting",
"Pore diameter Largest",
]
one_filter_columns = ["H2@100 bar/243K (wt%)"]
another_filter_columns = ['5 bar Kr mol/kg']
# load data
data = data_loader(base_path, file_name)
data = data.reset_index(drop=True)
epochs = 10000
batch_size = 128
# parameters
input_size = 5
output_size = 1
# file specifics
#filename = f"data_epochs-{epochs}_bs-{batch_size}"
trial_parameters={
"lr" : 0.003014,
"H_l1" : 257,
"activate" : "nn.PReLU"
}
#format data
learning_rate = trial_parameters["lr"]
df, t_1, t_2, y_1, y_2 = stratified_cluster_sample(
1, data, descriptor_columns, one_filter_columns[0], 5, net_out=True
)
df = df[0]
df=df.drop("Cluster",axis=1)
interest = one_filter_columns[0]
#descriptor_columns.append("Cluster")
features = descriptor_columns
df_train, df_val, y_df_train, y_df_val = train_test_split(
df[features], df[interest], test_size=0.1
)
df_train[interest] = np.array(y_df_train)
df_val[interest] = np.array(y_df_val)
first = MyDataset(df_train, interest, features)
train_loader = torch.utils.data.DataLoader(first, batch_size=batch_size)
second = MyDataset(df_val, interest, features)
val_loader = torch.utils.data.DataLoader(second, batch_size=len(df_val))
train_loss = []
train_r_2 = []
val_loss = []
val_r_2 = []
test_loss = []
test_r_2 = []
net_time = []
#create model
model = NeuralNet_sherpa_optimize(5, 1, trial_parameters).to(device)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
def train_step(engine, batch):
x, y = batch
model.train()
optimizer.zero_grad()
y_pred = model(x)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
return loss.item()
trainer = Engine(train_step)
metrics = {"loss": Loss(criterion), "r_2": R2Score()}
#train_evaluator = create_supervised_evaluator(model, metrics=metrics, device=device)
# train_evaluator.logger = setup_logger("Train Evaluator")
#validation_evaluator = create_supervised_evaluator(
# model, metrics=metrics, device=device
#)
# validation_evaluator.logger = setup_logger("Val Evaluator")
train_loader = torch.utils.data.DataLoader(first, batch_size=batch_size,shuffle=True)
start = time.time()
trainer.logger.disabled=True
trainer.run(train_loader, max_epochs=epochs)
descriptor_columns = [
"void fraction",
"Vol. S.A.",
"Grav. S.A.",
"Pore diameter Limiting",
"Pore diameter Largest",
]
model.fc1.weight.requires_grad = False
model.fc1.bias.requires_grad = False
model.fc2.weight.requires_grad = False
model.fc2.bias.requires_grad = False
optimizer = optim.Adam(
filter(lambda p: p.requires_grad, model.parameters()), lr=learning_rate
)
df, t_1, t_2, y_1, y_2 = stratified_cluster_sample(
1, data, descriptor_columns, another_filter_columns[0], 5, net_out=True
)
df = df[0]
df=df.drop("Cluster",axis=1)
interest = another_filter_columns[0]
#descriptor_columns.append("Cluster")
features = descriptor_columns
df_train, df_test, y_df_train, y_df_test = train_test_split(
df[features], df[interest], test_size=0.2
)
y_df_train=y_df_train.reset_index(drop=False)
df_train, df_val, y_df_train, y_df_val = train_test_split(
df_train[features], y_df_train[interest], test_size=0.2
)
df_train[interest] = np.array(y_df_train)
df_val[interest] = np.array(y_df_val)
df_test[interest]=np.array(y_df_test)
interest=another_filter_columns[0]
first = MyDataset(df_train, interest, features)
train_loader = torch.utils.data.DataLoader(first, batch_size=batch_size)
second = MyDataset(df_val, interest, features)
val_loader = torch.utils.data.DataLoader(second, batch_size=len(df_val))
third = MyDataset(df_test, interest, features)
test_loader=torch.utils.data.DataLoader(third, batch_size=len(df_test))
def train_step_1(engine, batch):
x, y = batch
model.train()
optimizer.zero_grad()
y_pred = model(x)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
return loss.item()
transfer_trainer = Engine(train_step_1)
metrics = {"loss": Loss(criterion), "r_2": R2Score()}
@transfer_trainer.on(Events.EPOCH_COMPLETED(every=50))
def store_metrics(engine):
end = time.time()
e = engine.state.epoch
out=float(criterion(model(train_loader.dataset.x_train),train_loader.dataset.y_train))
out1=float(r2_score(model(train_loader.dataset.x_train).detach().numpy(),train_loader.dataset.y_train.detach().numpy()))
out2=float(criterion(model(val_loader.dataset.x_train),val_loader.dataset.y_train))
out3=float(r2_score(model(val_loader.dataset.x_train).detach().numpy(),val_loader.dataset.y_train.detach().numpy()))
out4=float(criterion(model(test_loader.dataset.x_train),test_loader.dataset.y_train))
out5=float(r2_score(model(test_loader.dataset.x_train).detach().numpy(),test_loader.dataset.y_train.detach().numpy()))
train_loss.append(out)
train_r_2.append(out1)
val_loss.append(out2)
val_r_2.append(out3)
test_loss.append(out4)
test_r_2.append(out5)
net_time.append(end-start)
print(e)
transfer_trainer.logger.disabled=True
transfer_trainer.run(train_loader, max_epochs=epochs)
import matplotlib.pyplot as plt
plt.plot(val_r_2)
plt.plot(train_r_2,label="t")
plt.plot(test_r_2,label="real")
plt.legend()
plt.plot(val_loss)
plt.plot(train_loss,label="t")
plt.plot(test_loss,label="real")
plt.legend()
torch.save(model, "KR.ckpt")
pd.DataFrame()
```
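The transfer step above hinges on freezing the source-task layers (`fc1`, `fc2`) and rebuilding the optimizer over only the parameters that still require gradients. A minimal self-contained sketch of that pattern follows; the tiny `Sequential` model here is an illustrative stand-in, not the notebook's `NeuralNet_sherpa_optimize`.

```python
import torch.nn as nn
import torch.optim as optim

# toy stand-in network: one layer to freeze plus a trainable head
model = nn.Sequential(nn.Linear(5, 16), nn.PReLU(), nn.Linear(16, 1))

# freeze the first layer, analogous to freezing fc1/fc2 above
for p in model[0].parameters():
    p.requires_grad = False

# rebuild the optimizer over only the still-trainable parameters
optimizer = optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.003014
)

n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(n_trainable, n_total)  # 18 114
```

Only the PReLU slope and the head's 17 weights remain trainable; the frozen layer's 96 parameters keep the representation learned on the source task.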
# 3. Image-Similar-FCNN-Binary
For landmark-recognition-2019 algorithm validation
## Run name
```
import time
project_name = 'Dog-Breed'
step_name = '3-Image-Similar-FCNN-Binary'
time_str = time.strftime("%Y%m%d-%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
t0 = time.time()
```
## Important params
```
import multiprocessing
cpu_amount = multiprocessing.cpu_count()
print('cpu_amount: ', cpu_amount)
```
## Import PKGs
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from IPython.display import display
import os
import sys
import gc
import math
import shutil
import zipfile
import pickle
import h5py
from tqdm import tqdm
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
import keras
from keras.utils import Sequence
from keras.layers import *
from keras.models import *
from keras.applications import *
from keras.optimizers import *
from keras.regularizers import *
from keras.preprocessing.image import *
from keras.applications.inception_v3 import preprocess_input
```
## Project folders
```
cwd = os.getcwd()
feature_folder = os.path.join(cwd, 'feature')
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
org_train_folder = os.path.join(input_folder, 'org_train')
org_test_folder = os.path.join(input_folder, 'org_test')
train_folder = os.path.join(input_folder, 'data_train')
val_folder = os.path.join(input_folder, 'data_val')
test_folder = os.path.join(input_folder, 'data_test')
test_sub_folder = os.path.join(test_folder, 'test')
vgg16_feature_file = os.path.join(feature_folder, 'feature_wrapper_171023.h5')
train_csv_file = os.path.join(input_folder, 'train.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_folder = os.path.join(input_folder, 'sample_submission.csv')
print(vgg16_feature_file)
print(train_csv_file)
print(test_csv_file)
print(sample_submission_folder)
```
## Load feature
```
with h5py.File(vgg16_feature_file, 'r') as h:
x_train = np.array(h['train'])
y_train = np.array(h['train_label'])
x_val = np.array(h['val'])
y_val = np.array(h['val_label'])
print(x_train.shape)
print(y_train.shape)
print(x_val.shape)
print(y_val.shape)
import random
random.choice(list(range(10)))
import copy
a = list(range(10, 20))
print(a)
a.remove(13)
print(a)
```
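As a sketch of how a feature file with this dataset layout might be produced (the real extraction pipeline is not shown in this notebook; shapes and the file name below are illustrative assumptions):

```python
# Hypothetical sketch: writing a feature file with the same dataset layout the
# loading cell above expects ('train', 'train_label', ...). Shapes are made up.
import os
import tempfile
import numpy as np
import h5py

x_train = np.random.rand(100, 2048).astype(np.float32)  # e.g. CNN bottleneck features
y_train = np.random.randint(0, 120, size=100)

path = os.path.join(tempfile.mkdtemp(), 'feature_demo.h5')
with h5py.File(path, 'w') as h:
    h.create_dataset('train', data=x_train)
    h.create_dataset('train_label', data=y_train)

# Reading it back follows exactly the pattern used above
with h5py.File(path, 'r') as h:
    x_loaded = np.array(h['train'])
    y_loaded = np.array(h['train_label'])
print(x_loaded.shape, y_loaded.shape)
```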
## ImageSequence
```
class ImageSequence(Sequence):
    def __init__(self, x, y, batch_size, times_for_1_image, positive_rate):
        self.x = x
        self.y = y
        self.batch_size = batch_size
        self.times_for_1_image = times_for_1_image
        self.positive_rate = positive_rate
        self.len_x = self.x.shape[0]
        self.index = list(range(self.len_x))
        # group image indices by class label
        self.group = {}
        self.classes = list(set(self.y))
        self.classes.sort()
        for c in self.classes:
            self.group[c] = []
        for i, y_i in enumerate(self.y):
            self.group[y_i].append(i)
    def __len__(self):
        # times_for_1_image: how many times each image is trained on
        # 2: one positive and one negative example per image
        return self.times_for_1_image * 2 * (math.ceil(self.len_x / self.batch_size))
    def __getitem__(self, idx):
        batch_main_x = np.zeros((self.batch_size, self.x.shape[1]))
        batch_library_x = np.zeros((self.batch_size, self.x.shape[1]))
        batch_y = []  # 0 or 1
        for i in range(self.batch_size):
            # pick the main image at random from all train images
            item_main_image_idx = random.choice(self.index)
            item_main_image_y = self.y[item_main_image_idx]
            # pick the library image
            is_positive = random.random() < self.positive_rate
            if is_positive:  # choose a positive image as library_x
                # choose an image from the same class; item_main_image_idx is not
                # excluded, so the same image may be picked again
                item_library_image_idx = random.choice(self.group[item_main_image_y])
            else:  # choose a negative image as library_x
                # choose a different class, then an image from that class
                new_class = copy.deepcopy(self.classes)
                new_class.remove(item_main_image_y)
                item_library_image_group = random.choice(new_class)
                item_library_image_idx = random.choice(self.group[item_library_image_group])
            # add the pair to the batch
            batch_main_x[i] = self.x[item_main_image_idx]
            batch_library_x[i] = self.x[item_library_image_idx]
            batch_y.append(int(is_positive))
        batch_x = {
            'main_input': batch_main_x,
            'library_input': batch_library_x
        }
        batch_y = np.array(batch_y)
        return batch_x, batch_y
demo_sequence = ImageSequence(x_train[:200], y_train[:200], 128, 3, 0.1)
print(len(demo_sequence))
print(type(demo_sequence))
batch_index = 0
demo_batch = demo_sequence[batch_index]
demo_batch_x = demo_batch[0]
demo_batch_y = demo_batch[1]
print(type(demo_batch_x))
print(type(demo_batch_y))
demo_main_input = demo_batch_x['main_input']
demo_library_input = demo_batch_x['library_input']
print(demo_main_input.shape)
print(demo_library_input.shape)
print(demo_batch_y.shape)
# print(demo_main_input[0])
print(demo_batch_y)
train_sequence = ImageSequence(x_train, y_train, 32, 3, 0.5)
val_sequence = ImageSequence(x_val, y_val, 32, 3, 0.5)
```
## Model
```
main_input = Input((x_train.shape[1],), dtype='float32', name='main_input')
library_input = Input((x_train.shape[1],), dtype='float32', name='library_input')
x = keras.layers.concatenate([main_input, library_input])
x = Dense(x_train.shape[1]*2, activation='sigmoid')(x)
x = Dense(1024, activation='sigmoid')(x)
x = Dense(1024, activation='sigmoid')(x)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[main_input, library_input], outputs=[output])
model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
hist = model.fit_generator(
train_sequence,
steps_per_epoch=128,
epochs=300,
verbose=1,
callbacks=None,
validation_data=val_sequence,
validation_steps=128,
class_weight=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
shuffle=True,
initial_epoch=0
)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
def saveModel(model, run_name):
    cwd = os.getcwd()
    modelPath = os.path.join(cwd, 'model')
    if not os.path.isdir(modelPath):
        os.mkdir(modelPath)
    weights_file = os.path.join(modelPath, run_name + '.h5')
    print(weights_file)
    model.save(weights_file)
saveModel(model, run_name)
print('Time elapsed: %.1fs' % (time.time() - t0))
print(run_name)
```
# Support Vector Machine
```
from PIL import Image
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets, svm, linear_model
matplotlib.style.use('bmh')
matplotlib.rcParams['figure.figsize']=(10,10)
```
### 2D Linear
```
# Random 2d X
X0 = np.random.normal(-2, size=(30,2))
X1 = np.random.normal(2, size=(30,2))
X = np.concatenate([X0,X1], axis=0)
y = X @ [1,1] > 0
clf=svm.SVC(kernel='linear', C=1000)
clf.fit(X, y)
# plot bounds
x_min, y_min = X.min(axis=0)-1
x_max, y_max = X.max(axis=0)+1
# grid of coordinate points
grid = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
# grid.shape = (2, 200, 200)
# evaluate the SVM decision function at the grid points
Z = clf.decision_function(grid.reshape(2, -1).T)
Z = Z.reshape(grid.shape[1:])
# draw the colored regions and the decision boundary
plt.pcolormesh(grid[0], grid[1], Z > 0, cmap=plt.cm.rainbow, alpha=0.1)
plt.contour(grid[0], grid[1], Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
            levels=[-1, 0, 1])
# mark the sample points
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.rainbow, zorder=10, s=50);
```
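As a side note (a sketch with illustrative data, not part of the original notebook): the dashed lines drawn at decision-function levels -1 and +1 bound a margin whose width is 2/‖w‖, which we can check numerically for the linear kernel:

```python
# Sketch: relate the dashed lines at decision levels -1/+1 to the margin width
# 2/||w||. Data mirrors the linear-SVC cell above but with a fixed seed.
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(-2, size=(30, 2)), rng.normal(2, size=(30, 2))])
y = X @ [1, 1] > 0          # labels defined by a linear rule, so data is separable

clf = svm.SVC(kernel='linear', C=1000)
clf.fit(X, y)

w = clf.coef_[0]
margin = 2 / np.linalg.norm(w)   # distance between the two dashed lines
print('margin width:', margin)

# Support vectors sit on the margin: |decision function| ~ 1
sv_scores = clf.decision_function(clf.support_vectors_)
print(np.abs(sv_scores))
```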
3D view
```
from mpl_toolkits.mplot3d import Axes3D
ax = plt.gca(projection='3d')
ax.plot_surface(grid[0], grid[1], Z, cmap=plt.cm.rainbow, alpha=0.2)
ax.plot_wireframe(grid[0], grid[1], Z, alpha=0.2, rstride=20, cstride=20)
ax.scatter(X[:, 0], X[:, 1], y, c=y, cmap=plt.cm.rainbow, s=30);
ax.set_zlim3d(-2,2)
ax.set_xlim3d(-3,3)
ax.set_ylim3d(-3,3)
ax.view_init(15, -75)
```
Not linearly separable
```
# Random 2d X
X = np.random.uniform(-1.5, 1.5, size=(100,2))
y = (X**2).sum(axis=1) > 1
clf=svm.SVC(kernel='linear', C=1000)
clf.fit(X, y)
# plot bounds
x_min, y_min = X.min(axis=0)-1
x_max, y_max = X.max(axis=0)+1
# grid of coordinate points
grid = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
# grid.shape = (2, 200, 200)
# evaluate the SVM decision function at the grid points
Z = clf.decision_function(grid.reshape(2, -1).T)
Z = Z.reshape(grid.shape[1:])
# draw the colored regions and the decision boundary
plt.pcolormesh(grid[0], grid[1], Z > 0, cmap=plt.cm.rainbow, alpha=0.1)
plt.contour(grid[0], grid[1], Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
            levels=[-1, 0, 1])
# mark the sample points
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.rainbow, zorder=10, s=20);
(np.linspace(-1.5,1.5, 10)[:, None] @ np.linspace(-1.5,1.5, 10)[None, :]).shape
# Random 2d X
X = np.random.uniform(-1.5, 1.5, size=(100,2))
# more feature (x**2, y**2, x*y)
X2 = np.concatenate([X, X**2, (X[:, 0]*X[:, 1])[:, None]], axis=1)
y = (X**2).sum(axis=1) > 1
clf=svm.SVC(kernel='linear', C=1000)
clf.fit(X2, y)
# plot bounds
x_min, y_min = X.min(axis=0)-1
x_max, y_max = X.max(axis=0)+1
# grid of coordinate points
grid = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
# grid.shape = (2, 200, 200)
G = grid.reshape(2, -1).T
G = np.concatenate([G, G**2, (G[:, 0]*G[:, 1])[:, None]], axis=1)
# evaluate the SVM decision function at the grid points
Z = clf.decision_function(G)
Z = Z.reshape(grid.shape[1:])
# draw the colored regions and the decision boundary
plt.pcolormesh(grid[0], grid[1], Z > 0, cmap=plt.cm.rainbow, alpha=0.1)
plt.contour(grid[0], grid[1], Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
            levels=[-1, 0, 1])
# mark the sample points
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.rainbow, zorder=10, s=20);
#%matplotlib qt
ax = plt.gca(projection='3d')
ax.plot_surface(grid[0], grid[1], Z, cmap=plt.cm.rainbow, alpha=0.2)
ax.plot_wireframe(grid[0], grid[1], Z, alpha=0.2, rstride=20, cstride=20)
ax.scatter(X[:, 0], X[:, 1], y, c=y, cmap=plt.cm.rainbow, s=30);
#plt.show()
```
With kernel
```
%matplotlib inline
matplotlib.rcParams['figure.figsize']=(10,10)
# Random 2d X
X = np.random.uniform(-1.5, 1.5, size=(100,2))
# more feature (x**2, y**2, x*y)
X2 = np.concatenate([X, X**2, (X[:, 0]*X[:, 1])[:, None]], axis=1)
y = (X**2).sum(axis=1) > 1
clf=svm.SVC(kernel='rbf', C=1000)
clf.fit(X2, y)
# plot bounds
x_min, y_min = X.min(axis=0)-1
x_max, y_max = X.max(axis=0)+1
# grid of coordinate points
grid = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
# grid.shape = (2, 200, 200)
G = grid.reshape(2, -1).T
G = np.concatenate([G, G**2, (G[:, 0]*G[:, 1])[:, None]], axis=1)
# evaluate the SVM decision function at the grid points
Z = clf.decision_function(G)
Z = Z.reshape(grid.shape[1:])
# draw the colored regions and the decision boundary
plt.pcolormesh(grid[0], grid[1], Z > 0, cmap=plt.cm.rainbow, alpha=0.1)
plt.contour(grid[0], grid[1], Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
            levels=[-1, 0, 1])
# mark the sample points
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.rainbow, zorder=10, s=20);
#%matplotlib qt
ax = plt.gca(projection='3d')
ax.plot_surface(grid[0], grid[1], Z, cmap=plt.cm.rainbow, alpha=0.2)
ax.plot_wireframe(grid[0], grid[1], Z, alpha=0.2, rstride=20, cstride=20)
ax.scatter(X[:, 0], X[:, 1], y, c=y, cmap=plt.cm.rainbow, s=30);
#plt.show()
```
<img src="http://drive.google.com/uc?export=view&id=1tpOCamr9aWz817atPnyXus8w5gJ3mIts" width=500px>
Proprietary content. © Great Learning. All Rights Reserved. Unauthorized use or distribution prohibited.
---
# Hands-on - Advanced Certificate in Software Engineering - IIT Madras
---
# Instructions
- You need to add the code wherever you see "`#### Add your code here ####`"
- Marks are mentioned along with the cells
- **Do not edit any of the prefilled text/code part**
## Run below cells before starting the test - Mandatory
```
#### Please run this cell. Don't edit anything ####
!pip install --upgrade pip
!pip install ipython-sql
%load_ext sql
%sql sqlite://
```
## <font color='blue'> Please run the following cell. Don't edit anything </font>
```
%%sql
CREATE TABLE if not exists `user`(
`user_id` VARCHAR(50) NOT NULL,
`user_name` VARCHAR(50) NOT NULL,
`email` VARCHAR(50) NOT NULL UNIQUE,
`access_level` VARCHAR(50)
);
INSERT INTO `user` (`user_id`,`user_name`,`email`,`access_level`) VALUES ('U101','Alex','alex@office.com','Normal');
INSERT INTO `user` (`user_id`,`user_name`,`email`,`access_level`) VALUES ('U102','Dwight','dwight@office.com','Admin');
INSERT INTO `user` (`user_id`,`user_name`,`email`,`access_level`) VALUES ('U103','Michael','micael@office.com','Admin');
INSERT INTO `user` (`user_id`,`user_name`,`email`,`access_level`) VALUES ('U104','Jim','jim@office.com','Admin');
INSERT INTO `user` (`user_id`,`user_name`,`email`,`access_level`) VALUES ('U105','Pam','pam@office.com','Normal');
CREATE TABLE if not exists `products`(
`product_id` VARCHAR(50) NOT NULL,
`product_name` VARCHAR(50) NOT NULL,
`brand_name` VARCHAR(50) NOT NULL,
`price` INT
);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('1','soap','vivel',10);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('2','box','naman',15);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('3','shampoo','vivel',30);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('4','detergent','quarts',20);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('5','brush','pepsod',100);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('6','fridge','kelvinator',35);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('7','cooler','kelvinator',40);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('8','shirt','raymond',60);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('9','heater','kelvinator',70);
INSERT INTO `products` (`product_id`,`product_name`,`brand_name`,`price`) VALUES ('10','trouser','raymond',21);
```
## <font color='blue'> Please run the following cell. Don't edit anything </font>
```
%%sql
select * from user;
```
## <font color='blue'> Please run the following cell. Don't edit anything </font>
```
%%sql
select * from products;
```
## Question 1 (4 Marks)
Count the number of `user_id` with `Normal` access_level in the `user` table.
```
%%sql
select count(user_id) from user where access_level = 'Normal';
```
## Question 2 (3 Marks)
Print all the product information for `brand_name` `vivel` in the `products` table.
```
%%sql
select * from products where brand_name = 'vivel';
```
## Question 3 (3 Marks)
Print all the unique `brand_name` values from the `products` table.
```
%%sql
select distinct(brand_name) from products;
%%sql
select count(brand_name), brand_name from products group by brand_name;
```
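As a sketch, the `DISTINCT` / `GROUP BY` pattern from Question 3 can also be reproduced with Python's built-in `sqlite3` module (table abbreviated to a few illustrative rows):

```python
# Sketch: the DISTINCT and GROUP BY queries from Question 3 run through
# Python's stdlib sqlite3 instead of the %%sql magic. Data abbreviated.
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("CREATE TABLE products (product_id TEXT, product_name TEXT, brand_name TEXT, price INT)")
cur.executemany("INSERT INTO products VALUES (?, ?, ?, ?)", [
    ('1', 'soap', 'vivel', 10),
    ('3', 'shampoo', 'vivel', 30),
    ('6', 'fridge', 'kelvinator', 35),
    ('7', 'cooler', 'kelvinator', 40),
])

brands = [row[0] for row in cur.execute("SELECT DISTINCT brand_name FROM products ORDER BY brand_name")]
counts = dict(cur.execute("SELECT brand_name, COUNT(*) FROM products GROUP BY brand_name"))
print(brands)   # unique brands
print(counts)   # rows per brand
```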
```
import sys
import numpy as np
```
# Numpy
Numpy provides Python with a new data container, the `ndarray`, plus specialized functionality to manipulate it efficiently.
Talking about data manipulation in Python is synonymous with Numpy, and practically the whole scientific Python ecosystem is built on top of it. Numpy is the brick that has allowed building edifices as solid as Pandas, Matplotlib, Scipy, scikit-learn,...
**Index**
* [Why a new data container?](#Why-a-new-data-container?)
* [Data types](#Data-types)
* [Creating numpy arrays](#Creating-numpy-arrays)
* [Most common operations](#Most-common-operations)
* [Metadata and anatomy of an `ndarray`](#Metadata-and-anatomy-of-an-ndarray)
* [Indexing](#Indexing)
* [Handling special values](#Handling-special-values)
* [Subarrays, views and copies](#Subarrays,-views-and-copies)
* [How do axes work in an `ndarray`?](#How-do-axes-work-in-an-ndarray?)
* [Reshaping `ndarray`s](#Reshaping-ndarrays)
* [Broadcasting](#Broadcasting)
* [Structured `ndarray`s and `recarray`s](#Structured-ndarrays-and-recarrays)
* [Concatenating and splitting `ndarray`s](#Concatenating-and-splitting-ndarrays)
* [Mathematical functions, universal functions (*ufuncs*) and vectorization](#Mathematical-functions,-universal-functions-ufuncs-and-vectorization)
* [Statistics](#Statistics)
* [Sorting, searching and counting](#Sorting,-searching-and-counting)
* [Polynomials](#Polynomials)
* [Linear algebra](#Linear-algebra)
* [Manipulating `ndarray`s](#Manipulating-ndarrays)
* [Modules of interest inside numpy](#Modules-of-interest-inside-numpy)
* [Matrix computations](#Matrix-computations)
## Why a new data container?
In Python we already have several data containers out of the box: lists, tuples, dictionaries, sets,... why add one more?
For convenience!, despite the loss of flexibility. It is a trade-off.
* More efficient memory usage: For example, a list can hold objects of different types, which forces Python to store type information for every element in the list. An `ndarray`, on the other hand, holds homogeneous types, i.e. all elements are of the same type, so the type information only needs to be stored once regardless of how many elements the `ndarray` has.

***(image by Jake VanderPlas, taken [from GitHub](https://github.com/jakevdp/PythonDataScienceHandbook)).***
* Faster: For example, in a list whose elements have different types, Python must do extra work to check that the types are compatible with the operations we are performing. When working with an `ndarray` we already know this up front, so operations can be more efficient (besides, a lot of the functionality is written in C, C++, Cython, Fortran).
* Vectorized operations
* Extra functionality: Many linear algebra operations, fast Fourier transforms, basic statistics, histograms,...
* More convenient element access: More advanced indexing than with plain Python types
* ...
Memory usage
```
# WARNING: sys.getsizeof IS NOT RELIABLE
lista = list(range(5_000_000))
arr = np.array(lista, dtype=np.uint32)
print("5 million elements")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
print()
lista = list(range(100))
arr = np.array(lista, dtype=np.uint8)
print("100 elements")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
```
Operation speed
```
a = list(range(1000000))
%timeit sum(a)
print(sum(a))
a = np.array(a)
%timeit np.sum(a)
print(np.sum(a))
```
Vectorized operations
```
# Element-wise sum of two vectors
a = [1, 1, 1]
b = [3, 4, 3]
print(a + b)
print('Fail')
# Element-wise sum of two vectors
a = np.array([1, 1, 1])
b = np.array([3, 4, 3])
print(a + b)
print('\o/')
```
More convenient functionality
```
# cumulative sum
a = list(range(100))
print([sum(a[:i+1]) for i in a])
a = np.array(a)
print(a.cumsum())
```
More convenient element access
```
a = [[11, 12, 13],
[21, 22, 23],
[31, 32, 33]]
print('access to the first row: ', a[0])
print('access to the first column: ', a[:][0], ' Fail!!!')
a = np.array(a)
print('access to the first row: ', a[0])
print('access to the first column: ', a[:,0], ' \o/')
```
...
Recapping a bit.
***`ndarray`s are multidimensional, homogeneous containers with fixed-size elements and a predefined dimensionality.***
## Data types
Since arrays must be homogeneous, we have data types. Some of them are shown in the following table:
| Data type | Description |
|---------------|-------------|
| ``bool_``     | Boolean (True or False) stored as a byte |
| ``int_``      | Default integer type (same as C `long`; normally `int64` or `int32`)|
| ``intc``      | Identical to C ``int`` (normally `int32` or `int64`)|
| ``intp``      | Integer used for indexing (same as C `ssize_t`; normally `int32` or `int64`)|
| ``int8``      | Byte (-128 to 127)|
| ``int16``     | Integer (-32768 to 32767)|
| ``int32``     | Integer (-2147483648 to 2147483647)|
| ``int64``     | Integer (-9223372036854775808 to 9223372036854775807)|
| ``uint8``     | Unsigned integer (0 to 255)|
| ``uint16``    | Unsigned integer (0 to 65535)|
| ``uint32``    | Unsigned integer (0 to 4294967295)|
| ``uint64``    | Unsigned integer (0 to 18446744073709551615)|
| ``float_``    | Shorthand for ``float64``.|
| ``float16``   | Half precision float: one sign bit, 5 exponent bits, 10 mantissa bits|
| ``float32``   | Single precision float: one sign bit, 8 exponent bits, 23 mantissa bits|
| ``float64``   | Double precision float: one sign bit, 11 exponent bits, 52 mantissa bits|
| ``complex_``  | Shorthand for `complex128`.|
| ``complex64`` | Complex number, represented by two 32-bit floats|
| ``complex128``| Complex number, represented by two 64-bit floats|
It is possible to give a more detailed type specification, e.g. choosing *big endian* or *little endian* byte order. We will not cover that here.
The default type numpy uses when creating an *ndarray* is `np.float_`, unless we explicitly specify the type to use.
For example, an array of type `np.uint8` can hold the following values:
```
import itertools
for i, bits in enumerate(itertools.product((0, 1), repeat=8)):
print(i, bits)
```
That is, it can hold values ranging from 0 to 255 ($2^8$ values).
How many bytes will an `ndarray` of 10 elements whose data type is `np.int8` take?
```
a = np.arange(10, dtype=np.int8)
print(a.nbytes)
print(sys.getsizeof(a))
a = np.repeat(1, 100000).astype(np.int8)
print(a.nbytes)
print(sys.getsizeof(a))
```
## Creating numpy arrays
We can create numpy arrays in many ways.
* Numeric ranges
`np.arange`, `np.linspace`, `np.logspace`
* Homogeneous data
`np.zeros`, `np.ones`
* Diagonal elements
`np.diag`, `np.eye`
* From other, already created data structures
`np.array`
* From other numpy arrays
`np.empty_like`
* From files
`np.loadtxt`, `np.genfromtxt`,...
* From a scalar
`np.full`, `np.tile`,...
* From random values
`np.random.randint`, `np.random.rand`, `np.random.randn`,...
...
```
a = np.arange(10) # similar to range but returns an ndarray instead of a range object
print(a)
a = np.linspace(0, 1, 101)
print(a)
a_i = np.zeros((2, 3), dtype=int)
a_f = np.zeros((2, 3))
print(a_i)
print(a_f)
a = np.eye(3)
print(a)
a = np.array(
(
(1, 2, 3, 4, 5, 6),
(10, 20, 30, 40, 50, 60)
),
    dtype=float
)
print(a)
np.full((5, 5), -999)
np.random.randint(0, 50, 15)
```
<div class="alert alert-success">
<p>References:</p>
<p><a href="https://docs.scipy.org/doc/numpy/user/basics.creation.html#arrays-creation">array creation</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html#routines-array-creation">routines for array creation</a></p>
</div>
**Practice**
Remember you can always use `help`, `?`, `np.lookfor`,..., to get more information.
```
help(np.sum)
np.rad2deg?
np.lookfor("create array")
```
Have a look at how `np.repeat`, `np.empty_like`,..., work.
```
# Play area
%load ../../solutions/03_01_np_array_creacion.py
```
## Most common operations
```
a = np.random.rand(5, 2)
print(a)
a.sum()
a.sum(axis=0)
a.sum(axis=1)
a.ravel()
a.reshape(2, 5)
a.T
a.transpose()
a.mean()
a.mean(axis=1)
a.cumsum(axis=1)
```
<div class="alert alert-success">
<p>References:</p>
<p><a href="https://docs.scipy.org/doc/numpy/user/quickstart.html">Quick start tutorial</a></p>
</div>
**Practice**
Look at more methods of an `ndarray` and play around. If you don't understand something, ask:
```
dir(a)
# Play area
%load ../../solutions/03_02_np_operaciones_tipicas.py
```
## Metadata and anatomy of an `ndarray`
An `ndarray` is really a block of memory with extra information about how to interpret its contents. Dynamic memory (RAM) can be thought of as a linear run of bytes, which is why we need that extra information, above all `shape` and `strides`, to know how to form the `ndarray`.
This part will feel a bit more esoteric to the uninitiated, but I consider it necessary to better understand our new data structure and get more out of it.
```
a = np.random.randn(5000, 5000)
```
The number of dimensions of the `ndarray`
```
a.ndim
```
The number of elements along each dimension
```
a.shape
```
The number of elements
```
a.size
```
The data type of the elements
```
a.dtype
```
The number of bytes per element
```
a.itemsize
```
The number of bytes the `ndarray` occupies (the same as `size` times `itemsize`)
```
a.nbytes
```
The *buffer* holding the elements of the `ndarray`
```
a.data
```
Steps to take in each dimension when moving between elements
```
a.strides
```
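For a C-contiguous array, the strides follow directly from the shape and the element size, and they tell us the byte offset of any element (a small sketch, using a smaller array than the one above):

```python
# Sketch: strides of a C-contiguous float64 array and how they locate element (i, j).
import numpy as np

a = np.zeros((1000, 500))           # float64 -> itemsize is 8 bytes
print(a.strides)                    # (500 * 8, 8) = (4000, 8)

# Byte offset of element (i, j) inside the buffer:
i, j = 3, 7
offset = i * a.strides[0] + j * a.strides[1]
print(offset)
```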

***(image taken [from GitHub](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy)).***
More things
```
a.flags
```
Quick exercise: why does summing elements along one axis take less time than along the other, if it is a regular array?
```
%timeit a.sum(axis=0)
%timeit a.sum(axis=1)
```
Quick exercise: why is the result different now?
```
aT = a.T
%timeit aT.sum(axis=0)
%timeit aT.sum(axis=1)
print(aT.strides)
print(aT.flags)
print(np.repeat((1,2,3), 3))
print()
a = np.repeat((1,2,3), 3).reshape(3, 3)
print(a)
print()
print(a.sum(axis=0))
print()
print(a.sum(axis=1))
```
<div class="alert alert-success">
<p>References:</p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray">Internal memory layout of an ndarray</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/internals.html#multidimensional-array-indexing-order-issues">multidimensional array indexing order issues</a></p>
</div>
## Indexación
If you have already worked with indexing of Python structures, such as lists, tuples or strings, indexing in Numpy will feel very familiar.
For example, to keep things simple, let's create a 1D `ndarray`:
```
a = np.arange(10, dtype=np.uint8)
print(a)
print(a[:]) # access all the elements
print(a[:-1]) # all elements but the last
print(a[1:]) # all elements but the first
print(a[::2]) # the first, the third, the fifth,..., element
print(a[3]) # the fourth element
print(a[-1:-5:-1]) # ?
# Practice yourselves
```
For one-dimensional *ndarrays* it is exactly the same as with Python lists or tuples:
* The first element has index 0
* Negative indices count from the end
* slices with `[start:stop:step]`
With an `ndarray` of more dimensions things change with respect to pure Python:
```
a = np.random.randn(10, 2)
print(a)
a[1] # What will this give us?
a[1, 1] # To access a specific element we must give its full position in the ndarray
a[::3, 1]
```
With more than one dimension it is similar to lists, but the indices for the new dimensions are separated by commas.
<img src="../../images/03_03_arraygraphics_0.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a = np.arange(40).reshape(5, 8)
print(a)
a[2, -3]
```
To get more than one element we slice along each axis:
<img src="../../images/03_04_arraygraphics_1.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a[:3, :5]
```
How can we get the elements highlighted in this image?
<img src="../../images/03_06_arraygraphics_2_wo.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a[x:x ,x:x]
```
How can we get the elements highlighted in this image?
<img src="../../images/03_08_arraygraphics_3_wo.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a[x:x ,x:x]
```
How can we get the elements highlighted in this image?
<img src="../../images/03_10_arraygraphics_4_wo.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a[x:x ,x:x]
```
How can we get the elements highlighted in this image?
<img src="../../images/03_12_arraygraphics_5_wo.png" width=400px />
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a[x:x ,x:x]
```
Solutions to the above:
<img src="../../images/03_05_arraygraphics_2.png" width=200px />
<img src="../../images/03_07_arraygraphics_3.png" width=200px />
<img src="../../images/03_09_arraygraphics_4.png" width=200px />
<img src="../../images/03_11_arraygraphics_5.png" width=200px />
(images taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
**Fancy indexing**
With *fancy indexing* we can do things as varied as:
<img src="../../images/03_13_arraygraphics_6.png" width=300px />
<img src="../../images/03_14_arraygraphics_7.png" width=300px />
(images taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
That is, we can index using boolean `ndarray`s, or using lists of indices to extract specific elements in one go.
**WARNING: the moment we use *fancy indexing* we get back a new *ndarray* that does not necessarily preserve the original structure.**
For example, in the following case it does not return a two-dimensional *ndarray*, because the mask need not be regular; it therefore returns only the values that meet the criterion, as a vector (one-dimensional *ndarray*).
```
a = np.arange(10).reshape(2, 5)
print(a)
bool_indexes = (a % 2 == 0)
print(bool_indexes)
a[bool_indexes]
```
However, we can use it to modify the original *ndarray* based on a criterion while keeping the same shape.
```
a[bool_indexes] = 999
print(a)
```
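Besides boolean masks, lists of integer indices also count as fancy indexing, as the images above suggest. A small sketch:

```python
# Sketch: fancy indexing with integer index lists, complementing the boolean
# masks shown above.
import numpy as np

a = np.arange(10) * 10
picked = a[[1, 3, 3, 7]]        # repeats are allowed; the result is a new array
print(picked)

b = np.arange(12).reshape(3, 4)
corner = b[[0, 2]][:, [0, 3]]   # rows 0 and 2, then columns 0 and 3
print(corner)
```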
<div class="alert alert-success">
<p>References:</p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#arrays-indexing">array indexing</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#indexing-arrays">indexing arrays</a></p>
</div>
## Handling special values
`numpy` provides several special values: `np.nan`, `np.Inf`, `np.Infinity`, `np.inf`, `np.infty`,...
```
a = 1 / np.arange(10)
print(a)
a[0] == np.inf
a.max() # This is not what we want
a.mean() # This is not what we want
a[np.isfinite(a)].max()
a[-1] = np.nan
print(a)
a.mean()
np.isnan(a)
np.isfinite(a)
np.isinf(a) # you can also look at np.isneginf, np.isposinf
```
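numpy also ships nan-aware reductions that skip `NaN` directly, sparing the explicit mask (note they only skip `NaN`, not `inf`). A small sketch:

```python
# Sketch: nan-aware reductions ignore NaN without building an explicit mask.
import numpy as np

a = np.array([1.0, 2.0, np.nan, 4.0])
print(a.mean())        # nan: the plain mean propagates NaN
print(np.nanmean(a))   # 7/3: ignores the NaN
print(np.nanmax(a))    # 4.0
```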
`numpy` uses the IEEE standard for floating point arithmetic (IEEE 754). This means that *Not a Number* is not equivalent to *infinity*. Likewise, *positive infinity* is not equivalent to *negative infinity*. But *infinity* is equivalent to *positive infinity*.
```
1 < np.inf
1 < -np.inf
1 > -np.inf
1 == np.inf
1 < np.nan
1 > np.nan
1 == np.nan
```
## Subarrays, views and copies
**IMPORTANT!**
Views and copies: by default, `numpy` always returns views to avoid unnecessary memory growth. This behavior differs from pure Python, where slicing a list returns a copy. If we want a copy of an `ndarray` we must request it explicitly:
```
a = np.arange(10)
b = a[2:5]
print(a)
print(b)
b[0] = 222
print(a)
print(b)
```
This default behavior is actually very useful: when working with large datasets, we can access and process pieces of them without copying the original data buffer.
Sometimes we do need a copy. This is easily done with the `copy` method of *ndarrays*. The previous example using a copy instead of a view:
```
a = np.arange(10)
b = a[2:5].copy()
print(a)
print(b)
b[0] = 222
print(a)
print(b)
```
## How do axes work in an `ndarray`?
For example, what happens when we call `a.sum()`, `a.sum(axis=0)`, `a.sum(axis=1)`?
What if we have more than two dimensions?
Let's look at some examples:
```
a = np.arange(10).reshape(5,2)
a.shape
a.sum()
a.sum(axis=0)
a.sum(axis=1)
```

(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a = np.arange(9).reshape(3, 3)
print(a)
print(a.sum(axis=0))
print(a.sum(axis=1))
```

(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
```
a = np.arange(24).reshape(2, 3, 4)
print(a)
print(a.sum(axis=0))
print(a.sum(axis=1))
print(a.sum(axis=2))
```
For example, in the first case, `axis=0`, we take all elements along the first index and apply the operation for each combination of the elements of the other two axes. Done one at a time it would look like this:
```
print(a[:,0,0].sum(), a[:,0,1].sum(), a[:,0,2].sum(), a[:,0,3].sum())
print(a[:,1,0].sum(), a[:,1,1].sum(), a[:,1,2].sum(), a[:,1,3].sum())
print(a[:,2,0].sum(), a[:,2,1].sum(), a[:,2,2].sum(), a[:,2,3].sum())
```
Excluding the axis we are reducing over, the remaining dimensions are 3 x 4 (second and third dimensions), so the result has 12 elements.
For the case of `axis=1`:
```
print(a[0,:,0].sum(), a[0,:,1].sum(), a[0,:,2].sum(), a[0,:,3].sum())
print(a[1,:,0].sum(), a[1,:,1].sum(), a[1,:,2].sum(), a[1,:,3].sum())
```
Excluding the axis we are reducing over, the remaining dimensions are 2 x 4 (first and third dimensions), so the result has 8 elements.
For the case of `axis=2`:
```
print(a[0,0,:].sum(), a[0,1,:].sum(), a[0,2,:].sum())
print(a[1,0,:].sum(), a[1,1,:].sum(), a[1,2,:].sum())
```
Excluding the axis we are reducing over, the remaining dimensions are 2 x 3 (first and second dimensions), so the result has 6 elements.
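To summarize the pattern, a quick sketch checking that reducing over an axis simply removes that axis from the result's shape:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
# summing over an axis removes that axis from the result's shape
assert a.sum(axis=0).shape == (3, 4)   # 12 elements
assert a.sum(axis=1).shape == (2, 4)   # 8 elements
assert a.sum(axis=2).shape == (2, 3)   # 6 elements
# and the grand total matches summing everything at once
assert a.sum(axis=0).sum() == a.sum()
```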
## Reshaping `ndarray`s
We can change the shape of an `ndarray` using the `reshape` method. For example, if we want to place the numbers 1 through 9 in a $3 \times 3$ grid we can do it as follows:
```
a = np.arange(1, 10).reshape(3, 3)
```
For the reshape not to raise errors, the sizes of the initial and final `ndarray`s must be compatible.
```
# For example, will the following raise an error?
a = np.arange(1, 10).reshape(5, 2)
```
Another common reshaping pattern is the conversion of a 1-D `ndarray` into a 2-D one by adding a new axis. We can do this, again, with the `reshape` method or with `numpy.newaxis`.
```
# For example, a 2D array with a single row
a = np.arange(3)
a1_2D = a.reshape(1,3)
a2_2D = a[np.newaxis, :]
print(a1_2D)
print(a1_2D.shape)
print(a2_2D)
print(a2_2D.shape)
# For example, a 2D array with a single column
a = np.arange(3)
a1_2D = a.reshape(3,1)
a2_2D = a[:, np.newaxis]
print(a1_2D)
print(a1_2D.shape)
print(a2_2D)
print(a2_2D.shape)
```
## Broadcasting
It is possible to perform operations on *ndarrays* of different sizes. In some cases `numpy` can transform these *ndarrays* automatically so that they all end up with the same shape. This automatic conversion is called **broadcasting**.
Broadcasting rules
To determine the interaction between two `ndarray`s, NumPy follows a strict set of rules:
* Rule 1: If two `ndarray`s differ in their number of dimensions, the shape of the one with fewer dimensions is padded with 1's on its left (leading) side.
* Rule 2: If the shapes of two `ndarray`s do not match in some dimension, the `ndarray` whose shape equals 1 in that dimension is 'stretched' to match the shape of the other `ndarray`.
* Rule 3: If in any dimension the sizes disagree and neither of them equals 1, an error is raised.
In summary, when operating on two *ndarrays*, `numpy` compares their shapes element by element. It starts with the trailing (rightmost) dimensions and works its way left. Two dimensions are compatible when
both are equal, or
one of them is 1.
If these conditions are not met, a `ValueError: operands could not be broadcast together` exception is raised, indicating that the *ndarrays* have incompatible shapes. The size of the resulting *ndarray* is the maximum size along each dimension of the input *ndarrays*.
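The three rules can be checked without allocating any arrays using `np.broadcast_shapes` (available since NumPy 1.20):

```python
import numpy as np

# Rule 1 + 2: (3,) is padded on the left to (1, 3) and then stretched to (4, 3)
print(np.broadcast_shapes((4, 3), (3,)))   # (4, 3)
# both operands get stretched along their size-1 dimension
print(np.broadcast_shapes((4, 1), (3,)))   # (4, 3)
# Rule 3: sizes 3 and 2 disagree and neither is 1 -> error
try:
    np.broadcast_shapes((4, 3), (2,))
except ValueError as e:
    print('incompatible shapes:', e)
```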
More graphically:

(image taken from [here](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy))
```
a: 4 x 3 a: 4 x 3 a: 4 x 1
b: 4 x 3 b: 3 b: 3
result: 4 x 3 result: 4 x 3 result: 4 x 3
```
Let's try to reproduce the schemes from the image above.
```
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.repeat((0, 1, 2), 4).reshape(3,4).T
print(a)
print(b)
print(a + b)
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
a = np.array((0, 10, 20, 30)).reshape(4,1)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
```
<div class="alert alert-success">
<p>References:</p>
<p><a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">Basic broadcasting</a></p>
<p><a href="http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc">Broadcasting more in depth</a></p>
</div>
## Structured `ndarrays` and `recarray`s
Earlier we said that `ndarray`s must be homogeneous, but that was slightly inaccurate: we can actually have `ndarray`s holding different types. These are called structured `ndarray`s and `recarray`s.
Let's see some examples:
```
nombre = ['paca', 'pancracio', 'nemesia', 'eulogio']
edad = [72, 68, 86, 91]
a = np.array(np.zeros(4), dtype=[('name', '<S10'), ('age', int)])  # np.int was removed from NumPy
a['name'] = nombre
a['age'] = edad
print(a)
```
We can access the columns by name
```
a['name']
```
All elements except the first
```
a['age'][1:]
```
A `recarray` is similar, but we can access the fields with dot notation.
```
ra = a.view(np.recarray)
ra.name
```
This adds a bit of access *overhead*, since some extra operations are performed.
## Concatenating and splitting `ndarrays`
We can combine multiple *ndarrays* into one, or split one into several.
For concatenation we can use `np.concatenate`, `np.hstack`, `np.vstack`, `np.dstack`. Examples:
```
a = np.array([1, 1, 1, 1])
b = np.array([2, 2, 2, 2])
```
We can concatenate those two arrays using `np.concatenate`:
```
np.concatenate([a, b])
```
We are not limited to one-dimensional *ndarrays*:
```
np.concatenate([a.reshape(2, 2), b.reshape(2, 2)])
```
We can choose the axis along which to concatenate:
```
np.concatenate([a.reshape(2, 2), b.reshape(2, 2)], axis=1)
```
We can concatenate more than two arrays:
```
c = [3, 3, 3, 3]
np.concatenate([a, b, c])
```
To be more explicit we can use `np.hstack` or `np.vstack`. The `h` and `v` stand for horizontal and vertical, respectively.
```
np.hstack([a, b])
np.vstack([a, b])
```
To concatenate along the third dimension we use `np.dstack`.
Just as we can concatenate, we can split *ndarrays* using `np.split`, `np.hsplit`, `np.vsplit`, `np.dsplit`.
```
# Let's try to understand how splitting works by experimenting...
```
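A minimal sketch of how the split functions behave (example data and split points chosen arbitrarily):

```python
import numpy as np

a = np.arange(8)
left, right = np.split(a, 2)            # two equal halves
print(left, right)                      # [0 1 2 3] [4 5 6 7]
head, mid, tail = np.split(a, [2, 5])   # split at indices 2 and 5

m = np.arange(16).reshape(4, 4)
top, bottom = np.vsplit(m, 2)           # split along rows
left_m, right_m = np.hsplit(m, 2)       # split along columns
print(top.shape, left_m.shape)          # (2, 4) (4, 2)
```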
## Mathematical functions, universal functions (*ufuncs*) and vectorization
What is a *ufunc*?
From the [official NumPy documentation](http://docs.scipy.org/doc/numpy/reference/ufuncs.html):
> A universal function (or ufunc for short) is a function that operates on ndarrays in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a “**vectorized**” wrapper for a function that takes a **fixed number of scalar inputs** and produces a **fixed number of scalar outputs**.
A *ufunc*, or *universal function*, acts on every element of an `ndarray`, i.e. it applies the operation to each element of the `ndarray`. This is known as vectorization.
For example, let's compare squaring a list in pure Python versus in `numpy`:
```
# In pure Python
a_list = list(range(10000))
%timeit [i ** 2 for i in a_list]
# In numpy
an_arr = np.arange(10000)
%timeit np.power(an_arr, 2)
a = np.arange(10)
np.power(a, 2)
```
The function above squares each element of the `ndarray`.
`numpy` ships with a large number of *ufuncs*, and `scipy` (which we will not cover) provides many more, far more specialized *ufuncs*.
In `numpy` we have, for example:
* Trigonometric functions: `sin`, `cos`, `tan`, `arcsin`, `arccos`, `arctan`, `hypot`, `arctan2`, `degrees`, `radians`, `unwrap`, `deg2rad`, `rad2deg`
```
# let's play around with these
```
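For instance, a quick sketch with the trigonometric functions (values chosen arbitrarily):

```python
import numpy as np

angles_deg = np.array([0, 30, 45, 90])
angles_rad = np.radians(angles_deg)       # same as np.deg2rad
print(np.sin(angles_rad))                 # approx. [0.  0.5  0.707  1.]
print(np.hypot(3, 4))                     # 5.0, the Euclidean hypotenuse
print(np.degrees(np.arctan2(1, 1)))       # 45.0
```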
* Hyperbolic functions: `sinh`, `cosh`, `tanh`, `arcsinh`, `arccosh`, `arctanh`
```
# let's play around with these
```
* Rounding: `around`, `round_`, `rint`, `fix`, `floor`, `ceil`, `trunc`
```
# let's play around with these
```
* Sums, products, differences: `prod`, `sum`, `nansum`, `cumprod`, `cumsum`, `diff`, `ediff1d`, `gradient`, `cross`, `trapz`
```
# let's play around with these
```
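A short sketch of the sum/product/difference family (sample data chosen arbitrarily):

```python
import numpy as np

a = np.array([1, 2, 4, 7])
print(np.cumsum(a))   # [ 1  3  7 14]
print(np.diff(a))     # [1 2 3]
print(np.prod(a))     # 56
print(np.nansum([1.0, np.nan, 2.0]))  # 3.0, NaNs treated as zero
```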
* Exponents and logarithms: `exp`, `expm1`, `exp2`, `log`, `log10`, `log2`, `log1p`, `logaddexp`, `logaddexp2`
```
# let's play around with these
```
* Other special functions: `i0`, `sinc`
```
# let's play around with these
```
* Floating-point manipulation: `signbit`, `copysign`, `frexp`, `ldexp`
```
# let's play around with these
```
* Arithmetic operations: `add`, `reciprocal`, `negative`, `multiply`, `divide`, `power`, `subtract`, `true_divide`, `floor_divide`, `fmod`, `mod`, `modf`, `remainder`
```
# let's play around with these
```
* Complex number handling: `angle`, `real`, `imag`, `conj`
```
# let's play around with these
```
* Miscellaneous: `convolve`, `clip`, `sqrt`, `square`, `absolute`, `fabs`, `sign`, `maximum`, `minimum`, `fmax`, `fmin`, `nan_to_num`, `real_if_close`, `interp`
...
```
# let's play around with these
```
<div class="alert alert-success">
<p>References:</p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/ufuncs.html">Ufuncs</a></p>
</div>
## Statistics
* Order statistics: `amin`, `amax`, `nanmin`, `nanmax`, `ptp`, `percentile`, `nanpercentile`
* Means and variances: `median`, `average`, `mean`, `std`, `var`, `nanmedian`, `nanmean`, `nanstd`, `nanvar`
* Correlations: `corrcoef`, `correlate`, `cov`
* Histograms: `histogram`, `histogram2d`, `histogramdd`, `bincount`, `digitize`
...
```
# let's play around with these
```
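A small sketch of the statistics functions on a synthetic sample (seed and distribution parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10_000)
print(np.mean(x), np.std(x))              # close to 5.0 and 2.0
print(np.percentile(x, [25, 50, 75]))     # the three quartiles
counts, edges = np.histogram(x, bins=10)  # raw counts plus bin edges
print(counts.sum())                       # 10000, every sample lands in a bin
```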
## Sorting, searching and counting
* Sorting: `sort`, `lexsort`, `argsort`, `ndarray.sort`, `msort`, `sort_complex`, `partition`, `argpartition`
* Searching: `argmax`, `nanargmax`, `argmin`, `nanargmin`, `argwhere`, `nonzero`, `flatnonzero`, `where`, `searchsorted`, `extract`
* Counting: `count_nonzero`
...
```
# let's play around with these
```
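A brief sketch of sorting, searching and counting (sample array chosen arbitrarily):

```python
import numpy as np

a = np.array([3, 1, 4, 1, 5])
print(np.sort(a))                # [1 1 3 4 5]
print(np.argsort(a))             # indices that would sort a
print(np.argmax(a))              # 4, the position of the 5
print(np.where(a > 2))           # indices where the condition holds
print(np.count_nonzero(a > 2))   # 3
```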
## Polynomials
* Power series: `numpy.polynomial.polynomial`
* Polynomial class: `np.polynomial.Polynomial`
* Basics: `polyval`, `polyval2d`, `polyval3d`, `polygrid2d`, `polygrid3d`, `polyroots`, `polyfromroots`
* Fitting: `polyfit`, `polyvander`, `polyvander2d`, `polyvander3d`
* Calculus: `polyder`, `polyint`
* Algebra: `polyadd`, `polysub`, `polymul`, `polymulx`, `polydiv`, `polypow`
* Miscellaneous: `polycompanion`, `polydomain`, `polyzero`, `polyone`, `polyx`, `polytrim`, `polyline`
* Other polynomial families: `Chebyshev`, `Legendre`, `Laguerre`, `Hermite`
...
```
# let's play around with these
```
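A minimal sketch of the `Polynomial` class (coefficients chosen arbitrarily, given in increasing powers):

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1, -3, 2])       # 1 - 3x + 2x**2
print(p.roots())                 # roots at 0.5 and 1.0
print(p(0))                      # 1.0
print(p.deriv())                 # derivative: -3 + 4x
```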
## Linear algebra
In what follows, anything that lives inside `numpy.linalg` is prefixed with `LA`.
* Vector and matrix products: `dot`, `vdot`, `inner`, `outer`, `matmul`, `tensordot`, `einsum`, `LA.matrix_power`, `kron`
* Decompositions: `LA.cholesky`, `LA.qr`, `LA.svd`
* Eigenvalues: `LA.eig`, `LA.eigh`, `LA.eigvals`, `LA.eigvalsh`
* Norms and other numbers: `LA.norm`, `LA.cond`, `LA.det`, `LA.matrix_rank`, `LA.slogdet`, `trace`
* Solving equations and inverting matrices: `LA.solve`, `LA.tensorsolve`, `LA.lstsq`, `LA.inv`, `LA.pinv`, `LA.tensorinv`
`scipy` offers more related functionality.
```
# let's play around with these
```
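A short linear-algebra sketch (matrix and vector chosen arbitrarily):

```python
import numpy as np
from numpy import linalg as LA

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])
x = LA.solve(A, b)               # solve A @ x = b
assert np.allclose(A @ x, b)
print(LA.det(A))                 # 5.0
print(LA.matrix_rank(A))         # 2
w, v = LA.eigh(A)                # eigenvalues/vectors of a symmetric matrix
```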
## Manipulating `ndarrays`
`tile`, `hstack`, `vstack`, `dstack`, `hsplit`, `vsplit`, `dsplit`, `repeat`, `reshape`, `ravel`, `resize`,...
```
# let's play around with these
```
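A brief sketch of a few manipulation functions (shapes chosen arbitrarily):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.tile(a, (2, 1)).shape)       # (4, 3): stack two copies vertically
print(np.repeat(a, 2, axis=1).shape)  # (2, 6): repeat each column twice
print(a.ravel())                      # flattened: [0 1 2 3 4 5]
print(np.reshape(a, (3, 2)).shape)    # (3, 2)
```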
## Modules of interest inside `numpy`
Inside `numpy` we can find modules for:
* Random numbers: `np.random`
* FFT: `np.fft`
* Masked arrays: `np.ma`
* Polynomials: `np.polynomial`
* Linear algebra: `np.linalg`
* Matrices: `np.matlib`
* ...
All this functionality can be extended and improved with `scipy`.
## Matrix computations
```
a1 = np.repeat(2, 9).reshape(3, 3)
a2 = np.tile(2, (3, 3))
a3 = np.ones((3, 3), dtype=int) * 2  # np.int was removed from NumPy
print(a1)
print(a2)
print(a3)
b = np.arange(1,4)
print(b)
print(a1.dot(b))
print(np.dot(a2, b))
print(a3 @ b) # only python version >= 3.5
```
We did the above using *ndarrays*, but `numpy` also offers a `matrix` data structure.
```
a_mat = np.matrix(a1)
a_mat
b_mat = np.matrix(b)
a_mat @ b_mat
a_mat @ b_mat.T
```
As we can see, with *ndarrays* we don't need to be strict about dimensions, whereas with `np.matrix` we must perform valid matrix operations (e.g. with compatible dimensions).
For practical purposes, *ndarrays* can generally be used like `matrix` once these small differences are understood.
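Note that `np.matrix` is discouraged in current NumPy; plain *ndarrays* with the `@` operator cover the same use cases:

```python
import numpy as np

A = np.full((3, 3), 2)
b = np.arange(1, 4)
print(A @ b)                 # 1-D result: [12 12 12]
print(A @ b.reshape(3, 1))   # explicit column vector, result has shape (3, 1)
```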
### Processing Echosounder Data from Ocean Observatories Initiative with `echopype`.
We download a file from the OOI website. We pick August 21, 2017, the day of the solar eclipse, which disrupted the usual behavior patterns of marine life.
```
# downloading the file
!wget https://rawdata.oceanobservatories.org/files/CE04OSPS/PC01B/ZPLSCB102_10.33.10.143/OOI-D20170821-T163049.raw
filename = 'OOI-D20170821-T163049.raw'
```
**Converting from Raw to Standardized NetCDF Format**
```
# import as part of a submodule
import os

from echopype.convert import ConvertEK60
data_tmp = ConvertEK60(filename)
data_tmp.raw2nc()
os.remove(filename)
```
**Calibrating, Denoising, Mean Volume Backscatter Strength**
```
from echopype.model import EchoData
data = EchoData(filename[:-4]+'.nc')
data.calibrate() # Calibration and echo-integration
data.remove_noise(save=True) # Save denoised Sv to FILENAME_Sv_clean.nc
data.get_MVBS(save=True)
```
**Visualizing the Result**
```
%matplotlib inline
data.MVBS.MVBS.sel(frequency=200000).plot(x='ping_time',cmap = 'jet')
```
**Processing Multiple Files**
To process multiple files from the OOI website we need to scrape the names of the files available there. We will use the `Beautiful Soup` package for that.
```
!conda install --yes beautifulsoup4
from bs4 import BeautifulSoup
from urllib.request import urlopen
path = 'https://rawdata.oceanobservatories.org/files/CE04OSPS/PC01B/ZPLSCB102_10.33.10.143/'
response = urlopen(path)
soup = BeautifulSoup(response.read(), "html.parser")
urls = [path+'/'+item for item in soup.find_all(text=True) if '.raw' in item]
from datetime import datetime
```
Specify range:
```
start_time = '20170821-T000000'
end_time = '20170822-T235959'
# convert the times to datetime format
start_datetime = datetime.strptime(start_time,'%Y%m%d-T%H%M%S')
end_datetime = datetime.strptime(end_time,'%Y%m%d-T%H%M%S')
# function to check whether a date string falls within the range
def in_range(date_str, start_time, end_time):
    date = datetime.strptime(date_str, '%Y%m%d-T%H%M%S')
    start = datetime.strptime(start_time, '%Y%m%d-T%H%M%S')
    end = datetime.strptime(end_time, '%Y%m%d-T%H%M%S')
    return start <= date <= end
# identify the list of urls in range
range_urls = []
for url in urls:
date_str = url[-20:-4]
if in_range(date_str, start_time, end_time):
range_urls.append(url)
range_urls
rawnames = [url.split('/')[-1] for url in range_urls]
ls
import os
```
**Downloading the Files**
```
# Download the files
import requests
rawnames = []
for url in range_urls:
r = requests.get(url, allow_redirects=True)
    rawnames.append(url.split('/')[-1])
    open(url.split('/')[-1], 'wb').write(r.content)
!pip install echopype
ls
```
**Converting from Raw to Standardized NetCDF Format**
```
# import as part of a submodule
from echopype.convert import ConvertEK60
for filename in rawnames:
data_tmp = ConvertEK60(filename)
data_tmp.raw2nc()
os.remove(filename)
#ls
```
**Calibrating, Denoising, Mean Volume Backscatter Strength**
```
# calibrate and denoise
from echopype.model import EchoData
for filename in rawnames:
data = EchoData(filename[:-4]+'.nc')
data.calibrate() # Calibration and echo-integration
data.remove_noise(save=False) # Save denoised Sv to FILENAME_Sv_clean.nc
data.get_MVBS(save=True)
os.remove(filename[:-4]+'.nc')
os.remove(filename[:-4]+'_Sv.nc')
```
**Opening and Visualizing the Results in Parallel**
Now that all files are in an appropriate format, we can open and visualize them in parallel. For that we will need to install the `dask` parallelization library.
```
!conda install --yes dask
import xarray as xr
res = xr.open_mfdataset('*MVBS.nc')
import matplotlib.pyplot as plt
plt.figure(figsize = (15,5))
res.MVBS.sel(frequency=200000).plot(x='ping_time',cmap = 'jet')
```
# Book-Crossing Recommendation System
> Book recommender system on book crossing dataset using surprise SVD and NMF models
- toc: true
- badges: true
- comments: true
- categories: [Surprise, SVD, NMF, Book]
- author: "<a href='https://github.com/tttgm/fellowshipai'>Tom McKenzie</a>"
- image:
## Setup
```
!pip install -q git+https://github.com/sparsh-ai/recochef.git
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pickle
from sklearn import model_selection
import warnings
warnings.filterwarnings('ignore')
plt.style.use('seaborn-white')
plt.rcParams.update({'font.size': 15})
%matplotlib inline
from recochef.datasets.bookcrossing import BookCrossing
```
## Load the dataset
```
bookcrossing = BookCrossing()
users = bookcrossing.load_users()
books = bookcrossing.load_items()
book_ratings = bookcrossing.load_interactions()
users.head()
books.head()
book_ratings.head()
print(f'Users: {len(users)}\nBooks: {len(books)}\nRatings: {len(book_ratings)}')
```
## EDA and Data cleaning
### Users
```
users.describe(include='all').T
```
The age range goes from 0 to 244 years old! Obviously this cannot be correct; I'll set all ages below 5 or above 100 to NaN to keep them realistic.
```
users.loc[(users.AGE<5) | (users.AGE>100), 'AGE'] = np.nan
u = users.AGE.value_counts().sort_index()
plt.figure(figsize=(20, 10))
plt.bar(u.index, u.values)
plt.xlabel('Age')
plt.ylabel('counts')
plt.show()
```
Next, we can expand the 'Location' field to break it up into 'City', 'State', and 'Country'.
> Note: Used the pandas `Series.str.split` method, as its 'expand' parameter handles None cases
```
user_location_expanded = users.LOCATION.str.split(',', n=2, expand=True)
user_location_expanded.columns = ['CITY', 'STATE', 'COUNTRY']
users = users.join(user_location_expanded)
users.COUNTRY.replace('', np.nan, inplace=True)
users.drop(columns=['LOCATION'], inplace=True)
users.head()
```
### Books
```
books.head(2)
books.describe(include='all').T
# Convert years to float
books.YEAR = pd.to_numeric(books.YEAR, errors='coerce')
# Replace all years of zero with NaN
books.YEAR.replace(0, np.nan, inplace=True)
yr = books.YEAR.value_counts().sort_index()
yr = yr.where(yr>5) # filter out counts less than 5
plt.figure(figsize=(20, 10))
plt.bar(yr.index, yr.values)
plt.xlabel('Year of Publication')
plt.ylabel('counts')
plt.show()
```
Note that in the plot above we filtered out counts less than 5, as there are a few books in the dataset with publication years in the 1300s, and a few in the future (?!). The plot above shows the general trend that more recent books are much more frequent.
Let's take a look at some of those 'outlier' books. Maybe we'll even keep them as a separate dataset so we can filter them out if we need to later in the analysis. We'll leave them in for now, and then figure out how to handle them once we have more info later on.
```
historical_books = books[books.YEAR<1900] # create df of old books
books_from_the_future = books[books.YEAR>2018] # create df of books with publication yrs in the future!
hist_books_mini = historical_books[['TITLE', 'YEAR']]
future_books_mini = books_from_the_future[['TITLE', 'YEAR']]
print(f'Historical books:\n{hist_books_mini}')
print('\n')
print(f'Future books:\n{future_books_mini}')
```
I think we can probably omit the 'historical_books' as they may potentially skew the model and do not seem to have much relevance to the wider userbase.
Some of the 'future' books actually appear to be errors (e.g. Alice in Wonderland, Edgar Allan Poe, etc.)... Perhaps they were supposed to be e.g. 1950 instead of 2050? However, instead of investigating this further, since there are <20 such books I will simply remove them from the 'books' table.
```
print(f'Length of books dataset before removal: {len(books)}')
books = books.loc[~(books.ITEMID.isin(historical_books.ITEMID))] # remove historical books
books = books.loc[~(books.ITEMID.isin(books_from_the_future.ITEMID))] # remove historical books
print(f'Length of books dataset after removal: {len(books)}')
```
We clean up the ampersand formatting in the Publisher field.
```
books.PUBLISHER = books.PUBLISHER.str.replace('&amp;', '&', regex=False)
books.head()
```
Check that there are no duplicated book entries.
```
uniq_books = books.ITEMID.nunique()
all_books = books.ITEMID.count()
print(f'No. of unique books: {uniq_books} | All book entries: {all_books}')
```
Let's look at the most frequent Publishing houses in the dataset.
```
top_publishers = books.PUBLISHER.value_counts()[:10]
print(f'The 10 publishers with the most entries in the books table are:\n{top_publishers}')
```
What about authors with the most entries?
```
top_authors = books.AUTHOR.value_counts()[:10]
print(f'The 10 authors with the most entries in the books table are:\n{top_authors}')
```
We should search for empty or NaN values in these fields too.
```
empty_string_publisher = books[books.PUBLISHER == ''].PUBLISHER.count()
nan_publisher = books.PUBLISHER.isnull().sum()
print(f'There are {empty_string_publisher} entries with empty strings, and {nan_publisher} NaN entries in the Publisher field')
```
Great - no empty strings in the Publisher field, and only 2 NaNs.
```
empty_string_author = books[books.AUTHOR == ''].AUTHOR.count()
nan_author = books.AUTHOR.isnull().sum()
print(f'There are {empty_string_author} entries with empty strings, and {nan_author} NaN entries in the Author field')
```
Cool, only 1 NaN in the Author field.
Let's look at the titles.
```
top_titles = books.TITLE.value_counts()[:10]
print(f'The 10 book titles with the most entries in the books table are:\n{top_titles}')
```
This is actually quite an important observation. Although all of the ISBN entries are *unique* in the 'books' dataframe, different *forms* of the **same** book will have different ISBNs - i.e. paperback, e-book, etc. Therefore, we can see that some books have multiple ISBN entries (e.g. Jane Eyre has 19 different ISBNs, each corresponding to a different version of the book).
Let's take a look at, for example, the entries for 'Jane Eyre'.
```
books[books.TITLE=='Jane Eyre']
```
It looks like each ISBN assigned to the book 'Jane Eyre' has different Publisher and Year of Publication values also.
It might be more useful for our model if we simplified this to give each book a *unique* identifier, independent of the book format, as our recommendations will be for a book, not a specific version of a book. Therefore, all values in the Jane Eyre example above would stay the same, except all of the Jane Eyre entries would additionally be assigned a *unique ISBN* code as a new field.
**Will create this more unique identifier under the field name 'UNIQUE_ITEMIDS'. Note that entries with only a single ISBN number will be left the same. However, will need to do this after joining to the other tables in the dataset, as some ISBNs in the 'book-rating' table may be removed if done prior.**
### Interactions
```
book_ratings.head()
book_ratings.describe(include='all').T
book_ratings.dtypes
```
The data types already look good. Remember that the ISBN numbers may contain letters, and so should be left as strings.
Which users contribute the most ratings?
```
super_users = book_ratings.groupby('USERID').ITEMID.count().sort_values(ascending=False)
print(f'The 20 users with the most ratings:\n{super_users[:20]}')
```
Wow! User \#11676 has almost twice as many ratings as the next highest user! All of the top 20 users have thousands of ratings, which seems like a lot, although maybe I'm just a slow reader...
Let's see how they are distributed.
```
# user distribution - users with more than 50 ratings removed
user_hist = super_users.where(super_users<50)
user_hist.hist(bins=30)
plt.xlabel('No. of ratings')
plt.ylabel('count')
plt.show()
```
It looks like **_by far_** the most frequent events are users with only 1 or 2 rating entries. We can see that the 'super users' with thousands of ratings are significant outliers.
This becomes clear if we make the same histogram with a cutoff for users with a minimum of 1000 ratings.
```
# only users with more than 1000 ratings
super_user_hist = super_users.where(super_users>1000)
super_user_hist.hist(bins=30)
plt.xlabel('No. of ratings (min. 1000)')
plt.ylabel('count')
plt.show()
```
Let's see what the distribution of **ratings** looks like.
```
rtg = book_ratings.RATING.value_counts().sort_index()
plt.figure(figsize=(10, 5))
plt.bar(rtg.index, rtg.values)
plt.xlabel('Rating')
plt.ylabel('counts')
plt.show()
```
Seems like most of the entries have a rating of zero!
After doing some research on the internet regarding this (and similar) datasets, it appears that the rating scale is actually from 1 to 10, and a 0 indicates an 'implicit' rather than an 'explicit' rating. An implicit rating represents an interaction (may be positive or negative) between the user and the item. Implicit interactions usually need to be handled differently from explicit ones.
For the modeling step we'll only be looking at *explicit* ratings, and so the 0 rating entry rows will be removed.
```
print(f'Size of book_ratings before removing zero ratings: {len(book_ratings)}')
book_ratings = book_ratings[book_ratings.RATING != 0]
print(f'Size of book_ratings after removing zero ratings: {len(book_ratings)}')
```
By removing the implicit ratings we have reduced our sample size by more than half.
Let's look at how the ratings are distributed again.
```
rtg = book_ratings.RATING.value_counts().sort_index()
plt.figure(figsize=(10, 5))
plt.bar(rtg.index, rtg.values)
plt.xlabel('Rating')
plt.ylabel('counts')
plt.show()
```
This is much more clear! Now we can see that 8 is the most frequent rating, while users tend to give ratings > 5, with very few low ratings given.
### Merge
First, we'll join the 'books' table to the 'book_ratings' table on the ISBN field.
```
print(f'Books table size: {len(books)}')
print(f'Ratings table size: {len(book_ratings)}')
books_with_ratings = book_ratings.join(books.set_index('ITEMID'), on='ITEMID')
print(f'New table size: {len(books_with_ratings)}')
```
Let's take a look at the new table.
```
books_with_ratings.head()
print(f'There are {books_with_ratings.TITLE.isnull().sum()} books with no title/author information.')
print(f'This represents {books_with_ratings.TITLE.isnull().sum()/len(books_with_ratings)*100:.2f}% of the ratings dataset.')
```
There seem to be quite a few ISBNs in the ratings table that did not match an ISBN in the books table, almost 9% of all entries!
There isn't much we can do about that, but we should remove them from the dataset: without a match we cannot access the title of the book to make a recommendation, even if the model could use them.
```
books_with_ratings.info()
```
It looks like the ```YEAR``` field contains the most NaN entries, while ```USERID```, ```ITEMID```, and ```RATING``` are complete. The ```TITLE```, ```AUTHOR```, and ```PUBLISHER``` fields contain approximately the same number of missing entries.
We'll choose to remove rows for which the ```TITLE``` is empty, as this is the most crucial piece of data needed to identify the book.
```
books_with_ratings.dropna(subset=['TITLE'], inplace=True) # remove rows with missing title/author data
```
Let's see which books have the highest **cumulative** book rating values.
```
cm_rtg = books_with_ratings.groupby('TITLE').RATING.sum()
cm_rtg = cm_rtg.sort_values(ascending=False)[:10]
idx = cm_rtg.index.tolist() # Get sorted book titles
vals = cm_rtg.values.tolist() # Get corresponding cm_rtg values
plt.figure(figsize=(10, 5))
plt.bar(range(len(idx)), vals)
plt.xticks(range(len(idx)), idx, rotation='vertical')
plt.ylabel('cumulative rating score')
plt.show()
```
This seems about right as it combines the total number of ratings with the score given, so these are all really popular book titles.
What about the highest **average ratings** (with a minimum of at least 50 ratings received)?
```
cutoff = books_with_ratings.TITLE.value_counts()
mean_rtg = books_with_ratings[books_with_ratings.TITLE.isin(cutoff[cutoff>50].index)].groupby('TITLE')['RATING'].mean()
mean_rtg.sort_values(ascending=False)[:10] # show only top 10
```
This looks perfectly reasonable. The Harry Potter and Lord of the Rings books rate extremely highly, as expected.
How about the **lowest-rated** books?
```
mean_rtg.sort_values(ascending=False)[-10:] # bottom 10 only
```
Seems like the *lowest average* rating in the dataset is only a 4.39 - and all the rest of the books have average ratings higher than 5.
I haven't heard of any of these books, so I can't really comment on if they seem correct here.
**Now I'd like to tackle the challenge of the same book potentially having multiple ISBN numbers (for the different formats it is available in). We should clean that up here before we add the 'user' table.**
### Single ISBN per book
Restrict books to a "single ISBN per book" (regardless of format)
Let's look again at the book titles which have the most associated ISBN numbers.
```
books_with_ratings.groupby('TITLE').ITEMID.nunique().sort_values(ascending=False)[:10]
multiple_isbns = books_with_ratings.groupby('TITLE').ITEMID.nunique()
multiple_isbns.value_counts()
```
We can see that the vast majority of books have only 1 associated ISBN number, but quite a few have multiple ISBNs. We want to create a ```UNIQUE_ITEMIDS``` field so that a single book has only 1 identifier when fed to the recommendation model.
```
has_mult_isbns = multiple_isbns.where(multiple_isbns>1)
has_mult_isbns.dropna(inplace=True) # remove NaNs, which in this case is books with a single ISBN number
print(f'There are {len(has_mult_isbns)} book titles with multiple ISBN numbers which we will try to re-assign to a unique identifier')
# Check to see that our friend Jane Eyre still has multiple ISBN values
has_mult_isbns['Jane Eyre']
```
**Note:** The dictionary below is created and pickled once; afterwards it only needs to be loaded again (or re-run the first time on a new system).
```
# Create dictionary for books with multiple isbns
def make_isbn_dict(df):
title_isbn_dict = {}
for title in has_mult_isbns.index:
isbn_series = df.loc[df.TITLE==title].ITEMID.unique() # returns only the unique ISBNs
title_isbn_dict[title] = isbn_series.tolist()
return title_isbn_dict
%time dict_UNIQUE_ITEMIDS = make_isbn_dict(books_with_ratings)
# As the loop takes a while to run (8 min on the full dataset), pickle this dict for future use
import pickle
with open('multiple_isbn_dict.pickle', 'wb') as handle:
    pickle.dump(dict_UNIQUE_ITEMIDS, handle, protocol=pickle.HIGHEST_PROTOCOL)
# LOAD isbn_dict back into namespace
with open('multiple_isbn_dict.pickle', 'rb') as handle:
    multiple_isbn_dict = pickle.load(handle)
print(f'There are now {len(multiple_isbn_dict)} books in the ISBN dictionary that have multiple ISBN numbers')
```
Let's take a quick look in the dict we just created for the 'Jane Eyre' entry - it should contain a list of 14 ISBN numbers.
```
print(f'Length of Jane Eyre dict entry: {len(multiple_isbn_dict["Jane Eyre"])}\n')
multiple_isbn_dict['Jane Eyre']
```
Looking good!
As I don't really know what each of the different ISBN numbers refers to (from what I understand the code signifies various things including publisher, year, and type of print, but decoding this is outside the scope of this analysis), I'll just select the **first** ISBN number that appears in the list of values to set as our ```UNIQUE_ITEMIDS``` for that particular book.
_**Note**_: ISBN numbers are currently 13 digits long, but used to be 10. Any ISBN that isn't 10 or 13 digits long is probably an error that should be handled somehow. Any that are 9 digits long might actually be SBN numbers (pre-1970), and can be converted into ISBN's by just pre-fixing with a zero.
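The SBN-to-ISBN fix-up mentioned in the note could be sketched like this (a hypothetical helper, not part of the original pipeline):

```python
# Hypothetical helper: normalize ISBN-like codes following the note above.
# Anything that isn't 9, 10, or 13 characters is flagged as a probable error.
def normalize_isbn(isbn):
    isbn = isbn.strip().replace('-', '')
    if len(isbn) == 9:           # likely a pre-1970 SBN: prefix with a zero
        return '0' + isbn
    if len(isbn) in (10, 13):    # valid ISBN-10 / ISBN-13 lengths
        return isbn
    return None                  # probable data error

print(normalize_isbn('140079963'))  # 9 digits -> '0140079963'
```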
```
# Add 'UNIQUE_ITEMIDS' column to 'books_with_ratings' dataframe that includes the first ISBN if multiple ISBNS,
# or just the ISBN if only 1 ISBN present anyway.
def add_UNIQUE_ITEMIDS_col(df):
    df['UNIQUE_ITEMIDS'] = df.apply(lambda row: multiple_isbn_dict[row.TITLE][0] if row.TITLE in multiple_isbn_dict.keys() else row.ITEMID, axis=1)
    return df
%time books_with_ratings = add_UNIQUE_ITEMIDS_col(books_with_ratings)
books_with_ratings.head()
```
The table now includes our ```UNIQUE_ITEMIDS``` field.
Let's check to see if the 'Jane Eyre' entries have been assigned the ISBN '1590071212', which was the first val in the dictionary for this title.
```
books_with_ratings[books_with_ratings.TITLE=='Jane Eyre'].head()
```
Great! Seems to have worked well.
We won't replace the original ISBN column with the 'UNIQUE_ITEMIDS' column, but just note that the recommendation model should be based on the 'UNIQUE_ITEMIDS' field.
### Remove Small and Large book-cover URL columns
```
books_with_ratings.drop(['URLSMALL', 'URLLARGE'], axis=1, inplace=True)
```
## Join the 'users' table on the 'USERID' field
```
print(f'Books+Ratings table size: {len(books_with_ratings)}')
print(f'Users table size: {len(users)}')
books_users_ratings = books_with_ratings.join(users.set_index('USERID'), on='USERID')
print(f'New "books_users_ratings" table size: {len(books_users_ratings)}')
```
Inspect the new table.
```
books_users_ratings.head()
books_users_ratings.info()
```
There are a few missing ```age```, ```year_of_publication```, ```publisher```, and ```country``` entries, but the primary fields of ```USERID```, ```UNIQUE_ITEMIDS```, and ```RATING``` are all full, which is good.
In terms of the data types, ```USERID``` and ```RATING``` are integers, while the ```UNIQUE_ITEMIDS``` are strings (which is expected as the ISBN numbers may also contain letters).
```
books_users_ratings.shape
```
## Recommender model
Collaborative filtering uses similarities between the 'user' and 'item' fields, with values of 'rating' predicted based on either user-item or item-item similarity:
- Item-Item CF: "Users who liked this item also liked..."
- User-Item CF: "Users who are similar to you also liked..."
In both cases, we need to create a user-item matrix built from the entire dataset. We'll create a matrix for each of the training and testing sets, with the users as the rows, the books as the columns, and the rating as the matrix value. Note that this will be a very sparse matrix, as not every user will have read every book.
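As a tiny illustration (toy data, not the actual Book-Crossing tables), a user-item matrix is just a pivot of (user, item, rating) triples:

```python
import pandas as pd

# Toy (user, item, rating) triples; in practice this matrix is very sparse.
ratings = pd.DataFrame({
    'user':   [1, 1, 2, 3],
    'item':   ['A', 'B', 'A', 'C'],
    'rating': [8, 3, 5, 9],
})

# Users as rows, items as columns, unrated pairs filled with 0.
matrix = ratings.pivot_table(index='user', columns='item', values='rating', fill_value=0)
print(matrix.shape)  # (3, 3): 3 users x 3 items
```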
We'll first create a new dataframe that contains only the relevant columns (```USERID```, ```UNIQUE_ITEMIDS```, and ```RATING```).
```
user_item_rating = books_users_ratings[['USERID', 'UNIQUE_ITEMIDS', 'RATING']]
user_item_rating.head()
```
We know what the distribution of ratings should look like (as we plotted it earlier) - let's plot it again on this new dataframe to just quickly check that it looks right.
```
rtg = user_item_rating.RATING.value_counts().sort_index()
plt.figure(figsize=(10, 5))
plt.bar(rtg.index, rtg.values)
plt.xlabel('Rating')
plt.ylabel('counts')
plt.show()
```
Looks perfect! Continue.
### Using ```sklearn``` to generate training and testing subsets
```
train_data, test_data = model_selection.train_test_split(user_item_rating, test_size=0.20)
print(f'Training set size: {len(train_data)}')
print(f'Testing set size: {len(test_data)}')
print(f'Test set is {(len(test_data)/(len(train_data)+len(test_data))*100):.0f}% of the full dataset.')
```
### Map the ```USERID``` and ```UNIQUE_ITEMIDS``` fields to sequential integers for matrix processing
```
### TRAINING SET
# Get int mapping for USERID
u_unique_train = train_data.USERID.unique() # create a 'set' (i.e. all unique) list of vals
train_data_user2idx = {o:i for i, o in enumerate(u_unique_train)}
# Get int mapping for UNIQUE_ITEMIDS
b_unique_train = train_data.UNIQUE_ITEMIDS.unique() # create a 'set' (i.e. all unique) list of vals
train_data_book2idx = {o:i for i, o in enumerate(b_unique_train)}
### TESTING SET
# Get int mapping for USERID
u_unique_test = test_data.USERID.unique() # create a 'set' (i.e. all unique) list of vals
test_data_user2idx = {o:i for i, o in enumerate(u_unique_test)}
# Get int mapping for UNIQUE_ITEMIDS
b_unique_test = test_data.UNIQUE_ITEMIDS.unique() # create a 'set' (i.e. all unique) list of vals
test_data_book2idx = {o:i for i, o in enumerate(b_unique_test)}
### TRAINING SET
train_data['USER_UNIQUE'] = train_data['USERID'].map(train_data_user2idx)
train_data['ITEM_UNIQUE'] = train_data['UNIQUE_ITEMIDS'].map(train_data_book2idx)
### TESTING SET
test_data['USER_UNIQUE'] = test_data['USERID'].map(test_data_user2idx)
test_data['ITEM_UNIQUE'] = test_data['UNIQUE_ITEMIDS'].map(test_data_book2idx)
### Convert back to 3-column df
train_data = train_data[['USER_UNIQUE', 'ITEM_UNIQUE', 'RATING']]
test_data = test_data[['USER_UNIQUE', 'ITEM_UNIQUE', 'RATING']]
train_data.tail()
train_data.dtypes
```
This dataset is now ready to be processed via a collaborative filtering approach!
**Note:** When we need to identify the user or book from the model we'll need to refer back to the ```train_data_user2idx``` and ```train_data_book2idx``` dictionaries to locate the ```USERID``` and ```UNIQUE_ITEMIDS```, respectively.
```
### TRAINING SET
# Create user-item matrices
n_users = train_data['USER_UNIQUE'].nunique()
n_books = train_data['ITEM_UNIQUE'].nunique()
# First, create an empty matrix of size USERS x BOOKS (this speeds up the later steps)
train_matrix = np.zeros((n_users, n_books))
# Then, add the appropriate vals to the matrix by extracting them from the df with itertuples
for entry in train_data.itertuples(): # entry[1] is the user index, entry[2] is the book index
    train_matrix[entry[1], entry[2]] = entry[3] # the enumerate-based ids are already 0-based
train_matrix.shape
```
Now do the same for the test set.
```
### TESTING SET
# Create user-item matrices
n_users = test_data['USER_UNIQUE'].nunique()
n_books = test_data['ITEM_UNIQUE'].nunique()
# First, create an empty matrix of size USERS x BOOKS (this speeds up the later steps)
test_matrix = np.zeros((n_users, n_books))
# Then, add the appropriate vals to the matrix by extracting them from the df with itertuples
for entry in test_data.itertuples(): # entry[1] is the user index, entry[2] is the book index
    test_matrix[entry[1], entry[2]] = entry[3] # the enumerate-based ids are already 0-based
test_matrix.shape
```
Now the matrix is in the correct format, with the user and book entries encoded from the mapping dict created above!
### Calculating cosine similarity with the 'pairwise distances' function
To determine the similarity between users/items we'll use the 'cosine similarity' which is a common n-dimensional distance metric.
**Note:** since all of the rating values are positive (1-10 scale), the cosine distances will all fall between 0 and 1.
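To make the metric concrete, here is the cosine distance computed by hand for two toy rating vectors (illustrative values only):

```python
import numpy as np

# Cosine distance = 1 - cos(u, v) = 1 - (u . v) / (||u|| * ||v||)
u = np.array([8.0, 0.0, 3.0])   # toy rating vectors: one entry per book
v = np.array([4.0, 0.0, 6.0])
cos_sim = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))
cos_dist = 1.0 - cos_sim
print(round(cos_dist, 4))  # approximately 0.1885
```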
```
# It may take a while to calculate, so I'll perform on a subset initially
train_matrix_small = train_matrix[:10000, :10000]
test_matrix_small = test_matrix[:10000, :10000]
from sklearn.metrics.pairwise import pairwise_distances
user_similarity = pairwise_distances(train_matrix_small, metric='cosine')
item_similarity = pairwise_distances(train_matrix_small.T, metric='cosine') # .T transposes the matrix (NumPy)
```
If we are looking at similarity between users we need to account for the average behaviour of that individual user. For example, one user may give all books quite high ratings, whereas another might give all ratings between 3 and 7. These users might otherwise have quite similar preferences.
To do this, we use the user's average rating as a 'weighting' factor.
If we are looking at item-based similarity we don't need to add this weighting factor.
We can incorporate this into a ```predict()``` function, like so:
```
def predict(ratings, similarity, type='user'): # default type is 'user'
    if type == 'user':
        mean_user_rating = ratings.mean(axis=1)
        # Use np.newaxis so that mean_user_rating has the same format as ratings
        ratings_diff = (ratings - mean_user_rating[:, np.newaxis])
        pred = mean_user_rating[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
    elif type == 'item':
        pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
    return pred
```
Then we can make our predictions!
```
item_prediction = predict(train_matrix_small, item_similarity, type='item')
user_prediction = predict(train_matrix_small, user_similarity, type='user')
```
### Evaluation
How do we know if this is making good ```rating``` predictions?
We'll start by just taking the root mean squared error (RMSE) (from ```sklearn```) of predicted values in the ```test_set``` (i.e. where we know what the answer should be).
Since we want to compare only predicted ratings that are in the test set, we can filter out all other predictions that aren't in the test matrix.
```
from sklearn.metrics import mean_squared_error
from math import sqrt
def rmse(prediction, test_matrix):
    prediction = prediction[test_matrix.nonzero()].flatten()
    test_matrix = test_matrix[test_matrix.nonzero()].flatten()
    return sqrt(mean_squared_error(prediction, test_matrix))
# Call on test set to get error from each approach ('user' or 'item')
print(f'User-based CF RMSE: {rmse(user_prediction, test_matrix_small)}')
print(f'Item-based CF RMSE: {rmse(item_prediction, test_matrix_small)}')
```
For the user-item and the item-item recommendations we get RMSE = 7.85 (MSE > 60) for both. This is pretty bad, but we only trained over a small subset of the data.
Although this collaborative filtering setup is relatively simple to write, it doesn't scale very well at all, as it is all stored in memory! (Hence why we only used a subset of the training/testing data).
----------------
Instead, we should really use a model-based (based on matrix factorization) recommendation algorithm. These are inherently more scalable and can deal with higher sparsity level than memory-based models, and are considered more powerful due to their ability to pick up on "latent factors" in the relationships between what sets of items users like. However, they still suffer from the "cold start" problem (where a new user has no history).
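As a rough illustration of the "latent factor" idea (toy numbers, not our book data), a truncated SVD approximates the rating matrix as a product of low-rank user and item factors:

```python
import numpy as np

# Toy 3x3 rating matrix (rows = users, columns = items; 0 = unrated).
R = np.array([[8., 0., 3.],
              [4., 0., 6.],
              [0., 7., 0.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                 # keep only k latent factors
R_hat = (U[:, :k] * s[:k]) @ Vt[:k]   # low-rank approximation of R
print(R_hat.shape)  # (3, 3)
```

Model-based recommenders learn factors like `U` and `Vt` directly from the observed ratings, so the full sparse matrix never needs to be held in memory.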
Fortunately, there is a Python library called ```surprise``` that was built specifically for the implementation of model-based recommendation systems! This library comes with many of the leading algorithms in this space already built-in. Let's try using it for our book recommender system.
# Using the ```surprise``` library for building a recommender system
Several common model-based algorithms including SVD, KNN, and non-negative matrix factorization are built-in!
See [here](http://surprise.readthedocs.io/en/stable/getting_started.html#basic-usage) for the docs.
```
from surprise import Reader, Dataset
user_item_rating.head() # take a look at our data
# First need to create a 'Reader' object to set the scale/limit of the ratings field
reader = Reader(rating_scale=(1, 10))
# Load the data into a 'Dataset' object directly from the pandas df.
# Note: The fields must be in the order: user, item, rating
data = Dataset.load_from_df(user_item_rating, reader)
# Load the models and 'evaluation' method
from surprise import SVD, NMF, model_selection, accuracy
```
Where: SVD = Singular Value Decomposition (orthogonal factorization), NMF = Non-negative Matrix Factorization.
**Note** that when using the ```surprise``` library we don't need to manually create the mapping of USERID and UNIQUE_ITEMIDS to integers in a custom dict. See [here](http://surprise.readthedocs.io/en/stable/FAQ.html#raw-inner-note) for details.
### SVD model
**_Using cross-validation (5 folds)_**
```
# Load SVD algorithm
model = SVD()
# Train on books dataset
%time model_selection.cross_validate(model, data, measures=['RMSE'], cv=5, verbose=True)
```
The SVD model gives an average RMSE of ca. 1.64 after 5-folds, with a fit time of ca. 28 s for each fold.
**_Using test-train split_**
```
# set test set to 20%.
trainset, testset = model_selection.train_test_split(data, test_size=0.2)
# Instantiate the SVD model.
model = SVD()
# Train the algorithm on the training set, and predict ratings for the test set
model.fit(trainset)
predictions = model.test(testset)
# Then compute RMSE
accuracy.rmse(predictions)
```
Using an 80/20 train-test split, the SVD model gave an RMSE of 1.6426.
-----------
We can see that using the SVD algorithm has already far out-performed the memory-based collaborative filtering approach (RMSE of 1.64 vs 7.92)!
### NMF model
```
# Load NMF algorithm
model = NMF()
# Train on books dataset
%time model_selection.cross_validate(model, data, measures=['RMSE'], cv=5, verbose=True)
```
The NMF model gave a mean RMSE of ca. 2.47, with a fit time of ca. 48 s.
It seems like the SVD algorithm is the best choice for this dataset.
## Optimizing the SVD algorithm with parameter tuning
Since it seems like the SVD algorithm is our best choice, let's see if we can improve the predictions even further by optimizing some of the algorithm hyperparameters.
One way of doing this is to use the handy ```GridSearchCV``` method from the ```surprise``` library. When passed a range of hyperparameter values, ```GridSearchCV``` will automatically search through the parameter-space to find the best-performing set of hyperparameters.
```
# We'll remake the training set, keeping 20% for testing
trainset, testset = model_selection.train_test_split(data, test_size=0.2)
### Fine-tune Surprise SVD model using GridSearchCV
from surprise.model_selection import GridSearchCV
param_grid = {'n_factors': [80, 100, 120], 'lr_all': [0.001, 0.005, 0.01], 'reg_all': [0.01, 0.02, 0.04]}
# Optimize SVD algorithm for both root mean squared error ('rmse') and mean absolute error ('mae')
gs = GridSearchCV(SVD, param_grid, measures=['rmse', 'mae'], cv=3)
# Fit the gridsearch result on the entire dataset
%time gs.fit(data)
# Return the best version of the SVD algorithm
model = gs.best_estimator['rmse']
print(gs.best_score['rmse'])
print(gs.best_params['rmse'])
model_selection.cross_validate(model, data, measures=['rmse', 'mae'], cv=5, verbose=True)
```
The mean RMSE using the optimized parameters was 1.6351 over 5 folds, with an average fit time of ca. 24 s.
```
### Use the new parameters with the training set
model = SVD(n_factors=80, lr_all=0.005, reg_all=0.04)
model.fit(trainset) # re-fit on only the training data using the best hyperparameters
test_pred = model.test(testset)
print("SVD : Test Set")
accuracy.rmse(test_pred, verbose=True)
```
Using the optimized hyperparameters we see a slight improvement in the resulting RMSE (1.629) compared with the unoptimized SVD algorithm (1.635).
## Testing some of the outputs (ratings and recommendations)
I'd like to do an intuitive check of some of the recommendations being made.
Let's just choose a random user/book pair (represented in the ```surprise``` library as ```uid``` and ```iid```, respectively).
**Note:** The ```model``` being used here is the optimized SVD algorithm that has been fit on the training set.
```
# get a prediction for specific users and items.
uid = 276744 # the USERID int
iid = '038550120X' # the UNIQUE_ITEMIDS string
# This pair has an actual rating of 7!
pred = model.predict(uid, iid, verbose=True)
```
We can access the attributes of the object returned by ```predict``` to get a nicer output.
```
print(f'The estimated rating for the book with the "UNIQUE_ITEMIDS" code {pred.iid} from user #{pred.uid} is {pred.est:.2f}.\n')
actual_rtg = user_item_rating[(user_item_rating.USERID==pred.uid) & (user_item_rating.UNIQUE_ITEMIDS==pred.iid)].RATING.values[0]
print(f'The real rating given for this was {actual_rtg:.2f}.')
# get a prediction for specific users and items.
uid = 95095 # the USERID int
iid = '0140079963' # the UNIQUE_ITEMIDS string
# This pair has an actual rating of 6.0!
pred = model.predict(uid, iid, verbose=True)
print(f'The estimated rating for the book with the "UNIQUE_ITEMIDS" code {pred.iid} from user #{pred.uid} is {pred.est:.2f}.\n')
actual_rtg = user_item_rating[(user_item_rating.USERID==pred.uid) & (user_item_rating.UNIQUE_ITEMIDS==pred.iid)].RATING.values[0]
print(f'The real rating given for this was {actual_rtg:.2f}.')
```
The following function was adapted from the ```surprise``` docs, and can be used to get the top book recommendations for each user.
```
from collections import defaultdict
def get_top_n(predictions, n=10):
    '''Return the top-N recommendations for each user from a set of predictions.
    Args:
        predictions(list of Prediction objects): The list of predictions, as
            returned by the test method of an algorithm.
        n(int): The number of recommendations to output for each user. Default
            is 10.
    Returns:
        A dict where keys are user (raw) ids and values are lists of tuples:
        [(raw item id, rating estimation), ...] of size n.
    '''
    # First map the predictions to each user.
    top_n = defaultdict(list)
    for uid, iid, true_r, est, _ in predictions:
        top_n[uid].append((iid, est))
    # Then sort the predictions for each user and retrieve the k highest ones.
    for uid, user_ratings in top_n.items():
        user_ratings.sort(key=lambda x: x[1], reverse=True)
        top_n[uid] = user_ratings[:n]
    return top_n
```
Let's get the Top 10 recommended books for each USERID in the test set.
```
pred = model.test(testset)
top_n = get_top_n(pred)
def get_reading_list(userid):
    """
    Retrieve full book titles from the full 'books_users_ratings' dataframe
    """
    reading_list = defaultdict(list)
    top_n = get_top_n(pred, n=10)
    for n in top_n[userid]:
        book, rating = n
        title = books_users_ratings.loc[books_users_ratings.UNIQUE_ITEMIDS==book].TITLE.unique()[0]
        reading_list[title] = rating
    return reading_list
# Just take a random look at USERID=60337
example_reading_list = get_reading_list(userid=60337)
for book, rating in example_reading_list.items():
    print(f'{book}: {rating}')
```
I have tried out a few different ```userid``` entries (from the ```testset```) to see the top 10 books each user would like, and the recommendations seem pretty well related, indicating that the recommendation engine is performing reasonably well!
# Summary
In this notebook a dataset from the 'Book-Crossing' website was used to create a recommendation system. A few different approaches were investigated, including memory-based correlations, and model-based matrix factorization algorithms[2]. Of these, the latter - and particularly the Singular Value Decomposition (SVD) algorithm - gave the best performance as assessed by comparing the predicted book ratings for a given user with the actual rating in a test set that the model was not trained on.
The only fields that were used for the model were the "user ID", "book ID", and "rating". There were others available in the dataset, such as "age", "location", "publisher", "year published", etc, however for these types of recommendation systems it has often been found that additional data fields do not increase the accuracy of the models significantly[1]. A "Grid Search Cross Validation" method was used to optimize some of the hyperparameters for the model, resulting in a slight improvement in model performance from the default values.
Finally, we were able to build a recommender that could predict the 10 most likely book titles to be rated highly by a given user.
It should be noted that this approach still suffers from the "cold start problem"[3] - that is, for users with no ratings or history the model will not make accurate predictions. One way we could tackle this problem may be to initially start with popularity-based recommendations, before building up enough user history to implement the model. Another piece of data that was not utilised in the current investigation was the "implicit" ratings - denoted as those with a rating of "0" in the dataset. Although more information about these implicit ratings would be needed (for example, whether one represents a positive or a negative interaction), they might be useful for supplementing the "explicit" ratings recommender.
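A popularity-based fallback like the one suggested above could be sketched as follows (hypothetical helper, not part of the notebook; it assumes the ```UNIQUE_ITEMIDS``` and ```RATING``` columns used earlier):

```python
import pandas as pd

def popular_fallback(df, n=10, min_ratings=2):
    """Recommend the n items with the highest mean rating, requiring a
    minimum number of ratings so one-off high scores don't dominate."""
    stats = df.groupby('UNIQUE_ITEMIDS').RATING.agg(['mean', 'count'])
    stats = stats[stats['count'] >= min_ratings]
    return stats.sort_values('mean', ascending=False).head(n)

# Toy demonstration
toy = pd.DataFrame({
    'UNIQUE_ITEMIDS': ['a', 'a', 'b', 'b', 'c'],
    'RATING':         [9, 7, 5, 6, 10],
})
print(popular_fallback(toy))  # 'c' is excluded: it has only one rating
```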
# References
1. http://blog.ethanrosenthal.com/2015/11/02/intro-to-collaborative-filtering/
2. https://cambridgespark.com/content/tutorials/implementing-your-own-recommender-systems-in-Python/index.html
3. https://towardsdatascience.com/building-a-recommendation-system-for-fragrance-5b00de3829da
```
%load_ext autoreload
%autoreload 2
import csv
import time
import pandas as pd
from tqdm import tqdm
import metnum
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def correr_Knn_con_k_aumentando_en(porcentage_para_entrenar, cant_muestras=42000, semilla=2, intervalo_k=100):
    df_train = pd.read_csv("../data/train.csv")
    df_train = df_train.sample(frac=1, random_state=semilla)
    X = df_train[df_train.columns[1:]].values
    y = df_train["label"].values.reshape(-1, 1)
    accuracy_of_all_k = []
    precision_df = pd.DataFrame(columns=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    recall_df = pd.DataFrame(columns=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    f1_df = pd.DataFrame(columns=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    times = []
    setup_time_start_time = time.time()
    limit = int(porcentage_para_entrenar * X.shape[0])
    X_train = X[:limit]
    X_val = X[limit:]
    y_train = y[:limit]
    y_val = y[limit:]
    alpha = 0
    setup_time_end_time = time.time()
    setup_time = setup_time_end_time - setup_time_start_time
    for k in tqdm(range(1, X_train.shape[0] + 1, intervalo_k)):
        # Run knn (the generic fit just stores the training data)
        clf = metnum.KNNClassifier(k)
        clf.fit(X_train, y_train)
        knn_start_time = time.time()
        y_pred = clf.predict(X_val)
        print(accuracy_score(y_val, y_pred))
        knn_end_time = time.time()
        knn_time = knn_end_time - knn_start_time
        # Compute the metrics of interest
        labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
        precision = precision_score(y_val, y_pred, labels=labels, average=None)
        accuracy = accuracy_score(y_val, y_pred)
        recall = recall_score(y_val, y_pred, labels=labels, average=None)
        f1 = f1_score(y_val, y_pred, labels=labels, average=None)
        # Record the results
        accuracy_of_all_k.append([alpha, k, accuracy])
        times.append(setup_time + knn_time)
        # Append a row to the precision dataframe
        precision_dict = {digit: precision[digit] for digit in range(10)}
        precision_dict['k'] = k
        precision_dict['alpha'] = alpha
        precision_df = pd.concat([precision_df, pd.DataFrame([precision_dict])], ignore_index=True)
        # Append a row to the recall dataframe
        recall_dict = {digit: recall[digit] for digit in range(10)}
        recall_dict['k'] = k
        recall_dict['alpha'] = alpha
        recall_df = pd.concat([recall_df, pd.DataFrame([recall_dict])], ignore_index=True)
        # Append a row to the f1 dataframe
        f1_dict = {digit: f1[digit] for digit in range(10)}
        f1_dict['k'] = k
        f1_dict['alpha'] = alpha
        f1_df = pd.concat([f1_df, pd.DataFrame([f1_dict])], ignore_index=True)
    # Write the results to files so we don't have to re-run everything.
    precision_df.to_csv('knn_solo_precision.csv', index=False)
    recall_df.to_csv('knn_solo_recall.csv', index=False)
    f1_df.to_csv('knn_solo_f1.csv', index=False)
    with open('knn_solo_acuracy.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(accuracy_of_all_k)
    with open('knn_solo_time.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(times)
    with open('knn_solo_predicciones.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(y_pred)

correr_Knn_con_k_aumentando_en(0.8)

def correr_Knn_pca(porcentage_para_entrenar, cant_muestras=42000, semilla=2, intervalo_alpha=1):
    df_train = pd.read_csv("../data/train.csv")
    df_train = df_train.sample(frac=1, random_state=semilla)
    X_original = df_train[df_train.columns[1:]].values
    y = df_train["label"].values.reshape(-1, 1)
    valores_k = [1, 5, 10, 25, 50, 75, 100, 200, 500, 1000, 2000, 5000, 10000, 20000]
    accuracy_of_all_k = []
    precision_df = pd.DataFrame(columns=('alpha', 'k', 0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    recall_df = pd.DataFrame(columns=('alpha', 'k', 0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    f1_df = pd.DataFrame(columns=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
    times = []
    for alpha in tqdm(range(1, 29, intervalo_alpha)):
        setup_time_start_time = time.time()
        pca = metnum.PCA(alpha)
        X = pca.transform(X_original)
        limit = int(porcentage_para_entrenar * X.shape[0])
        X_train = X[:limit]
        X_val = X[limit:]
        y_train = y[:limit]
        y_val = y[limit:]
        setup_time_end_time = time.time()
        setup_time = setup_time_end_time - setup_time_start_time
        for k in tqdm(valores_k):
            # Run knn (the generic fit just stores the training data)
            clf = metnum.KNNClassifier(k)
            knn_start_time = time.time()
            clf.fit(X_train, y_train)
            y_pred = clf.predict(X_val)
            print(accuracy_score(y_val, y_pred))
            knn_end_time = time.time()
            knn_time = knn_end_time - knn_start_time
            # Compute the metrics of interest
            labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
            precision = precision_score(y_val, y_pred, labels=labels, average=None)
            accuracy = accuracy_score(y_val, y_pred)
            recall = recall_score(y_val, y_pred, labels=labels, average=None)
            f1 = f1_score(y_val, y_pred, labels=labels, average=None)
            # Record the results
            accuracy_of_all_k.append([alpha, k, accuracy])
            times.append(setup_time + knn_time)
            # Append a row to the precision dataframe
            precision_dict = {digit: precision[digit] for digit in range(10)}
            precision_dict['k'] = k
            precision_dict['alpha'] = alpha
            precision_df = pd.concat([precision_df, pd.DataFrame([precision_dict])], ignore_index=True)
            # Append a row to the recall dataframe
            recall_dict = {digit: recall[digit] for digit in range(10)}
            recall_dict['k'] = k
            recall_dict['alpha'] = alpha
            recall_df = pd.concat([recall_df, pd.DataFrame([recall_dict])], ignore_index=True)
            # Append a row to the f1 dataframe
            f1_dict = {digit: f1[digit] for digit in range(10)}
            f1_dict['k'] = k
            f1_dict['alpha'] = alpha
            f1_df = pd.concat([f1_df, pd.DataFrame([f1_dict])], ignore_index=True)
    # Write the results to files so we don't have to re-run everything.
    precision_df.to_csv('knn_pca_precision_re_do.csv', index=False)
    recall_df.to_csv('knn_pca_recall_re_do.csv', index=False)
    with open('knn_pca_acuracy_re_do.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(accuracy_of_all_k)
    with open('knn_pca_time_re_do.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(times)
    with open('knn_pca_predicciones.csv', 'w', newline='') as myfile:
        wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
        wr.writerow(y_pred)

correr_Knn_pca(0.8)
```
Variables with more than one value
==================================
You have already seen ordinary variables that store a single value. However, other variable types can hold more than one value. The simplest type is called a list. Here is an example of a list being used:
```
which_one = int(input("What month (1-12)? "))
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July',\
'August', 'September', 'October', 'November', 'December']
if 1 <= which_one <= 12:
    print("The month is", months[which_one - 1])
```
and an output example:
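Assuming the user types 3, the program would print:

```
What month (1-12)? 3
The month is March
```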
In this example `months` is a list. It is defined with the lines months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] (note that a `\` can be used to split a long line). The `[` and `]` start and end the list, with commas (`,`) separating the list items. The list is used in `months[which_one - 1]`. A list consists of items that are numbered starting at 0. In other words, if you wanted January you would type in 1, and that would have 1 subtracted off to use `months[0]`. Give a list a number and it will return the value that is stored at that location.
The statement `if 1 <= which_one <= 12:` will only be true if `which_one` is between one and twelve inclusive (in other words it is what you would expect if you have seen that in algebra). Since 1 is subtracted from `which_one` we get list locations from 0 to 11.
Lists can be thought of as a series of boxes. For example, the boxes created by demolist = ['life', 42, 'the', 'universe', 6, 'and', 7] would look like this:
| box number | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| demolist | 'life' | 42 | 'the' | 'universe' | 6 | 'and' | 7 |
Each box is referenced by its number so the statement `demolist[0]` would get 'life', `demolist[1]` would get 42 and so on up to `demolist[6]` getting 7.
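You can check the box numbering directly (a quick sketch matching the table above):

```python
demolist = ['life', 42, 'the', 'universe', 6, 'and', 7]
print(demolist[0])  # prints: life
print(demolist[6])  # prints: 7
```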
More features of lists
======================
The next example is just to show a lot of other stuff lists can do (for once, I don't expect you to type it in, but you should probably play around with lists until you are comfortable with them. Also, there will be another program that uses most of these features soon.). Here goes:
```
demolist = ['life', 42, 'the', 'universe', 6, 'and', 7]
print('demolist = ', demolist)
demolist.append('everything')
print("after 'everything' was appended demolist is now:")
print(demolist)
print('len(demolist) =', len(demolist))
print('demolist.index(42) =', demolist.index(42))
print('demolist[1] =', demolist[1])
#Next we will loop through the list
c = 0
while c < len(demolist):
    print('demolist[', c, ']=', demolist[c])
    c = c + 1
del demolist[2]
print("After 'the universe' was removed demolist is now:")
print(demolist)
if 'life' in demolist:
    print("'life' was found in demolist")
else:
    print("'life' was not found in demolist")
if 'amoeba' in demolist:
    print("'amoeba' was found in demolist")
if 'amoeba' not in demolist:
    print("'amoeba' was not found in demolist")
int_list = []
c = 0
while c < len(demolist):
    if type(0) == type(demolist[c]):
        int_list.append(demolist[c])
    c = c + 1
print('int_list is', int_list)
int_list.sort()
print('The sorted int_list is ', int_list)
```
The output is:
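Running the program above prints:

```
demolist =  ['life', 42, 'the', 'universe', 6, 'and', 7]
after 'everything' was appended demolist is now:
['life', 42, 'the', 'universe', 6, 'and', 7, 'everything']
len(demolist) = 8
demolist.index(42) = 1
demolist[1] = 42
demolist[ 0 ]= life
demolist[ 1 ]= 42
demolist[ 2 ]= the
demolist[ 3 ]= universe
demolist[ 4 ]= 6
demolist[ 5 ]= and
demolist[ 6 ]= 7
demolist[ 7 ]= everything
After 'the universe' was removed demolist is now:
['life', 42, 'universe', 6, 'and', 7, 'everything']
'life' was found in demolist
'amoeba' was not found in demolist
int_list is [42, 6, 7]
The sorted int_list is  [6, 7, 42]
```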
This example uses a whole bunch of new functions. Notice that you can
just print a whole list. Next the append function is used
to add a new item to the end of the list. `len` returns how many
items are in a list. The valid indexes (as in numbers that can be
used inside of the []) of a list range from 0 to len - 1. The
index function tells where the first location of an item is
located in a list. Notice how `demolist.index(42)` returns 1 and
when `demolist[1]` is run it returns 42. The line
`#Next we will loop through the list` is a just a reminder to the
programmer (also called a comment). Python will ignore any lines that
start with a `#`. Next the lines:
```
c = 0
while c < len(demolist):
    print('demolist[', c, ']=', demolist[c])
    c = c + 1
```
These create a variable `c` which starts at 0 and is incremented until it reaches the last index of the list; meanwhile the `print` call prints out each element of the list.
The `del` statement can be used to remove a given element from a list. The next few lines use the `in` operator to test whether an element is, or is not, in a list.
The `sort` function sorts the list. This is useful if you need a
list in order from smallest number to largest or alphabetical. Note
that this rearranges the list in place. Note also that the numbers were put in
a new list, and that was sorted, instead of trying to sort the mixed
list. Sorting a mix of numbers and strings does not really make sense and raises
a `TypeError` in Python 3.
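For example, here is a short sketch (not one of the tutorial's programs) showing both the error and the workaround of copying the numbers into their own list before sorting:
```
mixed = [3, 'one', 2]
try:
    mixed.sort()
except TypeError:
    print('cannot sort a mixed list')
# copy just the numbers into their own list, then sort that
numbers = [item for item in mixed if isinstance(item, int)]
numbers.sort()
print(numbers)  # prints [2, 3]
```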
In summary for a list the following operations exist:
| example | explanation |
| --- | --- |
| list[2] | accesses the element at index 2 |
| list[2] = 3 | sets the element at index 2 to be 3 |
| del list[2] | removes the element at index 2 |
| len(list) | returns the length of list |
| "value" in list | is true if "value" is an element in list |
| "value" not in list | is true if "value" is not an element in list |
| list.sort() | sorts list |
| list.index("value") | returns the index of the first place that "value" occurs |
| list.append("value") | adds an element "value" at the end of the list |
This next example uses these features in a more useful way:
```
menu_item = 0
list = []
while menu_item != 9:
    print("--------------------")
    print("1. Print the list")
    print("2. Add a name to the list")
    print("3. Remove a name from the list")
    print("4. Change an item in the list")
    print("9. Quit")
    menu_item = int(input("Pick an item from the menu: "))
    if menu_item == 1:
        current = 0
        if len(list) > 0:
            while current < len(list):
                print(current, ". ", list[current])
                current = current + 1
        else:
            print("List is empty")
    elif menu_item == 2:
        name = input("Type in a name to add: ")
        list.append(name)
    elif menu_item == 3:
        del_name = input("What name would you like to remove: ")
        if del_name in list:
            item_number = list.index(del_name)
            del list[item_number]
            # The code above only removes the first occurrence of
            # the name. The code below from Gerald removes all.
            #while del_name in list:
            #    item_number = list.index(del_name)
            #    del list[item_number]
        else:
            print(del_name, " was not found")
    elif menu_item == 4:
        old_name = input("What name would you like to change: ")
        if old_name in list:
            item_number = list.index(old_name)
            new_name = input("What is the new name: ")
            list[item_number] = new_name
        else:
            print(old_name, " was not found")
print("Goodbye")
```
Run the program to see its output. That was a long program, so let's take a look at the source code. The line `list = []` makes the variable `list` an empty list (a list with no items, or elements). The next important line is `while menu_item != 9:`. This line starts the loop that implements the menu system for this program. The next few lines display the menu and decide which part of the program to run.
The list-printing section goes through the list and prints each name. `len(list)` tells how many items are in a list; if `len` returns `0`, the list is empty.
Then a few lines later the statement `list.append(name)` appears. It uses the `append` function to add an item to the end of the list. Jump down another two lines and notice this section of code:
Here the `index` function is used to find the index value that will be used later to remove the item. `del list[item_number]` is used to remove an element of the list.
The next section uses `index` to find the `item_number` and then puts `new_name` where `old_name` was.
Congratulations, with lists under your belt, you now know enough of the language
that you could do any computations that a computer can do (this is technically known as Turing-Completeness). Of course, there are still many features that
are used to make your life easier.
Examples
========
test.py
```
## This program runs a test of knowledge
# First get the test questions
# Later this will be modified to use file io.
def get_questions():
    # notice how the data is stored as a list of lists
    return [["What color is the daytime sky on a clear day?", "blue"],
            ["What is the answer to life, the universe and everything?", "42"],
            ["What is a three letter word for mouse trap?", "cat"]]

# This will test a single question
# it takes a single question in
# it returns true if the user typed the correct answer, otherwise false
def check_question(question_and_answer):
    # extract the question and the answer from the list
    question = question_and_answer[0]
    answer = question_and_answer[1]
    # give the question to the user
    given_answer = input(question)
    # compare the user's answer to the tester's answer
    if answer == given_answer:
        print("Correct")
        return True
    else:
        print("Incorrect, correct was:", answer)
        return False

# This will run through all the questions
def run_test(questions):
    if len(questions) == 0:
        print("No questions were given.")
        # the return exits the function
        return
    index = 0
    right = 0
    while index < len(questions):
        # Check the question
        if check_question(questions[index]):
            right = right + 1
        # go to the next question
        index = index + 1
    # notice the order of the computation, first multiply, then divide
    print("You got ", right*100//len(questions), "% right out of", len(questions))

# now let's run the questions
run_test(get_questions())
```
Sample Output:
Exercises
=========
Expand the test.py program so it has a menu giving the options of taking
the test, viewing the list of questions and answers, and quitting.
Also, add a new question, “What noise does a truly
advanced machine make?”, with the answer “ping”.
# Control of a hydropower dam
Consider a hydropower plant with a dam. We want to control the flow through the dam gates in order to keep the amount of water at a desired level.
<p><img src="hydropowerdam-wikipedia.png" alt="Hydro power from Wikipedia" width="400"></p>
The system is a typical integrator, and is given by the difference equation
$$ y(k+1) = y(k) + b_uu(k) - b_vv(k), $$
where $y$ is the deviation of the water level from a reference level and $u$ is the change in the flow through the dam gates. A positive value of $u$ corresponds to less flow through the gates, relative to an operating point. The flow $v$ corresponds to changes in the flow in (from the river) or out (through the power plant).
The pulse transfer function of the dam is thus $$H(z) = \frac{b_u}{z-1}.$$
We want to control the system using a two-degree-of-freedom controller, including an anti-aliasing filter modelled as a delay of one sampling period. This gives the block diagram <p><img src="2dof-block-integrator.png" alt="Block diagram" width="700"></p>
The desired closed-loop system from the command signal $u_c$ to the output $y$ should have poles in $z=0.7$, and any observer poles should be chosen faster than the closed-loop poles, say in $z=0.5$.
## The closed-loop pulse-transfer functions
With $F_b(z) = \frac{S(z)}{R(z)}$ and $F_f(z) = \frac{T(z)}{R(z)}$, and using Mason's rule, we get that the closed-loop pulse-transfer function from command signal $u_c$ to output $y$ becomes
$$G_c(z) = \frac{\frac{T(z)}{R(z)}\frac{b_u}{z-1}}{1 + \frac{S(z)}{R(z)} \frac{b_u}{(z-1)z}} = \frac{b_uzT(z)}{z(z-1)R(z) + b_uS(z)}.$$
The closed-loop transfer function from disturbance to output becomes
$$G_{cv}(z) = \frac{\frac{b_v}{z-1}}{1 + \frac{S(z)}{R(z)} \frac{b_u}{(z-1)z}} = \frac{b_vzR(z)}{z(z-1)R(z) + b_uS(z)}.$$
## The Diophantine equation
The diophantine equation becomes
$$z(z-1)R(z) + b_uS(z) = A_c(z)A_o(z)$$ We want to find the smallest order controller that can satisfy the Diophantine equation. Since the feedback controller is given by
$$ F_b(z) = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{z^n + r_1z^{n-1} + \cdots + r_n}$$ and has $2\deg R + 1$ unknown parameters, and since we should choose the order of the Diophantine equation to be the same as the number of unknown parameters, we get
$$ \deg \big((z(z-1)R(z) + b_uS(z)\big) = \deg R + 2 = 2\deg R + 1 \quad \Rightarrow \quad \deg R = n = 1.$$
The Diophantine equation thus becomes
$$ z(z-1)(z+r_1) + b_u(s_0z+s_1) = (z-0.7)^2(z-0.5), $$
where $A_o(z) = z-0.5$ is the observer polynomial. Working out the expressions on both sides gives
$$ z^3-(1-r_1)z^2 -r_1 z + b_us_0z + b_us_1 = (z^2 - 1.4z + 0.49)(z-0.5)$$
$$ z^3 -(1-r_1)z^2 +(b_us_0-r_1)z + b_us_1 = z^3 - (1.4+0.5)z^2 + (0.49+0.7)z -0.245$$
From the Diophantine equation we get the following equations in the unknowns
\begin{align}
z^2: &\quad 1-r_1 = 1.9\\
z^1: &\quad b_us_0 - r_1 = 1.19\\
z^0: &\quad b_us_1 = -0.245
\end{align}
This is a linear system of equations in the unknowns, and can be solved in many different ways. Here we see that simple substitution gives
\begin{align}
r_1 &= 1-1.9 = -0.9\\
s_0 &= \frac{1}{b_u}(1.19+r_1) = \frac{0.29}{b_u}\\
s_1 &= -\frac{0.245}{b_u}
\end{align}
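As a quick numerical sanity check (an illustrative sketch with $b_u = 1$, using NumPy's polynomial helpers, not part of the original derivation), substituting these coefficients back into the left-hand side of the Diophantine equation reproduces $(z-0.7)^2(z-0.5)$:
```
import numpy as np

bu = 1.0
r1, s0, s1 = -0.9, 0.29 / bu, -0.245 / bu

# left-hand side: z*(z-1)*(z+r1) + bu*(s0*z + s1)
left = np.polymul([1, 0], np.polymul([1, -1], [1, r1]))
left = np.polyadd(left, bu * np.array([0, 0, s0, s1]))
# right-hand side: (z-0.7)^2 * (z-0.5)
right = np.polymul(np.polymul([1, -0.7], [1, -0.7]), [1, -0.5])

print(np.allclose(left, right))  # True
```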
## The feedforward
We set $T(z) = t_0A_o(z)$ which gives the closed-loop pulse-transfer function
$$G_c(z) = \frac{b_uzT(z)}{z(z-1)R(z) + b_uS(z)}= \frac{b_ut_0zA_o(z)}{A_c(z)A_o(z)} = \frac{b_u t_0z}{A_c(z)}$$
In order for this pulse-transfer function to have unit DC-gain (static gain) we must have $G_c(1) = 1$, or
$$ \frac{b_ut_0}{A_c(1)} = 1. $$
The solution is
$$ t_0 = \frac{A_c(1)}{b_u} = \frac{(1-0.7)^2}{b_u} = \frac{0.3^2}{b_u}. $$
## Verify by symbolic computer algebra
```
import numpy as np
import sympy as sy
z = sy.symbols('z', real=False)
bu,r1,s0,s1 = sy.symbols('bu,r1,s0,s1', real=True)
pc,po = sy.symbols('pc,po', real=True) # Closed-loop pole and observer pole
# The polynomials
Ap = sy.Poly(z*(z-1), z)
Bp = sy.Poly(bu,z)
Rp = sy.Poly(z+r1, z)
Sp = sy.Poly(s0*z+s1, z)
Ac = sy.Poly((z-pc)**2, z)
Ao = sy.Poly(z-po, z)
# The diophantine eqn
dioph = Ap*Rp + Bp*Sp - Ac*Ao
# Form system of eqs from coefficients, then solve
dioph_coeffs = dioph.all_coeffs()
# Solve for r1, s0 and s1,
sol = sy.solve(dioph_coeffs, (r1,s0,s1))
print('r_1 = %s' % sol[r1])
print('s_0 = %s' % sol[s0])
print('s_1 = %s' % sol[s1])
# Substitute values for the desired closed-loop pole and observer pole
substitutions = [(pc, 0.7), (po, 0.5)]
print('r_1 = %s' % sol[r1].subs(substitutions))
print('s_0 = %s' % sol[s0].subs(substitutions))
print('s_1 = %s' % sol[s1].subs(substitutions))
# The forward controller
t0 = (Ac.eval(1)/Bp.eval(1))
print('t_0 = %s' % t0)
print('t_0 = %s' % t0.subs(substitutions))
```
## Requirements on the closed-loop poles and observer poles in order to obtain stable controller
Notice the solution for the controller denominator
$$ R(z) = z+r_1 = z -2p_c -p_o + 1, $$
where $0 \le p_c < 1$ is the desired closed-loop pole and $0 \le p_o < 1$ is the observer pole. Sketch in the $(p_c, p_o)$-plane the region which will give a stable controller $F_b(z) = \frac{S(z)}{R(z)}$!
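One way to check a point in that region (an illustrative sketch, not part of the original notebook): the controller pole is $z = -r_1 = 2p_c + p_o - 1$, so $F_b$ is stable exactly when $0 < 2p_c + p_o < 2$.
```
def controller_stable(pc, po):
    # R(z) = z + r1 with r1 = 1 - 2*pc - po; stable iff |r1| < 1
    r1 = 1 - 2 * pc - po
    return abs(r1) < 1

print(controller_stable(0.7, 0.5))  # True  (2*0.7 + 0.5 = 1.9 lies inside (0, 2))
print(controller_stable(0.0, 0.0))  # False (2*0 + 0 = 0 is on the boundary)
```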
## Simulate a particular case
Let $b_u=1$, $p_c = p_o = \frac{2}{3}$. Analyze the closed-loop system by simulation
```
import control
import control.matlab as cm
import matplotlib.pyplot as plt
sbs = [(bu, 1), (pc, 2.0/3.0), (po, 2.0/3.0)]
Rcoeffs = [1, float(sol[r1].subs(sbs))]
Scoeffs = [float(sol[s0].subs(sbs)), float(sol[s1].subs(sbs))]
Tcoeffs = float(t0.subs(sbs))*np.array([1, -float(po.subs(sbs))])  # T(z) = t0*Ao(z) = t0*(z - po)
Acoeffs = [1, -1]
H = cm.tf(float(bu.subs(sbs)), Acoeffs, 1)
Ff = cm.tf(Tcoeffs, Rcoeffs, 1)
Fb = cm.tf(Scoeffs, Rcoeffs, 1)
Haa = cm.tf(1, [1, 0], 1) # The delay due to the anti-aliasing filter
Gc = cm.minreal(Ff*cm.feedback(H, Haa*Fb ))
Gcv = cm.feedback(H, Haa*Fb)
# Pulse trf fcn from command signal to control signal
Gcu = Ff*cm.feedback(1, H*Haa*Fb)
cm.pzmap(Fb)
tvec = np.arange(40)
(t1, y1) = control.step_response(Gc,tvec)
plt.figure(figsize=(14,4))
plt.step(t1, y1[0])
plt.xlabel('k')
plt.ylabel('y')
plt.title('Output')
(t1, y1) = control.step_response(Gcv,tvec)
plt.figure(figsize=(14,4))
plt.step(t1, y1[0])
plt.xlabel('k')
plt.ylabel('y')
plt.title('Output')
```
```
import pandas as pd
df = pd.read_csv(filepath_or_buffer='https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv', sep='\t').iloc[:100,:]
df.head()
```
## How many items are in each order?
```
mask = df['order_id'] == 1
df[mask]
df[mask].quantity
df[mask].quantity.sum()
mask = df['order_id'] == 2
df[mask]
df[mask].quantity
df[mask].quantity.sum()
mask = df['order_id'] == 3
df[mask]
df[mask].quantity
df[mask].quantity.sum()
mask = df['order_id'] == pepa
df[mask]
df[mask].quantity
df[mask].quantity.sum()
for pepa in [1,2,3]:
    mask = df['order_id'] == pepa
    df[mask]
    df[mask].quantity
    df[mask].quantity.sum()
for pepa in [1,2,3, 4, 5, 6,7,8,9]: # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    print(df[mask].quantity.sum())
n_productos_pedidos = []
for pepa in [1,2,3,4,5,6,7,8,9]: # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    n_productos_pedidos.append(df[mask].quantity.sum())
n_productos_pedidos
n_productos_pedidos = []
for pepa in [1,2,3,4,5,6,7,8,9]: # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    n_productos_pedidos.append(df[mask])
n_productos_pedidos[0]
n_productos_pedidos[1]
n_productos_pedidos[2]
n_productos_pedidos[3]
n_productos_pedidos[4]
n_productos_pedidos
dic['order 1'] = 5
dic
dic['order 2'] = 3
dic
dic_pedidos = {}
dic_pedidos
dic_pedidos['order 1'] = 4
dic_pedidos
dic_pedidos[1] = 4
dic_pedidos
dic_pedidos['clave'] = 89
dic_pedidos = {}
for pepa in [1,2,3,4,5,6,7,8,9]: # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    dic_pedidos[pepa] = df[mask].quantity.sum()
dic_pedidos
df
dic_pedidos = {}
for pepa in [1,2,3,4,5,6,7,8,9]: # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    dic_pedidos[pepa] = df[mask].quantity.sum()
dic_pedidos
df
df['order_id']
pepas = df['order_id'].unique()
for pepa in pepas:
import numpy as np  # needed for the array literal pasted from df['order_id'].unique()
dic_pedidos = {}
for pepa in np.array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
                      18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
                      35, 36, 37, 38, 39, 40, 41, 42, 43, 44]): # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    dic_pedidos[pepa] = df[mask].quantity.sum()
dic_pedidos
dic_pedidos = {}
for pepa in df['order_id'].unique(): # pepa = 1, # pepa = 2
    mask = df['order_id'] == pepa # mask = df['order_id'] == 1, mask = df['order_id'] == 2
    dic_pedidos[pepa] = df[mask].quantity.sum()
dic_pedidos
dfg = df.groupby('order_id')
dfg.get_group(1).quantity.sum()
dfg.get_group(2)
df.quantity
for i in ...
import seaborn as sns
df = sns.load_dataset('mpg')
df
import matplotlib.pyplot as plt
plt.scatter(x='weight', y='mpg', data=df)
mask_usa = df.origin == 'usa'
mask_japan = df.origin == 'japan'
plt.scatter(x='weight', y='mpg', data=df[mask_usa])
plt.scatter(x='weight', y='mpg', data=df[mask_japan])
'x'
x
mask = df.origin == x
plt.scatter(x='weight', y='mpg', data=df[mask])
for x in ['usa','japan']:
    mask = df.origin == x
    plt.scatter(x='weight', y='mpg', data=df[mask])
df
paises = df.origin.unique()
paises
for x in paises:
    mask = df.origin == x
    plt.scatter(x='weight', y='mpg', data=df[mask])
for x in df.cylinders.unique():
    mask = df.cylinders == x
    plt.scatter(x='weight', y='mpg', data=df[mask])
dic = {}
for x in df.cylinders.unique():
    mask = df.cylinders == x
    plt.scatter(x='weight', y='mpg', data=df[mask])
    dic[x] = len(df[mask].cylinders.unique())
dic
dfsel = df[df['order_id'] == 1]
n_pedidos = dfsel.shape[0]
dfsel
dfsel = df[df['order_id'] == 2]
n_pedidos = dfsel.shape[0]
dfsel
dfsel = df[df['order_id'] == 3]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos = {}
dfsel = df[df['order_id'] == 1]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos[1] = n_pedidos
dic_pedidos
dfsel = df[df['order_id'] == 2]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos[2] = n_pedidos
dic_pedidos
dfsel = df[df['order_id'] == 3]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos[3] = n_pedidos
dic_pedidos
dic_pedidos = {}
dfsel = df[df['order_id'] == 3]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos[3] = n_pedidos
dic_pedidos
dic_pedidos = {}
dfsel = df[df['order_id'] == pepa]
n_pedidos = dfsel.shape[0]
dfsel
dic_pedidos[pepa] = n_pedidos
dic_pedidos
dic_pedidos = {}
for pepa in [1,2,3]:
    dfsel = df[df['order_id'] == pepa]
    n_pedidos = dfsel.shape[0]
    dfsel
    dic_pedidos[pepa] = n_pedidos
dic_pedidos
dic_pedidos
df
df.order_id
pedidos = df.order_id.unique()
dic_pedidos = {}
for pepa in pedidos:
    dfsel = df[df['order_id'] == pepa]
    n_pedidos = dfsel.shape[0]
    dfsel
    dic_pedidos[pepa] = n_pedidos
dic_pedidos
dic_pedidos
pd.Series(dic_pedidos)
df.order_id.value_counts()
df.order_id.value_counts(sort=False)
df.order_id.value_counts(sort=False)
dic_pedidos = {}
for pepa in pedidos:
    dfsel = df[df['order_id'] == pepa]
    n_pedidos = dfsel.shape[0]
    dic_pedidos[pepa] = n_pedidos
pd.Series(dic_pedidos)
df.item_price.sum()
for i in df.item_price:
    print(i)
i = '$2.39 '
clean_i = i.replace('$', '').replace(' ', '')
float_i = float(clean_i)
float_i
for i in df.item_price:
    clean_i = i.replace('$', '').replace(' ', '')
    float_i = float(clean_i)
    float_i
float_i
lista_precios = []
for i in df.item_price:
    clean_i = i.replace('$', '').replace(' ', '')
    float_i = float(clean_i)
    lista_precios.append(float_i)
lista_precios
df['precio'] = lista_precios
df
df.item_price.sum()
df.precio.sum()
df
for item in df.choice_description:
    print(item)
item = '[Tomatillo-Red Chili Salsa (Hot), [Black Beans, Rice, Cheese, Sour Cream]]'
item = item.replace('[', '').replace(']', '')
item
lista_item = item.split(', ')
lista_item
for item in df.choice_description:
    item = '[Tomatillo-Red Chili Salsa (Hot), [Black Beans, Rice, Cheese, Sour Cream]]'
    item = item.replace('[', '').replace(']', '')
    item
    lista_item = item.split(', ')
    lista_item
for item in df.choice_description:
    item = item.replace('[', '').replace(']', '')
    item
    lista_item = item.split(', ')
    lista_item
for item in df.choice_description:
    print(item)
    item = item.replace('[', '').replace(']', '')
    item
    lista_item = item.split(', ')
    lista_item
import numpy as np
np.nan
np.nan.replace()
if type(item) == float:
    print('bingo')
else:
    print('nanai')
for item in df.choice_description:
    if type(item) == float:
        item
    else:
        item = item.replace('[', '').replace(']', '')
        item
        lista_item = item.split(', ')
        lista_item
for item in df.choice_description:
    if type(item) == float:
        item
    else:
        item = item.replace('[', '').replace(']', '')
        item
        lista_item = item.split(', ')
        print(lista_item)
for item in df.choice_description:
    if type(item) == float:
        lista_item = []
    else:
        item = item.replace('[', '').replace(']', '')
        item
        lista_item = item.split(', ')
    print(lista_item)
lista_todos = []
for item in df.choice_description:
    if type(item) == float:
        lista_item = []
        lista_todos.append(lista_item)
    else:
        item = item.replace('[', '').replace(']', '')
        item
        lista_item = item.split(', ')
        lista_todos.append(lista_item)
lista_todos
lista_todos
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
bag_of_words = CountVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(lista_todos)
bag_of_words.get
vec = TfidfVectorizer()
lista_todos = [','.join(i) for i in lista_todos]
vec.fit(lista_todos)
data = vec.transform(lista_todos).toarray()
vec.get_feature_names_out()
pd.DataFrame(data)
pd.DataFrame(bag_of_words.toarray())
```
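The dollar-sign cleaning loop earlier in this session can also be done in one vectorized step with pandas string methods; a minimal sketch with two made-up `item_price` values:
```
import pandas as pd

s = pd.Series(['$2.39 ', '$10.98 '])  # hypothetical item_price strings
precio = s.str.replace('$', '', regex=False).str.strip().astype(float)
print(round(precio.sum(), 2))  # 13.37
```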
### Simulate the flight reservation process (MZ685 Case Vocram)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from random import random
# Constants
FLIGHTS = 1000 # The number of flights for simulation (can be changed)
CALLS = 10 # The number of calls for each flight
SEATS = 3 # The number of available seats for each flight
WTP_PUBLIC = 100 # The WTP upper bound of public employee
WTP_PRIVATE = 150 # The WTP upper bound of private individual
P = 0.5 # P: The probability of a call being public employee
# 1-P: The probability of a call being private individual
```
### Replicate the Status Quo
Assumptions:
* Single price: $65
* The WTP of public employee follows a discrete uniform distribution [1, 100]
* The WTP of private individual follows a discrete uniform distribution [1, 150]
* A strict first-call first-serve system: whoever called and made a reservation for a specific departure would be given a seat, provided one was available.
Simulation results are expected to be in line with the fact:
Historically, 40% of the clients were scientists/researchers/public employees and 60% were private individuals.
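This split can also be sanity-checked analytically (a rough sketch that ignores the seat limit, not part of the original case): at the $65 price a public caller buys with probability 36/100 and a private caller with probability 86/150, so the expected share of public buyers is close to the historical 40%.
```
p_public = 36 / 100    # P(public WTP >= 65), WTP uniform on {1..100}
p_private = 86 / 150   # P(private WTP >= 65), WTP uniform on {1..150}
share_public = 0.5 * p_public / (0.5 * p_public + 0.5 * p_private)
print(round(share_public * 100))  # 39
```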
```
def one_flight(flight_no, price, reservations):
    """Simulate the reservation process for one flight.

    Parameters
    ----------
    flight_no: flight sequence number for bookkeeping
    price: the quoted ticket price
    reservations: list collecting (flight_no, whocall, wtp) records
    """
    seats_left = SEATS
    for i in range(CALLS):
        whocall = [0, 1][random() > P]
        if whocall == 0:  # 0-public employee
            wtp = np.random.randint(1, WTP_PUBLIC + 1)
            if wtp >= price:
                seats_left -= 1
                reservations.append((flight_no, whocall, wtp))
                if seats_left == 0:
                    break
        elif whocall == 1:  # 1-private individual
            wtp = np.random.randint(1, WTP_PRIVATE + 1)
            if wtp >= price:
                seats_left -= 1
                reservations.append((flight_no, whocall, wtp))
                if seats_left == 0:
                    break

def revenue(price, more_info=False):
    """Simulate FLIGHTS flights and return the average revenue per flight."""
    reservations = []
    for i in range(1, FLIGHTS + 1):
        one_flight(i, price, reservations)
    avg_revenue = price * len(reservations) / FLIGHTS
    if more_info == True:
        print('The # of available seats in {} flights is {}. The # of seats sold is {}.'
              .format(FLIGHTS, FLIGHTS*SEATS, len(reservations)))
        pcnt = ((pd.DataFrame(reservations, columns=['flight_no', 'whocall', 'wtp'])).
                groupby('whocall').size()) / len(reservations) * 100
        title = ('Price: \$' + str(price) + ', Average revenue per flight: \$' + str(round(avg_revenue)) +
                 '\n' + ' %' + str(round(len(reservations) / (FLIGHTS*SEATS) * 100)) + ' seats were sold')
        ax = pcnt.plot(kind='bar', title=title, figsize=(8,5))
        ax.set_xlabel("0-public employee 1-private individual")
        ax.set_ylabel("Percent (%)")
    return avg_revenue
revenue(65, True)
```
### Get the Optimal Single Price
```
# Step 1: Get a rough range of the optimal single price
optimal_price, max_revenue = 1, 1
for p in range(60, 150):
    r = revenue(p)
    if max_revenue < r:
        optimal_price, max_revenue = p, r
print(optimal_price, max_revenue)
# Step 2: Run the simulation many times within a smaller price range
optima = []
for i in range(100):
    optimal_price, max_revenue = 1, 1
    for p in range(75, 90):
        r = revenue(p)
        if max_revenue < r:
            optimal_price, max_revenue = p, r
    optima.append((optimal_price, max_revenue))
# Step 3: Average the results from Step 2 to get the optimal single price
(pd.DataFrame(optima, columns=['price', 'revenue'])).mean()
revenue(82, True)
```
### Two-tier Pricing
Assumptions:
* Stu will ask a customer calling in which type they belong to. For a public employee, he will quote the low price; for a private individual, he will quote the high price.
* Stu doesn't set a booking limit for public employee.
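Before running the simulation, here is a rough analytic sketch of the expected revenue per call under two-tier pricing (it ignores the seat limit, so it is not the simulation's exact objective, just a plausibility check):
```
def expected_revenue_per_call(high, low):
    # WTP uniform on {1..100} (public) and {1..150} (private); each caller
    # type arrives with probability 0.5; a caller buys if WTP >= quoted price
    p_public = (100 - low + 1) / 100
    p_private = (150 - high + 1) / 150
    return 0.5 * low * p_public + 0.5 * high * p_private

print(round(expected_revenue_per_call(98, 73), 2))  # 27.53
```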
```
def one_flight_two_tier(flight_no, high_price, low_price, reservations, booking_limit=SEATS):
    """Simulate the reservation process for one flight with a two-tier pricing system.

    Parameters
    ----------
    flight_no: flight sequence number for bookkeeping
    high_price: price for private individual
    low_price: price for public employee
    booking_limit: the max number of seats allocated to public employees per flight
    """
    seats_left = SEATS
    seats_for_public = booking_limit
    for i in range(CALLS):
        whocall = [0, 1][random() > P]
        if whocall == 0:  # 0-public employee
            wtp = np.random.randint(1, WTP_PUBLIC + 1)
            if seats_for_public == 0:
                continue
            if wtp >= low_price:
                seats_left -= 1
                seats_for_public -= 1
                reservations.append((flight_no, whocall, wtp))
                if seats_left == 0:
                    break
        elif whocall == 1:  # 1-private individual
            wtp = np.random.randint(1, WTP_PRIVATE + 1)
            if wtp >= high_price:
                seats_left -= 1
                reservations.append((flight_no, whocall, wtp))
                if seats_left == 0:
                    break

def revenue_two_tier(high_price, low_price, booking_limit=SEATS, more_info=False):
    """Return the average revenue per flight."""
    reservations = []
    for i in range(1, FLIGHTS + 1):
        one_flight_two_tier(i, high_price, low_price, reservations, booking_limit)
    df = pd.DataFrame(reservations, columns=['flight_no', 'whocall', 'wtp'])
    avg_revenue = (high_price * len(df[df.whocall==1]) + low_price * len(df[df.whocall==0])) / FLIGHTS
    if more_info == True:
        print('The # of available seats in {} flights is {}. The # of seats sold is {}.'
              .format(FLIGHTS, FLIGHTS*SEATS, len(reservations)))
        pcnt = df.groupby('whocall').size() / len(reservations) * 100
        title = ('High Price: \$' + str(high_price) + ', Low Price: \$' + str(low_price) +
                 '\nAverage revenue per flight: \$' + str(round(avg_revenue)) +
                 '\n' + ' %' + str(round(len(reservations) / (FLIGHTS*SEATS) * 100)) + ' seats were sold')
        ax = pcnt.plot(kind='bar', title=title, figsize=(8,5))
        ax.set_xlabel("0-public employee 1-private individual")
        ax.set_ylabel("Percent (%)")
    return avg_revenue
```
### Get the Optimal Two-tier System
```
# Step 1: Get rough ranges of the optimal two prices
optimal_high_price, optimal_low_price, max_revenue = 1, 1, 1
for high in range(70, 150):
    for low in range(50, 100):
        r = revenue_two_tier(high, low)
        if max_revenue < r:
            max_revenue = r
            optimal_high_price, optimal_low_price = high, low
print(optimal_high_price, optimal_low_price, max_revenue)
# Step 2: Run the simulation many times within smaller price ranges
N = 100
optima = []
for i in range(N):
    optimal_high_price, optimal_low_price, max_revenue = 1, 1, 1
    for high in range(90, 110):
        for low in range(65, 80):
            r = revenue_two_tier(high, low)
            if max_revenue < r:
                max_revenue = r
                optimal_high_price, optimal_low_price = high, low
    optima.append((optimal_high_price, optimal_low_price, max_revenue))
# Step 3: Average the results from Step 2 to get the optimal two prices
(pd.DataFrame(optima, columns=['high_price', 'low_price', 'revenue'])).mean()
revenue_two_tier(98, 73, more_info=True)
```
```
import warnings
warnings.filterwarnings('ignore')
import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
from nltk.corpus import stopwords
import pandas as pd
import numpy as np
from glove import Glove
from sklearn.preprocessing import LabelEncoder
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from keras.models import Sequential
from keras.layers import Dense
import geopandas as gpd
import os
import json
import h5py
labelEncoder = LabelEncoder()
one_enc = OneHotEncoder()
lemma = nltk.WordNetLemmatizer()
```
## Manual Classification
```
#Dir = '/mnt/d/Dropbox/Ranee_Joshi_PhD_Local/04_PythonCodes/dh2loop_old/shp_NSW'
#DF=litho_Dataframe(Dir)
#DF.to_csv('export.csv')
DF = pd.read_csv('/mnt/d/Dropbox/Ranee_Joshi_PhD_Local/04_PythonCodes/dh2loop/notebooks/Upscaled_Litho_Test2.csv')
DF['FromDepth'] = pd.to_numeric(DF.FromDepth)
DF['ToDepth'] = pd.to_numeric(DF.ToDepth)
DF['TopElev'] = pd.to_numeric(DF.TopElev)
DF['BottomElev'] = pd.to_numeric(DF.BottomElev)
DF['x'] = pd.to_numeric(DF.x)
DF['y'] = pd.to_numeric(DF.y)
print('number of original litho classes:', len(DF.MajorLithCode.unique()))
print('number of litho classes :',
len(DF['reclass'].unique()))
print('unclassified descriptions:',
len(DF[DF['reclass'].isnull()]))
def save_file(DF, name):
    '''Function to save a manually reclassified dataframe
    Inputs:
    -DF: reclassified pandas dataframe
    -name: name (string) to save the dataframe file
    '''
    DF.to_pickle('{}.pkl'.format(name))

save_file(DF, 'manualTest_ygsb')
```
## MLP Classification
```
def load_geovec(path):
    instance = Glove()
    with h5py.File(path, 'r') as f:
        v = np.zeros(f['vectors'].shape, f['vectors'].dtype)
        f['vectors'].read_direct(v)
        dct = f['dct'][()].tostring().decode('utf-8')
        dct = json.loads(dct)
    instance.word_vectors = v
    instance.no_components = v.shape[1]
    instance.word_biases = np.zeros(v.shape[0])
    instance.add_dictionary(dct)
    return instance
# Stopwords
extra_stopwords = [
'also',
]
stop = stopwords.words('english') + extra_stopwords
def tokenize(text, min_len=1):
    '''Function that tokenizes a set of strings
    Input:
    -text: set of strings
    -min_len: minimum token length
    Output:
    -list containing the set of tokens'''
    tokens = [word.lower() for sent in nltk.sent_tokenize(text)
              for word in nltk.word_tokenize(sent)]
    filtered_tokens = []
    for token in tokens:
        if token.isalpha() and len(token) >= min_len:
            filtered_tokens.append(token)
    return [x.lower() for x in filtered_tokens if x not in stop]
def tokenize_and_lemma(text, min_len=0):
    '''Function that retrieves lemmatised tokens
    Inputs:
    -text: set of strings
    -min_len: minimum token length
    Outputs:
    -list containing lemmatised tokens'''
    filtered_tokens = tokenize(text, min_len=min_len)
    lemmas = [lemma.lemmatize(t) for t in filtered_tokens]
    return lemmas
def get_vector(word, model, return_zero=False):
    '''Function that retrieves a word embedding (vector)
    Inputs:
    -word: token (string)
    -model: trained word-embedding (GloVe/GeoVec) model
    -return_zero: if True, return a near-zero vector for out-of-vocabulary words
    Outputs:
    -wv: numpy array (vector)'''
    epsilon = 1.e-10
    unk_idx = model.dictionary['unk']
    idx = model.dictionary.get(word, unk_idx)
    wv = model.word_vectors[idx].copy()
    if return_zero and word not in model.dictionary:
        n_comp = model.word_vectors.shape[1]
        wv = np.zeros(n_comp) + epsilon
    return wv
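# A tiny self-contained sketch of the lookup-with-'unk'-fallback logic used by
# get_vector above (DummyModel and its two-word vocabulary are hypothetical,
# for illustration only):
import numpy as np
class DummyModel:
    dictionary = {'unk': 0, 'sandstone': 1}
    word_vectors = np.array([[0.0, 0.0], [1.0, 2.0]])
_dummy = DummyModel()
# a known word maps to its own vector ...
print(_dummy.word_vectors[_dummy.dictionary.get('sandstone', _dummy.dictionary['unk'])])  # [1. 2.]
# ... an out-of-vocabulary word falls back to the 'unk' vector
print(_dummy.word_vectors[_dummy.dictionary.get('granite', _dummy.dictionary['unk'])])  # [0. 0.]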
def mean_embeddings(dataframe_file, model):
    '''Function to retrieve sentence embeddings from a dataframe with
    lithological descriptions.
    Inputs:
    -dataframe_file: pandas dataframe containing lithological descriptions
     and reclassified lithologies
    -model: word embeddings model generated using GloVe
    Outputs:
    -DF: pandas dataframe including sentence embeddings'''
    DF = pd.read_pickle(dataframe_file)
    DF = DF.drop_duplicates(subset=['x', 'y', 'z'])
    DF['tokens'] = DF['Description'].apply(lambda x: tokenize_and_lemma(x))
    DF['length'] = DF['tokens'].apply(lambda x: len(x))
    DF = DF.loc[DF['length'] > 0]
    DF['vectors'] = DF['tokens'].apply(lambda x: np.asarray([get_vector(n, model) for n in x]))
    DF['mean'] = DF['vectors'].apply(lambda x: np.mean(x[~np.all(x == 1.e-10, axis=1)], axis=0))
    DF['reclass'] = pd.Categorical(DF.reclass)
    DF['code'] = DF.reclass.cat.codes
    DF['drop'] = DF['mean'].apply(lambda x: (~np.isnan(x).any()))
    DF = DF[DF['drop']]
    return DF
# loading word embeddings model
# (This can be obtained from https://github.com/spadarian/GeoVec )
#modelEmb = Glove.load('/home/ignacio/Documents/chapter2/best_glove_300_317413_w10_lemma.pkl')
modelEmb = load_geovec('geovec_300d_v1.h5')
# getting the mean embeddings of descriptions
DF = mean_embeddings('manualTest_ygsb.pkl', modelEmb)
DF2 = DF[DF['code'].isin(DF['code'].value_counts()[DF['code'].value_counts()>2].index)]
print(DF2)
def split_stratified_dataset(Dataframe, test_size, validation_size):
'''Function that split dataset into test, training and validation subsets
Inputs:
-Dataframe: pandas dataframe with sentence mean_embeddings
-test_size: decimal number to generate the test subset
-validation_size: decimal number to generate the validation subset
Outputs:
-X: numpy array with embeddings
-Y: numpy array with lithological classes
-X_test: numpy array with embeddings for test subset
-Y_test: numpy array with lithological classes for test subset
-Xt: numpy array with embeddings for training subset
-yt: numpy array with lithological classes for training subset
-Xv: numpy array with embeddings for validation subset
-yv: numpy array with lithological classes for validation subset
'''
#df2 = Dataframe[Dataframe['code'].isin(Dataframe['code'].value_counts()[Dataframe['code'].value_counts()>2].index)]
#X = np.vstack(df2['mean'].values)
#Y = df2.code.values.reshape(len(df2.code), 1)
X = np.vstack(Dataframe['mean'].values)
Y = Dataframe.code.values.reshape(len(Dataframe.code), 1)
#print(X.shape)
#print (Dataframe.code.values.shape)
#print (len(Dataframe.code))
#print (Y.shape)
X_train, X_test, y_train, y_test = train_test_split(X,
Y, stratify=Y,
test_size=test_size,
random_state=42)
#print(X_train.shape)
#print(Y_train.shape)
Xt, Xv, yt, yv = train_test_split(X_train,
y_train,
test_size=validation_size,
stratify=None,  # note: unlike the test split, the validation split is not stratified
random_state=1)
return X, Y, X_test, y_test, Xt, yt, Xv, yv
# subsetting the dataset for training the classifier
X, Y, X_test, Y_test, X_train, Y_train, X_validation, Y_validation = split_stratified_dataset(DF2, 0.1, 0.1)
# encoding lithological classes
encodes = one_enc.fit_transform(Y_train).toarray()
# MLP model generation
model = Sequential()
model.add(Dense(100, input_dim=300, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(units=len(DF2.code.unique()), activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# training MLP model
model.fit(X_train, encodes, epochs=30, batch_size=100, verbose=2)
# saving MLP model
model.save('mlp_prob_model.h5')
def retrieve_predictions(classifier, x):
'''Function that retrieves lithological classes using the trained classifier
Inputs:
-classifier: trained MLP classifier
-x: numpy array containing embeddings
Outputs:
-codes_pred: numpy array containing lithological classes predicted'''
preds = classifier.predict(x, verbose=0)
new_onehot = np.zeros((x.shape[0], 72))  # 72 = width of the one-hot encoding produced by one_enc
new_onehot[np.arange(len(preds)), preds.argmax(axis=1)] = 1
codes_pred = one_enc.inverse_transform(new_onehot)
return codes_pred
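# The zeros/argmax/inverse_transform round trip above, shown on toy data
# (a sketch; demo_enc and demo_preds are illustrative stand-ins for one_enc
# and the classifier's predicted probabilities):
import numpy as np
from sklearn.preprocessing import OneHotEncoder
demo_enc = OneHotEncoder()
demo_enc.fit(np.array([[0], [1], [2]]))
demo_preds = np.array([[0.1, 0.7, 0.2]])  # predicted class probabilities
demo_onehot = np.zeros_like(demo_preds)
demo_onehot[np.arange(len(demo_preds)), demo_preds.argmax(axis=1)] = 1
demo_codes = demo_enc.inverse_transform(demo_onehot)
# demo_codes -> array([[1]])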
def classifier_assess(classifier, x, y):
'''Function that prints the performance of the classifier
Inputs:
-classifier: trained MLP classifier
-x: numpy array with embeddings
-y: numpy array with lithological classes predicted'''
Y2 = retrieve_predictions(classifier, x)
print('f1 score: ', metrics.f1_score(y, Y2, average='macro'),
'accuracy: ', metrics.accuracy_score(y, Y2),
'balanced_accuracy:', metrics.balanced_accuracy_score(y, Y2))
def save_predictions(Dataframe, classifier, x, name):
'''Function that saves dataframe predictions as a pickle file
Inputs:
-Dataframe: pandas dataframe with mean_embeddings
-classifier: trained MLP model,
-x: numpy array with embeddings,
-name: string name to save dataframe
Outputs:
-save dataframe'''
preds = classifier.predict(x, verbose=0)
Dataframe['predicted_probabilities'] = preds.tolist()
Dataframe['pred'] = retrieve_predictions(classifier, x).astype(np.int32)
Dataframe[['x', 'y', 'FromDepth', 'ToDepth', 'TopElev', 'BottomElev',
'mean', 'predicted_probabilities', 'pred', 'reclass', 'code']].to_pickle('{}.pkl'.format(name))
# assessment of model performance
classifier_assess(model, X_validation, Y_validation)
# save lithological prediction likelihoods dataframe
save_predictions(DF2, model, X, 'YGSBpredictions')
import pickle
with open('YGSBpredictions.pkl', 'rb') as f:
data = pickle.load(f)
print(data)
len(data)
data.head()
tmp = data['predicted_probabilities'][0]
len(tmp)
data.to_csv('YGSBpredictions.csv')
import base64
import csv

obj = ['example']  # any picklable object
with open('a.csv', 'a', encoding='utf8') as csv_file:
    wr = csv.writer(csv_file, delimiter='|')
    pickle_bytes = pickle.dumps(obj)            # raw pickle bytes: unsafe to write directly
    b64_bytes = base64.b64encode(pickle_bytes)  # safe to write, but still bytes
    b64_str = b64_bytes.decode('utf8')          # safe and in utf8
    wr.writerow(['col1', 'col2', b64_str])
# the file now contains a line like
# col1|col2|gANdcQAu
with open('a.csv', 'r') as csv_file:
    for line in csv_file:
        line = line.strip('\n')
        b64_str = line.split('|')[2]                   # take the pickled column
        obj = pickle.loads(base64.b64decode(b64_str))  # decode and unpickle
```
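Since `pickle.loads` will execute arbitrary code embedded in a malicious payload, the base64 round trip above is only safe for files you created yourself. For plain data structures, storing JSON in the column is a safer sketch (the object and column names below are illustrative, and an in-memory buffer stands in for the file):

```python
import csv
import io
import json

# Sketch: store a structured object in a CSV column as JSON instead of a
# base64-encoded pickle. JSON is already text-safe and does not execute
# code when loaded.
obj = {'codes': [1, 2, 3], 'name': 'sandstone'}
buf = io.StringIO()
writer = csv.writer(buf, delimiter='|')
writer.writerow(['col1', 'col2', json.dumps(obj)])
buf.seek(0)
row = next(csv.reader(buf, delimiter='|'))
restored = json.loads(row[2])
# restored == {'codes': [1, 2, 3], 'name': 'sandstone'}
```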
```
# https://community.plotly.com/t/different-colors-for-bars-in-barchart-by-their-value/6527/7
%reset
# Run this app with `python app.py` and
# visit http://127.0.0.1:8050/ in your web browser.
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px
import jupyter_dash
import pandas as pd
from dash.dependencies import Input, Output
import plotly.graph_objects as go
from plotly.subplots import make_subplots
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
top_artists_df = pd.read_csv('~/qs/lastfm/data/lastfm_top_artists.csv')
top_tracks_df = pd.read_csv('~/qs/lastfm/data/lastfm_top_tracks.csv')
top_albums_df = pd.read_csv('~/qs/lastfm/data/lastfm_top_albums.csv')
new_top_artists_df = pd.read_csv('~/qs/lastfm/data/lastfm_top_artists_with_tags.csv', usecols=[1, 2, 3])
new_top_artists_df
top_artists_df = new_top_artists_df
top_artists_df
print('Top Artists')
print(f"{top_artists_df.head(5)} \n")
# print('Top Tracks')
# print(f"{top_tracks_df.head(5)} \n")
# print('Top Albums')
# print(top_albums_df.head(5))
top_artists_df.tail(5)
df = top_artists_df
# total_songs = top_artists_df['play_count'].sum()
# total_songs
# total_artists = len(top_artists_df['artist'])
# total_artists
# def num_unique_tags(df):
# unique_tags = []
# for tags, num_artists_in_tags in df.groupby('tags'):
# unique_tags.append(len(num_artists_in_tags))
# unique_tags.sort(reverse=True)
# return len(unique_tags)
# num_unique_tags(top_artists_df)
metal = 'metal|core|sludge'
rock = 'rock'
len(df.loc[df['tags'].str.contains(metal)])
# if you want custom colors for certain words in the tags
#https://stackoverflow.com/questions/23400743/pandas-modify-column-values-in-place-based-on-boolean-array
#px.colors.qualitative.Plotly
custom_colors = ['#EF553B'] * 200
df['colors'] = custom_colors
metal = 'metal|core|sludge'
df.loc[df['tags'].str.contains(metal), 'colors'] = '#636EFA'
rock = 'rock|blues'
df.loc[df['tags'].str.contains(rock), 'colors'] = '#00CC96'
punk = 'punk'
df.loc[df['tags'].str.contains(punk), 'colors'] = '#AB63FA'
alternative = 'alternative'
df.loc[df['tags'].str.contains(alternative), 'colors'] = '#FFA15A'
indie = 'indie'
df.loc[df['tags'].str.contains(indie), 'colors'] = '#19D3F3'
billy = 'billy'
df.loc[df['tags'].str.contains(billy), 'colors'] = '#FF6692'
rap = 'rap|hip|rnb'
df.loc[df['tags'].str.contains(rap), 'colors'] = '#B6E880'
pop = 'pop|soul'
df.loc[df['tags'].str.contains(pop), 'colors'] = '#FF97FF'
electronic = 'electronic|synthwave'
df.loc[df['tags'].str.contains(electronic), 'colors'] = '#FECB52'
df
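# The repeated .loc assignments above can be collapsed into one dict-driven
# loop. Sketch on a throwaway frame (demo names are illustrative; in the
# notebook the frame would be df and the dict would list every tag pattern):
import pandas as pd
demo = pd.DataFrame({'tags': ['thrash metal', 'indie rock', 'dream pop'],
                     'colors': ['#EF553B'] * 3})
demo_tag_colors = {'metal|core|sludge': '#636EFA',
                   'rock|blues': '#00CC96',
                   'pop|soul': '#FF97FF'}
for pattern, color in demo_tag_colors.items():
    demo.loc[demo['tags'].str.contains(pattern), 'colors'] = color
# demo['colors'] -> ['#636EFA', '#00CC96', '#FF97FF']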
title = 'Last.fm Dashboard'
#https://stackoverflow.com/questions/22291395/sorting-the-grouped-data-as-per-group-size-in-pandas
df_grouped = sorted(df.groupby('tags'), key=lambda x: len(x[1]), reverse=True)
artist = df['artist']
play_count = df['play_count']
tags = df['tags']
total_songs = df['play_count'].sum()
total_artists = len(df['artist'])
num_unique_tags = len(df.groupby('tags'))
fig = go.Figure()
def make_traces(df):
# note: the parameter is shadowed below; the loop iterates the global df_grouped
for tags, df in df_grouped:
num_tags = len(df)
fig.add_trace(go.Bar(y=df['artist'],
x=df['play_count'],
orientation='h',
text=df['play_count'],
textposition='outside',
name=f"{tags} ({num_tags})",
customdata = df['tags'],
hovertemplate =
"Artist: %{y}<br>" +
"Play Count: %{x}<br>" +
"Tag: %{customdata}" +
"<extra></extra>",
# for custom colors
marker_color=df['colors'],
#https://community.plotly.com/t/different-colors-for-bars-in-barchart-by-their-value/6527/4
#marker={'color': colors[tags]},
showlegend=True,
))
make_traces(top_artists_df)
fig.update_layout(title=dict(text=title,
yanchor="top",
y=.95,
xanchor="left",
x=.075),
legend_title=f'Unique Tags: ({num_unique_tags})',
legend_itemclick='toggle',
legend_itemdoubleclick='toggleothers',
margin_l = 240,
xaxis=dict(fixedrange=True),
# https://towardsdatascience.com/4-ways-to-improve-your-plotly-graphs-517c75947f7e
yaxis=dict(categoryorder='total descending'),
dragmode='pan',
annotations=[
#https://plotly.com/python/text-and-annotations/#adding-annotations-with-xref-and-yref-as-paper
#https://community.plotly.com/t/how-to-add-a-text-area-with-custom-text-as-a-kind-of-legend-in-a-plotly-plot/24349/3
go.layout.Annotation(
text=f'Total Songs Tracked: {total_songs}<br>Total Artists Tracked: {total_artists}',
align='left',
showarrow=False,
xref='paper',
yref='paper',
yanchor="bottom",
y=1.02,
xanchor="right",
x=1,
)])
fig.update_yaxes(title_text="Artist",
type='category',
range=[25.5, -.5],
# https://plotly.com/python/setting-graph-size/#adjusting-height-width--margins
automargin=False
)
fig.update_xaxes(title_text="Play Count ",
range=[0, 700],
dtick=100,
)
app = jupyter_dash.JupyterDash(__name__,
external_stylesheets=external_stylesheets,
title=f"{title}")
def make_footer():
return html.Div(html.Footer([
'Matthew Waage',
html.A('github.com/mcwaage1',
href='http://www.github.com/mcwaage1',
target='_blank',
style = {'margin': '.5em'}),
html.A('mcwaage1@gmail.com',
href="mailto:mcwaage1@gmail.com",
target='_blank',
style = {'margin': '.5em'}),
html.A('waage.dev',
href='http://www.waage.dev',
target='_blank',
style = {'margin': '.5em'})
], style={'position': 'fixed',
'text-align': 'right',
'left': '0px',
'bottom': '0px',
'margin-right': '10%',
'color': 'black',
'display': 'inline-block',
'background': '#f2f2f2',
'border-top': 'solid 2px #e4e4e4',
'width': '100%'}))
app.layout = html.Div([
dcc.Graph(
figure=fig,
#https://plotly.com/python/setting-graph-size/
#https://stackoverflow.com/questions/46287189/how-can-i-change-the-size-of-my-dash-graph
style={'height': '95vh'}
),
make_footer(),
])
if __name__ == '__main__':
app.run_server(mode ='external', port=8070, debug=True)
# title = '2020 Last.fm Dashboard'
# fig = go.Figure()
# fig.add_trace(go.Bar(
# x=top_artists_df['artist'],
# y=top_artists_df['play_count'],
# text=top_artists_df['play_count'],
# ))
# fig.update_traces(textposition='outside')
# fig.update_layout(title=dict(text=title,
# yanchor="top",
# y=.95,
# xanchor="left",
# x=.075),
# dragmode='pan',
# annotations=[
# #https://plotly.com/python/text-and-annotations/#adding-annotations-with-xref-and-yref-as-paper
# #https://community.plotly.com/t/how-to-add-a-text-area-with-custom-text-as-a-kind-of-legend-in-a-plotly-plot/24349/3
# go.layout.Annotation(
# text=f'Total Songs Played: {total_songs}',
# align='left',
# showarrow=False,
# xref='paper',
# yref='paper',
# yanchor="bottom",
# y=1.02,
# xanchor="right",
# x=1,
# )])
# # https://stackoverflow.com/questions/61782622/plotly-how-to-add-a-horizontal-scrollbar-to-a-plotly-express-figure
# # https://community.plotly.com/t/freeze-y-axis-while-scrolling-along-x-axis/4898/5
# fig.update_layout(
# xaxis=dict(
# rangeslider=dict(
# visible=True,
# )))
# fig.update_xaxes(title_text="Artist",
# type='category',
# range=[-.5, 25.5],
# )
# fig.update_yaxes(title_text="Play Count ",
# range=[0, 750],
# dtick=100,
# )
# app = jupyter_dash.JupyterDash(__name__,
# external_stylesheets=external_stylesheets,
# title=f"{title}")
# app.layout = html.Div([
# dcc.Graph(
# figure=fig,
# #https://plotly.com/python/setting-graph-size/
# #https://stackoverflow.com/questions/46287189/how-can-i-change-the-size-of-my-dash-graph
# style={'height': '95vh'}
# )
# ])
# if __name__ == '__main__':
# app.run_server(mode ='external', debug=True, port=8080)
import requests
import json
headers = {
'user-agent': 'mcwaage1'
}
with open("data/credentials.json", "r") as file:
credentials = json.load(file)
last_fm_cr = credentials['last_fm']
key = last_fm_cr['KEY']
username = last_fm_cr['USERNAME']
limit = 20     # the API lets you retrieve up to 200 records per call
extended = 0   # retrieve extended data for each track: 0 = no, 1 = yes
page = 1       # page of results to start retrieving at
```
### Testing out api calls of artists
```
artist = "death from above 1979"
artist = artist.replace(' ', '+')
artist
method = 'artist.gettoptags'
request_url = f'http://ws.audioscrobbler.com/2.0/?method={method}&artist={artist}&api_key={key}&limit={limit}&extended={extended}&page={page}&format=json'
response = requests.get(request_url, headers=headers)
response.status_code
artist_tags = [tag['name'] for tag in response.json()['toptags']['tag'][:3]]
artist_tags
method = 'tag.gettoptags'
request_url = f'http://ws.audioscrobbler.com/2.0/?method={method}&api_key={key}&limit={limit}&extended={extended}&page={page}&format=json'
response = requests.get(request_url, headers=headers)
response.status_code
top_tags = artist_tags
top_tags
method = 'user.getinfo'
request_url = f'http://ws.audioscrobbler.com/2.0/?method={method}&user={username}&api_key={key}&limit={limit}&extended={extended}&page={page}&format=json'
response = requests.get(request_url, headers=headers)
response.status_code
response.json()
user_info = [response.json()['user']['name']]
user_info
user_info.append(response.json()['user']['url'])
user_info.append(response.json()['user']['image'][0]['#text'])
user_info
# def make_user_info(user_info):
# return html.Div(children=[html.Img(src=f'{user_info[2]}'),
# html.A(f'{user_info[0]}',
# href=f'{user_info[1]}',
# target='_blank',
# style={'margin': '.5em'}
# ),
# ])
```
### End of testing
```
artists = []
artists
def get_artists():
artists = []
for artist in top_artists_df['artist']:
artist = artist.replace(' ', '+')
artists.append(artist)
return artists
artists_to_parse = get_artists()
artists_to_parse
```
# To start the API calling process and get new data
```
# replace the [:1] with [:3] or whatever for more tags of artist
artist_genre = []
for artist in artists_to_parse:
request_url = f'http://ws.audioscrobbler.com/2.0/?method=artist.gettoptags&artist={artist}&api_key={key}&limit={limit}&extended={extended}&page={page}&format=json'
response = requests.get(request_url, headers=headers)
artist_tags = [tag['name'] for tag in response.json()['toptags']['tag'][:1]]
artist_genre.append(artist_tags)
artist_genre
# https://stackoverflow.com/questions/12555323/adding-new-column-to-existing-dataframe-in-python-pandas
top_artists_df['tags'] = artist_genre
top_artists_df.head(5)
top_artists_df['tags'] = top_artists_df['tags'].astype(str)
top_artists_df['tags'] = top_artists_df['tags'].str.strip("['']")
top_artists_df['tags'] = top_artists_df['tags'].str.lower()
top_artists_df.head(5)
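# Sketch: the astype/strip/lower chain above can also be written by taking
# the first tag of each list directly (demo_tags is an illustrative stand-in
# for artist_genre; empty tag lists map to an empty string):
demo_tags = [['Thrash Metal'], ['Indie Rock'], []]
cleaned = [(tags[0].lower() if tags else '') for tags in demo_tags]
# cleaned -> ['thrash metal', 'indie rock', '']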
```
### To replace tags that you don't want
```
tags_to_replace = 'seen live|vocalists'
def get_new_artists(tags_to_replace):
artists_to_replace = []
for artist in df.loc[df['tags'].str.contains(tags_to_replace)]['artist']:
artists_to_replace.append(artist)
return artists_to_replace
get_new_artists(tags_to_replace)
tags_to_replace = 'seen live|vocalists'
def get_artists_to_replace(tags_to_replace):
artists_to_replace = []
for artist in df.loc[df['tags'].str.contains(tags_to_replace)]['artist']:
artist = artist.replace(' ', '+')
artists_to_replace.append(artist)
return artists_to_replace
get_artists_to_replace(tags_to_replace)
new_artists_to_parse = get_artists_to_replace(tags_to_replace)
new_artists_tags = []
for artist in new_artists_to_parse:
request_url = f'http://ws.audioscrobbler.com/2.0/?method=artist.gettoptags&artist={artist}&api_key={key}&limit={limit}&extended={extended}&page={page}&format=json'
response = requests.get(request_url, headers=headers)
artist_tags = [tag['name'] for tag in response.json()['toptags']['tag'][1:2]]
new_artists_tags.append(artist_tags)
new_artists_tags
new_artists_tags = [str(x) for x in new_artists_tags]
new_artists_tags = [x.strip("['']") for x in new_artists_tags]
new_artists_tags = [x.lower() for x in new_artists_tags]
new_artists_tags
for artist in get_new_artists(tags_to_replace):
print(artist)
for k, v in zip(get_new_artists(tags_to_replace), new_artists_tags):
df.loc[df['artist'].str.contains(k), 'tags'] = v
```
### End of replacing tags
```
top_artists_df.to_csv('~/qs/lastfm/data/lastfm_top_artists_with_tags.csv')
from IPython.display import display
with pd.option_context('display.max_rows', 205, 'display.max_columns', 5):
display(top_artists_df)
# unique_tags = top_artists_df['tags'].unique()
# unique_tags = pd.Series(unique_tags)
# print("Type: ", type(unique_tags))
# print('')
# for tag in unique_tags:
# print(tag)
# len(unique_tags)
# def get_sorted_tags(df):
# unique_tags = df['tags'].unique()
# unique_tags = pd.Series(unique_tags)
# sorted_tags = []
# for tag in unique_tags:
# #sorted_tags.append((top_artists_df['tags'].str.count(tag).sum(), tag))
# #sorted_tags.append(top_artists_df['tags'].str.count(tag).sum())
# sorted_tags.sort(reverse=True)
# return sorted_tags
# get_sorted_tags(top_artists_df)
# unique_tags = unique_tags.str.split()
# type(unique_tags)
# unique_tags
# for tag in unique_tags:
# print(tag, unique_tags.str.count(tag).sum())
# px.colors.qualitative.Plotly
# https://plotly.com/python/discrete-color/
#fig = px.colors.qualitative.swatches()
#https://plotly.com/python/renderers/
#fig.show(renderer='iframe')
# One way to replace value in one series based on another, better version below
# top_artists_df['colors'][top_artists_df['tags'].str.contains('metal')] = '#636EFA'
# top_tags
#https://stackoverflow.com/questions/23400743/pandas-modify-column-values-in-place-based-on-boolean-array
px.colors.qualitative.Plotly
# custom_colors = ['#EF553B'] * 200
# df['colors'] = custom_colors
# df.loc[df['tags'].str.contains('metal'), 'colors'] = '#636EFA'
# df.loc[df['tags'].str.contains('rock'), 'colors'] = '#00CC96'
# df.loc[df['tags'].str.contains('punk'), 'colors'] = '#AB63FA'
# df.loc[df['tags'].str.contains('alternative'), 'colors'] = '#FFA15A'
# df.loc[df['tags'].str.contains('indie'), 'colors'] = '#19D3F3'
# df.loc[df['tags'].str.contains('billy'), 'colors'] = '#FF6692'
# df
# colors = {'Ugly': 'red',
# 'Bad': 'orange',
# 'Good': 'lightgreen',
# 'Great': 'darkgreen'
# }
# from IPython.display import display
# with pd.option_context('display.max_rows', 265, 'display.max_columns', 5):
# display(top_artists_df)
# for tag, artist in top_artists_df.groupby('tags'):
# print(tag, len(artist))
# print(artist)
# print('')
# top_artists_df.loc[top_artists_df['tags'].str.contains('metal')]
# type(top_artists_df.loc[top_artists_df['tags'].str.contains('metal')])
# len(top_artists_df.loc[top_artists_df['tags'].str.contains('metal')])
# def print_tags(df):
# printed_tags = []
# for tags, top_artists in top_artists_df.groupby('tags'):
# printed_tags.append([len(top_artists), tags])
# printed_tags.sort(reverse=True)
# return printed_tags
# print_tags(top_artists_df)
```
# Hawaii - A Climate Analysis And Exploration
### For data between August 23, 2016 and August 23, 2017
---
```
# Import dependencies
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
```
## Reflect Tables into SQLAlchemy ORM
```
# Set up the query engine (pass echo=True to log all executed SQL)
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# Reflect an existing database into a new model
Base = automap_base()
# Reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Another way to get table names from SQLite
inspector = inspect(engine)
inspector.get_table_names()
```
## Exploratory Climate Analysis
```
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Display details of 'measurement' table
columns = inspector.get_columns('measurement')
for c in columns:
print(c['name'], c['type'])
# Display the number of rows in measurement (unpack the single-value tuple)
result, = engine.execute('SELECT COUNT(*) FROM measurement').fetchall()[0]
print(result)
# Display details of 'station' table
columns = inspector.get_columns('station')
for c in columns:
print(c['name'], c['type'])
# Display the number of rows in station (unpack the single-value tuple)
result, = engine.execute('SELECT COUNT(*) FROM station').fetchall()[0]
print(result)
# INNER JOIN THE MEASUREMENT AND STATION TABLES
# engine.execute('SELECT measurement.*, station.name, station.latitude FROM measurement INNER JOIN station ON measurement.station = station.station;').fetchall()
join_result = engine.execute('SELECT * FROM measurement INNER JOIN station ON measurement.station = station.station;').fetchall()
join_result
# Another way to PERFORM AN INNER JOIN ON THE MEASUREMENT AND STATION TABLES
engine.execute('SELECT measurement.*, station.* FROM measurement, station WHERE measurement.station=station.station;').fetchall()
# Query last date of the measurement file
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()[0]
print(last_date)
last_date_measurement = dt.date(2017, 8, 23)
# Calculate the date one year before the last measurement date
one_year_ago = last_date_measurement - dt.timedelta(days=365)
print(one_year_ago)
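# Sketch: the window could also be derived from the queried date string
# instead of hardcoding it (assumes dates are stored as 'YYYY-MM-DD'):
import datetime as dt
parsed = dt.datetime.strptime('2017-08-23', '%Y-%m-%d').date()
window_start = parsed - dt.timedelta(days=365)
# str(window_start) -> '2016-08-23'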
# Plotting precipitation data from 1 year ago
date = dt.date(2016, 8, 23)
#sel = [Measurement.id, Measurement.station, Measurement.date, Measurement.prcp, Measurement.tobs]
sel = [Measurement.date, Measurement.prcp]
print(date)
# date = "2016-08-23"
result = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date >= date).all()
# get the count / length of the list of tuples
print(len(result))
# Create a line plot and save the figure
df = pd.DataFrame(result, columns=['Date', 'Precipitation'])
df.sort_values(by=['Date'])
df.set_index('Date', inplace=True)
s = df['Precipitation']
ax = s.plot(figsize=(8,6), use_index=True, title='Precipitation Data Between 8/23/2016 - 8/23/2017')
fig = ax.get_figure()
fig.savefig('./Images/precipitation_line.png')
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
# Design a query to show how many stations are available in this dataset
session.query(Measurement.station).\
group_by(Measurement.station).count()
# Query for the most active stations (i.e. which stations have the most rows),
# listing the stations and counts in descending order
engine.execute('SELECT DISTINCT station, COUNT(id) FROM measurement GROUP BY station ORDER BY COUNT(id) DESC').fetchall()
# Query for stations from the measurement table
session.query(Measurement.station).\
group_by(Measurement.station).all()
# Using the station id from the previous query, calculate the lowest, highest,
# and average temperature recorded at the most active station
sel = [func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)]
session.query(*sel).\
filter(Measurement.station == 'USC00519281').all()
# Query the dates of the last 12 months of the most active station
last_date = session.query(Measurement.date).\
filter(Measurement.station == 'USC00519281').\
order_by(Measurement.date.desc()).first()[0]
print(last_date)
last_date_USC00519281 = dt.date(2017, 8, 18)
last_year_USC00519281 = last_date_USC00519281 - dt.timedelta(days=365)
print(last_year_USC00519281)
# SET UP HISTOGRAM QUERY AND PLOT
sel_two = [Measurement.tobs]
results_tobs_hist = session.query(*sel_two).\
filter(Measurement.date >= last_year_USC00519281).\
filter(Measurement.station == 'USC00519281').all()
# HISTOGRAM Plot
df = pd.DataFrame(results_tobs_hist, columns=['tobs'])
ax = df.plot.hist(figsize=(8,6), bins=12, use_index=False, title='Hawaii - Temperature Histogram Between 8/23/2016 - 8/23/2017')
fig = ax.get_figure()
fig.savefig('./Images/temperature_histogram.png')
# Create a function called `calc_temps` that accepts a 'start date' and 'end date' in the format 'YYYY-MM-DD'
# and returns the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""Temp MIN, Temp AVG, and Temp MAX for a range of dates.
Args are:
start_date (string): A date string in the format YYYY-MM-DD
end_date (string): A date string in the format YYYY-MM-DD
Returns:
T-MIN, T-AVG, and T-MAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
print(calc_temps('2017-08-01', '2017-08-07'))
```
```
import os
import glob
base_dir = os.path.join('F:/0Sem 7/B.TECH PROJECT/0Image data/cell_images')
infected_dir = os.path.join(base_dir,'Parasitized')
healthy_dir = os.path.join(base_dir,'Uninfected')
infected_files = glob.glob(infected_dir+'/*.png')
healthy_files = glob.glob(healthy_dir+'/*.png')
print("Infected samples:",len(infected_files))
print("Uninfected samples:",len(healthy_files))
import numpy as np
import pandas as pd
np.random.seed(42)
files_df = pd.DataFrame({
'filename': infected_files + healthy_files,
'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files)
}).sample(frac=1, random_state=42).reset_index(drop=True)
files_df.head()
from sklearn.model_selection import train_test_split
from collections import Counter
train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values,
files_df['label'].values,
test_size=0.3, random_state=42)
train_files, val_files, train_labels, val_labels = train_test_split(train_files,
train_labels,
test_size=0.1, random_state=42)
print(train_files.shape, val_files.shape, test_files.shape)
print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels))
import cv2
from concurrent import futures
import threading
def get_img_shape_parallel(idx, img, total_imgs):
if idx % 5000 == 0 or idx == (total_imgs - 1):
print('{}: working on img num: {}'.format(threading.current_thread().name,
idx))
return cv2.imread(img).shape
ex = futures.ThreadPoolExecutor(max_workers=None)
data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
print('Starting Img shape computation:')
train_img_dims_map = ex.map(get_img_shape_parallel,
[record[0] for record in data_inp],
[record[1] for record in data_inp],
[record[2] for record in data_inp])
train_img_dims = list(train_img_dims_map)
print('Min Dimensions:', np.min(train_img_dims, axis=0))
print('Avg Dimensions:', np.mean(train_img_dims, axis=0))
print('Median Dimensions:', np.median(train_img_dims, axis=0))
print('Max Dimensions:', np.max(train_img_dims, axis=0))
IMG_DIMS = (32, 32)
def get_img_data_parallel(idx, img, total_imgs):
if idx % 5000 == 0 or idx == (total_imgs - 1):
print('{}: working on img num: {}'.format(threading.current_thread().name,
idx))
img = cv2.imread(img)
img = cv2.resize(img, dsize=IMG_DIMS,
interpolation=cv2.INTER_CUBIC)
img = np.array(img, dtype=np.float32)
return img
ex = futures.ThreadPoolExecutor(max_workers=None)
train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)]
test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)]
print('Loading Train Images:')
train_data_map = ex.map(get_img_data_parallel,
[record[0] for record in train_data_inp],
[record[1] for record in train_data_inp],
[record[2] for record in train_data_inp])
train_data = np.array(list(train_data_map))
print('\nLoading Validation Images:')
val_data_map = ex.map(get_img_data_parallel,
[record[0] for record in val_data_inp],
[record[1] for record in val_data_inp],
[record[2] for record in val_data_inp])
val_data = np.array(list(val_data_map))
print('\nLoading Test Images:')
test_data_map = ex.map(get_img_data_parallel,
[record[0] for record in test_data_inp],
[record[1] for record in test_data_inp],
[record[2] for record in test_data_inp])
test_data = np.array(list(test_data_map))
train_data.shape, val_data.shape, test_data.shape
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(1, figsize=(8, 8))
n = 0
for i in range(16):
n += 1
r = np.random.randint(0, train_data.shape[0], 1)
plt.subplot(4, 4, n)
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.imshow(train_data[r[0]] / 255.)
plt.title('{}'.format(train_labels[r[0]]))
plt.xticks([]), plt.yticks([])
BATCH_SIZE = 32
NUM_CLASSES = 2
EPOCHS = 25
INPUT_SHAPE = (32, 32, 3)
train_imgs_scaled = train_data / 255.
val_imgs_scaled = val_data / 255.
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
val_labels_enc = le.transform(val_labels)
print(train_labels[:6], train_labels_enc[:6])
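# Sketch of what LabelEncoder does here: classes are sorted alphabetically,
# so 'healthy' encodes to 0 and 'malaria' to 1 (toy labels for illustration):
from sklearn.preprocessing import LabelEncoder
demo_le = LabelEncoder()
demo_codes = demo_le.fit_transform(['malaria', 'healthy', 'malaria'])
# list(demo_codes) -> [1, 0, 1]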
import tensorflow as tf
# Note: despite the variable name, this loads a MobileNet base, not VGG
vgg = tf.keras.applications.mobilenet.MobileNet(include_top=False, alpha=1.0, weights='imagenet',
input_shape=INPUT_SHAPE)
# Freeze all layers of the pretrained base
vgg.trainable = False
for layer in vgg.layers:
layer.trainable = False
base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
from tensorflow.keras.optimizers import Adam
adam = Adam(learning_rate=0.0001)  # 'lr' is deprecated in favor of 'learning_rate'
model.compile(optimizer=adam,
loss='binary_crossentropy',
metrics=['accuracy'])
print("Total Layers:", len(model.layers))
print("Total trainable layers:", sum([1 for l in model.layers if l.trainable]))
print(model.summary())
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(val_imgs_scaled, val_labels_enc),
verbose=1)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
max_epoch = len(history.history['accuracy'])+1
epoch_list = list(range(1,max_epoch))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(1, max_epoch, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(1, max_epoch, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
test_imgs_scaled = test_data/255.
test_labels_enc = le.transform(test_labels)
# evaluate the model
_, train_acc = model.evaluate(train_imgs_scaled, train_labels_enc, verbose=0)
_, test_acc = model.evaluate(test_imgs_scaled, test_labels_enc, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
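# Sketch: the model's sigmoid output maps to class labels with a 0.5
# threshold (toy probabilities standing in for model.predict(test_imgs_scaled)):
import numpy as np
demo_probs = np.array([0.1, 0.8, 0.6, 0.3])
demo_pred_enc = (demo_probs > 0.5).astype(int)
# demo_pred_enc -> array([0, 1, 1, 0])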
print(model.summary())
```
# Imdb sentiment classification.
Dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a sequence of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words".
```
# Basic packages.
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
# Keras specific packages.
from keras import Input
from keras import Model
from keras import regularizers
from keras import optimizers
from keras.layers import Dense, Activation, Flatten, GRU
from keras.layers import Dropout
from keras.layers import Conv1D, MaxPooling1D
from keras.layers import Embedding
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.datasets import imdb
MAX_NUM_WORDS = 10000
MAX_SEQUENCE_LENGTH = 1000
EMBEDDING_DIM = 100
VALIDATION_SPLIT = 0.25
TEXT_DATA_DIR = "dataset/20_newsgroup"
GLOVE_DIR = "dataset/glove"
EPOCHS = 10
BATCH_SIZE = 129
```
## 1. Load the dataset.
```
# Load the data.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=MAX_NUM_WORDS)
# Get the word to index dict.
word_to_index = imdb.get_word_index()
# Get the index to word dict.
index_to_word = dict(
[(value, key) for (key, value) in word_to_index.items()])
# Display
print("Length dictionnary = {}".format(len(word_to_index)))
max_row = []
for i in range(x_train.shape[0]):
max_row.append(len(x_train[i]))
print(max(max_row))
```
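One subtlety worth illustrating: by default `imdb.load_data` reserves index 0 for padding, 1 for the start marker, and 2 for out-of-vocabulary words, and offsets real word ranks by `index_from=3`. So `index_to_word` as built above is off by three when applied to encoded reviews. A self-contained sketch of the decoding logic, using a made-up four-word vocabulary rather than the real index:

```python
# Toy vocabulary standing in for imdb.get_word_index() (ranks start at 1).
toy_word_to_index = {"the": 1, "movie": 2, "was": 3, "great": 4}

# Account for the index_from=3 offset when inverting the mapping.
toy_index_to_word = {rank + 3: word for word, rank in toy_word_to_index.items()}

def decode_review(encoded, index_to_word):
    # 0 = padding, 1 = start marker, 2 = out-of-vocabulary.
    specials = {0: "<PAD>", 1: "<START>", 2: "<OOV>"}
    return " ".join(specials.get(t, index_to_word.get(t, "<OOV>"))
                    for t in encoded)

print(decode_review([1, 4, 5, 2, 7], toy_index_to_word))
# -> <START> the movie <OOV> great
```

Applying the same shift to the real word index recovers readable reviews from `x_train`.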
## 2. Preparing the pretrained embedding layer.
```
embeddings_index = {}
f = open(os.path.join(GLOVE_DIR, "glove.6B.{}d.txt".format(EMBEDDING_DIM)))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype="float32")
embeddings_index[word] = coefs
f.close()
print("Found %s word vectors." % len(embeddings_index))
embedding_matrix = np.zeros((len(word_to_index) + 1, EMBEDDING_DIM))
for word, i in word_to_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
embedding_layer = Embedding(len(word_to_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
```
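The loop above leaves an all-zero row for any word that has no GloVe vector. A toy version with a hypothetical three-word vocabulary and 4-dimensional vectors (instead of GloVe's 100) makes that behavior concrete:

```python
import numpy as np

# Hypothetical embeddings and vocabulary, standing in for GloVe and the IMDB index.
toy_embeddings_index = {"good": np.ones(4), "bad": -np.ones(4)}
toy_word_to_index = {"good": 1, "bad": 2, "zzyzx": 3}  # "zzyzx" has no vector

toy_matrix = np.zeros((len(toy_word_to_index) + 1, 4))
for word, i in toy_word_to_index.items():
    vec = toy_embeddings_index.get(word)
    if vec is not None:
        toy_matrix[i] = vec  # unmatched words keep their all-zero row

print(toy_matrix[3])  # -> [0. 0. 0. 0.]
```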
## 3. Handle the dataset.
Here we gather the word features $X \in \mathbb{R}^{m \times n}$ where $m$ is the total number of samples and $n$ is the feature length. For the current example, $n$ is equal to $1000$ (`MAX_SEQUENCE_LENGTH`) after padding.
```
# Pad the training and test features.
x_tr = pad_sequences(x_train, maxlen=MAX_SEQUENCE_LENGTH)
x_te = pad_sequences(x_test, maxlen=MAX_SEQUENCE_LENGTH)
# Display the size.
print("Size x_tr = {}".format(x_tr.shape))
print("Size x_te = {}".format(x_te.shape))
# Handle the training and test labels.
y_tr = y_train.reshape(-1, 1)
y_te = y_test.reshape(-1, 1)
# Display the shapes.
print("y_train ", y_tr.shape)
print("y_test ", y_te.shape)
```
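With the defaults used above, `pad_sequences` left-pads short sequences with zeros and keeps only the last `maxlen` tokens of long ones ("pre" padding and truncation). A NumPy sketch of that behavior; `pad_pre` is our own illustrative helper, not a Keras API:

```python
import numpy as np

def pad_pre(sequences, maxlen):
    out = np.zeros((len(sequences), maxlen), dtype=int)
    for i, seq in enumerate(sequences):
        trunc = seq[-maxlen:]                 # keep the last maxlen tokens
        out[i, maxlen - len(trunc):] = trunc  # left-pad with zeros
    return out

padded = pad_pre([[5, 6], [1, 2, 3, 4, 5, 6, 7]], maxlen=5)
print(padded)
# -> [[0 0 0 5 6]
#     [3 4 5 6 7]]
```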
## 4. Build the model.
```
# Set the input.
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype="int32")
# Set the embedding layer.
embedded_sequences = embedding_layer(sequence_input)
# Conv layer 1.
"""
x = Conv1D(64, 5, kernel_regularizer=regularizers.l2(0.001))(embedded_sequences)
x = Activation("relu")(x)
x = MaxPooling1D(5)(x)
x = Dropout(0.5)(x)
# Conv Layer 2.
x = Conv1D(64, 5, kernel_regularizer=regularizers.l2(0.001))(x)
x = Activation("relu")(x)
x = MaxPooling1D(5)(x)
x = Dropout(0.5)(x)
# Conv Layer 3.
x = Conv1D(64, 5, kernel_regularizer=regularizers.l2(0.001))(x)
x = Activation("relu")(x)
x = MaxPooling1D(35)(x)
x = Dropout(0.5)(x)
# Output layer.
x = Flatten()(x)
x = Dense(128)(x)
x = Activation("relu")(x)
x = Dropout(0.5)(x)
"""
#x = Flatten()(x)
#x = Dense(128)(x)
#x = Activation("relu")(x)
#x = Dropout(0.5)(x)
x = GRU(128, return_sequences=False)(embedded_sequences)
# Sigmoid output layer (binary classification).
preds = Dense(1, activation="sigmoid")(x)
# Build the model.
model = Model(sequence_input, preds)
# Set the optimizer.
optim = optimizers.Adam(lr=0.001)
# Compile the model.
model.compile(loss="binary_crossentropy", optimizer=optim, metrics=["acc"])
# Set the fitting parameters.
fit_params = {
"epochs": EPOCHS,
"batch_size": BATCH_SIZE,
"validation_split": VALIDATION_SPLIT,
"shuffle": True
}
# Print the model.
model.summary()
# Fit the model.
history = model.fit(x_tr, y_tr, **fit_params)
# Visualise the training results.
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.plot(history.history["loss"], color="b", label="tr")
plt.plot(history.history["val_loss"], color="r", label="te")
plt.ylabel("loss")
plt.xlabel("epochs")
plt.grid()
plt.legend()
plt.subplot(122)
plt.plot(history.history["acc"], color="b", label="tr")
plt.plot(history.history["val_acc"], color="r", label="te")
plt.ylabel("acc")
plt.xlabel("epochs")
plt.grid()
plt.legend()
plt.show()
```
## 5. Evaluation
```
# Get the predictions for the test dataset.
y_pred = model.predict(x_te)
# Update the predictions.
y_pred = 1.0 * (y_pred > 0.5 )
# Display the classification report.
print(classification_report(y_te, y_pred))
```
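The per-class numbers in `classification_report` come from plain confusion counts. A hand-computed sketch on a tiny made-up label set:

```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_hat  = np.array([0, 1, 0, 0, 1])

tp = np.sum((y_hat == 1) & (y_true == 1))  # true positives: 2
fp = np.sum((y_hat == 1) & (y_true == 0))  # false positives: 0
fn = np.sum((y_hat == 0) & (y_true == 1))  # false negatives: 1

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f1, 3))  # -> 1.0 0.6666666666666666 0.8
```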
--------
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Barren plateaus
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/barren_plateaus"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/barren_plateaus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/barren_plateaus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/barren_plateaus.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this example you will explore the result of <a href="https://www.nature.com/articles/s41467-018-07090-4" class="external">McClean et al., 2018</a>, which shows that not just any quantum neural network structure will do well when it comes to learning. In particular you will see that a certain large family of random quantum circuits do not serve as good quantum neural networks, because they have gradients that vanish almost everywhere. In this example you won't be training any models for a specific learning problem, but instead focusing on the simpler problem of understanding the behaviors of gradients.
## Setup
```
try:
%tensorflow_version 2.x
except Exception:
pass
```
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
```
Now import TensorFlow and the module dependencies:
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
np.random.seed(1234)
```
## 1. Summary
Random quantum circuits with many blocks that look like this ($R_{P}(\theta)$ is a random Pauli rotation):<br/>
<img src="./images/barren_2.png" width=700>
If $f(x)$ is defined as the expectation value w.r.t. $Z_{a}Z_{b}$ for any qubits $a$ and $b$, then the problem is that $f'(x)$ has a mean very close to 0 and does not vary much. You will see this below:
## 2. Generating random circuits
The construction from the paper is straightforward to follow. The following implements a simple function that generates a random quantum circuit—sometimes referred to as a *quantum neural network* (QNN)—with the given depth on a set of qubits:
```
def generate_random_qnn(qubits, symbol, depth):
"""Generate random QNN's with the same structure from McClean et al."""
circuit = cirq.Circuit()
for qubit in qubits:
circuit += cirq.Ry(np.pi / 4.0)(qubit)
for d in range(depth):
# Add a series of single qubit rotations.
for i, qubit in enumerate(qubits):
random_n = np.random.uniform()
random_rot = np.random.uniform(
) * 2.0 * np.pi if i != 0 or d != 0 else symbol
if random_n > 2. / 3.:
# Add a Z.
circuit += cirq.Rz(random_rot)(qubit)
elif random_n > 1. / 3.:
# Add a Y.
circuit += cirq.Ry(random_rot)(qubit)
else:
# Add an X.
circuit += cirq.Rx(random_rot)(qubit)
# Add CZ ladder.
for src, dest in zip(qubits, qubits[1:]):
circuit += cirq.CZ(src, dest)
return circuit
generate_random_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2)
```
The authors investigate the gradient of a single parameter $\theta_{1,1}$. Let's follow along by placing a `sympy.Symbol` in the circuit where $\theta_{1,1}$ would be. Since the authors do not analyze the statistics for any other symbols in the circuit, let's replace them with random values now instead of later.
## 3. Running the circuits
Generate a few of these circuits along with an observable to test the claim that the gradients don't vary much. First, generate a batch of random circuits. Choose a random *ZZ* observable and batch calculate the gradients and variance using TensorFlow Quantum.
### 3.1 Batch variance computation
Let's write a helper function that computes the variance of the gradient of a given observable over a batch of circuits:
```
def process_batch(circuits, symbol, op):
"""Compute the variance of a batch of expectations w.r.t. op on each circuit that
contains `symbol`. Note that this method sets up a new compute graph every time it is
called so it isn't as performant as possible."""
# Setup a simple layer to batch compute the expectation gradients.
expectation = tfq.layers.Expectation()
# Prep the inputs as tensors
circuit_tensor = tfq.convert_to_tensor(circuits)
values_tensor = tf.convert_to_tensor(
np.random.uniform(0, 2 * np.pi, (n_circuits, 1)).astype(np.float32))
# Use TensorFlow GradientTape to track gradients.
with tf.GradientTape() as g:
g.watch(values_tensor)
forward = expectation(circuit_tensor,
operators=op,
symbol_names=[symbol],
symbol_values=values_tensor)
# Return the standard deviation of gradients across all circuits.
grads = g.gradient(forward, values_tensor)
grad_var = tf.math.reduce_std(grads, axis=0)
return grad_var.numpy()[0]
```
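As a purely classical analogy (no quantum circuits involved), the same batch-of-gradients recipe can be sketched in NumPy: take a family of randomly offset cost functions, estimate each gradient with a central finite difference, and look at the spread. The cosine family here is a toy stand-in for the circuit expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, eps = 200, 1e-4

# Random offsets phi_i define a batch of toy cost functions f_i(theta) = cos(theta + phi_i).
phis = rng.uniform(0, 2 * np.pi, n_samples)
thetas = rng.uniform(0, 2 * np.pi, n_samples)

# Central finite-difference estimate of df/dtheta for each sample.
grads = (np.cos(thetas + eps + phis) - np.cos(thetas - eps + phis)) / (2 * eps)
print(np.std(grads))  # close to sqrt(1/2) ~ 0.707 for this toy family
```

Unlike the quantum case, this toy spread does not shrink as you add more dimensions; the barren-plateau result is that for random circuits the analogous spread decays exponentially with the number of qubits.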
### 3.2 Set up and run
Choose the number of random circuits to generate, along with their depth and the number of qubits they should act on. Then plot the results.
```
n_qubits = [2 * i for i in range(2, 7)
] # Ranges studied in paper are between 2 and 24.
depth = 50 # Ranges studied in paper are between 50 and 500.
n_circuits = 200
theta_var = []
for n in n_qubits:
# Generate the random circuits and observable for the given n.
qubits = cirq.GridQubit.rect(1, n)
symbol = sympy.Symbol('theta')
circuits = [
generate_random_qnn(qubits, symbol, depth) for _ in range(n_circuits)
]
op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])
theta_var.append(process_batch(circuits, symbol, op))
plt.semilogy(n_qubits, theta_var)
plt.title('Gradient Variance in QNNs')
plt.xlabel('n_qubits')
plt.ylabel('$\\partial \\theta$ variance')
plt.show()
```
This plot shows that for quantum machine learning problems, you can't simply guess a random QNN ansatz and hope for the best. Some structure must be present in the model circuit in order for gradients to vary to the point where learning can happen.
## 4. Heuristics
An interesting heuristic by <a href="https://arxiv.org/pdf/1903.05076.pdf" class="external">Grant, 2019</a> allows one to start very close to random, but not quite. Using the same circuits as McClean et al., the authors propose a different initialization technique for the classical control parameters to avoid barren plateaus. The initialization technique starts some layers with totally random control parameters—but, in the layers immediately following, choose parameters such that the initial transformation made by the first few layers is undone. The authors call this an *identity block*.
The advantage of this heuristic is that by changing just a single parameter, all other blocks outside of the current block will remain the identity—and the gradient signal comes through much stronger than before. This allows the user to pick and choose which variables and blocks to modify to get a strong gradient signal. This heuristic does not prevent the user from falling into a barren plateau during the training phase (and restricts a fully simultaneous update); it just guarantees that you can start outside of a plateau.
### 4.1 New QNN construction
Now construct a function to generate identity-block QNNs. This implementation is slightly different from the one in the paper. For now, we look at the behavior of the gradient of a single parameter, consistent with McClean et al., so some simplifications can be made.
To generate an identity block and train the model, generally you need $U1(\theta_{1a}) U1(\theta_{1b})^{\dagger}$ and not $U1(\theta_1) U1(\theta_1)^{\dagger}$. Initially $\theta_{1a}$ and $\theta_{1b}$ are the same angles but they are learned independently. Otherwise, you will always get the identity even after training. The choice for the number of identity blocks is empirical. The deeper the block, the smaller the variance in the middle of the block. But at the start and end of the block, the variance of the parameter gradients should be large.
```
def generate_identity_qnn(qubits, symbol, block_depth, total_depth):
"""Generate random QNN's with the same structure from Grant et al."""
circuit = cirq.Circuit()
# Generate initial block with symbol.
prep_and_U = generate_random_qnn(qubits, symbol, block_depth)
circuit += prep_and_U
# Generate dagger of initial block without symbol.
U_dagger = (prep_and_U[1:])**-1
circuit += cirq.resolve_parameters(
U_dagger, param_resolver={symbol: np.random.uniform() * 2 * np.pi})
for d in range(total_depth - 1):
# Get a random QNN.
prep_and_U_circuit = generate_random_qnn(
qubits,
np.random.uniform() * 2 * np.pi, block_depth)
# Remove the state-prep component
U_circuit = prep_and_U_circuit[1:]
# Add U
circuit += U_circuit
# Add U^dagger
circuit += U_circuit**-1
return circuit
generate_identity_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2, 2)
```
### 4.2 Comparison
Here you can see that the heuristic does help to keep the variance of the gradient from vanishing as quickly:
```
block_depth = 10
total_depth = 5
heuristic_theta_var = []
for n in n_qubits:
# Generate the identity block circuits and observable for the given n.
qubits = cirq.GridQubit.rect(1, n)
symbol = sympy.Symbol('theta')
circuits = [
generate_identity_qnn(qubits, symbol, block_depth, total_depth)
for _ in range(n_circuits)
]
op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])
heuristic_theta_var.append(process_batch(circuits, symbol, op))
plt.semilogy(n_qubits, theta_var)
plt.semilogy(n_qubits, heuristic_theta_var)
plt.title('Heuristic vs. Random')
plt.xlabel('n_qubits')
plt.ylabel('$\\partial \\theta$ variance')
plt.show()
```
This is a great improvement in getting stronger gradient signals from (near) random QNNs.
--------
# PageRank
In this notebook, you'll build on your knowledge of eigenvectors and eigenvalues by exploring the PageRank algorithm.
The notebook is in two parts, the first is a worksheet to get you up to speed with how the algorithm works - here we will look at a micro-internet with fewer than 10 websites and see what it does and what can go wrong.
The second is an assessment which will test your application of eigentheory to this problem by writing code and calculating the page rank of a large network representing a sub-section of the internet.
## Part 1 - Worksheet
### Introduction
PageRank (developed by Larry Page and Sergey Brin) revolutionized web search by generating a ranked list of web pages based on the underlying connectivity of the web. The PageRank algorithm is based on an ideal random web surfer who, when reaching a page, goes to the next page by clicking on a link. The surfer has equal probability of clicking any link on the page and, when reaching a page with no links, has equal probability of moving to any other page by typing in its URL. In addition, the surfer may occasionally choose to type in a random URL instead of following the links on a page. The PageRank is the ranked order of the pages from the most to the least probable page the surfer will be viewing.
```
# Before we begin, let's load the libraries.
%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
```
### PageRank as a linear algebra problem
Let's imagine a micro-internet, with just 6 websites (**A**vocado, **B**ullseye, **C**atBabel, **D**romeda, **e**Tings, and **F**aceSpace).
Each website links to some of the others, and this forms a network as shown,

The design principle of PageRank is that important websites will be linked to by important websites.
This somewhat recursive principle will form the basis of our thinking.
Imagine we have 100 *Procrastinating Pat*s on our micro-internet, each viewing a single website at a time.
Each minute the Pats follow a link on their website to another site on the micro-internet.
After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter keeping the total numbers of Pats on each website constant.
The PageRank is simply the ranking of websites by how many Pats they have on them at the end of this process.
We represent the number of Pats on each website with the vector,
$$\mathbf{r} = \begin{bmatrix} r_A \\ r_B \\ r_C \\ r_D \\ r_E \\ r_F \end{bmatrix}$$
And say that the number of Pats on each website in minute $i+1$ is related to those at minute $i$ by the matrix transformation
$$ \mathbf{r}^{(i+1)} = L \,\mathbf{r}^{(i)}$$
with the matrix $L$ taking the form,
$$ L = \begin{bmatrix}
L_{A→A} & L_{B→A} & L_{C→A} & L_{D→A} & L_{E→A} & L_{F→A} \\
L_{A→B} & L_{B→B} & L_{C→B} & L_{D→B} & L_{E→B} & L_{F→B} \\
L_{A→C} & L_{B→C} & L_{C→C} & L_{D→C} & L_{E→C} & L_{F→C} \\
L_{A→D} & L_{B→D} & L_{C→D} & L_{D→D} & L_{E→D} & L_{F→D} \\
L_{A→E} & L_{B→E} & L_{C→E} & L_{D→E} & L_{E→E} & L_{F→E} \\
L_{A→F} & L_{B→F} & L_{C→F} & L_{D→F} & L_{E→F} & L_{F→F} \\
\end{bmatrix}
$$
where the columns represent the probability of leaving a website for any other website, and sum to one.
The rows determine how likely you are to enter a website from any other, though these need not add to one.
The long time behaviour of this system is when $ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)}$, so we'll drop the superscripts here, and that allows us to write,
$$ L \,\mathbf{r} = \mathbf{r}$$
which is an eigenvalue equation for the matrix $L$, with eigenvalue 1 (this is guaranteed by the probabilistic structure of the matrix $L$).
Complete the matrix $L$ below; we've left out the column describing which websites the *FaceSpace* website (F) links to.
Remember, this is the probability to click on another website from this one, so each column should add to one (by scaling by the number of links).
```
# Replace the ??? here with the probability of clicking a link to each website when leaving Website F (FaceSpace).
L = np.array([[0, 1/2, 1/3, 0, 0, 1/3 ],
[1/3, 0, 0, 0, 1/2, 0 ],
[1/3, 1/2, 0, 1, 0, 1/2 ],
[1/3, 0, 1/3, 0, 1/2, 0 ],
[0, 0, 0, 0, 0, 1/6 ],
[0, 0, 1/3, 0, 0, 0 ]])
```
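A quick self-contained sanity check (the matrix is repeated here so the snippet runs on its own): each column is a probability distribution over outgoing links, so every column must sum to one.

```python
import numpy as np

L_check = np.array([[0,   1/2, 1/3, 0, 0,   1/3],
                    [1/3, 0,   0,   0, 1/2, 0  ],
                    [1/3, 1/2, 0,   1, 0,   1/2],
                    [1/3, 0,   1/3, 0, 1/2, 0  ],
                    [0,   0,   0,   0, 0,   1/6],
                    [0,   0,   1/3, 0, 0,   0  ]])
print(L_check.sum(axis=0))  # -> [1. 1. 1. 1. 1. 1.]
```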
In principle, we could use a linear algebra library, as below, to calculate the eigenvalues and vectors.
And this would work for a small system. But this gets unmanageable for large systems.
And since we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case), we can use the *power iteration method* which will scale better, and is faster for large systems.
Use the code below to peek at the PageRank for this micro-internet.
```
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0] # Sets r to be the principal eigenvector
100 * np.real(r / np.sum(r)) # Make this eigenvector sum to one, then multiply by 100 Procrastinating Pats
```
We can see from this list, the number of Procrastinating Pats that we expect to find on each website after long times.
Putting them in order of *popularity* (based on this metric), the PageRank of this micro-internet is:
**C**atBabel, **D**romeda, **A**vocado, **F**aceSpace, **B**ullseye, **e**Tings
Referring back to the micro-internet diagram, is this what you would have expected?
Convince yourself that based on which pages seem important given which others link to them, that this is a sensible ranking.
Let's now try to get the same result using the Power-Iteration method that was covered in the video.
This method will be much better at dealing with large systems.
First let's set up our initial vector, $\mathbf{r}^{(0)}$, so that we have our 100 Procrastinating Pats equally distributed on each of our 6 websites.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
r # Shows its value
```
Next, let's update the vector to the next minute, with the matrix $L$.
Run the following cell multiple times, until the answer stabilises.
```
r = L @ r # Apply matrix L to r
r # Shows its value
# Re-run this cell multiple times to converge to the correct answer.
```
We can automate applying this matrix multiple times as follows,
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
for i in np.arange(100) : # Repeat 100 times
r = L @ r
r
```
Or even better, we can keep running until we get to the required tolerance.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
lastR = r
r = L @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
See how the PageRank order is established fairly quickly, and the vector converges on the value we calculated earlier after a few tens of repeats.
Congratulations! You've just calculated your first PageRank!
### Damping Parameter
The system we just studied converged fairly quickly to the correct answer.
Let's consider an extension to our micro-internet where things start to go wrong.
Say a new website is added to the micro-internet: *Geoff's* Website.
This website is linked to by *FaceSpace* and only links to itself.

Intuitively, only *FaceSpace*, which is in the bottom half of the page rank, links to this website amongst the two others it links to,
so we might expect *Geoff's* site to have a correspondingly low PageRank score.
Build the new $L$ matrix for the expanded micro-internet, and use Power-Iteration on the Procrastinating Pat vector.
See what happens…
```
# We'll call this one L2, to distinguish it from the previous L.
L2 = np.array([[0, 1/2, 1/3, 0, 0, 1/3, 0 ],
[1/3, 0, 0, 0, 1/2, 1/3, 0 ],
[1/3, 1/2, 0, 1, 0, 0, 0 ],
[1/3, 0, 1/3, 0, 1/2, 0, 0 ],
[0, 0, 0, 0, 0, 1/3, 0 ],
[0, 0, 1/3, 0, 0, 0, 0 ],
[0, 0, 0, 0, 0, 0, 1 ]])
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = L2 @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L2 @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
That's no good! *Geoff* seems to be taking all the traffic on the micro-internet, and somehow coming at the top of the PageRank.
This behaviour can be understood, because once a Pat gets to *Geoff's* Website, they can't leave, as all links head back to Geoff.
To combat this, we can add a small probability that the Procrastinating Pats don't follow any link on a webpage, but instead visit a website on the micro-internet at random.
We'll say the probability of them following a link is $d$ and the probability of choosing a random website is therefore $1-d$.
We can use a new matrix to work out where the Pats visit each minute.
$$ M = d \, L + \frac{1-d}{n} \, J $$
where $J$ is an $n\times n$ matrix where every element is one.
If $d$ is one, we have the case we had previously, whereas if $d$ is zero, we will always visit a random webpage and therefore all webpages will be equally likely and equally ranked.
For this extension to work best, $1-d$ should be somewhat small - though we won't go into a discussion about exactly how small.
Let's retry this PageRank with this extension.
```
d = 0.5 # Feel free to play with this parameter after running the code once.
M = d * L2 + (1-d)/7 * np.ones([7, 7]) # np.ones() is the J matrix, with ones for each entry.
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = M @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
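As an aside, $M = d \, L + \frac{1-d}{n} \, J$ stays column-stochastic for any $0 \le d \le 1$, since each column of $L$ sums to one and so does each column of $\frac{1}{n}J$. A quick check, using the identity matrix as a placeholder column-stochastic matrix:

```python
import numpy as np

n = 7
L_placeholder = np.eye(n)  # any column-stochastic matrix would do here
for d in (0.0, 0.5, 0.9, 1.0):
    M_toy = d * L_placeholder + (1 - d) / n * np.ones((n, n))
    assert np.allclose(M_toy.sum(axis=0), 1.0)
print("M is column-stochastic for every tested d")
```

This is why the damped power iteration conserves the total number of Pats.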
This is certainly better, the PageRank gives sensible numbers for the Procrastinating Pats that end up on each webpage.
This method still predicts Geoff has a high ranking webpage however.
This could be seen as a consequence of using a small network. We could also get around the problem by not counting self-links when producing the L matrix (and if a website has no outgoing links, making it link to all websites equally).
We won't look further down this route, as this is in the realm of improvements to PageRank, rather than eigenproblems.
You are now in a good position, having gained an understanding of PageRank, to produce your own code to calculate the PageRank of a website with thousands of entries.
Good Luck!
## Part 2 - Assessment
In this assessment, you will be asked to produce a function that can calculate the PageRank for an arbitrarily large probability matrix.
This, the final assignment of the course, will give less guidance than previous assessments.
You will be expected to utilise code from earlier in the worksheet and re-purpose it to your needs.
### How to submit
Edit the code in the cell below to complete the assignment.
Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.
Please don't change any of the function names, as these will be checked by the grading script.
If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!
```
# PACKAGE
# Here are the imports again, just in case you need them.
# There is no need to edit or submit this cell.
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
# GRADED FUNCTION
# Complete this function to provide the PageRank for an arbitrarily sized internet.
# I.e. the principal eigenvector of the damped system, using the power iteration method.
# (Normalisation doesn't matter here)
# The functions inputs are the linkMatrix, and d the damping parameter - as defined in this worksheet.
def pageRank(linkMatrix, d) :
n = linkMatrix.shape[0]
M = d * linkMatrix + (1-d)/n * np.ones([n, n]) # np.ones() is the J matrix, with ones for each entry.
r = 100 * np.ones(n) / n
lastR = r
r = M @ r
while la.norm (lastR - r) > 0.01 :
lastR = r
r = M @ r
return r
```
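Before trying the generated internets below, you can sanity-check the function against the 6-website system from Part 1. This sketch repeats the matrix and the function so it runs on its own; the total should stay at 100 Pats because the damped matrix is column-stochastic:

```python
import numpy as np
import numpy.linalg as la

L_part1 = np.array([[0,   1/2, 1/3, 0, 0,   1/3],
                    [1/3, 0,   0,   0, 1/2, 0  ],
                    [1/3, 1/2, 0,   1, 0,   1/2],
                    [1/3, 0,   1/3, 0, 1/2, 0  ],
                    [0,   0,   0,   0, 0,   1/6],
                    [0,   0,   1/3, 0, 0,   0  ]])

def pageRank(linkMatrix, d):
    n = linkMatrix.shape[0]
    M = d * linkMatrix + (1 - d) / n * np.ones([n, n])
    r = 100 * np.ones(n) / n
    lastR = r
    r = M @ r
    while la.norm(lastR - r) > 0.01:
        lastR = r
        r = M @ r
    return r

r_check = pageRank(L_part1, 0.9)
print(np.round(r_check, 2))  # CatBabel (index 2) should come out on top
```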
## Test your code before submission
To test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).
You can then use the code below to test out your function.
You don't need to submit this cell; you can edit and run it as much as you like.
```
# Use the following function to generate internets of different sizes.
generate_internet(5)
# Test your PageRank method against the built in "eig" method.
# You should see yours is a lot faster for large internets
L = generate_internet(10)
pageRank(L, 1)
# Do note, this is calculating the eigenvalues of the link matrix, L,
# without any damping. It may give different results than your pageRank function.
# If you wish, you could modify this cell to include damping.
# (There is no credit for this though)
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0]
100 * np.real(r / np.sum(r))
# You may wish to view the PageRank graphically.
# This code will draw a bar chart, for each (numbered) website on the generated internet,
# The height of each bar will be the score in the PageRank.
# Run this code to see the PageRank for each internet you generate.
# Hopefully you should see what you might expect
# - there are a few clusters of important websites, but most on the internet are rubbish!
%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np
r = pageRank(generate_internet(100), 0.9)
plt.bar(np.arange(r.shape[0]), r);
```
--------
```
import pandas as pd
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
#from dnn_app_utils_v2 import *
%matplotlib inline
from pandas import ExcelWriter
from pandas import ExcelFile
%load_ext autoreload
%autoreload 2
from sklearn.utils import resample
import tensorflow as tf
from tensorflow.python.framework import ops
import openpyxl
import keras
import xlsxwriter
from keras.layers import Dense, Dropout
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
print(" All the necessary Libraries have been loaded")
print(" ")
print(" ")
print(" The code after this is for loading your data into train and test. Make sure you load the correct features")
xls = pd.ExcelFile("test_selected.xlsx")
test_selected_x = pd.read_excel(xls, 'test_selected_x')
test_selected_y = pd.read_excel(xls, 'test_selected_y')
print(" The selected important features data for spesific model is loaded into train, and test")
print(" ")
test_selected_x=test_selected_x.values
test_selected_y=test_selected_y.values
print("##################################################################################################")
print("Now you load the model but with correct model name")
print(" loading the trained model ")
print(" ")
from keras.models import model_from_json
# Load each trained model: architecture from JSON, weights from HDF5.
loaded_models = []
for i in range(1, 5):
    json_file = open('{}_model.json'.format(i), 'r')
    loaded_model = model_from_json(json_file.read())
    json_file.close()
    loaded_model.load_weights('{}_model.h5'.format(i))
    loaded_models.append(loaded_model)
    print("Loaded model from disk")
    print(" ")
print(" Computing the AUROC using the loaded models for checking ")
print(" ")
from sklearn.metrics import roc_auc_score, roc_curve
# Average the four models' probabilities (a simple mean ensemble).
pred_test = np.mean([m.predict(test_selected_x) for m in loaded_models], axis=0)
auc_test = roc_auc_score(test_selected_y, pred_test)
print ("AUROC_test: " + str(auc_test))
```
--------
## sigMF STFT on GPU and CPU
```
import os
import itertools
from sklearn.utils import shuffle
import torch, torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.nn.modules as mod
import torch.utils.data as data
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_packed_sequence
from torch.autograd import Variable
import numpy as np
import sys
import importlib
import time
import matplotlib.pyplot as plt
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torchvision.utils import save_image
import librosa
from scipy import signal
from scipy import stats
from scipy.special import comb
import glob
import json
import pickle
from random import randint, choice
import random
from timeit import default_timer as timer
# from torchaudio.functional import istft
from torch import istft
from sklearn.decomposition import NMF
plt.style.use('default')
device = torch.device('cuda:0')
print('Torch version =', torch.__version__, 'CUDA version =', torch.version.cuda)
print('CUDA Device:', device)
print('Is cuda available? =',torch.cuda.is_available())
# %matplotlib notebook
# %matplotlib inline
```
#### Machine paths
```
path_save = "/home/david/sigMF_ML/SVD/saved/"
path = "/home/david/sigMF_ML/SVD/"
print(path)
```
#### reading sigmf meta data and encoder function
```
# START OF FUNCTIONS ****************************************************
def meta_encoder(meta_list, num_classes):
a = np.asarray(meta_list, dtype=int)
# print('a = ', a)
return a
def read_meta(meta_files):
meta_list = []
for meta in meta_files:
all_meta_data = json.load(open(meta))
meta_list.append(all_meta_data['global']["core:class"])
meta_list = list(map(int, meta_list))
return meta_list
def read_num_val(x):
x = len(meta_list_val)
return x
print(path)
os.chdir(path)
data_files = sorted(glob.glob('*.sigmf-data'))
meta_files = sorted(glob.glob('*.sigmf-meta'))
for meta in meta_files:
all_meta_data = json.load(open(meta))
print("file name = ", meta)
```
#### torch GPU Cuda stft
```
def gpu(db, n_fft):
I = db[0::2]
Q = db[1::2]
start = timer()
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda()
I_stft = torch.stft(torch.tensor(I).cuda(), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
Q_stft = torch.stft(torch.tensor(Q).cuda(), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
    # combine the real and imaginary parts of the I and Q STFTs into one real-valued array
    X_stft = I_stft[..., 0] + Q_stft[..., 0] + I_stft[..., 1] - Q_stft[..., 1]
    X_stft = torch.cat((X_stft[n_fft//2:], X_stft[:n_fft//2]))  # fftshift along the frequency axis
    end = timer()
    print(end - start)
torch.cuda.empty_cache()
return X_stft, I_stft, Q_stft
```
#### scipy CPU stft function
```
def cpu(db, n_fft):
    start = timer()
    # interleaved float32 I/Q samples -> complex64 (I + jQ), no copy
    db = db.astype(np.float32).view(np.complex64)
    Fs = 1e6
    freqs, times, Z = signal.stft(db, fs=Fs, nperseg=n_fft, return_onesided=False)
    Z = np.vstack([Z[n_fft//2:], Z[:n_fft//2]])  # fftshift along the frequency axis
    end = timer()
    print(end - start)
    return Z
```
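Both the GPU and CPU paths rely on the same memory-layout trick: the recordings store interleaved float32 I/Q samples, and `view(np.complex64)` reinterprets each adjacent (I, Q) pair as one complex sample without copying. A minimal sketch with made-up samples:

```python
import numpy as np

# interleaved float32 samples: I0, Q0, I1, Q1, ...
iq = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)

# reinterpret each adjacent (I, Q) pair as a single complex64 sample (no copy)
z = iq.view(np.complex64)
print(z)  # [1.+2.j 3.+4.j]
```

This is also why the GPU path can instead slice `db[0::2]` and `db[1::2]` to recover the I and Q channels separately.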
### GPU Timing: Slow the first time running
```
n_fft = 1024
for file in data_files:
db = np.fromfile(file, dtype="float32")
X_stft, I_stft, Q_stft = gpu(db, n_fft)
plt.imshow(20*np.log10(np.abs(X_stft.cpu()+1e-8)), aspect='auto', origin='lower')
plt.show()
```
### CPU Timing
```
for file in data_files:
db = np.fromfile(file, dtype="float32")
stft_cpu = cpu(db, 1000)
```
### CPU load stft to Cuda Time
```
start = timer()
IQ_tensor = torch.tensor(np.abs(stft_cpu)).cuda()
end = timer()
print(end - start)
torch.cuda.empty_cache()
plt.imshow(20*np.log10(np.abs(stft_cpu)+1e-8), aspect='auto', origin='lower')
plt.show()
```
#### GPU SVD
```
def udv_stft(I_stft,Q_stft):
start = timer()
U_I0, D_I0, V_I0 = torch.svd(I_stft[...,0])
U_I1, D_I1, V_I1 = torch.svd(I_stft[...,1])
U_Q0, D_Q0, V_Q0 = torch.svd(Q_stft[...,0])
U_Q1, D_Q1, V_Q1 = torch.svd(Q_stft[...,1])
end = timer()
print('SVD time: ',end - start)
return U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1
```
#### Inverse stft
```
# def ISTFT(db, n_fft):# We are matching scipy.signal behavior (setting noverlap=frame_length - hop)
# w = 512
# win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda()
# start = timer()
# Z = istft(db, n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=False)
# end = timer()
# print('ISTFT time = ',end - start)
# torch.cuda.empty_cache()
# return Z
def ISTFT(db, n_fft):# We are matching scipy.signal behavior (setting noverlap=frame_length - hop)
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda()
start = timer()
Z = istft(db, n_fft=n_fft, hop_length=n_fft//2, win_length=n_fft, window=win, center=True, normalized=True, onesided=False)
end = timer()
print('ISTFT time = ',end - start)
torch.cuda.empty_cache()
return Z
```
#### Re-combine UDV to approximate original signal
```
def udv(u, d, v, k): # like ----> np.matrix(U[:, :k]) * np.diag(D[:k]) * V[:k, :]
start = timer()
UD = torch.mul(u[:, :k], d[:k])
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:k, :])
end = timer()
print('UDV time: ',end - start)
return UDV
```
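The `udv` helper above is the standard rank-`k` truncated-SVD reconstruction. The same idea as a quick NumPy sanity check (the matrix values are arbitrary); by the Eckart-Young theorem, the Frobenius error of the rank-`k` reconstruction equals the energy in the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 5))

# full SVD, then keep only the k largest singular values/vectors
U, D, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(D[:k]) @ Vt[:k, :]

# reconstruction error == energy in the discarded singular values
err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(D[k:] ** 2))))  # True
```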
### Main function to run all sub function calls
```
def complete(I_stft,Q_stft, num, n_fft):
U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1 = udv_stft(I_stft,Q_stft)
torch.cuda.empty_cache()
print('UDV I0 shapes = ',U_I0.shape, D_I0.shape, V_I0.shape)
print('UDV I1 shapes = ',U_I1.shape, D_I1.shape, V_I1.shape)
print('UDV Q0 shapes = ', U_Q0.shape, D_Q0.shape, V_Q0.shape)
print('UDV Q1 shapes = ', U_Q1.shape, D_Q1.shape, V_Q1.shape)
udv_I0 = udv(U_I0, D_I0, V_I0,num)
udv_I1 = udv(U_I1, D_I1, V_I1,num)
udv_Q0 = udv(U_Q0, D_Q0, V_Q0,num)
udv_Q1 = udv(U_Q1, D_Q1, V_Q1,num)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I = ISTFT(UDV_I, n_fft)
Q = ISTFT(UDV_Q, n_fft)
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2)
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
return IQ_SVD
```
### Perform SVD on IQ stft data
```
num = 3 # rank: number of singular components kept when reconstructing from the SVD
IQ_SVD = complete(I_stft,Q_stft, num, n_fft)
torch.cuda.empty_cache()
```
### Write reconstructed IQ file to file
```
from array import array
IQ_file = open("UV5R_voice2", 'wb')
IQ_SVD.tofile(IQ_file)
IQ_file.close()
```
--------
```
import numpy as np
from matplotlib import pyplot as plt
def relu(z):
    # element-wise ReLU; np.maximum works on arrays, unlike the built-in max
    return np.maximum(0, z)
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
"""
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
return (n_x, n_h, n_y)
def init_params(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Function:
Generates Weights (random) and Biases (zeros) for the 2 layer neural network
"""
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
    parameters = {
        'W1': W1,
        'b1': b1,
        'W2': W2,
        'b2': b2,
    }
    return parameters
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
"""
# Extract the Weights and Bias from parameters dictionary
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
# Calculate multiple steps of forward propagation and at the end calculate A2 Probabilities
Z1 = np.matmul(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
cache = {
'Z1': Z1,
'A1': A1,
'Z2': Z2,
'A2': A2
}
return cache
def calc_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
"""
# Get the total number of examples
m = Y.shape[1]
# Compute the Cross-entropy cost
logprob = (np.multiply(np.log(A2), Y) + np.multiply(np.log(1-A2), (1-Y)))
# Squeeze the Numpy array (removes the extra dimensions)
cost = np.squeeze(-(1/m) * np.sum(logprob))
return cost
def backward_prop(parameters, cache, X, Y):
"""
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
"""
m = X.shape[1]
# Get the Weights from 'paramters' dictionary
W1 = parameters['W1']
W2 = parameters['W2']
# Retrieve the respective activations from 'cache' dictionary
A1 = cache['A1']
A2 = cache['A2']
# Calculate respective derivatives
dZ2 = A2 - Y # Derivative of Final layer output is final Activation - Target value (Predicted - Original)
dW2 = (1/m) * np.matmul(dZ2, A1.T) # Derivative of Second layer weights is multiplication of dZ2 and A1.T, averaged over all samples
db2 = (1/m) * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = np.matmul(W2.T, dZ2) * (1 - np.power(A1, 2))
dW1 = (1/m) * np.matmul(dZ1, X.T)
db1 = (1/m) * np.sum(dZ1, axis=1, keepdims=True)
    gradients = {
        'dW1': dW1,
        'db1': db1,
        'dW2': dW2,
        'db2': db2,
    }
    return gradients
def update_paramters(parameters, gradients, learning_rate = 1):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
"""
# Get the Paramters from the Dictionary
    W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
# Get the gradients from the dictionary
dW1 = gradients['dW1']
db1 = gradients['db1']
dW2 = gradients['dW2']
db2 = gradients['db2']
# Run Gradient Descent for all weights and biases
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
# Pack the parameters into a dictionary
    parameters = {
        'W1': W1,
        'b1': b1,
        'W2': W2,
        'b2': b2,
    }
    return parameters
def fit_model(X, Y, n_h, epochs=10000, print_cost = False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
    epochs -- number of iterations in the gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
"""
# Make new n_x and n_y
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
    # Initialize parameters
    parameters = init_params(n_x, n_h, n_y)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
# Run the loop for training
for epoch in range(epochs):
cache = forward_propagation(X, parameters=parameters)
A2 = cache['A2']
cost = calc_cost(A2, Y, parameters)
grads = backward_prop(parameters, cache, X, Y)
parameters = update_paramters(parameters, grads)
if print_cost and epoch % 1000 == 0:
print("Cost after iteration %i: %f" %(i, cost))
return parameters
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
"""
cache = forward_propagation(X, parameters)
A2 = cache['A2']
predictions = (A2 > 0.5)
return predictions
```
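The tidiest step in `backward_prop`, `dZ2 = A2 - Y`, comes from differentiating the cross-entropy loss through the sigmoid. A quick numerical check of that identity (the scalar values here are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def bce(z, y):
    # binary cross-entropy of sigmoid(z) against label y
    a = sigmoid(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

z, y, eps = 0.7, 1.0, 1e-6
# centered finite difference of the loss w.r.t. the pre-activation z
numeric = (bce(z + eps, y) - bce(z - eps, y)) / (2 * eps)
analytic = sigmoid(z) - y  # the dZ2 = A2 - Y rule
print(abs(numeric - analytic) < 1e-8)  # True
```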
--------
# Training and Data Sets
Author: Ravin Poudel
The main goal of a statistical or machine learning model is to build a generalized predictive model. Often we start with a set of data to build a model and describe the model fit and other properties. However, it is equally important to test the model with new data (data that has not been used in fitting the model) and check the model's performance. From an agricultural perspective, we would essentially need to run an additional experiment to generate data for model validation. Instead, we can __randomly__ divide a single dataset into two parts, using one part for learning and the other part for testing the model's performance.
<img src="../nb-images/Train_test.png">
> Train data set: A data set used to __construct/train__ a model.
> Test data set: A data set used to __evaluate__ the model.
#### How do we split a single dataset into two?
There is no single best solution. By convention, more data is used for training the model than for testing/evaluating it. Schemes such as `75%/25% train/test` or `90%/10% train/test` are common. A larger training dataset lets the model learn better, while a larger testing dataset gives better confidence in the model evaluation.
> Can we apply similar data-splitting scheme when we have a small dataset? Often the case in agriculure or lifescience - "as of now".
> Does a single random split make our predictive model random? Do we want a stable model or a random model?
We will be using the `iris dataset` to explore the concept of data splitting. The data set contains:
- 50 samples of each of 3 different species of iris flower (150 samples total)
- Iris flower species: Setosa, Versicolour, and Virginica
- Measurements: sepal length, sepal width, petal length, petal width
```
# Import data and modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
# import iris data from scikit-learn and prepare it
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
names = iris.target_names
feature_names = iris.feature_names
# check data shape
print(iris_X.shape)
print(iris_y)
print(names)
print(feature_names)
# splitting into train and test data
# test dataset = 20% of the original dataset
X_train, X_test, y_train, y_test = train_test_split(iris_X, iris_y, test_size=0.2, random_state=0)
# shape of train dataset
X_train.shape, y_train.shape
# shape of test dataset
X_test.shape, y_test.shape
# instantiate a K-Nearest Neighbors (KNN) model, and fit with X and y
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier()
model_tt = model.fit(X_train, y_train)
# check the accuracy on the test set
model_tt.score(X_test, y_test)
# predict class labels for the test set
predicted = model_tt.predict(X_test)
print (predicted)
print(y_test)
# generate evaluation metrics
from sklearn import metrics
print (metrics.accuracy_score(y_test, predicted))
print (metrics.confusion_matrix(y_test, predicted))
print (metrics.classification_report(y_test, predicted))
```
### Model Evaluation Using Cross-Validation
```
# evaluate the model using 10-fold cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(KNeighborsClassifier(), iris_X, iris_y, cv=5)
print (scores)
print (scores.mean())
# The mean score and the standard deviation of the score estimate are hence given by:
print("Accuracy: %.3f%% (%.3f%%)" % (scores.mean()*100.0, scores.std()*100.0))
```
### K fold
```
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier()
kfold = model_selection.KFold(n_splits=5, shuffle=False)  # random_state only applies when shuffle=True
results = model_selection.cross_val_score(model, iris_X, iris_y, cv=kfold)
results
print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0))
```
### K fold with randomized splits: why might the CV accuracy differ from plain k-fold?
```
model = KNeighborsClassifier()
kfold = model_selection.KFold(n_splits=5, random_state=12323, shuffle=True)
results = model_selection.cross_val_score(model, iris_X, iris_y, cv=kfold)
results
print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0))
```
### LOOCV
```
model = KNeighborsClassifier()
loocv = model_selection.LeaveOneOut()
results = model_selection.cross_val_score(model, iris_X, iris_y, cv=loocv)
print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0))
```
Resources:
- https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6
- https://scikit-learn.org/stable/modules/cross_validation.html
- https://blog.goodaudience.com/classifying-flowers-using-logistic-regression-in-sci-kit-learn-38262416e4c6
- https://machinelearningmastery.com/machine-learning-in-python-step-by-step/
- https://nbviewer.jupyter.org/gist/justmarkham/6d5c061ca5aee67c4316471f8c2ae976
- https://machinelearningmastery.com/evaluate-performance-machine-learning-algorithms-python-using-resampling/
- https://machinelearningmastery.com/k-fold-cross-validation/
--------
# LKJ Cholesky Covariance Priors for Multivariate Normal Models
While the [inverse-Wishart distribution](https://en.wikipedia.org/wiki/Inverse-Wishart_distribution) is the conjugate prior for the covariance matrix of a multivariate normal distribution, it is [not very well-suited](https://github.com/pymc-devs/pymc3/issues/538#issuecomment-94153586) to modern Bayesian computational methods. For this reason, the [LKJ prior](http://www.sciencedirect.com/science/article/pii/S0047259X09000876) is recommended when modeling the covariance matrix of a multivariate normal distribution.
To illustrate modelling covariance with the LKJ distribution, we first generate a two-dimensional normally-distributed sample data set.
```
import arviz as az
import numpy as np
import pymc3 as pm
import seaborn as sns
import warnings
from matplotlib.patches import Ellipse
from matplotlib import pyplot as plt
az.style.use("arviz-darkgrid")
warnings.simplefilter(action="ignore", category=FutureWarning)
RANDOM_SEED = 8924
np.random.seed(3264602) # from random.org
N = 10000
μ_actual = np.array([1.0, -2.0])
sigmas_actual = np.array([0.7, 1.5])
Rho_actual = np.matrix([[1.0, -0.4], [-0.4, 1.0]])
Σ_actual = np.diag(sigmas_actual) * Rho_actual * np.diag(sigmas_actual)
x = np.random.multivariate_normal(μ_actual, Σ_actual, size=N)
Σ_actual
var, U = np.linalg.eig(Σ_actual)
angle = 180.0 / np.pi * np.arccos(np.abs(U[0, 0]))
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red, *_ = sns.color_palette()
e = Ellipse(
μ_actual, 2 * np.sqrt(5.991 * var[0]), 2 * np.sqrt(5.991 * var[1]), angle=angle
)
e.set_alpha(0.5)
e.set_facecolor(blue)
e.set_zorder(10)
ax.add_artist(e)
ax.scatter(x[:, 0], x[:, 1], c="k", alpha=0.05, zorder=11)
rect = plt.Rectangle((0, 0), 1, 1, fc=blue, alpha=0.5)
ax.legend([rect], ["95% density region"], loc=2);
```
The sampling distribution for the multivariate normal model is $\mathbf{x} \sim N(\mu, \Sigma)$, where $\Sigma$ is the covariance matrix of the sampling distribution, with $\Sigma_{ij} = \textrm{Cov}(x_i, x_j)$. The density of this distribution is
$$f(\mathbf{x}\ |\ \mu, \Sigma^{-1}) = (2 \pi)^{-\frac{k}{2}} |\Sigma|^{-\frac{1}{2}} \exp\left(-\frac{1}{2} (\mathbf{x} - \mu)^{\top} \Sigma^{-1} (\mathbf{x} - \mu)\right).$$
The LKJ distribution provides a prior on the correlation matrix, $\mathbf{C} = \textrm{Corr}(x_i, x_j)$, which, combined with priors on the standard deviations of each component, [induces](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n416.pdf) a prior on the covariance matrix, $\Sigma$. Since inverting $\Sigma$ is numerically unstable and inefficient, it is computationally advantageous to use the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\Sigma$, $\Sigma = \mathbf{L} \mathbf{L}^{\top}$, where $\mathbf{L}$ is a lower-triangular matrix. This decomposition allows computation of the term $(\mathbf{x} - \mu)^{\top} \Sigma^{-1} (\mathbf{x} - \mu)$ using back-substitution, which is more numerically stable and efficient than direct matrix inversion.
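The computational claim can be verified directly: with $\Sigma = \mathbf{L}\mathbf{L}^{\top}$, the quadratic form equals the squared norm of the triangular solve $\mathbf{L}^{-1}(\mathbf{x} - \mu)$, so no explicit inverse is needed. A small NumPy sketch (the numbers simply mirror the covariance used above):

```python
import numpy as np

Sigma = np.array([[0.49, -0.42], [-0.42, 2.25]])  # same covariance as the generated data
mu = np.array([1.0, -2.0])
x = np.array([0.3, -1.1])

L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T, with L lower-triangular
u = np.linalg.solve(L, x - mu)  # a triangular solve (scipy.linalg.solve_triangular exploits the structure)
quad_chol = u @ u               # (x - mu)^T Sigma^{-1} (x - mu), no inverse formed

quad_inv = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)  # direct inversion, for comparison
print(np.isclose(quad_chol, quad_inv))  # True
```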
PyMC3 supports LKJ priors for the Cholesky decomposition of the covariance matrix via the [LKJCholeskyCov](../api/distributions/multivariate.rst) distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\mathbf{x}$, and the PyMC3 distribution of the component standard deviations, respectively. It also has a hyperparameter `eta`, which controls the amount of correlation between components of $\mathbf{x}$. The LKJ distribution has the density $f(\mathbf{C}\ |\ \eta) \propto |\mathbf{C}|^{\eta - 1}$, so $\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\eta \to \infty$.
In this example, we model the standard deviations with $\textrm{Exponential}(1.0)$ priors, and the correlation matrix as $\mathbf{C} \sim \textrm{LKJ}(\eta = 2)$.
```
with pm.Model() as m:
packed_L = pm.LKJCholeskyCov(
"packed_L", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0)
)
```
Since the Cholesky decomposition of $\Sigma$ is lower triangular, `LKJCholeskyCov` only stores the diagonal and sub-diagonal entries, for efficiency:
```
packed_L.tag.test_value.shape
```
We use [expand_packed_triangular](../api/math.rst) to transform this vector into the lower triangular matrix $\mathbf{L}$, which appears in the Cholesky decomposition $\Sigma = \mathbf{L} \mathbf{L}^{\top}$.
```
with m:
L = pm.expand_packed_triangular(2, packed_L)
Σ = L.dot(L.T)
L.tag.test_value.shape
```
Often however, you'll be interested in the posterior distribution of the correlations matrix and of the standard deviations, not in the posterior Cholesky covariance matrix *per se*. Why? Because the correlations and standard deviations are easier to interpret and often have a scientific meaning in the model. As of PyMC 3.9, there is a way to tell PyMC to automatically do these computations and store the posteriors in the trace. You just have to specify `compute_corr=True` in `pm.LKJCholeskyCov`:
```
with pm.Model() as model:
chol, corr, stds = pm.LKJCholeskyCov(
"chol", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
)
cov = pm.Deterministic("cov", chol.dot(chol.T))
```
To complete our model, we place independent, weakly regularizing priors, $N(0, 1.5),$ on $\mu$:
```
with model:
μ = pm.Normal("μ", 0.0, 1.5, shape=2, testval=x.mean(axis=0))
obs = pm.MvNormal("obs", μ, chol=chol, observed=x)
```
We sample from this model using NUTS and give the trace to [ArviZ](https://arviz-devs.github.io/arviz/):
```
with model:
trace = pm.sample(random_seed=RANDOM_SEED, init="adapt_diag")
idata = az.from_pymc3(trace)
az.summary(idata, var_names=["~chol"], round_to=2)
```
Sampling went smoothly: no divergences and good r-hats. You can also see that the sampler recovered the true means, correlations and standard deviations. As often, that will be clearer in a graph:
```
az.plot_trace(
idata,
var_names=["~chol"],
compact=True,
lines=[
("μ", {}, μ_actual),
("cov", {}, Σ_actual),
("chol_stds", {}, sigmas_actual),
("chol_corr", {}, Rho_actual),
],
);
```
The posterior expected values are very close to the true value of each component! How close exactly? Let's compute the percentage of closeness for $\mu$ and $\Sigma$:
```
μ_post = trace["μ"].mean(axis=0)
(1 - μ_post / μ_actual).round(2)
Σ_post = trace["cov"].mean(axis=0)
(1 - Σ_post / Σ_actual).round(2)
```
So the posterior means are within 3% of the true values of $\mu$ and $\Sigma$.
Now let's replicate the plot we did at the beginning, but let's overlay the posterior distribution on top of the true distribution -- you'll see there is excellent visual agreement between both:
```
var_post, U_post = np.linalg.eig(Σ_post)
angle_post = 180.0 / np.pi * np.arccos(np.abs(U_post[0, 0]))
fig, ax = plt.subplots(figsize=(8, 6))
e = Ellipse(
μ_actual, 2 * np.sqrt(5.991 * var[0]), 2 * np.sqrt(5.991 * var[1]), angle=angle
)
e.set_alpha(0.5)
e.set_facecolor(blue)
e.set_zorder(10)
ax.add_artist(e)
e_post = Ellipse(
μ_post,
2 * np.sqrt(5.991 * var_post[0]),
2 * np.sqrt(5.991 * var_post[1]),
angle=angle_post,
)
e_post.set_alpha(0.5)
e_post.set_facecolor(red)
e_post.set_zorder(10)
ax.add_artist(e_post)
ax.scatter(x[:, 0], x[:, 1], c="k", alpha=0.05, zorder=11)
rect = plt.Rectangle((0, 0), 1, 1, fc=blue, alpha=0.5)
rect_post = plt.Rectangle((0, 0), 1, 1, fc=red, alpha=0.5)
ax.legend(
[rect, rect_post],
["True 95% density region", "Estimated 95% density region"],
loc=2,
);
%load_ext watermark
%watermark -n -u -v -iv -w
```
--------
# Forecasting on Contraceptive Use - A Multi-step Ensemble Approach
Update: 09/07/2020
Github Repository: https://github.com/herbsh/USAID_Forecast_submit
## key idea
- The goal is to forecast on site_code & product_code level demand.
- The site_code & product_code level demand fluctuates too much and doesn't have any obvious pattern.
- The aggregate level is easier to forecast. The noise cancels out.
- We don't know the best level at which to aggregate; it may also vary by site.
- We aggregate at various levels and use "Ensemble Learning" to determine the final result
## Aggregate
How to aggregate? Here is the structure of the supply chain and products. We can use this guide our aggregation:
- site_code -> district -> region
- product_code -> product_type
After we aggregate on some level, we get a time series of stock_distributed, {Y}.
We supplement some external data and match/aggregate to the same level, {X}.
We use a time-series model to forecast {Y} from its own history and {X}.
## Forecast the Aggregate : Time Series Modeling, Auto_SARIMAX
- We use a SARIMAX stucture (ARIMA (AR, MA with a trend) with Seasonality and external data )
- The specific order of SARIMAX is determined within each time series with an Auto_ARIMA function.
- Use BIC criteria to pick the optimal order for ARIMA (p, d, q) and seasonality (P,D,Q) (BIC : best for forecast power) (range of p,d,q and P,D,Q - small, less than 4)
- use the time series model to make aggregate level forecast. Store the results for later use.
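A stripped-down stand-in for the BIC-based order search: fit AR(p) by least squares over a small grid of p and keep the order that minimizes BIC. This is a toy version of the auto-ARIMA step above (no seasonality, no exogenous terms), on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
y = np.zeros(n)
for t in range(2, n):  # simulate a true AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def bic_ar(y, p):
    """BIC of an AR(p) model fitted by ordinary least squares."""
    Y = y[p:]
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = ((Y - X @ beta) ** 2).mean()
    return len(Y) * np.log(sigma2) + p * np.log(len(Y))

best_p = min(range(1, 5), key=lambda p: bic_ar(y, p))
print(best_p)
```

The real pipeline searches over the full (p, d, q)(P, D, Q) grid with an auto-ARIMA routine; the selection criterion is the same.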
## De-aggregate (Distribute) : Machine Learning Modeling
- We use machine learning to learn the variable "share", the share of the specific stock_distributed as a fraction of the aggregate sum.
- training data:
- all the data, excluding the last 3 month. Encode the year, month, region, district, product type, code, plus all available external data, matched to site_code+product level.
- target: actual shares
- model: RandomForest Decision Regression Tree
- use the fitted model to make prediction on shares. "Distribute" aggregate stocks to individual forecasts.
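A minimal sketch of the share-learning step on synthetic data (all column names and values here are made up; the real pipeline also encodes region, product type, and the external features):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "district": rng.integers(0, 3, 120),
    "month": rng.integers(1, 13, 120),
    "stock": rng.integers(0, 50, 120),
})

# target: each row's share of its (district, month) aggregate
agg = df.groupby(["district", "month"])["stock"].transform("sum")
df["share"] = np.where(agg > 0, df["stock"] / agg, 0.0)

X = df[["district", "month"]]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, df["share"])
pred_share = model.predict(X)

# distributing: item-level forecast = predicted share * aggregate-level forecast
```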
## Ensemble
- From Aggregate and Distribute, we arrive at X different forecast numbers for each site + product_code. (We also have many intermediary forecasts that could be of interest to various parties.)
- We introduce another model to perform the ensemble.
- For each training observation, we have multiple forecasts and one actual realization; denote them as F1, F2, F3, F4 and Y (omitting site, product, and time subscripts). We also have all the features X (temperature, roads, etc.).
- The ensemble part estimates another model that takes inputs (F1..F4, and features X) and produces an estimate Y_hat minimizing its MSE against Y (the actual stock_distributed).
- We used XGBoost to perform the ensemble learning part.
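A sketch of that stacking step on synthetic data, using scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost (the four forecasts F1..F4 and the feature X are simulated here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 200
Y = rng.gamma(2.0, 10.0, n)  # realized demand (non-negative)
# four candidate forecasts of varying quality, plus one exogenous feature
F = np.column_stack([Y + rng.normal(0, s, n) for s in (2, 4, 6, 8)])
X = rng.normal(size=(n, 1))

features = np.hstack([F, X])
ens = GradientBoostingRegressor(random_state=0).fit(features, Y)
yhat = ens.predict(features)

# in-sample check: the stacked prediction should beat the noisiest forecast
print(np.mean((yhat - Y) ** 2) < np.mean((F[:, 3] - Y) ** 2))
```

In practice the ensemble is trained on held-out months rather than in-sample; this only illustrates the input/output shape of the stacker.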
## Key Takeaway of this approach
- Combines traditional forecast methods (SARIMAX) and Machine Learning (XGBoost)
- It's very transferable to other scenarios
- It uses external data and it's easy to plug in more external data to improve the forecasts.
- The ensemble piece makes adding model possible and easy
```
# suppress warning to make cleaner output
import warnings
warnings.filterwarnings("ignore")
```
# Outline
## Step 1: Data Cleaning
### Upsample (fill in gaps in time series)
- notebook: datacleaning_upsample.ipynb
```
%run datacleaning_upsample.ipynb
```
- inputs: "..\0_data\contraceptive_logistics_data.csv"
- output: "..\2_pipeline\df_upsample.csv"
- steps:
- upsample - make sure all individual product-site series has no gaps in time even though they may differ in length
- fill in 0 for NA in stock_distributed
### Supplement
(A very important note about supplement data: if we are to use any supplement data, the values must exist for the 3 months to be forecasted (10, 11, 12). Be careful when constructing the supplement dataset.)
- notebook: datacleaning_prep_supplement.ipynb
- input:
- "..\0_data\supplement_data_raw.csv"
- steps:
- time invariant supplement data:
- identifiers: site_code product_code region district
- information: (currently) road condition, product type
- output: "../0_data/time_invariant_supplement.dta"
- time variant supplement data: (include rows for time that need to be forecasted)
- identifiers: temp_timeindex year month site_code product_code region district
- information: maxtemp temp pressure relative rain visibility windspeed maxsus* storm fog
- output: "../0_data/time_variant_supplement.dta"
### Combine
- notebook: datacleaning_combine.ipynb
```
%run datacleaning_combine.ipynb
```
- input:
- "../2_pipeline/df_upsample.csv"
- '../0_data/submission_format.csv'
- "../0_data/time_invariant_supplement.dta"
- "../0_data/time_variant_supplement.dta"
- '../0_data/service_delivery_site_data.csv'
- output: a site_code & product_code & date level logistics data with time variant and time invariant exogenous features :
- for development: '../0_data/df_training.csv'
- for final prediction(contained 3 last month exog vars and space holder) '../0_data/df_combined_fullsample.csv'
## Step 2: Multiple Agg-Forecast-Distribute Models in parallel
### Region Level
- notebook: model_SARIMAX_Distribute_region.ipynb
```
%run model_SARIMAX_Distribute_region.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
1. ../2_pipeline/final_pred_region_lev.csv
  2. '../2_pipeline/final_distribute_regionlev.csv'
### District Level
- notebook: model_SARIMAX_Distribute_District.ipynb
```
%run model_SARIMAX_Distribute_District.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
1. '../2_pipeline/final_pred_district_lev.csv'
2. '../2_pipeline/final_distribute_districtlev.csv'
### Region-Product_type level
- notebook: model_SARIMAX_Distribute_regionproducttype.ipynb
```
%run model_SARIMAX_Distribute_regionproducttype.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
1. ../2_pipeline/final_pred_region_producttype_lev.csv
2. '../2_pipeline/final_distribute_regionproducttypelev.csv'
### Individual Level, with raw data, winsorized data, and rolling smoothed data
- notebook: model_SARIMAX_individual.ipynb
```
%run model_SARIMAX_individual.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
- '../2_pipeline/final_pred_ind_lev.csv'
- notebook: model_SARIMAX_individual_winsorized.ipynb
```
%run model_SARIMAX_individual_winsorized.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
- '../2_pipeline/final_pred_ind_winsorized_lev.csv'
- notebook: model_SARIMAX_individual_rollingsmoothed.ipynb
```
%run model_SARIMAX_individual_rollingsmoothed.ipynb
```
- input:
- '../0_data/df_combined_fullsample.csv'
- output:
- '../2_pipeline/final_pred_ind_rollingsmoothed_lev.csv'
## Step 3: Ensemble, learn the ensemble model, make final prediction
- notebook: ensemble.ipynb
```
%run ensemble.ipynb
```
- input:
- Distribution model results: glob.glob('../2_pipeline/final_distribute_*.csv')
- SARIMAX results: glob.glob('../2_pipeline/final_pred_ind*.csv')
- output:
# Ensemble Model Details
## Import: data with actual stock distributed and exogenous variables
```
import pandas as pd
df_combined=pd.read_csv('../0_data/df_combined_fullsample.csv')
```
## Import results from the distribution (shares) models
```
import glob
temp=glob.glob('../2_pipeline/final_distribute_*.csv')
print('\n importing results from the distribution stage of various aggregation levels \n')
print(temp)
distribute_districtlev=pd.read_csv('../2_pipeline/final_distribute_districtlev.csv').drop(columns=['Unnamed: 0'])
distribute_regionlev=pd.read_csv('../2_pipeline/final_distribute_regionlev.csv').drop(columns=['Unnamed: 0'])
distribute_regionproducttypelev=pd.read_csv('../2_pipeline/final_distribute_regionproducttypelev.csv').drop(columns=['Unnamed: 0'])
```
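As a side note, hardcoding `"\\"` separators in paths (as the reads above originally did) breaks on non-Windows machines. A small sketch of a portable alternative using the standard library's `pathlib` (the file name here is one of the pipeline outputs listed above):

```python
from pathlib import Path

# Build pipeline paths portably instead of hardcoding "\\" separators.
pipeline_dir = Path("..") / "2_pipeline"
path = pipeline_dir / "final_distribute_districtlev.csv"

# as_posix() always yields forward slashes, which pandas accepts on
# every platform.
print(path.as_posix())  # ../2_pipeline/final_distribute_districtlev.csv
```

`pd.read_csv(path)` accepts a `Path` object directly, so the conversion to a string is optional.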
## Import SARIMAX_agg model results and merge with distribute
```
print('Import SARIMAX_agg model results and merge with predicted distribute values')
sarimax_pred_region=pd.read_csv('../2_pipeline/final_pred_region_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_region'})
sarimax_pred_regionproducttype=pd.read_csv('../2_pipeline/final_pred_region_producttype_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_regionproducttype'})
sarimax_pred_district=pd.read_csv('../2_pipeline/final_pred_district_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_district'})
```
- merge sarimax_pred_region with distribute_region
```
pred_agg_region=pd.merge(left=sarimax_pred_region,right=distribute_regionlev,on=['date','region','product_code'],how='right')
pred_agg_region.describe()
```
- merge sarimax_pred_regionproducttype with distribute_regionproducttype
```
pred_agg_regionproducttype=pd.merge(left=sarimax_pred_regionproducttype,right=distribute_regionproducttypelev,on=['date','region','product_type'],how='right')
pred_agg_regionproducttype.describe()
```
- merge sarimax_pred_district with distribute_districtlev
```
pred_agg_district=pd.merge(left=sarimax_pred_district,right=distribute_districtlev,on=['date','district','product_code'],how='right')
pred_agg_district.describe()
```
## Import three individual level sarimax results
```
import glob
temp=glob.glob('../2_pipeline/final_pred_ind*.csv')
print('\n Import three individual level sarimax results \n')
print(temp)
sarimax_ind=pd.read_csv('../2_pipeline/final_pred_ind_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_ind'})
sarimax_ind.head(2)
sarimax_ind_smooth=pd.read_csv('../2_pipeline/final_pred_ind_rollingsmoothed_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_ind_smooth'})
sarimax_ind_smooth.head(2)
sarimax_ind_winsorized=pd.read_csv('../2_pipeline/final_pred_ind_winsorized_lev.csv').rename(columns={'Unnamed: 0':'date','stock_distributed_forecasted':'stock_forecast_agg_ind_winsorized'})
sarimax_ind_winsorized.head(2)
df_ensemble=pd.merge(left=df_combined,right=pred_agg_region.drop(columns=['agg_level']),on=['date','region','product_code','site_code'],how='left')
len(df_ensemble)
df_ensemble=pd.merge(left=df_ensemble,right=pred_agg_regionproducttype.drop(columns=['agg_level']),on=['date','region','product_type','site_code','product_code'],how='left')
len(df_ensemble)
df_ensemble=pd.merge(left=df_ensemble,right=pred_agg_district.drop(columns=['agg_level']),on=['date','district','site_code','product_code'],how='left')
df_ensemble=pd.merge(left=df_ensemble,right=sarimax_ind,on=['date','site_code','product_code'],how='left')
df_ensemble=pd.merge(left=df_ensemble,right=sarimax_ind_smooth,on=['date','site_code','product_code'],how='left')
df_ensemble=pd.merge(left=df_ensemble,right=sarimax_ind_winsorized,on=['date','site_code','product_code'],how='left')
df_ensemble['date']=pd.to_datetime(df_ensemble['date'])
df_ensemble.set_index('date')['2019-10':].describe()
df_ensemble=df_ensemble.fillna(0)
df_ensemble.head()
```
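The `['2019-10':]` slice above relies on pandas partial-string indexing on a `DatetimeIndex`. A toy illustration of the behavior (dates and values are made up for the demo):

```python
import pandas as pd

# Toy frame with a monthly DatetimeIndex to illustrate the
# '2019-10':'2019-12' style slice used in this notebook.
idx = pd.date_range("2019-08-01", periods=5, freq="MS")
df = pd.DataFrame({"x": range(5)}, index=idx)

# Partial-string indexing selects whole months, endpoints inclusive.
subset = df["2019-10":"2019-12"]
print(len(subset))  # 3
```

Note this only works when the index is an actual `DatetimeIndex`, which is why `pd.to_datetime` is applied to the `date` column first.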
## Sort df_ensemble dataframe by date to ensure the train-test data are set up correctly
```
df_ensemble.sort_values(by='date',inplace=True)
```
## Feature Engineering
### Add a few interactions
```
df_ensemble['interaction_1']=df_ensemble['pred_share_regionlev']*df_ensemble['stock_forecast_agg_regionproducttype']
df_ensemble['interaction_2']=df_ensemble['pred_share_districtlev']*df_ensemble['stock_forecast_agg_regionproducttype']
df_ensemble['weather_interaction']=df_ensemble['maxtemp']*df_ensemble['rainfallsnowmelt']
columns_to_encode=['site_code', 'product_code', 'year', 'month',
'region', 'district', 'product_type','site_type']
columns_continuous_exog=['regionroads',
'regionasphaltroads', 'regionearthroads', 'regionsurfacetreatmentroads',
'regionpoorroads', 'poorroads', 'earthroads', 'asphaltroads', 'temp',
'maxtemp', 'pressure', 'relativehumidity', 'rainfallsnowmelt',
'visibility', 'windspeed', 'maxsustainedwindspeed', 'rainordrizzle',
'storm', 'fog','weather_interaction']
columns_continuous_frommodel=['stock_forecast_agg_region', 'pred_share_regionlev',
'stock_forecast_agg_regionproducttype',
'pred_share_regionproducttype_tlev', 'stock_forecast_agg_district',
'pred_share_districtlev', 'stock_forecast_agg_ind',
'stock_forecast_agg_ind_smooth', 'stock_forecast_agg_ind_winsorized','interaction_1','interaction_2']
```
## Setting up target
- the y vector has all 0s for the last 3 months' worth of data (these are the placeholder rows whose values we ultimately predict)
```
y=df_ensemble.stock_distributed
```
## Setting up features
```
# One-hot encode the categorical columns
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False,categories='auto')
encoded_columns = ohe.fit_transform(df_ensemble[columns_to_encode])
import numpy as np
np.shape(encoded_columns)
```
- produce one-hot encoding for categorical values
```
features=pd.DataFrame(data=encoded_columns,columns=ohe.get_feature_names(columns_to_encode))
features.describe()
```
- add continuous values and put everything into an X matrix
```
X=features
X[columns_continuous_exog]=df_ensemble[columns_continuous_exog]
X[columns_continuous_frommodel]=df_ensemble[columns_continuous_frommodel]
X.to_csv('x.csv')
```
## Scale X
```
from sklearn.preprocessing import scale
Xs = scale(X)
```
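`sklearn.preprocessing.scale` standardizes each column to zero mean and unit variance. A minimal demo (the matrix here is made up):

```python
import numpy as np
from sklearn.preprocessing import scale

# scale() standardizes each column to zero mean and unit variance,
# which keeps features on comparable ranges.
X_demo = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
Xs_demo = scale(X_demo)
print(Xs_demo.mean(axis=0))  # ~[0. 0.]
print(Xs_demo.std(axis=0))   # ~[1. 1.]
```

One caveat: scaling the full matrix at once (as this notebook does with `Xs = scale(X)`) lets statistics from the prediction rows influence the training rows; fitting a `StandardScaler` on the training slice only would avoid that leakage.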
# Use XGboost to make final prediction
## XGBoost
```
import xgboost as xgb
from sklearn.metrics import mean_squared_error
import pandas as pd
import numpy as np
df_train=df_ensemble.set_index('date')[:'2019-9']
df_pred=df_ensemble.set_index('date')['2019-10':'2019-12']
Xs_train=Xs[:df_train.shape[0]]
y_train=y[:df_train.shape[0]]
Xs_pred=Xs[-df_pred.shape[0]:]
data_dmatrix = xgb.DMatrix(data=Xs,label=y)
xg_reg = xgb.XGBRegressor(objective ='reg:squarederror', colsample_bytree = 0.3, learning_rate = 0.1,
max_depth = 30, alpha = 10, n_estimators = 20)
xg_reg.fit(Xs_train,y_train)
preds = xg_reg.predict(Xs_pred)
len(preds)
import matplotlib.pyplot as plt
%matplotlib inline
xgb.plot_importance(xg_reg)
plt.rcParams['figure.figsize'] = [8,10]
plt.savefig('../2_pipeline/xgboost_plot_importance.jpg')
plt.show()
```
## Collect Results
```
temp=df_pred[['year','month','site_code','product_code']].copy()
temp['predicted_value']=preds
temp=temp.reset_index()
temp=temp.drop(columns='date')
submission_format=pd.read_csv('../0_data/submission_format.csv')
submission=pd.merge(left=submission_format.drop(columns='predicted_value'),right=temp,on=['year','month','site_code','product_code'],how='left')
submission.describe()
submission['predicted_value']=submission['predicted_value'].apply(lambda x: max(x,0))
submission.describe()
submission.head()
submission[['year','month','site_code','product_code','predicted_value']].to_csv('../submission.csv')
```
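The `apply(lambda x: max(x, 0))` step above zeroes out negative forecasts element-wise. Pandas has a vectorized equivalent, `Series.clip`, sketched here on made-up values:

```python
import pandas as pd

# Equivalent to .apply(lambda x: max(x, 0)), but vectorized:
s = pd.Series([-2.0, 0.5, 3.0])
print(s.clip(lower=0).tolist())  # [0.0, 0.5, 3.0]
```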
% 30 Days of Kaggle - Day 10: [Over-Fitting and Under-Fitting](https://www.kaggle.com/dansbecker/underfitting-and-overfitting).
Now that I can create models I need to be able to evaluate their accuracy.
I calculated mean absolute error in the last notebook using sklearn.
$$\mathrm{MAE} = \frac{\sum_{i=1}^{N} |\text{predicted}_i - \text{actual}_i|}{N}$$
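The formula can be checked directly against sklearn's implementation (the toy arrays here are made up for the demo):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# MAE written out per the formula above, checked against sklearn.
actual = np.array([3.0, 5.0, 2.5])
predicted = np.array([2.5, 5.0, 4.0])

mae_manual = np.sum(np.abs(predicted - actual)) / len(actual)
mae_sklearn = mean_absolute_error(actual, predicted)
print(mae_manual, mae_sklearn)  # both 2/3 ≈ 0.667
```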
The lesson notes give a great explanation of under- and over-fitting:

They use the example of housing data, with decision tree depth as the variable to watch. A binary tree of depth $n$ has at most $2^n$ leaf nodes. If $n$ is too small we may be under-fitting; if $n$ is too large we eventually end up with one training case in each leaf node. There's a sweet spot we have to find for the training data.
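The $2^n$ leaf bound can be verified empirically with sklearn's `get_n_leaves()` on a toy fit (the random data below is only for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# A tree of depth n has at most 2**n leaves; check on random toy data.
rng = np.random.default_rng(0)
X = rng.random((200, 1))
y = rng.random(200)

for depth in [2, 4, 8]:
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X, y)
    assert tree.get_n_leaves() <= 2 ** depth
    print(depth, tree.get_n_leaves())
```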
Use this utility method to compare MAE for different max leaf nodes:
```
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
model = DecisionTreeRegressor(max_leaf_nodes = max_leaf_nodes, random_state = 0)
model.fit(train_X, train_y)
predictions_val = model.predict(val_X)
mae = mean_absolute_error(val_y, predictions_val)
return mae
```
These cells repeat the earlier calculations for the Melbourne housing data:
```
# Data Loading Code Runs At This Point
import pandas as pd
# Load data
melbourne_file_path = '../datasets/kaggle/melbourne-house-prices/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Filter rows with missing values
filtered_melbourne_data = melbourne_data.dropna(axis=0)
# Choose target and features
y = filtered_melbourne_data.Price
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
'YearBuilt', 'Lattitude', 'Longtitude']
X = filtered_melbourne_data[melbourne_features]
```
Split data into train and test sets:
```
from sklearn.model_selection import train_test_split
# split data into training and validation data, for both features and target
train_X, val_X, train_y, val_y = train_test_split(X, y,random_state = 0)
```
Now let's calculate MAE with differing values of max_leaf_nodes:
```
for max_leaf_nodes in [5, 50, 500, 5000]:
my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
print("Max leaf nodes: %4d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, my_mae))
```
Exercises: do the same thing with the Iowa housing model.
```
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
iowa_file_path = '../datasets/kaggle/iowa-house-prices/train.csv'
home_data = pd.read_csv(iowa_file_path)
y = home_data.SalePrice
feature_columns = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[feature_columns]
# Re-split: the earlier train_X/val_X came from the Melbourne data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
iowa_model = DecisionTreeRegressor(random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE: {:,.0f}".format(val_mae))
candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500, 600, 700, 800, 900, 1000]
for max_leaf_nodes in candidate_max_leaf_nodes:
mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
print("Max leaf nodes: %4d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, mae))
```
Now that we know that we want 500 leaf nodes we can use all the data to create the final model.
```
final_model = DecisionTreeRegressor(max_leaf_nodes=500)
final_model.fit(X, y)
print(final_model)
```
```
import numpy as np
import pandas as pd
# from sklearn.preprocessing import
from sklearn.model_selection import train_test_split
from random import randint
import sklearn.metrics as skm
from xgboost import XGBClassifier
import xgboost as xgb
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
def replace_rare_entries(df, columns, threshold_frac):
tot_instances = df.shape[0]
threshold = tot_instances * threshold_frac
df = df.apply(lambda x: x.mask(x.map(x.value_counts()) < threshold, 'RARE') if x.name in columns else x)
return df
categoricals = ['OP_UNIQUE_CARRIER', 'DEST', 'DEP_TIME_BLK', 'DAY_OF_MONTH', 'DAY_OF_WEEK', 'MONTH','weather_label']
numericals = ['precipitation_intensity','precipitation_probability','visibility','cloud_cover','humidity','wind_bearing','wind_speed','uv_index','temperature','moon_phase','dew_point','pressure','sunrise_time','sunset_time']
df = pd.read_csv('../Data/new_york/year_lga_dep_weather.csv')
data = df.drop([col for col in df.columns if (col not in categoricals and col not in numericals)], axis=1)
data = replace_rare_entries(data, ['DEST'], 0.005)
data = replace_rare_entries(data, ['OP_UNIQUE_CARRIER'], 0.005)
data = pd.get_dummies(data, columns=categoricals)
label = df['DEP_DEL15']
data.columns
data = data[~label.isna()]
label = label[~label.isna()]
print('Rows: {}\nFeatures: {}\nLabel-1 Fraction: {}'
.format(data.shape[0], data.shape[1], label.sum() / label.shape[0]))
thres = np.linspace(0, 1, 500)
x, x_test, y, y_test = train_test_split(data, label, test_size=0.2,
random_state=randint(1, 500),
stratify=label)
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2,
random_state=randint(1, 500),
stratify=y)
Dtrain = xgb.DMatrix(x_train, label=y_train)
Dval = xgb.DMatrix(x_val, label=y_val)
y_train.shape[0]
param = { # General guidelines for initial parameters:
 'min_child_weight': 1, # default 1 (choose small for high class imbalance)
 'gamma': 0.3, # 0.1-0.2
 'lambda': 0, # L2 regularization - default = 1
 'scale_pos_weight': 4, # default 1; roughly (negative count / positive count) for imbalanced classes
 'subsample': 0.6, # 0.5-0.9
 'colsample_bytree': 0.8, # 0.5-0.9
 'colsample_bylevel': 0.7, # 0.5-0.9
 'max_depth': 6, #5 # 3-10
 'eta': 0.1, # 0.05-0.3
 'silent': 0, # 0 - prints progress, 1 - quiet
 'objective': 'binary:logistic', # 'num_class' applies only to multi-class objectives, so it is omitted
 'eval_metric': 'auc'}
num_round = 10000 # the number of training iterations if not stopped early
evallist = [(Dtrain, 'train'), (Dval, 'eval')] # Specify validation set to watch performance
# Train the model on the training set to get an initial impression on the performance
model = xgb.train(param, Dtrain, num_round, evallist, early_stopping_rounds=10)
print("Best error: {:.2f} with {} rounds".format(
model.best_score,
model.best_iteration+1))
Dtest = xgb.DMatrix(x_test, label=y_test)
probas = model.predict(Dtest)
y_test = Dtest.get_label()
accs, recalls, precs, f1s = [], [], [], []
for thr in thres:
y_pred = (probas > thr).astype(int)
accs.append(skm.accuracy_score(y_test, y_pred))
recalls.append(skm.recall_score(y_test, y_pred))
precs.append(skm.precision_score(y_test, y_pred))
f1s.append(skm.f1_score(y_test, y_pred))
fig = plt.figure(figsize=(20, 10))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
for i, (metric, name) in enumerate(zip([accs, recalls, precs, f1s], ['acc', 'rcl', 'prc', 'f1']), start=1):
fig.add_subplot(2, 2, i)
plt.plot(thres, metric)
plt.title(name)
fpr, tpr, _ = roc_curve(y_test, probas)  # roc_curve returns (fpr, tpr, thresholds)
plt.plot(fpr, tpr);
best_thres = thres[np.argmax(f1s)]
y_pred = (probas > best_thres).astype(int)
print('Acc for max f1 threshold: ', skm.accuracy_score(y_test, y_pred))
print('Max acc : ', max(accs))
print('Precision : ', skm.precision_score(y_test, y_pred))
print('Recall : ', skm.recall_score(y_test, y_pred))
print('AUC : ', skm.auc(fpr, tpr))
xgb.plot_importance(model, max_num_features=20, importance_type='gain');
```
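The `replace_rare_entries` helper above is worth a self-contained check; the function is restated here verbatim and applied to a tiny made-up frame:

```python
import pandas as pd

# Restatement of replace_rare_entries from the notebook above:
# categories rarer than threshold_frac of the rows become 'RARE'.
def replace_rare_entries(df, columns, threshold_frac):
    threshold = df.shape[0] * threshold_frac
    return df.apply(
        lambda x: x.mask(x.map(x.value_counts()) < threshold, 'RARE')
        if x.name in columns else x)

# 10 rows, threshold = 10 * 0.2 = 2: 'XNA' and 'GUM' each appear once.
df = pd.DataFrame({'DEST': ['ATL'] * 8 + ['XNA', 'GUM']})
out = replace_rare_entries(df, ['DEST'], 0.2)
print(out['DEST'].tolist())  # 8x 'ATL', then 'RARE', 'RARE'
```

Collapsing rare categories like this keeps the one-hot encoded feature matrix from exploding with columns that carry almost no signal.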
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from os.path import join
import seaborn as sns
```
# Build results tables
```
N_patients = {
"AM":14646484,
"AU":973941,
"CH":521211,
"DER":771281,
"GGH":748889,
"HNO":564501,
"IM":1693363,
"KI":743235,
"NEU":212302,
"ORTR":1518719,
"PSY":170016,
"RAD":1593710,
"URO":394209
}
specialty_map = {
"AM":"GP",
"AU":"OPH",
"CH":"SRG",
"DER":"DER",
"GGH":"OBGYN",
"HNO":"ENT",
"IM":"IM",
"KI":"PED",
"NEU":"NEU",
"ORTR":"ORTH",
"PSY":"PSY",
"RAD":"RAD",
"URO":"OPT"
}
src = "results"
fname = "states_doc_info_total_hour-based_quarterly.csv"
N_patients = pd.read_csv(join(src, fname))
fname = "searching_pats_{}_iter10_shocksize{}.csv"
specialties = ["AM", "AU", "CH", "DER", "GGH", "HNO", "IM", "KI",
"NEU", "ORTR", "PSY", "RAD", "URO"]
shocksizes = [7, 10, 15, 20]
results = pd.DataFrame()
for shocksize in shocksizes:
for spec in specialties:
df = pd.read_csv(join(src, fname.format(spec, shocksize)),
names=[f"step_{i}" for i in range(1, 11)], header=None)
for col in df.columns:
df[col] = df[col] / N_patients[f"{spec}_total"].sum() * 100
df["run"] = range(1, 11)
df["specialty"] = specialty_map[spec]
df["shocksize"] = shocksize
cols = ["step_1", "step_10", "run", "specialty", "shocksize"]
results = pd.concat([results, df[cols]])
results = results.reset_index(drop=True)
ylims = {7:[5, 0.5], 10:[8, 1], 15:[10, 2], 20:[14, 3]}
yticks_t1 = {
7:[1, 2, 3, 4, 5],
10:[0, 2, 4, 6, 8],
15:[0, 2, 4, 6, 8, 10, 12],
20:[0, 2, 4, 6, 8, 10, 12, 14, 16]}
yticks_t10 = {
7:[0, 0.1, 0.2, 0.3, 0.4, 0.5],
10:[0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2],
15:[0., 0.5, 1.0, 1.5, 2],
20:[0, 1, 2, 3, 4]
}
ylabels = {1: "% searching patients",
10: "% lost patients"}
for shock_size in [7, 10, 15, 20]:
df = results[results["shocksize"] == shock_size]
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
cmap = plt.get_cmap("RdYlGn_r")
for i, step in enumerate([1, 10]):
ax = axes[i]
agg = df[["specialty", f"step_{step}"]]\
.groupby("specialty")\
.agg("median")\
.sort_values(by=f"step_{step}")
order = agg.index
    max_val = agg[f"step_{step}"].iloc[-1]
palette = [cmap(i) for i in agg[f"step_{step}"] / max_val]
sns.boxplot(
ax=ax,
x="specialty",
y=f"step_{step}",
data=df,
order=order,
palette=palette
)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylabel(ylabels[step], fontsize=16)
ax.set_xlabel("")
ax.set_title(f"displacement step {step}", fontsize=16)
    ax.set_xticks(range(len(order)))
    ax.set_xticklabels(order, fontsize=11)
axes[0].set_ylim(0, ylims[shock_size][0])
axes[0].set_yticks(yticks_t1[shock_size])
axes[0].set_yticklabels(yticks_t1[shock_size], fontsize=12)
axes[1].set_ylim(0, ylims[shock_size][1])
axes[1].set_yticks(yticks_t10[shock_size])
axes[1].set_yticklabels(yticks_t10[shock_size], fontsize=12)
fig.tight_layout()
plt.savefig(f"figures/shock_results_{shock_size}.svg")
```
```
# Import Dependencies
import pandas as pd
from bs4 import BeautifulSoup as bs
import requests
from splinter import Browser
from splinter.exceptions import ElementDoesNotExist
from IPython.display import HTML
#browser = Browser()
# Create a path to use for splinter
executable_path = {'executable_path' : 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
# Create shortcut for URL of main page
url = 'https://mars.nasa.gov/news/'
# Get response of web page
response = requests.get(url)
# html parser with beautiful soup
soup = bs(response.text, 'html.parser')
# check to see if it parses
print(soup.prettify())
# Locates the most recent article title
title = soup.find("div", class_= "content_title").text
print(title)
# Locates the paragraph within the most recent story
paragraph= soup.find("div", class_= "rollover_description_inner").text
print(paragraph)
# Visits the website below in the new browser
browser.visit('https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars')
# Manually clicks the link in the browser
browser.click_link_by_id('full_image')
# manually clicks the "more info" link
browser.click_link_by_partial_text('more info')
# html parser
html=browser.html
soup = bs(html, "html.parser")
image= soup.select_one('figure.lede a img').get('src')
image
main_url= "https://www.jpl.nasa.gov"
featured_image_url = main_url+image
featured_image_url
# Use Pandas to scrape data
tables = pd.read_html('https://space-facts.com/mars/')
tables
# Creates a dataframe from the list that is "tables"
mars_df = pd.DataFrame(tables[0])
# Changes the name of the columns
mars_df = mars_df.rename(columns={0: "Information", 1: "Values"})
# Transforms the dataframe so it is readable as html
mars_html_table = [mars_df.to_html(classes='data_table', index=False, header=False, border=0)]
mars_html_table
# Visits the website below
browser.visit('https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars')
# html and parser
html = browser.html
soup = bs(html, 'html.parser')
# Creates an empty list that will contain the names of the hemispheres
hemi_names = []
# Search for the names of all four hemispheres
results = soup.find_all('div', class_="collapsible results")
hemispheres = results[0].find_all('h3')
# Get text and store in list
for name in hemispheres:
hemi_names.append(name.text)
hemi_names
# Search for thumbnail links
thumbnail_results = results[0].find_all('a')
thumbnail_links = []
for thumbnail in thumbnail_results:
# If the thumbnail element has an image...
if (thumbnail.img):
# then grab the attached link
thumbnail_url = 'https://astrogeology.usgs.gov/' + thumbnail['href']
# Append list with links
thumbnail_links.append(thumbnail_url)
thumbnail_links
full_imgs = []
for url in thumbnail_links:
# Click through each thumbnail link
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')
# Scrape each page for the relative image path
results = soup.find_all('img', class_='wide-image')
relative_img_path = results[0]['src']
# Combine the relative image path to get the full url
img_link = 'https://astrogeology.usgs.gov/' + relative_img_path
# Add full image links to a list
full_imgs.append(img_link)
full_imgs
# Zip together the list of hemisphere names and hemisphere image links
mars_hemi_zip = zip(hemi_names, full_imgs)
hemisphere_image_urls = []
# Iterate through the zipped object
for title, img in mars_hemi_zip:
mars_hemi_dict = {}
# Add hemisphere title to dictionary
mars_hemi_dict['title'] = title
# Add image url to dictionary
mars_hemi_dict['img_url'] = img
# Append the list with dictionaries
hemisphere_image_urls.append(mars_hemi_dict)
hemisphere_image_urls
```
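The final zip/append loop above can also be condensed into a single list comprehension; a sketch with placeholder hemisphere names and URLs (not real scrape results):

```python
# Placeholder inputs standing in for the scraped lists above.
hemi_names = ['Cerberus', 'Schiaparelli']
full_imgs = ['https://example.com/a.jpg', 'https://example.com/b.jpg']

# One dict per (title, image) pair, same shape as the loop produces.
hemisphere_image_urls = [
    {'title': title, 'img_url': img}
    for title, img in zip(hemi_names, full_imgs)
]
print(hemisphere_image_urls[0]['title'])  # Cerberus
```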
# tensorflow-compress
[](https://colab.research.google.com/github/byronknoll/tensorflow-compress/blob/master/tensorflow-compress.ipynb)
Made by Byron Knoll. GitHub repository: https://github.com/byronknoll/tensorflow-compress
### Description
tensorflow-compress performs lossless data compression using neural networks in TensorFlow. It can run on GPUs with a large batch size to get a substantial speed improvement. It is made using Colab, which should make it easy to run through a web browser. You can choose a file, perform compression (or decompression), and download the result.
tensorflow-compress is open source and the code should hopefully be easy to understand and modify. Feel free to experiment with the code and create pull requests with improvements.
The neural network is trained from scratch during compression and decompression, so the model weights do not need to be stored. Arithmetic coding is used to encode the model predictions to a file.
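The connection between prediction quality and compressed size can be sketched in a few lines: an arithmetic coder spends roughly $-\log_2 p$ bits on a symbol the model assigned probability $p$, so sharper predictions mean smaller output. This toy illustration is not the notebook's coder, just the information-theoretic bound with made-up probabilities:

```python
import math

# Arithmetic coding costs about -log2(p) bits for a symbol the model
# predicted with probability p, so sharper predictions compress better.
def ideal_bits(probs):
    return sum(-math.log2(p) for p in probs)

confident = [0.9, 0.8, 0.95]   # a model that predicts well
uniform = [1 / 256] * 3        # no model: 8 bits per byte
print(ideal_bits(confident) < ideal_bits(uniform))  # True
```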
Feel free to contact me at byron@byronknoll.com if you have any questions.
### Instructions:
Basic usage: configure all the fields in the "Parameters" section and select Runtime->Run All.
Advanced usage: save a copy of this notebook and modify the code.
### Related Projects
* [NNCP](https://bellard.org/nncp/) - this uses a similar LSTM architecture to tensorflow-compress. It is limited to running only on CPUs.
* [lstm-compress](https://github.com/byronknoll/lstm-compress) - similar to NNCP, but has a batch size limit of one (so it is significantly slower).
* [cmix](http://www.byronknoll.com/cmix.html) - shares the same LSTM code as lstm-compress, but contains a bunch of other components to get better compression rate.
* [DeepZip](https://github.com/mohit1997/DeepZip) - this also performs compression using TensorFlow. However, it has some substantial architecture differences to tensorflow-compress: it uses pretraining (using multiple passes over the training data) and stores the model weights in the compressed file.
### Benchmarks
These benchmarks were performed using tensorflow-compress v3 with the default parameter settings. Some parameters differ between enwik8 and enwik9 as noted in the parameter comments. Colab Pro was used with Tesla V100 GPU. Compression time and decompression time are approximately the same.
* enwik8: compressed to 16,128,954 bytes in 32,113.38 seconds. NNCP preprocessing time: 206.38 seconds. Dictionary size: 65,987 bytes.
* enwik9: compressed to 118,938,744 bytes in 297,505.98 seconds. NNCP preprocessing time: 2,598.77 seconds. Dictionary size: 79,876 bytes. Since Colab has a 24 hour time limit, the preprocessed enwik9 file was split into four parts using [this notebook](https://colab.sandbox.google.com/github/byronknoll/tensorflow-compress/blob/master/nncp-splitter.ipynb). The "checkpoint" option was used to save/load model weights between processing each part. For the first part, start_learning_rate=0.0007 and end_learning_rate=0.0005 was used. For the remaining three parts, a constant 0.00035 learning rate was used.
See the [Large Text Compression Benchmark](http://mattmahoney.net/dc/text.html) for more information about the test files and a comparison with other programs.
### Versions
* v3 - released November 28, 2020. Changes from v2:
* Parameter tuning
* [New notebook](https://colab.sandbox.google.com/github/byronknoll/tensorflow-compress/blob/master/nncp-splitter.ipynb) for file splitting
* Support for learning rate decay
* v2 - released September 6, 2020. Changes from v1:
* 16 bit floats for improved speed
* Weight updates occur at every timestep (instead of at spaced intervals)
* Support for saving/loading model weights
* v1 - released July 20, 2020.
## Parameters
```
batch_size = 96 #@param {type:"integer"}
#@markdown >_This will split the file into N batches, and process them in parallel. Increasing this will improve speed but can make compression rate worse. Make this a multiple of 8 to improve speed on certain GPUs._
seq_length = 11 #@param {type:"integer"}
#@markdown >_This determines the horizon for back propagation through time. Reducing this will improve speed, but can make compression rate worse._
rnn_units = 1400 #@param {type:"integer"}
#@markdown >_This is the number of units to use within each LSTM layer. Reducing this will improve speed, but can make compression rate worse. Make this a multiple of 8 to improve speed on certain GPUs._
num_layers = 6 #@param {type:"integer"}
#@markdown >_This is the number of LSTM layers to use. Reducing this will improve speed, but can make compression rate worse._
start_learning_rate = 0.00075 #@param {type:"number"}
#@markdown >_Learning rate for Adam optimizer. Recommended value for enwik8: 0.00075. For enwik9, see the notes in the "Benchmarks" section for the recommended learning rate._
end_learning_rate = 0.00075 #@param {type:"number"}
#@markdown >_Typically this should be set to the same value as the "start_learning_rate" parameter above. If this is set to a different value, the learning rate will start at "start_learning_rate" and linearly change to "end_learning_rate" by the end of the file. For large files this could be useful for learning rate decay._
mode = 'compress' #@param ["compress", "decompress", "both", "preprocess_only"]
#@markdown >_Whether to run compression only, decompression only, or both. "preprocess_only" will only run preprocessing and skip compression._
preprocess = 'nncp' #@param ["cmix", "nncp", "nncp-done", "none"]
#@markdown >_The choice of preprocessor. NNCP works better on enwik8/enwik9. NNCP preprocessing is slower since it constructs a custom dictionary, while cmix uses a pretrained dictionary. "nncp-done" is used for files which have already been preprocessed by NNCP (the dictionary must also be included)._
n_words = 8192 #@param {type:"integer"}
#@markdown >_Only used for NNCP preprocessor: this is the approximative maximum number of words of the dictionary. Recommended value for enwik8/enwik9: 8192._
min_freq = 64 #@param {type:"integer"}
#@markdown >_Only used for NNCP preprocessor: this is the minimum frequency of the selected words. Recommended value for enwik8: 64, enwik9: 512._
path_to_file = "enwik8" #@param ["enwik4", "enwik6", "enwik8", "enwik9", "custom"]
#@markdown >_Name of the file to compress or decompress. If "custom" is selected, use the next parameter to set a custom path._
custom_path = '' #@param {type:"string"}
#@markdown >_Use this if the previous parameter was set to "custom". Set this to the name of the file you want to compress/decompress. You can transfer files using the "http_path" or "local_upload" options below._
http_path = '' #@param {type:"string"}
#@markdown >_The file from this URL will be downloaded. It is recommended to use Google Drive URLs to get fast transfer speed. Use this format for Google Drive files: https://drive.google.com/uc?id= and paste the file ID at the end of the URL. You can find the file ID from the "Get Link" URL in Google Drive. You can enter multiple URLs here, space separated._
local_upload = False #@param {type:"boolean"}
#@markdown >_If enabled, you will be prompted in the "Setup Files" section to select files to upload from your local computer. You can upload multiple files. Note: the upload speed can be quite slow (use "http_path" for better transfer speeds)._
download_option = "no_download" #@param ["no_download", "local", "google_drive"]
#@markdown >_If this is set to "local", the output files will be downloaded to your computer after compression/decompression. If set to "google_drive", they will be copied to your Google Drive account (which is significantly faster than downloading locally)._
checkpoint = False #@param {type:"boolean"}
#@markdown >_If this is enabled, a checkpoint of the model weights will be downloaded (using the "download_option" parameter). This can be useful for getting around session time limits for Colab, by splitting files into multiple segments and saving/loading the model weights between each segment. Checkpoints (if present) will automatically be loaded when starting compression._
```
## Setup
```
#@title Imports
import tensorflow as tf
import numpy as np
import random
from google.colab import files
import time
import math
import sys
import subprocess
import contextlib
import os
from tensorflow.keras.mixed_precision import experimental as mixed_precision
from google.colab import drive
os.environ['TF_DETERMINISTIC_OPS'] = '1'
#@title System Info
def system_info():
"""Prints out system information."""
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
print("TensorFlow version: ", tf.__version__)
!lscpu |grep 'Model name'
!cat /proc/meminfo | head -n 3
system_info()
#@title Mount Google Drive
if download_option == "google_drive":
drive.mount('/content/gdrive')
#@title Setup Files
!mkdir -p "data"
if local_upload:
%cd data
files.upload()
%cd ..
if path_to_file == 'enwik8' or path_to_file == 'enwik6' or path_to_file == 'enwik4':
%cd data
!gdown --id 1BUbuEUhPOBaVZDdOh0KG8hxvIDgsyiZp
!unzip enwik8.zip
!head -c 1000000 enwik8 > enwik6
!head -c 10000 enwik8 > enwik4
path_to_file = 'data/' + path_to_file
%cd ..
if path_to_file == 'enwik9':
%cd data
!gdown --id 1D2gCmf9AlXIBP62ARhy0XcIuIolOTRAE
!unzip enwik9.zip
path_to_file = 'data/' + path_to_file
%cd ..
if path_to_file == 'custom':
path_to_file = 'data/' + custom_path
if http_path:
%cd data
paths = http_path.split()
for path in paths:
!gdown $path
%cd ..
if preprocess == 'cmix':
!gdown --id 1qa7K28tlUDs9GGYbaL_iE9M4m0L1bYm9
!unzip cmix-v18.zip
%cd cmix
!make
%cd ..
if preprocess == 'nncp' or preprocess == 'nncp-done':
!gdown --id 1EzVPbRkBIIbgOzvEMeM0YpibDi2R4SHD
!tar -xf nncp-2019-11-16.tar.gz
%cd nncp-2019-11-16/
!make preprocess
%cd ..
#@title Model Architecture
def build_model(vocab_size):
"""Builds the model architecture.
Args:
vocab_size: Int, size of the vocabulary.
"""
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
inputs = [
tf.keras.Input(batch_input_shape=[batch_size, seq_length, vocab_size])]
# In addition to the primary input, there are also two "state" inputs for each
# layer of the network.
for i in range(num_layers):
inputs.append(tf.keras.Input(shape=(None,)))
inputs.append(tf.keras.Input(shape=(None,)))
# Skip connections will be used to connect each LSTM layer output to the final
# output layer. Each LSTM layer will get as input both the original input and
# the output of the previous layer.
skip_connections = []
# In addition to the softmax output, there are also two "state" outputs for
# each layer of the network.
outputs = []
predictions, state_h, state_c = tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform',
)(inputs[0], initial_state=[
tf.cast(inputs[1], tf.float16),
tf.cast(inputs[2], tf.float16)])
skip_connections.append(predictions)
outputs.append(state_h)
outputs.append(state_c)
for i in range(num_layers - 1):
layer_input = tf.keras.layers.concatenate(
[inputs[0], skip_connections[-1]])
predictions, state_h, state_c = tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')(
layer_input, initial_state=[tf.cast(inputs[i*2+3], tf.float16),
tf.cast(inputs[i*2+4], tf.float16)])
skip_connections.append(predictions)
outputs.append(state_h)
outputs.append(state_c)
# The dense output layer only needs to be computed for the last timestep, so
# we can discard the earlier outputs.
last_timestep = []
for i in range(num_layers):
last_timestep.append(tf.slice(skip_connections[i], [0, seq_length - 1, 0],
[batch_size, 1, rnn_units]))
if num_layers == 1:
layer_input = last_timestep[0]
else:
layer_input = tf.keras.layers.concatenate(last_timestep)
dense = tf.keras.layers.Dense(vocab_size, name='dense_logits')(layer_input)
output = tf.keras.layers.Activation('softmax', dtype='float32',
name='predictions')(dense)
outputs.insert(0, output)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
return model
#@title Compression Library
def get_symbol(index, length, freq, coder, compress, data):
"""Runs arithmetic coding and returns the next symbol.
Args:
index: Int, position of the symbol in the file.
length: Int, size limit of the file.
freq: ndarray, predicted symbol probabilities.
coder: this is the arithmetic coder.
compress: Boolean, True if compressing, False if decompressing.
data: List containing each symbol in the file.
Returns:
The next symbol, or 0 if "index" is over the file size limit.
"""
symbol = 0
if index < length:
if compress:
symbol = data[index]
coder.write(freq, symbol)
else:
symbol = coder.read(freq)
data[index] = symbol
return symbol
def train(pos, seq_input, length, vocab_size, coder, model, optimizer, compress,
data, states):
"""Runs one training step.
Args:
pos: Int, position in the file for the current symbol for the *first* batch.
seq_input: Tensor, containing the last seq_length inputs for the model.
length: Int, size limit of the file.
vocab_size: Int, size of the vocabulary.
coder: this is the arithmetic coder.
model: the model to generate predictions.
optimizer: optimizer used to train the model.
compress: Boolean, True if compressing, False if decompressing.
data: List containing each symbol in the file.
states: List containing state information for the layers of the model.
Returns:
seq_input: Tensor, containing the last seq_length inputs for the model.
cross_entropy: cross entropy numerator.
denom: cross entropy denominator.
"""
loss = cross_entropy = denom = 0
split = math.ceil(length / batch_size)
# Keep track of operations while running the forward pass for automatic
# differentiation.
with tf.GradientTape() as tape:
# The model inputs contain both seq_input and the states for each layer.
inputs = states.pop(0)
inputs.insert(0, seq_input)
# Run the model (for all batches in parallel) to get predictions for the
# next characters.
outputs = model(inputs)
predictions = outputs.pop(0)
states.append(outputs)
p = predictions.numpy()
symbols = []
# When the last batch reaches the end of the file, we start giving it "0"
# as input. We use a mask to prevent this from influencing the gradients.
mask = []
# Go over each batch to run the arithmetic coding and prepare the next
# input.
for i in range(batch_size):
# The "10000000" is used to convert floats into large integers (since
# the arithmetic coder works on integers).
freq = np.cumsum(p[i][0] * 10000000 + 1)
index = pos + 1 + i * split
symbol = get_symbol(index, length, freq, coder, compress, data)
symbols.append(symbol)
if index < length:
prob = p[i][0][symbol]
if prob <= 0:
# Set a small value to avoid error with log2.
prob = 0.000001
cross_entropy += math.log2(prob)
denom += 1
mask.append(1.0)
else:
mask.append(0.0)
# "input_one_hot" will be used both for the loss function and for the next
# input.
input_one_hot = tf.expand_dims(tf.one_hot(symbols, vocab_size), 1)
loss = tf.keras.losses.categorical_crossentropy(
input_one_hot, predictions, from_logits=False) * tf.expand_dims(
tf.convert_to_tensor(mask), 1)
# Remove the oldest input and append the new one.
seq_input = tf.slice(seq_input, [0, 1, 0],
[batch_size, seq_length - 1, vocab_size])
seq_input = tf.concat([seq_input, input_one_hot], 1)
# Run the backwards pass to update model weights.
grads = tape.gradient(loss, model.trainable_variables)
# Gradient clipping to make training more robust.
capped_grads = [tf.clip_by_value(grad, -5., 5.) for grad in grads]
optimizer.apply_gradients(zip(capped_grads, model.trainable_variables))
return (seq_input, cross_entropy, denom)
def reset_seed():
"""Initializes various random seeds to help with determinism."""
SEED = 1234
os.environ['PYTHONHASHSEED']=str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
def download(path):
"""Downloads the file at the specified path."""
if download_option == 'local':
files.download(path)
elif download_option == 'google_drive':
!cp -f $path /content/gdrive/My\ Drive
def process(compress, length, vocab_size, coder, data):
"""This runs compression/decompression.
Args:
compress: Boolean, True if compressing, False if decompressing.
length: Int, size limit of the file.
vocab_size: Int, size of the vocabulary.
coder: this is the arithmetic coder.
data: List containing each symbol in the file.
"""
start = time.time()
reset_seed()
model = build_model(vocab_size = vocab_size)
checkpoint_path = tf.train.latest_checkpoint('./data')
if checkpoint_path:
model.load_weights(checkpoint_path)
model.summary()
# Try to split the file into equal size pieces for the different batches. The
# last batch may have fewer characters if the file can't be split equally.
split = math.ceil(length / batch_size)
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
start_learning_rate,
split,
end_learning_rate,
power=1.0)
optimizer = tf.keras.optimizers.Adam(
learning_rate=learning_rate_fn, beta_1=0, beta_2=0.9999, epsilon=1e-5)
model.reset_states()
# Use a uniform distribution for predicting the first batch of symbols. The
# "10000000" is used to convert floats into large integers (since the
# arithmetic coder works on integers).
freq = np.cumsum(np.full(vocab_size, (1.0 / vocab_size)) * 10000000 + 1)
# Construct the first set of input characters for training.
symbols = []
for i in range(batch_size):
symbols.append(get_symbol(i*split, length, freq, coder, compress, data))
input_one_hot = tf.expand_dims(tf.one_hot(symbols, vocab_size), 1)
# Replicate the input tensor seq_length times, to match the input format.
seq_input = tf.tile(input_one_hot, [1, seq_length, 1])
pos = cross_entropy = denom = last_output = 0
template = '{:0.2f}%\tcross entropy: {:0.2f}\ttime: {:0.2f}'
# This will keep track of layer states. Initialize them to zeros.
states = []
for i in range(seq_length):
states.append([tf.zeros([batch_size, rnn_units])] * (num_layers * 2))
# Keep repeating the training step until we get to the end of the file.
while pos < split:
seq_input, ce, d = train(pos, seq_input, length, vocab_size, coder, model,
optimizer, compress, data, states)
cross_entropy += ce
denom += d
pos += 1
time_diff = time.time() - start
# If it has been over 20 seconds since the last status message, display a
# new one.
if time_diff - last_output > 20:
last_output = time_diff
percentage = 100 * pos / split
if percentage >= 100: continue
print(template.format(percentage, -cross_entropy / denom, time_diff))
if compress:
coder.finish()
print(template.format(100, -cross_entropy / length, time.time() - start))
system_info()
if mode != "both" or not compress:
model.save_weights('./data/model')
#@title Arithmetic Coding Library
#
# Reference arithmetic coding
# Copyright (c) Project Nayuki
#
# https://www.nayuki.io/page/reference-arithmetic-coding
# https://github.com/nayuki/Reference-arithmetic-coding
#
import sys
python3 = sys.version_info.major >= 3
# ---- Arithmetic coding core classes ----
# Provides the state and behaviors that arithmetic coding encoders and decoders share.
class ArithmeticCoderBase(object):
# Constructs an arithmetic coder, which initializes the code range.
def __init__(self, numbits):
if numbits < 1:
raise ValueError("State size out of range")
# -- Configuration fields --
# Number of bits for the 'low' and 'high' state variables. Must be at least 1.
# - Larger values are generally better - they allow a larger maximum frequency total (maximum_total),
# and they reduce the approximation error inherent in adapting fractions to integers;
# both effects reduce the data encoding loss and asymptotically approach the efficiency
# of arithmetic coding using exact fractions.
# - But larger state sizes increase the computation time for integer arithmetic,
# and compression gains beyond ~30 bits are essentially zero in real-world applications.
# - Python has native bigint arithmetic, so there is no upper limit to the state size.
# For Java and C++, where native machine-sized integers make the most sense,
# num_state_bits=32 is recommended as the most versatile setting.
self.num_state_bits = numbits
# Maximum range (high+1-low) during coding (trivial), which is 2^num_state_bits = 1000...000.
self.full_range = 1 << self.num_state_bits
# The top bit at width num_state_bits, which is 0100...000.
self.half_range = self.full_range >> 1 # Non-zero
# The second highest bit at width num_state_bits, which is 0010...000. This is zero when num_state_bits=1.
self.quarter_range = self.half_range >> 1 # Can be zero
# Minimum range (high+1-low) during coding (non-trivial), which is 0010...010.
self.minimum_range = self.quarter_range + 2 # At least 2
# Maximum allowed total from a frequency table at all times during coding. This differs from Java
# and C++ because Python's native bigint avoids constraining the size of intermediate computations.
self.maximum_total = self.minimum_range
# Bit mask of num_state_bits ones, which is 0111...111.
self.state_mask = self.full_range - 1
# -- State fields --
# Low end of this arithmetic coder's current range. Conceptually has an infinite number of trailing 0s.
self.low = 0
# High end of this arithmetic coder's current range. Conceptually has an infinite number of trailing 1s.
self.high = self.state_mask
# Updates the code range (low and high) of this arithmetic coder as a result
# of processing the given symbol with the given frequency table.
# Invariants that are true before and after encoding/decoding each symbol
# (letting full_range = 2^num_state_bits):
# - 0 <= low <= code <= high < full_range. ('code' exists only in the decoder.)
# Therefore these variables are unsigned integers of num_state_bits bits.
# - low < 1/2 * full_range <= high.
# In other words, they are in different halves of the full range.
# - (low < 1/4 * full_range) || (high >= 3/4 * full_range).
# In other words, they are not both in the middle two quarters.
# - Let range = high - low + 1, then full_range/4 < minimum_range
# <= range <= full_range. These invariants for 'range' essentially
# dictate the maximum total that the incoming frequency table can have.
def update(self, freqs, symbol):
# State check
low = self.low
high = self.high
# if low >= high or (low & self.state_mask) != low or (high & self.state_mask) != high:
# raise AssertionError("Low or high out of range")
range = high - low + 1
# if not (self.minimum_range <= range <= self.full_range):
# raise AssertionError("Range out of range")
# Frequency table values check
total = int(freqs[-1])
symlow = int(freqs[symbol-1]) if symbol > 0 else 0
symhigh = int(freqs[symbol])
#total = freqs.get_total()
#symlow = freqs.get_low(symbol)
#symhigh = freqs.get_high(symbol)
# if symlow == symhigh:
# raise ValueError("Symbol has zero frequency")
# if total > self.maximum_total:
# raise ValueError("Cannot code symbol because total is too large")
# Update range
newlow = low + symlow * range // total
newhigh = low + symhigh * range // total - 1
self.low = newlow
self.high = newhigh
# While low and high have the same top bit value, shift them out
while ((self.low ^ self.high) & self.half_range) == 0:
self.shift()
self.low = ((self.low << 1) & self.state_mask)
self.high = ((self.high << 1) & self.state_mask) | 1
# Now low's top bit must be 0 and high's top bit must be 1
# While low's top two bits are 01 and high's are 10, delete the second highest bit of both
while (self.low & ~self.high & self.quarter_range) != 0:
self.underflow()
self.low = (self.low << 1) ^ self.half_range
self.high = ((self.high ^ self.half_range) << 1) | self.half_range | 1
# Called to handle the situation when the top bit of 'low' and 'high' are equal.
def shift(self):
raise NotImplementedError()
# Called to handle the situation when low=01(...) and high=10(...).
def underflow(self):
raise NotImplementedError()
# Encodes symbols and writes to an arithmetic-coded bit stream.
class ArithmeticEncoder(ArithmeticCoderBase):
# Constructs an arithmetic coding encoder based on the given bit output stream.
def __init__(self, numbits, bitout):
super(ArithmeticEncoder, self).__init__(numbits)
# The underlying bit output stream.
self.output = bitout
# Number of saved underflow bits. This value can grow without bound.
self.num_underflow = 0
# Encodes the given symbol based on the given frequency table.
# This updates this arithmetic coder's state and may write out some bits.
def write(self, freqs, symbol):
self.update(freqs, symbol)
# Terminates the arithmetic coding by flushing any buffered bits, so that the output can be decoded properly.
# It is important that this method is called at the end of each encoding process.
# Note that this method merely writes data to the underlying output stream but does not close it.
def finish(self):
self.output.write(1)
def shift(self):
bit = self.low >> (self.num_state_bits - 1)
self.output.write(bit)
# Write out the saved underflow bits
for _ in range(self.num_underflow):
self.output.write(bit ^ 1)
self.num_underflow = 0
def underflow(self):
self.num_underflow += 1
# Reads from an arithmetic-coded bit stream and decodes symbols.
class ArithmeticDecoder(ArithmeticCoderBase):
# Constructs an arithmetic coding decoder based on the
# given bit input stream, and fills the code bits.
def __init__(self, numbits, bitin):
super(ArithmeticDecoder, self).__init__(numbits)
# The underlying bit input stream.
self.input = bitin
# The current raw code bits being buffered, which is always in the range [low, high].
self.code = 0
for _ in range(self.num_state_bits):
self.code = self.code << 1 | self.read_code_bit()
# Decodes the next symbol based on the given frequency table and returns it.
# Also updates this arithmetic coder's state and may read in some bits.
def read(self, freqs):
#if not isinstance(freqs, CheckedFrequencyTable):
# freqs = CheckedFrequencyTable(freqs)
# Translate from coding range scale to frequency table scale
total = int(freqs[-1])
#total = freqs.get_total()
#if total > self.maximum_total:
# raise ValueError("Cannot decode symbol because total is too large")
range = self.high - self.low + 1
offset = self.code - self.low
value = ((offset + 1) * total - 1) // range
#assert value * range // total <= offset
#assert 0 <= value < total
# A kind of binary search. Find highest symbol such that freqs.get_low(symbol) <= value.
start = 0
end = len(freqs)
#end = freqs.get_symbol_limit()
while end - start > 1:
middle = (start + end) >> 1
low = int(freqs[middle-1]) if middle > 0 else 0
#if freqs.get_low(middle) > value:
if low > value:
end = middle
else:
start = middle
#assert start + 1 == end
symbol = start
#assert freqs.get_low(symbol) * range // total <= offset < freqs.get_high(symbol) * range // total
self.update(freqs, symbol)
#if not (self.low <= self.code <= self.high):
# raise AssertionError("Code out of range")
return symbol
def shift(self):
self.code = ((self.code << 1) & self.state_mask) | self.read_code_bit()
def underflow(self):
self.code = (self.code & self.half_range) | ((self.code << 1) & (self.state_mask >> 1)) | self.read_code_bit()
# Returns the next bit (0 or 1) from the input stream. The end
# of stream is treated as an infinite number of trailing zeros.
def read_code_bit(self):
temp = self.input.read()
if temp == -1:
temp = 0
return temp
# ---- Bit-oriented I/O streams ----
# A stream of bits that can be read. Because they come from an underlying byte stream,
# the total number of bits is always a multiple of 8. The bits are read in big endian.
class BitInputStream(object):
# Constructs a bit input stream based on the given byte input stream.
def __init__(self, inp):
# The underlying byte stream to read from
self.input = inp
# Either in the range [0x00, 0xFF] if bits are available, or -1 if end of stream is reached
self.currentbyte = 0
# Number of remaining bits in the current byte, always between 0 and 7 (inclusive)
self.numbitsremaining = 0
# Reads a bit from this stream. Returns 0 or 1 if a bit is available, or -1 if
# the end of stream is reached. The end of stream always occurs on a byte boundary.
def read(self):
if self.currentbyte == -1:
return -1
if self.numbitsremaining == 0:
temp = self.input.read(1)
if len(temp) == 0:
self.currentbyte = -1
return -1
self.currentbyte = temp[0] if python3 else ord(temp)
self.numbitsremaining = 8
assert self.numbitsremaining > 0
self.numbitsremaining -= 1
return (self.currentbyte >> self.numbitsremaining) & 1
# Reads a bit from this stream. Returns 0 or 1 if a bit is available, or raises an EOFError
# if the end of stream is reached. The end of stream always occurs on a byte boundary.
def read_no_eof(self):
result = self.read()
if result != -1:
return result
else:
raise EOFError()
# Closes this stream and the underlying input stream.
def close(self):
self.input.close()
self.currentbyte = -1
self.numbitsremaining = 0
# A stream where bits can be written to. Because they are written to an underlying
# byte stream, the end of the stream is padded with 0's up to a multiple of 8 bits.
# The bits are written in big endian.
class BitOutputStream(object):
# Constructs a bit output stream based on the given byte output stream.
def __init__(self, out):
self.output = out # The underlying byte stream to write to
self.currentbyte = 0 # The accumulated bits for the current byte, always in the range [0x00, 0xFF]
self.numbitsfilled = 0 # Number of accumulated bits in the current byte, always between 0 and 7 (inclusive)
# Writes a bit to the stream. The given bit must be 0 or 1.
def write(self, b):
if b not in (0, 1):
raise ValueError("Argument must be 0 or 1")
self.currentbyte = (self.currentbyte << 1) | b
self.numbitsfilled += 1
if self.numbitsfilled == 8:
towrite = bytes((self.currentbyte,)) if python3 else chr(self.currentbyte)
self.output.write(towrite)
self.currentbyte = 0
self.numbitsfilled = 0
# Closes this stream and the underlying output stream. If called when this
# bit stream is not at a byte boundary, then the minimum number of "0" bits
# (between 0 and 7 of them) are written as padding to reach the next byte boundary.
def close(self):
while self.numbitsfilled != 0:
self.write(0)
self.output.close()
```
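Before each coding step, the model's softmax output is converted into the integer cumulative-frequency table the arithmetic coder expects, via `np.cumsum(p * 10000000 + 1)`: probabilities are scaled to large integers (the coder works on integers) and one extra count is added per symbol so that no symbol ever ends up with zero frequency. A minimal sketch of that conversion, with an illustrative 4-symbol distribution:

```python
import numpy as np

# Example softmax output over a 4-symbol vocabulary
p = np.array([0.5, 0.25, 0.125, 0.125])

# Scale floats to large integers and add 1 count per symbol,
# so every symbol keeps a non-zero frequency and stays codable
freq = np.cumsum(p * 10000000 + 1)

# The coder assigns symbol s the integer interval [freq[s-1], freq[s])
low = 0 if len(freq) == 0 else int(freq[0])   # upper bound of symbol 0
assert all(freq[i] > (freq[i - 1] if i else 0) for i in range(len(freq)))
print(freq)  # cumulative totals; freq[-1] is the table's total
```

The `+ 1` matters: a zero-width interval would make the coded symbol undecodable, which is why `ArithmeticCoderBase.update` (in its commented-out checks) treats `symlow == symhigh` as an error.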
## Compress
```
#@title Preprocess
if mode != 'decompress':
input_path = path_to_file
if preprocess == 'cmix':
!./cmix/cmix -s ./cmix/dictionary/english.dic $path_to_file ./data/preprocessed.dat
input_path = "./data/preprocessed.dat"
# int_list will contain the characters of the file.
int_list = []
if preprocess == 'nncp' or preprocess == 'nncp-done':
if preprocess == 'nncp':
!time ./nncp-2019-11-16/preprocess c data/dictionary.words $path_to_file data/preprocessed.dat $n_words $min_freq
else:
!cp $path_to_file data/preprocessed.dat
input_path = "./data/preprocessed.dat"
orig = open(input_path, 'rb').read()
for i in range(0, len(orig), 2):
int_list.append(orig[i] * 256 + orig[i+1])
vocab_size = int(subprocess.check_output(
['wc', '-l', 'data/dictionary.words']).split()[0])
else:
text = open(input_path, 'rb').read()
vocab = sorted(set(text))
vocab_size = len(vocab)
# Creating a mapping from unique characters to indexes.
char2idx = {u:i for i, u in enumerate(vocab)}
for idx, c in enumerate(text):
int_list.append(char2idx[c])
# Round up to a multiple of 8 to improve performance.
vocab_size = math.ceil(vocab_size/8) * 8
file_len = len(int_list)
print('Length of file: {} symbols'.format(file_len))
print('Vocabulary size: {}'.format(vocab_size))
#@title Compression
if mode == 'compress' or mode == 'both':
original_file = path_to_file
path_to_file = "data/compressed.dat"
with open(path_to_file, "wb") as out, contextlib.closing(BitOutputStream(out)) as bitout:
length = len(int_list)
# Write the original file length to the compressed file header.
out.write(length.to_bytes(5, byteorder='big', signed=False))
if preprocess != 'nncp' and preprocess != 'nncp-done':
# If NNCP was not used for preprocessing, write 256 bits to the compressed
# file header to keep track of the vocabulary.
for i in range(256):
if i in char2idx:
bitout.write(1)
else:
bitout.write(0)
enc = ArithmeticEncoder(32, bitout)
process(True, length, vocab_size, enc, int_list)
print("Compressed size:", os.path.getsize(path_to_file))
#@title Download Result
if mode == 'preprocess_only':
if preprocess == 'nncp':
download('data/dictionary.words')
download(input_path)
elif mode != 'decompress':
download('data/compressed.dat')
if preprocess == 'nncp':
download('data/dictionary.words')
if checkpoint and mode != "both":
download('data/model.index')
download('data/model.data-00000-of-00001')
download('data/checkpoint')
```
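The compressed file header written above has two parts: the original symbol count as 5 big-endian bytes (enough for files up to 2^40 − 1 symbols), and — when NNCP preprocessing isn't used — a 256-bit bitmap marking which byte values occur in the input, so the decompressor can rebuild the vocabulary without storing it explicitly. A self-contained sketch of both pieces (values here are illustrative):

```python
# 5-byte big-endian length field, matching
# out.write(length.to_bytes(5, byteorder='big', signed=False))
length = 1_000_000
header = length.to_bytes(5, byteorder='big', signed=False)
assert int.from_bytes(header, byteorder='big') == length

# 256-bit vocabulary bitmap: bit i is set iff byte value i appears in the input
text = b'hello world'
char2idx = {u: i for i, u in enumerate(sorted(set(text)))}
bitmap = [1 if i in char2idx else 0 for i in range(256)]

# The decoder recovers the vocabulary from the bitmap alone
vocab = [i for i in range(256) if bitmap[i]]
assert vocab == sorted(set(text))
print(len(vocab), 'distinct byte values')
```

This mirrors why the decompression cell can reconstruct `vocab` by reading 256 bits from `bitin` before creating the `ArithmeticDecoder`.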
## Decompress
```
#@title Decompression
if mode == 'decompress' or mode == 'both':
output_path = "data/decompressed.dat"
with open(path_to_file, "rb") as inp, open(output_path, "wb") as out:
# Read the original file size from the header.
length = int.from_bytes(inp.read(5), byteorder='big')
# Create a list to store the file characters.
output = [0] * length
bitin = BitInputStream(inp)
if preprocess == 'nncp' or preprocess == 'nncp-done':
# If the preprocessor is NNCP, we can get the vocab_size from the
# dictionary.
vocab_size = int(subprocess.check_output(
['wc', '-l', 'data/dictionary.words']).split()[0])
else:
# If the preprocessor is not NNCP, we can get the vocabulary from the file
# header.
vocab = []
for i in range(256):
if bitin.read():
vocab.append(i)
vocab_size = len(vocab)
# Round up to a multiple of 8 to improve performance.
vocab_size = math.ceil(vocab_size/8) * 8
dec = ArithmeticDecoder(32, bitin)
process(False, length, vocab_size, dec, output)
# The decompressed data is stored in the "output" list. We can now write the
# data to file (based on the type of preprocessing used).
if preprocess == 'nncp' or preprocess == 'nncp-done':
for i in range(length):
out.write(bytes(((output[i] // 256),)))
out.write(bytes(((output[i] % 256),)))
else:
# Convert indexes back to the original characters.
idx2char = np.array(vocab)
for i in range(length):
out.write(bytes((idx2char[output[i]],)))
if preprocess == 'cmix':
!./cmix/cmix -d ./cmix/dictionary/english.dic $output_path ./data/final.dat
output_path = "data/final.dat"
if preprocess == 'nncp' or preprocess == 'nncp-done':
!./nncp-2019-11-16/preprocess d data/dictionary.words $output_path ./data/final.dat
output_path = "data/final.dat"
#@title Download Result
if mode == 'decompress':
if preprocess == 'nncp-done':
download('data/decompressed.dat')
else:
download(output_path)
if checkpoint:
download('data/model.index')
download('data/model.data-00000-of-00001')
download('data/checkpoint')
#@title Validation
if mode == 'decompress' or mode == 'both':
if preprocess == 'nncp-done':
!md5sum data/decompressed.dat
!md5sum $output_path
if mode == 'both':
!md5sum $original_file
```
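With NNCP preprocessing, each symbol is a 16-bit word token: the preprocess cell loads it as `orig[i] * 256 + orig[i+1]`, and the decompression cell above writes it back as its high byte (`output[i] // 256`) followed by its low byte (`output[i] % 256`). A quick roundtrip sketch of that packing:

```python
# Pack a token id into two big-endian bytes, as the decompressor does
token = 4321
hi, lo = token // 256, token % 256
packed = bytes((hi, lo))

# Unpack as the preprocess loader does: orig[i] * 256 + orig[i + 1]
restored = packed[0] * 256 + packed[1]
assert restored == token
print(hi, lo, restored)
```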
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
import sys
from os.path import exists
sys.path.append('../..')
import pylab as plt
import pandas as pd
import numpy as np
from loguru import logger
import seaborn as sns
from stable_baselines3 import PPO, DQN
from vimms.Common import POSITIVE, set_log_level_warning, load_obj, save_obj
from vimms.ChemicalSamplers import UniformRTAndIntensitySampler, GaussianChromatogramSampler, UniformMZFormulaSampler, \
MZMLFormulaSampler, MZMLRTandIntensitySampler, MZMLChromatogramSampler
from vimms.Noise import UniformSpikeNoise
from vimms.Evaluation import evaluate_real
from vimms.Chemicals import ChemicalMixtureFromMZML
from vimms.Roi import RoiBuilderParams, SmartRoiParams
from mass_spec_utils.data_import.mzmine import load_picked_boxes
from vimms_gym.env import DDAEnv
from vimms_gym.chemicals import generate_chemicals
from vimms_gym.evaluation import evaluate, run_method
from vimms_gym.common import METHOD_RANDOM, METHOD_FULLSCAN, METHOD_TOPN, METHOD_PPO, METHOD_DQN
```
# 1. Parameters
```
n_chemicals = (20, 50)
mz_range = (100, 110)
rt_range = (0, 1440)
intensity_range = (1E4, 1E20)
min_mz = mz_range[0]
max_mz = mz_range[1]
min_rt = rt_range[0]
max_rt = rt_range[1]
min_log_intensity = np.log(intensity_range[0])
max_log_intensity = np.log(intensity_range[1])
isolation_window = 0.7
N = 10
rt_tol = 120
exclusion_t_0 = 15
mz_tol = 10
min_ms1_intensity = 5000
ionisation_mode = POSITIVE
enable_spike_noise = True
noise_density = 0.1
noise_max_val = 1E3
mzml_filename = '../fullscan_QCB.mzML'
samplers = None
samplers_pickle = 'samplers_fullscan_QCB_small.mzML.p'
if exists(samplers_pickle):
logger.info('Loading %s' % samplers_pickle)
samplers = load_obj(samplers_pickle)
mz_sampler = samplers['mz']
ri_sampler = samplers['rt_intensity']
cr_sampler = samplers['chromatogram']
else:
logger.info('Creating samplers from %s' % mzml_filename)
mz_sampler = MZMLFormulaSampler(mzml_filename, min_mz=min_mz, max_mz=max_mz)
ri_sampler = MZMLRTandIntensitySampler(mzml_filename, min_rt=min_rt, max_rt=max_rt,
min_log_intensity=min_log_intensity,
max_log_intensity=max_log_intensity)
roi_params = RoiBuilderParams(min_roi_length=3, at_least_one_point_above=1000)
cr_sampler = MZMLChromatogramSampler(mzml_filename, roi_params=roi_params)
samplers = {
'mz': mz_sampler,
'rt_intensity': ri_sampler,
'chromatogram': cr_sampler
}
save_obj(samplers, samplers_pickle)
params = {
'chemical_creator': {
'mz_range': mz_range,
'rt_range': rt_range,
'intensity_range': intensity_range,
'n_chemicals': n_chemicals,
'mz_sampler': mz_sampler,
'ri_sampler': ri_sampler,
'cr_sampler': GaussianChromatogramSampler(),
},
'noise': {
'enable_spike_noise': enable_spike_noise,
'noise_density': noise_density,
'noise_max_val': noise_max_val,
'mz_range': mz_range
},
'env': {
'ionisation_mode': ionisation_mode,
'rt_range': rt_range,
'isolation_window': isolation_window,
'mz_tol': mz_tol,
'rt_tol': rt_tol,
}
}
max_peaks = 200
in_dir = 'results'
n_eval_episodes = 1
deterministic = True
```
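The samplers cell above uses a cache-or-create pattern: if the pickle exists, load it with `load_obj`; otherwise build the (slow-to-create) samplers from the mzML file and persist them with `save_obj`. A minimal standalone sketch of the same idea using only the standard `pickle` module — the path and payload here are illustrative, not the notebook's actual samplers:

```python
import os
import pickle

def cached(path, create_fn):
    """Load a pickled object from path if it exists; otherwise create and cache it."""
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    obj = create_fn()
    with open(path, 'wb') as f:
        pickle.dump(obj, f)
    return obj

# Illustrative usage with a cheap stand-in for the samplers dict
samplers = cached('/tmp/demo_samplers.p', lambda: {'mz': [100, 110], 'rt': [0, 1440]})
print(sorted(samplers.keys()))
```

On a second run the `create_fn` is never called, which is exactly why the notebook only pays the mzML-parsing cost once per `samplers_pickle`.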
# 2. Evaluation
#### Generate some chemical sets
```
set_log_level_warning()
eval_dir = 'evaluation'
methods = [
METHOD_TOPN,
METHOD_RANDOM,
METHOD_PPO,
]
chemical_creator_params = params['chemical_creator']
chem_list = []
for i in range(n_eval_episodes):
print(i)
chems = generate_chemicals(chemical_creator_params)
chem_list.append(chems)
```
#### Run different methods
```
for chems in chem_list:
print(len(chems))
max_peaks
out_dir = eval_dir
in_dir, out_dir
```
#### Compare to Top-10
```
env_name = 'DDAEnv'
model_name = 'PPO'
intensity_threshold = 0.5
topN_N = 20
topN_rt_tol = 30
method_eval_results = {}
for method in methods:
effective_rt_tol = rt_tol
copy_params = dict(params)
copy_params['env']['rt_tol'] = effective_rt_tol
if method == METHOD_PPO:
fname = os.path.join(in_dir, '%s_%s.zip' % (env_name, model_name))
model = PPO.load(fname)
elif method == METHOD_DQN:
fname = os.path.join(in_dir, '%s_%s.zip' % (env_name, model_name))
model = DQN.load(fname)
else:
model = None
if method == METHOD_TOPN:
N = topN_N
effective_rt_tol = topN_rt_tol
copy_params = dict(params)
copy_params['env']['rt_tol'] = effective_rt_tol
banner = 'method = %s max_peaks = %d N = %d rt_tol = %d' % (method, max_peaks, N, effective_rt_tol)
print(banner)
print()
episodic_results = run_method(env_name, copy_params, max_peaks, chem_list, method, out_dir,
N=N, min_ms1_intensity=min_ms1_intensity, model=model,
print_eval=True, print_reward=False, intensity_threshold=intensity_threshold)
eval_results = [er.eval_res for er in episodic_results]
method_eval_results[method] = eval_results
print()
```
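One thing to watch in the loop above: `dict(params)` makes only a shallow copy, so `copy_params['env']` is the *same* nested dict as `params['env']`, and assigning `copy_params['env']['rt_tol']` also mutates the original parameters seen by later methods. `copy.deepcopy` duplicates the nested dicts and avoids this. A minimal demonstration:

```python
import copy

params = {'env': {'rt_tol': 120}}

# Shallow copy: the nested 'env' dict is shared with the original
shallow = dict(params)
shallow['env']['rt_tol'] = 30
assert params['env']['rt_tol'] == 30  # original mutated too

# Deep copy: nested dicts are duplicated, so the original is untouched
params = {'env': {'rt_tol': 120}}
deep = copy.deepcopy(params)
deep['env']['rt_tol'] = 30
assert params['env']['rt_tol'] == 120
print('shallow copies share nested dicts; deepcopy does not')
```

In this notebook the loop happens to reassign `rt_tol` explicitly for each method, which masks the issue, but `copy.deepcopy(params)` would be the safer choice if more nested keys were ever overridden per method.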
#### Test classic controllers in ViMMS
```
from vimms.MassSpec import IndependentMassSpectrometer
from vimms.Controller import TopNController, TopN_SmartRoiController, WeightedDEWController
from vimms.Environment import Environment
```
Run Top-N Controller
```
method = 'TopN_Controller'
print('method = %s' % method)
print()
effective_rt_tol = topN_rt_tol
effective_N = topN_N
eval_results = []
for i in range(len(chem_list)):
spike_noise = None
if enable_spike_noise:
noise_params = params['noise']
noise_density = noise_params['noise_density']
noise_max_val = noise_params['noise_max_val']
noise_min_mz = noise_params['mz_range'][0]
noise_max_mz = noise_params['mz_range'][1]
spike_noise = UniformSpikeNoise(noise_density, noise_max_val, min_mz=noise_min_mz,
max_mz=noise_max_mz)
chems = chem_list[i]
mass_spec = IndependentMassSpectrometer(ionisation_mode, chems, spike_noise=spike_noise)
controller = TopNController(ionisation_mode, effective_N, isolation_window, mz_tol, effective_rt_tol,
min_ms1_intensity)
env = Environment(mass_spec, controller, min_rt, max_rt, progress_bar=False, out_dir=out_dir,
out_file='%s_%d.mzML' % (method, i), save_eval=True)
env.run()
eval_res = evaluate(env, intensity_threshold)
eval_results.append(eval_res)
print('Episode %d finished' % i)
print(eval_res)
method_eval_results[method] = eval_results
```
Run SmartROI Controller
```
alpha = 2
beta = 0.1
smartroi_N = 20
smartroi_dew = 15
method = 'SmartROI_Controller'
print('method = %s' % method)
print()
effective_rt_tol = exclusion_t_0
eval_results = []
for i in range(len(chem_list)):
spike_noise = None
if enable_spike_noise:
noise_params = params['noise']
noise_density = noise_params['noise_density']
noise_max_val = noise_params['noise_max_val']
noise_min_mz = noise_params['mz_range'][0]
noise_max_mz = noise_params['mz_range'][1]
spike_noise = UniformSpikeNoise(noise_density, noise_max_val, min_mz=noise_min_mz,
max_mz=noise_max_mz)
chems = chem_list[i]
mass_spec = IndependentMassSpectrometer(ionisation_mode, chems, spike_noise=spike_noise)
roi_params = RoiBuilderParams(min_roi_intensity=500, min_roi_length=0)
smartroi_params = SmartRoiParams(intensity_increase_factor=alpha, drop_perc=beta/100.0)
controller = TopN_SmartRoiController(ionisation_mode, isolation_window, smartroi_N, mz_tol, smartroi_dew,
min_ms1_intensity, roi_params, smartroi_params)
env = Environment(mass_spec, controller, min_rt, max_rt, progress_bar=False, out_dir=out_dir,
out_file='%s_%d.mzML' % (method, i), save_eval=True)
env.run()
eval_res = evaluate(env, intensity_threshold)
eval_results.append(eval_res)
print('Episode %d finished' % i)
print(eval_res)
method_eval_results[method] = eval_results
```
Run WeightedDEW Controller
```
t0 = 15
t1 = 60
weighteddew_N = 20
method = 'WeightedDEW_Controller'
print('method = %s' % method)
print()
eval_results = []
for i in range(len(chem_list)):
spike_noise = None
if enable_spike_noise:
noise_params = params['noise']
noise_density = noise_params['noise_density']
noise_max_val = noise_params['noise_max_val']
noise_min_mz = noise_params['mz_range'][0]
noise_max_mz = noise_params['mz_range'][1]
spike_noise = UniformSpikeNoise(noise_density, noise_max_val, min_mz=noise_min_mz,
max_mz=noise_max_mz)
chems = chem_list[i]
mass_spec = IndependentMassSpectrometer(ionisation_mode, chems, spike_noise=spike_noise)
controller = WeightedDEWController(ionisation_mode, weighteddew_N, isolation_window, mz_tol, t1,
min_ms1_intensity, exclusion_t_0=t0)
env = Environment(mass_spec, controller, min_rt, max_rt, progress_bar=False, out_dir=out_dir,
out_file='%s_%d.mzML' % (method, i), save_eval=True)
env.run()
eval_res = evaluate(env, intensity_threshold)
eval_results.append(eval_res)
print('Episode %d finished' % i)
print(eval_res)
method_eval_results[method] = eval_results
```
#### Plotting
Flatten data into dataframe
```
data = []
for method in method_eval_results:
eval_results = method_eval_results[method]
for eval_res in eval_results:
row = (
method,
float(eval_res['coverage_prop']),
float(eval_res['intensity_prop']),
float(eval_res['ms1/ms2 ratio']),
float(eval_res['efficiency']),
float(eval_res['precision']),
float(eval_res['recall']),
float(eval_res['f1']),
)
data.append(row)
df = pd.DataFrame(data, columns=['method', 'coverage_prop', 'intensity_prop', 'ms1/ms2_ratio', 'efficiency', 'precision', 'recall', 'f1'])
# df.set_index('method', inplace=True)
df.head()
sns.set_context("poster")
metrics = [
    ('coverage_prop', 'Coverage Proportion'),
    ('intensity_prop', 'Intensity Proportion'),
    ('ms1/ms2_ratio', 'MS1/MS2 Ratio'),
    ('efficiency', 'Efficiency'),
    ('precision', 'Precision'),
    ('recall', 'Recall'),
    ('f1', 'F1'),
]
for col, title in metrics:
    plt.figure(figsize=(10, 5))
    sns.boxplot(data=df, x='method', y=col)
    plt.xticks(rotation=90)
    plt.title(title)
df.to_pickle('evaluation.p')
```
| github_jupyter |
# Random Walks
This week we will discuss a new topic, *random walks*. Random walks are an example of a Markov process; we will learn what this means, and how we can analyze the behavior of the random walker using a Markov chain.
The exercises this week are slightly more extensive than other weeks, and are more project-based than earlier exercise sets. This is because the plan is to cover some of these exercises in L20, i.e., the lecture on Friday November 8th. It is therefore recommended that you work on the exercises before Thursday. If you cannot attend the lecture on Friday, it is strongly recommended to take a good look at the example solutions, which I will upload during Friday's lecture.
## Random Walks
A random walk is a process where we follow some object taking *random steps*. The path the object walks then defines a random path, or trajectory. Random walks are powerful mathematical objects with a large number of use cases, but we will return to this point later; for now, let us look at some actual random walks.
### The 1D Random Walker
A random walk can refer to many different processes, but let us start off with perhaps the simplest of them all, a 1D random walk on a regular grid. Assume some walker starts off at $x=0$. Now it takes steps to the left or right at random, with equal probability.
<img src="fig/1D_walk.png" width=600>
We denote the position of the walker after $N$ steps by $X_N$. Because the walker is taking random steps, $X_N$ is what we call a *random* or *stochastic variable*: it won't have a specific value in general, but will be different for each specific random walk, depending on which steps are actually taken.
For each step the walker takes, we move either 1 step to the left, or 1 step to the right. Thus
$$X_{N+1} = X_{N} + K_N,$$
where $K_N$ is the $N$'th step taken. We assume that all steps are independent of all others, and that each step has an equal chance of being to the left or to the right, so
$$K_N = \begin{cases}
1 & \mbox{with 50} \% \mbox{ chance} \\
-1 & \mbox{with 50}\% \mbox{ chance}
\end{cases}$$
Let us look at what a random walk looks like. To draw the step $K_N$ using numpy, we use `np.random.randint(2)`, but this gives us 0 or 1, so we instead use `2*np.random.randint(2) - 1`, which gives us -1 or 1 with equal probability.
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
nr_steps = 10
X = np.zeros(nr_steps+1)
X[0] = 0
for N in range(nr_steps):
X[N+1] = X[N] + 2*np.random.randint(2) - 1
plt.plot(range(nr_steps+1), X)
plt.xlabel('Nr of steps taken')
plt.ylabel(r'$X_N$')
plt.show()
```
Simply using `plt.plot` here can be a bit misleading, so we can alternatively change the plot style, or use the `plt.step` function instead:
```
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel(r'$X_N$')
plt.show()
nr_steps = 1000
X = np.zeros(nr_steps+1)
X[0] = 0
for N in range(nr_steps):
X[N+1] = X[N] + 2*np.random.randint(2) - 1
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel('Displacement')
plt.show()
```
### Vectorized Random Walk
As we saw last week, if we want to repeatedly draw and use random numbers, `np.random` can be used in a vectorized way to be more efficient. Let us see how we can do this for a random walk.
Drawing the steps themselves is straightforward:
```
nr_steps = 1000
steps = 2*np.random.randint(2, size=nr_steps) - 1
```
But now we need to combine these steps into the variable $X_N$. Now, if we only want to know the final displacement after all the steps, then we could simply do the sum
$$X_{1000} = \sum_{i=1}^{1000} K_i.$$
However, if we want to plot out the full trajectory of the walk, then we need to compute all the partial sums as well, i.e., find $X_N$ for $N=1, 2, 3, \ldots 1000.$
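The endpoint-only case is a one-liner. As a quick sketch (with an arbitrary seed for reproducibility):

```python
import numpy as np

np.random.seed(0)  # seed chosen arbitrarily, for reproducibility
nr_steps = 1000
steps = 2*np.random.randint(2, size=nr_steps) - 1

# Final displacement after all steps, without storing any partial sums
X_final = np.sum(steps)
```

Since every step is $\pm 1$, the endpoint always has the same parity as the number of steps, and can never exceed it in magnitude.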
We can do this with the function `np.cumsum`, which stands for *cumulative sum*. Taking the cumulative sum of a sequence gives a new sequence where element $n$ of the new sequence is the sum of the first $n$ elements of the input. Thus, the cumulative sum of $K_N$ will give $X_N$.
```
X = np.zeros(nr_steps + 1)
X[0] = 0
X[1:] = X[0] + np.cumsum(steps)
```
Note that we could have simply written `X = np.cumsum(steps)`, but in that case $X_0$ wouldn't be 0; it would be -1 or 1. That's not a big deal, but we take the extra step of defining $X_0 = 0$, and then finding the rest of $X_N$ for $N > 0$.
```
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel('Displacement')
plt.show()
```
### Many Walkers
Because the walker is completely random, understanding how it behaves from looking at a single walker isn't that useful. Instead, we can look at a large *ensemble* of walkers, and then perhaps we can glean some insight into how they behave.
We can also use the vectorization of `np.random` to draw the walks of many different walkers in a vectorized manner:
```
nr_steps = 100
walkers = 5
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()
```
Or with many more steps:
```
nr_steps = 10000
walkers = 5
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X, linewidth=0.5)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()
```
### Very Many Walkers
We have now seen how we can plot 5 walkers. But if we really want to understand the average behavior, we might want to plot a lot more walkers. With our code, this works just fine, but the output won't tell us too much, because it will become too chaotic:
```
nr_steps = 1000
walkers = 1000
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()
```
This plot shows a thousand random walks overlaying each other, but we cannot really see what is going on, because the different lines simply overlap and hide each other.
To fix this, instead of plotting all the walks over each other, we plot the *density* of walkers. We can accomplish this by using the `alpha` keyword to `plt.plot`. This keyword is used to make a line semi-transparent. Here, `alpha=1` is the default, fully opaque line, and `alpha=0` is a completely transparent, and thus invisible, line. If we set for example `alpha=0.1`, we get lines that are only 10% opaque.
With semi-transparent lines, anywhere many lines overlap will give a strong color; where there are fewer lines, we get a weaker color. To emphasise this, let us also only plot black lines, and ignore colors.
```
nr_steps = 1000
walkers = 1000
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X, alpha=0.01, color='k')
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.axis((0, 1000, -100, 100))
plt.show()
```
At the beginning, all the walkers are close to the origin, as they simply have not had time to get further away. As time progresses towards the right, the walkers spread out. The highest density of walkers is still found in the middle however, as the net sum of steps will tend towards a mean of 0.
### Analyzing the average behavior of a walker
Because the random walk is a random process, predicting how a single walker will move is impossible. However, if we instead look at a lot of walkers, we can analyze their *average* activity. Because of the law of large numbers, we know that the average behavior for a large number of walkers will converge to a specific behavior.
Compare this with the first plot of a single walker: if you rerun that code, you will get dramatically different behavior, because one specific walk looks very different from another. For the last figure we made, however, rerunning the code won't change much, because the average behavior of 1000 walkers will tend to be the same.
One way to explore the average behavior is of course to do simulation, and then simply taking the sample average. For more complex random walk behaviors, this is the only option. Our random walk however, is quite simple, and so we can also analyze it mathematically. Let us do this.
#### Average Displacement
First we want to know the average displacement of a large number of walkers. For a single walker, the position of the walker after $N$ steps was given by
$$X_N = X_0 + \sum_{i=1}^N K_i.$$
or alternatively:
$$X_{N+1} = X_{N} + K_N.$$
Now, we want to compute the *average* of this variable, which we will denote $\langle X_N \rangle$, another word for this value is the *expected value* or *expectation*. If you have not heard these terms before, simply think of the value as the average of a large number of walkers.
Taking the average of the $X_N$ gives:
$$\langle X_{N+1} \rangle = \langle X_N + K_N \rangle.$$
However, taking an average is a linear operation, and so we can split the right hand side into
$$\langle X_{N+1} \rangle = \langle X_N \rangle + \langle K_N \rangle.$$
Now, we don't know $\langle X_{N} \rangle,$ because this is what we are actually trying to find. However, $\langle K_N \rangle$, we know, because it is simply the average of the two outcomes:
$$\langle K_N \rangle = \frac{1}{2}\cdot1 + \frac{1}{2}\cdot (-1) = \frac{1}{2} - \frac{1}{2} = 0.$$
Because there is an equal chance of taking a step to the left and the right, the *average* displacement for a single step will be 0. Inserting this gives
$$\langle X_{N+1} \rangle = \langle X_{N} \rangle.$$
If the walkers start in $X_0 = 0$, then $\langle X_0 \rangle = 0$, which in turn implies $\langle X_1 \rangle = 0$, and then $\langle X_2 \rangle = 0$ and so on, giving
$$\langle X_{N}\rangle = 0.$$
This expression tells us that the average displacement of a large number of walkers will be 0, no matter how many steps they take. Is this not surprising? We have seen that the more steps the walkers take, the longer away from the origin they will tend to move, so why is the average 0?
The average is 0 because we are looking at a completely *uniform* and symmetric walker. The walkers have an equal chance of moving left, or right, from the origin, and the average will therefore tend to be 0, even if the walkers move away from the origin.
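We can check $\langle X_N \rangle = 0$ numerically by averaging the endpoints of many walkers. A quick sketch, using the same vectorized idiom as above (the sample sizes and seed are arbitrary):

```python
import numpy as np

np.random.seed(0)
nr_steps = 400
walkers = 10_000

# One column of steps per walker; the column sums are the endpoints
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
endpoints = steps.sum(axis=0)

# The sample mean is close to 0, even though a typical |X_N|
# is on the order of sqrt(N) = 20
print(endpoints.mean())
```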
#### Averaged Square Displacement
The average displacement became 0 because the problem is completely symmetric. However, if we now instead look at the squared displacement $X_N^2$, we get a better feel for how far away from the origin things move, because the square is positive regardless of whether the walker moves away in the positive or negative direction.
We can write out an expression for $X_{N+1}^2$ as
$$X_{N+1}^2 = (X_{N} + K_N)^2 = X_{N}^2 + 2X_N \cdot K_N + K_N^2.$$
Again we care about the average, so we take the average of this expression:
$$\langle X_{N+1}^2 \rangle = \langle X_{N}^2 \rangle + 2\langle X_N \cdot K_N \rangle + \langle K_N^2 \rangle.$$
Now, the term $\langle X_N \cdot K_N \rangle$ will again be zero, because $K_N$ is independent of $X_N$ and has an equal chance of being positive and negative. So we get
$$\langle X_{N+1}^2 \rangle = \langle X_N^2 \rangle + \langle K_N^2 \rangle.$$
Let us compute $\langle K_N^2 \rangle$:
$$\langle K_N^2 \rangle = \frac{1}{2}(1)^2 + \frac{1}{2}(-1)^2 = \frac{1}{2} + \frac{1}{2} = 1.$$
Thus we get
$$\langle X_{N+1}^2 \rangle = \langle X_N^2 \rangle + 1.$$
If we say that $X_0 = 0$, we then get that $\langle X_1^2 \rangle = 1$, $\langle X_2^2 \rangle = 2$, and so on:
$$\langle X_N^2 \rangle = N.$$
So we see that while the average displacement does not change over time: $\langle X_N \rangle = 0$, the average squared displacement does! In fact, the squared displacement grows linearly with the number of steps $N$. The longer a random walk carries on for, the further away from the origin the walker will tend to move.
This expression also gives us the *variance* of the walkers, because the variance of a random variable can always be written as
$$\text{Var}(X_N) = \langle X_N^2 \rangle - \langle X_N \rangle^2,$$
and so in this case
$$\text{Var}(X_N) = N - 0^2 = N.$$
So the variance of $X_N$ is also $N$.
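A quick numerical sanity check of $\text{Var}(X_N) = N$ (a sketch; the sample variance fluctuates slightly around the exact value, and the sample sizes are arbitrary):

```python
import numpy as np

np.random.seed(0)
N = 100
walkers = 20_000

steps = 2*np.random.randint(2, size=(N, walkers)) - 1
X_N = steps.sum(axis=0)      # endpoint of each walker

print(X_N.var())             # close to N = 100
```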
#### Root Mean Square Displacement
While it is clear from the expression
$$\langle X_N^2 \rangle = N,$$
that the walkers will tend to move further away from the origin, this is the *squared* displacement. A more intuitive quantity would perhaps be the average absolute *displacement*, i.e., $\langle |X_N| \rangle$. This would be a useful quantity, but it turns out to be a bit tricky to compute.
As an easier solution, we just take the root of the mean squared displacement:
$$\text{RMS} = \sqrt{\langle X_N^2 \rangle} = \sqrt{N}.$$
This quantity is known as the *root mean square* displacement (RMS). It won't be exactly the same as $\langle |X_N| \rangle$, but it will be close to it.
Because the root mean square displacement grows as $\sqrt{N}$, we see that a 1D random walker will tend to be about $\sqrt{N}$ away from the origin after taking $N$ steps.
### Plotting the RMS
Let us verify our statement. We repeat our density plot with 1000 walkers, but now we also plot in our expression for the RMS: $\sqrt{N}$:
```
N = 1000
walkers = 1000
k = 2*np.random.randint(2, size=(N, walkers)) - 1
X = np.cumsum(k, axis=0)
plt.plot(X, alpha=0.01, color='k')
plt.plot(range(N), np.sqrt(np.arange(N)), color='C1')
plt.plot(range(N), -np.sqrt(np.arange(N)), color='C1')
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.axis((0, 1000, -100, 100))
plt.show()
```
We see that the density of walkers inside the RMS curves is higher than outside it. This makes sense, because the root-mean-square will tend to give outliers more weight. The RMS curves still seem very reasonable, as they clearly indicate the rough region where most walkers will be found. We also see that the scaling seems reasonable.
Instead of plotting it, we can also compute the actual root-mean-square of our 1000 walkers, which is then a *sample mean*, and compare it to our analytical expression.
```
N = 1000
walkers = 1000
k = 2*np.random.randint(2, size=(N, walkers)) - 1
X = np.cumsum(k, axis=0)
RMS = np.sqrt(np.mean(X**2, axis=1))
plt.plot(np.arange(N), np.sqrt(np.arange(N)), '--', label="Analytic Mean")
plt.plot(np.arange(N), RMS, label="Sample mean")
plt.legend()
plt.xlabel('Number of steps')
plt.ylabel('Root Mean Square Displacement')
plt.show()
```
So we see that our analytic expression looks very reasonable.
### Flipping Coins and the Law of Large Numbers
So far we have only looked at the random walk as a completely theoretical exercise. As an example, let us now connect it to a more concrete situation.
Our 1D random walk is the sum of a discrete random variable that has two equally likely outcomes. An example of this is flipping a coin. Thus, our random walk models the process of flipping a coin many times and keeping track of the total number of heads and tails we get.
We looked at this example last week as well:
```
def flip_coins(N):
flips = np.random.randint(2, size=N)
heads = np.sum(flips == 0)
tails = N - heads
return heads, tails
print("Flipping 1000 coins:")
heads, tails = flip_coins(1000)
print("Heads:", heads)
print("Tail:", tails)
```
When we flip $N$ coins, we expect close to an equal number of heads and tails, i.e., about $N/2$ of each. But should we expect exactly $N/2$ heads? The answer is *no*. The probability of getting a perfectly even distribution goes *down* with the number of throws $N$. Let us look at some numbers:
```
print(f"{'N':>10} {'Heads':>10}|{'Tails':<6} {'Deviation':>12} {'Ratio':>10}")
print("="*60)
for N in 10, 1000, 10**4, 10**5, 10**6:
for i in range(3):
heads, tails = flip_coins(N)
print(f"{N:>10} {heads:>10}|{tails:<6} {abs(N/2-heads):10} {heads/N:>10.1%}|{tails/N:<6.1%}")
print()
print("="*60)
```
Here, we explore how the *deviation*, which is the number of flips we are away from a perfectly even split, grows with $N$. What we call the deviation here is equivalent to the displacement of one of our random walkers, and as we have seen, the root mean square displacement grows as $\sqrt{N}$. The more coins $N$ we flip, the bigger a deviation from the baseline we expect.
Now, isn't this contradicting the law of large numbers? No, it isn't, but it highlights an important point about the law of large numbers: it only guarantees that the *average* of many trials will approach the expected value. Thus the law of large numbers states that the *ratio* of heads and tails will become 50%/50% in the long run; it gives no guarantee that we will get the same number of each outcome.
In fact, we see that this is indeed the case for our results too: while the deviation grows with $N$, we can see from the exact same random samples that the *ratio* of heads and tails approaches 50/50! This is because the ratio is computed from
$$P(\text{heads}) \approx \frac{\text{number of heads}}{N},$$
but we know that the deviation in the number of heads grows as $\sqrt{N}$, but that means the deviation in the *ratio* grows as
$$\frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}.$$
And so our results don't contradict the law of large numbers; they illustrate it.
The law of large numbers only talks about averages, never about single events. However, it is a very common fallacy to think that the number of heads and tails has to *even out* in the long run. This is known as the *Gambler's Fallacy*.
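The two growth rates can be checked side by side. The sketch below uses `np.random.binomial` as a shortcut for counting heads in $N$ flips, and averages the squared deviation over many repeated experiments; the trial counts and seed are arbitrary:

```python
import numpy as np

np.random.seed(0)
trials = 2000
results = {}

for N in (100, 10_000):
    heads = np.random.binomial(N, 0.5, size=trials)
    rms_dev = np.sqrt(np.mean((heads - N/2)**2))  # grows like sqrt(N)/2
    rms_ratio_dev = rms_dev / N                   # shrinks like 1/(2*sqrt(N))
    results[N] = rms_dev
    print(N, rms_dev, rms_ratio_dev)
```

Flipping 100 times more coins makes the deviation in *counts* about 10 times bigger, while the deviation in the *ratio* becomes about 10 times smaller.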
### 2D Random Walk
So far we have only looked at a random walk in one dimension. Let us add another dimension, so we are looking at a random walker moving around in a 2D plane. We will still be looking at a random walk on a regular grid or lattice.
For every step, there are then 4 choices for our walker. If we envision our grid as the streets of a city seen from above, these directions would be *north*, *south*, *west*, and *east*. We now denote the displacement of the walker as
$$\vec{R}_N = (X_N, Y_N).$$
Let us jump right into simulating a random walk.
```
possible_steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N = 100
R = np.zeros((N+1, 2))
R[0] = (0, 0)
for i in range(N):
step = possible_steps[np.random.randint(4)]
R[i+1] = R[i] + step
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()
```
Here we specify the possible steps, and then draw one of these at random for every step. Performing this in a vectorized way is slightly tricky. To make things a lot simpler, we change the possible steps so that the walker takes a step in both dimensions at every step; so instead of
$$(1, 0) \quad (-1, 0) \quad (0, 1) \quad (0, -1),$$
as our possibilities, we have
$$(1, 1) \quad (1, -1) \quad (-1, 1) \quad (-1, -1).$$
This makes things a lot easier, because the steps in the $X$ and $Y$ direction are now decoupled.
```
N = 1000
steps = 2*np.random.randint(2, size=(N, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()
```
The only difference with our change to the steps is that our walker now walks a distance $\sqrt{2}$ every step, instead of 1. The plot also looks like the diagonal version of the previous plot.
Let us try to plot many more steps:
```
N = 25000
steps = 2*np.random.randint(2, size=(N, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()
```
Because our walker is now using two spatial dimensions, we cannot plot the displacement against time; we can only plot out the total trajectory. This has some drawbacks, as it is hard to understand how the walk builds up over time, and how much the walk doubles back over itself.
A fix to this is to create an animation of the walk over time. We won't take the time to do this here, but you can click the links below to see such animations:
1. [Animated random walk in 2D with 2500 steps](https://upload.wikimedia.org/wikipedia/commons/f/f3/Random_walk_2500_animated.svg)
2. [Animated random walk in 2D with 25000 steps](https://upload.wikimedia.org/wikipedia/commons/c/cb/Random_walk_25000.svg)
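Short of a full animation, a cheap alternative is to color the trajectory by step number, so that early and late parts of the walk can be told apart. A sketch (the colormap choice and seed are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
N = 25000
steps = 2*np.random.randint(2, size=(N, 2)) - 1
R = np.cumsum(steps, axis=0)

# Color each visited point by when it was visited: dark = early, bright = late
plt.scatter(R[:, 0], R[:, 1], c=np.arange(N), s=1, cmap='viridis')
plt.colorbar(label='Step number')
plt.axis('equal')
plt.show()
```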
### Plotting several walkers
Again we can plot several walks over each other
```
nr_steps = 500
nr_walkers = 5
for walker in range(nr_walkers):
steps = 2*np.random.randint(2, size=(nr_steps, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y, alpha=0.5)
plt.axis('equal')
plt.scatter(0, 0, marker='o', color='black', s=100)
plt.show()
```
Here, we plot 5 random walks over each other, and mark the origin with a black circle.
### Analyzing the Mean Displacement
We can now return to analyze the average behavior of the 2D random walker, just like we did for the 1D case. However, it turns out we don't need to reinvent the wheel. We know that
$$\vec{R}_N = (X_N, Y_N).$$
So to find the mean displacement, we find
$$\langle \vec{R}_N \rangle = (\langle X_N \rangle, \langle Y_N \rangle).$$
However, both $X_N$ and $Y_N$ behave exactly like a 1D-walker in their dimension, as they increase by -1 or 1 every step. So we have
$$\langle \vec{R}_N \rangle = (0, 0).$$
We could almost have guessed this, because the 2D problem is, just like the 1D problem, completely symmetric. The average will therefore tend to be the exact origin.
But what about the mean square displacement? In this case, taking the square of the vector means taking the dot product with itself, it is thus the square of the distance to the origin we are computing:
$$\langle |\vec{R_N}|^2 \rangle = \langle X_N^2 \rangle + \langle Y_N^2 \rangle.$$
So again we can simply insert the values we found earlier for the 1D walker:
$$\langle |\vec{R_N}|^2 \rangle = 2N.$$
Thus, the root mean square distance of a 2D random walker to the origin is given by
$$\text{RMS} = \sqrt{\langle |\vec{R_N}|^2 \rangle} = \sqrt{2N}.$$
We can draw this into our 2D plot, to see if this seems reasonable.
```
nr_steps = 500
nr_walkers = 5
# Plot random walks
for walker in range(nr_walkers):
steps = 2*np.random.randint(2, size=(nr_steps, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y, alpha=0.5)
plt.axis('equal')
# Plot origin
plt.scatter(0, 0, marker='o', color='black', s=100)
# Plot analytic RMS
rms = np.sqrt(2*nr_steps)
theta = np.linspace(0, 2*np.pi, 1001)
plt.plot(rms*np.cos(theta), rms*np.sin(theta), 'k--')
# Plot
plt.show()
```
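We can also verify $\langle |\vec{R}_N|^2 \rangle = 2N$ numerically by averaging over many walkers (a sketch; the sample sizes are arbitrary):

```python
import numpy as np

np.random.seed(0)
N = 200
walkers = 10_000

# Shape (N, 2, walkers): N steps, 2 dimensions, many walkers at once
steps = 2*np.random.randint(2, size=(N, 2, walkers)) - 1
R_N = steps.sum(axis=0)                    # endpoints, shape (2, walkers)

mean_sq = np.mean(R_N[0]**2 + R_N[1]**2)
print(mean_sq)                             # close to 2*N = 400
```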
## Why Random Walkers are so interesting
The random walker is an example of a process that is built up of simple, random steps, but whose net behavior can be complex. These kind of processes are found throughout the natural sciences and in mathematics. The list of applications of random walks is therefore very long and varied.
Some examples of processess that can be modelled with random walks are:
* The price of stocks in economics
* Modeling of population dynamics in biology
* The modeling of genetic drift
* The study of polymers in materials science uses a special type of self-avoiding random walk
* In image processing, images can be segmented by using an algorithm that randomly walks over the image
* Twitter uses a random walk approach to make suggestions of who to follow
These are just *some* examples, and the list goes on and on. If you want more examples, there is a more extensive list [here](https://en.wikipedia.org/wiki/Random_walk#Applications).
## Moving from a discrete to a continuous model
As a final example, let us show how we can move from a discrete random walk model to a continuous one. As we have already seen some examples of, when we move towards a large number of steps $N$, the movement of the random walker doesn't necessarily look so jagged and coarse anymore, but *seems* more like a continuous process. And this is the whole trick to moving to a continuous model: letting $N\to\infty$. We obviously cannot do this on a computer, but we can analyze the problem mathematically.
To keep things as simple as possible, we can consider the uniform 1D random walker. Instead of talking about the displacement $X_N$, we now define a function $P(x, t)$ that denotes the probability of finding the walker at position $x$ at time $t$.
Because we have a discrete model, we say that the walker moves a length $\Delta x$ each step, so that the walker will be at a position
$$x_i = i\cdot \Delta x,$$
In addition, we assume the walker takes one step every $\Delta t$ timestep, so we can denote a given time as
$$t_j = j\cdot \Delta t.$$
Thus, we are talking about the probability of finding the walker at position $x_i$ at time $t_j$, which is described by the function $P(x_i, t_j)$, or simply $P_{i, j}$ for short.
Now, our goal isn't necessarily to find an expression for $P$ itself, but rather to find an expression for how it develops over time. Or put more formally, we are trying to find an expression for the time-derivative
$$\frac{\partial P(x, t)}{\partial t},$$
i.e., we are trying to find a differential equation. To find a time derivative, we want to find an expression on the form:
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}.$$
Because then we can take the limit $\Delta t \to 0$ to get a derivative.
As we are trying to find the time-derivative of $P(x, t)$, let us write out what we know about stepping forward in time with our model. The probability of finding the walker in position $x_i$ at the *next* time step, must be given by the chance of finding it at the two neighboring grid points at the current time step, so:
$$P(x_i, t_{j+1}) = \frac{1}{2}P(x_{i-1}, t_j) + \frac{1}{2}P(x_{i+1}, t_j).$$
The reason the two terms have a factor of 1/2 is that a walker at either of those grid points only has a 50% chance of moving in the right direction.
Now, to find an expression for the time derivative, we need to subtract $P(x_i, t_j)$ from both sides.
$$P(x_i, t_{j+1}) - P(x_i, t_j) = \frac{1}{2}\big(P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)\big).$$
The next step is then to divide by $\Delta t$ on both sides
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}=\frac{1}{2\Delta t}\big(P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)\big).$$
Now we are getting very close! However, we cannot take the limit $\Delta t \to 0$ just yet, because then the expression on the right will blow up. However, we can fix this by expanding the fraction by a factor of $\Delta x^2$
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}=\frac{\Delta x^2}{2\Delta t}\frac{P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)}{\Delta x^2}.$$
This helps, because we can now take the limit of $\Delta t \to 0$ and $\Delta x^2 \to 0$ at the *same* time. This way, we can enforce the constraint that we do it in such a manner that
$$\frac{\Delta x^2}{2\Delta t} = \text{constant}.$$
Because this expression will be a constant, we name it $D$. We then have
$$\lim_{\substack{\Delta t \to 0 \\ \Delta x \to 0 \\ D={\rm const.}}} \bigg[\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}= D \frac{P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)}{\Delta x^2}\bigg].$$
Now, the term on the left was equal to the time derivative of $P$. But the expression on the right is also a derivative, it is the second-order derivative with respect to $x$! So we get
$$\frac{\partial P}{\partial t} = D\frac{\partial^2 P}{\partial x^2}.$$
Let us summarize what we have done, we have said our random walker takes steps of $\Delta x$ in time $\Delta t$, and then taken the limit where both of these go to 0. Effectively, we are saying the walker takes infinitesimally small steps, infinitely fast. This is effectively the same as letting the number of steps taken go to infinity ($N \to \infty$). But at the same time, we do this in the manner in which the total displacement of the walker stays bounded.
Taking the limit of a simple 1D walker has given us a partial differential equation known as the *Diffusion Equation*, or alternatively the *Heat Equation*. This is one of the most fundamental and important equations in the natural sciences, so it is quite astonishing that it can be derived from a simple random walker!
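The connection can also be seen numerically: for large $N$, a histogram of walker endpoints approaches the Gaussian solution of the diffusion equation, with variance $N$. A sketch (bin width 2 is used because only even positions are reachable after an even number of steps; the sample sizes are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
N = 500
walkers = 20_000

steps = 2*np.random.randint(2, size=(N, walkers)) - 1
X_N = steps.sum(axis=0)

# Histogram of the endpoints, normalized to a probability density
plt.hist(X_N, bins=np.arange(-100, 102, 2), density=True, alpha=0.5)

# Gaussian with mean 0 and variance N: the continuum limit
x = np.linspace(-100, 100, 500)
plt.plot(x, np.exp(-x**2/(2*N))/np.sqrt(2*np.pi*N), 'k')
plt.xlabel('Displacement')
plt.ylabel('Probability density')
plt.show()
```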
For more information and more detailed derivations, see for example:
- [Mark Kac's classical paper from 1947](http://www.math.hawaii.edu/~xander/Fa06/Kac--Brownian_Motion.pdf)
In practice, one does not use a 1D diffusion equation, but a 3D one:
$$\frac{\partial u}{\partial t} = D\nabla^2 u.$$
But this PDE can be found by taking the limit of a 3D random walker in exactly the same manner.
### Solving the Diffusion Equation
What is very interesting about what we have just done is that we have gone from a discrete, numerically solvable problem to a continuous partial differential equation. This is the opposite of the process we are used to dealing with when working with numerics!
If we want to solve the diffusion equation numerically, we have to discretize the equation again, and move back to the effective 1D walker. If you want to read how that can be done, take a look at this supplemental notebook: [*Solving the 1D Diffusion Equation*](S19_solving_the_1D_diffusion_equation.ipynb).
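As a small taste of that, here is a minimal sketch of the standard explicit (forward-time, centered-space) discretization of the 1D diffusion equation. The grid sizes and $D$ below are illustrative, and the scheme is only stable when $D\,\Delta t/\Delta x^2 \le 1/2$:

```python
import numpy as np

D = 1.0
x = np.linspace(-1, 1, 21)
dx = x[1] - x[0]                 # 0.1
dt = 0.004                       # D*dt/dx**2 = 0.4 <= 0.5, so stable

# Initial condition: all "walkers" concentrated at the origin
u = np.zeros_like(x)
u[len(u)//2] = 1.0/dx            # discrete approximation of a delta spike

alpha = D*dt/dx**2
for step in range(10):
    # Forward step in time, centered second difference in space;
    # u[0] and u[-1] are left at 0 (absorbing boundaries)
    u[1:-1] = u[1:-1] + alpha*(u[:-2] - 2*u[1:-1] + u[2:])

print(u.sum()*dx)                # total probability stays close to 1
```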
| github_jupyter |
## Define the Convolutional Neural Network
After you've looked at the data you're working with and, in this case, know the shapes of the images and of the keypoints, you are ready to define a convolutional neural network that can *learn* from this data.
In this notebook and in `models.py`, you will:
1. Define a CNN with images as input and keypoints as output
2. Construct the transformed FaceKeypointsDataset, just as before
3. Train the CNN on the training data, tracking loss
4. See how the trained model performs on test data
5. If necessary, modify the CNN structure and model hyperparameters, so that it performs *well* **\***
**\*** What does *well* mean?
"Well" means that the model's loss decreases during training **and**, when applied to test image data, the model produces keypoints that closely match the true keypoints of each face. And you'll see examples of this later in the notebook.
---
## CNN Architecture
Recall that CNNs are defined by a few types of layers:
* Convolutional layers
* Maxpooling layers
* Fully-connected layers
You are required to use the above layers and encouraged to add multiple convolutional layers and things like dropout layers that may prevent overfitting. You are also encouraged to look at literature on keypoint detection, such as [this paper](https://arxiv.org/pdf/1710.00977.pdf), to help you determine the structure of your network.
### TODO: Define your model in the provided file `models.py` file
This file is mostly empty but contains the expected name and some TODO's for creating your model.
---
## PyTorch Neural Nets
To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in.
Note: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network.
#### Define the Layers in ` __init__`
As a reminder, a conv/pool layer may be defined like this (in `__init__`):
```
# 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernel
self.conv1 = nn.Conv2d(1, 32, 3)
# maxpool that uses a square window of kernel_size=2, stride=2
self.pool = nn.MaxPool2d(2, 2)
```
#### Refer to Layers in `forward`
These layers are then referred to in the `forward` function like this, where the conv1 layer has a ReLU activation applied to it before maxpooling:
```
x = self.pool(F.relu(self.conv1(x)))
```
Best practice is to place any layers whose weights will change during the training process in `__init__` and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, should appear *only* in the `forward` function.
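A detail worth tracking when stacking such layers is the spatial size of the feature maps, since the first fully-connected layer needs to know it. A small helper using the standard output-size formula (assuming square inputs and no dilation):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial size after a convolution: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after max pooling (no padding)."""
    return (size - kernel) // stride + 1

# e.g. a 224x224 input through a 3x3 conv and a 2x2 pool:
s = conv_out(224, 3)   # 222
s = pool_out(s)        # 111
```

Chaining three such conv+pool stages, for example, takes 224 to 111 to 54 to 26, which is the number you would use to size `in_features` of the first `nn.Linear` layer.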
#### Why models.py
You are tasked with defining the network in the `models.py` file so that any models you define can be saved and loaded by name in different notebooks in this project directory. For example, by defining a CNN class called `Net` in `models.py`, you can then create that same architecture in this and other notebooks by simply importing the class and instantiating a model:
```
from models import Net
net = Net()
```
```
# import the usual resources
import matplotlib.pyplot as plt
import numpy as np
# watch for any changes in models.py; if it changes, re-load it automatically
%load_ext autoreload
%autoreload 2
## TODO: Define the Net in models.py
import torch
import torch.nn as nn
import torch.nn.functional as F
## TODO: Once you've defined the network, you can instantiate it
# one example conv layer has been provided for you
from models import Net
net = Net()
print(net)
```
## Transform the dataset
To prepare for training, create a transformed dataset of images and keypoints.
### TODO: Define a data transform
In PyTorch, a convolutional neural network expects a torch image of a consistent size as input. For efficient training, and so your model's loss does not blow up during training, it is also suggested that you normalize the input images and keypoints. The necessary transforms have been defined in `data_load.py` and you **do not** need to modify these; take a look at this file (you'll see the same transforms that were defined and applied in Notebook 1).
To define the data transform below, use a [composition](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#compose-transforms) of:
1. Rescaling and/or cropping the data, such that you are left with a square image (the suggested size is 224x224px)
2. Normalizing the images and keypoints; turning each RGB image into a grayscale image with a color range of [0, 1] and transforming the given keypoints into a range of [-1, 1]
3. Turning these images and keypoints into Tensors
These transformations have been defined in `data_load.py`, but it's up to you to call them and create a `data_transform` below. **This transform will be applied to the training data and, later, the test data**. It will change how you go about displaying these images and keypoints, but these steps are essential for efficient training.
As a note, should you want to perform data augmentation (which is optional in this project), and randomly rotate or shift these images, a square image size will be useful; rotating a 224x224 image by 90 degrees will result in the same shape of output.
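The exact normalization constants live in `data_load.py`, but judging from the un-transformation used later in this notebook (`pts * 50.0 + 100`), the keypoints are centered at 100 px and scaled by 50. A sketch of that round trip, under that assumption:

```python
import numpy as np

# assumed constants: keypoints centered at 100 px and scaled by 50
# (matching the inverse transform pts * 50.0 + 100 used for visualization)
def normalize_pts(pts):
    return (np.asarray(pts, dtype=float) - 100.0) / 50.0

def denormalize_pts(pts):
    return np.asarray(pts) * 50.0 + 100.0

pts = np.array([[120.0, 80.0], [100.0, 150.0]])
norm = normalize_pts(pts)   # roughly within [-1, 1] for points near the face
back = denormalize_pts(norm)
```

Here `norm` comes out as `[[0.4, -0.4], [0.0, 1.0]]`, and `denormalize_pts` recovers the original pixel coordinates exactly.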
```
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
# the dataset we created in Notebook 1 is copied in the helper file `data_load.py`
from data_load import FacialKeypointsDataset
# the transforms we defined in Notebook 1 are in the helper file `data_load.py`
from data_load import Rescale, RandomCrop, Normalize, ToTensor
## TODO: define the data_transform using transforms.Compose([all tx's, . , .])
# order matters! i.e. rescaling should come before a smaller crop
data_transform = None
#coding
data_transform =transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# testing that you've defined a transform
assert(data_transform is not None), 'Define a data_transform'
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
print('Number of images: ', len(transformed_dataset))
# iterate through the transformed dataset and print some stats about the first few samples
for i in range(4):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
```
## Batching and loading data
Next, having defined the transformed dataset, we can use PyTorch's DataLoader class to load the training data in batches of whatever size we choose, as well as to shuffle the data for training the model. You can read more about the parameters of the DataLoader in [this documentation](http://pytorch.org/docs/master/data.html).
#### Batch size
Decide on a good batch size for training your model. Try both small and large batch sizes and note how the loss decreases as the model trains.
**Note for Windows users**: Please change the `num_workers` to 0 or you may face some issues with your DataLoader failing.
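Conceptually, the DataLoader just shuffles indices and yields fixed-size chunks of the dataset. A plain-Python sketch of that behavior (illustrative only, not PyTorch's implementation):

```python
import random

def simple_loader(dataset, batch_size, shuffle=True, seed=0):
    """Yield the dataset in batches, mimicking what DataLoader does conceptually."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(23))
batches = list(simple_loader(data, batch_size=10))
# 23 samples with batch_size=10 -> batches of sizes 10, 10 and 3
```

Note the last batch is smaller when the dataset size is not a multiple of the batch size; PyTorch's `drop_last` option exists for exactly that case.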
```
# load training data in batches
batch_size = 10
train_loader = DataLoader(transformed_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=4)
```
## Before training
Take a look at how this model performs before it trains. You should see that the keypoints it predicts start off in one spot and don't match the keypoints on a face at all! It's interesting to visualize this behavior so that you can compare it to the model after training and see how the model has improved.
#### Load in the test dataset
The test dataset is one that this model has *not* seen before, meaning it has not trained with these images. We'll load in this test data and before and after training, see how your model performs on this set!
To visualize this test data, we have to go through some un-transformation steps: turning our images from tensors back into NumPy images, and our keypoints back into a recognizable coordinate range.
```
# load in the test data, using the dataset class
# AND apply the data_transform you defined above
# create the test dataset
test_dataset = FacialKeypointsDataset(csv_file='data/test_frames_keypoints.csv',
root_dir='data/test/',
transform=data_transform)
# load test data in batches
batch_size = 10
test_loader = DataLoader(test_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=4)
```
## Apply the model on a test sample
To test the model on a test sample of data, you have to follow these steps:
1. Extract the image and ground truth keypoints from a sample
2. Make sure the image is a FloatTensor, which the model expects.
3. Forward pass the image through the net to get the predicted, output keypoints.
This function tests how the network performs on the first batch of test data. It returns the images, the predicted keypoints (produced by the model), and the ground truth keypoints.
```
# test the model on a batch of test images
def net_sample_output():
# iterate through the test dataset
for i, sample in enumerate(test_loader):
# get sample data: images and ground truth keypoints
images = sample['image']
key_pts = sample['keypoints']
# convert images to FloatTensors
images = images.type(torch.FloatTensor)
# forward pass to get net output
output_pts = net(images)
# reshape to batch_size x 68 x 2 pts
output_pts = output_pts.view(output_pts.size()[0], 68, -1)
# break after first image is tested
if i == 0:
return images, output_pts, key_pts
```
#### Debugging tips
If you get a size or dimension error here, make sure that your network outputs the expected number of keypoints! Or if you get a Tensor type error, look into changing the above code that casts the data into float types: `images = images.type(torch.FloatTensor)`.
```
# call the above function
# returns: test images, test predicted keypoints, test ground truth keypoints
test_images, test_outputs, gt_pts = net_sample_output()
# print out the dimensions of the data to see if they make sense
print(test_images.data.size())
print(test_outputs.data.size())
print(gt_pts.size())
```
## Visualize the predicted keypoints
Once we've had the model produce some predicted output keypoints, we can visualize these points in a way that's similar to how we've displayed this data before, only this time, we have to "un-transform" the image/keypoint data to display it.
Note that I've defined a *new* function, `show_all_keypoints` that displays a grayscale image, its predicted keypoints and its ground truth keypoints (if provided).
```
def show_all_keypoints(image, predicted_key_pts, gt_pts=None):
"""Show image with predicted keypoints"""
# image is grayscale
plt.imshow(image, cmap='gray')
plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')
# plot ground truth points as green pts
if gt_pts is not None:
plt.scatter(gt_pts[:, 0], gt_pts[:, 1], s=20, marker='.', c='g')
```
#### Un-transformation
Next, you'll see a helper function, `visualize_output`, that takes in a batch of images, predicted keypoints, and ground truth keypoints and displays a set of those images and their true/predicted keypoints.
This function's main role is to take batches of image and keypoint data (the input and output of your CNN), and transform them into numpy images and un-normalized keypoints (x, y) for normal display. The un-transformation process turns keypoints and images into numpy arrays from Tensors *and* it undoes the keypoint normalization done in the Normalize() transform; it's assumed that you applied these transformations when you loaded your test data.
```
# visualize the output
# by default this shows a batch of 10 images
def visualize_output(test_images, test_outputs, gt_pts=None, batch_size=10):
for i in range(batch_size):
plt.figure(figsize=(20,10))
ax = plt.subplot(1, batch_size, i+1)
# un-transform the image data
        image = test_images[i].data   # get the image from its wrapper
image = image.numpy() # convert to numpy array from a Tensor
image = np.transpose(image, (1, 2, 0)) # transpose to go from torch to numpy image
# un-transform the predicted key_pts data
predicted_key_pts = test_outputs[i].data
predicted_key_pts = predicted_key_pts.numpy()
# undo normalization of keypoints
predicted_key_pts = predicted_key_pts*50.0+100
# plot ground truth points for comparison, if they exist
ground_truth_pts = None
if gt_pts is not None:
ground_truth_pts = gt_pts[i]
ground_truth_pts = ground_truth_pts*50.0+100
# call show_all_keypoints
show_all_keypoints(np.squeeze(image), predicted_key_pts, ground_truth_pts)
plt.axis('off')
plt.show()
# call it
visualize_output(test_images, test_outputs, gt_pts)
```
## Training
#### Loss function
Training a network to predict keypoints is different than training a network to predict a class; instead of outputting a distribution of classes and using cross entropy loss, you may want to choose a loss function that is suited for regression, which directly compares a predicted value and target value. Read about the various kinds of loss functions (like MSE or L1/SmoothL1 loss) in [this documentation](http://pytorch.org/docs/master/_modules/torch/nn/modules/loss.html).
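To see why the choice matters, here is a small NumPy comparison of MSE against SmoothL1 (using PyTorch's default threshold of 1) on predictions containing one badly-placed keypoint; the squared term makes MSE far more sensitive to the outlier:

```python
import numpy as np

def mse(pred, target):
    """Mean squared error."""
    return np.mean((pred - target) ** 2)

def smooth_l1(pred, target):
    """Elementwise: 0.5*d^2 if |d| < 1, else |d| - 0.5 (PyTorch's default beta=1)."""
    d = np.abs(pred - target)
    return np.mean(np.where(d < 1, 0.5 * d ** 2, d - 0.5))

target = np.zeros(5)
good = np.array([0.1, -0.1, 0.2, 0.0, 0.1])
outlier = np.array([0.1, -0.1, 0.2, 0.0, 5.0])  # one badly-placed keypoint
```

On this toy data, MSE penalizes the outlier batch roughly 350 times more than the clean one, while SmoothL1's penalty grows only linearly in the large error, so a few bad points dominate training less.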
### TODO: Define the loss and optimization
Next, you'll define how the model will train by deciding on the loss function and optimizer.
---
```
## TODO: Define the loss and optimization
import torch.optim as optim
#criterion = None
criterion = nn.MSELoss()
#optimizer = None
optimizer = optim.Adam(net.parameters(),lr = 0.01)
```
## Training and Initial Observation
Now, you'll train on your batched training data from `train_loader` for a number of epochs.
To quickly observe how your model is training and decide whether or not you should modify its structure or hyperparameters, you're encouraged to start off with just one or two epochs. As you train, note how your model's loss behaves over time: does it decrease quickly at first and then slow down? Does it take a while to decrease in the first place? What happens if you change the batch size of your training data or modify your loss function? etc.
Use these initial observations to make changes to your model and decide on the best architecture before you train for many epochs and create a final model.
```
def train_net(n_epochs):
# prepare the net for training
net.train()
for epoch in range(n_epochs): # loop over the dataset multiple times
running_loss = 0.0
# train on batches of data, assumes you already have train_loader
for batch_i, data in enumerate(train_loader):
# get the input images and their corresponding labels
images = data['image']
key_pts = data['keypoints']
# flatten pts
key_pts = key_pts.view(key_pts.size(0), -1)
# convert variables to floats for regression loss
key_pts = key_pts.type(torch.FloatTensor)
images = images.type(torch.FloatTensor)
# forward pass to get outputs
output_pts = net(images)
# calculate the loss between predicted and target keypoints
loss = criterion(output_pts, key_pts)
# zero the parameter (weight) gradients
optimizer.zero_grad()
# backward pass to calculate the weight gradients
loss.backward()
# update the weights
optimizer.step()
# print loss statistics
# to convert loss into a scalar and add it to the running_loss, use .item()
running_loss += loss.item()
            if batch_i % 10 == 9:    # print every 10 batches
                print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/10))
                running_loss = 0.0
print('Finished Training')
# train your network
n_epochs = 10 # start small, and increase when you've decided on your model structure and hyperparams
train_net(n_epochs)
```
## Test data
See how your model performs on previously unseen, test data. We've already loaded and transformed this data, similar to the training data. Next, run your trained model on these images to see what kind of keypoints are produced. You should be able to see if your model is fitting each new face it sees, if the points are distributed randomly, or if the points have actually overfitted the training data and do not generalize.
```
# get a sample of test data again
test_images, test_outputs, gt_pts = net_sample_output()
print(test_images.data.size())
print(test_outputs.data.size())
print(gt_pts.size())
## TODO: visualize your test output
# you can use the same function as before, by un-commenting the line below:
# visualize_output(test_images, test_outputs, gt_pts)
visualize_output(test_images, test_outputs, gt_pts)
```
Once you've found a good model (or two), save your model so you can load it and use it later!
```
## TODO: change the name to something unique for each new model
model_dir = 'saved_models/'
model_name = 'keypoints_model_1.pt'
# after training, save your model parameters in the dir 'saved_models'
torch.save(net.state_dict(), model_dir+model_name)
```
After you've trained a well-performing model, answer the following questions so that we have some insight into your training and architecture selection process. Answering all questions is required to pass this project.
### Question 1: What optimization and loss functions did you choose and why?
**Answer**: write your answer here (double click to edit this cell)
### Question 2: What kind of network architecture did you start with and how did it change as you tried different architectures? Did you decide to add more convolutional layers or any layers to avoid overfitting the data?
**Answer**: write your answer here
### Question 3: How did you decide on the number of epochs and batch_size to train your model?
**Answer**: write your answer here
## Feature Visualization
Sometimes, neural networks are thought of as a black box: given some input, they learn to produce some output. CNNs are actually learning to recognize a variety of spatial patterns, and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying them, one at a time, to a sample image. This technique is called feature visualization, and it's useful for understanding the inner workings of a CNN.
In the cell below, you can see how to extract a single filter (by index) from your first convolutional layer. The filter should appear as a grayscale grid.
```
# Get the weights in the first conv layer, "conv1"
# if necessary, change this to reflect the name of your first conv layer
weights1 = net.conv1.weight.data
w = weights1.numpy()
filter_index = 0
print(w[filter_index][0])
print(w[filter_index][0].shape)
# display the filter weights
plt.imshow(w[filter_index][0], cmap='gray')
```
## Feature maps
Each CNN has at least one convolutional layer that is composed of stacked filters (also known as convolutional kernels). As a CNN trains, it learns what weights to include in its convolutional kernels, and when these kernels are applied to some input image, they produce a set of **feature maps**. So, feature maps are just sets of filtered images; they are the images produced by applying a convolutional kernel to an input image. These maps show us the features that the different layers of the neural network learn to extract. For example, you might imagine a convolutional kernel that detects the vertical edges of a face, or another one that detects the corners of eyes. You can see what kind of features each of these kernels detects by applying them to an image. One such example is shown below; from the way it brings out the lines in the image, you might characterize this as an edge detection filter.
<img src='images/feature_map_ex.png' width=50% height=50%/>
Next, choose a test image and filter it with one of the convolutional kernels in your trained CNN; look at the filtered output to get an idea what that particular kernel detects.
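If you want to see the mechanics without OpenCV, filtering is just a sliding dot product between the kernel and each image patch. A pure-NumPy sketch with a hypothetical Sobel-style vertical-edge kernel:

```python
import numpy as np

def filter2d(image, kernel):
    """Valid-mode 2D cross-correlation, the operation a conv layer applies."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # dot product of the kernel with the patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a vertical step edge, and a Sobel-like kernel that responds to it
image = np.zeros((5, 5))
image[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
response = filter2d(image, sobel_x)  # large values along the edge, zero in flat regions
```

The response is zero over the flat dark region and strongly positive where the window straddles the dark-to-bright transition, which is exactly the behavior of an edge-detection filter.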
### TODO: Filter an image to see the effect of a convolutional kernel
---
```
import cv2
## TODO: load in and display any image from the transformed test dataset
image = test_images[0].data   # get the first test image from its wrapper
image = image.numpy()         # convert to numpy array from a Tensor
image = np.transpose(image, (1, 2, 0))   # transpose to go from torch to numpy image
## TODO: Using cv's filter2D function,
## apply a specific set of filter weights (like the one displayed above) to the test image
filtered_image = cv2.filter2D(image, -1, w[0][0])
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.set_title('original image')
ax1.imshow(np.squeeze(image), cmap='gray')
ax2.set_title('filter 0 of conv1 applied')
ax2.imshow(filtered_image, cmap='gray')
```
### Question 4: Choose one filter from your trained CNN and apply it to a test image; what purpose do you think it plays? What kind of feature do you think it detects?
**Answer**: (does it detect vertical lines or does it blur out noise, etc.) write your answer here
---
## Moving on!
Now that you've defined and trained your model (and saved the best model), you are ready to move on to the last notebook, which combines a face detector with your saved model to create a facial keypoint detection system that can predict the keypoints on *any* face in an image!
# Data Visualization With Safas
This notebook demonstrates plotting the results from Safas video analysis.
## Import modules and data
Import safas and other components for display and analysis. safas has several example images in the safas/data directory. These images are accessible as attributes of the data module because the `__init__` function of `safas/data` also acts as a loader.
```
import sys
from matplotlib import pyplot as plt
from matplotlib.ticker import ScalarFormatter
%matplotlib inline
import pandas as pd
import cv2
from safas import filters
from safas import data
from safas.filters.sobel_focus import imfilter as sobel_filter
from safas.filters.imfilters_module import add_contours
```
## Object properties
Users may interactively select and link objects in safas. When the data is saved, it is written in tabular form, where the properties of each object are stored.
The object properties are calculated with the scikit-image function [`regionprops`](https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops); a complete description of these properties may be found in that documentation. At this time, the following properties are stored in a .xlsx file:
property | unit |
--- | --- |
area | $\mu m^2$ |
equivalent_diameter | $\mu m$ |
perimeter | $\mu m$ |
euler_number | -- |
minor_axis_length | $\mu m$ |
major_axis_length | $\mu m$ |
extent | -- |
If a selected object is linked to an object in the next frame, the instantaneous velocity will be calculated based on the displacement of the object centroid and the frame rate of the video.
property | unit | description
--- | --- | ---|
vel_mean | [mm/s] | velocity |
vel_N | [--] | number of objects linked|
vel_std | [mm/s] | standard deviation of velocity |
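As an illustration of that velocity calculation (the function and parameter names here are hypothetical, not safas's actual API), the centroid displacement between consecutive frames converts to mm/s given the pixel size and frame rate:

```python
def settling_velocity(c0, c1, um_per_px, fps):
    """Instantaneous velocity in mm/s from two (row, col) centroids, in pixels,
    observed in consecutive frames."""
    dy = (c1[0] - c0[0]) * um_per_px  # vertical displacement in micrometres
    dx = (c1[1] - c0[1]) * um_per_px  # horizontal displacement in micrometres
    dist_um = (dx ** 2 + dy ** 2) ** 0.5
    return dist_um / 1000.0 * fps     # um per frame -> mm/s

# a floc falling 12 px between frames at 25 fps with 5 um per pixel
v = settling_velocity((100, 40), (112, 40), um_per_px=5.0, fps=25)  # 1.5 mm/s
```

Averaging this quantity over all linked frame pairs for one object gives `vel_mean`, with `vel_N` the number of links and `vel_std` the spread.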
## Plot a settling velocity versus floc size
```
# load the excel file as a Pandas DataFrame
df = pd.read_excel('data/floc_props.xlsx')
# see the keys
print(df.keys())
# plot velocity vs major_axis_length
f, ax = plt.subplots(1,1, figsize=(3.5, 2.2), dpi=250)
# note: remove *10 factor if floc_props.xlsx file is updated: previous version was output in [cm/s]
ax.plot(df.major_axis_length, df.vel_mean*10, marker='o', linestyle='None')
ax.set_xlabel('Floc size [$\mu m$]')
ax.set_ylabel('Settling velocity [mm/s]')
# convert to log-log
ax.loglog()
ax.axis([100, 5000, 0.1, 100])
for axis in [ax.xaxis, ax.yaxis]:
axis.set_major_formatter(ScalarFormatter())
plt.tight_layout()
save = True
if save:
plt.savefig('png/vel_size.png', dpi=900)
```
# In-Class Coding Lab: Iterations
The goals of this lab are to help you to understand:
- How loops work.
- The difference between definite and indefinite loops, and when to use each.
- How to build an indefinite loop with complex exit conditions.
- How to create a program from a complex idea.
# Understanding Iterations
Iterations permit us to repeat code until a Boolean expression is `False`. Iterations, or **loops**, allow us to write succinct, compact code. Here's an example, which counts to 3 before [Blitzing the Quarterback in backyard American Football](https://www.quora.com/What-is-the-significance-of-counting-one-Mississippi-two-Mississippi-and-so-on):
```
i = 1
while i <= 3:
print(i,"Mississippi...")
i=i+1
print("Blitz!")
```
## Breaking it down...
The `while` statement on line 2 starts the loop. The code indented beneath it (lines 3-4) will repeat, in a linear fashion until the Boolean expression on line 2 `i <= 3` is `False`, at which time the program continues with line 5.
### Some Terminology
We call `i <= 3` the loop's **exit condition**. The variable `i` inside the exit condition is the only thing we can change to make the exit condition `False`; therefore it is the **loop control variable**. On line 4 we change the loop control variable by adding one to it; this is called an **increment**.
Furthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know. We call this a **definite loop**. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.
If the loop control variable never forces the exit condition to be `False`, we have an **infinite loop**. As the name implies, an infinite loop never ends and typically causes our computer to crash or lock up.
```
## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO KILL YOUR BROWSER AND SHUT DOWN JUPYTER NOTEBOOK
i = 1
while i <= 3:
print(i,"Mississippi...")
# i=i+1
print("Blitz!")
```
### For loops
To prevent an infinite loop when the loop is definite, we use the `for` statement. Here's the same program using `for`:
```
for i in range(1,4):
print(i,"Mississippi...")
print("Blitz!")
```
One confusing aspect of this loop is `range(1,4)`: why does it loop from 1 to 3 and not 1 to 4? It has to do with the fact that computers start counting at zero. The easier way to understand it is to subtract the two numbers, which gives the number of times the loop runs: for example, 4 - 1 == 3.
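You can check this by materializing the range:

```python
# range(start, stop) runs from start up to, but not including, stop
nums = list(range(1, 4))
print(nums)       # [1, 2, 3]
print(len(nums))  # 3, which is 4 - 1
```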
### Now Try It
In the space below, Re-Write the above program to count from 10 to 15. Note: How many times will that loop?
```
# TODO Write code here
for i in range(10,16):
print(i, "Mississippi...")
print("Blitz!")
```
## Indefinite loops
With **indefinite loops** we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application.
The classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example:
```
name = ""
while name != 'mike':
name = input("Say my name! : ")
print("Nope, my name is not %s! " %(name))
```
The classic problem with indefinite loops is that it's really difficult to get the application's logic to line up with the exit condition. For example, we need to set `name = ""` on line 1 so that line 2 starts out as `True`. Also, we have this wonky logic where, when we say `'mike'`, it still prints `Nope, my name is not mike!` before exiting.
### Break statement
The solution to this problem is to use the break statement. **break** tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:
```
while True:
if exit-condition:
break
```
Here's our program re-written with the break statement. This is the recommended way to write indefinite loops in this course.
```
while True:
name = input("Say my name!: ")
if name == 'mike':
break
print("Nope, my name is not %s!" %(name))
```
### Multiple exit conditions
This indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. Make sure to run this program a couple of times. First enter mike to exit the program, next enter the wrong name 3 times.
```
times = 0
while True:
name = input("Say my name!: ")
times = times + 1
if name == 'mike':
print("You got it!")
break
if times == 3:
print("Game over. Too many tries!")
break
print("Nope, my name is not %s!" %(name))
```
# Number sums
Let's conclude the lab with you writing your own program that uses an indefinite loop. We'll provide the to-do list; you write the code. This program should ask for floating point numbers as input and stop looping when **the total of the numbers entered is over 100** or **5 numbers have been entered**. Those are your two exit conditions. After the loop stops, print out the total of the numbers entered and the count of numbers entered.
```
## TO-DO List
#1 count = 0
#2 total = 0
#3 loop Indefinitely
#4. input a number
#5 increment count
#6 add number to total
#7 if count equals 5 stop looping
#8 if total greater than 100 stop looping
#9 print total and count
# Write Code here:
# Title and Header
print("Race to 100 Plus!!!")
print("*" * 40)
print("This program adds up to 5 numbers, the goal being to exceed 100. If you exceed 100 or enter 5 numbers, the program ends.")
print("*" * 40)
#Program
count = 0
total = 0
try:
while True:
number = float(input("Input a number less than or equal to 100: "))
count += 1
total = total + number
if count == 5:
print("*" * 40)
            print("You entered the max amount of numbers.")
print("*" * 40)
break
if total > 100:
print("*" * 40)
            print("You've exceeded 100 in fewer than 5 numbers!")
print("*" * 40)
break
print("*" * 40)
print("Please enter another number.")
print("*" * 40)
if total > 100:
        print("Your total is %.2f after %d entries. You exceeded 100!" % (total, count))
print("*" * 40)
else:
        print("Your total is %.2f after %d entries. You did not exceed 100!" % (total, count))
print("*" * 40)
except ValueError:
print("*" * 40)
print("Error 10001: Numerical Value Error. You did not enter a number! ")
print("*" * 40)
```
# 3. Example: Univariate Gaussian
```
# Install and load deps
if (!require("stringr")) {
  install.packages("stringr")
  library(stringr)
}
library(purrr)
```
As an example, we consider the heights in cm of 20 individuals:
We will model the heights using the univariate Gaussian. The univariate Gaussian has two
parameters, its mean and variance, which we aim to estimate.
```
.heights_str <- "178 162 178 178 169 150 156 162 165 189 173 157 154 162 162 161 168 156 169 153"
heights <- unlist(
map(
str_split(.heights_str, " "),
function(el) {
as.numeric(el)
}
)
)
heights
```
### Step 1. Data generative model
We assume the data are i.i.d. and choose the univariate Gaussian as the model: the generic density `p` becomes the normal density `N(mu, sigma^2)`, with parameters (`mu`, `sigma^2`).
Under the i.i.d. assumption, the likelihood factorizes: `p(X | theta) = prod_i p(x_i | theta)`.
### Step 2. Simulate data
We can simulate data according to the model using the R function `rnorm()` which draws random samples of the normal distribution.
```
simulate <- function(n, theta) {
x <- rnorm(n, mean = theta[["mu"]], sd = theta[["sigma"]])
return(x)
}
n <- 20
theta <- c(mu = 175, sigma = 5)
x <- simulate(n, theta)
x
```
### Step 3. Parameter estimation procedure
We consider maximum likelihood estimation of the parameters, i.e. the parameters that maximize the likelihood `L(theta, X)`.
Triks:
* Minimize the negative log-likelihood `NLL(theta)` which is equal to `-log(L(theta, X))`
* Reparametrize Gaussian using precision, which is `1/(sigma^2)`
The parameters estimates are gotten by setting the NLL to the null vector **0**.
*Strange math calcs.......*
`mu` is equal to the sample **mean** while `sigma` is equal to the **biased sample variance**.
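Written out, the derivation behind this result (a standard computation for the Gaussian) is:

```latex
% Negative log-likelihood of N(mu, sigma^2) for N i.i.d. observations
\mathrm{NLL}(\mu, \sigma^2)
  = \frac{N}{2}\log(2\pi\sigma^2)
  + \frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i - \mu)^2

% Setting the partial derivatives to zero:
\frac{\partial \mathrm{NLL}}{\partial \mu}
  = -\frac{1}{\sigma^2}\sum_{i=1}^{N}(x_i - \mu) = 0
  \;\Longrightarrow\;
  \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i

\frac{\partial \mathrm{NLL}}{\partial \sigma^2}
  = \frac{N}{2\sigma^2} - \frac{1}{2\sigma^4}\sum_{i=1}^{N}(x_i - \mu)^2 = 0
  \;\Longrightarrow\;
  \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \hat{\mu})^2
```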
### Step 4. Implementation and empirical verification
We code our estimation procedure into an R function `estimate()`:
```
estimate <- function(x) {
mean_x <- mean(x)
theta_hat <- c(
mu = mean_x,
sigma = sqrt(mean((x - mean_x)^2))
)
return(theta_hat)
}
n <- 20
x <- simulate(n, theta = c(mu = 175, sigma = 5))
theta_hat <- estimate(x)
theta_hat
```
These estimates are not far from the ground-truth values. This simple check is good enough
for this didactic toy example. The code also allows a more systematic investigation of the
relationship between estimates and ground truth across various parameter values
and sample sizes n.
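A minimal sketch of such a systematic check, reusing `simulate()` and `estimate()` from above (the sample sizes and the replication count of 500 are arbitrary choices):

```r
set.seed(1)
theta <- c(mu = 175, sigma = 5)
for (n in c(20, 200, 2000)) {
  # 500 independent simulated datasets of size n, estimated column-wise
  est <- replicate(500, estimate(simulate(n, theta)))
  cat("n =", n,
      "| mean mu_hat =", round(mean(est["mu", ]), 2),
      "| mean sigma_hat =", round(mean(est["sigma", ]), 2), "\n")
}
```

As n grows, the average estimates should approach the ground truth `mu = 175`, `sigma = 5`.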
### Step 5. Application to real data
We finally apply our estimation to the original dataset:
```
theta_hat <- estimate(heights)
theta_hat
```
# 4. Assessing whether a distribution fits data with Q-Q plots
The strategy described in section 1 allows assessing whether an estimation procedure returns
reasonable estimates on simulated data. It does not assess, however, whether the simulation
assumptions, i.e. the data generative model, are reasonable for the data at hand.
One key modeling assumption of a data generative model is the choice of the distribution.
The quantile-quantile plot is a graphical tool to assess whether a distribution fits the data
reasonably.
As a concrete example, let’s consider 50 data points coming from the uniform distribution
in the [2,3] interval. If you assume your data comes from the uniform distribution in the
[2,3] interval, you expect the first 10% of your data to fall in [2,2.1], the second 10% in
[2.1,2.2] and so forth. A histogram could be used to visually assess this agreement. However,
histograms can be unreliable because of possibly low counts in each bin:
```
par(cex = 0.7)
u <- runif(50, min = 2, max = 3)
hist(u, main = "")
```
Instead of the histogram, one could plot the deciles of the sample distribution against those
of a theoretical distribution. Here are the deciles:
```
dec <- quantile(u, seq(0, 1, 0.1))
dec
par(cex = 0.7)
plot(seq(2, 3, 0.1),
dec,
xlim = c(2, 3), ylim = c(2, 3),
xlab = "Deciles of the uniform distribution over [2,3]",
ylab = "Deciles of the dataset"
)
abline(0, 1) ## diagonal y=x
```
Now we see a clear agreement between the expected values of the deciles of the theoretical
distribution (x-axis) and those empirically observed (y-axis). The advantage of this strategy is
that it also generalizes to other distributions (e.g. Normal), where the shape of the density
can be difficult to assess with a histogram.
For a finite sample we can estimate the quantile for every data point, not just the deciles.
The Q-Q plot scatter plots the quantiles of two distributions against each other. One way is
to use as expected quantile (r − 0.5)/N (Hazen, 1914), where r is the rank of the data point.
The R function `ppoints()` gives more accurate values:
```
par(cex = 0.7)
plot(qunif(ppoints(length(u)), min = 2, max = 3), sort(u),
xlim = c(2, 3), ylim = c(2, 3),
xlab = "Quantiles of the uniform distribution over [2,3]",
ylab = "Quantiles of the dataset"
)
abline(0, 1)
```
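As a side check: for n > 10, `ppoints()` uses the offset a = 1/2, which reduces exactly to the Hazen rule; for n ≤ 10 it switches to a = 3/8.

```r
n <- length(u)                     # 50 data points
hazen <- (seq_len(n) - 0.5) / n    # Hazen plotting positions (r - 0.5)/N
all.equal(ppoints(n), hazen)       # TRUE for n > 10
```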
In R, Q-Q plots between two datasets can be generated using the function `qqplot()`. In the
special case of a normal distribution, use the function `qqnorm()` together with `qqline()`,
which adds to the theoretical quantile-quantile plot a line passing through the first and
third quartiles.
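For instance, applied to the heights data from section 3:

```r
par(cex = 0.7)
qqnorm(heights)   # normal Q-Q plot of the sample
qqline(heights)   # reference line through the first and third quartiles
```

Points lying close to the line indicate that the Gaussian is a reasonable model for the heights.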